Sample records for beam dynamics algorithms

  1. Dynamic cone beam CT angiography of carotid and cerebral arteries using canine model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cai Weixing; Zhao Binghui; Conover, David

    2012-01-15

    Purpose: This research is designed to develop and evaluate a flat-panel detector-based dynamic cone beam CT system for dynamic angiography imaging, which is able to provide both dynamic functional information and dynamic anatomic information from one multirevolution cone beam CT scan. Methods: A dynamic cone beam CT scan acquired projections over four revolutions within a time window of 40 s after contrast agent injection through a femoral vein to cover the entire wash-in and wash-out phases. A dynamic cone beam CT reconstruction algorithm was utilized, and a novel recovery method was developed to correct the time-enhancement curve of contrast flow. From the same data set, both projection-based and reconstruction-based subtraction approaches were utilized and compared to remove the background tissues and visualize the 3D vascular structure, providing the dynamic anatomic information. Results: Through computer simulations, the new recovery algorithm for dynamic time-enhancement curves was optimized and showed excellent accuracy in recovering the actual contrast flow. Canine model experiments also indicated that the recovered time-enhancement curves from dynamic cone beam CT imaging agreed well with those of an IV-digital subtraction angiography (DSA) study. The dynamic vascular structures reconstructed using projection-based and reconstruction-based subtraction were almost identical, as the differences between them were comparable to the background noise level. At the enhancement peak, all the major carotid and cerebral arteries and the circle of Willis could be clearly observed. Conclusions: The proposed dynamic cone beam CT approach can accurately recover the actual contrast flow, and dynamic anatomic imaging can be obtained with high isotropic 3D resolution. This approach is promising for diagnosis and treatment planning of vascular diseases and strokes.

  2. Monochromatic-beam-based dynamic X-ray microtomography based on OSEM-TV algorithm.

    PubMed

    Xu, Liang; Chen, Rongchang; Yang, Yiming; Deng, Biao; Du, Guohao; Xie, Honglan; Xiao, Tiqiao

    2017-01-01

    Monochromatic-beam-based dynamic X-ray computed microtomography (CT) was developed to observe the evolution of microstructure inside samples. However, the low flux density of a monochromatic beam makes data collection inefficient. Reducing the number of projections is a practical way to increase efficiency, but it degrades image quality when the traditional filtered back projection (FBP) algorithm is used for reconstruction. In this study, an iterative reconstruction method using an ordered subset expectation maximization-total variation (OSEM-TV) algorithm was employed to solve this problem. Simulated results demonstrated that the normalized mean square error of image slices reconstructed by the OSEM-TV algorithm was about 1/4 of that by FBP. Experimental results also demonstrated that the density resolution of OSEM-TV was high enough to resolve different materials with fewer than 100 projections. As a result, with the introduction of OSEM-TV, monochromatic-beam-based dynamic X-ray microtomography becomes practicable for quantitative, non-destructive analysis of microstructure evolution, with acceptable data-collection efficiency and reconstructed image quality.

  3. A computational procedure for the dynamics of flexible beams within multibody systems. Ph.D. Thesis Final Technical Report

    NASA Technical Reports Server (NTRS)

    Downer, Janice Diane

    1990-01-01

    The dynamic analysis of three-dimensional elastic beams which experience large rotational and large deformational motions is examined. The beam motion is modeled using an inertial reference for the translational displacements and a body-fixed reference for the rotational quantities. Finite strain rod theories are then defined in conjunction with the beam kinematic description, which accounts for the effects of stretching, bending, torsion, and transverse shear deformations. A convected coordinate representation of the Cauchy stress tensor and a conjugate strain definition are introduced to model the beam deformation. To treat the beam dynamics, a two-stage modification of the central difference algorithm is presented to integrate the translational coordinates and the angular velocity vector. The angular orientation is then obtained from the application of an implicit integration algorithm to the Euler parameter/angular velocity kinematical relation. The combined development of the objective internal force computation and the dynamic solution procedures results in the computational preservation of total energy for undamped systems. The present methodology is also extended to model the dynamics of deployment/retrieval of flexible members. A moving spatial grid corresponding to the configuration of a deployed rigid beam is employed as a reference for the dynamic variables. A transient integration scheme which accurately accounts for the deforming spatial grid is derived from a space-time finite element discretization of a Hamiltonian variational statement. The computational results of this general deforming finite element beam formulation are compared to reported results for a planar inverse-spaghetti problem.

  4. Injector Beam Dynamics for a High-Repetition Rate 4th-Generation Light Source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Papadopoulos, C. F.; Corlett, J.; Emma, P.

    2013-05-20

    We report on the beam dynamics studies and optimization methods for a high repetition rate (1 MHz) photoinjector based on a VHF normal-conducting electron source. The simultaneous goals of beam compression and preservation of 6-dimensional beam brightness have to be achieved in the injector in order to accommodate a linac-driven FEL light source. For this, a parallel, multiobjective optimization algorithm is used. We discuss the relative merits of different injector design points, as well as the constraints imposed on the beam dynamics by technical considerations such as the high repetition rate.

  5. SU-F-T-527: A Novel Dynamic Multileaf Collimator Leaf-Sequencing Algorithm in Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jing, J; Lin, H; Chow, J

    Purpose: A novel leaf-sequencing algorithm is developed for generating arbitrary beam intensity profiles in discrete levels using a dynamic multileaf collimator (MLC). The efficiency of this dynamic MLC leaf-sequencing method was evaluated using external beam treatment plans delivered by the intensity modulated radiation therapy technique. Methods: To qualify and validate this algorithm, an integral test for the MLC beam segments generated by the CORVUS treatment planning system was performed with clinical intensity map experiments. The treatment plans were optimized and the fluence maps for all photon beams were determined. The algorithm starts with an algebraic expression for the area under the beam profile; the coefficients in the expression can be transformed into the specifications for the leaf-setting sequence. The leaf optimization procedure was then applied and analyzed for clinically relevant intensity profiles in cancer treatment. Results: The macroscopic effect of this method can be described by volumetric plan evaluation tools such as dose-volume histograms (DVHs). The DVH results are in good agreement with those from the CORVUS treatment planning system. Conclusion: We developed a dynamic MLC method to examine the stability of leaf speed, including the effects of acceleration and deceleration of leaf motion, and to ensure that leaf-speed variations did not affect the generated intensity profile. The mechanical requirements were better satisfied using this method. The project is sponsored by the Scientific Research Foundation for the Returned Overseas Chinese Scholars, State Education Ministry.
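
    The paper's own algebraic formulation is not reproduced in the abstract; for orientation, the classic sliding-window rule for sequencing one MLC leaf pair (Spirou-Chui style; a generic sketch, not the authors' algorithm) derives monotone leaf trajectories directly from the fluence increments:

```python
import numpy as np

def sequence_leaves(fluence):
    """Sliding-window leaf sequencing for one MLC leaf pair: returns
    monotone opening/closing times such that close[i] - open[i] equals
    the requested fluence at every sample position.  The opening leaf is
    delayed by the accumulated negative increments, the closing leaf by
    the accumulated positive ones."""
    f = np.asarray(fluence, dtype=float)
    df = np.diff(f)
    t_open = np.concatenate(([0.0], np.cumsum(np.maximum(-df, 0.0))))
    t_close = f[0] + np.concatenate(([0.0], np.cumsum(np.maximum(df, 0.0))))
    return t_open, t_close
```

    The total delivery time, f[0] plus the sum of positive increments, is the well-known minimum for unidirectional leaf motion.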

  6. Influence of foundation mass and surface roughness on dynamic response of beam on dynamic foundation subjected to the moving load

    NASA Astrophysics Data System (ADS)

    Tran Quoc, Tinh; Khong Trong, Toan; Luong Van, Hai

    2018-04-01

    In this paper, the Improved Moving Element Method (IMEM) is used to analyze the dynamic response of Euler-Bernoulli beam structures on a dynamic foundation model subjected to a moving load. The effects of the characteristic foundation parameters, such as the Winkler stiffness, the shear layer of the Pasternak model, the viscoelastic dashpot, and the foundation mass parameter, are investigated. Beams are modeled by moving elements while the load is fixed. Based on the principle of virtual work and the theory of the moving element method, the differential equation of motion of the system is established and solved by numerical integration based on the Newmark algorithm. The influence of the foundation mass and the roughness of the beam surface on the dynamic response of the beam is examined in detail.
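
    The Newmark time step used for such equations of motion can be sketched for a generic system M a + C v + K d = F(t) (average-acceleration variant; a textbook sketch, not the paper's IMEM code):

```python
import numpy as np

def newmark(M, C, K, F, d0, v0, dt, n_steps, beta=0.25, gamma=0.5):
    """Newmark-beta time integration (average acceleration, which is
    unconditionally stable) for M a + C v + K d = F(t).  Returns the
    displacement history, one row per time level."""
    d, v = d0.copy(), v0.copy()
    a = np.linalg.solve(M, F(0.0) - C @ v - K @ d)     # consistent a(0)
    Keff = K + gamma / (beta * dt) * C + M / (beta * dt**2)
    out = [d.copy()]
    for n in range(1, n_steps + 1):
        t = n * dt
        rhs = (F(t)
               + M @ (d / (beta * dt**2) + v / (beta * dt)
                      + (0.5 / beta - 1) * a)
               + C @ (gamma / (beta * dt) * d + (gamma / beta - 1) * v
                      + dt * (gamma / (2 * beta) - 1) * a))
        d_new = np.linalg.solve(Keff, rhs)
        a_new = ((d_new - d) / (beta * dt**2) - v / (beta * dt)
                 - (0.5 / beta - 1) * a)
        v = v + dt * ((1 - gamma) * a + gamma * a_new)
        d, a = d_new, a_new
        out.append(d.copy())
    return np.array(out)
```

    For an undamped single-degree-of-freedom oscillator the scheme conserves amplitude and shows only the small period elongation expected of the average-acceleration rule.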

  7. MULTI-OBJECTIVE ONLINE OPTIMIZATION OF BEAM LIFETIME AT APS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Yipeng

    In this paper, online optimization of beam lifetime at the APS (Advanced Photon Source) storage ring is presented. A general genetic algorithm (GA) is developed and employed for several online optimizations in the APS storage ring. Sextupole magnets in 40 sectors of the APS storage ring are employed as variables for the online nonlinear beam dynamics optimization. The algorithm employs several optimization objectives and is designed to run in either top-up mode or beam-current decay mode. Up to 50% improvement of beam lifetime is demonstrated, without affecting the transverse beam sizes and other relevant parameters. In some cases, the top-up injection efficiency is also improved.
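
    As an illustration of the optimizer class, a minimal real-coded GA with tournament selection, uniform crossover, Gaussian mutation, and elitism might look as follows (all operators and parameters here are assumptions; the actual APS optimizer, its objectives, and its machine interface differ):

```python
import numpy as np

def ga_maximize(objective, bounds, pop_size=40, generations=60, seed=0):
    """Minimal real-coded genetic algorithm: binary tournament selection,
    uniform crossover, Gaussian mutation, one-elite survival."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    X = rng.uniform(lo, hi, (pop_size, len(lo)))
    for _ in range(generations):
        fit = np.array([objective(x) for x in X])
        elite = X[np.argmax(fit)].copy()
        # binary tournament selection
        i, j = rng.integers(0, pop_size, (2, pop_size))
        parents = X[np.where(fit[i] > fit[j], i, j)]
        # uniform crossover with a shifted copy of the parent pool
        mask = rng.random(parents.shape) < 0.5
        children = np.where(mask, parents, np.roll(parents, 1, axis=0))
        # Gaussian mutation, clipped to the search box
        children += rng.normal(0.0, 0.05 * (hi - lo), children.shape)
        X = np.clip(children, lo, hi)
        X[0] = elite                      # elitism
    fit = np.array([objective(x) for x in X])
    return X[np.argmax(fit)], float(fit.max())
```
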

  8. Dynamic splitting of Gaussian pencil beams in heterogeneity-correction algorithms for radiotherapy with heavy charged particles.

    PubMed

    Kanematsu, Nobuyuki; Komori, Masataka; Yonai, Shunsuke; Ishizaki, Azusa

    2009-04-07

    The pencil-beam algorithm is valid only when the elementary Gaussian beams are small compared to the lateral heterogeneity of the medium, which is not always true in actual radiotherapy with protons and ions. This work presents a solution to this problem. We found an approximate self-similarity of Gaussian distributions, with which Gaussian beams can split into narrower, deflecting daughter beams when their sizes have overreached the lateral heterogeneity in the beam-transport calculation. The effectiveness was assessed in a carbon-ion beam experiment in the presence of steep range compensation, where the splitting calculation reproduced a detour effect amounting to about 10% in dose, or as large as the lateral particle disequilibrium effect. The efficiency was analyzed in calculations for carbon-ion and proton beams with a heterogeneous phantom model, where beam splitting increased computing times by factors of 4.7 and 3.2, respectively. The present method generally improves the accuracy of the pencil-beam algorithm without severe inefficiency and will therefore be useful for treatment planning and potentially other demanding applications.

  9. Modeling of composite beams and plates for static and dynamic analysis

    NASA Technical Reports Server (NTRS)

    Hodges, Dewey H.; Atilgan, Ali R.; Lee, Bok Woo

    1990-01-01

    A rigorous theory and corresponding computational algorithms were developed for a variety of problems in the analysis of composite beams and plates. The modeling approach is intended to be applicable to both static and dynamic analysis of generally anisotropic, nonhomogeneous beams and plates. Development of a theory for the analysis of the local deformation of plates was the major focus; some work was also performed on the global deformation of beams. Because of the strong parallel between beams and plates, the two were treated together as thin bodies, especially in cases where doing so clarifies the meaning of certain terminology and the motivation behind certain mathematical operations.

  10. Variable-spot ion beam figuring

    NASA Astrophysics Data System (ADS)

    Wu, Lixiang; Qiu, Keqiang; Fu, Shaojun

    2016-03-01

    This paper introduces a new scheme of ion beam figuring (IBF), or rather variable-spot IBF, which is conducted at a constant scanning velocity with variable-spot ion beam collimated by a variable diaphragm. It aims at improving the reachability and adaptation of the figuring process within the limits of machine dynamics by varying the ion beam spot size instead of the scanning velocity. In contrast to the dwell time algorithm in the conventional IBF, the variable-spot IBF adopts a new algorithm, which consists of the scan path programming and the trajectory optimization using pattern search. In this algorithm, instead of the dwell time, a new concept, integral etching time, is proposed to interpret the process of variable-spot IBF. We conducted simulations to verify its feasibility and practicality. The simulation results indicate the variable-spot IBF is a promising alternative to the conventional approach.
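
    Pattern search, the optimizer named for the trajectory optimization, can be sketched in its simplest compass-search form (a generic sketch; the paper's objective function and parametrization are not reproduced):

```python
import numpy as np

def pattern_search(f, x0, step=0.5, tol=1e-6, max_iter=1000):
    """Compass/pattern search: poll +-step along each coordinate, move to
    the first improving point, otherwise halve the step.  Minimizes f
    without any gradient information."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(max_iter):
        if step < tol:
            break
        improved = False
        for i in range(len(x)):
            for s in (+step, -step):
                trial = x.copy()
                trial[i] += s
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
                    break
            if improved:
                break
        if not improved:
            step *= 0.5                  # refine the poll pattern
    return x, fx
```

    Because it only compares function values, the method tolerates the noisy, simulation-based objectives typical of figuring-process optimization.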

  11. Dynamic Beam Solutions for Real-Time Simulation and Control Development of Flexible Rockets

    NASA Technical Reports Server (NTRS)

    Su, Weihua; King, Cecilia K.; Clark, Scott R.; Griffin, Edwin D.; Suhey, Jeffrey D.; Wolf, Michael G.

    2016-01-01

    In this study, flexible rockets are structurally represented by linear beams. Both direct and indirect solutions of the beam dynamic equations are sought to facilitate real-time simulation and control development for flexible rockets. The direct solution is obtained by numerically integrating the beam structural dynamic equation using an explicit Newmark-based scheme, which allows for stable and fast transient solutions to the dynamics of flexible rockets. Furthermore, in real-time operation, the bending strain of the beam is measured by fiber optic sensors (FOS) at intermittent locations along the span, while both angular velocity and translational acceleration are measured at a single point by an inertial measurement unit (IMU). This paper also develops analytical and numerical solutions of the beam dynamics based on the limited measurement data to facilitate real-time control development. Numerical studies demonstrate the accuracy of these real-time solutions to the beam dynamics. Such analytical and numerical solutions, when integrated with data processing and control algorithms and mechanisms, have the potential to increase launch availability by processing flight data into the flexible launch vehicle's control system.

  12. Experimental investigation of a moving averaging algorithm for motion perpendicular to the leaf travel direction in dynamic MLC target tracking.

    PubMed

    Yoon, Jai-Woong; Sawant, Amit; Suh, Yelin; Cho, Byung-Chul; Suh, Tae-Suk; Keall, Paul

    2011-07-01

    In dynamic multileaf collimator (MLC) motion tracking with complex intensity-modulated radiation therapy (IMRT) fields, target motion perpendicular to the MLC leaf travel direction can cause beam holds, which increase beam delivery time by up to a factor of 4. As a means to balance delivery efficiency and accuracy, a moving average algorithm was incorporated into a dynamic MLC motion tracking system (i.e., moving average tracking) to account for target motion perpendicular to the MLC leaf travel direction. The experimental investigation of the moving average algorithm compared with real-time tracking and no-compensation beam delivery is described. The properties of the moving average algorithm were measured and compared with those of real-time tracking (dynamic MLC motion tracking accounting for target motion both parallel and perpendicular to the leaf travel direction) and no-compensation beam delivery. The algorithm was investigated using a synthetic motion trace with a baseline drift and four patient-measured 3D tumor motion traces representing regular and irregular motions with varying baseline drifts. Each motion trace was reproduced by a moving platform. The delivery efficiency, geometric accuracy, and dosimetric accuracy were evaluated for conformal, step-and-shoot IMRT, and dynamic sliding window IMRT treatment plans using the synthetic and patient motion traces. The dosimetric accuracy was quantified via a gamma-test with a 3%/3 mm criterion. The delivery efficiency ranged from 89 to 100% for moving average tracking, 26 to 100% for real-time tracking, and 100% (by definition) for no compensation. The root-mean-square geometric error ranged from 3.2 to 4.0 mm for moving average tracking, 0.7 to 1.1 mm for real-time tracking, and 3.7 to 7.2 mm for no compensation. The percentage of dosimetric points failing the gamma-test ranged from 4 to 30% for moving average tracking, 0 to 23% for real-time tracking, and 10 to 47% for no compensation. The delivery efficiency of moving average tracking was up to four times higher than that of real-time tracking and approached the efficiency of no compensation in all cases. The geometric and dosimetric accuracy of the moving average algorithm fell between those of real-time tracking and no compensation, with approximately half the percentage of dosimetric points failing the gamma-test compared with no compensation.
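
    The moving-average idea can be sketched as a causal running mean applied to the perpendicular motion component, which the MLC then follows instead of the raw trace (the window length is a free parameter; illustrative only, not the clinical implementation):

```python
import numpy as np

def moving_average_positions(perp_trace, window):
    """Causal moving average of the perpendicular motion component: each
    output sample averages the last `window` inputs (fewer at start-up),
    trading geometric accuracy for fewer beam holds."""
    out = np.empty(len(perp_trace), dtype=float)
    buf, csum = [], 0.0
    for k, p in enumerate(perp_trace):
        buf.append(float(p))
        csum += p
        if len(buf) > window:
            csum -= buf.pop(0)
        out[k] = csum / len(buf)
    return out
```

    A slow baseline drift passes through the filter almost unchanged, while fast excursions, the main cause of beam holds, are attenuated.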

  13. Online optimization of storage ring nonlinear beam dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Xiaobiao; Safranek, James

    2015-08-01

    We propose to optimize the nonlinear beam dynamics of existing and future storage rings with direct online optimization techniques. This approach may be of crucial importance for the implementation of diffraction-limited storage rings. In this paper, considerations and algorithms for the online optimization approach are discussed. We have applied this approach to experimentally improve the dynamic aperture of the SPEAR3 storage ring with the robust conjugate direction search method and the particle swarm optimization method. The dynamic aperture was improved by more than 5 mm within a short period of time. The experimental setup and results are presented.
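
    Of the two optimizers named, particle swarm optimization is easily sketched; the following minimal global-best variant (generic parameters and a noise-free objective; the machine study setting is far harsher) minimizes a black-box function over box bounds:

```python
import numpy as np

def particle_swarm(f, bounds, n_particles=30, iters=100,
                   w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal global-best particle swarm optimization, minimizing f.
    Each particle is pulled toward its personal best and the swarm best."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    dim = len(lo)
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros((n_particles, dim))
    pbest = x.copy()
    pval = np.array([f(p) for p in x])
    g = pbest[np.argmin(pval)]
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.array([f(p) for p in x])
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[np.argmin(pval)]
    return g, float(pval.min())
```
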

  14. Genetic algorithm based active vibration control for a moving flexible smart beam driven by a pneumatic rod cylinder

    NASA Astrophysics Data System (ADS)

    Qiu, Zhi-cheng; Shi, Ming-li; Wang, Bin; Xie, Zhuo-wei

    2012-05-01

    A rod-cylinder-based pneumatic driving scheme is proposed to suppress the vibration of a flexible smart beam. The pulse code modulation (PCM) method is employed to control the motion of the cylinder's piston rod for simultaneous positioning and vibration suppression. First, the system dynamics model is derived using Hamilton's principle, and its standard state-space representation is obtained for characteristic analysis, controller design, and simulation. Second, a genetic algorithm (GA) is applied to optimize and adaptively tune the control gain parameters based on a specific performance index. Numerical simulations are performed on the pneumatically driven elastic beam system, using the established model and a controller with gains tuned by the GA optimization process. Finally, an experimental setup for the flexible beam driven by a pneumatic rod cylinder is constructed, and experiments for suppressing vibrations of the flexible beam are conducted. Theoretical analysis, numerical simulation, and experimental results demonstrate that the proposed pneumatic drive scheme and the adopted control algorithms are feasible: the large-amplitude vibration of the first bending mode can be suppressed effectively.

  15. Symplectic modeling of beam loading in electromagnetic cavities

    DOE PAGES

    Abell, Dan T.; Cook, Nathan M.; Webb, Stephen D.

    2017-05-22

    Simulating beam loading in radio frequency accelerating structures is critical for understanding higher-order mode effects on beam dynamics, such as beam break-up instability in energy recovery linacs. Full wave simulations of beam loading in radio frequency structures are computationally expensive, while reduced models can ignore essential physics and can be difficult to generalize. Here, we present a self-consistent algorithm, derived from the least-action principle, which can model an arbitrary number of cavity eigenmodes with a generic beam distribution. It has been implemented in our new Open Library for Investigating Vacuum Electronics (OLIVE).

  16. Input Forces Estimation for Nonlinear Systems by Applying a Square-Root Cubature Kalman Filter.

    PubMed

    Song, Xuegang; Zhang, Yuexin; Liang, Dakai

    2017-10-10

    This work presents a novel inverse algorithm to estimate time-varying input forces in nonlinear beam systems. With the system parameters determined, the input forces can be estimated in real time from dynamic responses, which can be used for structural health monitoring. In the input-force estimation process, the fourth-order Runge-Kutta algorithm was employed to discretize the state equations; a square-root cubature Kalman filter (SRCKF) was employed to suppress white noise; and the residual innovation sequences, a priori state estimate, gain matrix, and innovation covariance generated by the SRCKF were employed to estimate the magnitude and location of the input forces using a nonlinear estimator based on the least squares method. Numerical simulations of a large-deflection beam and an experiment on a linear beam constrained by a nonlinear spring were employed. The results demonstrated the accuracy of the nonlinear algorithm.
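
    The Runge-Kutta discretization step named above is standard; for a state equation x' = f(t, x, u) it reads as follows (a generic sketch, with the SRCKF itself omitted):

```python
import numpy as np

def rk4_step(f, x, u, t, dt):
    """Classical fourth-order Runge-Kutta step for x' = f(t, x, u), as
    used to discretize a continuous state equation before filtering.
    The input u is held constant over the step."""
    k1 = f(t, x, u)
    k2 = f(t + dt / 2, x + dt / 2 * k1, u)
    k3 = f(t + dt / 2, x + dt / 2 * k2, u)
    k4 = f(t + dt, x + dt * k3, u)
    return x + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
```
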

  17. The dynamic micro computed tomography at SSRF

    NASA Astrophysics Data System (ADS)

    Chen, R.; Xu, L.; Du, G.; Deng, B.; Xie, H.; Xiao, T.

    2018-05-01

    Synchrotron radiation micro-computed tomography (SR-μCT) is a critical technique for quantitative characterizing the 3D internal structure of samples, recently the dynamic SR-μCT has been attracting vast attention since it can evaluate the three-dimensional structure evolution of a sample. A dynamic μCT method, which is based on monochromatic beam, was developed at the X-ray Imaging and Biomedical Application Beamline at Shanghai Synchrotron Radiation Facility, by combining the compressed sensing based CT reconstruction algorithm and hardware upgrade. The monochromatic beam based method can achieve quantitative information, and lower dose than the white beam base method in which the lower energy beam is absorbed by the sample rather than contribute to the final imaging signal. The developed method is successfully used to investigate the compression of the air sac during respiration in a bell cricket, providing new knowledge for further research on the insect respiratory system.

  18. Dynamic phasing of multichannel cw laser radiation by means of a stochastic gradient algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Volkov, V A; Volkov, M V; Garanin, S G

    2013-09-30

    The phasing of a multichannel laser beam by means of an iterative stochastic parallel gradient (SPG) algorithm has been numerically and experimentally investigated. The operation of the SPG algorithm is simulated, the acceptable range of amplitudes of probe phase shifts is found, and the algorithm parameters at which the desired Strehl number can be obtained with a minimum number of iterations are determined. An experimental bench with phase modulators based on lithium niobate, controlled by a multichannel electronic unit with a real-time microcontroller, has been designed. Phasing of 16 cw laser beams at a system response bandwidth of 3.7 kHz and phase thermal distortions in a frequency band of about 10 Hz is experimentally demonstrated. The experimental data are in complete agreement with the calculation results.
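
    An SPG (often called SPGD) iteration applies a random parallel probe to all channel phases, measures the resulting metric change, and steps along the estimated gradient. A toy sketch with a Strehl-like combining metric follows (the gain, probe amplitude, and iteration count are assumptions, not the paper's tuned values):

```python
import numpy as np

def spgd_phase(metric, n_ch, gain=30.0, probe=0.1, iters=2000, seed=1):
    """Stochastic parallel gradient descent: perturb all channel phases
    by +-probe simultaneously and step along the estimated gradient of
    the sharpness metric.  Maximizes `metric`."""
    rng = np.random.default_rng(seed)
    phi = rng.uniform(-np.pi, np.pi, n_ch)
    for _ in range(iters):
        delta = probe * rng.choice([-1.0, 1.0], n_ch)
        dJ = metric(phi + delta) - metric(phi - delta)   # two-sided probe
        phi = phi + gain * dJ * delta
    return phi

def strehl(phi):
    """Normalized combining efficiency of n equal-amplitude beams
    (1.0 when all phases are equal)."""
    return float(np.abs(np.exp(1j * phi).sum()) ** 2) / len(phi) ** 2
```

    A single metric detector suffices, which is why the scheme scales to many channels with simple hardware.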

  19. Self-consistent Simulations and Analysis of the Coupled-Bunch Instability for Arbitrary Multi-Bunch Configurations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bassi, Gabriele; Blednykh, Alexei; Smalyuk, Victor

    A novel algorithm for self-consistent simulations of long-range wakefield effects has been developed and applied to the study of both longitudinal and transverse coupled-bunch instabilities at NSLS-II. The algorithm is implemented in the new parallel tracking code space (self-consistent parallel algorithm for collective effects) discussed in the paper. The code is applicable for accurate beam dynamics simulations in cases where both bunch-to-bunch and intrabunch motions need to be taken into account, such as chromatic head-tail effects on the coupled-bunch instability of a beam with a nonuniform filling pattern, or multibunch and single-bunch effects of a passive higher-harmonic cavity. The numerical simulations have been compared with analytical studies. For a beam with an arbitrary filling pattern, intensity-dependent complex frequency shifts have been derived starting from a system of coupled Vlasov equations. The analytical formulas and numerical simulations confirm that the analysis reduces to the formulation of an eigenvalue problem based on the known formulas of the complex frequency shifts for the uniform filling pattern case.

  20. Rating of Dynamic Coefficient for Simple Beam Bridge Design on High-Speed Railways

    NASA Astrophysics Data System (ADS)

    Diachenko, Leonid; Benin, Andrey; Smirnov, Vladimir; Diachenko, Anastasia

    2018-06-01

    The aim of this work is to improve the methodology for the dynamic computation of simple beam spans under the impact of high-speed trains. Mathematical simulation utilizing numerical and analytical methods of structural mechanics is used in the research. The article analyses the parameters of the effect of high-speed trains on simple beam spanning bridge structures and suggests a technique for determining the dynamic index of the live load. The reliability of the proposed methodology is confirmed by the results of numerical simulation of high-speed train passage over spans at different speeds. The proposed algorithm of dynamic computation is based on a connection between the maximum acceleration of the span in the resonance mode of vibrations and the main factors of the stress-strain state. The methodology allows determining both maximum and minimum values of the main forces in the construction, which makes it possible to perform endurance tests. It is noted that the dynamic additions for the components of the stress-strain state (bending moments, transverse force, and vertical deflections) are different; this condition necessitates a differentiated approach to the evaluation of dynamic coefficients when performing design verification for limiting states of groups I and II. Practical importance: the methodology for determining the dynamic coefficients allows performing the dynamic calculation and determining the main forces in split beam spans without numerical simulation and direct dynamic analysis, which significantly reduces the labour costs of design.

  21. Flat panel detector-based cone beam computed tomography with a circle-plus-two-arcs data acquisition orbit: preliminary phantom study.

    PubMed

    Ning, Ruola; Tang, Xiangyang; Conover, David; Yu, Rongfeng

    2003-07-01

    Cone beam computed tomography (CBCT) has been investigated in the past two decades due to its potential advantages over fan beam CT. These advantages include (a) great improvement in data acquisition efficiency, spatial resolution, and spatial resolution uniformity, (b) substantially better utilization of the x-ray photons generated by the x-ray tube compared to fan beam CT, and (c) significant advancement in clinical three-dimensional (3D) CT applications. However, most past studies of CBCT have focused on cone beam data acquisition theories and reconstruction algorithms. The recent development of x-ray flat panel detectors (FPD) has made CBCT imaging feasible and practical. This paper reports a newly built flat panel detector-based CBCT prototype scanner and presents the results of a preliminary evaluation of the prototype through a phantom study. The prototype consisted of an x-ray tube, a flat panel detector, a GE 8800 CT gantry, a patient table, and a computer system. It was constructed by modifying a GE 8800 CT gantry such that both a single-circle cone beam acquisition orbit and a circle-plus-two-arcs orbit can be achieved. With a circle-plus-two-arcs orbit, a complete set of cone beam projection data can be obtained, consisting of a set of circle projections and a set of arc projections. Using the prototype scanner, the circle projections were acquired by rotating the x-ray tube and the FPD together on the gantry, and the arc projections were obtained by tilting the gantry while the x-ray tube and detector were at the 12 and 6 o'clock positions, respectively. A filtered backprojection exact cone beam reconstruction algorithm based on a circle-plus-two-arcs orbit was used for cone beam reconstruction from both the circle and arc projections. The system was first characterized in terms of the linearity and dynamic range of the detector. Then the uniformity, spatial resolution, and low contrast resolution were assessed using different phantoms, mainly in the central plane of the cone beam reconstruction. Finally, the reconstruction accuracy of the circle-plus-two-arcs orbit and its related filtered backprojection cone beam volume CT reconstruction algorithm was evaluated with a specially designed disk phantom. The results obtained using the new cone beam acquisition orbit and the related reconstruction algorithm were compared, in terms of reconstruction accuracy, to those obtained using a single-circle cone beam geometry and Feldkamp's algorithm. The results of the study demonstrate that the circle-plus-two-arcs cone beam orbit is achievable in practice, and that the reconstruction accuracy is significantly improved with the circle-plus-two-arcs orbit and its related exact algorithm compared to a single-circle cone beam orbit and Feldkamp's algorithm.

  22. Ef: Software for Nonrelativistic Beam Simulation by Particle-in-Cell Algorithm

    NASA Astrophysics Data System (ADS)

    Boytsov, A. Yu.; Bulychev, A. A.

    2018-04-01

    Understanding particle dynamics is crucial in the construction of electron guns, ion sources, and other types of nonrelativistic beam devices. Apart from external guiding and focusing systems, a prominent role in the evolution of such low-energy beams is played by particle-particle interaction. Numerical simulations taking these effects into account are typically accomplished by the well-known particle-in-cell method. In practice, a convenient simulation program should not only implement this method, but also support parallelization, provide integration with CAD systems, and allow access to the details of the simulation algorithm. To address these requirements, development of a new open source code - Ef - has been started. Its current features and main functionality are presented. Comparison with several analytical models demonstrates good agreement between the numerical results and the theory. Further development plans are discussed.
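
    One time step of the particle-in-cell method can be sketched in 1D electrostatic form: deposit charge, solve the field, gather, push (periodic boundaries, normalized units; an illustrative toy, not Ef's actual algorithm):

```python
import numpy as np

def pic_step(x, v, q_m, L, n_grid, dt):
    """One step of a minimal 1D electrostatic particle-in-cell loop:
    cloud-in-cell (CIC) charge deposition, FFT Poisson solve with a
    neutralizing background, linear field gather, and a push."""
    dx = L / n_grid
    # --- deposit charge with linear (CIC) weighting ---
    g = x / dx
    i0 = np.floor(g).astype(int) % n_grid
    w1 = g - np.floor(g)
    rho = np.zeros(n_grid)
    np.add.at(rho, i0, 1.0 - w1)
    np.add.at(rho, (i0 + 1) % n_grid, w1)
    rho = rho / dx
    rho -= np.mean(rho)                      # neutralizing background
    # --- field solve: ik E_k = rho_k (Gauss's law, normalized) ---
    k = 2 * np.pi * np.fft.fftfreq(n_grid, d=dx)
    rho_k = np.fft.fft(rho)
    E_k = np.zeros_like(rho_k)
    nz = k != 0
    E_k[nz] = rho_k[nz] / (1j * k[nz])
    E = np.real(np.fft.ifft(E_k))
    # --- gather field at particle positions and push ---
    Ep = (1.0 - w1) * E[i0] + w1 * E[(i0 + 1) % n_grid]
    v = v + q_m * Ep * dt
    x = (x + v * dt) % L
    return x, v
```

    A cold, uniformly loaded beam is a useful sanity check: the deposited charge cancels the background exactly, so the self-field vanishes and the particles coast.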

  23. Effect of natural weathering conditions on the dynamic behavior of woven aramid composites

    NASA Astrophysics Data System (ADS)

    Kaya, A. I.; Kısa, M.; Özen, M.

    2018-02-01

    In this study, the aging of woven aramid/epoxy composites under different natural conditions was studied. Composite beams were manufactured by the Vacuum Assisted Resin Infusion Method (VARIM) and cut into specimens for ASTM D3039 and vibration tests. The elastic moduli of the reference composites were found according to the ASTM D3039 standard. The methodology was validated numerically in Ansys software before the aging process. An algorithm based on the FFT (Fast Fourier Transform) was written in Matlab to process the output of the vibration analysis data so as to identify the natural frequencies of the beams. The composites were aged for 12 months, and the effects of various natural weathering conditions on the woven aramid composite beams were surveyed through vibration analysis at 3-month intervals. Five specimens of woven aramid beams were considered for the dynamic tests, and the effect of aging on the first three natural frequencies was determined.
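
    The FFT-based frequency identification can be sketched as peak-picking on the windowed magnitude spectrum of a vibration record (a generic sketch; the authors' Matlab routine is not reproduced, and Python is used here for consistency with the other examples):

```python
import numpy as np

def natural_frequencies(signal, fs, n_peaks=3):
    """Identify the dominant natural frequencies of a vibration record:
    Hann-window the signal, take the magnitude spectrum, and return the
    frequencies of the n_peaks largest local maxima, sorted ascending."""
    n = len(signal)
    spec = np.abs(np.fft.rfft(signal * np.hanning(n)))
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    # local maxima of the magnitude spectrum (DC bin excluded)
    peaks = [i for i in range(1, len(spec) - 1)
             if spec[i] > spec[i - 1] and spec[i] >= spec[i + 1]]
    peaks.sort(key=lambda i: spec[i], reverse=True)
    return sorted(freqs[i] for i in peaks[:n_peaks])
```

    A downward drift of these peak frequencies over the aging intervals indicates stiffness degradation of the beam.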

  4. Self-consistent Simulations and Analysis of the Coupled-Bunch Instability for Arbitrary Multi-Bunch Configurations

    DOE PAGES

    Bassi, Gabriele; Blednykh, Alexei; Smalyuk, Victor

    2016-02-24

    A novel algorithm for self-consistent simulations of long-range wakefield effects has been developed and applied to the study of both longitudinal and transverse coupled-bunch instabilities at NSLS-II. The algorithm is implemented in the new parallel tracking code SPACE (self-consistent parallel algorithm for collective effects) discussed in the paper. The code is applicable to accurate beam dynamics simulations in cases where both bunch-to-bunch and intrabunch motion need to be taken into account, such as chromatic head-tail effects on the coupled-bunch instability of a beam with a nonuniform filling pattern, or multibunch and single-bunch effects of a passive higher-harmonic cavity. The numerical simulations have been compared with analytical studies. For a beam with an arbitrary filling pattern, intensity-dependent complex frequency shifts have been derived starting from a system of coupled Vlasov equations. The analytical formulas and numerical simulations confirm that the analysis reduces to the formulation of an eigenvalue problem based on the known formulas of the complex frequency shifts for the uniform filling pattern case.

  5. Novel analytical model for optimizing the pull-in voltage in a flexured MEMS switch incorporating beam perforation effect

    NASA Astrophysics Data System (ADS)

    Guha, K.; Laskar, N. M.; Gogoi, H. J.; Borah, A. K.; Baishnab, K. L.; Baishya, S.

    2017-11-01

    This paper presents a new method for the design, modelling and optimization of a uniform serpentine-meander-based MEMS shunt capacitive switch with perforation on the upper beam. The new approach is proposed to improve the pull-in voltage performance of the MEMS switch. First, a new analytical model of the pull-in voltage is proposed using the modified Meijs-Fokkema capacitance model, accounting simultaneously for the nonlinear electrostatic force, the fringing field due to beam thickness, and the etched holes in the beam; the model is then validated against simulations in the benchmark full-3D FEM solver CoventorWare over a wide range of structural parameter variations, showing good agreement. Secondly, an optimization method is presented to determine the optimum switch configuration for minimum pull-in voltage, with the proposed analytical model as the objective function. Several high-performance evolutionary optimization algorithms have been utilized to obtain the optimum dimensions with low computational cost and complexity. Upon comparing the applied algorithms, the Dragonfly Algorithm is found to be the most suitable in terms of minimum pull-in voltage and convergence speed. The optimized values are validated against CoventorWare simulations, which show very satisfactory agreement, with a small deviation of 0.223 V. In addition, the paper proposes, for the first time, a novel algorithmic approach for the uniform arrangement of square perforation holes in a given beam area of an RF MEMS switch. The algorithm dynamically accommodates all the square holes within the given beam area such that the maximum space is utilized. This automated arrangement of perforation holes further reduces the computational complexity and improves the design accuracy of the complex perforated MEMS switch design.
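
    For orientation, the textbook parallel-plate pull-in voltage, which the paper's modified capacitance model refines with fringing-field and perforation terms, is V_pi = sqrt(8 k g0^3 / (27 eps0 A)). A minimal sketch assuming a rigid plate with no fringing and no holes:

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def pull_in_voltage(k, g0, area):
    """Textbook pull-in voltage of a parallel-plate electrostatic actuator
    with spring constant k (N/m), gap g0 (m), and plate area (m^2)."""
    return math.sqrt(8.0 * k * g0 ** 3 / (27.0 * EPS0 * area))
```

    For example, k = 10 N/m, g0 = 3 um and A = 0.1 mm^2 give a pull-in voltage of roughly 9.5 V; serpentine meanders lower k, and hence the pull-in voltage.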

  6. Active vibration control of functionally graded beams with piezoelectric layers based on higher order shear deformation theory

    NASA Astrophysics Data System (ADS)

    Bendine, K.; Boukhoulda, F. B.; Nouari, M.; Satla, Z.

    2016-12-01

    This paper reports on a study of active vibration control of functionally graded beams with upper and lower surface-bonded piezoelectric layers. The model is based on higher-order shear deformation theory and implemented using the finite element method (FEM). The properties of the functionally graded beam (FGB) are graded along the thickness direction. The piezoelectric actuator provides a damping effect on the FGB by means of a velocity feedback control algorithm. A Matlab program has been developed for the FGB model and compared with ANSYS APDL. Using Newmark's method, numerical solutions are obtained for the dynamic equations of the FGB with piezoelectric layers. Numerical results show the effects of the constituent volume fraction and the influence of the feedback control gain on the frequency and dynamic response of FGBs.
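
    The time-stepping scheme named above can be illustrated for a single degree of freedom; the sketch below uses Newmark's average-acceleration method with an extra damping coefficient g_v standing in for a velocity-feedback control force (an assumption for illustration, not the paper's finite element model):

```python
import numpy as np

def newmark_sdof(m, c, k, f, dt, g_v=0.0, beta=0.25, gamma=0.5):
    """Newmark (average-acceleration) time stepping for
    m*u'' + (c + g_v)*u' + k*u = f(t); g_v mimics velocity feedback."""
    ct = c + g_v
    n = len(f)
    u, v, a = np.zeros(n), np.zeros(n), np.zeros(n)
    a[0] = (f[0] - ct * v[0] - k * u[0]) / m
    keff = m / (beta * dt**2) + gamma * ct / (beta * dt) + k
    for i in range(n - 1):
        feff = (f[i + 1]
                + m * (u[i] / (beta * dt**2) + v[i] / (beta * dt)
                       + (0.5 / beta - 1.0) * a[i])
                + ct * (gamma * u[i] / (beta * dt)
                        + (gamma / beta - 1.0) * v[i]
                        + dt * (0.5 * gamma / beta - 1.0) * a[i]))
        u[i + 1] = feff / keff                        # displacement update
        a[i + 1] = ((u[i + 1] - u[i]) / (beta * dt**2)
                    - v[i] / (beta * dt) - (0.5 / beta - 1.0) * a[i])
        v[i + 1] = v[i] + dt * ((1 - gamma) * a[i] + gamma * a[i + 1])
    return u, v
```

    With beta = 1/4 and gamma = 1/2 the scheme is unconditionally stable, which is why it is a common default for structural dynamics.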

  7. Vertical dynamic deflection measurement in concrete beams with the Microsoft Kinect.

    PubMed

    Qi, Xiaojuan; Lichti, Derek; El-Badry, Mamdouh; Chow, Jacky; Ang, Kathleen

    2014-02-19

    The Microsoft Kinect is arguably the most popular RGB-D camera currently on the market, partially due to its low cost. It offers many advantages for the measurement of dynamic phenomena since it can directly measure three-dimensional coordinates of objects at video frame rate using a single sensor. This paper presents the results of an investigation into the development of a Microsoft Kinect-based system for measuring the deflection of reinforced concrete beams subjected to cyclic loads. New segmentation methods for object extraction from the Kinect's depth imagery and vertical displacement reconstruction algorithms have been developed and implemented to reconstruct the time-dependent displacement of concrete beams tested in laboratory conditions. The results demonstrate that the amplitude and frequency of the vertical displacements can be reconstructed with submillimetre and milliHz-level precision and accuracy, respectively.

  8. Vertical Dynamic Deflection Measurement in Concrete Beams with the Microsoft Kinect

    PubMed Central

    Qi, Xiaojuan; Lichti, Derek; El-Badry, Mamdouh; Chow, Jacky; Ang, Kathleen

    2014-01-01

    The Microsoft Kinect is arguably the most popular RGB-D camera currently on the market, partially due to its low cost. It offers many advantages for the measurement of dynamic phenomena since it can directly measure three-dimensional coordinates of objects at video frame rate using a single sensor. This paper presents the results of an investigation into the development of a Microsoft Kinect-based system for measuring the deflection of reinforced concrete beams subjected to cyclic loads. New segmentation methods for object extraction from the Kinect's depth imagery and vertical displacement reconstruction algorithms have been developed and implemented to reconstruct the time-dependent displacement of concrete beams tested in laboratory conditions. The results demonstrate that the amplitude and frequency of the vertical displacements can be reconstructed with submillimetre and milliHz-level precision and accuracy, respectively. PMID:24556668

  9. SU-E-T-465: Dose Calculation Method for Dynamic Tumor Tracking Using a Gimbal-Mounted Linac

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sugimoto, S; Inoue, T; Kurokawa, C

    Purpose: Dynamic tumor tracking using the gimbal-mounted linac (Vero4DRT, Mitsubishi Heavy Industries, Ltd., Japan) has been available when respiratory motion is significant. The irradiation accuracy of dynamic tumor tracking has been reported to be excellent. In addition to irradiation accuracy, a fast and accurate dose calculation algorithm is needed to validate the dose distribution in the presence of respiratory motion, because its multiple phases have to be considered. A modification of the dose calculation algorithm is necessary for the gimbal-mounted linac due to the degrees of freedom of the gimbal swing. The dose calculation algorithm for the gimbal motion was implemented using linear transformations between coordinate systems. Methods: The linear transformation matrices between the coordinate systems with and without gimbal swings were constructed from combinations of translation and rotation matrices. The coordinate system with the radiation source at the origin and the beam axis along the z axis was adopted. The transformation can be divided into a translation from the radiation source to the gimbal rotation center, two rotations around the center corresponding to the gimbal swings, and a translation from the gimbal center back to the radiation source. After applying the transformation matrix to the phantom or patient image, the dose calculation can be performed as in the case of no gimbal swing. The algorithm was implemented in the treatment planning system PlanUNC (University of North Carolina, NC), using the convolution/superposition algorithm. Dose calculations with and without gimbal swings were performed for a 3 × 3 cm² field with a grid size of 5 mm. Results: The calculation time was about 3 minutes per beam. No significant additional time due to the gimbal swing was observed. Conclusions: A dose calculation algorithm for finite gimbal swing was implemented. The calculation time was moderate.
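
    The transformation described in the Methods (translate source to gimbal center, apply the two swing rotations, translate back) can be sketched with homogeneous 4 × 4 matrices; the parameter names and the source-to-center distance below are illustrative assumptions, not values from the Vero4DRT:

```python
import numpy as np

def translation(t):
    """4x4 homogeneous translation matrix."""
    m = np.eye(4)
    m[:3, 3] = t
    return m

def rotation_x(a):
    """4x4 rotation about the x axis by angle a (radians)."""
    c, s = np.cos(a), np.sin(a)
    m = np.eye(4)
    m[1:3, 1:3] = [[c, -s], [s, c]]
    return m

def rotation_y(a):
    """4x4 rotation about the y axis by angle a (radians)."""
    c, s = np.cos(a), np.sin(a)
    m = np.eye(4)
    m[0, 0], m[0, 2], m[2, 0], m[2, 2] = c, s, -s, c
    return m

def gimbal_transform(pan, tilt, d_src_center):
    """Map the no-swing beam frame to the swung-beam frame: translate
    source -> gimbal center, rotate for the two swings, translate back.
    Matrices compose right-to-left."""
    down = translation([0.0, 0.0, d_src_center])
    up = translation([0.0, 0.0, -d_src_center])
    return up @ rotation_x(tilt) @ rotation_y(pan) @ down
```

    With zero swing angles the product collapses to the identity, so the standard (no-gimbal) dose calculation is recovered as a special case.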

  10. Free vibration of functionally graded beams and frameworks using the dynamic stiffness method

    NASA Astrophysics Data System (ADS)

    Banerjee, J. R.; Ananthapuvirajah, A.

    2018-05-01

    The free vibration analysis of functionally graded beams (FGBs) and frameworks containing FGBs is carried out by applying the dynamic stiffness method and deriving the elements of the dynamic stiffness matrix in explicit algebraic form. The usually adopted rule that the material properties of the FGB vary continuously through the thickness according to a power law forms the fundamental basis of the governing differential equations of motion in free vibration. The differential equations are solved in closed analytical form when the free vibratory motion is harmonic. The dynamic stiffness matrix is then formulated by relating the amplitudes of the forces to those of the displacements at the two ends of the beam. Next, the explicit algebraic expressions for the dynamic stiffness elements are derived with the help of symbolic computation. Finally, the Wittrick-Williams algorithm is applied as the solution technique to solve the free vibration problems of FGBs with uniform cross-section, stepped FGBs, and frameworks consisting of FGBs. Some numerical results are validated against published results; in the absence of published results for frameworks containing FGBs, consistency checks on the reliability of the results are performed. The paper closes with a discussion of results and conclusions.

  11. COMPARISON OF NONLINEAR DYNAMICS OPTIMIZATION METHODS FOR APS-U

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Y.; Borland, Michael

    Many different objectives and genetic algorithms have been proposed for storage ring nonlinear dynamics performance optimization. These optimization objectives include nonlinear chromaticities and driving/detuning terms, on-momentum and off-momentum dynamic acceptance, chromatic detuning, local momentum acceptance, variation of the transverse invariant, Touschek lifetime, etc. In this paper, the effectiveness of several different optimization methods and objectives is compared for the nonlinear beam dynamics optimization of the Advanced Photon Source upgrade (APS-U) lattice. The optimized solutions from these different methods are preliminarily compared in terms of dynamic acceptance, local momentum acceptance, chromatic detuning, and other performance measures.

  12. A closed-loop photon beam control study for the Advanced Light Source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Portmann, G.; Bengtsson, J.

    1993-05-01

    The third-generation Advanced Light Source (ALS) will produce extremely bright photon beams using undulators and wigglers. In order to position the photon beams with micron-level accuracy, a closed-loop feedback system is being developed. Using photon position monitors and dipole corrector magnets, a closed-loop system can automatically compensate for modeling uncertainties and exogenous disturbances. This paper presents a dynamics model for perturbations of the closed orbit of the electron beam in the ALS storage ring, including vacuum chamber magnetic field penetration effects. Using this reference model, two closed-loop feedback algorithms are compared: a classical PI controller and a two-degree-of-freedom approach. The two-degree-of-freedom method provides superior disturbance rejection while maintaining the desired performance goals. Both methods address the need to gain-schedule the controller due to the time-varying dynamics introduced by changing field strengths when scanning the insertion devices.
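
    The integral action that lets a PI controller reject constant disturbances can be illustrated with a toy loop around a first-order plant; the gains and plant model are assumptions for illustration, not the ALS orbit model:

```python
def pi_regulate(setpoint, n_steps, kp, ki, dt, disturbance):
    """Discrete PI loop around a toy integrator plant y' = u + d.
    Integral action drives the steady-state error to zero even when a
    constant disturbance d is present."""
    y, integ = 0.0, 0.0
    for _ in range(n_steps):
        e = setpoint - y
        integ += e * dt
        u = kp * e + ki * integ          # PI control law
        y += dt * (u + disturbance)      # forward-Euler plant update
    return y
```

    A pure proportional controller would settle with a residual offset of disturbance/kp; the integral term removes that offset, which is the basic argument for PI orbit feedback.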

  13. CT cardiac imaging: evolution from 2D to 3D backprojection

    NASA Astrophysics Data System (ADS)

    Tang, Xiangyang; Pan, Tinsu; Sasaki, Kosuke

    2004-04-01

    The state-of-the-art multiple detector-row CT, which usually employs fan beam reconstruction algorithms that approximate the cone beam geometry as a fan beam geometry, has been well recognized as an important modality for cardiac imaging. At present, multiple detector-row CT is evolving into volumetric CT, in which cone beam reconstruction algorithms are needed to combat the cone beam artifacts caused by large cone angles. An ECG-gated cardiac cone beam reconstruction algorithm based upon the so-called semi-CB geometry is implemented in this study. To obtain the highest temporal resolution, only the projection data corresponding to 180° plus the cone angle are row-wise rebinned into the semi-CB geometry for three-dimensional reconstruction. Data extrapolation is utilized to extend the z-coverage of the ECG-gated cardiac cone beam reconstruction algorithm toward the edge of the CT detector. A helical body phantom is used to evaluate the algorithm's z-coverage and its capability of suppressing cone beam artifacts. Furthermore, two sets of cardiac data scanned by a multiple detector-row CT scanner at 16 × 1.25 mm and normalized pitch 0.275 and 0.3, respectively, are used to evaluate the ECG-gated CB reconstruction algorithm's imaging performance. As a reference, images reconstructed by a fan beam reconstruction algorithm for multiple detector-row CT are also presented. The qualitative evaluation shows that the ECG-gated cone beam reconstruction algorithm outperforms its fan beam counterpart in cone beam artifact suppression and z-coverage while temporal resolution is well maintained. Consequently, the scan speed can be increased to reduce the contrast agent amount and injection time and to improve patient comfort and x-ray dose efficiency. Based upon this comparison, it is believed that, with the transition of multiple detector-row CT into volumetric CT, ECG-gated cone beam reconstruction algorithms will provide better image quality for CT cardiac applications.

  14. Parallel-hierarchical processing and classification of laser beam profile images based on the GPU-oriented architecture

    NASA Astrophysics Data System (ADS)

    Yarovyi, Andrii A.; Timchenko, Leonid I.; Kozhemiako, Volodymyr P.; Kokriatskaia, Nataliya I.; Hamdi, Rami R.; Savchuk, Tamara O.; Kulyk, Oleksandr O.; Surtel, Wojciech; Amirgaliyev, Yedilkhan; Kashaganova, Gulzhan

    2017-08-01

    The paper addresses the insufficient productivity of existing computing means for large-image processing, which do not meet the modern requirements posed by the resource-intensive computing tasks of laser beam profiling. The research concentrates on one of the profiling problems, namely real-time processing of spot images of the laser beam profile. The development of a theory of parallel-hierarchical transformation made it possible to produce models for high-performance parallel-hierarchical processes, as well as algorithms and software for their implementation based on a GPU-oriented architecture using GPGPU technologies. The analyzed performance of the suggested computerized tools for processing and classification of laser beam profile images allows real-time processing of dynamic images of various sizes.

  15. A real-time dynamic-MLC control algorithm for delivering IMRT to targets undergoing 2D rigid motion in the beam's eye view.

    PubMed

    McMahon, Ryan; Berbeco, Ross; Nishioka, Seiko; Ishikawa, Masayori; Papiez, Lech

    2008-09-01

    An MLC control algorithm for delivering intensity modulated radiation therapy (IMRT) to targets that are undergoing two-dimensional (2D) rigid motion in the beam's eye view (BEV) is presented. The goal of this method is to deliver 3D-derived fluence maps over a moving patient anatomy. Target motion measured prior to delivery is first used to design a set of planned dynamic-MLC (DMLC) sliding-window leaf trajectories. During actual delivery, the algorithm relies on real-time feedback to compensate for target motion that does not agree with the motion measured during planning. The methodology is based on an existing one-dimensional (1D) algorithm that uses on-the-fly intensity calculations to appropriately adjust the DMLC leaf trajectories in real time during exposure delivery [McMahon et al., Med. Phys. 34, 3211-3223 (2007)]. To extend the 1D algorithm's application to 2D target motion, a real-time leaf-pair shifting mechanism has been developed. Target motion that is orthogonal to leaf travel is tracked by appropriately shifting the positions of all MLC leaves. The performance of the tracking algorithm was tested for a single beam of a fractionated IMRT treatment, using a clinically derived intensity profile and a 2D target trajectory based on measured patient data. Comparisons were made between 2D tracking, 1D tracking, and no tracking. The impact of the tracking lag time and the frequency of real-time imaging were investigated, along with the dependence of the algorithm's performance on the level of agreement between the motion measured during planning and that during delivery. Results demonstrated that tracking both components of the 2D motion (i.e., parallel and orthogonal to leaf travel) yields delivered fluence profiles superior to those obtained by tracking only the component of motion parallel to leaf travel. Tracking lag time effects may lead to relatively large intensity delivery errors compared to the other sources of error investigated. However, the algorithm is robust in the sense that it does not rely on a high level of agreement between the target motion measured during treatment planning and delivery.

  16. The differential algebra based multiple level fast multipole algorithm for 3D space charge field calculation and photoemission simulation

    DOE PAGES

    None, None

    2015-09-28

    Coulomb interaction between the charged particles inside a bunch is one of the most important collective effects in beam dynamics, becoming even more significant as the energy of the particle beam is lowered to accommodate analytical and low-Z material imaging purposes, such as in the time-resolved Ultrafast Electron Microscope (UEM) development currently underway at Michigan State University. In addition, space charge effects are the key limiting factor in the development of ultrafast atomic resolution electron imaging and diffraction technologies and are also correlated with an irreversible growth in rms beam emittance due to fluctuating components of the nonlinear electron dynamics. In the short pulse regime used in the UEM, space charge effects also lead to virtual cathode formation, in which the negative charge of the electrons emitted at earlier times, combined with the attractive surface field, hinders further emission of particles and degrades the pulse properties. Space charge and virtual cathode effects, and their remediation, are core issues for the development of the next generation of high-brightness UEMs. Since analytical models are applicable only in special cases, numerical simulations, in addition to experiments, are usually necessary to accurately understand space charge effects. In this paper we introduce a grid-free differential algebra based multiple level fast multipole algorithm, which calculates the 3D space charge field for n charged particles in arbitrary distribution with O(n) efficiency, and its implementation in a simulation code for space-charge-dominated photoemission processes.

  17. Constrained multi-objective optimization of storage ring lattices

    NASA Astrophysics Data System (ADS)

    Husain, Riyasat; Ghodke, A. D.

    2018-03-01

    The storage ring lattice optimization is a class of constrained multi-objective optimization problem in which, in addition to low beam emittance, a large dynamic aperture for good injection efficiency and an improved beam lifetime are also desirable. Convergence and computation time are of great concern for the optimization algorithms, as various objectives are to be optimized and a number of accelerator parameters must be varied over a large span with several constraints. In this paper, a study of storage ring lattice optimization using differential evolution is presented. The optimization results are compared with the two most widely used optimization techniques in accelerators: genetic algorithms and particle swarm optimization. It is found that differential evolution produces a better Pareto optimal front, in reasonable computation time, between two conflicting objectives: beam emittance and the dispersion function in the straight section. Differential evolution was used extensively for the optimization of the linear and nonlinear lattices of Indus-2, exploring various operational modes within the magnet power supply capabilities.
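
    A minimal DE/rand/1/bin minimizer conveys the algorithm compared in the paper; this single-objective sketch omits the multi-objective Pareto machinery and constraint handling used for lattice work:

```python
import numpy as np

def de_minimize(f, bounds, pop_size=20, F=0.8, CR=0.9, generations=100, seed=0):
    """Minimal differential evolution (DE/rand/1/bin) minimizer."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    dim = len(bounds)
    pop = lo + rng.random((pop_size, dim)) * (hi - lo)
    cost = np.array([f(x) for x in pop])
    for _ in range(generations):
        for i in range(pop_size):
            others = [j for j in range(pop_size) if j != i]
            a, b, c = pop[rng.choice(others, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)        # mutation
            cross = rng.random(dim) < CR                     # binomial crossover
            cross[rng.integers(dim)] = True                  # force one gene
            trial = np.where(cross, mutant, pop[i])
            trial_cost = f(trial)
            if trial_cost <= cost[i]:                        # greedy selection
                pop[i], cost[i] = trial, trial_cost
    best = int(np.argmin(cost))
    return pop[best], cost[best]
```

    The self-scaling mutation step F*(b - c) is what distinguishes DE from a plain genetic algorithm: step sizes shrink automatically as the population converges.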

  18. Quantitative high dynamic range beam profiling for fluorescence microscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mitchell, T. J., E-mail: t.j.mitchell@dur.ac.uk; Saunter, C. D.; O’Nions, W.

    2014-10-15

    Modern developmental biology relies on optically sectioning fluorescence microscope techniques to produce non-destructive in vivo images of developing specimens at high resolution in three dimensions. As the optimal performance of these techniques relies on the three-dimensional (3D) intensity profile of the illumination employed, the ability to directly record and analyze these profiles is of great use to the fluorescence microscopist or instrument builder. Though excitation beam profiles can be measured indirectly using a sample of fluorescent beads and recording the emission along the microscope detection path, we demonstrate an alternative approach in which a miniature camera sensor is used directly within the illumination beam. Measurements taken using our approach are solely concerned with the illumination optics, as the detection optics are not involved. We present a miniature beam profiling device and a high dynamic range flux reconstruction algorithm that together are capable of accurately reproducing quantitative 3D flux maps over a large focal volume. The performance of this beam profiling system is verified on an optical test bench and demonstrated for fluorescence microscopy by profiling the low-NA illumination beam of a single plane illumination microscope. The generality and success of this approach showcase a widely flexible beam amplitude diagnostic tool for use within the life sciences.
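
    A generic high-dynamic-range merge conveys the flavor of combining multiple exposures into one quantitative flux map; the thresholds and exposure scaling below are assumptions for illustration, not the authors' reconstruction algorithm:

```python
import numpy as np

def hdr_merge(frames, exposures, saturation=0.95, floor=0.01):
    """Merge frames taken at different exposure times into one flux map:
    each pixel averages the exposure-scaled values from the frames in
    which it is neither saturated nor lost in the noise floor."""
    frames = np.asarray(frames, dtype=float)
    t = np.asarray(exposures, dtype=float)[:, None, None]
    valid = (frames < saturation) & (frames > floor)
    scaled = np.where(valid, frames / t, 0.0)         # convert to flux
    counts = np.maximum(valid.sum(axis=0), 1)         # avoid divide-by-zero
    return scaled.sum(axis=0) / counts
```

    Short exposures recover the bright focal core while long exposures recover the dim wings, extending the usable dynamic range well beyond that of a single frame.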

  19. Coherent control of plasma dynamics

    NASA Astrophysics Data System (ADS)

    He, Zhaohan

    2014-10-01

    The concept of coherent control, precise measurement or determination of a process through control of the phase of an applied oscillating field, has been applied to numerous systems with great success. Here, we demonstrate the use of coherent control on plasma dynamics in a laser wakefield electron acceleration experiment. A tightly focused femtosecond laser pulse (10 mJ, 35 fs) was used to generate electron beams by plasma wakefield acceleration on the density down-ramp. The technique is based on optimization of the electron beam using a deformable mirror adaptive optical system with an iterative evolutionary genetic algorithm. The image of the electrons on a scintillator screen was processed and used in a fitness function as direct feedback for the optimization algorithm. This coherent manipulation of the laser wavefront leads to orders-of-magnitude improvement in electron beam properties such as peak charge and beam divergence. The laser beam optimized to generate the best electron beam was not the one with the "best" focal spot. When a particular wavefront of laser light interacts with plasma, it can affect the plasma wave structures and the trapping conditions of the electrons in a complex way. For example, Raman forward scattering, envelope self-modulation, relativistic self-focusing, relativistic self-phase modulation, and many other nonlinear interactions modify both the pulse envelope and phase as the pulse propagates, in a way that cannot be easily predicted and that subsequently dictates the formation of plasma waves. The optimal wavefront could be successfully determined via the heuristic search under laser-plasma conditions that were not known a priori. Control and shaping of the electron energy distribution was found to be less effective, but was still possible.
Particle-in-cell simulations were performed to show that the mode structure of the laser beam can affect the plasma wave structure and trapping conditions of electrons, which subsequently produces electron beams with a different divergence. The proof-of-principle demonstration of coherent control for plasmas opens new possibilities for future laser-based accelerators and their applications. This study should also enable a significantly improved understanding of the complex dynamics of laser plasma interactions. This work was supported by DARPA under Contract No. N66001-11-1-4208, the NSF under Contract No. 0935197 and MCubed at the University of Michigan.
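
    The feedback loop described above (perturb the mirror coefficients, score the electron beam image, keep improvements) can be sketched as a simple (1+lambda) evolutionary search; the fitness function and all parameters below are placeholders, not the experiment's genetic algorithm:

```python
import numpy as np

def optimize_wavefront(fitness, n_modes, sigma=0.1, pop=16, gens=300, seed=1):
    """(1+lambda) evolutionary search over deformable-mirror mode
    coefficients, using a scalar beam-quality fitness as direct feedback.
    The mutation scale sigma slowly decays to refine the optimum."""
    rng = np.random.default_rng(seed)
    best = np.zeros(n_modes)
    best_fit = fitness(best)
    for _ in range(gens):
        trials = best + sigma * rng.standard_normal((pop, n_modes))
        fits = np.array([fitness(t) for t in trials])
        i = int(np.argmax(fits))
        if fits[i] > best_fit:                 # keep only improvements
            best, best_fit = trials[i], fits[i]
        sigma *= 0.99                          # narrow the search over time
    return best, best_fit
```

    In the experiment the fitness came from processed scintillator images, so the loop needs no model of the laser-plasma interaction at all, which is precisely why it works under conditions not known a priori.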

  20. PRESS-based EFOR algorithm for the dynamic parametrical modeling of nonlinear MDOF systems

    NASA Astrophysics Data System (ADS)

    Liu, Haopeng; Zhu, Yunpeng; Luo, Zhong; Han, Qingkai

    2017-09-01

    In response to the identification problem for multi-degree-of-freedom (MDOF) nonlinear systems, this study presents the extended forward orthogonal regression (EFOR) algorithm, based on predicted residual sums of squares (PRESS), to construct a nonlinear dynamic parametrical model. The proposed parametrical model is based on the nonlinear autoregressive with exogenous inputs (NARX) model and aims to explicitly reveal the physical design parameters of the system. The PRESS-based EFOR algorithm is proposed to identify such a model for MDOF systems. Using the algorithm, we built a common-structured model based on the fundamental concept of evaluating its generalization capability through cross-validation. The resulting model aims to prevent the over-fitting and poor generalization caused by the average error reduction ratio (AERR)-based EFOR algorithm. A functional relationship is then established between the coefficients of the terms and the design parameters of the unified model. Moreover, a 5-DOF nonlinear system is taken as a case study to illustrate the modeling procedure of the proposed algorithm. Finally, a dynamic parametrical model of a cantilever beam is constructed from experimental data. Results indicate that the dynamic parametrical model of nonlinear systems, which depends on the PRESS-based EFOR, can accurately predict the output response, thus providing a theoretical basis for the optimal design of modeling methods for MDOF nonlinear systems.
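
    The forward-regression family that EFOR extends selects model terms greedily by how much each reduces the fitting error. The sketch below shows plain (non-orthogonalized) forward selection by residual sum of squares; it is the underlying idea only, not the PRESS-based EFOR algorithm itself:

```python
import numpy as np

def forward_regression(candidates, y, n_terms):
    """Greedy forward selection: at each step, add the candidate column
    that most reduces the residual sum of squares of a least-squares fit."""
    selected, remaining = [], list(range(candidates.shape[1]))
    for _ in range(n_terms):
        best_j, best_err = None, np.inf
        for j in remaining:
            cols = candidates[:, selected + [j]]
            coef, *_ = np.linalg.lstsq(cols, y, rcond=None)
            err = float(np.sum((y - cols @ coef) ** 2))
            if err < best_err:
                best_j, best_err = j, err
        selected.append(best_j)
        remaining.remove(best_j)
    coef, *_ = np.linalg.lstsq(candidates[:, selected], y, rcond=None)
    return selected, coef
```

    For NARX identification the candidate columns are lagged input/output monomials; PRESS-based selection replaces the in-sample error above with a leave-one-out cross-validation score to guard against over-fitting.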

  1. Dynamic iterative beam hardening correction (DIBHC) in myocardial perfusion imaging using contrast-enhanced computed tomography.

    PubMed

    Stenner, Philip; Schmidt, Bernhard; Allmendinger, Thomas; Flohr, Thomas; Kachelrieß, Marc

    2010-06-01

    In cardiac perfusion examinations with computed tomography (CT), large concentrations of iodine in the ventricle and in the descending aorta cause beam hardening artifacts that can lead to incorrect perfusion parameters. The aim of this study is to reduce these artifacts by performing an iterative correction that accounts for the three materials soft tissue, bone, and iodine. Beam hardening corrections are implemented either as simple precorrections, which cannot account for higher-order beam hardening effects, or as iterative approaches based on segmenting the original image into material distribution images. Conventional segmentation algorithms fail to clearly distinguish between iodine and bone. Our new algorithm, DIBHC, calculates the time-dependent iodine distribution by analyzing the voxel changes of a cardiac perfusion examination (typically N ≈ 15 electrocardiogram-correlated scans distributed over a total scan time of up to T ≈ 30 s). These voxel dynamics are due to changes in contrast agent. This prior information makes it possible to precisely distinguish between bone and iodine and is key to DIBHC, in which each iteration consists of a multimaterial (soft tissue, bone, iodine) polychromatic forward projection, a raw data comparison, and a filtered backprojection. Simulations with a semi-anthropomorphic dynamic phantom and clinical scans using a dual source CT scanner with 2 × 128 slices, a tube voltage of 100 kV, a tube current-time product of 180 mAs, and a rotation time of 0.28 seconds have been carried out. The uncorrected images suffer from beam hardening artifacts that appear as dark bands connecting large concentrations of iodine in the ventricle, aorta, and bony structures. The CT-values of the affected tissue are usually underestimated by roughly 20 HU, although deviations of up to 61 HU have been observed. For a quantitative evaluation, circular regions of interest have been analyzed. After application of DIBHC, the mean values obtained deviate by only 1 HU for the simulations, and the corrected values show an increase of up to 61 HU for the measurements. One iteration of DIBHC greatly reduces the beam hardening artifacts induced by the contrast agent dynamics (and those due to bone), allowing for an improved assessment of contrast agent uptake in the myocardium, which is essential for determining myocardial perfusion.
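
    The single-material precursor of such corrections inverts a known polychromatic response to recover an equivalent monochromatic projection; the two-energy spectrum and attenuation values below are illustrative assumptions (DIBHC goes further by iterating a three-material polychromatic forward projection in the image domain):

```python
import numpy as np

def water_precorrection(p_poly, weights, mu):
    """Single-material beam-hardening precorrection: invert the known
    polychromatic response -ln(sum_i w_i exp(-mu_i * L)) to recover the
    path length L, then return the ideal monochromatic value mu_eff * L."""
    weights = np.asarray(weights, dtype=float)
    mu = np.asarray(mu, dtype=float)
    lengths = np.linspace(0.0, 50.0, 2001)                  # path lengths, cm
    resp = -np.log(np.exp(-np.outer(lengths, mu)) @ weights)
    L = np.interp(p_poly, resp, lengths)                    # resp is monotonic
    return float(np.dot(weights, mu) * L)
```

    Because the response saturates with increasing L (hardening), the raw polychromatic value underestimates thick paths; inverting the curve linearizes the data before reconstruction.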

  2. A fast 4D cone beam CT reconstruction method based on the OSC-TV algorithm.

    PubMed

    Mascolo-Fortin, Julia; Matenine, Dmitri; Archambault, Louis; Després, Philippe

    2018-01-01

    Four-dimensional cone beam computed tomography allows for temporally resolved imaging, with useful applications in radiotherapy, but raises particular challenges in terms of image quality and computation time. The purpose of this work is to develop a fast and accurate 4D algorithm by adapting a GPU-accelerated ordered subsets convex algorithm (OSC) combined with the total variation minimization regularization technique (TV). Different initialization schemes were studied to adapt the OSC-TV algorithm to 4D reconstruction: each respiratory phase was initialized either with a 3D reconstruction or with a blank image. The reconstruction algorithms were tested on a dynamic numerical phantom and on a clinical dataset. The 4D iterations were implemented on a cluster of 8 GPUs. All of the developed methods allowed for adequate visualization of the respiratory movement and compared favorably to the McKinnon-Bates and adaptive steepest descent projection onto convex sets algorithms, while 4D reconstructions initialized from a prior 3D reconstruction led to better overall image quality. The most suitable adaptation of OSC-TV to 4D CBCT was found to be the combination of a prior FDK reconstruction and a 4D OSC-TV reconstruction, with a reconstruction time of 4.5 minutes. This relatively short reconstruction time could facilitate clinical use.

  3. About improving efficiency of the P3M algorithms when computing the inter-particle forces in beam dynamics

    NASA Astrophysics Data System (ADS)

    Kozynchenko, Alexander I.; Kozynchenko, Sergey A.

    2017-03-01

    In the paper, the problem of improving the efficiency of the particle-particle particle-mesh (P3M) algorithm in computing inter-particle electrostatic forces is considered. The particle-mesh (PM) part of the algorithm is modified in such a way that the space field equation is solved by direct summation of potentials over the ensemble of particles lying not too close to a reference particle. For this purpose, a specific matrix "pattern" containing pre-calculated potential values is introduced to describe the spatial field distribution of a single point charge. This approach makes it possible to reduce the arithmetic performed in the innermost of the nested loops to addition and assignment operations and, therefore, to decrease the running time substantially. A simulation model developed in C++ substantiates this view, showing decent accuracy, acceptable in particle beam calculations, together with improved speed performance.
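The "pattern" idea, pre-tabulating the potential of a unit point charge so that the inner loop reduces to addition and assignment, can be sketched in 2D with NumPy. The function names and the grid-stamping details are hypothetical, not the authors' C++ code:

```python
import numpy as np

def potential_pattern(radius, h=1.0):
    """Precomputed potential of a unit point charge on a (2R+1)^2 grid patch
    (Coulomb-like 1/r with unit constants; the self-cell is left at zero)."""
    ax = np.arange(-radius, radius + 1) * h
    X, Y = np.meshgrid(ax, ax, indexing="ij")
    r = np.hypot(X, Y)
    pat = np.zeros_like(r)
    pat[r > 0] = 1.0 / r[r > 0]
    return pat

def mesh_potential(charges, positions, grid_shape, pattern):
    """Accumulate the far-field potential by stamping the pattern at each
    particle's nearest grid node -- the innermost work is add-and-assign."""
    R = pattern.shape[0] // 2
    phi = np.zeros(grid_shape)
    for q, (x, y) in zip(charges, positions):
        i, j = int(round(x)), int(round(y))
        # clip the pattern patch to the grid boundary
        i0, i1 = max(i - R, 0), min(i + R + 1, grid_shape[0])
        j0, j1 = max(j - R, 0), min(j + R + 1, grid_shape[1])
        phi[i0:i1, j0:j1] += q * pattern[R - (i - i0):R + (i1 - i),
                                         R - (j - j0):R + (j1 - j)]
    return phi
```

Because the stamp is a pure superposition, the result is linear in the charges, matching the principle the paper exploits.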

  4. General Theory and Algorithms for the Non-Causal Inversion, Slewing and Control of Space-Based Articulated Structures

    DTIC Science & Technology

    1993-10-01

    Structures: Simultaneous Trajectory Tracking and Vibration Reduction ... 103. Buckling Control of a Flexible Beam Using Piezoelectric Actuators ... bounded solution for the inverse dynamic torque has to be non-causal. Bayo et al. [3] extended the inverse dynamics to planar, multiple-link systems ... presented by Bayo and Moulin [4] for the single link system, with provisions for extension to multiple link systems. An equivalent time domain approach for

  5. Beam orientation optimization for intensity-modulated radiation therapy using mixed integer programming

    NASA Astrophysics Data System (ADS)

    Yang, Ruijie; Dai, Jianrong; Yang, Yong; Hu, Yimin

    2006-08-01

    The purpose of this study is to extend an algorithm proposed for beam orientation optimization in classical conformal radiotherapy to intensity-modulated radiation therapy (IMRT) and to evaluate the algorithm's performance in IMRT scenarios. In addition, the effect of the candidate pool of beam orientations, in terms of beam orientation resolution and starting orientation, on the optimized beam configuration, plan quality, and optimization time is explored. The algorithm is based on mixed integer linear programming, in which binary variables represent candidate beam orientations and positive float variables represent beamlet weights in the beam intensity maps. Beam orientations and beam intensity maps are optimized simultaneously with a deterministic method. Several clinical cases were used to test the algorithm, and the results show that both target coverage and critical structure sparing were significantly improved for plans with optimized beam orientations compared to those with equispaced beam orientations. The calculation time was less than an hour for cases with 36 binary variables on a PC with a Pentium IV 2.66 GHz processor. It was also found that decreasing the beam orientation resolution to 10° greatly reduced the size of the candidate pool without significant influence on the optimized beam configuration and plan quality, while the choice of starting orientation had a large influence. Our study demonstrates that the algorithm can be applied to IMRT scenarios and that better beam orientation configurations can be obtained with it. Furthermore, optimization efficiency can be greatly increased through proper selection of beam orientation resolution and starting orientation while preserving the optimized beam configuration and plan quality.
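For a toy problem, the binary-selection structure of the MIP can be mimicked by exhaustive enumeration: try each set of k candidate orientations, fit beamlet weights, and keep the best. This brute-force sketch stands in for the deterministic MILP solver used in the paper, and it uses unconstrained least squares with clipping rather than a proper nonnegative solve:

```python
import itertools
import numpy as np

def select_orientations(dose_mats, target, k):
    """Enumerate all k-subsets of candidate orientations (the binary part),
    fit beamlet weights per subset (the continuous part), keep the best.

    dose_mats: list of (n_voxels, n_beamlets) dose-deposition matrices,
               one per candidate orientation (illustrative stand-ins).
    target:    desired dose per voxel.
    """
    best = (np.inf, None, None)
    for combo in itertools.combinations(range(len(dose_mats)), k):
        A = np.hstack([dose_mats[i] for i in combo])
        w, *_ = np.linalg.lstsq(A, target, rcond=None)
        w = np.clip(w, 0.0, None)        # beamlet weights must be nonnegative
        err = np.linalg.norm(A @ w - target)
        if err < best[0]:
            best = (err, combo, w)
    return best                          # (residual, chosen orientations, weights)
```

The exponential subset enumeration is exactly what the MILP formulation avoids, but for a handful of candidates it makes the optimization structure explicit.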

  6. SU-E-T-202: Impact of Monte Carlo Dose Calculation Algorithm On Prostate SBRT Treatments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Venencia, C; Garrigo, E; Cardenas, J

    2014-06-01

    Purpose: The purpose of this work was to quantify the dosimetric impact of using a Monte Carlo algorithm on SBRT prostate treatments previously calculated with a pencil beam dose calculation algorithm. Methods: A 6 MV photon beam produced by a Novalis TX (BrainLAB-Varian) linear accelerator equipped with HDMLC was used. Treatment plans were created with iPlan v4.5 (BrainLAB) using 9 fields and dynamic IMRT. The institutional SBRT protocol prescribes a total dose to the prostate of 40 Gy in 5 fractions, every other day. Dose calculation was done by pencil beam (2 mm dose resolution) with heterogeneity correction and dose volume constraints (UCLA): PTV D95% = 40 Gy and D98% > 39.2 Gy; rectum V20Gy < 50%, V32Gy < 20%, V36Gy < 10%, and V40Gy < 5%; bladder V20Gy < 40% and V40Gy < 10%; femoral heads V16Gy < 5%; penile bulb V25Gy < 3 cc; urethra and the overlap region between PTV and PRV rectum Dmax < 42 Gy. Ten SBRT treatment plans were selected and recalculated using Monte Carlo with 2 mm spatial resolution and a mean variance of 2%. DVH comparisons between plans were made. Results: The average difference between PTV dose constraints was within 2%. However, 3 plans had differences higher than 3%, did not meet the D98% criterion (> 39.2 Gy), and should have been renormalized. Dose volume constraint differences for rectum, bladder, femoral heads, and penile bulb were less than 2% and within tolerances. The urethra region and the overlap between PTV and PRV rectum showed a dose increase in all plans: on average 2.1% (maximum 7.8%) for the urethra region and 2.5% (maximum 8.7%) for the overlap region. Conclusion: Monte Carlo dose calculation on dynamic IMRT treatments can affect plan normalization. The dose increase in the critical urethra region and in the PTV overlap region could have clinical consequences that need to be studied. The use of the Monte Carlo dose calculation algorithm is limited because inverse planning dose optimization uses only the pencil beam algorithm.

  7. Transverse and Quantum Effects in Light Control by Light; (A) Parallel Beams: Pump Dynamics for Three-Level Superfluorescence; and (B) Counterflow Beams: An Algorithm for Transverse, Full Transient Effects in Optical Bistability in a Fabry-Perot Cavity.

    DTIC Science & Technology

    1983-01-01

    The resolution of the computational grid is thereby defined according to the actual requirements of ... computational economy are achieved simultaneously by redistributing the computational grid points according to the physical requirements of the problem ... computational Eulerian grid points ... implemented using a two-dimensional, time-dependent finite

  8. Numerical modeling of Gaussian beam propagation and diffraction in inhomogeneous media based on the complex eikonal equation

    NASA Astrophysics Data System (ADS)

    Huang, Xingguo; Sun, Hui

    2018-05-01

    The Gaussian beam method is an important complex geometrical-optics technique for modeling seismic wave propagation and diffraction in the subsurface with complex geological structures. Current methods for Gaussian beam modeling rely on dynamic ray tracing and evanescent wave tracking. However, the dynamic ray tracing method is based on the paraxial ray approximation, and the evanescent wave tracking method cannot describe strongly evanescent fields. This leads to inaccurate computed wave fields in regions of strong medium inhomogeneity. To address this problem, we compute Gaussian beam wave fields using the complex phase obtained by directly solving the complex eikonal equation. In this method, the fast marching method, which is widely used for phase calculation, is combined with a Gauss-Newton optimization algorithm to obtain the complex phase at the regular grid points. The main theoretical challenge in combining this method with Gaussian beam modeling is handling the irregular boundary near the curved central ray. To cope with this challenge, we present a non-uniform finite-difference operator and a modified fast marching method. The numerical results confirm the validity of the proposed approach.
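The real-valued fast marching building block referred to above can be sketched as a Dijkstra-like solver for |∇T| = s on a unit grid; the paper's extension to a complex-valued phase via Gauss-Newton updates is omitted here:

```python
import heapq
import numpy as np

def fast_march(slowness, src):
    """First-order fast marching solver for the eikonal equation |grad T| = s
    on a unit-spaced grid, with a heap-ordered wavefront expansion."""
    n, m = slowness.shape
    T = np.full((n, m), np.inf)
    T[src] = 0.0
    heap = [(0.0, src)]
    done = np.zeros((n, m), dtype=bool)
    while heap:
        _, (i, j) = heapq.heappop(heap)
        if done[i, j]:
            continue
        done[i, j] = True
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            a, b = i + di, j + dj
            if 0 <= a < n and 0 <= b < m and not done[a, b]:
                tx = min(T[a - 1, b] if a > 0 else np.inf,
                         T[a + 1, b] if a < n - 1 else np.inf)
                ty = min(T[a, b - 1] if b > 0 else np.inf,
                         T[a, b + 1] if b < m - 1 else np.inf)
                s = slowness[a, b]
                if abs(tx - ty) < s:      # both upwind neighbors contribute
                    new = 0.5 * (tx + ty + np.sqrt(2 * s * s - (tx - ty) ** 2))
                else:                      # one-sided update
                    new = min(tx, ty) + s
                if new < T[a, b]:
                    T[a, b] = new
                    heapq.heappush(heap, (new, (a, b)))
    return T
```

For a uniform slowness the solution is exact along the grid axes and carries the usual first-order overestimate along the diagonal.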

  9. BeamDyn: a high-fidelity wind turbine blade solver in the FAST modular framework

    DOE PAGES

    Wang, Qi; Sprague, Michael A.; Jonkman, Jason; ...

    2017-03-14

    This paper presents a numerical implementation of the geometrically exact beam theory based on the Legendre-spectral-finite-element (LSFE) method. The displacement-based geometrically exact beam theory is presented, and the special treatment of three-dimensional rotation parameters is reviewed. An LSFE is a high-order finite element with nodes located at the Gauss-Legendre-Lobatto points. These elements can be an order of magnitude more computationally efficient than low-order finite elements for a given accuracy level. The new module, BeamDyn, is implemented in the FAST modularization framework for dynamic simulation of highly flexible composite-material wind turbine blades within the FAST aeroelastic engineering model. The framework allows for fully interactive simulations of turbine blades in operating conditions. Numerical examples are provided to validate BeamDyn and examine the LSFE performance as well as the coupling algorithm in the FAST modularization framework. BeamDyn can also be used as a stand-alone high-fidelity beam tool.
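Gauss-Legendre-Lobatto nodes, the element endpoints plus the roots of the derivative of the Legendre polynomial, can be computed directly with NumPy's Legendre utilities (a generic recipe, not BeamDyn code):

```python
import numpy as np

def gll_nodes(p):
    """Gauss-Legendre-Lobatto nodes of a degree-p element (p >= 2):
    the endpoints -1 and 1 plus the p-1 roots of P'_p, the derivative
    of the Legendre polynomial of degree p."""
    coeffs = np.zeros(p + 1)
    coeffs[-1] = 1.0                                  # Legendre series for P_p
    interior = np.polynomial.legendre.legroots(
        np.polynomial.legendre.legder(coeffs))
    return np.concatenate(([-1.0], np.sort(interior), [1.0]))
```

For p = 4 this reproduces the textbook node set {-1, -sqrt(3/7), 0, sqrt(3/7), 1}; collocating at these points is what gives the LSFE its spectral accuracy.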

  10. BeamDyn: a high-fidelity wind turbine blade solver in the FAST modular framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Qi; Sprague, Michael A.; Jonkman, Jason

    This paper presents a numerical implementation of the geometrically exact beam theory based on the Legendre-spectral-finite-element (LSFE) method. The displacement-based geometrically exact beam theory is presented, and the special treatment of three-dimensional rotation parameters is reviewed. An LSFE is a high-order finite element with nodes located at the Gauss-Legendre-Lobatto points. These elements can be an order of magnitude more computationally efficient than low-order finite elements for a given accuracy level. The new module, BeamDyn, is implemented in the FAST modularization framework for dynamic simulation of highly flexible composite-material wind turbine blades within the FAST aeroelastic engineering model. The framework allows for fully interactive simulations of turbine blades in operating conditions. Numerical examples are provided to validate BeamDyn and examine the LSFE performance as well as the coupling algorithm in the FAST modularization framework. BeamDyn can also be used as a stand-alone high-fidelity beam tool.

  11. Optimal design of a smart post-buckled beam actuator using bat algorithm: simulations and experiments

    NASA Astrophysics Data System (ADS)

    Mallick, Rajnish; Ganguli, Ranjan; Kumar, Ravi

    2017-05-01

    The optimized design of a smart post-buckled beam actuator (PBA) is performed in this study. A smart-material-based piezoceramic stack actuator is used as the prime mover to drive the buckled beam actuator. Piezoceramic actuators are high-force, small-displacement devices; they possess high energy density and high bandwidth. In this study, benchtop experiments were conducted to investigate the angular tip deflections produced by the PBA. A new design of a linear-to-linear motion amplification device (LX-4) was developed to circumvent the small-displacement handicap of piezoceramic stack actuators; LX-4 enhances the piezoceramic actuator's mechanical leverage by a factor of four. The PBA model is based on dynamic elastic stability and is analyzed using the Mathieu-Hill equation. A formal optimization is carried out using a newly developed meta-heuristic nature-inspired algorithm named the bat algorithm (BA), which exploits the echolocation capability of bats. An optimized PBA in conjunction with LX-4 generates end rotations of the order of 15° at the output end. The optimized PBA design weighs less and induces large end rotations, which will be useful in the development of various mechanical and aerospace devices, such as helicopter trailing-edge flaps, micro and nano aerial vehicles, and other robotic systems.
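A minimal sketch of the bat algorithm in Yang's standard form (frequency-tuned velocity updates plus a loudness-controlled local random walk); all parameter values here are illustrative, not those used for the PBA design:

```python
import numpy as np

def bat_algorithm(obj, lo, hi, n_bats=20, n_iter=200, fmin=0.0, fmax=2.0, seed=1):
    """Minimize obj over the box [lo, hi] with a basic bat algorithm."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    dim = lo.size
    x = rng.uniform(lo, hi, (n_bats, dim))
    v = np.zeros((n_bats, dim))
    fit = np.array([obj(xi) for xi in x])
    i0 = int(fit.argmin())
    best, best_val = x[i0].copy(), float(fit[i0])
    A, r = 0.9, 0.5                          # loudness, pulse emission rate
    for _ in range(n_iter):
        for i in range(n_bats):
            f = fmin + (fmax - fmin) * rng.random()
            v[i] += (x[i] - best) * f        # frequency-tuned velocity update
            cand = np.clip(x[i] + v[i], lo, hi)
            if rng.random() > r:             # local random walk around the best bat
                cand = np.clip(best + 0.05 * A * rng.normal(size=dim), lo, hi)
            fc = obj(cand)
            if fc < fit[i] and rng.random() < A:
                x[i], fit[i] = cand, fc      # accept with loudness-gated probability
            if fc < best_val:
                best, best_val = cand.copy(), fc
        A *= 0.995                           # loudness decay tightens the walk
    return best, best_val
```

On a smooth test function the loudness decay plays the same role as cooling in simulated annealing: broad exploration early, fine refinement near the optimum late.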

  12. [Accurate 3D free-form registration between fan-beam CT and cone-beam CT].

    PubMed

    Liang, Yueqiang; Xu, Hongbing; Li, Baosheng; Li, Hongsheng; Yang, Fujun

    2012-06-01

    Because of X-ray scatter, the CT numbers in cone-beam CT do not correspond exactly to electron densities. This causes registration error when an intensity-based algorithm is used to register planning fan-beam CT with cone-beam CT. To reduce this error, we developed an accurate gradient-based registration algorithm. The gradient-based deformable registration problem is formulated as the minimization of an energy functional. Through the calculus of variations and a Gauss-Seidel finite difference method, we derived the iterative formula of the deformable registration. The algorithm was implemented on GPU through the OpenCL framework, which greatly reduced the registration time. Our experimental results showed that the proposed gradient-based algorithm registers clinical cone-beam CT and fan-beam CT images more accurately than the intensity-based algorithm, and the GPU-accelerated implementation meets the real-time requirements of online adaptive radiotherapy.
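The variational idea of descending on an energy functional of the transformation can be shown in one dimension by estimating a pure translation between two profiles. This toy uses intensity SSD rather than the paper's gradient-based similarity, and a scalar shift rather than a free-form deformation field:

```python
import numpy as np

def register_shift(fixed, moving, xs, n_iter=300, step=0.5):
    """Estimate the shift t minimizing SSD between fixed(x) and moving(x - t)
    by exact gradient descent on the sampled energy."""
    t = 0.0
    for _ in range(n_iter):
        warped = np.interp(xs - t, xs, moving)
        dwdx = np.gradient(warped, xs)
        resid = warped - fixed
        # d(SSD)/dt = sum 2 * resid * d(warped)/dt, with d(warped)/dt = -dwdx
        g = -2.0 * np.sum(resid * dwdx)
        t -= step * g / len(xs)
    return t
```

The free-form version replaces the scalar t with a per-voxel displacement field and the explicit descent with the Gauss-Seidel sweeps derived in the paper, but the update structure (warp, differentiate, descend) is the same.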

  13. Null steering of adaptive beamforming using linear constraint minimum variance assisted by particle swarm optimization, dynamic mutated artificial immune system, and gravitational search algorithm.

    PubMed

    Darzi, Soodabeh; Kiong, Tiong Sieh; Islam, Mohammad Tariqul; Ismail, Mahamod; Kibria, Salehin; Salem, Balasem

    2014-01-01

    Linear constraint minimum variance (LCMV) is an adaptive beamforming technique commonly applied to cancel interfering signals and steer a strong beam toward the desired signal through its computed weight vectors. However, the weights computed by LCMV are usually unable to form the radiation beam toward the target user precisely, and are not good enough to reduce interference by placing nulls at the interference sources. It is difficult to improve and optimize the LCMV beamforming technique through a conventional empirical approach. To address this problem, artificial intelligence (AI) techniques are explored to enhance the LCMV beamforming ability. In this paper, particle swarm optimization (PSO), dynamic mutated artificial immune system (DM-AIS), and gravitational search algorithm (GSA) are incorporated into the existing LCMV technique to improve the LCMV weights. The simulation results demonstrate that the received signal-to-interference-plus-noise ratio (SINR) of the target user can be significantly improved by the integration of PSO, DM-AIS, and GSA into LCMV through the suppression of interference in undesired directions. Furthermore, GSA proves to be a more effective technique for LCMV beamforming optimization than PSO. The algorithms were implemented in MATLAB.
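The classical LCMV weights that the AI methods start from have a closed form, w = R⁻¹C(CᴴR⁻¹C)⁻¹f. A NumPy sketch with a hypothetical uniform linear array (the PSO/DM-AIS/GSA search layers of the paper are omitted):

```python
import numpy as np

def lcmv_weights(R, C, f):
    """Closed-form LCMV beamformer: w = R^{-1} C (C^H R^{-1} C)^{-1} f,
    where R is the covariance matrix, C the constraint matrix, f the
    constraint response vector."""
    Ri_C = np.linalg.solve(R, C)
    return Ri_C @ np.linalg.solve(C.conj().T @ Ri_C, f)

def steering_vector(n, theta_deg, spacing=0.5):
    """Steering vector of an n-element ULA with half-wavelength spacing."""
    k = 2 * np.pi * spacing * np.sin(np.deg2rad(theta_deg))
    return np.exp(1j * k * np.arange(n))
```

With a single distortionless constraint toward the desired user, the weights keep unit gain on the target while a strong interferer is suppressed by the inverse covariance.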

  14. Null Steering of Adaptive Beamforming Using Linear Constraint Minimum Variance Assisted by Particle Swarm Optimization, Dynamic Mutated Artificial Immune System, and Gravitational Search Algorithm

    PubMed Central

    Sieh Kiong, Tiong; Tariqul Islam, Mohammad; Ismail, Mahamod; Salem, Balasem

    2014-01-01

    Linear constraint minimum variance (LCMV) is an adaptive beamforming technique commonly applied to cancel interfering signals and steer a strong beam toward the desired signal through its computed weight vectors. However, the weights computed by LCMV are usually unable to form the radiation beam toward the target user precisely, and are not good enough to reduce interference by placing nulls at the interference sources. It is difficult to improve and optimize the LCMV beamforming technique through a conventional empirical approach. To address this problem, artificial intelligence (AI) techniques are explored to enhance the LCMV beamforming ability. In this paper, particle swarm optimization (PSO), dynamic mutated artificial immune system (DM-AIS), and gravitational search algorithm (GSA) are incorporated into the existing LCMV technique to improve the LCMV weights. The simulation results demonstrate that the received signal-to-interference-plus-noise ratio (SINR) of the target user can be significantly improved by the integration of PSO, DM-AIS, and GSA into LCMV through the suppression of interference in undesired directions. Furthermore, GSA proves to be a more effective technique for LCMV beamforming optimization than PSO. The algorithms were implemented in MATLAB. PMID:25147859

  15. Amplitude Control of Solid-State Modulators for Precision Fast Kicker Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Watson, J A; Anaya, R M; Caporaso, G C

    2002-11-15

    A solid-state modulator with very fast rise and fall times, pulse width agility, and multi-pulse burst and intra-pulse amplitude adjustment capability for use with high-speed electron beam kickers has been designed and tested at LLNL. The modulator uses multiple solid-state modules stacked in an inductive-adder configuration. Amplitude adjustment is provided by controlling individual modules in the adder, and is used to compensate for transverse e-beam motion as well as the dynamic response and beam-induced steering effects associated with the kicker structure. A control algorithm calculates a voltage based on measured e-beam displacement and adjusts the modulator to regulate beam centroid position. This paper presents design details of amplitude control along with measured performance data from kicker operation on the ETA-II accelerator at LLNL.
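The feedback loop described, measure the centroid displacement and adjust the modulator voltage, reduces in the simplest static-beam picture to proportional control. A toy sketch with made-up response and gain values, not the LLNL control algorithm:

```python
def regulate_centroid(kick_response, target=0.0, gain=0.5, n_steps=20, x0=2.0):
    """Toy proportional feedback on beam centroid position.

    kick_response: displacement change per volt of modulator correction
                   (an illustrative lumped constant).
    Each step: measure error, correct the voltage, observe the new centroid
    through a static beam model x = x0 + kick_response * v.
    """
    x, v = x0, 0.0
    history = []
    for _ in range(n_steps):
        error = x - target
        v -= gain * error / kick_response   # voltage correction
        x = x0 + kick_response * v          # resulting centroid displacement
        history.append(x)
    return x, history
```

With gain g the residual displacement contracts by (1 - g) per pulse, so a gain of 0.5 drives a 2 mm offset below a micron within twenty corrections; the real system additionally has to compensate the kicker's dynamic response within a burst.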

  16. Generation of dark hollow beam via coherent combination based on adaptive optics.

    PubMed

    Zheng, Yi; Wang, Xiaohua; Shen, Feng; Li, Xinyang

    2010-12-20

    A novel method for generating a dark hollow beam (DHB) is proposed and studied both theoretically and experimentally. A coherent combination technique for laser arrays is implemented based on adaptive optics (AO). A beam arraying structure and an active segmented mirror are designed and described. Piston errors are extracted by a zero-order interference detection system with the help of a custom-made photodetector array. An algorithm called the extremum approach is adopted to calculate the feedback control signals. A dynamic piston error is introduced by LiNbO3 to test the capability of the AO servo. In closed loop, a stable and clear DHB is obtained. The experimental results confirm the feasibility of the concept.

  17. A computational procedure for multibody systems including flexible beam dynamics

    NASA Technical Reports Server (NTRS)

    Downer, J. D.; Park, K. C.; Chiou, J. C.

    1990-01-01

    A computational procedure suitable for the solution of the equations of motion of flexible multibody systems has been developed. The flexible beams are modeled using a fully nonlinear theory which accounts for both finite rotations and large deformations. The present formulation incorporates physical measures of conjugate Cauchy stress and covariant strain increments. As a consequence, the beam model can easily be interfaced with real-time strain measurements and feedback control systems. A distinct feature of the present work is the computational preservation of total energy for undamped systems; this is obtained via an objective strain increment/stress update procedure combined with an energy-conserving time integration algorithm which contains an accurate update of angular orientations. The procedure is demonstrated via several example problems.

  18. Three-Dimensional Electron Beam Dose Calculations.

    NASA Astrophysics Data System (ADS)

    Shiu, Almon Sowchee

    The MDAH pencil-beam algorithm developed by Hogstrom et al. (1981) has been widely used in clinics for electron beam dose calculations in radiotherapy treatment planning. The primary objective of this research was to address several deficiencies of that algorithm and to develop an enhanced version. Two enhancements have been incorporated into the pencil-beam algorithm: one models fluence rather than planar fluence, and the other models the bremsstrahlung dose using measured beam data. Comparisons of the resulting calculated dose distributions with measured dose distributions for several test phantoms have been made. From these results it is concluded (1) that the fluence-based algorithm is more accurate for dose calculation in an inhomogeneous slab phantom, and (2) that the fluence-based calculation provides only a limited improvement to the accuracy of the calculated dose in the region just downstream of the lateral edge of an inhomogeneity. The source of the latter inaccuracy is believed to be primarily the assumptions made in the pencil beam's modeling of the complex phantom or patient geometry. A pencil-beam redefinition model was developed for the calculation of electron beam dose distributions in three dimensions. The primary aim of this redefinition model was to solve the dosimetry problem presented by deep inhomogeneities, which was the major deficiency of the enhanced version of the MDAH pencil-beam algorithm. The pencil-beam redefinition model is based on the theory of electron transport, redefining the pencil beams at each layer of the medium. The unique approach of this model is that all the physical parameters of a given pencil beam are characterized for multiple energy bins. Comparisons of the calculated dose distributions with measured dose distributions for a homogeneous water phantom and for phantoms with deep inhomogeneities have been made. 
From these results it is concluded that the redefinition algorithm is superior to the conventional, fluence-based, pencil-beam algorithm, especially in predicting the dose distribution downstream of a local inhomogeneity. The accuracy of this algorithm appears sufficient for clinical use, and the algorithm is structured for future expansion of the physical model if required for site specific treatment planning problems.

  19. TH-C-BRD-02: Analytical Modeling and Dose Calculation Method for Asymmetric Proton Pencil Beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gelover, E; Wang, D; Hill, P

    2014-06-15

    Purpose: A dynamic collimation system (DCS), which consists of two pairs of orthogonal trimmer blades driven by linear motors, has been proposed to decrease the lateral penumbra in pencil beam scanning proton therapy. The DCS reduces lateral penumbra by intercepting the proton pencil beam near the lateral boundary of the target in the beam's eye view. The resultant trimmed pencil beams are asymmetric and laterally shifted, and therefore existing pencil beam dose calculation algorithms are not capable of trimmed beam dose calculations. This work develops a method to model and compute dose from trimmed pencil beams when using the DCS. Methods: MCNPX simulations were used to determine the dose distributions expected from various trimmer configurations of the DCS. Using these data, the lateral distribution of individual beamlets was modeled with a 2D asymmetric Gaussian function. The integral depth dose (IDD) of each configuration was also modeled by combining the IDD of an untrimmed pencil beam with a linear correction factor. The convolution of these two terms, along with the Highland approximation to account for lateral growth of the beam along the depth direction, allows a trimmed pencil beam dose distribution to be generated analytically. The algorithm was validated by computing dose for a single-energy-layer 5×5 cm² treatment field, defined by the trimmers, using both the proposed method and MCNPX beamlets. Results: The Gaussian-modeled asymmetric lateral profiles along the principal axes match the MCNPX data very well (R² ≥ 0.95 at the depth of the Bragg peak). For the 5×5 cm² treatment plan created with both the modeled and MCNPX pencil beams, the passing rate of the 3D gamma test was 98% using a standard threshold of 3%/3 mm. Conclusion: An analytical method capable of accurately computing asymmetric pencil beam dose when using the DCS has been developed.
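A 2D asymmetric Gaussian with independent left/right widths along each principal axis, of the kind used for the lateral beamlet model, can be written directly. The parameterization below is a plausible stand-in, not necessarily the authors' exact form:

```python
import numpy as np

def asym_gaussian_2d(x, y, mu_x, mu_y, sig_xl, sig_xr, sig_yl, sig_yr):
    """2D asymmetric Gaussian: a different width on each side of the mean
    along each axis, so a trimmed edge can be modeled as a sharp sigma on
    the trimmer side and a broad sigma on the open side."""
    sx = np.where(x < mu_x, sig_xl, sig_xr)
    sy = np.where(y < mu_y, sig_yl, sig_yr)
    return (np.exp(-0.5 * ((x - mu_x) / sx) ** 2)
            * np.exp(-0.5 * ((y - mu_y) / sy) ** 2))
```

Setting, say, sig_xr much smaller than sig_xl reproduces the sharpened penumbra on the trimmed side while leaving the untrimmed side Gaussian.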

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stathakis, S; Defoor, D; Saenz, D

    Purpose: Stereotactic radiosurgery (SRS) outcomes are related to the dose delivered to the target and to surrounding tissue. We commissioned a Monte Carlo based dose calculation algorithm to recalculate delivered doses that were planned with a pencil beam dose calculation engine. Methods: Twenty consecutive previously treated patients were selected for this study. All plans were generated using the iPlan treatment planning system (TPS) and calculated using the pencil beam algorithm. Each patient plan consisted of 1 to 3 targets treated using dynamically conformal arcs or intensity modulated beams. Multi-target treatments were delivered using multiple isocenters, one for each target. These plans were recalculated for the purpose of this study using a single isocenter. The CT image sets, along with the plans, doses, and structures, were exported via DICOM to the Monaco TPS, and the dose was recalculated using the same voxel resolution and monitor units. Benchmark data were also generated prior to the patient calculations to assess the accuracy of the two TPSs against measurements made with a micro ionization chamber in solid water. Results: Good agreement, within −0.4% for Monaco and +2.2% for iPlan, was observed for measurements in the water phantom. Doses in patient geometry revealed differences of up to 9.6% for single-target plans and 9.3% for multiple-target, multiple-isocenter plans. The average dose difference for multi-target, single-isocenter plans was approximately 1.4%. Similar differences were observed for the OARs and integral dose. Conclusion: Accuracy of the beam model is crucial for dose calculation, especially for the small fields used in SRS treatments. A superior dose calculation algorithm such as Monte Carlo, with properly commissioned beam models, which is unaffected by the lack of electronic equilibrium, should be preferred for the calculation of small fields to improve accuracy.

  1. On improving the algorithm efficiency in the particle-particle force calculations

    NASA Astrophysics Data System (ADS)

    Kozynchenko, Alexander I.; Kozynchenko, Sergey A.

    2016-09-01

    The problem of calculating inter-particle forces in particle-particle (PP) simulation models occupies an important place in scientific computing. Such simulation models are used in diverse scientific applications in astrophysics, plasma physics, particle accelerators, etc., where long-range forces are considered. Inverse-square laws such as Coulomb's law of electrostatic force and Newton's law of universal gravitation are examples of laws pertaining to long-range forces. The standard naïve PP method outlined, for example, by Hockney and Eastwood [1] is straightforward, processing all pairs of particles in a double nested loop. The PP algorithm provides the best accuracy of all possible methods, but its computational complexity is O(Np²), where Np is the total number of particles involved. The low efficiency of the PP algorithm becomes a challenging issue in cases where high accuracy is required. An example can be taken from charged particle beam dynamics, where so-called macro-particles are used to compute the beam's own space charge (see, e.g., Humphries Jr. [2], Kozynchenko and Svistunov [3]).
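The naive PP double loop with its O(Np²) cost looks like the following sketch (Coulomb force in units with k = 1); exploiting the pairwise symmetry at least halves the work via Newton's third law:

```python
import numpy as np

def pp_forces(q, pos):
    """Naive O(Np^2) particle-particle Coulomb forces.

    q:   array of charges, shape (Np,)
    pos: array of positions, shape (Np, 3)
    This is the double nested loop whose cost the P3M method amortizes by
    handling distant interactions on a mesh.
    """
    n = len(q)
    F = np.zeros_like(pos)
    for i in range(n):
        for j in range(i + 1, n):
            d = pos[i] - pos[j]
            f = q[i] * q[j] * d / np.linalg.norm(d) ** 3
            F[i] += f          # force on i from j
            F[j] -= f          # Newton's third law
    return F
```

Because every pair contributes equal and opposite forces, the total force over an isolated system sums to zero, which is a useful sanity check for any faster approximation built on top of it.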

  2. Quadrupole Alignment and Trajectory Correction for Future Linear Colliders: SLC Tests of a Dispersion-Free Steering Algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Assmann, R

    2004-06-08

    The feasibility of future linear colliders depends on achieving very tight alignment and steering tolerances. All proposals (NLC, JLC, CLIC, TESLA, and S-BAND) currently require a total emittance growth in the main linac of less than 30-100% [1]. This should be compared with a 100% emittance growth in the much smaller SLC linac [2]. Major advances in alignment and beam steering techniques beyond those used in the SLC are necessary for the next generation of linear colliders. In this paper, we present an experimental study of quadrupole alignment with a dispersion-free steering algorithm. A closely related method (wakefield-free steering) takes into account wakefield effects [3]; however, this method cannot be studied at the SLC. The requirements for future linear colliders lead to new and unconventional ideas about alignment and beam steering. For example, no dipole correctors are foreseen for standard trajectory correction in the NLC [4]; beam steering will be done by moving the quadrupoles with magnet movers. This illustrates the close symbiosis between alignment, beam steering, and beam dynamics that will emerge. It is no longer possible to consider accelerator alignment as static, with only a few surveys and realignments per year. Alignment in future linear colliders will be a dynamic process in which the whole linac, with thousands of beam-line elements, is aligned in a few hours or minutes, while the required accuracy of about 5 µm for NLC quadrupole alignment [4] is a factor of 20 tighter than in existing accelerators. The major task in alignment and steering is the accurate determination of the optimum beam-line position. Ideally one would like all elements to be aligned along a straight line; however, this is not practical. Instead, a "smooth curve" is acceptable as long as its wavelength is much longer than the betatron wavelength of the accelerated beam. 
Conventional alignment methods are limited in accuracy by errors in the survey and the fiducials. Beam-based alignment methods ideally depend only upon the BPM resolution and generally provide much better precision. Many of those techniques are described in other contributions to this workshop. In this paper we describe our experience with a dispersion-free steering algorithm for linacs. This algorithm was first suggested by Raubenheimer and Ruth in 1990 [5]. It has been studied in simulations for the NLC [5], TESLA [6], the S-BAND proposal [7], and CLIC [8]. The dispersion-free steering technique can be applied to the whole linac at once and returns the alignment (or trajectory) that minimizes the dispersive emittance growth of the beam. Thus it allows an extremely fast alignment of the beam-line. As we will show, dispersion-free steering is only sensitive to quadrupole misalignments. Wakefield-free steering [3], as mentioned before, is a closely related technique that minimizes the emittance growth caused by both dispersion and wakefields. Due to hardware limitations (i.e., insufficient relative range of power supplies) we could not study this method experimentally in the SLC. However, its systematics are very similar to those of dispersion-free steering. The studies of dispersion-free steering presented here made extensive use of the unique potential of the SLC as the only operating linear collider. We used it to study the performance and problems of advanced beam-based optimization tools in a real beam-line environment and on a large scale. We should mention that the SLC has utilized beam-based alignment for years [9], using the difference of electron and positron trajectories. This method, however, cannot be used in future linear colliders. The goal of our work is to demonstrate the performance of advanced beam-based alignment techniques in linear colliders and to anticipate possible reality-related problems. 
Those can then be solved in the design stage for the next generation of linear colliders.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pang, Xiaoying; Rybarcyk, Larry

HPSim is a GPU-accelerated online multi-particle beam dynamics simulation tool for ion linacs. It was originally developed for use on the Los Alamos 800-MeV proton linac. It is a “z-code” that contains typical linac beam transport elements. The linac RF-gap transformation utilizes transit-time factors to calculate the beam acceleration therein. The space-charge effects are computed using the 2D SCHEFF (Space CHarge EFFect) algorithm, which calculates the radial and longitudinal space-charge forces for cylindrically symmetric beam distributions. Other space-charge routines to be incorporated include the 3D PICNIC and a 3D Poisson solver. HPSim can simulate beam dynamics in drift tube linacs (DTLs) and coupled cavity linacs (CCLs). Elliptical superconducting cavity (SC) structures will also be incorporated into the code. The computational core of the code is written in C++ and accelerated using the NVIDIA CUDA technology. Users access the core code, which is wrapped in Python/C APIs, via Python scripts that enable ease of use and automation of the simulations. The overall linac description, including the EPICS PV machine control parameters, is kept in an SQLite database that also contains the calibration and conversion factors required to transform the machine set points into model values used in the simulation.
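The database-driven workflow described above can be sketched with a self-contained toy: machine set points are read from SQLite and mapped to model values by stored conversion factors. The schema, PV names, and linear calibration model below are hypothetical illustrations, not HPSim's actual tables.

```python
import sqlite3

# Hypothetical miniature of a machine-parameter database: each EPICS PV set
# point is stored with calibration factors that map the machine value to a
# model value (here: a simple linear transform; the real mapping may differ).
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE set_points (
    pv TEXT PRIMARY KEY, machine_value REAL, scale REAL, offset REAL)""")
conn.executemany(
    "INSERT INTO set_points VALUES (?, ?, ?, ?)",
    [("TDAB:QM1:CUR", 25.0, 0.04, 0.0),     # quad current -> field gradient
     ("TDAB:RF01:AMP", 80.0, 0.0125, 0.0)])  # RF amplitude -> cavity voltage

def model_values(connection):
    """Convert stored machine set points into model parameters."""
    rows = connection.execute(
        "SELECT pv, machine_value, scale, offset FROM set_points")
    return {pv: v * s + o for pv, v, s, o in rows}

print(model_values(conn))
```

Keeping calibration factors next to the set points, as the abstract describes, lets the simulation scripts stay free of hard-coded conversions.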

  4. Dynamics of submicron aerosol droplets in a robust optical trap formed by multiple Bessel beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thanopulos, Ioannis; Theoretical and Physical Chemistry Institute, National Hellenic Research Foundation, Athens 11635; Luckhaus, David

In this paper, we model the three-dimensional escape dynamics of single submicron-sized aerosol droplets in optical multiple Bessel beam traps. Trapping in counter-propagating Bessel beams (CPBBs) is compared with a newly proposed quadruple Bessel beam (QBB) trap, which consists of two perpendicularly arranged CPBB traps. Calculations are performed for perfectly and imperfectly aligned traps. Mie theory and finite-difference time-domain methods are used to calculate the optical forces. The droplet escape kinetics are obtained from the solution of the Langevin equation using a Verlet algorithm. Provided the traps are perfectly aligned, the calculations indicate very long lifetimes for droplets trapped either in the CPBB or in the QBB trap. However, minor misalignments that are hard to control experimentally already severely diminish the stability of the CPBB trap. By contrast, such minor misalignments hardly affect the extended droplet lifetimes in a QBB trap. The QBB trap is found to be a stable, robust optical trap, which should enable the experimental investigation of submicron droplets with radii down to 100 nm. Optical binding between two droplets and its potential role in preventing coagulation when loading a CPBB trap is briefly addressed.
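The Langevin-plus-Verlet approach mentioned above can be illustrated with a minimal 1D stand-in: a damped harmonic "trap" with thermal kicks, integrated by a velocity-Verlet-style scheme. All parameters are illustrative and not taken from the paper.

```python
import math, random

random.seed(1)

# 1D Langevin dynamics in a harmonic trap, velocity-Verlet-style integrator.
# Illustrative parameters: spring constant k, drag gamma, mass m, thermal
# energy kT, time step dt (all dimensionless stand-ins).
k, gamma, m, kT, dt = 1.0, 0.5, 1.0, 0.01, 0.01
sigma = math.sqrt(2.0 * gamma * kT / m * dt)  # thermal kick amplitude

def force(x):
    return -k * x  # harmonic restoring force of the trap

x, v = 0.5, 0.0
for _ in range(20000):
    a = (force(x) - gamma * v) / m
    x += v * dt + 0.5 * a * dt * dt
    a_new = (force(x) - gamma * v) / m
    v += 0.5 * (a + a_new) * dt + sigma * random.gauss(0.0, 1.0)

# With kT well below the trap depth, the particle stays confined.
print(abs(x) < 2.0)
```

Escape statistics, as in the paper, would come from repeating such trajectories over an ensemble and recording first-passage times out of the trapping region.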

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    None, None

Coulomb interaction between charged particles inside a bunch is one of the most important collective effects in beam dynamics, becoming even more significant as the energy of the particle beam is lowered to accommodate analytical and low-Z material imaging purposes such as in the time-resolved Ultrafast Electron Microscope (UEM) development currently underway at Michigan State University. In addition, space charge effects are the key limiting factor in the development of ultrafast atomic resolution electron imaging and diffraction technologies and are also correlated with an irreversible growth in rms beam emittance due to fluctuating components of the nonlinear electron dynamics. In the short pulse regime used in the UEM, space charge effects also lead to virtual cathode formation in which the negative charge of the electrons emitted at earlier times, combined with the attractive surface field, hinders further emission of particles and causes a degradation of the pulse properties. Space charge and virtual cathode effects and their remediation are core issues for the development of the next generation of high-brightness UEMs. Since the analytical models are only applicable to special cases, numerical simulations, in addition to experiments, are usually necessary to accurately understand the space charge effect. In this paper, we introduce a grid-free differential-algebra-based multiple level fast multipole algorithm, which calculates the 3D space charge field for n charged particles in an arbitrary distribution with an efficiency of O(n), and its implementation in a simulation code for space-charge-dominated photoemission processes.
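For context, the O(n²) direct summation that fast multipole methods accelerate to O(n) can be written directly. This naive reference (with an illustrative softening parameter) is not the paper's differential-algebra FMM; it is the baseline such methods replace.

```python
# Naive O(n^2) pairwise Coulomb-like field on each particle (dimensionless
# units; eps is a small softening term to avoid division blow-up).
def direct_field(positions, charges, eps=1e-6):
    fields = []
    for i, (xi, yi, zi) in enumerate(positions):
        ex = ey = ez = 0.0
        for j, (xj, yj, zj) in enumerate(positions):
            if i == j:
                continue  # no self-interaction
            dx, dy, dz = xi - xj, yi - yj, zi - zj
            r3 = (dx * dx + dy * dy + dz * dz + eps) ** 1.5
            ex += charges[j] * dx / r3
            ey += charges[j] * dy / r3
            ez += charges[j] * dz / r3
        fields.append((ex, ey, ez))
    return fields

# Two equal charges on the x axis: fields are equal and opposite by symmetry.
f = direct_field([(-0.5, 0.0, 0.0), (0.5, 0.0, 0.0)], [1.0, 1.0])
print(f)
```

An FMM obtains the same fields to controlled accuracy while grouping distant particles into multipole expansions, which is what brings the cost down to O(n).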

  6. Adaptive optics compensation of orbital angular momentum beams with a modified Gerchberg-Saxton-based phase retrieval algorithm

    NASA Astrophysics Data System (ADS)

    Chang, Huan; Yin, Xiao-li; Cui, Xiao-zhou; Zhang, Zhi-chao; Ma, Jian-xin; Wu, Guo-hua; Zhang, Li-jia; Xin, Xiang-jun

    2017-12-01

    Practical orbital angular momentum (OAM)-based free-space optical (FSO) communications commonly experience serious performance degradation and crosstalk due to atmospheric turbulence. In this paper, we propose a wave-front sensorless adaptive optics (WSAO) system with a modified Gerchberg-Saxton (GS)-based phase retrieval algorithm to correct distorted OAM beams. We use the spatial phase perturbation (SPP) GS algorithm with a distorted probe Gaussian beam as the only input. The principle and parameter selections of the algorithm are analyzed, and the performance of the algorithm is discussed. The simulation results show that the proposed adaptive optics (AO) system can significantly compensate for distorted OAM beams in single-channel or multiplexed OAM systems, which provides new insights into adaptive correction systems using OAM beams.
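The classic Gerchberg-Saxton iteration that the SPP-GS variant builds on can be sketched in a few lines: alternate between imposing the known source-plane amplitude and the measured Fourier-plane magnitude. This toy 1D version with a hand-rolled DFT illustrates the error-reduction property only; it is not the paper's modified algorithm.

```python
import cmath, math, random

def dft(x):
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * math.pi * j * k / n) for k in range(n))
            for j in range(n)]

def idft(x):
    n = len(x)
    return [sum(x[j] * cmath.exp(2j * math.pi * j * k / n) for j in range(n)) / n
            for k in range(n)]

random.seed(0)
n = 16
# Build a self-consistent toy problem: a unit-amplitude field with a known
# phase, whose Fourier magnitudes serve as the "measured" constraint.
true_phase = [random.uniform(-math.pi, math.pi) for _ in range(n)]
target_mag = [abs(v) for v in dft([cmath.exp(1j * p) for p in true_phase])]

def fourier_error(ph):
    return sum((abs(v) - t) ** 2
               for v, t in zip(dft([cmath.exp(1j * p) for p in ph]), target_mag))

phase = [0.0] * n
err0 = fourier_error(phase)
for _ in range(100):
    g = [cmath.exp(1j * p) for p in phase]            # impose unit amplitude
    G = dft(g)
    G = [t * cmath.exp(1j * cmath.phase(v)) for t, v in zip(target_mag, G)]
    phase = [cmath.phase(v) for v in idft(G)]         # keep phase, drop amplitude
print(fourier_error(phase) < err0)
```

The Fourier-magnitude error is non-increasing under this alternating projection, which is why GS-type loops are attractive for wave-front-sensorless correction.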

  7. Optimization of beam orientation in radiotherapy using planar geometry

    NASA Astrophysics Data System (ADS)

    Haas, O. C. L.; Burnham, K. J.; Mills, J. A.

    1998-08-01

    This paper proposes a new geometrical formulation of the coplanar beam orientation problem combined with a hybrid multiobjective genetic algorithm. The approach is demonstrated by optimizing the beam orientation in two dimensions, with the objectives being formulated using planar geometry. The traditional formulation of the objectives associated with the organs at risk has been modified to account for the use of complex dose delivery techniques such as beam intensity modulation. The new algorithm attempts to replicate the approach of a treatment planner whilst reducing the amount of computation required. Hybrid genetic search operators have been developed to improve the performance of the genetic algorithm by exploiting problem-specific features. The multiobjective genetic algorithm is formulated around the concept of Pareto optimality which enables the algorithm to search in parallel for different objectives. When the approach is applied without constraining the number of beams, the solution produces an indication of the minimum number of beams required. It is also possible to obtain non-dominated solutions for various numbers of beams, thereby giving the clinicians a choice in terms of the number of beams as well as in the orientation of these beams.
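The Pareto-optimality concept at the heart of the multiobjective genetic algorithm reduces to a dominance test. A minimal sketch follows; the objective pairs are hypothetical plan scores (both minimized), not values from the paper.

```python
# A solution dominates another if it is no worse in every objective and
# strictly better in at least one (all objectives minimized here).
def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(points):
    """Return the Pareto front of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Hypothetical (target-coverage-deficit, organ-at-risk-dose) pairs:
plans = [(0.1, 8.0), (0.3, 4.0), (0.2, 6.0), (0.3, 6.0), (0.5, 2.0)]
print(non_dominated(plans))  # -> [(0.1, 8.0), (0.3, 4.0), (0.2, 6.0), (0.5, 2.0)]
```

Keeping the whole non-dominated set, rather than a single weighted-sum optimum, is what lets clinicians trade off beam count against beam orientation afterwards.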

  8. Fast readout algorithm for cylindrical beam position monitors providing good accuracy for particle bunches with large offsets

    NASA Astrophysics Data System (ADS)

    Thieberger, P.; Gassner, D.; Hulsart, R.; Michnoff, R.; Miller, T.; Minty, M.; Sorrell, Z.; Bartnik, A.

    2018-04-01

    A simple, analytically correct algorithm is developed for calculating "pencil" relativistic beam coordinates using the signals from an ideal cylindrical particle beam position monitor (BPM) with four pickup electrodes (PUEs) of infinitesimal widths. The algorithm is then applied to simulations of realistic BPMs with finite width PUEs. Surprisingly small deviations are found. Simple empirically determined correction terms reduce the deviations even further. The algorithm is then tested with simulations for non-relativistic beams. As an example of the data acquisition speed advantage, a Field Programmable Gate Array-based BPM readout implementation of the new algorithm has been developed and characterized. Finally, the algorithm is tested with BPM data from the Cornell Preinjector.
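For context, the conventional linear estimate that an exact algorithm improves on can be checked against the ideal cylindrical BPM model, in which the signal on an infinitesimal electrode follows the Poisson kernel for a line charge. This sketch shows only the textbook difference-over-sum approximation, not the paper's analytically correct algorithm.

```python
import math

# Induced signal on an infinitesimal pickup electrode at azimuth phi_e of a
# cylindrical BPM of radius R, for a pencil beam at (x, y): Poisson kernel.
def pue_signal(phi_e, x, y, R=1.0):
    r2 = x * x + y * y
    return (R * R - r2) / (
        R * R + r2 - 2 * R * (x * math.cos(phi_e) + y * math.sin(phi_e)))

# Conventional linear estimate: x ~ (R/2) * (right - left) / (right + left).
def linear_x_estimate(x, y, R=1.0):
    right, left = pue_signal(0.0, x, y, R), pue_signal(math.pi, x, y, R)
    return (R / 2.0) * (right - left) / (right + left)

x_true = 0.05  # beam offset of 5% of the BPM radius
x_est = linear_x_estimate(x_true, 0.0)
print(abs(x_est - x_true) < 0.001)  # accurate for small offsets only
```

The linear formula degrades for the large offsets in the record's title, which is exactly the regime where an analytically exact inversion of the four-electrode signals pays off.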

  9. Fast readout algorithm for cylindrical beam position monitors providing good accuracy for particle bunches with large offsets

    DOE PAGES

    Thieberger, Peter; Gassner, D.; Hulsart, R.; ...

    2018-04-25

Here, a simple, analytically correct algorithm is developed for calculating “pencil” relativistic beam coordinates using the signals from an ideal cylindrical particle beam position monitor (BPM) with four pickup electrodes (PUEs) of infinitesimal widths. The algorithm is then applied to simulations of realistic BPMs with finite width PUEs. Surprisingly small deviations are found. Simple empirically determined correction terms reduce the deviations even further. The algorithm is then tested with simulations for non-relativistic beams. As an example of the data acquisition speed advantage, an FPGA-based BPM readout implementation of the new algorithm has been developed and characterized. Lastly, the algorithm is tested with BPM data from the Cornell Preinjector.

  10. Fast readout algorithm for cylindrical beam position monitors providing good accuracy for particle bunches with large offsets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thieberger, Peter; Gassner, D.; Hulsart, R.

Here, a simple, analytically correct algorithm is developed for calculating “pencil” relativistic beam coordinates using the signals from an ideal cylindrical particle beam position monitor (BPM) with four pickup electrodes (PUEs) of infinitesimal widths. The algorithm is then applied to simulations of realistic BPMs with finite width PUEs. Surprisingly small deviations are found. Simple empirically determined correction terms reduce the deviations even further. The algorithm is then tested with simulations for non-relativistic beams. As an example of the data acquisition speed advantage, an FPGA-based BPM readout implementation of the new algorithm has been developed and characterized. Lastly, the algorithm is tested with BPM data from the Cornell Preinjector.

  11. Fast readout algorithm for cylindrical beam position monitors providing good accuracy for particle bunches with large offsets.

    PubMed

    Thieberger, P; Gassner, D; Hulsart, R; Michnoff, R; Miller, T; Minty, M; Sorrell, Z; Bartnik, A

    2018-04-01

    A simple, analytically correct algorithm is developed for calculating "pencil" relativistic beam coordinates using the signals from an ideal cylindrical particle beam position monitor (BPM) with four pickup electrodes (PUEs) of infinitesimal widths. The algorithm is then applied to simulations of realistic BPMs with finite width PUEs. Surprisingly small deviations are found. Simple empirically determined correction terms reduce the deviations even further. The algorithm is then tested with simulations for non-relativistic beams. As an example of the data acquisition speed advantage, a Field Programmable Gate Array-based BPM readout implementation of the new algorithm has been developed and characterized. Finally, the algorithm is tested with BPM data from the Cornell Preinjector.

  12. Performance of a high resolution cavity beam position monitor system

    NASA Astrophysics Data System (ADS)

    Walston, Sean; Boogert, Stewart; Chung, Carl; Fitsos, Pete; Frisch, Joe; Gronberg, Jeff; Hayano, Hitoshi; Honda, Yosuke; Kolomensky, Yury; Lyapin, Alexey; Malton, Stephen; May, Justin; McCormick, Douglas; Meller, Robert; Miller, David; Orimoto, Toyoko; Ross, Marc; Slater, Mark; Smith, Steve; Smith, Tonee; Terunuma, Nobuhiro; Thomson, Mark; Urakawa, Junji; Vogel, Vladimir; Ward, David; White, Glen

    2007-07-01

    It has been estimated that an RF cavity Beam Position Monitor (BPM) could provide a position measurement resolution of less than 1 nm. We have developed a high resolution cavity BPM and associated electronics. A triplet comprised of these BPMs was installed in the extraction line of the Accelerator Test Facility (ATF) at the High Energy Accelerator Research Organization (KEK) for testing with its ultra-low emittance beam. The three BPMs were each rigidly mounted inside an alignment frame on six variable-length struts which could be used to move the BPMs in position and angle. We have developed novel methods for extracting the position and tilt information from the BPM signals including a robust calibration algorithm which is immune to beam jitter. To date, we have demonstrated a position resolution of 15.6 nm and a tilt resolution of 2.1 μrad over a dynamic range of approximately ±20 μm.

  13. Dose calculation algorithm of fast fine-heterogeneity correction for heavy charged particle radiotherapy.

    PubMed

    Kanematsu, Nobuyuki

    2011-04-01

This work addresses computing techniques for dose calculations in treatment planning with proton and ion beams, based on an efficient kernel-convolution method referred to as grid-dose spreading (GDS) and an accurate heterogeneity-correction method referred to as Gaussian beam splitting. The original GDS algorithm suffered from distortion of the dose distribution for beams tilted with respect to the dose-grid axes. Use of intermediate grids normal to the beam field has solved the beam-tilting distortion. The interplay between the arrangement of beams and grids was found to be another intrinsic source of artifact. Inclusion of rectangular-kernel convolution in beam transport, to share the beam contribution among the nearest grids in a regulatory manner, has solved the interplay problem. This algorithmic framework was applied to a tilted proton pencil beam and a broad carbon-ion beam. In these cases, while the elementary pencil beams individually split into several tens, the calculation time increased only by several times with the GDS algorithm. The GDS and beam-splitting methods will complementarily enable accurate and efficient dose calculations for radiotherapy with protons and ions. Copyright © 2010 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
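The idea of sharing a beam's contribution among the nearest grids can be illustrated with the standard 1D linear (cloud-in-cell-style) deposition: a rectangle one grid spacing wide, centered on the beam, overlaps its two neighboring cells, and each cell receives the overlap fraction. This is a stand-in for, not a reproduction of, the paper's rectangular-kernel convolution.

```python
# Share a pencil-beam dose between the two nearest grid points in 1D.
def deposit(position, dose, grid, spacing=1.0):
    i = int(position // spacing)   # index of the grid point to the left
    frac = position / spacing - i  # fractional distance past it
    grid[i] += dose * (1.0 - frac)
    grid[i + 1] += dose * frac
    return grid

grid = deposit(2.25, 1.0, [0.0] * 5)
print(grid)  # -> [0.0, 0.0, 0.75, 0.25, 0.0]
```

Because the shared fractions always sum to the deposited dose, this kind of kernel removes the beam/grid interplay artifact without changing the total dose.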

  14. Noninvasive hemoglobin measurement using dynamic spectrum

    NASA Astrophysics Data System (ADS)

    Yi, Xiaoqing; Li, Gang; Lin, Ling

    2017-08-01

Spectroscopic methods for noninvasive hemoglobin (Hgb) measurement suffer from individual differences and particularly weak signals. In order to address these problems, we have put forward a series of improvements based on the dynamic spectrum (DS), covering instrument design, the spectrum extraction algorithm, and the modeling approach. The instrument adopts light sources composed of eight laser diodes with wavelengths ranging from 600 nm to 1100 nm and records photoplethysmography signals at the eight wavelengths synchronously. In order to simplify the optical design, we modulate the light sources with orthogonal square waves and design the corresponding demodulation algorithm, instead of adopting a beam-splitting system. A newly designed algorithm named difference accumulation has proved effective in improving the accuracy of dynamic spectrum extraction. A total of 220 subjects participated in the clinical experiment. An extreme learning machine calibration model between the DS data and the Hgb levels is established. The correlation coefficient and root-mean-square error of the prediction sets are 0.8645 and 8.48 g/l, respectively. The results indicate that the Hgb level can be derived by this approach noninvasively with acceptable precision and accuracy. It is expected to achieve clinical application in the future.
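The orthogonal square-wave modulation idea can be sketched with Walsh-type codes: several sources share one detector, and each intensity is recovered by correlating the summed signal against the matching code. The code lengths and amplitudes below are illustrative; the paper's actual modulation and demodulation scheme may differ in detail.

```python
# Two orthogonal +/-1 square-wave codes (Walsh-type), 32 samples each.
w1 = [1, 1, -1, -1] * 8   # one sign flip per four samples
w2 = [1, -1, 1, -1] * 8   # twice the rate; orthogonal to w1 over each period
a1, a2 = 0.7, 0.3         # unknown source intensities to recover

# Single detector sees the sum of both modulated sources.
detector = [a1 * s1 + a2 * s2 for s1, s2 in zip(w1, w2)]

def demodulate(signal, code):
    # Correlate against the code and normalize; cross terms average to zero.
    return sum(s * c for s, c in zip(signal, code)) / len(code)

print(demodulate(detector, w1), demodulate(detector, w2))  # ~0.7 and ~0.3
```

Orthogonality of the codes is what removes the need for a beam-splitting system: the channels separate numerically rather than optically.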

  15. PI-line-based image reconstruction in helical cone-beam computed tomography with a variable pitch.

    PubMed

    Zou, Yu; Pan, Xiaochuan; Xia, Dan; Wang, Ge

    2005-08-01

Current applications of helical cone-beam computed tomography (CT) involve primarily a constant pitch, where the translating speed of the table and the rotation speed of the source-detector remain constant. However, situations do exist where it may be more desirable to use a helical scan with a variable translating speed of the table, leading to a variable pitch. One such application could arise in helical cone-beam CT fluoroscopy for the determination of vascular structures through real-time imaging of contrast bolus arrival. Most of the existing reconstruction algorithms have been developed only for helical cone-beam CT with constant pitch, including the backprojection-filtration (BPF) and filtered-backprojection (FBP) algorithms that we proposed previously. It is possible to generalize some of these algorithms to reconstruct images exactly for helical cone-beam CT with a variable pitch. In this work, we generalize our BPF and FBP algorithms to reconstruct images directly from data acquired in helical cone-beam CT with a variable pitch. We have also performed a preliminary numerical study to demonstrate and verify the generalization of the two algorithms. The results of the study confirm that our generalized BPF and FBP algorithms can yield exact reconstruction in helical cone-beam CT with a variable pitch. It should be pointed out that our generalized BPF algorithm is the only algorithm capable of exactly reconstructing region-of-interest images from data containing transverse truncations.

  16. A firefly algorithm for optimum design of new-generation beams

    NASA Astrophysics Data System (ADS)

    Erdal, F.

    2017-06-01

    This research addresses the minimum weight design of new-generation steel beams with sinusoidal openings using a metaheuristic search technique, namely the firefly method. The proposed algorithm is also used to compare the optimum design results of sinusoidal web-expanded beams with steel castellated and cellular beams. Optimum design problems of all beams are formulated according to the design limitations stipulated by the Steel Construction Institute. The design methods adopted in these publications are consistent with BS 5950 specifications. The formulation of the design problem considering the above-mentioned limitations turns out to be a discrete programming problem. The design algorithms based on the technique select the optimum universal beam sections, dimensional properties of sinusoidal, hexagonal and circular holes, and the total number of openings along the beam as design variables. Furthermore, this selection is also carried out such that the behavioural limitations are satisfied. Numerical examples are presented, where the suggested algorithm is implemented to achieve the minimum weight design of these beams subjected to loading combinations.
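The core firefly update rule can be shown on a stand-in objective (the sphere function, not the discrete beam-design problem): each firefly moves toward brighter ones with distance-decaying attractiveness plus a damped random walk. All parameters below are illustrative.

```python
import math, random

random.seed(42)

def f(x):
    return sum(v * v for v in x)   # objective to minimize ("brightness" = -f)

dim, n, iters = 2, 20, 120
beta0, gamma, alpha = 1.0, 0.05, 0.2
pop = [[random.uniform(-2.0, 2.0) for _ in range(dim)] for _ in range(n)]

for _ in range(iters):
    alpha *= 0.97                          # damp the random-walk term over time
    for i in range(n):
        for j in range(n):
            if f(pop[j]) < f(pop[i]):      # j is brighter: move i toward it
                r2 = sum((a - b) ** 2 for a, b in zip(pop[i], pop[j]))
                beta = beta0 * math.exp(-gamma * r2)  # distance-decaying pull
                pop[i] = [a + beta * (b - a) + alpha * random.uniform(-0.5, 0.5)
                          for a, b in zip(pop[i], pop[j])]

best = min(pop, key=f)
print(f(best))  # approaches the optimum at the origin
```

For the discrete beam problem in the paper, the continuous positions would instead index catalogue sections and hole dimensions, with the behavioural constraints handled by penalties or repair.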

  17. Trapping photons on the line: controllable dynamics of a quantum walk

    NASA Astrophysics Data System (ADS)

    Xue, Peng; Qin, Hao; Tang, Bao

    2014-04-01

Optical interferometers comprising birefringent-crystal beam displacers, wave plates, and phase shifters serve as stable devices for simulating quantum information processes such as heralded coined quantum walks. Quantum walks are important for quantum algorithms, universal quantum computing circuits, quantum transport in complex systems, and demonstrating intriguing nonlinear dynamical quantum phenomena. We introduce fully controllable polarization-independent phase shifters in optical paths in order to realize site-dependent phase defects. The effectiveness of our interferometer is demonstrated through realizing single-photon quantum-walk dynamics in one dimension. By applying site-dependent phase defects, the translational symmetry of an ideal standard quantum walk is broken, resulting in a localization effect in a quantum walk architecture. The walk is realized for different site-dependent phase defects and coin settings, indicating that the strength of the localization signature depends on the magnitude of the site-dependent phase defects and on the coin settings, and opening the way for the implementation of a quantum-walk-based algorithm.
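A coined quantum walk with a site-dependent phase defect can be simulated in a few lines. The defect phase, defect site, and initial state below are illustrative choices, not the experiment's settings.

```python
import math

# 1D Hadamard-coined quantum walk; state is a dict amplitude[(site, coin)].
H = 1.0 / math.sqrt(2.0)

def step(amp, defect_phase=1j, defect_site=0):
    out = {}
    for (x, c), a in amp.items():
        if x == defect_site:
            a *= defect_phase                     # site-dependent phase defect
        # Hadamard coin: |0> -> (|0>+|1>)/sqrt(2), |1> -> (|0>-|1>)/sqrt(2)
        branches = ((0, H * a), (1, H * a if c == 0 else -H * a))
        for nc, na in branches:
            nx = x + 1 if nc == 0 else x - 1      # coin-conditioned shift
            out[(nx, nc)] = out.get((nx, nc), 0.0) + na
    return out

amp = {(0, 0): 1.0 / math.sqrt(2.0), (0, 1): 1j / math.sqrt(2.0)}  # symmetric start
for _ in range(20):
    amp = step(amp)
total = sum(abs(a) ** 2 for a in amp.values())
print(round(total, 9))  # unitary evolution preserves total probability
```

Comparing the site-probability distribution with and without the defect phase is how the localization signature described above would show up numerically.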

  18. SU-E-T-605: Performance Evaluation of MLC Leaf-Sequencing Algorithms in Head-And-Neck IMRT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jing, J; Lin, H; Chow, J

    2015-06-15

Purpose: To investigate the efficiency of three multileaf collimator (MLC) leaf-sequencing algorithms proposed by Galvin et al, Chen et al and Siochi et al using external beam treatment plans for head-and-neck intensity modulated radiation therapy (IMRT). Methods: IMRT plans for head-and-neck were created using the CORVUS treatment planning system. The plans were optimized and the fluence maps for all photon beams determined. Three different MLC leaf-sequencing algorithms based on Galvin et al, Chen et al and Siochi et al were used to calculate the final photon segmental fields and their monitor units in delivery. For comparison purposes, the maximum intensity of the fluence map was kept constant in the different plans. The number of beam segments and the total number of monitor units were calculated for the three algorithms. Results: From the number of beam segments and the total number of monitor units, we found that the algorithm of Galvin et al had the largest total number of monitor units, about 70% larger than the other two algorithms. Moreover, both the algorithms of Galvin et al and Siochi et al produced relatively fewer beam segments than that of Chen et al. Although the number of beam segments and the total number of monitor units calculated by the different algorithms varied with the head-and-neck plans, the algorithms of Galvin et al and Siochi et al performed well with fewer beam segments, though the algorithm of Galvin et al had a larger total number of monitor units than that of Siochi et al. Conclusion: Although the performance of a leaf-sequencing algorithm varies with different IMRT plans having different fluence maps, an evaluation is possible based on the calculated number of beam segments and monitor units. In this study, the algorithm by Siochi et al was found to be the more efficient for head-and-neck IMRT.
The Project Sponsored by the Fundamental Research Funds for the Central Universities (J2014HGXJ0094) and the Scientific Research Foundation for the Returned Overseas Chinese Scholars, State Education Ministry.

  19. Chaotic dynamics of flexible Euler-Bernoulli beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Awrejcewicz, J., E-mail: awrejcew@p.lodz.pl; Krysko, A. V., E-mail: anton.krysko@gmail.com; Kutepov, I. E., E-mail: iekutepov@gmail.com

    2013-12-15

Mathematical modeling and analysis of spatio-temporal chaotic dynamics of flexible simple and curved Euler-Bernoulli beams are carried out. The Kármán-type geometric non-linearity is considered. The partial differential equations governing the dynamics of the studied objects, together with the associated boundary value problems, are reduced to the Cauchy problem through both the Finite Difference Method with approximation of O(c²) and the Finite Element Method. The obtained Cauchy problem is solved via the fourth- and sixth-order Runge-Kutta methods. Validity and reliability of the results are rigorously discussed. Analysis of the chaotic dynamics of flexible Euler-Bernoulli beams for a series of boundary conditions is carried out with the help of the qualitative theory of differential equations. We analyze time histories, phase and modal portraits, autocorrelation functions, the Poincaré and pseudo-Poincaré maps, signs of the first four Lyapunov exponents, as well as the compression factor of the phase volume of an attractor. A novel scenario of transition from periodicity to chaos is obtained, and a transition from chaos to hyper-chaos is illustrated. In particular, we study and explain the phenomenon of transition from symmetric to asymmetric vibrations. Vibration-type charts are given with respect to two control parameters: the amplitude q₀ and frequency ω_p of the uniformly distributed periodic excitation. Furthermore, we detect and illustrate how the so-called temporal-space chaos develops following the transition from regular to chaotic system dynamics.

  20. Evaluation of Laser Based Alignment Algorithms Under Additive Random and Diffraction Noise

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McClay, W A; Awwal, A; Wilhelmsen, K

    2004-09-30

The purpose of the automatic alignment algorithm at the National Ignition Facility (NIF) is to determine the position of a laser beam based on the position of beam features from video images. The position information obtained is used to command motors and attenuators to adjust the beam lines to the desired position, which facilitates the alignment of all 192 beams. One of the goals of the algorithm development effort is to ascertain the performance, reliability, and uncertainty of the position measurement. This paper describes a method of evaluating the performance of algorithms using Monte Carlo simulation. In particular, we show the application of this technique to the LM1_LM3 algorithm, which determines the position of a series of two beam light sources. The performance of the algorithm was evaluated for an ensemble of over 900 simulated images with varying image intensities and noise counts, as well as varying diffraction noise amplitude and frequency. The performance of the algorithm on the image data set had a tolerance well beneath the 0.5-pixel system requirement.
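The Monte Carlo evaluation strategy described here can be sketched with a toy centroid-based position estimator: simulate many noisy images of a Gaussian beam spot and measure the spread of the recovered position. The image size, spot parameters, and noise level are all illustrative, not NIF values.

```python
import math, random

random.seed(7)

N, x0, y0, w = 32, 15.3, 16.7, 3.0  # image size, true spot center, spot width

def noisy_image(noise):
    # Gaussian beam spot plus zero-mean additive pixel noise.
    return [[math.exp(-((i - x0) ** 2 + (j - y0) ** 2) / (2 * w * w))
             + random.gauss(0.0, noise) for j in range(N)] for i in range(N)]

def centroid(img):
    s = sum(sum(row) for row in img)
    cx = sum(i * v for i, row in enumerate(img) for v in row)
    cy = sum(j * v for row in img for j, v in enumerate(row))
    return cx / s, cy / s

errors = []
for _ in range(200):
    cx, cy = centroid(noisy_image(0.01))
    errors.append(math.hypot(cx - x0, cy - y0))
rms = math.sqrt(sum(e * e for e in errors) / len(errors))
print(rms)  # ensemble RMS position error, in pixels
```

Sweeping the noise amplitude (and, for NIF-like studies, structured diffraction noise) over such an ensemble is what yields the measured tolerance of a position algorithm.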

  1. Adaptive step-size algorithm for Fourier beam-propagation method with absorbing boundary layer of auto-determined width.

    PubMed

    Learn, R; Feigenbaum, E

    2016-06-01

    Two algorithms that enhance the utility of the absorbing boundary layer are presented, mainly in the framework of the Fourier beam-propagation method. One is an automated boundary layer width selector that chooses a near-optimal boundary size based on the initial beam shape. The second algorithm adjusts the propagation step sizes based on the beam shape at the beginning of each step in order to reduce aliasing artifacts.
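One plausible reading of the automated width selector (the paper's exact criterion may well differ) is to measure how much edge margin the initial beam leaves unused, since the absorbing layer can occupy that margin without clipping the beam. A sketch under that assumption:

```python
import math

def auto_boundary_width(profile, threshold=1e-3):
    """Count edge samples where the beam is negligible relative to its peak;
    an absorbing layer of that width would not clip the initial beam."""
    peak = max(abs(v) for v in profile)
    margin = 0
    while margin < len(profile) // 2 and abs(profile[margin]) < threshold * peak:
        margin += 1
    return margin

n = 256
gaussian = [math.exp(-((i - n / 2) / 20.0) ** 2) for i in range(n)]
print(auto_boundary_width(gaussian))  # -> 76 for this beam shape
```

A wider beam returns a smaller margin, which matches the intuition that broad beams need the absorber pushed further out (or a larger computational window).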

  2. Adaptive step-size algorithm for Fourier beam-propagation method with absorbing boundary layer of auto-determined width

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Learn, R.; Feigenbaum, E.

    Two algorithms that enhance the utility of the absorbing boundary layer are presented, mainly in the framework of the Fourier beam-propagation method. One is an automated boundary layer width selector that chooses a near-optimal boundary size based on the initial beam shape. Furthermore, the second algorithm adjusts the propagation step sizes based on the beam shape at the beginning of each step in order to reduce aliasing artifacts.

  3. Adaptive step-size algorithm for Fourier beam-propagation method with absorbing boundary layer of auto-determined width

    DOE PAGES

    Learn, R.; Feigenbaum, E.

    2016-05-27

    Two algorithms that enhance the utility of the absorbing boundary layer are presented, mainly in the framework of the Fourier beam-propagation method. One is an automated boundary layer width selector that chooses a near-optimal boundary size based on the initial beam shape. Furthermore, the second algorithm adjusts the propagation step sizes based on the beam shape at the beginning of each step in order to reduce aliasing artifacts.

  4. Coherent control of plasma dynamics by feedback-optimized wavefront manipulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    He, Z.-H.; Hou, B.; Gao, G.

    2015-05-15

Plasmas generated by an intense laser pulse can support coherent structures, such as a large amplitude wakefield, that can affect the outcome of an experiment. We investigate the coherent control of plasma dynamics by feedback-optimized wavefront manipulation using a deformable mirror. The experimental outcome is directly used as feedback in an evolutionary algorithm for optimization of the phase front of the driving laser pulse. In this paper, we applied this method to two different experiments: (i) acceleration of electrons in laser driven plasma waves and (ii) self-compression of optical pulses induced by ionization nonlinearity. The manipulation of the laser wavefront leads to orders-of-magnitude improvements in electron beam properties such as the peak charge, beam divergence, and transverse emittance. The demonstration of coherent control for plasmas opens new possibilities for future laser-based accelerators and their applications.

  5. NORTICA—a new code for cyclotron analysis

    NASA Astrophysics Data System (ADS)

    Gorelov, D.; Johnson, D.; Marti, F.

    2001-12-01

The new package NORTICA (Numerical ORbit Tracking In Cyclotrons with Analysis) of computer codes for beam dynamics simulations is under development at NSCL. The package was started as a replacement for the code MONSTER [1] developed in the laboratory in the past. The new codes are capable of beam dynamics simulations in both CCF (Coupled Cyclotron Facility) accelerators, the K500 and K1200 superconducting cyclotrons. The general purpose of this package is to assist in setting up and tuning the cyclotrons, taking into account the main field and extraction channel imperfections. The computer platform for the package is an Alpha Station with the UNIX operating system and an X-Windows graphical interface. A multiple-programming-language approach was used in order to combine the reliability of the numerical algorithms developed over a long period of time in the laboratory with the friendliness of a modern-style user interface. This paper describes the capability and features of the codes in their present state.

  6. Investigation of photon beam models in heterogeneous media of modern radiotherapy.

    PubMed

    Ding, W; Johnston, P N; Wong, T P Y; Bubb, I F

    2004-06-01

This study investigates the performance of photon beam models in dose calculations involving heterogeneous media in modern radiotherapy. Three dose calculation algorithms implemented in the CMS FOCUS treatment planning system have been assessed and validated using ionization chambers, thermoluminescent dosimeters (TLDs) and film. The algorithms include the multigrid superposition (MGS) algorithm, fast Fourier Transform Convolution (FFTC) algorithm and Clarkson algorithm. Heterogeneous phantoms used in the study consist of air cavities, lung analogue and an anthropomorphic phantom. Depth dose distributions along the central beam axis for 6 MV and 10 MV photon beams with field sizes of 5 cm x 5 cm and 10 cm x 10 cm were measured in the air cavity phantoms and lung analogue phantom. Point dose measurements were performed in the anthropomorphic phantom. Calculated results with the three dose calculation algorithms were compared with measured results. In the air cavity phantoms, the maximum dose differences between the algorithms and the measurements were found at the distal surface of the air cavity with a 10 MV photon beam and a 5 cm x 5 cm field size. The differences were 3.8%, 24.9% and 27.7% for the MGS, FFTC and Clarkson algorithms, respectively. Experimental measurements of the secondary electron build-up range beyond the air cavity showed an increase with decreasing field size, increasing energy and increasing air cavity thickness. The maximum dose differences in the lung analogue with a 5 cm x 5 cm field size were found to be 0.3%, 4.9% and 6.9% for the MGS, FFTC and Clarkson algorithms with a 6 MV photon beam and 0.4%, 6.3% and 9.1% with a 10 MV photon beam, respectively.
In the anthropomorphic phantom, the dose differences between calculations using the MGS algorithm and measurements with TLD rods were less than +/-4.5% for 6 MV and 10 MV photon beams with the 10 cm x 10 cm field size and for the 6 MV photon beam with the 5 cm x 5 cm field size, and within +/-7.5% for 10 MV with the 5 cm x 5 cm field size. The FFTC and Clarkson algorithms overestimated doses at all dose points in the lung of the anthropomorphic phantom. In conclusion, the MGS is the most accurate dose calculation algorithm of the investigated photon beam models. It is strongly recommended for implementation in modern radiotherapy with multiple small fields when heterogeneous media are in the treatment fields.

  7. Testing of the analytical anisotropic algorithm for photon dose calculation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Esch, Ann van; Tillikainen, Laura; Pyykkonen, Jukka

    2006-11-15

    The analytical anisotropic algorithm (AAA) was implemented in the Eclipse (Varian Medical Systems) treatment planning system to replace the single pencil beam (SPB) algorithm for the calculation of dose distributions for photon beams. AAA was developed to improve the dose calculation accuracy, especially in heterogeneous media. The total dose deposition is calculated as the superposition of the dose deposited by two photon sources (primary and secondary) and by an electron contamination source. The photon dose is calculated as a three-dimensional convolution of Monte Carlo precalculated scatter kernels, scaled according to the electron density matrix. For the configuration of AAA, an optimization algorithm determines the parameters characterizing the multiple source model by optimizing the agreement between the calculated and measured depth dose curves and profiles for the basic beam data. We have combined the acceptance tests obtained in three different departments for 6, 15, and 18 MV photon beams. The accuracy of AAA was tested for different field sizes (symmetric and asymmetric) for open fields, wedged fields, and static and dynamic multileaf collimation fields. Depth dose behavior at different source-to-phantom distances was investigated. Measurements were performed on homogeneous, water equivalent phantoms, on simple phantoms containing cork inhomogeneities, and on the thorax of an anthropomorphic phantom. Comparisons were made among measurements, AAA, and SPB calculations. The optimization procedure for the configuration of the algorithm was successful in reproducing the basic beam data with an overall accuracy of 3%, 1 mm in the build-up region, and 1%, 1 mm elsewhere. Testing of the algorithm in more clinical setups showed comparable results for depth dose curves, profiles, and monitor units of symmetric open and wedged beams below d_max. 
The electron contamination model was found to be suboptimal for modeling the dose around d_max, especially for physical wedges at smaller source-to-phantom distances. For the asymmetric field verification, absolute dose differences of up to 4% were observed for the most extreme asymmetries. Compared to the SPB, the penumbra modeling is considerably improved (1%, 1 mm). At the interface between solid water and cork, profiles show a better agreement with AAA. Depth dose curves in the cork are substantially better with AAA than with SPB. Improvements are more pronounced for 18 MV than for 6 MV. Point dose measurements in the thoracic phantom are mostly within 5%. In general, we can conclude that, compared to SPB, AAA improves the accuracy of dose calculations. Particular progress was made with respect to the penumbra and low dose regions. In heterogeneous materials, improvements are substantial and more pronounced for high (18 MV) than for low (6 MV) energies.
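
The kernel-superposition idea described in this record can be illustrated with a heavily simplified one-dimensional sketch; the exponential attenuation, toy kernel, and density values below are illustrative assumptions, not the Monte Carlo-derived beam data AAA actually uses:

```python
import numpy as np

# Toy 1D sketch of dose as a convolution of energy released (TERMA) with a
# precomputed scatter kernel, with the kernel argument taken as the
# radiological (density-scaled) distance between voxels.
def dose_superposition(terma, kernel, density):
    """Superpose kernel contributions using radiological distances."""
    dose = np.zeros_like(terma)
    n = len(terma)
    for i in range(n):                      # deposition voxel
        for j in range(n):                  # interaction voxel
            lo, hi = min(i, j), max(i, j)
            r = density[lo:hi].sum()        # radiological distance (voxels)
            k = int(round(r))
            if k < len(kernel):
                dose[i] += terma[j] * kernel[k]
    return dose

depth = np.arange(50)
terma = np.exp(-0.05 * depth)               # toy exponential primary fluence
kernel = np.exp(-0.5 * np.arange(50))       # isotropic toy scatter kernel
water = np.ones(50)
lung = np.ones(50)
lung[20:35] = 0.25                          # low-density slab (lung-like)

d_water = dose_superposition(terma, kernel, water)
d_lung = dose_superposition(terma, kernel, lung)
```

Inside the low-density slab the radiological distances shrink, so scatter reaches farther and the local dose pattern changes, which is the heterogeneity effect the record discusses.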

  8. Experiment on a three-beam adaptive array for EHF frequency-hopped signals using a fast algorithm, phase-D

    NASA Astrophysics Data System (ADS)

    Yen, J. L.; Kremer, P.; Amin, N.; Fung, J.

    1989-05-01

    The Department of National Defence (Canada) has been conducting studies into multi-beam adaptive arrays for extremely high frequency (EHF) frequency-hopped signals. A three-beam 43 GHz adaptive antenna and a beam control processor are under development. An interactive software package for the operation of the array, capable of applying different control algorithms, is being written. A maximum signal-to-jammer-plus-noise ratio (SJNR) criterion was found to provide superior performance in preventing degradation of user signals in the presence of nearby jammers. A new fast algorithm using a modified conjugate gradient approach was found to be a very efficient way to implement anti-jamming arrays based on the maximum SJNR criterion. The present study was intended to refine and simplify this algorithm and to implement it on an experimental array for real-time evaluation of anti-jamming performance. A three-beam adaptive array was used. A simulation package was used in the evaluation of multi-beam systems using more than three beams and different user-jammer scenarios. An attempt to further reduce the computation burden through continued analysis of maximum SJNR met with limited success. A method to acquire and track an incoming laser beam is proposed.
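
As a hedged sketch of the maximum-SJNR criterion mentioned above: the optimum weights are proportional to R⁻¹s, where R is the jammer-plus-noise covariance and s is the user steering vector. The array geometry, angles, and jammer power below are invented for illustration, and the paper's fast conjugate-gradient solver is replaced by a direct linear solve:

```python
import numpy as np

def steering(n_elems, theta, d=0.5):
    """Uniform linear array steering vector (element spacing d in wavelengths)."""
    k = 2 * np.pi * d * np.sin(theta)
    return np.exp(1j * k * np.arange(n_elems))

n = 8
s_user = steering(n, np.deg2rad(0.0))        # user at broadside
s_jam = steering(n, np.deg2rad(30.0))        # jammer off-axis
jnr = 1000.0                                  # strong jammer (toy value)

# Jammer-plus-noise covariance: rank-one jammer plus unit noise floor.
R_jn = jnr * np.outer(s_jam, s_jam.conj()) + np.eye(n)

# Max-SJNR weights (up to an irrelevant scale factor): w = R_jn^{-1} s_user.
w = np.linalg.solve(R_jn, s_user)

gain_user = abs(w.conj() @ s_user)            # response toward the user
gain_jam = abs(w.conj() @ s_jam)              # response toward the jammer
```

The weights place a deep null on the jammer while preserving gain toward the user, which is the behavior the record attributes to the maximum-SJNR criterion.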

  9. Direct Simulation of Friction Forces for Heavy Ions Interacting with a Warm Magnetized Electron Distribution

    NASA Astrophysics Data System (ADS)

    Bruhwiler, D. L.; Busby, R.; Fedotov, A. V.; Ben-Zvi, I.; Cary, J. R.; Stoltz, P.; Burov, A.; Litvinenko, V. N.; Messmer, P.; Abell, D.; Nieter, C.

    2005-06-01

    A proposed luminosity upgrade to RHIC includes a novel electron cooling section, which would use ~55 MeV electrons to cool fully-ionized 100 GeV/nucleon gold ions. High-current bunched electron beams are required for the RHIC cooler, resulting in very high transverse temperatures and relatively low values for the magnetized cooling logarithm. The accuracy of analytical formulae in this regime requires careful examination. Simulations of the friction coefficient, using the VORPAL code, for single gold ions passing once through the interaction region, are compared with theoretical calculations. Charged particles are advanced using a fourth-order Hermite predictor-corrector algorithm. The fields in the beam frame are obtained from direct calculation of Coulomb's law, which is more efficient than multipole-type algorithms for less than ~10^6 particles. Because the interaction time is so short, it is necessary to suppress the diffusive aspect of the ion dynamics through the careful use of positrons in the simulations.
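
The direct Coulomb-sum approach mentioned above (in contrast to multipole-type methods) can be sketched as a pairwise force accumulation over the electron ensemble; the charge numbers, softening length, and Gaussian electron cloud below are illustrative assumptions, not VORPAL's actual model:

```python
import numpy as np

def coulomb_force(r_ion, r_electrons, q_ion=79.0, q_e=-1.0, soft=0.1):
    """Direct pairwise Coulomb sum of the force on one ion (toy units).

    The softening length avoids the singularity of close encounters; it is
    a numerical convenience of this sketch, not part of the cited method.
    """
    d = r_ion - r_electrons                   # displacements, shape (N, 3)
    r2 = (d * d).sum(axis=1) + soft ** 2      # softened squared distances
    return (q_ion * q_e * d / r2[:, None] ** 1.5).sum(axis=0)

rng = np.random.default_rng(0)
electrons = rng.normal(size=(1000, 3))        # warm electron cloud (toy)
ion = np.array([0.5, 0.0, 0.0])               # ion displaced from cloud center

f = coulomb_force(ion, electrons)             # net force on the ion
```

Averaging such forces over many electron-distribution realizations gives the friction coefficient; the direct O(N) sum per ion is what the record notes is efficient below roughly a million particles.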

  10. A reconstruction method for cone-beam differential x-ray phase-contrast computed tomography.

    PubMed

    Fu, Jian; Velroyen, Astrid; Tan, Renbo; Zhang, Junwei; Chen, Liyuan; Tapfer, Arne; Bech, Martin; Pfeiffer, Franz

    2012-09-10

    Most existing differential phase-contrast computed tomography (DPC-CT) approaches are based on three kinds of scanning geometries: parallel-beam, fan-beam and cone-beam. Due to its potential for compact imaging systems with magnified spatial resolution, cone-beam DPC-CT has attracted significant interest. In this paper, we report a reconstruction method based on a back-projection filtration (BPF) algorithm for cone-beam DPC-CT. Due to the differential nature of phase-contrast projections, the algorithm refrains from differentiating the projection data prior to back-projection, unlike BPF algorithms commonly used for absorption-based CT data. This work comprises a numerical study of the algorithm and its experimental verification using a dataset measured with a three-grating interferometer and a micro-focus x-ray tube source. Moreover, the numerical simulation and experimental results demonstrate that the proposed method can deal with several classes of truncated cone-beam datasets. We believe that this feature is of particular interest for future medical cone-beam phase-contrast CT imaging applications.

  11. Dynamic model updating based on strain mode shape and natural frequency using hybrid pattern search technique

    NASA Astrophysics Data System (ADS)

    Guo, Ning; Yang, Zhichun; Wang, Le; Ouyang, Yan; Zhang, Xinping

    2018-05-01

    Aiming to provide a precise dynamic structural finite element (FE) model for dynamic strength evaluation in addition to dynamic analysis, a dynamic FE model updating method is presented to correct the uncertain parameters of the FE model of a structure using strain mode shapes and natural frequencies. The strain mode shape, which is sensitive to local changes in the structure, is used instead of the displacement mode shape to enhance model updating. The coordinate strain modal assurance criterion is developed to evaluate the correlation level at each coordinate between the experimental and the analytical strain mode shapes. Moreover, the natural frequencies, which provide global information about the structure, are used to guarantee the accuracy of the modal properties of the global model. The weighted summation of the natural frequency residual and the coordinate strain modal assurance criterion residual is then used as the objective function in the proposed dynamic FE model updating procedure. A hybrid genetic/pattern-search optimization algorithm is adopted to perform the updating. A numerical simulation and a model updating experiment for a clamped-clamped beam are performed to validate the feasibility and effectiveness of the present method. The results show that the proposed method can update the uncertain parameters with good robustness, and the updated dynamic FE model of the beam structure, which correctly predicts both the natural frequencies and the local dynamic strains, is reliable for subsequent dynamic analysis and dynamic strength evaluation.
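
A minimal sketch of the weighted objective described above, assuming the ordinary modal-assurance-criterion (MAC) form; the synthetic beam mode shapes, frequencies, and unit weights are illustrative stand-ins for the experimental/analytical pairs used in the paper:

```python
import numpy as np

def mac(phi_a, phi_e):
    """Modal assurance criterion between two real mode-shape vectors."""
    return (phi_a @ phi_e) ** 2 / ((phi_a @ phi_a) * (phi_e @ phi_e))

def objective(f_a, f_e, phi_a, phi_e, w_f=1.0, w_m=1.0):
    """Weighted sum of frequency residuals and (1 - MAC) shape residuals."""
    freq_res = sum(((fa - fe) / fe) ** 2 for fa, fe in zip(f_a, f_e))
    shape_res = sum(1.0 - mac(pa, pe) for pa, pe in zip(phi_a, phi_e))
    return w_f * freq_res + w_m * shape_res

x = np.linspace(0.0, 1.0, 20)
phi_exp = [np.sin(np.pi * x), np.sin(2 * np.pi * x)]          # "measured" shapes
phi_ana = [np.sin(np.pi * x), np.sin(2 * np.pi * x) + 0.05]   # slightly off model
f_exp, f_ana = [10.0, 27.0], [10.2, 26.5]                     # Hz, toy values

j = objective(f_ana, f_exp, phi_ana, phi_exp)   # value to be minimized by the
                                                # hybrid GA/pattern-search loop
```

A model updating run would repeatedly recompute the analytical modes for candidate FE parameters and feed this scalar to the optimizer; the objective vanishes only when both frequencies and shapes match.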

  12. Characterisation of mega-voltage electron pencil beam dose distributions: viability of a measurement-based approach.

    PubMed

    Barnes, M P; Ebert, M A

    2008-03-01

    The concept of electron pencil-beam dose distributions is central to the pencil-beam algorithms used in electron beam radiotherapy treatment planning. The Hogstrom algorithm, a common algorithm for electron treatment planning, models large electron field dose distributions by the superposition of a series of pencil-beam dose distributions. This means that the accurate characterisation of an electron pencil beam is essential for the accuracy of the dose algorithm. The aim of this study was to evaluate a measurement-based approach for obtaining electron pencil-beam dose distributions. The primary incentive for the study was the accurate calculation of dose distributions for narrow fields, as traditional electron algorithms are generally inaccurate for such geometries. Kodak X-Omat radiographic film was used in a solid water phantom to measure the dose distribution of circular 12 MeV beams from a Varian 21EX linear accelerator. Measurements were made for beams of diameter 1.5, 2, 4, 8, 16 and 32 mm. A blocked-field technique was used to subtract photon contamination in the beam. The "error function" derived from Fermi-Eyges multiple Coulomb scattering (MCS) theory for corresponding square fields was used to fit the resulting dose distributions so that extrapolation down to a pencil-beam distribution could be made. The Monte Carlo codes BEAM and EGSnrc were used to simulate the experimental arrangement. The 8 mm beam dose distribution was also measured with TLD-100 microcubes. Agreement among film, TLD and Monte Carlo simulation results was found to be consistent with the spatial resolution used. The study has shown that it is possible to extrapolate narrow electron beam dose distributions down to a pencil-beam dose distribution using the error function. However, due to experimental uncertainties and measurement difficulties, Monte Carlo is recommended as the method of choice for characterising electron pencil-beam dose distributions.
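
The "error function" form referred to above can be sketched directly: for Gaussian pencil beams of lateral spread sigma, a square field of half-width a has an off-axis profile given by a sum of error functions. The values of a and sigma below are toy numbers, not fitted beam data:

```python
import math

def square_field_profile(x, a, sigma):
    """Relative off-axis dose for a square field built from Gaussian pencils.

    Integrating a Gaussian pencil beam of spread sigma across a field of
    half-width a gives the classic error-function edge profile.
    """
    s = sigma * math.sqrt(2.0)
    return 0.5 * (math.erf((a - x) / s) + math.erf((a + x) / s))

a, sigma = 4.0, 1.0                     # mm half-width, mm spread (toy values)

center = square_field_profile(0.0, a, sigma)    # ~1 on the central axis
edge = square_field_profile(a, a, sigma)        # ~0.5 at the field edge
tail = square_field_profile(3 * a, a, sigma)    # ~0 well outside the field
```

Fitting measured profiles to this form and shrinking a toward zero recovers the underlying Gaussian pencil beam, which is the extrapolation the study performs.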

  13. Photoinjector optimization using a derivative-free, model-based trust-region algorithm for the Argonne Wakefield Accelerator

    NASA Astrophysics Data System (ADS)

    Neveu, N.; Larson, J.; Power, J. G.; Spentzouris, L.

    2017-07-01

    Model-based, derivative-free, trust-region algorithms are increasingly popular for optimizing computationally expensive numerical simulations. A strength of such methods is their efficient use of function evaluations. In this paper, we use one such algorithm to optimize the beam dynamics in two cases of interest at the Argonne Wakefield Accelerator (AWA) facility. First, we minimize the emittance of a 1 nC electron bunch produced by the AWA rf photocathode gun by adjusting three parameters: rf gun phase, solenoid strength, and laser radius. The algorithm converges to a set of parameters that yield an emittance of 1.08 μm. Second, we expand the number of optimization parameters to model the complete AWA rf photoinjector (the gun and six accelerating cavities) at 40 nC. The optimization algorithm is used in a Pareto study that compares the trade-off between emittance and bunch length for the AWA 70 MeV photoinjector.
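
A minimal one-dimensional sketch of a model-based, derivative-free trust-region step: fit a local quadratic from function samples, minimize it inside the trust region, and adapt the region size from the actual-versus-predicted decrease. The quadratic "emittance" objective is a hypothetical stand-in for an expensive beam-dynamics simulation:

```python
def quad_fit(x0, d, f):
    """Fit f(x) ~ f0 + g*(x-x0) + 0.5*h*(x-x0)^2 from samples at x0, x0 +/- d."""
    fm, f0, fp = f(x0 - d), f(x0), f(x0 + d)
    g = (fp - fm) / (2 * d)                 # finite-difference gradient
    h = (fp - 2 * f0 + fm) / d ** 2         # finite-difference curvature
    return f0, g, h

def trust_region_minimize(f, x0, delta=1.0, iters=30):
    x = x0
    for _ in range(iters):
        f0, g, h = quad_fit(x, delta, f)
        # minimizer of the local model, clipped to the trust region
        step = -g / h if h > 0 else (-delta if g > 0 else delta)
        step = max(-delta, min(delta, step))
        pred = g * step + 0.5 * h * step ** 2       # predicted decrease (< 0)
        if pred >= 0:
            delta *= 0.5                            # model offers no decrease
            continue
        rho = (f(x + step) - f0) / pred             # actual vs predicted
        if rho > 0.1:
            x += step                               # accept the step
        delta = delta * 2 if rho > 0.75 else delta * 0.5
    return x

emittance = lambda phase: (phase - 2.5) ** 2 + 1.08   # toy surrogate objective
x_opt = trust_region_minimize(emittance, x0=0.0)
```

Because each iteration reuses a handful of samples to build the model, the evaluation count stays low, which is the property the record highlights for expensive simulations.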

  14. Modeling of beam customization devices in the pencil-beam splitting algorithm for heavy charged particle radiotherapy.

    PubMed

    Kanematsu, Nobuyuki

    2011-03-07

    A broad-beam-delivery system for radiotherapy with protons or ions often employs multiple collimators and a range-compensating filter, which offer complex and potentially useful beam customization. It is however difficult for conventional pencil-beam algorithms to deal with fine structures of these devices due to beam-size growth during transport. This study aims to avoid the difficulty with a novel computational model. The pencil beams are initially defined at the range-compensating filter with angular-acceptance correction for upstream collimation followed by stopping and scattering. They are individually transported with possible splitting near the aperture edge of a downstream collimator to form a sharp field edge. The dose distribution for a carbon-ion beam was calculated and compared with existing experimental data. The penumbra sizes of various collimator edges agreed between them to a submillimeter level. This beam-customization model will be used in the greater framework of the pencil-beam splitting algorithm for accurate and efficient patient dose calculation.

  15. A simple algorithm for beam profile diagnostics using a thermographic camera

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Katagiri, Ken; Hojo, Satoru; Honma, Toshihiro

    2014-03-15

    A new algorithm for digital image processing apparatuses is developed to evaluate profiles of high-intensity DC beams from temperature images of irradiated thin foils. Numerical analyses are performed to examine the reliability of the algorithm. To simulate the temperature images acquired by a thermographic camera, temperature distributions are numerically calculated for 20 MeV proton beams with different parameters. Noise added to the temperature images by the camera sensor is also simulated to account for its effect. Using the algorithm, beam profiles are evaluated from the simulated temperature images and compared with exact solutions. We find that niobium is an appropriate material for the thin foil used in the diagnostic system. We also confirm that the algorithm is adaptable over a wide beam current range of 0.11–214 μA, even when employing a general-purpose thermographic camera with rather high noise (ΔT_NETD ≃ 0.3 K; NETD: noise equivalent temperature difference).

  16. UCLA Final Technical Report for the "Community Petascale Project for Accelerator Science and Simulation”.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mori, Warren

    The UCLA Plasma Simulation Group is a major partner of the “Community Petascale Project for Accelerator Science and Simulation”. This is the final technical report. We include an overall summary, a list of publications, progress for the most recent year, and individual progress reports for each year. We have made tremendous progress during the three years. SciDAC funds have contributed to the development of a large number of skeleton codes that illustrate how to write PIC codes with a hierarchy of parallelism. These codes cover 2D and 3D as well as electrostatic solvers (which are used in beam dynamics codes and quasi-static codes) and electromagnetic solvers (which are used in plasma-based accelerator codes). We also used these ideas to develop a GPU-enabled version of OSIRIS. SciDAC funds also contributed to the development of strategies to eliminate the numerical Cerenkov instability (NCI), which is an issue when carrying out laser wakefield accelerator (LWFA) simulations in a boosted frame and when quantifying the emittance and energy spread of self-injected electron beams. This work included the development of a new code called UPIC-EMMA, an FFT-based electromagnetic PIC code, and of new hybrid algorithms in OSIRIS. A new hybrid (PIC in r-z and gridless in φ) algorithm was implemented in OSIRIS. In this algorithm the fields and current are expanded into azimuthal harmonics and the complex amplitude for each harmonic is calculated separately. The contributions from each harmonic are summed and then used to push the particles. This algorithm permits modeling plasma-based acceleration with some 3D effects but with the computational load of a 2D r-z PIC code. We developed a rigorously charge-conserving current deposit for this algorithm. Very recently, we made progress in combining the speed-up from the quasi-3D algorithm with that from the Lorentz boosted frame. 
SciDAC funds also contributed to the improvement and speed-up of the quasi-static PIC code QuickPIC. We have also used our suite of PIC codes to make scientific discoveries. Highlights include supporting FACET experiments, which achieved the milestones of showing high beam loading and energy transfer efficiency from a drive electron beam to a witness electron beam, and the discovery of a self-loading regime for high-gradient acceleration of a positron beam. Both of these experimental milestones were published in Nature together with supporting QuickPIC simulation results. Simulation results from QuickPIC were used on the cover of Nature in one case. We are also making progress on using highly resolved QuickPIC simulations to show that ion motion may not lead to catastrophic emittance growth for tightly focused electron bunches loaded into nonlinear wakefields. This could mean that fully self-consistent beam loading scenarios are possible. This work remains in progress. OSIRIS simulations were used to discover how 200 MeV electron rings are formed in LWFA experiments, how to generate electrons in a series of bunches on the nanometer scale, and how to transport electron beams from (into) plasma sections into (from) conventional beam optic sections.

  17. Experiment on a three-beam adaptive array for EHF frequency-hopped signals using a fast algorithm, phase E

    NASA Astrophysics Data System (ADS)

    Yen, J. L.; Kremer, P.; Fung, J.

    1990-05-01

    The Department of National Defence (Canada) has been conducting studies into multi-beam adaptive arrays for extremely high frequency (EHF) frequency-hopped signals. A three-beam 43 GHz adaptive antenna and a beam control processor are under development. An interactive software package for the operation of the array, capable of applying different control algorithms, is being written. A maximum signal-to-jammer-plus-noise ratio (SJNR) criterion has been found to provide superior performance in preventing degradation of user signals in the presence of nearby jammers. A new fast algorithm using a modified conjugate gradient approach has been found to be a very efficient way to implement anti-jamming arrays based on the maximum SJNR criterion. The present study was intended to refine and simplify this algorithm and to implement it on an experimental array for real-time evaluation of anti-jamming performance. A three-beam adaptive array was used. A simulation package was used in the evaluation of multi-beam systems using more than three beams and different user-jammer scenarios. An attempt to further reduce the computation burden through further analysis of maximum SJNR met with limited success. The investigation of a new angle detector for spatial tracking in heterodyne laser space communications was completed.

  18. Variable beam dose rate and DMLC IMRT to moving body anatomy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Papiez, Lech; Abolfath, Ramin M.

    2008-11-15

    A derivation of formulas relating leaf speeds and beam dose rates for delivering planned intensity profiles to static and moving targets in dynamic multileaf collimator (DMLC) intensity modulated radiation therapy (IMRT) is presented. The analysis of the equations determining algorithms for DMLC IMRT delivery under a variable beam dose rate reveals a multitude of possible delivery strategies for a given intensity map and for any given target motion pattern. From among all equivalent delivery strategies for DMLC IMRT treatments, specific subclasses of strategies can be selected that provide deliveries particularly suitable for clinical applications, provided existing delivery devices are used. Special attention is devoted to the subclass of beam-dose-rate-variable DMLC delivery strategies for moving body anatomy that generalize existing techniques of Varian DMLC irradiation methodology for static body anatomy. A few examples of deliveries from this subclass of DMLC IMRT irradiations are investigated to illustrate the principle and show the practical benefits of the proposed techniques.
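
For the simplest static-target, constant-dose-rate case, the leaf-speed relation underlying such delivery strategies can be sketched with a unidirectional leaf-pair sweep (the classic trailing/leading-leaf construction); the profile and machine parameters below are illustrative, and the paper's variable-dose-rate, moving-anatomy generalization is not attempted here:

```python
def leaf_arrival_times(intensity, dx, v_max, dose_rate):
    """Arrival times of the leading and trailing leaves at each position.

    At each position x the trailing leaf arrives intensity[x]/dose_rate
    later than the leading leaf, so the gap stays open just long enough
    to deliver the planned fluence; both leaves obey the speed limit v_max.
    """
    t_lead, t_trail = [0.0], [intensity[0] / dose_rate]
    for i in range(1, len(intensity)):
        dt_min = dx / v_max                       # fastest allowed transit
        # leading leaf moves as fast as allowed, but no earlier than needed
        # for the trailing leaf to also respect the speed limit
        t_lead.append(max(t_lead[-1] + dt_min,
                          t_trail[-1] + dt_min - intensity[i] / dose_rate))
        t_trail.append(t_lead[-1] + intensity[i] / dose_rate)
    return t_lead, t_trail

profile = [1.0, 3.0, 2.0, 4.0, 1.5]               # planned MU per position
t_lead, t_trail = leaf_arrival_times(profile, dx=0.5, v_max=2.5, dose_rate=4.0)

# delivered fluence at each position is dose_rate * (t_trail - t_lead)
delivered = [4.0 * (tt - tl) for tl, tt in zip(t_lead, t_trail)]
```

By construction the delivered fluence reproduces the planned profile exactly; varying the dose rate adds a further degree of freedom, which is the multitude of equivalent strategies the record analyzes.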

  19. Short-Scan Fan-Beam Algorithms for CT

    NASA Astrophysics Data System (ADS)

    Naparstek, Abraham

    1980-06-01

    Several short-scan reconstruction algorithms of the convolution type for fan-beam projections are presented and discussed. Their derivation from new, exact integral representation formulas is outlined, and the performance of some of these algorithms is demonstrated with the aid of simulation results.

  20. Optimization methodology for the global 10 Hz orbit feedback in RHIC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Chuyu; Hulsart, R.; Mernick, K.

    To combat beam oscillations induced by triplet vibrations at the Relativistic Heavy Ion Collider (RHIC), a global orbit feedback system was developed and applied at injection and top energy in 2011, and during beam acceleration in 2012. Singular Value Decomposition (SVD) was employed to determine the strengths and currents of the applied corrections. The feedback algorithm was optimized for different magnetic configurations (lattices) at fixed beam energies and during beam acceleration. While the orbit feedback performed well since its inception, corrector current transients and feedback-induced beam oscillations were observed during the polarized proton program in 2015. In this paper, we present the feedback algorithm, the optimization of the algorithm for various lattices and the solution adopted to mitigate the observed current transients during beam acceleration.
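
The SVD correction step described above can be sketched as a truncated pseudo-inverse applied to an orbit response matrix; the matrix and orbit below are synthetic random stand-ins, not RHIC optics:

```python
import numpy as np

def svd_correction(R, orbit, n_sv=None):
    """Least-squares corrector strengths minimizing |R @ theta + orbit|.

    Truncating small singular values (n_sv) is the usual way to keep the
    correction robust against noise and near-degenerate corrector patterns.
    """
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    if n_sv is not None:
        U, s, Vt = U[:, :n_sv], s[:n_sv], Vt[:n_sv]
    return -Vt.T @ ((U.T @ orbit) / s)

rng = np.random.default_rng(1)
R = rng.normal(size=(24, 8))                  # 24 BPMs x 8 correctors (toy)
theta_true = rng.normal(size=8)               # kicks that produced the orbit
orbit = R @ theta_true                        # measured orbit distortion

theta = svd_correction(R, orbit)              # correction to apply
residual = np.linalg.norm(R @ theta + orbit)  # leftover orbit after correction
```

With a full-rank response matrix and no truncation the correction exactly cancels the kicks; in operation the truncation level becomes one of the tuning knobs optimized per lattice.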

  1. Optimization methodology for the global 10 Hz orbit feedback in RHIC

    DOE PAGES

    Liu, Chuyu; Hulsart, R.; Mernick, K.; ...

    2018-05-08

    To combat beam oscillations induced by triplet vibrations at the Relativistic Heavy Ion Collider (RHIC), a global orbit feedback system was developed and applied at injection and top energy in 2011, and during beam acceleration in 2012. Singular Value Decomposition (SVD) was employed to determine the strengths and currents of the applied corrections. The feedback algorithm was optimized for different magnetic configurations (lattices) at fixed beam energies and during beam acceleration. While the orbit feedback performed well since its inception, corrector current transients and feedback-induced beam oscillations were observed during the polarized proton program in 2015. In this paper, we present the feedback algorithm, the optimization of the algorithm for various lattices and the solution adopted to mitigate the observed current transients during beam acceleration.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kang, S; Suh, T; Chung, J

    Purpose: The purpose of this study is to evaluate the dosimetric and radiobiological impact of the Acuros XB (AXB) and Anisotropic Analytic Algorithm (AAA) dose calculation algorithms on prostate stereotactic body radiation therapy (SBRT) plans with both conventional flattened (FF) and flattening-filter-free (FFF) modes. Methods: For thirteen patients with prostate cancer, SBRT planning was performed using a 10-MV photon beam with FF and FFF modes. The total dose prescribed to the PTV was 42.7 Gy in 7 fractions. All plans were initially calculated using the AAA algorithm in the Eclipse treatment planning system (11.0.34), and then were re-calculated using AXB with the same MUs and MLC files. The four types of plans for the different algorithms and beam modes were compared in terms of homogeneity and conformity. To evaluate the radiobiological impact, tumor control probability (TCP) and normal tissue complication probability (NTCP) calculations were performed. Results: For the PTV, both calculation algorithms and beam modes led to comparable homogeneity and conformity. However, the averaged TCP values in AXB plans were always lower than in AAA plans, with an average difference of 5.3% and 6.1% for the 10-MV FFF and FF beams, respectively. In addition, the averaged NTCP values for organs at risk (OARs) were comparable. Conclusion: This study showed that prostate SBRT plans gave comparable dosimetric results with the different dose calculation algorithms as well as delivery beam modes. For the biological results, even though NTCP values for both calculation algorithms and beam modes were similar, AXB plans produced slightly lower TCP compared to the AAA plans.
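
The abstract does not state which TCP model was used; as a hedged illustration, a common linear-quadratic Poisson form can be sketched (the alpha, beta, and clonogen-number parameters below are purely illustrative, not the study's values):

```python
import math

def tcp_lq_poisson(dose_per_fx, n_fx, alpha=0.15, beta=0.05, n_clonogens=1e7):
    """Poisson TCP with linear-quadratic cell survival per fraction.

    TCP = exp(-N * SF_total), where SF per fraction follows the LQ model
    exp(-(alpha*d + beta*d^2)). Parameter values here are toy assumptions.
    """
    sf_per_fx = math.exp(-(alpha * dose_per_fx + beta * dose_per_fx ** 2))
    return math.exp(-n_clonogens * sf_per_fx ** n_fx)

# 42.7 Gy in 7 fractions, as prescribed in the study
tcp = tcp_lq_poisson(42.7 / 7, 7)
```

In such a model, small systematic dose differences between calculation algorithms translate directly into TCP differences, which is why the AXB/AAA comparison shows a biological effect even with similar conformity indices.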

  3. Noise elimination algorithm for modal analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bao, X. X., E-mail: baoxingxian@upc.edu.cn; Li, C. L.; Xiong, C. B.

    2015-07-27

    Modal analysis is an ongoing interdisciplinary research topic. Modal parameter estimation is applied to determine the dynamic characteristics of structures under vibration excitation. Modal analysis is more challenging when the measured vibration response signals are contaminated with noise. This study develops a mathematical algorithm of structured low rank approximation combined with the complex exponential method to estimate the modal parameters. Physical experiments using a steel cantilever beam with ten accelerometers mounted, excited by an impulse load, demonstrate that this method can significantly eliminate noise from measured signals and accurately identify the modal frequencies and damping ratios. This study provides a fundamental mechanism of noise elimination using structured low rank approximation in physical fields.
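
One pass of the structured-low-rank idea can be sketched as Cadzow-style Hankel denoising: embed the free-decay signal in a Hankel matrix, truncate its SVD at the model order, and average anti-diagonals back into a signal. The damped-cosine test signal is synthetic, and the paper's complex-exponential parameter-extraction step is omitted:

```python
import numpy as np

def hankel_denoise(y, rank, n_rows=None):
    """One structured-low-rank pass: Hankel embed, truncate SVD, re-average."""
    n = len(y)
    m = n_rows or n // 2
    H = np.array([y[i:i + n - m + 1] for i in range(m)])   # Hankel embedding
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    Hr = (U[:, :rank] * s[:rank]) @ Vt[:rank]              # rank truncation
    # average each anti-diagonal to restore the Hankel structure
    out, cnt = np.zeros(n), np.zeros(n)
    for i in range(Hr.shape[0]):
        for j in range(Hr.shape[1]):
            out[i + j] += Hr[i, j]
            cnt[i + j] += 1
    return out / cnt

t = np.linspace(0.0, 1.0, 200)
clean = np.exp(-0.5 * t) * np.cos(2 * np.pi * 12 * t)      # one damped mode
rng = np.random.default_rng(2)
noisy = clean + 0.1 * rng.normal(size=t.size)

denoised = hankel_denoise(noisy, rank=2)   # 2 singular values per real mode
err_before = np.linalg.norm(noisy - clean)
err_after = np.linalg.norm(denoised - clean)
```

A noise-free damped exponential gives an exactly low-rank Hankel matrix, so truncation removes most of the noise while preserving the frequency and damping content that the complex exponential method then extracts.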

  4. Dynamic X-ray diffraction sampling for protein crystal positioning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scarborough, Nicole M.; Godaliyadda, G. M. Dilshan P.; Ye, Dong Hye

    A sparse supervised learning approach for dynamic sampling (SLADS) is described for dose reduction in diffraction-based protein crystal positioning. Crystal centering is typically a prerequisite for macromolecular diffraction at synchrotron facilities, with X-ray diffraction mapping growing in popularity as a mechanism for localization. In X-ray raster scanning, diffraction is used to identify the crystal positions based on the detection of Bragg-like peaks in the scattering patterns; however, this additional X-ray exposure may result in detectable damage to the crystal prior to data collection. Dynamic sampling, in which preceding measurements inform the next most information-rich location to probe for image reconstruction, significantly reduced the X-ray dose experienced by protein crystals during positioning by diffraction raster scanning. The SLADS algorithm implemented herein is designed for single-pixel measurements and can select a new location to measure. In each step of SLADS, the algorithm selects the pixel which, when measured, maximizes the expected reduction in distortion given previous measurements. Ground-truth diffraction data were obtained for a 5 µm-diameter beam, and SLADS reconstructed the image by sampling 31% of the total volume and only 9% of the interior of the crystal, greatly reducing the X-ray dosage on the crystal. Furthermore, by using in situ two-photon-excited fluorescence microscopy measurements as a surrogate for diffraction imaging with a 1 µm-diameter beam, the SLADS algorithm enabled image reconstruction from a 7% sampling of the total volume and 12% sampling of the interior of the crystal. When implemented into the beamline at Argonne National Laboratory, without ground-truth images, an acceptable reconstruction was obtained with 3% of the image sampled and approximately 5% of the crystal. 
The incorporation of SLADS into X-ray diffraction acquisitions has the potential to significantly minimize the impact of X-ray exposure on the crystal by limiting the dose and area exposed for image reconstruction and crystal positioning, using data collection hardware present in most macromolecular crystallography end-stations.
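
A heavily simplified, hypothetical sketch of the dynamic-sampling loop: real SLADS learns a regression from measurement features to the expected reduction in distortion (ERD), whereas the stand-in heuristic below scores unmeasured pixels by the local gradient of a nearest-neighbour reconstruction weighted by distance to the nearest measurement:

```python
import numpy as np

def nearest_neighbor_recon(measured, values, shape):
    """Fill every pixel with the value of its nearest measured pixel."""
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    pts = np.array(measured)
    d2 = (ys[..., None] - pts[:, 0]) ** 2 + (xs[..., None] - pts[:, 1]) ** 2
    return np.array(values)[d2.argmin(axis=-1)], d2.min(axis=-1)

def select_next(measured, values, shape):
    """Pick the unmeasured pixel with the highest heuristic ERD score."""
    recon, d2 = nearest_neighbor_recon(measured, values, shape)
    gy, gx = np.gradient(recon)
    erd = np.hypot(gy, gx) * np.sqrt(d2)     # edges far from measurements
    erd[tuple(np.array(measured).T)] = -1.0  # never re-measure a pixel
    return np.unravel_index(erd.argmax(), shape)

truth = np.zeros((32, 32))
truth[8:24, 8:24] = 1.0                      # toy "crystal" region
measured = [(0, 0), (31, 31), (16, 16), (0, 31), (31, 0)]  # seed scan
values = [truth[p] for p in measured]

for _ in range(40):                          # 40 adaptive measurements
    p = select_next(measured, values, truth.shape)
    measured.append(p)
    values.append(truth[p])

recon, _ = nearest_neighbor_recon(measured, values, truth.shape)
frac_sampled = len(measured) / truth.size    # fraction of pixels exposed
err = np.abs(recon - truth).mean()
```

Each iteration spends its next "exposure" where the expected information gain is highest, so the boundary of the object is resolved with a small fraction of the pixels, which is the dose-reduction mechanism the record describes.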

  5. Dynamic X-ray diffraction sampling for protein crystal positioning

    DOE PAGES

    Scarborough, Nicole M.; Godaliyadda, G. M. Dilshan P.; Ye, Dong Hye; ...

    2017-01-01

    A sparse supervised learning approach for dynamic sampling (SLADS) is described for dose reduction in diffraction-based protein crystal positioning. Crystal centering is typically a prerequisite for macromolecular diffraction at synchrotron facilities, with X-ray diffraction mapping growing in popularity as a mechanism for localization. In X-ray raster scanning, diffraction is used to identify the crystal positions based on the detection of Bragg-like peaks in the scattering patterns; however, this additional X-ray exposure may result in detectable damage to the crystal prior to data collection. Dynamic sampling, in which preceding measurements inform the next most information-rich location to probe for image reconstruction, significantly reduced the X-ray dose experienced by protein crystals during positioning by diffraction raster scanning. The SLADS algorithm implemented herein is designed for single-pixel measurements and can select a new location to measure. In each step of SLADS, the algorithm selects the pixel which, when measured, maximizes the expected reduction in distortion given previous measurements. Ground-truth diffraction data were obtained for a 5 µm-diameter beam, and SLADS reconstructed the image by sampling 31% of the total volume and only 9% of the interior of the crystal, greatly reducing the X-ray dosage on the crystal. Furthermore, by using in situ two-photon-excited fluorescence microscopy measurements as a surrogate for diffraction imaging with a 1 µm-diameter beam, the SLADS algorithm enabled image reconstruction from a 7% sampling of the total volume and 12% sampling of the interior of the crystal. When implemented into the beamline at Argonne National Laboratory, without ground-truth images, an acceptable reconstruction was obtained with 3% of the image sampled and approximately 5% of the crystal. 
The incorporation of SLADS into X-ray diffraction acquisitions has the potential to significantly minimize the impact of X-ray exposure on the crystal by limiting the dose and area exposed for image reconstruction and crystal positioning, using data collection hardware present in most macromolecular crystallography end-stations.

  6. Dynamic X-ray diffraction sampling for protein crystal positioning

    PubMed Central

    Scarborough, Nicole M.; Godaliyadda, G. M. Dilshan P.; Ye, Dong Hye; Kissick, David J.; Zhang, Shijie; Newman, Justin A.; Sheedlo, Michael J.; Chowdhury, Azhad U.; Fischetti, Robert F.; Das, Chittaranjan; Buzzard, Gregery T.; Bouman, Charles A.; Simpson, Garth J.

    2017-01-01

    A sparse supervised learning approach for dynamic sampling (SLADS) is described for dose reduction in diffraction-based protein crystal positioning. Crystal centering is typically a prerequisite for macromolecular diffraction at synchrotron facilities, with X-ray diffraction mapping growing in popularity as a mechanism for localization. In X-ray raster scanning, diffraction is used to identify the crystal positions based on the detection of Bragg-like peaks in the scattering patterns; however, this additional X-ray exposure may result in detectable damage to the crystal prior to data collection. Dynamic sampling, in which preceding measurements inform the next most information-rich location to probe for image reconstruction, significantly reduced the X-ray dose experienced by protein crystals during positioning by diffraction raster scanning. The SLADS algorithm implemented herein is designed for single-pixel measurements and can select a new location to measure. In each step of SLADS, the algorithm selects the pixel that, when measured, maximizes the expected reduction in distortion given previous measurements. Ground-truth diffraction data were obtained for a 5 µm-diameter beam, and SLADS reconstructed the image sampling 31% of the total volume and only 9% of the interior of the crystal, greatly reducing the X-ray dosage on the crystal. Using in situ two-photon-excited fluorescence microscopy measurements as a surrogate for diffraction imaging with a 1 µm-diameter beam, the SLADS algorithm enabled image reconstruction from a 7% sampling of the total volume and 12% sampling of the interior of the crystal. When implemented into the beamline at Argonne National Laboratory, without ground-truth images, an acceptable reconstruction was obtained with 3% of the image sampled and approximately 5% of the crystal. 
The incorporation of SLADS into X-ray diffraction acquisitions has the potential to significantly minimize the impact of X-ray exposure on the crystal by limiting the dose and area exposed for image reconstruction and crystal positioning using data collection hardware present in most macromolecular crystallography end-stations. PMID:28009558
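As a rough illustration of the greedy selection loop at the heart of SLADS, here is a minimal Python sketch on a toy image. The disagreement-among-measured-neighbours score is only a stand-in for the trained expected-distortion-reduction estimator of the actual algorithm, and the image, sizes, and budget are all invented:

```python
import random

def nearest_measured(measured, x, y):
    # reconstruct a pixel from its nearest measured neighbour
    _, v = min(measured.items(),
               key=lambda kv: (kv[0][0] - x)**2 + (kv[0][1] - y)**2)
    return v

def expected_gain(measured, x, y, k=3):
    # proxy for the expected reduction in distortion: disagreement among
    # the k nearest measured neighbours (edge regions score highest)
    nearest = sorted(measured.items(),
                     key=lambda kv: (kv[0][0] - x)**2 + (kv[0][1] - y)**2)[:k]
    vals = [v for _, v in nearest]
    mean = sum(vals) / len(vals)
    return sum((v - mean)**2 for v in vals)

def slads_sketch(image, n_init=5, budget=40, seed=0):
    rng = random.Random(seed)
    h, w = len(image), len(image[0])
    pixels = [(x, y) for x in range(h) for y in range(w)]
    measured = {p: image[p[0]][p[1]] for p in rng.sample(pixels, n_init)}
    while len(measured) < budget:
        cand = [p for p in pixels if p not in measured]
        best = max(cand, key=lambda p: expected_gain(measured, *p))
        measured[best] = image[best[0]][best[1]]   # "measure" the chosen pixel
    recon = [[nearest_measured(measured, x, y) for y in range(w)]
             for x in range(h)]
    return measured, recon

# toy "crystal": a bright square on a dark background, 40% sampling budget
img = [[1.0 if 3 <= x <= 6 and 3 <= y <= 6 else 0.0 for y in range(10)]
       for x in range(10)]
measured, recon = slads_sketch(img)
```

On the toy image the loop tends to concentrate samples near the bright/dark boundary, mirroring how SLADS concentrates dose near the crystal edge rather than its interior.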

  8. Time resolving beam position measurement and analysis of beam unstable movement in PSR

    NASA Astrophysics Data System (ADS)

    Aleksandrov, A. V.

    2000-11-01

    Precise measurement of beam centroid movement is very important for understanding the fast transverse instability in the Los Alamos Proton Storage Ring (PSR). The proton bunch in the PSR is long, so different parts of the bunch can have different betatron phases and move differently; a time-resolving position measurement is therefore needed. A wide-band stripline BPM can be adequate if a proper processing algorithm is used. In this work we present the results of an analysis of unstable transverse beam motion using a time-resolving processing algorithm. The suggested algorithm calculates the transverse position of different parts of the beam on each turn; the beam centroid movement on successive turns can then be expanded in a series of plane travelling waves in the beam frame of reference, providing important information on the development of the instability. Some general features of the fast transverse instability, previously unknown, are discovered.
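The expansion of turn-by-turn centroid data into plane travelling waves can be sketched with a naive 2-D discrete Fourier transform. The grid sizes, tune, and wave pattern below are illustrative, not PSR parameters:

```python
import cmath, math

def dft2(data):
    # naive 2-D DFT; fine for the small grids of a sketch
    N, M = len(data), len(data[0])
    return [[sum(data[n][s] * cmath.exp(-2j * math.pi * (k * n / N + q * s / M))
                 for n in range(N) for s in range(M)) / (N * M)
             for q in range(M)] for k in range(N)]

def dominant_wave(positions):
    # positions[n][s]: transverse centroid of bunch slice s on turn n.
    # The strongest coefficient gives the tune (turn axis) and the
    # wavenumber along the bunch (slice axis) of the dominant wave;
    # q near M corresponds to a backward-travelling wave (q - M < 0).
    spec = dft2(positions)
    N, M = len(spec), len(spec[0])
    k, q = max(((k, q) for k in range(1, N // 2) for q in range(M)),
               key=lambda kq: abs(spec[kq[0]][kq[1]]))
    return k / N, q / M, abs(spec[k][q])

# synthetic unstable motion: tune 0.25, one wavelength along the bunch
N, M = 32, 16
pos = [[math.cos(2 * math.pi * (0.25 * n - s / M)) for s in range(M)]
       for n in range(N)]
tune, wavenum, amp = dominant_wave(pos)
```

Real BPM data would first be sliced into longitudinal bins per turn; the 2-D spectrum then separates head-tail wave patterns from rigid-bunch betatron motion.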

  9. Successive approximation algorithm for beam-position-monitor-based LHC collimator alignment

    NASA Astrophysics Data System (ADS)

    Valentino, Gianluca; Nosych, Andriy A.; Bruce, Roderik; Gasior, Marek; Mirarchi, Daniele; Redaelli, Stefano; Salvachua, Belen; Wollmann, Daniel

    2014-02-01

    Collimators with embedded beam position monitor (BPM) button electrodes will be installed in the Large Hadron Collider (LHC) during the current long shutdown period. For the subsequent operation, BPMs will allow the collimator jaws to be kept centered around the beam orbit. In this manner, better beam cleaning efficiency and machine protection can be provided at unprecedentedly high beam energies and intensities. A collimator alignment algorithm is proposed to center the jaws automatically around the beam. The algorithm is based on successive approximation and takes into account a correction of the nonlinear BPM sensitivity to beam displacement and an asymmetry of the electronic channels processing the BPM electrode signals. A software implementation was tested with a prototype collimator in the Super Proton Synchrotron. This paper presents the results of these tests along with some considerations for eventual operation in the LHC.
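A minimal sketch of the successive-approximation idea, assuming a toy cubic BPM nonlinearity (the real algorithm also corrects for the asymmetry of the electronic channels, which is omitted here):

```python
def bpm_reading(jaw_center, beam_pos):
    # toy nonlinear BPM response to the jaw-centre/beam offset
    u = jaw_center - beam_pos
    return u + 0.1 * u**3

def center_jaws(beam_pos, start=2.0, tol=1e-6, max_iter=50):
    # successive approximation: treat the reading as the residual offset
    # and recentre on it, repeating until the reading vanishes
    # (converges for reasonable starting offsets in this toy model)
    center = start
    for i in range(max_iter):
        m = bpm_reading(center, beam_pos)
        if abs(m) < tol:
            return center, i
        center -= m
    return center, max_iter

center, iters = center_jaws(beam_pos=0.7)
```

Each pass shrinks the residual offset because the nonlinear part of the reading is small near the centre, which is exactly why a successive-approximation scheme suffices instead of a full inversion of the BPM response.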

  10. Performance of a Nanometer Resolution BPM System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walston, S.; Chung, C.; Fitsos, P.

    2007-04-24

    International Linear Collider (ILC) interaction region beam sizes and component position stability requirements will be as small as a few nanometers. It is important to the ILC design effort to demonstrate that these tolerances can be achieved ideally using beam-based stability measurements. It has been estimated that RF cavity beam position monitors (BPMs) could provide position measurement resolutions of less than one nanometer and could form the basis of the desired beam-based stability measurement. We have developed a high resolution RF cavity BPM system. A triplet of these BPMs has been installed in the extraction line of the KEK Accelerator Test Facility (ATF) for testing with its ultra-low emittance beam. The three BPMs are rigidly mounted inside an alignment frame on variable-length struts which allow movement in position and angle. We have developed novel methods for extracting the position and tilt information from the BPM signals including a calibration algorithm which is immune to beam jitter. To date, we have been able to demonstrate a resolution of approximately 20 nm over a dynamic range of +/- 20 microns. We report on the progress of these ongoing tests.

  11. Studies Of Coherent Synchrotron Radiation And Longitudinal Space Charge In The Jefferson Lab FEL Driver

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tennant, Christopher D.; Douglas, David R.; Li, Rui

    2014-12-01

    The Jefferson Laboratory IR FEL Driver provides an ideal test bed for studying a variety of beam dynamical effects. Recent studies focused on characterizing the impact of coherent synchrotron radiation (CSR) with the goal of benchmarking measurements with simulation. Following measurements to characterize the beam, we quantitatively characterized energy extraction via CSR by measuring beam position at a dispersed location as a function of bunch compression. In addition to operating with the beam on the rising part of the linac RF waveform, measurements were also made while accelerating on the falling part. For each, the full compression point was moved along the backleg of the machine and the response of the beam (distribution, extracted energy) measured. Initial results of start-to-end simulations using a 1D CSR algorithm show remarkably good agreement with measurements. A subsequent experiment established lasing with the beam accelerated on the falling side of the RF waveform in conjunction with positive momentum compaction (R56) to compress the bunch. The success of this experiment motivated the design of a modified CEBAF-style arc with control of CSR and microbunching effects.

  12. Three-dimensional simulation of beam propagation and heat transfer in static gas Cs DPALs using wave optics and fluid dynamics models

    NASA Astrophysics Data System (ADS)

    Waichman, Karol; Barmashenko, Boris D.; Rosenwaks, Salman

    2017-10-01

    Analysis of beam propagation, kinetic, and fluid dynamic processes in Cs diode pumped alkali lasers (DPALs), using a wave optics model and a gasdynamic code, is reported. The analysis is based on a three-dimensional, time-dependent computational fluid dynamics (3D CFD) model. The Navier-Stokes equations for momentum, heat and mass transfer are solved by a commercial Ansys FLUENT solver based on the finite volume discretization technique. The CFD code, which solves the gas conservation equations, includes the effects of natural convection and temperature diffusion of the species in the DPAL mixture. The DPAL kinetic processes in the Cs/He/C2H6 gas mixture dealt with in this paper involve the three lowest energy levels of Cs: (1) 62S1/2, (2) 62P1/2 and (3) 62P3/2. The kinetic processes include absorption due to the 1->3 D2 transition, followed by relaxation from the 3 to the 2 fine-structure level, and stimulated emission due to the 2->1 D1 transition. Collisional quenching of levels 2 and 3 and spontaneous emission from these levels are also considered. The gas flow conservation equations are coupled to a fast-Fourier-transform algorithm for transverse mode propagation to obtain a solution of the scalar paraxial propagation equation for the laser beam. The wave propagation equation is solved by the split-step beam propagation method, where the gain and refractive index in the DPAL medium affect the wave amplitude and phase. Using the CFD and beam propagation models, the gas flow pattern and the spatial distributions of the pump and laser intensities in the resonator were calculated for an end-pumped Cs DPAL. The laser power, DPAL medium temperature and laser beam quality were calculated as functions of pump power. The model predictions for laser power were compared with experimental results for the Cs DPAL.
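The split-step propagation scheme can be illustrated in a stripped-down 1-D form; the grid, wavelength, and the uniform gain step below are placeholders for the full DPAL gain and refractive-index model:

```python
import cmath, math

def dft(x, inverse=False):
    # naive DFT, adequate for a small demonstration grid
    N = len(x)
    sgn = 1 if inverse else -1
    out = [sum(x[n] * cmath.exp(sgn * 2j * math.pi * k * n / N)
               for n in range(N)) for k in range(N)]
    return [v / N for v in out] if inverse else out

def split_step(field, dz, steps, wavelength=1e-6, dx=1e-5, gain=0.0):
    # symmetric split step: half a medium (gain) step, a full diffraction
    # step applied in the spatial-frequency domain, another half step
    N = len(field)
    k0 = 2 * math.pi / wavelength
    kx = [2 * math.pi * (k if k < N // 2 else k - N) / (N * dx)
          for k in range(N)]
    diff = [cmath.exp(-1j * kxi**2 * dz / (2 * k0)) for kxi in kx]
    for _ in range(steps):
        field = [a * math.exp(0.5 * gain * dz) for a in field]
        spec = [s * d for s, d in zip(dft(field), diff)]
        field = dft(spec, inverse=True)
        field = [a * math.exp(0.5 * gain * dz) for a in field]
    return field

# free-space propagation of a Gaussian beam; gain=0 keeps it unitary
N, dx = 64, 1e-5
field = [cmath.exp(-((n - N / 2) * dx)**2 / (2 * (8 * dx)**2))
         for n in range(N)]
out = split_step(field, dz=1e-3, steps=5, dx=dx)
power_in = sum(abs(a)**2 for a in field)
power_out = sum(abs(a)**2 for a in out)
```

With gain set to zero the diffraction step is unitary, so the optical power is conserved to machine precision, a useful sanity check for any split-step implementation before the gain and refractive-index physics is added.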

  13. Toward dynamic lumbar punctures guidance based on single element synthetic tracked aperture ultrasound imaging

    NASA Astrophysics Data System (ADS)

    Zhang, Haichong K.; Lin, Melissa; Kim, Younsu; Paredes, Mateo; Kannan, Karun; Patel, Nisu; Moghekar, Abhay; Durr, Nicholas J.; Boctor, Emad M.

    2017-03-01

    Lumbar punctures (LPs) are interventional procedures used to collect cerebrospinal fluid (CSF), a bodily fluid needed to diagnose central nervous system disorders. Most lumbar punctures are performed blindly, without imaging guidance. Because the target window is small, physicians can accurately palpate the appropriate space only about 30% of the time and perform a successful procedure after an average of three attempts. Although various imaging-based guidance systems have been developed to aid in this procedure, these systems complicate it by introducing independent imaging modalities and requiring image-to-needle registration to guide the needle insertion. Here, we propose a simple and direct needle insertion platform utilizing a single ultrasound element within the needle through dynamic sensing and imaging. The needle-shaped ultrasound transducer can not only sense the distance between the tip and a potential obstacle such as bone, but can also visually locate structures by combining transducer location tracking with a back-projection-based tracked synthetic aperture beam-forming algorithm. The concept of the system was first validated through simulation, which revealed its tolerance to realistic errors. Then, an initial prototype of the single-element transducer was built into a 14G needle and mounted on a holster equipped with a rotation tracking encoder. We experimentally evaluated the system using a metal wire phantom mimicking highly reflective bone structures and an actual spine bone phantom, with both controlled motion and freehand scanning. An ultrasound image corresponding to the model phantom structure was reconstructed using the beam-forming algorithm, and the resolution was improved compared to that obtained without beam-forming. These results demonstrate that the proposed system has the potential to be used as an ultrasound imaging system for lumbar puncture procedures.
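The tracked synthetic-aperture back-projection idea can be sketched as delay-and-sum over the recorded element poses. The geometry, sampling rate, and idealized spike echoes below are illustrative assumptions, not the prototype's parameters:

```python
import math

C = 1540.0    # assumed speed of sound in tissue [m/s]
DT = 1e-8     # sample period [s]

def simulate_ascan(elem_x, target, n=2000):
    # one pulse-echo event from a single element: an idealized spike
    # at the round-trip time of flight to a point scatterer
    d = math.hypot(target[0] - elem_x, target[1])
    sig = [0.0] * n
    idx = round(2 * d / C / DT)
    if idx < n:
        sig[idx] = 1.0
    return sig

def backproject(ascans, elem_xs, grid_x, grid_z):
    # tracked synthetic-aperture beamforming: each pixel sums, over all
    # recorded element poses, the sample at its own round-trip delay
    img = []
    for z in grid_z:
        row = []
        for x in grid_x:
            acc = 0.0
            for sig, ex in zip(ascans, elem_xs):
                idx = round(2 * math.hypot(x - ex, z) / C / DT)
                if idx < len(sig):
                    acc += sig[idx]
            row.append(acc)
        img.append(row)
    return img

target = (0.0, 0.01)                            # scatterer 10 mm deep
elem_xs = [(-4 + i) * 1e-3 for i in range(9)]   # 9 tracked poses, 1 mm apart
ascans = [simulate_ascan(ex, target) for ex in elem_xs]
grid_x = [(-4 + i) * 1e-3 for i in range(9)]
grid_z = [0.008 + i * 5e-4 for i in range(9)]
img = backproject(ascans, elem_xs, grid_x, grid_z)
```

Only at the true scatterer position do all nine delayed samples line up, which is the coherence gain that improves resolution over a single stationary element.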

  14. NOTE: A BPF-type algorithm for CT with a curved PI detector

    NASA Astrophysics Data System (ADS)

    Tang, Jie; Zhang, Li; Chen, Zhiqiang; Xing, Yuxiang; Cheng, Jianping

    2006-08-01

    Helical cone-beam CT is used widely nowadays because of its rapid scan speed and efficient utilization of x-ray dose. Recently, an exact reconstruction algorithm for helical cone-beam CT was proposed (Zou and Pan 2004a Phys. Med. Biol. 49 941-59). The algorithm is referred to as a backprojection-filtering (BPF) algorithm. This BPF algorithm for a helical cone-beam CT with a flat-panel detector (FPD-HCBCT) requires minimum data within the Tam-Danielsson window and can naturally address the problem of ROI reconstruction from data truncated in both longitudinal and transversal directions. In practical CT systems, detectors are expensive and always take a very important position in the total cost. Hence, we work on an exact reconstruction algorithm for a CT system with a detector of the smallest size, i.e., a curved PI detector fitting the Tam-Danielsson window. The reconstruction algorithm is derived following the framework of the BPF algorithm. Numerical simulations are done to validate our algorithm in this study.

  16. TH-E-BRE-04: An Online Replanning Algorithm for VMAT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahunbay, E; Li, X; Moreau, M

    2014-06-15

    Purpose: To develop a fast replanning algorithm based on segment aperture morphing (SAM) for online replanning of volumetric modulated arc therapy (VMAT) with flattening filter (FF) and flattening filter free (FFF) beams. Methods: A software tool was developed to interface with a VMAT planning system (Monaco, Elekta), enabling the output of detailed beam/machine parameters of original VMAT plans generated based on planning CTs for FF or FFF beams. A SAM algorithm, previously developed for fixed-beam IMRT, was modified to correct for interfractional variations (e.g., setup error, organ motion and deformation) by morphing apertures based on the geometric relationship between the beam's eye view of the anatomy from the planning CT and that from the daily CT for each control point. The algorithm was tested using daily CTs acquired with an in-room CT during daily IGRT for representative prostate cancer cases, along with their planning CTs. The algorithm allows the MLC leaf travel distance between control points of the VMAT delivery to be restricted, to prevent SAM from increasing leaf travel and therefore treatment delivery time. Results: The VMAT plans adapted to the daily CT by SAM improved the dosimetry relative to the IGRT repositioning plans for both FF and FFF beams. For the adaptive plans, the changes in leaf travel distance between control points were < 1 cm for 80% of the control points with no restriction. When restricted to the original plans' maximum travel distance, the dosimetric effect was minimal. The adaptive plans were delivered successfully with delivery times similar to those of the original plans. The execution of the SAM algorithm took < 10 seconds. Conclusion: The SAM algorithm can quickly generate deliverable online-adaptive VMAT plans based on the anatomy of the day for both FF and FFF beams.
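A much-simplified sketch of aperture morphing, reducing each control point to a single leaf pair shifted rigidly by the daily target displacement (the real SAM derives per-leaf shifts from the beam's eye view of the anatomy); the optional cap mimics the restricted leaf-travel option:

```python
def morph_apertures(control_points, shift_mm, max_travel=None):
    # shift each control point's leaf pair by the daily target
    # displacement; optionally cap leaf travel between control points
    # so adaptation does not slow down the VMAT delivery
    morphed, prev = [], None
    for left, right in control_points:
        new = [left + shift_mm, right + shift_mm]
        if max_travel is not None and prev is not None:
            for j in range(2):
                delta = new[j] - prev[j]
                if abs(delta) > max_travel:
                    new[j] = prev[j] + max_travel * (1 if delta > 0 else -1)
        morphed.append(tuple(new))
        prev = new
    return morphed

# leaf-pair positions (mm) per control point, morphed for a 3 mm shift
plan = [(-10.0, 10.0), (-8.0, 12.0), (-6.0, 14.0)]
adapted = morph_apertures(plan, shift_mm=3.0, max_travel=4.0)
# a large jump between control points is clipped to the travel cap
capped = morph_apertures([(-10.0, 10.0), (0.0, 20.0)], 0.0, max_travel=4.0)
```

A rigid shift leaves the leaf travel between control points unchanged, which is why the abstract reports only a minimal dosimetric effect when travel is restricted to the original plan's maximum.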

  17. Investigation of the optimum location of external markers for patient setup accuracy enhancement at external beam radiotherapy

    PubMed Central

    Torshabi, Ahmad Esmaili; Nankali, Saber

    2016-01-01

    In external beam radiotherapy, one of the most common and reliable methods for patient geometrical setup and/or predicting the tumor location is the use of external markers. In this study, the main challenge is to increase the accuracy of patient setup by investigating the locations of the external markers. Since the location of each external marker may yield a different patient setup accuracy, it is important to assess different external marker locations using appropriate selection algorithms. To do this, two commercially available algorithms, a) canonical correlation analysis (CCA) and b) principal component analysis (PCA), were proposed as input selection algorithms. They work on the basis of the maximum correlation coefficient and the minimum variance between given datasets. The proposed input selection algorithms work in combination with an adaptive neuro-fuzzy inference system (ANFIS) as a correlation model that gives patient positioning information as output, and they accurately provide the input file of the ANFIS correlation model. The required dataset for this study was prepared by means of a NURBS-based 4D XCAT anthropomorphic phantom that can model the shape and structure of complex organs in the human body along with motion information of dynamic organs. Moreover, a database of four real patients undergoing radiation therapy for lung cancer was utilized for validation of the proposed strategy. The final results demonstrate that the input selection algorithms can reasonably select specific external markers from those areas of the thorax region where the root mean square error (RMSE) of the ANFIS model has its minimum values. It is also found that the selected marker locations lie close to those areas where the surface point motion has a large amplitude and a high correlation. PACS number(s): 87.55.km, 87.55.N PMID:27929479
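A minimal sketch of correlation-driven marker selection in the spirit of the CCA criterion; the motion traces and marker names are invented, and a plain Pearson coefficient stands in for the full CCA/PCA machinery:

```python
import math

def pearson(a, b):
    # Pearson correlation coefficient between two motion traces
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma)**2 for x in a))
    sb = math.sqrt(sum((y - mb)**2 for y in b))
    return cov / (sa * sb)

def select_markers(marker_traces, target_trace, k=2):
    # rank candidate surface-marker locations by |correlation| with the
    # internal target motion and keep the k best
    ranked = sorted(marker_traces,
                    key=lambda m: abs(pearson(marker_traces[m], target_trace)),
                    reverse=True)
    return ranked[:k]

t = [i / 10 for i in range(50)]
target = [math.sin(x) for x in t]               # internal target motion
traces = {
    "upper_abdomen": [1.2 * math.sin(x) + 0.01 * ((i % 7) - 3)
                      for i, x in enumerate(t)],        # strong, slightly noisy
    "lower_thorax": [0.8 * math.sin(x + 0.1) for x in t],   # strong, phase lag
    "shoulder": [0.05 * ((i % 5) - 2) for i in range(len(t))],  # unrelated
}
best = select_markers(traces, target, k=2)
```

The selected markers would then feed the ANFIS correlation model, while poorly correlated locations (here, the "shoulder" trace) are excluded from the input file.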

  18. A precise integration method for solving coupled vehicle-track dynamics with nonlinear wheel-rail contact

    NASA Astrophysics Data System (ADS)

    Zhang, J.; Gao, Q.; Tan, S. J.; Zhong, W. X.

    2012-10-01

    A new method is proposed as a solution for the large-scale coupled vehicle-track dynamic model with nonlinear wheel-rail contact. The vehicle is simplified as a multi-rigid-body model, and the track is treated as a three-layer beam model. In the track model, the rail is assumed to be an Euler-Bernoulli beam supported by discrete sleepers. The vehicle model and the track model are coupled using Hertzian nonlinear contact theory, and the contact forces of the vehicle subsystem and the track subsystem are approximated by the Lagrange interpolation polynomial. The response of the large-scale coupled vehicle-track model is calculated using the precise integration method. A more efficient algorithm based on the periodic property of the track is applied to calculate the exponential matrix and certain matrices related to the solution of the track subsystem. Numerical examples demonstrate the computational accuracy and efficiency of the proposed method.
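The precise integration idea, computing a transition matrix exp(H·dt) from a short Taylor series on the tiny subinterval dt/2^N followed by N doublings of the increment Ta = exp(H·tau) − I, can be sketched on a toy 2x2 oscillator (the actual method applies this to the large coupled vehicle-track system, exploiting the track's periodicity):

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def precise_expm(H, dt, N=20):
    # precise integration: truncated Taylor series on the tiny interval
    # tau = dt / 2**N, stored as the increment Ta = exp(H*tau) - I to
    # avoid round-off, then doubled N times via (I+Ta)^2 = I + (2Ta + Ta@Ta)
    n = len(H)
    tau = dt / 2**N
    Ht = [[H[i][j] * tau for j in range(n)] for i in range(n)]
    Ht2 = matmul(Ht, Ht)
    Ht3 = matmul(Ht2, Ht)
    Ht4 = matmul(Ht3, Ht)
    Ta = [[Ht[i][j] + Ht2[i][j] / 2 + Ht3[i][j] / 6 + Ht4[i][j] / 24
           for j in range(n)] for i in range(n)]
    for _ in range(N):
        TaTa = matmul(Ta, Ta)
        Ta = [[2 * Ta[i][j] + TaTa[i][j] for j in range(n)]
              for i in range(n)]
    return [[(1.0 if i == j else 0.0) + Ta[i][j] for j in range(n)]
            for i in range(n)]

# undamped oscillator x'' = -x written as a first-order system;
# the exact transition matrix over dt is a rotation by dt radians
H = [[0.0, 1.0], [-1.0, 0.0]]
T = precise_expm(H, 0.1)
```

Keeping the increment Ta rather than exp(H·tau) itself is the key numerical trick: adding 1 to a quantity of order 1e-7 at every doubling would otherwise wash out the significant digits.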

  19. Characterization of the International Linear Collider damping ring optics

    NASA Astrophysics Data System (ADS)

    Shanks, J.; Rubin, D. L.; Sagan, D.

    2014-10-01

    A method is presented for characterizing the emittance dilution and dynamic aperture for an arbitrary closed lattice that includes guide field magnet errors, multipole errors and misalignments. This method, developed and tested at the Cornell Electron Storage Ring Test Accelerator (CesrTA), has been applied to the damping ring lattice for the International Linear Collider (ILC). The effectiveness of beam-based emittance tuning is limited by beam position monitor (BPM) measurement errors, by the number and placement of corrector magnets, and by the correction algorithm. The specifications for damping ring magnet alignment, multipole errors, number of BPMs, and precision in BPM measurements are shown to be consistent with the required emittances and dynamic aperture. The methodology is then used to determine the minimum number of position monitors that is required to achieve the emittance targets, and how that minimum depends on the location of the BPMs. Similarly, the maximum tolerable multipole errors are evaluated. Finally, the robustness of each BPM configuration with respect to random failures is explored.

  20. Topology Control in Aerial Multi-Beam Directional Networks

    DTIC Science & Technology

    2017-04-24

    underlying challenges to topology control in multi-beam directional networks. Two topology control algorithms are developed: a centralized algorithm...main beam, the gain is negligible. Thus, for topology control in a multi-beam system, two nodes that are being simultaneously transmitted to or...the network. As the network size is larger than the communication range, even the original network will require some multi-hop traffic. The second two

  1. Evaluation of beam tracking strategies for the THOR-CSW solar wind instrument

    NASA Astrophysics Data System (ADS)

    De Keyser, Johan; Lavraud, Benoit; Prech, Lubomir; Neefs, Eddy; Berkenbosch, Sophie; Beeckman, Bram; Maggiolo, Romain; Fedorov, Andrei; Baruah, Rituparna; Wong, King-Wah; Amoros, Carine; Mathon, Romain; Génot, Vincent

    2017-04-01

    We compare different beam tracking strategies for the Cold Solar Wind (CSW) plasma spectrometer on the ESA M4 THOR mission candidate. The goal is to intelligently select the energy and angular windows the instrument samples and to adapt these windows as the solar wind properties evolve, with the aim of maximizing the velocity distribution acquisition rate while maintaining excellent energy and angular resolution. Using synthetic data constructed from high-cadence measurements by the Faraday cup instrument on the Spektr-R mission (30 ms resolution), we test the performance of energy beam tracking with and without angular beam tracking. The algorithm can be fed either with data acquired by the plasma spectrometer during the previous measurement cycle, or with data from another instrument, in this case the Faraday Cup (FAR) instrument foreseen on THOR. We verify how these beam tracking algorithms behave for different sizes of the energy and angular windows, and for different data integration times, in order to assess the limitations of the algorithm and to avoid situations in which it loses track of the beam.
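The core of energy beam tracking, recentring the sampled energy window on the previous cycle's peak and widening it when the beam drifts toward an edge, can be sketched as follows; the channel numbers, margin, and widening factor are illustrative choices, not CSW parameters:

```python
def update_window(counts, lo, hi, margin=0.2, widen=1.5):
    # recentre the sampled energy window on the current peak channel;
    # widen it when the peak drifts into the edge margin, so the next
    # cycle does not lose track of the beam
    width = hi - lo
    peak = lo + counts.index(max(counts))
    if peak < lo + margin * width or peak >= hi - margin * width:
        width = int(width * widen)
    new_lo = peak - width // 2
    return new_lo, new_lo + width

# beam drifting toward the upper edge of the [100, 120) window
counts = [0, 0, 1, 2, 3, 2, 4, 6, 9, 14, 20, 28, 35,
          42, 50, 58, 64, 70, 73, 75]
lo, hi = update_window(counts, 100, 120)

# well-centred beam: the window is only recentred, not widened
centred = [abs(10 - abs(i - 10)) for i in range(20)]
lo2, hi2 = update_window(centred, 100, 120)
```

The trade-off the abstract studies is visible even here: a narrow window maximizes cadence and resolution, while the widening rule is the safety margin against losing the beam between cycles.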

  2. Development of activity pencil beam algorithm using measured distribution data of positron emitter nuclei generated by proton irradiation of targets containing (12)C, (16)O, and (40)Ca nuclei in preparation of clinical application.

    PubMed

    Miyatake, Aya; Nishio, Teiji; Ogino, Takashi

    2011-10-01

    The purpose of this study is to develop a new calculation algorithm that satisfies the requirements for both accuracy and calculation time in simulating imaging of the proton-irradiated volume in a patient body in clinical proton therapy. The activity pencil beam algorithm (APB algorithm), a new technique that applies the pencil beam algorithm generally used for proton dose calculations to the calculation of activity distributions, was developed as a calculation algorithm for the activity distributions formed by positron emitter nuclei generated from target nuclear fragment reactions. In the APB algorithm, activity distributions are calculated using an activity pencil beam kernel constructed from measured activity distributions in the depth direction and calculations in the lateral direction. (12)C, (16)O, and (40)Ca nuclei were determined to be the major target nuclei constituting the human body that are relevant for the calculation of activity distributions. In this study, "virtual positron emitter nuclei" was defined as the integral yield of the various positron emitter nuclei generated from each target nucleus by target nuclear fragment reactions with the irradiated proton beam. Compounds containing these target nuclei abundantly, namely polyethylene, water (including some gelatin) and calcium oxide, were irradiated using a proton beam. The depth activity distributions of virtual positron emitter nuclei generated in each compound from target nuclear fragment reactions were measured using a beam ON-LINE PET system mounted on a rotating gantry port (BOLPs-RGp). The measured activity distributions depend on depth or, in other words, energy. The irradiated proton beam energies were 138, 179, and 223 MeV, and the measurement time was about 5 h, until the measured activity reached the background level. 
The activity pencil beam data were then generated using the activity pencil beam kernel, composed of the measured depth data and lateral data in which multiple Coulomb scattering is approximated by a Gaussian function, and were used for calculating activity distributions. Measured depth activity distributions for every target nucleus and proton beam energy were obtained using BOLPs-RGp. The form of the depth activity distribution was verified, and the data were constructed taking into account the time-dependent change of that form; this time dependence could be represented by two half-lives. The Gaussian form of the lateral distribution of the activity pencil beam kernel was determined by the effect of multiple Coulomb scattering. Thus, activity pencil beam data incorporating time dependence were obtained in this study. The simulation of imaging of the proton-irradiated volume in a patient body using target nuclear fragment reactions is feasible with the developed APB algorithm taking time dependence into account. With the use of the APB algorithm, it is suggested that a simulation system for activity distributions with accuracy and calculation time appropriate for clinical use can be constructed.
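The superposition underlying a pencil-beam-style calculation, a measured depth-activity curve spread laterally by a depth-dependent Gaussian for each pencil beam, can be sketched as follows; the curve, broadening law, and field layout are invented placeholders, not BOLPs-RGp data:

```python
import math

def lateral_gauss(r, sigma):
    # normalized lateral spread from multiple Coulomb scattering
    return (math.exp(-r * r / (2 * sigma * sigma))
            / (math.sqrt(2 * math.pi) * sigma))

def activity_field(depth_activity, sigma_by_depth, beam_xs, grid_xs):
    # APB-style superposition: every pencil beam carries the measured
    # depth activity curve, broadened laterally by a depth-dependent
    # Gaussian, and the contributions of all beams are summed per voxel
    out = [[0.0] * len(grid_xs) for _ in depth_activity]
    for iz, (a, s) in enumerate(zip(depth_activity, sigma_by_depth)):
        for bx in beam_xs:
            for ix, gx in enumerate(grid_xs):
                out[iz][ix] += a * lateral_gauss(gx - bx, s)
    return out

depth_activity = [1.0, 1.2, 1.5, 2.0, 0.3]           # invented depth curve
sigma_by_depth = [1.0 + 0.2 * z for z in range(5)]   # broadening grows with depth
field = activity_field(depth_activity, sigma_by_depth,
                       beam_xs=[-2.0, 0.0, 2.0],
                       grid_xs=[x * 0.5 for x in range(-10, 11)])
```

A time-dependent version would additionally scale `depth_activity` by a sum of two exponential decays, matching the two half-lives reported above.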

  3. Slew maneuvers of Spacecraft Control Laboratory Experiment (SCOLE)

    NASA Technical Reports Server (NTRS)

    Kakad, Yogendra P.

    1992-01-01

    This is the final report on the dynamics and control of slew maneuvers of the Spacecraft Control Laboratory Experiment (SCOLE) test facility. The report documents the derivation of the basic dynamical equations for an arbitrary large-angle slew maneuver, as well as the basic decentralized slew maneuver control algorithm. The set of dynamical equations incorporates the rigid-body slew maneuver and the three-dimensional vibrations of the complete assembly comprising the rigid shuttle, the flexible beam, and the reflector with an offset mass. The analysis also includes the kinematic nonlinearities of the entire assembly during the maneuver and the dynamics of the interactions between the rigid shuttle and the flexible appendage. The equations are simplified and evaluated numerically, including the first ten flexible modes, to yield a model for designing control systems to perform slew maneuvers. The control problem incorporates the nonlinear dynamical equations and is expressed as a two-point boundary value problem.

  4. 3D algebraic iterative reconstruction for cone-beam x-ray differential phase-contrast computed tomography.

    PubMed

    Fu, Jian; Hu, Xinhua; Velroyen, Astrid; Bech, Martin; Jiang, Ming; Pfeiffer, Franz

    2015-01-01

    Due to the potential of compact imaging systems with magnified spatial resolution and contrast, cone-beam x-ray differential phase-contrast computed tomography (DPC-CT) has attracted significant interest. The currently proposed FDK reconstruction algorithm with the Hilbert imaginary filter induces severe cone-beam artifacts when the cone-beam angle becomes large. In this paper, we propose an algebraic iterative reconstruction (AIR) method for cone-beam DPC-CT and report its experimental results. This approach treats reconstruction as the optimization of a discrete representation of the object function to satisfy a system of equations that describes the cone-beam DPC-CT imaging modality. Unlike conventional iterative algorithms for absorption-based CT, it applies the derivative operation to the forward projections of the reconstructed intermediate image to take into account the differential nature of the DPC projections. The method is based on the algebraic reconstruction technique, reconstructs the image ray by ray, and is expected to provide better derivative estimates in the iterations. This work comprises a numerical study of the algorithm and its experimental verification using a dataset measured with a three-grating interferometer and a mini-focus x-ray tube source. It is shown that the proposed method reduces the cone-beam artifacts and performs better than FDK under large cone-beam angles. This algorithm is of interest for future cone-beam DPC-CT applications.
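The algebraic iterative scheme can be illustrated with Kaczmarz (ART) sweeps on a toy system whose rows are differenced ray sums, mimicking the differential nature of DPC projections. The two-view 4x4 geometry is a deliberate oversimplification of cone-beam DPC-CT:

```python
def build_system(n):
    # toy two-view geometry: row sums and column sums of an n x n image,
    # differenced across adjacent detector bins as in DPC projections
    rows = []
    for view in range(2):
        for d in range(n - 1):
            r = [0.0] * (n * n)
            for k in range(n):
                if view == 0:                 # horizontal rays
                    r[(d + 1) * n + k] += 1.0
                    r[d * n + k] -= 1.0
                else:                         # vertical rays
                    r[k * n + d + 1] += 1.0
                    r[k * n + d] -= 1.0
            rows.append(r)
    return rows

def kaczmarz(rows, b, n_unk, sweeps=500, relax=1.0):
    # AIR/ART: project the running estimate onto each equation in turn
    x = [0.0] * n_unk
    for _ in range(sweeps):
        for r, bi in zip(rows, b):
            dot = sum(ri * xi for ri, xi in zip(r, x))
            nrm = sum(ri * ri for ri in r)
            c = relax * (bi - dot) / nrm
            x = [xi + c * ri for xi, ri in zip(x, r)]
    return x

n = 4
truth = [float((i // n + i % n) % 3) for i in range(n * n)]
rows = build_system(n)
b = [sum(ri * ti for ri, ti in zip(r, truth)) for r in rows]  # "measured" data
x = kaczmarz(rows, b, n * n)
resid = max(abs(sum(ri * xi for ri, xi in zip(r, x)) - bi)
            for r, bi in zip(rows, b))
```

As in DPC-CT, the differenced data determine the image only up to an offset per connected constraint set, so a practical reconstruction adds more views and regularization; the sketch only shows that the ART sweeps drive the data residual to zero.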

  5. Incoherent beam combining based on the momentum SPGD algorithm

    NASA Astrophysics Data System (ADS)

    Yang, Guoqing; Liu, Lisheng; Jiang, Zhenhua; Guo, Jin; Wang, Tingfeng

    2018-05-01

    Incoherent beam combining (ICBC) technology is one of the most promising ways to achieve high-energy, near-diffraction-limited laser output. In this paper, the momentum method is proposed as a modification of the stochastic parallel gradient descent (SPGD) algorithm. The momentum method efficiently improves the convergence speed of the combining system. An analytical treatment is employed to interpret the principle of the momentum method, and the proposed algorithm is validated through both simulations and experiments. The results show that the proposed algorithm not only accelerates the iteration but also maintains the stability of the combining process, confirming its feasibility for beam combining systems.
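
    The momentum modification of SPGD can be sketched as follows; the combining metric, gain, perturbation amplitude, and momentum coefficient below are illustrative stand-ins, not the paper's values:

```python
import numpy as np

def spgd_momentum(metric, u0, gain=2.0, beta=0.5, perturb=0.1,
                  n_iters=500, rng=None):
    """Stochastic parallel gradient descent with a momentum term.

    Two-sided random perturbations estimate the gradient of the metric;
    the momentum term (beta times the previous update) smooths the
    stochastic steps and speeds up convergence."""
    rng = np.random.default_rng(0) if rng is None else rng
    u = np.asarray(u0, dtype=float).copy()
    v = np.zeros_like(u)
    for _ in range(n_iters):
        delta = perturb * rng.choice([-1.0, 1.0], size=u.shape)
        dJ = metric(u + delta) - metric(u - delta)   # gradient estimate
        v = beta * v + gain * dJ * delta             # momentum-smoothed step
        u += v                                       # ascend the metric
    return u

# Toy stand-in for the combining metric: the "power in the bucket" is
# highest when the control vector matches a hypothetical optimum.
target = np.array([0.3, -0.7, 1.1])
metric = lambda u: -np.sum((u - target) ** 2)
u_opt = spgd_momentum(metric, np.zeros(3))
```

In a real ICBC system the metric would be a photodetector reading of combined power and `u` the actuator commands; the momentum update is the only change relative to plain SPGD.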

  6. Simulating an underwater vehicle self-correcting guidance system with Simulink

    NASA Astrophysics Data System (ADS)

    Fan, Hui; Zhang, Yu-Wen; Li, Wen-Zhe

    2008-09-01

    Underwater vehicles have already adopted self-correcting directional guidance algorithms based on multi-beam self-guidance systems, even before research has determined the most effective algorithms. The main challenges in researching these guidance systems have been effective modeling of the guidance algorithm and a means of analyzing the simulation results. A simulation structure based on Simulink that addresses both issues is proposed. Initially, a mathematical model of the relative motion between the vehicle and the target was developed and encapsulated as a subsystem. Next, the steps for constructing a model of the self-correcting guidance algorithm based on the Stateflow module were examined in detail. Finally, a 3-D model of the vehicle and target was created in VRML, and by processing the mathematical results, the model was shown moving in a visual environment. This process gives more intuitive results for analyzing the simulation. The results showed that the simulation structure performs well. Because the simulation program makes heavy use of modularization and encapsulation, it has broad applicability to simulations of other dynamic systems.

  7. The accurate particle tracer code

    NASA Astrophysics Data System (ADS)

    Wang, Yulei; Liu, Jian; Qin, Hong; Yu, Zhi; Yao, Yicun

    2017-11-01

    The Accurate Particle Tracer (APT) code is designed for systematic large-scale applications of geometric algorithms for particle dynamical simulations. Based on a large variety of advanced geometric algorithms, APT possesses long-term numerical accuracy and stability, which are critical for solving multi-scale and nonlinear problems. To provide a flexible and convenient I/O interface, the Lua and HDF5 libraries are used. Following a three-step procedure, users can efficiently extend the libraries of electromagnetic configurations, external non-electromagnetic forces, particle pushers, and initialization approaches by use of the extendible module. APT has been used in simulations of key physical problems, such as runaway electrons in tokamaks and energetic particles in the Van Allen belt. As an important realization, the APT-SW version has been successfully deployed on the world's fastest computer, the Sunway TaihuLight supercomputer, by supporting the master-slave architecture of Sunway many-core processors. Based on large-scale simulations of a runaway beam under ITER tokamak parameters, it is revealed that the magnetic ripple field can significantly disperse the pitch-angle distribution and at the same time improve the confinement of the energetic runaway beam.

  8. Active vibration control for piezoelectricity cantilever beam: an adaptive feedforward control method

    NASA Astrophysics Data System (ADS)

    Zhu, Qiao; Yue, Jun-Zhou; Liu, Wei-Qun; Wang, Xu-Dong; Chen, Jun; Hu, Guang-Di

    2017-04-01

    This work focuses on active vibration control of a piezoelectric cantilever beam, where an adaptive feedforward controller (AFC) is utilized to reject vibration with unknown multiple frequencies. First, the experimental setup and its mathematical model are introduced. Because the channel between the disturbance and the vibration output is unknown in practice, the concept of equivalent input disturbance (EID) is employed to place an equivalent disturbance in the input channel. Vibration control can then be achieved by setting the control input to the identified EID. For an EID with known multiple frequencies, the AFC can reject the vibration perfectly but is sensitive to the frequencies. In order to accurately identify the unknown frequencies of the EID in the presence of random disturbances and un-modeled nonlinear dynamics, a time-frequency analysis (TFA) method is employed. Consequently, a TFA-based AFC algorithm is proposed for active vibration control with unknown frequencies. Finally, four experimental cases are given to illustrate the efficiency of the proposed TFA-based AFC algorithm.

  9. Vibration based algorithm for crack detection in cantilever beam containing two different types of cracks

    NASA Astrophysics Data System (ADS)

    Behzad, Mehdi; Ghadami, Amin; Maghsoodi, Ameneh; Michael Hale, Jack

    2013-11-01

    In this paper, a simple method based on energy equations is presented for the detection of multiple edge cracks of two different types in Euler-Bernoulli beams. Each crack is modeled as a massless rotational spring using Linear Elastic Fracture Mechanics (LEFM) theory, and a relationship among the natural frequencies, crack locations, and stiffnesses of the equivalent springs is derived. In the procedure, detecting m cracks in a beam requires 3m equations and the natural frequencies of the healthy and cracked beam in two different directions as input to the algorithm. The main accomplishment of the presented algorithm is its capability to detect the location, severity, and type of each crack in a multi-cracked beam. Concise and simple calculations, along with accuracy, are further advantages of this method. A number of numerical examples for cantilever beams containing one and two cracks are presented to validate the method.

  10. Scanning wind-vector scatterometers with two pencil beams

    NASA Technical Reports Server (NTRS)

    Kirimoto, T.; Moore, R. K.

    1984-01-01

    A scanning pencil-beam scatterometer for ocean wind-vector determination has potential advantages over the fan-beam systems used and proposed heretofore. The pencil beam permits use of lower transmitter power and at the same time allows concurrent use of the reflector by a radiometer to correct for atmospheric attenuation, and by other radiometers for other purposes. The use of dual beams based on the same scanning reflector permits four looks at each cell on the surface, thereby improving accuracy and allowing alias removal. Simulation results for a spaceborne dual-beam scanning scatterometer with 1 watt of radiated power at an orbital altitude of 900 km are described. Two novel algorithms for removing aliases in the wind vector are described, in addition to an adaptation of the conventional maximum likelihood algorithm. The new algorithms are more effective at alias removal than the conventional one. Measurement errors for the wind speed, assuming perfect alias removal, were found to be less than 10%.

  11. Decoding algorithm for vortex communications receiver

    NASA Astrophysics Data System (ADS)

    Kupferman, Judy; Arnon, Shlomi

    2018-01-01

    Vortex light beams can provide a tremendous alphabet for encoding information. We derive a symbol decoding algorithm for a direct detection matrix detector vortex beam receiver using Laguerre Gauss (LG) modes, and develop a mathematical model of symbol error rate (SER) for this receiver. We compare SER as a function of signal to noise ratio (SNR) for our algorithm and for the Pearson correlation algorithm. To our knowledge, this is the first comprehensive treatment of a decoding algorithm of a matrix detector for an LG receiver.
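
    The Pearson-correlation baseline that the authors compare their decoder against can be sketched as a template-matching receiver (the alphabet size and templates below are hypothetical; the paper's own decoding algorithm differs):

```python
import numpy as np

def decode_symbol(image, templates):
    """Return the index of the symbol whose stored intensity template has
    the highest Pearson correlation with the measured matrix-detector
    image. This is a generic correlation receiver, used in the paper
    only as the comparison baseline."""
    flat = image.ravel()
    scores = [np.corrcoef(flat, t.ravel())[0, 1] for t in templates]
    return int(np.argmax(scores))

# Hypothetical 4-symbol alphabet of 8x8 intensity patterns; in an LG
# receiver these would be the detector images of each transmitted mode.
rng = np.random.default_rng(1)
templates = [rng.random((8, 8)) for _ in range(4)]
noisy = templates[2] + 0.05 * rng.standard_normal((8, 8))
symbol = decode_symbol(noisy, templates)   # → 2
```

Correlation is insensitive to overall gain and offset of the detected intensity, which is why it is a natural baseline for intensity-only (direct detection) receivers.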

  12. Worldwide Ocean Optics Database (WOOD)

    DTIC Science & Technology

    2001-09-30

    The user can obtain values computed from empirical algorithms (e.g., beam attenuation estimated from diffuse attenuation and backscatter data). Error estimates will also be provided for these properties, including diffuse attenuation, beam attenuation, and scattering. The database shall be easy to use, Internet accessible, and frequently updated.

  13. Real-time moving horizon estimation for a vibrating active cantilever

    NASA Astrophysics Data System (ADS)

    Abdollahpouri, Mohammad; Takács, Gergely; Rohaľ-Ilkiv, Boris

    2017-03-01

    Vibrating structures may be subject to changes throughout their operating lifetime due to a range of environmental and technical factors. These variations can be considered as parameter changes in the dynamic model of the structure, while their online estimates can be utilized in adaptive control strategies or in structural health monitoring. This paper implements the moving horizon estimation (MHE) algorithm on a low-cost embedded computing device that jointly observes the dynamic states and parameter variations of an active cantilever beam in real time. The practical behavior of this algorithm has been investigated in various experimental scenarios. It has been found that, for the given field of application, moving horizon estimation converges faster than the extended Kalman filter; moreover, it handles atypical measurement noise, sensor errors, and other extreme changes reliably. Despite this improved performance, the experiments demonstrate that the disadvantage of solving the nonlinear optimization problem in MHE is that it naturally leads to an increase in computational effort.

  14. A BPF-FBP tandem algorithm for image reconstruction in reverse helical cone-beam CT

    PubMed Central

    Cho, Seungryong; Xia, Dan; Pellizzari, Charles A.; Pan, Xiaochuan

    2010-01-01

    Purpose: Reverse helical cone-beam computed tomography (CBCT) is a scanning configuration for potential applications in image-guided radiation therapy in which an accurate anatomic image of the patient is needed for image-guidance procedures. The authors previously developed an algorithm for image reconstruction from nontruncated data of an object that is completely within the reverse helix. The purpose of this work is to develop an image reconstruction approach for reverse helical CBCT of a long object that extends out of the reverse helix and therefore entails data truncation. Methods: The proposed approach comprises two reconstruction steps. In the first step, a chord-based backprojection-filtration (BPF) algorithm reconstructs a volumetric image of an object from the original cone-beam data. Because there exists a chordless region in the middle of the reverse helix, the image obtained in the first step contains an unreconstructed central-gap region. In the second step, the gap region is reconstructed by use of a Pack-Noo-formula-based filtered-backprojection (FBP) algorithm from the modified cone-beam data obtained by subtracting from the original cone-beam data the reprojection of the image reconstructed in the first step. Results: The authors have performed numerical studies to validate the proposed approach in image reconstruction from reverse helical cone-beam data. The results confirm that the proposed approach can reconstruct accurate images of a long object without suffering from data-truncation artifacts or cone-angle artifacts. Conclusions: The authors developed and validated a BPF-FBP tandem algorithm to reconstruct images of a long object from reverse helical cone-beam data. The chord-based BPF algorithm was utilized for converting the long-object problem into a short-object problem. The proposed approach is applicable to other scanning configurations such as reduced circular sinusoidal trajectories. PMID:20175463

  16. Wind profiling based on the optical beam intensity statistics in a turbulent atmosphere.

    PubMed

    Banakh, Victor A; Marakasov, Dimitrii A

    2007-10-01

    Reconstruction of the wind profile from the statistics of intensity fluctuations of an optical beam propagating in a turbulent atmosphere is considered. Equations for the spatiotemporal correlation function and the spectrum of weak intensity fluctuations of a Gaussian beam are obtained. Algorithms for wind profile retrieval from the spatiotemporal intensity spectrum are described, and the results of end-to-end computer experiments on wind profiling based on these algorithms are presented. It is shown that the developed algorithms allow retrieval of the wind profile from turbulent optical beam intensity fluctuations with acceptable accuracy for many practically feasible atmospheric laser measurement setups.

  17. Numerical phase retrieval from beam intensity measurements in three planes

    NASA Astrophysics Data System (ADS)

    Bruel, Laurent

    2003-05-01

    A system and method have been developed at CEA to retrieve phase information from multiple intensity measurements along a laser beam. The device has been patented. Commonly used devices for beam measurement provide phase and intensity information separately, or with rather poor resolution, whereas the MIROMA method provides both at the same time, allowing direct use of the results in numerical models. Usual phase retrieval algorithms use two intensity measurements, typically the image plane and the focal plane (Gerchberg-Saxton algorithm), related by a Fourier transform, or the image plane and a slightly defocused plane (D.L. Misell). The principal drawback of such iterative algorithms is their inability to provide unambiguous convergence in all situations: the algorithms can stagnate on bad solutions, and the error between measured and calculated intensities remains unacceptable. If three planes rather than two are used, the resulting data redundancy gives the method good convergence capability and noise immunity. It provides excellent agreement between the intensity determined from the retrieved phase data set in the image plane and intensity measurements in any diffraction plane. The method employed for MIROMA is inspired by the GS algorithm, replacing Fourier transforms by a beam-propagation kernel, with gradient-search acceleration techniques and special care for phase branch cuts. A fast one-dimensional algorithm provides an initial guess for the iterative algorithm. Applying the algorithm to synthetic data identifies the best reconstruction planes to choose. Robustness and sensitivity are evaluated. Results on collimated and distorted laser beams are presented.
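
    The two-plane Gerchberg-Saxton iteration that MIROMA builds on can be sketched as follows (MIROMA itself replaces the Fourier transforms with a beam-propagation kernel and adds a third plane; this is only the classic two-plane baseline, on synthetic data):

```python
import numpy as np

def gerchberg_saxton(amp_near, amp_far, n_iters=200):
    """Classic two-plane Gerchberg-Saxton iteration: alternately enforce
    the measured amplitudes in the image plane and in the focal plane,
    which are related by a Fourier transform, keeping the running phase."""
    field = amp_near.astype(complex)          # start with flat phase
    for _ in range(n_iters):
        far = np.fft.fft2(field)
        far = amp_far * np.exp(1j * np.angle(far))      # fix far amplitude
        field = np.fft.ifft2(far)
        field = amp_near * np.exp(1j * np.angle(field)) # fix near amplitude
    return np.angle(field)

# Synthetic test: generate both amplitude "measurements" from a known phase.
n = 32
y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
amp_near = np.exp(-(x**2 + y**2) / 0.5)       # Gaussian beam amplitude
true_phase = 3.0 * x                          # linear phase tilt
amp_far = np.abs(np.fft.fft2(amp_near * np.exp(1j * true_phase)))
phase = gerchberg_saxton(amp_near, amp_far)
```

The retrieved phase is only defined up to a constant offset (and, for symmetric beams, a conjugate ambiguity), so a natural check is how well it reproduces the far-plane amplitude rather than the phase itself.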

  18. Region-of-interest image reconstruction in circular cone-beam microCT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cho, Seungryong; Bian, Junguo; Pelizzari, Charles A.

    2007-12-15

    Cone-beam microcomputed tomography (microCT) is one of the most popular choices for small animal imaging, which is becoming an important tool for studying animal models with transplanted diseases. Region-of-interest (ROI) imaging techniques in CT, which can reconstruct an ROI image from the projection data set of the ROI, can be used not only for reducing imaging-radiation exposure to the subject and scatter reaching the detector but also for potentially increasing the spatial resolution of the reconstructed images. Increasing spatial resolution in microCT images can facilitate improved accuracy in many assessment tasks. A previously proposed method for increasing CT image spatial resolution entails the exploitation of the geometric magnification in cone-beam CT. Due to finite detector size, however, this method can lead to data truncation for a large geometric magnification. The Feldkamp-Davis-Kress (FDK) algorithm yields images with artifacts when truncated data are used, whereas the recently developed backprojection filtration (BPF) algorithm is capable of reconstructing ROI images without truncation artifacts from truncated cone-beam data. We apply the BPF algorithm to reconstructing ROI images from truncated data of three different objects acquired by our circular cone-beam microCT system. Images reconstructed by use of the FDK and BPF algorithms from both truncated and nontruncated cone-beam data are compared. The results of the experimental studies demonstrate that, from certain truncated data, the BPF algorithm can reconstruct ROI images with quality comparable to that reconstructed from nontruncated data. In contrast, the FDK algorithm yields ROI images with truncation artifacts. An implication of the studies is therefore that, when truncated data are acquired with a configuration of large geometric magnification, the BPF algorithm can be used for effective enhancement of the spatial resolution of an ROI image.

  19. A novel hybrid algorithm for the design of the phase diffractive optical elements for beam shaping

    NASA Astrophysics Data System (ADS)

    Jiang, Wenbo; Wang, Jun; Dong, Xiucheng

    2013-02-01

    In this paper, a novel hybrid algorithm for the design of phase diffractive optical elements (PDOE) is proposed. It combines the genetic algorithm (GA) with the transformable-scale BFGS (Broyden, Fletcher, Goldfarb, Shanno) algorithm, and a penalty function is used in the cost-function definition. The novel hybrid algorithm has the global search merits of the genetic algorithm as well as the local improvement capabilities of the transformable-scale BFGS algorithm. We designed the PDOE using both the conventional simulated annealing algorithm and the novel hybrid algorithm. To compare the performance of the two algorithms, three indexes (diffraction efficiency, uniformity error, and signal-to-noise ratio) are considered in numerical simulation. The results show that the novel hybrid algorithm has good convergence properties and good stability. As an application example, the PDOE was used for Gaussian beam shaping; high diffraction efficiency, low uniformity error, and a high signal-to-noise ratio were obtained. The PDOE can be used for high-quality beam shaping in applications such as inertial confinement fusion (ICF), excimer laser lithography, fiber coupling of laser diode arrays, and laser welding, showing wide applicability.
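
    The global-then-local structure of such a hybrid can be sketched as below. For a dependency-free illustration, the transformable-scale BFGS stage is replaced by a simple finite-difference gradient descent, and the cost function is a toy multimodal stand-in for the beam-shaping merit function (both are assumptions of this sketch, not the paper's method):

```python
import numpy as np

def hybrid_optimize(cost, bounds, pop_size=20, n_gens=30, rng=None):
    """Hybrid global/local search: a genetic algorithm explores globally,
    then the best individual is polished by a local descent stage
    (a stand-in here for the paper's transformable-scale BFGS)."""
    rng = np.random.default_rng(0) if rng is None else rng
    lo, hi = bounds
    dim = lo.size
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    for _ in range(n_gens):
        fitness = np.array([cost(p) for p in pop])
        parents = pop[np.argsort(fitness)[: pop_size // 2]]   # selection
        children = []
        for _ in range(pop_size - parents.shape[0]):
            a, b = parents[rng.integers(0, len(parents), 2)]
            child = np.where(rng.random(dim) < 0.5, a, b)     # crossover
            child += 0.1 * rng.standard_normal(dim)           # mutation
            children.append(np.clip(child, lo, hi))
        pop = np.vstack([parents, children])
    best = pop[np.argmin([cost(p) for p in pop])]
    # Local refinement via finite-difference gradient descent.
    x, step, eps = best.copy(), 0.05, 1e-4
    for _ in range(200):
        grad = np.array([(cost(x + eps * e) - cost(x - eps * e)) / (2 * eps)
                         for e in np.eye(dim)])
        x = np.clip(x - step * grad, lo, hi)
    return x

# Toy multimodal cost with its global minimum (value 0) at the origin.
cost = lambda p: np.sum(p**2) + 1.0 - np.cos(3 * p).prod()
bounds = (np.full(2, -2.0), np.full(2, 2.0))
x_best = hybrid_optimize(cost, bounds)
```

The GA is responsible for landing in the right basin of the multimodal landscape; the local stage then drives the remaining error down far faster than the GA's random mutations could.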

  20. Extended volume coverage in helical cone-beam CT by using PI-line based BPF algorithm

    NASA Astrophysics Data System (ADS)

    Cho, Seungryong; Pan, Xiaochuan

    2007-03-01

    We compared the data requirements of PI-line-based filtered-backprojection (FBP) and backprojection-filtration (BPF) algorithms in helical cone-beam CT. Since the filtration process in the FBP algorithm needs all the projection data on the PI-lines for each view, the required detector size must be larger than the size that covers the Tam-Danielsson (T-D) window in order to avoid data truncation. The BPF algorithm, however, requires projection data only within the T-D window, which means that a smaller detector can be used to reconstruct the same image than FBP requires. In other words, for a fixed detector size, a longer helical pitch can be obtained by using the BPF algorithm without any truncation artifacts. The purpose of this work is to demonstrate numerically that extended volume coverage in helical cone-beam CT can be achieved by using the PI-line-based BPF algorithm.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thorne, N; Kassaee, A

    Purpose: To develop an algorithm that can calculate the Full Width at Half Maximum (FWHM) of a proton pencil beam from a 2D ion chamber array (IBA Matrixx) with limited spatial resolution (7.6 mm inter-chamber distance). The algorithm would allow beam FWHM measurements to be taken during daily QA without an appreciable time increase. Methods: Combinations of 147 MeV single-spot beams were delivered onto an IBA Matrixx and concurrently onto EBT3 films as a standard. Data were collected around the Bragg peak region and evaluated by a custom MATLAB script implementing our algorithm with a least-squares analysis. A set of artificial data, modified with random noise, was also processed to test for robustness. Results: The script-processed Matrixx data show acceptable agreement (within 5%) with the film measurements, with no single measurement differing by more than 1.8 mm. In cases where the spots show some degree of asymmetry, the algorithm is able to resolve the differences. The algorithm was able to process artificial data with noise up to 15% of the maximum value. Each measurement took less than 3 minutes to perform, indicating that such measurements may be efficiently added to the daily QA routine. Conclusion: The developed algorithm can be implemented in a daily QA program for proton pencil beam scanning (PBS) with the Matrixx to extract spot size and position information, and it may be extended to small field sizes in the photon clinic.
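
    The core of such a fit can be illustrated in one dimension: taking the logarithm of the chamber readings turns the Gaussian fit into a linear least-squares (quadratic) problem, from which the FWHM follows directly. This is an illustrative stand-in for the authors' MATLAB script, whose details are not public:

```python
import numpy as np

def gaussian_fwhm(positions, readings):
    """Least-squares estimate of a Gaussian spot's FWHM from a sparse set
    of detector readings: fit log(I) to a quadratic in x (linear least
    squares), then convert the curvature to sigma and sigma to FWHM."""
    coeffs = np.polyfit(positions, np.log(readings), 2)  # a*x^2 + b*x + c
    a = coeffs[0]                                        # a = -1/(2*sigma^2)
    sigma = np.sqrt(-1.0 / (2.0 * a))
    return 2.0 * np.sqrt(2.0 * np.log(2.0)) * sigma      # FWHM = 2.355*sigma

# Chambers every 7.6 mm sampling a spot with sigma = 5 mm (FWHM ~ 11.8 mm).
x = np.arange(-3, 4) * 7.6
I = np.exp(-x**2 / (2 * 5.0**2))
fwhm = gaussian_fwhm(x, I)
```

With noisy readings the log-transform over-weights the weak tail chambers, which is one reason a direct nonlinear least-squares fit (as the abstract describes) is preferred in practice.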

  2. Filtered-backprojection reconstruction for a cone-beam computed tomography scanner with independent source and detector rotations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rit, Simon, E-mail: simon.rit@creatis.insa-lyon.fr; Clackdoyle, Rolf; Keuschnigg, Peter

    Purpose: A new cone-beam CT scanner for image-guided radiotherapy (IGRT) can independently rotate the source and the detector along circular trajectories. Existing reconstruction algorithms are not suitable for this scanning geometry. The authors propose and evaluate a three-dimensional (3D) filtered-backprojection reconstruction for this situation. Methods: The source and detector trajectories are tuned to image a field-of-view (FOV) that is offset with respect to the center-of-rotation. The new reconstruction formula is derived from the Feldkamp algorithm and results in a similar three-step algorithm: projection weighting, ramp filtering, and weighted backprojection. Simulations of a Shepp-Logan digital phantom were used to evaluate the new algorithm with a 10 cm-offset FOV. A real cone-beam CT image with an 8.5 cm-offset FOV was also obtained from projections of an anthropomorphic head phantom. Results: The quality of the cone-beam CT images reconstructed using the new algorithm was similar to that obtained using the Feldkamp algorithm employed in conventional cone-beam CT. The real image of the head phantom exhibited image quality comparable to that of existing systems. Conclusions: The authors have proposed a 3D filtered-backprojection reconstruction for scanners with independent source and detector rotations that is practical and effective. This algorithm forms the basis for exploiting the scanner's unique capabilities in IGRT protocols.

  3. Current capabilities for simulating the extreme distortion of thin structures subjected to severe impacts

    NASA Technical Reports Server (NTRS)

    Key, Samuel W.

    1993-01-01

    The explicit transient dynamics technology in use today for simulating the impact and subsequent transient dynamic response of a structure has its origins in the 'hydrocodes' dating back to the late 1940's. The growth in capability in explicit transient dynamics technology parallels the growth in speed and size of digital computers. Computer software for simulating the explicit transient dynamic response of a structure is characterized by algorithms that use a large number of small steps. In explicit transient dynamics software there is a significant emphasis on speed and simplicity. The finite element technology used to generate the spatial discretization of a structure is based on a compromise between completeness of the representation for the physical processes modelled and speed in execution. That is, since it is expected in every calculation that the deformation will be finite and the material will be strained beyond the elastic range, the geometry and the associated gradient operators must be reconstructed, as well as complex stress-strain models evaluated at every time step. As a result, finite elements derived for explicit transient dynamics software use the simplest and barest constructions possible for computational efficiency while retaining an essential representation of the physical behavior. The best example of this technology is the four-node bending quadrilateral derived by Belytschko, Lin and Tsay. Today, the speed, memory capacity and availability of computer hardware allows a number of the previously used algorithms to be 'improved.' That is, it is possible with today's computing hardware to modify many of the standard algorithms to improve their representation of the physical process at the expense of added complexity and computational effort. The purpose is to review a number of these algorithms and identify the improvements possible. 
In many instances, both the older, faster version of the algorithm and the improved and somewhat slower version of the algorithm are found implemented together in software. Specifically, the following seven algorithmic items are examined: the invariant time derivatives of stress used in material models expressed in rate form; incremental objectivity and strain used in the numerical integration of the material models; the use of one-point element integration versus mean quadrature; shell elements used to represent the behavior of thin structural components; beam elements based on stress-resultant plasticity versus cross-section integration; the fidelity of elastic-plastic material models in their representation of ductile metals; and the use of Courant subcycling to reduce computational effort.

  4. Fast non-interferometric iterative phase retrieval for holographic data storage.

    PubMed

    Lin, Xiao; Huang, Yong; Shimura, Tsutomu; Fujimura, Ryushi; Tanaka, Yoshito; Endo, Masao; Nishimoto, Hajimu; Liu, Jinpeng; Li, Yang; Liu, Ying; Tan, Xiaodi

    2017-12-11

    Fast non-interferometric phase retrieval is an important technique for phase-encoded holographic data storage and other phase-based applications due to its easy implementation, simple system setup, and robust noise tolerance. Here we present an iterative non-interferometric phase retrieval method for 4-level phase-encoded holographic data storage based on an iterative Fourier transform algorithm and a known portion of the encoded data, which increases the storage code rate to twice that of an amplitude-based method. Only a single image at the Fourier plane of the beam is captured for the iterative reconstruction. Since the beam intensity at the Fourier plane is more concentrated than in the reconstructed beam itself, the required diffraction efficiency of the recording medium is reduced, which significantly improves the usable dynamic range of the medium. The phase retrieval requires only 10 iterations to achieve a phase-data error rate below 5%, as demonstrated experimentally by recording and reconstructing a test image. We believe our method will further advance holographic data storage in the era of big data.

  5. Tilt angle measurement with a Gaussian-shaped laser beam tracking

    NASA Astrophysics Data System (ADS)

    Šarbort, Martin; Řeřucha, Šimon; Jedlička, Petr; Lazar, Josef; Číp, Ondrej

    2014-05-01

    We have addressed the challenge of angular tilt stabilization of a laser guiding mirror intended to route a laser beam with a high energy density. Such an application requires good angular accuracy as well as a large operating range, long-term stability, and absolute positioning. We have designed an instrument for high-precision angular tilt measurement based on a triangulation method in which a laser beam with a Gaussian profile is reflected off the stabilized mirror and detected by an image sensor. As the angular deflection of the mirror causes a change in the beam spot position, the principal task is to measure that position on the image chip surface. We employ a numerical analysis of the Gaussian intensity pattern using a nonlinear regression algorithm. The feasibility and performance of the method were tested by numerical modeling as well as experimentally. The experimental results indicate that the assembled instrument achieves a measurement error of 0.13 microradian over a range of +/-0.65 degrees during a period of one hour. This corresponds to a dynamic range of 1:170,000.
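
    The triangulation geometry can be illustrated with a short sketch: a mirror tilt of theta deflects the reflected beam by 2*theta, so the spot on the sensor moves by roughly 2*L*theta for a lever arm L. The pixel pitch, lever arm, and the simple centroid estimator below are illustrative assumptions; the paper fits the full Gaussian profile by nonlinear regression instead:

```python
import numpy as np

def spot_centroid(image, pixel_pitch):
    """Intensity-weighted centroid (x, y) of the beam spot, in metres.
    A centroid is the simplest position estimator; it stands in here for
    the paper's nonlinear Gaussian regression."""
    iy, ix = np.indices(image.shape)
    total = image.sum()
    return np.array([(ix * image).sum(), (iy * image).sum()]) / total * pixel_pitch

def tilt_from_shift(shift, arm_length):
    """Small-angle triangulation: spot shift ~ arm_length * 2 * theta."""
    return shift / (2.0 * arm_length)

# Hypothetical geometry: 5.5 um pixels, 1 m lever arm, spot moved 20 px.
pitch, L = 5.5e-6, 1.0
y, x = np.mgrid[0:64, 0:64]
ref = np.exp(-((x - 32)**2 + (y - 32)**2) / 50.0)
moved = np.exp(-((x - 52)**2 + (y - 32)**2) / 50.0)
shift_x = spot_centroid(moved, pitch)[0] - spot_centroid(ref, pitch)[0]
theta = tilt_from_shift(shift_x, L)   # ~ 55 microradians
```

Sub-pixel position resolution is what makes the microradian-level angular resolution quoted in the abstract plausible: the fit interpolates the smooth Gaussian profile well below the pixel pitch.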

  6. Fiber-based coherent polarization beam combining with cascaded phase-locking and polarization-transforming controls

    NASA Astrophysics Data System (ADS)

    Yang, Yan; Geng, Chao; Li, Feng; Huang, Guan; Li, Xinyang

    2018-05-01

    In this paper, fiber-based coherent polarization beam combining (CPBC) with cascaded phase-locking (PL) and polarization-transforming (PT) controls is proposed to combine imbalanced input beams whose number is not a power of two. The PL control is performed using a piezoelectric-ring fiber-optic phase compensator, and the PT control is realized simultaneously by a dynamic polarization controller. The principle of the proposed CPBC is introduced, and its performance is analyzed in comparison with CPBC based on PL control alone and CPBC based on PT control alone. A basic experiment combining three laser beams was carried out to validate the feasibility of the proposed CPBC, in which the cascaded PL and PT controls were implemented based on the stochastic parallel gradient descent algorithm. Simulation and experimental results show that the proposed CPBC incorporates the advantages of the two previous CPBC schemes and performs well in closed loop. Moreover, the scalability and applicability of the proposed CPBC were validated by scaling it to combine seven laser beams. We believe that the proposed fiber-based CPBC with cascaded PL and PT controls has great potential in free-space optical communications employing multi-aperture receivers with asymmetric structure.

  7. Dynamic Sensing Performance of a Point-Wise Fiber Bragg Grating Displacement Measurement System Integrated in an Active Structural Control System

    PubMed Central

    Chuang, Kuo-Chih; Liao, Heng-Tseng; Ma, Chien-Ching

    2011-01-01

    In this work, a fiber Bragg grating (FBG) sensing system which can measure transient out-of-plane point-wise displacement responses is set up on a smart cantilever beam, and the feasibility of its use as a feedback sensor in an active structural control system is studied experimentally. An FBG filter is employed in the proposed fiber sensing system to dynamically demodulate the responses obtained by the FBG displacement sensor with high sensitivity. For comparison, a laser Doppler vibrometer (LDV) is utilized simultaneously to verify the displacement detection ability of the FBG sensing system. An optical full-field measurement technique called amplitude-fluctuation electronic speckle pattern interferometry (AF-ESPI) is used to provide full-field vibration mode shapes and resonant frequencies. To verify the dynamic demodulation performance of the FBG filter, a traditional FBG strain sensor calibrated with a strain gauge is first employed to measure the dynamic strain of impact-induced vibrations. Then, system identification of the smart cantilever beam is performed using the FBG strain and displacement sensors. Finally, by employing a velocity feedback control algorithm, the feasibility of integrating the proposed FBG displacement sensing system in a collocated feedback system is investigated, and excellent dynamic feedback performance is demonstrated. In conclusion, our experiments show that the FBG sensor is capable of performing dynamic displacement feedback and/or strain measurements with high sensitivity and resolution. PMID:22247683

  8. A method for solution of the Euler-Bernoulli beam equation in flexible-link robotic systems

    NASA Technical Reports Server (NTRS)

    Tzes, Anthony P.; Yurkovich, Stephen; Langer, F. Dieter

    1989-01-01

    An efficient numerical method for solving the partial differential equation (PDE) governing the flexible manipulator control dynamics is presented. A finite-dimensional model of the equation is obtained through discretization in both time and space coordinates by using finite-difference approximations to the PDE. An expert program written in the Macsyma symbolic language is used to embed the boundary conditions, accounting for a mass carried at the tip of the manipulator. The advantages of the proposed algorithm are many, including the ability to (1) include any distributed actuation term in the partial differential equation, (2) provide distributed sensing of the beam displacement, (3) easily modify the boundary conditions through the expert program, and (4) modify the structure for running under a multiprocessor environment.
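
The finite-difference embedding of boundary conditions can be illustrated on the classic clamped-free eigenproblem for the Euler-Bernoulli beam. This sketch omits the tip mass treated in the paper and assumes a uniform beam; the eliminated ghost nodes play the role of the symbolically embedded boundary conditions.

```python
import numpy as np

def cantilever_beta(n=200, length=1.0):
    """Finite-difference model of the Euler-Bernoulli eigenproblem
    w'''' = beta^4 w for a clamped-free (cantilever) beam. The clamped
    (w = w' = 0) and free (w'' = w''' = 0) conditions are folded into
    the stencil by eliminating ghost nodes."""
    h = length / n
    K = np.zeros((n, n))
    stencil = (1.0, -4.0, 6.0, -4.0, 1.0)   # central 4th-derivative stencil
    for i in range(n):
        for j, c in zip(range(i - 2, i + 3), stencil):
            if 0 <= j < n:
                K[i, j] += c
    K[0, 0] += 1.0            # clamped end: ghost w(-h) = w(h), w(0) = 0
    K[n - 2, n - 2] += -1.0   # free end: zero bending moment ...
    K[n - 2, n - 1] += 2.0
    K[n - 1, n - 3] += 1.0    # ... and zero shear at the tip
    K[n - 1, n - 1] += -4.0
    lam = np.sort(np.linalg.eigvals(K).real) / h ** 4
    lam = lam[lam > 0]
    return lam[:3] ** 0.25 * length   # beta_k * L; exact: 1.8751, 4.6941, ...

betas = cantilever_beta()
```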

  9. Beam-steering efficiency optimization method based on a rapid-search algorithm for liquid crystal optical phased array.

    PubMed

    Xiao, Feng; Kong, Lingjiang; Chen, Jian

    2017-06-01

    A rapid-search algorithm to improve the beam-steering efficiency of a liquid crystal optical phased array is proposed and experimentally demonstrated in this paper. The proposed algorithm, in which the steering efficiency is taken as the objective function and the controlling voltage codes are the optimization variables, consists of a detection stage and a construction stage. It optimizes the steering efficiency in the detection stage and adjusts its search direction adaptively in the construction stage to avoid getting caught in a wrong search space. Simulations were conducted to compare the proposed algorithm with the widely used pattern-search algorithm, using convergence rate and optimized efficiency as criteria. Beam-steering optimization experiments were performed to verify the validity of the proposed method.

  10. Correction factor for ablation algorithms used in corneal refractive surgery with gaussian-profile beams

    NASA Astrophysics Data System (ADS)

    Jimenez, Jose Ramón; González Anera, Rosario; Jiménez del Barco, Luis; Hita, Enrique; Pérez-Ocón, Francisco

    2005-01-01

    We provide a correction factor to be added to ablation algorithms when a Gaussian beam is used in photorefractive laser surgery. This factor, which quantifies the effect of pulse overlapping, depends on the beam radius and spot size. We also deduce the expected post-surgical corneal radius and asphericity when this factor is considered. Data on 141 eyes treated with LASIK (laser in situ keratomileusis) using a Gaussian-profile beam show that the discrepancy between experimental and expected corneal power is significantly lower when the correction factor is used. For an effective improvement of post-surgical visual quality, this factor should be applied in ablation algorithms that do not consider the effects of pulse overlapping with a Gaussian beam.
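
A toy calculation shows why pulse overlap matters for a Gaussian beam: the cumulative fluence a site receives from a lattice of overlapping pulses depends on the beam radius and spot spacing, which is the dependence the correction factor captures. All numbers below are illustrative, not surgical parameters.

```python
import numpy as np

def fluence_at_centre(beam_radius, spacing, grid=21):
    """Cumulative normalised fluence at the central ablation site from a
    square lattice of Gaussian pulses (1/e^2 radius = beam_radius)."""
    half = grid // 2
    total = 0.0
    for i in range(-half, half + 1):
        for j in range(-half, half + 1):
            r2 = (i * spacing) ** 2 + (j * spacing) ** 2
            total += np.exp(-2.0 * r2 / beam_radius ** 2)
    return total

dense = fluence_at_centre(1.0, 0.5)    # heavy overlap: ~pi*w^2/(2*s^2) pulses
sparse = fluence_at_centre(1.0, 2.0)   # negligible overlap: ~1 pulse
```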

  11. SU-E-T-33: A Feasibility-Seeking Algorithm Applied to Planning of Intensity Modulated Proton Therapy: A Proof of Principle Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Penfold, S; Casiraghi, M; Dou, T

    2015-06-15

    Purpose: To investigate the applicability of feasibility-seeking cyclic orthogonal projections to intensity modulated proton therapy (IMPT) inverse planning. Seeking feasibility of the constraints only, as opposed to optimizing a merit function, is less demanding algorithmically and holds promise of parallel-computation capability with non-cyclic orthogonal projection algorithms such as string-averaging or block-iterative strategies. Methods: A virtual 2D geometry was designed containing a C-shaped planning target volume (PTV) surrounding an organ at risk (OAR). The geometry was pixelized into 1 mm pixels. Four beams containing a subset of proton pencil beams were simulated in Geant4 to provide the system matrix A whose elements a_ij correspond to the dose delivered to pixel i by a unit-intensity pencil beam j. A cyclic orthogonal projections algorithm was applied with the goal of finding a pencil beam intensity distribution meeting the following dose requirements: D-OAR < 54 Gy and 57 Gy < D-PTV < 64.2 Gy. The cyclic algorithm was based on orthogonal projections onto half-spaces according to the Agmon-Motzkin-Schoenberg algorithm, also known as ‘ART for inequalities’. Results: The cyclic orthogonal projections algorithm resulted in less than 5% of the PTV pixels and less than 1% of the OAR pixels violating their dose constraints. Because of the abutting OAR-PTV geometry and the realistic modelling of the pencil beam penumbra, complete satisfaction of the dose objectives was not achieved, although this would be a clinically acceptable plan for, e.g., a meningioma abutting the brainstem. Conclusion: The cyclic orthogonal projections algorithm was demonstrated to be an effective tool for inverse IMPT planning in the 2D test geometry described. We plan to further develop this linear algorithm to incorporate dose-volume constraints into the feasibility-seeking algorithm.
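
The Agmon-Motzkin-Schoenberg projection step has a compact form: when a half-space constraint a.x <= b is violated, x is projected orthogonally onto its boundary. A minimal sketch on a made-up 3-pixel, 4-pencil-beam system using the dose bounds quoted above (the system matrix is hypothetical):

```python
import numpy as np

def amss_feasibility(A, lower, upper, sweeps=200):
    """Cyclic Agmon-Motzkin-Schoenberg projections ('ART for inequalities'):
    each dose constraint lower_i <= (A x)_i <= upper_i is a slab, and x is
    projected orthogonally onto each violated half-space in turn."""
    x = np.zeros(A.shape[1])
    norms = (A ** 2).sum(axis=1)
    for _ in range(sweeps):
        for i in range(A.shape[0]):
            d = A[i] @ x
            if d > upper[i]:
                x -= (d - upper[i]) / norms[i] * A[i]
            elif d < lower[i]:
                x += (lower[i] - d) / norms[i] * A[i]
            x = np.maximum(x, 0.0)  # pencil-beam intensities stay non-negative
    return x

# Hypothetical 3-pixel, 4-pencil-beam system (one OAR pixel, two PTV pixels)
A = np.array([[0.2, 0.1, 0.3, 0.1],
              [1.0, 0.8, 0.2, 0.4],
              [0.3, 0.9, 0.7, 1.0]])
lower = np.array([0.0, 57.0, 57.0])   # Gy
upper = np.array([54.0, 64.2, 64.2])  # Gy
x = amss_feasibility(A, lower, upper)
dose = A @ x
```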

  12. Mitigation of beam fluctuation due to atmospheric turbulence and prediction of control quality using intelligent decision-making tools.

    PubMed

    Raj, A Arockia Bazil; Selvi, J Arputha Vijaya; Kumar, D; Sivakumaran, N

    2014-06-10

    In a free-space optical link (FSOL), atmospheric turbulence causes fluctuations in both the intensity and the phase of the received beam, impairing link performance. Beam motion is one of the main causes of power loss. This paper presents an investigation of the performance of two types of controller designed to keep a laser beam aimed at a particular spot under dynamic disturbances. Nonlinear input-output data mappings observed over multiple experiments are used as the principal components of the controller designs. The first design is based on the Taguchi method, the second on an artificial neural network. These controllers take the beam location from an optoelectronic position detector, a static linear map of the 2D plane acting as the observer, and generate the outputs needed to steer the beam with a microelectromechanical fast-steering mirror. The beam centroid is computed using a monopulse algorithm. The suitability and effectiveness of the proposed controllers are comprehensively assessed and quantitatively measured in terms of correlation coefficient, correction speed, control accuracy, centroid displacement, and stability of the receiver signal, using experimental results from an FSO link established over a horizontal range of 0.5 km at an altitude of 15.25 m. The test field is open flat terrain with grass and a few isolated obstacles.
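
The monopulse centroid computation on a quadrant-style detector reduces to sum-and-difference ratios. A minimal sketch with a synthetic spot follows; the detector geometry and spot parameters are illustrative.

```python
import numpy as np

def monopulse_error(image):
    """Monopulse sum-and-difference on the four detector quadrants:
    normalised x/y error signals that drive the fast-steering mirror."""
    ny, nx = image.shape
    a = image[: ny // 2, : nx // 2].sum()   # top-left quadrant
    b = image[: ny // 2, nx // 2 :].sum()   # top-right
    c = image[ny // 2 :, : nx // 2].sum()   # bottom-left
    d = image[ny // 2 :, nx // 2 :].sum()   # bottom-right
    s = a + b + c + d
    return (b + d - a - c) / s, (c + d - a - b) / s   # x-error, y-error

# Synthetic Gaussian spot displaced to the right of the detector centre
n = 64
x = np.arange(n)
X, Y = np.meshgrid(x, x)
spot = np.exp(-((X - 40.0) ** 2 + (Y - 31.5) ** 2) / (2 * 6.0 ** 2))
ex, ey = monopulse_error(spot)
```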

  13. Electron Source based on Superconducting RF

    NASA Astrophysics Data System (ADS)

    Xin, Tianmu

    High-bunch-charge photoemission electron sources operating in Continuous Wave (CW) mode can provide the high peak current as well as the high average current required by many advanced accelerator applications, for example, electron coolers for hadron beams, electron-ion colliders, and Free-Electron Lasers (FELs). Superconducting Radio Frequency (SRF) has many advantages over other electron-injector technologies, especially in CW mode, where it offers a higher repetition rate. A 112 MHz SRF electron photo-injector (gun) was developed at Brookhaven National Laboratory (BNL) to produce high-brightness, high-bunch-charge bunches for electron cooling experiments. The gun utilizes a Quarter-Wave Resonator (QWR) geometry for a compact structure and improved electron beam dynamics. The detailed RF design of the cavity, fundamental coupler, and cathode stalk is presented in this work. A GPU-accelerated code was written to speed up the simulation of multipacting, an important hurdle the SRF structure has to overcome in various locations. The injector utilizes high Quantum Efficiency (QE) multi-alkali photocathodes (K2CsSb) for generating electrons. The cathode fabrication system and procedure are also included in the thesis. Beam dynamics simulation of the injector was done with the code ASTRA. To find the optimized parameters of the cavities and beam optics, the author wrote a genetic algorithm Python script to search for the best solution in this high-dimensional parameter space. The gun was successfully commissioned and produced world-record bunch charge and average current for an SRF photo-injector.
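
A genetic-algorithm parameter search of this kind can be sketched generically; the operators below (tournament selection, uniform crossover, Gaussian mutation) are common choices rather than necessarily those of the thesis, and the objective is a toy surrogate, not an ASTRA simulation.

```python
import numpy as np

def genetic_search(objective, bounds, pop=40, gens=60, seed=2):
    """Minimal real-coded genetic algorithm minimizing `objective` over a
    box. Tournament selection + uniform crossover + Gaussian mutation."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    x = rng.uniform(lo, hi, size=(pop, len(lo)))
    for _ in range(gens):
        f = np.array([objective(xi) for xi in x])
        i, j = rng.integers(0, pop, (2, pop))
        parents = np.where((f[i] < f[j])[:, None], x[i], x[j])  # tournaments
        mask = rng.random(x.shape) < 0.5                        # crossover
        children = np.where(mask, parents, np.roll(parents, 1, axis=0))
        children += 0.02 * (hi - lo) * rng.standard_normal(x.shape)  # mutate
        x = np.clip(children, lo, hi)
    f = np.array([objective(xi) for xi in x])
    return x[f.argmin()], f.min()

# Toy 3-parameter landscape standing in for the injector figure of merit
target = np.array([0.35, -0.2, 0.6])
best, fbest = genetic_search(lambda p: ((p - target) ** 2).sum(),
                             np.array([[-1.0, 1.0]] * 3))
```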

  14. SU-F-T-352: Development of a Knowledge Based Automatic Lung IMRT Planning Algorithm with Non-Coplanar Beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, W; Wu, Q; Yuan, L

    Purpose: To improve the robustness of a knowledge-based automatic lung IMRT planning method and to further validate the reliability of this algorithm by applying it to the planning of clinical cases with non-coplanar beams. Methods: A lung IMRT planning method which automatically determines both plan optimization objectives and beam configurations with non-coplanar beams has been reported previously. A beam efficiency index map is constructed to guide beam angle selection in this algorithm. This index takes into account both the dose contributions from individual beams and the combined effect of multiple beams, which is represented by a beam separation score. We studied the effect of this beam separation score on plan quality and determined the optimal weight for this score. Fourteen clinical plans were re-planned with the knowledge-based algorithm. Significant dosimetric metrics for the PTV and OARs in the automatic plans are compared with those in the clinical plans by the two-sample t-test. In addition, a composite dosimetric quality index was defined to obtain the relationship between the plan quality and the beam separation score. Results: On average, we observed more than 15% reduction in the conformity index and homogeneity index for the PTV and in V40 and V60 for the heart, while V5 and V20 for the lungs increased by 8% and 3%, respectively. The variation of the composite index as a function of the angle spread score shows that 0.6 is the best value for the weight of the beam separation score. Conclusion: The optimal value for the beam angle spread score in automatic lung IMRT planning is obtained. With this value, the model can produce statistically the “best” achievable plans. This method can potentially improve the quality and planning efficiency of IMRT plans with non-coplanar angles.

  15. Dual energy approach for cone beam artifacts correction

    NASA Astrophysics Data System (ADS)

    Han, Chulhee; Choi, Shinkook; Lee, Changwoo; Baek, Jongduk

    2017-03-01

    Cone beam computed tomography systems generate 3D volumetric images, which provide further morphological information compared to radiography and tomosynthesis systems. However, images reconstructed by the FDK algorithm contain cone beam artifacts when the cone angle is large. To reduce these artifacts, a two-pass algorithm has been proposed. The two-pass algorithm assumes that the cone beam artifacts are mainly caused by high-density materials, and provides an effective method to estimate the error images (i.e., cone beam artifact images) produced by the high-density materials. While this approach is simple and effective for a small cone angle (i.e., 5-7 degrees), the correction performance degrades as the cone angle increases. In this work, we propose a new method to reduce the cone beam artifacts using a dual energy technique. The basic idea of the proposed method is to estimate the error images generated by the high-density materials more reliably. To do this, projection data of the high-density materials are extracted from dual energy CT projection data using a material decomposition technique, and then reconstructed by iterative reconstruction with total-variation regularization. The reconstructed high-density materials are used to estimate the error images from the original FDK images. The performance of the proposed method is compared with the two-pass algorithm using root mean square errors. The results show that the proposed method reduces the cone beam artifacts more effectively, especially at a large cone angle.

  16. Technology achievements and projections for communication satellites of the future

    NASA Technical Reports Server (NTRS)

    Bagwell, J. W.

    1986-01-01

    Multibeam systems of the future using monolithic microwave integrated circuits (MMICs) to provide phase control and power gain are contrasted with discrete microwave power amplifiers from 10 to 75 W and their associated waveguide feeds, phase shifters, and power splitters. Challenging new enabling technology areas include advanced electrooptical control and signal feeds. Large-scale MMICs will be used, incorporating on-chip control interfaces, latching, and phase and amplitude control with power levels of a few watts each. Beam-forming algorithms for 80 to 90 deg. wide-angle scanning and precise beam forming under wide-ranging environments will be required. Satellite systems using these dynamically reconfigured multibeam antenna systems will demand greater degrees of beam interconnectivity. Multiband and multiservice users will be interconnected through the same space platform. Monolithic switching arrays operating over a wide range of RF and IF frequencies are contrasted with current IF switch technology implemented discretely. Size, weight, and performance improvements by an order of magnitude are projected.

  17. Experimental verification of a 4D MLEM reconstruction algorithm used for in-beam PET measurements in particle therapy

    NASA Astrophysics Data System (ADS)

    Stützer, K.; Bert, C.; Enghardt, W.; Helmbrecht, S.; Parodi, K.; Priegnitz, M.; Saito, N.; Fiedler, F.

    2013-08-01

    In-beam positron emission tomography (PET) has been proven to be a reliable technique in ion beam radiotherapy for the in situ and non-invasive evaluation of the correct dose deposition in static tumour entities. In the presence of intra-fractional target motion an appropriate time-resolved (four-dimensional, 4D) reconstruction algorithm has to be used to avoid reconstructed activity distributions suffering from motion-related blurring artefacts and to allow for a dedicated dose monitoring. Four-dimensional reconstruction algorithms from diagnostic PET imaging that can properly handle the typically low counting statistics of in-beam PET data have been adapted and optimized for the characteristics of the double-head PET scanner BASTEI installed at GSI Helmholtzzentrum Darmstadt, Germany (GSI). Systematic investigations with moving radioactive sources demonstrate the more effective reduction of motion artefacts by applying a 4D maximum likelihood expectation maximization (MLEM) algorithm instead of the retrospective co-registration of phasewise reconstructed quasi-static activity distributions. Further 4D MLEM results are presented from in-beam PET measurements of irradiated moving phantoms which verify the accessibility of relevant parameters for the dose monitoring of intra-fractionally moving targets. From in-beam PET listmode data sets acquired together with a motion surrogate signal, valuable images can be generated by the 4D MLEM reconstruction for different motion patterns and motion-compensated beam delivery techniques.
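
The MLEM update at the heart of such reconstructions is a one-line multiplicative correction; in the 4D case the same update is applied per motion gate with gate-specific system matrices. Below is a static, noiseless toy with a made-up system matrix, meant only to show the core iteration.

```python
import numpy as np

def mlem(A, y, iters=5000):
    """MLEM update x <- x / (A^T 1) * A^T (y / (A x)). The update is
    multiplicative, so non-negativity of the activity estimate is
    preserved automatically."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)                     # sensitivity image, A^T 1
    for _ in range(iters):
        proj = A @ x
        ratio = y / np.maximum(proj, 1e-12)  # measured / estimated counts
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x

# Tiny noiseless example: a 2x2 activity image probed by row, column and
# diagonal sums (6 projection bins; all numbers are illustrative)
A = np.array([[1, 1, 0, 0], [0, 0, 1, 1], [1, 0, 1, 0],
              [0, 1, 0, 1], [1, 0, 0, 1], [0, 1, 1, 0]], float)
x_true = np.array([2.0, 0.5, 1.5, 1.0])
x_rec = mlem(A, A @ x_true)
```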

  18. Pre-correction of distorted Bessel-Gauss beams without wavefront detection

    NASA Astrophysics Data System (ADS)

    Fu, Shiyao; Wang, Tonglu; Zhang, Zheyuan; Zhai, Yanwang; Gao, Chunqing

    2017-12-01

    By exploiting the rapid phase solution of the Gerchberg-Saxton algorithm, we experimentally demonstrate a scheme to correct, with good performance, distorted Bessel-Gauss (BG) beams resulting from inhomogeneous media such as a weakly turbulent atmosphere. A probe Gaussian beam is employed and propagates coaxially with the BG modes through the turbulence. No wavefront sensor is used; instead, a matrix detector captures the probe Gaussian beam, and the correction phase mask is computed by feeding this probe beam into the Gerchberg-Saxton algorithm. The experimental results indicate that both single and multiplexed BG beams can be corrected well, in terms of improved mode purity and mitigated interchannel cross talk.
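
The Gerchberg-Saxton loop that produces such a correction phase mask alternates between two Fourier-conjugate planes, imposing the known amplitude in each. A minimal sketch on consistent synthetic data follows; the Gaussian amplitude, tilt phase, and grid are illustrative.

```python
import numpy as np

def gerchberg_saxton(source_amp, target_amp, iters=300, seed=4):
    """Classic Gerchberg-Saxton loop: impose the measured amplitude in
    each plane while keeping the evolving phase estimate."""
    rng = np.random.default_rng(seed)
    field = source_amp * np.exp(1j * rng.uniform(0, 2 * np.pi, source_amp.shape))
    for _ in range(iters):
        far = np.fft.fft2(field)
        far = target_amp * np.exp(1j * np.angle(far))     # far-field constraint
        near = np.fft.ifft2(far)
        field = source_amp * np.exp(1j * np.angle(near))  # near-field constraint
    return np.angle(field)

# Consistent toy data: Gaussian source amplitude carrying a simple tilt phase
n = 64
x = np.linspace(-3, 3, n)
X, Y = np.meshgrid(x, x)
src = np.exp(-(X ** 2 + Y ** 2))
true_phase = 0.8 * X
tgt = np.abs(np.fft.fft2(src * np.exp(1j * true_phase)))
phi = gerchberg_saxton(src, tgt)
rel_err = (np.linalg.norm(np.abs(np.fft.fft2(src * np.exp(1j * phi))) - tgt)
           / np.linalg.norm(tgt))
```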

  19. Real-time optical measurement of the dynamic body surface for use in guided radiotherapy

    NASA Astrophysics Data System (ADS)

    Price, G. J.; Parkhurst, J. M.; Sharrock, P. J.; Moore, C. J.

    2012-01-01

    Optical measurements are increasingly used in radiotherapy. In this paper we present, in detail, the design and implementation of a multi-channel optical system optimized for fast, high spatial resolution, dynamic body surface measurement in guided therapy. We include all algorithmic modifications and calibration procedures required to create a robust, practical system for clinical use. Comprehensive static and dynamic phantom validation measurements in the radiotherapy treatment room show: conformance with simultaneously measured cone beam CT data to within 1 mm over 62% ± 8% of the surface and 2 mm over 90% ± 3%; agreement with the measured radius of a precision geometrical phantom to within 1 mm; and true real-time performance with image capture through to surface display at 23 Hz. An example patient dataset is additionally included, indicating similar performance in the clinic.

  20. Automated patient setup and gating using cone beam computed tomography projections

    NASA Astrophysics Data System (ADS)

    Wan, Hanlin; Bertholet, Jenny; Ge, Jiajia; Poulsen, Per; Parikh, Parag

    2016-03-01

    In radiation therapy, fiducial markers are often implanted near tumors and used for patient positioning and respiratory gating. These markers are then used to manually align the patients by matching the markers in the cone beam computed tomography (CBCT) reconstruction to those in the planning CT. This step is time-intensive and user-dependent, and often results in a suboptimal patient setup. We propose a fully automated, robust method based on dynamic programming (DP) for segmenting radiopaque fiducial markers in CBCT projection images, which are then used to automatically optimize the treatment couch position and/or gating window bounds. The mean absolute 2D segmentation error of our DP algorithm is 1.3 ± 1.0 mm for 87 markers on 39 patients. Intrafraction images were acquired every 3 s during treatment at two different institutions. For gated patients from Institution A (8 patients, 40 fractions), the DP algorithm increased the delivery accuracy (96 ± 6% versus 91 ± 11%, p < 0.01) compared to the manual setup using kV fluoroscopy. For non-gated patients from Institution B (6 patients, 16 fractions), the DP algorithm performed similarly (1.5 ± 0.8 mm versus 1.6 ± 0.9 mm, p = 0.48) compared to the manual setup matching the fiducial markers in the CBCT to the mean position. Our proposed automated patient setup algorithm takes only 1-2 s to run, requires no user intervention, and performs as well as or better than the current clinical setup.
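
A dynamic-programming segmentation of this kind can be sketched as a Viterbi-style shortest path over per-frame marker candidates: each frame contributes a detection cost, and jumps between candidate positions in consecutive frames are penalized. The candidate costs and positions below are made up; the real algorithm operates on 2D detections in the projection images.

```python
import numpy as np

def dp_track(costs, positions, smooth=1.0):
    """Pick one candidate per frame minimizing detection cost plus a
    position-jump penalty, via dynamic programming with backtracking."""
    T = len(costs)
    acc = [np.asarray(costs[0], float)]
    back = []
    for t in range(1, T):
        jump = smooth * np.abs(positions[t][None, :] - positions[t - 1][:, None])
        total = acc[-1][:, None] + jump + np.asarray(costs[t])[None, :]
        back.append(total.argmin(axis=0))   # best predecessor per candidate
        acc.append(total.min(axis=0))
    path = [int(acc[-1].argmin())]
    for bp in reversed(back):               # backtrack the optimal path
        path.append(int(bp[path[-1]]))
    return path[::-1]

# Hypothetical candidates per frame: the first candidate drifts smoothly
# (the true marker), the second is a distant false detection.
costs = [[0.2, 0.1], [0.3, 0.1], [0.1, 0.4]]
positions = [np.array([10.0, 50.0]), np.array([11.0, 52.0]),
             np.array([12.0, 49.0])]
path = dp_track(costs, positions)
```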

  1. Simulator for beam-based LHC collimator alignment

    NASA Astrophysics Data System (ADS)

    Valentino, Gianluca; Aßmann, Ralph; Redaelli, Stefano; Sammut, Nicholas

    2014-02-01

    In the CERN Large Hadron Collider, collimators need to be set up to form a multistage hierarchy to ensure efficient multiturn cleaning of halo particles. Automatic algorithms were introduced during the first run to reduce the beam time required for beam-based setup, improve the alignment accuracy, and reduce the risk of human errors. Simulating the alignment procedure would allow for off-line tests of alignment policies and algorithms. A simulator was developed based on a diffusion beam model to generate the characteristic beam loss signal spike and decay produced when a collimator jaw touches the beam, which is observed in a beam loss monitor (BLM). Empirical models derived from the available measurement data are used to simulate the steady-state beam loss and crosstalk between multiple BLMs. The simulator design is presented, together with simulation results and comparison to measurement data.

  2. A BPF-FBP tandem algorithm for image reconstruction in reverse helical cone-beam CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cho, Seungryong; Xia, Dan; Pellizzari, Charles A.

    2010-01-15

    Purpose: Reverse helical cone-beam computed tomography (CBCT) is a scanning configuration for potential applications in image-guided radiation therapy in which an accurate anatomic image of the patient is needed for image-guidance procedures. The authors previously developed an algorithm for image reconstruction from nontruncated data of an object that is completely within the reverse helix. The purpose of this work is to develop an image reconstruction approach for reverse helical CBCT of a long object that extends out of the reverse helix and therefore constitutes data truncation. Methods: The proposed approach comprises two reconstruction steps. In the first step, a chord-based backprojection-filtration (BPF) algorithm reconstructs a volumetric image of the object from the original cone-beam data. Because there exists a chordless region in the middle of the reverse helix, the image obtained in the first step contains an unreconstructed central-gap region. In the second step, the gap region is reconstructed by use of a Pack-Noo-formula-based filtered-backprojection (FBP) algorithm from the modified cone-beam data obtained by subtracting from the original cone-beam data the reprojection of the image reconstructed in the first step. Results: The authors have performed numerical studies to validate the proposed approach for image reconstruction from reverse helical cone-beam data. The results confirm that the proposed approach can reconstruct accurate images of a long object without suffering from data-truncation artifacts or cone-angle artifacts. Conclusions: The authors developed and validated a BPF-FBP tandem algorithm to reconstruct images of a long object from reverse helical cone-beam data. The chord-based BPF algorithm was utilized to convert the long-object problem into a short-object problem. The proposed approach is applicable to other scanning configurations such as reduced circular sinusoidal trajectories.

  3. Image reconstruction in cone-beam CT with a spherical detector using the BPF algorithm

    NASA Astrophysics Data System (ADS)

    Zuo, Nianming; Zou, Yu; Jiang, Tianzi; Pan, Xiaochuan

    2006-03-01

    Both flat-panel detectors and cylindrical detectors have been used in CT systems for data acquisition. A cylindrical detector generally samples a transverse image plane more uniformly than a flat-panel detector does. In the longitudinal dimension, however, cylindrical and flat-panel detectors offer similar sampling of the image space. In this work, we investigate a detector of spherical shape, which can yield uniform sampling of the 3D image space because the solid angle subtended by each individual detector bin remains unchanged. We have extended the backprojection-filtration (BPF) algorithm, which we previously developed for cone-beam CT, to reconstruct images in cone-beam CT with a spherical detector. We also conduct computer-simulation studies to validate the extended BPF algorithm. Quantitative results in these numerical studies indicate that accurate images can be obtained from data acquired with a spherical detector by use of our extended BPF cone-beam algorithm.

  4. Three-dimensional particle simulation of heavy-ion fusion beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Friedman, A.; Grote, D.P.; Haber, I.

    1992-07-01

    The beams in a heavy-ion-beam-driven inertial fusion (HIF) accelerator are collisionless, nonneutral plasmas, confined by applied magnetic and electric fields. These space-charge-dominated beams must be focused onto small (few mm) spots at the fusion target, and so preservation of a small emittance is crucial. The nonlinear beam self-fields can lead to emittance growth, and so a self-consistent field description is needed. To this end, a multidimensional particle simulation code, WARP (Friedman et al., Part. Accel. 37-38, 131 (1992)), has been developed and is being used to study the transport of HIF beams. The code's three-dimensional (3-D) package combines features of an accelerator code and a particle-in-cell plasma simulation. Novel techniques allow it to follow beams through many accelerator elements over long distances and around bends. This paper first outlines the algorithms employed in WARP. A number of applications and corresponding results are then presented. These applications include studies of: beam drift-compression in a misaligned lattice of quadrupole focusing magnets; beam equilibria, and the approach to equilibrium; and the MBE-4 experiment (AIP Conference Proceedings 152 (AIP, New York, 1986), p. 145) recently concluded at Lawrence Berkeley Laboratory (LBL). Finally, 3-D simulations of bent-beam dynamics relevant to the planned Induction Linac Systems Experiments (ILSE) (Fessenden, Nucl. Instrum. Methods Plasma Res. A 278, 13 (1989)) at LBL are described. Axially cold beams are observed to exhibit little or no root-mean-square emittance growth at midpulse in transiting a (sharp) bend. Axially hot beams, in contrast, do exhibit some emittance growth.

  5. The Born approximation, multiple scattering, and the butterfly algorithm

    NASA Astrophysics Data System (ADS)

    Martinez, Alejandro F.

    Radar works by focusing a beam of electromagnetic waves and measuring how long it takes to reflect. To see a large region, the beam is pointed in different directions. The focus of the beam depends on the size of the antenna (called an aperture). Synthetic aperture radar (SAR) works by moving the antenna through some region of space. A fundamental assumption in SAR is that waves only bounce once. Several imaging algorithms have been designed using that assumption. The scattering process can be described by iterations of a badly behaved integral. Recently a method for efficiently evaluating these types of integrals, the butterfly algorithm, has been developed. We will give a detailed implementation of this algorithm and apply it to study the multiple scattering effects in SAR using target estimates from single-scattering algorithms.

  6. Modeling laser-driven electron acceleration using WARP with Fourier decomposition

    DOE PAGES

    Lee, P.; Audet, T. L.; Lehe, R.; ...

    2015-12-31

    WARP is used with the recent implementation of the Fourier decomposition algorithm to model laser-driven electron acceleration in plasmas. Simulations were carried out to analyze the experimental results obtained on ionization-induced injection in a gas cell. The simulated results are in good agreement with the experimental ones, confirming the ability of the code to take into account the physics of electron injection and reduce calculation time. We present a detailed analysis of the laser propagation, the plasma wave generation and the electron beam dynamics.

  8. Automatic Beam Path Analysis of Laser Wakefield Particle Acceleration Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rubel, Oliver; Geddes, Cameron G.R.; Cormier-Michel, Estelle

    2009-10-19

    Numerical simulations of laser wakefield particle accelerators play a key role in the understanding of the complex acceleration process and in the design of expensive experimental facilities. As the size and complexity of simulation output grows, an increasingly acute challenge is the practical need for computational techniques that aid in scientific knowledge discovery. To that end, we present a set of data-understanding algorithms that work in concert in a pipeline fashion to automatically locate and analyze high energy particle bunches undergoing acceleration in very large simulation datasets. These techniques work cooperatively by first identifying features of interest in individual timesteps, then integrating features across timesteps, and based on the information derived perform analysis of temporally dynamic features. This combination of techniques supports accurate detection of particle beams enabling a deeper level of scientific understanding of physical phenomena than has been possible before. By combining efficient data analysis algorithms and state-of-the-art data management we enable high-performance analysis of extremely large particle datasets in 3D. We demonstrate the usefulness of our methods for a variety of 2D and 3D datasets and discuss the performance of our analysis pipeline.

  9. The accurate particle tracer code

    DOE PAGES

    Wang, Yulei; Liu, Jian; Qin, Hong; ...

    2017-07-20

    The Accurate Particle Tracer (APT) code is designed for systematic large-scale applications of geometric algorithms for particle dynamical simulations. Based on a large variety of advanced geometric algorithms, APT possesses long-term numerical accuracy and stability, which are critical for solving multi-scale and nonlinear problems. To provide a flexible and convenient I/O interface, the libraries of Lua and HDF5 are used. Following a three-step procedure, users can efficiently extend the libraries of electromagnetic configurations, external non-electromagnetic forces, particle pushers, and initialization approaches by use of the extendible module. APT has been used in simulations of key physical problems, such as runaway electrons in tokamaks and energetic particles in the Van Allen belt. As an important realization, the APT-SW version has been successfully deployed on the world’s fastest computer, the Sunway TaihuLight supercomputer, by supporting the master–slave architecture of Sunway many-core processors. Here, based on large-scale simulations of a runaway beam under parameters of the ITER tokamak, it is revealed that the magnetic ripple field can disperse the pitch-angle distribution significantly and at the same time improve the confinement of the energetic runaway beam.

  10. The accurate particle tracer code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Yulei; Liu, Jian; Qin, Hong

    The Accurate Particle Tracer (APT) code is designed for systematic large-scale applications of geometric algorithms for particle dynamical simulations. Based on a large variety of advanced geometric algorithms, APT possesses long-term numerical accuracy and stability, which are critical for solving multi-scale and nonlinear problems. To provide a flexible and convenient I/O interface, the libraries of Lua and HDF5 are used. Following a three-step procedure, users can efficiently extend the libraries of electromagnetic configurations, external non-electromagnetic forces, particle pushers, and initialization approaches by use of the extendible module. APT has been used in simulations of key physical problems, such as runaway electrons in tokamaks and energetic particles in the Van Allen belt. As an important realization, the APT-SW version has been successfully deployed on the world’s fastest computer, the Sunway TaihuLight supercomputer, by supporting the master–slave architecture of Sunway many-core processors. Here, based on large-scale simulations of a runaway beam under parameters of the ITER tokamak, it is revealed that the magnetic ripple field can disperse the pitch-angle distribution significantly and at the same time improve the confinement of the energetic runaway beam.

  11. Assessment of directionality performances: comparison between Freedom and CP810 sound processors.

    PubMed

    Razza, Sergio; Albanese, Greta; Ermoli, Lucilla; Zaccone, Monica; Cristofari, Eliana

    2013-10-01

    To compare speech recognition in noise for the Nucleus Freedom and CP810 sound processors using different directional settings among those available in the SmartSound portfolio. Single-subject, repeated measures study. Tertiary care referral center. Thirty-one monaurally and binaurally implanted subjects (24 children and 7 adults) were enrolled. They were all experienced Nucleus Freedom sound processor users and achieved a 100% open-set word recognition score in quiet listening conditions. Each patient was fitted with the Freedom and the CP810 processor. The program setting incorporated Adaptive Dynamic Range Optimization (ADRO) and adopted the directional algorithms BEAM (both devices) and ZOOM (only on CP810). Speech reception threshold (SRT) was assessed in a free-field layout, with a disyllabic word list and interfering multilevel babble noise, in the 3 different pre-processing configurations. On average, CP810 significantly improved patients' SRTs as compared to the Freedom SP after 1 hour of use. In contrast, no significant difference was observed in patients' SRT between the BEAM and the ZOOM algorithms fitted in the CP810 processor. The results suggest that hardware developments achieved in the design of CP810 allow an immediate and relevant directional advantage as compared to the previous-generation Freedom device.

  12. Simulations of Dynamical Friction Including Spatially-Varying Magnetic Fields

    NASA Astrophysics Data System (ADS)

    Bell, G. I.; Bruhwiler, D. L.; Litvinenko, V. N.; Busby, R.; Abell, D. T.; Messmer, P.; Veitzer, S.; Cary, J. R.

    2006-03-01

    A proposed luminosity upgrade to the Relativistic Heavy Ion Collider (RHIC) includes a novel electron cooling section, which would use ~55 MeV electrons to cool fully-ionized 100 GeV/nucleon gold ions. We consider the dynamical friction force exerted on individual ions due to a relevant electron distribution. The electrons may be focused by a strong solenoid field, with sensitive dependence on errors, or by a wiggler field. In the rest frame of the relativistic co-propagating electron and ion beams, where the friction force can be simulated for nonrelativistic motion and electrostatic fields, the Lorentz transform of these spatially-varying magnetic fields includes strong, rapidly-varying electric fields. Previous friction force simulations for unmagnetized electrons or error-free solenoids used a 4th-order Hermite algorithm, which is not well-suited for the inclusion of strong, rapidly-varying external fields. We present here a new algorithm for friction force simulations, using an exact two-body collision model to accurately resolve close interactions between electron/ion pairs. This field-free binary-collision model is combined with a modified Boris push, using an operator-splitting approach, to include the effects of external fields. The algorithm has been implemented in the VORPAL code and successfully benchmarked.
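    The modified Boris push mentioned above builds on the standard Boris scheme, which splits each step into a half electric kick, a magnetic rotation, and a second half kick. A minimal nonrelativistic sketch of the standard scheme (not the VORPAL implementation; all names and parameters here are illustrative):

```python
import numpy as np

def boris_push(x, v, E, B, q_m, dt):
    """One nonrelativistic Boris step for charge-to-mass ratio q_m."""
    v_minus = v + 0.5 * q_m * E * dt          # half electric kick
    t = 0.5 * q_m * B * dt                    # rotation vector
    s = 2.0 * t / (1.0 + t @ t)
    v_prime = v_minus + np.cross(v_minus, t)  # magnetic rotation...
    v_plus = v_minus + np.cross(v_prime, s)   # ...that preserves |v| exactly
    v_new = v_plus + 0.5 * q_m * E * dt       # second half electric kick
    return x + v_new * dt, v_new
```

    The rotation step conserves kinetic energy in a pure magnetic field, which is why the scheme is a common building block for long-time particle pushing; the abstract's algorithm interleaves it with an exact two-body collision model via operator splitting.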

  13. Region of Interest Imaging for a General Trajectory with the Rebinned BPF Algorithm*

    PubMed Central

    Bian, Junguo; Xia, Dan; Sidky, Emil Y; Pan, Xiaochuan

    2010-01-01

    The back-projection-filtration (BPF) algorithm has been applied to image reconstruction for cone-beam configurations with general source trajectories. The BPF algorithm can reconstruct 3-D region-of-interest (ROI) images from data containing truncations. However, like many other existing algorithms for cone-beam configurations, the BPF algorithm involves a back-projection with a spatially varying weighting factor, which can result in non-uniform noise levels in reconstructed images and increased computation time. In this work, we propose a BPF algorithm to eliminate the spatially varying weighting factor by using a rebinned geometry for a general scanning trajectory. This proposed BPF algorithm has improved noise properties, while retaining the advantages of the original BPF algorithm such as its minimum data requirement. PMID:20617122
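    To illustrate the kind of spatially varying factor that rebinning removes, consider the textbook fan-beam backprojection weight 1/U², which depends on pixel position and view angle; after rebinning to a parallel geometry the weight is uniform. This is a 2D analogue for illustration, not the cone-beam BPF factor itself.

```python
import numpy as np

def fan_weight(x, y, theta, D):
    """Fan-beam backprojection weight 1/U^2 for source distance D at view
    angle theta. U = 1 at the rotation center and varies across the image;
    rebinned parallel-beam data would use a uniform weight instead."""
    U = (D + x * np.sin(theta) - y * np.cos(theta)) / D
    return 1.0 / U**2
```

    Because the weight differs from pixel to pixel, noise is amplified unevenly across the reconstruction, which is the non-uniformity the rebinned BPF algorithm is designed to avoid.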

  14. Region of Interest Imaging for a General Trajectory with the Rebinned BPF Algorithm.

    PubMed

    Bian, Junguo; Xia, Dan; Sidky, Emil Y; Pan, Xiaochuan

    2010-02-01

    The back-projection-filtration (BPF) algorithm has been applied to image reconstruction for cone-beam configurations with general source trajectories. The BPF algorithm can reconstruct 3-D region-of-interest (ROI) images from data containing truncations. However, like many other existing algorithms for cone-beam configurations, the BPF algorithm involves a back-projection with a spatially varying weighting factor, which can result in non-uniform noise levels in reconstructed images and increased computation time. In this work, we propose a BPF algorithm to eliminate the spatially varying weighting factor by using a rebinned geometry for a general scanning trajectory. This proposed BPF algorithm has improved noise properties, while retaining the advantages of the original BPF algorithm such as its minimum data requirement.

  15. Recommendations for dose calculations of lung cancer treatment plans treated with stereotactic ablative body radiotherapy (SABR)

    NASA Astrophysics Data System (ADS)

    Devpura, S.; Siddiqui, M. S.; Chen, D.; Liu, D.; Li, H.; Kumar, S.; Gordon, J.; Ajlouni, M.; Movsas, B.; Chetty, I. J.

    2014-03-01

    The purpose of this study was to systematically evaluate dose distributions computed with 5 different dose algorithms for patients with lung cancers treated using stereotactic ablative body radiotherapy (SABR). Treatment plans for 133 lung cancer patients, initially computed with a 1D-pencil beam (equivalent-path-length, EPL-1D) algorithm, were recalculated with 4 other algorithms commissioned for treatment planning, including 3-D pencil-beam (EPL-3D), anisotropic analytical algorithm (AAA), collapsed cone convolution superposition (CCC), and Monte Carlo (MC). The plan prescription dose was 48 Gy in 4 fractions normalized to the 95% isodose line. Tumors were classified according to location: peripheral tumors surrounded by lung (lung-island, N=39), peripheral tumors attached to the rib-cage or chest wall (lung-wall, N=44), and centrally-located tumors (lung-central, N=50). Relative to the EPL-1D algorithm, PTV D95 and mean dose values computed with the other 4 algorithms were lowest for lung-island tumors with the smallest field sizes (3-5 cm). On the other hand, the smallest differences were noted for lung-central tumors treated with the largest field widths (7-10 cm). Amongst all locations, dose distribution differences were most strongly correlated with tumor size for lung-island tumors. For most cases, convolution/superposition and MC algorithms were in good agreement. Mean lung dose (MLD) values computed with the EPL-1D algorithm were highly correlated with those of the other algorithms (correlation coefficient =0.99). The MLD values were found to be ~10% lower for small lung-island tumors with the model-based (convolution/superposition and MC) algorithms than with the correction-based (pencil-beam) algorithms, with the model-based algorithms predicting greater low-dose spread within the lungs. This study suggests that pencil beam algorithms should be avoided for lung SABR planning.
For the most challenging cases, small tumors surrounded entirely by lung tissue (lung-island type), a Monte-Carlo-based algorithm may be warranted.

  16. Exact BPF and FBP algorithms for nonstandard saddle curves.

    PubMed

    Yu, Hengyong; Zhao, Shiying; Ye, Yangbo; Wang, Ge

    2005-11-01

    A hot topic in cone-beam CT research is exact cone-beam reconstruction from a general scanning trajectory. Particularly, a nonstandard saddle curve attracts attention, as this construct allows the continuous periodic scanning of a volume-of-interest (VOI). Here we evaluate two algorithms for reconstruction from data collected along a nonstandard saddle curve, which are in the filtered backprojection (FBP) and backprojection filtration (BPF) formats, respectively. Both algorithms are implemented in a chord-based coordinate system. Then, a rebinning procedure is utilized to transform the reconstructed results into the natural coordinate system. The simulation results demonstrate that the FBP algorithm produces better image quality than the BPF algorithm, while both exhibit similar noise characteristics.

  17. A reconstruction algorithm for helical CT imaging on PI-planes.

    PubMed

    Liang, Hongzhu; Zhang, Cishen; Yan, Ming

    2006-01-01

    In this paper, a Feldkamp-type approximate reconstruction algorithm is presented for helical cone-beam computed tomography. To effectively suppress artifacts due to large cone angle scanning, it is proposed to reconstruct the object point by point on unique customized tilted PI-planes which are close to the data-collecting helices of the corresponding points. Such a reconstruction scheme can considerably suppress cone-angle scanning artifacts. Computer simulations show that the proposed algorithm can provide improved imaging performance compared with the existing approximate cone-beam reconstruction algorithms.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beltran, C; Kamal, H

    Purpose: To provide a multicriteria optimization algorithm for intensity modulated radiation therapy using pencil proton beam scanning. Methods: Intensity modulated radiation therapy using pencil proton beam scanning requires efficient optimization algorithms to overcome the uncertainties in the Bragg peak locations. This work is focused on optimization algorithms that are based on Monte Carlo simulation of the treatment planning and use the weights and the dose volume histogram (DVH) control points to steer toward desired plans. The proton beam treatment planning process based on single objective optimization (representing a weighted sum of multiple objectives) usually leads to time-consuming iterations involving treatment planning team members. We provide a time-efficient multicriteria optimization algorithm that is developed to run on an NVIDIA GPU (Graphical Processing Unit) cluster. The multicriteria optimization algorithm running time benefits from up-sampling of the CT voxel size of the calculations without loss of fidelity. Results: We will present preliminary results of multicriteria optimization for intensity modulated proton therapy based on DVH control points. The results will show optimization results of a phantom case and a brain tumor case. Conclusion: The multicriteria optimization of intensity modulated radiation therapy using pencil proton beam scanning provides a novel tool for treatment planning. Work supported by a grant from Varian Inc.

  19. Study on transient beam loading compensation for China ADS proton linac injector II

    NASA Astrophysics Data System (ADS)

    Gao, Zheng; He, Yuan; Wang, Xian-Wu; Chang, Wei; Zhang, Rui-Feng; Zhu, Zheng-Long; Zhang, Sheng-Hu; Chen, Qi; Powers, Tom

    2016-05-01

    Significant transient beam loading effects were observed during beam commissioning tests of prototype II of the injector for the accelerator driven sub-critical (ADS) system, which took place at the Institute of Modern Physics, Chinese Academy of Sciences, between October and December 2014. During these tests, experiments were performed with continuous wave (CW) operation of the cavities with pulsed beam current, and the system was configured to make use of a prototype digital low level radio frequency (LLRF) controller. The system was originally operated in pulsed mode with a simple proportional-integral-derivative (PID) feedback control algorithm, which was not able to maintain the desired gradient regulation during pulsed 10 mA beam operations. A simple transient beam loading compensation method, which made use of a combination of proportional and integral (PI) feedback and a feedforward control algorithm, was implemented in order to significantly reduce the beam-induced transient effect in the cavity gradients. The superconducting cavity field variation was reduced to less than 1.7% after turning on this control algorithm. The design and experimental results of this system are presented in this paper. Supported by National Natural Science Foundation of China (91426303, 11525523)
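    The benefit of adding feedforward to PI feedback on a pulsed beam disturbance can be reproduced in a toy first-order cavity model. All parameters below are invented for illustration, not the ADS injector values, and `run` is a hypothetical helper:

```python
def run(feedforward):
    """Simulate a first-order cavity with PI feedback and optional
    beam-loading feedforward; return the max gradient error after settling."""
    dt, tau = 1e-4, 1e-3          # time step and cavity fill time (invented)
    kp, ki = 2.0, 50.0            # PI gains (invented)
    v, integ, v_ref = 0.0, 0.0, 1.0
    max_dev = 0.0
    for n in range(20000):
        t = n * dt
        i_beam = 0.5 if 0.5 <= t < 1.5 else 0.0   # pulsed beam disturbance
        e = v_ref - v
        integ += e * dt
        u = kp * e + ki * integ
        if feedforward:
            u += i_beam           # cancel the known beam loading term
        v += dt * (-v + u - i_beam) / tau
        if t > 0.4:               # measure only after initial settling
            max_dev = max(max_dev, abs(v_ref - v))
    return max_dev
```

    Pure feedback only reacts after the beam pulse has already perturbed the field, so a transient appears at each pulse edge; the feedforward term injects the known compensation at the same instant as the disturbance, which is the essence of the method described in the abstract.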

  20. Performance of a Nanometer Resolution BPM System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vogel, V; Hayano, H; Honda, Y

    2005-10-14

    International Linear Collider (ILC) interaction region beam sizes and component position stability requirements will be as small as a few nanometers. It is important to the ongoing ILC design effort to demonstrate that these tolerances can be achieved, ideally using beam-based stability measurements. It has been estimated that an RF cavity BPM with modern waveform processing could provide a position measurement resolution of less than one nanometer. Such a system could form the basis of the desired beam-based stability measurement, as well as be used for other specialized purposes. They have developed a high resolution RF cavity BPM and associated electronics. A triplet comprised of these BPMs has been installed in the extraction line of the KEK Accelerator Test Facility (ATF) for testing with its ultra-low emittance beam. The three BPMs are rigidly mounted inside an alignment frame on six variable-length struts which can be used to move the BPMs in position and angle. They have developed novel methods for extracting the position and tilt information from the BPM signals, including a robust calibration algorithm which is immune to beam jitter. To date, they have been able to demonstrate a resolution of approximately 20 nm over a dynamic range of ±20 μm. They report on the progress of these ongoing tests.

  1. TU-CD-304-01: FEATURED PRESENTATION and BEST IN PHYSICS (THERAPY): Trajectory Modulated Arc Therapy: Development of Novel Arc Delivery Techniques Integrating Dynamic Table Motion for Extended Volume Treatments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chin, E; Hoppe, R; Million, L

    2015-06-15

    Purpose: Integration of coordinated robotic table motion with inversely-planned arc delivery has the potential to resolve table-top delivery limitations of large-field treatments such as Total Body Irradiation (TBI), Total Lymphoid Irradiation (TLI), and Cranial-Spinal Irradiation (CSI). We formulate the foundation for Trajectory Modulated Arc Therapy (TMAT), and using Varian Developer Mode capabilities, experimentally investigate its practical implementation for such techniques. Methods: A MATLAB algorithm was developed for inverse planning optimization of the table motion, MLC positions, and gantry motion under extended-SSD geometry. To maximize the effective field size, delivery trajectories for TMAT TBI were formed with the table rotated at 270° IEC and dropped vertically to 152.5 cm SSD. Preliminary testing of algorithm parameters was done through retrospective planning analysis. Robotic delivery was programmed using custom XML scripting on the TrueBeam Developer Mode platform. Final dose was calculated using the Eclipse AAA algorithm. Initial verification of delivery accuracy was measured using OSLDs on a solid water phantom of varying thickness. Results: A comparison of DVH curves demonstrated that dynamic couch motion irradiation was sufficiently approximated by static control points spaced in intervals of less than 2 cm. Optimized MLC motion decreased the average lung dose to 68.5% of the prescription dose. The programmed irradiation integrating coordinated table motion was deliverable on a TrueBeam STx linac in 6.7 min. With the couch translating under an open 10 cm x 20 cm field angled at 10°, OSLD measurements along the midline of a solid water phantom at depths of 3, 5, and 9 cm were within 3% of the TPS AAA algorithm with an average deviation of 1.2%. Conclusion: A treatment planning and delivery system for Trajectory Modulated Arc Therapy of extended volumes has been established and experimentally demonstrated for TBI.
Extension to other treatment techniques such as TLI and CSI is readily achievable through the developed platform. Grant funding by Varian Medical Systems.

  2. The Pointing Self-calibration Algorithm for Aperture Synthesis Radio Telescopes

    NASA Astrophysics Data System (ADS)

    Bhatnagar, S.; Cornwell, T. J.

    2017-11-01

    This paper is concerned with algorithms for calibration of direction-dependent effects (DDE) in aperture synthesis radio telescopes (ASRT). After correction of direction-independent effects (DIE) using self-calibration, imaging performance can be limited by the imprecise knowledge of the forward gain of the elements in the array. In general, the forward gain pattern is directionally dependent and varies with time due to a number of reasons. Some factors, such as rotation of the primary beam with Parallactic Angle for Azimuth-Elevation mount antennas, are known a priori. Some, such as antenna pointing errors and structural deformation/projection effects for aperture-array elements, cannot be measured a priori. Thus, in addition to algorithms to correct for DD effects known a priori, algorithms to solve for DD gains are required for high dynamic range imaging. Here, we discuss a mathematical framework for antenna-based DDE calibration algorithms and show that this framework leads to computationally efficient optimal algorithms that scale well in a parallel computing environment. As an example of an antenna-based DD calibration algorithm, we demonstrate the Pointing SelfCal (PSC) algorithm to solve for the antenna pointing errors. Our analysis shows that the sensitivity of modern ASRT is sufficient to solve for antenna pointing errors and other DD effects. We also discuss the use of the PSC algorithm in real-time calibration systems and extensions to an antenna Shape SelfCal algorithm for real-time tracking and corrections for pointing offsets and changes in antenna shape.

  3. The Pointing Self-calibration Algorithm for Aperture Synthesis Radio Telescopes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhatnagar, S.; Cornwell, T. J., E-mail: sbhatnag@nrao.edu

    This paper is concerned with algorithms for calibration of direction-dependent effects (DDE) in aperture synthesis radio telescopes (ASRT). After correction of direction-independent effects (DIE) using self-calibration, imaging performance can be limited by the imprecise knowledge of the forward gain of the elements in the array. In general, the forward gain pattern is directionally dependent and varies with time due to a number of reasons. Some factors, such as rotation of the primary beam with Parallactic Angle for Azimuth–Elevation mount antennas, are known a priori. Some, such as antenna pointing errors and structural deformation/projection effects for aperture-array elements, cannot be measured a priori. Thus, in addition to algorithms to correct for DD effects known a priori, algorithms to solve for DD gains are required for high dynamic range imaging. Here, we discuss a mathematical framework for antenna-based DDE calibration algorithms and show that this framework leads to computationally efficient optimal algorithms that scale well in a parallel computing environment. As an example of an antenna-based DD calibration algorithm, we demonstrate the Pointing SelfCal (PSC) algorithm to solve for the antenna pointing errors. Our analysis shows that the sensitivity of modern ASRT is sufficient to solve for antenna pointing errors and other DD effects. We also discuss the use of the PSC algorithm in real-time calibration systems and extensions to an antenna Shape SelfCal algorithm for real-time tracking and corrections for pointing offsets and changes in antenna shape.

  4. The dynamics and control of large flexible space structures-V

    NASA Technical Reports Server (NTRS)

    Bainum, P. M.; Reddy, A. S. S. R.; Diarra, C. M.; Kumar, V. K.

    1982-01-01

    A general survey of the progress made in the areas of mathematical modelling of the system dynamics, structural analysis, development of control algorithms, and simulation of environmental disturbances is presented. The use of graph theory techniques is employed to examine the effects of inherent damping associated with LSST systems on the number and locations of the required control actuators. A mathematical model of the forces and moments induced on a flexible orbiting beam due to solar radiation pressure is developed, and typical steady state open loop responses are obtained for the case when rotations and vibrations are limited to occur within the orbit plane. A preliminary controls analysis based on a truncated (13-mode) finite element model of the 122 m Hoop/Column antenna indicates that a minimum of six appropriately placed actuators is required for controllability. An algorithm to evaluate the coefficients which describe coupling between the rigid rotational and flexible modes and also intramodal coupling was developed, and numerical evaluation based on the finite element model of the Hoop/Column system is currently in progress.

  5. Attitude tracking control of flexible spacecraft with large amplitude slosh

    NASA Astrophysics Data System (ADS)

    Deng, Mingle; Yue, Baozeng

    2017-12-01

    This paper is focused on attitude tracking control of a spacecraft that is equipped with a flexible appendage and a partially filled liquid propellant tank. The large amplitude liquid slosh is included by using a moving pulsating ball model that is further improved to estimate the settling location of liquid in a microgravity or zero-g environment. The flexible appendage is modelled as a three-dimensional Bernoulli-Euler beam, and the assumed modal method is employed. A hybrid controller that combines sliding mode control with an adaptive algorithm is designed for the spacecraft to perform attitude tracking. The proposed controller is proven to be asymptotically stable. A nonlinear model for the overall coupled system including spacecraft attitude dynamics, liquid slosh, structural vibration and control action is established. Numerical simulation results are presented to show the dynamic behaviors of the coupled system and to verify the effectiveness of the control approach when the spacecraft undergoes the disturbance produced by large amplitude slosh and appendage vibration. Lastly, the designed adaptive algorithm is found to be effective in improving the precision of attitude tracking.
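    The core mechanism of sliding mode control under a bounded disturbance can be shown in a single-axis toy problem. This is a generic textbook sketch under invented parameters, not the paper's hybrid adaptive controller; `track` and its gains are hypothetical:

```python
import numpy as np

def track(k=5.0, lam=2.0, dt=1e-3, steps=5000):
    """1-DOF sliding-mode regulation toy: J*theta'' = u + d(t), sliding
    surface s = e' + lam*e, smoothed switching via tanh. Returns |e| at end."""
    J, theta, omega = 1.0, 0.3, 0.0      # inertia and initial attitude error
    for n in range(steps):
        t = n * dt
        e, ed = theta, omega             # regulation about zero attitude
        s = ed + lam * e                 # sliding variable
        d = 0.2 * np.sin(3.0 * t)        # bounded slosh-like disturbance
        u = -J * lam * ed - k * np.tanh(s / 0.01)   # equivalent + switching term
        omega += dt * (u + d) / J
        theta += dt * omega
    return abs(theta)
```

    Choosing the switching gain k larger than the disturbance bound drives s to a neighborhood of zero, after which the error decays along s = 0 regardless of the slosh-like forcing; the tanh smoothing trades exact invariance for reduced chattering.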

  6. Study on beam geometry and image reconstruction algorithm in fast neutron computerized tomography at NECTAR facility

    NASA Astrophysics Data System (ADS)

    Guo, J.; Bücherl, T.; Zou, Y.; Guo, Z.

    2011-09-01

    Investigations on the fast neutron beam geometry for the NECTAR facility are presented. The results of MCNP simulations and experimental measurements of the beam distributions at NECTAR are compared. Boltzmann functions are used to describe the beam profile in the detection plane, assuming the area source to be made up of a large number of single neutron point sources. An iterative algebraic reconstruction algorithm is developed, realized and verified by both simulated and measured projection data. The feasibility of improved reconstruction in fast neutron computerized tomography at the NECTAR facility is demonstrated.
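    Iterative algebraic reconstruction of the kind described can be sketched with Kaczmarz/ART updates on a small linear system. This is the generic textbook scheme, not the NECTAR implementation; the system matrix below is a toy example:

```python
import numpy as np

def art(A, b, n_sweeps=500, relax=1.0):
    """ART (Kaczmarz): cycle over rays, projecting the current image
    estimate onto each ray's measurement hyperplane a_i . x = b_i."""
    x = np.zeros(A.shape[1])
    for _ in range(n_sweeps):
        for a_i, b_i in zip(A, b):
            x += relax * (b_i - a_i @ x) / (a_i @ a_i) * a_i
    return x
```

    Each row of A holds one ray's intersection lengths with the image pixels, and b holds the measured projections; with consistent data and enough independent rays the iteration converges to the true image.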

  7. The thermal-wave model: A Schroedinger-like equation for charged particle beam dynamics

    NASA Technical Reports Server (NTRS)

    Fedele, Renato; Miele, G.

    1994-01-01

    We review some results on longitudinal beam dynamics obtained in the framework of the Thermal Wave Model (TWM). In this model, which has recently shown the capability to describe both longitudinal and transverse dynamics of charged particle beams, the beam dynamics is governed by Schroedinger-like equations for the beam wave functions, whose squared modulus is proportional to the beam density profile. Remarkably, the role of the Planck constant is played by a diffractive constant epsilon, the emittance, which has a thermal nature.
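    The Schroedinger-like equation referred to here takes, in a standard form of the model (with s the propagation distance, x the longitudinal coordinate, and U an equivalent potential; the notation is an assumption based on the abstract's description):

```latex
i\,\epsilon\,\frac{\partial \psi}{\partial s}
  = -\frac{\epsilon^{2}}{2}\,\frac{\partial^{2} \psi}{\partial x^{2}}
  + U(x,s)\,\psi ,
\qquad
\rho(x,s) \propto |\psi(x,s)|^{2},
```

    where the emittance epsilon plays the role of the Planck constant, exactly as stated in the abstract.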

  8. POD Model Reconstruction for Gray-Box Fault Detection

    NASA Technical Reports Server (NTRS)

    Park, Han; Zak, Michail

    2007-01-01

    Proper orthogonal decomposition (POD) is the mathematical basis of a method of constructing low-order mathematical models for the "gray-box" fault-detection algorithm that is a component of a diagnostic system known as beacon-based exception analysis for multi-missions (BEAM). POD has been successfully applied in reducing computational complexity by generating simple models that can be used for control and simulation for complex systems such as fluid flows. In the present application to BEAM, POD brings the same benefits to automated diagnosis. BEAM is a method of real-time or offline, automated diagnosis of a complex dynamic system. The gray-box approach makes it possible to utilize incomplete or approximate knowledge of the dynamics of the system that one seeks to diagnose. In the gray-box approach, a deterministic model of the system is used to filter a time series of system sensor data to remove the deterministic components of the time series from further examination. What is left after the filtering operation is a time series of residual quantities that represent the unknown (or at least unmodeled) aspects of the behavior of the system. Stochastic modeling techniques are then applied to the residual time series. The procedure for detecting abnormal behavior of the system then becomes one of looking for statistical differences between the residual time series and the predictions of the stochastic model.
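    POD model construction reduces, computationally, to a singular value decomposition of a snapshot matrix. A minimal sketch (generic SVD-based POD, not the BEAM implementation; the energy threshold and toy data are invented):

```python
import numpy as np

def pod_basis(snapshots, energy=0.99):
    """Reduced basis from a snapshot matrix (columns are state snapshots):
    SVD, then keep the leading modes capturing `energy` of the variance."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    frac = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(frac, energy)) + 1
    return U[:, :r], s[:r]

# Toy snapshot matrix built from two spatial modes with known time histories.
t = np.linspace(0.0, 2.0 * np.pi, 40)
u1, u2 = np.zeros(30), np.zeros(30)
u1[0], u2[1] = 1.0, 1.0
X = np.outer(u1, np.sin(t)) + 0.5 * np.outer(u2, np.cos(t))
Ur, s = pod_basis(X)
```

    Projecting the full state onto the retained columns of `Ur` yields the low-order model; in the gray-box scheme, such a model supplies the deterministic filter whose residuals are then examined statistically.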

  9. A preliminary investigation of ROI-image reconstruction with the rebinned BPF algorithm

    NASA Astrophysics Data System (ADS)

    Bian, Junguo; Xia, Dan; Yu, Lifeng; Sidky, Emil Y.; Pan, Xiaochuan

    2008-03-01

    The back-projection filtration (BPF) algorithm is capable of reconstructing ROI images from truncated data acquired with a wide class of general trajectories. However, it has been observed that, similar to other algorithms for convergent beam geometries, the BPF algorithm involves a spatially varying weighting factor in the backprojection step. This weighting factor can not only increase the computation load, but also amplify the noise in reconstructed images. The weighting factor can be eliminated by appropriately rebinning the measured cone-beam data into fan-parallel-beam data. Such an appropriate data rebinning not only removes the weighting factor, but also retains other favorable properties of the BPF algorithm. In this work, we conduct a preliminary study of the rebinned BPF algorithm and its noise properties. Specifically, we consider an application in which the detector and source can move in several directions for achieving ROI data acquisition. The combined motion of the detector and source generally forms a complex trajectory. We investigate in this work image reconstruction within an ROI from data acquired in such applications.

  10. Diaphragm motion quantification in megavoltage cone-beam CT projection images.

    PubMed

    Chen, Mingqing; Siochi, R Alfredo

    2010-05-01

    To quantify diaphragm motion in megavoltage (MV) cone-beam computed tomography (CBCT) projections. User-identified ipsilateral hemidiaphragm apex (IHDA) positions in two full exhale and inhale frames were used to create bounding rectangles in all other frames of a CBCT scan. The bounding rectangle was enlarged to create a region of interest (ROI). ROI pixels were associated with a cost function: the product of image gradients and a gradient direction matching function for an ideal hemidiaphragm determined from 40 training sets. A dynamic Hough transform (DHT) models a hemidiaphragm as a contour made of two parabola segments with a common vertex (the IHDA). The images within the ROIs are transformed into Hough space, where a contour's Hough value is the sum of the cost function over all contour pixels. Dynamic programming finds the optimal trajectory of the common vertex in Hough space subject to motion constraints between frames, and an active contour model further refines the result. Interpolated ray tracing converts the positions to room coordinates. Root-mean-square (RMS) distances between these positions and those resulting from an expert's identification of the IHDA were determined for 21 Siemens MV CBCT scans. Computation time on a 2.66 GHz CPU was 30 s. The average craniocaudal RMS error was 1.38 ± 0.67 mm. While much larger errors occurred in a few near-sagittal frames on one patient's scans, adjustments to algorithm constraints corrected them. The DHT-based algorithm can compute IHDA trajectories immediately prior to radiation therapy on a daily basis using localization MV CBCT projection data. This has potential for calibrating external motion surrogates against diaphragm motion.
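    The dynamic-programming step, choosing one candidate vertex per frame to maximize total score under a motion constraint, can be illustrated with a Viterbi-style search. This is a generic sketch; the scores and step bound here are stand-ins for the Hough values and motion constraints described above:

```python
import numpy as np

def dp_trajectory(costs, positions, max_step):
    """Pick one candidate per frame maximizing the summed score, subject to
    |position change| <= max_step between consecutive frames.
    costs[t][i], positions[t][i]: score and position of candidate i at frame t."""
    T = len(costs)
    best = [np.array(costs[0], float)]
    back = []
    for t in range(1, T):
        prev, cur = best[-1], np.array(costs[t], float)
        scores = np.full(len(cur), -np.inf)
        ptr = np.zeros(len(cur), int)
        for j, pj in enumerate(positions[t]):
            for i, pi in enumerate(positions[t - 1]):
                if abs(pj - pi) <= max_step and prev[i] > scores[j]:
                    scores[j], ptr[j] = prev[i], i   # best feasible predecessor
        best.append(scores + cur)
        back.append(ptr)
    path = [int(np.argmax(best[-1]))]                # backtrack optimal path
    for ptr in reversed(back):
        path.append(int(ptr[path[-1]]))
    return path[::-1]
```

    The motion bound is what distinguishes this from frame-by-frame peak picking: a high-scoring but physically unreachable candidate in one frame cannot pull the trajectory off course.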

  11. Development and demonstration of 2D dosimetry using optically stimulated luminescence from new Al2O3 films for radiotherapy applications

    NASA Astrophysics Data System (ADS)

    Ahmed, Md Foiez

    Scope and Method of Study: The goal of this work was to develop and demonstrate a 2D dosimetry system based on the optically stimulated luminescence (OSL) from new Al2O3 films for radiotherapy applications. A 2D laser-scanning system was developed for the readout, and two OSL films (Al2O3:C and Al2O3:C,Mg) were tested. A dose reconstruction algorithm was developed to address corrections required by the characteristic material properties and by properties related to the system design. The dosimetric properties of the system were tested using a clinical 6 MV X-ray beam. The feasibility of small-field dosimetry was tested using heavy ion beams (221 MeV proton and 430 MeV 12C beams). For comparison, clinical tests were performed with an ionization chamber, diode arrays, and commercial radiochromic films (Gafchromic EBT3) when applicable. Findings and Conclusions: The results demonstrate that the developed image reconstruction algorithm enabled > 300x faster laser-scanning readout of the Al2O3 films, eliminating the restriction imposed by their slow luminescence decay. The algorithm facilitates submillimeter spatial resolution, reduces the scanner position dependence (of light collection efficiency), and removes the inherent galvo geometric distortion, among other corrections. The system has a background signal < 1 mGy, a linearity correction factor of < 10% up to ~4.0 Gy, and < 2% dose uncertainty over the clinically relevant dose range of 0.1 - 30 Gy. The system has a dynamic range of 4 - 5 orders of magnitude, limited only by PMT linearity. The absolute response from Al2O3:C films is higher than that from Al2O3:C,Mg films, but with lower image signal-to-noise ratio due to the lower concentration of fast F+-center emission. As a result, Al2O3:C,Mg films are better suited than Al2O3:C films for small-field dosimetry, which requires precise dosimetry with sub-millimeter spatial resolution. The dose uncertainty associated with OSL film dosimetry is lower than that associated with EBT3 film dosimetry due to lower background, simpler calibration, and wider dynamic range. In conclusion, this work demonstrates the excellent potential of the 2D OSL dosimetry system for both relative and absolute dosimetry in radiotherapy applications, with special emphasis on small fields.

  12. TH-EF-BRB-05: 4pi Non-Coplanar IMRT Beam Angle Selection by Convex Optimization with Group Sparsity Penalty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O’Connor, D; Nguyen, D; Voronenko, Y

    Purpose: Integrated beam orientation and fluence map optimization is expected to be the foundation of robust automated planning, but existing heuristic methods do not promise global optimality. We aim to develop a new method for beam angle selection in 4π non-coplanar IMRT systems based on globally solving a single convex optimization problem, and to demonstrate the effectiveness of the method by comparison with a state-of-the-art column generation method for 4π beam angle selection. Methods: The beam angle selection problem is formulated as a large-scale convex fluence map optimization problem with an additional group sparsity term that encourages most candidate beams to be inactive. The optimization problem is solved using an accelerated first-order method, the Fast Iterative Shrinkage-Thresholding Algorithm (FISTA). The beam angle selection and fluence map optimization algorithm is used to create non-coplanar 4π treatment plans for several cases (including head and neck, lung, and prostate cases), and the resulting treatment plans are compared with 4π treatment plans created using the column generation algorithm. Results: In our experiments the treatment plans created using the group sparsity method meet or exceed the dosimetric quality of plans created using the column generation algorithm, which was shown superior to clinical plans. Moreover, the group sparsity approach converges in about 3 minutes in these cases, compared with runtimes of a few hours for the column generation method. Conclusion: This work demonstrates the first non-greedy approach to non-coplanar beam angle selection, based on convex optimization, for 4π IMRT systems. The method given here improves both treatment plan quality and runtime compared with a state-of-the-art column generation algorithm. When the group sparsity term is set to zero, we obtain an excellent method for fluence map optimization, useful when beam angles have already been selected. NIH R43CA183390, NIH R01CA188300, Varian Medical Systems; part of this research took place while D. O’Connor was a summer intern at RefleXion Medical.
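
    The FISTA iteration with a group sparsity penalty can be sketched in a few lines. This is a generic small-scale illustration (plain Python, tiny dense matrix), not the planning implementation from the abstract; each "group" stands for the beamlet weights of one candidate beam, and groups shrunk to zero correspond to beams excluded from the plan:

```python
import math

# min ||A x - b||^2 + lam * sum_g ||x_g||_2 via proximal gradient with
# Nesterov momentum (FISTA). The group prox scales each group toward zero,
# killing whole groups (beams) when their norm falls below step * lam.

def group_prox(x, groups, step, lam):
    out = list(x)
    for g in groups:
        nrm = math.sqrt(sum(x[i] ** 2 for i in g))
        scale = max(0.0, 1.0 - step * lam / nrm) if nrm > 0 else 0.0
        for i in g:
            out[i] = scale * x[i]
    return out

def fista_group(A, b, groups, lam, step, iters=500):
    m, n = len(A), len(A[0])
    x = [0.0] * n
    y = list(x)
    t = 1.0
    for _ in range(iters):
        # gradient of the data-fit term: 2 * A^T (A y - b)
        r = [sum(A[i][j] * y[j] for j in range(n)) - b[i] for i in range(m)]
        g = [2.0 * sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        x_new = group_prox([y[j] - step * g[j] for j in range(n)],
                           groups, step, lam)
        t_new = 0.5 * (1.0 + math.sqrt(1.0 + 4.0 * t * t))
        y = [x_new[j] + (t - 1.0) / t_new * (x_new[j] - x[j]) for j in range(n)]
        x, t = x_new, t_new
    return x
```

    With lam set to zero the prox is the identity and this reduces to plain accelerated fluence-map optimization, mirroring the closing remark of the abstract.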

  13. Hard real-time beam scheduler enables adaptive images in multi-probe systems

    NASA Astrophysics Data System (ADS)

    Tobias, Richard J.

    2014-03-01

    Real-time embedded-system concepts were adapted to allow an imaging system to responsively control the firing of multiple probes. Large-volume, operator-independent (LVOI) imaging would increase the diagnostic utility of ultrasound. An obstacle to this innovation is the inability of current systems to drive multiple transducers dynamically. Commercial systems schedule scanning with static lists of beams to be fired and processed; here we allow an imager to adapt to changing beam-schedule demands as an intelligent response to incoming image data. An example of scheduling changes is demonstrated with a flexible duplex-mode two-transducer application mimicking LVOI imaging. Operating systems use powerful dynamic scheduling algorithms, such as fixed-priority preemptive scheduling, but even real-time operating systems lack the timing constraints required for ultrasound. Particularly for Doppler modes, events must be scheduled with sub-nanosecond precision; without this, acquired data are useless. A successful scheduler therefore needs unique characteristics. To approximate what would be needed in LVOI imaging, we show two transducers scanning different parts of a subject's leg. When one transducer notices flow in a region where the two scans overlap, the system reschedules the other transducer to start flow mode and alter its beams to view the observed vessel and produce a flow measurement, in a focused region only. This demonstrates key attributes of a successful LVOI system, such as robustness against obstructions and adaptive self-correction.
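
    The rescheduling idea can be caricatured with a priority queue of beam events. This toy is ours, not the embedded scheduler described in the paper, and the field names are invented; it only shows how adaptively pushed events interleave with a pre-planned firing list, in contrast to a static list that cannot be changed mid-scan:

```python
import heapq

# Beam events are kept in a heap keyed by (fire_time, priority). A
# rescheduling request from image analysis simply pushes new events,
# which are then served in time order alongside the pre-planned ones.

class BeamScheduler:
    def __init__(self):
        self._q = []

    def schedule(self, fire_time, priority, probe, mode):
        heapq.heappush(self._q, (fire_time, priority, probe, mode))

    def next_event(self):
        return heapq.heappop(self._q) if self._q else None
```

    A real ultrasound scheduler additionally enforces the hard sub-nanosecond timing constraints the abstract emphasizes, which a plain heap cannot express.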

  14. Dynamics of elastic nonlinear rotating composite beams with embedded actuators

    NASA Astrophysics Data System (ADS)

    Ghorashi, Mehrdaad

    2009-08-01

    A comprehensive study of the nonlinear dynamics of composite beams is presented. The study consists of static and dynamic solutions with and without active elements. The static solution provides the initial conditions for the dynamic analysis. The dynamic problems considered include the analyses of clamped (hingeless) and articulated (hinged) accelerating rotating beams. Numerical solutions for the steady state and transient responses have been obtained. It is shown that the transient solution of the nonlinear formulation of an accelerating rotating beam converges to the steady state solution obtained by the shooting method. The effect of perturbing the steady state solution has also been calculated, and the results are shown to be compatible with those of the accelerating beam analysis. Next, the coupled flap-lag rigid body dynamics of a rotating articulated beam with hinge offset and subjected to aerodynamic forces is formulated. The solution to this rigid-body problem is then used, together with the finite difference method, to produce the nonlinear elasto-dynamic solution of an accelerating articulated beam. Finally, the static and dynamic responses of nonlinear composite beams with embedded Anisotropic Piezo-composite Actuators (APA) are presented, including the effect of activating actuators in various directions on the steady state forces and moments generated in a rotating composite beam. With similar results for the transient response, this analysis can be used in controlling the response of adaptive rotating beams.

  15. Reconstruction algorithm for polychromatic CT imaging: application to beam hardening correction

    NASA Technical Reports Server (NTRS)

    Yan, C. H.; Whalen, R. T.; Beaupre, G. S.; Yen, S. Y.; Napel, S.

    2000-01-01

    This paper presents a new reconstruction algorithm for both single- and dual-energy computed tomography (CT) imaging. By incorporating the polychromatic characteristics of the X-ray beam into the reconstruction process, the algorithm is capable of eliminating beam hardening artifacts. The single-energy version of the algorithm assumes that each voxel in the scan field can be expressed as a mixture of two known substances, for example, a mixture of trabecular bone and marrow, or a mixture of fat and flesh. These assumptions are easily satisfied in a quantitative computed tomography (QCT) setting. We have compared our algorithm to three commonly used single-energy correction techniques. Experimental results show that our algorithm is much more robust and accurate. We have also shown that QCT measurements obtained using our algorithm are five times more accurate than those from current QCT systems (using calibration). The dual-energy mode does not require any prior knowledge of the object in the scan field, and can be used to estimate the attenuation coefficient function of unknown materials. We have tested the dual-energy setup to obtain an accurate estimate of the attenuation coefficient function of K2HPO4 solution.
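
    Why a polychromatic model is needed can be seen in a toy two-energy-bin calculation (illustrative only; the spectrum weights and attenuation coefficients are invented): the measured log-attenuation grows sublinearly with thickness because the softer bin is absorbed preferentially, which is exactly the beam hardening the reconstruction must account for.

```python
import math

# Two-bin stand-in for an X-ray spectrum: (weight, mu in 1/cm) per energy
# bin. The values are made up for the demo; weights sum to 1.
SPECTRUM = [(0.6, 0.30), (0.4, 0.18)]

def log_attenuation(thickness_cm):
    """Measured -ln(I/I0) through a homogeneous slab of given thickness."""
    transmitted = sum(w * math.exp(-mu * thickness_cm) for w, mu in SPECTRUM)
    return math.log(1.0 / transmitted)
```

    For a monoenergetic beam the returned value would be exactly mu * thickness; here log_attenuation(2*t) < 2*log_attenuation(t), the cupping/streak-producing nonlinearity a polychromatic reconstruction removes.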

  16. Exact BPF and FBP algorithms for nonstandard saddle curves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu Hengyong; Zhao Shiying; Ye Yangbo

    2005-11-15

    A hot topic in cone-beam CT research is exact cone-beam reconstruction from a general scanning trajectory. In particular, a nonstandard saddle curve attracts attention, as this construct allows the continuous periodic scanning of a volume-of-interest (VOI). Here we evaluate two algorithms for reconstruction from data collected along a nonstandard saddle curve, which are in the filtered backprojection (FBP) and backprojection filtration (BPF) formats, respectively. Both algorithms are implemented in a chord-based coordinate system. Then, a rebinning procedure is utilized to transform the reconstructed results into the natural coordinate system. The simulation results demonstrate that the FBP algorithm produces better image quality than the BPF algorithm, while both algorithms exhibit similar noise characteristics.

  17. Optimization in optical systems revisited: Beyond genetic algorithms

    NASA Astrophysics Data System (ADS)

    Gagnon, Denis; Dumont, Joey; Dubé, Louis

    2013-05-01

    Designing integrated photonic devices such as waveguides, beam-splitters and beam-shapers often requires optimization of a cost function over a large solution space. Metaheuristics - algorithms based on empirical rules for exploring the solution space - are specifically tailored to those problems. One of the most widely used metaheuristics is the standard genetic algorithm (SGA), based on the evolution of a population of candidate solutions. However, the stochastic nature of the SGA sometimes prevents access to the optimal solution. Our goal is to show that a parallel tabu search (PTS) algorithm is more suited to optimization problems in general, and to photonics in particular. PTS is based on several search processes using a pool of diversified initial solutions. To assess the performance of both algorithms (SGA and PTS), we consider an integrated photonics design problem, the generation of arbitrary beam profiles using a two-dimensional waveguide-based dielectric structure. The authors acknowledge financial support from the Natural Sciences and Engineering Research Council of Canada (NSERC).
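
    The core loop of a (serial) tabu search can be sketched as follows. This is a generic illustration on bit strings, not the parallel PTS implementation of the paper; the tenure and iteration counts are arbitrary:

```python
import random

# Minimize a cost over bit strings. A move that re-flips a recently
# flipped bit is "tabu" unless it beats the best cost found so far
# (the aspiration criterion).

def tabu_search(cost, n_bits, iters=200, tenure=5, seed=0):
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n_bits)]
    best, best_cost = list(x), cost(x)
    tabu = {}                        # bit index -> iteration when tabu expires
    for it in range(iters):
        move, move_cost = None, float('inf')
        for i in range(n_bits):
            y = list(x)
            y[i] ^= 1
            c = cost(y)
            allowed = tabu.get(i, -1) < it or c < best_cost
            if allowed and c < move_cost:
                move, move_cost = i, c
        if move is None:
            continue
        x[move] ^= 1
        tabu[move] = it + tenure
        if move_cost < best_cost:
            best, best_cost = list(x), move_cost
    return best, best_cost
```

    The tabu list is what lets the search climb out of local minima instead of cycling, the weakness of plain steepest descent; PTS additionally runs several such searches from a pool of diversified starting points.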

  18. Estimating the uncertainty of calculated out-of-field organ dose from a commercial treatment planning system.

    PubMed

    Wang, Lilie; Ding, George X

    2018-06-12

    Therapeutic radiation to cancer patients is accompanied by unintended radiation to organs outside the treatment field. It is known that model-based dose algorithms have limitations in calculating out-of-field doses. This study evaluated the out-of-field dose calculated by the Varian Eclipse treatment planning system (v.11 with the AAA algorithm) in realistic treatment plans, with the goal of estimating the uncertainties of calculated organ doses. Photon beam phase-space files for the TrueBeam linear accelerator were provided by Varian. These were used as incident sources in EGSnrc Monte Carlo simulations of radiation transport through the downstream jaws and MLC. Dynamic movements of the MLC leaves were fully modeled based on treatment plans using IMRT or VMAT techniques. The Monte Carlo calculated out-of-field doses were then compared with those calculated by Eclipse. The dose comparisons were performed for different beam energies and treatment sites, including head-and-neck, lung, and pelvis. For 6 MV (FF/FFF), 10 MV (FF/FFF), and 15 MV (FF) beams, Eclipse underestimated out-of-field local doses by 30%-50% compared with Monte Carlo calculations when the local dose was <1% of the prescribed dose. The accuracy of out-of-field dose calculations using Eclipse is improved when the collimator jaws are set at the smallest possible aperture for MLC openings. The Eclipse system consistently underestimates out-of-field dose by a factor of 2 for all beam energies studied at local dose levels of less than 1% of the prescribed dose. These findings are useful in providing information on the uncertainties of out-of-field organ doses calculated by the Eclipse treatment planning system. © 2018 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.

  19. Beam-column joint shear prediction using hybridized deep learning neural network with genetic algorithm

    NASA Astrophysics Data System (ADS)

    Mundher Yaseen, Zaher; Abdulmohsin Afan, Haitham; Tran, Minh-Tung

    2018-04-01

    It is scientifically evidenced that beam-column joints are a critical point in reinforced concrete (RC) structures under fluctuating load effects. In this study, a novel hybrid data-intelligence model is developed to predict the joint shear behavior of exterior beam-column frames. The hybrid model, called GA-DLNN, integrates a genetic algorithm with a deep learning neural network: the genetic algorithm is used as a prior modelling phase for input approximation, whereas the DLNN predictive model is used for the prediction phase. To demonstrate this structural problem, experimental data defining the dimensions and specimen properties were collected from the literature. The attained findings evidenced the effectiveness of the hybrid GA-DLNN in modelling the beam-column joint shear problem. In addition, accurate prediction was achieved with fewer input variables owing to the feasibility of the evolutionary phase.

  20. Prior image constrained scatter correction in cone-beam computed tomography image-guided radiation therapy.

    PubMed

    Brunner, Stephen; Nett, Brian E; Tolakanahalli, Ranjini; Chen, Guang-Hong

    2011-02-21

    X-ray scatter is a significant problem in cone-beam computed tomography when thicker objects and larger cone angles are used, as scattered radiation can lead to reduced contrast and CT number inaccuracy. Advances have been made in x-ray computed tomography (CT) by incorporating a high quality prior image into the image reconstruction process. In this paper, we extend this idea to correct scatter-induced shading artifacts in cone-beam CT image-guided radiation therapy. Specifically, this paper presents a new scatter correction algorithm which uses a prior image with low scatter artifacts to reduce shading artifacts in cone-beam CT images acquired under conditions of high scatter. The proposed correction algorithm begins with an empirical hypothesis that the target image can be written as a weighted summation of a series of basis images that are generated by raising the raw cone-beam projection data to different powers, and then, reconstructing using the standard filtered backprojection algorithm. The weight for each basis image is calculated by minimizing the difference between the target image and the prior image. The performance of the scatter correction algorithm is qualitatively and quantitatively evaluated through phantom studies using a Varian 2100 EX System with an on-board imager. Results show that the proposed scatter correction algorithm using a prior image with low scatter artifacts can substantially mitigate scatter-induced shading artifacts in both full-fan and half-fan modes.
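
    The weighting step of the algorithm can be sketched as an ordinary least-squares fit. This is our illustration with tiny flat lists standing in for full reconstructions (in the paper, each basis image is an FBP reconstruction of the raw projection data raised to a different power); for two basis images the normal equations are 2x2:

```python
# Find weights (w1, w2) minimizing || w1*b1 + w2*b2 - prior ||^2 by
# solving the 2x2 normal equations directly. b1, b2, and prior are
# flattened "images" of equal length.

def fit_weights_2(b1, b2, prior):
    a11 = sum(u * u for u in b1)
    a12 = sum(u * v for u, v in zip(b1, b2))
    a22 = sum(v * v for v in b2)
    r1 = sum(u * p for u, p in zip(b1, prior))
    r2 = sum(v * p for v, p in zip(b2, prior))
    det = a11 * a22 - a12 * a12
    return ((a22 * r1 - a12 * r2) / det, (a11 * r2 - a12 * r1) / det)
```

    The fitted weighted sum then serves as the scatter-corrected image; the prior only steers the weights, so its fine detail is not copied into the result.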

  1. Analytic image reconstruction from partial data for a single-scan cone-beam CT with scatter correction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Min, Jonghwan; Pua, Rizza; Cho, Seungryong, E-mail: scho@kaist.ac.kr

    Purpose: A beam-blocker composed of multiple strips is a useful gadget for scatter correction and/or for dose reduction in cone-beam CT (CBCT). However, the use of such a beam-blocker yields cone-beam data that can be challenging for accurate image reconstruction from a single scan in the filtered-backprojection framework. The focus of this work was to develop an analytic image reconstruction method for CBCT that can be directly applied to partially blocked cone-beam data in conjunction with scatter correction. Methods: The authors developed a rebinned backprojection-filtration (BPF) algorithm for reconstructing images from partially blocked cone-beam data in a circular scan. The authors also proposed a beam-blocking geometry that considers data redundancy, such that an efficient scatter estimate can be acquired and, at the same time, sufficient data for BPF image reconstruction can be secured from a single scan without any blocker motion. Additionally, a scatter correction method and a noise reduction scheme were developed. The authors performed both simulation and experimental studies to validate the rebinned BPF algorithm for image reconstruction from partially blocked cone-beam data. Quantitative evaluations of the reconstructed image quality were performed in the experimental studies. Results: The simulation study revealed that the developed reconstruction algorithm successfully reconstructs images from the partial cone-beam data. In the experimental study, the proposed method effectively corrected for the scatter in each projection and reconstructed scatter-corrected images from a single scan. A reduction of cupping artifacts and an enhancement of image contrast were demonstrated. The image contrast increased by a factor of about 2, and the image accuracy, in terms of root-mean-square error with respect to the fan-beam CT image, increased by more than 30%. Conclusions: The authors have successfully demonstrated that the proposed scanning method and image reconstruction algorithm can effectively estimate the scatter in cone-beam projections and produce tomographic images of nearly scatter-free quality. The authors believe that the proposed method would provide a fast and efficient CBCT scanning option for various applications, particularly including head-and-neck scans.

  2. High spatial resolution technique for SPECT using a fan-beam collimator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ichihar, T.; Nambu, K.; Motomura, N.

    1993-08-01

    The physical characteristics of the collimator cause degradation of resolution with increasing distance from the collimator surface. A new convolutional backprojection algorithm has been derived for fan-beam SPECT data without rebinning into parallel-beam geometry. The projections are filtered and then backprojected into the area within an isosceles triangle whose vertex is the focal point of the fan beam and whose base is the fan-beam collimator face, and outside of the circle whose center is located midway between the focal point and the center of rotation and whose diameter is the distance between the focal point and the center of rotation. Consequently, the backprojected area is close to the collimator surface. This algorithm has been implemented on a GCA-9300A SPECT system, showing good results in both phantom and patient studies. The SPECT transaxial resolution was 4.6 mm FWHM (reconstructed image matrix size of 256x256) at the center of the SPECT FOV using UHR (ultra-high-resolution) fan-beam collimators for brain studies. Clinically, Tc-99m HMPAO and Tc-99m ECD brain data were reconstructed using this algorithm. The reconstruction results were compared with MRI images of the same slice position and were significantly improved over results obtained with standard reconstruction algorithms.

  3. Three dimensional δf simulations of beams in the SSC

    NASA Astrophysics Data System (ADS)

    Koga, J.; Tajima, T.; Machida, S.

    1993-12-01

    A three-dimensional δf strong-strong algorithm has been developed and applied to the study of effects such as space charge and beam-beam interaction phenomena in the Superconducting Super Collider (SSC). The algorithm is obtained by merging the particle tracking code Simpsons, used for three-dimensional space charge effects, with a δf code. The δf method is used to follow the evolution of the non-Gaussian part of the beam distribution. The advantages of this method are twofold. First, the Simpsons code utilizes a realistic accelerator model including synchrotron oscillations and energy ramping in six-dimensional phase space, with the electromagnetic fields of the beams calculated using a realistic three-dimensional field solver. Second, the beams evolve in the fully self-consistent strong-strong sense while finite-particle fluctuation noise is greatly reduced, as opposed to weak-strong models where one beam is held fixed.

  4. Optics measurement and correction for the Relativistic Heavy Ion Collider

    NASA Astrophysics Data System (ADS)

    Shen, Xiaozhe

    The quality of beam optics is of great importance for the performance of a high energy accelerator like the Relativistic Heavy Ion Collider (RHIC). Turn-by-turn (TBT) beam position monitor (BPM) data can be used to derive beam optics. However, the accuracy of the derived beam optics is often limited by the performance and imperfections of instruments as well as by measurement methods and conditions. Therefore, a robust and model-independent data analysis method is highly desired to extract noise-free information from TBT BPM data. As a robust signal-processing technique, an independent component analysis (ICA) algorithm called second order blind identification (SOBI) has been proven particularly efficient in extracting physical beam signals from TBT BPM data even in the presence of instrument noise and error. We applied the SOBI ICA algorithm at RHIC during the 2013 polarized proton operation to extract accurate linear optics from TBT BPM data of AC-dipole-driven coherent beam oscillations. From the same data, a first systematic estimation of RHIC BPM noise performance was also obtained by the SOBI ICA algorithm, and showed good agreement with the RHIC BPM configurations. Based on the accurate linear optics measurement, a beta-beat response matrix correction method and a scheme using horizontal closed orbit bumps at sextupoles for arc beta-beat correction were successfully applied to reach a record-low beam optics error at RHIC. This thesis presents the principles of the SOBI ICA algorithm and theory, as well as experimental results of optics measurement and correction at RHIC.
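
    The second-order idea behind SOBI can be demonstrated in a stripped-down, single-lag (AMUSE-style) form for two channels: whiten the mixtures, then diagonalize one symmetrized time-lagged covariance. SOBI proper jointly diagonalizes several lags and is more robust; everything below (signals, mixing matrix, lag choice) is invented for the demo:

```python
import math

def eig2_sym(a, b, c):
    """Eigenpairs (value, unit vector) of the symmetric 2x2 [[a, b], [b, c]]."""
    half = 0.5 * (a + c)
    disc = math.sqrt(0.25 * (a - c) ** 2 + b * b)
    pairs = []
    for lam in (half + disc, half - disc):
        if abs(b) > 1e-12:
            v = (lam - c, b)          # row 2 of (M - lam*I) gives this kernel
        else:
            v = (1.0, 0.0) if abs(lam - a) < abs(lam - c) else (0.0, 1.0)
        n = math.hypot(*v)
        pairs.append((lam, (v[0] / n, v[1] / n)))
    return pairs

def cov(u, v, lag=0):
    n = len(u) - lag
    return sum(u[t] * v[t + lag] for t in range(n)) / n

def amuse_2ch(x1, x2, lag=1):
    # whiten: z = D^{-1/2} E^T x, so that cov(z) is the identity
    (l1, e1), (l2, e2) = eig2_sym(cov(x1, x1), cov(x1, x2), cov(x2, x2))
    z1 = [(e1[0] * a + e1[1] * b) / math.sqrt(l1) for a, b in zip(x1, x2)]
    z2 = [(e2[0] * a + e2[1] * b) / math.sqrt(l2) for a, b in zip(x1, x2)]
    # diagonalize the symmetrized lagged covariance of the whitened data;
    # sources with distinct lagged autocorrelations separate here
    m12 = 0.5 * (cov(z1, z2, lag) + cov(z2, z1, lag))
    (_, v1), (_, v2) = eig2_sym(cov(z1, z1, lag), m12, cov(z2, z2, lag))
    y1 = [v1[0] * a + v1[1] * b for a, b in zip(z1, z2)]
    y2 = [v2[0] * a + v2[1] * b for a, b in zip(z1, z2)]
    return y1, y2
```

    Recovery is up to sign and permutation, which is all optics analysis needs: each recovered component carries one physical mode (e.g., one betatron oscillation) free of the mixing introduced by coupled BPM readings.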

  5. A fast optimization approach for treatment planning of volumetric modulated arc therapy.

    PubMed

    Yan, Hui; Dai, Jian-Rong; Li, Ye-Xiong

    2018-05-30

    Volumetric modulated arc therapy (VMAT) is widely used in clinical practice. It not only significantly reduces treatment time, but also produces high-quality treatment plans. Current optimization approaches rely heavily on stochastic algorithms, which are time-consuming and less repeatable. In this study, a novel approach is proposed to provide a highly efficient optimization algorithm for VMAT treatment planning. A progressive sampling strategy is employed for the beam arrangement in VMAT planning. Initial, equally spaced beams are added to the plan at a coarse sampling resolution. Fluence-map optimization and leaf-sequencing are performed for these beams. Then the coefficients of the fluence-map optimization algorithm are adjusted according to the known fluence maps of these beams. In the next round the sampling resolution is doubled and more beams are added. This process continues until the total number of beams is reached. The performance of the VMAT optimization algorithm was evaluated using three clinical cases and compared to that of a commercial planning system. The dosimetric quality of the VMAT plans is equal to or better than that of the corresponding IMRT plans for the three clinical cases. The maximum dose to critical organs is reduced considerably for VMAT plans compared to IMRT plans, especially in the head and neck case. The total numbers of segments and monitor units are reduced for VMAT plans. For the three clinical cases, VMAT optimization takes < 5 min with the proposed approach, 3-4 times less than the commercial system. The proposed VMAT optimization algorithm is able to produce high-quality VMAT plans efficiently and consistently, presenting a new way to accelerate the current optimization process of VMAT planning.
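
    The progressive sampling of beam angles can be sketched as follows (our illustration; the starting count and angular layout are assumptions, and the fluence-map re-optimization performed between rounds in the actual algorithm is omitted here):

```python
# Start with `start` equally spaced angles over the arc, then repeatedly
# double the sampling resolution, appending the new midpoint angles of
# each round until the requested total number of beams is reached.

def progressive_angles(total, start=4, full_circle=360.0):
    angles = [i * full_circle / start for i in range(start)]
    n = start
    while len(angles) < total:
        n *= 2
        for i in range(1, n, 2):          # odd indices are the new midpoints
            if len(angles) < total:
                angles.append(i * full_circle / n)
    return angles
```

    Ordering the beams coarse-first is what lets each refinement round warm-start from the fluence maps already optimized at the previous resolution.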

  6. Impact of dose engine algorithm in pencil beam scanning proton therapy for breast cancer.

    PubMed

    Tommasino, Francesco; Fellin, Francesco; Lorentini, Stefano; Farace, Paolo

    2018-06-01

    Proton therapy for the treatment of breast cancer is attracting increasing interest, due to the potential reduction of radiation-induced side effects such as cardiac and pulmonary toxicity. While several in silico studies demonstrated the gain in plan quality offered by pencil beam scanning (PBS) compared to passive scattering techniques, the related dosimetric uncertainties have been poorly investigated so far. Five breast cancer patients were planned with the Raystation 6 analytical pencil beam (APB) and Monte Carlo (MC) dose calculation algorithms. Plans were optimized with APB, and then MC was used to recalculate the dose distribution. Movable snout and beam splitting techniques (i.e. using two sub-fields for the same beam entrance, one with and the other without a range shifter) were considered. PTV dose statistics were recorded. The same planning configurations were adopted for the experimental benchmark. Dose distributions were measured with a 2D array of ionization chambers and compared to the APB- and MC-calculated ones by means of a γ analysis (agreement criteria 3%, 3 mm). Our results indicate that, when using proton PBS for breast cancer treatment, the Raystation 6 APB algorithm does not provide sufficient accuracy, especially with large air gaps. On the contrary, the MC algorithm resulted in much higher accuracy in all beam configurations tested and is to be recommended. Centers where an MC algorithm is not yet available should consider a careful use of APB, possibly combined with a movable snout system or in any case with strategies aimed at minimizing air gaps. Copyright © 2018 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  7. Numerical Solution of the KZK Equation for Pulsed Finite Amplitude Sound Beams in Thermoviscous Fluids

    NASA Astrophysics Data System (ADS)

    Lee, Yang-Sub

    A time-domain numerical algorithm for solving the KZK (Khokhlov-Zabolotskaya-Kuznetsov) nonlinear parabolic wave equation is developed for pulsed, axisymmetric, finite amplitude sound beams in thermoviscous fluids. The KZK equation accounts for the combined effects of diffraction, absorption, and nonlinearity at the same order of approximation. The accuracy of the algorithm is established via comparison with analytical solutions for several limiting cases, and with numerical results obtained from a widely used algorithm for solving the KZK equation in the frequency domain. The time domain algorithm is used to investigate waveform distortion and shock formation in directive sound beams radiated by pulsed circular piston sources. New results include predictions for the entire process of self-demodulation, and for the effect of frequency modulation on pulse envelope distortion. Numerical results are compared with measurements, and focused sources are investigated briefly.

  8. Randomized algorithms for high quality treatment planning in volumetric modulated arc therapy

    NASA Astrophysics Data System (ADS)

    Yang, Yu; Dong, Bin; Wen, Zaiwen

    2017-02-01

    In recent years, volumetric modulated arc therapy (VMAT) has become an increasingly important radiation technique widely used in clinical cancer treatment. One of the key problems in VMAT is treatment plan optimization, which is complicated by the constraints imposed by the equipment involved. In this paper, we consider a model with four major constraints: a bound on the beam intensity, an upper bound on the rate of change of the beam intensity, the moving speed of the leaves of the multi-leaf collimator (MLC), and its directional convexity. We solve the model by a two-stage algorithm: minimizing alternately with respect to the aperture shapes and the beam intensities. Specifically, the aperture shapes are obtained by a greedy algorithm whose performance is enhanced by random sampling of the leaf pairs at a decremental rate. The beam intensity is optimized using a gradient projection method with nonmonotone line search. We further improve the proposed algorithm by an incremental random importance sampling of the voxels to reduce the computational cost of the energy functional. Numerical simulations on two clinical cancer data sets demonstrate that our method is highly competitive with state-of-the-art algorithms in terms of both computational time and quality of treatment planning.
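
    The gradient projection step with a nonmonotone line search can be sketched on a toy box-constrained quadratic. This is our illustration, not the authors' implementation; the acceptance test compares against the maximum of the last M objective values rather than only the most recent one, which is what makes the line search nonmonotone:

```python
# min f(x) subject to 0 <= x_i <= ub_i, by projected gradient steps with
# a backtracking, nonmonotone acceptance test.

def project(x, ub):
    return [min(max(v, 0.0), u) for v, u in zip(x, ub)]

def grad_proj_nonmonotone(f, grad, x0, ub, iters=100, M=5, beta=0.5,
                          sigma=1e-4):
    x = project(x0, ub)
    hist = [f(x)]
    for _ in range(iters):
        g = grad(x)
        step = 1.0
        ref = max(hist[-M:])              # nonmonotone reference value
        while True:
            y = project([xi - step * gi for xi, gi in zip(x, g)], ub)
            d2 = sum((yi - xi) ** 2 for yi, xi in zip(y, x))
            if f(y) <= ref - sigma * d2 / max(step, 1e-12) or step < 1e-10:
                break
            step *= beta                  # backtrack
        x = y
        hist.append(f(x))
    return x
```

    Allowing the objective to rise temporarily (as long as it beats the recent maximum) helps projected-gradient methods traverse the kinks introduced by the box constraints without stalling on tiny steps.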

  9. Dynamics of 3D Timoshenko gyroelastic beams with large attitude changes for the gyros

    NASA Astrophysics Data System (ADS)

    Hassanpour, Soroosh; Heppler, G. R.

    2016-01-01

    This work is concerned with the theoretical development of dynamic equations for undamped gyroelastic beams, which are dynamic systems with continuous inertia, elasticity, and gyricity. Assuming unrestricted or large attitude changes for the axes of the gyros and utilizing generalized Hooke's law, Duleau torsion theory, and Timoshenko bending theory, the energy expressions and equations of motion for gyroelastic beams in three-dimensional space are derived. The resulting comprehensive gyroelastic beam model is compared against earlier gyroelastic beam models based on Euler-Bernoulli beam theory and is used to study the dynamics of gyroelastic beams through numerical examples. It is shown that there are significant differences between the developed unrestricted Timoshenko gyroelastic beam model and the previously derived zero-order restricted Euler-Bernoulli gyroelastic beam models. These differences are more pronounced in the short-beam and transverse-gyricity cases.

  10. Cone-Beam Computed Tomography for Image-Guided Radiation Therapy of Prostate Cancer

    DTIC Science & Technology

    2010-01-01

    filtered backprojection (FBP) algorithm that does not depend upon the chords has therefore been developed for volumetric image reconstruction in a...reprojection of the first reconstructed volumetric image. The NCAT phantom images reconstructed by the tandem algorithm are shown in Fig. 3. The paper...algorithm has been applied to a circular cone-beam micro-CT for volumetric images of the ROI with a higher spatial resolution and at a reduced exposure to

  11. Improvements in pencil beam scanning proton therapy dose calculation accuracy in brain tumor cases with a commercial Monte Carlo algorithm.

    PubMed

    Widesott, Lamberto; Lorentini, Stefano; Fracchiolla, Francesco; Farace, Paolo; Schwarz, Marco

    2018-05-04

    Validation of a commercial Monte Carlo (MC) algorithm (RayStation ver. 6.0.024) for the treatment of brain tumours with pencil beam scanning (PBS) proton therapy, comparing it via measurements and analytical calculations in clinically realistic scenarios. Methods: For the measurements, a 2D ion chamber array detector (MatriXX PT) was placed underneath the following targets: 1) an anthropomorphic head phantom (with two different thicknesses) and 2) a biological sample (i.e., half a lamb's head). In addition, we compared the MC dose engine with the RayStation pencil beam (PB) algorithm clinically implemented so far, in critical conditions such as superficial targets (i.e., in need of a range shifter), different air gaps, and different gantry angles to simulate both orthogonal and tangential beam arrangements. For every plan the PB and MC dose calculations were compared to measurements using a gamma analysis metric (3%, 3 mm). Results: For the head phantom, the gamma passing rate (GPR) was always >96% and on average >99% for the MC algorithm. The PB algorithm had a GPR ≤90% for all delivery configurations with a single slab (apart from a 95% GPR at gantry 0° with a small air gap), and with two slabs of the head phantom the GPR was >95% only for small air gaps at all three (0°, 45°, and 70°) simulated gantry angles. Overall, the PB algorithm tends to overestimate the dose to the target (up to 25%) and underestimate the dose to the organs at risk (up to 30%). We found similar results (though slightly worse for the PB algorithm) for the two targets of the lamb's head, where only two gantry angles were simulated. Conclusions: Our results suggest that in PBS proton therapy the range shifter (RS) needs to be used with extreme caution when planning treatment with an analytical algorithm, owing to potentially large discrepancies between the planned dose and the dose delivered to the patient, including in brain tumours, where this issue could be underestimated.
Our results also suggest that an MC evaluation of the dose should be performed whenever the RS is used and, especially, when it is used with large air gaps and beam directions tangential to the patient surface. © 2018 Institute of Physics and Engineering in Medicine.
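    The gamma analysis metric (3%, 3 mm) used in the study combines a dose-difference and a distance-to-agreement criterion. A minimal 1-D sketch, with synthetic profiles and global normalization assumed (clinical analyses run in 2-D/3-D with thresholding):

```python
import numpy as np

def gamma_pass_rate(dose_ref, dose_eval, x_mm, dd=0.03, dta_mm=3.0):
    """Fraction of reference points with gamma <= 1 (global normalization)."""
    d_norm = dd * dose_ref.max()          # 3% of the global maximum dose
    passes = []
    for xr, dr in zip(x_mm, dose_ref):
        # gamma = min over evaluated points of the combined distance in
        # (space / 3 mm) and (dose difference / 3%); pass when gamma <= 1
        g2 = ((x_mm - xr) / dta_mm) ** 2 + ((dose_eval - dr) / d_norm) ** 2
        passes.append(np.sqrt(g2.min()) <= 1.0)
    return float(np.mean(passes))

x = np.linspace(0.0, 100.0, 201)                         # positions (mm)
measured = np.exp(-(((x - 50.0) / 20.0) ** 2))           # synthetic profile
calculated = 1.02 * np.exp(-(((x - 50.5) / 20.0) ** 2))  # 2% high, 0.5 mm shift
gpr = gamma_pass_rate(measured, calculated, x)
print(f"gamma passing rate: {100.0 * gpr:.1f}%")
```

    A small dose offset combined with a sub-millimetre shift still passes everywhere, because either the dose criterion or a nearby spatial neighbour satisfies the combined test.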

  12. Image reconstruction from cone-beam projections with attenuation correction

    NASA Astrophysics Data System (ADS)

    Weng, Yi

    1997-07-01

    In single photon emission computed tomography (SPECT) imaging, photon attenuation within the body is a major factor contributing to the quantitative inaccuracy in measuring the distribution of radioactivity. Cone-beam SPECT provides improved sensitivity for imaging small organs. This thesis extends the results for 2D parallel-beam and fan-beam geometry to 3D parallel-beam and cone-beam geometries in order to derive filtered backprojection reconstruction algorithms for the 3D exponential parallel-beam transform and for the exponential cone-beam transform with sampling on a sphere. An exact inversion formula for the 3D exponential parallel-beam transform is obtained and is extended to the 3D exponential cone-beam transform. Sampling on a sphere is not useful clinically, and current cone-beam tomography, with the focal point traversing a planar orbit, does not acquire sufficient data to give an accurate reconstruction. Thus, a data acquisition method was developed that obtains complete data for cone-beam SPECT by simultaneously rotating the gamma camera and translating the patient bed, so that cone-beam projections are acquired with the focal point traversing a helix that surrounds the patient. First, an implementation of Grangeat's algorithm for helical cone-beam projections was developed without attenuation correction. A fast new rebinning scheme was developed that uses all of the detected data to reconstruct the image and properly normalizes any multiply scanned data. In the case of attenuation, no theorem analogous to Tuy's has been proven. We hypothesized that an artifact-free reconstruction could be obtained even if the cone-beam data are attenuated, provided the imaging orbit satisfies Tuy's condition and the exact attenuation map is known. Cone-beam emission data were acquired using circle-and-line and helix orbits on a clinical SPECT system.
An iterative conjugate gradient reconstruction algorithm was used to reconstruct projection data with a known attenuation map. The quantitative accuracy of the attenuation-corrected emission reconstruction was significantly improved.
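    The 2-D analogue of the exponential parallel-beam transform at the heart of the thesis can be sketched numerically: line integrals of the activity weighted by exp(μt) along each ray. The grid size, nearest-neighbour sampling, disk phantom, and μ value are illustrative assumptions, not the thesis's geometry.

```python
import numpy as np

def exp_radon(f, mu, theta):
    """Exponential parallel-beam projection of square image f at angle theta."""
    n = f.shape[0]
    c = (n - 1) / 2.0
    s_vals = np.linspace(-c, c, n)        # detector coordinates
    t_vals = np.linspace(-c, c, n)        # parameter along the ray
    ct, st = np.cos(theta), np.sin(theta)
    proj = np.zeros(n)
    for i, s in enumerate(s_vals):
        # ray: point = s*(-sin,cos) + t*(cos,sin); nearest-neighbour sampling
        ix = np.round(s * (-st) + t_vals * ct + c).astype(int)
        iy = np.round(s * ct + t_vals * st + c).astype(int)
        ok = (ix >= 0) & (ix < n) & (iy >= 0) & (iy < n)
        proj[i] = np.sum(f[iy[ok], ix[ok]] * np.exp(mu * t_vals[ok]))
    return proj

n = 64
yy, xx = np.mgrid[0:n, 0:n]
phantom = (((xx - n / 2) ** 2 + (yy - n / 2) ** 2) < (n / 4) ** 2).astype(float)
p_plain = exp_radon(phantom, 0.0, 0.0)    # mu = 0: ordinary Radon transform
p_atten = exp_radon(phantom, -0.01, 0.0)  # constant-attenuation weighting
print(f"peak ray sums: plain {p_plain.max():.2f}, attenuated {p_atten.max():.2f}")
```

    Setting μ = 0 recovers the ordinary Radon transform, which is why the exponential transform's inversion formulas reduce to classical FBP in the unattenuated limit.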

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Badkul, R; Nicolai, W; Pokhrel, D

    Purpose: To compare the impact of the Pencil Beam (PB) and Anisotropic Analytic Algorithm (AAA) dose calculation algorithms on OARs and the planning target volume (PTV) in thoracic spine stereotactic body radiation therapy (SBRT). Methods: Ten spine SBRT patients were planned on the Brainlab iPlan system using hybrid plans consisting of 1–2 non-coplanar conformal dynamic arcs and a few IMRT beams, treated on a NovalisTx with 6 MV photons. The dose prescription varied from 20 Gy to 30 Gy in 5 fractions depending on the clinical situation. PB plans were retrospectively recalculated in Varian Eclipse with the AAA algorithm using the same MUs, MLC pattern, and grid size (3 mm). Differences in dose-volume parameters for the PTV, spinal cord, lung, and esophagus were analyzed and compared between the PB and AAA algorithms. OAR constraints followed RTOG-0631. Results: Since patients were treated using PB calculations, all AAA DVH values were compared with the PB plan values as the standard, although AAA predicts dose more accurately than PB. PTV(min), PTV(max), PTV(mean), PTV(D99%), and PTV(D90%) were overestimated by the AAA calculation on average by 3.5%, 1.84%, 0.95%, 3.98%, and 1.55%, respectively, compared to PB. All lung DVH parameters were underestimated with the AAA algorithm; the mean deviations of lung V20, V10, V5, and 1000 cc were 42.81%, 19.83%, 18.79%, and 18.35%, respectively. AAA overestimated cord (0.35 cc) by a mean of 17.3%, cord (0.03 cc) by 12.19%, and cord (max) by 10.5% compared to PB. Esophagus max dose was overestimated by 4.4% and 5 cc by 3.26% for the AAA algorithm compared to PB. Conclusion: AAA overestimated the PTV dose values by up to 4%. The lung DVH had the greatest underestimation of dose by AAA versus PB. Spinal cord dose was overestimated by AAA versus PB. Given the critical importance of accurate OAR and PTV dose calculation for spine SBRT, more accurate algorithms and validation of calculated doses in phantom models are indicated.
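    The dose-volume parameters compared in this study (e.g. lung V20, PTV D99%) are simple reductions of per-voxel dose arrays. A sketch with synthetic dose distributions and a voxel volume assumed from a 3 mm grid (none of the clinical values are reproduced here):

```python
import numpy as np

def V_dose(doses_gy, voxel_cc, threshold_gy):
    """Absolute volume (cc) receiving at least threshold_gy."""
    return np.count_nonzero(doses_gy >= threshold_gy) * voxel_cc

def D_percent(doses_gy, pct):
    """Minimum dose (Gy) received by the hottest pct% of the volume, e.g. D99%."""
    return np.percentile(doses_gy, 100.0 - pct)

rng = np.random.default_rng(1)
ptv = rng.normal(25.0, 1.0, 5000)        # synthetic PTV voxel doses (Gy)
lung = rng.exponential(4.0, 20000)       # synthetic lung voxel doses (Gy)
voxel_cc = 0.027                         # 3 mm grid -> 0.3^3 = 0.027 cc/voxel
print(f"PTV D99% = {D_percent(ptv, 99):.1f} Gy")
print(f"lung V20 = {V_dose(lung, voxel_cc, 20.0):.1f} cc")
```

    Algorithm comparisons like the one above then reduce to percentage differences between these scalar metrics computed from the two dose grids.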

  14. Quantifying the effect of air gap, depth, and range shifter thickness on TPS dosimetric accuracy in superficial PBS proton therapy.

    PubMed

    Shirey, Robert J; Wu, Hsinshun Terry

    2018-01-01

    This study quantifies the dosimetric accuracy of a commercial treatment planning system as functions of treatment depth, air gap, and range shifter thickness for superficial pencil beam scanning proton therapy treatments. The RayStation 6 pencil beam and Monte Carlo dose engines were each used to calculate the dose distributions for a single treatment plan with varying range shifter air gaps. Central axis dose values extracted from each of the calculated plans were compared to dose values measured with a calibrated PTW Markus chamber at various depths in RW3 solid water. Dose was measured at 12 depths, ranging from the surface to 5 cm, for each of the 18 different air gaps, which ranged from 0.5 to 28 cm. TPS dosimetric accuracy, defined as the ratio of calculated dose relative to the measured dose, was plotted as functions of depth and air gap for the pencil beam and Monte Carlo dose algorithms. The accuracy of the TPS pencil beam dose algorithm was found to be clinically unacceptable at depths shallower than 3 cm with air gaps wider than 10 cm, and increased range shifter thickness only added to the dosimetric inaccuracy of the pencil beam algorithm. Each configuration calculated with Monte Carlo was determined to be clinically acceptable. Further comparisons of the Monte Carlo dose algorithm to the measured spread-out Bragg Peaks of multiple fields used during machine commissioning verified the dosimetric accuracy of Monte Carlo in a variety of beam energies and field sizes. Discrepancies between measured and TPS calculated dose values can mainly be attributed to the ability (or lack thereof) of the TPS pencil beam dose algorithm to properly model secondary proton scatter generated in the range shifter. © 2017 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
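    The study's accuracy metric is the ratio of TPS-calculated to measured dose. A minimal sketch, assuming a ±3% clinical tolerance purely for illustration (the abstract does not state the acceptability criterion, and the dose values below are made up):

```python
TOLERANCE = 0.03    # assumed +/-3% clinical tolerance, for illustration only

def accuracy(calculated_gy, measured_gy):
    """TPS dosimetric accuracy: calculated dose relative to measured dose."""
    return calculated_gy / measured_gy

def acceptable(calculated_gy, measured_gy, tol=TOLERANCE):
    return abs(accuracy(calculated_gy, measured_gy) - 1.0) <= tol

# illustrative numbers: a pencil-beam overestimate at a shallow depth with a
# wide air gap, versus a Monte Carlo value close to measurement
print(acceptable(2.12, 2.00), acceptable(2.03, 2.00))
```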

  15. Longitudinal dynamics of an intense electron beam

    NASA Astrophysics Data System (ADS)

    Harris, John Richardson

    2005-11-01

    The dynamics of charged particle beams are governed by the particles' thermal velocities, external focusing forces, and Coulomb forces. Beams in which Coulomb forces play the dominant role are known as space charge dominated, or intense. Intense beams are of great interest for heavy ion fusion, spallation neutron sources, free-electron lasers, and other applications. In addition, all beams of interest are dominated by space charge forces when they are first created, so an understanding of space charge effects is critical to explain the later evolution of any beam. Historically, more attention has been paid to the transverse dynamics of beams. However, many interesting and important effects in beams occur along their length. These longitudinal effects can be limiting factors in many systems. For example, modulation or structure applied to the beam at low energy will evolve under space charge forces. Depending on the intended use of the beam and the nature of the modulation, this may result in improved or degraded performance. To study longitudinal dynamics in intense beams, experiments were conducted using the University of Maryland Electron Ring, a 10 keV, 100 mA electron transport system. These experiments concentrated on space charge driven changes in beam length in parabolic and rectangular beams, beam density and velocity modulation, and space charge wave propagation. Coupling between the transverse and longitudinal dynamics was also investigated. These experiments involved operating the UMER gun in space charge limited, temperature limited, triode amplification, photon limited, and hybrid modes. Results of these experiments are presented here, along with a theoretical framework for understanding the longitudinal dynamics of intense beams.

  16. Worldwide Ocean Optics Database (WOOD)

    DTIC Science & Technology

    2002-09-30

    attenuation estimated from diffuse attenuation and backscatter data). Error estimates will also be provided for the computed results. Extensive algorithm...empirical algorithms (e.g., beam attenuation estimated from diffuse attenuation and backscatter data). Error estimates will also be provided for the...properties, including diffuse attenuation, beam attenuation, and scattering. Data from ONR-funded bio-optical cruises will be given priority for loading

  17. Piezoelectric self-sensing actuator for active vibration control of motorized spindle based on adaptive signal separation

    NASA Astrophysics Data System (ADS)

    He, Ye; Chen, Xiaoan; Liu, Zhi; Qin, Yi

    2018-06-01

    The motorized spindle is the core component of CNC machine tools, and its vibration reduces the machining precision and service life of the machine. Owing to their fast response and large output force and displacement, piezoelectric stacks are often used as actuators in active vibration control of the spindle. A piezoelectric self-sensing actuator (SSA) can reduce the cost of the active vibration control system and simplify its structure by eliminating a separate sensor, because an SSA performs both actuating and sensing functions at the same time. Signal separation for an SSA based on a bridge circuit is widely applied because of its simple principle and easy implementation; however, it is difficult to keep the circuit dynamically balanced. Prior research has used adaptive algorithms to balance the bridge circuit on a flexible beam dynamically, but those algorithms require that the sensing and control voltages be uncorrelated, which limits the application of SSAs to vibration control of rotor-bearing systems. Here, the electromechanical coupling model of the piezoelectric stack is established, followed by the dynamic model of the spindle system. Next, a new adaptive signal separation method based on the bridge circuit is proposed, which can adaptively separate the relatively small sensing voltage from the correlated mixed voltage. The experimental results show that when the self-sensing signal obtained from the proposed method is used as a displacement signal, the vibration of the motorized spindle can be suppressed effectively through a linear quadratic Gaussian (LQG) algorithm.
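    The bridge-based separation problem can be sketched with a one-weight LMS adaptive filter: the bridge output mixes a large feed-through of the control voltage with a small sensing voltage, and the filter adapts a gain to cancel the control component. The frequencies, amplitudes, and step size are illustrative assumptions; note this classical sketch relies on uncorrelated sensing and control voltages, which is exactly the limitation the paper's method is designed to remove.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20000
t = np.arange(n) / 10000.0                       # 10 kHz sampling (assumed)
control = np.sin(2 * np.pi * 50.0 * t)           # drive (control) voltage
sensing = 0.05 * np.sin(2 * np.pi * 180.0 * t)   # small sensing component
mixed = 0.8 * control + sensing                  # unbalanced bridge output

w = 0.0            # adaptive estimate of the control-voltage feed-through gain
mu = 0.01          # LMS step size (assumed)
out = np.empty(n)
for i in range(n):
    e = mixed[i] - w * control[i]     # residual = sensing-voltage estimate
    w += 2.0 * mu * e * control[i]    # LMS weight update
    out[i] = e

err = np.mean((out[n // 2:] - sensing[n // 2:]) ** 2)
print(f"converged gain {w:.3f}, residual MSE {err:.2e}")
```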

  18. Dynamic correction of the laser beam coordinate in fabrication of large-sized diffractive elements for testing aspherical mirrors

    NASA Astrophysics Data System (ADS)

    Shimansky, R. V.; Poleshchuk, A. G.; Korolkov, V. P.; Cherkashin, V. V.

    2017-05-01

    This paper presents a method of improving the accuracy of a circular laser writing system, operating in a polar coordinate system, in the fabrication of large-diameter diffractive optical elements, together with the results of its use. An algorithm for correcting positioning errors of a circular laser writing system, developed at the Institute of Automation and Electrometry, SB RAS, is proposed and tested. High-precision synthesized holograms fabricated by this method and the results of using these elements for testing the 6.5 m diameter aspheric mirror of the James Webb Space Telescope (JWST) are described.

  19. Physics Computing '92: Proceedings of the 4th International Conference

    NASA Astrophysics Data System (ADS)

    de Groot, Robert A.; Nadrchal, Jaroslav

    1993-04-01

    The Table of Contents for the book is as follows: * Preface * INVITED PAPERS * Ab Initio Theoretical Approaches to the Structural, Electronic and Vibrational Properties of Small Clusters and Fullerenes: The State of the Art * Neural Multigrid Methods for Gauge Theories and Other Disordered Systems * Multicanonical Monte Carlo Simulations * On the Use of the Symbolic Language Maple in Physics and Chemistry: Several Examples * Nonequilibrium Phase Transitions in Catalysis and Population Models * Computer Algebra, Symmetry Analysis and Integrability of Nonlinear Evolution Equations * The Path-Integral Quantum Simulation of Hydrogen in Metals * Digital Optical Computing: A New Approach of Systolic Arrays Based on Coherence Modulation of Light and Integrated Optics Technology * Molecular Dynamics Simulations of Granular Materials * Numerical Implementation of a K.A.M. Algorithm * Quasi-Monte Carlo, Quasi-Random Numbers and Quasi-Error Estimates * What Can We Learn from QMC Simulations * Physics of Fluctuating Membranes * Plato, Apollonius, and Klein: Playing with Spheres * Steady States in Nonequilibrium Lattice Systems * CONVODE: A REDUCE Package for Differential Equations * Chaos in Coupled Rotators * Symplectic Numerical Methods for Hamiltonian Problems * Computer Simulations of Surfactant Self Assembly * High-dimensional and Very Large Cellular Automata for Immunological Shape Space * A Review of the Lattice Boltzmann Method * Electronic Structure of Solids in the Self-interaction Corrected Local-spin-density Approximation * Dedicated Computers for Lattice Gauge Theory Simulations * Physics Education: A Survey of Problems and Possible Solutions * Parallel Computing and Electronic-Structure Theory * High Precision Simulation Techniques for Lattice Field Theory * CONTRIBUTED PAPERS * Case Study of Microscale Hydrodynamics Using Molecular Dynamics and Lattice Gas Methods * Computer Modelling of the Structural and Electronic Properties of the Supported Metal 
Catalysis * Ordered Particle Simulations for Serial and MIMD Parallel Computers * "NOLP" -- Program Package for Laser Plasma Nonlinear Optics * Algorithms to Solve Nonlinear Least Square Problems * Distribution of Hydrogen Atoms in Pd-H Computed by Molecular Dynamics * A Ray Tracing of Optical System for Protein Crystallography Beamline at Storage Ring-SIBERIA-2 * Vibrational Properties of a Pseudobinary Linear Chain with Correlated Substitutional Disorder * Application of the Software Package Mathematica in Generalized Master Equation Method * Linelist: An Interactive Program for Analysing Beam-foil Spectra * GROMACS: A Parallel Computer for Molecular Dynamics Simulations * GROMACS Method of Virial Calculation Using a Single Sum * The Interactive Program for the Solution of the Laplace Equation with the Elimination of Singularities for Boundary Functions * Random-Number Generators: Testing Procedures and Comparison of RNG Algorithms * Micro-TOPIC: A Tokamak Plasma Impurities Code * Rotational Molecular Scattering Calculations * Orthonormal Polynomial Method for Calibrating of Cryogenic Temperature Sensors * Frame-based System Representing Basis of Physics * The Role of Massively Data-parallel Computers in Large Scale Molecular Dynamics Simulations * Short-range Molecular Dynamics on a Network of Processors and Workstations * An Algorithm for Higher-order Perturbation Theory in Radiative Transfer Computations * Hydrostochastics: The Master Equation Formulation of Fluid Dynamics * HPP Lattice Gas on Transputers and Networked Workstations * Study on the Hysteresis Cycle Simulation Using Modeling with Different Functions on Intervals * Refined Pruning Techniques for Feed-forward Neural Networks * Random Walk Simulation of the Motion of Transient Charges in Photoconductors * The Optical Hysteresis in Hydrogenated Amorphous Silicon * Diffusion Monte Carlo Analysis of Modern Interatomic Potentials for He * A Parallel Strategy for Molecular Dynamics Simulations of Polar 
Liquids on Transputer Arrays * Distribution of Ions Reflected on Rough Surfaces * The Study of Step Density Distribution During Molecular Beam Epitaxy Growth: Monte Carlo Computer Simulation * Towards a Formal Approach to the Construction of Large-scale Scientific Applications Software * Correlated Random Walk and Discrete Modelling of Propagation through Inhomogeneous Media * Teaching Plasma Physics Simulation * A Theoretical Determination of the Au-Ni Phase Diagram * Boson and Fermion Kinetics in One-dimensional Lattices * Computational Physics Course on the Technical University * Symbolic Computations in Simulation Code Development and Femtosecond-pulse Laser-plasma Interaction Studies * Computer Algebra and Integrated Computing Systems in Education of Physical Sciences * Coordinated System of Programs for Undergraduate Physics Instruction * Program Package MIRIAM and Atomic Physics of Extreme Systems * High Energy Physics Simulation on the T_Node * The Chapman-Kolmogorov Equation as Representation of Huygens' Principle and the Monolithic Self-consistent Numerical Modelling of Lasers * Authoring System for Simulation Developments * Molecular Dynamics Study of Ion Charge Effects in the Structure of Ionic Crystals * A Computational Physics Introductory Course * Computer Calculation of Substrate Temperature Field in MBE System * Multimagnetical Simulation of the Ising Model in Two and Three Dimensions * Failure of the CTRW Treatment of the Quasicoherent Excitation Transfer * Implementation of a Parallel Conjugate Gradient Method for Simulation of Elastic Light Scattering * Algorithms for Study of Thin Film Growth * Algorithms and Programs for Physics Teaching in Romanian Technical Universities * Multicanonical Simulation of 1st order Transitions: Interface Tension of the 2D 7-State Potts Model * Two Numerical Methods for the Calculation of Periodic Orbits in Hamiltonian Systems * Chaotic Behavior in a Probabilistic Cellular Automata? 
* Wave Optics Computing by a Networked-based Vector Wave Automaton * Tensor Manipulation Package in REDUCE * Propagation of Electromagnetic Pulses in Stratified Media * The Simple Molecular Dynamics Model for the Study of Thermalization of the Hot Nucleon Gas * Electron Spin Polarization in PdCo Alloys Calculated by KKR-CPA-LSD Method * Simulation Studies of Microscopic Droplet Spreading * A Vectorizable Algorithm for the Multicolor Successive Overrelaxation Method * Tetragonality of the CuAu I Lattice and Its Relation to Electronic Specific Heat and Spin Susceptibility * Computer Simulation of the Formation of Metallic Aggregates Produced by Chemical Reactions in Aqueous Solution * Scaling in Growth Models with Diffusion: A Monte Carlo Study * The Nucleus as the Mesoscopic System * Neural Network Computation as Dynamic System Simulation * First-principles Theory of Surface Segregation in Binary Alloys * Data Smooth Approximation Algorithm for Estimating the Temperature Dependence of the Ice Nucleation Rate * Genetic Algorithms in Optical Design * Application of 2D-FFT in the Study of Molecular Exchange Processes by NMR * Advanced Mobility Model for Electron Transport in P-Si Inversion Layers * Computer Simulation for Film Surfaces and its Fractal Dimension * Parallel Computation Techniques and the Structure of Catalyst Surfaces * Educational SW to Teach Digital Electronics and the Corresponding Text Book * Primitive Trinomials (Mod 2) Whose Degree is a Mersenne Exponent * Stochastic Modelisation and Parallel Computing * Remarks on the Hybrid Monte Carlo Algorithm for the φ4 Model * An Experimental Computer Assisted Workbench for Physics Teaching * A Fully Implicit Code to Model Tokamak Plasma Edge Transport * EXPFIT: An Interactive Program for Automatic Beam-foil Decay Curve Analysis * Mapping Technique for Solving General, 1-D Hamiltonian Systems * Freeway Traffic, Cellular Automata, and Some (Self-Organizing) Criticality * Photonuclear Yield Analysis by Dynamic
Programming * Incremental Representation of the Simply Connected Planar Curves * Self-convergence in Monte Carlo Methods * Adaptive Mesh Technique for Shock Wave Propagation * Simulation of Supersonic Coronal Streams and Their Interaction with the Solar Wind * The Nature of Chaos in Two Systems of Ordinary Nonlinear Differential Equations * Considerations of a Window-shopper * Interpretation of Data Obtained by RTP 4-Channel Pulsed Radar Reflectometer Using a Multi Layer Perceptron * Statistics of Lattice Bosons for Finite Systems * Fractal Based Image Compression with Affine Transformations * Algorithmic Studies on Simulation Codes for Heavy-ion Reactions * An Energy-Wise Computer Simulation of DNA-Ion-Water Interactions Explains the Abnormal Structure of Poly[d(A)]:Poly[d(T)] * Computer Simulation Study of Kosterlitz-Thouless-Like Transitions * Problem-oriented Software Package GUN-EBT for Computer Simulation of Beam Formation and Transport in Technological Electron-Optical Systems * Parallelization of a Boundary Value Solver and its Application in Nonlinear Dynamics * The Symbolic Classification of Real Four-dimensional Lie Algebras * Short, Singular Pulses Generation by a Dye Laser at Two Wavelengths Simultaneously * Quantum Monte Carlo Simulations of the Apex-Oxygen-Model * Approximation Procedures for the Axial Symmetric Static Einstein-Maxwell-Higgs Theory * Crystallization on a Sphere: Parallel Simulation on a Transputer Network * FAMULUS: A Software Product (also) for Physics Education * MathCAD vs. 
FAMULUS -- A Brief Comparison * First-principles Dynamics Used to Study Dissociative Chemisorption * A Computer Controlled System for Crystal Growth from Melt * A Time Resolved Spectroscopic Method for Short Pulsed Particle Emission * Green's Function Computation in Radiative Transfer Theory * Random Search Optimization Technique for One-criteria and Multi-criteria Problems * Hartley Transform Applications to Thermal Drift Elimination in Scanning Tunneling Microscopy * Algorithms of Measuring, Processing and Interpretation of Experimental Data Obtained with Scanning Tunneling Microscope * Time-dependent Atom-surface Interactions * Local and Global Minima on Molecular Potential Energy Surfaces: An Example of N3 Radical * Computation of Bifurcation Surfaces * Symbolic Computations in Quantum Mechanics: Energies in Next-to-solvable Systems * A Tool for RTP Reactor and Lamp Field Design * Modelling of Particle Spectra for the Analysis of Solid State Surface * List of Participants

  20. Axial Cone-Beam Reconstruction by Weighted BPF/DBPF and Orthogonal Butterfly Filtering.

    PubMed

    Tang, Shaojie; Tang, Xiangyang

    2016-09-01

    The backprojection-filtration (BPF) and derivative backprojection filtered (DBPF) algorithms, in which Hilbert filtering is the common algorithmic feature, were originally derived for exact helical reconstruction from cone-beam (CB) scan data and axial reconstruction from fan-beam data, respectively. These two algorithms can be heuristically extended to image reconstruction from axial CB scan data, but they induce severe artifacts in images located away from the central plane determined by the circular source trajectory. We propose an algorithmic solution herein to eliminate the artifacts. The solution integrates the three-dimensional (3-D) weighted axial CB-BPF/DBPF algorithm with orthogonal butterfly filtering, namely axial CB-BPF/DBPF cascaded with orthogonal butterfly filtering. Using the computer-simulated Forbild head and thoracic phantoms, which are rigorous tests of reconstruction accuracy, and an anthropomorphic thoracic phantom with projection data acquired on a CT scanner, we evaluate the performance of the proposed algorithm. Preliminary results show that the orthogonal butterfly filtering eliminates the severe streak artifacts in off-central-plane images reconstructed by the 3-D weighted axial CB-BPF/DBPF algorithm. Integrated with orthogonal butterfly filtering, the 3-D weighted CB-BPF/DBPF algorithm performs at least as well as the 3-D weighted CB-FBP algorithm in image reconstruction from axial CB scan data. The proposed 3-D weighted axial CB-BPF/DBPF cascaded with orthogonal butterfly filtering can be an algorithmic solution for CT imaging in extensive clinical and preclinical applications.
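    The Hilbert filtering that BPF and DBPF share can be sketched as an FFT-domain multiplication by the ideal frequency response −i·sgn(f); the test signal below is illustrative.

```python
import numpy as np

def hilbert_filter(x):
    """Finite-length Hilbert transform via the frequency response -i * sgn(f)."""
    n = len(x)
    H = -1j * np.sign(np.fft.fftfreq(n))
    return np.real(np.fft.ifft(H * np.fft.fft(x)))

t = np.linspace(0.0, 1.0, 512, endpoint=False)
x = np.cos(2 * np.pi * 8 * t)
y = hilbert_filter(x)        # the Hilbert transform of cos is sin
print(f"max deviation from sin: {np.abs(y - np.sin(2 * np.pi * 8 * t)).max():.2e}")
```

    In the BPF/DBPF setting this 1-D filter is applied along lines of the backprojected data rather than to a time signal, and handling its finite support is where the practical difficulties (and the off-central-plane artifacts) arise.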

  1. Damage assessment in PRC and RC beams by dynamic tests

    NASA Astrophysics Data System (ADS)

    Capozucca, R.

    2011-07-01

    The present paper reports on damaged prestressed reinforced concrete (PRC) and reinforced concrete (RC) beams investigated experimentally through dynamic testing, in order to verify the degree of damage due to reinforcement corrosion or load-induced cracking. In the experimental program, PRC beams were subjected to artificial reinforcement corrosion and static loading, while RC beams were damaged by increasing applied loads to produce bending cracks. The dynamic investigation was carried out on both undamaged and damaged PRC and RC beams, measuring natural frequencies and evaluating vibration mode shapes. Dynamic testing allowed the recording of frequency response variations at different vibration modes. The experimental results are compared with theoretical results and discussed.
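    The premise of such dynamic damage assessment is that natural frequencies fall as cracking reduces the bending stiffness EI. A minimal sketch for a simply supported beam, modelling damage as a uniform EI reduction (concrete-like values are illustrative; real cracks are local and call for FE or analytical crack models):

```python
import numpy as np

def natural_freqs(E, I, rho, A, L, n_modes=3):
    """First natural frequencies (Hz) of a simply supported uniform beam."""
    n = np.arange(1, n_modes + 1)
    omega = (n * np.pi / L) ** 2 * np.sqrt(E * I / (rho * A))
    return omega / (2.0 * np.pi)

E, rho = 30e9, 2400.0                  # concrete-like modulus and density
b, h, L = 0.2, 0.3, 4.0                # illustrative section and span (m)
A, I = b * h, b * h ** 3 / 12.0
f_intact = natural_freqs(E, I, rho, A, L)
f_damaged = natural_freqs(E, 0.8 * I, rho, A, L)   # 20% stiffness loss
print("intact (Hz):  ", np.round(f_intact, 1))
print("damaged (Hz): ", np.round(f_damaged, 1))
print("shift (%):    ", np.round(100.0 * (1.0 - f_damaged / f_intact), 1))
```

    Because frequency scales with the square root of EI, a uniform 20% stiffness loss shifts every mode down by roughly 10%; a localized crack instead shifts the modes unevenly, which is why the paper also evaluates mode shapes.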

  2. Stochastic collective dynamics of charged-particle beams in the stability regime

    NASA Astrophysics Data System (ADS)

    Petroni, Nicola Cufaro; de Martino, Salvatore; de Siena, Silvio; Illuminati, Fabrizio

    2001-01-01

    We introduce a description of the collective transverse dynamics of charged (proton) beams in the stability regime in terms of suitable classical stochastic fluctuations. In this scheme, the collective beam dynamics is described by time-reversal invariant diffusion processes deduced from stochastic variational principles (Nelson processes). By general arguments, we show that the diffusion coefficient, expressed in units of length, is given by λcN, where N is the number of particles in the beam and λc the Compton wavelength of a single constituent. This diffusion coefficient represents an effective unit of beam emittance. The hydrodynamic equations of the stochastic dynamics can easily be recast in the form of a Schrödinger equation, with the unit of emittance replacing the Planck action constant. This fact provides a natural connection to the so-called ``quantum-like approaches'' to beam dynamics. The transition probabilities associated with Nelson processes can be exploited to model evolutions suitable for controlling the transverse beam dynamics. In particular, we show how to control, in the quadrupole approximation to the beam-field interaction, both the focusing and the transverse oscillations of the beam, either together or independently.
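    The effective emittance unit λcN is easy to check numerically. The sketch below takes λc as the reduced Compton wavelength ħ/(mc) of the proton and an illustrative bunch population; both the convention and the value of N are assumptions, not the paper's.

```python
hbar = 1.054571817e-34    # reduced Planck constant (J s)
m_p = 1.67262192e-27      # proton mass (kg)
c = 2.99792458e8          # speed of light (m/s)

lambda_c = hbar / (m_p * c)      # reduced Compton wavelength of the proton (m)
N = 1e11                         # illustrative bunch population (assumed)
emittance_unit = lambda_c * N    # effective unit of emittance (m)
print(f"lambda_c = {lambda_c:.3e} m  ->  lambda_c * N = {emittance_unit:.3e} m")
```

    For this assumed N the unit lands in the tens-of-micrometres range, i.e. the same order as typical transverse emittances, which is what makes the quantum-like analogy quantitatively interesting.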

  3. Optimization of view weighting in tilted-plane-based reconstruction algorithms to minimize helical artifacts in multi-slice helical CT

    NASA Astrophysics Data System (ADS)

    Tang, Xiangyang

    2003-05-01

    In multi-slice helical CT, the single-tilted-plane-based reconstruction algorithm has been proposed to combat helical and cone-beam artifacts by tilting a reconstruction plane to fit the helical source trajectory optimally. Furthermore, to improve the noise characteristics or dose efficiency of the single-tilted-plane-based reconstruction algorithm, the multi-tilted-plane-based reconstruction algorithm has been proposed, in which the reconstruction plane deviates from the globally optimized pose due to an extra rotation about the third axis. As a result, the capability of the multi-tilted-plane-based reconstruction algorithm to suppress helical and cone-beam artifacts is compromised. An optimized tilted-plane-based reconstruction algorithm is proposed in this paper, in which a matched view weighting strategy optimizes both the suppression of helical and cone-beam artifacts and the noise characteristics. A helical body phantom is employed to quantitatively evaluate the imaging performance of the matched view weighting approach by tabulating the artifact index and noise characteristics, showing that matched view weighting significantly improves both helical artifact suppression and noise characteristics or dose efficiency in comparison to the case in which non-matched view weighting is applied. Finally, the matched view weighting approach is of practical importance in the development of multi-slice helical CT, because it maintains the computational structure of fan-beam filtered backprojection and demands no extra computational cost.

  4. Thermal effect on the dynamic response of axially functionally graded beam subjected to a moving harmonic load

    NASA Astrophysics Data System (ADS)

    Wang, Yuewu; Wu, Dafang

    2016-10-01

    The dynamic response of an axially functionally graded (AFG) beam in a thermal environment subjected to a moving harmonic load is investigated within the frameworks of classical beam theory (CBT) and Timoshenko beam theory (TBT). The Lagrange method is employed to derive the thermal buckling equations for the AFG beam, and then, with the critical buckling temperature as a parameter, the Newmark-β method is adopted to evaluate the dynamic response of the AFG beam under thermal environments. Admissible functions denoting the transverse displacement are expressed in simple algebraic polynomial forms. The temperature dependency of the material constituents is considered. The rule of mixtures (Voigt model) and the Mori-Tanaka (MT) scheme are used to evaluate the beam's effective material properties. A ceramic-metal AFG beam with immovable boundary conditions is considered as a numerical illustration to show the thermal effects on the dynamic behavior of the beam subjected to a moving harmonic load.
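    The Newmark-β time integration used for the dynamic response can be sketched for a single degree of freedom with the constant-average-acceleration variant (β = 1/4, γ = 1/2); the oscillator parameters and harmonic load below are illustrative, not the AFG beam's modal properties.

```python
import numpy as np

def newmark_beta(m, c, k, force, dt, beta=0.25, gamma=0.5):
    """Average-acceleration Newmark integration of m*u'' + c*u' + k*u = f(t)."""
    n = len(force)
    u, v, a = np.zeros(n), np.zeros(n), np.zeros(n)
    a[0] = (force[0] - c * v[0] - k * u[0]) / m
    k_eff = k + gamma * c / (beta * dt) + m / (beta * dt ** 2)
    for i in range(n - 1):
        # effective load built from the current state (standard Newmark form)
        p = (force[i + 1]
             + m * (u[i] / (beta * dt ** 2) + v[i] / (beta * dt)
                    + (0.5 / beta - 1.0) * a[i])
             + c * (gamma * u[i] / (beta * dt) + (gamma / beta - 1.0) * v[i]
                    + dt * (0.5 * gamma / beta - 1.0) * a[i]))
        u[i + 1] = p / k_eff
        a[i + 1] = ((u[i + 1] - u[i]) / (beta * dt ** 2)
                    - v[i] / (beta * dt) - (0.5 / beta - 1.0) * a[i])
        v[i + 1] = v[i] + dt * ((1.0 - gamma) * a[i] + gamma * a[i + 1])
    return u

m, c, k = 1.0, 0.4, 100.0             # illustrative mass, damping, stiffness
dt, n = 0.01, 3000
t = np.arange(n) * dt
u = newmark_beta(m, c, k, np.sin(2.0 * t), dt)   # forcing well below resonance
print(f"late-time peak displacement: {np.abs(u[n // 2:]).max():.4f}")
```

    This variant is unconditionally stable, which is why it is a common choice for the modal equations that arise after expanding the displacement in admissible functions.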

  5. A programmable metasurface with dynamic polarization, scattering and focusing control

    NASA Astrophysics Data System (ADS)

    Yang, Huanhuan; Cao, Xiangyu; Yang, Fan; Gao, Jun; Xu, Shenheng; Li, Maokun; Chen, Xibi; Zhao, Yi; Zheng, Yuejun; Li, Sijia

    2016-10-01

    Diverse electromagnetic (EM) responses of a programmable metasurface of relatively large scale have been investigated, where multiple functionalities are obtained on the same surface. Each unit cell in the metasurface is integrated with one PIN diode, and thus a binary coded phase is realized for a single polarization. Exploiting this anisotropic characteristic, reconfigurable polarization conversion is presented first. Then the dynamic scattering performance for two kinds of sources, i.e. a plane wave and a point source, is carefully elaborated. To tailor the scattering properties, a genetic algorithm, which naturally operates on binary codes, is coupled with the scattering pattern analysis to optimize the coding matrix. An inverse fast Fourier transform (IFFT) technique is also introduced to expedite the optimization of a large metasurface. Since the coding control of each unit cell allows local and direct modulation of the EM wave, various EM phenomena including anomalous reflection, diffusion, beam steering and beam forming are successfully demonstrated by both simulations and experiments. It is worthwhile to point out that real-time switching among these functionalities is also achieved by using a field-programmable gate array (FPGA). All the results suggest that the proposed programmable metasurface holds great potential for future applications.
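
The coupling of a genetic algorithm with FFT-based pattern analysis can be sketched in one dimension. Everything here is a toy stand-in (0/π two-phase coding, the normalized array-factor peak via FFT as the diffusion objective, arbitrary GA settings), not the authors' optimizer:

```python
import numpy as np

rng = np.random.default_rng(0)

def pattern_peak(code, nfft=256):
    """Normalized peak of the scattering array factor for a binary-phase
    coding sequence; 1.0 corresponds to uniform (mirror-like) coding."""
    field = np.where(code == 1, -1.0, 1.0)      # 0 -> phase 0, 1 -> phase pi
    return np.abs(np.fft.fft(field, nfft)).max() / len(code)

def ga_minimize_peak(n=32, pop=40, gens=60, pmut=0.05):
    """Tiny genetic algorithm on binary coding sequences: a low array-factor
    peak means the reflected energy is diffused over many directions."""
    popu = rng.integers(0, 2, size=(pop, n))
    for _ in range(gens):
        fit = np.array([pattern_peak(c) for c in popu])
        popu = popu[np.argsort(fit)]            # lower peak = better diffusion
        elite = popu[:pop // 2]
        children = []
        for _ in range(pop - len(elite)):       # one-point crossover
            a, b = elite[rng.integers(len(elite), size=2)]
            cut = rng.integers(1, n)
            children.append(np.concatenate([a[:cut], b[cut:]]))
        popu = np.vstack([elite, children])
        mask = rng.random(popu.shape) < pmut    # bit-flip mutation
        popu = np.where(mask, 1 - popu, popu)
    fit = np.array([pattern_peak(c) for c in popu])
    return popu[np.argmin(fit)], fit.min()
```

The FFT here plays the role the abstract assigns to the IFFT technique: it replaces element-by-element pattern summation and makes each fitness evaluation cheap enough for population-based search.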

  6. A programmable metasurface with dynamic polarization, scattering and focusing control

    PubMed Central

    Yang, Huanhuan; Cao, Xiangyu; Yang, Fan; Gao, Jun; Xu, Shenheng; Li, Maokun; Chen, Xibi; Zhao, Yi; Zheng, Yuejun; Li, Sijia

    2016-01-01

    Diverse electromagnetic (EM) responses of a programmable metasurface of relatively large scale have been investigated, where multiple functionalities are obtained on the same surface. Each unit cell in the metasurface is integrated with one PIN diode, and thus a binary coded phase is realized for a single polarization. Exploiting this anisotropic characteristic, reconfigurable polarization conversion is presented first. Then the dynamic scattering performance for two kinds of sources, i.e. a plane wave and a point source, is carefully elaborated. To tailor the scattering properties, a genetic algorithm, which naturally operates on binary codes, is coupled with the scattering pattern analysis to optimize the coding matrix. An inverse fast Fourier transform (IFFT) technique is also introduced to expedite the optimization of a large metasurface. Since the coding control of each unit cell allows local and direct modulation of the EM wave, various EM phenomena including anomalous reflection, diffusion, beam steering and beam forming are successfully demonstrated by both simulations and experiments. It is worthwhile to point out that real-time switching among these functionalities is also achieved by using a field-programmable gate array (FPGA). All the results suggest that the proposed programmable metasurface holds great potential for future applications. PMID:27774997

  7. A programmable metasurface with dynamic polarization, scattering and focusing control.

    PubMed

    Yang, Huanhuan; Cao, Xiangyu; Yang, Fan; Gao, Jun; Xu, Shenheng; Li, Maokun; Chen, Xibi; Zhao, Yi; Zheng, Yuejun; Li, Sijia

    2016-10-24

    Diverse electromagnetic (EM) responses of a programmable metasurface of relatively large scale have been investigated, where multiple functionalities are obtained on the same surface. Each unit cell in the metasurface is integrated with one PIN diode, and thus a binary coded phase is realized for a single polarization. Exploiting this anisotropic characteristic, reconfigurable polarization conversion is presented first. Then the dynamic scattering performance for two kinds of sources, i.e. a plane wave and a point source, is carefully elaborated. To tailor the scattering properties, a genetic algorithm, which naturally operates on binary codes, is coupled with the scattering pattern analysis to optimize the coding matrix. An inverse fast Fourier transform (IFFT) technique is also introduced to expedite the optimization of a large metasurface. Since the coding control of each unit cell allows local and direct modulation of the EM wave, various EM phenomena including anomalous reflection, diffusion, beam steering and beam forming are successfully demonstrated by both simulations and experiments. It is worthwhile to point out that real-time switching among these functionalities is also achieved by using a field-programmable gate array (FPGA). All the results suggest that the proposed programmable metasurface holds great potential for future applications.

  8. Video-rate hyperspectral two-photon fluorescence microscopy for in vivo imaging

    NASA Astrophysics Data System (ADS)

    Deng, Fengyuan; Ding, Changqin; Martin, Jerald C.; Scarborough, Nicole M.; Song, Zhengtian; Eakins, Gregory S.; Simpson, Garth J.

    2018-02-01

    Fluorescence hyperspectral imaging is a powerful tool for in vivo biological studies. The ability to recover the full spectra of the fluorophores allows accurate classification of different structures and study of dynamic behaviors during various biological processes. However, most existing methods require significant instrument modifications and/or suffer from image acquisition rates too low for compatibility with in vivo imaging. In the present work, a fast (up to 18 frames per second) hyperspectral two-photon fluorescence microscopy approach was demonstrated. Utilizing the beam-scanning hardware inherent in conventional multi-photon microscopy, the angle dependence of the generated fluorescence signal as a function of the beam's position allowed the system to probe a different portion of the spectrum at every scanning line. An iterative algorithm classified the fluorophores and recovered spectra with up to 2,400 channels using a custom high-speed 16-channel photomultiplier tube array. Several dynamic samples, including live fluorescently labeled C. elegans, were imaged at video rate. Fluorescence spectra recovered using no a priori spectral information agreed well with those obtained by fluorimetry. This system requires minimal changes to most existing beam-scanning multi-photon fluorescence microscopes, which are already accessible in many research facilities.
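
Recovering fluorophore spectra with no a priori spectral information is, at heart, a nonnegative unmixing problem. The paper's iterative algorithm is not reproduced here; this sketch uses generic Lee-Seung multiplicative NMF updates as a stand-in:

```python
import numpy as np

def nmf_unmix(M, k, iters=2000, seed=0):
    """Factor measured spectra M (pixels x channels) into nonnegative
    abundances W (pixels x k) and endmember spectra H (k x channels)
    using Lee-Seung multiplicative updates for the Frobenius objective."""
    rng = np.random.default_rng(seed)
    W = rng.random((M.shape[0], k)) + 1e-3
    H = rng.random((k, M.shape[1])) + 1e-3
    for _ in range(iters):
        H *= (W.T @ M) / (W.T @ W @ H + 1e-12)   # spectra update
        W *= (M @ H.T) / (W @ H @ H.T + 1e-12)   # abundance update
    return W, H
```

The multiplicative form keeps both factors nonnegative throughout, which is the physical constraint on fluorescence spectra and concentrations.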

  9. Comparison of dosimetric and radiobiological parameters on plans for prostate stereotactic body radiotherapy using an endorectal balloon for different dose-calculation algorithms and delivery-beam modes

    NASA Astrophysics Data System (ADS)

    Kang, Sang-Won; Suh, Tae-Suk; Chung, Jin-Beom; Eom, Keun-Yong; Song, Changhoon; Kim, In-Ah; Kim, Jae-Sung; Lee, Jeong-Woo; Cho, Woong

    2017-02-01

    The purpose of this study was to evaluate the impact of dosimetric and radiobiological parameters on treatment plans generated using different dose-calculation algorithms and delivery-beam modes for prostate stereotactic body radiation therapy (SBRT) using an endorectal balloon. For 20 patients with prostate cancer, SBRT plans were generated using a 10-MV photon beam in flattening filter (FF) and flattening-filter-free (FFF) modes. The total prescribed treatment dose was 42.7 Gy in 7 fractions to cover at least 95% of the planning target volume (PTV) with 95% of the prescribed dose. The dose computation was initially performed using the anisotropic analytical algorithm (AAA) in the Eclipse treatment planning system (Varian Medical Systems, Palo Alto, CA) and was then recalculated using Acuros XB (AXB V. 11.0.34) with the same monitor units and multileaf collimator files. The dosimetric and radiobiological parameters for the PTV and organs at risk (OARs) were analyzed from the dose-volume histogram. An obvious difference in dosimetric parameters between the AAA and AXB plans was observed in the PTV and rectum. Doses to the PTV, excluding the maximum dose, were always higher in the AAA plans than in the AXB plans. However, doses to the other OARs were similar for both algorithms. In addition, no difference was observed in the dosimetric parameters for different delivery-beam modes when the same algorithm was used to generate the plans. Consistent with the dosimetric differences, the radiobiological parameters for the two algorithms showed an apparent difference in the PTV and the rectum. The average tumor control probability of the AAA plans was higher than that of the AXB plans. The average normal tissue complication probability (NTCP) for the rectum was lower in the AXB plans than in the AAA plans. The AAA and AXB plans yielded very similar NTCPs for the other OARs. In plans using the same algorithm, the NTCPs for the delivery-beam modes showed no differences. This study demonstrated that the choice of dose-calculation algorithm affected the dosimetric and radiobiological parameters for the PTV and the rectum in prostate SBRT using an endorectal balloon. However, the dosimetric and radiobiological parameters in the AAA and AXB plans for the other OARs were similar. Furthermore, no differences between the dosimetric and radiobiological parameters for different delivery-beam modes were found when the same algorithm was used to generate the treatment plan.
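
The NTCP values compared above come from standard radiobiological models. As a hedged illustration, the Lyman-Kutcher-Burman (LKB) NTCP model evaluated on a differential dose-volume histogram looks like this (parameter values are organ-specific and not taken from this paper):

```python
import numpy as np
from math import erf

def lkb_ntcp(doses, volumes, td50, m, n):
    """Lyman-Kutcher-Burman NTCP from a differential DVH.
    td50: uniform dose giving 50% complication probability;
    m: slope parameter; n: volume-effect parameter."""
    v = np.asarray(volumes, dtype=float)
    v = v / v.sum()                                   # fractional volumes
    geud = float(np.sum(v * np.asarray(doses, dtype=float)**(1.0 / n))**n)
    t = (geud - td50) / (m * td50)
    return 0.5 * (1.0 + erf(t / np.sqrt(2.0)))        # standard normal CDF
```

The generalized EUD collapses the inhomogeneous dose distribution to one number, so the whole algorithm-to-algorithm NTCP difference in the rectum traces back to the DVH differences between AAA and AXB.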

  10. Multifocal multiphoton microscopy with adaptive optical correction

    NASA Astrophysics Data System (ADS)

    Coelho, Simao; Poland, Simon; Krstajic, Nikola; Li, David; Monypenny, James; Walker, Richard; Tyndall, David; Ng, Tony; Henderson, Robert; Ameer-Beg, Simon

    2013-02-01

    Fluorescence lifetime imaging microscopy (FLIM) is a well established approach for measuring dynamic signalling events inside living cells, including detection of protein-protein interactions. The improved optical penetration of infrared light compared with linear excitation, due to reduced Rayleigh scattering and low absorption, has provided imaging depths of up to 1 mm in brain tissue, but significant image degradation occurs as samples distort (aberrate) the infrared excitation beam. Multiphoton time-correlated single photon counting (TCSPC) FLIM is a method for obtaining functional, high resolution images of biological structures. In order to achieve good statistical accuracy, TCSPC typically requires long acquisition times. We report the development of a multifocal multiphoton microscope (MMM), termed MegaFLI. Beam parallelization, performed via a 3D Gerchberg-Saxton (GS) algorithm using a spatial light modulator (SLM), increases the TCSPC count rate in proportion to the number of beamlets produced. A weighted 3D GS algorithm is employed to improve homogeneity. An added benefit is the implementation of flexible and adaptive optical correction. Adaptive optics, implemented by means of Zernike polynomials, is used to correct for system-induced aberrations. Here we present results with significant improvement in throughput obtained using a novel complementary metal-oxide-semiconductor (CMOS) 1024-pixel single-photon avalanche diode (SPAD) array, opening the way to truly high-throughput FLIM.
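
The weighted GS idea, boosting target spots that lag behind the mean intensity so all beamlets end up equally bright, can be sketched in 2D (the paper works in 3D; the spot positions, grid size and iteration count here are arbitrary):

```python
import numpy as np

def weighted_gs(targets, shape=(64, 64), iters=30, seed=1):
    """Weighted Gerchberg-Saxton: find a phase-only SLM mask whose Fourier
    transform concentrates light on the target spots with uniform intensity."""
    w = np.zeros(shape)
    for y, x in targets:
        w[y, x] = 1.0
    phase = np.random.default_rng(seed).uniform(0, 2 * np.pi, shape)
    for _ in range(iters):
        focal = np.fft.fft2(np.exp(1j * phase))      # unit-amplitude SLM plane
        spots = np.array([np.abs(focal[y, x]) for y, x in targets])
        for (y, x), s in zip(targets, spots):
            w[y, x] *= spots.mean() / (s + 1e-12)    # boost lagging spots
        constrained = w * np.exp(1j * np.angle(focal))
        phase = np.angle(np.fft.ifft2(constrained))  # keep phase, drop amplitude
    intensity = np.abs(np.fft.fft2(np.exp(1j * phase)))**2
    return phase, intensity
```

The weight update is what distinguishes the weighted variant from plain GS: without it the spots converge to noticeably unequal brightness, which directly translates into unequal TCSPC count rates per beamlet.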

  11. Shaping non-diffracting beams with a digital micromirror device

    NASA Astrophysics Data System (ADS)

    Ren, Yu-Xuan; Fang, Zhao-Xiang; Lu, Rong-De

    2016-02-01

    The micromechanical digital micromirror device (DMD) performs as a spatial light modulator to shape the light wavefront. Unlike liquid crystal devices, which use birefringence to modulate the light wave, the DMD regulates the wavefront through amplitude modulation with digitally controlled mirrors switched on and off. The advantages of such a device are its fast switching speed, polarization insensitivity, and broadband modulation ability. The fast switching of the DMD not only enables the shaping of static light modes but also allows dynamic compensation of wavefront distortion caused by scattering media. We have employed this device to create higher-order modes, including Laguerre-Gaussian, Hermite-Gaussian, and Mathieu modes. Another class of beams preserves its shape during propagation and self-heals after obstacles; representative examples are the Bessel, Airy, and Pearcey modes. Since the DMD modulates the light intensity, a series of algorithms was developed to calculate proper amplitude holograms for shaping the light. Quasi-continuous gray-scale images can imitate a continuous amplitude hologram, while binary amplitude modulation is another means to create the modulation pattern for a steady light field. We demonstrate the generation of non-diffracting beams with binary amplitude modulation via the DMD, and successfully created non-diffracting Bessel, Airy, and Pearcey beams. We characterized the non-diffracting modes through propagation measurements as well as self-healing measurements.
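
Binary amplitude modulation of a steady target field is often implemented as a Lee-type binary hologram, in which the local fringe width on a carrier grating encodes the desired amplitude. A minimal sketch; the carrier frequency and the arcsine duty-cycle mapping are illustrative assumptions, not the authors' algorithm:

```python
import numpy as np

def binary_lee_hologram(amp, phase, carrier_freq=0.2):
    """Binary amplitude hologram for a DMD. A mirror is switched on where the
    phase-shifted carrier fringe exceeds a threshold; wider 'on' fringes
    (larger duty cycle) encode larger first-order diffracted amplitude."""
    ny, nx = amp.shape
    x = np.arange(nx)
    grating = 2 * np.pi * carrier_freq * x[None, :]
    duty = np.arcsin(np.clip(amp, 0.0, 1.0)) / np.pi   # fringe half-width
    return (np.cos(grating - phase) > np.cos(np.pi * duty)).astype(np.uint8)
```

Feeding this a conical (axicon-like) phase proportional to the radius, with unit amplitude, yields the binary mirror pattern for a Bessel-like non-diffracting beam in the first diffraction order.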

  12. Neighbor Discovery Algorithm in Wireless Local Area Networks Using Multi-beam Directional Antennas

    NASA Astrophysics Data System (ADS)

    Wang, Jin; Peng, Wei; Liu, Song

    2017-10-01

    Neighbor discovery is an important step for Wireless Local Area Networks (WLAN), and the use of multi-beam directional antennas can greatly improve network performance. However, most neighbor discovery algorithms in WLAN based on multi-beam directional antennas work effectively only in synchronous systems, not in asynchronous ones, and collisions at the AP remain a bottleneck for neighbor discovery. In this paper, we propose two asynchronous neighbor discovery algorithms: the asynchronous hierarchical scanning (AHS) and the asynchronous directional scanning (ADS) algorithm. Both are based on a three-way handshaking mechanism. AHS and ADS reduce collisions at the AP, achieving good performance in a hierarchical way and a directional way, respectively. Finally, the performance of AHS and ADS is tested on OMNeT++, and different application scenarios and the factors affecting the performance of these algorithms are analyzed. The simulation results show that AHS is suitable for densely populated scenes around the AP, while ADS is suitable for scenarios in which most neighbor nodes are far from the AP.

  13. Gaussian-Beam Laser-Resonator Program

    NASA Technical Reports Server (NTRS)

    Cross, Patricia L.; Bair, Clayton H.; Barnes, Norman

    1989-01-01

    Gaussian Beam Laser Resonator Program models laser resonators by use of Gaussian-beam-propagation techniques. Used to determine radii of beams as functions of position in laser resonators. Algorithm used in program has three major components. First, ray-transfer matrix for laser resonator must be calculated. Next, initial parameters of beam are calculated. Finally, propagation of beam through optical elements is computed. Written in Microsoft FORTRAN (Version 4.01).
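
The three components listed, building the ray-transfer matrix, setting the initial beam parameters, and propagating through the optical elements, map directly onto complex-beam-parameter (ABCD) propagation. A Python sketch of the same idea (the original program is FORTRAN; these names are hypothetical):

```python
import numpy as np

def q_initial(w0, lam):
    """Complex beam parameter at a waist of radius w0: q = i * zR."""
    return 1j * np.pi * w0**2 / lam

def propagate(q, M):
    """Pass the complex beam parameter through an ABCD ray-transfer matrix."""
    (A, B), (C, D) = M
    return (A * q + B) / (C * q + D)

def beam_radius(q, lam):
    """Spot radius from 1/q = 1/R - i*lam/(pi*w**2)."""
    return np.sqrt(-lam / (np.pi * (1 / q).imag))

# elementary ray-transfer matrices
free_space = lambda d: ((1.0, d), (0.0, 1.0))
thin_lens = lambda f: ((1.0, 0.0), (-1.0 / f, 1.0))
```

Chaining `propagate` calls through the resonator's elements reproduces the program's third step: the beam radius at any position follows from the accumulated q parameter.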

  14. Dynamic response of composite beams with induced-strain actuators

    NASA Astrophysics Data System (ADS)

    Chandra, Ramesh

    1994-05-01

    This paper presents an analytical-experimental study on the dynamic response of open-section composite beams actuated by piezoelectric devices. The analysis includes the essential features of open-section composite beam modeling, such as constrained warping and transverse shear deformation. A general plate segment of the beam, with and without a piezoelectric ply, is modeled using laminated plate theory, and the forces and displacements of this plate segment are then reduced to the forces and displacements of the one-dimensional beam. The dynamic response of bending-torsion coupled composite beams excited by piezoelectric devices is predicted. In order to validate the analysis, Kevlar-epoxy and graphite-epoxy beams with surface-mounted piezoceramic actuators were tested for their dynamic response. The response was measured using an accelerometer. Good correlation between analysis and experiment is achieved.

  15. Dose calculation of dynamic trajectory radiotherapy using Monte Carlo.

    PubMed

    Manser, P; Frauchiger, D; Frei, D; Volken, W; Terribilini, D; Fix, M K

    2018-04-06

    With the volumetric modulated arc therapy (VMAT) delivery technique, the gantry position, multi-leaf collimator (MLC) and dose rate change dynamically during the application. However, additional components, such as the collimator or the couch, can also be dynamically altered throughout the dose delivery. The degrees of freedom thus increase, allowing almost arbitrary dynamic trajectories for the beam. While the dose delivery of such dynamic trajectories is technically possible on linear accelerators, there is currently no dose calculation and validation tool available. Thus, the aim of this work is to develop a dose calculation and verification tool for dynamic trajectories using Monte Carlo (MC) methods. The dose calculation for dynamic trajectories is implemented in the previously developed Swiss Monte Carlo Plan (SMCP). SMCP interfaces the treatment planning system Eclipse with a MC dose calculation algorithm and is already able to handle dynamic MLC and gantry rotations. Hence, the additional dynamic components, namely the collimator and the couch, are described similarly to the dynamic MLC by defining data pairs of positions of the dynamic component and the corresponding MU-fractions. For validation purposes, measurements are performed with the Delta4 phantom and with film, using the developer mode on a TrueBeam linear accelerator. These measured dose distributions are then compared with the corresponding calculations using SMCP. First, simple academic cases applying one-dimensional movements are investigated; second, more complex dynamic trajectories with several simultaneously moving components are compared, considering academic cases as well as a clinically motivated prostate case. The dose calculation for dynamic trajectories is successfully implemented into SMCP. The comparisons between the measured and calculated dose distributions for the simple as well as the more complex situations show an agreement that is generally within 3% of the maximum dose or 3 mm. The required computation time for the dose calculation remains the same when the additional dynamically moving components are included. The results obtained for the dose comparisons in simple and complex situations suggest that the extended SMCP is an accurate dose calculation and efficient verification tool for dynamic trajectory radiotherapy. This work was supported by Varian Medical Systems. Copyright © 2018. Published by Elsevier GmbH.
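
Describing a dynamic component by data pairs of position and MU-fraction implies simple interpolation whenever the simulation needs the machine state at an arbitrary point of the delivery. A sketch with hypothetical control points:

```python
import numpy as np

def component_position(control_points, mu_fraction):
    """Position of a dynamic machine component (collimator, couch, ...) at a
    given cumulative MU fraction, by linear interpolation between the
    (MU-fraction, position) data pairs describing its trajectory."""
    mus, pos = zip(*control_points)
    return float(np.interp(mu_fraction, mus, pos))

# hypothetical trajectory: collimator rotates 0 -> 90 deg, couch 0 -> 10 deg
collimator_pts = [(0.0, 0.0), (0.5, 30.0), (1.0, 90.0)]
couch_pts = [(0.0, 0.0), (1.0, 10.0)]
```

Sampling every dynamic component at the same MU fraction yields one consistent machine state per simulated particle history batch, which is what lets all components move simultaneously along the trajectory.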

  16. A novel algorithm for the calculation of physical and biological irradiation quantities in scanned ion beam therapy: the beamlet superposition approach

    NASA Astrophysics Data System (ADS)

    Russo, G.; Attili, A.; Battistoni, G.; Bertrand, D.; Bourhaleb, F.; Cappucci, F.; Ciocca, M.; Mairani, A.; Milian, F. M.; Molinelli, S.; Morone, M. C.; Muraro, S.; Orts, T.; Patera, V.; Sala, P.; Schmitt, E.; Vivaldo, G.; Marchetto, F.

    2016-01-01

    The calculation algorithm of a modern treatment planning system for ion-beam radiotherapy should ideally be able to deal with different ion species (e.g. protons and carbon ions), to provide relative biological effectiveness (RBE) evaluations and to describe different beam lines. In this work we propose a new approach for computing ion irradiation outcomes, the beamlet superposition (BS) model, which satisfies these requirements. This model applies and extends the concepts of previous fluence-weighted pencil-beam algorithms to quantities of radiobiological interest other than dose, i.e. RBE- and LET-related quantities. It describes an ion beam through a beam-line-specific, weighted superposition of universal beamlets. The universal physical and radiobiological irradiation effect of the beamlets on a representative set of water-like tissues is evaluated once, coupling the per-track information derived from FLUKA Monte Carlo simulations with the radiobiological effectiveness provided by the microdosimetric kinetic model and the local effect model. Thanks to an extension of the superposition concept, the beamlet irradiation action superposition is applicable to the evaluation of dose, RBE and LET distributions. The weight function for the beamlet superposition is derived from the beam phase space density at the patient entrance. A general beam model commissioning procedure is proposed, which has been successfully tested on the CNAO beam line. The BS model provides the evaluation of different irradiation quantities for different ions, the adaptability permitted by weight functions and the evaluation speed of analytical approaches. Benchmarking plans in simple geometries and clinical plans are shown to demonstrate the model's capabilities.
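
The superposition of beamlet actions can be sketched for dose and dose-averaged LET (a simplified stand-in for the BS model's full machinery; RBE-related quantities superpose analogously with their own dose weighting):

```python
import numpy as np

def superpose(weights, beamlet_dose, beamlet_let):
    """Beamlet superposition: total dose is the weighted sum of beamlet dose
    distributions (beamlets x voxels); dose-averaged LET is the dose-weighted
    mean of the per-beamlet LET values."""
    w = np.asarray(weights, dtype=float)
    D = np.einsum('i,ix->x', w, beamlet_dose)
    num = np.einsum('i,ix,ix->x', w, beamlet_dose, beamlet_let)
    return D, num / np.maximum(D, 1e-12)
```

Because the beamlet tables are precomputed once, evaluating a new plan reduces to choosing the weights from the entrance phase space, which is where the model gets its analytical-level speed.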

  17. Ultra-fast fluence optimization for beam angle selection algorithms

    NASA Astrophysics Data System (ADS)

    Bangert, M.; Ziegenhein, P.; Oelfke, U.

    2014-03-01

    Beam angle selection (BAS) including fluence optimization (FO) is among the most extensive computational tasks in radiotherapy. Precomputed dose influence data (DID) of all considered beam orientations (up to 100 GB for complex cases) has to be handled in the main memory and repeated FOs are required for different beam ensembles. In this paper, the authors describe concepts accelerating FO for BAS algorithms using off-the-shelf multiprocessor workstations. The FO runtime is not dominated by the arithmetic load of the CPUs but by the transportation of DID from the RAM to the CPUs. On multiprocessor workstations, however, the speed of data transportation from the main memory to the CPUs is non-uniform across the RAM; every CPU has a dedicated memory location (node) with minimum access time. We apply a thread node binding strategy to ensure that CPUs only access DID from their preferred node. Ideal load balancing for arbitrary beam ensembles is guaranteed by distributing the DID of every candidate beam equally to all nodes. Furthermore we use a custom sorting scheme of the DID to minimize the overall data transportation. The framework is implemented on an AMD Opteron workstation. One FO iteration comprising dose, objective function, and gradient calculation takes between 0.010 s (9 beams, skull, 0.23 GB DID) and 0.070 s (9 beams, abdomen, 1.50 GB DID). Our overall FO time is < 1 s for small cases, larger cases take ~ 4 s. BAS runs including FOs for 1000 different beam ensembles take ~ 15-70 min, depending on the treatment site. This enables an efficient clinical evaluation of different BAS algorithms.
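
The load-balancing idea, splitting every candidate beam's DID evenly across memory nodes so that any beam ensemble loads all nodes equally, can be sketched in a few lines. This is a conceptual stand-in; the actual implementation binds one worker thread per NUMA node and stores each chunk in that node's local RAM:

```python
import numpy as np

def partition_did(did_per_beam, n_nodes):
    """Split every candidate beam's dose-influence rows evenly across memory
    nodes, so any subset of beams keeps all nodes equally loaded."""
    chunks = [[] for _ in range(n_nodes)]
    for beam, D in did_per_beam.items():
        for node, rows in enumerate(np.array_split(np.arange(D.shape[0]), n_nodes)):
            chunks[node].append((beam, rows))
    return chunks

def dose_from_chunks(did_per_beam, chunks, fluence):
    """Accumulate d = sum_b D_b @ x_b from node-local row chunks; in the real
    code each node's chunk is processed only by threads bound to that node."""
    nvox = next(iter(did_per_beam.values())).shape[0]
    d = np.zeros(nvox)
    for node_chunks in chunks:
        for beam, rows in node_chunks:
            d[rows] += did_per_beam[beam][rows] @ fluence[beam]
    return d
```

Because each thread only ever reads DID rows stored on its own node, the memory-bandwidth bottleneck described in the abstract is spread uniformly over all memory controllers regardless of which beam ensemble is being optimized.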

  18. Linear array measurements of enhanced dynamic wedge and treatment planning system (TPS) calculation for 15 MV photon beam and comparison with electronic portal imaging device (EPID) measurements.

    PubMed

    Petrovic, Borislava; Grzadziel, Aleksandra; Rutonjski, Laza; Slosarek, Krzysztof

    2010-09-01

    Enhanced dynamic wedges (EDW) are known to drastically increase radiation therapy treatment efficiency. This paper aims to compare linear array measurements of EDW with calculations of the treatment planning system (TPS) and with electronic portal imaging device (EPID) measurements for 15 MV photon energy. A range of field sizes and wedge angles for the 15 MV photon beam was measured with the CA 24 linear chamber array in a Blue water phantom. The measurement conditions were reproduced in calculations with the commercial treatment planning system XIO CMS v.4.2.0 using a convolution algorithm. EPID measurements were performed at an EPID-to-focus distance of 100 cm, with beam parameters identical to those of the CA24 measurements. Both depth doses and profiles were measured. Linear-array EDW profile measurements differ from the XIO CMS TPS calculations by around 0.5%, and profiles in the non-wedged direction and open-field profiles practically do not differ. Percentage depth doses (PDDs) for all EDW measurements differ by no more than 0.2%, and the open-field PDD is almost the same as the EDW PDD. Wedge factors for the 60 deg wedge angle were also examined; the difference is up to 4%, while EPID and linear-array measurements differ by up to 5%. The implementation of EDW in radiation therapy provides clinicians with an effective tool for conformal radiotherapy treatment planning. If the EDW beam is modelled correctly in the TPS, very good agreement between measurement and calculation is obtained, but the EPID cannot be used for reference measurements.

  19. Interfacial damage identification of steel and concrete composite beams based on piezoceramic wave method.

    PubMed

    Yan, Shi; Dai, Yong; Zhao, Putian; Liu, Weiling

    2018-01-01

    Steel-concrete composite structures are playing an increasingly important role in economic construction owing to a series of advantages: great stiffness, good seismic performance, savings in steel, cost efficiency, convenient construction, etc. However, in service, due to the long-term effects of environmental impacts and dynamic loading, the interfaces of a composite structure might develop debonding cracks, relative slips or separations, lowering the composite action of the structure. In this paper, piezoceramic (PZT) transducers are used to perform experiments on interface debonding slips and separations of composite beams, with the aim of proposing an interface damage identification model and an associated damage detection method based on PZT wave technology. Some of the PZT patches were embedded in concrete as "smart aggregates," while others were bonded to the surface of the steel beam flange, forming a sensor array. A push-out test of four specimens was carried out, and the experimental results showed that, under external loading, the received signal amplitudes decrease as debonding slip along the interface increases. The proposed signal energy-based interface damage detection algorithm is highly efficient for interface state evaluation of composite beams.
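
The signal energy-based detection idea can be sketched as a baseline-normalized energy index. This is a generic formulation; the paper's exact index definition may differ:

```python
import numpy as np

def signal_energy(sig):
    """Energy of a received stress-wave signal (sum of squared samples)."""
    return float(np.sum(np.square(np.asarray(sig, dtype=float))))

def damage_index(baseline, current):
    """Energy-based interface damage index: 0 for an intact interface,
    approaching 1 as debonding attenuates the wave energy transmitted
    across the steel-concrete interface."""
    return 1.0 - signal_energy(current) / signal_energy(baseline)
```

A received amplitude halved by interface debonding quarters the signal energy, so the index responds more strongly than the raw amplitude, which is one reason energy metrics are popular in PZT-based monitoring.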

  20. A modified beam-to-earth transformation to measure short-wavelength internal waves with an acoustic Doppler current profiler

    USGS Publications Warehouse

    Scotti, A.; Butman, B.; Beardsley, R.C.; Alexander, P.S.; Anderson, S.

    2005-01-01

    The algorithm used to transform velocity signals from beam coordinates to earth coordinates in an acoustic Doppler current profiler (ADCP) relies on the assumption that the currents are uniform over the horizontal distance separating the beams. This condition may be violated by (nonlinear) internal waves, which can have wavelengths as small as 100-200 m. In this case, the standard algorithm combines velocities measured at different phases of a wave and produces horizontal velocities that increasingly differ from true velocities with distance from the ADCP. Observations made in Massachusetts Bay show that currents measured with a bottom-mounted upward-looking ADCP during periods when short-wavelength internal waves are present differ significantly from currents measured by point current meters, except very close to the instrument. These periods are flagged with high error velocities by the standard ADCP algorithm. In this paper measurements from the four spatially diverging beams and the backscatter intensity signal are used to calculate the propagation direction and celerity of the internal waves. Once this information is known, a modified beam-to-earth transformation that combines appropriately lagged beam measurements can be used to obtain current estimates in earth coordinates that compare well with pointwise measurements. © 2005 American Meteorological Society.
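
The modified transformation can be sketched for one opposing beam pair. A flat wave front, negligible vertical velocity, and a known propagation direction and celerity are simplifying assumptions here:

```python
import numpy as np

def lagged_janus_velocity(b1, b2, z, theta, celerity, dt):
    """Janus combination of opposing beam radial velocities with a wave-phase
    lag. At height z the beams are separated horizontally by 2*z*tan(theta);
    a wave travelling at `celerity` crosses that separation in lag*dt seconds,
    so the up-wave beam is delayed before the standard combination
    u = (b1 - b2) / (2*sin(theta))."""
    lag = int(round(2 * z * np.tan(theta) / celerity / dt))
    b2_lagged = np.roll(b2, lag)          # b2 evaluated `lag` samples earlier
    u = (b1 - b2_lagged) / (2 * np.sin(theta))
    return u[lag:]                        # drop samples affected by wrap-around
```

With the lag applied, both beams sample the same phase of the wave, so the combined estimate no longer degrades with height above the instrument the way the standard transformation does.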

  1. Photon spectral characteristics of dissimilar 6 MV linear accelerators.

    PubMed

    Hinson, William H; Kearns, William T; deGuzman, Allan F; Bourland, J Daniel

    2008-05-01

    This work measures and compares the energy spectra of four dosimetrically matched 6 MV beams, generated from four physically different linear accelerators. The goal of this work is twofold. First, this study determines whether the spectra of dosimetrically matched beams are measurably different. This study also demonstrates that the spectra of clinical photon beams can be measured as a part of the beam data collection process for input to a three-dimensional (3D) treatment planning system. The spectra of 6 MV beams that are dosimetrically matched for clinical use were studied to determine if the beam spectra are similarly matched. Each of the four accelerators examined had a standing waveguide, but with different physical designs. The four accelerators were two Varian 2100C/Ds (one 6 MV/18 MV waveguide and one 6 MV/10 MV waveguide), one Varian 600 C with a vertically mounted waveguide and no bending magnet, and one Siemens MD 6740 with a 6 MV/10 MV waveguide. All four accelerators had percent depth dose curves for the 6 MV beam that were matched within 1.3%. Beam spectra were determined from narrow beam transmission measurements through successive thicknesses of pure aluminum along the central axis of the accelerator, made with a graphite Farmer ion chamber with a Lucite buildup cap. An iterative nonlinear fit using a Marquardt algorithm was used to find each spectrum. Reconstructed spectra show that all four beams have similar energy distributions with only subtle differences, despite the differences in accelerator design. The measured spectra of different 6 MV beams are similar regardless of accelerator design. The measured spectra show excellent agreement with those found by the auto-modeling algorithm in a commercial 3D treatment planning system that uses a convolution dose calculation algorithm. Thus, beam spectra can be acquired in a clinical setting at the time of commissioning as a part of the routine beam data collection.
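
The transmission measurement determines the spectrum through T(t) = sum_j w_j exp(-mu_j t), where t is the aluminum thickness. A sketch with illustrative (not NIST) attenuation coefficients, using a bounded least-squares fit in place of the authors' Marquardt routine:

```python
import numpy as np
from scipy.optimize import least_squares

# hypothetical energy bins (MeV) with illustrative aluminium attenuation
# coefficients (1/cm); real values would come from tabulated data
ENERGIES = np.array([0.5, 1.0, 2.0, 4.0])
MU_AL = np.array([0.227, 0.166, 0.120, 0.089])

def transmission(weights, thickness):
    """Narrow-beam transmission of a discrete spectrum through aluminium."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return np.exp(-np.outer(thickness, MU_AL)) @ w

def unfold_spectrum(thickness, measured):
    """Relative spectral weights from a transmission curve via iterative
    nonnegative least squares (a stand-in for the Marquardt-type fit)."""
    res = least_squares(lambda w: transmission(w, thickness) - measured,
                        x0=np.ones_like(ENERGIES), bounds=(1e-9, np.inf))
    return res.x / res.x.sum()
```

Because neighbouring attenuation coefficients are similar, the inversion is ill-conditioned; in practice many thickness steps and a noise-aware fit are needed, which is why the paper measures successive aluminium thicknesses along the central axis.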

  2. 3D ion velocity distribution function measurement in an electric thruster using laser induced fluorescence tomography

    NASA Astrophysics Data System (ADS)

    Elias, P. Q.; Jarrige, J.; Cucchetti, E.; Cannat, F.; Packan, D.

    2017-09-01

    Measuring the full ion velocity distribution function (IVDF) by non-intrusive techniques can improve our understanding of the ionization processes and beam dynamics at work in electric thrusters. In this paper, a Laser-Induced Fluorescence (LIF) tomographic reconstruction technique is applied to the measurement of the IVDF in the plume of a miniature Hall effect thruster. A setup is developed to move the laser axis along two rotation axes around the measurement volume. The fluorescence spectra taken from different viewing angles are combined using a tomographic reconstruction algorithm to build the complete 3D (in phase space) time-averaged distribution function. For the first time, this technique is used in the plume of a miniature Hall effect thruster to measure the full distribution function of the xenon ions. Two examples of reconstructions are provided, in front of the thruster nose-cone and in front of the anode channel. The reconstruction reveals the features of the ion beam, in particular on the thruster axis where a toroidal distribution function is observed. These findings are consistent with the thruster shape and operation. This technique, which can be used with other LIF schemes, could be helpful in revealing the details of the ion production regions and the beam dynamics. Using a more powerful laser source, the current implementation of the technique could be improved to reduce the measurement time and also to reconstruct the temporal evolution of the distribution function.
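
The tomographic step, recovering a velocity distribution from its projections along many laser axes, is an inverse Radon problem. A 2D filtered back-projection sketch (the paper reconstructs in 3D velocity space; the grid size, angle set and ramp filter here are arbitrary choices):

```python
import numpy as np
from scipy.ndimage import rotate

def radon(f, angles):
    """Projections of a 2D distribution: rotate, then sum along one axis
    (each LIF spectrum is such a projection of the IVDF onto the laser axis)."""
    return np.array([rotate(f, -a, reshape=False, order=1).sum(axis=0)
                     for a in angles])

def fbp(sino, angles):
    """Filtered back-projection: ramp-filter each projection in Fourier
    space, then smear it back across the grid at its acquisition angle."""
    n = sino.shape[1]
    ramp = np.abs(np.fft.fftfreq(n))
    rec = np.zeros((n, n))
    for proj, a in zip(sino, angles):
        filt = np.real(np.fft.ifft(np.fft.fft(proj) * ramp))
        rec += rotate(np.tile(filt, (n, 1)), a, reshape=False, order=1)
    return rec * np.pi / (2 * len(angles))
```

The ramp filter undoes the 1/|frequency| blurring of plain back-projection; with only a limited set of viewing angles, as in the two-axis scan described above, regularized iterative reconstructions are usually preferred over plain FBP.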

  3. Scatter correction in cone-beam CT via a half beam blocker technique allowing simultaneous acquisition of scatter and image information

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Ho; Xing Lei; Lee, Rena

    2012-05-15

    Purpose: X-ray scatter incident on the detector degrades the quality of cone-beam computed tomography (CBCT) and represents a problem in volumetric image-guided and adaptive radiation therapy. Several methods using a beam blocker for the estimation and subtraction of scatter have been proposed. However, due to missing information resulting from the obstruction of the blocker, such methods require dual scanning or a dynamically moving blocker to obtain a complete volumetric image. Here, we propose a half beam blocker-based approach, in conjunction with a total variation (TV) regularized Feldkamp-Davis-Kress (FDK) algorithm, to correct scatter-induced artifacts by simultaneously acquiring image and scatter information from a single-rotation CBCT scan. Methods: A half beam blocker, comprising lead strips, is used to simultaneously acquire image data on one half of the projection and scatter data on the other half. One-dimensional cubic B-spline interpolation/extrapolation is applied to derive patient-specific scatter information from the scatter distributions on the strips. The estimated scatter is subtracted from the projection image acquired at the opposite view. With the scatter-corrected projections, the FDK algorithm based on a cosine weighting function is used to reconstruct the CBCT volume. To suppress the noise in the reconstructed CBCT images produced by geometric errors between opposed projections and by the interpolated scatter information, total variation regularization is applied via a minimization using a steepest gradient descent optimization method. Experimental studies using Catphan504 and anthropomorphic phantoms were carried out to evaluate the performance of the proposed scheme. Results: The scatter-induced shading artifacts were markedly suppressed in CBCT using the proposed scheme. Compared with CBCT without a blocker, the nonuniformity value was reduced from 39.3% to 3.1%. The root mean square error relative to values inside the regions of interest selected from a benchmark scatter-free image was reduced from 50 to 11.3. The TV regularization also led to a better contrast-to-noise ratio. Conclusions: An asymmetric half beam blocker-based FDK acquisition and reconstruction technique has been established. The proposed scheme enables simultaneous detection of patient-specific scatter and complete volumetric CBCT reconstruction without additional requirements such as prior images, dual scans, or moving strips.
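    The core of the scatter correction, interpolating the scatter sampled behind the blocker strips with a cubic spline and subtracting it from the open-field signal, can be sketched on a 1D detector profile. All signals here are synthetic stand-ins:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Synthetic 1D detector profile: a structured primary signal plus a
# smooth, low-frequency scatter background (both invented).
u = np.linspace(0.0, 1.0, 200)                 # detector coordinate
scatter = 0.3 + 0.1 * np.sin(2 * np.pi * u)    # true scatter background
primary = np.exp(-((u - 0.5) / 0.1)**2)        # true primary signal
measured = primary + scatter

# Behind the lead strips only scatter reaches the detector, so the
# strip centers provide direct samples of the scatter field.
strip_pos = np.linspace(0.05, 0.95, 8)
strip_samples = 0.3 + 0.1 * np.sin(2 * np.pi * strip_pos)

# Cubic-spline interpolation/extrapolation of the strip samples gives a
# scatter estimate everywhere, which is subtracted from the open field.
est = CubicSpline(strip_pos, strip_samples)(u)
corrected = measured - est
```

    Because scatter varies slowly across the detector, a handful of strip samples suffices; the paper applies the same idea in 2D projections, subtracting the estimate from the opposite view.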

  4. Large Deformation Dynamic Bending of Composite Beams

    NASA Technical Reports Server (NTRS)

    Derian, E. J.; Hyer, M. W.

    1986-01-01

    Studies were conducted on the large-deformation response of composite beams subjected to a dynamic axial load. The beams were loaded with a moderate eccentricity to promote bending. The study was primarily experimental, but some finite element results were obtained. Both the deformation and the failure of the beams were of interest. The static response of the beams was also studied to determine potential differences between static and dynamic failure. Twelve different laminate types were tested. The beams were loaded dynamically with a gravity-driven impactor traveling at 19.6 ft/sec, and quasi-static tests were conducted on identical beams in a displacement-controlled manner. For laminates of practical interest, the failure modes under static and dynamic loadings were identical. Failure in most of the laminate types occurred in a single event involving 40% to 50% of the plies. However, failure in laminates with 30 deg or 15 deg off-axis plies occurred in several events. All laminates exhibited bimodular elastic properties. Using empirically determined flexural properties, a finite element analysis was reasonably accurate in predicting the static and dynamic deformation response.

  5. Axial Cone Beam Reconstruction by Weighted BPF/DBPF and Orthogonal Butterfly Filtering

    PubMed Central

    Tang, Shaojie; Tang, Xiangyang

    2016-01-01

    Goal: The backprojection-filtration (BPF) and the derivative backprojection filtered (DBPF) algorithms, in which Hilbert filtering is the common algorithmic feature, were originally derived for exact helical reconstruction from cone beam (CB) scan data and axial reconstruction from fan beam data, respectively. These two algorithms can be heuristically extended for image reconstruction from axial CB scan data, but they induce severe artifacts in images located away from the central plane determined by the circular source trajectory. We propose an algorithmic solution herein to eliminate the artifacts. Methods: The solution is an integration of the three-dimensional (3D) weighted axial CB-BPF/DBPF algorithm with orthogonal butterfly filtering, namely axial CB-BPF/DBPF cascaded with orthogonal butterfly filtering. Using the computer-simulated Forbild head and thoracic phantoms, which are rigorous tests of reconstruction accuracy, and an anthropomorphic thoracic phantom with projection data acquired on a CT scanner, we evaluate the performance of the proposed algorithm. Results: Preliminary results show that the orthogonal butterfly filtering can eliminate the severe streak artifacts at off-central planes in images reconstructed by the 3D weighted axial CB-BPF/DBPF algorithm. Conclusion: Integrated with orthogonal butterfly filtering, the 3D weighted CB-BPF/DBPF algorithm can perform at least as well as the 3D weighted CB-FBP algorithm in image reconstruction from axial CB scan data. Significance: The proposed 3D weighted axial CB-BPF/DBPF algorithm cascaded with orthogonal butterfly filtering can be an algorithmic solution for CT imaging in extensive clinical and preclinical applications. PMID:26660512

  6. A comparison of TPS and different measurement techniques in small-field electron beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Donmez Kesen, Nazmiye, E-mail: nazo94@gmail.com; Cakir, Aydin; Okutan, Murat

    In recent years, small-field electron beams have been used for the treatment of superficial lesions that require small circular fields. However, very small electron fields can present significant dosimetric problems. In this study, dose distributions and outputs of circular fields 5 cm in diameter and smaller, at nominal energies of 6, 9, and 15 MeV from a Siemens ONCOR linac, were measured and compared with data from a treatment planning system that uses the pencil-beam algorithm for electron beam calculations. All dose distribution measurements were performed with Gafchromic EBT film and compared with data obtained from the Computerized Medical Systems (CMS) XiO treatment planning system (TPS), using the gamma-index method in the PTW VeriSoft software. Output measurements were performed with Gafchromic EBT film, an Advanced Markus ion chamber, and thermoluminescent dosimetry (TLD). Although the pencil-beam algorithm is used to model electron beams in many clinics, there is little detailed information in the literature about its use for small fields. As the field size decreased, the point of maximum dose moved closer to the surface. The measured outputs were consistent among the different dosimeters; their differences from the values obtained from the TPS reached a maximum of 42% for 6 and 15 MeV and 32% for 9 MeV. When the dose distributions from the TPS were compared with the Gafchromic EBT film measurements, the results were consistent for fields 2 cm in diameter and larger, but the outputs for fields 1 cm in diameter and smaller were not. The dose distributions calculated with the pencil-beam algorithm in the CMS XiO TPS for electron fields created with a 1-cm diameter circular cutout were not appropriate for patient treatment, and the pencil-beam algorithm is not suitable for monitor unit (MU) calculations in such small-field electron dosimetry.

  7. Superiorized algorithm for reconstruction of CT images from sparse-view and limited-angle polyenergetic data

    NASA Astrophysics Data System (ADS)

    Humphries, T.; Winn, J.; Faridani, A.

    2017-08-01

    Recent work in CT image reconstruction has seen increasing interest in the use of total variation (TV) and related penalties to regularize problems involving reconstruction from undersampled or incomplete data. Superiorization is a recently proposed heuristic which provides an automatic procedure to ‘superiorize’ an iterative image reconstruction algorithm with respect to a chosen objective function, such as TV. Under certain conditions, the superiorized algorithm is guaranteed to find a solution that is as satisfactory as any found by the original algorithm with respect to satisfying the constraints of the problem; this solution is also expected to be superior with respect to the chosen objective. Most work on superiorization has used reconstruction algorithms which assume a linear measurement model, which in the case of CT corresponds to data generated from a monoenergetic x-ray beam. Many CT systems generate x-rays from a polyenergetic spectrum, however, in which the measured data represent an integral of object attenuation over all energies in the spectrum. This inconsistency with the linear model produces the well-known beam hardening artifacts, which impair analysis of CT images. In this work we superiorize an iterative algorithm for reconstruction from polyenergetic data, using both TV and an anisotropic TV (ATV) penalty. We apply the superiorized algorithm in numerical phantom experiments modeling both sparse-view and limited-angle scenarios. In our experiments, the superiorized algorithm successfully finds solutions which are as constraints-compatible as those found by the original algorithm, with significantly reduced TV and ATV values. The superiorized algorithm thus produces images with greatly reduced sparse-view and limited angle artifacts, which are also largely free of the beam hardening artifacts that would be present if a superiorized version of a monoenergetic algorithm were used.
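    The superiorization recipe, interleaving objective-reducing perturbations with the feasibility steps of a basic algorithm, can be illustrated on a 1D toy problem. Here the "basic algorithm" is a simple projection onto a data-fidelity ball (a stand-in for the paper's polyenergetic reconstruction), perturbed by TV-subgradient steps with summable step sizes:

```python
import numpy as np

rng = np.random.default_rng(0)

def tv(x):
    """Total variation of a 1D signal."""
    return float(np.abs(np.diff(x)).sum())

def tv_subgrad(x):
    """Normalized subgradient of TV (zero where TV is flat)."""
    g = np.zeros_like(x)
    d = np.sign(np.diff(x))
    g[:-1] -= d
    g[1:] += d
    n = np.linalg.norm(g)
    return g / n if n > 0 else g

# Noisy piecewise-constant signal; the constraint set is the ball
# ||x - y|| <= eps around the data (a simple POCS-style projection).
x_true = np.repeat([0.0, 1.0, 0.3, 0.8], 16)
y = x_true + 0.05 * rng.standard_normal(x_true.size)
eps = 0.05 * np.sqrt(x_true.size)

def project(x):
    r = x - y
    n = np.linalg.norm(r)
    return y + r * (eps / n) if n > eps else x

# Basic algorithm: projection only.  Superiorized version: TV-descent
# perturbations with summable (geometrically shrinking) step sizes,
# each followed by the same feasibility projection.
x_basic = project(y.copy())
x_sup = y.copy()
beta = 0.5
for _ in range(300):
    x_sup = project(x_sup - beta * tv_subgrad(x_sup))
    beta *= 0.98
```

    The superiorized iterate stays as constraints-compatible as the basic one (it ends inside the same ball) while reaching a markedly lower TV value, which is the essential behavior the paper exploits.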

  8. 3D model of filler melting with micro-beam plasma arc based on additive manufacturing technology

    NASA Astrophysics Data System (ADS)

    Chen, Weilin; Yang, Tao; Yang, Ruixin

    2017-07-01

    Additive manufacturing technology is a systematic process based on the discrete-accumulation principle and driven by the dimensions of the part. Addressing the dimensional mathematical model and the slicing problems in the additive manufacturing process, the constitutive relations between micro-beam plasma welding parameters and the part dimensions were investigated, and a slicing algorithm based on the dimensional characteristics was studied. Using a direct slicing algorithm based on the geometric characteristics of the model, a hollow thin-walled spherical part was fabricated by 3D additive manufacturing using a micro-beam plasma arc.

  9. Simulating and Synthesizing Substructures Using Neural Network and Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    Liu, Youhua; Kapania, Rakesh K.; VanLandingham, Hugh F.

    1997-01-01

    The feasibility of simulating and synthesizing substructures with computational neural network models is illustrated by investigating a statically indeterminate beam, using both 1-D and 2-D plane-stress modelling. The beam can be decomposed into two cantilevers with free-end loads. By training neural networks to simulate the cantilever responses to different loads, the original beam problem can be solved as a match-up between two subsystems under compatible interface conditions. Genetic algorithms are successfully used to solve the match-up problem. Simulated results are found to be in good agreement with the analytical or FEM solutions.

  10. Optimizing performance of hybrid FSO/RF networks in realistic dynamic scenarios

    NASA Astrophysics Data System (ADS)

    Llorca, Jaime; Desai, Aniket; Baskaran, Eswaran; Milner, Stuart; Davis, Christopher

    2005-08-01

    Hybrid Free Space Optical (FSO) and Radio Frequency (RF) networks promise highly available wireless broadband connectivity and quality of service (QoS), particularly suitable for emerging network applications involving extremely high data rate transmissions such as high-quality video-on-demand and real-time surveillance. FSO links are prone to atmospheric obscuration (fog, clouds, snow, etc.) and are difficult to align over long distances due to the use of narrow laser beams and the effect of atmospheric turbulence. These problems can be mitigated by using adjunct directional RF links, which provide backup connectivity. In this paper, methodologies for modeling and simulation of hybrid FSO/RF networks are described. Individual link propagation models are derived using scattering theory as well as experimental measurements. MATLAB is used to generate realistic atmospheric obscuration scenarios, including moving cloud layers at different altitudes. These scenarios are then imported into a network simulator (OPNET) to emulate mobile hybrid FSO/RF networks. This framework allows accurate analysis of the effects of node mobility, atmospheric obscuration, and traffic demands on network performance, and precise evaluation of topology reconfiguration algorithms as they react to dynamic changes in the network. Results show how topology reconfiguration algorithms, together with enhancements to TCP/IP protocols that reduce the network response time, enable the network to rapidly detect and act upon link state changes in highly dynamic environments, ensuring optimized network performance and availability.

  11. Axial 3D region of interest reconstruction using weighted cone beam BPF/DBPF algorithm cascaded with adequately oriented orthogonal butterfly filtering

    NASA Astrophysics Data System (ADS)

    Tang, Shaojie; Tang, Xiangyang

    2016-03-01

    Axial cone beam (CB) computed tomography (CT) reconstruction is still the most desirable in clinical applications. As potential candidates with analytic form for the task, the backprojection-filtration (BPF) and the derivative backprojection filtered (DBPF) algorithms, in which Hilbert filtering is the common algorithmic feature, were originally derived for exact helical and axial reconstruction from CB and fan beam projection data, respectively. These two algorithms have been heuristically extended for axial CB reconstruction via the adoption of virtual PI-line segments. However, streak artifacts are induced along the Hilbert filtering direction, since these algorithms are no longer accurate on the virtual PI-line segments. We have proposed to cascade the extended BPF/DBPF algorithm with orthogonal butterfly filtering for image reconstruction (namely axial CB-BPF/DBPF cascaded with orthogonal butterfly filtering), in which the orientation-specific artifacts caused by the post-BP Hilbert transform can be eliminated, at the possible expense of losing the BPF/DBPF's capability of dealing with projection data truncation. Our preliminary results have shown that this is not the case in practice. Hence, in this work, we carry out an algorithmic analysis and experimental study to investigate the performance of the axial CB-BPF/DBPF cascaded with adequately oriented orthogonal butterfly filtering for three-dimensional (3D) reconstruction in a region of interest (ROI).

  12. Parameter selection in limited data cone-beam CT reconstruction using edge-preserving total variation algorithms

    NASA Astrophysics Data System (ADS)

    Lohvithee, Manasavee; Biguri, Ander; Soleimani, Manuchehr

    2017-12-01

    There are a number of powerful total variation (TV) regularization methods that show great promise for enhancing image quality in limited-data cone-beam CT reconstruction. These promising TV methods require careful selection of the image reconstruction parameters, for which there are no well-established criteria. This paper presents a comprehensive evaluation of parameter selection in a number of major TV-based reconstruction algorithms, and an appropriate way of selecting the values for each individual parameter is suggested. Finally, a new adaptive-weighted projection-controlled steepest descent (AwPCSD) algorithm is presented, which implements an edge-preserving function for CBCT reconstruction with limited data. The proposed algorithm shows significant robustness compared to three other existing algorithms: ASD-POCS, AwASD-POCS, and PCSD. The proposed AwPCSD algorithm is able to better preserve the edges of the reconstructed images, with fewer sensitive parameters to tune.

  13. Beam control in the ETA-II linear induction accelerator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yu-Jiuan

    1992-08-21

    Corkscrew beam motion is caused by chromatic aberration and misalignment of a focusing system. We have taken several measures to control the corkscrew motion on the ETA-II induction accelerator. To minimize chromatic aberration, we have developed an energy compensation scheme which reduces the energy sweep and differential phase advance within a beam pulse. To minimize the misalignment errors, we have developed a time-independent steering algorithm which minimizes the observed corkscrew amplitude averaged over the beam pulse. The steering algorithm can be used even if the monitor spacing is much greater than the system's cyclotron wavelength and the corkscrew motion caused by a given misaligned magnet is fully developed, i.e., the relative phase advance is greater than 2π.

  15. Application of Monte Carlo techniques to optimization of high-energy beam transport in a stochastic environment

    NASA Technical Reports Server (NTRS)

    Parrish, R. V.; Dieudonne, J. E.; Filippas, T. A.

    1971-01-01

    An algorithm employing a modified sequential random perturbation, or creeping random search, was applied to the problem of optimizing the parameters of a high-energy beam transport system. The stochastic solution of the mathematical model for first-order magnetic-field expansion allows the inclusion of state-variable constraints, and the inclusion of parameter constraints allowed by the method of algorithm application eliminates the possibility of infeasible solutions. The mathematical model and the algorithm were programmed for a real-time simulation facility; thus, two important features are provided to the beam designer: (1) a strong degree of man-machine communication (even to the extent of bypassing the algorithm and applying analog-matching techniques), and (2) extensive graphics for displaying information concerning both algorithm operation and transport-system behavior. Chromatic aberration was also included in the mathematical model and in the optimization process. The results presented show this method yielding better solutions (in terms of resolution) to the particular problem than those of a standard analog program, as well as demonstrating the flexibility, in terms of elements, constraints, and chromatic aberration, allowed by user interaction with both the algorithm and the stochastic model. Examples of slit usage and a limited comparison of predicted results with actual results obtained with a 600 MeV cyclotron are given.
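    A creeping random search of the kind described above takes only a few lines. This sketch uses an invented quadratic objective and simple box constraints in place of the beam-transport model:

```python
import numpy as np

rng = np.random.default_rng(1)

def creeping_random_search(f, x0, lo, hi, step=0.5, shrink=0.95,
                           trials_per_step=20, n_steps=200):
    """Sequential random perturbation ('creeping') search with box
    constraints: trial points are clipped back into the feasible box,
    and the perturbation size shrinks whenever no trial improves."""
    x = np.clip(x0, lo, hi)
    fx = f(x)
    for _ in range(n_steps):
        improved = False
        for _ in range(trials_per_step):
            cand = np.clip(x + step * rng.standard_normal(x.size), lo, hi)
            fc = f(cand)
            if fc < fx:
                x, fx, improved = cand, fc, True
        if not improved:
            step *= shrink          # creep: tighten the search radius
    return x, fx

# Invented stand-in objective with optimum at (1, -2).
obj = lambda v: (v[0] - 1.0)**2 + 10.0 * (v[1] + 2.0)**2
x_best, f_best = creeping_random_search(obj, np.array([3.0, 3.0]),
                                        lo=-5.0, hi=5.0)
```

    Because infeasible candidates are clipped rather than rejected outright, the search never leaves the parameter box, mirroring how the paper's formulation rules out infeasible solutions by construction.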

  16. Propagation dynamics of super-Gaussian beams in fractional Schrödinger equation: from linear to nonlinear regimes.

    PubMed

    Zhang, Lifu; Li, Chuxin; Zhong, Haizhe; Xu, Changwen; Lei, Dajun; Li, Ying; Fan, Dianyuan

    2016-06-27

    We have investigated the propagation dynamics of super-Gaussian optical beams in the fractional Schrödinger equation and identified the differences between the propagation dynamics of super-Gaussian beams and that of Gaussian beams. We show that the linear propagation dynamics of super-Gaussian beams with order m > 1 undergo an initial compression phase before they split into two sub-beams. The saddle-shaped sub-beams separate from each other, and their interval increases linearly with propagation distance. In the nonlinear regime, the super-Gaussian beams evolve into a single soliton, a breathing soliton, or a soliton pair, depending on the order of the super-Gaussian beams, the nonlinearity, and the Lévy index. In two dimensions, the linear evolution of super-Gaussian beams is similar to the one-dimensional case, but the initial compression of the input super-Gaussian beams and the diffraction of the splitting beams are much stronger, while the nonlinear propagation of the super-Gaussian beams is much less stable than in one dimension. Our results show that the nonlinear effects can be tuned by varying the Lévy index in the fractional Schrödinger equation for a fixed input power.
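    Linear propagation under the fractional Schrödinger equation is convenient to sketch spectrally, since the fractional Laplacian is diagonal in Fourier space. This toy (1D, linear regime only, invented grid) shows the splitting of a super-Gaussian input for Lévy index alpha = 1:

```python
import numpy as np

# Linear propagation of a 1D super-Gaussian beam under the fractional
# Schrodinger equation  i dpsi/dz = 0.5 * (-d^2/dx^2)^(alpha/2) psi,
# solved spectrally: propagation multiplies the spectrum by
# exp(-0.5i * |k|^alpha * z).
N, L = 1024, 80.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
alpha = 1.0                      # Levy index (alpha = 2: ordinary case)
m = 2                            # super-Gaussian order
psi = np.exp(-x**(2 * m))        # super-Gaussian input of unit width

z = 10.0
psi_out = np.fft.ifft(np.fft.fft(psi)
                      * np.exp(-0.5j * np.abs(k)**alpha * z))
```

    For alpha = 1 the phase factor shifts the positive- and negative-k halves of the spectrum in opposite directions, so the peak intensity ends up well away from the axis: the two-sub-beam splitting described in the abstract. The propagation step is unitary, so the beam power is conserved.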

  17. On finding the analytic dependencies of the external field potential on the control function when optimizing the beam dynamics

    NASA Astrophysics Data System (ADS)

    Ovsyannikov, A. D.; Kozynchenko, S. A.; Kozynchenko, V. A.

    2017-12-01

    When developing a particle accelerator for generating high-precision beams, the design of the injection system is important, because it largely determines the output characteristics of the beam. In the present paper we consider injection systems consisting of electrodes with given potentials. The design of such systems requires simulation of the beam dynamics in electrostatic fields. For the external field simulation we use the new approach proposed by A.D. Ovsyannikov, which is based on analytical approximations, or the finite difference method, taking into account the real geometry of the injection system. Software for beam dynamics simulation and optimization in the injection system for non-relativistic beams has been developed. Beam dynamics and electric field simulations of the injection system, using both the analytical approach and the finite difference method, have been carried out, and the results are presented in this paper.

  18. Investigations on KONUS beam dynamics using the pre-stripper drift tube linac at GSI

    NASA Astrophysics Data System (ADS)

    Xiao, C.; Du, X. N.; Groening, L.

    2018-04-01

    Interdigital H-mode (IH) drift tube linacs (DTLs) based on KONUS beam dynamics are very sensitive to the rf-phases and voltages at the gaps between tubes. In order to design these DTLs, a deep understanding of the underlying longitudinal beam dynamics is mandatory. This report presents tracking simulations along an IH-DTL using the PARTRAN and BEAMPATH codes together with MATHCAD and CST. The simulation results illustrate that the beam dynamics design of the pre-stripper IH-DTL at GSI is sensitive to slight deviations of the rf-phases and gap voltages, with an impact on the mean beam energy at the DTL exit. Keeping the existing geometrical design, the rf-voltages and rf-phases of the DTL were re-adjusted. In simulations, this re-optimized design provides more than 90% transmission of an intense 15 emA beam while keeping the reduction of beam brilliance below 25%.

  19. On the dynamics of Airy beams in nonlinear media with nonlinear losses.

    PubMed

    Ruiz-Jiménez, Carlos; Nóbrega, K Z; Porras, Miguel A

    2015-04-06

    We investigate the nonlinear dynamics of Airy beams in a regime where nonlinear losses due to multi-photon absorption are significant. We identify the nonlinear Airy beam (NAB) that preserves the amplitude of the inward Hankel component as an attractor of the dynamics. This attractor also governs the dynamics of finite-power (apodized) Airy beams, irrespective of the location of the entrance plane in the medium with respect to the Airy waist plane. A soft (linear) input long before the waist, however, strongly speeds up NAB formation and its persistence as a quasi-stationary beam in comparison to an abrupt input at the Airy waist plane, and promotes the formation of a new type of highly dissipative, fully nonlinear Airy beam not described so far.

  20. Third-dimension information retrieval from a single convergent-beam transmission electron diffraction pattern using an artificial neural network

    NASA Astrophysics Data System (ADS)

    Pennington, Robert S.; Van den Broek, Wouter; Koch, Christoph T.

    2014-05-01

    We have reconstructed third-dimension specimen information from convergent-beam electron diffraction (CBED) patterns simulated using the stacked-Bloch-wave method. By reformulating the stacked-Bloch-wave formalism as an artificial neural network and optimizing with resilient backpropagation, we demonstrate specimen orientation reconstructions with depth resolutions down to 5 nm. To show our algorithm's ability to analyze realistic data, we also demonstrate reconstructions from noisy data and from a limited number of CBED disks. The applicability of this reconstruction algorithm to other specimen parameters is discussed.

  1. Simultaneous optimization of the cavity heat load and trip rates in linacs using a genetic algorithm

    DOE PAGES

    Terzić, Balša; Hofler, Alicia S.; Reeves, Cody J.; ...

    2014-10-15

    In this paper, a genetic algorithm-based optimization is used to simultaneously minimize two competing objectives guiding the operation of the Jefferson Lab's Continuous Electron Beam Accelerator Facility linacs: cavity heat load and radio frequency cavity trip rates. The results represent a significant improvement to the standard linac energy management tool and thereby could lead to a more efficient Continuous Electron Beam Accelerator Facility configuration. This study also serves as a proof of principle of how a genetic algorithm can be used for optimizing other linac-based machines.
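    A genetic algorithm for two competing objectives of this kind can be sketched with a weighted-sum scalarization. The cavity model, gradient limits, and weights below are all invented stand-ins for the real heat-load and trip-rate models:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy two-objective cavity problem (all numbers invented): choose
# gradients for 8 cavities delivering a fixed total energy, trading off
# heat load (~ g^2) against trip rate (rising steeply near each
# cavity's gradient limit).
N_CAV, E_TOTAL = 8, 40.0
g_max = np.linspace(6.0, 9.0, N_CAV)       # per-cavity gradient limits

def heat(g):
    return float(np.sum(g**2))

def trips(g):
    return float(np.sum(np.exp(2.0 * (g - g_max))))

def fitness(g):                            # weighted-sum scalarization
    return heat(g) + 50.0 * trips(g)

def repair(g):                             # enforce the energy constraint
    g = np.clip(g, 0.1, None)
    return g * (E_TOTAL / g.sum())

pop = [repair(rng.uniform(1.0, 8.0, N_CAV)) for _ in range(40)]
f0 = min(fitness(g) for g in pop)

for _ in range(60):
    pop.sort(key=fitness)
    elite = pop[:10]                       # truncation selection
    children = [elite[0].copy()]           # elitism keeps the best
    while len(children) < len(pop):
        a = elite[rng.integers(10)]
        b = elite[rng.integers(10)]
        w = rng.uniform(size=N_CAV)        # blend crossover
        child = w * a + (1.0 - w) * b
        child += 0.1 * rng.standard_normal(N_CAV)   # Gaussian mutation
        children.append(repair(child))
    pop = children

g_best = min(pop, key=fitness)
```

    A production tool would instead evolve a Pareto front over the two objectives (as multi-objective GAs do) rather than fixing the trade-off weight in advance; the weighted sum keeps the sketch short.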

  2. Reduced electron exposure for energy-dispersive spectroscopy using dynamic sampling

    DOE PAGES

    Zhang, Yan; Godaliyadda, G. M. Dilshan; Ferrier, Nicola; ...

    2017-10-23

    Analytical electron microscopy and spectroscopy of biological specimens, polymers, and other beam-sensitive materials has been a challenging area due to irradiation damage. There is a pressing need to develop novel imaging and spectroscopic imaging methods that minimize such sample damage and reduce the data acquisition time; the latter is useful for high-throughput analysis of materials structure and chemistry. In this work, we present a novel machine-learning-based method for dynamic sparse sampling of EDS data using a scanning electron microscope. Our method, based on a supervised-learning approach to dynamic sampling and neural-network classification of EDS data, allows a dramatic reduction in the total sampling of up to 90%, while maintaining the fidelity of the reconstructed elemental maps and spectroscopic data. We believe this approach will enable imaging and elemental mapping of materials that would otherwise be inaccessible to these analysis techniques.
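    The dynamic-sampling idea, measuring next wherever the expected information gain is highest, can be sketched in 1D. The greedy "gradient times distance" score below is a crude stand-in for the supervised SLADS-style estimator used in the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

# Ground-truth "specimen" map (1D for brevity): smooth with a sharp edge.
x = np.linspace(0.0, 1.0, 400)
truth = np.sin(2 * np.pi * x) + (x > 0.6).astype(float)

measured = np.zeros(x.size, dtype=bool)
measured[rng.choice(x.size, 10, replace=False)] = True   # seed scan

def reconstruct(mask):
    """Estimate the full map from the measured pixels."""
    return np.interp(x, x[mask], truth[mask])

def next_pixel(mask):
    """Greedy proxy for dynamic sampling: measure where the current
    estimate changes fastest and the nearest measurement is far away."""
    est = reconstruct(mask)
    grad = np.abs(np.gradient(est))
    dist = np.array([np.min(np.abs(x[i] - x[mask]))
                     for i in range(x.size)])
    score = grad * dist
    score[mask] = -1.0            # never re-measure a pixel
    return int(np.argmax(score))

err0 = float(np.mean((reconstruct(measured) - truth)**2))
budget = int(0.1 * x.size)        # sample only 10% of the pixels
while measured.sum() < budget:
    measured[next_pixel(measured)] = True
err = float(np.mean((reconstruct(measured) - truth)**2))
```

    The adaptive scan spends most of its budget near the edge, where the seed-scan reconstruction is worst, which is exactly the behavior that lets dynamic sampling cut dose without losing map fidelity.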

  3. Beam Diagnostics of the Compton Scattering Chamber in Jefferson Lab's Hall C

    NASA Astrophysics Data System (ADS)

    Faulkner, Adam; I&C Group Collaboration

    2013-10-01

    Upcoming experimental runs in Hall C will utilize Compton scattering, involving the construction and installation of a rectangular beam enclosure. Conventional cylindrical stripline-style Beam Position Monitors (BPMs) are not appropriate due to their form factor; therefore, to facilitate position measurement, button-style BPMs are being considered for their ease of placement within the new beam enclosure. Button BPM experience is limited at JLab, so preliminary measurements are needed to characterize the field response and guide the development of appropriate algorithms for the analog-to-digital receiver systems. E-field mapping is performed using a Goubau line (G-Line), which employs a surface wave to mimic the electron beam, helping to avoid problems associated with vacuum systems. Potential algorithms include simplistic 1/r modeling (E-field mapping) and look-up tables, as well as a potential third-order power series fit. In addition, the use of neural networks, specifically the multi-layer perceptron, will be examined. The models, sensor field maps, and utility of the neural network will be presented. Next steps include modification of the control algorithm, as well as an in-situ test of the four button electrodes inside a mock beam enclosure. The analysis of the field response using Matlab suggests the button BPMs are accurate to within 10 mm and may be successful for beam diagnostics in Hall C. More testing is necessary to ascertain the limitations of the new electrodes. Supported by the National Science Foundation, Old Dominion University, the Department of Energy, and Jefferson Lab.

  4. High-precision positioning system of four-quadrant detector based on the database query

    NASA Astrophysics Data System (ADS)

    Zhang, Xin; Deng, Xiao-guo; Su, Xiu-qin; Zheng, Xiao-qiang

    2015-02-01

    The fine pointing mechanism of the Acquisition, Pointing and Tracking (APT) system in free-space laser communication usually uses a four-quadrant detector (QD) to point and track the laser beam accurately. The positioning precision of the QD is one of the key factors in the pointing accuracy of the APT system. A positioning system based on an FPGA and a DSP is designed in this paper, which performs the A/D sampling, the positioning algorithm, and the control of the fast swing mirror. Starting from the working principle of the QD, we analyze the positioning error of the spot center calculated by the universal algorithm when the spot energy obeys a Gaussian distribution. A database is built by calculation and simulation in MATLAB, in which the spot center calculated by the universal algorithm is mapped to the true center of the Gaussian beam; the database is stored in two E2PROM chips serving as external memory for the DSP. The true center of the Gaussian beam is then looked up in the database by the DSP on the basis of the spot center calculated by the universal algorithm. The experimental results show that the positioning accuracy of this high-precision positioning system is much better than that of the universal algorithm alone.
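    The database-query correction can be sketched as follows: for a Gaussian spot, the "universal" sum-and-difference QD estimate is a saturating function of the true spot center, so a precomputed table can invert it. The spot size and table range here are invented:

```python
import numpy as np
from math import erf

sigma = 1.0   # assumed Gaussian spot radius on the detector

def qd_signal(x0):
    """'Universal' QD estimate along x: the normalized sum/difference of
    quadrant energies, ((A+D)-(B+C))/total.  For a Gaussian spot this
    reduces analytically to erf(x0 / (sqrt(2)*sigma))."""
    return erf(x0 / (np.sqrt(2.0) * sigma))

# Build the database: true spot center -> universal-algorithm output.
x_table = np.linspace(-2.5, 2.5, 501)
s_table = np.array([qd_signal(v) for v in x_table])

def correct(s):
    """Query the database: map a measured signal back to a spot center.
    erf is monotonic, so linear interpolation inverts it cleanly."""
    return float(np.interp(s, s_table, x_table))

x_true = 1.2
s_meas = qd_signal(x_true)      # raw estimate: biased toward the center
x_corr = correct(s_meas)        # table lookup removes the saturation
```

    The raw estimate saturates toward ±1 as the spot moves off-axis, which is why the lookup correction matters most far from the detector center.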

  5. Time-resolved coherent X-ray diffraction imaging of surface acoustic waves

    PubMed Central

    Nicolas, Jan-David; Reusch, Tobias; Osterhoff, Markus; Sprung, Michael; Schülein, Florian J. R.; Krenner, Hubert J.; Wixforth, Achim; Salditt, Tim

    2014-01-01

    Time-resolved coherent X-ray diffraction experiments of standing surface acoustic waves, illuminated under grazing incidence by a nanofocused synchrotron beam, are reported. The data have been recorded in stroboscopic mode at controlled and varied phase between the acoustic frequency generator and the synchrotron bunch train. At each time delay (phase angle), the coherent far-field diffraction pattern in the small-angle regime is inverted by an iterative algorithm to yield the local instantaneous surface height profile along the optical axis. The results show that periodic nanoscale dynamics can be imaged at high temporal resolution in the range of 50 ps (pulse length). PMID:25294979
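    The stroboscopic phase-stepping measurement (not the iterative diffraction inversion itself) can be sketched as follows: sampling a standing wave at several controlled generator phases and fitting a cos/sin basis per position recovers the local amplitude. The wave pattern here is synthetic:

```python
import numpy as np

# Stroboscopic sampling of a standing surface acoustic wave:
# h(x, t) = A(x) * cos(omega*t + phi(x)).  Recording the profile at
# several controlled generator phase delays and fitting a cos/sin
# basis per position recovers the local amplitude and phase.
x = np.linspace(0.0, 1.0, 100)
A_true = 0.5 * np.sin(4 * np.pi * x)**2          # standing-wave envelope
phi_true = np.where(np.sin(4 * np.pi * x) >= 0, 0.0, np.pi)

phases = np.linspace(0, 2 * np.pi, 8, endpoint=False)  # delay settings
frames = A_true[None, :] * np.cos(phases[:, None] + phi_true[None, :])

# Each frame obeys h_j = a*cos(phase_j) - b*sin(phase_j) per position,
# with a = A*cos(phi), b = A*sin(phi): a linear least-squares problem.
M = np.column_stack([np.cos(phases), -np.sin(phases)])
coef, *_ = np.linalg.lstsq(M, frames, rcond=None)
A_rec = np.hypot(coef[0], coef[1])               # recovered amplitude
```

    Eight phase settings heavily overdetermine the two unknowns per position, which is what makes the stroboscopic scheme robust; the paper's extra step is recovering each instantaneous profile from a coherent diffraction pattern rather than reading it off directly.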

  6. Time-resolved coherent X-ray diffraction imaging of surface acoustic waves.

    PubMed

    Nicolas, Jan-David; Reusch, Tobias; Osterhoff, Markus; Sprung, Michael; Schülein, Florian J R; Krenner, Hubert J; Wixforth, Achim; Salditt, Tim

    2014-10-01

    Time-resolved coherent X-ray diffraction experiments of standing surface acoustic waves, illuminated under grazing incidence by a nanofocused synchrotron beam, are reported. The data have been recorded in stroboscopic mode at controlled and varied phase between the acoustic frequency generator and the synchrotron bunch train. At each time delay (phase angle), the coherent far-field diffraction pattern in the small-angle regime is inverted by an iterative algorithm to yield the local instantaneous surface height profile along the optical axis. The results show that periodic nanoscale dynamics can be imaged at high temporal resolution in the range of 50 ps (pulse length).

  7. A wave model of refraction of laser beams with a discrete change in intensity in their cross section and their application for diagnostics of extended nonstationary phase objects

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Raskovskaya, I L

    2015-08-31

    A beam model with a discrete change in the cross-sectional intensity is proposed to describe the refraction of laser beams formed on the basis of diffractive optical elements. In calculating the wave field of beams of this class under conditions of strong refraction, it is proposed to evaluate the integral in the vicinity of its stationary points, in contrast to the traditional geometric-optics asymptotics, which assumes extending the limits of integration to infinity to obtain an analytical solution. This approach allows the development of a fast algorithm for correct calculation of the wave field of the laser beams that are employed in probing and diagnostics of extended optically inhomogeneous media. Examples of the algorithm's application to the diagnostics of extended nonstationary objects in liquid are presented.

  8. Observation and Control of Hamiltonian Chaos in Wave-particle Interaction

    NASA Astrophysics Data System (ADS)

    Doveil, F.; Elskens, Y.; Ruzzon, A.

    2010-11-01

    Wave-particle interactions are central in plasma physics. The paradigm beam-plasma system can be advantageously replaced by a traveling wave tube (TWT) to allow their study in a much less noisy environment. This led to detailed analysis of the self-consistent interaction between unstable waves and an either cold or warm electron beam. More recently, a cold test beam has been used to observe its interaction with externally excited wave(s). This allowed observing the main features of Hamiltonian chaos and testing a new method to efficiently channel chaotic transport in phase space. To simulate accurately and efficiently the particle dynamics in the TWT and other 1D particle-wave systems, a new symplectic, symmetric, second-order numerical algorithm was developed, using particle position as the independent variable with a fixed spatial step. This contribution reviews: a presentation of the TWT and its connection to plasma physics, the resonant interaction of a charged particle with electrostatic waves, the observation of particle trapping and the transition to chaos, tests of chaos control, and a description of the simulation algorithm. The velocity distribution function of the electron beam is recorded with a trochoidal energy analyzer at the output of the TWT. An arbitrary waveform generator is used to launch a prescribed spectrum of waves along the 4 m long helix of the TWT. The nonlinear synchronization of particles by a single wave, responsible for Landau damping, is observed. We explore the resonant velocity domain associated with a single wave as well as the transition to large-scale chaos when the resonant domains of two waves and their secondary resonances overlap. This transition exhibits a devil's staircase behavior when the excitation level is increased, in agreement with numerical simulation. A new strategy for control of chaos, building barriers to transport in phase space, is successfully tested along with its robustness. The underlying concepts extend far beyond the field of electron devices and plasma physics.
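
    As an illustration of the symplectic-integration idea (a minimal sketch, not the authors' algorithm, which advances particles in fixed steps of position rather than time), a second-order leapfrog scheme for a particle in a single electrostatic wave reproduces the bounded, trapped motion underlying nonlinear synchronization:

```python
import numpy as np

# Second-order symplectic leapfrog for a particle in one electrostatic
# wave E(x, t) = E0 * sin(k*x - w*t).  NOTE: generic sketch only; the
# paper's scheme uses position as the independent variable instead.
def leapfrog(x, v, t, dt, e0=0.1, k=1.0, w=1.0, qm=1.0):
    v_half = v + 0.5 * dt * qm * e0 * np.sin(k * x - w * t)
    x_new = x + dt * v_half
    v_new = v_half + 0.5 * dt * qm * e0 * np.sin(k * x_new - w * (t + dt))
    return x_new, v_new

# A particle launched at the wave phase velocity w/k = 1 is trapped in
# the wave potential well; its velocity stays within the trapping
# half-width 2*sqrt(qm*e0/k) ~ 0.63 of the phase velocity.
x, v, t, dt = 0.1, 1.0, 0.0, 0.01
v_min = v_max = v
for _ in range(20000):
    x, v = leapfrog(x, v, t, dt)
    t += dt
    v_min, v_max = min(v_min, v), max(v_max, v)
```

Because the scheme is symplectic, the trapping oscillation neither grows nor damps numerically over long runs, which is exactly why such integrators are preferred for Hamiltonian wave-particle problems.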

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nunes, R. P.; Rizzato, F. B.

    This work analyzes the transversal dynamics of an inhomogeneous and mismatched charged particle beam. The beam is azimuthally symmetric, initially cold, and evolves in a linear channel permeated by an external constant magnetic field. Based on a Lagrangian approach, a low-dimensional model for the description of the beam dynamics has been obtained. The small set of nonlinear dynamical equations provides results that are in reasonable agreement with those observed in full self-consistent N-particle beam numerical simulations.

  10. Optimizing highly noncoplanar VMAT trajectories: the NoVo method.

    PubMed

    Langhans, Marco; Unkelbach, Jan; Bortfeld, Thomas; Craft, David

    2018-01-16

    We introduce a new method called NoVo (Noncoplanar VMAT Optimization) to produce volumetric modulated arc therapy (VMAT) treatment plans with noncoplanar trajectories. While the use of noncoplanar beam arrangements for intensity modulated radiation therapy (IMRT), and in particular high-fraction stereotactic radiosurgery (SRS), is common, noncoplanar beam trajectories for VMAT are less common, as the availability of treatment machines handling these is limited. For both IMRT and VMAT, the beam angle selection problem is highly nonconvex in nature, which is why automated beam angle selection procedures have not entered mainstream clinical usage. NoVo determines a noncoplanar VMAT solution (i.e. the simultaneous trajectories of the gantry and the couch) by first computing a 4π solution (beams from every possible direction, suitably discretized) and then eliminating beams by examining fluence contributions. In addition, all beam angles are scored via geometrical considerations alone, assessing the usefulness of the whole beam space in a very short time. A custom path-finding algorithm is applied to find an optimized, continuous trajectory through the most promising beam angles using the calculated score of the beam space. Finally, a VMAT plan is optimized along this trajectory. For three clinical cases, a lung, brain, and liver case, we compare NoVo to the ideal 4π solution, nine-beam noncoplanar IMRT, coplanar VMAT, and a recently published noncoplanar VMAT algorithm. NoVo comes closest to the 4π solution for the lung case (second closest for the brain and liver cases), while also improving the solution time by using geometrical considerations followed by a time-efficient iterative reduction of the 4π solution. Compared to a recently published noncoplanar VMAT algorithm, NoVo reduces the computation time by a factor of 2-3 (depending on the case).
Compared to coplanar VMAT, NoVo reduces the objective function value by 24%, 49% and 6% for the lung, brain and liver cases, respectively.
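
    The idea of threading a continuous trajectory through scored beam directions can be illustrated with a toy dynamic program (not the published NoVo path-finding algorithm): pick one couch index per gantry step, limit how far the couch may move between steps, and maximize the summed score.

```python
# Toy sketch: score[g][c] is the usefulness of couch index c at gantry
# step g; find the highest-scoring continuous trajectory with couch
# motion limited to max_step indices per gantry step.
def best_trajectory(score, max_step=1):
    n_g, n_c = len(score), len(score[0])
    best = [row[:] for row in score]           # best[g][c]: best total ending at (g, c)
    back = [[0] * n_c for _ in range(n_g)]     # back-pointers for path recovery
    for g in range(1, n_g):
        for c in range(n_c):
            lo, hi = max(0, c - max_step), min(n_c, c + max_step + 1)
            prev = max(range(lo, hi), key=lambda p: best[g - 1][p])
            back[g][c] = prev
            best[g][c] = score[g][c] + best[g - 1][prev]
    c = max(range(n_c), key=lambda j: best[-1][j])
    path = [c]
    for g in range(n_g - 1, 0, -1):
        c = back[g][c]
        path.append(c)
    return path[::-1]
```

With a diagonal score pattern such as `[[0, 0, 5], [0, 5, 0], [5, 0, 0]]`, the program returns the couch sequence that sweeps across the table while collecting all three high scores.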

  11. Optimizing highly noncoplanar VMAT trajectories: the NoVo method

    NASA Astrophysics Data System (ADS)

    Langhans, Marco; Unkelbach, Jan; Bortfeld, Thomas; Craft, David

    2018-01-01

    We introduce a new method called NoVo (Noncoplanar VMAT Optimization) to produce volumetric modulated arc therapy (VMAT) treatment plans with noncoplanar trajectories. While the use of noncoplanar beam arrangements for intensity modulated radiation therapy (IMRT), and in particular high-fraction stereotactic radiosurgery (SRS), is common, noncoplanar beam trajectories for VMAT are less common, as the availability of treatment machines handling these is limited. For both IMRT and VMAT, the beam angle selection problem is highly nonconvex in nature, which is why automated beam angle selection procedures have not entered mainstream clinical usage. NoVo determines a noncoplanar VMAT solution (i.e. the simultaneous trajectories of the gantry and the couch) by first computing a 4π solution (beams from every possible direction, suitably discretized) and then eliminating beams by examining fluence contributions. In addition, all beam angles are scored via geometrical considerations alone, assessing the usefulness of the whole beam space in a very short time. A custom path-finding algorithm is applied to find an optimized, continuous trajectory through the most promising beam angles using the calculated score of the beam space. Finally, a VMAT plan is optimized along this trajectory. For three clinical cases, a lung, brain, and liver case, we compare NoVo to the ideal 4π solution, nine-beam noncoplanar IMRT, coplanar VMAT, and a recently published noncoplanar VMAT algorithm. NoVo comes closest to the 4π solution for the lung case (second closest for the brain and liver cases), while also improving the solution time by using geometrical considerations followed by a time-efficient iterative reduction of the 4π solution. Compared to a recently published noncoplanar VMAT algorithm, NoVo reduces the computation time by a factor of 2-3 (depending on the case).
Compared to coplanar VMAT, NoVo reduces the objective function value by 24%, 49% and 6% for the lung, brain and liver cases, respectively.

  12. Intensity-modulated radiotherapy (IMRT) for carcinoma of the maxillary sinus: A comparison of IMRT planning systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahmed, Raef S.; Ove, Roger; Duan, Jun

    2006-10-01

    The treatment of maxillary sinus carcinoma with forward planning can be technically difficult when the neck also requires radiotherapy. This difficulty arises because of the need to spare the contralateral face while treating the bilateral neck. There is considerable potential for error in clinical setup and treatment delivery. We evaluated intensity-modulated radiotherapy (IMRT) as an improvement on forward planning, and compared several inverse planning IMRT platforms. A composite dose-volume histogram (DVH) was generated from a complex forward planned case. We compared the results with those generated by sliding window fixed field dynamic multileaf collimator (MLC) IMRT, using sets of coplanar beams. All setups included an anterior posterior (AP) beam, and 3-, 5-, 7-, and 9-field configurations were evaluated. The dose prescription and objective function priorities were invariant. We also evaluated 2 commercial tomotherapy IMRT delivery platforms. DVH results from all of the IMRT approaches compared favorably with the forward plan. Results for the various inverse planning approaches varied considerably across platforms, despite an attempt to prescribe the therapy similarly. The improvement seen with the addition of beams in the fixed beam sliding window case was modest. IMRT is an effective means of delivering radiotherapy reliably in the complex setting of maxillary sinus carcinoma with neck irradiation. Differences in objective function definition and optimization algorithms can lead to unexpected differences in the final dose distribution, and our evaluation suggests that these factors are more significant than the beam arrangement or number of beams.

  13. The influence of and the identification of nonlinearity in flexible structures

    NASA Technical Reports Server (NTRS)

    Zavodney, Lawrence D.

    1988-01-01

    Several models were built at NASA Langley and used to demonstrate the following nonlinear behavior: internal resonance in a free response, principal parametric resonance and subcritical instability in a cantilever beam-lumped mass structure, combination resonance in a parametrically excited flexible beam, autoparametric interaction in a two-degree-of-freedom system, instability of the linear solution, saturation of the excited mode, subharmonic bifurcation, and chaotic responses. A video tape documenting these phenomena was made. An attempt to identify a simple structure consisting of two light-weight beams and two lumped masses using the Eigensystem Realization Algorithm showed the inherent difficulty of using a linear-based theory to identify a particular nonlinearity. Preliminary results show the technique requires novel interpretation, and hence may not be useful for structural modes that are coupled by a quadratic nonlinearity. A literature survey was also completed on recent work in parametrically excited nonlinear systems. In summary, nonlinear systems may possess unique behaviors that require nonlinear identification techniques based on an understanding of how nonlinearity affects the dynamic response of structures. In this way, the unique behaviors of nonlinear systems may be properly identified. Moreover, more accurate quantifiable estimates can be made once the qualitative model has been determined.

  14. Measurement of vibration using phase only correlation technique

    NASA Astrophysics Data System (ADS)

    Balachandar, S.; Vipin, K.

    2017-08-01

    A novel method for the measurement of vibration is proposed and demonstrated. The proposed experiment is based on laser triangulation and consists of a line laser, the object under test, and a high-speed camera remotely controlled by software. The experiment involves launching a line-laser probe beam perpendicular to the axis of the vibrating object. The reflected probe beam is recorded by the high-speed camera. The dynamic position of the laser line in the camera plane is governed by the magnitude and frequency of the vibrating test object. Using the phase-only correlation technique, the maximum distance travelled by the probe beam in the CCD plane is measured in terms of pixels using MATLAB. The actual displacement of the object in mm is obtained by calibration. Using the displacement data over time, other vibration-related quantities such as acceleration, velocity, and frequency are evaluated. Preliminary results of the proposed method are reported for accelerations from 1 g to 3 g and frequencies from 6 Hz to 26 Hz. The results closely match the theoretical values. The advantages of the proposed method are that it is non-destructive and that, using the phase correlation algorithm, subpixel displacements in the CCD plane can be measured with high accuracy.
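
    The core of the phase-only correlation step can be sketched as follows (integer-pixel version; the subpixel refinement by peak fitting is omitted, and the test images are synthetic): the normalized cross-power spectrum of two shifted images is a pure phase ramp, so its inverse transform is a sharp peak at the displacement.

```python
import numpy as np

def poc_shift(f, g):
    """Estimate the (row, col) shift of image f relative to g by
    phase-only correlation; integer-pixel resolution."""
    F, G = np.fft.fft2(f), np.fft.fft2(g)
    r = F * np.conj(G)
    r /= np.abs(r) + 1e-12                      # keep the phase only
    corr = np.real(np.fft.ifft2(r))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # interpret peaks beyond the midpoint as negative shifts
    return tuple(int(p) - s if p > s // 2 else int(p) for p, s in zip(peak, corr.shape))

f = np.random.default_rng(0).random((32, 32))
g = np.roll(f, (3, 5), axis=(0, 1))             # shift by 3 rows, 5 columns
shift = poc_shift(g, f)                         # recovers the applied shift
```

Subpixel accuracy, as used in the paper, is typically obtained by fitting an analytic peak model to the samples around the correlation maximum.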

  15. A comparison of hydrographically and optically derived mixed layer depths

    USGS Publications Warehouse

    Zawada, D.G.; Zaneveld, J.R.V.; Boss, E.; Gardner, W.D.; Richardson, M.J.; Mishonov, A.V.

    2005-01-01

    Efforts to understand and model the dynamics of the upper ocean would be significantly advanced given the ability to rapidly determine mixed layer depths (MLDs) over large regions. Remote sensing technologies are an ideal choice for achieving this goal. This study addresses the feasibility of estimating MLDs from optical properties. These properties are strongly influenced by suspended particle concentrations, which generally reach a maximum at pycnoclines. The premise therefore is to use a gradient in beam attenuation at 660 nm (c660) as a proxy for the depth of a particle-scattering layer. Using a global data set collected during World Ocean Circulation Experiment cruises from 1988-1997, six algorithms were employed to compute MLDs from either density or temperature profiles. Given the absence of published optically based MLD algorithms, two new methods were developed that use c660 profiles to estimate the MLD. Intercomparison of the six hydrographically based algorithms revealed some significant disparities among the resulting MLD values. Comparisons between the hydrographical and optical approaches indicated a first-order agreement between the MLDs based on the depths of gradient maxima for density and c660. When comparing various hydrographically based algorithms, other investigators reported that inherent fluctuations of the mixed layer depth limit the accuracy of its determination to 20 m. Using this benchmark, we found an approximately 70% agreement between the best hydrographical-optical algorithm pairings. Copyright 2005 by the American Geophysical Union.
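
    The gradient-maximum criterion shared by the hydrographic and optical approaches can be sketched in a few lines; the sigmoid density profile below is a synthetic stand-in (an assumption for illustration), not WOCE data, and the same function applies unchanged to a c660 profile.

```python
import numpy as np

def mld_gradient_max(depth, profile):
    """Mixed layer depth as the depth where the vertical gradient of a
    profile (density, temperature, or beam attenuation c660) is largest
    in magnitude."""
    grad = np.abs(np.gradient(profile, depth))
    return depth[np.argmax(grad)]

# Synthetic density profile: well mixed to ~50 m, then a pycnocline.
z = np.linspace(0.0, 200.0, 401)                              # depth, m
sigma = 1025.0 + 0.5 / (1.0 + np.exp(-(z - 50.0) / 5.0))      # kg/m^3
mld = mld_gradient_max(z, sigma)
```

Real profiles are noisier, which is why the paper compares several threshold- and gradient-based variants rather than relying on a single raw gradient maximum.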

  16. Active control of flexible structures using a fuzzy logic algorithm

    NASA Astrophysics Data System (ADS)

    Cohen, Kelly; Weller, Tanchum; Ben-Asher, Joseph Z.

    2002-08-01

    This study deals with the development and application of an active control law for the vibration suppression of beam-like flexible structures experiencing transient disturbances. Collocated pairs of sensors/actuators provide active control of the structure. A design methodology for the closed-loop control algorithm based on fuzzy logic is proposed. First, the behavior of the open-loop system is observed. Then, the number and locations of collocated actuator/sensor pairs are selected. The proposed control law, which is based on the principles of passivity, commands the actuator to emulate the behavior of a dynamic vibration absorber. The absorber is tuned to a targeted frequency, whereas the damping coefficient of the dashpot is varied in a closed loop using a fuzzy logic based algorithm. This approach not only ensures inherent stability associated with passive absorbers, but also circumvents the phenomenon of modal spillover. The developed controller is applied to the AFWAL/FIB 10 bar truss. Simulated results using MATLAB© show that the closed-loop system exhibits fairly quick settling times and desirable performance, as well as robustness characteristics. To demonstrate the robustness of the control system to changes in the temporal dynamics of the flexible structure, the transient response to a considerably perturbed plant is simulated. The modal frequencies of the 10 bar truss were raised as well as lowered substantially, thereby significantly perturbing the natural frequencies of vibration. For these cases, too, the developed control law provides adequate settling times and rates of vibrational energy dissipation.
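
    A minimal Sugeno-style fuzzy map from measured vibration level to dashpot damping illustrates the closed-loop idea; the membership functions and output levels below are illustrative assumptions, not the authors' rule base.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b,
    falling to zero at c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_damping(level):
    """Map a vibration level normalized to [0, 1] onto a dashpot
    coefficient via weighted-average (Sugeno) defuzzification."""
    rules = [                                   # (membership, output damping)
        (tri(level, -0.5, 0.0, 0.5), 0.02),     # "low"  -> light damping
        (tri(level,  0.0, 0.5, 1.0), 0.10),     # "med"  -> moderate damping
        (tri(level,  0.5, 1.0, 1.5), 0.30),     # "high" -> heavy damping
    ]
    w = sum(mu for mu, _ in rules)
    return sum(mu * out for mu, out in rules) / w
```

Varying only the dashpot coefficient while keeping the absorber tuned preserves the passivity argument made in the abstract: the controller interpolates between stable passive absorbers rather than injecting arbitrary forces.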

  17. Beam Dynamics for ARIA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ekdahl, Carl August Jr.

    2014-10-14

    Beam dynamics issues are assessed for a new linear induction electron accelerator being designed for flash radiography of large explosively driven hydrodynamic experiments. Special attention is paid to equilibrium beam transport, possible emittance growth, and beam stability. It is concluded that a radiographic-quality beam can be produced if engineering standards and construction details are equivalent to those of the present radiography accelerators at Los Alamos.

  18. Two-Photon Excitation STED Microscopy with Time-Gated Detection

    PubMed Central

    Coto Hernández, Iván; Castello, Marco; Lanzanò, Luca; d’Amora, Marta; Bianchini, Paolo; Diaspro, Alberto; Vicidomini, Giuseppe

    2016-01-01

    We report on a novel two-photon excitation stimulated emission depletion (2PE-STED) microscope based on time-gated detection. The time-gated detection allows for the effective silencing of the fluorophores using moderate stimulated emission beam intensity. This opens the possibility of implementing an efficient 2PE-STED microscope with a stimulated emission beam running in continuous-wave mode. The continuous-wave stimulated emission beam tempers the laser architecture's complexity and cost, but the time-gated detection degrades the signal-to-noise ratio (SNR) and signal-to-background ratio (SBR) of the image. We recover the SNR and the SBR through a multi-image deconvolution algorithm. Indeed, the algorithm simultaneously reassigns early photons (normally discarded by the time-gated detection) to their original positions and removes the background induced by the stimulated emission beam. We exemplify the benefits of this implementation by imaging sub-cellular structures. Finally, we discuss the extension of this algorithm to future all-pulsed 2PE-STED implementations based on time-gated detection and a nanosecond laser source. PMID:26757892

  19. Poster — Thur Eve — 76: Dosimetric Comparison of Pinnacle and iPlan Algorithms with an Anthropomorphic Lung Phantom

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lopez, P.; Tambasco, M.; LaFontaine, R.

    2014-08-15

    Our goal is to compare the dosimetric accuracy of the Pinnacle-3 9.2 Collapsed Cone Convolution Superposition (CCCS) and the iPlan 4.1 Monte Carlo (MC) and Pencil Beam (PB) algorithms in an anthropomorphic lung phantom, using measurement as the gold standard. Ion chamber measurements were taken for 6, 10, and 18 MV beams in a CIRS E2E SBRT Anthropomorphic Lung Phantom, which mimics lung, spine, ribs, and tissue. The plan implemented six beams with a 5×5 cm² field size, delivering a total dose of 48 Gy. Data from the planning systems were computed at the treatment isocenter in the left lung, and at two off-axis points, the spinal cord and the right lung. The measurements were taken using a pinpoint chamber. The best agreement between the algorithm data and our measurements occurs at the treatment isocenter. For the 6, 10, and 18 MV beams, the iPlan 4.1 MC software performs best, with 0.3%, 0.2%, and 4.2% absolute percent difference from measurement, respectively. Differences between our measurements and the algorithm data are much greater for the off-axis points. The best agreement seen for the right lung and spinal cord is 11.4% absolute percent difference, with 6 MV iPlan 4.1 PB and 18 MV iPlan 4.1 MC, respectively. As energy increases, the absolute percent difference from measured data increases, up to 54.8% for the 18 MV CCCS algorithm. This study suggests that iPlan 4.1 MC computes peripheral dose and target dose in the lung more accurately than the iPlan 4.1 PB and Pinnacle CCCS algorithms.

  20. SU-E-T-91: Accuracy of Dose Calculation Algorithms for Patients Undergoing Stereotactic Ablative Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tajaldeen, A; Ramachandran, P; Geso, M

    2015-06-15

    Purpose: The purpose of this study was to investigate and quantify the variation in dose distributions in small-field lung cancer radiotherapy using seven different dose calculation algorithms. Methods: The study was performed in 21 lung cancer patients who underwent Stereotactic Ablative Body Radiotherapy (SABR). Two different methods, (i) the same dose coverage to the target volume (named the same dose method) and (ii) the same monitor units in all algorithms (named the same monitor units method), were used for studying the performance of seven different dose calculation algorithms in the XiO and Eclipse treatment planning systems. The seven dose calculation algorithms include the Superposition, Fast Superposition, Fast Fourier Transform (FFT) Convolution, Clarkson, Anisotropic Analytical Algorithm (AAA), Acuros XB, and Pencil Beam (PB) algorithms. Prior to this, a phantom study was performed to assess the accuracy of these algorithms. The Superposition algorithm was used as the reference algorithm in this study. The treatment plans were compared using different dosimetric parameters including conformity, heterogeneity, and dose fall-off index. In addition, the doses to critical structures such as the lungs, heart, oesophagus, and spinal cord were also studied. Statistical analysis was performed using Prism software. Results: The mean±stdev of the conformity index for the Superposition, Fast Superposition, Clarkson, and FFT Convolution algorithms was 1.29±0.13, 1.31±0.16, 2.2±0.7, and 2.17±0.59 respectively, whereas for AAA, Pencil Beam, and Acuros XB it was 1.4±0.27, 1.66±0.27, and 1.35±0.24 respectively. Conclusion: Our study showed significant variations among the seven different algorithms. The Superposition and Acuros XB algorithms showed similar values for most of the dosimetric parameters. The Clarkson, FFT Convolution, and Pencil Beam algorithms showed large differences compared to the Superposition algorithm. Based on our study, we recommend the Superposition and Acuros XB algorithms as the first choice in lung cancer radiotherapy involving small fields. However, further investigation by Monte Carlo simulation is required to confirm our results.

  1. SU-F-T-428: An Optimization-Based Commissioning Tool for Finite Size Pencil Beam Dose Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Y; Tian, Z; Song, T

    Purpose: Finite size pencil beam (FSPB) algorithms are commonly used to pre-calculate the beamlet dose distribution for IMRT treatment planning. FSPB commissioning, which usually requires fine tuning of the FSPB kernel parameters, is crucial to the dose calculation accuracy and hence the plan quality. Yet due to the large number of beamlets, FSPB commissioning can be very tedious. This abstract reports an optimization-based FSPB commissioning tool we have developed in MATLAB to facilitate the commissioning. Methods: An FSPB dose kernel generally contains two types of parameters: the profile parameters determining the dose kernel shape, and 2D scaling factors accounting for the longitudinal and off-axis corrections. The former were fitted to the penumbra of a reference broad beam's dose profile with the Levenberg-Marquardt algorithm. Since the dose distribution of a broad beam is simply a linear superposition of the dose kernels of the beamlets, calculated with the fitted profile parameters and scaled by the scaling factors, these factors can be determined by solving an optimization problem that minimizes the discrepancies between the calculated dose of broad beams and the reference dose. Results: We have commissioned an FSPB algorithm for three linac photon beams (6 MV, 15 MV and 6 MV FFF). Doses for four field sizes (6×6, 10×10, 15×15 and 20×20 cm²) were calculated and compared with the reference dose exported from the Eclipse TPS. For depth-dose curves, the differences are less than 1% of the maximum dose beyond the depth of maximum dose for most cases. For lateral dose profiles, the differences are less than 2% of the central dose in inner-beam regions. The differences in the output factors are within 1% for all three beams. Conclusion: We have developed an optimization-based commissioning tool for FSPB algorithms, providing sufficient accuracy of beamlet dose calculation for IMRT optimization.
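
    The key observation, that the broad-beam dose is linear in the scaling factors once the kernel shape is fixed, reduces the second fitting stage to linear least squares. Shown here in Python rather than the authors' MATLAB, with a synthetic stand-in for the beamlet dose matrix and reference dose:

```python
import numpy as np

# dose = A @ s: column j of A is the unit-kernel dose of beamlet j at
# every voxel, and s holds the per-beamlet scaling factors to commission.
rng = np.random.default_rng(0)
n_vox, n_beamlets = 200, 20
A = rng.random((n_vox, n_beamlets))          # synthetic unit-kernel doses
s_true = 0.5 + rng.random(n_beamlets)        # "ground truth" scalings
d_ref = A @ s_true                           # synthetic reference broad-beam dose

# Minimize || A s - d_ref ||_2 for the scaling factors.
s_fit, *_ = np.linalg.lstsq(A, d_ref, rcond=None)
```

In the noiseless, full-rank setting the fit recovers the scalings exactly; with measured reference doses the same least-squares problem yields the best-fit factors.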

  2. (Proceedings) 18th Advanced ICFA Beam Dynamics Workshop on Quantum Aspects of Beam Physics (QABP)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Pisin

    2002-10-25

    The 18th Advanced ICFA Beam Dynamics Workshop on ''Quantum Aspects of Beam Physics'' was held from October 15 to 20, 2000, in Capri, Italy. This was the second workshop under the same title. The first one was held in Monterey, California, in January, 1998. Following the footstep of the first meeting, the second one in Capri was again a tremendous success, both scientifically and socially. About 70 colleagues from astrophysics, atomic physics, beam physics, condensed matter physics, particle physics, and general relativity gathered to update and further explore the topics covered in the Monterey workshop. Namely, the following topics weremore » actively discussed: (1) Quantum Fluctuations in Beam Dynamics; (2) Photon-Electron Interaction in Beam handling; (3) Physics of Condensed Beams; (4) Beam Phenomena under Strong Fields; (5) Quantum Methodologies in Beam Physics. In addition, there was a newly introduced subject on Astro-Beam Physics and Laboratory Astrophysics.« less

  3. Crack identification method in beam-like structures using changes in experimentally measured frequencies and Particle Swarm Optimization

    NASA Astrophysics Data System (ADS)

    Khatir, Samir; Dekemele, Kevin; Loccufier, Mia; Khatir, Tawfiq; Abdel Wahab, Magd

    2018-02-01

    In this paper, a technique is presented for the detection and localization of an open crack in beam-like structures using experimentally measured natural frequencies and the Particle Swarm Optimization (PSO) method. The technique considers the variation in local flexibility near the crack. The natural frequencies of a cracked beam are determined experimentally and numerically using the Finite Element Method (FEM). The optimization algorithm is programmed in MATLAB. The algorithm is used to estimate the location and severity of a crack by minimizing the differences between measured and calculated frequencies. The method is verified using experimentally measured data on a cantilever steel beam. The Fourier transform is adopted to improve the frequency resolution. The results demonstrate the good accuracy of the proposed technique.
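
    A minimal particle swarm optimizer of the kind used here can be sketched as follows (a generic PSO, not the paper's MATLAB code; the quadratic objective is a hypothetical stand-in for the measured-vs-FEM frequency mismatch as a function of crack location and severity):

```python
import numpy as np

def pso(objective, lo, hi, dim=2, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize objective over the box [lo, hi]^dim with a basic PSO."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n, dim))                 # particle positions
    v = np.zeros((n, dim))                            # particle velocities
    pbest = x.copy()                                  # personal bests
    pval = np.array([objective(p) for p in x])
    g = pbest[np.argmin(pval)].copy()                 # global best
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        better = f < pval
        pbest[better], pval[better] = x[better], f[better]
        g = pbest[np.argmin(pval)].copy()
    return g, float(pval.min())

# Hypothetical stand-in for the frequency-mismatch objective: the true
# (crack location, severity) is taken to be (0.4, 0.25).
target = np.array([0.4, 0.25])
obj = lambda p: float(np.sum((p - target) ** 2))
best, val = pso(obj, 0.0, 1.0)
```

In the paper the objective would instead call the FEM model of the cracked beam and sum the squared differences between predicted and measured natural frequencies.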

  4. Ultra-high resolution computed tomography imaging

    DOEpatents

    Paulus, Michael J.; Sari-Sarraf, Hamed; Tobin, Jr., Kenneth William; Gleason, Shaun S.; Thomas, Jr., Clarence E.

    2002-01-01

    A method for ultra-high resolution computed tomography imaging, comprising the steps of: focusing a high energy particle beam, for example x-rays or gamma-rays, onto a target object; acquiring a 2-dimensional projection data set representative of the target object; generating a corrected projection data set by applying a deconvolution algorithm, having an experimentally determined transfer function, to the 2-dimensional data set; storing the corrected projection data set; incrementally rotating the target object through an angle of approximately 180°, and after each incremental rotation, repeating the radiating, acquiring, generating and storing steps; and, after the rotating step, applying a cone-beam algorithm, for example a modified tomographic reconstruction algorithm, to the corrected projection data sets to generate a 3-dimensional image. The size of the spot focus of the beam is reduced to not greater than approximately 1 micron, and even to not greater than approximately 0.5 microns.
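
    The projection-correction step can be illustrated in 1D with a Wiener-regularized deconvolution; the Gaussian blur kernel stands in (as an assumption) for the experimentally determined transfer function:

```python
import numpy as np

def wiener_deconvolve(blurred, kernel, k=1e-9):
    """Divide out a known blur in the Fourier domain.  The constant k
    is Wiener-style regularization that keeps near-zero transfer
    function values from amplifying noise."""
    H = np.fft.fft(np.fft.ifftshift(kernel))    # transfer function
    G = np.fft.fft(blurred)
    F = G * np.conj(H) / (np.abs(H) ** 2 + k)
    return np.real(np.fft.ifft(F))

n = 256
x = np.arange(n)
signal = np.exp(-0.5 * ((x - 128) / 8.0) ** 2)     # object feature
kernel = np.exp(-0.5 * ((x - 128) / 3.0) ** 2)     # assumed detector blur
kernel /= kernel.sum()
blurred = np.real(np.fft.ifft(np.fft.fft(signal) * np.fft.fft(np.fft.ifftshift(kernel))))
restored = wiener_deconvolve(blurred, kernel)
```

Applying such a correction to each projection before the cone-beam reconstruction is what allows the system to approach the micron-scale focal spot resolution.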

  5. Large Deformation Dynamic Bending of Composite Beams

    NASA Technical Reports Server (NTRS)

    Derian, E. J.; Hyer, M. W.

    1986-01-01

    Studies were conducted on the large deformation response of composite beams subjected to a dynamic axial load. The beams were loaded with a moderate eccentricity to promote bending. The study was primarily experimental, but some finite element results were obtained. Both the deformation and the failure of the beams were of interest. The static response of the beams was also studied to determine potential differences between static and dynamic failure. Twelve different laminate types were tested. The beams tested were 23 in. by 2 in. and generally 30 plies thick. The beams were loaded dynamically with a gravity-driven impactor traveling at 19.6 ft/sec, and quasi-static tests were conducted on identical beams in a displacement-controlled manner. For laminates of practical interest, the failure modes under static and dynamic loadings were identical. Failure in most of the laminate types occurred in a single event involving 40% to 50% of the plies. However, failure in laminates with 30° or 15° off-axis plies occurred in several events. All laminates exhibited bimodular elastic properties. The compressive flexural modulus in some laminates was measured to be half the tensile flexural modulus. No simple relationship could be found among the measured ultimate failure strains of the different laminate types. Using empirically determined flexural properties, a finite element analysis was reasonably accurate in predicting the static and dynamic deformation response.

  6. The introduction of capillary structures in 4D simulated vascular tree for ART 3.5D algorithm further validation

    NASA Astrophysics Data System (ADS)

    Barra, Beatrice; El Hadji, Sara; De Momi, Elena; Ferrigno, Giancarlo; Cardinale, Francesco; Baselli, Giuseppe

    2017-03-01

    Several neurosurgical procedures, such as arteriovenous malformation (AVM) and aneurysm embolizations and StereoElectroEncephaloGraphy (SEEG), require accurate reconstruction of the cerebral vascular tree, as well as the classification of arteries and veins, in order to increase the safety of the intervention. Segmentation of arteries and veins from 4D CT perfusion scans has already been proposed in different studies. Nonetheless, such procedures require long acquisition protocols, and the radiation dose given to the patient is not negligible. Hence, there is room for approaches attempting to recover the dynamic information from standard Contrast Enhanced Cone Beam Computed Tomography (CE-CBCT) scans. The algorithm proposed by our team is called ART 3.5D. It is a novel algorithm based on the postprocessing of both the angiogram and the raw data of a standard Digital Subtraction Angiography from a CBCT (DSACBCT), allowing artery and vein segmentation and labeling without requiring any additional radiation exposure for the patient and without lowering the resolution. In addition, while previous versions of the algorithm considered only the distinction between arteries and veins, here the simulation and identification of the capillary phase is introduced, in order to provide further information useful for more precise vasculature segmentation.

  7. Holographic otoscope for nano-displacement measurements of surfaces under dynamic excitation

    PubMed Central

    Flores-Moreno, J. M.; Furlong, Cosme; Rosowski, John J.; Harrington, Ellery; Cheng, Jeffrey T.; Scarpino, C.; Santoyo, F. Mendoza

    2011-01-01

    We describe a novel holographic otoscope system for measuring nano-displacements of objects subjected to dynamic excitation. Such measurements are necessary to quantify the mechanical deformation of surfaces in mechanics, acoustics, electronics, biology, and many other fields. In particular, we are interested in measuring the sound-induced motion of biological samples, such as an eardrum. Our holographic otoscope system consists of laser illumination delivery (IS), optical head (OH), and image processing computer (IP) subsystems. The IS delivers the object beam (OB) and the reference beam (RB) to the OH. The backscattered light from the object illuminated by the OB interferes with the RB at the camera sensor plane and is digitally recorded as a hologram. The hologram is processed by the IP using a Fresnel numerical reconstruction algorithm, in which the focal plane can be selected freely. Our holographic otoscope system is currently deployed in a clinic and is packaged in a custom design. It is mounted on a mechatronic positioning system to increase its maneuverability so that it can be conveniently positioned in front of the object to be measured. We present representative results highlighting the versatility of our system in measuring wavelength-scale deformations of complex elastic surfaces, including a copper foil membrane and a postmortem tympanic membrane (TM). PMID:21898459
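    The Fresnel numerical reconstruction step mentioned above can be illustrated with the single-FFT Fresnel transform, in which the recorded hologram is multiplied by a quadratic phase chirp and Fourier transformed; the reconstruction distance plays the role of the freely selectable focal plane. This is a generic sketch, not the authors' implementation, and the wavelength, pixel pitch, and distance below are illustrative assumptions.

```python
import numpy as np

def fresnel_reconstruct(hologram, wavelength, pitch, d):
    """Single-FFT Fresnel reconstruction of a digital hologram.

    hologram   : 2D array of recorded intensities
    wavelength : laser wavelength [m]
    pitch      : detector pixel pitch [m]
    d          : reconstruction distance [m] (selectable focal plane)
    Returns the complex field in the reconstruction plane.
    """
    ny, nx = hologram.shape
    y = (np.arange(ny) - ny / 2) * pitch
    x = (np.arange(nx) - nx / 2) * pitch
    Y, X = np.meshgrid(y, x, indexing="ij")
    # Quadratic phase chirp of the Fresnel diffraction integral; the
    # constant prefactor only affects overall phase/scale and is omitted,
    # so intensity images are unaffected.
    chirp = np.exp(1j * np.pi * (X**2 + Y**2) / (wavelength * d))
    return np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(hologram * chirp)))

# usage (illustrative numbers): reconstruct a synthetic hologram
holo = np.random.default_rng(0).random((256, 256))
field = fresnel_reconstruct(holo, wavelength=473e-9, pitch=6.8e-6, d=0.05)
intensity = np.abs(field) ** 2
```

    Refocusing to a different plane is just a matter of calling the function again with another `d`, which is how a numerical-reconstruction otoscope can focus after acquisition.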

  8. Verification measurements and clinical evaluation of the iPlan RT Monte Carlo dose algorithm for 6 MV photon energy

    NASA Astrophysics Data System (ADS)

    Petoukhova, A. L.; van Wingerden, K.; Wiggenraad, R. G. J.; van de Vaart, P. J. M.; van Egmond, J.; Franken, E. M.; van Santvoort, J. P. C.

    2010-08-01

    This study presents data for verification of the iPlan RT Monte Carlo (MC) dose algorithm (BrainLAB, Feldkirchen, Germany). MC calculations were compared with pencil beam (PB) calculations and verification measurements in phantoms with lung-equivalent material, air cavities or bone-equivalent material to mimic head and neck and thorax and in an Alderson anthropomorphic phantom. Dosimetric accuracy of MC for the micro-multileaf collimator (MLC) simulation was tested in a homogeneous phantom. All measurements were performed using an ionization chamber and Kodak EDR2 films with Novalis 6 MV photon beams. Dose distributions measured with film and calculated with MC in the homogeneous phantom are in excellent agreement for oval, C and squiggle-shaped fields and for a clinical IMRT plan. For a field with completely closed MLC, MC is much closer to the experimental result than the PB calculations. For fields larger than the dimensions of the inhomogeneities the MC calculations show excellent agreement (within 3%/1 mm) with the experimental data. MC calculations in the anthropomorphic phantom show good agreement with measurements for conformal beam plans and reasonable agreement for dynamic conformal arc and IMRT plans. For 6 head and neck and 15 lung patients a comparison of the MC plan with the PB plan was performed. Our results demonstrate that MC is able to accurately predict the dose in the presence of inhomogeneities typical for head and neck and thorax regions with reasonable calculation times (5-20 min). Lateral electron transport was well reproduced in MC calculations. We are planning to implement MC calculations for head and neck and lung cancer patients.
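    Agreement criteria like the 3%/1 mm quoted above are conventionally evaluated with the gamma index. The following is a minimal 1D global-gamma sketch, not the film-analysis software used in the study; the profiles and criteria are illustrative.

```python
import numpy as np

def gamma_1d(dose_ref, dose_eval, x, dose_tol=0.03, dta_mm=1.0):
    """Minimal 1D global gamma index.

    dose_ref, dose_eval : dose profiles on the same spatial grid x [mm]
    dose_tol            : dose-difference criterion (fraction of max ref dose)
    dta_mm              : distance-to-agreement criterion [mm]
    Returns gamma at each reference point (<= 1 means 'pass').
    """
    d_norm = dose_tol * dose_ref.max()              # global dose criterion
    # Pairwise normalized dose differences and distances
    dd = (dose_eval[None, :] - dose_ref[:, None]) / d_norm
    dx = (x[None, :] - x[:, None]) / dta_mm
    return np.sqrt(dd**2 + dx**2).min(axis=1)

# usage: identical profiles pass everywhere; a 3 mm shift fails at 3%/1 mm
x = np.linspace(0, 100, 201)                        # 0.5 mm spacing
ref = np.exp(-((x - 50) / 20) ** 2)
gamma_same = gamma_1d(ref, ref, x)
gamma_shift = gamma_1d(ref, np.exp(-((x - 53) / 20) ** 2), x)
```

    The brute-force pairwise search is fine for profiles; clinical 2D/3D implementations restrict the search radius for speed.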

  9. Optics measurement algorithms and error analysis for the proton energy frontier

    NASA Astrophysics Data System (ADS)

    Langner, A.; Tomás, R.

    2015-03-01

    Optics measurement algorithms have been improved in preparation for the commissioning of the LHC at higher energy, i.e., with an increased damage potential. Due to machine protection considerations, the higher energy sets tighter limits on the maximum excitation amplitude and the total beam charge, reducing the signal-to-noise ratio of optics measurements. Furthermore, the precision achieved in 2012 (4 TeV) was insufficient to understand beam size measurements and to determine interaction point (IP) β-functions (β*). A new, more sophisticated algorithm has been developed which takes into account both the statistical and systematic errors involved in this measurement. This makes it possible to combine more beam position monitor measurements for deriving the optical parameters and is demonstrated to significantly improve the accuracy and precision. Measurements from the 2012 run have been reanalyzed; owing to the improved algorithms, the derived optical parameters are significantly more precise, with average error bars decreased by a factor of three to four. This allowed the calculation of β* values and proved fundamental to understanding the emittance evolution during the energy ramp.
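    The mechanism by which combining more beam position monitor measurements shrinks the error bars can be sketched with inverse-variance weighting of independent measurements. This is a textbook illustration, not the LHC analysis code; the numbers are invented.

```python
import math

def combine(values, sigmas):
    """Inverse-variance weighted mean of independent measurements.

    Returns (best estimate, combined standard error). The combined
    error is always smaller than the smallest individual error, which
    is the mechanism behind the reduced error bars when more monitor
    readings are folded into one optics parameter.
    """
    weights = [1.0 / s**2 for s in sigmas]
    wsum = sum(weights)
    mean = sum(w * v for w, v in zip(weights, values)) / wsum
    return mean, math.sqrt(1.0 / wsum)

# usage: four equally precise measurements halve the error bar
m, s = combine([10.2, 9.8, 10.1, 9.9], [0.4, 0.4, 0.4, 0.4])
```

    Systematic errors, which the abstract also accounts for, do not shrink this way; they must be propagated separately rather than averaged down.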

  10. Influence of different dose calculation algorithms on the estimate of NTCP for lung complications.

    PubMed

    Hedin, Emma; Bäck, Anna

    2013-09-06

    Due to limitations and uncertainties in dose calculation algorithms, different algorithms can predict different dose distributions and dose-volume histograms for the same treatment. This can be a problem when estimating the normal tissue complication probability (NTCP) for patient-specific dose distributions. Published NTCP model parameters are often derived for a different dose calculation algorithm than the one used to calculate the actual dose distribution. The use of algorithm-specific NTCP model parameters can prevent errors caused by differences in dose calculation algorithms. The objective of this work was to determine how to change the NTCP model parameters for lung complications derived for a simple correction-based pencil beam dose calculation algorithm, in order to make them valid for three other common dose calculation algorithms. NTCP was calculated with the relative seriality (RS) and Lyman-Kutcher-Burman (LKB) models. The four dose calculation algorithms used were the pencil beam (PB) and collapsed cone (CC) algorithms employed by Oncentra, and the pencil beam convolution (PBC) and anisotropic analytical algorithm (AAA) employed by Eclipse. Original model parameters for lung complications were taken from four published studies on different grades of pneumonitis, and new algorithm-specific NTCP model parameters were determined. The difference between original and new model parameters was presented in relation to the reported model parameter uncertainties. Three different types of treatments were considered in the study: tangential and locoregional breast cancer treatment and lung cancer treatment. Changing the algorithm without the derivation of new model parameters caused changes in the NTCP value of up to 10 percentage points for the cases studied. Furthermore, the error introduced could be of the same magnitude as the confidence intervals of the calculated NTCP values. 
The new NTCP model parameters were tabulated as the algorithm was varied from PB to PBC, AAA, or CC. Moving from the PB to the PBC algorithm did not require new model parameters; however, moving from PB to AAA or CC did require a change in the NTCP model parameters, with CC requiring the largest change. It was shown that the new model parameters for a given algorithm are different for the different treatment types.
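    The LKB model used above reduces a dose-volume histogram to a generalized EUD and maps it through a cumulative normal. The sketch below shows that structure; the parameter values are illustrative placeholders, not the published lung-pneumonitis parameters discussed in the study.

```python
import math

def lkb_ntcp(doses, volumes, n, m, td50):
    """Lyman-Kutcher-Burman NTCP from a differential DVH.

    doses   : bin doses [Gy]
    volumes : fractional organ volume per bin (sums to 1)
    n       : volume-effect parameter
    m       : slope parameter
    td50    : dose giving 50% complication probability [Gy]
    """
    # Generalized EUD reduces the DVH to a single equivalent uniform dose
    geud = sum(v * d ** (1.0 / n) for d, v in zip(doses, volumes)) ** n
    t = (geud - td50) / (m * td50)
    # NTCP is the standard normal CDF evaluated at t
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

# usage: uniform irradiation of the whole organ at TD50 gives NTCP = 0.5
# by construction (parameter values below are illustrative only)
p = lkb_ntcp([24.5], [1.0], n=0.87, m=0.18, td50=24.5)
```

    Refitting algorithm-specific parameters, as the study does, amounts to re-estimating (n, m, TD50) against DVHs computed by each dose engine.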

  11. The dynamics and control of large flexible space structures - 13

    NASA Technical Reports Server (NTRS)

    Bainum, Peter M.; Li, Feiyue; Xu, Jianke

    1990-01-01

    The optimal control of three-dimensional large-angle maneuvers and vibrations of a Shuttle-mast-reflector system is considered. The nonlinear equations of motion are formulated using Lagrange's equations, with the mast modeled as a continuous beam subject to three-dimensional deformations. Pontryagin's Maximum Principle is applied to the slewing problem to derive the necessary conditions for the optimal controls, which are bounded by given saturation levels. The resulting two-point boundary-value problem is then solved using the quasilinearization algorithm and the method of particular solutions. The study of the large-angle maneuvering of the Shuttle-beam-reflector spacecraft in the plane of a circular earth orbit is extended to consider the effects of the structural offset connection, the axial shortening, and the gravitational torque on the slewing motion. Finally, the effect of additional design parameters (such as those related to an additional payload requirement) on the linear-quadratic-regulator-based design of an orbiting control/structural system is examined.

  12. Mobile robot dynamic path planning based on improved genetic algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Yong; Zhou, Heng; Wang, Ying

    2017-08-01

    In a dynamic, unknown environment, path planning for mobile robots is a difficult problem. In this paper, a dynamic path planning method based on a genetic algorithm is proposed. A reward-value model is designed to estimate the probability of dynamic obstacles on the path, and the reward-value function is incorporated into the genetic algorithm. A unique coding technique reduces the computational complexity of the algorithm. The fitness function of the genetic algorithm fully considers three factors: the security of the path, the shortest distance of the path, and the reward value of the path. The simulation results show that the proposed genetic algorithm is efficient in a variety of complex dynamic environments.
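    A fitness function combining the three factors named above (safety, length, reward value) might be sketched as a weighted sum over a grid path. The weights, the grid representation, and the helper names are assumptions for illustration, not the paper's formulation.

```python
import math

def path_fitness(path, obstacle_prob, w_safe=0.4, w_len=0.4, w_reward=0.2):
    """Toy fitness for a grid path combining the abstract's three factors:
    safety, path length, and a reward value estimating the chance the
    path stays free of dynamic obstacles.

    path          : list of (x, y) waypoints
    obstacle_prob : dict mapping a cell to the estimated probability
                    that a dynamic obstacle occupies it
    """
    length = sum(math.dist(a, b) for a, b in zip(path, path[1:]))
    # Safety: zero out the safety term if any waypoint is certainly blocked
    safe = 0.0 if any(obstacle_prob.get(p, 0.0) >= 1.0 for p in path) else 1.0
    # Reward value: expected fraction of traversed cells free of obstacles
    reward = sum(1.0 - obstacle_prob.get(p, 0.0) for p in path) / len(path)
    # Shorter paths score higher through the 1/(1+length) term
    return w_safe * safe + w_len / (1.0 + length) + w_reward * reward

# usage: a detour around a probable obstacle outscores a blocked path
risky = [(0, 0), (1, 0), (2, 0)]
detour = [(0, 0), (1, 1), (2, 0)]
probs = {(1, 0): 1.0}
```

    A GA would then select, cross over, and mutate candidate paths, using this fitness to rank them each generation.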

  13. Failure monitoring in dynamic systems: Model construction without fault training data

    NASA Technical Reports Server (NTRS)

    Smyth, P.; Mellstrom, J.

    1993-01-01

    Advances in the use of autoregressive models, pattern recognition methods, and hidden Markov models for on-line health monitoring of dynamic systems (such as DSN antennas) have recently been reported. However, the algorithms described in previous work have the significant drawback that data acquired under fault conditions are assumed to be available to train the model used for monitoring the system under observation. This article reports that this assumption can be relaxed and that hidden Markov monitoring models can be constructed using only data acquired under normal conditions and prior knowledge of the system characteristics being measured. The method is described and evaluated on data from the DSS-13 34-m beam-waveguide antenna. The primary conclusion from the experimental results is that the method is indeed practical and holds considerable promise for application at the 70-m antenna sites, where acquisition of fault data under controlled conditions is not realistic.
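    The core idea of building a monitor from normal-condition data only can be illustrated with a much simpler model than the article's hidden Markov approach: fit an autoregressive model to healthy telemetry and flag samples whose prediction residual exceeds a threshold derived from the normal data alone. This is a generic sketch with synthetic data, not the DSN implementation.

```python
import numpy as np

def fit_ar2(x):
    """Least-squares fit of an AR(2) model x[t] = a1*x[t-1] + a2*x[t-2] + e."""
    X = np.column_stack([x[1:-1], x[:-2]])
    y = x[2:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs, (y - X @ coeffs).std()

def monitor(x, coeffs, sigma, k=5.0):
    """Flag samples whose one-step prediction residual exceeds k*sigma,
    where sigma comes from NORMAL data only (no fault training data)."""
    pred = coeffs[0] * x[1:-1] + coeffs[1] * x[:-2]
    return np.abs(x[2:] - pred) > k * sigma

# usage: train on a healthy oscillation, then inject a fault transient
rng = np.random.default_rng(1)
t = np.arange(2000)
healthy = np.sin(0.1 * t) + 0.05 * rng.standard_normal(t.size)
coeffs, sigma = fit_ar2(healthy)
faulty = healthy.copy()
faulty[1500:1510] += 2.0          # simulated fault transient
flags = monitor(faulty, coeffs, sigma)
```

    The HMM version generalizes this by modeling transitions between hidden "health states" instead of thresholding a single residual.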

  14. Incorporation of interfacial roughness into recursion matrix formalism of dynamical X-ray diffraction in multilayers and superlattices.

    PubMed

    Lobach, Ihar; Benediktovitch, Andrei; Ulyanenkov, Alexander

    2017-06-01

    Diffraction in multilayers in the presence of interfacial roughness is studied theoretically, the roughness being considered as a transition layer. Exact (within the framework of the two-beam dynamical diffraction theory) differential equations for field amplitudes in a crystalline structure with varying properties along its surface normal are obtained. An iterative scheme for approximate solution of the equations is developed. The presented approach to interfacial roughness is incorporated into the recursion matrix formalism in a way that obviates possible numerical problems. Fitting of the experimental rocking curve is performed in order to test the possibility of reconstructing the roughness value from a diffraction scan. The developed algorithm works substantially faster than the traditional approach to dealing with a transition layer (dividing it into a finite number of thin lamellae). Calculations by the proposed approach are only two to three times longer than calculations for corresponding structures with ideally sharp interfaces.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheng, C; Liu, H; Indiana University Bloomington, Bloomington, IN

    Purpose: A rapid-cycling proton beam has several distinct characteristics superior to a slow-extraction synchrotron: the beam energy and energy spread, beam intensity, and spot size can be varied spot by spot. The feasibility of using a spot scanning beam from a rapid-cycling medical synchrotron (RCMS) at 10 Hz repetition frequency is investigated in this study for its application in proton therapy. Methods: The versatility of the beam is illustrated by two examples in water phantoms: (1) a cylindrical PTV irradiated by a single field and (2) a spherical PTV irradiated by two parallel opposed fields. A uniform dose distribution is to be delivered to the volumes. The Geant4 Monte Carlo code is used to validate the dose distributions in each example. Results: Transverse algorithms are developed to produce uniform distributions in each transverse plane in the two examples with a cylindrical and a spherical PTV, respectively. Longitudinally, different proton energies are used in successive transverse planes to produce the SOBP required to cover the PTVs. In general, uniformity of dose distribution within 3% is obtained for the cylinder and 3.5% for the sphere. The transverse algorithms require only a few hundred beam spots for each plane. The algorithms may be applied to larger volumes by increasing the intensity spot by spot for the same delivery time of the same dose. The treatment time can be shorter than 1 minute for any field configuration and tumor shape. Conclusion: The unique beam characteristics of a spot scanning beam from an RCMS at 10 Hz repetition frequency are used to design transverse and longitudinal algorithms to produce uniform distributions for targets of arbitrary shape and size. The proposed spot scanning beam is more versatile than existing spot scanning beams in proton therapy, with better beam control and lower neutron dose. 
    This work is supported in part by grants from the US Department of Energy under contract DE-FG02-12ER41800 and the National Science Foundation, NSF PHY-1205431.
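    The idea of a transverse algorithm producing a uniform plane from a modest number of Gaussian spots can be sketched as a least-squares optimization of spot weights along one lateral axis. The spot size, spacing, and flat region below are illustrative assumptions, not the study's values, and a clinical optimizer would additionally constrain the weights to be non-negative.

```python
import numpy as np

def uniform_spot_weights(x_eval, spot_centers, sigma):
    """Solve for spot weights that make the summed Gaussian spot
    profile as flat as possible over x_eval (unconstrained least squares).
    """
    # Each column of A is one spot's lateral dose profile
    A = np.exp(-0.5 * ((x_eval[:, None] - spot_centers[None, :]) / sigma) ** 2)
    target = np.ones(x_eval.size)
    w, *_ = np.linalg.lstsq(A, target, rcond=None)
    return w, A @ w

# usage: 21 spots on a 5 mm pitch with 4 mm sigma, flat over +/- 30 mm
x = np.linspace(-30.0, 30.0, 121)
centers = np.arange(-50.0, 51.0, 5.0)
w, profile = uniform_spot_weights(x, centers, sigma=4.0)
```

    Stacking such planes at successive energies, with per-plane weights rescaled to build the SOBP, is the longitudinal half of the scheme the abstract describes.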

  16. The Impact of Monte Carlo Dose Calculations on Intensity-Modulated Radiation Therapy

    NASA Astrophysics Data System (ADS)

    Siebers, J. V.; Keall, P. J.; Mohan, R.

    The effect of dose calculation accuracy for IMRT was studied by comparing different dose calculation algorithms. A head and neck IMRT plan was optimized using a superposition dose calculation algorithm. Dose was re-computed for the optimized plan using both Monte Carlo and pencil beam dose calculation algorithms to generate patient and phantom dose distributions. Tumor control probabilities (TCP) and normal tissue complication probabilities (NTCP) were computed to estimate the plan outcome. For the treatment plan studied, Monte Carlo best reproduces phantom dose measurements, the TCP was slightly lower than the superposition and pencil beam results, and the NTCP values differed little.

  17. A comparison of TPS and different measurement techniques in small-field electron beams.

    PubMed

    Donmez Kesen, Nazmiye; Cakir, Aydin; Okutan, Murat; Bilge, Hatice

    2015-01-01

    In recent years, small-field electron beams have been used for the treatment of superficial lesions, which requires small circular fields. However, when using very small electron fields, significant dosimetric problems may occur. In this study, dose distributions and outputs of circular fields with dimensions of 5 cm and smaller, for nominal energies of 6, 9, and 15 MeV from the Siemens ONCOR linac, were measured and compared with data from a treatment planning system that uses the pencil-beam algorithm in electron beam calculations. All dose distribution measurements were performed using Gafchromic EBT film; these measurements were compared with data obtained from the Computerized Medical Systems (CMS) XiO treatment planning system (TPS) using the gamma-index method in the PTW VeriSoft software. Output measurements were performed using Gafchromic EBT film, an Advanced Markus ion chamber, and thermoluminescent dosimetry (TLD). Although the pencil-beam algorithm is used to model electron beams in many clinics, there is little detailed information in the literature about its use. As the field size decreased, the point of maximum dose moved closer to the surface. Output factors measured with the different dosimeters were consistent with one another; differences from the values obtained from the TPS reached 42% for 6 and 15 MeV and 32% for 9 MeV. When the dose distributions from the TPS were compared with the measurements from the Gafchromic EBT films, the results were consistent for fields of 2-cm diameter and larger, but not for fields of 1-cm diameter and smaller. In the CMS XiO TPS, which uses the pencil-beam algorithm, the calculated dose distributions of electron treatment fields created with a 1-cm-diameter circular cutout were not appropriate for patient treatment, and the pencil-beam algorithm is not suitable for monitor unit (MU) calculations in electron dosimetry. 
Copyright © 2015 American Association of Medical Dosimetrists. Published by Elsevier Inc. All rights reserved.

  18. Molecular Dynamics Simulation of the Three-Dimensional Ordered State in Laser-Cooled Heavy-Ion Beams

    NASA Astrophysics Data System (ADS)

    Yuri, Yosuke

    A molecular dynamics simulation is performed to study the formation of three-dimensional ordered beams by laser cooling in a cooler storage ring. Ultralow-temperature heavy-ion beams are generated by transverse cooling with displaced Gaussian lasers and resonant coupling. A three-dimensional ordered state of the ion beam is attained at a high line density. The ordered beam exhibits several unique characteristics different from those of an ideal crystalline beam.

  19. Beam Dynamics Simulation Platform and Studies of Beam Breakup in Dielectric Wakefield Structures

    NASA Astrophysics Data System (ADS)

    Schoessow, P.; Kanareykin, A.; Jing, C.; Kustov, A.; Altmark, A.; Gai, W.

    2010-11-01

    A particle-Green's function beam dynamics code (BBU-3000) for studying beam breakup effects is incorporated into a parallel computing framework based on the BOINC software environment, supporting both task farming on a heterogeneous cluster and local grid computing. User access to the platform is through a web browser.

  20. Reduction of metal artifacts: beam hardening and photon starvation effects

    NASA Astrophysics Data System (ADS)

    Yadava, Girijesh K.; Pal, Debashish; Hsieh, Jiang

    2014-03-01

    The presence of metal artifacts in CT imaging can obscure relevant anatomy and interfere with disease diagnosis. Metal artifacts arise primarily from beam hardening, scatter, partial volume, and photon starvation; the contribution from each depends on the type of hardware. A comparison of CT images obtained with different metallic hardware in various applications, along with acquisition and reconstruction parameters, helps in understanding methods for reducing or overcoming such artifacts. In this work, a metal beam-hardening correction (BHC) algorithm and a projection-completion-based metal artifact reduction (MAR) algorithm were developed and applied to phantom and clinical CT scans with various metallic implants. Stainless steel and titanium were used to model and correct for the metal beam-hardening effect. In the MAR algorithm, the corrupted projection samples are replaced by a combination of the original projections and in-painted data obtained by forward projecting a prior image. The data included spine fixation screws, hip implants, dental fillings, and body extremity fixations, covering the range of clinically used metal implants. Comparison of BHC and MAR on different metallic implants was used to characterize the dominant source of the artifacts and conceivable methods to overcome them. Results of the study indicate that beam hardening can be the dominant source of artifacts in many spine and extremity fixations, whereas dental and hip implants can be dominated by photon starvation. The BHC algorithm significantly improved image quality in CT scans with metallic screws, whereas the MAR algorithm alleviated artifacts from hip implants and dental fillings.
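    The projection-completion step at the heart of MAR can be sketched per detector row as interpolation across the metal trace. The sketch below uses plain linear interpolation; the prior-image forward projection the abstract blends in is omitted for brevity, and the data are synthetic.

```python
import numpy as np

def inpaint_metal_trace(sinogram, metal_mask):
    """Replace sinogram samples corrupted by metal (metal_mask == True)
    with values linearly interpolated from neighboring uncorrupted
    detector channels, view by view.
    """
    out = sinogram.copy()
    channels = np.arange(sinogram.shape[1])
    for v in range(sinogram.shape[0]):          # each projection view
        bad = metal_mask[v]
        if bad.any() and not bad.all():
            out[v, bad] = np.interp(channels[bad], channels[~bad],
                                    sinogram[v, ~bad])
    return out

# usage: a corrupted block in a linear ramp sinogram is restored exactly
sino = np.tile(np.linspace(0.0, 1.0, 11), (3, 1))
mask = np.zeros_like(sino, dtype=bool)
mask[:, 4:7] = True
corrupt = sino.copy()
corrupt[mask] = 99.0
fixed = inpaint_metal_trace(corrupt, mask)
```

    After inpainting, the completed sinogram is reconstructed normally, and the metal is typically re-inserted into the image afterwards.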

  1. Temporally separating Cherenkov radiation in a scintillator probe exposed to a pulsed X-ray beam.

    PubMed

    Archer, James; Madden, Levi; Li, Enbang; Carolan, Martin; Petasecca, Marco; Metcalfe, Peter; Rosenfeld, Anatoly

    2017-10-01

    Cherenkov radiation is generated in optical systems exposed to ionising radiation. In water or plastic devices, if the incident radiation has components with high enough energy (for example, electrons or positrons with energy greater than 175 keV), Cherenkov radiation will be generated. A scintillator dosimeter that collects optical light guided by an optical fibre will have Cherenkov radiation generated throughout the length of fibre exposed to the radiation field, compromising the signal. We present a novel algorithm to separate the Cherenkov radiation signal that requires only a single probe, provided the radiation source is pulsed, such as a linear accelerator in external beam radiation therapy. We use a slow scintillator (BC-444) that, in a constant beam of radiation, reaches peak light output after 1 microsecond, while the Cherenkov signal is detected nearly instantly. This allows our algorithm to separate the scintillator signal from the Cherenkov signal. The relative beam profile and depth dose of a linear accelerator 6 MV X-ray field were reconstructed using the algorithm. The optimisation method improved the fit to the ionisation chamber data and improved the reliability of the measurements. The algorithm was able to remove 74% of the Cherenkov light at the expense of only 1.5% of the scintillation light. Further characterisation of the Cherenkov radiation signal has the potential to improve the results and allow this method to be used as a simpler optical fibre dosimeter for quality assurance in external beam therapy. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
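    Separating a prompt Cherenkov component from a slow scintillation rise, given known temporal response shapes, can be sketched as linear least-squares unmixing of each recorded pulse. The kernels below are toy shapes chosen to mimic an instantaneous component and a roughly 1 microsecond rise, not BC-444's measured response, and this is not the authors' optimisation method.

```python
import numpy as np

def unmix_pulse(pulse, k_cherenkov, k_scint):
    """Least-squares amplitudes of two known temporal kernels in a
    measured pulse: pulse ~= a * k_cherenkov + b * k_scint."""
    A = np.column_stack([k_cherenkov, k_scint])
    (a, b), *_ = np.linalg.lstsq(A, pulse, rcond=None)
    return a, b

# toy kernels on a 0.1 us grid: prompt spike vs slow rise to peak
t = np.linspace(0.0, 3.0, 31)                   # time in microseconds
k_cher = np.where(t < 0.2, 1.0, 0.0)            # near-instant Cherenkov
k_scint = 1.0 - np.exp(-t / 0.45)               # slow scintillator rise
pulse = 0.3 * k_cher + 1.0 * k_scint            # synthetic measurement
a, b = unmix_pulse(pulse, k_cher, k_scint)
```

    Because the two kernels differ strongly in their early-time behaviour, the decomposition is well conditioned; with noisy real pulses, the fitted Cherenkov amplitude would be discarded and only the scintillation amplitude kept as the dose signal.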

  2. Complex amplitude reconstruction for dynamic beam quality M2 factor measurement with self-referencing interferometer wavefront sensor.

    PubMed

    Du, Yongzhao; Fu, Yuqing; Zheng, Lixin

    2016-12-20

    A real-time complex amplitude reconstruction method for determining the dynamic beam quality M2 factor, based on a Mach-Zehnder self-referencing interferometer wavefront sensor, is developed. Using the proposed complex amplitude reconstruction method, full characterization of the laser beam, including amplitude (intensity profile) and phase information, can be reconstructed from a single interference pattern with the Fourier fringe-pattern analysis method in a one-shot measurement. With the reconstructed complex amplitude, the beam field at any position z along the propagation direction can be obtained using diffraction integral theory. The beam quality M2 factor of the dynamic beam is then calculated according to the method specified in the ISO 11146 standard. The feasibility of the proposed method is demonstrated with theoretical analysis and experiments, including static and dynamic beam processes. The method is simple and fast, operates without movable parts, and allows investigation of laser beams under conditions inaccessible to existing methods.
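    The ISO 11146 evaluation referred to above determines M2 by fitting the second-moment beam diameter along the propagation axis to a hyperbola. A minimal sketch of that fit (generic, with a synthetic ideal Gaussian beam, not the authors' code):

```python
import numpy as np

def m2_from_widths(z, d, wavelength):
    """Beam quality factor per the ISO 11146 hyperbola fit:
    d^2(z) = a + b*z + c*z^2, then M^2 = (pi/(8*lambda)) * sqrt(4ac - b^2).

    z : positions along the beam axis [m]
    d : second-moment beam diameters at those positions [m]
    """
    c2, c1, c0 = np.polyfit(z, np.asarray(d) ** 2, 2)   # c2*z^2 + c1*z + c0
    return np.pi / (8.0 * wavelength) * np.sqrt(4.0 * c0 * c2 - c1**2)

# usage: an ideal Gaussian beam (w0 = 1 mm at 633 nm) must give M^2 = 1
lam, w0 = 633e-9, 1e-3
zr = np.pi * w0**2 / lam                        # Rayleigh range
z = np.linspace(-2 * zr, 2 * zr, 21)
d = 2 * w0 * np.sqrt(1 + (z / zr) ** 2)         # diameter = 2 * w(z)
m2 = m2_from_widths(z, d, lam)
```

    In the interferometric method, the diameters d(z) come from numerically propagating the reconstructed complex field rather than from physically moving a camera, which is why no movable parts are needed.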

  3. SU-F-T-273: Using a Diode Array to Explore the Weakness of TPS DoseCalculation Algorithm for VMAT and Sliding Window Techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, J; Lu, B; Yan, G

    Purpose: To identify the weaknesses of the dose calculation algorithm in a treatment planning system for volumetric modulated arc therapy (VMAT) and sliding window (SW) techniques using a two-dimensional diode array. Methods: The VMAT quality assurance (QA) was implemented with a diode array using multiple partial arcs divided from a VMAT plan; each partial arc has the same segments and the original monitor units. Arc angles were less than ±30°. Multiple arcs were delivered through consecutive and repetitive gantry operation, clockwise and counterclockwise. A source-to-axis distance setup with effective depths of 10 and 20 cm was used for the diode array. To identify dose errors caused in delivery of VMAT fields, numerous fields having the same segments as the VMAT field were irradiated using the static and step-and-shoot delivery techniques. The dose distributions of the SW technique were evaluated by creating split fields having fine moving steps of the multi-leaf collimator leaves. Doses calculated using the adaptive convolution algorithm were compared with measured doses using distance-to-agreement and dose-difference criteria of 3 mm and 3%. Results: While beam delivery through the static and step-and-shoot techniques showed a passing rate of 97 ± 2%, partial-arc delivery of the VMAT fields gave a passing rate of 85%. However, when leaf motion was restricted to less than 4.6 mm/°, the passing rate improved up to 95 ± 2%. Similar passing rates were obtained for both the 10 and 20 cm effective depth setups. The doses calculated for the SW technique showed a dose difference of over 7% at the final arrival point of the moving leaves. Conclusion: Error components in dynamic delivery of modulated beams were distinguished using the suggested QA method. This partial-arc method can be used for routine VMAT QA. An improved SW calculation algorithm is required to provide accurate estimated doses.

  4. SU-E-T-577: Commissioning of a Deterministic Algorithm for External Photon Beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, T; Finlay, J; Mesina, C

    Purpose: We report commissioning results for a deterministic algorithm for external photon beam treatment planning. A deterministic algorithm solves the radiation transport equations directly using a finite difference method, thus improving the accuracy of dose calculation, particularly under heterogeneous conditions, with results similar to those of Monte Carlo (MC) simulation. Methods: Commissioning data for photon energies of 6 - 15 MV include the percentage depth dose (PDD) measured at SSD = 90 cm and the output ratio in water (Spc), both normalized to 10 cm depth, for field sizes between 2 and 40 cm and depths between 0 and 40 cm. The off-axis ratio (OAR) for the same set of field sizes was used at 5 depths (dmax, 5, 10, 20, 30 cm). The final model was compared with the commissioning data as well as additional benchmark data. The benchmark data include dose per MU determined for 17 points for SSDs between 80 and 110 cm, depths between 5 and 20 cm, and lateral offsets of up to 16.5 cm. Relative comparisons were made in a heterogeneous phantom made of cork and solid water. Results: Compared to the commissioning beam data, the agreement is generally better than 2%, with larger errors (up to 13%) observed in the buildup regions of the PDD and the penumbra regions of the OAR profiles. The overall mean standard deviation is 0.04% when all data are taken into account. Compared to the benchmark data, the agreement is generally better than 2%. Relative comparison in the heterogeneous phantom is in general better than 4%. Conclusion: A commercial deterministic algorithm was commissioned for megavoltage photon beams. In a homogeneous medium, the agreement between the algorithm and measurement at the benchmark points is generally better than 2%. The dose accuracy of the deterministic algorithm is better than that of a convolution algorithm in heterogeneous media.

  5. [Application of elastic registration based on Demons algorithm in cone beam CT].

    PubMed

    Pang, Haowen; Sun, Xiaoyang

    2014-02-01

    We applied the Demons and accelerated Demons elastic registration algorithms to radiotherapy cone beam CT (CBCT) images, providing software support for real-time understanding of organ changes during radiotherapy. We wrote a 3D CBCT elastic registration program using Matlab and tested it on 3D CBCT images of two patients with cervical cancer. With the classic Demons algorithm, the minimum mean square error (MSE) decreased by 59.7% and the correlation coefficient (CC) increased by 11.0%; with the accelerated Demons algorithm, MSE decreased by 40.1% and CC increased by 7.2%. The experimental verification with both Demons variants obtained the desired results, but small differences remained owing to limited precision, and the total registration time was somewhat long. Both accuracy and run time need further improvement.
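    The classic Demons update driving both variants can be sketched in 2D as an iterative displacement-field estimate from the fixed-image gradient. This is a generic sketch on a synthetic image pair, not the Matlab program described above, and it omits the Gaussian smoothing of the displacement field that a full implementation applies each iteration.

```python
import numpy as np

def bilinear_warp(img, uy, ux):
    """Sample img at (y + uy, x + ux) with bilinear interpolation."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    yy = np.clip(ys + uy, 0, h - 1.0)
    xx = np.clip(xs + ux, 0, w - 1.0)
    y0 = np.floor(yy).astype(int); x0 = np.floor(xx).astype(int)
    y1 = np.minimum(y0 + 1, h - 1); x1 = np.minimum(x0 + 1, w - 1)
    fy = yy - y0; fx = xx - x0
    return ((1 - fy) * (1 - fx) * img[y0, x0] + (1 - fy) * fx * img[y0, x1] +
            fy * (1 - fx) * img[y1, x0] + fy * fx * img[y1, x1])

def demons(fixed, moving, n_iter=100):
    """Classic (Thirion) demons, driven by the fixed-image gradient:
    u += (fixed - moving o u) * grad(fixed) / (|grad(fixed)|^2 + diff^2)."""
    gy, gx = np.gradient(fixed)
    g2 = gx**2 + gy**2
    uy = np.zeros_like(fixed); ux = np.zeros_like(fixed)
    for _ in range(n_iter):
        warped = bilinear_warp(moving, uy, ux)
        diff = fixed - warped
        denom = g2 + diff**2 + 1e-12
        uy += diff * gy / denom
        ux += diff * gx / denom
        # (a full implementation would Gaussian-smooth uy, ux here)
    return bilinear_warp(moving, uy, ux)

# usage: register a Gaussian blob shifted by one pixel back onto the original
ys, xs = np.mgrid[0:64, 0:64]
fixed = np.exp(-(((ys - 32) ** 2 + (xs - 32) ** 2) / (2 * 5.0**2)))
moving = np.exp(-(((ys - 32) ** 2 + (xs - 33) ** 2) / (2 * 5.0**2)))
mse_before = np.mean((moving - fixed) ** 2)
mse_after = np.mean((demons(fixed, moving) - fixed) ** 2)
```

    The "accelerated" variant mentioned in the abstract modifies the force term to use both image gradients, which speeds convergence; the update loop otherwise has the same shape.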

  6. Super-resolution chemical imaging with dynamic placement of plasmonic hotspots

    NASA Astrophysics Data System (ADS)

    Olson, Aeli P.; Ertsgaard, Christopher T.; McKoskey, Rachel M.; Rich, Isabel S.; Lindquist, Nathan C.

    2015-08-01

    We demonstrate dynamic placement of plasmonic "hotspots" for super-resolution chemical imaging via Surface Enhanced Raman Spectroscopy (SERS). A silver nanohole array surface was coated with biological samples and illuminated with a laser. Due to the large plasmonic field enhancements, blinking behavior of the SERS hotspots was observed and processed using a Stochastic Optical Reconstruction Microscopy (STORM) algorithm enabling localization to within 10 nm. However, illumination of the sample with a single static laser beam (i.e., a slightly defocused Gaussian beam) only produced SERS hotspots in fixed locations on the surface, leaving noticeable gaps in any final image. But, by using a spatial light modulator (SLM), the illumination profile of the beam could be altered, shifting any hotspots across the nanohole array surface in sub-wavelength steps. Therefore, by properly structuring an illuminating light field with the SLM, we show the possibility of positioning plasmonic hotspots over a metallic nanohole surface on-the-fly. Using this and our SERS-STORM imaging technique, we show potential for high-resolution chemical imaging without the noticeable gaps that were present with static laser illumination. Interestingly, even illuminating the surface with randomly shifting SLM phase profiles was sufficient to completely fill in a wide field of view for super-resolution SERS imaging of a single strand of 100-nm thick collagen protein fibrils. Images were then compared to those obtained with a scanning electron microscope (SEM). Additionally, we explored alternative methods of phase shifting other than holographic illumination through the SLM to create localization of hotspots necessary for SERS-STORM imaging.
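    STORM-type localization of a blinking hotspot to well below the pixel size is usually done by fitting a 2D Gaussian; the simplest estimator with the same sub-pixel idea is an intensity-weighted centroid, sketched below on a synthetic spot. This is a generic illustration, not the processing pipeline used in the study.

```python
import numpy as np

def localize_centroid(frame, background=0.0):
    """Estimate a single emitter's sub-pixel position as the
    intensity-weighted centroid of a background-subtracted frame.
    (STORM algorithms typically fit a 2D Gaussian instead, which is
    more robust to noise; the centroid keeps the sketch short.)"""
    img = np.clip(frame - background, 0.0, None)
    total = img.sum()
    ys, xs = np.mgrid[0:frame.shape[0], 0:frame.shape[1]]
    return (ys * img).sum() / total, (xs * img).sum() / total

# usage: a synthetic hotspot at (12.3, 7.8) px is recovered sub-pixel
ys, xs = np.mgrid[0:25, 0:25]
spot = np.exp(-(((ys - 12.3) ** 2 + (xs - 7.8) ** 2) / (2 * 1.5**2)))
cy, cx = localize_centroid(spot)
```

    Accumulating such localizations over many blinking events, one per frame, is what builds the super-resolved image described in the abstract.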

  7. Three-dimensional online surface reconstruction of augmented fluorescence lifetime maps using photometric stereo (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Unger, Jakob; Lagarto, Joao; Phipps, Jennifer; Ma, Dinglong; Bec, Julien; Sorger, Jonathan; Farwell, Gregory; Bold, Richard; Marcu, Laura

    2017-02-01

    Multi-Spectral Time-Resolved Fluorescence Spectroscopy (ms-TRFS) can provide label-free real-time feedback on tissue composition and pathology during surgical procedures by resolving the fluorescence decay dynamics of the tissue. Recently, an ms-TRFS system has been developed in our group, allowing for either point-spectroscopy fluorescence lifetime measurements or dynamic raster tissue scanning by merging a 450 nm aiming beam with the pulsed fluorescence excitation light into a single collection fiber. In order to facilitate an augmented real-time display of fluorescence decay parameters, the lifetime values are back-projected onto the white light video. The goal of this study is to develop a 3D real-time surface reconstruction aiming for a comprehensive visualization of the decay parameters and providing enhanced navigation for the surgeon. Using a stereo camera setup, we use a combination of image feature matching and aiming beam stereo segmentation to establish a 3D surface model of the decay parameters. After camera calibration, texture-related features are extracted from both camera images and matched, providing a rough estimation of the surface. During the raster scanning, the rough estimation is successively refined in real time by tracking the aiming beam positions using an advanced segmentation algorithm. The method is evaluated on excised breast tissue specimens, showing high precision and running in real time at approximately 20 frames per second. The proposed method shows promising potential for intraoperative navigation, e.g., tumor margin assessment. Furthermore, it provides the basis for registering the fluorescence lifetime maps to the tissue surface, adapting them to possible tissue deformations.

  8. Analysis of the penumbra enlargement in lung versus the quality index of photon beams: a methodology to check the dose calculation algorithm.

    PubMed

    Tsiakalos, Miltiadis F; Theodorou, Kiki; Kappas, Constantin; Zefkili, Sofia; Rosenwold, Jean-Claude

    2004-04-01

    It is well known that considerable underdosage can occur at the edges of a tumor inside the lung because of penumbra degradation due to the lack of lateral electronic equilibrium. Although present even at lower energies, this phenomenon is more pronounced at higher energies. Apart from Monte Carlo calculation, most existing Treatment Planning Systems (TPSs) cannot handle this effect at all, or cannot do so with acceptable accuracy. A methodology has been developed for assessing dose calculation algorithms in the lung region, where lateral electronic disequilibrium exists, based on the Quality Index (QI) of the incident beam. A phantom consisting of layers of polystyrene and lung material was irradiated using photon beams of 4, 6, 15, and 20 MV. The cross-plane profiles of each beam for 5x5, 10x10, and 25x10 fields were measured at the middle of the phantom using films. The penumbra (20%-80%) and fringe (50%-90%) enlargement was measured, and the ratio of the width in lung to that in polystyrene was defined as the Correction Factor (CF). Monte Carlo calculations in the two phantoms were also performed for energies of 6, 15, and 20 MV. Five commercial TPS algorithms were tested for their ability to predict the penumbra and fringe enlargement. A linear relationship was found between the QI of the beams and the CF of the penumbra and fringe enlargement for all examined fields. Monte Carlo calculations agree very well (less than 1% difference) with the film measurements. The CF values range between 1.1 for 4 MV (QI 0.620) and 2.28 for 20 MV (QI 0.794). Three of the tested TPS algorithms could not predict any enlargement at all, for any energy or field, and two predicted the penumbra enlargement to some extent. The proposed methodology can help any user or developer check the accuracy of their algorithm for lung cases, based on a simple phantom geometry and the QI of the incident beam. 
This check is very important especially when higher energies are used, as the inaccuracies in existing algorithms can lead to an incorrect choice of energy for lung treatment and consequently to a failure in tumor control.
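    The penumbra-width and Correction Factor computation described above can be sketched as follows. The tanh-shaped edge profiles are hypothetical stand-ins for measured film profiles; only the two (QI, CF) anchor pairs quoted in the abstract are taken from it.

```python
import numpy as np

def penumbra_width(x, profile, lo=0.2, hi=0.8):
    """Distance between the lo and hi fractional dose levels of one
    normalized lateral profile edge (20%-80% penumbra by default)."""
    p = profile / profile.max()
    # interpolate the positions of the two dose levels on a monotone edge
    x_lo = np.interp(lo, p, x)
    x_hi = np.interp(hi, p, x)
    return abs(x_hi - x_lo)

# Hypothetical error-function-like edges; widths chosen for illustration only.
x = np.linspace(-30.0, 30.0, 601)
edge = lambda w: 0.5 * (1 + np.tanh(x / w))
w_poly = penumbra_width(x, edge(3.0))   # edge measured in polystyrene
w_lung = penumbra_width(x, edge(6.0))   # broader edge measured in lung
CF = w_lung / w_poly                    # Correction Factor as defined above

# Linear fit of CF against the Quality Index, using the two anchor values
# quoted in the abstract (QI 0.620 -> CF 1.1, QI 0.794 -> CF 2.28).
qi = np.array([0.620, 0.794])
cf = np.array([1.10, 2.28])
slope, intercept = np.polyfit(qi, cf, 1)
```

    With real film data, a user would compute CF from measured lung and polystyrene profiles and compare it against the value the linear QI relation predicts for their beam.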

  9. Design and simulation of MEMS-actuated adjustable optical wedge for laser beam scanners

    NASA Astrophysics Data System (ADS)

    Bahgat, Ahmed S.; Zaki, Ahmed H.; Abdo Mohamed, Mohamed; El Sherif, Ashraf Fathy

    2018-01-01

    This paper introduces the optical and mechanical design and simulation of a large-static-deflection MOEMS actuator. The designed device takes the form of an adjustable optical wedge (AOW) laser scanner. The AOW is formed of a 1.5-mm-diameter plano-convex lens separated by an air gap from a fixed plano-concave lens. The convex lens is actuated by a staggered vertical comb drive and suspended by torsion beams of rectangular cross section. An optical analysis and simulation of the air-separated AOW, as well as a detailed design, analysis, and static simulation of the comb drive, are introduced. The dynamic step response of the full system is also presented. The analytical solution showed good agreement with the simulation results. A general global-minimum optimization algorithm is applied to the comb-drive design to minimize the driving voltage. A maximum comb-drive mechanical deflection angle of 12 deg in each direction was obtained under a DC actuation voltage of 32 V with a settling time of 90 ms, leading to 1-mm one-dimensional (1-D) steering of the laser beam with a continuous optical scan angle of 5 deg in each direction. This optimization process yielded a larger-deflection actuator with a smaller driving voltage compared with other conventional devices. This enhancement could lead to better performance of MOEMS-based laser beam scanners for imaging and low-speed applications.
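    A global-minimum search over comb-drive geometry can be sketched as an exhaustive grid search. The voltage model below is a crude stand-in (simplified torsion-stiffness and comb-capacitance formulas with illustrative dimensions), not the paper's device model, so its numbers will not reproduce the reported 32 V.

```python
import numpy as np

G = 50e9          # shear modulus of silicon [Pa] (approximate)
EPS0 = 8.854e-12  # vacuum permittivity [F/m]
THETA = np.deg2rad(12)  # target mechanical deflection

def drive_voltage(L, w, N, g, t=30e-6, r=500e-6):
    """Stand-in actuation-voltage model: V = sqrt(2*k*theta / (dC/dtheta)).
    k     : torsional stiffness of two rectangular beams (crude formula)
    dCdth : capacitance gradient of N comb fingers at radius r, gap g
    All geometry values are illustrative, not the paper's design."""
    k = 2 * G * (w * t**3 / 3.0) / L          # crude rectangular-beam stiffness
    dCdth = 2 * N * EPS0 * r * t / g          # crude comb-capacitance gradient
    return np.sqrt(2 * k * THETA / dCdth)

# Exhaustive grid search (a simple global-minimum strategy) over the design
# space, with bounds standing in for manufacturability constraints.
Ls = np.linspace(200e-6, 1000e-6, 30)   # torsion-beam length
ws = np.linspace(2e-6, 6e-6, 30)        # torsion-beam width
Ns = np.arange(20, 201, 20)             # number of comb fingers
gs = np.linspace(2e-6, 5e-6, 10)        # finger gap
best = min(((drive_voltage(L, w, N, g), (L, w, N, g))
            for L in Ls for w in ws for N in Ns for g in gs),
           key=lambda p: p[0])
V_min, design = best
```

    A grid search is the bluntest global-minimum strategy; with a smooth model like this one, the optimum predictably sits at the compliant corner of the design space (longest, thinnest beams; most fingers; smallest gap).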

  10. Differential polarization nonlinear optical microscopy with adaptive optics controlled multiplexed beams.

    PubMed

    Samim, Masood; Sandkuijl, Daaf; Tretyakov, Ian; Cisek, Richard; Barzda, Virginijus

    2013-09-09

    Differential polarization nonlinear optical microscopy has the potential to become an indispensable tool for structural investigations of ordered biological assemblies and microcrystalline aggregates. Their microscopic organization can be probed through fast and sensitive measurements of nonlinear optical signal anisotropy, which can be achieved with microscopic spatial resolution by using time-multiplexed pulsed laser beams with perpendicular polarization orientations and photon-counting detection electronics for signal demultiplexing. In addition, deformable membrane mirrors can be used to correct for optical aberrations in the microscope and simultaneously optimize beam overlap using a genetic algorithm. The beam overlap can be achieved with accuracy better than the diffraction-limited point-spread function, which allows polarization-resolved measurements to be performed on a pixel-by-pixel basis. We describe a newly developed differential polarization microscope and present applications of the differential microscopy technique for structural studies of collagen and cellulose. Both second harmonic generation and fluorescence-detected nonlinear absorption anisotropy are used in these investigations. It is shown that the orientation and structural properties of the fibers in biological tissue can be deduced, and that the orientation of fluorescent molecules (Congo Red) labeling the fibers can be determined. Differential polarization microscopy sidesteps common issues such as photobleaching and sample movement. Because the polarization of the excitation pulses alternates at tens of megahertz, fast data acquisition can be conveniently applied to measure changes in the nonlinear signal anisotropy of dynamically changing in vivo structures.
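    The per-pixel anisotropy computed from the demultiplexed photon counts can be sketched as below. The ratio definition r = (I_par - I_perp)/(I_par + I_perp) is the standard polarization-anisotropy form, used here as an illustrative stand-in for the paper's exact metric; the counts are toy data.

```python
import numpy as np

def shg_anisotropy(counts_par, counts_perp):
    """Polarization anisotropy r = (I_par - I_perp) / (I_par + I_perp)
    computed per pixel from photon counts demultiplexed into the two
    orthogonal excitation-polarization channels."""
    i_par = counts_par.astype(float)
    i_perp = counts_perp.astype(float)
    total = i_par + i_perp
    r = np.zeros_like(total)
    # leave r = 0 where no photons were detected to avoid division by zero
    np.divide(i_par - i_perp, total, out=r, where=total > 0)
    return r

# Toy 2x2 image: counts per pixel in the two time-multiplexed channels.
par = np.array([[100, 50], [0, 30]])
perp = np.array([[50, 50], [0, 10]])
r = shg_anisotropy(par, perp)
```

    Because the two channels are interleaved at tens of megahertz, both count images see the same slow sample drift, which is why the ratio is robust to movement and photobleaching.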

  11. Methods for coherent lensless imaging and X-ray wavefront measurements

    NASA Astrophysics Data System (ADS)

    Guizar Sicairos, Manuel

    X-ray diffractive imaging is set apart from other high-resolution imaging techniques (e.g. scanning electron or atomic force microscopy) by its high penetration depth, which enables tomographic 3D imaging of thick samples and buried structures. Furthermore, short x-ray pulses make it possible to take ultrafast snapshots, giving a unique opportunity to probe nanoscale dynamics at femtosecond time scales. In this thesis we present improvements to phase retrieval algorithms, assess their performance through numerical simulations, and develop new methods for both imaging and wavefront measurement. Building on the original work by Faulkner and Rodenburg, we developed an improved reconstruction algorithm for phase retrieval with transverse translations of the object relative to the illumination beam. Based on gradient-based nonlinear optimization, this algorithm estimates the object while simultaneously refining the initial knowledge of the incident illumination and the object translations. The advantages of this algorithm over the original iterative transform approach are shown through numerical simulations. Phase retrieval has already shown substantial success in wavefront sensing at optical wavelengths. Although in principle the algorithms can be used at any wavelength, in practice the focus-diversity mechanism that makes optical phase retrieval robust is not practical to implement for x-rays. In this thesis we also describe the novel application of phase retrieval with transverse translations to the problem of x-ray wavefront sensing. This approach allows the characterization of the complex-valued x-ray field in situ and at wavelength, and has several practical and algorithmic advantages over conventional focused-beam measurement techniques. 
A few of these advantages include improved robustness through diverse measurements, reconstruction from far-field intensity measurements only, and significant relaxation of experimental requirements over other beam characterization approaches. Furthermore, we show that a one-dimensional version of this technique can be used to characterize an x-ray line focus produced by a cylindrical focusing element. We provide experimental demonstrations of the latter at hard x-ray wavelengths, where we have characterized the beams focused by a kinoform lens and an elliptical mirror. In both experiments the reconstructions exhibited good agreement with independent measurements, and in the latter a small mirror misalignment was inferred from the phase retrieval reconstruction. These experiments pave the way for the application of robust phase retrieval algorithms for in-situ alignment and performance characterization of x-ray optics for nanofocusing. We also present a study on how transverse translations help with the well-known uniqueness problem of one-dimensional phase retrieval. We also present a novel method for x-ray holography that is capable of reconstructing an image using an off-axis extended reference in a non-iterative computation, greatly generalizing an earlier approach by Podorov et al. The approach, based on the numerical application of derivatives on the field autocorrelation, was developed from first mathematical principles. We conducted a thorough theoretical study to develop technical and intuitive understanding of this technique and derived sufficient separation conditions required for an artifact-free reconstruction. We studied the effects of missing information in the Fourier domain, and of an imperfect reference, and we provide a signal-to-noise ratio comparison with the more traditional approach of Fourier transform holography. 
We demonstrated this new holographic approach through proof-of-principle optical experiments and later experimentally at soft x-ray wavelengths, where we compared its performance to Fourier transform holography, iterative phase retrieval and state-of-the-art zone-plate x-ray imaging techniques (scanning and full-field). Finally, we present a demonstration of the technique using a single 20 fs pulse from a high-harmonic table-top source. Holography with an extended reference is shown to provide fast, good quality images that are robust to noise and artifacts that arise from missing information due to a beam stop. (Abstract shortened by UMI.)
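    The thesis builds on iterative transform phase retrieval; a textbook error-reduction loop (alternating Fourier-modulus and support/positivity projections) conveys the basic idea, though it is far simpler than the translation-diverse algorithms developed in the thesis. The toy object and support below are invented for the check.

```python
import numpy as np

def error_reduction(fourier_mag, support, n_iter=200, seed=0):
    """Minimal error-reduction phase-retrieval loop: alternate between
    enforcing the measured Fourier modulus and a known object support."""
    rng = np.random.default_rng(seed)
    g = support * rng.random(support.shape)          # random start in support
    for _ in range(n_iter):
        G = np.fft.fft2(g)
        G = fourier_mag * np.exp(1j * np.angle(G))   # Fourier-modulus constraint
        g = np.fft.ifft2(G).real
        g = np.where(support & (g > 0), g, 0.0)      # support + positivity
    return g

# Toy object: a small bright block inside a loose rectangular support.
obj = np.zeros((32, 32))
obj[12:18, 10:16] = 1.0
support = np.zeros((32, 32), dtype=bool)
support[8:22, 6:20] = True
mag = np.abs(np.fft.fft2(obj))        # "measured" far-field modulus
rec = error_reduction(mag, support)
```

    Error reduction is known to stagnate on hard problems; the translation diversity exploited in the thesis is one way to restore uniqueness and robustness, particularly in the one-dimensional case discussed above.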

  12. Dosimetric evaluation of a commercial proton spot scanning Monte-Carlo dose algorithm: comparisons against measurements and simulations

    NASA Astrophysics Data System (ADS)

    Saini, Jatinder; Maes, Dominic; Egan, Alexander; Bowen, Stephen R.; St. James, Sara; Janson, Martin; Wong, Tony; Bloch, Charles

    2017-10-01

    RaySearch Americas Inc. (NY) has introduced a commercial Monte Carlo dose algorithm (RS-MC) for routine clinical use in proton spot scanning. In this report, we provide a validation of this algorithm against phantom measurements and simulations in the GATE software package. We also compared the performance of the RayStation analytical algorithm (RS-PBA) against the RS-MC algorithm. A beam model (G-MC) for a spot scanning gantry at our proton center was implemented in the GATE software package. The model was validated against measurements in a water phantom and was used for benchmarking the RS-MC. Validation of the RS-MC was performed in a water phantom by measuring depth doses and profiles for three spread-out Bragg peak (SOBP) beams with normal incidence, an SOBP with oblique incidence, and an SOBP with a range shifter and large air gap. The RS-MC was also validated against measurements and simulations in heterogeneous phantoms created by placing lung or bone slabs in a water phantom. Lateral dose profiles near the distal end of the beam were measured with a microDiamond detector and compared to the G-MC simulations, RS-MC, and RS-PBA. Finally, the RS-MC and RS-PBA were validated against measured dose distributions in an Alderson-Rando (AR) phantom. Measurements were made using Gafchromic film in the AR phantom and compared to doses using the RS-PBA and RS-MC algorithms. For SOBP depth doses in a water phantom, all three algorithms matched the measurements to within ±3% at all points, with range agreement within 1 mm. The RS-PBA algorithm showed up to a 10% dose difference at the entrance for the beam with a range shifter and >30 cm air gap, while the RS-MC and G-MC were always within 3% of the measurement. For an oblique beam incident at 45°, the RS-PBA algorithm showed up to 6% local dose differences and a broadening of the distal fall-off by 5 mm. Both the RS-MC and G-MC accurately predicted the depth dose to within ±3% and the distal fall-off to within 2 mm. 
In an anthropomorphic phantom, the gamma index (dose tolerance = 3%, distance-to-agreement = 3 mm) was greater than 90% for six out of seven planes using the RS-MC, and three out of seven for the RS-PBA. The RS-MC algorithm demonstrated improved dosimetric accuracy over the RS-PBA in homogeneous, heterogeneous, and anthropomorphic phantoms. The computational performance of the RS-MC was similar to that of the RS-PBA algorithm. For complex disease sites like breast, head and neck, and lung cancer, the RS-MC algorithm will provide significantly more accurate treatment planning.
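    The 3%/3 mm gamma analysis used above can be sketched with a brute-force 2D implementation. This is a generic illustration of the gamma-index definition, not the authors' evaluation code; the Hanning-window "dose distributions" are toy data, and edge wrap-around from np.roll is tolerated because the toy doses vanish at the borders.

```python
import numpy as np

def gamma_pass_rate(dose_eval, dose_ref, pixel_mm, dta_mm=3.0, dd_frac=0.03):
    """Global 2D gamma analysis (3%/3 mm by default), brute force.
    Returns the fraction of reference points with gamma <= 1."""
    dmax = dose_ref.max()
    gamma = np.full(dose_ref.shape, np.inf)
    # points farther away than the DTA cannot yield gamma <= 1,
    # so a window of +/- ceil(DTA) pixels is enough for the pass rate
    r = int(np.ceil(dta_mm / pixel_mm))
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            shifted = np.roll(np.roll(dose_eval, dy, axis=0), dx, axis=1)
            dist2 = (dy**2 + dx**2) * pixel_mm**2 / dta_mm**2
            dd2 = ((shifted - dose_ref) / (dd_frac * dmax)) ** 2
            gamma = np.minimum(gamma, np.sqrt(dist2 + dd2))
    return float(np.mean(gamma <= 1.0))

# Identical distributions must pass everywhere; a 20% global overdose
# must fail at least on the flat peak, where no DTA shift can compensate.
ref = np.outer(np.hanning(32), np.hanning(32))
rate_same = gamma_pass_rate(ref, ref, pixel_mm=1.0)
rate_off = gamma_pass_rate(ref * 1.2, ref, pixel_mm=1.0)
```

    A clinical implementation would interpolate between pixels and restrict the analysis to a low-dose threshold region, but the pass-rate logic is the same.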

  13. Dosimetric evaluation of a commercial proton spot scanning Monte-Carlo dose algorithm: comparisons against measurements and simulations.

    PubMed

    Saini, Jatinder; Maes, Dominic; Egan, Alexander; Bowen, Stephen R; St James, Sara; Janson, Martin; Wong, Tony; Bloch, Charles

    2017-09-12

    RaySearch Americas Inc. (NY) has introduced a commercial Monte Carlo dose algorithm (RS-MC) for routine clinical use in proton spot scanning. In this report, we provide a validation of this algorithm against phantom measurements and simulations in the GATE software package. We also compared the performance of the RayStation analytical algorithm (RS-PBA) against the RS-MC algorithm. A beam model (G-MC) for a spot scanning gantry at our proton center was implemented in the GATE software package. The model was validated against measurements in a water phantom and was used for benchmarking the RS-MC. Validation of the RS-MC was performed in a water phantom by measuring depth doses and profiles for three spread-out Bragg peak (SOBP) beams with normal incidence, an SOBP with oblique incidence, and an SOBP with a range shifter and large air gap. The RS-MC was also validated against measurements and simulations in heterogeneous phantoms created by placing lung or bone slabs in a water phantom. Lateral dose profiles near the distal end of the beam were measured with a microDiamond detector and compared to the G-MC simulations, RS-MC, and RS-PBA. Finally, the RS-MC and RS-PBA were validated against measured dose distributions in an Alderson-Rando (AR) phantom. Measurements were made using Gafchromic film in the AR phantom and compared to doses using the RS-PBA and RS-MC algorithms. For SOBP depth doses in a water phantom, all three algorithms matched the measurements to within ±3% at all points, with range agreement within 1 mm. The RS-PBA algorithm showed up to a 10% dose difference at the entrance for the beam with a range shifter and >30 cm air gap, while the RS-MC and G-MC were always within 3% of the measurement. For an oblique beam incident at 45°, the RS-PBA algorithm showed up to 6% local dose differences and a broadening of the distal fall-off by 5 mm. Both the RS-MC and G-MC accurately predicted the depth dose to within ±3% and the distal fall-off to within 2 mm. 
In an anthropomorphic phantom, the gamma index (dose tolerance = 3%, distance-to-agreement = 3 mm) was greater than 90% for six out of seven planes using the RS-MC, and three out of seven for the RS-PBA. The RS-MC algorithm demonstrated improved dosimetric accuracy over the RS-PBA in homogeneous, heterogeneous, and anthropomorphic phantoms. The computational performance of the RS-MC was similar to that of the RS-PBA algorithm. For complex disease sites like breast, head and neck, and lung cancer, the RS-MC algorithm will provide significantly more accurate treatment planning.

  14. Fully automatic segmentation of arbitrarily shaped fiducial markers in cone-beam CT projections

    NASA Astrophysics Data System (ADS)

    Bertholet, J.; Wan, H.; Toftegaard, J.; Schmidt, M. L.; Chotard, F.; Parikh, P. J.; Poulsen, P. R.

    2017-02-01

    Radio-opaque fiducial markers of different shapes are often implanted in or near abdominal or thoracic tumors to act as surrogates for the tumor position during radiotherapy. They can be used for real-time treatment adaptation, but this requires a robust, automatic segmentation method able to handle arbitrarily shaped markers in a rotational imaging geometry such as cone-beam computed tomography (CBCT) projection images and intra-treatment images. In this study, we propose a fully automatic dynamic programming (DP) assisted template-based (TB) segmentation method. Based on an initial DP segmentation, the DPTB algorithm generates and uses a 3D marker model to create 2D templates at any projection angle. The 2D templates are used to segment the marker position as the position with the highest normalized cross-correlation in a search area centered at the DP-segmented position. The accuracy of the DP algorithm and the new DPTB algorithm was quantified as the 2D segmentation error (pixels) compared to a manual ground truth segmentation for 97 markers in the projection images of CBCT scans of 40 patients. The fraction of wrong segmentations, defined as 2D errors larger than 5 pixels, was also calculated. The mean 2D segmentation error of DP was reduced from 4.1 pixels to 3.0 pixels by DPTB, while the fraction of wrong segmentations was reduced from 17.4% to 6.8%. DPTB allowed rejection of uncertain segmentations, as indicated by a low normalized cross-correlation coefficient and contrast-to-noise ratio. For a rejection rate of 9.97%, the sensitivity in detecting wrong segmentations was 67% and the specificity was 94%. The accepted segmentations had a mean segmentation error of 1.8 pixels and 2.5% wrong segmentations.
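    The template-matching step (best normalized cross-correlation within a search area) can be sketched with a brute-force implementation. This is a generic NCC matcher, not the DPTB code; the cross-shaped "marker" and noise level are invented for the check.

```python
import numpy as np

def ncc_match(image, template):
    """Return the (row, col) of the best normalized cross-correlation
    match of `template` in `image`, plus the NCC coefficient there."""
    th, tw = template.shape
    t = template - template.mean()
    tnorm = np.sqrt((t**2).sum())
    best, best_pos = -2.0, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            w = image[r:r+th, c:c+tw]
            wz = w - w.mean()
            denom = np.sqrt((wz**2).sum()) * tnorm
            if denom == 0:
                continue  # flat window: NCC undefined
            ncc = (wz * t).sum() / denom
            if ncc > best:
                best, best_pos = ncc, (r, c)
    return best_pos, best

# Toy marker: a bright cross embedded in noise at center (20, 12).
rng = np.random.default_rng(1)
img = rng.normal(0, 0.1, (40, 40))
img[20, 10:15] += 1.0
img[18:23, 12] += 1.0
tmpl = np.zeros((5, 5))
tmpl[2, :] = 1.0
tmpl[:, 2] = 1.0
pos, coeff = ncc_match(img, tmpl)
```

    In DPTB, the search area is centered on the DP-segmented position, and a low best coefficient is one of the cues used to reject an uncertain segmentation.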

  15. Optimization of chiral lattice based metastructures for broadband vibration suppression using genetic algorithms

    NASA Astrophysics Data System (ADS)

    Abdeljaber, Osama; Avci, Onur; Inman, Daniel J.

    2016-05-01

    One of the major challenges in civil, mechanical, and aerospace engineering is to develop vibration suppression systems with high efficiency and low cost. Recent studies have shown that high damping performance at broadband frequencies can be achieved by incorporating periodic inserts with tunable dynamic properties as internal resonators in structural systems. Structures featuring these kinds of inserts are referred to as metamaterial-inspired structures or metastructures. Chiral lattice inserts exhibit unique characteristics such as frequency bandgaps which can be tuned by varying the parameters that define the lattice topology. Recent analytical and experimental investigations have shown that broadband vibration attenuation can be achieved by including chiral lattices as internal resonators in beam-like structures. However, these studies have suggested that the performance of chiral lattice inserts can be maximized by utilizing an efficient optimization technique to obtain the optimal topology of the inserted lattice. In this study, an automated optimization procedure based on a genetic algorithm is applied to obtain the optimal set of parameters that will result in chiral lattice inserts tuned properly to reduce the global vibration levels of a finite-sized beam. Genetic algorithms are considered in this study because of their ability to handle complex and insufficiently understood optimization problems. In the optimization process, the basic parameters that govern the geometry of periodic chiral lattices, including the number of circular nodes, the thickness of the ligaments, and the characteristic angle, are considered. Additionally, a new set of parameters is introduced to enable the optimization process to explore non-periodic chiral designs. Numerical simulations are carried out to demonstrate the efficiency of the optimization process.
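    The genetic-algorithm machinery (selection, crossover, mutation, elitism) can be sketched as below. The objective is a standard multimodal test function standing in for the beam's global vibration level; the population size, mutation rate, and bounds are all illustrative, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(42)

def fitness(x):
    """Stand-in objective: in the paper this would be the beam's global
    vibration level as a function of the lattice parameters x; here the
    multimodal Rastrigin function plays that role."""
    return np.sum(x**2 - 10 * np.cos(2 * np.pi * x) + 10, axis=-1)

def genetic_minimize(n_dim=3, pop=60, gens=120, bounds=(-5.12, 5.12)):
    lo, hi = bounds
    P = rng.uniform(lo, hi, (pop, n_dim))
    for _ in range(gens):
        f = fitness(P)
        # tournament selection: the fitter of two random individuals survives
        i, j = rng.integers(0, pop, (2, pop))
        parents = np.where((f[i] < f[j])[:, None], P[i], P[j])
        # uniform crossover between paired parents
        mask = rng.random((pop, n_dim)) < 0.5
        children = np.where(mask, parents, parents[::-1])
        # Gaussian mutation on ~20% of genes
        children += rng.normal(0, 0.1, children.shape) * (rng.random((pop, n_dim)) < 0.2)
        # elitism: carry the best individual forward unchanged
        children[0] = P[np.argmin(f)]
        P = np.clip(children, lo, hi)
    f = fitness(P)
    return P[np.argmin(f)], f.min()

x_best, f_best = genetic_minimize()
```

    In the paper's setting, each chromosome would encode the lattice parameters (number of nodes, ligament thickness, characteristic angle, and the non-periodic extensions), and each fitness evaluation would run a vibration simulation of the beam with that insert.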

  16. Control algorithms for dynamic attenuators.

    PubMed

    Hsieh, Scott S; Pelc, Norbert J

    2014-06-01

    The authors describe algorithms to control dynamic attenuators in CT and compare their performance using simulated scans. Dynamic attenuators are prepatient beam shaping filters that modulate the distribution of x-ray fluence incident on the patient on a view-by-view basis. These attenuators can reduce dose while improving key image quality metrics such as peak or mean variance. In each view, the attenuator presents several degrees of freedom which may be individually adjusted. The total number of degrees of freedom across all views is very large, making many optimization techniques impractical. The authors develop a theory for optimally controlling these attenuators. Special attention is paid to a theoretically perfect attenuator which controls the fluence for each ray individually, but the authors also investigate and compare three other, practical attenuator designs which have been previously proposed: the piecewise-linear attenuator, the translating attenuator, and the double wedge attenuator. The authors pose and solve the optimization problems of minimizing the mean and peak variance subject to a fixed dose limit. For a perfect attenuator and mean variance minimization, this problem can be solved in simple, closed form. For other attenuator designs, the problem can be decomposed into separate problems for each view to greatly reduce the computational complexity. Peak variance minimization can be approximately solved using iterated, weighted mean variance (WMV) minimization. The authors also develop heuristics for the perfect and piecewise-linear attenuators which do not require a priori knowledge of the patient anatomy. The authors compare these control algorithms on different types of dynamic attenuators using simulated raw data from forward projected DICOM files of a thorax and an abdomen. 
The translating and double wedge attenuators reduce dose by an average of 30% relative to current techniques (bowtie filter with tube current modulation) without increasing peak variance. The 15-element piecewise-linear dynamic attenuator reduces dose by an average of 42%, and the perfect attenuator reduces dose by an average of 50%. Improvements in peak variance are several times larger than improvements in mean variance. Heuristic control eliminates the need for a prescan. For the piecewise-linear attenuator, the cost of heuristic control is an increase in dose of 9%. The proposed iterated WMV minimization produces results that are within a few percent of the true solution. Dynamic attenuators show potential for significant dose reduction. A wide class of dynamic attenuators can be accurately controlled using the described methods.
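    The closed-form WMV solution and the iterated reweighting for peak variance can be sketched with a schematic model. The per-ray variance model (a_i/n_i) and dose cost are simplified stand-ins, not the paper's formulation; the Lagrange condition for this model gives fluence n_i proportional to sqrt(w_i a_i / c_i).

```python
import numpy as np

def wmv_fluence(a, c, dose_budget, w):
    """Closed-form weighted-mean-variance minimizer for a 'perfect'
    attenuator: per-ray variance a_i/n_i, per-ray dose cost c_i*n_i,
    total dose fixed. Lagrange conditions give n_i ~ sqrt(w_i*a_i/c_i)."""
    s = np.sqrt(w * a / c)
    return dose_budget * s / np.sum(c * s)

def minimize_peak_variance(a, c, dose_budget, n_iter=50):
    """Iterated WMV: repeatedly boost the weights of the currently worst
    rays so the peak (not just the mean) variance is driven down."""
    w = np.ones_like(a)
    for _ in range(n_iter):
        n = wmv_fluence(a, c, dose_budget, w)
        v = a / n
        w *= v / v.max() + 1e-12   # larger weight where variance is high
        w /= w.sum()
    return n, v

# Toy attenuation line integrals: variance coefficient grows with attenuation.
atten = np.array([1.0, 2.0, 4.0, 8.0])
a = np.exp(atten)        # noisier rays through thicker material
c = np.ones(4)           # equal dose cost per unit fluence (illustrative)
n_eq = np.full(4, 25.0)  # uniform fluence with the same total dose (100)
n_opt, v_opt = minimize_peak_variance(a, c, 100.0)
```

    For this model the iteration converges to equal per-ray variances, which is exactly the minimax (peak-variance) solution; the uniform-fluence baseline has a far worse peak.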

  17. Beam dynamics in MABE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Poukey, J.W.; Coleman, P.D.; Sanford, T.W.L.

    1985-10-01

    MABE is a multistage linear electron accelerator which accelerates up to nine beams in parallel. Nominal parameters per beam are 25 kA, final energy 7 MeV, and guide field 20 kG. We report recent progress via theory and simulation in understanding the beam dynamics in such a system. In particular, we emphasize our results on the radial oscillations and emittance growth for a beam passing through a series of accelerating gaps.

  18. A novel imaging technique for measuring kinematics of light-weight flexible structures.

    PubMed

    Zakaria, Mohamed Y; Eliethy, Ahmed S; Canfield, Robert A; Hajj, Muhammad R

    2016-07-01

    A new imaging algorithm is proposed to capture the kinematics of flexible, thin, light structures, including frequencies and motion amplitudes, for real-time analysis. The studied case is a thin flexible beam that is preset at different angles of attack in a wind tunnel. As the angle of attack is increased beyond a critical value, the beam is observed to undergo a static deflection that is followed by limit cycle oscillations. Imaging analysis of the beam vibrations shows that the motion consists of a superposition of the bending and torsion modes. The proposed algorithm was able to capture the oscillation amplitudes as well as the frequencies of both bending and torsion modes. The analysis results are validated through comparison with measurements from a piezoelectric sensor attached to the beam at its root.
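    Extracting the bending and torsion frequencies from a tracked displacement time series can be sketched with simple spectral peak picking. This is a generic illustration, not the authors' algorithm; the two-sinusoid trace and its frequencies are synthetic.

```python
import numpy as np

def dominant_frequencies(signal, fs, n_peaks=2):
    """Return the n_peaks strongest spectral frequencies of a displacement
    time series, e.g. to separate bending and torsion mode frequencies."""
    sig = signal - signal.mean()
    spec = np.abs(np.fft.rfft(sig * np.hanning(len(sig))))
    freqs = np.fft.rfftfreq(len(sig), 1 / fs)
    # simple peak picking: local maxima ranked by magnitude
    peaks = [k for k in range(1, len(spec) - 1)
             if spec[k] > spec[k - 1] and spec[k] > spec[k + 1]]
    peaks.sort(key=lambda k: spec[k], reverse=True)
    return sorted(freqs[k] for k in peaks[:n_peaks])

# Synthetic beam-tip trace: 4 Hz bending plus a weaker 11 Hz torsion component.
fs = 200.0
t = np.arange(0, 10, 1 / fs)
trace = 1.0 * np.sin(2 * np.pi * 4 * t) + 0.4 * np.sin(2 * np.pi * 11 * t)
f_bend, f_tors = dominant_frequencies(trace, fs)
```

    With real video data, the trace would come from tracked marker displacements, and separate traces (e.g. tip versus edge points) would isolate the bending and torsion contributions before the spectral analysis.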

  19. A novel imaging technique for measuring kinematics of light-weight flexible structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zakaria, Mohamed Y., E-mail: zakaria@vt.edu; Eliethy, Ahmed S.; Canfield, Robert A.

    2016-07-15

    A new imaging algorithm is proposed to capture the kinematics of flexible, thin, light structures, including frequencies and motion amplitudes, for real-time analysis. The studied case is a thin flexible beam that is preset at different angles of attack in a wind tunnel. As the angle of attack is increased beyond a critical value, the beam is observed to undergo a static deflection that is followed by limit cycle oscillations. Imaging analysis of the beam vibrations shows that the motion consists of a superposition of the bending and torsion modes. The proposed algorithm was able to capture the oscillation amplitudes as well as the frequencies of both bending and torsion modes. The analysis results are validated through comparison with measurements from a piezoelectric sensor attached to the beam at its root.

  20. Evaluation of an empirical monitor output estimation in carbon ion radiotherapy.

    PubMed

    Matsumura, Akihiko; Yusa, Ken; Kanai, Tatsuaki; Mizota, Manabu; Ohno, Tatsuya; Nakano, Takashi

    2015-09-01

    A conventional broad-beam method is applied in carbon ion radiotherapy at Gunma University Heavy Ion Medical Center. In this method, accelerated carbon ions are scattered by various beam line devices to form the 3D dose distribution. The physical dose per monitor unit (d/MU) at the isocenter therefore depends on the beam line parameters and should be calibrated by measurement in clinical practice. This study aims to develop a calculation algorithm for d/MU using beam line parameters. Two major factors, the range shifter dependence and the field aperture effect, are measured with a PinPoint chamber in a water phantom, in a setup identical to that used for monitor calibration in clinical practice. An empirical monitor calibration method based on the measurement results is developed using a simple algorithm that expresses the range shifter dependence with a linear function and the field aperture effect with a double Gaussian pencil beam distribution. The range shifter dependence and the field aperture effect are evaluated to have errors of 0.2% and 0.5%, respectively. The proposed method estimates d/MU to within 1% of the measurement results. Taking the measurement deviation of about 0.3% into account, this result is sufficiently accurate for clinical applications. An empirical procedure to estimate d/MU with a simple algorithm is thus established, freeing beam time for additional treatments, quality assurance, and other research endeavors.
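    The structure of the empirical model (a linear range-shifter term times a field-aperture factor from a double Gaussian kernel) can be sketched as below. All parameter values (weights, sigmas, calibration constants) are hypothetical placeholders, not the paper's fitted values.

```python
import numpy as np

def aperture_factor(radius_mm, w=0.6, sigma1=4.0, sigma2=30.0):
    """Fraction of a double-Gaussian pencil-beam kernel integrated over a
    circular field aperture of the given radius (illustrative parameters).
    For a 2D Gaussian, the integral over a disc is 1 - exp(-r^2/(2*s^2))."""
    g = lambda s: 1.0 - np.exp(-radius_mm**2 / (2 * s**2))
    return w * g(sigma1) + (1 - w) * g(sigma2)

def d_per_mu(range_shifter_mm, radius_mm, d0=1.0, slope=-0.002):
    """Empirical d/MU model with the abstract's two factors: a linear
    range-shifter dependence times the field-aperture factor.
    d0 and slope are hypothetical calibration constants."""
    return d0 * (1 + slope * range_shifter_mm) * aperture_factor(radius_mm)

# Small fields lose a large share of the wide (scatter) Gaussian component.
small = d_per_mu(range_shifter_mm=20, radius_mm=10)
large = d_per_mu(range_shifter_mm=20, radius_mm=100)
```

    In practice the constants would be fitted to the PinPoint-chamber measurements, and the model would replace a per-field calibration measurement once its residuals stay below the ~1% level reported above.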

  1. Technical Note: A direct ray-tracing method to compute integral depth dose in pencil beam proton radiography with a multilayer ionization chamber.

    PubMed

    Farace, Paolo; Righetto, Roberto; Deffet, Sylvain; Meijers, Arturs; Vander Stappen, Francois

    2016-12-01

    To introduce a fast ray-tracing algorithm in pencil beam proton radiography (PR) with a multilayer ionization chamber (MLIC) for in vivo range error mapping. Pencil beam PR was obtained by delivering spots uniformly positioned in a square (45 × 45 mm² field of view) of 9 × 9 spots capable of crossing the phantoms (210 MeV). The exit beam was collected by a MLIC to sample the integral depth dose (IDD_MLIC). PRs of an electron-density and of a head phantom were acquired by moving the couch to obtain multiple 45 × 45 mm² frames. To map the corresponding range errors, the two-dimensional set of IDD_MLIC was compared with (i) the integral depth dose computed by the treatment planning system (TPS) by both analytic (IDD_TPS) and Monte Carlo (IDD_MC) algorithms in a volume of water simulating the MLIC at the CT, and (ii) the integral depth dose directly computed by a simple ray-tracing algorithm (IDD_direct) through the same CT data. The exact spatial position of the spot pattern was numerically adjusted by testing different in-plane positions and selecting the one that minimized the range differences between IDD_direct and IDD_MLIC. Range error mapping was feasible with both the TPS and the ray-tracing methods, but very sensitive to even small misalignments. In homogeneous regions, the range errors computed by the direct ray-tracing algorithm matched the results obtained by both the analytic and the Monte Carlo algorithms. In both phantoms, lateral heterogeneities were better modeled by the ray-tracing and Monte Carlo algorithms than by the analytic TPS computation. Accordingly, when the pencil beam crossed lateral heterogeneities, the range errors mapped by the direct algorithm matched the Monte Carlo maps better than those obtained by the analytic algorithm. Finally, the simplicity of the ray-tracing algorithm allowed us to implement a prototype procedure for automated spatial alignment. 
The ray-tracing algorithm can reliably replace the TPS method in MLIC PR for in vivo range verification, and it can be a key component of software tools for spatial alignment and correction of CT calibration.
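    The core of a direct ray-tracing computation, accumulating water-equivalent path length through a CT-derived grid of relative stopping powers, can be sketched as below. This fixed-step, nearest-voxel tracer is a simplification (a production tracer would use exact voxel crossings, e.g. Siddon's algorithm); the slab phantom, RSP values, and 1 mm isotropic voxels are illustrative.

```python
import numpy as np

def wepl_along_ray(rsp, start, direction, n_steps, step_mm=1.0):
    """Accumulate water-equivalent path length by stepping a ray through a
    grid of relative stopping powers (nearest-voxel lookup, fixed step,
    assuming 1 mm isotropic voxels so voxel indices equal millimeters)."""
    pos = np.array(start, dtype=float)
    d = np.array(direction, dtype=float)
    d /= np.linalg.norm(d)
    total = 0.0
    for _ in range(n_steps):
        i, j, k = np.round(pos).astype(int)
        inside = (0 <= i < rsp.shape[0] and
                  0 <= j < rsp.shape[1] and
                  0 <= k < rsp.shape[2])
        if inside:
            total += rsp[i, j, k] * step_mm
        pos += d * step_mm
    return total

# 100 mm water slab with a 20 mm bone insert (RSP 1.6), 1 mm voxels.
rsp = np.ones((100, 20, 20))
rsp[40:60, :, :] = 1.6
wepl = wepl_along_ray(rsp, start=(0, 10, 10), direction=(1, 0, 0), n_steps=100)
```

    Repeating this for each spot position yields the IDD_direct-style depth information against which the MLIC-measured curves can be compared, and shifting the ray grid in-plane reproduces the alignment search described above.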

  2. Investigation of propagation dynamics of truncated vector vortex beams.

    PubMed

    Srinivas, P; Perumangatt, C; Lal, Nijil; Singh, R P; Srinivasan, B

    2018-06-01

    In this Letter, we experimentally investigate the propagation dynamics of truncated vector vortex beams generated using a Sagnac interferometer. Upon focusing, the truncated vector vortex beam is found to regain its original intensity structure within the Rayleigh range. In order to explain this behavior, the propagation dynamics of a truncated vector vortex beam is simulated by decomposing it into a sum of integral-charge beams with associated complex weights. We also show that the polarization of the truncated composite vector vortex beam is preserved all along the propagation axis. The experimental observations are consistent with theoretical predictions based on previous literature and are in good agreement with our simulation results. These results are important because vector vortex modes are eigenmodes of optical fibers.
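    The decomposition of a truncated vortex field into integral-charge components with complex weights can be sketched with an azimuthal Fourier transform on a ring. This is a generic illustration, not the authors' simulation; the charge-2 toy beam and knife-edge truncation are invented, and scalar (not vector) fields are used for simplicity.

```python
import numpy as np

def oam_spectrum(field, x, y, r0, max_l=5):
    """Relative power of integer-charge components exp(i*l*phi), from an
    azimuthal Fourier transform of the field sampled on a ring of radius r0."""
    phi = np.linspace(0, 2 * np.pi, 360, endpoint=False)
    xr = r0 * np.cos(phi)
    yr = r0 * np.sin(phi)
    # nearest-sample lookup on the ring
    ix = np.clip(np.searchsorted(x, xr), 0, len(x) - 1)
    iy = np.clip(np.searchsorted(y, yr), 0, len(y) - 1)
    ring = field[iy, ix]
    ls = np.arange(-max_l, max_l + 1)
    w = np.array([np.mean(ring * np.exp(-1j * l * phi)) for l in ls])
    p = np.abs(w)**2
    return ls, p / p.sum()

# Charge-2 vortex; a knife edge truncates it, spreading power into
# neighboring integral charges.
n = 256
x = np.linspace(-3, 3, n)
y = np.linspace(-3, 3, n)
X, Y = np.meshgrid(x, y)
R = np.hypot(X, Y)
PHI = np.arctan2(Y, X)
beam = R**2 * np.exp(-R**2) * np.exp(1j * 2 * PHI)
truncated = beam * (X < 0.3)   # knife-edge truncation
ls, p_full = oam_spectrum(beam, x, y, r0=1.0)
_, p_trunc = oam_spectrum(truncated, x, y, r0=1.0)
```

    Propagating each integral-charge component analytically and re-summing with these complex weights is what lets the simulation reproduce the self-healing of the intensity structure within the Rayleigh range.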

  3. Reduction of the unnecessary dose from the over-range area with a spiral dynamic z-collimator: comparison of beam pitch and detector coverage with 128-detector row CT.

    PubMed

    Shirasaka, Takashi; Funama, Yoshinori; Hayashi, Mutsukazu; Awamoto, Shinichi; Kondo, Masatoshi; Nakamura, Yasuhiko; Hatakenaka, Masamitsu; Honda, Hiroshi

    2012-01-01

    Our purpose in this study was to assess the radiation dose reduction and the actual exposed scan length of over-range areas using a spiral dynamic z-collimator at different beam pitches and detector coverages. Using glass rod dosimeters, we measured the unilateral over-range scan dose between the beginning of the planned scan range and the beginning of the actual exposed scan range. Scanning was performed at detector coverages of 80.0 and 40.0 mm, with and without the spiral dynamic z-collimator. The dose-saving ratio was calculated as the ratio of the unnecessary over-range dose with and without the spiral dynamic z-collimator. With 80.0 mm detector coverage and without the spiral dynamic z-collimator, the actual exposed scan length for the over-range area was 108, 120, and 126 mm, corresponding to beam pitches of 0.60, 0.80, and 0.99, respectively. With the spiral dynamic z-collimator, the actual exposed scan length for the over-range area was 48, 66, and 84 mm at beam pitches of 0.60, 0.80, and 0.99, respectively. The dose-saving ratios with and without the spiral dynamic z-collimator for beam pitches of 0.60, 0.80, and 0.99 were 35.07, 24.76, and 13.51%, respectively. With 40.0 mm detector coverage, the dose-saving ratio reached its highest value, 27.23%, at the low beam pitch of 0.60. The spiral dynamic z-collimator is important for reducing the unnecessary over-range dose and makes it possible to reduce the unnecessary dose by means of a lower beam pitch.

  4. SU-E-T-252: Developing a Pencil Beam Dose Calculation Algorithm for CyberKnife System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liang, B; Duke University Medical Center, Durham, NC; Liu, B

    2015-06-15

    Purpose: Currently there are two dose calculation algorithms available in the CyberKnife planning system: ray-tracing and Monte Carlo, which are, respectively, not accurate enough and too time-consuming for irregular fields shaped by the recently introduced MLC. The purpose of this study is to develop a fast and accurate pencil beam dose calculation algorithm which can handle irregular fields. Methods: A pencil beam dose calculation algorithm widely used in linac systems was modified. The algorithm models both primary (short range) and scatter (long range) components with a single input parameter: TPR20/10. The TPR20/10 value was first estimated to derive an initial set of pencil beam model parameters (PBMP). The agreement between predicted and measured TPRs for all cones was evaluated using the root mean square of the difference (RMSTPR), which was then minimized by adjusting the PBMPs. The PBMPs were further tuned to minimize the OCR RMS (RMSocr), focusing on the out-of-field region. Finally, an arbitrary intensity profile was optimized by minimizing the RMSocr difference in the in-field region. To test model validity, the PBMPs were obtained by fitting to only a subset of cones (4) and applied to all cones (12) for evaluation. Results: With RMS values normalized to dmax and all cones combined, the average RMSTPR in the build-up and descending regions is 2.3% and 0.4%, respectively. The RMSocr in the in-field, penumbra, and out-of-field regions is 1.5%, 7.8%, and 0.6%, respectively. The average DTA in the penumbra region is 0.5 mm. No trend was found in TPR or OCR agreement among cones or depths. Conclusion: We have developed a pencil beam algorithm for the CyberKnife system. The prediction agrees well with commissioning data. Only a subset of measurements is needed to derive the model. Further improvements are needed in the TPR build-up region and the OCR penumbra. Experimental validation on MLC-shaped irregular fields needs to be performed.
This work was partially supported by the National Natural Science Foundation of China (61171005) and the China Scholarship Council (CSC).
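
    The fitting strategy described in the Methods (adjust model parameters to minimize the RMS difference between predicted and measured TPRs) can be illustrated with a minimal sketch. The exponential depth model, the parameter grid, and all numerical values below are hypothetical stand-ins, not the authors' pencil beam model:

```python
import math

def rms(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

def model_tpr(depths, mu, d_max=1.5):
    # toy depth-dose model: exponential falloff beyond build-up depth d_max
    return [math.exp(-mu * max(d - d_max, 0.0)) for d in depths]

def fit_mu(depths, measured, mu_grid):
    # choose the model parameter that minimizes RMS(TPR_model - TPR_measured)
    return min(mu_grid, key=lambda mu: rms(model_tpr(depths, mu), measured))

depths = [2.0, 5.0, 10.0, 20.0]              # cm
measured = model_tpr(depths, 0.05)           # synthetic "commissioning" TPRs
mu_grid = [i / 1000 for i in range(1, 101)]  # candidate attenuation values
best_mu = fit_mu(depths, measured, mu_grid)
```

    In the paper the search runs over the full PBMP set and both TPR and OCR objectives; the grid search here only shows the minimize-RMS loop.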

  5. Non-Contact Smartphone-Based Monitoring of Thermally Stressed Structures

    PubMed Central

    Ozturk, Turgut; Mas, David; Rizzo, Piervincenzo

    2018-01-01

    The in-situ measurement of thermal stress in beams or continuous welded rails may prevent structural anomalies such as buckling. This study proposed a non-contact monitoring/inspection approach based on the use of a smartphone and a computer vision algorithm to estimate the vibrating characteristics of beams subjected to thermal stress. It is hypothesized that the vibration of a beam can be captured using a smartphone operating at frame rates higher than conventional 30 Hz, and the first few natural frequencies of the beam can be extracted using a computer vision algorithm. In this study, the first mode of vibration was considered and compared to the information obtained with a conventional accelerometer attached to the two structures investigated, namely a thin beam and a thick beam. The results show excellent agreement between the conventional contact method and the non-contact sensing approach proposed here. In the future, these findings may be used to develop a monitoring/inspection smartphone application to assess the axial stress of slender structures, to predict the neutral temperature of continuous welded rails, or to prevent thermal buckling. PMID:29670034
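
    The core signal-processing step, extracting the first natural frequency from a displacement trace sampled at a high frame rate, can be sketched as follows. The zero-crossing estimator, the 240 fps rate, and the 12 Hz test signal are illustrative assumptions; the paper's computer vision algorithm is not reproduced here:

```python
import math

def first_natural_frequency(samples, fps):
    """Estimate the dominant (first-mode) frequency of a displacement
    trace via zero crossings of the mean-removed signal."""
    mean = sum(samples) / len(samples)
    x = [s - mean for s in samples]
    crossings = sum(1 for a, b in zip(x, x[1:]) if (a <= 0 < b) or (b <= 0 < a))
    duration = (len(samples) - 1) / fps
    return crossings / (2 * duration)     # two crossings per cycle

fps = 240.0     # assumed high-speed smartphone frame rate
f_true = 12.0   # hypothetical first-mode frequency of the beam, Hz
samples = [math.sin(2 * math.pi * f_true * n / fps + 0.3) for n in range(480)]
f_est = first_natural_frequency(samples, fps)
```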

  6. Non-Contact Smartphone-Based Monitoring of Thermally Stressed Structures.

    PubMed

    Sefa Orak, Mehmet; Nasrollahi, Amir; Ozturk, Turgut; Mas, David; Ferrer, Belen; Rizzo, Piervincenzo

    2018-04-18

    The in-situ measurement of thermal stress in beams or continuous welded rails may prevent structural anomalies such as buckling. This study proposed a non-contact monitoring/inspection approach based on the use of a smartphone and a computer vision algorithm to estimate the vibrating characteristics of beams subjected to thermal stress. It is hypothesized that the vibration of a beam can be captured using a smartphone operating at frame rates higher than conventional 30 Hz, and the first few natural frequencies of the beam can be extracted using a computer vision algorithm. In this study, the first mode of vibration was considered and compared to the information obtained with a conventional accelerometer attached to the two structures investigated, namely a thin beam and a thick beam. The results show excellent agreement between the conventional contact method and the non-contact sensing approach proposed here. In the future, these findings may be used to develop a monitoring/inspection smartphone application to assess the axial stress of slender structures, to predict the neutral temperature of continuous welded rails, or to prevent thermal buckling.

  7. Efficient storage, computation, and exposure of computer-generated holograms by electron-beam lithography.

    PubMed

    Newman, D M; Hawley, R W; Goeckel, D L; Crawford, R D; Abraham, S; Gallagher, N C

    1993-05-10

    An efficient storage format was developed for computer-generated holograms for use in electron-beam lithography. This method employs run-length encoding and Lempel-Ziv-Welch compression and succeeds in exposing holograms that were previously infeasible owing to the hologram's tremendous pattern-data file size. These holograms also require significant computation; thus the algorithm was implemented on a parallel computer, which improved performance by 2 orders of magnitude. The decompression algorithm was integrated into the Cambridge electron-beam machine's front-end processor. Although this provides much-needed capability, some hardware enhancements will be required in the future to overcome inadequacies in the current front-end processor that result in a lengthy exposure time.
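
    The run-length-encoding half of the storage format can be sketched in a few lines; the LZW stage that the abstract pairs with it is omitted. This toy encoder illustrates the general idea, not the authors' pattern-data format:

```python
def rle_encode(data: bytes) -> list:
    # run-length encode a byte stream as (count, value) pairs
    runs = []
    for b in data:
        if runs and runs[-1][1] == b:
            runs[-1] = (runs[-1][0] + 1, b)
        else:
            runs.append((1, b))
    return runs

def rle_decode(runs) -> bytes:
    return bytes(v for count, v in runs for _ in range(count))

# hologram-like pattern data: long constant runs compress very well
pattern = b"\x00" * 500 + b"\xff" * 20 + b"\x00" * 480
runs = rle_encode(pattern)
```

    Here 1000 bytes collapse to three (count, value) pairs, which is why mostly-empty exposure patterns benefit so much from this stage.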

  8. Technical Note: A fast online adaptive replanning method for VMAT using flattening filter free beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ates, Ozgur; Ahunbay, Ergun E.; Li, X. Allen, E-mail: ali@mcw.edu

    Purpose: To develop a fast replanning algorithm based on segment aperture morphing (SAM) for online replanning of volumetric modulated arc therapy (VMAT) with flattening filter free (FFF) beams. Methods: A software tool was developed to interface with a VMAT research planning system, which enables the input and output of beam and machine parameters of VMAT plans. The SAM algorithm was used to modify multileaf collimator positions for each segment aperture based on the changes of the target from the planning (CT/MR) to daily image [CT/CBCT/magnetic resonance imaging (MRI)]. The leaf travel distance was controlled for large shifts to prevent the increase of VMAT delivery time. The SAM algorithm was tested for 11 patient cases including prostate, pancreatic, and lung cancers. For each daily image set, three types of VMAT plans, image-guided radiation therapy (IGRT) repositioning, SAM adaptive, and full-scope reoptimization plans, were generated and compared. Results: The SAM adaptive plans were found to have improved the plan quality in target and/or critical organs when compared to the IGRT repositioning plans and were comparable to the reoptimization plans based on the data of planning target volume (PTV)-V100 (volume covered by 100% of prescription dose). For the cases studied, the average PTV-V100 was 98.85% ± 1.13%, 97.61% ± 1.45%, and 92.84% ± 1.61% with FFF beams for the reoptimization, SAM adaptive, and repositioning plans, respectively. The execution of the SAM algorithm takes less than 10 s using 16-CPU (2.6 GHz dual core) hardware. Conclusions: The SAM algorithm can generate adaptive VMAT plans using FFF beams with plan qualities comparable to those of the full-scope reoptimization plans based on daily CT/CBCT/MRI and can be used for online replanning to address interfractional variations.
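
    The core idea of segment aperture morphing, shifting each segment's leaf positions to follow the daily target displacement while capping leaf travel to protect delivery time, can be caricatured in one dimension. The function, the 5 mm travel cap, and the toy leaf positions are illustrative assumptions, not the published algorithm:

```python
def morph_apertures(leaf_positions, shift, max_travel=5.0):
    """Shift every segment's leaf positions (mm) by the daily target
    displacement, capping per-leaf travel so delivery time is not
    inflated by large shifts. A 1-D caricature of the SAM idea."""
    capped = max(-max_travel, min(max_travel, shift))
    return [[leaf + capped for leaf in segment] for segment in leaf_positions]

plan = [[-20.0, 20.0], [-15.0, 25.0]]        # two segments: [left, right] leaves
adapted = morph_apertures(plan, shift=8.0)   # daily shift exceeds the cap
```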

  9. SU-E-T-295: Simultaneous Beam Sampling and Aperture Shape Optimization for Station Parameter Optimized Radiation Therapy (SPORT)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zarepisheh, M; Li, R; Xing, L

    Purpose: Station Parameter Optimized Radiation Therapy (SPORT) was recently proposed to fully utilize the technical capability of emerging digital LINACs, in which the station parameters of a delivery system (such as aperture shape and weight, couch position/angle, and gantry/collimator angle) are optimized altogether. SPORT promises to deliver unprecedented radiation dose distributions efficiently, yet no optimization algorithm exists to implement it. The purpose of this work is to propose an optimization algorithm to simultaneously optimize the beam sampling and aperture shapes. Methods: We build a mathematical model whose variables are beam angles (including non-coplanar and/or even non-isocentric beams) and aperture shapes. To solve the resulting large-scale optimization problem, we devise an exact, convergent, and fast optimization algorithm by integrating three advanced optimization techniques: column generation, the gradient method, and pattern search. Column generation is used to find a good set of aperture shapes as an initial solution by adding apertures sequentially. We then apply the gradient method to iteratively improve the current solution by reshaping the aperture shapes and updating the beam angles along the gradient. The algorithm then continues with a pattern search to explore the part of the search space that cannot be reached by the gradient method. Results: The proposed technique was applied to a series of patient cases and significantly improved plan quality. In a head-and-neck case, for example, the left parotid gland mean dose, brainstem max dose, spinal cord max dose, and mandible mean dose were reduced by 10%, 7%, 24%, and 12%, respectively, compared to the conventional VMAT plan while maintaining the same PTV coverage.
Conclusion: Combined use of column generation, gradient search, and pattern search algorithms provides an effective way to simultaneously optimize the large collection of station parameters and significantly improves the quality of the resulting treatment plans as compared with conventional VMAT or IMRT treatments.

  10. Dynamic trajectory-based couch motion for improvement of radiation therapy trajectories in cranial SRT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    MacDonald, R. Lee; Thomas, Christopher G., E-mail: Chris.Thomas@cdha.nshealth.ca; Department of Medical Physics, Nova Scotia Cancer Centre, Queen Elizabeth II Health Sciences Centre, Halifax, Nova Scotia B3H 1V7

    2015-05-15

    Purpose: To investigate potential improvement in external beam stereotactic radiation therapy plan quality for cranial cases using an optimized dynamic gantry and patient support couch motion trajectory, which could minimize exposure to sensitive healthy tissue. Methods: Anonymized patient anatomy and treatment plans of cranial cancer patients were used to quantify the geometric overlap between planning target volumes and organs-at-risk (OARs) based on their two-dimensional projection from source to a plane at isocenter as a function of gantry and couch angle. Published dose constraints were then used as weighting factors for the OARs to generate a map of couch-gantry coordinate space, indicating the degree of overlap at each point in the space. A couch-gantry collision space was generated by direct measurement on a linear accelerator and couch using an anthropomorphic solid-water phantom. A dynamic, fully customizable algorithm was written to generate a navigable ideal trajectory for the patient-specific couch-gantry space. The advanced algorithm can be used to balance the implementation of absolute minimum values of overlap with the clinical practicality of large-scale couch motion and delivery time. Optimized cranial cancer treatment trajectories were compared to conventional treatment trajectories. Results: Comparison of optimized treatment trajectories with conventional treatment trajectories indicated an average decrease in mean dose to the OARs of 19% and an average decrease in maximum dose to the OARs of 12%. Degradation was seen for the homogeneity index (6.14% ± 0.67% to 5.48% ± 0.76%) and conformation number (0.82 ± 0.02 to 0.79 ± 0.02), but neither was statistically significant. Removal of OAR constraints from volumetric modulated arc therapy optimization reveals that the reduction in dose to OARs is almost exclusively due to the optimized trajectory and not the OAR constraints.
Conclusions: The authors' study indicated that simultaneous couch and gantry motion during radiation therapy to minimize the geometric overlap in the beam's-eye-view between target volumes and organs-at-risk can appreciably reduce dose to the organs-at-risk.
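
    One simple way to realize a navigable trajectory through a weighted-overlap map, in the spirit described above, is a dynamic program that picks one couch position per gantry angle while limiting couch motion between neighboring gantry angles. This is a hypothetical sketch, not the authors' customizable algorithm; collision zones could be handled by setting their cost to infinity:

```python
def optimal_trajectory(cost, max_step=1):
    """Dynamic program over an overlap-cost map cost[gantry][couch]:
    choose one couch index per gantry angle, limiting couch motion
    between adjacent gantry angles to max_step indices."""
    n_g, n_c = len(cost), len(cost[0])
    INF = float("inf")
    best = list(cost[0])
    back = []
    for g in range(1, n_g):
        prev, best, choice = best, [INF] * n_c, [0] * n_c
        for c in range(n_c):
            lo, hi = max(0, c - max_step), min(n_c, c + max_step + 1)
            p = min(range(lo, hi), key=lambda i: prev[i])
            best[c] = prev[p] + cost[g][c]
            choice[c] = p
        back.append(choice)
    c = min(range(n_c), key=lambda i: best[i])
    path = [c]                       # reconstruct the couch index path
    for choice in reversed(back):
        c = choice[c]
        path.append(c)
    return path[::-1], min(best)

cost = [
    [5.0, 1.0, 5.0],   # overlap cost vs couch index at gantry angle 0
    [5.0, 5.0, 1.0],   # gantry angle 1
    [1.0, 5.0, 5.0],   # gantry angle 2
]
path, total = optimal_trajectory(cost)
```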

  11. Beam dynamics in MABE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Poukey, J.W.; Coleman, P.D.; Sanford, T.W.L.

    1985-01-01

    MABE is a multistage linear electron accelerator which accelerates up to nine beams in parallel. Nominal parameters per beam are 25 kA, final energy 7 MeV, and guide field 20 kG. We report recent progress via theory and simulation in understanding the beam dynamics in such a system. In particular, we emphasize our results on the radial oscillations and emittance growth for a beam passing through a series of accelerating gaps. 12 refs., 8 figs.

  12. Dynamic Reconstruction Algorithm of Three-Dimensional Temperature Field Measurement by Acoustic Tomography

    PubMed Central

    Li, Yanqiu; Liu, Shi; Inaki, Schlaberg H.

    2017-01-01

    Accuracy and speed of algorithms play an important role in the reconstruction of temperature field measurements by acoustic tomography. Existing algorithms are based on static models which only consider the measurement information. A dynamic model of three-dimensional temperature reconstruction by acoustic tomography is established in this paper. A dynamic algorithm is proposed considering both acoustic measurement information and the dynamic evolution information of the temperature field. An objective function is built which fuses the measurement information and the space constraint of the temperature field with its dynamic evolution information. Robust estimation is used to extend the objective function. The method combines a tunneling algorithm and a local minimization technique to solve the objective function. Numerical simulations show that the image quality and noise immunity of the dynamic reconstruction algorithm are better when compared with static algorithms such as the least-squares method, the algebraic reconstruction technique, and standard Tikhonov regularization. An effective method is thus provided for temperature field reconstruction by acoustic tomography. PMID:28895930
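
    The flavor of an objective function that fuses measurement information with dynamic-evolution information can be shown with a linear toy problem: a least-squares data term plus a penalty tying the solution to the previous frame's prediction. Plain gradient descent stands in for the paper's robust, tunneling-based solver; all matrices and temperatures below are invented:

```python
def reconstruct(A, y, t_prev, lam=0.1, lr=0.05, steps=4000):
    """Gradient descent on the fused objective
        J(T) = ||A T - y||^2 + lam * ||T - T_prev||^2,
    combining acoustic path measurements with a dynamic-evolution prior."""
    n = len(t_prev)
    T = list(t_prev)
    for _ in range(steps):
        r = [sum(A[i][j] * T[j] for j in range(n)) - y[i] for i in range(len(y))]
        grad = [
            2 * sum(A[i][j] * r[i] for i in range(len(y)))
            + 2 * lam * (T[j] - t_prev[j])
            for j in range(n)
        ]
        T = [T[j] - lr * grad[j] for j in range(n)]
    return T

# two unknown zone temperatures, three acoustic path measurements
A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
y = [300.0, 310.0, 612.0]
t_prev = [298.0, 305.0]        # prediction from the previous frame
T = reconstruct(A, y, t_prev)
```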

  13. Dynamic optical modulation of an electron beam on a photocathode RF gun: Toward intensity-modulated radiation therapy (IMRT)

    NASA Astrophysics Data System (ADS)

    Kondoh, Takafumi; Kashima, Hiroaki; Yang, Jinfeng; Yoshida, Yoichi; Tagawa, Seiichi

    2008-10-01

    In intensity-modulated radiation therapy (IMRT), the aim is to deliver reduced doses of radiation to normal tissue. As a step toward IMRT, we examined dynamic optical modulation of an electron beam produced by a photocathode RF gun. Images on photomasks were transferred onto a photocathode by relay imaging. The resulting beam was controlled by a remote mirror. The modulated electron beam maintained its shape on acceleration, had a fine spatial resolution, and could be moved dynamically by optical methods.

  14. Fraction-variant beam orientation optimization for non-coplanar IMRT

    NASA Astrophysics Data System (ADS)

    O'Connor, Daniel; Yu, Victoria; Nguyen, Dan; Ruan, Dan; Sheng, Ke

    2018-02-01

    Conventional beam orientation optimization (BOO) algorithms for IMRT assume that the same set of beam angles is used for all treatment fractions. In this paper we present a BOO formulation based on group sparsity that simultaneously optimizes non-coplanar beam angles for all fractions, yielding a fraction-variant (FV) treatment plan. Beam angles are selected by solving a multi-fraction fluence map optimization problem involving 500-700 candidate beams per fraction, with an additional group sparsity term that encourages most candidate beams to be inactive. The optimization problem is solved using the fast iterative shrinkage-thresholding algorithm. Our FV BOO algorithm is used to create five-fraction treatment plans for digital phantom, prostate, and lung cases as well as a 30-fraction plan for a head and neck case. A homogeneous PTV dose coverage is maintained in all fractions. The treatment plans are compared with fraction-invariant plans that use a fixed set of beam angles for all fractions. The FV plans reduced OAR mean dose and D2 values on average by 3.3% and 3.8% of the prescription dose, respectively. Notably, mean OAR dose was reduced by 14.3% of prescription dose (rectum), 11.6% (penile bulb), 10.7% (seminal vesicle), 5.5% (right femur), 3.5% (bladder), 4.0% (normal left lung), 15.5% (cochleas), and 5.2% (chiasm). D2 was reduced by 14.9% of prescription dose (right femur), 8.2% (penile bulb), 12.7% (proximal bronchus), 4.1% (normal left lung), 15.2% (cochleas), 10.1% (orbits), 9.1% (chiasm), 8.7% (brainstem), and 7.1% (parotids). Meanwhile, PTV homogeneity defined as D95/D5 improved from 0.92 to 0.95 (digital phantom), from 0.95 to 0.98 (prostate case), and from 0.94 to 0.97 (lung case), and remained constant for the head and neck case. Moreover, the FV plans are dosimetrically similar to conventional plans that use twice as many beams per fraction. Thus, FV BOO offers the potential to reduce delivery time for non-coplanar IMRT.
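
    The group sparsity mechanism that deactivates whole candidate beams corresponds, in proximal algorithms such as the fast iterative shrinkage-thresholding algorithm, to block soft-thresholding of each beam's fluence vector. A minimal sketch of that proximal operator, with made-up fluences:

```python
import math

def group_soft_threshold(groups, tau):
    """Proximal operator of tau * sum_g ||x_g||_2 (group lasso): shrinks
    each candidate beam's fluence vector and zeroes whole beams, which is
    how a group sparsity term turns most candidate beams off."""
    out = []
    for g in groups:
        norm = math.sqrt(sum(v * v for v in g))
        scale = max(0.0, 1.0 - tau / norm) if norm > 0 else 0.0
        out.append([scale * v for v in g])
    return out

beams = [[3.0, 4.0], [0.3, 0.4]]   # fluence vectors of two candidate beams
shrunk = group_soft_threshold(beams, tau=1.0)
```

    The strong beam is shrunk but kept; the weak beam's entire fluence vector is zeroed, i.e., the beam is deselected.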

  15. Decision algorithm for data center vortex beam receiver

    NASA Astrophysics Data System (ADS)

    Kupferman, Judy; Arnon, Shlomi

    2017-12-01

    We present a new scheme for a vortex beam communications system which exploits the radial component p of Laguerre-Gauss modes in addition to the azimuthal component l generally used. We derive a new encoding algorithm which makes use of the spatial distribution of intensity to create an alphabet dictionary for communication. We suggest an application of the scheme as part of an optical wireless link for intra data center communication. We investigate the probability of error in decoding, for several detector options.

  16. Cone-beam reconstruction for the two-circles-plus-one-line trajectory

    NASA Astrophysics Data System (ADS)

    Lu, Yanbin; Yang, Jiansheng; Emerson, John W.; Mao, Heng; Zhou, Tie; Si, Yuanzheng; Jiang, Ming

    2012-05-01

    The Kodak Image Station In-Vivo FX has an x-ray module with cone-beam configuration for radiographic imaging but lacks the functionality of tomography. To introduce x-ray tomography into the system, we choose the two-circles-plus-one-line trajectory by mounting one translation motor and one rotation motor. We establish a reconstruction algorithm by applying the M-line reconstruction method. Numerical studies and preliminary physical phantom experiment demonstrate the feasibility of the proposed design and reconstruction algorithm.

  17. Shape determination and control for large space structures

    NASA Technical Reports Server (NTRS)

    Weeks, C. J.

    1981-01-01

    An integral operator approach is used to derive solutions to static shape determination and control problems associated with large space structures. Problem assumptions include a linear self-adjoint system model, observations and control forces at discrete points, and performance criteria for the comparison of estimates or control forms. Results are illustrated by simulations in the one-dimensional case with a flexible beam model, and in the multidimensional case with a finite element model of a large space antenna. Modal expansions for terms in the solution algorithms are presented, using modes from the static or associated dynamic model. These expansions provide approximate solutions in the event that a closed-form analytical solution to the system boundary value problem is not available.

  18. Study on elucidation of bactericidal effects induced by laser beam irradiation Measurement of dynamic stress on laser irradiated surface

    NASA Astrophysics Data System (ADS)

    Furumoto, Tatsuaki; Kasai, Atsushi; Tachiya, Hiroshi; Hosokawa, Akira; Ueda, Takashi

    2010-09-01

    In dental treatment, many types of laser beams have been used for various surgical treatments, and the influences of laser beam irradiation on bactericidal effect have been investigated. However, most of the work has been performed by irradiating to an agar plate with the colony of bacteria, and very few studies have been reported on the physical mechanism of bactericidal effects induced by laser beam irradiation. This paper deals with the measurement of dynamic stress induced in extracted human enamel by irradiation with Nd:YAG laser beams. Laser beams can be delivered to the enamel surface through a quartz optical fiber. Dynamic stress induced in the specimen using elastic wave propagation in a cylindrical long bar made of aluminum alloy is measured. Laser induced stress intensity is evaluated from dynamic strain measured by small semiconductor strain gauges. Carbon powder and titanium dioxide powder were applied to the human enamel surface as absorbents. Additionally, the phenomenon of laser beam irradiation to the human enamel surface was observed with an ultrahigh speed video camera. Results showed that a plasma was generated on the enamel surface during laser beam irradiation, and the melted tissues were scattered in the vertical direction against the enamel surface with a mushroom-like wave. Averaged scattering velocity of the melted tissues was 25.2 m/s. Induced dynamic stress on the enamel surface increased with increasing laser energy in each absorbent. Induced dynamic stresses with titanium dioxide powder were superior to those with carbon powder. Induced dynamic stress was related to volume of prepared cavity, and induced stress for the removal of unit volume of human enamel was 0.03 Pa/mm 3.

  19. The application of dynamic programming in production planning

    NASA Astrophysics Data System (ADS)

    Wu, Run

    2017-05-01

    Nowadays, with the popularity of computers, various industries and fields widely apply computer information technology, which creates huge demand for a variety of application software. In order to develop software meeting various needs at the most economical cost and with the best quality, programmers must design efficient algorithms. A superior algorithm not only solves the problem at hand, but also maximizes the benefits while generating the smallest overhead. As one of the common algorithm families, dynamic programming is used to solve problems with certain optimality properties. When solving problems with a large number of sub-problems that require repetitive calculation, the naive recursive method consumes exponential time, while a dynamic programming algorithm can reduce the time complexity to the polynomial level; dynamic programming is therefore very efficient compared to other approaches, reducing the computational complexity and enriching the computational results. In this paper, we expound the concept, basic elements, properties, core ideas, solution steps, and difficulties of the dynamic programming algorithm, and establish a dynamic programming model of the production planning problem.
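
    A minimal dynamic programming model of production planning, in the spirit of the paper, tracks inventory as the state and chooses each period's production to minimize setup, unit, and holding costs. All costs and demands below are invented for illustration:

```python
def plan_production(demand, setup, unit, hold, max_prod):
    """Dynamic program over inventory: best[s] is the minimal cost of
    meeting demand so far and ending the period with s units in stock."""
    INF = float("inf")
    max_inv = sum(demand)
    best = [0.0] + [INF] * max_inv
    for d in demand:
        nxt = [INF] * (max_inv + 1)
        for s, c in enumerate(best):
            if c == INF:
                continue
            for p in range(max_prod + 1):
                s2 = s + p - d                  # inventory after this period
                if 0 <= s2 <= max_inv:
                    cost = c + (setup if p else 0.0) + unit * p + hold * s2
                    nxt[s2] = min(nxt[s2], cost)
        best = nxt
    return best[0]                              # finish with empty inventory

total_cost = plan_production(demand=[2, 3], setup=5.0, unit=1.0, hold=0.5,
                             max_prod=5)
```

    With these numbers the optimum is to produce everything in the first period (one setup plus holding cost) rather than pay two setups.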

  20. Adaptive conversion of a high-order mode beam into a near-diffraction-limited beam.

    PubMed

    Zhao, Haichuan; Wang, Xiaolin; Ma, Haotong; Zhou, Pu; Ma, Yanxing; Xu, Xiaojun; Zhao, Yijun

    2011-08-01

    We present a new method for efficiently transforming a high-order mode beam into a nearly Gaussian beam with much higher beam quality. The method is based on modulating the phases of the different lobes with a stochastic parallel gradient descent (SPGD) algorithm and coherently adding them after phase flattening. We demonstrate the method by transforming an LP11 mode into a nearly Gaussian beam. The experimental results reveal that the power in the diffraction-limited bucket in the far field is increased by more than a factor of 1.5.
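
    The SPGD control loop can be sketched as follows: perturb all phases in parallel, measure the change in a quality metric, and step along the estimated gradient. The two-lobe combining metric and all gains below are illustrative assumptions, not the experimental setup:

```python
import math
import random

def spgd(phases, metric, gain=0.5, delta=0.1, iters=3000, seed=0):
    """Stochastic parallel gradient descent (ascent form): perturb all
    phases in parallel, measure the two-sided metric change, and step
    along the estimated gradient."""
    rng = random.Random(seed)
    x = list(phases)
    for _ in range(iters):
        d = [rng.choice((-delta, delta)) for _ in x]
        dj = metric([a + b for a, b in zip(x, d)]) \
           - metric([a - b for a, b in zip(x, d)])
        x = [a + gain * dj * b for a, b in zip(x, d)]
    return x

def combining(ph):
    # toy metric: coherent combining efficiency of the lobes,
    # maximal when all phases are equal (mod 2*pi)
    total = sum(complex(math.cos(p), math.sin(p)) for p in ph)
    return abs(total) / len(ph)

flattened = spgd([0.0, 2.5], combining)   # two lobes, initially mismatched
```

    Starting from a combining efficiency of about 0.32, the loop drives the two lobes into phase, much as the phase-flattening step does before coherent addition.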

  1. Linear Controller Design: Limits of Performance

    DTIC Science & Technology

    1991-01-01

    where a sensor should be placed, e.g., where an accelerometer is to be positioned on an aircraft or where a strain gauge is placed along a beam ... Contents (excerpt): 14. Special Algorithms for Convex Optimization; Notation and Problem Definitions; On Algorithms for Convex Optimization; Cutting-Plane Algorithms

  2. Cantilever-beam dynamic modulus for wood composite products. Part 1, apparatus

    Treesearch

    Chris Turk; John F. Hunt; David J. Marr

    2008-01-01

    A cantilever-beam vibration-testing apparatus has been developed to provide a means of dynamic and non-destructive evaluation of modulus of elasticity for small samples of wood or wood-composite material. The apparatus applies a known displacement to a cantilever beam and then releases the beam into its natural first-mode vibration and records displacement as a...

  3. Recursive flexible multibody system dynamics using spatial operators

    NASA Technical Reports Server (NTRS)

    Jain, A.; Rodriguez, G.

    1992-01-01

    This paper uses spatial operators to develop new spatially recursive dynamics algorithms for flexible multibody systems. The operator description of the dynamics is identical to that for rigid multibody systems. Assumed-mode models are used for the deformation of each individual body. The algorithms are based on two spatial operator factorizations of the system mass matrix. The first (Newton-Euler) factorization of the mass matrix leads to recursive algorithms for the inverse dynamics, mass matrix evaluation, and composite-body forward dynamics for the systems. The second (innovations) factorization of the mass matrix, leads to an operator expression for the mass matrix inverse and to a recursive articulated-body forward dynamics algorithm. The primary focus is on serial chains, but extensions to general topologies are also described. A comparison of computational costs shows that the articulated-body, forward dynamics algorithm is much more efficient than the composite-body algorithm for most flexible multibody systems.

  4. Synthetic Incoherence via Scanned Gaussian Beams

    PubMed Central

    Levine, Zachary H.

    2006-01-01

    Tomography, in most formulations, requires an incoherent signal. For a conventional transmission electron microscope, the coherence of the beam often results in diffraction effects that limit the ability to perform a 3D reconstruction from a tilt series with conventional tomographic reconstruction algorithms. In this paper, an analytic solution is given to a scanned Gaussian beam, which reduces the beam coherence to be effectively incoherent for medium-size (of order 100 voxels thick) tomographic applications. The scanned Gaussian beam leads to more incoherence than hollow-cone illumination. PMID:27274945

  5. Self-pumped Gaussian beam-coupling and stimulated backscatter due to reflection gratings in a photorefractive material

    NASA Astrophysics Data System (ADS)

    Saleh, Mohammad Abu

    2007-05-01

    When overlapping monochromatic light beams interfere in a photorefractive material, the resulting intensity fringes create a spatially modulated charge distribution. The resulting refractive index grating may cause power transfer from one beam (the pump) to the other beam (the signal). In a special case of the reflection grating geometry, the Fresnel reflection of the pump beam from the rear surface of the crystal is used as the signal beam. It has been noted that for this self-pumped, contra-directional two-beam coupling (SPCD-TBC) geometry, the coupling efficiency seems to be strongly dependent on the focal position and spot size, which is attributed to diffraction and the resulting change in the spatial overlaps between the pump and signal. In this work a full diffraction based simulation of SPCD-TBC for a Gaussian beam is developed with a novel algorithm. In a related context involving reflection gratings, a particular phenomenon named six-wave mixing has received some interest in the photorefractive research. The generation of multiple waves during near-oblique incidence of a 532 nm weakly focused laser light on photorefractive iron doped lithium niobate in a typical reflection geometry configuration is studied. It is shown that these waves are produced through two-wave coupling (self-diffraction) and four-wave mixing (parametric diffraction). One of these waves, the stimulated photorefractive backscatter produced from parametric diffraction, contains the self-phase conjugate. The dynamics of six-wave mixing, and their dependence on crystal parameters, angle of incidence, and pump power are analyzed. A novel order analysis of the interaction equations provides further insight into experimental observations in the steady state. The quality of the backscatter is evaluated through image restoration, interference experiments, and visibility measurement. Reduction of two-wave coupling may significantly improve the quality of the self-phase conjugate.

  6. Numerical simulation of three-dimensional transonic turbulent projectile aerodynamics by TVD schemes

    NASA Technical Reports Server (NTRS)

    Shiau, Nae-Haur; Hsu, Chen-Chi; Chyu, Wei-Jao

    1989-01-01

    The two-dimensional symmetric TVD scheme proposed by Yee has been extended to and investigated for three-dimensional thin-layer Navier-Stokes simulation of complex aerodynamic problems. An existing three-dimensional Navier-Stokes code based on the Beam-Warming algorithm was modified to provide the option of using the TVD algorithm, and the flow problem considered is a transonic turbulent flow past a projectile with sting at a ten-degree angle of attack. Numerical experiments conducted for three flow cases (free-stream Mach numbers of 0.91, 0.96, and 1.20) show that the symmetric TVD algorithm can provide surface pressure distributions in excellent agreement with measured data; moreover, the rate of convergence to a steady-state solution is about two times faster than that of the original Beam-Warming algorithm.
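
    The key ingredient of a TVD scheme is a slope limiter such as minmod, which suppresses the oscillations an unlimited second-order scheme would produce at steep gradients. A one-dimensional linear-advection sketch of the idea (not Yee's Navier-Stokes formulation):

```python
def minmod(a, b):
    # slope limiter at the heart of TVD schemes
    if a * b <= 0:
        return 0.0
    return min(a, b) if a > 0 else max(a, b)

def tvd_advect(u, c):
    """One step of 1-D linear advection (speed > 0, periodic boundary)
    with a MUSCL/minmod TVD update; c is the CFL number, 0 < c <= 1."""
    n = len(u)
    s = [minmod(u[i] - u[i - 1], u[(i + 1) % n] - u[i]) for i in range(n)]
    f = [u[i] + 0.5 * (1.0 - c) * s[i] for i in range(n)]  # limited fluxes
    return [u[i] - c * (f[i] - f[i - 1]) for i in range(n)]

def total_variation(u):
    return sum(abs(u[(i + 1) % len(u)] - u[i]) for i in range(len(u)))

u0 = [max(0.0, 4.0 - abs(i - 8)) for i in range(20)]  # triangular pulse
u1 = tvd_advect(u0, c=0.5)
```

    The "total variation diminishing" property is exactly that total_variation never grows from one step to the next, while the total mass is conserved.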

  7. Continuous intensity map optimization (CIMO): A novel approach to leaf sequencing in step and shoot IMRT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cao Daliang; Earl, Matthew A.; Luan, Shuang

    2006-04-15

    A new leaf-sequencing approach has been developed that is designed to reduce the number of required beam segments for step-and-shoot intensity modulated radiation therapy (IMRT). This approach to leaf sequencing is called continuous-intensity-map-optimization (CIMO). Using a simulated annealing algorithm, CIMO seeks to minimize differences between the optimized and sequenced intensity maps. Two distinguishing features of the CIMO algorithm are (1) CIMO does not require that each optimized intensity map be clustered into discrete levels and (2) CIMO is not rule-based but rather simultaneously optimizes both the aperture shapes and weights. To test the CIMO algorithm, ten IMRT patient cases were selected (four head-and-neck, two pancreas, two prostate, one brain, and one pelvis). For each case, the optimized intensity maps were extracted from the Pinnacle³ treatment planning system. The CIMO algorithm was applied, and the optimized aperture shapes and weights were loaded back into Pinnacle. A final dose calculation was performed using Pinnacle's convolution/superposition based dose calculation. On average, the CIMO algorithm provided a 54% reduction in the number of beam segments as compared with Pinnacle's leaf sequencer. The plans sequenced using the CIMO algorithm also provided improved target dose uniformity and a reduced discrepancy between the optimized and sequenced intensity maps. For ten clinical intensity maps, comparisons were performed between the CIMO algorithm and the power-of-two reduction algorithm of Xia and Verhey [Med. Phys. 25(8), 1424-1434 (1998)]. When the constraints of a Varian Millennium multileaf collimator were applied, the CIMO algorithm resulted in a 26% reduction in the number of segments. For an Elekta multileaf collimator, the CIMO algorithm resulted in a 67% reduction in the number of segments. An average leaf sequencing time of less than one minute per beam was observed.
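
    The CIMO idea, simultaneously perturbing aperture shapes and weights under simulated annealing to match an optimized intensity map, can be caricatured in one dimension. The move set, temperature schedule, and target map below are invented for illustration and are not the published algorithm:

```python
import math
import random

def sequence(target, n_apertures=2, iters=20000, seed=1):
    """Toy CIMO-style sequencing: simulated annealing jointly perturbs
    aperture edges (leaf positions) and weights to minimize the squared
    difference between the rendered and optimized intensity maps."""
    rng = random.Random(seed)
    n = len(target)

    def render(aps):
        prof = [0.0] * n
        for left, right, w in aps:
            for i in range(left, right):
                prof[i] += w
        return prof

    def error(aps):
        return sum((a - b) ** 2 for a, b in zip(render(aps), target))

    aps = [[0, n, 1.0] for _ in range(n_apertures)]
    e = error(aps)
    for k in range(iters):
        temp = (1.0 - k / iters) + 1e-6           # linear cooling
        cand = [list(a) for a in aps]
        a = rng.choice(cand)
        move = rng.randrange(3)
        if move < 2:                              # nudge a leaf edge
            a[move] = min(n, max(0, a[move] + rng.choice((-1, 1))))
            if a[0] > a[1]:
                a[0], a[1] = a[1], a[0]
        else:                                     # nudge the weight
            a[2] = max(0.0, a[2] + rng.uniform(-0.2, 0.2))
        e2 = error(cand)
        if e2 <= e or rng.random() < math.exp((e - e2) / temp):
            aps, e = cand, e2
    return aps, e

target = [0.0, 1.0, 2.0, 2.0, 1.0, 0.0]   # optimized intensity map (a.u.)
apertures, err = sequence(target)
```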

  8. Small field depth dose profile of 6 MV photon beam in a simple air-water heterogeneity combination: A comparison between anisotropic analytical algorithm dose estimation with thermoluminescent dosimeter dose measurement.

    PubMed

    Mandal, Abhijit; Ram, Chhape; Mourya, Ankur; Singh, Navin

    2017-01-01

    To establish trends of estimation error of dose calculation by the anisotropic analytical algorithm (AAA) with respect to dose measured by thermoluminescent dosimeters (TLDs) in air-water heterogeneity for small field size photons. TLDs were irradiated along the central axis of the photon beam in four different solid water phantom geometries using three small field size single beams. The depth dose profiles were estimated using the AAA calculation model for each field size. The estimated and measured depth dose profiles were compared. The overestimation (OE) within the air cavity was dependent on field size (f) and distance (x) from the solid water-air interface and was formulated as OE = -(0.63f + 9.40)x² + (-2.73f + 58.11)x + (0.06f² - 1.42f + 15.67). In the postcavity region, the point adjacent to the interface and points distal from it depend on field size (f) as OE = 0.42f² - 8.17f + 71.63 and OE = 0.84f² - 1.56f + 17.57, respectively. The trend of estimation error of the AAA dose calculation algorithm with respect to measured values has been formulated throughout the radiation path length along the central axis of a 6 MV photon beam in an air-water heterogeneity combination for small field size photon beams generated from a 6 MV linear accelerator.
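
    The fitted overestimation polynomials quoted above transcribe directly into code; units of f and x are as defined in the paper, and the functions below are a plain transcription of the quoted fits, not an independent model:

```python
def oe_in_cavity(f, x):
    """Overestimation (%) of AAA inside the air cavity, per the quoted fit.

    f: field size; x: distance from the solid water-air interface.
    """
    return (-(0.63 * f + 9.40) * x**2
            + (-2.73 * f + 58.11) * x
            + (0.06 * f**2 - 1.42 * f + 15.67))

def oe_postcavity_adjacent(f):
    """Overestimation (%) at the postcavity point adjacent to the interface."""
    return 0.42 * f**2 - 8.17 * f + 71.63

def oe_postcavity_distal(f):
    """Overestimation (%) at postcavity points distal from the interface."""
    return 0.84 * f**2 - 1.56 * f + 17.57
```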

  9. Algorithm for ion beam figuring of low-gradient mirrors.

    PubMed

    Jiao, Changjun; Li, Shengyi; Xie, Xuhui

    2009-07-20

    Ion beam figuring technology for low-gradient mirrors is discussed. Ion beam figuring is a noncontact machining technique in which a beam of high-energy ions is directed toward a target workpiece to remove material in a predetermined and controlled fashion. Owing to this noncontact mode of material removal, problems associated with tool wear and edge effects, which are common in conventional contact polishing processes, are avoided. Based on the Bayesian principle, an iterative dwell time algorithm for planar mirrors is deduced from the computer-controlled optical surfacing (CCOS) principle. Given the properties of the removal function, the shaping process for low-gradient mirrors can be approximated by the linear model for planar mirrors. On this basis, an error surface figuring technology for low-gradient mirrors with a linear path is established. Because the removal function is near-Gaussian, the figuring process with a spiral path can also be described by the conventional linear CCOS principle, and a Bayesian-based iterative algorithm can be used to deconvolve the dwell time. Moreover, a selection criterion for the spiral parameter is given. Ion beam figuring with a spiral scan path based on these methods can be used to figure mirrors with non-axis-symmetrical errors. Experiments were performed on SiC chemical vapor deposition planar and Zerodur paraboloid samples, and the final surface errors are all below λ/100.
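
    A Bayesian iterative dwell-time deconvolution of the kind described can be sketched as a Richardson-Lucy-type iteration: the dwell time is updated multiplicatively by the back-projected ratio of the desired removal to the removal predicted by convolving the current dwell time with the beam's removal function. The 1D sketch below is a generic Richardson-Lucy iteration under assumed conditions, not the authors' exact algorithm:

```python
import numpy as np

def bayesian_dwell_time(error_map, removal_fn, iterations=200):
    """Richardson-Lucy-style iterative deconvolution of dwell time.

    error_map:  desired material removal (1D nonnegative profile).
    removal_fn: beam removal function (1D nonnegative profile).
    Returns a dwell-time profile t such that conv(t, beam) ~ error_map.
    """
    b = removal_fn / removal_fn.sum()   # normalized beam footprint
    b_rev = b[::-1]                     # mirrored kernel for the adjoint step
    t = np.full_like(error_map, error_map.mean() + 1e-12)
    for _ in range(iterations):
        pred = np.convolve(t, b, mode="same")
        ratio = error_map / np.maximum(pred, 1e-12)
        t *= np.convolve(ratio, b_rev, mode="same")  # multiplicative update
    return t

# Toy check: deconvolve a broad Gaussian target with a narrow Gaussian beam.
x = np.linspace(-1, 1, 101)
beam = np.exp(-x**2 / 0.01)
target = np.exp(-x**2 / 0.1)  # desired removal profile
t = bayesian_dwell_time(target, beam)
residual = target - np.convolve(t, beam / beam.sum(), mode="same")
```

    The multiplicative form keeps the dwell time nonnegative at every iteration, which is the practical appeal of Bayesian deconvolution over direct matrix inversion here.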

  10. A Combined Approach to Cartographic Displacement for Buildings Based on Skeleton and Improved Elastic Beam Algorithm

    PubMed Central

    Liu, Yuangang; Guo, Qingsheng; Sun, Yageng; Ma, Xiaoya

    2014-01-01

    Scale reduction from source to target maps inevitably leads to conflicts of map symbols in cartography and geographic information systems (GIS). Displacement is one of the most important map generalization operators, and it can be used to resolve the problems that arise from conflicts among two or more map objects. In this paper, we propose a combined approach based on a constrained Delaunay triangulation (CDT) skeleton and an improved elastic beam algorithm for automated building displacement. In this approach, map data sets are first partitioned. The displacement operation is then conducted in each partition as a cyclic and iterative process of conflict detection and resolution. In each iteration, the skeleton of the gap spaces is extracted using CDT. It then serves as an enhanced data model to detect conflicts and construct the proximity graph. The proximity graph is then adjusted using local grouping information. Under the action of forces derived from the detected conflicts, the proximity graph is deformed using the improved elastic beam algorithm. In this way, buildings are displaced to find an optimal compromise between related cartographic constraints. To validate this approach, two topographic map data sets (i.e., urban and suburban areas) were tested. The results were reasonable with respect to each constraint when the density of the map was not extremely high. In summary, the improvements include (1) an automated parameter-setting method for elastic beams, (2) explicit enforcement of the positional accuracy constraint, added by introducing drag forces, (3) preservation of local building groups through displacement over an adjusted proximity graph, and (4) an iterative strategy that is more likely to resolve the proximity conflicts than the one used in the existing elastic beam algorithm. PMID:25470727

  11. SU-E-T-188: Film Dosimetry Verification of Monte Carlo Generated Electron Treatment Plans

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Enright, S; Asprinio, A; Lu, L

    2014-06-01

    Purpose: The purpose of this study was to compare dose distributions from film measurements to Monte Carlo generated electron treatment plans. Irradiation with electrons offers the advantages of dose uniformity in the target volume and of minimizing the dose to deeper healthy tissue. Using the Monte Carlo algorithm will improve dose accuracy in regions with heterogeneities and irregular surfaces. Methods: Dose distributions from GafChromic™ EBT3 films were compared to dose distributions from the Electron Monte Carlo algorithm in the Eclipse™ radiotherapy treatment planning system. These measurements were obtained for 6 MeV, 9 MeV, and 12 MeV electrons at two depths. All phantoms studied were imported into Eclipse by CT scan. A 1 cm thick solid water template with holes for bone-like and lung-like plugs was used. Different configurations were used with the different plugs inserted into the holes. Configurations with solid-water plugs stacked on top of one another were also used to create an irregular surface. Results: The dose distributions measured from the film agreed with those from the Electron Monte Carlo treatment plan. The accuracy of the Electron Monte Carlo algorithm was also compared to that of Pencil Beam. Dose distributions from Monte Carlo had much higher pass rates than distributions from Pencil Beam when compared to the film. The pass rate for Monte Carlo was in the 80%-99% range, whereas the pass rate for Pencil Beam was as low as 10.76%. Conclusion: The dose distribution from Monte Carlo agreed with the measured dose from the film. When compared to the Pencil Beam algorithm, pass rates for Monte Carlo were much higher. Monte Carlo should be used over Pencil Beam for regions with heterogeneities and irregular surfaces.

  12. Poster - Thurs Eve-23: Effect of lung density and geometry variation on inhomogeneity correction algorithms: A Monte Carlo dosimetry evaluation.

    PubMed

    Chow, J; Leung, M; Van Dyk, J

    2008-07-01

    This study provides new information on the evaluation of lung dose calculation algorithms as a function of the relative electron density of lung, ρe,lung. Doses calculated using the collapsed cone convolution (CCC) and adaptive convolution (AC) algorithms in lung with the Pinnacle³ system were compared to those calculated using Monte Carlo (MC) simulation (EGSnrc-based code). Three groups of lung phantoms, namely "Slab", "Column" and "Cube", with different ρe,lung (0.05-0.7), positions, volumes and shapes of lung in water were used. 6 and 18 MV photon beams with 4×4 and 10×10 cm² field sizes produced by a Varian 21EX linac were used in the MC dose calculations. Results show that the CCC algorithm agrees well with AC to within ±1% for doses calculated in the lung phantoms, indicating that AC, which requires 3-4 times less computing time than CCC, is a good substitute for the CCC method. Comparing CCC and AC with MC, dose deviations are found when ρe,lung ⩽ 0.1-0.3. The degree of deviation depends on the photon beam energy and field size, and is relatively large when high-energy photon beams with small fields are used. For the penumbra widths (20%-80%), CCC and AC agree well with MC for the "Slab" and "Cube" phantoms with the lung volumes at the central beam axis (CAX). However, deviations >2 mm occur in the "Column" phantoms, with two lung volumes separated by a water column along the CAX, using 18 MV (4×4 cm²) photon beams with ρe,lung ⩽ 0.1. © 2008 American Association of Physicists in Medicine.

  13. Computational study of scattering of a zero-order Bessel beam by large nonspherical homogeneous particles with the multilevel fast multipole algorithm

    NASA Astrophysics Data System (ADS)

    Yang, Minglin; Wu, Yueqian; Sheng, Xinqing; Ren, Kuan Fang

    2017-12-01

    Computation of the scattering of shaped beams by large nonspherical particles is a challenge in both the optics and electromagnetics domains, since it concerns many research fields. In this paper, we report new progress in the numerical computation of scattering diagrams. Our algorithm permits calculation of the scattering of a particle as large as 110 wavelengths, or 700 in size parameter. The particle can be transparent or absorbing, of arbitrary shape, smooth or with a sharp surface, such as Chebyshev particles or ice crystals. To illustrate the capacity of the algorithm, a zero-order Bessel beam is taken as the incident beam, and the scattering of ellipsoidal and Chebyshev particles is taken as an example. Some special phenomena are revealed and examined. The scattering problem is formulated with the combined tangential formulation and solved iteratively with the aid of the multilevel fast multipole algorithm, which is well parallelized with the message passing interface on a distributed-memory computer platform using a hybrid partitioning strategy. The numerical predictions are compared with the results of the rigorous method for a spherical particle to validate the accuracy of the approach. The scattering diagrams of large ellipsoidal particles with various parameters are examined. The effects of aspect ratio, the half-cone angle of the incident zero-order Bessel beam, and the off-axis distance on scattered intensity are studied. Scattering by an asymmetric Chebyshev particle with size parameter larger than 700 is also given to show the capability of the method for computing scattering by arbitrarily shaped particles.

  14. Accelerators (4/5)

    ScienceCinema

    Metral, Elias

    2017-12-09

    1a) Introduction and motivation 1b) History and accelerator types 2) Transverse beam dynamics 3a) Longitudinal beam dynamics 3b) Figure of merit of a synchrotron/collider 3c) Beam control 4) Main limiting factors 5) Technical challenges Prerequisite knowledge: Previous knowledge of accelerators is not required.

  15. Accelerators (5/5)

    ScienceCinema

    None

    2018-05-16

    1a) Introduction and motivation; 1b) History and accelerator types; 2) Transverse beam dynamics; 3a) Longitudinal beam dynamics; 3b) Figure of merit of a synchrotron/collider; 3c) Beam control; 4) Main limiting factors; 5) Technical challenges Prerequisite knowledge: Previous knowledge of accelerators is not required.

  16. Beam-dynamics codes used at DARHT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ekdahl, Jr., Carl August

    Several beam simulation codes are used to help gain a better understanding of beam dynamics in the DARHT LIAs. The most notable of these fall into the following categories: for beam production, the Tricomp Trak orbit-tracking code and the LSP particle-in-cell (PIC) code; for beam transport and acceleration, the XTR static envelope and centroid code, the LAMDA time-resolved envelope and centroid code, and the LSP-Slice PIC code; for coasting-beam transport to target, the LAMDA time-resolved envelope code and the LSP-Slice PIC code. These codes are also being used to inform the design of Scorpius.

  17. Dynamic analysis of geometrically non-linear three-dimensional beams under moving mass

    NASA Astrophysics Data System (ADS)

    Zupan, E.; Zupan, D.

    2018-01-01

    In this paper, we present a coupled dynamic analysis of a moving particle on a deformable three-dimensional frame. The presented numerical model can accommodate arbitrarily curved and twisted initial beam geometry and takes into account the geometric non-linearity of the structure. The equations of the moving particle are solved coupled with the dynamic equations of the structure. The moving particle represents a dynamic load that alters the mass distribution of the structure, while at the same time its path adapts to the deformation of the structure. This coupled geometrically non-linear behaviour of beam and particle is studied. The equation of motion of the particle is added to the system of beam dynamic equations, and an additional unknown representing the coordinate of the curvilinear path of the particle is introduced. A specially designed finite-element formulation of the three-dimensional beam based on the weak form of consistency conditions is employed, in which only the boundary conditions are affected by the contact forces.

  18. Generic simulation of multi-element ladar scanner kinematics in USU LadarSIM

    NASA Astrophysics Data System (ADS)

    Omer, David; Call, Benjamin; Pack, Robert; Fullmer, Rees

    2006-05-01

    This paper presents a generic simulation model for a ladar scanner with up to three scan elements, each having a steering, stabilization and/or pattern-scanning role. Of interest is the development of algorithms that automatically generate commands to the scan elements given beam-steering objectives out of the ladar aperture and the base motion of the sensor platform. First, a straightforward single-element body-fixed beam-steering methodology is presented. Then a unique multi-element redirective and reflective space-fixed beam-steering methodology is explained. It is shown that standard direction cosine matrix decomposition methods fail when using two orthogonal, space-fixed rotations, thus demanding the development of a new algorithm for beam steering. Finally, a related steering control methodology is presented that uses two separate optical elements mathematically combined to determine the necessary scan element commands. Limits, restrictions, and results of this methodology are presented.

  19. Different realizations of Cooper-Frye sampling with conservation laws

    NASA Astrophysics Data System (ADS)

    Schwarz, C.; Oliinychenko, D.; Pang, L.-G.; Ryu, S.; Petersen, H.

    2018-01-01

    Approaches based on viscous hydrodynamics for the hot and dense stage and hadronic transport for the final dilute rescattering stage are successfully applied to the dynamic description of heavy ion reactions at high beam energies. One crucial step in such hybrid approaches is the so-called particlization, which is the transition between the hydrodynamic description and the microscopic degrees of freedom. For this purpose, individual particles are sampled on the Cooper-Frye hypersurface. In this work, four different realizations of the sampling algorithms are compared, with three of them incorporating the global conservation laws of quantum numbers in each event. The algorithms are compared within two types of scenarios: a simple 'box' hypersurface consisting of only one static cell and a typical particlization hypersurface for Au+Au collisions at √s_NN = 200 GeV. For all algorithms the mean multiplicities (or particle spectra) remain unaffected by global conservation laws in the case of large volumes. In contrast, the fluctuations of the particle numbers are affected considerably. The fluctuations of the newly developed SPREW algorithm based on the exponential weight, and the recently suggested SER algorithm based on ensemble rejection, are smaller than those without conservation laws and agree with the expectation from the canonical ensemble. The previously applied mode sampling algorithm produces dramatically larger fluctuations than expected in the corresponding microcanonical ensemble, and therefore should be avoided in fluctuation studies. This study might be of interest for the investigation of particle fluctuations and correlations, e.g. the suggested signatures for a phase transition or a critical endpoint, in hybrid approaches that are affected by global conservation laws.
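
    The effect of global conservation on multiplicity fluctuations can be illustrated with a toy rejection sampler: draw independent Poisson multiplicities per species and keep only events whose summed conserved charge matches the required value. The conditioning suppresses fluctuations below the Poisson (grand-canonical) level, as the abstract describes for the conservation-respecting algorithms. This sketch is illustrative only; it is not the SER or SPREW algorithm itself:

```python
import math
import random

def sample_poisson(lam, rng):
    """Knuth's simple Poisson sampler (adequate for small means)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def sample_event_conserved(species, net_charge, rng):
    """Ensemble rejection: resample the whole event until the summed
    conserved charge matches the required net charge exactly."""
    while True:
        event = [sample_poisson(mean, rng) for mean, _ in species]
        if sum(n * q for n, (_, q) in zip(event, species)) == net_charge:
            return event

# Toy: one positively and one negatively charged species, net charge zero,
# so every accepted event has equal numbers of each.
species = [(2.0, +1), (2.0, -1)]
rng = random.Random(1)
events = [sample_event_conserved(species, 0, rng) for _ in range(2000)]
n_plus = [e[0] for e in events]
mean = sum(n_plus) / len(n_plus)
var = sum((n - mean) ** 2 for n in n_plus) / len(n_plus)
# With exact conservation the variance falls below the mean
# (sub-Poissonian, canonical-ensemble-like fluctuations).
```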

  20. Model of rotary-actuated flexible beam with notch filter vibration suppression controller and torque feedforward load compensation controller

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bills, K.C.; Kress, R.L.; Kwon, D.S.

    1994-12-31

    This paper describes ORNL's development of an environment for the simulation of robotic manipulators. Simulation includes the modeling of kinematics, dynamics, sensors, actuators, control systems, operators, and environments. Models will be used for manipulator design, proposal evaluation, control system design and analysis, graphical preview of proposed motions, safety system development, and training. Of particular interest is the development of models for robotic manipulators having at least one flexible link. As a first application, models have been developed for the Pacific Northwest Laboratory's Flexible Beam Test Bed (PNL FBTB), which is a 1-degree-of-freedom flexible arm with a hydraulic base actuator. ORNL transferred control algorithms developed for the PNL FBTB to controlling IGRIP models. A robust notch filter runs in IGRIP, controlling a full dynamics model of the PNL test bed. Model results provide a reasonable match to the experimental results (quantitative results are being determined) and can run on ORNL's Onyx machine in approximately real time. The flexible beam is modeled as six rigid sections with torsional springs between each segment. The spring constants were adjusted to match the physical response of the flexible beam model to the experimental results. The controller is able to improve performance on the model similar to the improvement seen on the experimental system. Some differences are apparent, most notably because the IGRIP model presently uses a different trajectory planner than the one used by ORNL on the PNL test bed. In the future, the trajectory planner will be modified so that the experiments and models are the same. The successful completion of this work provides the ability to link C code with IGRIP, thus allowing controllers to be developed, tested, and tuned in simulation and then ported directly to hardware systems using the C language.

  1. Dynamic laser beam shaping for material processing using hybrid holograms

    NASA Astrophysics Data System (ADS)

    Liu, Dun; Wang, Yutao; Zhai, Zhongsheng; Fang, Zheng; Tao, Qing; Perrie, Walter; Edwarson, Stuart P.; Dearden, Geoff

    2018-06-01

    A high quality, dynamic laser beam shaping method is demonstrated by displaying a series of hybrid holograms on a spatial light modulator (SLM), where each hologram consists of a binary grating and a geometric mask. The diffraction effect around the shaped beam is significantly reduced. Beam profiles of arbitrary shape, such as square, ring, triangle, pentagon and hexagon, can be conveniently obtained by loading the corresponding holograms on the SLM. The shaped beam can be reconstructed in the range of 0.5 mm at the image plane. Ablation of a polished stainless steel sample at the image plane is consistent with the beam shape in the diffraction near-field. The ±1st order and higher order beams can be completely removed when the grating period is smaller than 160 μm. The local energy ratio of the shaped beam observed by the CCD camera is up to 77.67%. Dynamic processing at 25 Hz using different shapes has also been achieved.

  2. BCD Beam Search: considering suboptimal partial solutions in Bad Clade Deletion supertrees.

    PubMed

    Fleischauer, Markus; Böcker, Sebastian

    2018-01-01

    Supertree methods enable the reconstruction of large phylogenies. The supertree problem can be formalized in different ways in order to cope with contradictory information in the input. Some supertree methods are based on encoding the input trees in a matrix; other methods try to find minimum cuts in some graph. Recently, we introduced Bad Clade Deletion (BCD) supertrees, which combines the graph-based computation of minimum cuts with optimizing a global objective function on the matrix representation of the input trees. The BCD supertree method has guaranteed polynomial running time and is very swift in practice. The quality of reconstructed supertrees was superior to matrix representation with parsimony (MRP) and usually on par with SuperFine for simulated data, but for biological data in particular, the quality of BCD supertrees could not keep up with SuperFine supertrees. Here, we present a beam search extension for the BCD algorithm that keeps a constant number of partial solutions alive in each top-down iteration phase. The guaranteed worst-case running time of the new algorithm is still polynomial in the size of the input. We present an exact and a randomized subroutine to generate suboptimal partial solutions. Both beam search approaches consistently improve supertree quality on all evaluated datasets when keeping 25 suboptimal solutions alive. The supertree quality of the BCD Beam Search algorithm is on par with MRP and SuperFine even for biological data. This is the best performance of a polynomial-time supertree algorithm reported so far.
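
    The beam search idea, keeping a fixed number of the best partial solutions alive at each iteration instead of committing to the single best, can be sketched generically. The toy scoring problem below is illustrative and unrelated to supertrees:

```python
def beam_search(start, expand, score, width=4, steps=10):
    """Generic beam search: keep the `width` highest-scoring partial
    solutions alive at each iteration (a sketch of the idea, not BCD's code).

    expand(state) -> iterable of successor states
    score(state)  -> higher is better
    """
    beam = [start]
    for _ in range(steps):
        candidates = [s for state in beam for s in expand(state)]
        if not candidates:
            break
        beam = sorted(candidates, key=score, reverse=True)[:width]
    return max(beam, key=score)

# Toy: grow a bit string one position at a time, maximizing a weighted sum.
weights = [3, -1, 4, 1, -5]
best = beam_search(
    (),
    lambda s: [s + (0,), s + (1,)] if len(s) < len(weights) else [],
    lambda s: sum(w * b for w, b in zip(weights, s)),
    width=4,
    steps=len(weights),
)
```

    With width 1 this degenerates to greedy construction; a larger width lets suboptimal prefixes survive long enough to become the global optimum, which is exactly the motivation stated in the abstract.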

  3. Dynamical calculations for RHEED intensity oscillations

    NASA Astrophysics Data System (ADS)

    Daniluk, Andrzej

    2005-03-01

    A practical computing algorithm working in real time has been developed for calculating the reflection high-energy electron diffraction from the molecular beam epitaxy growing surface. The calculations are based on the use of a dynamical diffraction theory in which the electrons are taken to be diffracted by a potential, which is periodic in the dimension perpendicular to the surface. The results of the calculations are presented in the form of rocking curves to illustrate how the diffracted beam intensities depend on the glancing angle of the incident beam.
    Program summary
    Title of program: RHEED
    Catalogue identifier: ADUY
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADUY
    Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
    Computer for which the program is designed and others on which it has been tested: Pentium-based PC
    Operating systems or monitors under which the program has been tested: Windows 9x, XP, NT, Linux
    Programming language used: Borland C++
    Memory required to execute with typical data: more than 1 MB
    Number of bits in a word: 64
    Number of processors used: 1
    Distribution format: tar.gz
    Number of lines in distributed program, including test data, etc.: 982
    Number of bytes in distributed program, including test data, etc.: 126 051
    Nature of physical problem: Reflection high-energy electron diffraction (RHEED) is a very useful technique for studying growth and surface analysis of thin epitaxial structures prepared by molecular beam epitaxy (MBE). Nowadays, RHEED is used in many laboratories all over the world where researchers deal with the growth of materials by MBE. The RHEED technique can reveal, almost instantaneously, changes either in the coverage of the sample surface by adsorbates or in the surface structure of a thin film. In most cases the interpretation of experimental results is based on the use of dynamical diffraction approaches. Such approaches are quite useful in qualitative and quantitative analysis of RHEED experimental data.
    Method of solution: RHEED intensities are calculated within the framework of the general matrix formulation of Peng and Whelan [Surf. Sci. Lett. 238 (1990) L446] under the one-beam condition. The dynamical diffraction calculations presented in this paper utilize the systematic reflection case in RHEED, in which the atomic potentials in the planes parallel to the surface are projected onto the surface normal, so that the results are insensitive to the atomic arrangement in the layers parallel to the surface. This model represents a systematic approximation in calculating dynamical RHEED intensities; only a layer coverage factor for the nth layer was taken into account in calculating the interaction potential between the fast electron and that layer.
    Typical running time: Machine and user-parameter dependent.
    Unusual features of the program: The program is presented in the form of a basic unit RHEED.cpp and should be compiled using C++ compilers, including C++ Builder and g++.

  4. Final project report for NEET pulsed ion beam project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kucheyev, S. O.

    The major goal of this project was to develop and demonstrate a novel experimental approach to access the dynamic regime of radiation damage formation in nuclear materials. In particular, the project exploited a pulsed-ion-beam method in order to gain insight into defect interaction dynamics by measuring effective defect interaction time constants and defect diffusion lengths. This project had the following four major objectives: (i) the demonstration of the pulsed ion beam method for a prototypical nuclear ceramic material, SiC; (ii) the evaluation of the robustness of the pulsed beam method from studies of defect generation rate effects; (iii) the measurement of the temperature dependence of defect dynamics and thermally activated defect-interaction processes by pulsed ion beam techniques; and (iv) the demonstration of alternative characterization techniques to study defect dynamics. As we describe below, all these objectives have been met.

  5. Fast regional readout CMOS Image Sensor for dynamic MLC tracking

    NASA Astrophysics Data System (ADS)

    Zin, H.; Harris, E.; Osmond, J.; Evans, P.

    2014-03-01

    Advanced radiotherapy techniques such as volumetric modulated arc therapy (VMAT) require verification of the complex beam delivery, including tracking of multileaf collimators (MLC) and monitoring the dose rate. This work explores the feasibility of a prototype complementary metal-oxide-semiconductor image sensor (CIS) for tracking these complex treatments by utilising fast, region of interest (ROI) readout functionality. An automatic edge tracking algorithm was used to locate MLC leaf edges moving at various speeds (from a moving triangle field shape) and imaged at various sensor frame rates. The CIS demonstrates successful edge detection of the dynamic MLC motion to within 1.0 mm. This demonstrates the feasibility of the sensor to verify treatment delivery involving dynamic MLC at up to ~400 frames per second (equivalent to the linac pulse rate), which is superior to current techniques such as electronic portal imaging devices (EPID). CIS provides the basis for an essential real-time verification tool, useful in assessing accurate delivery of complex high energy radiation to the tumour and ultimately in achieving better cure rates for cancer patients.
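
    Leaf-edge localization in an imaged field can be sketched as finding the point of steepest intensity gradient in a profile across the leaf, refined to sub-pixel accuracy with parabolic interpolation. This is a generic edge-detection sketch on a synthetic penumbra, not the paper's algorithm:

```python
import numpy as np

def leaf_edge_position(profile):
    """Locate a field edge in a 1D intensity profile as the point of
    steepest gradient, refined by parabolic sub-pixel interpolation.
    """
    g = np.abs(np.gradient(profile.astype(float)))
    i = int(np.argmax(g[1:-1])) + 1          # skip the boundary samples
    y0, y1, y2 = g[i - 1], g[i], g[i + 1]
    denom = y0 - 2 * y1 + y2                 # parabola through 3 points
    offset = 0.0 if denom == 0 else 0.5 * (y0 - y2) / denom
    return i + offset

# Synthetic penumbra: a smooth step (sigmoid) centered at pixel 40.25.
x = np.arange(100)
profile = 1.0 / (1.0 + np.exp(-(x - 40.25) / 2.0))
edge = leaf_edge_position(profile)
```

    Tracking then amounts to repeating this per frame per leaf within a small ROI around the previous edge position, which is what makes the fast ROI readout of the sensor attractive.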

  6. Transient beam oscillation with a highly dynamic scanner for laser beam fusion cutting

    NASA Astrophysics Data System (ADS)

    Goppold, Cindy; Pinder, Thomas; Herwig, Patrick

    2016-02-01

    Sheet metals with thicknesses >8 mm exhibit distinctly different cutting performance. The free choice of the optical configuration, composed of fiber diameter, collimation, and focal length, offers many opportunities to influence the static beam geometry. Previous analyses point out the limitations of this method in the thick-section range. In the present study, an experimental investigation of fiber laser fusion cutting of 12 mm stainless steel was performed by means of dynamic beam oscillation. Two standard optical setups were combined with a highly dynamic galvano-driven scanner that achieves frequencies up to 4 kHz. Dependencies on the scanner parameters, the optical configuration, and the conventional cutting parameters are discussed. The aim is to characterize the capabilities and challenges of dynamic beam shaping in comparison to state-of-the-art static beam shaping. The trials are therefore evaluated by quality criteria of the cut edge, such as surface roughness and burr height, the feed rate, and the cut kerf geometry. The investigation highlights promising procedural possibilities for improving cutting performance in fiber laser fusion cutting of thick stainless steel through the application of a highly dynamic scanner.

  7. Self-consistent analysis of radiation and relativistic electron beam dynamics in a helical wiggler using Lienard-Wiechert fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tecimer, M.; Elias, L.R.

    1995-12-31

    Lienard-Wiechert (LW) fields, which are exact solutions of the wave equation for a point charge in free space, are employed to formulate a self-consistent treatment of the electron beam dynamics and the evolution of the generated radiation in long undulators. In a relativistic electron beam, the internal forces leading to the interaction of the electrons with each other can be computed by means of retarded LW fields. The resulting electron beam dynamics enables us to obtain three-dimensional radiation fields starting from initially incoherent spontaneous emission, without introducing a seed wave at start-up. Based on the formalism employed here, both the evolution of the multi-bucket electron phase space dynamics in the beam body as well as at its edges and the relative slippage of the radiation with respect to the electrons in the considered short bunch are naturally embedded in the simulation model. In this paper, we present electromagnetic radiation studies, including multi-bucket electron phase dynamics and the angular distribution of radiation in the time and frequency domains, produced by a relativistic short electron beam bunch interacting with a circularly polarized magnetic undulator.

  8. Beyond Gaussians: a study of single spot modeling for scanning proton dose calculation

    PubMed Central

    Li, Yupeng; Zhu, Ronald X.; Sahoo, Narayan; Anand, Aman; Zhang, Xiaodong

    2013-01-01

    Active spot scanning proton therapy is becoming increasingly adopted by proton therapy centers worldwide. Unlike passive-scattering proton therapy, active spot scanning proton therapy, especially intensity-modulated proton therapy, requires proper modeling of each scanning spot to ensure accurate computation of the total dose distribution contributed from a large number of spots. During commissioning of the spot scanning gantry at the Proton Therapy Center in Houston, it was observed that the long-range scattering protons in a medium may have been inadequately modeled for high-energy beams by a commercial treatment planning system, which could lead to incorrect prediction of field-size effects on dose output. In the present study, we developed a pencil-beam algorithm for scanning-proton dose calculation by focusing on properly modeling individual scanning spots. All modeling parameters required by the pencil-beam algorithm can be generated based solely on a few sets of measured data. We demonstrated that low-dose halos in single-spot profiles in the medium could be adequately modeled with the addition of a modified Cauchy-Lorentz distribution function to a double-Gaussian function. The field-size effects were accurately computed at all depths and field sizes for all energies, and good dose accuracy was also achieved for patient dose verification. The implementation of the proposed pencil beam algorithm also enabled us to study the importance of different modeling components and parameters at various beam energies. The results of this study may be helpful in improving dose calculation accuracy and simplifying beam commissioning and treatment planning processes for spot scanning proton therapy. PMID:22297324
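
    The single-spot modeling described above, a Gaussian core plus a heavy-tailed term for the low-dose halo, can be sketched as a radial fluence model. The parametrization below (a double Gaussian plus a standard bivariate Cauchy-Lorentz term, with example weights) is illustrative only; the paper fits its own modified Cauchy-Lorentz form to measured data:

```python
import math

def spot_lateral_profile(r, w1, sigma1, w2, sigma2, w3, gamma):
    """Radial fluence model for a single scanning spot.

    Two normalized 2D Gaussians model the core; a normalized bivariate
    Cauchy-Lorentz term models the long-range low-dose halo. Weights
    w1 + w2 + w3 = 1 keep the total fluence normalized.
    """
    g1 = math.exp(-r**2 / (2 * sigma1**2)) / (2 * math.pi * sigma1**2)
    g2 = math.exp(-r**2 / (2 * sigma2**2)) / (2 * math.pi * sigma2**2)
    cl = gamma / (2 * math.pi * (r**2 + gamma**2) ** 1.5)  # bivariate Cauchy
    return w1 * g1 + w2 * g2 + w3 * cl
```

    The power-law tail of the Cauchy-Lorentz term is what lets a small weight w3 reproduce the field-size dependence of the output: as the field grows, more and more of the halo from neighboring spots accumulates at the measurement point.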

  9. Whole-Body Computed Tomography-Based Body Mass and Body Fat Quantification: A Comparison to Hydrostatic Weighing and Air Displacement Plethysmography.

    PubMed

    Gibby, Jacob T; Njeru, Dennis K; Cvetko, Steve T; Heiny, Eric L; Creer, Andrew R; Gibby, Wendell A

    We correlate and evaluate the accuracy of accepted anthropometric methods of percent body fat (%BF) quantification, namely, hydrostatic weighing (HW) and air displacement plethysmography (ADP), to 2 automatic adipose tissue quantification methods using computed tomography (CT). Twenty volunteer subjects (14 men, 6 women) received head-to-toe CT scans. Hydrostatic weighing and ADP were obtained from 17 and 12 subjects, respectively. The CT data underwent conversion using 2 separate algorithms, namely, the Schneider method and the Beam method, to convert Hounsfield units to their respective tissue densities. The overall mass and %BF of both methods were compared with HW and ADP. When comparing ADP to CT data using the Schneider method and Beam method, correlations were r = 0.9806 and 0.9804, respectively. Paired t tests indicated there were no statistically significant biases. Additionally, observed average differences in %BF between ADP and the Schneider method and the Beam method were 0.38% and 0.77%, respectively. The %BF measured from ADP, the Schneider method, and the Beam method all had significantly higher mean differences when compared with HW (3.05%, 2.32%, and 1.94%, respectively). We have shown that total body mass correlates remarkably well with both the Schneider method and Beam method of mass quantification. Furthermore, %BF calculated with the Schneider method and Beam method CT algorithms correlates remarkably well with ADP. The application of these CT algorithms has utility in further research to accurately stratify risk factors across periorgan, visceral, and subcutaneous adipose tissue, and has the potential for significant clinical application.


  10. Development of a non-contact diagnostic tool for high power lasers

    NASA Astrophysics Data System (ADS)

    Simmons, Jed A.; Guttman, Jeffrey L.; McCauley, John

    2016-03-01

    High power lasers in excess of 1 kW generate enough Rayleigh scatter, even in the NIR, to be detected by silicon based sensor arrays. A lens and camera system in an off-axis position can therefore be used as a non-contact diagnostic tool for high power lasers. Despite the simplicity of the concept, technical challenges have been encountered in the development of an instrument referred to as BeamWatch. These technical challenges include reducing background radiation, achieving high signal to noise ratio, reducing saturation events caused by particulates crossing the beam, correcting images to achieve accurate beam width measurements, creating algorithms for the removal of non-uniformities, and creating two simultaneous views of the beam from orthogonal directions. Background radiation in the image was reduced by the proper positioning of the back plane and the placement of absorbing materials on the internal surfaces of BeamWatch. Maximizing signal to noise ratio, important to the real-time monitoring of focus position, was aided by increasing lens throughput. The number of particulates crossing the beam path was reduced by creating a positive pressure inside BeamWatch. Algorithms in the software removed non-uniformities in the data prior to generating waist width, divergence, BPP, and M2 results. A dual axis version of BeamWatch was developed by the use of mirrors. By its nature BeamWatch produced results similar to scanning slit measurements. Scanning slit data was therefore taken and compared favorably with BeamWatch results.
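
    The "accurate beam width measurements" mentioned above are, for camera-based profilers, typically second-moment (D4σ) widths in the sense of ISO 11146; the abstract does not state which definition BeamWatch uses, so the following is a generic sketch of that computation on a background-subtracted frame:

```python
import numpy as np

def d4sigma_width(img, axis=1):
    """ISO 11146 second-moment (D4-sigma) beam width along one image axis.
    img: 2-D array of background-subtracted intensities; the width is four
    times the intensity-weighted standard deviation of position."""
    total = img.sum()
    y, x = np.indices(img.shape)
    coord = x if axis == 1 else y
    mean = (coord * img).sum() / total
    var = ((coord - mean) ** 2 * img).sum() / total
    return 4.0 * np.sqrt(var)
```

    Because second moments are very sensitive to baseline offsets, the background-reduction and non-uniformity corrections described in the abstract matter directly to this computation.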

  11. Dynamic graph cuts for efficient inference in Markov Random Fields.

    PubMed

    Kohli, Pushmeet; Torr, Philip H S

    2007-12-01

    In this paper we present a fast new fully dynamic algorithm for the st-mincut/max-flow problem. We show how this algorithm can be used to efficiently compute MAP solutions for certain dynamically changing MRF models in computer vision, such as image segmentation. Specifically, given the solution of the max-flow problem on a graph, the dynamic algorithm efficiently computes the maximum flow in a modified version of the graph. The time taken by it is roughly proportional to the total amount of change in the edge weights of the graph. Our experiments show that, when the number of changes in the graph is small, the dynamic algorithm is significantly faster than the best known static graph cut algorithm. We test the performance of our algorithm on one particular problem: the object-background segmentation problem for video. It should be noted that the application of our algorithm is not limited to the above problem; the algorithm is generic and can be used to yield similar improvements in many other cases that involve dynamic change.
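
    The core reuse idea can be sketched for the simple case of capacity increases: keep the residual graph from the previous solve and only search for new augmenting paths, instead of recomputing the flow from scratch. This is a minimal warm-started Edmonds-Karp sketch, not the Kohli-Torr algorithm itself (capacity decreases need their additional flow-repair machinery):

```python
from collections import defaultdict, deque

def residual_graph(cap):
    """Residual capacities r[u][v], initialised from edge capacities."""
    r = defaultdict(lambda: defaultdict(int))
    for u, nbrs in cap.items():
        for v, c in nbrs.items():
            r[u][v] += c
            r[v][u] += 0          # ensure the reverse arc exists
    return r

def augment_to_max(r, s, t):
    """Push flow along BFS augmenting paths in the residual graph until none
    remain; returns the flow added. Re-calling this after bumping some
    r[u][v] reuses the previously computed flow."""
    added = 0
    while True:
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in r[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return added
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        b = min(r[u][w] for u, w in path)
        for u, w in path:
            r[u][w] -= b
            r[w][u] += b
        added += b
```

    After an edge-weight change, only the incremental flow is searched for, which mirrors the paper's observation that the update cost is roughly proportional to the total change in edge weights.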

  12. Optical coherence tomography with a 2.8-mm beam diameter and sensorless defocus and astigmatism correction

    NASA Astrophysics Data System (ADS)

    Reddikumar, Maddipatla; Tanabe, Ayano; Hashimoto, Nobuyuki; Cense, Barry

    2017-02-01

    An optical coherence tomography (OCT) system with a 2.8-mm beam diameter is presented. Sensorless defocus correction can be performed with a Badal optometer and astigmatism correction with a liquid crystal device. OCT B-scans were used in an image-based optimization algorithm for aberration correction. Defocus can be corrected from -4.3 D to +4.3 D and vertical and oblique astigmatism from -2.5 D to +2.5 D. A contrast gain of 6.9 times was measured after aberration correction. In comparison with a 1.3-mm beam diameter OCT system, this concept achieved a 3.7-dB gain in dynamic range on a model retina. Both systems were used to image the retina of a human subject. As the correction of the liquid crystal device can take more than 60 s, the subject's spectacle prescription was adopted instead. This resulted in a 2.5 times smaller speckle size compared with the standard OCT system. The liquid crystal device for astigmatism correction does not need a high-voltage amplifier and can be operated at 5 V. The correction device is small (9 mm×30 mm×38 mm) and can easily be implemented in existing designs for OCT.

  13. Longitudinal density modulation and energy conversion in intense beams.

    PubMed

    Harris, J R; Neumann, J G; Tian, K; O'Shea, P G

    2007-08-01

    Density modulation of charged particle beams may occur as a consequence of deliberate action, or may occur inadvertently because of imperfections in the particle source or acceleration method. In the case of intense beams, where space charge and external focusing govern the beam dynamics, density modulation may, under some circumstances, be converted to velocity modulation, with a corresponding conversion of potential energy to kinetic energy. Whether this will occur depends on the properties of the beam and the initial modulation. This paper describes the evolution of discrete and continuous density modulations on intense beams and discusses three recent experiments related to the dynamics of density-modulated electron beams.

  14. ART 3.5D: an algorithm to label arteries and veins from three-dimensional angiography.

    PubMed

    Barra, Beatrice; De Momi, Elena; Ferrigno, Giancarlo; Pero, Guglielmo; Cardinale, Francesco; Baselli, Giuseppe

    2016-10-01

    Preoperative three-dimensional (3-D) visualization of brain vasculature by digital subtraction angiography from computerized tomography (CT) in neurosurgery is gaining more and more importance, since vessels are the primary landmarks both for organs at risk and for navigation. Surgical embolization of cerebral aneurysms and arteriovenous malformations, epilepsy surgery, and stereoelectroencephalography are a few examples. Contrast-enhanced cone-beam computed tomography (CE-CBCT) represents a powerful facility, since it is capable of acquiring images in the operation room, shortly before surgery. However, standard 3-D reconstructions do not provide a direct distinction between arteries and veins, which is of utmost importance and has so far been left to the surgeon's inference. Pioneering attempts by true four-dimensional (4-D) CT perfusion scans were already described, though at the expense of longer acquisition protocols, higher dosages, and considerable resolution losses. Hence, space is open to approaches attempting to recover the contrast dynamics from standard CE-CBCT, on the basis of anomalies overlooked in the standard 3-D approach. This paper aims at presenting algebraic reconstruction technique (ART) 3.5D, a method that overcomes the clinical limitations of 4-D CT, from standard 3-D CE-CBCT scans. The strategy works on the 3-D angiography, previously segmented in the standard way, and reprocesses the dynamics hidden in the raw data to recover an approximate dynamics in each segmented voxel. Next, a classification algorithm labels the angiographic voxels as artery or vein. Numerical simulations were performed on a digital phantom of a simplified 3-D vasculature with contrast transit. CE-CBCT projections were simulated and used for ART 3.5D testing. We achieved up to 90% classification accuracy in simulations, proving the feasibility of the presented approach for dynamic information recovery for artery and vein segmentation.
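
    The final labeling step can be illustrated with a simple time-to-peak rule: arterial voxels enhance earlier than venous ones. The midpoint threshold below is a simplification for illustration; the abstract does not specify ART 3.5D's actual classifier.

```python
import numpy as np

def label_vessels(curves, t):
    """Label per-voxel time-enhancement curves as 'artery' or 'vein' by
    time-to-peak (arteries enhance earlier). The threshold, the midpoint of
    the observed peak times, is a hypothetical stand-in for the paper's
    classifier."""
    ttp = np.array([t[np.argmax(c)] for c in curves])
    threshold = 0.5 * (ttp.min() + ttp.max())
    return ['artery' if tp < threshold else 'vein' for tp in ttp]
```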

  15. Application of snakes and dynamic programming optimisation technique in modeling of buildings in informal settlement areas

    NASA Astrophysics Data System (ADS)

    Rüther, Heinz; Martine, Hagai M.; Mtalo, E. G.

    This paper presents a novel approach to semiautomatic building extraction in informal settlement areas from aerial photographs. The proposed approach uses a strategy of delineating buildings by optimising their approximate building contour position. Approximate building contours are derived automatically by locating elevation blobs in digital surface models. Building extraction is then effected by means of the snakes algorithm and the dynamic programming optimisation technique. With dynamic programming, the building contour optimisation problem is realized through a discrete multistage process and solved by the "time-delayed" algorithm, as developed in this work. The proposed building extraction approach is a semiautomatic process, with user-controlled operations linking fully automated subprocesses. Inputs into the proposed building extraction system are ortho-images and digital surface models, the latter being generated through image matching techniques. Buildings are modeled as "lumps" or elevation blobs in digital surface models, which are derived by altimetric thresholding of digital surface models. Initial windows for building extraction are provided by projecting the elevation blobs centre points onto an ortho-image. In the next step, approximate building contours are extracted from the ortho-image by region growing constrained by edges. Approximate building contours thus derived are inputs into the dynamic programming optimisation process in which final building contours are established. The proposed system is tested on two study areas: Marconi Beam in Cape Town, South Africa, and Manzese in Dar es Salaam, Tanzania. Sixty percent of buildings in the study areas have been extracted and verified and it is concluded that the proposed approach contributes meaningfully to the extraction of buildings in moderately complex and crowded informal settlement areas.
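
    The contour optimisation described above is, at its core, a discrete multistage dynamic program: each contour point (stage) picks one of m candidate positions, and the total cost sums per-point image costs plus a smoothness cost between consecutive points. This is the generic DP scheme behind snake optimisation; the paper's "time-delayed" variant adds further stage coupling not reproduced here.

```python
def optimise_contour(unary, pairwise):
    """Viterbi-style DP over n stages with m candidate positions each.
    unary[i][k]: image cost of placing stage i at candidate k;
    pairwise(j, k): smoothness cost between consecutive stages.
    Returns (minimum total cost, best candidate index per stage)."""
    n, m = len(unary), len(unary[0])
    cost = list(unary[0])
    back = []
    for i in range(1, n):
        new_cost, ptr = [], []
        for k in range(m):
            best = min(range(m), key=lambda j: cost[j] + pairwise(j, k))
            new_cost.append(cost[best] + pairwise(best, k) + unary[i][k])
            ptr.append(best)
        cost = new_cost
        back.append(ptr)
    end = min(range(m), key=lambda k: cost[k])
    path = [end]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return cost[end], path[::-1]
```

    Complexity is O(n·m²), which is what makes DP attractive for contour refinement compared with exhaustive search over all m^n contours.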

  16. CT brush and CancerZap!: two video games for computed tomography dose minimization.

    PubMed

    Alvare, Graham; Gordon, Richard

    2015-05-12

    X-ray dose from computed tomography (CT) scanners has become a significant public health concern. All CT scanners spray x-ray photons across a patient, including those using compressive sensing algorithms. New technologies make it possible to aim x-ray beams where they are most needed to form a diagnostic or screening image. We have designed a computer game, CT Brush, that takes advantage of this new flexibility. It uses a standard MART algorithm (Multiplicative Algebraic Reconstruction Technique), but with a user defined dynamically selected subset of the rays. The image appears as the player moves the CT brush over an initially blank scene, with dose accumulating with every "mouse down" move. The goal is to find the "tumor" with as few moves (least dose) as possible. We have successfully implemented CT Brush in Java and made it available publicly, requesting crowdsourced feedback on improving the open source code. With this experience, we also outline a "shoot 'em up game" CancerZap! for photon limited CT. We anticipate that human computing games like these, analyzed by methods similar to those used to understand eye tracking, will lead to new object dependent CT algorithms that will require significantly less dose than object independent nonlinear and compressive sensing algorithms that depend on sprayed photons. Preliminary results suggest substantial dose reduction is achievable.
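
    The MART update used by CT Brush is the standard multiplicative correction per ray; here is a minimal sketch over all rays (CT Brush instead restricts the loop to the user's dynamically selected subset, which is the game mechanic):

```python
import numpy as np

def mart(A, p, n_iters=50, lam=1.0):
    """Multiplicative ART: for each ray i, x_j <- x_j * (p_i / (A_i . x))^(lam * a_ij).
    A: system matrix (rays x pixels), p: measured ray sums, lam: relaxation."""
    x = np.ones(A.shape[1])
    for _ in range(n_iters):
        for i in range(A.shape[0]):
            est = A[i] @ x
            if est > 0 and p[i] > 0:
                x *= (p[i] / est) ** (lam * A[i])
    return x
```

    Restricting the ray loop to a player-chosen subset is exactly how dose accumulates only where the "brush" has been applied.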

  17. Vortex Dynamics and Shear-Layer Instability in High-Intensity Cyclotrons.

    PubMed

    Cerfon, Antoine J

    2016-04-29

    We show that the space-charge dynamics of high-intensity beams in the plane perpendicular to the magnetic field in cyclotrons is described by the two-dimensional Euler equations for an incompressible fluid. This analogy with fluid dynamics gives a unified and intuitive framework to explain the beam spiraling and beam breakup behavior observed in experiments and in simulations. Specifically, we demonstrate that beam breakup is the result of a classical instability occurring in fluids subject to a sheared flow. We give scaling laws for the instability and predict the nonlinear evolution of beams subject to it. Our work suggests that cyclotrons may be uniquely suited for the experimental study of shear layers and vortex distributions that are not achievable in Penning-Malmberg traps.
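
    The fluid system invoked above is the two-dimensional incompressible Euler equations, conveniently written in vorticity-streamfunction form (standard notation; the identification of beam density with vorticity follows the paper's analogy):

```latex
\frac{\partial \omega}{\partial t} + \mathbf{u}\cdot\nabla\omega = 0,
\qquad
\mathbf{u} = \nabla^{\perp}\psi,
\qquad
\nabla^{2}\psi = -\omega,
```

    so the beam density plays the role of the vorticity ω and the self-consistent E×B drift plays the role of the advecting velocity u; shear-layer (Kelvin-Helmholtz-type) instability of this system is the proposed mechanism for beam breakup.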

  18. Beam dynamics simulation of a double pass proton linear accelerator

    DOE PAGES

    Hwang, Kilean; Qiang, Ji

    2017-04-03

    A recirculating superconducting linear accelerator, combining the advantages of both straight and circular accelerators, has been demonstrated with relativistic electron beams. The acceleration concept for a recirculating proton beam was recently proposed and is currently under study. In order to further support the concept, a beam dynamics study of a recirculating proton linear accelerator has to be carried out. In this paper, we study the feasibility of a two-pass recirculating proton linear accelerator through direct numerical beam dynamics design optimization and start-to-end simulation. This study shows that two-pass simultaneous focusing without particle losses is attainable, including fully 3D space-charge effects, through the entire accelerator system.

  19. Compressed sensing with gradient total variation for low-dose CBCT reconstruction

    NASA Astrophysics Data System (ADS)

    Seo, Chang-Woo; Cha, Bo Kyung; Jeon, Seongchae; Huh, Young; Park, Justin C.; Lee, Byeonghun; Baek, Junghee; Kim, Eunyoung

    2015-06-01

    This paper describes the improvement of convergence speed with a gradient total variation (GTV) term in compressed sensing (CS) for low-dose cone-beam computed tomography (CBCT) reconstruction. We derive a fast algorithm for constrained total variation (TV)-based reconstruction from a minimum number of noisy projections. To achieve this, we combine the GTV with a TV-norm regularization term to promote sparsity in the X-ray attenuation characteristics of the human body. The GTV is derived from the TV and is more computationally efficient, converging faster to a desired solution. The numerical algorithm is simple and converges relatively quickly. We apply a gradient projection algorithm that seeks a solution iteratively in the direction of the projected gradient while enforcing non-negativity of the found solution. In comparison with the Feldkamp-Davis-Kress (FDK) and conventional TV algorithms, the proposed GTV algorithm converged in ≤18 iterations, whereas the original TV algorithm needed at least 34 iterations, with the number of projections reduced by 50% relative to the FDK algorithm, to reconstruct the chest phantom images. Future investigation includes improving imaging quality, particularly regarding X-ray cone-beam scatter and motion artifacts in CBCT reconstruction.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Q.

    In memory of the significant contribution of Dr. Jacques Ovadia to electron beam techniques, this session will review recent, advanced techniques which are reinvigorating the science of electron beam radiation therapy. Recent research efforts in improving both the applicability and quality of the electron beam therapy will be discussed, including modulated electron beam radiotherapy (MERT) and dynamic electron arc radiotherapy (DEAR). Learning Objectives: To learn about recent advances in electron beam therapy, including modulated electron beam therapy and dynamic electron arc therapy (DEAR). Put recent advances in the context of work that Dr. Ovadia pursued during his career in medical physics.

  1. Characterisation of a MOSFET-based detector for dose measurement under megavoltage electron beam radiotherapy

    NASA Astrophysics Data System (ADS)

    Jong, W. L.; Ung, N. M.; Tiong, A. H. L.; Rosenfeld, A. B.; Wong, J. H. D.

    2018-03-01

    The aim of this study is to investigate the fundamental dosimetric characteristics of the MOSkin detector for megavoltage electron beam dosimetry. The reproducibility, linearity, energy dependence, dose rate dependence, depth dose measurement, output factor measurement, and surface dose measurement under megavoltage electron beams were tested. The MOSkin detector showed excellent reproducibility (>98%) and linearity (R2 = 1.00) up to 2000 cGy for 4-20 MeV electron beams. The MOSkin detector also showed minimal dose rate dependence (within ±3%) and energy dependence (within ±2%) over the clinical range of electron beams, except for an energy dependence at 4 MeV. An energy dependence correction factor of 1.075 is needed when the MOSkin detector is used for a 4 MeV electron beam. The output factors measured by the MOSkin detector were within ±2% compared to those measured with the EBT3 film and CC13 chamber. The measured depth doses using the MOSkin detector agreed with those measured using the CC13 chamber, except at the build-up region, due to the dose volume averaging effect of the CC13 chamber. For surface dose measurements, MOSkin measurements were in agreement to within ±3% with those measured using EBT3 film. Measurements using the MOSkin detector were also compared to electron dose calculation algorithms, namely the GGPB and eMC algorithms. Both algorithms were in agreement with measurements to within ±2% and ±4% for output factor (except for the 4 × 4 cm2 field size) and surface dose, respectively. With the uncertainties taken into account, the MOSkin detector was found to be a suitable detector for dose measurement under megavoltage electron beams. This has been demonstrated in in vivo skin dose measurement on patients during electron boost to the breast tumour bed.

  2. Reduce beam hardening artifacts of polychromatic X-ray computed tomography by an iterative approximation approach.

    PubMed

    Shi, Hongli; Yang, Zhi; Luo, Shuqian

    2017-01-01

    The beam hardening artifact is one of the most important forms of metal artifact in polychromatic X-ray computed tomography (CT), and it can impair image quality seriously. An iterative approach is proposed to reduce the beam hardening artifact caused by metallic components in polychromatic X-ray CT. According to the Lambert-Beer law, the (detected) projections can be expressed as monotonic nonlinear functions of element geometry projections, which are the theoretical projections produced only by the pixel intensities (image grayscale) of a certain element (component). With the help of prior knowledge of the spectrum distribution of the X-ray beam source and the energy-dependent attenuation coefficients, the functions have explicit expressions. The Newton-Raphson algorithm is employed to solve the functions. The solutions are named the synthetical geometry projections, which are nearly linear weighted sums of the element geometry projections with respect to the mean of each attenuation coefficient. In this process, the attenuation coefficients are modified so that the Newton-Raphson iteration functions satisfy the convergence conditions of fixed point iteration (FPI), so that the solutions approach the true synthetical geometry projections stably. The underlying images are obtained from the projections by general reconstruction algorithms such as filtered back projection (FBP). The image gray values are adjusted according to the attenuation coefficient means to obtain proper CT numbers. Several examples demonstrate that the proposed approach is efficient in reducing beam hardening artifacts and has satisfactory performance in terms of some general criteria. In a simulation example, the normalized root mean square difference (NRMSD) was reduced by 17.52% compared with a recent algorithm. Since the element geometry projections are free from the effect of beam hardening, the nearly linear weighted sum of them, the synthetical geometry projections, are almost free from the effect of beam hardening. By working out the synthetical geometry projections, the proposed approach becomes quite efficient in reducing beam hardening artifacts.
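
    The per-ray inversion underlying this approach can be sketched as Newton-Raphson on the polychromatic Beer-Lambert relation. The single-material case with a made-up two-bin spectrum is shown; the paper performs the analogous per-element inversion with a measured spectrum.

```python
import math

def invert_projection(p, spectrum, mu, t0=1.0, n_iter=25):
    """Newton-Raphson inversion of p = -ln( sum_E s_E * exp(-mu_E * t) )
    for the geometry projection t. spectrum: normalised weights s_E;
    mu: attenuation coefficients mu_E (illustrative values only)."""
    t = t0
    for _ in range(n_iter):
        w = [s * math.exp(-m * t) for s, m in zip(spectrum, mu)]
        total = sum(w)
        f = -math.log(total) - p                           # residual
        df = sum(m * wi for m, wi in zip(mu, w)) / total   # f'(t) > 0
        t -= f / df
    return t
```

    Because f(t) is monotonically increasing, the iteration is well behaved, which is the property the paper exploits when tuning the attenuation coefficients to satisfy the FPI convergence conditions.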

  3. SU-E-T-454: Dosimetric Comparison between Pencil Beam and Monte Carlo Algorithms for SBRT Lung Treatment Using IPlan V4.1 TPS and CIRS Thorax Phantom.

    PubMed

    Fernandez, M Castrillon; Venencia, C; Garrigó, E; Caussa, L

    2012-06-01

    To compare measured and calculated doses using Pencil Beam (PB) and Monte Carlo (MC) algorithms on a CIRS thorax phantom for SBRT lung treatments. A 6MV photon beam generated by a Primus linac with an Optifocus MLC (Siemens) was used. Dose calculation was done using the iPlan v4.1.2 TPS (BrainLAB) with the PB and MC (dose to water and dose to medium) algorithms. The commissioning of both algorithms was done by reproducing experimental measurements in water. A CIRS thorax phantom was used to compare doses using a Farmer-type ion chamber (PTW) and EDR2 radiographic films (KODAK). The ionization chamber, in a tissue-equivalent insert, was placed at two positions in lung tissue and was irradiated using three treatment plans. Axial dose distributions were measured for four treatment plans using conformal and IMRT techniques. Dose distribution comparisons were done by dose profiles and gamma index (3%/3 mm). For the studied beam configurations, ion chamber measurements show that PB overestimates the dose by up to 8.5%, whereas MC has a maximum variation of 1.6%. Dosimetric analysis using dose profiles shows that PB overestimates the dose in the region corresponding to the lung by up to 16%. For the axial dose distribution comparison, the percentage of pixels with gamma index greater than one for MC versus PB was, plan 1: 95.6% versus 87.4%, plan 2: 91.2% versus 77.6%, plan 3: 99.7% versus 93.1%, and plan 4: 98.8% versus 91.7%. It was confirmed that the lower dosimetric errors calculated applying the MC algorithm appear when the spatial resolution and variance decrease at the expense of increased computation time. The agreement between measured and calculated doses, in a phantom with lung heterogeneities, is better with the MC algorithm. The PB algorithm overestimates the doses in lung tissue, which could have a clinical impact in SBRT lung treatments. © 2012 American Association of Physicists in Medicine.
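
    The 3%/3 mm gamma comparison used above can be sketched in one dimension with global dose normalisation. This is a brute-force illustration, not the optimised implementation used clinically:

```python
import numpy as np

def gamma_index(dose_eval, dose_ref, x, dose_tol=0.03, dist_tol=3.0):
    """Global 1-D gamma index (default 3%/3 mm): for each reference point,
    take the minimum over evaluated points of the combined dose-difference /
    distance-to-agreement metric. A point passes when gamma <= 1."""
    d_norm = dose_tol * dose_ref.max()        # global dose normalisation
    g = np.empty(len(dose_ref))
    for i, (xi, di) in enumerate(zip(x, dose_ref)):
        dd = (dose_eval - di) / d_norm
        dx = (x - xi) / dist_tol
        g[i] = np.sqrt(dd ** 2 + dx ** 2).min()
    return g
```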

  4. Electron-Beam Dynamics for an Advanced Flash-Radiography Accelerator

    DOE PAGES

    Ekdahl, Carl

    2015-11-17

    Beam dynamics issues were assessed for a new linear induction electron accelerator being designed for multipulse flash radiography of large explosively driven hydrodynamic experiments. Special attention was paid to equilibrium beam transport, possible emittance growth, and beam stability. Especially problematic would be high-frequency beam instabilities that could blur individual radiographic source spots, low-frequency beam motion that could cause pulse-to-pulse spot displacement, and emittance growth that could enlarge the source spots. Furthermore, beam physics issues were examined through theoretical analysis and computer simulations, including particle-in-cell codes. Beam instabilities investigated included beam breakup, image displacement, diocotron, parametric envelope, ion hose, and themore » resistive wall instability. The beam corkscrew motion and emittance growth from beam mismatch were also studied. It was concluded that a beam with radiographic quality equivalent to the present accelerators at Los Alamos National Laboratory will result if the same engineering standards and construction details are upheld.« less

  5. Electron-beam dynamics for an advanced flash-radiography accelerator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ekdahl, Carl August Jr.

    2015-06-22

    Beam dynamics issues were assessed for a new linear induction electron accelerator. Special attention was paid to equilibrium beam transport, possible emittance growth, and beam stability. Especially problematic would be high-frequency beam instabilities that could blur individual radiographic source spots, low-frequency beam motion that could cause pulse-to-pulse spot displacement, and emittance growth that could enlarge the source spots. Beam physics issues were examined through theoretical analysis and computer simulations, including particle-in cell (PIC) codes. Beam instabilities investigated included beam breakup (BBU), image displacement, diocotron, parametric envelope, ion hose, and the resistive wall instability. Beam corkscrew motion and emittance growth frommore » beam mismatch were also studied. It was concluded that a beam with radiographic quality equivalent to the present accelerators at Los Alamos will result if the same engineering standards and construction details are upheld.« less

  6. Explicit symplectic algorithms based on generating functions for charged particle dynamics.

    PubMed

    Zhang, Ruili; Qin, Hong; Tang, Yifa; Liu, Jian; He, Yang; Xiao, Jianyuan

    2016-07-01

    Dynamics of a charged particle in canonical coordinates is a Hamiltonian system, and the well-known symplectic algorithm has been regarded as the de facto method for numerical integration of Hamiltonian systems due to its long-term accuracy and fidelity. For long-term simulations with high efficiency, explicit symplectic algorithms are desirable. However, it is generally believed that explicit symplectic algorithms are only available for sum-separable Hamiltonians, and this restriction limits the application of explicit symplectic algorithms to charged particle dynamics. To overcome this difficulty, we combine the familiar sum-split method and a generating function method to construct second- and third-order explicit symplectic algorithms for charged particle dynamics. The generating function method is designed to generate explicit symplectic algorithms for product-separable Hamiltonians with the form H(x,p) = p_i f(x) or H(x,p) = x_i g(p). Applied to the simulations of charged particle dynamics, the explicit symplectic algorithms based on generating functions demonstrate superiorities in conservation and efficiency.
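
    The simplest product-separable case, H(x,p) = p f(x) in one dimension with the illustrative choice f(x) = x (not from the paper), has an explicit, exactly symplectic flow; sub-flows of this kind are the building blocks that the generating-function construction composes into second- and third-order schemes.

```python
import math

def flow_pf(x, p, t):
    """Exact flow of H(x, p) = p * f(x) with f(x) = x (illustrative choice):
    xdot = f(x) = x, pdot = -p f'(x) = -p. The map is explicit and
    symplectic: its Jacobian is diag(e^t, e^-t) with determinant 1, and
    H = p * x is conserved exactly."""
    return x * math.exp(t), p * math.exp(-t)
```

    Composing such exact sub-flows, one per term of the split Hamiltonian, is what yields an explicit integrator that remains symplectic overall.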

  7. Explicit symplectic algorithms based on generating functions for charged particle dynamics

    NASA Astrophysics Data System (ADS)

    Zhang, Ruili; Qin, Hong; Tang, Yifa; Liu, Jian; He, Yang; Xiao, Jianyuan

    2016-07-01

    Dynamics of a charged particle in canonical coordinates is a Hamiltonian system, and the well-known symplectic algorithm has been regarded as the de facto method for numerical integration of Hamiltonian systems due to its long-term accuracy and fidelity. For long-term simulations with high efficiency, explicit symplectic algorithms are desirable. However, it is generally believed that explicit symplectic algorithms are only available for sum-separable Hamiltonians, and this restriction limits the application of explicit symplectic algorithms to charged particle dynamics. To overcome this difficulty, we combine the familiar sum-split method and a generating function method to construct second- and third-order explicit symplectic algorithms for charged particle dynamics. The generating function method is designed to generate explicit symplectic algorithms for product-separable Hamiltonians with the form H(x,p) = p_i f(x) or H(x,p) = x_i g(p). Applied to the simulations of charged particle dynamics, the explicit symplectic algorithms based on generating functions demonstrate superiorities in conservation and efficiency.

  8. Holographic otoscope for nanodisplacement measurements of surfaces under dynamic excitation.

    PubMed

    Flores-Moreno, J M; Furlong, Cosme; Rosowski, John J; Harrington, Ellery; Cheng, Jeffrey T; Scarpino, C; Santoyo, F Mendoza

    2011-01-01

    We describe a novel holographic otoscope system for measuring nanodisplacements of objects subjected to dynamic excitation. Such measurements are necessary to quantify the mechanical deformation of surfaces in mechanics, acoustics, electronics, biology, and many other fields. In particular, we are interested in measuring the sound-induced motion of biological samples, such as an eardrum. Our holographic otoscope system consists of laser illumination delivery (IS), optical head (OH), and image processing computer (IP) systems. The IS delivers the object beam (OB) and the reference beam (RB) to the OH. The backscattered light coming from the object illuminated by the OB interferes with the RB at the camera sensor plane to be digitally recorded as a hologram. The hologram is processed by the IP using the Fresnel numerical reconstruction algorithm, where the focal plane can be selected freely. Our holographic otoscope system is currently deployed in a clinic and is packaged in a custom design. It is mounted in a mechatronic positioning system to increase its maneuverability so that it can be conveniently positioned in front of the object to be measured. We present representative results highlighting the versatility of our system to measure deformations of complex elastic surfaces at the wavelength scale, including a copper foil membrane and a postmortem tympanic membrane. SCANNING 33: 342-352, 2011. © 2011 Wiley Periodicals, Inc.
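
    The free-space propagation at the heart of numerical hologram refocusing can be sketched with the angular-spectrum method, a standard alternative to the single-FFT Fresnel transform; choosing the propagation distance z is what "selecting the focal plane freely" amounts to. All parameters below are illustrative.

```python
import numpy as np

def angular_spectrum(field, wavelength, dx, z):
    """Propagate a sampled complex field a distance z (metres) with the
    angular-spectrum method: FFT, multiply by the free-space transfer
    function, inverse FFT. Evanescent components are suppressed."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)              # spatial frequencies
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength ** 2 - FX ** 2 - FY ** 2
    kz = 2.0 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)
    return np.fft.ifft2(np.fft.fft2(field) * H)
```

    Because the transfer function is a pure phase for propagating components, refocusing is lossless and invertible: propagating by z and then by -z recovers the original field.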

  9. Experimental verification of an interpolation algorithm for improved estimates of animal position

    NASA Astrophysics Data System (ADS)

    Schell, Chad; Jaffe, Jules S.

    2004-07-01

    This article presents experimental verification of an interpolation algorithm that was previously proposed in Jaffe [J. Acoust. Soc. Am. 105, 3168-3175 (1999)]. The goal of the algorithm is to improve estimates of both target position and target strength by minimizing a least-squares residual between noise-corrupted target measurement data and the output of a model of the sonar's amplitude response to a target at a set of known locations. Although this positional estimator was shown to be a maximum likelihood estimator, in principle, experimental verification was desired because of interest in understanding its true performance. Here, the accuracy of the algorithm is investigated by analyzing the correspondence between a target's true position and the algorithm's estimate. True target position was measured by precise translation of a small test target (bead) or from the analysis of images of fish from a coregistered optical imaging system. Results with the stationary spherical test bead in a high signal-to-noise environment indicate that a large increase in resolution is possible, while results with commercial aquarium fish indicate a smaller increase is obtainable. However, in both experiments the algorithm provides improved estimates of target position over those obtained by simply accepting the angular positions of the sonar beam with maximum output as target position. In addition, increased accuracy in target strength estimation is possible by considering the effects of the sonar beam patterns relative to the interpolated position. A benefit of the algorithm is that it can be applied "ex post facto" to existing data sets from commercial multibeam sonar systems when only the beam intensities have been stored after suitable calibration.

  10. Sun-Relative Pointing for Dual-Axis Solar Trackers Employing Azimuth and Elevation Rotations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Riley, Daniel; Hansen, Clifford W.

    Dual-axis trackers employing azimuth and elevation rotations are common in the field of photovoltaic (PV) energy generation. Accurate sun-tracking algorithms are widely available. However, a steering algorithm has not been available to accurately point the tracker away from the sun such that a vector projection of the sun beam onto the tracker face falls along a desired path relative to the tracker face. We have developed an algorithm which produces the appropriate azimuth and elevation angles for a dual-axis tracker when given the sun position, the desired angle of incidence, and the desired projection of the sun beam onto the tracker face. Development of this algorithm was inspired by the need to accurately steer a tracker to desired sun-relative positions in order to better characterize the electro-optical properties of PV and CPV modules.
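
    The geometry can be illustrated with a brute-force sketch that solves only the angle-of-incidence part of the problem (the frame convention, grid resolution, and function names are assumptions; the actual algorithm is closed-form and also enforces the projection-direction constraint):

```python
import numpy as np

def unit_from_az_el(az, el):
    """Unit pointing vector in an east-north-up frame from azimuth
    (clockwise from north) and elevation angles, in radians."""
    return np.array([np.sin(az) * np.cos(el),
                     np.cos(az) * np.cos(el),
                     np.sin(el)])

def incidence_angle(sun_az, sun_el, trk_az, trk_el):
    """Angle between the sun beam and the tracker-face normal."""
    s = unit_from_az_el(sun_az, sun_el)
    n = unit_from_az_el(trk_az, trk_el)
    dot = s[0] * n[0] + s[1] * n[1] + s[2] * n[2]
    return np.arccos(np.clip(dot, -1.0, 1.0))

def point_off_sun(sun_az, sun_el, aoi, n_az=721, n_el=361):
    """Brute-force grid search for a tracker azimuth/elevation realising a
    desired angle of incidence `aoi`."""
    az = np.linspace(0.0, 2.0 * np.pi, n_az)
    el = np.linspace(0.0, np.pi / 2.0, n_el)
    A, E = np.meshgrid(az, el)
    err = np.abs(incidence_angle(sun_az, sun_el, A, E) - aoi)
    i = np.unravel_index(np.argmin(err), err.shape)
    return A[i], E[i]

# steer 0.2 rad off a sun at azimuth pi (south), elevation 0.8 rad
trk_az, trk_el = point_off_sun(np.pi, 0.8, aoi=0.2)
```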

  11. Optimal condition for employing an axicon-generated Bessel beam to fabricate cylindrical microlens arrays

    NASA Astrophysics Data System (ADS)

    Luo, Zhi; Yin, Kai; Dong, Xinran; Duan, Ji’an

    2018-05-01

    A numerical algorithm modelling the transformation from a Gaussian beam to a Bessel beam is presented in order to study the optimal conditions for employing an axicon-generated Bessel beam to fabricate cylindrical microlens arrays (CMLAs). By applying the numerical algorithm to simulate the spatial intensity distribution behind the axicon for different rotund-apex defects and different diameter ratios of the incident beam to the axicon, we find that the diffraction effects formed by the axicon edge can be almost eliminated when the diameter ratio is less than 1:2, but that the spatial intensity distribution is disturbed dramatically by even a few tens of microns of apex deviation, especially in the front part of the axicon-generated Bessel beam. Fortunately, the lateral intensity profile in the rear part still maintains a desirable Bessel curve. Therefore, the rear part of the Bessel region and a diameter ratio of less than 1:2 are the optimal choices for employing an axicon-generated Bessel beam in surface microstructure fabrication. Furthermore, by applying these optimal conditions to direct writing of microstructures on fused silica with a femtosecond (fs) laser, a large-area close-packed CMLA is fabricated. The CMLA presents high quality and uniformity, and its optical performance is also demonstrated.

  12. LASER APPLICATIONS AND OTHER TOPICS IN QUANTUM ELECTRONICS: Wind profile recovery from intensity fluctuations of a laser beam reflected in a turbulent atmosphere

    NASA Astrophysics Data System (ADS)

    Banakh, V. A.; Marakasov, D. A.

    2008-04-01

    An algorithm for the wind profile recovery from spatiotemporal spectra of a laser beam reflected in a turbulent atmosphere is presented. The cases of a spherical wave incident on a diffuse reflector of finite size and a spatially limited beam reflected from an infinite random surface are considered.

  13. A new method for incoherent combining of far-field laser beams based on multiple faculae recognition

    NASA Astrophysics Data System (ADS)

    Ye, Demao; Li, Sichao; Yan, Zhihui; Zhang, Zenan; Liu, Yuan

    2018-03-01

    Compared to coherent beam combining, incoherent beam combining can deliver a high-power laser output with high efficiency, simple structure, low cost, and high thermal damage resistance, and it is easy to realize in engineering. Higher target power is achieved by incoherent beam combination using multi-channel optical path correction. However, each channel forms its own spot in the far field, and a low overlap ratio of the faculae (spots) prevents a high laser power density from being formed. In order to improve the combat effectiveness of the system, it is necessary to overlap the different faculae and thereby improve the target energy density. Hence, a novel method for incoherent combining of far-field laser beams is presented. The method combines piezoelectric ceramic actuation with an evaluation algorithm for the faculae coincidence degree, based on high-precision multi-channel optical path correction. The results show that the faculae recognition algorithm is low-latency (less than 10 ms), which meets the needs of practical engineering. Furthermore, the real-time focusing ability on the far-field faculae is improved, which is beneficial to the engineering of high-energy laser weapons or other laser jamming systems.

  14. Modeling and Bayesian parameter estimation for shape memory alloy bending actuators

    NASA Astrophysics Data System (ADS)

    Crews, John H.; Smith, Ralph C.

    2012-04-01

    In this paper, we employ a homogenized energy model (HEM) for shape memory alloy (SMA) bending actuators. Additionally, we utilize a Bayesian method for quantifying parameter uncertainty. The system consists of an SMA wire attached to a flexible beam. As the actuator is heated, the beam bends, providing endoscopic motion. The model parameters are fit to experimental data using an ordinary least-squares approach. The uncertainty in the fitted model parameters is then quantified using Markov chain Monte Carlo (MCMC) methods. The MCMC algorithm provides bounds on the parameters, which will ultimately be used in robust control algorithms. One purpose of the paper is to test the feasibility of the Random Walk Metropolis algorithm, the MCMC method used here.
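
    The Random Walk Metropolis step named above can be sketched on a toy one-dimensional posterior (illustrative only; the paper applies it to the HEM parameter posterior, and the step size here is an assumption):

```python
import numpy as np

def random_walk_metropolis(log_post, x0, step, n_samples, rng):
    """Minimal Random Walk Metropolis sampler: propose Gaussian steps and
    accept with probability min(1, posterior ratio)."""
    x = np.asarray(x0, float)
    chain = np.empty((n_samples, x.size))
    lp = log_post(x)
    for i in range(n_samples):
        prop = x + step * rng.standard_normal(x.size)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis acceptance
            x, lp = prop, lp_prop
        chain[i] = x
    return chain

# toy example: sample a 1D standard normal "posterior"
rng = np.random.default_rng(0)
chain = random_walk_metropolis(lambda x: -0.5 * float(x @ x),
                               np.zeros(1), 1.0, 20000, rng)
```

    Credible bounds on the parameters then come directly from quantiles of the chain.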

  15. Human-simulated intelligent control of train braking response of bridge with MRB

    NASA Astrophysics Data System (ADS)

    Li, Rui; Zhou, Hongli; Wu, Yueyuan; Wang, Xiaojie

    2016-04-01

    Urgent train braking can pose a structural response threat to a bridge under passive control. Based on an analysis of the braking dynamics of a train-bridge vibration system, a magnetorheological elastomeric bearing (MRB) with adjustable mechanical parameters is designed, tested, and modeled. A finite element method (FEM) is used to model and optimize a full-scale vibration isolation system for a railway bridge based on the MRB. Using this model, we also consider the effect of different braking stop positions on the vibration isolation system and classify the longitudinal vibration characteristics of the bridge into several cases. Because the train-bridge vibration isolation system has multiple vibration states with strong coupling and nonlinear characteristics, a human-simulated intelligent control (HSIC) algorithm for isolating the bridge vibration under the impact of train braking is proposed, in which the peak shear force at the pier top, the displacement of the beam, and the acceleration of the beam are chosen as control goals. The longitudinal vibration control system under train braking is simulated in MATLAB. The results indicate that different braking stop positions significantly affect the vibration isolation system and that the structural response is most drastic when the train stops at the third cross-span. With the proposed HSIC smart isolation system, the displacement of the bridge beam and the peak shear force at the pier top are reduced by 53.8% and 34.4%, respectively. Moreover, the acceleration of the bridge beam is effectively controlled within a limited range.

  16. Four-dimensional volume-of-interest reconstruction for cone-beam computed tomography-guided radiation therapy.

    PubMed

    Ahmad, Moiz; Balter, Peter; Pan, Tinsu

    2011-10-01

    Data sufficiency is a major problem in four-dimensional cone-beam computed tomography (4D-CBCT) on linear accelerator-integrated scanners for image-guided radiotherapy. Scan times must be in the range of 4-6 min to avoid undersampling artifacts. Various image reconstruction algorithms have been proposed to accommodate undersampled data acquisitions, but these algorithms are computationally expensive, may require long reconstruction times, and may require algorithm parameters to be optimized. The authors present a novel reconstruction method, 4D volume-of-interest (4D-VOI) reconstruction, which suppresses undersampling artifacts and resolves lung tumor motion for undersampled 1-min scans. The 4D-VOI reconstruction is much less computationally expensive than other 4D-CBCT algorithms. The 4D-VOI method uses respiration-correlated projection data to reconstruct a four-dimensional (4D) image inside a VOI containing the moving tumor, and uncorrelated projection data to reconstruct a three-dimensional (3D) image outside the VOI. Anatomical motion is resolved inside the VOI and blurred outside the VOI. The authors acquired a 1-min scan of an anthropomorphic chest phantom containing a moving water-filled sphere. The authors also used previously acquired 1-min scans for two lung cancer patients who had received CBCT-guided radiation therapy. The same raw data were used to test and compare the 4D-VOI reconstruction with the standard 4D reconstruction and the McKinnon-Bates (MB) reconstruction algorithms. Both the 4D-VOI and the MB reconstructions suppress nearly all the streak artifacts compared with the standard 4D reconstruction, but the 4D-VOI has 3-8 times greater contrast-to-noise ratio than the MB reconstruction. In the dynamic chest phantom study, the 4D-VOI and the standard 4D reconstructions both resolved a moving sphere with an 18 mm displacement. 
The 4D-VOI reconstruction shows a motion blur of only 3 mm, whereas the MB reconstruction shows a motion blur of 13 mm. With graphics processing unit hardware used to accelerate computations, the 4D-VOI reconstruction required a 40-s reconstruction time. 4D-VOI reconstruction effectively reduces undersampling artifacts and resolves lung tumor motion in 4D-CBCT. The 4D-VOI reconstruction is computationally inexpensive compared with more sophisticated iterative algorithms. Compared with these algorithms, our 4D-VOI reconstruction is an attractive alternative in 4D-CBCT for reconstructing target motion without generating numerous streak artifacts.
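
    The respiration-correlated binning that underlies such 4D reconstructions can be sketched as follows (assuming, for illustration, a known and perfectly regular breathing period; a real pipeline extracts phase from a measured respiratory signal):

```python
import numpy as np

def phase_bins(times, period, n_bins):
    """Assign each projection's acquisition time to a breathing-phase bin.
    Projections in the same bin are reconstructed together inside the VOI;
    all projections are used for the static volume outside it."""
    phase = (times % period) / period            # breathing phase in [0, 1)
    return np.floor(phase * n_bins).astype(int)

# 1-min scan at 10 projections/s, 4 s breathing period, 10 phase bins
times = np.arange(0.0, 60.0, 0.1)
bins = phase_bins(times, period=4.0, n_bins=10)
```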

  17. Four-dimensional volume-of-interest reconstruction for cone-beam computed tomography-guided radiation therapy

    PubMed Central

    Ahmad, Moiz; Balter, Peter; Pan, Tinsu

    2011-01-01

    Purpose: Data sufficiency is a major problem in four-dimensional cone-beam computed tomography (4D-CBCT) on linear accelerator-integrated scanners for image-guided radiotherapy. Scan times must be in the range of 4–6 min to avoid undersampling artifacts. Various image reconstruction algorithms have been proposed to accommodate undersampled data acquisitions, but these algorithms are computationally expensive, may require long reconstruction times, and may require algorithm parameters to be optimized. The authors present a novel reconstruction method, 4D volume-of-interest (4D-VOI) reconstruction, which suppresses undersampling artifacts and resolves lung tumor motion for undersampled 1-min scans. The 4D-VOI reconstruction is much less computationally expensive than other 4D-CBCT algorithms. Methods: The 4D-VOI method uses respiration-correlated projection data to reconstruct a four-dimensional (4D) image inside a VOI containing the moving tumor, and uncorrelated projection data to reconstruct a three-dimensional (3D) image outside the VOI. Anatomical motion is resolved inside the VOI and blurred outside the VOI. The authors acquired a 1-min scan of an anthropomorphic chest phantom containing a moving water-filled sphere. The authors also used previously acquired 1-min scans for two lung cancer patients who had received CBCT-guided radiation therapy. The same raw data were used to test and compare the 4D-VOI reconstruction with the standard 4D reconstruction and the McKinnon-Bates (MB) reconstruction algorithms. Results: Both the 4D-VOI and the MB reconstructions suppress nearly all the streak artifacts compared with the standard 4D reconstruction, but the 4D-VOI has 3–8 times greater contrast-to-noise ratio than the MB reconstruction. In the dynamic chest phantom study, the 4D-VOI and the standard 4D reconstructions both resolved a moving sphere with an 18 mm displacement. 
The 4D-VOI reconstruction shows a motion blur of only 3 mm, whereas the MB reconstruction shows a motion blur of 13 mm. With graphics processing unit hardware used to accelerate computations, the 4D-VOI reconstruction required a 40-s reconstruction time. Conclusions: 4D-VOI reconstruction effectively reduces undersampling artifacts and resolves lung tumor motion in 4D-CBCT. The 4D-VOI reconstruction is computationally inexpensive compared with more sophisticated iterative algorithms. Compared with these algorithms, our 4D-VOI reconstruction is an attractive alternative in 4D-CBCT for reconstructing target motion without generating numerous streak artifacts. PMID:21992381

  18. Integrated Reconfigurable Aperture, Digital Beam Forming, and Software GPS Receiver for UAV Navigation

    DTIC Science & Technology

    2007-12-11

    Implemented both carrier and code phase tracking loops for performance evaluation of a minimum power beam forming algorithm and a null steering algorithm. (Figures 5 and 6 of the report, schematics of a K-element antenna array spatial adaptive processor and a K-element antenna array space-time adaptive processor, are not reproduced here.) Two additional

  19. Automatic laser beam alignment using blob detection for an environment monitoring spectroscopy

    NASA Astrophysics Data System (ADS)

    Khidir, Jarjees; Chen, Youhua; Anderson, Gary

    2013-05-01

    This paper describes a fully automated system to align an infrared laser beam with a small retro-reflector over a wide range of distances. The components were developed and tested for an open-path spectrometer gas detection system. Using blob detection from the OpenCV library, an automatic alignment algorithm was designed to achieve fast and accurate target detection against a complex background environment. Test results are presented to show that the proposed algorithm has been successfully applied over various target distances and environmental conditions.
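
    The spot-finding step can be illustrated with a simple thresholded-centroid stand-in for OpenCV blob detection (the synthetic frame, spot location, and threshold below are assumptions for the sketch):

```python
import numpy as np

def brightest_blob_center(img, thresh):
    """Threshold the frame and return the intensity-weighted centroid
    (row, col) of the bright pixels; a numpy stand-in for the OpenCV blob
    detection used in the paper."""
    ys, xs = np.nonzero(img > thresh)
    if ys.size == 0:
        return None
    w = img[ys, xs].astype(float)
    return float((ys * w).sum() / w.sum()), float((xs * w).sum() / w.sum())

# synthetic frame: dim clutter plus a bright reflector return near (12, 30)
rng = np.random.default_rng(1)
img = rng.uniform(0.0, 10.0, (64, 64))
yy, xx = np.mgrid[0:64, 0:64]
img += 200.0 * np.exp(-((yy - 12.0) ** 2 + (xx - 30.0) ** 2) / 8.0)
cy, cx = brightest_blob_center(img, thresh=50.0)
```

    The alignment loop would then steer the beam to drive this centroid toward the image center.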

  20. Nonlinear dynamics optimization with particle swarm and genetic algorithms for SPEAR3 emittance upgrade

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Xiaobiao; Safranek, James

    2014-09-01

    Nonlinear dynamics optimization is carried out for a low-emittance upgrade lattice of SPEAR3 in order to improve its dynamic aperture and Touschek lifetime. Two multi-objective optimization algorithms, a genetic algorithm and a particle swarm algorithm, are used for this study. The performance of the two algorithms is compared. The results show that the particle swarm algorithm converges significantly faster to similar or better solutions than the genetic algorithm and that it does not require seeding of good solutions in the initial population. These advantages may make the particle swarm algorithm more suitable for many accelerator optimization applications.
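
    For reference, a minimal single-objective particle swarm sketch on a toy function (the SPEAR3 study uses a multi-objective variant; the parameters and objective here are illustrative assumptions):

```python
import numpy as np

def particle_swarm(f, bounds, n_particles=30, n_iter=200,
                   w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm minimiser: each particle's velocity is pulled
    toward its personal best and the swarm's global best."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, lo.size))
    v = np.zeros_like(x)
    pbest, pval = x.copy(), np.array([f(p) for p in x])
    g = pbest[np.argmin(pval)].copy()
    for _ in range(n_iter):
        r1, r2 = rng.uniform(size=(2, n_particles, lo.size))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.array([f(p) for p in x])
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[np.argmin(pval)].copy()
    return g, float(pval.min())

# minimise the 2D sphere function; optimum at the origin
best, best_val = particle_swarm(lambda p: float(p @ p),
                                (np.full(2, -5.0), np.full(2, 5.0)))
```

    No seeding of good solutions is needed: the initial population is uniform over the bounds, mirroring the advantage reported in the abstract.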

  1. Cone beam CT imaging with limited angle of projections and prior knowledge for volumetric verification of non-coplanar beam radiation therapy: a proof of concept study

    NASA Astrophysics Data System (ADS)

    Meng, Bowen; Xing, Lei; Han, Bin; Koong, Albert; Chang, Daniel; Cheng, Jason; Li, Ruijiang

    2013-11-01

    Non-coplanar beams are important for treatment of both cranial and noncranial tumors. Treatment verification of such beams with couch rotation/kicks, however, is challenging, particularly for the application of cone beam CT (CBCT). In this situation, only limited and unconventional imaging angles are feasible to avoid collision between the gantry, couch, patient, and on-board imaging system. The purpose of this work is to develop a CBCT verification strategy for patients undergoing non-coplanar radiation therapy. We propose an image reconstruction scheme that integrates a prior image constrained compressed sensing (PICCS) technique with image registration. Planning CT or CBCT acquired at the neutral position is rotated and translated according to the nominal couch rotation/translation to serve as the initial prior image. Here, the nominal couch movement is chosen to have a rotational error of 5° and translational error of 8 mm from the ground truth in one or more axes or directions. The proposed reconstruction scheme alternates between two major steps. First, an image is reconstructed using the PICCS technique implemented with total-variation minimization and simultaneous algebraic reconstruction. Second, the rotational/translational setup errors are corrected and the prior image is updated by applying rigid image registration between the reconstructed image and the previous prior image. The PICCS algorithm and rigid image registration are alternated iteratively until the registration results fall below a predetermined threshold. The proposed reconstruction algorithm is evaluated with an anthropomorphic digital phantom and physical head phantom. The proposed algorithm provides useful volumetric images for patient setup using projections with an angular range as small as 60°. It reduced the translational setup errors from 8 mm to generally <1 mm and the rotational setup errors from 5° to <1°. 
Compared with the PICCS algorithm alone, the integration of rigid registration significantly improved the reconstructed image quality, reducing the root mean square image error typically 2-3 fold (and up to 100-fold). The proposed algorithm provides a remedy for the problem of non-coplanar CBCT reconstruction from a limited angular range of projections by combining the PICCS technique and rigid image registration in an iterative framework. In this proof-of-concept study, non-coplanar beams with couch rotations of 45° can be effectively verified with the CBCT technique.

  2. Influence of different dose calculation algorithms on the estimate of NTCP for lung complications

    PubMed Central

    Bäck, Anna

    2013-01-01

    Due to limitations and uncertainties in dose calculation algorithms, different algorithms can predict different dose distributions and dose‐volume histograms for the same treatment. This can be a problem when estimating the normal tissue complication probability (NTCP) for patient‐specific dose distributions. Published NTCP model parameters are often derived for a different dose calculation algorithm than the one used to calculate the actual dose distribution. The use of algorithm‐specific NTCP model parameters can prevent errors caused by differences in dose calculation algorithms. The objective of this work was to determine how to change the NTCP model parameters for lung complications derived for a simple correction‐based pencil beam dose calculation algorithm, in order to make them valid for three other common dose calculation algorithms. NTCP was calculated with the relative seriality (RS) and Lyman‐Kutcher‐Burman (LKB) models. The four dose calculation algorithms used were the pencil beam (PB) and collapsed cone (CC) algorithms employed by Oncentra, and the pencil beam convolution (PBC) and anisotropic analytical algorithm (AAA) employed by Eclipse. Original model parameters for lung complications were taken from four published studies on different grades of pneumonitis, and new algorithm‐specific NTCP model parameters were determined. The difference between original and new model parameters was presented in relation to the reported model parameter uncertainties. Three different types of treatments were considered in the study: tangential and locoregional breast cancer treatment and lung cancer treatment. Changing the algorithm without the derivation of new model parameters caused changes in the NTCP value of up to 10 percentage points for the cases studied. Furthermore, the error introduced could be of the same magnitude as the confidence intervals of the calculated NTCP values. 
The new NTCP model parameters were tabulated as the algorithm was varied from PB to PBC, AAA, or CC. Moving from the PB to the PBC algorithm did not require new model parameters; however, moving from PB to AAA or CC did require a change in the NTCP model parameters, with CC requiring the largest change. It was shown that the new model parameters for a given algorithm are different for the different treatment types. PACS numbers: 87.53.‐j, 87.53.Kn, 87.55.‐x, 87.55.dh, 87.55.kd PMID:24036865
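
    The LKB evaluation discussed above reduces a dose-volume histogram to a generalised equivalent uniform dose and applies a probit response; a minimal sketch (the parameter values below are placeholders, not the published lung parameters):

```python
import math

def lkb_ntcp(doses, volumes, td50, m, n):
    """Lyman-Kutcher-Burman NTCP from a differential DVH: reduce the DVH
    to a generalised equivalent uniform dose, gEUD = (sum v_i d_i^(1/n))^n,
    then evaluate the probit response with parameters TD50 and m."""
    total = sum(volumes)
    geud = sum((v / total) * d ** (1.0 / n)
               for d, v in zip(doses, volumes)) ** n
    t = (geud - td50) / (m * td50)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

# a uniform whole-organ dose equal to TD50 gives NTCP = 0.5 by construction
p = lkb_ntcp(doses=[30.0], volumes=[1.0], td50=30.0, m=0.35, n=1.0)
```

    Because the DVH fed to this function depends on the dose calculation algorithm, the fitted (TD50, m, n) must be algorithm-specific, which is the point of the study.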

  3. Analysis of the orbit distortion by the use of the wavelet transform

    NASA Astrophysics Data System (ADS)

    Matsushita, T.; Agui, A.; Yoshigoe, A.; Takao, M.; Aoyagi, H.; Takeuchi, M.; Nakatani, T.; Tanaka, H.

    2004-05-01

    We have adopted the matching pursuit algorithm of the discrete wavelet transform (DWT) for the analysis of beam position shifts correlated with the motion of an insertion device (ID). The beam position data measured by the rf beam position monitors include high-frequency `noise' and background-level fluctuations. Precise evaluation of the electron beam position shift correlated with the motion of the ID is required to estimate the steering magnet currents needed to suppress the closed orbit distortion (COD). The DWT is a powerful tool for frequency analysis and data processing. The DWT analysis was applied to the beam position shift correlated with the phase motion of the APPLE-2 type undulator (ID23) at SPring-8. The analysis indicated that the `noise' is mainly composed of components between 6.25 and 50 Hz and below 0.1 Hz. We carried out data processing to remove this `noise' with the matching pursuit algorithm, and thereby succeeded in suppressing the COD to within 2 μm using the steering magnet currents calculated from the processed data.
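
    The idea of removing high-frequency components with a wavelet decomposition can be sketched with a much cruder Haar transform (the matching pursuit processing in the paper is far more selective; the signal and level count here are illustrative assumptions):

```python
import numpy as np

def haar_smooth(x, levels):
    """Crude low-pass filtering with a Haar wavelet decomposition: recurse
    on the approximation coefficients and discard the detail
    (high-frequency) coefficients entirely, then reconstruct."""
    if levels == 0 or len(x) < 2:
        return x.copy()
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # low-pass branch
    # the detail branch (x[0::2] - x[1::2]) / sqrt(2) is dropped
    approx = haar_smooth(approx, levels - 1)
    out = np.empty_like(x)
    out[0::2] = approx / np.sqrt(2.0)
    out[1::2] = approx / np.sqrt(2.0)
    return out

t = np.linspace(0.0, 1.0, 256, endpoint=False)
slow = np.sin(2 * np.pi * t)                      # slow orbit drift
noisy = slow + 0.3 * np.sin(2 * np.pi * 64 * t)   # high-frequency 'noise'
clean = haar_smooth(noisy, levels=3)
```

    The denoised trace then serves as the input for computing the corrective steering currents.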

  4. Suppression of motion-induced streak artifacts along chords in fan-beam BPF-reconstructions of motion-contaminated projection data

    NASA Astrophysics Data System (ADS)

    King, Martin; Xia, Dan; Yu, Lifeng; Pan, Xiaochuan; Giger, Maryellen

    2006-03-01

    Usage of the backprojection filtration (BPF) algorithm for reconstructing images from motion-contaminated fan-beam data may result in motion-induced streak artifacts, which appear in the direction of the chords on which images are reconstructed. These streak artifacts, which are most pronounced along chords tangent to the edges of the moving object, may be suppressed by use of the weighted BPF (WBPF) algorithm, which can exploit the inherent redundancies in fan-beam data. More specifically, reconstructions using full-scan and short-scan data can allow for substantial suppression of these streaks, whereas those using reduced-scan data can allow for partial suppression. Since multiple different reconstructions of the same chord can be obtained by varying the amount of redundant data used, we have laid the groundwork for a possible method to characterize the amount of motion encoded within the data used for reconstructing an image on a particular chord. Furthermore, since motion artifacts in WBPF reconstructions using full-scan and short-scan data appear similar to those in corresponding fan-beam filtered backprojection (FFBP) reconstructions for the cases performed in this study, the BPF and WBPF algorithms potentially may be used to arrive at a more fundamental characterization of how motion artifacts appear in FFBP reconstructions.

  5. A method for photon beam Monte Carlo multileaf collimator particle transport

    NASA Astrophysics Data System (ADS)

    Siebers, Jeffrey V.; Keall, Paul J.; Kim, Jong Oh; Mohan, Radhe

    2002-09-01

    Monte Carlo (MC) algorithms are recognized as the most accurate methodology for patient dose assessment. For intensity-modulated radiation therapy (IMRT) delivered with dynamic multileaf collimators (DMLCs), accurate dose calculation, even with MC, is challenging. Accurate IMRT MC dose calculations require inclusion of the moving MLC in the MC simulation. Due to its complex geometry, full transport through the MLC can be time consuming. The aim of this work was to develop an MLC model for photon beam MC IMRT dose computations. The basis of the MC MLC model is that the complex MLC geometry can be separated into simple geometric regions, each of which readily lends itself to simplified radiation transport. For photons, only attenuation and first Compton scatter interactions are considered. The amount of attenuation material an individual particle encounters while traversing the entire MLC is determined by adding the individual amounts from each of the simplified geometric regions. Compton scatter is sampled based upon the total thickness traversed. Pair production and electron interactions (scattering and bremsstrahlung) within the MLC are ignored. The MLC model was tested for 6 MV and 18 MV photon beams by comparing it with measurements and MC simulations that incorporate the full physics and geometry for fields blocked by the MLC and with measurements for fields with the maximum possible tongue-and-groove and tongue-or-groove effects, for static test cases and for sliding windows of various widths. The MLC model predicts the field size dependence of the MLC leakage radiation within 0.1% of the open-field dose. The entrance dose and beam hardening behind a closed MLC are predicted within +/-1% or 1 mm. Dose undulations due to differences in inter- and intra-leaf leakage are also correctly predicted. 
The MC MLC model predicts leaf-edge tongue-and-groove dose effect within +/-1% or 1 mm for 95% of the points compared at 6 MV and 88% of the points compared at 18 MV. The dose through a static leaf tip is also predicted generally within +/-1% or 1 mm. Tests with sliding windows of various widths confirm the accuracy of the MLC model for dynamic delivery and indicate that accounting for a slight leaf position error (0.008 cm for our MLC) will improve the accuracy of the model. The MLC model developed is applicable to both dynamic MLC and segmental MLC IMRT beam delivery and will be useful for patient IMRT dose calculations, pre-treatment verification of IMRT delivery and IMRT portal dose transmission dosimetry.
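
    The attenuation half of the model, summing thicknesses over the simplified geometric regions and applying Beer-Lambert attenuation, can be sketched as follows (the Compton scatter sampling of the paper's model is omitted, and mu is a placeholder coefficient, not a measured value):

```python
import math

def mlc_transmission(thicknesses_cm, mu_per_cm):
    """Primary-photon transmission through the MLC: add up the attenuating
    thickness the ray crosses in each simple geometric region, then apply
    Beer-Lambert attenuation exp(-mu * total_thickness)."""
    return math.exp(-mu_per_cm * sum(thicknesses_cm))

# a ray crossing three leaf regions of 2.0, 1.5 and 2.5 cm with mu = 0.5/cm
t = mlc_transmission([2.0, 1.5, 2.5], mu_per_cm=0.5)
```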

  6. Quasi-ideal dynamics of vortex solitons embedded in flattop nonlinear Bessel beams.

    PubMed

    Porras, Miguel A; Ramos, Francisco

    2017-09-01

    The applications of vortex solitons are severely limited by the diffraction and self-defocusing spreading of the background beam where they are nested. Nonlinear Bessel beams in self-defocusing media are nondiffracting, flattop beams where the nested vortex solitons can survive for propagation distances that are one order of magnitude larger than in the Gaussian or super-Gaussian beams. The dynamics of the vortex solitons is studied numerically and found to approach that in the ideal, uniform background, preventing vortex spiraling and decay, which eases vortex steering for applications.

  7. Initial Beam Dynamics Simulations of a High-Average-Current Field-Emission Electron Source in a Superconducting RadioFrequency Gun

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mohsen, O.; Gonin, I.; Kephart, R.

    High-power electron beams are sought-after tools in support of a wide array of societal applications. This paper investigates the production of high-power electron beams by combining a high-current field-emission electron source with a superconducting radio-frequency (SRF) cavity. We carry out beam-dynamics simulations that demonstrate the viability of the scheme to form a ~300 kW average-power electron beam using a 1+1/2-cell SRF gun.

  8. The Modelling of Axially Translating Flexible Beams

    NASA Astrophysics Data System (ADS)

    Theodore, R. J.; Arakeri, J. H.; Ghosal, A.

    1996-04-01

    The axially translating flexible beam with a prismatic joint can be modelled by using the Euler-Bernoulli beam equation together with the convective terms. In general, the method of separation of variables cannot be applied to solve this partial differential equation. In this paper, a non-dimensional form of the Euler-Bernoulli beam equation is presented, obtained by using the concept of group velocity, together with the conditions under which separation of variables and the assumed modes method can be used. The use of clamped-mass boundary conditions leads to a time-dependent frequency equation for the translating flexible beam. A novel method is presented for solving this time-dependent frequency equation by using a differential form of the frequency equation. The assumed modes/Lagrangian formulation of dynamics is employed to derive closed-form equations of motion. It is shown by using Lyapunov's first method that the dynamic responses of the flexural modal variables become unstable during retraction of the flexible beam, while the dynamic response during extension of the beam is stable. Numerical simulation results are presented for the transverse vibration induced by uniform axial motion of a typical flexible beam.

  9. Image processing meta-algorithm development via genetic manipulation of existing algorithm graphs

    NASA Astrophysics Data System (ADS)

    Schalkoff, Robert J.; Shaaban, Khaled M.

    1999-07-01

    Automatic algorithm generation for image processing applications is not a new idea; however, previous work has either been restricted to morphological operators or been impractical. In this paper, we show recent research results in the development and use of meta-algorithms, i.e., algorithms which lead to new algorithms. Although the concept is generally applicable, the application domain in this work is restricted to image processing. The meta-algorithm concept described in this paper is based upon our work on dynamic algorithms. The paper first presents the concept of dynamic algorithms which, on the basis of training and archived algorithmic experience embedded in an algorithm graph (AG), dynamically adjust the sequence of operations applied to the input image data. Each node in the tree-based representation of a dynamic algorithm with out-degree greater than 2 is a decision node. At these nodes, the algorithm examines the input data and determines which path will most likely achieve the desired results. This is currently done using nearest-neighbor classification. The details of this implementation are shown. The constrained perturbation of existing algorithm graphs, coupled with a suitable search strategy, is one mechanism for achieving meta-algorithms and offers rich potential for the discovery of new algorithms. In our work, a meta-algorithm autonomously generates new dynamic algorithm graphs via genetic recombination of existing algorithm graphs. The AG representation is well suited to this genetic-like perturbation, using a commonly employed technique in artificial neural network synthesis, namely the blueprint representation of graphs. A number of examples are given. One of the principal limitations of our current approach is the need for significant human input in the learning phase. Efforts to overcome this limitation are discussed. Future research directions are indicated.

  10. An Improved Co-evolutionary Particle Swarm Optimization for Wireless Sensor Networks with Dynamic Deployment

    PubMed Central

    Wang, Xue; Wang, Sheng; Ma, Jun-Jie

    2007-01-01

    The effectiveness of wireless sensor networks (WSNs) depends on the coverage and target detection probability provided by dynamic deployment, which is usually supported by the virtual force (VF) algorithm. However, in the VF algorithm, the virtual force exerted by stationary sensor nodes hinders the movement of mobile sensor nodes. Particle swarm optimization (PSO) has been introduced as another dynamic deployment algorithm, but in this case the required computation time is the main bottleneck. This paper proposes a dynamic deployment algorithm named “virtual force directed co-evolutionary particle swarm optimization” (VFCPSO), which combines co-evolutionary particle swarm optimization (CPSO) with the VF algorithm: the CPSO uses multiple swarms to cooperatively optimize different components of the solution vectors for dynamic deployment, and the velocity of each particle is updated according to not only the historical local and global optimal solutions but also the virtual forces of sensor nodes. Simulation results demonstrate that the proposed VFCPSO is competent for dynamic deployment in WSNs and has better performance with respect to computation time and effectiveness than the VF, PSO and VFPSO algorithms.
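
    The virtual-force rule that biases the particle velocities can be sketched as follows (the threshold distance and weights are illustrative assumptions; the full VFCPSO adds these forces into the CPSO velocity update):

```python
import numpy as np

def virtual_forces(nodes, d_th, w_att=1.0, w_rep=1.0):
    """Pairwise virtual forces for sensor redeployment: node pairs farther
    apart than the threshold distance d_th attract each other, closer
    pairs repel."""
    forces = np.zeros_like(nodes)
    for i in range(len(nodes)):
        for j in range(len(nodes)):
            if i == j:
                continue
            diff = nodes[j] - nodes[i]
            dist = np.linalg.norm(diff)
            if dist > d_th:
                forces[i] += w_att * (dist - d_th) * diff / dist
            elif dist < d_th:
                forces[i] -= w_rep * (d_th - dist) * diff / dist
    return forces

# two nodes 3 units apart with a 1-unit threshold attract along +/- x
nodes = np.array([[0.0, 0.0], [3.0, 0.0]])
f = virtual_forces(nodes, d_th=1.0)
```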

  11. Finite element formulation of viscoelastic sandwich beams using fractional derivative operators

    NASA Astrophysics Data System (ADS)

    Galucio, A. C.; Deü, J.-F.; Ohayon, R.

    This paper presents a finite element formulation for transient dynamic analysis of sandwich beams with embedded viscoelastic material using fractional derivative constitutive equations. The sandwich configuration is composed of a viscoelastic core (based on Timoshenko theory) sandwiched between elastic faces (based on Euler-Bernoulli assumptions). The viscoelastic model used to describe the behavior of the core is a four-parameter fractional derivative model. Concerning the parameter identification, a strategy to estimate the fractional order of the time derivative and the relaxation time is outlined. Curve-fitting aspects are discussed, showing good agreement with experimental data. In order to implement the viscoelastic model into the finite element formulation, the Grünwald definition of the fractional operator is employed. To solve the equation of motion, a direct time integration method based on the implicit Newmark scheme is used. One of the particularities of the proposed algorithm lies in the storage of the displacement history only, considerably reducing the numerical effort related to the non-locality of fractional operators. After validations, numerical applications are presented in order to analyze truncation effects (fading memory phenomena) and solution convergence aspects.
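
    The Grünwald definition mentioned above admits a compact numerical form. The sketch below is the generic Grünwald-Letnikov approximation, not the paper's finite element implementation; the truncation length `n_terms` corresponds to the fading-memory effect the authors analyze:

```python
def grunwald_weights(alpha, n):
    """Grünwald coefficients by the standard recursion:
    w_0 = 1,  w_j = w_{j-1} * (j - 1 - alpha) / j."""
    w = [1.0]
    for j in range(1, n + 1):
        w.append(w[-1] * (j - 1 - alpha) / j)
    return w

def gl_derivative(f, t, alpha, h, n_terms):
    """Grünwald-Letnikov approximation of the alpha-order derivative of f at t:
    D^alpha f(t) ≈ h^(-alpha) * sum_j w_j * f(t - j*h)."""
    w = grunwald_weights(alpha, n_terms)
    return sum(w[j] * f(t - j * h) for j in range(n_terms + 1)) / h ** alpha

# Sanity check: for alpha = 1 the scheme reduces to a backward difference, so
# the derivative of f(t) = t^2 at t = 1 should come out close to 2.
print(gl_derivative(lambda t: t * t, 1.0, 1.0, 1e-4, 10))
```

    The recursion makes the weights cheap to evaluate, and because |w_j| decays, truncating the history after `n_terms` steps (fading memory) introduces only a controlled error.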

  12. The analysis of thin walled composite laminated helicopter rotor with hierarchical warping functions and finite element method

    NASA Astrophysics Data System (ADS)

    Zhu, Dechao; Deng, Zhongmin; Wang, Xingwei

    2001-08-01

    In the present paper, a series of hierarchical warping functions is developed to analyze the static and dynamic problems of thin walled composite laminated helicopter rotors composed of several layers with a single closed cell. This method is a development and extension of the traditional constrained warping theory of thin walled metallic beams, which has proved very successful since the 1940s. The warping distribution along the perimeter of each layer is expanded into a series of successively corrective warping functions, with the traditional warping function caused by free torsion or free bending as the first term, and is assumed to be piecewise linear along the thickness direction of the layers. The governing equations are derived based upon the variational principle of minimum potential energy for static analysis and the Rayleigh quotient for free vibration analysis. Then the hierarchical finite element method is introduced to form a numerical algorithm. Both static and natural vibration problems of sample box beams are analyzed with the present method to show the main mechanical behavior of the thin walled composite laminated helicopter rotor.

  13. Nonlinear equations for dynamics of pretwisted beams undergoing small strains and large rotations

    NASA Technical Reports Server (NTRS)

    Hodges, D. H.

    1985-01-01

    Nonlinear beam kinematics are developed and applied to the dynamic analysis of a pretwisted, rotating beam element. The common practice of assuming moderate rotations caused by structural deformation in geometric nonlinear analyses of rotating beams was abandoned in the present analysis. The kinematic relations that describe the orientation of the cross section during deformation are simplified by systematically ignoring the extensional strain compared to unity in those relations. Open cross section effects such as warping rigidity and dynamics are ignored, but other influences of warp are retained. The beam cross section is not allowed to deform in its own plane. Various means of implementation are discussed, including a finite element formulation. Numerical results obtained for nonlinear static problems show remarkable agreement with experiment.

  14. SU-F-J-198: A Cross-Platform Adaptation of An a Priori Scatter Correction Algorithm for Cone-Beam Projections to Enable Image- and Dose-Guided Proton Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andersen, A; Casares-Magaz, O; Elstroem, U

    Purpose: Cone-beam CT (CBCT) imaging may enable image- and dose-guided proton therapy, but is challenged by image artefacts. The aim of this study was to demonstrate the general applicability of a previously developed a priori scatter correction algorithm to allow CBCT-based proton dose calculations. Methods: The a priori scatter correction algorithm used a plan CT (pCT) and raw cone-beam projections acquired with the Varian On-Board Imager. The projections were initially corrected for bow-tie filtering and beam hardening and subsequently reconstructed using the Feldkamp-Davis-Kress algorithm (rawCBCT). The rawCBCTs were intensity normalised before rigid and deformable registrations were applied to map the pCTs to the rawCBCTs. The resulting images were forward projected onto the same angles as the raw CB projections. The two projection sets were subtracted from each other, Gaussian and median filtered, then subtracted from the raw projections, and finally reconstructed into the scatter-corrected CBCTs. For evaluation, water equivalent path length (WEPL) maps (from anterior to posterior) were calculated on different reconstructions of three data sets (CB projections and pCT) of three parts of an Alderson phantom. Finally, single beam spot scanning proton plans (0–360 deg gantry angle in steps of 5 deg; using PyTRiP) treating a 5 cm central spherical target in the pCT were re-calculated on scatter-corrected CBCTs with identical targets. Results: The scatter-corrected CBCTs resulted in sub-mm mean WEPL differences relative to the rigid registration of the pCT for all three data sets. These differences were considerably smaller than what was achieved with the regular Varian CBCT reconstruction algorithm (1–9 mm mean WEPL differences). Target coverage in the re-calculated plans was generally improved using the scatter-corrected CBCTs compared to the Varian CBCT reconstruction.
Conclusion: We have demonstrated the general applicability of a priori CBCT scatter correction, potentially opening the way for CBCT-based image/dose-guided proton therapy, including adaptive strategies. Research agreement with Varian Medical Systems, not connected to the present project.
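
    The projection-domain correction pipeline can be caricatured in one dimension; in this sketch a moving average stands in for the Gaussian and median filtering, and all signal values are invented for illustration:

```python
def estimate_scatter(raw_projection, prior_projection, kernel=3):
    """A priori scatter estimate (1-D sketch): subtract the scatter-free forward
    projection of the registered planning CT from the raw projection, then
    low-pass filter the difference (scatter is smooth, anatomy detail is not).
    A simple moving average stands in for the Gaussian/median filtering."""
    diff = [r - p for r, p in zip(raw_projection, prior_projection)]
    half = kernel // 2
    smoothed = []
    for i in range(len(diff)):
        window = diff[max(0, i - half): i + half + 1]
        smoothed.append(sum(window) / len(window))
    return smoothed

def correct(raw_projection, prior_projection):
    """Subtract the smoothed scatter estimate from the raw projection; the
    corrected projections would then be fed to the FDK reconstruction."""
    s = estimate_scatter(raw_projection, prior_projection)
    return [r - e for r, e in zip(raw_projection, s)]

# A raw projection contaminated by a constant scatter offset of 5 is restored
# to the prior's values.
print(correct([15.0, 15.0, 15.0, 15.0], [10.0, 10.0, 10.0, 10.0]))
```

    The key idea survives the simplification: the prior image supplies the high-frequency anatomy, so only the smooth residual is attributed to scatter.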

  15. A fast efficient implicit scheme for the gasdynamic equations using a matrix reduction technique

    NASA Technical Reports Server (NTRS)

    Barth, T. J.; Steger, J. L.

    1985-01-01

    An efficient implicit finite-difference algorithm for the gasdynamic equations utilizing matrix reduction techniques is presented. A significant reduction in arithmetic operations is achieved without loss of the stability characteristics or generality found in the Beam and Warming approximate factorization algorithm. Steady-state solutions to the conservative Euler equations in generalized coordinates are obtained for transonic flows and used to show that the method offers computational advantages over the conventional Beam and Warming scheme. Existing Beam and Warming codes can be retrofitted with minimal effort. The theoretical extension of the matrix reduction technique to the full Navier-Stokes equations in Cartesian coordinates is presented in detail. Linear stability, using a Fourier stability analysis, is demonstrated and discussed for the one-dimensional Euler equations.

  16. DYNA3D: A computer code for crashworthiness engineering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hallquist, J.O.; Benson, D.J.

    1986-09-01

    A finite element program with crashworthiness applications has been developed at LLNL. DYNA3D, an explicit, fully vectorized, finite deformation structural dynamics program, has four capabilities that are critical for the efficient and realistic modeling of crash phenomena: (1) fully optimized nonlinear solid, shell, and beam elements for representing a structure; (2) a broad range of constitutive models for simulating material behavior; (3) sophisticated contact algorithms for impact interactions; and (4) a rigid body capability to represent the bodies away from the impact region at a greatly reduced cost without sacrificing accuracy in the momentum calculations. Basic methodologies of the program are briefly presented along with several crashworthiness calculations. Efficiencies of the Hughes-Liu and Belytschko-Tsay shell formulations are considered.

  17. Modeling of composite beams and plates for static and dynamic analysis

    NASA Technical Reports Server (NTRS)

    Hodges, Dewey H.

    1992-01-01

    A rigorous theory and the corresponding computational algorithms were developed for through-the-thickness analysis of composite plates. This type of analysis is needed in order to find the elastic stiffness constants of a plate. Additionally, the analysis is used to post-process the resulting plate solution in order to find approximate three-dimensional displacement, strain, and stress distributions throughout the plate. It was decided that the variational-asymptotical method (VAM) would serve as a suitable framework in which to solve these types of problems. Work during this reporting period has progressed along two lines: (1) further evaluation of neo-classical plate theory (NCPT) as applied to shear-coupled laminates; and (2) continued modeling of plates with nonuniform thickness.

  18. Transverse Space-Charge Field-Induced Plasma Dynamics for Ultraintense Electron-Beam Characterization

    NASA Astrophysics Data System (ADS)

    Tarkeshian, R.; Vay, J. L.; Lehe, R.; Schroeder, C. B.; Esarey, E. H.; Feurer, T.; Leemans, W. P.

    2018-04-01

    Similarly to laser or x-ray beams, the interaction of sufficiently intense particle beams with neutral gases will result in the creation of plasma. In contrast to photon-based ionization, the strong unipolar field of a particle beam can generate a plasma where the electron population receives a large initial momentum kick and escapes, leaving behind unshielded ions. Measuring the properties of the ensuing Coulomb exploding ions—such as their kinetic energy distribution, yield, and spatial distribution—can provide information about the peak electric fields that are achieved in the electron beams. Particle-in-cell simulations and analytical models are presented for high-brightness electron beams of a few femtoseconds or even hundreds of attoseconds, with transverse beam sizes on the micron scale, as generated by today's free electron lasers. Different density regimes for utilization as a potential diagnostic are explored, and the fundamental differences in plasma dynamical behavior for e-beam or photon-based ionization are highlighted. By measuring the dynamics of field-induced ions for different gas and beam densities, a lower bound on the beam charge density can be obtained in a single shot and in a noninvasive way. The exponential dependence of the ionization yield on the beam properties can provide unprecedented spatial and temporal resolution, at the submicrometer and subfemtosecond scales, respectively, offering a practical and powerful approach to characterizing beams from accelerators at the frontiers of performance.

  19. Design of a bullet beam pattern of a micro ultrasound transducer (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Roh, Yongrae; Lee, Seongmin

    2016-04-01

    An ultrasonic imaging transducer is often required to produce a beam pattern with a low sidelobe level and a small beam width over a long focal region to achieve good image resolution. Normal ultrasound transducers have many channels along their azimuth, which allows easy formation of the sound beam into a desired shape. However, micro-array transducers have no control of the beam pattern along their elevation. In this work, a new method is proposed to manipulate the beam pattern by using an acoustic multifocal lens and a shaded electrode on top of the piezoelectric layer. The shading technique splits an initially uniform electrode into several segments and combines those segments to compose a desired beam pattern. For a given elevation width and frequency, the optimal pattern of the split electrodes was determined by means of the OptQuest-Nonlinear Program (OQ-NLP) algorithm to achieve the lowest sidelobe level. The requirement to achieve a small beam width with a long focal region was satisfied by employing an acoustic lens with three multiple focuses. The optimal geometry of the multifocal lens, such as the radius of curvature and aperture diameter for each focal point, was also determined by the OQ-NLP algorithm. For the optimization, a new index was devised to evaluate the on-axis response: focal region ratio = focal region / minimum beam width. The larger the focal region ratio, the better the beam pattern. The validity of the design has been verified by fabricating and characterizing an experimental prototype of the transducer.

  20. On the importance of FIB-SEM specific segmentation algorithms for porous media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Salzer, Martin, E-mail: martin.salzer@uni-ulm.de; Thiele, Simon, E-mail: simon.thiele@imtek.uni-freiburg.de; Zengerle, Roland, E-mail: zengerle@imtek.uni-freiburg.de

    2014-09-15

    A new algorithmic approach to the segmentation of highly porous three-dimensional image data gained by focused ion beam tomography is described, which extends the key principle of local threshold backpropagation described in Salzer et al. (2012). The technique of focused ion beam tomography has been shown to be capable of imaging the microstructure of functional materials. In order to perform a quantitative analysis on the corresponding microstructure, a segmentation task needs to be performed. However, algorithmic segmentation of images obtained with focused ion beam tomography is a challenging problem for highly porous materials if filling the pore phase, e.g. with epoxy resin, is difficult. The gray intensities of individual voxels are not sufficient to determine the phase represented by them, and usual thresholding methods are not applicable. We thus propose a new approach to segmentation that respects the specifics of the imaging process of focused ion beam tomography. As an application of our approach, the segmentation of three-dimensional images for a cathode material used in polymer electrolyte membrane fuel cells is discussed. We show that our approach preserves significantly more of the original nanostructure than a thresholding approach. - Highlights: • We describe a new approach to the segmentation of FIB-SEM images of porous media. • The first and last occurrences of structures are detected by analysing the z-profiles. • The algorithm is validated by comparing it to a manual segmentation. • The new approach shows significantly fewer artifacts than a thresholding approach. • A structural analysis also shows improved results for the obtained microstructure.
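
    The z-profile idea from the highlights can be sketched as follows; this toy marks material between the first and last above-threshold slices of one lateral position's profile, a deliberate simplification of the actual local-threshold backpropagation algorithm:

```python
def segment_z_profile(z_profile, threshold):
    """Toy FIB-SEM-aware segmentation of one z-profile (the stack of gray values
    at a single lateral position): instead of thresholding every voxel
    independently, material is assumed present from the first to the last slice
    whose intensity exceeds the threshold, so dim interior voxels are not
    misclassified as pore."""
    above = [i for i, v in enumerate(z_profile) if v > threshold]
    mask = [False] * len(z_profile)
    if above:
        for i in range(above[0], above[-1] + 1):
            mask[i] = True
    return mask

# A profile whose interior slice dips below threshold is still segmented as solid.
print(segment_z_profile([0, 5, 1, 6, 0], 2))   # → [False, True, True, True, False]
```

    Plain per-voxel thresholding would have split the structure at the dim slice; using the profile's first and last occurrences is what preserves the connected nanostructure.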

  1. Performance of two commercial electron beam algorithms over regions close to the lung-mediastinum interface, against Monte Carlo simulation and point dosimetry in virtual and anthropomorphic phantoms.

    PubMed

    Ojala, J; Hyödynmaa, S; Barańczyk, R; Góra, E; Waligórski, M P R

    2014-03-01

    Electron radiotherapy is applied to treat the chest wall close to the mediastinum. The performance of the GGPB and eMC algorithms implemented in the Varian Eclipse treatment planning system (TPS) was studied in this region for 9 and 16 MeV beams, against Monte Carlo (MC) simulations, point dosimetry in a water phantom and dose distributions calculated in virtual phantoms. For the 16 MeV beam, the accuracy of these algorithms was also compared over the lung-mediastinum interface region of an anthropomorphic phantom, against MC calculations and thermoluminescence dosimetry (TLD). In the phantom with a lung-equivalent slab the results were generally congruent, the eMC results for the 9 MeV beam slightly overestimating the lung dose, and the GGPB results for the 16 MeV beam underestimating the lung dose. Over the lung-mediastinum interface, for 9 and 16 MeV beams, the GGPB code underestimated the lung dose and overestimated the dose in water close to the lung, compared to the congruent eMC and MC results. In the anthropomorphic phantom, results of TLD measurements and MC and eMC calculations agreed, while the GGPB code underestimated the lung dose. Good agreement between TLD measurements and MC calculations attests to the accuracy of "full" MC simulations as a reference for benchmarking TPS codes. Application of the GGPB code in chest wall radiotherapy may result in significant underestimation of the lung dose and overestimation of dose to the mediastinum, affecting plan optimization over volumes close to the lung-mediastinum interface, such as the lung or heart. Copyright © 2013 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  2. Adaptive Beam Loading Compensation in Room Temperature Bunching Cavities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Edelen, J. P.; Chase, B. E.; Cullerton, E.

    In this paper we present the design, simulation, and proof-of-principle results of an optimization-based adaptive feedforward algorithm for beam-loading compensation in a high-impedance room temperature cavity. We begin with an overview of prior developments in beam loading compensation. We then discuss different techniques for adaptive beam loading compensation and why the use of Newton's method is of interest for this application. This is followed by simulation and initial experimental results of this method.
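
    A Newton-type adaptive feedforward loop of the general kind discussed can be sketched as follows; the cavity error model and convergence settings are hypothetical stand-ins, not the authors' implementation:

```python
def newton_feedforward(measure_error, u0, tol=1e-6, eps=1e-4, max_iter=50):
    """Newton iteration on a feedforward setting u so that the measured
    beam-loading error e(u) is driven to zero; the derivative is estimated by a
    finite difference, standing in for pulse-to-pulse cavity measurements."""
    u = u0
    for _ in range(max_iter):
        e = measure_error(u)
        if abs(e) < tol:
            break
        de = (measure_error(u + eps) - e) / eps   # numerical slope of the error
        u -= e / de                               # Newton update
    return u

# Toy cavity model (hypothetical): the residual error is linear in the drive,
# so Newton's method converges in a single step.
print(newton_feedforward(lambda u: 0.8 * u - 2.0, 0.0))   # ≈ 2.5
```

    For a nearly linear cavity response, each Newton step removes almost all of the residual beam-loading transient, which is why a Newton-based adaptive scheme is attractive for this application.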

  3. Compensation Techniques in Accelerator Physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sayed, Hisham Kamal

    2011-05-01

    Accelerator physics is one of the most diverse multidisciplinary fields of physics, wherein the dynamics of particle beams is studied. It takes more than the understanding of basic electromagnetic interactions to be able to predict the beam dynamics, and to be able to develop new techniques to produce, maintain, and deliver high quality beams for different applications. In this work, some basic theory regarding particle beam dynamics in accelerators will be presented. This basic theory, along with applying state of the art techniques in beam dynamics, will be used in this dissertation to study and solve accelerator physics problems. Two problems involving compensation are studied in the context of the MEIC (Medium Energy Electron Ion Collider) project at Jefferson Laboratory. Several chromaticity (the energy dependence of the particle tune) compensation methods are evaluated numerically and deployed in a figure eight ring designed for the electrons in the collider. Furthermore, transverse coupling optics have been developed to compensate the coupling introduced by the spin rotators in the MEIC electron ring design.

  4. Control algorithms for dynamic attenuators

    PubMed Central

    Hsieh, Scott S.; Pelc, Norbert J.

    2014-01-01

    Purpose: The authors describe algorithms to control dynamic attenuators in CT and compare their performance using simulated scans. Dynamic attenuators are prepatient beam shaping filters that modulate the distribution of x-ray fluence incident on the patient on a view-by-view basis. These attenuators can reduce dose while improving key image quality metrics such as peak or mean variance. In each view, the attenuator presents several degrees of freedom which may be individually adjusted. The total number of degrees of freedom across all views is very large, making many optimization techniques impractical. The authors develop a theory for optimally controlling these attenuators. Special attention is paid to a theoretically perfect attenuator which controls the fluence for each ray individually, but the authors also investigate and compare three other, practical attenuator designs which have been previously proposed: the piecewise-linear attenuator, the translating attenuator, and the double wedge attenuator. Methods: The authors pose and solve the optimization problems of minimizing the mean and peak variance subject to a fixed dose limit. For a perfect attenuator and mean variance minimization, this problem can be solved in simple, closed form. For other attenuator designs, the problem can be decomposed into separate problems for each view to greatly reduce the computational complexity. Peak variance minimization can be approximately solved using iterated, weighted mean variance (WMV) minimization. Also, the authors develop heuristics for the perfect and piecewise-linear attenuators which do not require a priori knowledge of the patient anatomy. The authors compare these control algorithms on different types of dynamic attenuators using simulated raw data from forward projected DICOM files of a thorax and an abdomen. 
Results: The translating and double wedge attenuators reduce dose by an average of 30% relative to current techniques (bowtie filter with tube current modulation) without increasing peak variance. The 15-element piecewise-linear dynamic attenuator reduces dose by an average of 42%, and the perfect attenuator reduces dose by an average of 50%. Improvements in peak variance are several times larger than improvements in mean variance. Heuristic control eliminates the need for a prescan. For the piecewise-linear attenuator, the cost of heuristic control is an increase in dose of 9%. The proposed iterated WMV minimization produces results that are within a few percent of the true solution. Conclusions: Dynamic attenuators show potential for significant dose reduction. A wide class of dynamic attenuators can be accurately controlled using the described methods. PMID:24877818
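
    The closed-form WMV solution and the iterated reweighting for peak variance can be sketched for the idealized "perfect" attenuator; the 1/n variance model and the multiplicative reweighting rule are a plausible reading of the abstract, not the authors' exact formulation:

```python
def wmv_fluence(weights, dose_per_photon, dose_budget):
    """Closed-form weighted-mean-variance (WMV) allocation for an idealized
    'perfect' attenuator: if the variance of ray i scales as 1/n_i (photon count
    n_i), minimizing sum_i w_i/n_i subject to sum_i n_i*d_i = D gives
    n_i proportional to sqrt(w_i/d_i), then rescaled to meet the dose budget."""
    raw = [(w / d) ** 0.5 for w, d in zip(weights, dose_per_photon)]
    scale = dose_budget / sum(n * d for n, d in zip(raw, dose_per_photon))
    return [n * scale for n in raw]

def iterated_wmv(dose_per_photon, dose_budget, iters=20):
    """Approximate peak-variance minimization by iterated WMV: multiply each
    ray's weight by its current variance so the worst rays are pushed down,
    until the variances equalize (the minimax condition)."""
    weights = [1.0] * len(dose_per_photon)
    n = wmv_fluence(weights, dose_per_photon, dose_budget)
    for _ in range(iters):
        variances = [1.0 / ni for ni in n]
        weights = [w * v for w, v in zip(weights, variances)]
        n = wmv_fluence(weights, dose_per_photon, dose_budget)
    return n

# Two rays, one four times as dose-costly: peak-variance (minimax) allocation
# equalizes the photon counts so neither ray dominates the noise.
print(iterated_wmv([1.0, 4.0], 10.0))
```

    The Lagrange-multiplier solution makes each view's subproblem cheap, which is the decomposition that keeps the full many-degree-of-freedom control problem tractable.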

  5. WE-AB-BRB-08: Progress Towards a 2D OSL Dosimetry System Using Al2O3:C Films

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahmed, M F; Yukihara, E; Schnell, E

    Purpose: To develop a 2D dosimetry system based on the optically stimulated luminescence (OSL) of Al2O3:C films for medical applications. Methods: A 2D laser scanning OSL reader was built for readout of newly developed Al2O3:C films (Landauer Inc.). An image reconstruction algorithm was developed to correct for inherent effects introduced by reader design and detector properties. The system was tested using irradiations with photon and carbon ion beams. A calibration was obtained using a 6 MV photon beam from a clinical accelerator and the dose measurement precision was tested using a range of doses and different dose distributions (flat field and wedge field). The dynamic range and performance of the system in the presence of large dose gradients was also tested using 430 MeV/u ¹²C single and multiple pencil beams. All irradiations were performed with Gafchromic EBT3 film for comparison. Results: Preliminary results demonstrate a near-linear OSL dose response to photon fields and the ability to measure dose in dose distributions such as flat field and wedge field. Tests using the ¹²C pencil beam demonstrate the ability to measure doses over four orders of magnitude. The dose profiles measured by the OSL film generally agreed well with those measured by the EBT3 film. The OSL image signal-to-noise ratio obtained in the current conditions requires further improvement. On the other hand, EBT3 films had large uncertainties in the low dose region due to film-to-film or intra-film variation in the background. Conclusion: A 2D OSL dosimetry system was developed and initial tests have demonstrated a wide dynamic range as well as good agreement between the delivered and measured doses. The low background, wide dynamic range and wide range of linearity in dose response observed for the Al2O3:C OSL film can be beneficial for dosimetry in radiation therapy applications, especially for small field dosimetry.
This work has been funded by Landauer Inc. Dr. Eduardo G. Yukihara also would like to thank the Alexander von Humboldt Foundation for its support of his work at the DKFZ.

  6. Implementation of Real-Time Feedback Flow Control Algorithms on a Canonical Testbed

    NASA Technical Reports Server (NTRS)

    Tian, Ye; Song, Qi; Cattafesta, Louis

    2005-01-01

    This report summarizes the activities on "Implementation of Real-Time Feedback Flow Control Algorithms on a Canonical Testbed." The work summarized consists primarily of two parts. The first part summarizes our previous work and the extensions to adaptive ID and control algorithms. The second part concentrates on the validation of adaptive algorithms by applying them to a vibration beam test bed. Extensions to flow control problems are discussed.

  7. Feedback control design for non-inductively sustained scenarios in NSTX-U using TRANSP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boyer, M. D.; Andre, R. G.; Gates, D. A.

    This paper examines a method for real-time control of non-inductively sustained scenarios in NSTX-U by using TRANSP, a time-dependent integrated modeling code for prediction and interpretive analysis of tokamak experimental data, as a simulator. The actuators considered for control in this work are the six neutral beam sources and the plasma boundary shape. To understand the response of the plasma current, stored energy, and central safety factor to these actuators and to enable systematic design of control algorithms, simulations were run in which the actuators were modulated and a linearized dynamic response model was generated. A multi-variable model-based control scheme that accounts for the coupling and slow dynamics of the system while mitigating the effect of actuator limitations was designed and simulated. Simulations show that modest changes in the outer gap and heating power can improve the response time of the system, reject perturbations, and track target values of the controlled variables.

  8. Feedback control design for non-inductively sustained scenarios in NSTX-U using TRANSP

    DOE PAGES

    Boyer, M. D.; Andre, R. G.; Gates, D. A.; ...

    2017-04-24

    This paper examines a method for real-time control of non-inductively sustained scenarios in NSTX-U by using TRANSP, a time-dependent integrated modeling code for prediction and interpretive analysis of tokamak experimental data, as a simulator. The actuators considered for control in this work are the six neutral beam sources and the plasma boundary shape. To understand the response of the plasma current, stored energy, and central safety factor to these actuators and to enable systematic design of control algorithms, simulations were run in which the actuators were modulated and a linearized dynamic response model was generated. A multi-variable model-based control scheme that accounts for the coupling and slow dynamics of the system while mitigating the effect of actuator limitations was designed and simulated. Simulations show that modest changes in the outer gap and heating power can improve the response time of the system, reject perturbations, and track target values of the controlled variables.

  9. Feedback control design for non-inductively sustained scenarios in NSTX-U using TRANSP

    NASA Astrophysics Data System (ADS)

    Boyer, M. D.; Andre, R. G.; Gates, D. A.; Gerhardt, S. P.; Menard, J. E.; Poli, F. M.

    2017-06-01

    This paper examines a method for real-time control of non-inductively sustained scenarios in NSTX-U by using TRANSP, a time-dependent integrated modeling code for prediction and interpretive analysis of tokamak experimental data, as a simulator. The actuators considered for control in this work are the six neutral beam sources and the plasma boundary shape. To understand the response of the plasma current, stored energy, and central safety factor to these actuators and to enable systematic design of control algorithms, simulations were run in which the actuators were modulated and a linearized dynamic response model was generated. A multi-variable model-based control scheme that accounts for the coupling and slow dynamics of the system while mitigating the effect of actuator limitations was designed and simulated. Simulations show that modest changes in the outer gap and heating power can improve the response time of the system, reject perturbations, and track target values of the controlled variables.

  10. Gas Filled RF Resonator Hadron Beam Monitor for Intense Neutrino Beam Experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yonehara, Katsuya; Abrams, Robert; Dinkel, Holly

    MW-class beam facilities are being considered all over the world to produce an intense neutrino beam for fundamental particle physics experiments. A radiation-robust beam monitor system is required to diagnose the primary and secondary beam qualities in high-radiation environments. We have proposed a novel gas-filled RF-resonator hadron beam monitor in which charged particles passing through the resonator produce ionized plasma that changes the permittivity of the gas. The sensitivity of the monitor has been evaluated in numerical simulation. A signal manipulation algorithm has been designed. A prototype system will be constructed and tested by using a proton beam at the MuCool Test Area at Fermilab.

  11. Delivery confirmation of bolus electron conformal therapy combined with intensity modulated x-ray therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kavanaugh, James A.; Hogstrom, Kenneth R.; Fontenot, Jonas P.

    2013-02-15

    Purpose: The purpose of this study was to demonstrate that a bolus electron conformal therapy (ECT) dose plan and a mixed beam plan, composed of an intensity modulated x-ray therapy (IMXT) dose plan optimized on top of the bolus ECT plan, can be accurately delivered. Methods: Calculated dose distributions were compared with measured dose distributions for parotid and chest wall (CW) bolus ECT and mixed beam plans, each simulated in a cylindrical polystyrene phantom that allowed film dose measurements. Bolus ECT plans were created for both parotid and CW PTVs (planning target volumes) using 20 and 16 MeV beams, respectively, whose 90% dose surface conformed to the PTV. Mixed beam plans consisted of an IMXT dose plan optimized on top of the bolus ECT dose plan. The bolus ECT, IMXT, and mixed beam dose distributions were measured using radiographic films in five transverse and one sagittal planes for a total of 36 measurement conditions. Corrections for film dose response, effects of edge-on photon irradiation, and effects of irregular phantom optical properties on the Cerenkov component of the film signal resulted in high precision measurements. Data set consistency was verified by agreement of depth dose at the intersections of the sagittal plane with the five measured transverse planes. For these same depth doses, results for the mixed beam plan agreed with the sum of the individual depth doses for the bolus ECT and IMXT plans. The six mean measured planar dose distributions were compared with those calculated by the treatment planning system for all modalities. Dose agreement was assessed using the 4% dose difference and 0.2 cm distance to agreement. Results: For the combined high-dose region and low-dose region, pass rates for the parotid and CW plans were 98.7% and 96.2%, respectively, for the bolus ECT plans and 97.9% and 97.4%, respectively, for the mixed beam plans.
For the high-dose gradient region, pass rates for the parotid and CW plans were 93.1% and 94.62%, respectively, for the bolus ECT plans and 89.2% and 95.1%, respectively, for the mixed beam plans. For all regions, pass rates for the parotid and CW plans were 98.8% and 97.3%, respectively, for the bolus ECT plans and 97.5% and 95.9%, respectively, for the mixed beam plans. For the IMXT component of the mixed beam plans, pass rates for the parotid and CW plans were 93.7% and 95.8%. Conclusions: Bolus ECT and mixed beam therapy dose delivery to the phantom were more accurate than IMXT delivery, adding confidence to the use of planning, fabrication, and delivery tools for bolus ECT either alone or as part of mixed beam therapy. The methodology reported in this work could serve as a basis for future standardization of the commissioning of bolus ECT or mixed beam therapy. When applying this technology to patients, it is recommended that an electron dose algorithm more accurate than the pencil beam algorithm, e.g., a Monte Carlo algorithm or analytical transport such as the pencil beam redefinition algorithm, be used for planning to ensure the desired accuracy.

  12. Spectrum and power allocation in cognitive multi-beam satellite communications with flexible satellite payloads

    NASA Astrophysics Data System (ADS)

    Liu, Zhihui; Wang, Haitao; Dong, Tao; Yin, Jie; Zhang, Tingting; Guo, Hui; Li, Dequan

    2018-02-01

    In this paper, a cognitive multi-beam satellite system, in which two satellite networks coexist through underlay spectrum sharing, is studied, and a power and spectrum allocation method is employed for interference control and throughput maximization. Specifically, the multi-beam satellite with a flexible payload reuses the authorized spectrum of the primary satellite, adjusting its transmission band as well as its power for each beam to limit its interference on the primary satellite below a prescribed threshold and maximize its own achievable rate. This power and spectrum allocation problem is formulated as a nonconvex mixed program. To solve it effectively, we first introduce the concept of the signal-to-leakage-plus-noise ratio (SLNR) to decouple the multiple transmit power variables in both the objective and the constraints, and then propose a heuristic algorithm to assign spectrum sub-bands. After that, a stepwise plus slice-wise algorithm is proposed to implement the discrete power allocation. Finally, simulation results show that adopting cognitive technology can improve the spectrum efficiency of satellite communication.

  13. Minimization of transverse beam-emittance growth in the 90-degree bending section of the RAON rare-isotope accelerator

    NASA Astrophysics Data System (ADS)

    Oh, B. H.; Yoon, M.

    2016-11-01

    The major contribution to transverse beam-emittance growth (EG) in the RAON heavy-ion accelerator comes from the bending section, which consists of a charge-stripping section, a matching section, and a charge-selection section in sequence. In this paper, we describe our research to minimize the two-dimensional EG in the 90-degree bending section of the RAON currently being developed in Korea. The EG minimization was achieved with the help of multi-objective genetic algorithms and the simplex method. We utilized those algorithms to analyze the 90-degree bending section in a driver linac for the in-flight fragmentation system. Horizontal and vertical EGs were limited to below 10% in the bending section by adjusting the transverse beam optics upstream of the charge-stripping section, redesigning the charge-selection section, and optimizing the vertical beam optics at the entrance of the charge-selection section.

  14. Image processing of underwater multispectral imagery

    USGS Publications Warehouse

    Zawada, D. G.

    2003-01-01

    Capturing in situ fluorescence images of marine organisms presents many technical challenges. The effects of the medium, as well as the particles and organisms within it, are intermixed with the desired signal. Methods for extracting and preparing the imagery for analysis are discussed in reference to a novel underwater imaging system called the low-light-level underwater multispectral imaging system (LUMIS). The instrument supports both uni- and multispectral collections, each of which is discussed in the context of an experimental application. In unispectral mode, LUMIS was used to investigate the spatial distribution of phytoplankton. A thin sheet of laser light (532 nm) induced chlorophyll fluorescence in the phytoplankton, which was recorded by LUMIS. Inhomogeneities in the light sheet led to the development of a beam-pattern-correction algorithm. Separating individual phytoplankton cells from a weak background fluorescence field required a two-step procedure consisting of edge detection followed by a series of binary morphological operations. In multispectral mode, LUMIS was used to investigate the bio-assay potential of fluorescent pigments in corals. Problems with the commercial optical-splitting device produced nonlinear distortions in the imagery. A tessellation algorithm, including an automated tie-point-selection procedure, was developed to correct the distortions. Only pixels corresponding to coral polyps were of interest for further analysis. Extraction of these pixels was performed by a dynamic global-thresholding algorithm.
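    The two-step cell-extraction procedure (edge detection followed by binary morphology) can be sketched on a toy image; the gradient operator, threshold, and 3×3 structuring element below are illustrative choices, not those of the LUMIS pipeline:

```python
def edge_mask(img, thresh):
    """Forward-difference gradient magnitude, thresholded to a binary mask
    (toy edge detector; img is a 2D list of intensities)."""
    h, w = len(img), len(img[0])
    mask = [[0] * w for _ in range(h)]
    for y in range(h - 1):
        for x in range(w - 1):
            gx = img[y][x + 1] - img[y][x]
            gy = img[y + 1][x] - img[y][x]
            if gx * gx + gy * gy >= thresh * thresh:
                mask[y][x] = 1
    return mask

def dilate(mask):
    """Binary dilation with a 3x3 structuring element, one of the
    morphological operations used to close and fill cell outlines."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and mask[ny][nx]:
                        out[y][x] = 1
    return out
```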

  15. Passive microwave remote sensing of rainfall with SSM/I: Algorithm development and implementation

    NASA Technical Reports Server (NTRS)

    Ferriday, James G.; Avery, Susan K.

    1994-01-01

    A physically based algorithm sensitive to emission and scattering is used to estimate rainfall using the Special Sensor Microwave/Imager (SSM/I). The algorithm is derived from radiative transfer calculations through an atmospheric cloud model specifying vertical distributions of ice and liquid hydrometeors as a function of rain rate. The algorithm is structured in two parts: SSM/I brightness temperatures are screened to detect rainfall and are then used in rain-rate calculation. The screening process distinguishes between nonraining background conditions and emission and scattering associated with hydrometeors. Thermometric temperature and polarization thresholds determined from the radiative transfer calculations are used to detect rain, whereas the rain-rate calculation is based on a linear function fit to a linear combination of channels. Separate calculations for ocean and land account for different background conditions. The rain-rate calculation is constructed to respond to both emission and scattering, reduce extraneous atmospheric and surface effects, and to correct for beam filling. The resulting SSM/I rain-rate estimates are compared to three precipitation radars as well as to a dynamically simulated rainfall event. Global estimates from the SSM/I algorithm are also compared to continental and shipboard measurements over a 4-month period. The algorithm is found to accurately describe both localized instantaneous rainfall events and global monthly patterns over both land and ocean. Over land the 4-month mean difference between SSM/I and the Global Precipitation Climatology Center continental rain gauge database is less than 10%. Over the ocean, the mean difference between SSM/I and the Legates and Willmott global shipboard rain gauge climatology is less than 20%.
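    The screen-then-regress structure of the algorithm can be sketched as follows; the channels, threshold, and regression coefficients are placeholders, not the published SSM/I values:

```python
def rain_rate(tb, coeffs, screen_thresh):
    """Toy two-step retrieval: screen for rain, then apply a linear
    rain-rate function.

    tb: dict of brightness temperatures by channel (K). The screening
    threshold and coefficients are illustrative, not the published ones.
    """
    # Screening: scattering by ice hydrometeors depresses the high-frequency
    # channel; flag the pixel as raining only if the depression relative to
    # the low-frequency channel exceeds the threshold.
    depression = tb["19V"] - tb["85V"]
    if depression < screen_thresh:
        return 0.0
    # Rain rate as a linear function of a linear combination of channels.
    r = coeffs[0] + coeffs[1] * tb["19V"] + coeffs[2] * tb["85V"]
    return max(r, 0.0)
```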

  16. SU-F-P-39: End-To-End Validation of a 6 MV High Dose Rate Photon Beam, Configured for Eclipse AAA Algorithm Using Golden Beam Data, for SBRT Treatments Using RapidArc

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferreyra, M; Salinas Aranda, F; Dodat, D

    Purpose: To use end-to-end testing to validate a 6 MV high dose rate photon beam, configured for the Eclipse AAA algorithm using Golden Beam Data (GBD), for SBRT treatments using RapidArc. Methods: Beam data was configured for the Varian Eclipse AAA algorithm using the GBD provided by the vendor. Transverse and diagonal dose profiles, PDDs and output factors down to a field size of 2×2 cm2 were measured on a Varian Trilogy Linac and compared with the GBD library using 2%/2 mm 1D gamma analysis. The MLC transmission factor and dosimetric leaf gap were determined to characterize the MLC in Eclipse. Mechanical and dosimetric tests were performed combining different gantry rotation speeds, dose rates and leaf speeds to evaluate the delivery system performance according to VMAT accuracy requirements. An end-to-end test was implemented planning several SBRT RapidArc treatments on a CIRS 002LFC IMRT Thorax Phantom. The CT scanner calibration curve was acquired and loaded in Eclipse. A PTW 31013 ionization chamber was used with a Keithley 35617EBS electrometer for absolute point dose measurements in water and lung equivalent inserts. TPS calculated planar dose distributions were compared to those measured using EPID and MapCheck, as an independent verification method. Results were evaluated with gamma criteria of 2% dose difference and 2 mm DTA for 95% of points. Results: The GBD set vs. measured data passed 2%/2 mm 1D gamma analysis even for small fields. Machine performance tests show results are independent of machine delivery configuration, as expected. Absolute point dosimetry comparison resulted within 4% for the worst case scenario in lung. Over 97% of the points evaluated in dose distributions passed gamma index analysis. Conclusion: Eclipse AAA algorithm configuration of the 6 MV high dose rate photon beam using GBD proved efficient. End-to-end test dose calculation results indicate it can be used clinically for SBRT using RapidArc.
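    A minimal version of the 1D gamma analysis used for such comparisons might look like this (global normalization to the reference maximum is assumed; 2%/2 mm corresponds to dd=0.02, dta=0.2 cm):

```python
def gamma_1d(ref_pos, ref_dose, eval_pos, eval_dose, dta=0.2, dd=0.02):
    """1D gamma index per reference point (global dose normalization).

    dta in cm, dd as a fraction of the reference maximum; a point passes
    when gamma <= 1. Brute-force search over the evaluated distribution,
    which is fine for 1D profile comparisons.
    """
    dmax = max(ref_dose)
    gammas = []
    for xr, dr in zip(ref_pos, ref_dose):
        best = float("inf")
        for xe, de in zip(eval_pos, eval_dose):
            g2 = ((xe - xr) / dta) ** 2 + ((de - dr) / (dd * dmax)) ** 2
            best = min(best, g2)
        gammas.append(best ** 0.5)
    return gammas
```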

  17. Comparison of optimization algorithms in intensity-modulated radiation therapy planning

    NASA Astrophysics Data System (ADS)

    Kendrick, Rachel

    Intensity-modulated radiation therapy is used to better conform the radiation dose to the target while avoiding healthy tissue. Planning programs employ optimization methods to search for the best fluence of each photon beam, and therefore to create the best treatment plan. The Computational Environment for Radiotherapy Research (CERR), a program written in MATLAB, was used to examine some commonly used algorithms for one 5-beam plan. Algorithms include the genetic algorithm, quadratic programming, pattern search, constrained nonlinear optimization, simulated annealing, the optimization method used in Varian Eclipse™, and some hybrids of these. Quadratic programming, simulated annealing, and a quadratic/simulated annealing hybrid were also separately compared using different prescription doses. The results of each dose-volume histogram as well as the visual dose color wash were used to compare the plans. CERR's built-in quadratic programming provided the best overall plan, but its avoidance of the organ-at-risk was rivaled by other programs. Hybrids of quadratic programming with some of these algorithms suggest the possibility of better planning programs, as shown by the improved quadratic/simulated annealing plan when compared to the simulated annealing algorithm alone. Further experimentation will be done to improve cost functions and computational time.
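    As a toy stand-in for the quadratic-programming step, fluence optimization can be sketched as minimizing ||Dw − p||² over non-negative beamlet weights w by projected gradient descent (D, p, and the solver are illustrative; CERR's actual formulation is not reproduced here):

```python
def optimize_fluence(D, p, iters=500, lr=0.05):
    """Minimize ||D w - p||^2 subject to w >= 0 by projected gradient
    descent -- a toy analogue of the quadratic-programming step.

    D: dose-influence matrix (voxels x beamlets), p: prescribed dose per
    voxel. The non-negativity projection enforces physical fluences.
    """
    m, n = len(D), len(D[0])
    w = [0.0] * n
    for _ in range(iters):
        # residual r = D w - p
        r = [sum(D[i][j] * w[j] for j in range(n)) - p[i] for i in range(m)]
        # gradient g = 2 D^T r
        g = [2 * sum(D[i][j] * r[i] for i in range(m)) for j in range(n)]
        # gradient step, then project onto w >= 0
        w = [max(0.0, w[j] - lr * g[j]) for j in range(n)]
    return w
```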

  18. Discrete and continuous dynamics modeling of a mass moving on a flexible structure

    NASA Technical Reports Server (NTRS)

    Herman, Deborah Ann

    1992-01-01

    A general discrete methodology for modeling the dynamics of a mass that moves on the surface of a flexible structure is developed. This problem was motivated by the Space Station/Mobile Transporter system. A model reduction approach is developed to make the methodology applicable to large structural systems. To validate the discrete methodology, continuous formulations are also developed. Three different systems are examined: (1) simply-supported beam, (2) free-free beam, and (3) free-free beam with two points of contact between the mass and the flexible beam. In addition to validating the methodology, parametric studies were performed to examine how the system's physical properties affect its dynamics.

  19. Segmentation of large periapical lesions toward dental computer-aided diagnosis in cone-beam CT scans

    NASA Astrophysics Data System (ADS)

    Rysavy, Steven; Flores, Arturo; Enciso, Reyes; Okada, Kazunori

    2008-03-01

    This paper presents an experimental study for assessing the applicability of general-purpose 3D segmentation algorithms for analyzing dental periapical lesions in cone-beam computed tomography (CBCT) scans. In the field of Endodontics, clinical studies have been unable to determine if a periapical granuloma can heal with non-surgical methods. Addressing this issue, Simon et al. recently proposed a diagnostic technique which non-invasively classifies target lesions using CBCT. Manual segmentation exploited in their study, however, is too time consuming and unreliable for real world adoption. On the other hand, many technically advanced algorithms have been proposed to address segmentation problems in various biomedical and non-biomedical contexts, but they have not yet been applied to the field of dentistry. Presented in this paper is a novel application of such segmentation algorithms to this clinically significant dental problem. This study evaluates three state-of-the-art graph-based algorithms: a normalized cut algorithm based on a generalized eigenvalue problem, a graph cut algorithm implementing energy minimization techniques, and a random walks algorithm derived from discrete electrical potential theory. In this paper, we extend the original 2D formulation of the above algorithms to segment 3D images directly and apply the resulting algorithms to the dental CBCT images. We experimentally evaluate the quality of the segmentation results for 3D CBCT images, as well as their 2D cross sections. The benefits and pitfalls of each algorithm are highlighted.

  20. Three-dimensional ordering of cold ion beams in a storage ring: A molecular-dynamics simulation study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yuri, Yosuke, E-mail: yuri.yosuke@jaea.go.jp

    Three-dimensional (3D) ordering of a charged-particle beam circulating in a storage ring is systematically studied with a molecular-dynamics simulation code. An ion beam can exhibit a 3D ordered configuration at ultralow temperature as a result of powerful 3D laser cooling. Various unique characteristics of the ordered beams, different from those of crystalline beams, are revealed in detail, such as the single-particle motion in the transverse and longitudinal directions, and the dependence of the tune depression and the Coulomb coupling constant on the operating points.

  1. Scintillator-based transverse proton beam profiler for laser-plasma ion sources.

    PubMed

    Dover, N P; Nishiuchi, M; Sakaki, H; Alkhimova, M A; Faenov, A Ya; Fukuda, Y; Kiriyama, H; Kon, A; Kondo, K; Nishitani, K; Ogura, K; Pikuz, T A; Pirozhkov, A S; Sagisaka, A; Kando, M; Kondo, K

    2017-07-01

    A high repetition rate scintillator-based transverse beam profile diagnostic for laser-plasma accelerated proton beams has been designed and commissioned. The proton beam profiler uses differential filtering to provide coarse energy resolution and a flexible design to allow optimisation for expected beam energy range and trade-off between spatial and energy resolution depending on the application. A plastic scintillator detector, imaged with a standard 12-bit scientific camera, allows data to be taken at a high repetition rate. An algorithm encompassing the scintillator non-linearity is described to estimate the proton spectrum at different spatial locations.
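    The scintillator non-linearity correction can be illustrated with a toy saturating light-yield model and its closed-form inverse; the functional form and coefficients below are assumptions for illustration, not the paper's calibration:

```python
def light_yield(e, a=1.0, b=0.05):
    """Toy saturating scintillator response: light = a*E / (1 + b*E).
    a and b are illustrative coefficients, not measured values."""
    return a * e / (1.0 + b * e)

def energy_from_light(light, a=1.0, b=0.05):
    """Closed-form inverse of the toy response, linearizing the camera
    signal before estimating the proton spectrum at each location."""
    return light / (a - b * light)
```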

  2. Hybrid dose calculation: a dose calculation algorithm for microbeam radiation therapy

    NASA Astrophysics Data System (ADS)

    Donzelli, Mattia; Bräuer-Krisch, Elke; Oelfke, Uwe; Wilkens, Jan J.; Bartzsch, Stefan

    2018-02-01

    Microbeam radiation therapy (MRT) is still a preclinical approach in radiation oncology that uses planar micrometre-wide beamlets with extremely high peak doses, separated by low-dose regions a few hundred micrometres wide. Abundant preclinical evidence demonstrates that MRT spares normal tissue more effectively than conventional radiation therapy, at equivalent tumour control. In order to launch first clinical trials, accurate and efficient dose calculation methods are an inevitable prerequisite. In this work a hybrid dose calculation approach is presented that is based on a combination of Monte Carlo and kernel based dose calculation. In various examples the performance of the algorithm is compared to purely Monte Carlo and purely kernel based dose calculations. The accuracy of the developed algorithm is comparable to conventional pure Monte Carlo calculations. In particular, for inhomogeneous materials the hybrid dose calculation algorithm outperforms purely convolution-based dose calculation approaches. It is demonstrated that the hybrid algorithm can efficiently calculate even complicated pencil beam and cross firing beam geometries. The required calculation times are substantially lower than for pure Monte Carlo calculations.

  3. Genetic algorithms with memory- and elitism-based immigrants in dynamic environments.

    PubMed

    Yang, Shengxiang

    2008-01-01

    In recent years the genetic algorithm community has shown a growing interest in studying dynamic optimization problems. Several approaches have been devised. The random immigrants and memory schemes are two major ones. The random immigrants scheme addresses dynamic environments by maintaining the population diversity while the memory scheme aims to adapt genetic algorithms quickly to new environments by reusing historical information. This paper investigates a hybrid memory and random immigrants scheme, called memory-based immigrants, and a hybrid elitism and random immigrants scheme, called elitism-based immigrants, for genetic algorithms in dynamic environments. In these schemes, the best individual from memory or the elite from the previous generation is retrieved as the base to create immigrants into the population by mutation. This way, not only can diversity be maintained but it is done more efficiently to adapt genetic algorithms to the current environment. Based on a series of systematically constructed dynamic problems, experiments are carried out to compare genetic algorithms with the memory-based and elitism-based immigrants schemes against genetic algorithms with traditional memory and random immigrants schemes and a hybrid memory and multi-population scheme. The sensitivity analysis regarding some key parameters is also carried out. Experimental results show that the memory-based and elitism-based immigrants schemes efficiently improve the performance of genetic algorithms in dynamic environments.
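    The core of the elitism-based immigrants scheme, mutating the previous generation's elite to generate immigrants that replace the worst individuals, can be sketched as one generation step (bit-string encoding and parameter values are illustrative defaults, not those of the paper's experiments):

```python
import random

def elitism_based_immigrants(pop, fitness, ratio=0.2, pmut=0.1, rng=None):
    """One generation of the elitism-based immigrants step.

    Mutates the current elite to create immigrants and replaces the
    worst individuals with them, preserving diversity around the best
    known solution. pop is a list of bit lists.
    """
    rng = rng or random.Random(0)
    ranked = sorted(pop, key=fitness, reverse=True)
    elite = ranked[0]
    n_imm = int(len(pop) * ratio)
    immigrants = []
    for _ in range(n_imm):
        # flip each bit of the elite with probability pmut
        child = [b ^ (1 if rng.random() < pmut else 0) for b in elite]
        immigrants.append(child)
    # keep the best individuals, replace the worst with immigrants
    return ranked[:len(pop) - n_imm] + immigrants
```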

  4. Explicit symplectic algorithms based on generating functions for relativistic charged particle dynamics in time-dependent electromagnetic field

    NASA Astrophysics Data System (ADS)

    Zhang, Ruili; Wang, Yulei; He, Yang; Xiao, Jianyuan; Liu, Jian; Qin, Hong; Tang, Yifa

    2018-02-01

    Relativistic dynamics of a charged particle in time-dependent electromagnetic fields has theoretical significance and a wide range of applications. The numerical simulation of relativistic dynamics is often multi-scale and requires accurate long-term numerical simulations. Therefore, explicit symplectic algorithms are much more preferable than non-symplectic methods and implicit symplectic algorithms. In this paper, we employ the proper time and express the Hamiltonian as the sum of exactly solvable terms and product-separable terms in space-time coordinates. Then, we give explicit symplectic algorithms based on generating functions of orders 2 and 3 for the relativistic dynamics of a charged particle. The methodology itself is not new; it has been applied to the non-relativistic dynamics of charged particles, but the algorithm for relativistic dynamics has much significance in practical simulations, such as the secular simulation of runaway electrons in tokamaks.
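    The generating-function construction itself is beyond a short sketch, but the motivation for symplectic integration can be illustrated with the simplest symplectic scheme, semi-implicit Euler, whose energy error stays bounded over long runs (a harmonic-oscillator Hamiltonian is assumed purely for illustration; this is not the paper's scheme):

```python
def symplectic_euler(q, p, dt, steps, omega=1.0):
    """First-order symplectic (semi-implicit) Euler for
    H = p^2/2 + omega^2 q^2/2. Energy oscillates within O(dt) of its
    initial value indefinitely, instead of drifting as with explicit
    non-symplectic Euler."""
    for _ in range(steps):
        p -= dt * omega**2 * q   # kick: momentum update with old position
        q += dt * p              # drift: position update with new momentum
    return q, p

def energy(q, p, omega=1.0):
    return 0.5 * p * p + 0.5 * omega**2 * q * q
```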

  5. Ndarts

    NASA Technical Reports Server (NTRS)

    Jain, Abhinandan

    2011-01-01

    Ndarts software provides algorithms for computing quantities associated with the dynamics of articulated, rigid-link, multibody systems. It is designed as a general-purpose dynamics library that can be used for the modeling of robotic platforms, space vehicles, molecular dynamics, and other such applications. The architecture and algorithms in Ndarts are based on the Spatial Operator Algebra (SOA) theory for computational multibody and robot dynamics developed at JPL. It uses minimal, internal coordinate models. The algorithms are low-order, recursive scatter/gather algorithms. In comparison with the earlier Darts++ software, this version has a more general and cleaner design needed to support a larger class of computational dynamics needs. It includes a frames infrastructure, allows algorithms to operate on subgraphs of the system, and implements lazy and deferred computation for better efficiency. Dynamics modeling modules such as Ndarts are core building blocks of control and simulation software for space, robotic, mechanism, bio-molecular, and material systems modeling.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deptuch, G. W.; Fahim, F.; Grybos, P.

    An on-chip implementable algorithm for allocation of an X-ray photon imprint, called a hit, to a single pixel in the presence of charge sharing in a highly segmented pixel detector is described. Its proof-of-principle implementation is also given, supported by the results of tests using a highly collimated X-ray photon beam from a synchrotron source. The algorithm handles asynchronous arrivals of X-ray photons. Activation of groups of pixels, comparisons of peak amplitudes of pulses within an active neighborhood, and finally latching of the results of these comparisons constitute the three procedural steps of the algorithm. A grouping of pixels to one virtual pixel that recovers composite signals, and event-driven strobes that control comparisons of fractional signals between neighboring pixels, are the actuators of the algorithm. The circuitry necessary to implement the algorithm requires an extensive inter-pixel connection grid of analog and digital signals that are exchanged between pixels. A test-circuit implementation of the algorithm was achieved with a small array of 32×32 pixels, and the device was exposed to an 8 keV X-ray beam highly collimated to a diameter of 3 μm. The results of these tests are given in the paper, assessing the physical implementation of the algorithm.
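    The winner-take-all comparison at the heart of the algorithm, assigning a charge-shared hit to the pixel with the largest pulse amplitude in its neighborhood, can be sketched in software (the on-chip version works with analog comparators and event-driven strobes; the 2D amplitude map and 3×3 neighborhood here are purely illustrative):

```python
def allocate_hit(amps, x, y):
    """Assign a charge-shared hit seen at pixel (x, y) to the pixel with
    the largest pulse amplitude within its 3x3 neighborhood -- a toy
    winner-take-all sketch of the compare-and-latch idea.

    amps is a 2D amplitude map indexed as amps[row][col] = amps[y][x].
    Returns the winning pixel as (x, y)."""
    h, w = len(amps), len(amps[0])
    best = (x, y)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and amps[ny][nx] > amps[best[1]][best[0]]:
                best = (nx, ny)
    return best
```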

  8. A dose optimization method for electron radiotherapy using randomized aperture beams

    NASA Astrophysics Data System (ADS)

    Engel, Konrad; Gauer, Tobias

    2009-09-01

    The present paper describes the entire optimization process of creating a radiotherapy treatment plan for advanced electron irradiation. Special emphasis is devoted to the selection of beam incidence angles and beam energies as well as to the choice of appropriate subfields generated by a refined version of intensity segmentation and a novel random aperture approach. The algorithms have been implemented in a stand-alone programme using dose calculations from a commercial treatment planning system. For this study, the treatment planning system Pinnacle from Philips has been used and connected to the optimization programme using an ASCII interface. Dose calculations in Pinnacle were performed by Monte Carlo simulations for a remote-controlled electron multileaf collimator (MLC) from Euromechanics. As a result, treatment plans for breast cancer patients could be significantly improved when using randomly generated aperture beams. The combination of beams generated through segmentation and randomization achieved the best results in terms of target coverage and sparing of critical organs. The treatment plans could be further improved by use of a field reduction algorithm. Without a relevant loss in dose distribution, the total number of MLC fields and monitor units could be reduced by up to 20%. In conclusion, using randomized aperture beams is a promising new approach in radiotherapy and exhibits potential for further improvements in dose optimization through a combination of randomized electron and photon aperture beams.

  9. Trajectory optimization for dynamic couch rotation during volumetric modulated arc radiotherapy

    NASA Astrophysics Data System (ADS)

    Smyth, Gregory; Bamber, Jeffrey C.; Evans, Philip M.; Bedford, James L.

    2013-11-01

    Non-coplanar radiation beams are often used in three-dimensional conformal and intensity modulated radiotherapy to reduce dose to organs at risk (OAR) by geometric avoidance. In volumetric modulated arc radiotherapy (VMAT) non-coplanar geometries are generally achieved by applying patient couch rotations to single or multiple full or partial arcs. This paper presents a trajectory optimization method for a non-coplanar technique, dynamic couch rotation during VMAT (DCR-VMAT), which combines ray tracing with a graph search algorithm. Four clinical test cases (partial breast, brain, prostate only, and prostate and pelvic nodes) were used to evaluate the potential OAR sparing for trajectory-optimized DCR-VMAT plans, compared with standard coplanar VMAT. In each case, ray tracing was performed and a cost map reflecting the number of OAR voxels intersected for each potential source position was generated. The least-cost path through the cost map, corresponding to an optimal DCR-VMAT trajectory, was determined using Dijkstra’s algorithm. Results show that trajectory optimization can reduce dose to specified OARs for plans otherwise comparable to conventional coplanar VMAT techniques. For the partial breast case, the mean heart dose was reduced by 53%. In the brain case, the maximum lens doses were reduced by 61% (left) and 77% (right) and the globes by 37% (left) and 40% (right). Bowel mean dose was reduced by 15% in the prostate only case. For the prostate and pelvic nodes case, the bowel V50 Gy and V60 Gy were reduced by 9% and 45% respectively. Future work will involve further development of the algorithm and assessment of its performance over a larger number of cases in site-specific cohorts.
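    The least-cost trajectory search can be sketched directly: Dijkstra's algorithm over a 2D cost map whose entries count OAR voxels intersected from each source position (the grid, 4-connectivity, and cost values below are illustrative, not the clinical cost maps):

```python
import heapq

def least_cost_path(cost, start, goal):
    """Dijkstra's algorithm over a 2D cost map (e.g., rows = couch
    angles, cols = gantry angles); returns the total cost of the
    cheapest 4-connected path from start to goal, including both
    endpoints. Toy version of the trajectory search."""
    h, w = len(cost), len(cost[0])
    dist = {start: cost[start[0]][start[1]]}
    pq = [(dist[start], start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            return d
        if d > dist.get((r, c), float("inf")):
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(pq, (nd, (nr, nc)))
    return float("inf")
```

    In the clinical setting the recovered least-cost path is then resampled into a deliverable couch/gantry trajectory.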

  10. Dynamic characterization of a damaged beam using empirical mode decomposition and Hilbert spectrum method

    NASA Astrophysics Data System (ADS)

    Chang, Chih-Chen; Poon, Chun-Wing

    2004-07-01

    Recently, the empirical mode decomposition (EMD) in combination with the Hilbert spectrum method has been proposed to identify the dynamic characteristics of linear structures. In this study, this EMD and Hilbert spectrum method is used to analyze the dynamic characteristics of a damaged reinforced concrete (RC) beam in the laboratory. The RC beam is 4 m long with a cross section of 200 mm × 250 mm. The beam is sequentially subjected to a concentrated load of different magnitudes at the mid-span to produce different degrees of damage. An impact load is applied around the mid-span to excite the beam. Responses of the beam are recorded by four accelerometers. Results indicate that the EMD and Hilbert spectrum method can reveal the variation of the dynamic characteristics in the time domain. These results are also compared with those obtained using the Fourier analysis. In general, it is found that the two sets of results correlate quite well in terms of mode counts and frequency values. Some differences, however, can be seen in the damping values, which perhaps can be attributed to the linear assumption of the Fourier transform.

  11. Beam Stability R&D for the APS MBA Upgrade

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sereno, Nicholas S.; Arnold, Ned D.; Bui, Hanh D.

    2015-01-01

    Beam diagnostics required for the APS multi-bend achromat (MBA) upgrade are driven by ambitious beam stability requirements. The major AC stability challenge is to correct rms beam motion to 10% of the rms beam size at the insertion device source points from 0.01 to 1000 Hz. The vertical plane represents the biggest challenge for AC stability, which is required to be 400 nm rms for a 4-micron vertical beam size. In addition to AC stability, long-term drift over a period of seven days is required to be 1 micron or less. Major diagnostics R&D components include improved rf beam position processing using commercially available FPGA-based BPM processors, new X-ray beam position monitors based on hard X-ray fluorescence from copper and Compton scattering off diamond, mechanical motion sensing to detect and correct long-term vacuum chamber drift, a new feedback system featuring a tenfold increase in sampling rate, and a several-fold increase in the number of fast correctors and BPMs in the feedback algorithm. Feedback system development represents a major effort, and we are pursuing development of a novel algorithm that simultaneously integrates orbit correction for both slow and fast correctors down to DC. Finally, a new data acquisition system (DAQ) is being developed to simultaneously acquire streaming data from all diagnostics as well as the feedback processors for commissioning and fault diagnosis. Results of studies and the design effort are reported.

  12. Sublattice parallel replica dynamics.

    PubMed

    Martínez, Enrique; Uberuaga, Blas P; Voter, Arthur F

    2014-06-01

    Exascale computing presents a challenge for the scientific community as new algorithms must be developed to take full advantage of the new computing paradigm. Atomistic simulation methods that offer full fidelity to the underlying potential, i.e., molecular dynamics (MD) and parallel replica dynamics, fail to use the whole machine speedup, leaving a region in time and sample size space that is unattainable with current algorithms. In this paper, we present an extension of the parallel replica dynamics algorithm [A. F. Voter, Phys. Rev. B 57, R13985 (1998)] by combining it with the synchronous sublattice approach of Shim and Amar [Phys. Rev. B 71, 125432 (2005)], thereby exploiting event locality to improve the algorithm scalability. This algorithm is based on a domain decomposition in which events happen independently in different regions in the sample. We develop an analytical expression for the speedup given by this sublattice parallel replica dynamics algorithm and compare it with parallel MD and traditional parallel replica dynamics. We demonstrate how this algorithm, which introduces a slight additional approximation of event locality, enables the study of physical systems unreachable with traditional methodologies and promises to better utilize the resources of current high performance and future exascale computers.

  13. An empirical model for calculation of the collimator contamination dose in therapeutic proton beams

    NASA Astrophysics Data System (ADS)

    Vidal, M.; De Marzi, L.; Szymanowski, H.; Guinement, L.; Nauraye, C.; Hierso, E.; Freud, N.; Ferrand, R.; François, P.; Sarrut, D.

    2016-02-01

    Collimators are used as lateral beam shaping devices in proton therapy with passive scattering beam lines. The dose contamination due to collimator scattering can be as high as 10% of the maximum dose and influences calculation of the output factor or monitor units (MU). To date, commercial treatment planning systems generally use a zero-thickness collimator approximation ignoring edge scattering in the aperture collimator, and few analytical models have been proposed to take scattering effects into account, mainly limited to the inner collimator face component. The aim of this study was to characterize and model aperture contamination by means of a fast and accurate analytical model. The entrance face collimator scatter distribution was modeled as a 3D secondary dose source. Predicted dose contaminations were compared to measurements and Monte Carlo simulations. Measurements were performed on two different proton beam lines (a fixed horizontal beam line and a gantry beam line) with divergent apertures and for several field sizes and energies. Discrepancies between analytical algorithm dose prediction and measurements were decreased from 10% to 2% using the proposed model. The gamma index (2%/1 mm) criterion was satisfied for more than 90% of pixels. The proposed analytical algorithm increases the accuracy of analytical dose calculations with reasonable computation times.

  14. Faceting for direction-dependent spectral deconvolution

    NASA Astrophysics Data System (ADS)

    Tasse, C.; Hugo, B.; Mirmont, M.; Smirnov, O.; Atemkeng, M.; Bester, L.; Hardcastle, M. J.; Lakhoo, R.; Perkins, S.; Shimwell, T.

    2018-04-01

    The new generation of radio interferometers is characterized by high sensitivity, wide fields of view and large fractional bandwidth. To synthesize the deepest images enabled by the high dynamic range of these instruments requires us to take into account the direction-dependent Jones matrices, while estimating the spectral properties of the sky in the imaging and deconvolution algorithms. In this paper, we discuss and implement a wideband wide-field spectral deconvolution framework (DDFacet) based on image-plane faceting that takes into account generic direction-dependent effects. Specifically, we present a wide-field co-planar faceting scheme, and discuss the various effects that need to be taken into account to solve the deconvolution problem (image-plane normalization, position-dependent point spread function, etc.). We discuss two wideband spectral deconvolution algorithms based on hybrid matching pursuit and sub-space optimisation, respectively. A few interesting technical features incorporated in our imager are discussed, including baseline-dependent averaging, which has the effect of improving computing efficiency. The version of DDFacet presented here can account for any externally defined Jones matrices and/or beam patterns.

  15. Real-time dynamics and control strategies for space operations of flexible structures

    NASA Technical Reports Server (NTRS)

    Park, K. C.; Alvin, K. F.; Alexander, S.

    1993-01-01

    This project (NAG9-574) was meant to be a three-year research project. However, due to NASA's reorganizations during 1992, the project was funded only for one year. Accordingly, every effort was made to prepare the present final report as though the project had been planned for a one-year duration. Originally, during the first year we were planning to accomplish the following: we were to start with a three-dimensional flexible manipulator beam with articulated joints and with a linear control-based controller applied at the joints; using this simple example, we were to design the software systems requirements for real-time processing, introduce the streamlining of various computational algorithms, perform the necessary reorganization of the partitioned simulation procedures, and assess the potential speed-up realization of the solution process by parallel computations. The three reports included as part of the final report address: the streamlining of various computational algorithms; the necessary reorganization of the partitioned simulation procedures, in particular the observer models; and an initial attempt at reconfiguring the flexible space structures.

  16. Fluorescence laminar optical tomography for brain imaging: system implementation and performance evaluation.

    PubMed

    Azimipour, Mehdi; Sheikhzadeh, Mahya; Baumgartner, Ryan; Cullen, Patrick K; Helmstetter, Fred J; Chang, Woo-Jin; Pashaie, Ramin

    2017-01-01

    We present our effort in implementing a fluorescence laminar optical tomography scanner which is specifically designed for noninvasive three-dimensional imaging of fluorescent proteins in the brains of small rodents. A laser beam, after passing through a cylindrical lens, scans the brain tissue from the surface while the emission signal is captured by the epi-fluorescence optics and is recorded using an electron multiplication CCD sensor. Image reconstruction algorithms are developed based on Monte Carlo simulation to model light–tissue interaction and generate the sensitivity matrices. To solve the inverse problem, we used the iterative simultaneous algebraic reconstruction technique. The performance of the developed system was evaluated by imaging microfabricated silicon microchannels embedded inside a substrate with optical properties close to those of brain tissue as a tissue phantom, and ultimately by scanning brain tissue in vivo. Details of the hardware design and reconstruction algorithms are discussed, and several experimental results are presented. The developed system can specifically facilitate neuroscience experiments where fluorescence imaging and molecular genetic methods are used to study the dynamics of the brain circuitries.
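The simultaneous algebraic reconstruction technique (SART) used above to solve the inverse problem can be sketched on a toy linear system. This is a generic, dependency-free SART iteration for A x = b, not the authors' Monte-Carlo-derived sensitivity matrices:

```python
def sart(A, b, n_iters=200, relax=1.0):
    """Simultaneous algebraic reconstruction technique for A x = b.

    All rays are used in each sweep: residuals are normalized by row sums
    (total sensitivity of each ray) and the back-distributed corrections
    by column sums, which is the standard SART weighting.
    """
    m, n = len(A), len(A[0])
    x = [0.0] * n
    row_sum = [sum(A[i]) or 1.0 for i in range(m)]
    col_sum = [sum(A[i][j] for i in range(m)) or 1.0 for j in range(n)]
    for _ in range(n_iters):
        resid = [(b[i] - sum(A[i][j] * x[j] for j in range(n))) / row_sum[i]
                 for i in range(m)]
        for j in range(n):
            x[j] += relax * sum(A[i][j] * resid[i] for i in range(m)) / col_sum[j]
    return x

# Toy "sinogram": two unknown pixels probed by two overlapping rays.
A = [[1.0, 1.0], [1.0, 0.0]]
b = [3.0, 1.0]          # consistent with the solution x = [1, 2]
x = sart(A, b)
```

In the tomography setting, the rows of A would be the sensitivity-matrix entries produced by the Monte Carlo photon-transport model.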

  17. 4D Cone-beam CT reconstruction using a motion model based on principal component analysis

    PubMed Central

    Staub, David; Docef, Alen; Brock, Robert S.; Vaman, Constantin; Murphy, Martin J.

    2011-01-01

    Purpose: To provide a proof of concept validation of a novel 4D cone-beam CT (4DCBCT) reconstruction algorithm and to determine the best methods to train and optimize the algorithm. Methods: The algorithm animates a patient fan-beam CT (FBCT) with a patient specific parametric motion model in order to generate a time series of deformed CTs (the reconstructed 4DCBCT) that track the motion of the patient anatomy on a voxel by voxel scale. The motion model is constrained by requiring that projections cast through the deformed CT time series match the projections of the raw patient 4DCBCT. The motion model uses a basis of eigenvectors that are generated via principal component analysis (PCA) of a training set of displacement vector fields (DVFs) that approximate patient motion. The eigenvectors are weighted by a parameterized function of the patient breathing trace recorded during 4DCBCT. The algorithm is demonstrated and tested via numerical simulation. Results: The algorithm is shown to produce accurate reconstruction results for the most complicated simulated motion, in which voxels move with a pseudo-periodic pattern and relative phase shifts exist between voxels. The tests show that principal component eigenvectors trained on DVFs from a novel 2D/3D registration method give substantially better results than eigenvectors trained on DVFs obtained by conventionally registering 4DCBCT phases reconstructed via filtered backprojection. Conclusions: Proof of concept testing has validated the 4DCBCT reconstruction approach for the types of simulated data considered. In addition, the authors found the 2D/3D registration approach to be our best choice for generating the DVF training set, and the Nelder-Mead simplex algorithm the most robust optimization routine. PMID:22149852
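The PCA step that builds the motion-model basis can be sketched as follows. This is a generic illustration, not the authors' pipeline: each displacement vector field (DVF) is flattened to a vector, and the leading eigenvector of the sample covariance is extracted by power iteration (no external dependencies):

```python
def leading_pca_mode(dvfs, n_power_iters=100):
    """Leading principal component of a set of displacement vector fields.

    Each DVF is a flat list of floats; the dominant eigenvector of the
    sample covariance is found by power iteration on the implicit matrix.
    """
    n_samples, dim = len(dvfs), len(dvfs[0])
    mean = [sum(v[j] for v in dvfs) / n_samples for j in range(dim)]
    centered = [[v[j] - mean[j] for j in range(dim)] for v in dvfs]

    def cov_times(w):
        # (X^T X / n) w without forming the covariance matrix explicitly
        proj = [sum(c[j] * w[j] for j in range(dim)) for c in centered]
        return [sum(proj[i] * centered[i][j] for i in range(n_samples)) / n_samples
                for j in range(dim)]

    w = [1.0] * dim
    for _ in range(n_power_iters):
        w = cov_times(w)
        norm = sum(v * v for v in w) ** 0.5
        w = [v / norm for v in w]
    return mean, w

# Synthetic breathing-like DVFs: one dominant motion direction, scaled per phase.
base = [1.0, 0.5, -0.5, 0.0]
dvfs = [[a * b for b in base] for a in (-1.0, -0.3, 0.4, 1.1)]
mean, mode = leading_pca_mode(dvfs)
```

In the paper's algorithm, the weights applied to such eigenvectors are parameterized functions of the breathing trace and are fit so that projections through the deformed CT match the measured CBCT projections.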

  18. Dosimetric verification of radiotherapy treatment planning systems in Serbia: national audit

    PubMed Central

    2012-01-01

    Background Independent external audits play an important role in the quality assurance programme in radiation oncology. The audit supported by the IAEA in Serbia was designed to review the whole chain of activities in 3D conformal radiotherapy (3D-CRT) workflow, from patient data acquisition to treatment planning and dose delivery. The audit was based on the IAEA recommendations and focused on the dosimetry part of the treatment planning and delivery processes. Methods The audit was conducted in three radiotherapy departments of Serbia. An anthropomorphic phantom was scanned with a computed tomography unit (CT) and treatment plans for eight different test cases involving various beam configurations suggested by the IAEA were prepared on local treatment planning systems (TPSs). The phantom was irradiated following the treatment plans for these test cases and doses in specific points were measured with an ionization chamber. The differences between the measured and calculated doses were reported. Results The measurements were conducted for different photon beam energies and TPS calculation algorithms. The deviations between the measured and calculated values for all test cases made with advanced algorithms were within the agreement criteria, while larger deviations were observed for simpler algorithms. The number of measurements with results outside the agreement criteria increased with the beam energy and decreased with TPS calculation algorithm sophistication. Also, a few errors in the basic dosimetry data in the TPSs were detected and corrected. Conclusions The audit helped the users to better understand the operational features and limitations of their TPSs and resulted in increased confidence in dose calculation accuracy using TPSs. The audit results indicated the shortcomings of simpler algorithms for the test cases performed and, therefore, the transition to more advanced algorithms is highly desirable. PMID:22971539

  19. Investigation of cone-beam CT image quality trade-off for image-guided radiation therapy

    NASA Astrophysics Data System (ADS)

    Bian, Junguo; Sharp, Gregory C.; Park, Yang-Kyun; Ouyang, Jinsong; Bortfeld, Thomas; El Fakhri, Georges

    2016-05-01

    It is well-known that projections acquired over an angular range slightly over 180° (so-called short scan) are sufficient for fan-beam reconstruction. However, due to practical imaging conditions (projection data and reconstruction image discretization, physical factors, and data noise), the short-scan reconstructions may have different appearances and properties from the full-scan (scans over 360°) reconstructions. Nevertheless, short-scan configurations have been used in applications such as cone-beam CT (CBCT) for head-neck-cancer image-guided radiation therapy (IGRT), which requires only a small field of view, due to the potentially reduced imaging time and dose. In this work, we studied the image quality trade-off for full, short, and full/short scan configurations with both conventional filtered-backprojection (FBP) reconstruction and iterative reconstruction algorithms based on total-variation (TV) minimization for head-neck-cancer IGRT. Anthropomorphic and Catphan phantoms were scanned at different exposure levels with a clinical scanner used in IGRT. Both visualization- and numerical-metric-based evaluation studies were performed. The results indicate that the optimal exposure level and number of views are in the middle range for both FBP and TV-based iterative algorithms and the optimization is object-dependent and task-dependent. The optimal number of views decreases with the total exposure level for both FBP and TV-based algorithms. The results also indicate there are slight differences between FBP and TV-based iterative algorithms for the image quality trade-off: FBP seems to favor a larger number of views, while the TV-based algorithm is more robust to different data conditions (number of views and exposure levels) than the FBP algorithm. The studies can provide a general guideline for image-quality optimization for CBCT used in IGRT and other applications.
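The TV-based iterative reconstruction referenced above relies on total-variation minimization. A minimal 1D denoising analogue (gradient descent on a smoothed TV penalty; an illustration of the principle, not the reconstruction algorithm used in the study) looks like this:

```python
def tv_denoise_1d(signal, lam=0.05, step=0.1, n_iters=500, eps=1e-3):
    """Gradient descent on 0.5*||x - y||^2 + lam * sum_i |x[i+1] - x[i]|,
    with |.| smoothed as sqrt(d^2 + eps) so the gradient is defined at 0."""
    x = list(signal)
    for _ in range(n_iters):
        g = [xi - yi for xi, yi in zip(x, signal)]  # fidelity gradient
        for i in range(len(x) - 1):
            d = x[i + 1] - x[i]
            sub = d / (d * d + eps) ** 0.5          # smoothed sign(d)
            g[i] -= lam * sub
            g[i + 1] += lam * sub
        x = [xi - step * gi for xi, gi in zip(x, g)]
    return x

# Noisy two-level profile: TV regularization flattens each plateau
# while largely preserving the edge between them.
noisy = [0.0, 0.1, -0.1, 1.1, 0.9, 1.0]
clean = tv_denoise_1d(noisy)
```

In the CBCT setting the same penalty is applied to image gradients in 2D/3D and is balanced against a projection-data fidelity term, which is what makes the algorithm robust to few-view, low-exposure data.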

  20. Dosimetric verification of radiotherapy treatment planning systems in Serbia: national audit.

    PubMed

    Rutonjski, Laza; Petrović, Borislava; Baucal, Milutin; Teodorović, Milan; Cudić, Ozren; Gershkevitsh, Eduard; Izewska, Joanna

    2012-09-12

    Independent external audits play an important role in the quality assurance programme in radiation oncology. The audit supported by the IAEA in Serbia was designed to review the whole chain of activities in 3D conformal radiotherapy (3D-CRT) workflow, from patient data acquisition to treatment planning and dose delivery. The audit was based on the IAEA recommendations and focused on the dosimetry part of the treatment planning and delivery processes. The audit was conducted in three radiotherapy departments of Serbia. An anthropomorphic phantom was scanned with a computed tomography unit (CT) and treatment plans for eight different test cases involving various beam configurations suggested by the IAEA were prepared on local treatment planning systems (TPSs). The phantom was irradiated following the treatment plans for these test cases and doses in specific points were measured with an ionization chamber. The differences between the measured and calculated doses were reported. The measurements were conducted for different photon beam energies and TPS calculation algorithms. The deviations between the measured and calculated values for all test cases made with advanced algorithms were within the agreement criteria, while larger deviations were observed for simpler algorithms. The number of measurements with results outside the agreement criteria increased with the beam energy and decreased with TPS calculation algorithm sophistication. Also, a few errors in the basic dosimetry data in the TPSs were detected and corrected. The audit helped the users to better understand the operational features and limitations of their TPSs and resulted in increased confidence in dose calculation accuracy using TPSs. The audit results indicated the shortcomings of simpler algorithms for the test cases performed and, therefore, the transition to more advanced algorithms is highly desirable.

  1. CUDA-based high-performance computing of the S-BPF algorithm with no-waiting pipelining

    NASA Astrophysics Data System (ADS)

    Deng, Lin; Yan, Bin; Chang, Qingmei; Han, Yu; Zhang, Xiang; Xi, Xiaoqi; Li, Lei

    2015-10-01

    The backprojection-filtration (BPF) algorithm has become a good solution for local reconstruction in cone-beam computed tomography (CBCT). However, the reconstruction speed of BPF is a severe limitation for clinical applications. The selective-backprojection filtration (S-BPF) algorithm was developed to improve the parallel performance of BPF by selective backprojection. Furthermore, the general-purpose graphics processing unit (GP-GPU) is a popular tool for accelerating the reconstruction, and much work has been performed aiming at the optimization of the cone-beam back-projection. As the cone-beam back-projection becomes faster, data transportation accounts for a much larger share of the reconstruction time than before. This paper focuses on minimizing the total reconstruction time of the S-BPF algorithm by hiding the data transportation among the hard disk, CPU, and GPU. Based on an analysis of the S-BPF algorithm, several strategies are implemented: (1) asynchronous calls are used to overlap the execution of the CPU and GPU, (2) an innovative strategy is applied to obtain the DBP image while effectively hiding the transport time, and (3) two streams for data transportation and calculation are synchronized by cudaEvent in the inverse finite Hilbert transform on the GPU. Our main contribution is an implementation of the S-BPF algorithm in which the GPU calculates continuously and data transportation adds no extra time cost: a 512³ volume is reconstructed in less than 0.7 s on a single Tesla K20 GPU from 182 projection views with 512² pixels per projection. The time cost of our implementation is about half of that without the overlap behavior.
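The overlap strategy described above (compute on one chunk while the next is in flight) can be sketched in a hardware-agnostic way. The snippet below uses a bounded queue and a background thread as a stand-in for CUDA streams and asynchronous copies; the function names are illustrative, not from the paper's code:

```python
import threading
import queue

def pipelined(chunks, transfer, compute):
    """Overlap 'transfer' (e.g. disk or host-to-device copies) with 'compute'.

    A background thread streams transferred chunks into a bounded queue
    while the main thread consumes them, mimicking the asynchronous
    copy/compute overlap used on the GPU.
    """
    q = queue.Queue(maxsize=2)   # double buffering: at most 2 chunks in flight

    def producer():
        for c in chunks:
            q.put(transfer(c))
        q.put(None)              # sentinel: no more data

    threading.Thread(target=producer, daemon=True).start()
    results = []
    while True:
        item = q.get()
        if item is None:
            break
        results.append(compute(item))
    return results

out = pipelined(range(5), transfer=lambda c: c * 10, compute=lambda d: d + 1)
```

On a GPU the same pattern is expressed with two CUDA streams, `cudaMemcpyAsync`, and events to synchronize the copy and kernel queues.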

  2. Investigation of cone-beam CT image quality trade-off for image-guided radiation therapy.

    PubMed

    Bian, Junguo; Sharp, Gregory C; Park, Yang-Kyun; Ouyang, Jinsong; Bortfeld, Thomas; El Fakhri, Georges

    2016-05-07

    It is well-known that projections acquired over an angular range slightly over 180° (so-called short scan) are sufficient for fan-beam reconstruction. However, due to practical imaging conditions (projection data and reconstruction image discretization, physical factors, and data noise), the short-scan reconstructions may have different appearances and properties from the full-scan (scans over 360°) reconstructions. Nevertheless, short-scan configurations have been used in applications such as cone-beam CT (CBCT) for head-neck-cancer image-guided radiation therapy (IGRT), which requires only a small field of view, due to the potentially reduced imaging time and dose. In this work, we studied the image quality trade-off for full, short, and full/short scan configurations with both conventional filtered-backprojection (FBP) reconstruction and iterative reconstruction algorithms based on total-variation (TV) minimization for head-neck-cancer IGRT. Anthropomorphic and Catphan phantoms were scanned at different exposure levels with a clinical scanner used in IGRT. Both visualization- and numerical-metric-based evaluation studies were performed. The results indicate that the optimal exposure level and number of views are in the middle range for both FBP and TV-based iterative algorithms and the optimization is object-dependent and task-dependent. The optimal number of views decreases with the total exposure level for both FBP and TV-based algorithms. The results also indicate there are slight differences between FBP and TV-based iterative algorithms for the image quality trade-off: FBP seems to favor a larger number of views, while the TV-based algorithm is more robust to different data conditions (number of views and exposure levels) than the FBP algorithm. The studies can provide a general guideline for image-quality optimization for CBCT used in IGRT and other applications.

  3. Recursive dynamics for flexible multibody systems using spatial operators

    NASA Technical Reports Server (NTRS)

    Jain, A.; Rodriguez, G.

    1990-01-01

    Due to their structural flexibility, spacecraft and space manipulators are multibody systems with complex dynamics and possess a large number of degrees of freedom. Here the spatial operator algebra methodology is used to develop a new dynamics formulation and spatially recursive algorithms for such flexible multibody systems. A key feature of the formulation is that the operator description of the flexible system dynamics is identical in form to the corresponding operator description of the dynamics of rigid multibody systems. A significant advantage of this unifying approach is that it allows ideas and techniques for rigid multibody systems to be easily applied to flexible multibody systems. The algorithms use standard finite-element and assumed modes models for the individual body deformation. A Newton-Euler Operator Factorization of the mass matrix of the multibody system is first developed. It forms the basis for recursive algorithms such as for the inverse dynamics, the computation of the mass matrix, and the composite body forward dynamics for the system. Subsequently, an alternative Innovations Operator Factorization of the mass matrix, each of whose factors is invertible, is developed. It leads to an operator expression for the inverse of the mass matrix, and forms the basis for the recursive articulated body forward dynamics algorithm for the flexible multibody system. For simplicity, most of the development here focuses on serial chain multibody systems. However, extensions of the algorithms to general topology flexible multibody systems are described. While the computational cost of the algorithms depends on factors such as the topology and the amount of flexibility in the multibody system, in general, it appears that in contrast to the rigid multibody case, the articulated body forward dynamics algorithm is the more efficient algorithm for flexible multibody systems containing even a small number of flexible bodies. The variety of algorithms described here permits a user to choose the algorithm which is optimal for the multibody system at hand. The availability of a number of algorithms is even more important for real-time applications, where implementation on parallel processors or custom computing hardware is often necessary to maximize speed.

  4. Modeling of polychromatic attenuation using computed tomography reconstructed images

    NASA Technical Reports Server (NTRS)

    Yan, C. H.; Whalen, R. T.; Beaupre, G. S.; Yen, S. Y.; Napel, S.

    1999-01-01

    This paper presents a procedure for estimating an accurate model of the CT imaging process including spectral effects. As raw projection data are typically unavailable to the end-user, we adopt a post-processing approach that utilizes the reconstructed images themselves. This approach includes errors from x-ray scatter and the nonidealities of the built-in soft tissue correction into the beam characteristics, which is crucial to beam hardening correction algorithms that are designed to be applied directly to CT reconstructed images. We formulate this approach as a quadratic programming problem and propose two different methods, dimension reduction and regularization, to overcome ill conditioning in the model. For the regularization method we use a statistical procedure, Cross Validation, to select the regularization parameter. We have constructed step-wedge phantoms to estimate the effective beam spectrum of a GE CT-I scanner. Using the derived spectrum, we computed the attenuation ratios for the wedge phantoms and found that the worst case modeling error is less than 3% of the corresponding attenuation ratio. We have also built two test (hybrid) phantoms to evaluate the effective spectrum. Based on these test phantoms, we have shown that the effective beam spectrum provides an accurate model for the CT imaging process. Last, we used a simple beam hardening correction experiment to demonstrate the effectiveness of the estimated beam profile for removing beam hardening artifacts. We hope that this estimation procedure will encourage more independent research on beam hardening corrections and will lead to the development of application-specific beam hardening correction algorithms.
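The spectrum-estimation step can be illustrated with a tiny regularized least-squares fit. The sketch below assumes a hypothetical two-bin spectrum with known per-bin attenuation coefficients `mus` and solves the ridge-regularized normal equations in closed form; the real problem uses many energy bins and the cross-validated regularization described above:

```python
import math

def fit_spectrum(thicknesses, transmissions, mus, lam=1e-3):
    """Estimate spectral weights w_k from step-wedge transmission data.

    Model: T(t) = sum_k w_k * exp(-mu_k * t), linear in the weights w_k;
    solve the ridge-regularized normal equations (A^T A + lam*I) w = A^T b
    in closed form for the 2-bin case.
    """
    A = [[math.exp(-mu * t) for mu in mus] for t in thicknesses]
    ata = [[sum(A[i][r] * A[i][c] for i in range(len(A))) + (lam if r == c else 0.0)
            for c in range(2)] for r in range(2)]
    atb = [sum(A[i][r] * transmissions[i] for i in range(len(A))) for r in range(2)]
    det = ata[0][0] * ata[1][1] - ata[0][1] * ata[1][0]
    w0 = (atb[0] * ata[1][1] - atb[1] * ata[0][1]) / det
    w1 = (atb[1] * ata[0][0] - atb[0] * ata[1][0]) / det
    return [w0, w1]

mus = [0.2, 0.05]              # assumed per-bin attenuation coefficients (1/cm)
ts = [0.0, 1.0, 2.0, 4.0, 8.0]  # wedge thicknesses (cm)
true_w = [0.3, 0.7]
data = [sum(w * math.exp(-mu * t) for w, mu in zip(true_w, mus)) for t in ts]
w = fit_spectrum(ts, data, mus, lam=1e-6)
```

With many bins the normal equations become ill conditioned, which is exactly why the paper resorts to dimension reduction or regularization with a cross-validated parameter.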

  5. An Algorithm of an X-ray Hit Allocation to a Single Pixel in a Cluster and Its Test-Circuit Implementation

    DOE PAGES

    Deptuch, Grzegorz W.; Fahim, Farah; Grybos, Pawel; ...

    2017-06-28

    An on-chip implementable algorithm for allocation of an X-ray photon imprint, called a hit, to a single pixel in the presence of charge sharing in a highly segmented pixel detector is described. Its proof-of-principle implementation is also given, supported by the results of tests using a highly collimated X-ray photon beam from a synchrotron source. The algorithm handles asynchronous arrivals of X-ray photons. Activation of groups of pixels, comparisons of peak amplitudes of pulses within an active neighborhood, and finally latching of the results of these comparisons constitute the three procedural steps of the algorithm. A grouping of pixels into one virtual pixel, which recovers composite signals, and event-driven strobes, which control comparisons of fractional signals between neighboring pixels, are the actuators of the algorithm. The circuitry necessary to implement the algorithm requires an extensive inter-pixel connection grid of analog and digital signals that are exchanged between pixels. A test-circuit implementation of the algorithm was achieved with a small array of 32 × 32 pixels, and the device was exposed to an 8 keV X-ray beam highly collimated to a diameter of 3 μm. The results of these tests, assessing the physical implementation of the algorithm, are given in this paper.
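The neighbor-comparison step of the algorithm can be sketched in software. The toy function below (an illustration, not the on-chip circuitry) declares a hit at a pixel that is above threshold and is the strict peak of its 3x3 neighborhood, then assigns the summed cluster charge to that single pixel:

```python
def allocate_hits(frame, threshold):
    """Allocate each charge-shared cluster to the pixel that saw the peak.

    A pixel wins a hit if it is above threshold and its amplitude is the
    strict maximum of its 3x3 neighborhood (mirroring the on-chip
    neighbor-comparison step); the summed cluster charge is assigned to it.
    """
    rows, cols = len(frame), len(frame[0])
    hits = []
    for r in range(rows):
        for c in range(cols):
            if frame[r][c] < threshold:
                continue
            neigh = [frame[rr][cc]
                     for rr in range(max(0, r - 1), min(rows, r + 2))
                     for cc in range(max(0, c - 1), min(cols, c + 2))]
            if frame[r][c] == max(neigh) and neigh.count(frame[r][c]) == 1:
                hits.append((r, c, sum(neigh)))
    return hits

# One photon whose charge is shared across a 2x2 cluster:
frame = [[0, 0, 0, 0],
         [0, 5, 2, 0],
         [0, 1, 1, 0],
         [0, 0, 0, 0]]
hits = allocate_hits(frame, threshold=2)
```

The hardware performs these comparisons asynchronously with analog signals and latches the result per event, but the winner-takes-all logic is the same.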

  6. SU-F-SPS-06: Implementation of a Back-Projection Algorithm for 2D in Vivo Dosimetry with An EPID System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hernandez Reyes, B; Rodriguez Perez, E; Sosa Aquino, M

    Purpose: To implement a back-projection algorithm for 2D dose reconstructions for in vivo dosimetry in radiation therapy using an Electronic Portal Imaging Device (EPID) based on amorphous silicon. Methods: An EPID system was used to calculate the dose-response function, pixel sensitivity map, exponential scatter kernels and beam hardening correction for the back-projection algorithm. All measurements were done with a 6 MV beam. A 2D dose reconstruction for an irradiated water phantom (30×30×30 cm³) was done to verify the algorithm implementation. Gamma-index evaluation between the 2D reconstructed dose and that calculated with a treatment planning system (TPS) was done. Results: A linear fit was found for the dose-response function. The pixel sensitivity map has a radial symmetry and was calculated with a profile of the pixel sensitivity variation. The parameters for the scatter kernels were determined only for a 6 MV beam. The primary dose was estimated applying the scatter kernel within the EPID and the scatter kernel within the patient. The beam hardening coefficient is σBH = 3.788×10⁻⁴ cm² and the effective linear attenuation coefficient is µAC = 0.06084 cm⁻¹. 95% of the evaluated points had γ values no larger than unity, with gamma criteria of ΔD = 3% and Δd = 3 mm, and within the 50% isodose surface. Conclusion: The use of EPID systems proved to be a fast tool for in vivo dosimetry, but the implementation is more complex than that elaborated for pre-treatment dose verification; therefore, a simpler method should be investigated. The accuracy of this method should be improved by modifying the algorithm in order to compare lower isodose curves.

  7. Vapor plume oscillation mechanisms in transient keyhole during tandem dual beam fiber laser welding

    NASA Astrophysics Data System (ADS)

    Chen, Xin; Zhang, Xiaosi; Pang, Shengyong; Hu, Renzhi; Xiao, Jianzhong

    2018-01-01

    Vapor plume oscillations are common physical phenomena that have an important influence on the welding process in dual beam laser welding. However, until now, the oscillation mechanisms of vapor plumes remain unclear. This is primarily because mesoscale vapor plume dynamics inside a millimeter-scale, invisible, and time-dependent keyhole are difficult to quantitatively observe. In this paper, based on a developed three-dimensional (3D) comprehensive model, the vapor plume evolutions in a dynamical keyhole are directly simulated in tandem dual beam, short-wavelength laser welding. Combined with the vapor plume behaviors outside the keyhole observed by high-speed imaging, the vapor plume oscillations in dynamical keyholes at different inter-beam distances are the first, to our knowledge, to be quantitatively analyzed. It is found that vapor plume oscillations outside the keyhole mainly result from vapor plume instabilities inside the keyhole. The ejection velocity at the keyhole opening and dynamical behaviors outside the keyhole of a vapor plume both violently oscillate with the same order of magnitude of high frequency (several kHz). Furthermore, the ejection speed at the keyhole opening and ejection area outside the keyhole both decrease as the beam distance increases, while the degree of vapor plume instability first decreases and then increases with increasing beam distance from 0.6 to 1.0 mm. Moreover, the oscillation mechanisms of a vapor plume inside the dynamical keyhole irradiated by dual laser beams are investigated by thoroughly analyzing the vapor plume occurrence and flow process. The vapor plume oscillations in the dynamical keyhole are found to mainly result from violent local evaporations and severe keyhole geometry variations. In short, the quantitative method and these findings can serve as a reference for further understanding of the physical mechanisms in dual beam laser welding and of processing optimizations in industrial applications.

  8. Systems and methods of varying charged particle beam spot size

    DOEpatents

    Chen, Yu-Jiuan

    2014-09-02

    Methods and devices enable shaping of a charged particle beam. A modified dielectric wall accelerator includes a high gradient lens section and a main section. The high gradient lens section can be dynamically adjusted to establish the desired electric fields to minimize undesirable transverse defocusing fields at the entrance to the dielectric wall accelerator. Once a baseline setting with desirable output beam characteristics is established, the output beam can be dynamically modified to vary the output beam characteristics. The output beam can be modified by slightly adjusting the electric fields established across different sections of the modified dielectric wall accelerator. Additional control over the shape of the output beam can be exerted by introducing intentional timing de-synchronization offsets and producing an injected beam that is not fully matched to the entrance of the modified dielectric accelerator.

  9. Event-chain algorithm for the Heisenberg model: Evidence for z≃1 dynamic scaling.

    PubMed

    Nishikawa, Yoshihiko; Michel, Manon; Krauth, Werner; Hukushima, Koji

    2015-12-01

    We apply the event-chain Monte Carlo algorithm to the three-dimensional ferromagnetic Heisenberg model. The algorithm is rejection-free and also realizes an irreversible Markov chain that satisfies global balance. The autocorrelation functions of the magnetic susceptibility and the energy indicate a dynamical critical exponent z≈1 at the critical temperature, while that of the magnetization does not measure the performance of the algorithm. We show that the event-chain Monte Carlo algorithm substantially reduces the dynamical critical exponent from the conventional value of z≃2.

  10. Determining the phase and amplitude distortion of a wavefront using a plenoptic sensor.

    PubMed

    Wu, Chensheng; Ko, Jonathan; Davis, Christopher C

    2015-05-01

    We have designed a plenoptic sensor to retrieve phase and amplitude changes resulting from a laser beam's propagation through atmospheric turbulence. Compared with the commonly restricted domain of (-π,π) in phase reconstruction by interferometers, the reconstructed phase obtained by the plenoptic sensors can be continuous up to a multiple of 2π. When compared with conventional Shack-Hartmann sensors, ambiguities caused by interference or low intensity, such as branch points and branch cuts, are less likely to happen and can be adaptively avoided by our reconstruction algorithm. In the design of our plenoptic sensor, we modified the fundamental structure of a light field camera into a mini Keplerian telescope array by accurately cascading the back focal plane of its object lens with a microlens array's front focal plane and matching the numerical aperture of both components. Unlike light field cameras designed for incoherent imaging purposes, our plenoptic sensor operates on the complex amplitude of the incident beam and distributes it into a matrix of images that are simpler and less subject to interference than a global image of the beam. Then, with the proposed reconstruction algorithms, the plenoptic sensor is able to reconstruct the wavefront and a phase screen at an appropriate depth in the field that causes the equivalent distortion on the beam. The reconstructed results can be used to guide adaptive optics systems in directing beam propagation through atmospheric turbulence. In this paper, we will show the theoretical analysis and experimental results obtained with the plenoptic sensor and its reconstruction algorithms.

  11. Determining the phase and amplitude distortion of a wavefront using a plenoptic sensor

    NASA Astrophysics Data System (ADS)

    Wu, Chensheng; Ko, Jonathan; Davis, Christopher C.

    2015-05-01

    We have designed a plenoptic sensor to retrieve phase and amplitude changes resulting from a laser beam's propagation through atmospheric turbulence. Compared with the commonly restricted domain of (-π, π) in phase reconstruction by interferometers, the reconstructed phase obtained by the plenoptic sensors can be continuous up to a multiple of 2π. When compared with conventional Shack-Hartmann sensors, ambiguities caused by interference or low intensity, such as branch points and branch cuts, are less likely to happen and can be adaptively avoided by our reconstruction algorithm. In the design of our plenoptic sensor, we modified the fundamental structure of a light field camera into a mini Keplerian telescope array by accurately cascading the back focal plane of its object lens with a microlens array's front focal plane and matching the numerical aperture of both components. Unlike light field cameras designed for incoherent imaging purposes, our plenoptic sensor operates on the complex amplitude of the incident beam and distributes it into a matrix of images that are simpler and less subject to interference than a global image of the beam. Then, with the proposed reconstruction algorithms, the plenoptic sensor is able to reconstruct the wavefront and a phase screen at an appropriate depth in the field that causes the equivalent distortion on the beam. The reconstructed results can be used to guide adaptive optics systems in directing beam propagation through atmospheric turbulence. In this paper, we will show the theoretical analysis and experimental results obtained with the plenoptic sensor and its reconstruction algorithms.

  12. Direct aperture optimization: a turnkey solution for step-and-shoot IMRT.

    PubMed

    Shepard, D M; Earl, M A; Li, X A; Naqvi, S; Yu, C

    2002-06-01

    IMRT treatment plans for step-and-shoot delivery have traditionally been produced through the optimization of intensity distributions (or maps) for each beam angle. The optimization step is followed by the application of a leaf-sequencing algorithm that translates each intensity map into a set of deliverable aperture shapes. In this article, we introduce an automated planning system in which we bypass the traditional intensity optimization and instead directly optimize the shapes and the weights of the apertures. We call this approach "direct aperture optimization." This technique allows the user to specify the maximum number of apertures per beam direction, and hence provides significant control over the complexity of the treatment delivery. This is possible because the machine-dependent delivery constraints imposed by the MLC are enforced within the aperture optimization algorithm rather than in a separate leaf-sequencing step. The leaf settings and the aperture intensities are optimized simultaneously using a simulated annealing algorithm. We have tested direct aperture optimization on a variety of patient cases using the EGS4/BEAM Monte Carlo package for our dose calculation engine. The results demonstrate that direct aperture optimization can produce highly conformal step-and-shoot treatment plans using only three to five apertures per beam direction. As compared with traditional optimization strategies, our studies demonstrate that direct aperture optimization can result in a significant reduction in both the number of beam segments and the number of monitor units. Direct aperture optimization therefore produces highly efficient treatment deliveries that maintain the full dosimetric benefits of IMRT.
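
    The simultaneous optimization of leaf settings and aperture weights described above can be sketched in one dimension with simulated annealing. Everything here is a hypothetical toy: the target profile, the cooling schedule, and the move set are illustration choices, and a clinical implementation would enforce real MLC constraints and use a Monte Carlo dose engine.

```python
import random, math

random.seed(0)

TARGET = [0.0, 1.0, 2.0, 2.0, 1.0, 0.0]   # desired fluence profile (toy)
N = len(TARGET)

def fluence(apertures):
    """Sum weighted rectangular apertures (left, right, weight) into a profile."""
    f = [0.0] * N
    for left, right, w in apertures:
        for i in range(left, right):
            f[i] += w
    return f

def cost(apertures):
    return sum((a - t) ** 2 for a, t in zip(fluence(apertures), TARGET))

def anneal(apertures, steps=5000, t0=1.0):
    """Simulated annealing over both leaf positions and aperture weights."""
    best = [list(a) for a in apertures]
    c = cost(best)
    cur, cc = [list(a) for a in best], c
    for k in range(steps):
        temp = t0 * (1 - k / steps) + 1e-6
        cand = [list(a) for a in cur]
        j = random.randrange(len(cand))
        if random.random() < 0.5:            # perturb a "leaf" (aperture edge)
            side = random.choice([0, 1])
            cand[j][side] = min(N, max(0, cand[j][side] + random.choice([-1, 1])))
            if cand[j][0] >= cand[j][1]:     # keep every aperture open
                continue
        else:                                # perturb an aperture weight
            cand[j][2] = max(0.0, cand[j][2] + random.uniform(-0.2, 0.2))
        nc = cost(cand)
        if nc < cc or random.random() < math.exp(-(nc - cc) / temp):
            cur, cc = cand, nc
            if cc < c:
                best, c = [list(a) for a in cand], cc
    return best, c

start = [[0, 6, 0.5], [1, 5, 0.5], [2, 4, 0.5]]  # three apertures per "beam"
optimized, final_cost = anneal(start)
```

    Because the aperture count is fixed up front (three here), the delivery complexity is bounded by construction, which mirrors the control over segment number that the abstract emphasizes.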

  13. GENERAL: Application of Symplectic Algebraic Dynamics Algorithm to Circular Restricted Three-Body Problem

    NASA Astrophysics Data System (ADS)

    Lu, Wei-Tao; Zhang, Hua; Wang, Shun-Jin

    2008-07-01

    The symplectic algebraic dynamics algorithm (SADA) for ordinary differential equations is applied to solve numerically the circular restricted three-body problem (CR3BP) in dynamical astronomy for both stable and chaotic motion. The results are compared with those of the Runge-Kutta algorithm and a symplectic algorithm at fourth order, which shows that SADA has higher accuracy than the others in long-term calculations of the CR3BP.
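
    SADA itself is not reproduced here, but the advantage symplectic integrators have in long-term orbital calculations can be sketched with a minimal comparison on a harmonic oscillator: a first-order symplectic (semi-implicit) Euler step keeps the energy error bounded for all time, while the ordinary explicit Euler step lets it grow without bound. The oscillator, step size, and step count are arbitrary illustration choices.

```python
def energy(q, p):
    # Hamiltonian of a unit-mass, unit-frequency harmonic oscillator
    return 0.5 * (p * p + q * q)

def explicit_euler(q, p, h):
    return q + h * p, p - h * q

def symplectic_euler(q, p, h):
    p = p - h * q        # kick first...
    return q + h * p, p  # ...then drift with the updated momentum

h, steps = 0.05, 20000
qe = qs = 1.0
pe = ps = 0.0
e0 = energy(1.0, 0.0)
max_sym_err = 0.0
for _ in range(steps):
    qe, pe = explicit_euler(qe, pe, h)
    qs, ps = symplectic_euler(qs, ps, h)
    max_sym_err = max(max_sym_err, abs(energy(qs, ps) - e0))

explicit_err = abs(energy(qe, pe) - e0)   # grows roughly like (1 + h**2)**steps
```

    After 20000 steps the explicit scheme has blown the energy up by many orders of magnitude, while the symplectic scheme's energy error stays at the O(h) level, which is the property that matters in long CR3BP integrations.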

  14. Imaging reconstruction based on improved wavelet denoising combined with parallel-beam filtered back-projection algorithm

    NASA Astrophysics Data System (ADS)

    Ren, Zhong; Liu, Guodong; Huang, Zhen

    2012-11-01

    Image reconstruction is a key step in medical imaging (MI), and the algorithm's performance determines the quality and resolution of the reconstructed image. Although several algorithms are in use, filtered back-projection (FBP) remains the classical and most commonly used algorithm in clinical MI. In FBP, filtering of the original projection data is a key step for suppressing artifacts in the reconstructed image. Simple use of classical filters, such as the Shepp-Logan (SL) and Ram-Lak (RL) filters, has drawbacks and limitations in practice, especially for projection data polluted by non-stationary random noise. In this paper, an improved wavelet denoising method combined with the parallel-beam FBP algorithm is therefore used to enhance the quality of the reconstructed image. In the experiments, the reconstruction quality of the improved wavelet denoising was compared with that of other methods (direct FBP, mean filtering combined with FBP, and median filtering combined with FBP). To determine the optimum reconstruction, different algorithms and different wavelet bases combined with three filters were each tested. Experimental results show that the reconstruction quality of the improved FBP algorithm is better than that of the others. Comparing the results of the different algorithms using two evaluation standards, the mean-square error (MSE) and the peak signal-to-noise ratio (PSNR), it was found that the improved FBP based on db2 and the Hanning filter at decomposition scale 2 was best: its MSE was lower and its PSNR higher than the others. This improved FBP algorithm therefore has potential value in medical imaging.
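
    The wavelet-denoising step at the heart of the method above can be sketched in one level with the Haar basis: transform the projection, soft-threshold the detail coefficients, and invert before handing the filtered data to FBP. The signal, noise level, and threshold below are illustrative choices, not the paper's db2/scale-2 configuration.

```python
import math, random

def haar_forward(x):
    """One-level Haar transform: (approximation, detail) coefficients."""
    a = [(x[2 * i] + x[2 * i + 1]) / math.sqrt(2) for i in range(len(x) // 2)]
    d = [(x[2 * i] - x[2 * i + 1]) / math.sqrt(2) for i in range(len(x) // 2)]
    return a, d

def haar_inverse(a, d):
    x = []
    for ai, di in zip(a, d):
        x.append((ai + di) / math.sqrt(2))
        x.append((ai - di) / math.sqrt(2))
    return x

def soft_threshold(d, t):
    # shrink detail coefficients toward zero; small (noisy) ones vanish
    return [math.copysign(max(abs(v) - t, 0.0), v) for v in d]

def denoise(x, t):
    a, d = haar_forward(x)
    return haar_inverse(a, soft_threshold(d, t))

random.seed(1)
clean = [0.0] * 32 + [1.0] * 32                    # a piecewise-constant "projection"
noisy = [c + random.gauss(0, 0.1) for c in clean]
denoised = denoise(noisy, t=0.2)

mse = lambda x: sum((xi - ci) ** 2 for xi, ci in zip(x, clean)) / len(clean)
```

    The detail band of a piecewise-constant signal is almost pure noise, so thresholding it removes roughly half the noise energy while leaving the edges intact, which is exactly the behavior a pre-FBP filter needs.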

  15. Beam-dynamics driven design of the LHeC energy-recovery linac

    NASA Astrophysics Data System (ADS)

    Pellegrini, Dario; Latina, Andrea; Schulte, Daniel; Bogacz, S. Alex

    2015-12-01

    The LHeC is envisioned as a natural upgrade of the LHC that aims at delivering an electron beam for collisions with the existing hadronic beams. The current baseline design for the electron facility consists of a multipass superconducting energy-recovery linac (ERL) operating in a continuous wave mode. The unprecedently high energy of the multipass ERL combined with a stringent emittance dilution budget poses new challenges for the beam optics. Here, we investigate the performances of a novel arc architecture based on a flexible momentum compaction lattice that mitigates the effects of synchrotron radiation while containing the bunch lengthening. Extensive beam-dynamics investigations have been performed with placet2, a recently developed tracking code for recirculating machines. They include the first end-to-end tracking and a simulation of the machine operation with a continuous beam. This paper briefly describes the Conceptual Design Report lattice, with an emphasis on possible and proposed improvements that emerged from the beam-dynamics studies. The detector bypass section has been integrated in the lattice, and its design choices are presented here. The stable operation of the ERL with a current up to ˜150 mA in the linacs has been validated in the presence of single- and multibunch wakefields, synchrotron radiation, and beam-beam effects.

  16. Experimental investigation and damage assessment in a post tensioned concrete beam

    NASA Astrophysics Data System (ADS)

    Limongelli, Maria; Siegert, Dominique; Merliot, Erick; Waeytens, Julien; Bourquin, Frederic; Vidal, Roland; Le Corvec, Veronique; Guegen, Ivan; Cottineau, Louis-Marie

    2017-04-01

    This paper presents the results of an experimental campaign carried out on a prestressed concrete beam within the project SIPRIS (Systèmes Intelligents pour la Prévention des Risques Structurels), which aims to develop intelligent systems for the prevention of structural risk related to the aging of large infrastructures. The specimen was tested in several configurations intended to reproduce different phases of the 'life' of the beam: the original undamaged state, an increasing loss of tension in the cables, during and after cracking induced by a point load, after a strengthening intervention, and after new cracking of the 'repaired' beam. Damage was introduced in a controlled way by means of three-point static bending tests, with the transverse point loads applied at several different sections along the beam axis. Before and after each static test, the dynamic response of the beam was measured under sine-sweep and impact tests by an extensive set of accelerometers deployed along the beam axis. The availability of both static and dynamic tests makes it possible to investigate and compare their effectiveness in detecting damage in the tensioned beam and in reliably identifying the evolution of damage. The paper discusses the test program and some results relevant to the dynamic characterization of the beam in the different phases.

  17. On the dimension of complex responses in nonlinear structural vibrations

    NASA Astrophysics Data System (ADS)

    Wiebe, R.; Spottswood, S. M.

    2016-07-01

    The ability to accurately model engineering systems under extreme dynamic loads would prove a major breakthrough in many aspects of aerospace, mechanical, and civil engineering. Extreme loads frequently induce both nonlinearities and coupling, which increase the complexity of the response and the computational cost of finite element models. Dimension reduction has recently gained traction and promises the ability to distill dynamic responses down to a minimal dimension without sacrificing accuracy. In this context, the dimensionality of a response is related to the number of modes needed in a reduced order model to accurately simulate the response. Thus, an important step is characterizing the dimensionality of complex nonlinear responses of structures. In this work, the dimensionality of the nonlinear response of a post-buckled beam is investigated. Significant detail is dedicated to carefully introducing the experiment, the verification of a finite element model, and the dimensionality estimation algorithm, as it is hoped that this system may help serve as a benchmark test case. It is shown that with minor modifications, the method of false nearest neighbors can quantitatively distinguish between the response dimension of various snap-through, non-snap-through, random, and deterministic loads. The state-space dimension of the nonlinear system in question increased from 2 to 10 as the system response moved from simple, low-level harmonic motion to chaotic snap-through. Beyond the problem studied herein, the techniques developed will serve as a prescriptive guide in developing fast and accurate dimensionally reduced models of nonlinear systems, and eventually as a tool for adaptive dimension reduction in numerical modeling. The results are especially relevant in the aerospace industry for the design of thin structures such as beams, panels, and shells, which are all capable of spatio-temporally complex dynamic responses that are difficult and computationally expensive to model.
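
    The false-nearest-neighbors idea used above can be sketched for a delay-embedded time series: a neighbor found in dimension d is "false" if adding the (d+1)-th coordinate pushes it far away, and the fraction of false neighbors drops once the embedding dimension is sufficient. The Kennel-style distance ratio and the sine test signal below are illustrative; the paper's modified criterion and structural data are not reproduced.

```python
import math

def embed(x, dim, tau=1):
    """Delay-embed a scalar series into dim-dimensional vectors."""
    return [x[i:i + dim * tau:tau] for i in range(len(x) - dim * tau)]

def fnn_fraction(x, dim, ratio=10.0, tau=1):
    """Fraction of nearest neighbors in dimension dim that are false."""
    pts = embed(x, dim, tau)
    n = len(pts) - tau          # leave room for the (dim+1)-th coordinate
    false = 0
    for i in range(n):
        # brute-force nearest neighbor in dimension dim (excluding self)
        best_j, best_d = -1, float("inf")
        for j in range(n):
            if j == i:
                continue
            d = math.dist(pts[i], pts[j])
            if d < best_d:
                best_j, best_d = j, d
        extra = abs(x[i + dim * tau] - x[best_j + dim * tau])
        if best_d == 0.0 or extra / best_d > ratio:
            false += 1
    return false / n

series = [math.sin(0.2 * t) for t in range(300)]
f1 = fnn_fraction(series, 1)   # many neighbors are false in 1D
f2 = fnn_fraction(series, 2)   # a sine wave unfolds fully in 2D
```

    For a periodic signal the 1D embedding folds opposite-slope points on top of each other, so the false-neighbor fraction is high at d = 1 and collapses at d = 2, which is how the estimated dimension is read off.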

  18. Region-of-interest image reconstruction with intensity weighting in circular cone-beam CT for image-guided radiation therapy

    PubMed Central

    Cho, Seungryong; Pearson, Erik; Pelizzari, Charles A.; Pan, Xiaochuan

    2009-01-01

    Imaging plays a vital role in radiation therapy, and with recent advances in technology, considerable emphasis has been placed on cone-beam CT (CBCT). Attaching a kV x-ray source and a flat-panel detector directly to the linear accelerator gantry has enabled progress in target localization techniques, which can include daily CBCT setup scans for some treatments. However, with an increasing number of CT scans there is also increasing concern about patient exposure. An intensity-weighted region-of-interest (IWROI) technique, which has the potential to greatly reduce CBCT dose, has been developed in conjunction with the chord-based backprojection-filtration (BPF) reconstruction algorithm, and its feasibility for clinical use is demonstrated in this article. A nonuniform filter is placed in the x-ray beam to create regions of two different beam intensities. In this manner, regions outside the target area can be given a reduced dose but still visualized, with a lower contrast-to-noise ratio. Image artifacts due to transverse data truncation, which would occur with conventional reconstruction algorithms, are avoided, and the image noise levels of the low- and high-intensity regions are well controlled by use of the chord-based BPF reconstruction algorithm. The proposed IWROI technique can play an important role in image-guided radiation therapy. PMID:19472624

  19. GPU-based streaming architectures for fast cone-beam CT image reconstruction and demons deformable registration.

    PubMed

    Sharp, G C; Kandasamy, N; Singh, H; Folkert, M

    2007-10-07

    This paper shows how to significantly accelerate cone-beam CT reconstruction and 3D deformable image registration using the stream-processing model. We describe data-parallel designs for the Feldkamp, Davis and Kress (FDK) reconstruction algorithm and the demons deformable registration algorithm, suitable for use on a commodity graphics processing unit. The streaming versions of these algorithms are implemented using the Brook programming environment and executed on an NVidia 8800 GPU. Performance results using CT data of a preserved swine lung indicate that the GPU-based implementations of the FDK and demons algorithms achieve a substantial speedup: up to 80 times for FDK and 70 times for demons, compared to an optimized reference implementation on a 2.8 GHz Intel processor. In addition, the accuracy of the GPU-based implementations was found to be excellent. Compared with CPU-based implementations, the RMS differences were less than 0.1 Hounsfield unit for reconstruction and less than 0.1 mm for deformable registration.

  20. Image reconstruction from few-view CT data by gradient-domain dictionary learning.

    PubMed

    Hu, Zhanli; Liu, Qiegen; Zhang, Na; Zhang, Yunwan; Peng, Xi; Wu, Peter Z; Zheng, Hairong; Liang, Dong

    2016-05-21

    Decreasing the number of projections is an effective way to reduce the radiation dose delivered to patients in medical computed tomography (CT) imaging. However, incomplete projection data for CT reconstruction will result in artifacts and distortions. In this paper, a novel dictionary learning algorithm operating in the gradient domain (Grad-DL) is proposed for few-view CT reconstruction. Specifically, the dictionaries are trained from the horizontal and vertical gradient images, respectively, and the desired image is subsequently reconstructed from the sparse representations of both gradients by least-squares fitting. Since the gradient images are sparser than the image itself, the proposed approach leads to sparser representations than conventional DL methods in the image domain, and thus a better reconstruction quality is achieved. To evaluate the proposed Grad-DL algorithm, both qualitative and quantitative studies were carried out through computer simulations as well as real-data experiments on fan-beam and cone-beam geometries. The results show that the proposed algorithm can yield better images than existing algorithms.

  1. Vespertilionid bats control the width of their biosonar sound beam dynamically during prey pursuit

    PubMed Central

    Jakobsen, Lasse; Surlykke, Annemarie

    2010-01-01

    Animals using sound for communication emit directional signals, focusing most acoustic energy in one direction. Echolocating bats are listening for soft echoes from insects. Therefore, a directional biosonar sound beam greatly increases detection probability in the forward direction and decreases off-axis echoes. However, high directionality has context-specific disadvantages: at close range the detection space will be vastly reduced, making a broad beam favorable. Hence, a flexible system would be very advantageous. We investigated whether bats can dynamically change directionality of their biosonar during aerial pursuit of insects. We trained five Myotis daubentonii and one Eptesicus serotinus to capture tethered mealworms and recorded their echolocation signals with a multimicrophone array. The results show that the bats broaden the echolocation beam drastically in the terminal phase of prey pursuit. M. daubentonii increased the half-amplitude angle from approximately 40° to approximately 90° horizontally and from approximately 45° to more than 90° vertically. The increase in beam width is achieved by lowering the frequency by roughly one octave from approximately 55 kHz to approximately 27.5 kHz. The E. serotinus showed beam broadening remarkably similar to that of M. daubentonii. Our results demonstrate dynamic control of beam width in both species. Hence, we propose directionality as an explanation for the frequency decrease observed in the buzz of aerial hawking vespertilionid bats. We predict that future studies will reveal dynamic control of beam width in a broad range of acoustically communicating animals. PMID:20643943
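
    The link between emission frequency and beam width reported above follows from simple piston-source directivity: the half-amplitude angle satisfies 2·J1(ka·sinθ)/(ka·sinθ) = 0.5, so halving the frequency (halving k) roughly doubles the beam width. The sketch below evaluates J1 by numerical integration of its integral representation; the ka values are illustrative stand-ins for the two frequencies, not measurements from the study.

```python
import math

def bessel_j1(x, n=500):
    """J1(x) via (1/pi) * integral_0^pi cos(t - x sin t) dt (trapezoid rule)."""
    h = math.pi / n
    s = 0.5 * (math.cos(0.0) + math.cos(math.pi))   # endpoint terms (sin 0 = sin pi = 0)
    for i in range(1, n):
        t = i * h
        s += math.cos(t - x * math.sin(t))
    return s * h / math.pi

def directivity(theta, ka):
    """Pressure amplitude of a circular piston relative to on-axis."""
    u = ka * math.sin(theta)
    if abs(u) < 1e-9:
        return 1.0
    return 2.0 * bessel_j1(u) / u

def half_amplitude_angle(ka):
    """Smallest angle (radians) where the amplitude drops to half its peak."""
    theta = 0.0
    while theta < math.pi / 2 and directivity(theta, ka) > 0.5:
        theta += 2e-3
    return theta

narrow = half_amplitude_angle(5.0)   # higher frequency (e.g., the ~55 kHz calls)
wide = half_amplitude_angle(2.5)     # one octave lower (e.g., ~27.5 kHz)
```

    With ka halved, the half-amplitude angle grows from roughly 26° to roughly 62°, the same qualitative broadening the bats achieve by dropping the call frequency an octave in the terminal buzz.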

  2. Vespertilionid bats control the width of their biosonar sound beam dynamically during prey pursuit.

    PubMed

    Jakobsen, Lasse; Surlykke, Annemarie

    2010-08-03

    Animals using sound for communication emit directional signals, focusing most acoustic energy in one direction. Echolocating bats are listening for soft echoes from insects. Therefore, a directional biosonar sound beam greatly increases detection probability in the forward direction and decreases off-axis echoes. However, high directionality has context-specific disadvantages: at close range the detection space will be vastly reduced, making a broad beam favorable. Hence, a flexible system would be very advantageous. We investigated whether bats can dynamically change directionality of their biosonar during aerial pursuit of insects. We trained five Myotis daubentonii and one Eptesicus serotinus to capture tethered mealworms and recorded their echolocation signals with a multimicrophone array. The results show that the bats broaden the echolocation beam drastically in the terminal phase of prey pursuit. M. daubentonii increased the half-amplitude angle from approximately 40 degrees to approximately 90 degrees horizontally and from approximately 45 degrees to more than 90 degrees vertically. The increase in beam width is achieved by lowering the frequency by roughly one octave from approximately 55 kHz to approximately 27.5 kHz. The E. serotinus showed beam broadening remarkably similar to that of M. daubentonii. Our results demonstrate dynamic control of beam width in both species. Hence, we propose directionality as an explanation for the frequency decrease observed in the buzz of aerial hawking vespertilionid bats. We predict that future studies will reveal dynamic control of beam width in a broad range of acoustically communicating animals.

  3. Dynamic responses of graphite/epoxy laminated beam to impact of elastic spheres

    NASA Technical Reports Server (NTRS)

    Sun, C. T.; Wang, T.

    1982-01-01

    Wave propagation in [90/45/90/-45/90₂]s and [0/45/0/-45/0₂]s laminates of a graphite/epoxy composite due to the impact of a steel ball was investigated experimentally and also by using a high-order beam finite element. Dynamic strain responses at several locations were obtained using strain gages. The finite element program, which incorporated statically determined contact laws, was employed to calculate the contact force history as well as the dynamic deformation of the target beam. The comparison of the finite element solutions with the experimental data indicated that the static contact laws for loading and unloading (developed under this grant) are adequate for the dynamic impact analysis. It was found that for the [0/45/0/-45/0₂]s laminate, which has a much larger longitudinal bending rigidity, the use of beam finite elements is not suitable and plate finite elements should be used instead.

  4. Randomized Dynamic Mode Decomposition

    NASA Astrophysics Data System (ADS)

    Erichson, N. Benjamin; Brunton, Steven L.; Kutz, J. Nathan

    2017-11-01

    The dynamic mode decomposition (DMD) is an equation-free, data-driven matrix decomposition that is capable of providing accurate reconstructions of spatio-temporal coherent structures arising in dynamical systems. We present randomized algorithms to compute the near-optimal low-rank dynamic mode decomposition for massive datasets. Randomized algorithms are simple, accurate and able to ease the computational challenges arising with `big data'. Moreover, randomized algorithms are amenable to modern parallel and distributed computing. The idea is to derive a smaller matrix from the high-dimensional input data matrix using randomness as a computational strategy. Then, the dynamic modes and eigenvalues are accurately learned from this smaller representation of the data, whereby the approximation quality can be controlled via oversampling and power iterations. Here, we present randomized DMD algorithms that are categorized by how many passes the algorithm takes through the data. Specifically, the single-pass randomized DMD does not require data to be stored for subsequent passes. Thus, it is possible to approximately decompose massive fluid flows (stored out of core memory, or not stored at all) using single-pass algorithms, which is infeasible with traditional DMD algorithms.
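
    The core randomized step, compressing a tall snapshot matrix through a random sketch before any modal analysis, can be illustrated without the DMD eigendecomposition itself: multiply the data by a small random matrix, orthonormalize the result, and check that the resulting basis captures the snapshots. Pure-Python matrix helpers keep the sketch dependency-free; the sizes, the rank-2 toy data, and the sketch size k are arbitrary illustration choices.

```python
import math, random

random.seed(2)

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

def gram_schmidt(cols):
    """Orthonormalize a list of vectors (modified Gram-Schmidt), dropping near-zero ones."""
    q = []
    for v in cols:
        w = list(v)
        for u in q:
            c = sum(wi * ui for wi, ui in zip(w, u))
            w = [wi - c * ui for wi, ui in zip(w, u)]
        nrm = math.sqrt(sum(wi * wi for wi in w))
        if nrm > 1e-12:
            q.append([wi / nrm for wi in w])
    return q

# Rank-2 snapshot matrix: two spatial modes oscillating in time.
m, n = 40, 25
X = [[math.sin(0.3 * i) * math.cos(0.5 * t) + math.cos(0.2 * i) * math.sin(0.5 * t)
      for t in range(n)] for i in range(m)]

k = 4                                       # sketch size (>= true rank)
Omega = [[random.gauss(0, 1) for _ in range(k)] for _ in range(n)]
Y = matmul(X, Omega)                        # m x k random sketch of the snapshots
Q = gram_schmidt(transpose(Y))              # orthonormal basis vectors (as rows)

# Relative projection error ||X - Q Q^T X||_F / ||X||_F
QT_X = matmul(Q, X)                         # coefficients of X in the basis
X_hat = matmul(transpose(Q), QT_X)          # reconstruction from the small basis
err = math.sqrt(sum((X[i][j] - X_hat[i][j]) ** 2 for i in range(m) for j in range(n)))
nrm = math.sqrt(sum(X[i][j] ** 2 for i in range(m) for j in range(n)))
rel_err = err / nrm
```

    Because the sketch Y already spans the (low-rank) column space of X, the subsequent DMD steps can operate on the small k-dimensional representation instead of the full data, which is exactly what makes the single-pass variants feasible for data too large to store.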

  5. Modeled and Measured Dynamics of a Composite Beam with Periodically Varying Foam Core

    NASA Technical Reports Server (NTRS)

    Cabell, Randolph H.; Cano, Roberto J.; Schiller, Noah H.; Roberts, Gary D.

    2012-01-01

    The dynamics of a sandwich beam with carbon fiber composite facesheets and foam core with periodic variations in material properties are studied. The purpose of the study is to compare finite element predictions with experimental measurements on fabricated beam specimens. For the study, three beams were fabricated: one with a compliant foam core, a second with a stiffer core, and a third with the two cores alternating down the length of the beam to create a periodic variation in properties. This periodic variation produces a bandgap in the frequency domain where vibrational energy does not readily propagate down the length of the beam. Mode shapes and natural frequencies are compared, as well as frequency responses from point force input to velocity response at the opposite end of the beam.

  6. Beam dynamics studies at DAΦNE: from ideas to experimental results

    NASA Astrophysics Data System (ADS)

    Zobov, M.; DAΦNE Team

    2017-12-01

    DAΦNE is the electron-positron collider operating at the energy of the Φ resonance, 1 GeV in the center of mass. The luminosity achieved is about two orders of magnitude higher than that obtained at any other collider ever operated at this energy. Careful beam dynamics studies, such as vacuum chamber design with low beam coupling impedance, suppression of different kinds of beam instabilities, investigation of the beam-beam interaction, and optimization of the beam's nonlinear motion, have been the key ingredients in reaching this impressive result. Many novel ideas in accelerator physics have been proposed and/or tested experimentally at DAΦNE for the first time. In this paper we discuss the advanced accelerator physics studies performed at DAΦNE.

  7. Rapid Process to Generate Beam Envelopes for Optical System Analysis

    NASA Technical Reports Server (NTRS)

    Howard, Joseph; Seals, Lenward

    2012-01-01

    The task of evaluating obstructions in the optical throughput of an optical system requires the use of two disciplines, and hence two models: optical models for the details of optical propagation, and mechanical models for determining the actual structure that exists in the optical system. Previous analysis methods for creating beam envelopes (or cones of light) for use in this obstruction analysis were cumbersome to calculate and took significant time and resources to complete. A new process was developed that takes less time to complete beam envelope analysis, is more accurate and less dependent upon manual node tracking to create the beam envelopes, and eases the burden on the mechanical CAD (computer-aided design) designers who form the beam solids. This algorithm allows rapid generation of beam envelopes for optical system obstruction analysis. Ray trace information is taken from optical design software and used to generate CAD objects that represent the boundary of the beam envelopes for detailed analysis in mechanical CAD software. Matlab is used to call ray trace data from the optical model for all fields and entrance pupil points of interest. These are chosen at the edge of each space, so that these rays produce the bounding volume of the beam. The x and y global coordinate data are collected on the surface planes of interest, typically an image of the field and the entrance pupil internal to the optical system. These x and y coordinates are then evaluated using a convex hull algorithm, which removes any internal points that are unnecessary for producing the bounding volume of interest. At this point, tolerances can be applied to expand the size of either the field or the aperture, depending on the allocations. Once this minimum set of coordinates on the pupil and field is obtained, a new set of rays is generated between the field plane and the aperture plane (or vice versa). These rays are then evaluated at planes between the aperture and the field, at as many steps as are deemed necessary to build up the bounding volume or cone shape. At each plane, the ray coordinates are again evaluated using the convex hull algorithm to reduce the data to a minimal set. When all of the coordinates of interest are obtained for every plane of the propagation, the data are formatted into an xyz file suitable for the FRED optical analysis software to import and create a STEP file of the data. This results in a spiral-like structure that is easily imported by mechanical CAD users, who can then use an automated algorithm to wrap a skin around it and create a solid that represents the beam.
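
    The per-plane convex hull reduction described above can be sketched with Andrew's monotone chain algorithm, which discards interior points and keeps only the boundary of the ray bundle's cross-section. This is a generic stand-in for the hull step, not the code used in the tool itself, and the sample plane coordinates are invented.

```python
def cross(o, a, b):
    # z-component of (a - o) x (b - o); positive means a counter-clockwise turn
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

# Ray intersections with one analysis plane: four boundary rays forming a
# square, plus interior rays that the hull step removes.
plane_xy = [(0, 0), (4, 0), (4, 4), (0, 4), (2, 2), (1, 3), (3, 1)]
hull = convex_hull(plane_xy)
```

    Running this at each plane along the propagation keeps only the outline of the beam cross-section, so the stack of hulls is the minimal point set needed to loft the final beam solid.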

  8. Solution of the Fokker-Planck equation with mixing of angular harmonics by beam-beam charge exchange

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mikkelsen, D.R.

    1989-09-01

    A method for solving the linear Fokker-Planck equation with anisotropic beam-beam charge exchange loss is presented. The 2-D equation is transformed to a system of coupled 1-D equations which are solved iteratively as independent equations. Although isotropic approximations to the beam-beam losses lead to inaccurate fast ion distributions, typically only a few angular harmonics are needed to include accurately the effect of the beam-beam charge exchange loss on the usual integrals of the fast ion distribution. Consequently, the algorithm converges very rapidly and is much more efficient than a 2-D finite difference method. A convenient recursion formula for the coupling coefficients is given and generalization of the method is discussed. 13 refs., 2 figs.
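
    The angular-harmonic expansion underlying the method can be sketched by projecting an anisotropic term onto Legendre polynomials in the pitch-angle variable μ = cos θ; a few harmonics already reproduce a smooth angular dependence, which is why the coupled 1-D system stays small. The quadrature resolution and the test function f(μ) = μ² are illustrative choices, not the paper's loss term.

```python
def legendre(n, mu):
    """P_n(mu) via the Bonnet three-term recurrence."""
    p0, p1 = 1.0, mu
    if n == 0:
        return p0
    for k in range(1, n):
        p0, p1 = p1, ((2 * k + 1) * mu * p1 - k * p0) / (k + 1)
    return p1

def legendre_coeffs(f, nmax, m=4000):
    """c_n = (2n+1)/2 * integral_{-1}^{1} f(mu) P_n(mu) dmu (trapezoid rule)."""
    h = 2.0 / m
    coeffs = []
    for n in range(nmax + 1):
        s = 0.5 * (f(-1.0) * legendre(n, -1.0) + f(1.0) * legendre(n, 1.0))
        for i in range(1, m):
            mu = -1.0 + i * h
            s += f(mu) * legendre(n, mu)
        coeffs.append((2 * n + 1) / 2.0 * s * h)
    return coeffs

def reconstruct(coeffs, mu):
    return sum(c * legendre(n, mu) for n, c in enumerate(coeffs))

# A smooth anisotropic angular profile: f(mu) = mu**2
f = lambda mu: mu * mu
c = legendre_coeffs(f, 4)
```

    Here μ² is captured exactly by just P₀ and P₂ (c₀ = 1/3, c₂ = 2/3, the odd and higher coefficients vanish), illustrating why truncating the expansion after a few harmonics costs little accuracy for the integrals of the distribution.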

  9. Nonlinear dynamic analysis and optimal trajectory planning of a high-speed macro-micro manipulator

    NASA Astrophysics Data System (ADS)

    Yang, Yi-ling; Wei, Yan-ding; Lou, Jun-qiang; Fu, Lei; Zhao, Xiao-wei

    2017-09-01

    This paper reports the nonlinear dynamic modeling and optimal trajectory planning of a flexure-based macro-micro manipulator dedicated to large-scale, high-speed tasks. In particular, a macro-micro manipulator composed of a servo motor, a rigid arm, and a compliant microgripper is considered, and both flexure hinges and flexible beams are modeled. By combining the pseudo-rigid-body-model method, the assumed-mode method, and the Lagrange equation, the overall dynamic model is derived. The rigid-flexible coupling characteristics are then analyzed by numerical simulation. Next, the microscale vibration excited by the large-scale motion is reduced through trajectory planning. In particular, a fitness function based on the comprehensive excitation torque of the compliant microgripper is proposed. A reference curve and an interpolation curve using quintic polynomial trajectories are adopted, and an improved genetic algorithm is used to identify the optimal trajectory by minimizing the fitness function. Finally, numerical simulations and experiments validate the feasibility and effectiveness of the established dynamic model and the trajectory planning approach: the amplitude of the residual vibration is reduced by approximately 54.9% and the settling time by 57.1%, so operation efficiency and manipulation stability are significantly improved.
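
    The quintic polynomial trajectories mentioned above can be sketched directly: for a rest-to-rest move, the boundary conditions (zero velocity and acceleration at both ends) fix the normalized profile s(τ) = 10τ³ − 15τ⁴ + 6τ⁵. The move distance and duration below are arbitrary illustration values, not the paper's optimized trajectory.

```python
def quintic(t, T, q0, q1):
    """Rest-to-rest quintic trajectory: position and velocity at time t in [0, T]."""
    tau = t / T
    s = 10 * tau**3 - 15 * tau**4 + 6 * tau**5          # normalized position
    ds = (30 * tau**2 - 60 * tau**3 + 30 * tau**4) / T   # d(position)/dt factor
    return q0 + (q1 - q0) * s, (q1 - q0) * ds

T, q0, q1 = 2.0, 0.0, 0.5        # a 0.5 rad move in 2 s (illustrative)
p0, v0 = quintic(0.0, T, q0, q1)  # start: at q0, at rest
pT, vT = quintic(T, T, q0, q1)    # end: at q1, at rest
pm, vm = quintic(T / 2, T, q0, q1)  # midpoint: halfway, peak velocity
```

    Because velocity and acceleration vanish smoothly at both ends, this profile avoids the jump discontinuities that would otherwise excite the microgripper's flexible modes; the genetic algorithm in the paper then tunes such curves against the excitation-torque fitness function.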

  10. Optical simulation of quantum algorithms using programmable liquid-crystal displays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Puentes, Graciana; La Mela, Cecilia; Ledesma, Silvia

    2004-04-01

    We present a scheme to perform an all optical simulation of quantum algorithms and maps. The main components are lenses to efficiently implement the Fourier transform and programmable liquid-crystal displays to introduce space dependent phase changes on a classical optical beam. We show how to simulate Deutsch-Jozsa and Grover's quantum algorithms using essentially the same optical array programmed in two different ways.

  11. Image reconstruction algorithm for optically stimulated luminescence 2D dosimetry using laser-scanned Al2O3:C and Al2O3:C,Mg films

    NASA Astrophysics Data System (ADS)

    Ahmed, M. F.; Schnell, E.; Ahmad, S.; Yukihara, E. G.

    2016-10-01

    The objective of this work was to develop an image reconstruction algorithm for 2D dosimetry using Al2O3:C and Al2O3:C,Mg optically stimulated luminescence (OSL) films imaged using a laser scanning system. The algorithm takes into account parameters associated with detector properties and the readout system. Pieces of Al2O3:C films (~8 mm × 8 mm × 125 µm) were irradiated and used to simulate dose distributions with extreme dose gradients (zero and non-zero dose regions). The OSL film pieces were scanned using a custom-built laser-scanning OSL reader, and the data obtained were used to develop and demonstrate a dose reconstruction algorithm. The algorithm includes corrections for: (a) galvo hysteresis, (b) photomultiplier tube (PMT) linearity, (c) phosphorescence, (d) 'pixel bleeding' caused by the 35 ms luminescence lifetime of F-centers in Al2O3, (e) geometrical distortion inherent to the galvo scanning system, and (f) position dependence of the light collection efficiency. The algorithm was also applied to 6.0 cm × 6.0 cm × 125 µm or 10.0 cm × 10.0 cm × 125 µm Al2O3:C and Al2O3:C,Mg films exposed to megavoltage x-rays (6 MV) and ¹²C beams (430 MeV u⁻¹). The results obtained using pieces of irradiated films show the ability of the image reconstruction algorithm to correct for pixel bleeding even in the presence of extremely sharp dose gradients. Corrections for geometric distortion and position dependence of light collection efficiency were shown to minimize characteristic limitations of this system design. We also exemplify the application of the algorithm to a more clinically relevant 6 MV x-ray beam and a ¹²C pencil beam, demonstrating the potential for small field dosimetry. The image reconstruction algorithm described here provides the foundation for laser-scanned OSL applied to 2D dosimetry.
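
    The "pixel bleeding" correction can be sketched with a toy 1D model: if the luminescence decays exponentially while the laser scans at a constant pixel rate, each measured pixel is the true signal plus a constant fraction of the previous measurement, and the correction is the exact inverse of that one-pole filter. The dwell time (and hence the carry-over fraction) is an assumed illustration value, not the instrument's calibration; only the 35 ms lifetime comes from the text.

```python
import math

TAU_MS = 35.0        # F-center luminescence lifetime in Al2O3 (from the abstract)
DWELL_MS = 10.0      # assumed dwell time per pixel along the scan direction
ALPHA = math.exp(-DWELL_MS / TAU_MS)   # carry-over fraction between pixels

def bleed(signal):
    """Forward model: measured[i] = signal[i] + ALPHA * measured[i-1]."""
    out, prev = [], 0.0
    for s in signal:
        prev = s + ALPHA * prev
        out.append(prev)
    return out

def unbleed(measured):
    """Inverse filter: signal[i] = measured[i] - ALPHA * measured[i-1]."""
    return [m - ALPHA * (measured[i - 1] if i else 0.0)
            for i, m in enumerate(measured)]

true = [0.0, 0.0, 5.0, 5.0, 0.0, 0.0, 2.0, 0.0]   # a sharp dose edge (toy)
blurred = bleed(true)
recovered = unbleed(blurred)
```

    Because the forward model is a first-order recursion, the inversion is exact even across the sharpest gradients, which mirrors the abstract's observation that pixel bleeding can be corrected at abrupt zero/non-zero dose boundaries.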

  12. Beam dynamics in heavy ion induction LINACS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, L.

    1981-10-01

    Interest in the use of an induction linac to accelerate heavy ions, for the purpose of providing the energy required to initiate an inertially confined fusion reaction, has stimulated a theoretical effort to investigate various beam dynamical effects associated with high-intensity heavy ion beams. This paper presents a summary of the work done so far: transverse, longitudinal, and coupled longitudinal-transverse effects are discussed.

  13. SU-E-T-535: Proton Dose Calculations in Homogeneous Media.

    PubMed

    Chapman, J; Fontenot, J; Newhauser, W; Hogstrom, K

    2012-06-01

    To develop a pencil beam dose calculation algorithm for scanned proton beams that improves modeling of scatter events. Our pencil beam algorithm (PBA) was developed for calculating dose from monoenergetic, parallel proton beams in homogeneous media. Fermi-Eyges theory was implemented for pencil beam transport. Elastic and nonelastic scatter effects were each modeled as a Gaussian distribution, with root mean square (RMS) widths determined from theoretical calculations and a nonlinear fit to a Monte Carlo (MC) simulated 1 mm × 1 mm proton beam, respectively. The PBA was commissioned using MC simulations in a flat water phantom. Resulting PBA calculations were compared with results of other models reported in the literature on the basis of differences between PBA and MC calculations of 80-20% penumbral widths. Our model was further tested by comparing PBA and MC results for oblique beams (45 degree incidence) and surface irregularities (step heights of 1 and 4 cm) for energies of 50-250 MeV and field sizes of 4 cm × 4 cm and 10 cm × 10 cm. Agreement between PBA and MC distributions was quantified by computing the percentage of points within 2% dose difference or 1 mm distance to agreement. Our PBA improved agreement between calculated and simulated penumbral widths by an order of magnitude compared with previously reported values. For comparisons of oblique beams and surface irregularities, agreement between PBA and MC distributions was better than 99%. Our algorithm showed improved accuracy over other models reported in the literature in predicting the overall shape of the lateral profile through the Bragg peak. This improvement was achieved by incorporating nonelastic scatter events into our PBA. The increased modeling accuracy of our PBA, incorporated into a treatment planning system, may improve the reliability of treatment planning calculations for patient treatments. This research was supported by contract W81XWH-10-1-0005 awarded by The U.S. Army Research Acquisition Activity, 820 Chandler Street, Fort Detrick, MD 21702-5014. This report does not necessarily reflect the position or policy of the Government, and no official endorsement should be inferred. © 2012 American Association of Physicists in Medicine.
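
    The two-Gaussian scatter model described above lends itself to a compact numerical sketch: a narrow elastic (multiple-Coulomb-scattering) core Gaussian plus a wide nonelastic halo, from which an 80-20% penumbral width can be computed for a broad-field edge. The sigma values and halo weight below are illustrative placeholders, not the paper's fitted parameters.

```python
import numpy as np

def lateral_profile(x, sigma_el, sigma_nel, w_nel=0.1):
    """Pencil-beam lateral dose kernel modeled as a weighted sum of two
    Gaussians: a narrow elastic (MCS) core and a wide nonelastic halo.
    sigma_el, sigma_nel, and w_nel are illustrative stand-ins."""
    core = (1 - w_nel) * np.exp(-x**2 / (2 * sigma_el**2)) / (sigma_el * np.sqrt(2 * np.pi))
    halo = w_nel * np.exp(-x**2 / (2 * sigma_nel**2)) / (sigma_nel * np.sqrt(2 * np.pi))
    return core + halo

def penumbra_80_20(x, profile):
    """80%-20% penumbral width of the broad-field edge obtained by
    integrating the pencil kernel (error-function-like edge)."""
    edge = np.cumsum(profile)
    edge /= edge.max()
    x80 = np.interp(0.8, edge, x)  # edge is monotonically increasing
    x20 = np.interp(0.2, edge, x)
    return abs(x80 - x20)

x = np.linspace(-50.0, 50.0, 2001)           # lateral position, mm
prof = lateral_profile(x, sigma_el=3.0, sigma_nel=12.0)
width = penumbra_80_20(x, prof)              # penumbral width, mm
```

    Widening the halo or increasing its weight broadens the computed penumbra, which is the effect the abstract attributes to nonelastic scatter.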

  14. Nonlinear static and dynamic finite element analysis of an eccentrically loaded graphite-epoxy beam

    NASA Technical Reports Server (NTRS)

    Fasanella, Edwin L.; Jackson, Karen E.; Jones, Lisa E.

    1991-01-01

    The Dynamic Crash Analysis of Structures (DYCAST) and NIKE3D nonlinear finite element codes were used to model the static and impulsive response of an eccentrically loaded graphite-epoxy beam. A 48-ply unidirectional composite beam was tested under an eccentric axial compressive load until failure. This loading configuration was chosen to highlight the capabilities of two finite element codes for modeling a highly nonlinear, large-deflection structural problem which has an exact solution. These codes are currently used to perform dynamic analyses of aircraft structures under impact loads to study crashworthiness and energy-absorbing capabilities. Both beam and plate element models were developed to compare with the experimental data using the DYCAST and NIKE3D codes.
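
    The eccentric-column problem referenced above has a classical closed form in linearized beam-column theory (the secant formula). As a hedge, this is only a small-deflection stand-in for the large-deflection exact solution the abstract refers to, and the section properties below are made up for illustration.

```python
import math

def eccentric_column_midspan_deflection(P, E, I, L, e):
    """Midspan lateral deflection of a pinned column under an eccentric
    axial load P (linearized beam-column theory):
        delta = e * (sec(k * L / 2) - 1),  k = sqrt(P / (E * I)).
    Valid only while k*L/2 < pi/2 (below the Euler buckling load)."""
    k = math.sqrt(P / (E * I))
    return e * (1.0 / math.cos(k * L / 2.0) - 1.0)

# illustrative numbers (SI units), not the tested composite beam's properties
P, E, I, L, e = 10e3, 70e9, 2e-8, 1.0, 0.01
delta = eccentric_column_midspan_deflection(P, E, I, L, e)
```

    The secant term makes the deflection grow faster than linearly with load, which is the geometric nonlinearity the finite element codes must capture.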

  15. 1985 Particle Accelerator Conference: Accelerator Engineering and Technology, 11th, Vancouver, Canada, May 13-16, 1985, Proceedings

    NASA Astrophysics Data System (ADS)

    Strathdee, A.

    1985-10-01

    The topics discussed are related to high-energy accelerators and colliders, particle sources and electrostatic accelerators, controls, instrumentation and feedback, beam dynamics, low- and intermediate-energy circular accelerators and rings, RF and other acceleration systems, beam injection, extraction and transport, operations and safety, linear accelerators, applications of accelerators, radiation sources, superconducting supercolliders, new acceleration techniques, superconducting components, cryogenics, and vacuum. Accelerator and storage ring control systems are considered along with linear and nonlinear orbit theory, transverse and longitudinal instabilities and cures, beam cooling, injection and extraction orbit theory, high current dynamics, general beam dynamics, and medical and radioisotope applications. Attention is given to superconducting RF structures, magnet technology, superconducting magnets, and physics opportunities with relativistic heavy ion accelerators.

  16. Using deep recurrent neural network for direct beam solar irradiance cloud screening

    NASA Astrophysics Data System (ADS)

    Chen, Maosi; Davis, John M.; Liu, Chaoshun; Sun, Zhibin; Zempila, Melina Maria; Gao, Wei

    2017-09-01

    Cloud screening is an essential procedure for in-situ calibration and atmospheric property retrieval with the (UV-)MultiFilter Rotating Shadowband Radiometer [(UV-)MFRSR]. A previous study explored a cloud screening algorithm for direct-beam (UV-)MFRSR voltage measurements based on a stability assumption over a long time period (typically a half day or a whole day). Designing such an algorithm requires in-depth understanding of radiative transfer and delicate data manipulation. Recent rapid developments in deep neural networks and computing hardware have opened a window for modeling complicated end-to-end systems with a standardized strategy. In this study, a multi-layer dynamic bidirectional recurrent neural network is built for determining the cloudiness at each time point; it is trained with a 17-year dataset and tested with another 1-year dataset. The dataset consists of daily 3-minute cosine-corrected voltages, airmasses, and the corresponding cloud/clear-sky labels at two stations of the USDA UV-B Monitoring and Research Program. The results show that the optimized neural network model (3 layers, 250 hidden units, and 80 epochs of training) has an overall test accuracy of 97.87% (97.56% for the Oklahoma site and 98.16% for the Hawaii site). Generally, the neural network model grasps the key concept of the original model: it uses data from the entire day, rather than short nearby measurements, to perform cloud screening. A scrutiny of the logits layer suggests that the neural network model automatically learns to calculate a quantity similar to total optical depth and finds an appropriate threshold for cloud screening.
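
    The bidirectional recurrent architecture described above, in which each time point's cloud/clear decision draws on both earlier and later measurements in the day, can be sketched as a forward pass in plain NumPy. The weights are random stand-ins rather than trained values, and the layer sizes are illustrative, not the paper's 3-layer, 250-unit configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def rnn_pass(X, Wx, Wh, b):
    """Simple tanh RNN forward pass over a (T, F) sequence; returns the
    hidden state at every time step."""
    T, H = X.shape[0], Wh.shape[0]
    h = np.zeros(H)
    out = np.empty((T, H))
    for t in range(T):
        h = np.tanh(X[t] @ Wx + h @ Wh + b)
        out[t] = h
    return out

def bidirectional_cloud_logits(X, params):
    """Concatenate forward and time-reversed hidden states so every
    sample 'sees' the whole day's record, then project to 2 logits
    (cloud vs clear)."""
    fwd = rnn_pass(X, *params["fwd"])
    bwd = rnn_pass(X[::-1], *params["bwd"])[::-1]
    h = np.concatenate([fwd, bwd], axis=1)
    return h @ params["Wout"] + params["bout"]

T, F, H = 240, 2, 8  # 240 three-minute samples; features: (voltage, airmass)
params = {
    "fwd": (0.3 * rng.normal(size=(F, H)), 0.3 * rng.normal(size=(H, H)), np.zeros(H)),
    "bwd": (0.3 * rng.normal(size=(F, H)), 0.3 * rng.normal(size=(H, H)), np.zeros(H)),
    "Wout": 0.3 * rng.normal(size=(2 * H, 2)),
    "bout": np.zeros(2),
}
X = rng.normal(size=(T, F))          # stand-in for one day's measurements
logits = bidirectional_cloud_logits(X, params)   # shape (T, 2)
```

    In training, these logits would be fed to a softmax cross-entropy loss against the cloud/clear-sky labels; inspecting them is what the abstract's "scrutiny of the logits layer" refers to.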

  17. Propagation dynamics of off-axis symmetrical and asymmetrical vortices embedded in flat-topped beams

    NASA Astrophysics Data System (ADS)

    Zhang, Xu; Wang, Haiyan

    2017-11-01

    In this paper, the propagation dynamics of off-axis symmetrical and asymmetrical optical vortices (OVs) embedded in flat-topped beams have been explored numerically based on rigorous scalar diffraction theory. The distribution properties of phase and intensity play an important role in driving the propagation dynamics of OVs. Numerical results show that a single off-axis vortex moves in a straight line. The displacement of the single off-axis vortex becomes smaller when either the order of flatness N or the beam size ω0 is increased, or when the off-axis displacement d is decreased. In addition, the phase singularities of high-order vortex beams can split after propagating a certain distance. It is also demonstrated that the movement of OVs is closely related to the spatially symmetrical or asymmetrical distribution of the vortex singularity field. Multiple symmetrical and asymmetrical OVs embedded in flat-topped beams can interact and rotate. The investigation of the propagation dynamics of OVs may have many applications in optical micro-manipulation and optical tweezers.
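
    The initial condition studied above, an off-axis vortex embedded in a flat-topped envelope, can be sketched numerically. A super-Gaussian exp[-(r²/ω0²)^N] is used here as one common flat-top model (the paper may use a different flat-top expansion), and the singularity is recovered as the intensity zero inside the beam core.

```python
import numpy as np

def flat_top_vortex(x, y, w0=1.0, N=4, d=0.3, m=1):
    """Initial field of a charge-m off-axis vortex embedded in a
    super-Gaussian flat-topped envelope. The singularity sits at (d, 0);
    w0, N, d, and m are illustrative values."""
    r2 = x**2 + y**2
    envelope = np.exp(-((r2 / w0**2) ** N))
    vortex = ((x - d) + 1j * y) ** m   # phase winds by 2*pi*m around (d, 0)
    return vortex * envelope

n = 201
xs = np.linspace(-2.0, 2.0, n)
X, Y = np.meshgrid(xs, xs, indexing="ij")
U = flat_top_vortex(X, Y)

# locate the phase singularity as the intensity zero inside the flat-top core
A = np.abs(U)
A[np.hypot(X, Y) > 1.0] = np.inf   # ignore the dark region outside the beam
i, j = np.unravel_index(np.argmin(A), A.shape)
```

    Propagating U with a split-step Fourier method and re-running this zero search at each distance would trace the straight-line vortex trajectory the abstract describes.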

  18. Crossed Molecular Beam Studies and Dynamics of Decomposition of Chemically Activated Radicals

    DOE R&D Accomplishments Database

    Lee, Y. T.

    1973-09-01

    The power of the crossed molecular beams method in the investigation of the dynamics of chemical reactions lies mainly in the direct observation of the consequences of single collisions of well controlled reactant molecules. The primary experimental observations which provide information on reaction dynamics are the measurements of angular and velocity distributions of reaction products.

  19. SU-F-J-76: Evaluation of the Performance of Different Deformable Image Registration Algorithms in Helical, Axial and Cone-Beam CT Images of a Mobile Phantom

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jaskowiak, J; Ahmad, S; Ali, I

    Purpose: To investigate quantitatively the performance of different deformable-image-registration (DIR) algorithms with helical (HCT), axial (ACT) and cone-beam CT (CBCT) by evaluating the variations in the CT numbers and lengths of targets moving with controlled motion patterns. Methods: Four DIR algorithms, including demons, fast demons, Horn-Schunck and Lucas-Kanade from the DIRART software, are used to register CT images of a mobile phantom. The mobile phantom is scanned with different imaging techniques that include helical, axial and cone-beam CT. The phantom includes three targets with different lengths that are made from water-equivalent material and inserted in low-density foam, which is moved with adjustable motion amplitudes and frequencies. Results: Most of the DIR algorithms are able to reproduce the lengths of the stationary targets; however, they do not reproduce the CT-number values in CBCT. The image artifacts induced by motion are more regular in CBCT imaging, where the mobile-target elongation increases linearly with motion amplitude. In ACT and HCT, the motion artifacts are irregular, where some mobile targets are elongated or shrunk depending on the motion phase during imaging. In CBCT, the DIR algorithms are successful in deforming the images of the mobile targets to the images of the stationary targets, reproducing the CT-number values and length of the target for motion amplitudes < 20 mm. Similarly in ACT, all DIR algorithms produced the actual CT number and length of the stationary targets for motion amplitudes < 15 mm. As stronger motion artifacts are induced in HCT and ACT, the DIR algorithms fail to reproduce the CT values and shape of the stationary targets, and the fast-demons algorithm has the worst performance. Conclusion: Most DIR algorithms reproduce the CT-number values and lengths of the stationary targets in HCT and ACT images that have motion artifacts induced by small motion amplitudes. As motion amplitudes increase, the DIR algorithms fail to deform mobile-target images to the stationary images in HCT and ACT. In CBCT, DIR algorithms are successful in reproducing the length and shape of the stationary targets; however, they fail to reproduce the accurate CT-number level.
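
    Of the four DIR algorithms compared above, the demons method is the simplest to sketch. Below is a minimal 1D Thirion-demons iteration registering a shifted Gaussian "target" profile to a stationary one; DIRART's 3D implementations differ in many details (force normalization, regularization, multiresolution), so this is illustrative only.

```python
import numpy as np

def demons_register_1d(fixed, moving, iters=200):
    """Minimal 1D demons sketch: iteratively estimate a displacement
    field u so that moving(x + u) matches fixed(x), using the classic
    demons force and crude box-filter smoothing of u as regularization."""
    x = np.arange(fixed.size, dtype=float)
    u = np.zeros_like(fixed)
    grad_f = np.gradient(fixed)
    for _ in range(iters):
        warped = np.interp(x + u, x, moving)
        diff = warped - fixed
        # Thirion demons force: intensity mismatch steered by the fixed
        # image gradient, with the standard stabilizing denominator
        force = -diff * grad_f / (grad_f**2 + diff**2 + 1e-12)
        u += force
        for _ in range(3):   # smooth the displacement field
            u = np.convolve(u, np.ones(5) / 5.0, mode="same")
    return u

x = np.arange(200, dtype=float)
fixed = np.exp(-((x - 100.0) / 10.0) ** 2)
moving = np.exp(-((x - 108.0) / 10.0) ** 2)  # same target shifted 8 samples
u = demons_register_1d(fixed, moving)
residual = np.abs(np.interp(x + u, x, moving) - fixed).max()
```

    The recovered u should be roughly 8 samples over the target and the residual mismatch small; the phantom study above effectively performs this check in 3D, with motion artifacts degrading the "moving" image.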

  20. Measuring radiation damage dynamics by pulsed ion beam irradiation: 2016 project annual report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kucheyev, Sergei O.

    2017-01-04

    The major goal of this project is to develop and demonstrate a novel experimental approach to access the dynamic regime of radiation damage formation in nuclear materials. In particular, the project exploits a pulsed-ion-beam method in order to gain insight into defect interaction dynamics by measuring effective defect interaction time constants and defect diffusion lengths. For Year 3, this project had the following two major milestones: (i) the demonstration of the measurement of thermally activated defect-interaction processes by pulsed ion beam techniques and (ii) the demonstration of alternative characterization techniques to study defect dynamics. As we describe below, both of these milestones have been met.
