Sample records for individual beamlets analysis

  1. High brightness--multiple beamlets source for patterned X-ray production

    DOEpatents

    Leung, Ka-Ngo [Hercules, CA]; Ji, Qing [Albany, CA]; Barletta, William A. [Oakland, CA]; Jiang, Ximan [El Cerrito, CA]; Ji, Lili [Albany, CA]

    2009-10-27

    Techniques for controllably directing beamlets to a target substrate are disclosed. The beamlets may be either positive ions or electrons. It has been shown that beamlets may be produced with a diameter of 1 μm, with inter-aperture spacings of 12 μm. An array of such beamlets may be used for maskless lithography. By step-wise movement of the beamlets relative to the target substrate, individual devices may be directly e-beam written. Ion beams may be directly written as well. Due to the high brightness of beamlets extracted from a multicusp source, lithographic exposure times are expected to be minimized. Alternatively, the beamlets may be electrons striking a high-Z material for X-ray production, thereafter collimated to provide patterned X-ray exposures such as those used in CAT scans. Such a device may be used for remote detection of explosives.

  2. The divergence characteristics of constrained-sheath optics systems for use with 5-eV atomic oxygen sources

    NASA Technical Reports Server (NTRS)

    Anderson, John R.; Wilbur, Paul J.

    1989-01-01

    The potential usefulness of the constrained-sheath optics concept as a means of controlling the divergence of low energy, high current density ion beams is examined numerically and experimentally. Numerical results demonstrate that some control of the divergence of typical ion beamlets can be achieved at perveance levels of interest by properly contouring the surface of the constrained sheath. Experimental results demonstrate that a sheath can be constrained by a wire mesh attached to the screen plate of the ion optics system. The numerically predicted beamlet divergence characteristics are shown to depart from those measured experimentally, and additional numerical analysis is used to demonstrate that this departure is probably due to distortions induced as the sheath attempts to conform to the individual wires that make up the constraining mesh. The concept is considered potentially useful for controlling the divergence of ion beamlets in applications where low divergence, low energy, high current density beamlets are sought, but more work is required to demonstrate this for net beam ion energies as low as 5 eV.

  3. High-energy (>70 keV) x-ray conversion efficiency measurement on the ARC laser at the National Ignition Facility

    NASA Astrophysics Data System (ADS)

    Chen, Hui; Hermann, M. R.; Kalantar, D. H.; Martinez, D. A.; Di Nicola, P.; Tommasini, R.; Landen, O. L.; Alessi, D.; Bowers, M.; Browning, D.; Brunton, G.; Budge, T.; Crane, J.; Di Nicola, J.-M.; Döppner, T.; Dixit, S.; Erbert, G.; Fishler, B.; Halpin, J.; Hamamoto, M.; Heebner, J.; Hernandez, V. J.; Hohenberger, M.; Homoelle, D.; Honig, J.; Hsing, W.; Izumi, N.; Khan, S.; LaFortune, K.; Lawson, J.; Nagel, S. R.; Negres, R. A.; Novikova, L.; Orth, C.; Pelz, L.; Prantil, M.; Rushford, M.; Shaw, M.; Sherlock, M.; Sigurdsson, R.; Wegner, P.; Widmayer, C.; Williams, G. J.; Williams, W.; Whitman, P.; Yang, S.

    2017-03-01

    The Advanced Radiographic Capability (ARC) laser system at the National Ignition Facility (NIF) is designed to ultimately provide eight beamlets with a pulse duration adjustable from 1 to 30 ps, and energies up to 1.5 kJ per beamlet. Currently, four beamlets have been commissioned. In the first set of 6 commissioning target experiments, the individual beamlets were fired onto gold foil targets with energy up to 1 kJ per beamlet at 20-30 ps pulse length. The x-ray energy distribution and pulse duration were measured, yielding energy conversion efficiencies of 4–9 × 10⁻⁴ for x-rays with energies greater than 70 keV. With greater than 3 J of such x-rays, ARC provides a high-precision x-ray backlighting capability for upcoming inertial confinement fusion and high-energy-density physics experiments on NIF.
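    The quoted efficiency can be sanity-checked with one line of arithmetic; the ~4 kJ total drive energy below is an assumption (four beamlets at up to 1 kJ each), not a figure stated in the abstract:

```python
# Rough consistency check of the quoted ARC x-ray conversion efficiency.
# Assumption (not stated outright above): four beamlets at up to 1 kJ
# each give roughly 4 kJ of total drive energy on target.
e_laser_j = 4 * 1.0e3          # total laser energy (J), assumed
e_xray_j = 3.0                 # ">3 J" of >70 keV x rays, from the abstract
eta = e_xray_j / e_laser_j
print(f"conversion efficiency ~ {eta:.1e}")  # ~7.5e-04, inside the 4-9e-4 range
```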

  4. A method for modeling laterally asymmetric proton beamlets resulting from collimation

    PubMed Central

    Gelover, Edgar; Wang, Dongxu; Hill, Patrick M.; Flynn, Ryan T.; Gao, Mingcheng; Laub, Steve; Pankuch, Mark; Hyer, Daniel E.

    2015-01-01

    Purpose: To introduce a method to model the 3D dose distribution of laterally asymmetric proton beamlets resulting from collimation. The model enables rapid beamlet calculation for spot scanning (SS) delivery using a novel penumbra-reducing dynamic collimation system (DCS) with two pairs of trimmers oriented perpendicular to each other. Methods: Trimmed beamlet dose distributions in water were simulated with MCNPX and the collimating effects noted in the simulations were validated by experimental measurement. The simulated beamlets were modeled analytically using integral depth dose curves along with an asymmetric Gaussian function to represent fluence in the beam’s eye view (BEV). The BEV parameters consisted of Gaussian standard deviations (sigmas) along each primary axis (σx1,σx2,σy1,σy2) together with the spatial location of the maximum dose (μx,μy). Percent depth dose variation with trimmer position was accounted for with a depth-dependent correction function. Beamlet growth with depth was accounted for by combining the in-air divergence with Hong’s fit of the Highland approximation along each axis in the BEV. Results: The beamlet model showed excellent agreement with the Monte Carlo simulation data used as a benchmark. The overall passing rate for a 3D gamma test with 3%/3 mm passing criteria was 96.1% between the analytical model and Monte Carlo data in an example treatment plan. Conclusions: The analytical model is capable of accurately representing individual asymmetric beamlets resulting from use of the DCS. This method enables integration of the DCS into a treatment planning system to perform dose computation in patient datasets. The method could be generalized for use with any SS collimation system in which blades, leaves, or trimmers are used to laterally sharpen beamlets. PMID:25735287
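    The asymmetric Gaussian BEV fluence described above can be sketched as follows; this is a minimal illustrative form with hypothetical parameter values, not the published model's exact parameterization or fitted sigmas:

```python
import numpy as np

def asymmetric_gaussian_fluence(x, y, mu_x, mu_y, sx1, sx2, sy1, sy2):
    """Beam's-eye-view fluence with independent Gaussian sigmas on
    either side of the dose maximum along each axis (illustrative form
    only; the published model's normalization may differ)."""
    sx = np.where(x < mu_x, sx1, sx2)   # sigma left/right of the peak
    sy = np.where(y < mu_y, sy1, sy2)   # sigma below/above the peak
    return np.exp(-0.5 * ((x - mu_x) / sx) ** 2
                  - 0.5 * ((y - mu_y) / sy) ** 2)

# A trimmer on the +x side sharpens that shoulder: sx2 < sx1.
x = np.linspace(-10.0, 10.0, 201)
f = asymmetric_gaussian_fluence(x, 0.0, 0.0, 0.0, 4.0, 2.0, 3.0, 3.0)
```

The asymmetry shows up directly in the profile: with sx2 < sx1 the fluence falls off faster on the trimmed (+x) side of the peak than on the open (-x) side.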

  5. High-energy (>70 keV) x-ray conversion efficiency measurement on the ARC laser at the National Ignition Facility

    DOE PAGES

    Chen, Hui; Hermann, M. R.; Kalantar, D. H.; ...

    2017-03-16

    Here, the Advanced Radiographic Capability (ARC) laser system at the National Ignition Facility (NIF) is designed to ultimately provide eight beamlets with a pulse duration adjustable from 1 to 30 ps, and energies up to 1.5 kJ per beamlet. Currently, four beamlets have been commissioned. In the first set of 6 commissioning target experiments, the individual beamlets were fired onto gold foil targets with energy up to 1 kJ per beamlet at 20–30 ps pulse length. The x-ray energy distribution and pulse duration were measured, yielding energy conversion efficiencies of 4–9 × 10⁻⁴ for x-rays with energies greater than 70 keV. With greater than 3 J of such x-rays, ARC provides a high-precision x-ray backlighting capability for upcoming inertial confinement fusion and high-energy-density physics experiments on NIF.

  6. Progress on complementary patterning using plasmon-excited electron beamlets (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Du, Zhidong; Chen, Chen; Pan, Liang

    2017-04-01

    Maskless lithography using parallel electron beamlets is a promising solution for next-generation scalable maskless nanolithography. Researchers have pursued this goal but have been unable to find a robust technology to generate and control high-quality electron beamlets with satisfactory brightness and uniformity. In this work, we aim to address this challenge by developing a revolutionary surface-plasmon-enhanced-photoemission (SPEP) technology to generate massively parallel electron beamlets for maskless nanolithography. The new technology is built upon our recent breakthroughs in plasmonic lenses, which will be used to excite and focus surface plasmons to generate massively parallel electron beamlets through photoemission. Specifically, the proposed SPEP device consists of an array of plasmonic lens and electrostatic micro-lens pairs, each pair independently producing an electron beamlet. During lithography, a spatial optical modulator will dynamically project light onto individual plasmonic lenses to control the switching and brightness of the electron beamlets. The photons incident onto each plasmonic lens are concentrated into a diffraction-unlimited spot as localized surface plasmons, exciting the local electrons to near their vacuum levels. Meanwhile, the electrostatic micro-lens extracts the excited electrons to form a focused beamlet, which can be rastered across a wafer to perform lithography. Studies have shown that surface plasmons can enhance photoemission by orders of magnitude. This SPEP technology can scale the maskless lithography process up to a throughput of wafers per hour. In this talk, we will report the mechanism of the strong electron-photon coupling and the locally enhanced photoexcitation, the design of a SPEP device, an overview of our proof-of-concept study, and demonstrated parallel lithography of 20-50 nm features.

  7. A method for modeling laterally asymmetric proton beamlets resulting from collimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gelover, Edgar; Wang, Dongxu; Flynn, Ryan T.

    2015-03-15

    Purpose: To introduce a method to model the 3D dose distribution of laterally asymmetric proton beamlets resulting from collimation. The model enables rapid beamlet calculation for spot scanning (SS) delivery using a novel penumbra-reducing dynamic collimation system (DCS) with two pairs of trimmers oriented perpendicular to each other. Methods: Trimmed beamlet dose distributions in water were simulated with MCNPX and the collimating effects noted in the simulations were validated by experimental measurement. The simulated beamlets were modeled analytically using integral depth dose curves along with an asymmetric Gaussian function to represent fluence in the beam’s eye view (BEV). The BEV parameters consisted of Gaussian standard deviations (sigmas) along each primary axis (σx1,σx2,σy1,σy2) together with the spatial location of the maximum dose (μx,μy). Percent depth dose variation with trimmer position was accounted for with a depth-dependent correction function. Beamlet growth with depth was accounted for by combining the in-air divergence with Hong’s fit of the Highland approximation along each axis in the BEV. Results: The beamlet model showed excellent agreement with the Monte Carlo simulation data used as a benchmark. The overall passing rate for a 3D gamma test with 3%/3 mm passing criteria was 96.1% between the analytical model and Monte Carlo data in an example treatment plan. Conclusions: The analytical model is capable of accurately representing individual asymmetric beamlets resulting from use of the DCS. This method enables integration of the DCS into a treatment planning system to perform dose computation in patient datasets. The method could be generalized for use with any SS collimation system in which blades, leaves, or trimmers are used to laterally sharpen beamlets.

  8. Real time radiotherapy verification with Cherenkov imaging: development of a system for beamlet verification

    NASA Astrophysics Data System (ADS)

    Pogue, B. W.; Krishnaswamy, V.; Jermyn, M.; Bruza, P.; Miao, T.; Ware, William; Saunders, S. L.; Andreozzi, J. M.; Gladstone, D. J.; Jarvis, L. A.

    2017-05-01

    Cherenkov imaging has been shown to allow near real time imaging of the beam entrance and exit on patient tissue, with the appropriate intensified camera and associated image processing. A dedicated system has been developed for research into full torso imaging of whole breast irradiation, where the dual camera system captures the beam shape for all beamlets used in this treatment protocol. Particularly challenging verification measurements exist in dynamic wedge, field in field, and boost delivery, and the system was designed to capture these as they are delivered. Two intensified CMOS (ICMOS) cameras were developed and mounted in a breast treatment room, and pilot studies for intensity and stability were completed. Software tools to contour the treatment area have been developed and are being tested prior to initiation of the full trial. At present, it is possible to record delivery of individual beamlets as small as a single MLC thickness, and readout at 20 frames per second is achieved. Statistical analysis of system repeatability and stability is presented, as well as pilot human studies.

  9. Ion beamlet steering for two-grid electrostatic thrusters. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Homa, J. M.

    1984-01-01

    An experimental study of ion beamlet steering, in which the direction of beamlets emitted from a two-grid aperture system is controlled by relative translation of the grids, is described. The results can be used to design electrostatic accelerating devices for which the direction and focus of emerging beamlets are important. Deflection and divergence angle data are presented for two-grid systems as a function of the relative lateral displacement of the holes in these grids. At large displacements, accelerator grid impingements become excessive; this determines the maximum allowable displacement and, as a result, the useful range of beamlet deflection. Beamlet deflection is shown to vary linearly with grid offset angle over this range. The divergence of the beamlets is found to be unaffected by deflection over the useful range. The grids of a typical dished-grid ion thruster are examined to determine the effects of thermally induced grid distortion and prescribed offsets of grid hole centerlines on the characteristics of the emerging beamlets. The results are used to determine the region on the grid surface where ion beamlet deflections exceed the useful range. Over this region, high accelerator grid impingement currents and rapid grid erosion are predicted.

  10. A technique for generating phase-space-based Monte Carlo beamlets in radiotherapy applications.

    PubMed

    Bush, K; Popescu, I A; Zavgorodni, S

    2008-09-21

    As radiotherapy treatment planning moves toward Monte Carlo (MC) based dose calculation methods, the MC beamlet is becoming an increasingly common optimization entity. At present, methods used to produce MC beamlets have utilized a particle source model (PSM) approach. In this work we outline the implementation of a phase-space-based approach to MC beamlet generation that is expected to provide greater accuracy in beamlet dose distributions. In this approach a standard BEAMnrc phase space is sorted and divided into beamlets with particles labeled using the inheritable particle history variable. This is achieved with the use of an efficient sorting algorithm, capable of sorting a phase space of any size into the required number of beamlets in only two passes. Sorting a phase space of five million particles can be achieved in less than 8 s on a single-core 2.2 GHz CPU. The beamlets can then be transported separately into a patient CT dataset, producing separate dose distributions (doselets). Methods for doselet normalization and conversion of dose to absolute units of Gy for use in intensity modulated radiation therapy (IMRT) plan optimization are also described.
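    The two-pass sorting step lends itself to a counting-sort sketch; the beamlet-index function and data layout below are illustrative assumptions, not the BEAMnrc-based implementation:

```python
# Illustrative two-pass (counting-sort style) grouping of phase-space
# particles into beamlets. Binning particles by a beamlet-index function
# is an assumption for this sketch.
def sort_into_beamlets(particles, n_beamlets, beamlet_of):
    counts = [0] * n_beamlets
    for p in particles:                    # pass 1: count per beamlet
        counts[beamlet_of(p)] += 1
    offsets, total = [], 0
    for c in counts:                       # prefix sums -> start offsets
        offsets.append(total)
        total += c
    out = [None] * len(particles)
    cursor = list(offsets)
    for p in particles:                    # pass 2: place each particle
        b = beamlet_of(p)
        out[cursor[b]] = p
        cursor[b] += 1
    return out, offsets

# Example: four beamlets spanning x positions in [0, 4)
particles = [3.2, 0.1, 1.7, 2.9, 0.8, 3.9]
sorted_p, starts = sort_into_beamlets(particles, 4, lambda x: int(x))
```

Each beamlet's particles then occupy a contiguous slice `sorted_p[starts[b]:starts[b+1]]`, ready to be transported independently into the CT dataset.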

  11. A new Monte Carlo-based treatment plan optimization approach for intensity modulated radiation therapy.

    PubMed

    Li, Yongbao; Tian, Zhen; Shi, Feng; Song, Ting; Wu, Zhaoxia; Liu, Yaqiang; Jiang, Steve; Jia, Xun

    2015-04-07

    Intensity-modulated radiation treatment (IMRT) plan optimization needs beamlet dose distributions. Pencil-beam or superposition/convolution type algorithms are typically used because of their high computational speed. However, inaccurate beamlet dose distributions may mislead the optimization process and hinder the resulting plan quality. To solve this problem, the Monte Carlo (MC) simulation method has been used to compute all beamlet doses prior to the optimization step. The conventional approach samples the same number of particles from each beamlet, yet this is not the optimal use of MC for this problem. In fact, some beamlets have very small intensities after solving the plan optimization problem; for those beamlets, fewer particles could be used in dose calculations to increase efficiency. Based on this idea, we have developed a new MC-based IMRT plan optimization framework that iteratively performs MC dose calculation and plan optimization. At each dose calculation step, the particle numbers for beamlets were adjusted based on the beamlet intensities obtained by solving the plan optimization problem in the previous iteration. We modified a GPU-based MC dose engine to allow simultaneous computation of a large number of beamlet doses. To test the accuracy of our modified dose engine, we compared the dose from a broad beam and the summed beamlet doses in this beam in an inhomogeneous phantom. Agreement within 1% for the maximum difference and 0.55% for the average difference was observed. We then validated the proposed MC-based optimization scheme in one lung IMRT case. It was found that the conventional scheme required 10⁶ particles from each beamlet to achieve an optimization result within 3% in fluence map and 1% in dose of the ground truth. In contrast, the proposed scheme achieved the same level of accuracy with on average 1.2 × 10⁵ particles per beamlet. Correspondingly, the computation time, including both MC dose calculations and plan optimizations, was reduced by a factor of 4.4, from 494 to 113 s, using only one GPU card.
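    The allocation idea, fewer particles for dim beamlets, can be sketched as below; the proportional rule and the minimum-history floor are illustrative assumptions rather than the paper's exact scheme:

```python
# Sketch of intensity-driven history allocation between iterations.
# Proportional sharing with a minimum-history floor is an illustrative
# assumption, not the paper's exact adjustment rule.
def allocate_histories(intensities, total_histories, floor=1_000):
    s = sum(intensities)
    return [max(int(total_histories * w / s), floor) for w in intensities]

# Low-intensity beamlets get only the floor; bright ones get the bulk.
n = allocate_histories([10.0, 1.0, 0.01], total_histories=1_000_000)
```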

  12. Design, installation, commissioning and operation of a beamlet monitor in the negative ion beam test stand at NIFS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Antoni, V.; Agostinetti, P.; Brombin, M.

    2015-04-08

    In the framework of the accompanying activity for the development of the two neutral beam injectors for the ITER fusion experiment, an instrumented beam calorimeter is being designed at Consorzio RFX, to be used in the SPIDER test facility (particle energy 100 keV; beam current 50 A), with the aim of testing beam characteristics and verifying proper operation of the source. The main components of the instrumented calorimeter are one-directional carbon-fibre-carbon composite tiles. Some prototype tiles have been used as a small-scale version of the entire calorimeter in the test stand of the neutral beam injectors of the LHD experiment, with the aim of characterising the beam features in various operating conditions. The extraction system of the NIFS test stand source was modified, by applying a mask to the first gridded electrode, in order to isolate only a subset of the beamlets, arranged in two 3×5 matrices, resembling the beamlet groups of the ITER beam sources. The present contribution gives a description of the design of the diagnostic system, including the numerical simulations of the expected thermal pattern. Moreover, the dedicated thermocouple measurement system is presented. The beamlet monitor was successfully used for a full experimental campaign, during which the main parameters of the source, mainly the arc power and the grid voltages, were varied. This contribution describes the methods of fitting and data analysis applied to the infrared images of the camera to recover the beamlet optics characteristics, in order to quantify the response of the system to different operational conditions. Some results concerning the beamlet features are presented as a function of the source parameters.

  13. Polarization Rotation Caused by Cross-Beam Energy Transfer in Direct-Drive Implosions

    NASA Astrophysics Data System (ADS)

    Edgell, D. H.; Follett, R. K.; Katz, J.; Myatt, J. F.; Shaw, J. G.; Turnbull, D.; Froula, D. H.

    2017-10-01

    The first evidence of polarization rotation caused by cross-beam energy transfer (CBET) during direct-drive implosions has been provided by a new beamlets diagnostic that was fielded on OMEGA. Beamlet images are, in essence, the end points of beamlets of light originating from different regions of each beam profile and following paths determined by refraction through the coronal plasma. The intensity of each beamlet varies because of absorption and many CBET interactions along that path. The new diagnostic records images in two time windows and includes a Wollaston prism to split each beamlet into two orthogonal polarization images, recording the polarization of each beamlet. Only the common polarization components couple during CBET, so when each beam is linearly polarized, CBET rotates the polarization of each beam. A 3-D CBET postprocessor for hydrodynamics codes was used to model the beamlet images. The predicted images are compared to the images recorded by the new diagnostic. This material is based upon work supported by the Department of Energy National Nuclear Security Administration under Award Number DE-NA0001944.

  14. Multi-beamlet investigation of the deflection compensation methods of SPIDER beamlets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baltador, C., E-mail: carlo.baltador@igi.cnr.it; Veltri, P.; Agostinetti, P.

    2016-02-15

    SPIDER (Source for Production of Ions of Deuterium Extracted from a Rf plasma) is an ion source test bed designed to extract and accelerate a negative ion current up to 40 A and 100 kV whose first beam is expected by the end of 2016. Two main effects perturb beamlet optics during the acceleration stage: space charge repulsion and the deflection induced by the permanent magnets (called co-extracted electron suppression magnets) embedded in the EG. The purpose of this work is to evaluate and compare benefits, collateral effects, and limitations of electrical and magnetic compensation methods for beamlet deflection. The study of these methods has been carried out by means of numerical modeling tools: multi-beamlet simulations have been performed for the first time.

  15. Multi-beamlet investigation of the deflection compensation methods of SPIDER beamlets

    NASA Astrophysics Data System (ADS)

    Baltador, C.; Veltri, P.; Agostinetti, P.; Chitarin, G.; Serianni, G.

    2016-02-01

    SPIDER (Source for Production of Ions of Deuterium Extracted from a Rf plasma) is an ion source test bed designed to extract and accelerate a negative ion current up to 40 A and 100 kV whose first beam is expected by the end of 2016. Two main effects perturb beamlet optics during the acceleration stage: space charge repulsion and the deflection induced by the permanent magnets (called co-extracted electron suppression magnets) embedded in the EG. The purpose of this work is to evaluate and compare benefits, collateral effects, and limitations of electrical and magnetic compensation methods for beamlet deflection. The study of these methods has been carried out by means of numerical modeling tools: multi-beamlet simulations have been performed for the first time.

  16. Constrained sheath optics for high thrust density, low specific impulse ion thrusters

    NASA Technical Reports Server (NTRS)

    Wilbur, Paul J.; Han, Jian-Zhang

    1987-01-01

    The results of an experimental study showing that a contoured, fine wire mesh attached to the screen grid can be used to control the divergence characteristics of ion beamlets produced at low net-to-total accelerating voltage ratios are presented. The influence of free- and constrained-sheath optics systems on beamlet divergence characteristics is found to be similar in the operating regime investigated, but constrained-sheath optics systems can be operated at higher perveance levels than free-sheath ones. The concept of a fine wire interference probe that can be used to study ion beamlet focusing behavior is introduced. This probe is used to demonstrate beamlet focusing to a diameter about one hundredth of the screen grid extraction aperture diameter. Additional testing is suggested to define an optimally contoured mesh that could yield well-focused beamlets at net-to-total accelerating voltage ratios below about 0.1.

  17. Sputtering Holes with Ion Beamlets

    NASA Technical Reports Server (NTRS)

    Byers, D. C.; Banks, B. A.

    1974-01-01

    Ion beamlets of predetermined configurations are formed by shaped apertures in the screen grid of an ion thruster having a double-grid accelerator system. A plate is placed downstream from the screen grid holes and attached to the accelerator grid. When the ion thruster is operated, holes having the configuration of the beamlets formed by the screen grid are sputtered through the plate at the accelerator grid.

  18. Design of a Multistep Phase Mask for High-Energy Terahertz Pulse Generation by Optical Rectification

    NASA Astrophysics Data System (ADS)

    Avetisyan, Y.; Makaryan, A.; Tadevosyan, V.; Tonouchi, M.

    2017-12-01

    A new scheme for generating high-energy terahertz (THz) pulses based on using a multistep phase mask (MSPM) is suggested and analyzed. The mask is placed on the entrance surface of the nonlinear optical (NLO) crystal, eliminating the necessity of imaging optics. In contrast to the contact grating method, introduction of large amounts of angular dispersion is avoided. The operation principle of the suggested scheme is based on the fact that the MSPM splits a single input beam into many smaller time-delayed "beamlets," which together form a discretely tilted-front laser pulse in the NLO crystal. The analysis of THz-pulse generation in ZnTe and lithium niobate (LN) crystals shows that application of ZnTe crystal is preferable, especially when long-wavelength pump sources are used. The dimensions of the mask's steps required for high-energy THz-pulse generation in ZnTe and LN crystals are calculated. The optimal number of steps is estimated, taking into account the individual beamlets' spatial broadening and problems related to the mask fabrication. The proposed method is a promising way to develop high-energy, monolithic, and alignment-free THz-pulse sources.
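    The per-step delay behind the discretely tilted front follows from a simple optical-path argument; the index and step height below are illustrative values, not the paper's design parameters:

```python
# Back-of-the-envelope delay per mask step: a step of height h in a
# material of group index n_g adds optical path (n_g - 1) * h, hence a
# time delay of (n_g - 1) * h / c between adjacent beamlets.
# n_g and h below are illustrative assumptions, not design values.
c = 299_792_458.0      # speed of light in vacuum, m/s
n_g = 1.5              # assumed group index of the mask material
h = 100e-6             # assumed step height, 100 um
delay_s = (n_g - 1.0) * h / c
print(f"per-step delay ~ {delay_s * 1e15:.0f} fs")
```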

  19. Numerical simulations of the first operational conditions of the negative ion test facility SPIDER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Serianni, G., E-mail: gianluigi.serianni@igi.cnr.it; Agostinetti, P.; Antoni, V.

    2016-02-15

    In view of the realization of the negative ion beam injectors for ITER, a test facility, named SPIDER, is under construction in Padova (Italy) to study and optimize production and extraction of negative ions. The present paper is devoted to the analysis of the expected first operations of SPIDER in terms of single-beamlet and multiple-beamlet simulations of the hydrogen beam optics in various operational conditions. The effectiveness of the methods adopted to compensate for the magnetic deflection of the particles is also assessed. Indications for a sequence of the experimental activities are obtained.

  20. Numerical simulations of the first operational conditions of the negative ion test facility SPIDER

    NASA Astrophysics Data System (ADS)

    Serianni, G.; Agostinetti, P.; Antoni, V.; Baltador, C.; Cavenago, M.; Chitarin, G.; Marconato, N.; Pasqualotto, R.; Sartori, E.; Toigo, V.; Veltri, P.

    2016-02-01

    In view of the realization of the negative ion beam injectors for ITER, a test facility, named SPIDER, is under construction in Padova (Italy) to study and optimize production and extraction of negative ions. The present paper is devoted to the analysis of the expected first operations of SPIDER in terms of single-beamlet and multiple-beamlet simulations of the hydrogen beam optics in various operational conditions. The effectiveness of the methods adopted to compensate for the magnetic deflection of the particles is also assessed. Indications for a sequence of the experimental activities are obtained.

  1. A Compact High-Brightness Heavy-Ion Injector

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Westenskow, G A; Grote, D P; Halaxa, E

    2005-05-11

    To provide a compact high-brightness heavy-ion beam source for Heavy Ion Fusion (HIF) accelerators, we have been experimenting with merging multiple beamlets in an injector which uses an RF plasma source. In an 80-kV, 20-microsecond experiment, the RF plasma source has produced up to 5 mA of Ar⁺ in a single beamlet. An extraction current density of 100 mA/cm² was achieved, and the thermal temperature of the ions was below 1 eV. We have tested the first 4 gaps of an injector design at full voltage gradient. Einzel lenses were used to focus the beamlets while reducing the beamlet-to-beamlet space charge interaction. We were able to reach greater than 100 kV/cm in the first four gaps. We also performed experiments on a converging 119-beamlet source. Although the source has the same optics as a full 1.6 MV injector system, these tests were carried out at 400 kV due to the test stand HV limit. We have measured the beam's emittance after the beamlets are merged and passed through an electrostatic quadrupole (ESQ). Our goal is to confirm the emittance growth and to demonstrate the technical feasibility of building a driver-scale HIF injector.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thiyagarajan, Rajesh; Karrthick, KP; Kataria, Tejinder

    Purpose: Performing DQA for bilateral (B-L) breast tomotherapy is a challenging task due to the limitations of commercially available detector arrays and film. The aim of this study is to perform DQA for a B-L breast tomotherapy plan using the MLC fluence sinogram. Methods: A treatment plan was generated on the Tomotherapy system for a B-L breast tumour. The B-L breast targets were prescribed 50.4 Gy over 28 fractions. The plan was generated with a 6 MV photon beam and the pitch was set to 0.3. The width of the total target (left and right) is 39 cm and its length is 20 cm. The DQA plan was delivered without any phantom on the megavoltage computed tomography (MVCT) detector system. The pulses recorded by the MVCT system were exported to the delivery analysis software (Tomotherapy Inc.) for reconstruction. The detector signals were reconstructed to a sinogram and converted to an MLC fluence sinogram, which was compared with the planned fluence sinogram. A point dose was also measured with a cheese phantom and ionization chamber to verify the absolute dose component. Results: The planned fluence sinogram and reconstructed MLC fluence sinogram were compared using the gamma metric, with MLC positional difference and beamlet intensity as the evaluation parameters. A 3 mm positional difference and 3% beamlet intensity difference were set for the gamma calculation. A total of 26784 non-zero beamlets were included in the analysis, of which 161 beamlets had gamma greater than 1, giving a gamma passing rate of 99.4%. Point dose measurements were within 1.3% of the calculated dose. Conclusion: MLC fluence sinogram based delivery quality assurance was performed for bilateral breast irradiation. This would be a suitable alternative for large-volume targets such as bilateral breast and total body irradiation. However, the conventional method of DQA should be used to validate this method periodically.
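    The quoted passing rate follows directly from the reported counts; a minimal sketch of the bookkeeping (the per-beamlet gamma values themselves are, of course, not reproduced here):

```python
# Passing-rate bookkeeping as described in the abstract: a beamlet
# passes when its gamma value is <= 1. The placeholder gamma values
# below only reproduce the reported pass/fail counts.
def gamma_passing_rate(gammas):
    passed = sum(1 for g in gammas if g <= 1.0)
    return 100.0 * passed / len(gammas)

# Reported counts: 26784 non-zero beamlets, 161 with gamma > 1.
gammas = [0.5] * (26784 - 161) + [1.2] * 161
rate = gamma_passing_rate(gammas)
print(f"{rate:.1f}%")  # matches the quoted 99.4%
```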

  3. Life on the edge: squirrel-cage fringe fields and their effects in the MBE-4 combiner experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fawley, W.M.

    1996-02-01

    The MBE-4 combiner experiment employs an electrostatic combined-function focusing/bending element, the so-called ``squirrel cage``, just before the actual merging region. There has been concern that non-linear fields, primarily in the fringe regions at the beginning and end of the cage, may be strong enough to lead to significant emittance degradation. This note presents the results of numerical calculations which determined the anharmonic, non-linear components of the 3D fields in the cage and the resultant, orbit-integrated effects upon the MBE-4 beamlets. We find that while the anharmonic effects are small compared to the dipole deflection, the resultant transverse emittance growth is significant when compared to the expected value of the initial emittance of the individual beamlets.

  4. Beamlet diagnostics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Theys, M.

    1994-05-06

    Beamlet is a high power laser currently being built at Lawrence Livermore National Laboratory as a proof of concept for the National Ignition Facility (NIF). Beamlet is testing several areas of laser advancement, such as a 37 cm Pockels cell, a square amplifier, and propagation of a square beam. The diagnostics on Beamlet tell the operators how much energy the beam has in different locations, the pulse shape, the energy distribution, and other important information regarding the beam. This information is being used to evaluate new amplifier designs and to extrapolate performance to the NIF laser. In my term at Lawrence Livermore National Laboratory I have designed and built a diagnostic, calibrated instruments used on diagnostics, set up instruments, hooked up communication lines to the instruments, and set up computers to control specific diagnostics.

  5. Inverse-optimized 3D conformal planning: Minimizing complexity while achieving equivalence with beamlet IMRT in multiple clinical sites

    PubMed Central

    Fraass, Benedick A.; Steers, Jennifer M.; Matuszak, Martha M.; McShan, Daniel L.

    2012-01-01

    Purpose: Inverse planned intensity modulated radiation therapy (IMRT) has helped many centers implement highly conformal treatment planning with beamlet-based techniques. The many comparisons between IMRT and 3D conformal (3DCRT) plans, however, have been limited because most 3DCRT plans are forward-planned while IMRT plans utilize inverse planning, meaning both optimization and delivery techniques are different. This work avoids that problem by comparing 3D plans generated with a unique inverse planning method for 3DCRT called inverse-optimized 3D (IO-3D) conformal planning. Since IO-3D and the beamlet IMRT to which it is compared use the same optimization techniques, cost functions, and plan evaluation tools, direct comparisons between IMRT and simple, optimized IO-3D plans are possible. Though IO-3D has some similarity to direct aperture optimization (DAO), since it directly optimizes the apertures used, IO-3D is specifically designed for 3DCRT fields (i.e., 1–2 apertures per beam) rather than starting with IMRT-like modulation and then optimizing aperture shapes. The two algorithms are very different in design, implementation, and use. The goals of this work include using IO-3D to evaluate how close simple but optimized IO-3D plans come to nonconstrained beamlet IMRT, showing that optimization, rather than modulation, may be the most important aspect of IMRT (for some sites). Methods: The IO-3D dose calculation and optimization functionality is integrated in the in-house 3D planning/optimization system. New features include random point dose calculation distributions, costlet and cost function capabilities, fast dose volume histogram (DVH) and plan evaluation tools, optimization search strategies designed for IO-3D, and an improved, reimplemented edge/octree calculation algorithm. 
The IO-3D optimization, in distinction to DAO, is designed to optimize 3D conformal plans (one to two segments per beam) and optimizes MLC segment shapes and weights with various user-controllable search strategies which optimize plans without beamlet or pencil beam approximations. IO-3D allows comparisons of beamlet, multisegment, and conformal plans optimized using the same cost functions, dose points, and plan evaluation metrics, so quantitative comparisons are straightforward. Here, comparisons of IO-3D and beamlet IMRT techniques are presented for breast, brain, liver, and lung plans. Results: IO-3D achieves high quality results comparable to beamlet IMRT, for many situations. Though the IO-3D plans have many fewer degrees of freedom for the optimization, this work finds that IO-3D plans with only one to two segments per beam are dosimetrically equivalent (or nearly so) to the beamlet IMRT plans, for several sites. IO-3D also reduces plan complexity significantly. Here, monitor units per fraction (MU/Fx) for IO-3D plans were 22%–68% less than those for the 1 cm × 1 cm beamlet IMRT plans and 72%–84% less than those for the 0.5 cm × 0.5 cm beamlet IMRT plans. Conclusions: The unique IO-3D algorithm illustrates that inverse planning can achieve high quality 3D conformal plans equivalent (or nearly so) to unconstrained beamlet IMRT plans, for many sites. IO-3D thus provides the potential to optimize flat or few-segment 3DCRT plans, creating less complex optimized plans which are efficient and simple to deliver. The less complex IO-3D plans have operational advantages for scenarios including adaptive replanning, cases with interfraction and intrafraction motion, and pediatric patients. PMID:22755717

  6. Fine-structure characteristics in the emittance images of a strongly focusing He{sup +} beam

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sasao, M.; Kobuchi, T.; Kisaki, M.

    2010-02-15

    The phase space distribution of a strongly focused He{sup +} ion beam source equipped with concave multiaperture electrodes was measured using a pepper-pot plate and a Kapton foil. The substructure of 301 merging He beamlets was clearly observed on the footprint of a pepper-pot hole at the beam waist, where the beam density was 500 mA/cm{sup 2}. The position and the width of each beamlet substructure show the effect of interference of beamlets with surrounding ones.

  7. Adaptive beamlet-based finite-size pencil beam dose calculation for independent verification of IMRT and VMAT.

    PubMed

    Park, Justin C; Li, Jonathan G; Arhjoul, Lahcen; Yan, Guanghua; Lu, Bo; Fan, Qiyong; Liu, Chihray

    2015-04-01

    The use of sophisticated dose calculation procedures in modern radiation therapy treatment planning is inevitable in order to account for complex treatment fields created by multileaf collimators (MLCs). As a consequence, independent volumetric dose verification is time consuming, which affects the efficiency of the clinical workflow. In this study, the authors present an efficient adaptive beamlet-based finite-size pencil beam (AB-FSPB) dose calculation algorithm that minimizes the computational procedure while preserving accuracy. The computational time of the finite-size pencil beam (FSPB) algorithm is proportional to the number of infinitesimal and identical beamlets that constitute an arbitrary field shape. In AB-FSPB, the dose distribution from each beamlet is mathematically modeled such that the beamlets representing an arbitrary field shape need be neither infinitesimal nor identical. As a result, it is possible to represent an arbitrary field shape with a minimal number of beamlets of different sizes. In addition, the authors included model parameters to account for the rounded leaf ends and transmission of the MLC. The root mean square error (RMSE) between the treatment planning system and conventional FSPB on a 10 × 10 cm² square field using 10 × 10, 2.5 × 2.5, and 0.5 × 0.5 cm² beamlet sizes was 4.90%, 3.19%, and 2.87%, respectively, compared with RMSE of 1.10%, 1.11%, and 1.14% for AB-FSPB. This finding holds true for a larger square field size of 25 × 25 cm², where RMSE for 25 × 25, 2.5 × 2.5, and 0.5 × 0.5 cm² beamlet sizes was 5.41%, 4.76%, and 3.54% in FSPB, respectively, compared with RMSE of 0.86%, 0.83%, and 0.88% for AB-FSPB. It was found that AB-FSPB could successfully account for the MLC transmission without major discrepancy. The algorithm was also graphics processing unit (GPU) compatible to maximize its computational speed.
For an intensity modulated radiation therapy field (∼12 segments) and a volumetric modulated arc therapy field (∼90 control points) with a 3D grid size of 2.0 × 2.0 × 2.0 mm³, dose was computed within a 3-5 and 10-15 s timeframe, respectively. The authors have developed an efficient adaptive beamlet-based pencil beam dose calculation algorithm. Its fast computation along with GPU compatibility has shown better performance than conventional FSPB. This enables the implementation of AB-FSPB in the clinical environment for independent volumetric dose verification.
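The core idea of AB-FSPB — covering an arbitrary aperture with a minimal number of variable-size beamlets instead of many identical small ones — can be illustrated with a toy row-wise decomposition. This is a hypothetical sketch for intuition, not the authors' algorithm or dose model.

```python
import numpy as np

def adaptive_beamlets(mask):
    """Greedy row-wise decomposition of a binary aperture mask into
    variable-width beamlets: one beamlet per contiguous open run per row,
    instead of one fixed-size beamlet per open pixel."""
    beamlets = []
    for r, row in enumerate(mask):
        c = 0
        while c < len(row):
            if row[c]:
                start = c
                while c < len(row) and row[c]:
                    c += 1
                beamlets.append((r, start, c - start))  # (row, col, width)
            else:
                c += 1
    return beamlets

# An irregular 4x6 aperture: a fixed-size decomposition needs one beamlet
# per open pixel (16 here); the adaptive one needs one per run (5 here).
mask = np.array([
    [0, 1, 1, 1, 1, 0],
    [1, 1, 1, 1, 1, 1],
    [1, 1, 0, 0, 1, 1],
    [0, 0, 1, 1, 0, 0],
], dtype=bool)
runs = adaptive_beamlets(mask)
fixed_count = int(mask.sum())
```

The merged runs cover exactly the same open area, so the field shape is preserved while the beamlet count drops.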

  8. Broadband interferometric characterization of divergence and spatial chirp.

    PubMed

    Meier, Amanda K; Iliev, Marin; Squier, Jeff A; Durfee, Charles G

    2015-09-01

    We demonstrate a spectral interferometric method to characterize lateral and angular spatial chirp to optimize intensity localization in spatio-temporally focused ultrafast beams. Interference between two spatially sheared beams in an interferometer will lead to straight fringes if the wavefronts are curved. To produce reference fringes, we delay one arm relative to another in order to measure fringe rotation in the spatially resolved spectral interferogram. With Fourier analysis, we can obtain frequency-resolved divergence. In another arrangement, we spatially flip one beam relative to the other, which allows the frequency-dependent beamlet direction (angular spatial chirp) to be measured. Blocking one beam shows the spatial variation of the beamlet position with frequency (i.e., the lateral spatial chirp).

  9. A novel algorithm for the calculation of physical and biological irradiation quantities in scanned ion beam therapy: the beamlet superposition approach

    NASA Astrophysics Data System (ADS)

    Russo, G.; Attili, A.; Battistoni, G.; Bertrand, D.; Bourhaleb, F.; Cappucci, F.; Ciocca, M.; Mairani, A.; Milian, F. M.; Molinelli, S.; Morone, M. C.; Muraro, S.; Orts, T.; Patera, V.; Sala, P.; Schmitt, E.; Vivaldo, G.; Marchetto, F.

    2016-01-01

    The calculation algorithm of a modern treatment planning system for ion-beam radiotherapy should ideally be able to deal with different ion species (e.g. protons and carbon ions), to provide relative biological effectiveness (RBE) evaluations and to describe different beam lines. In this work we propose a new approach for computing ion irradiation outcomes, the beamlet superposition (BS) model, which satisfies these requirements. This model applies and extends the concepts of previous fluence-weighted pencil-beam algorithms to quantities of radiobiological interest other than dose, i.e. RBE- and LET-related quantities. It describes an ion beam through a beam-line-specific, weighted superposition of universal beamlets. The universal physical and radiobiological irradiation effect of the beamlets on a representative set of water-like tissues is evaluated once, coupling the per-track information derived from FLUKA Monte Carlo simulations with the radiobiological effectiveness provided by the microdosimetric kinetic model and the local effect model. Thanks to an extension of the superposition concept, the beamlet irradiation action superposition is applicable for the evaluation of dose, RBE and LET distributions. The weight function for the beamlet superposition is derived from the beam phase space density at the patient entrance. A general beam model commissioning procedure is proposed, which has successfully been tested on the CNAO beam line. The BS model provides the evaluation of different irradiation quantities for different ions, the adaptability permitted by weight functions and the evaluation speed of analytical approaches. Benchmarking plans in simple geometries and clinical plans are shown to demonstrate the model capabilities.
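The superposition principle at the heart of the BS model — a total profile built as a phase-space-weighted sum of precomputed universal beamlet kernels — can be sketched in one dimension. Illustrative Gaussian kernels stand in here for the FLUKA-derived per-beamlet tables; all names are hypothetical.

```python
import numpy as np

def superpose(weights, centers, sigma, x):
    """Lateral profile as a weighted superposition of universal Gaussian
    beamlet kernels; the weights would come from the beam phase-space
    density at the patient entrance."""
    x = np.asarray(x, dtype=float)
    dose = np.zeros_like(x)
    for w, c in zip(weights, centers):
        dose += w * np.exp(-0.5 * ((x - c) / sigma) ** 2)
    return dose

x = np.linspace(-3.0, 3.0, 121)
# three beamlets, symmetric weighting -> symmetric composite profile
profile = superpose([1.0, 2.0, 1.0], [-1.0, 0.0, 1.0], 0.5, x)
```

Because the kernels are universal, only the weight vector changes between beam lines or delivery patterns; the same evaluation loop serves dose-, RBE- or LET-weighted kernels alike.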

  10. Design of a multistep phase mask for high-energy THz pulse generation in ZnTe crystal

    NASA Astrophysics Data System (ADS)

    Avetisyan, Yuri H.; Makaryan, Armen; Tadevosyan, Vahe

    2017-08-01

    A new scheme for generating high-energy terahertz (THz) pulses by optical rectification of tilted-pulse-front (TPF) femtosecond laser pulses in ZnTe crystal is proposed and analyzed. The TPF laser pulses originate from propagation through a multistep phase mask (MSPM) attached to the entrance surface of the nonlinear crystal. As in the case of a contacted optical grating, no imaging optics are needed; in addition, the introduction of large amounts of angular dispersion is eliminated. The operating principle is based on the fact that the MSPM splits a single input beam into many smaller time-delayed "beamlets", which together form a discretely tilted pulse front in the nonlinear crystal. The dimensions of the mask's steps required for high-energy THz-pulse generation in ZnTe and in the widely used lithium niobate (LN) crystal are calculated. The optimal number of steps is estimated taking into account the individual beamlet's spatial broadening and problems related to mask fabrication. The THz field is calculated analytically in the no-pump-depletion approximation using a radiating-antenna model. The analysis shows that using a ZnTe crystal allows higher THz-pulse energy to be obtained than with an LN crystal, especially when long-wavelength pump sources are used. The proposed method is a promising way to develop a high-energy, monolithic, and alignment-free THz-pulse source.

  11. Generation of a novel phase-space-based cylindrical dose kernel for IMRT optimization.

    PubMed

    Zhong, Hualiang; Chetty, Indrin J

    2012-05-01

    Improving dose calculation accuracy is crucial in intensity-modulated radiation therapy (IMRT). We have developed a method for generating a phase-space-based dose kernel for IMRT planning of lung cancer patients. Particle transport in the linear accelerator treatment head of a 21EX, 6 MV photon beam (Varian Medical Systems, Palo Alto, CA) was simulated using the EGSnrc/BEAMnrc code system. The phase space information was recorded under the secondary jaws. Each particle in the phase space file was associated with a beamlet whose index was calculated and saved in the particle's LATCH variable. The DOSXYZnrc code was modified to accumulate the energy deposited by each particle based on its beamlet index. Furthermore, the central axis of each beamlet was calculated from the orientation of all the particles in this beamlet. A cylinder was then defined around the central axis so that only the energy deposited within the cylinder was counted. A look-up table was established for each cylinder during the tallying process. The efficiency and accuracy of the cylindrical beamlet energy deposition approach was evaluated using a treatment plan developed on a simulated lung phantom. Profile and percentage depth doses computed in a water phantom for an open, square field size were within 1.5% of measurements. Dose optimized with the cylindrical dose kernel was found to be within 0.6% of that computed with the nontruncated 3D kernel. The cylindrical truncation reduced optimization time by approximately 80%. A method for generating a phase-space-based dose kernel, using a truncated cylinder for scoring dose, in beamlet-based optimization of lung treatment planning was developed and found to be in good agreement with the standard, nontruncated scoring approach. Compared to previous techniques, our method significantly reduces computational time and memory requirements, which may be useful for Monte-Carlo-based 4D IMRT or IMAT treatment planning.
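The per-beamlet scoring with cylindrical truncation described above can be sketched as a toy tally: each particle carries its beamlet index (as stored in the LATCH variable), and its energy deposit counts only if it falls inside the cylinder around that beamlet's central axis. This is an illustrative reduction, not the modified DOSXYZnrc code; beamlet axes are taken parallel to z for simplicity.

```python
def tally_beamlets(particles, axes, radius):
    """Accumulate deposited energy per beamlet index, counting only
    deposits inside a cylinder of the given radius around the beamlet's
    central axis."""
    energy = {}
    for x, y, z, e, idx in particles:
        ax, ay = axes[idx]
        # 2D distance test: the cylinder axis runs along z
        if (x - ax) ** 2 + (y - ay) ** 2 <= radius ** 2:
            energy[idx] = energy.get(idx, 0.0) + e
    return energy

axes = {0: (0.0, 0.0), 1: (1.0, 0.0)}
particles = [
    (0.1, 0.0, 5.0, 2.0, 0),   # inside beamlet 0's cylinder
    (0.9, 0.0, 4.0, 1.5, 1),   # inside beamlet 1's cylinder
    (3.0, 0.0, 2.0, 9.9, 0),   # far from axis 0: truncated away
]
deposits = tally_beamlets(particles, axes, radius=0.5)
```

Discarding the far-from-axis deposit is what trims the kernel tails and yields the reported ~80% reduction in optimization time at a sub-percent dose cost.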

  12. Could we use beamlets as a tool for remote sensing of the magnetotail?

    NASA Astrophysics Data System (ADS)

    Dolgonosov, Maxim; Zelenyi, Lev; Zimbardo, Gaetano; Perri, Silvia; Kovrazhkin, Rostislav

    2012-07-01

    In our presentation we raise the question of exploiting beamlets for remote sensing of the magnetotail. There is a long history of investigating particle dynamics and the features of distribution functions with prescribed electric and magnetic fields that could be measured by spacecraft. Here we focus on a small part of this story and study in detail the behavior of ions in the vicinity of the current sheet. Burkhart and Chen [Burkhart and Chen, 1991, JGR] employed the modified Harris model of the current-sheet magnetic field, B = B_0 tanh(z/L) e_x + B_z e_z, and found a signature of nonlinear particle dynamics and an underlying partitioning of phase space that manifests itself as a series of peaks in the ion distribution function. The separation between the peaks is proportional to the fourth root of the particle energy and to quantities that describe the current-sheet structure. The formation of these peaks in the ion distribution function was explained on the basis of the resonance condition proposed by Buchner and Zelenyi [Buchner and Zelenyi, 1989, JGR]. The non-adiabatic dynamics of the ions in the vicinity of the equatorial plane can be characterized by the action integral I_z = (1/2π) ∮ ż dz, which serves as an approximate integral of motion [Sonnerup, 1971]. Chaos is generated by the jumps ΔI_z of this invariant which accompany the particle's crossings of the current sheet; these can lead both to almost regular (field-aligned) motion of particles and to capture of particles in the center of the current sheet, due to unavoidable chaotic scattering. However, a subset of ``regularity'' regions can exist in physical space for certain combinations of current-sheet parameters.
Successive jumps of the adiabatic invariant I_z within these regions, at the entry of a particle into the current sheet and at its exit, compensate each other to first approximation, and ions ejected from these regions form almost monoenergetic, highly accelerated, and spatially localized ion beams, the so-called beamlets. The quasi-stationary dawn-dusk electric field E_y in the magnetotail accelerates ions between these jumps [Buchner and Zelenyi, 1990; Zelenyi et al., 2006a; Grigorenko et al., 2007]. The sites of acceleration depend on the value of B_n, and for a typical energy of the ions coming from the mantle, the resonance condition is satisfied at a number of discrete positions downtail. Zelenyi et al. [Zelenyi et al., 2007, JETP Letters] found a universal scaling characterizing the chain of these ``regularity'' regions. This ``law'' relates the typical beamlet energy W_N to the number N of the corresponding resonant region: W_N = (4/3) log N. Later, Dolgonosov et al. [Dolgonosov et al., 2010, JGR] modified the ``universal'' scaling and showed that, to study experimentally observed beamlets, one should take into account the presence of the electric field perpendicular to the plane of the current sheet. On the basis of this paper, spacecraft data (Cluster and Interball) were analyzed to study the properties of thin current sheets [Kovrazhkin et al., 2012, JETP Letters]. Evidently, nonlinear particle dynamics results in the generation of ``regularity'' islands with characteristic features. In the paper of Zelenyi et al. [Zelenyi et al., 2006, GRL], the modulation of the normal component of the magnetic field under the influence of the self-consistent currents of particles was investigated. Peaks of the B_z modulation nearly coincided with the ``regularity'' islands. This result indicates that turbulence in the plasma sheet could result from nonlinear particle dynamics, and that the properties of this ``noise'' are governed by the features of the particle motion.
The influence of this ``noise'' thereby constrains the use of beamlets for remote sensing. It is also natural to ask what happens to these ``resonant'' regions under the influence of external noise (or externally driven turbulence). Experimental observations of the magnetic field in the plasma sheet indicate permanent perturbations of the magnetic field, and these perturbations can be very significant, δBz ˜ Bz. At the same time, measurements of beamlets at the PSBL show that beamlets are long-lived structures [Grigorenko, 2003, JETP Letters]. What is the value of the magnetic field perturbation that could destroy the generation of beamlets? In our report we discuss current-sheet properties obtained from beamlet analysis and the natural restrictions imposed by turbulence.
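The quoted universal scaling W_N = (4/3) log N can be tabulated directly. A minimal sketch follows; the base of the logarithm and the normalization of W_N are taken here as the natural logarithm in dimensionless units, which is an assumption on the form quoted in the abstract.

```python
import math

def beamlet_energy(n):
    """Normalized energy W_N = (4/3) log N of the beamlet ejected from
    resonant region N (the 'universal scaling' of Zelenyi et al., 2007,
    as quoted in the abstract; natural log assumed)."""
    return (4.0 / 3.0) * math.log(n)

# W_1 = 0; successive beamlet energies grow, but the spacing between
# them shrinks logarithmically with N
energies = [beamlet_energy(n) for n in range(1, 6)]
```

This monotone, logarithmically compressing ladder is what would, in principle, let a measured beamlet energy be inverted for the resonant-region number N, i.e. a downtail position — the "remote sensing" idea of the title.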

  13. Wavefront correction for static and dynamic aberrations to within 1 second of the system shot in the NIF Beamlet demonstration facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hartley, R.; Kartz, M.; Behrendt, W.

    1996-10-01

    The laser wavefront of the NIF Beamlet demonstration system is corrected for static aberrations with a wavefront control system. The system operates closed loop with a probe beam prior to a shot and has a loop bandwidth of about 3 Hz. However, until recently the wavefront control system was disabled several minutes prior to the shot to allow time to manually reconfigure its attenuators and probe beam insertion mechanism to shot mode. Thermally-induced dynamic variations in gas density in the Beamlet main beam line produce significant wavefront error. After about 5-8 seconds, the wavefront error has increased to a new, higher level because turbulence-induced aberrations are no longer being corrected. This implies that there is a turbulence-induced aberration noise bandwidth of less than one hertz, and that the wavefront controller could correct for the majority of the turbulence-induced aberration (about one-third wave) by automating its reconfiguration to occur within one second of the shot. This modification was recently implemented on Beamlet; we call it the t{sub 0}-1 system.

  14. Compensation in the presence of deep turbulence using tiled-aperture architectures

    NASA Astrophysics Data System (ADS)

    Spencer, Mark F.; Brennan, Terry J.

    2017-05-01

    The presence of distributed-volume atmospheric aberrations or "deep turbulence" presents unique challenges for beam-control applications which look to sense and correct for disturbances found along the laser-propagation path. This paper explores the potential for branch-point-tolerant reconstruction algorithms and tiled-aperture architectures to correct for the branch cuts contained in the phase function due to deep-turbulence conditions. Using wave-optics simulations, the analysis aims to parameterize the fitting-error performance of tiled-aperture architectures operating in a null-seeking control loop with piston, tip, and tilt compensation of the individual optical beamlet trains. To evaluate fitting-error performance, the analysis plots normalized power in the bucket as a function of the Fried coherence diameter, the log-amplitude variance, and the number of subapertures for comparison purposes. Initial results show that tiled-aperture architectures with a large number of subapertures outperform filled-aperture architectures with continuous-face-sheet deformable mirrors.

  15. Depth-resolved dual-beamlet vibrometry based on Fourier domain low coherence interferometry

    PubMed Central

    Choudhury, Niloy; Chen, Fangyi; Wang, Ruikang K.; Jacques, Steven L.; Nuttall, Alfred L.

    2013-01-01

    Abstract. We present an optical vibrometer based on delay-encoded, dual-beamlet phase-sensitive Fourier domain interferometric system to provide depth-resolved subnanometer scale vibration information from scattering biological specimens. System characterization, calibration, and preliminary vibrometry with biological specimens were performed. The proposed system has the potential to provide both amplitude and direction of vibration of tissue microstructures on a single two-dimensional plane. PMID:23455961

  16. An EGSnrc Monte Carlo study of the microionization chamber for reference dosimetry of narrow irregular IMRT beamlets.

    PubMed

    Capote, Roberto; Sánchez-Doblado, Francisco; Leal, Antonio; Lagares, Juan Ignacio; Arráns, Rafael; Hartmann, Günther H

    2004-09-01

    Intensity modulated radiation therapy (IMRT) has evolved toward the use of many small radiation fields, or "beamlets," to increase the resolution of the intensity map. Smaller beamlets are typically about 1-5 cm². Therefore small ionization chambers (IC) with sensitive volumes ≤ 0.1 cm³ are generally used for dose verification of IMRT treatments. The dosimetry of these narrow photon beams pertains to the so-called nonreference conditions for beam calibration. The use of ion chambers for such narrow beams remains questionable due to the lack of electron equilibrium in most of the field. The present contribution aims to estimate, by the Monte Carlo (MC) method, the total correction needed to convert the charge measured by the IBA-Wellhöfer NAC007 micro IC in such a radiation field to the absolute dose to water. A detailed geometrical simulation of the microionization chamber was performed. The ion chamber was always positioned at a 10 cm depth in water, parallel to the beam axis. The doses delivered to the air and water cavities were calculated using the CAVRZ EGSnrc user code. The 6 MV phase spaces for the Primus Clinac (Siemens) used as input to the CAVRZnrc code were derived by BEAM/EGS4 modeling of the treatment head of the machine along with the multileaf collimator [Sánchez-Doblado et al., Phys. Med. Biol. 48, 2081-2099 (2003)] and contrasted with experimental measurements. Dose calculations were carried out for two irradiation geometries, namely, the reference 10 × 10 cm² field and an irregular (approximately 2 × 2 cm²) IMRT beamlet. The dose measured by the ion chamber is estimated by MC simulation as the dose averaged over the air cavity inside the ion chamber (Dair). The absorbed dose to water is derived as the dose deposited inside the same volume, in the same geometrical position, filled and surrounded by water (Dwater) in the absence of the ionization chamber.
Therefore, the Dwater/Dair dose ratio is a direct MC estimate of the total correction factor needed to convert the absorbed dose in air to the absorbed dose to water. The dose ratio was calculated for several chamber positions, starting from the penumbra region around the beamlet, along the two diagonals crossing the radiation field. For this quantity, differences from 0 up to 3% are observed between the dose ratio values obtained within the small irregular IMRT beamlet and the dose ratio derived for the reference 10 × 10 cm² field. Greater differences from the reference value, up to 9%, were obtained in the penumbra region of the small IMRT beamlet.

  17. Maskless micro-ion-beam reduction lithography system

    DOEpatents

    Leung, Ka-Ngo; Barletta, William A.; Patterson, David O.; Gough, Richard A.

    2005-05-03

    A maskless micro-ion-beam reduction lithography system is a system for projecting patterns onto a resist layer on a wafer with feature size down to below 100 nm. The MMRL system operates without a stencil mask. The patterns are generated by switching beamlets on and off from a two electrode blanking system or pattern generator. The pattern generator controllably extracts the beamlet pattern from an ion source and is followed by a beam reduction and acceleration column.

  18. A correction scheme for a simplified analytical random walk model algorithm of proton dose calculation in distal Bragg peak regions

    NASA Astrophysics Data System (ADS)

    Yao, Weiguang; Merchant, Thomas E.; Farr, Jonathan B.

    2016-10-01

    The lateral homogeneity assumption is used in most analytical algorithms for proton dose, such as the pencil-beam algorithms and our simplified analytical random walk model. To improve the dose calculation in the distal fall-off region in heterogeneous media, we analyzed primary proton fluence near heterogeneous media and propose to calculate the lateral fluence with voxel-specific Gaussian distributions. The lateral fluence from a beamlet is no longer expressed by a single Gaussian for all the lateral voxels, but by a specific Gaussian for each lateral voxel. The voxel-specific Gaussian for the beamlet of interest is calculated by re-initializing the fluence deviation on an effective surface where the proton energies of the beamlet of interest and the beamlet passing the voxel are the same. The dose improvement from the correction scheme was demonstrated by the dose distributions in two sets of heterogeneous phantoms consisting of cortical bone, lung, and water and by evaluating distributions in example patients with a head-and-neck tumor and metal spinal implants. The dose distributions from Monte Carlo simulations were used as the reference. The correction scheme effectively improved the dose calculation accuracy in the distal fall-off region and increased the gamma test pass rate. The extra computation for the correction was about 20% of that for the original algorithm but is dependent upon patient geometry.
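The voxel-specific-Gaussian idea above can be sketched simply: instead of one global lateral sigma per depth, each lateral voxel evaluates the beamlet fluence with its own sigma. The code below is illustrative; in the paper's scheme the per-voxel sigmas come from re-initializing the fluence deviation on the effective surface, which is not modeled here.

```python
import numpy as np

def lateral_fluence(x, sigma):
    """Normalized Gaussian lateral fluence of a beamlet, evaluated with a
    per-voxel sigma array rather than a single global value."""
    x = np.asarray(x, dtype=float)
    s = np.asarray(sigma, dtype=float)
    return np.exp(-0.5 * (x / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

x = np.array([-1.0, 0.0, 1.0])          # lateral voxel offsets, cm
homog = lateral_fluence(x, np.full(3, 0.4))             # single global sigma
hetero = lateral_fluence(x, np.array([0.4, 0.4, 0.8]))  # wider behind lung
```

With a homogeneous sigma the profile is symmetric; giving the voxel behind low-density material a larger sigma raises the fluence reaching that off-axis voxel, which is the correction the scheme applies in distal fall-off regions.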

  19. MO-H-19A-02: Investigation of Modulated Electron Arc (MeArc) Therapy for the Treatment of Scalp Tumors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eldib, A; Al-Azhar University, Cairo; Jin, L

    2014-06-15

    Purpose: Electron arc therapy has long been proposed as the most suitable technique for the treatment of superficial tumors that follow circularly curved surfaces. However, it has been challenged by the unsuitability of conventional applicators and the lack of adequate 3-D dose calculation tools for arc electron beams in treatment planning systems (TPS). Now, with the availability of an electron-specific multi-leaf collimator (eMLC) and an in-house Monte Carlo (MC) based TPS, we were motivated to investigate more advanced modulated electron arc (MeARC) therapy and its beneficial outcome. Methods: We initiated the study with a film measurement conducted in a head and neck phantom, where we delivered electron arcs in a step-and-shoot manner using the light field as a guide to avoid field abutments. This step was done to ensure enough clearance for the arcs with the eMLC. The MCBEAM and MCPLAN MC codes were used for the treatment head simulation and phantom dose calculation, respectively. Treatment plans were generated for targets drawn in real patient CTs and in the head and neck phantom. We utilized the beam's eye view available from a commercial planning system to create beamlets having the same isocenter and adjoined at the scalp surface. Dose-deposition coefficients from those beamlets were then calculated for all electron energies using MCPLAN. An in-house optimization code was then used to find the optimum weights needed from individual beamlets. Results: MeARC showed a nicely tailored dose distribution around the circularly curved target on the scalp. Some hot spots were noticed and could be attributed to the field abutment problem owing to the bulging nature of electron profiles. Brain dose was shown to be at lower levels compared to photon treatment. Conclusion: MeARC was shown to be a promising modality for treating scalp cases and could be beneficial for all superficial tumors with a circular curvature.

  20. Multi-Segment Radius Measurement Using an Absolute Distance Meter Through a Null Assembly

    NASA Technical Reports Server (NTRS)

    Merle, Cormic; Wick, Eric; Hayden, Joseph

    2011-01-01

    This system was one of the test methods considered for measuring the radius of curvature of one or more of the 18 segmented mirrors that form the 6.5 m diameter primary mirror (PM) of the James Webb Space Telescope (JWST). The assembled telescope will be tested at cryogenic temperatures in a 17-m diameter by 27-m high vacuum chamber at the Johnson Space Center. This system uses a Leica Absolute Distance Meter (ADM), at a wavelength of 780 nm, combined with beam-steering and beam-shaping optics to make a differential distance measurement between a ring mirror on the reflective null assembly and individual PM segments. The ADM is located inside the same Pressure-Tight Enclosure (PTE) that houses the test interferometer. The PTE maintains the ADM and interferometer at ambient temperature and pressure so that they are not directly exposed to the telescope s harsh cryogenic and vacuum environment. This system takes advantage of the existing achromatic objective and reflective null assembly used by the test interferometer to direct four ADM beamlets to four PM segments through an optical path that is coincident with the interferometer beam. A mask, positioned on a linear slide, contains an array of 1.25 mm diameter circular subapertures that map to each of the 18 PM segments as well as six positions around the ring mirror. A down-collimated 4 mm ADM beam simultaneously covers 4 adjacent PM segment beamlets and one ring mirror beamlet. The radius, or spacing, of all 18 segments can be measured with the addition of two orthogonally-oriented scanning pentaprisms used to steer the ADM beam to any one of six different sub-aperture configurations at the plane of the ring mirror. The interferometer beam, at a wavelength of 687 nm, and the ADM beamlets, at a wavelength of 780 nm, pass through the objective and null so that the rays are normally incident on the parabolic PM surface. 
After reflecting off the PM, both the ADM and interferometer beams return to their respective instruments on nearly the same path. A fifth beamlet, acting as a differential reference, reflects off a ring mirror attached to the objective and null and returns to the ADM. The spacings between the ring mirror, objective, and null are known through manufacturing tolerances as well as through an in situ null wavefront alignment of the interferometer test beam with a reflective hologram located near the caustic of the null. Since the total path length between the ring mirror and PM segments is highly deterministic, any ADM-measured departures from the predicted path length can be attributed to either spacing error or radius error in the PM. It is estimated that the path length measurement between the ring mirror and a PM segment is accurate to better than 100 µm. The unique features of this invention include the differential distance measuring capability and its integration into an existing cryogenic and vacuum compatible interferometric optical test.

  1. A non-voxel-based broad-beam (NVBB) framework for IMRT treatment planning.

    PubMed

    Lu, Weiguo

    2010-12-07

We present a novel framework that enables very large scale intensity-modulated radiation therapy (IMRT) planning with limited computation resources, with improvements in cost, plan quality and planning throughput. Current IMRT optimization uses a voxel-based beamlet superposition (VBS) framework that requires pre-calculation and storage of a large amount of beamlet data, resulting in large temporal and spatial complexity. We developed a non-voxel-based broad-beam (NVBB) framework for IMRT capable of direct treatment parameter optimization (DTPO). In this framework, both the objective function and its derivative are evaluated from the continuous viewpoint, abandoning the 'voxel' and 'beamlet' representations; pre-calculation and storage of beamlets are therefore no longer needed. The NVBB framework has linear complexities (O(N³)) in both space and time. The low-memory, full-computation and data-parallel nature of the framework renders its implementation efficient on the graphics processing unit (GPU). We implemented the NVBB framework and incorporated it into the TomoTherapy treatment planning system (TPS). The new TPS runs on a single workstation with one GPU card (NVBB-GPU). Extensive verification/validation tests were performed in house and via third parties. Benchmarks on dose accuracy, plan quality and throughput were compared with the commercial TomoTherapy TPS, which is based on the VBS framework and uses a computer cluster with 14 nodes (VBS-cluster). For all tests, the dose accuracy of the two TPSs is comparable (within 1%). Plan qualities were comparable, with no clinically significant difference for most cases, except that superior target uniformity was seen with the NVBB-GPU for some cases. However, the planning time using the NVBB-GPU was reduced many-fold relative to the VBS-cluster. In conclusion, we developed a novel NVBB framework for IMRT optimization.
The continuous viewpoint and DTPO nature of the algorithm eliminate the need for beamlets and lead to better plan quality. The computation parallelization on a GPU instead of a computer cluster significantly reduces hardware and service costs. Compared with using the current VBS framework on a computer cluster, the planning time is significantly reduced using the NVBB framework on a single workstation with a GPU card.
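The VBS-versus-NVBB distinction can be sketched in a few lines of numpy (toy sizes and a synthetic matrix; clinical problems are orders of magnitude larger): VBS stores the whole beamlet-dose matrix and superposes it, while an NVBB-style evaluation generates each contribution on demand and discards it, trading storage for re-computation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_beamlets = 1000, 200      # toy sizes; clinical cases are far larger

# VBS: precompute and hold the full beamlet-dose matrix, then superpose.
B = rng.random((n_voxels, n_beamlets))  # column b = dose map of beamlet b
w = rng.random(n_beamlets)              # beamlet intensities
dose_vbs = B @ w                        # O(n_voxels * n_beamlets) storage held

# NVBB-style idea: never store B; evaluate each beam's dose contribution
# on the fly and accumulate.  Here the on-the-fly kernel is faked with the
# same columns purely to show the numerical equivalence.
dose_otf = np.zeros(n_voxels)
for b in range(n_beamlets):
    dose_otf += w[b] * B[:, b]          # computed, used, then discarded

assert np.allclose(dose_vbs, dose_otf)
```

The memory saved is exactly the beamlet matrix, which is what makes the single-GPU implementation in the abstract feasible.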

  2. Measurements of Electron Density Profiles of Plasmas Produced by Nike KrF Laser for Laser Plasma Instability (LPI) Research

    NASA Astrophysics Data System (ADS)

    Oh, Jaechul; Weaver, J. L.; Obenschain, S. P.; Schmitt, A. J.; Kehne, D. M.; Karasik, M.; Chan, L.-Y.; Serlin, V.; Phillips, L.

    2013-10-01

    Knowing spatial profiles of electron density (ne) in the underdense coronal region (n

  3. Ion beam accelerator system

    NASA Technical Reports Server (NTRS)

    Aston, Graeme (Inventor)

    1984-01-01

    A system is described that combines geometrical and electrostatic focusing to provide high ion extraction efficiency and good focusing of an accelerated ion beam. The apparatus includes a pair of curved extraction grids (16, 18) with multiple pairs of aligned holes positioned to direct a group of beamlets (20) along converging paths. The extraction grids are closely spaced and maintained at a moderate potential to efficiently extract beamlets of ions and allow them to combine into a single beam (14). An accelerator electrode device (22) downstream from the extraction grids, is at a much lower potential than the grids to accelerate the combined beam.

  4. Ion beam accelerator system

    NASA Technical Reports Server (NTRS)

    Aston, G. (Inventor)

    1981-01-01

    A system is described that combines geometrical and electrostatic focusing to provide high ion extraction efficiency and good focusing of an accelerated ion beam. The apparatus includes a pair of curved extraction grids with multiple pairs of aligned holes positioned to direct a group of beamlets along converging paths. The extraction grids are closely spaced and maintained at a moderate potential to efficiently extract beamlets of ions and allow them to combine into a single beam. An accelerator electrode device downstream from the extraction grids is at a much lower potential than the grids to accelerate the combined beam. The application of the system to ion implantation is mentioned.

  5. Means and method for the focusing and acceleration of parallel beams of charged particles

    DOEpatents

    Maschke, Alfred W.

    1983-07-05

    A novel apparatus and method for focussing beams of charged particles comprising planar arrays of electrostatic quadrupoles. The quadrupole arrays may comprise electrodes which are shared by two or more quadrupoles. Such quadrupole arrays are particularly adapted to providing strong focussing forces for high current, high brightness, beams of charged particles, said beams further comprising a plurality of parallel beams, or beamlets, each such beamlet being focussed by one quadrupole of the array. Such arrays may be incorporated in various devices wherein beams of charged particles are accelerated or transported, such as linear accelerators, klystron tubes, beam transport lines, etc.

  6. Microfabricated Ion Beam Drivers for Magnetized Target Fusion

    NASA Astrophysics Data System (ADS)

    Persaud, Arun; Seidl, Peter; Ji, Qing; Ardanuc, Serhan; Miller, Joseph; Lal, Amit; Schenkel, Thomas

    2015-11-01

Efficient, low-cost drivers are important for Magnetized Target Fusion (MTF). Ion beams offer a high degree of control to deliver the required megajoules of driver energy for MTF, and they can be matched to several types of magnetized fuel targets, including compact toroids and solid targets. We describe an ion beam driver approach based on the MEQALAC concept (Multiple Electrostatic Quadrupole Array Linear Accelerator) with many beamlets in an array of micro-fabricated channels. The channels consist of a lattice of electrostatic quadrupoles (ESQ) for focusing and of radio-frequency (RF) electrodes for ion acceleration. Simulations with particle-in-cell and beam envelope codes predict >10x higher current densities compared to state-of-the-art ion accelerators. This increase results from dividing the total ion beam current into many beamlets to control space-charge forces. Focusing elements can be biased, taking advantage of the high breakdown electric fields in sub-mm structures formed using MEMS techniques (Micro-Electro-Mechanical Systems). We will present results on ion beam transport and acceleration in MEMS-based beamlets. Acknowledgments: This work is supported by the U.S. DOE under Contract No. DE-AC02-05CH11231.

  7. Characterization of Electron Temperature and Density Profiles of Plasmas Produced by Nike KrF Laser for Laser Plasma Instability (LPI) Research

    NASA Astrophysics Data System (ADS)

    Oh, Jaechul; Weaver, J. L.; Phillips, L.; Obenschain, S. P.; Schmitt, A. J.; Kehne, D. M.; Chan, L.-Y.; Serlin, V.

    2011-10-01

Previous experiments with the Nike KrF laser (λ = 248 nm, Δν ~ 1 THz) observed LPI signatures near the quarter-critical density (nc/4) in CH plasmas; however, detailed measurements of the temperature (Te) and density (ne) profiles were missing. The current Nike LPI campaign will perform experimental determination of the plasma profiles. A side-on grid imaging refractometer (GIR) is the main diagnostic to resolve Te and ne in space, taking 2D snapshots of probe-laser (λ = 266 nm, Δt = 8 ps) beamlets (50 μm spacing) refracted by the plasma at the laser peak time. Ray tracing of the beamlets through hydrodynamically simulated (FASTRAD3D) plasma profiles estimates that the refractometer may access densities up to ~0.2 nc. With the measured Te and ne profiles in the plasma corona, we will discuss analysis of light radiated from the plasmas in spectral ranges relevant to the two-plasmon decay and convective Raman instabilities. The validity of the (Te, ne) data for thermal transport studies will also be discussed. Work supported by DoE/NNSA and ONR and performed at NRL.

  8. SU-F-BRD-13: Quantum Annealing Applied to IMRT Beamlet Intensity Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nazareth, D; Spaans, J

Purpose: We report on the first application of quantum annealing (QA) to the process of beamlet intensity optimization for IMRT. QA is a new technology, which employs novel hardware and software techniques to address various discrete optimization problems in many fields. Methods: We apply the D-Wave Inc. proprietary hardware, which natively exploits quantum mechanical effects for improved optimization. The new QA algorithm, running on this hardware, is most similar to simulated annealing, but relies on natural processes to directly minimize the free energy of a system. A simple quantum system is slowly evolved into a classical system representing the objective function. To apply QA to IMRT-type optimization, two prostate cases were considered. A reduced number of beamlets was employed, due to the current QA hardware limitation of ~500 binary variables. The beamlet dose matrices were computed using CERR, and an objective function was defined based on typical clinical constraints, including dose-volume objectives. The objective function was discretized, and the QA method was compared to two standard optimization methods, simulated annealing (SA) and Tabu search, run on a conventional computing cluster. Results: Based on several runs, the average final objective function value achieved by the QA was 16.9 for the first patient, compared with 10.0 for Tabu and 6.7 for the SA. For the second patient, the values were 70.7 for the QA, 120.0 for Tabu, and 22.9 for the SA. The QA algorithm required 27-38% of the time required by the other two methods. Conclusion: In terms of objective function value, the QA performance was similar to Tabu but less effective than the SA. However, its speed was 3-4 times faster than the other two methods. This initial experiment suggests that QA-based heuristics may offer significant speedup over conventional clinical optimization methods as quantum annealing hardware scales to larger sizes.
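As a rough illustration of the classical baseline used in the comparison, here is a minimal simulated-annealing loop over binary beamlet intensities against a synthetic least-squares dose objective. The dose matrix, targets and annealing schedule are invented for the sketch; they are not the paper's clinical data or parameters.

```python
import math, random

random.seed(1)

# Toy IMRT-style objective: choose binary beamlet intensities x so that the
# summed dose D @ x approaches a prescription (least-squares misfit).
n_beamlets, n_voxels = 12, 6
D = [[random.random() for _ in range(n_beamlets)] for _ in range(n_voxels)]
target = [2.0] * n_voxels

def cost(x):
    return sum((sum(d * xi for d, xi in zip(row, x)) - t) ** 2
               for row, t in zip(D, target))

# Plain simulated annealing on the discretized variables -- the classical
# baseline the quantum annealer was benchmarked against.
x = [random.randint(0, 1) for _ in range(n_beamlets)]
cur = cost(x)
best, best_cost = x[:], cur
T = 2.0
for _ in range(5000):
    j = random.randrange(n_beamlets)
    x[j] ^= 1                                   # propose a single-bit flip
    new = cost(x)
    if new < cur or random.random() < math.exp((cur - new) / T):
        cur = new                               # accept the move
        if cur < best_cost:
            best, best_cost = x[:], cur
    else:
        x[j] ^= 1                               # reject: undo the flip
    T *= 0.999                                  # cool slowly

print(best_cost)
```

The quantum annealer plays the same role as this loop but lets hardware dynamics, rather than the Metropolis rule, drive the search.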

  9. SU-F-T-428: An Optimization-Based Commissioning Tool for Finite Size Pencil Beam Dose Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Y; Tian, Z; Song, T

Purpose: Finite size pencil beam (FSPB) algorithms are commonly used to pre-calculate the beamlet dose distribution for IMRT treatment planning. FSPB commissioning, which usually requires fine tuning of the FSPB kernel parameters, is crucial to the dose calculation accuracy and hence the plan quality. Yet due to the large number of beamlets, FSPB commissioning can be very tedious. This abstract reports an optimization-based FSPB commissioning tool we have developed in MATLAB to facilitate the commissioning. Methods: An FSPB dose kernel generally contains two types of parameters: the profile parameters determining the dose kernel shape, and a 2D array of scaling factors accounting for the longitudinal and off-axis corrections. The former were fitted to the penumbra of a reference broad beam's dose profile with the Levenberg-Marquardt algorithm. Since the dose distribution of a broad beam is simply a linear superposition of the dose kernels of each beamlet, calculated with the fitted profile parameters and scaled using the scaling factors, these factors can be determined by solving an optimization problem that minimizes the discrepancies between the calculated dose of broad beams and the reference dose. Results: We have commissioned an FSPB algorithm for three linac photon beams (6 MV, 15 MV and 6 MV FFF). Doses for four field sizes (6×6 cm², 10×10 cm², 15×15 cm² and 20×20 cm²) were calculated and compared with the reference dose exported from the Eclipse TPS. For depth dose curves, the differences are less than 1% of maximum dose beyond the depth of maximum dose for most cases. For lateral dose profiles, the differences are less than 2% of central dose in inner-beam regions. The differences in the output factors are within 1% for all three beams. Conclusion: We have developed an optimization-based commissioning tool for FSPB algorithms, providing sufficient accuracy of beamlet dose calculation for IMRT optimization.
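The kernel-fitting step can be sketched as follows, with an invented Gaussian FSPB kernel and a synthetic reference profile; a brute-force scan stands in for the Levenberg-Marquardt fit mentioned in the abstract.

```python
import math

# Hypothetical FSPB kernel: each beamlet on a 5 mm pitch spreads laterally
# as a Gaussian of width sigma; a broad beam is the sum of adjacent kernels.
def broad_beam(x, sigma, n_beamlets=20, pitch=5.0):
    centers = [(i - (n_beamlets - 1) / 2) * pitch for i in range(n_beamlets)]
    return sum(math.exp(-((x - c) ** 2) / (2 * sigma ** 2)) for c in centers)

# Synthetic "measured" reference profile, generated with sigma = 3.2 mm.
xs = [x * 0.5 for x in range(-120, 121)]
ref = [broad_beam(x, 3.2) for x in xs]

# Commissioning = pick the kernel sigma whose superposed broad beam best
# matches the reference penumbra in the least-squares sense.
def misfit(sigma):
    return sum((broad_beam(x, sigma) - r) ** 2 for x, r in zip(xs, ref))

best_sigma = min((s * 0.1 for s in range(10, 80)), key=misfit)
print(round(best_sigma, 1))  # → 3.2
```

The 2D scaling-factor step in the abstract is then a second, larger least-squares problem with this fitted kernel held fixed.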

  10. SU-E-T-593: Clinical Evaluation of Direct Aperture Optimization in Head/Neck and Prostate IMRT Treatment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hosini, M; GALAL, M; Emam, I

    2014-06-01

Purpose: To investigate the planning and dosimetric advantages of direct aperture optimization (DAO) over beamlet optimization in IMRT treatment of head and neck (H/N) and prostate cancers. Methods: Five head-and-neck and five prostate patients were planned using the beamlet optimizer in the Elekta Xio ver. 4.6 IMRT treatment planning system. Based on our experience in beamlet IMRT optimization, PTVs in H/N plans were prescribed 70 Gy delivered by 7 fields, while prostate PTVs were prescribed 76 Gy with 9 fields. In all plans, fields were set to be equally spaced. All cases were re-planned using the Direct Aperture optimizer in the Prowess Panther ver. 5.01 IMRT planning system with the same configurations and dose constraints. Plans were evaluated according to ICRU criteria, number of segments, number of monitor units and planning time. Results: For H/N plans, the near-maximum dose (D2) and the dose covering 95% of the PTV (D95) improved by 4% with DAO. For organs at risk (OAR), DAO reduced the volume receiving 30 Gy (V30) in the spinal cord, right parotid, and left parotid by 60%, 54%, and 53%, respectively. This considerable dosimetric quality improvement was achieved using 25% less planning time and fewer segments and monitor units, by 46% and 51%, respectively. In DAO prostate plans, both D2 and D95 for the PTV improved by only 2%. The V30 of the right femur, left femur and bladder improved by 35%, 15% and 3%, respectively; in contrast, the rectum V30 worsened by 9%. However, the number of monitor units and the number of segments decreased by 20% and 25%, respectively, and the planning time was also significantly reduced. Conclusion: DAO offers considerable advantages over beamlet optimization with regard to organ-at-risk sparing. However, no significant improvement occurred in most studied PTVs.

  11. Intense, brilliant micro γ-beams in nuclear physics and applications

    NASA Astrophysics Data System (ADS)

    Habs, D.; Gasilov, S.; Lang, C.; Thirolf, P. G.; Jentschel, M.; Diehl, R.; Schroer, C.; Barty, C. P. J.; Zamfir, N. V.

    2011-06-01

The upcoming γ facilities MEGa-Ray (Livermore) and ELI-NP (Bucharest) will have a 10⁵ times higher γ flux (F₀ = 10¹³/s) and a ~30 times smaller bandwidth (ΔEγ/Eγ = BW ~ 10⁻³) than the presently best γ beam facility. They will allow extraction of a small γ beam of about 30-100 μm radius 1 m behind the γ production point, containing the dominant γ energy bandwidth. One can collimate the γ beam down to Θ_BW = √BW/γ_e, where γ_e = E_e/m_ec² is a measure of the energy E_e of the electron beam from which the γ beam is produced by Compton back-scattering. Due to the γ energy-angle correlation, the angular collimation results at the same time in a reduction of the γ beam bandwidth without loss of "good" γ quanta; however, the primary γ flux F₀ is reduced to about F_coll ~ F₀ · 1.5 · ΔEγ/Eγ. For γ rays in the (0.1-100) MeV range, the negative real part δ of the index of refraction n = 1 - δ + iβ from coherent Rayleigh scattering (virtual photo effect) dominates over the positive δ contributions from coherent virtual Compton scattering and coherent virtual pair-creation scattering (Delbrück scattering). The very small absolute value |δ| ~ 10⁻⁶-10⁻⁹ of the index of refraction of matter for hard X-rays and γ-rays, and its negative sign (in contrast to usual optics), result in a very different γ-ray optics: focusing lenses become concave, and stacks of N optimized lenses are used. This requires very small radii of curvature of the γ lenses and thus very small γ beam radii, leading to a new technical solution in which the primary γ beam is subdivided into M γ beamlets, which do not interfere with each other but contribute their independent intensities. The γ beamlets are sent into a two-dimensional array of closely packed cylindrical parabolic refractive lenses, where N ~ 10³ lenses with very small radius of curvature are stacked behind each other, leading to contracted beam spots in one dimension. 
With a second 1D lens system rotated by 90°, we can obtain small spots for each of the beamlets. While focusing the beamlets to a much smaller spot size, we can bend them effectively with micro wedges into, e.g., parallel beamlets. These γ beamlets can be monochromatized within the rocking curve of a common Laue crystal, using an additional angle selection by a collimator to reach a strongly reduced bandwidth of 10⁻⁴-10⁻⁶. We propose the use of further lens/wedge arrays or Bragg reflection to superimpose the beamlets into a very small total γ beam spot. Many experiments gain much from the high beam resolution and the smaller focal spot. This new γ optics requires high-resolution diagnostics; to optimize the focusing, we want to use very thin target wires of a specific nuclear resonance fluorescence (NRF) isotope to monitor the focusing at the resonance energy. With such beams we can explore new nuclear physics of higher excited states with larger level densities. New phenomena, like the transition from chaotic to regular nuclear motion, weakly bound halo states, or states decaying by tunneling, can be studied. The higher level density also allows parity-violating nuclear forces to be probed more sensitively. This γ optics improves many applications, such as a more brilliant positron source, a more brilliant neutron source, higher specific activity of medical radioisotopes, and NRF micro-imaging.
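The collimation relations quoted in the abstract can be checked numerically. A small sketch, assuming an illustrative electron-beam Lorentz factor γ_e = 500 (not given in the abstract) together with the quoted F₀ = 10¹³/s and BW = 10⁻³:

```python
import math

# Numbers from the abstract: flux F0 = 1e13 /s and bandwidth BW = 1e-3.
# gamma_e = 500 is an illustrative assumption, not a quoted machine value.
F0, BW, gamma_e = 1e13, 1e-3, 500.0

theta_bw = math.sqrt(BW) / gamma_e   # collimation half-angle Θ_BW, rad
F_coll = F0 * 1.5 * BW               # flux surviving the BW collimation

print(f"{theta_bw*1e6:.0f} microrad, {F_coll:.2e} /s")  # → 63 microrad, 1.50e+10 /s
```

The ~µrad-scale collimation angle is what makes the µm-scale beamlet radii quoted in the abstract achievable a meter downstream.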

  12. Beamlets from stochastic acceleration

    NASA Astrophysics Data System (ADS)

    Perri, Silvia; Carbone, Vincenzo

    2008-09-01

    We investigate the dynamics of a realization of the stochastic Fermi acceleration mechanism. The model consists of test particles moving between two oscillating magnetic clouds and differs from the usual Fermi-Ulam model in two ways. (i) Particles can penetrate inside clouds before being reflected. (ii) Particles can radiate a fraction of their energy during the process. Since the Fermi mechanism is at work, particles are stochastically accelerated, even in the presence of the radiated energy. Furthermore, due to a kind of resonance between particles and oscillating clouds, the probability density function of particles is strongly modified, thus generating beams of accelerated particles rather than a translation of the whole distribution function to higher energy. This simple mechanism could account for the presence of beamlets in some space plasma physics situations.
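A minimal sketch of a Fermi-Ulam-type map with radiative losses, in the spirit of (though much simpler than) the model described above; the parameters u, L and eps are invented, and whether net heating wins depends on how the per-bounce Fermi kick compares with the radiated fraction:

```python
import math, random

random.seed(2)

# Simplified Fermi-Ulam stand-in for the two oscillating magnetic clouds:
# at each reflection the particle picks up the cloud's instantaneous
# velocity u*sin(phase) and, following the abstract, radiates a small
# fraction eps of its energy.
u, L, eps = 0.1, 100.0, 0.001   # kick amplitude, cloud spacing, loss fraction

def accelerate(v0, bounces=2000):
    v, phase = v0, random.uniform(0, 2 * math.pi)
    for _ in range(bounces):
        v = abs(v + u * math.sin(phase))                # Fermi kick
        v *= math.sqrt(1.0 - eps)                       # radiative loss
        phase = (phase + L / max(v, 1e-9)) % (2 * math.pi)  # transit time
    return v

final = [accelerate(1.0) for _ in range(200)]
mean_E = sum(v * v for v in final) / len(final)
print(f"{mean_E:.2f}")
```

In the full model, the resonance between particle transit time and cloud oscillation reshapes the distribution into beams rather than simply shifting it; this sketch only reproduces the stochastic kick-plus-loss competition.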

  13. Electron Beam Pattern Rotation as a Method of Tunable Bunch Train Generation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Halavanau, A.; Piot, P.

Transversely modulated electron beams can be formed in photoinjectors via a microlens array (MLA) UV laser shaping technique. Microlenses can be arranged in polygonal lattices, with the resulting transverse electron-beam modulation mimicking the lens pattern. Conventionally, square MLAs are used for UV laser beam shaping, and the generated electron beam patterns form square beamlet arrays. The MLA setup can be placed on a rotational mount, thereby rotating the electron beam distribution. In combination with a transverse-to-longitudinal emittance exchange (EEX) beamline, this allows the horizontal projection of the beamlets to be varied and the electron bunch train to be tuned. In this paper, we extend the technique to the case of different MLA lattice arrangements and explore the benefits of its rotational symmetries.

  14. Consequences of wave-particle interactions on chaotic acceleration

    NASA Technical Reports Server (NTRS)

    Schriver, David; Ashour-Abdalla, Maha

    1991-01-01

    The recent model of Ashour-Abdalla et al. (1991) has proposed that the earth's plasma sheet can be formed by chaotic acceleration in a magnetotail-like field configuration. The ion velocity distributions created by chaotic acceleration have unstable features and represent robust free energy sources for kinetic plasma waves that can modify the original distributions. In the plasma sheet boundary layer, field-aligned ion beamlets are formed which drive a host of instabilities creating a broadbanded noise spectrum and cause thermal spreading of the beamlets. In addition, there is strong heating of any cold background plasma that may be present. In the central plasma sheet, ion antiloss cone distributions are created which are unstable to very low frequency waves that saturate by filling the antiloss cone.

  15. Compact High-Current Heavy-Ion Injector

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Westenskow, G.A.; Grote, D.P.; Kwan, J.W.

    2005-10-05

To provide a compact high-brightness heavy-ion beam source for Heavy Ion Fusion (HIF), we have been experimenting with merging multiple beamlets in an injector that uses an RF plasma source. An array of converging beamlets was used to produce a beam with the envelope radius, convergence, and ellipticity matched to an electrostatic quadrupole (ESQ) channel. Experimental results were in good quantitative agreement with simulation and have demonstrated the feasibility of this concept. The size of a driver-scale injector system using this approach will be several times smaller than one designed using traditional single large-aperture beams. The success of this experiment has possibly significant economic and technical impacts on the architecture of HIF drivers.

  16. Compact High-Current Heavy-Ion Injector

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Westenskow, G A; Grote, D P; Kwan, J W

    2006-04-13

To provide a compact high-brightness heavy-ion beam source for Heavy Ion Fusion (HIF), we have been experimenting with merging multiple beamlets in an injector that uses an RF plasma source. An array of converging beamlets was used to produce a beam with the envelope radius, convergence, and ellipticity matched to an electrostatic quadrupole (ESQ) channel. Experimental results were in good quantitative agreement with simulation and have demonstrated the feasibility of this concept. The size of a driver-scale injector system using this approach will be several times smaller than one designed using traditional single large-aperture beams. The success of this experiment has possibly significant economic and technical impacts on the architecture of HIF drivers.

  17. MEMS based ion beams for fusion

    NASA Astrophysics Data System (ADS)

    Persaud, A.; Seidl, P. A.; Ji, Q.; Waldron, W. L.; Schenkel, T.; Ardanuc, S.; Vinayakumar, K. B.; Schaffer, Z. A.; Lal, A.

    2016-10-01

Micro-Electro-Mechanical Systems (MEMS) fabrication provides an exciting opportunity to shrink existing accelerator concepts to smaller sizes and to reduce cost by orders of magnitude. We revisit the concept of a Multiple Electrostatic Quadrupole Array Linear Accelerator (MEQALAC) and show how, with current technologies, the concept can be downsized from gap distances of several cm to distances in the sub-mm regime. The basic concept implements acceleration gaps using radio frequency (RF) fields and electrostatic quadrupoles (ESQ) on silicon wafers. First results from proof-of-concept experiments using printed circuit boards to realize the MEQALAC structures are presented. We show results from accelerating structures that were used in an array of nine (3×3) parallel beamlets with He ions at 15 keV. We also present results from an ESQ focusing lattice using the same beamlet layout, showing beam transport and matching. We will also discuss our progress in fabricating MEMS devices in silicon wafers for both the RF and ESQ structures and the integration of the necessary RF circuits on-chip. The concept can be scaled up to thousands of beamlets, providing high-power beams at low cost, and can be used to form and compress a plasma for the development of magnetized target fusion approaches. This work was supported by the Office of Science of the US Department of Energy through the ARPA-E ALPHA program under contract DE-AC02-05CH11231 (LBNL).

  18. Progress in NEXT Ion Optics Modeling

    NASA Technical Reports Server (NTRS)

    Emhoff, Jerold W.; Boyd, Iain D.

    2004-01-01

Results are presented from an ion optics simulation code applied to the NEXT ion thruster geometry. The error in the code's potential field solver is characterized, and methods and requirements for reducing this error are given. Results from a study of electron backstreaming using the improved field solver are given and shown to agree much more closely with experimental results than previous studies did. Results are also presented from a study of beamlet behavior in the outer radial apertures of the NEXT thruster. The low beamlet currents in this region allow over-focusing of the beam, causing direct impingement of ions on the accelerator grid aperture wall. Different possibilities for reducing this direct impingement are analyzed, with the conclusion that, of the methods studied, decreasing the screen grid aperture diameter eliminates direct impingement most effectively.

  19. Beam motions near separatrix

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    M. Ball et al.

    1999-05-04

Experimental data on particle motion near the separatrix of one-dimensional (1-D) fourth-integer islands are analyzed. When the beam bunch is initially kicked to the separatrix orbit, we observed a strong decoherence of the coherent betatron motion. Through intensive particle-tracking simulation analysis, we find that the decoherence results from the beam being split into beamlets in the betatron phase space. However, we also observe an unexpected recoherence of the coherent signal, which may result from a modulated closed orbit or the homoclinic structure near the separatrix.

  20. A difference-matrix metaheuristic for intensity map segmentation in step-and-shoot IMRT delivery.

    PubMed

    Gunawardena, Athula D A; D'Souza, Warren D; Goadrich, Laura D; Meyer, Robert R; Sorensen, Kelly J; Naqvi, Shahid A; Shi, Leyuan

    2006-05-21

At an intermediate stage of radiation treatment planning for IMRT, most commercial treatment planning systems generate intensity maps that describe the grid of beamlet intensities for each beam angle. Intensity map segmentation of the matrix of individual beamlet intensities into a set of MLC apertures and corresponding intensities is then required in order to produce an actual radiation delivery plan for clinical use. Mathematically, this is a very difficult combinatorial optimization problem, especially when mechanical limitations of the MLC lead to many constraints on aperture shape, and setup times for apertures make the number of apertures an important factor in overall treatment time. We have developed, implemented and tested on clinical cases a metaheuristic (that is, a method that provides a framework to guide the repeated application of another heuristic) that efficiently generates very high-quality (low aperture number) segmentations. Our computational results demonstrate that the number of beam apertures and monitor units in the treatment plans resulting from our approach is significantly smaller than the corresponding values for treatment plans generated by the heuristics embedded in a widely used commercial system. We also contrast the excellent results of our fast and robust metaheuristic with results from an 'exact' method, branch-and-cut, which attempts to construct optimal solutions but, within clinically acceptable time limits, generally fails to produce good solutions, especially for intensity maps with more than five intensity levels. Finally, we show that in no instance is there a clinically significant change of quality associated with our more efficient plans.
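A segmentation of the kind this abstract optimizes can be illustrated with the classic unit-weight leaf-sweep decomposition (a simple baseline, not the authors' metaheuristic): because the cumulative-increment and cumulative-decrement profiles are nondecreasing along each row, every extracted aperture row is automatically a contiguous open interval, as the MLC leaves require.

```python
import numpy as np

def sweep_segments(intensity):
    """Decompose an integer intensity map into unit-weight MLC apertures.
    Column j of row r is open in aperture u iff g_minus(j) < u <= g_plus(j),
    where g_plus/g_minus accumulate the positive/negative row increments;
    both are nondecreasing, so each open region is one contiguous interval."""
    I = np.asarray(intensity, dtype=int)
    diffs = np.diff(np.pad(I, ((0, 0), (1, 0))), axis=1)  # rise from zero included
    g_plus = np.cumsum(np.maximum(diffs, 0), axis=1)
    g_minus = np.cumsum(np.maximum(-diffs, 0), axis=1)
    n_units = int(g_plus[:, -1].max())
    return [(g_minus < u) & (u <= g_plus) for u in range(1, n_units + 1)]

intensity = np.array([[0, 2, 3, 1],
                      [1, 1, 2, 0]])
apertures = sweep_segments(intensity)
recon = sum(a.astype(int) for a in apertures)
assert (recon == intensity).all()     # apertures superpose back to the map
print(len(apertures))  # → 3
```

Real segmenters then merge unit apertures into weighted ones and trade aperture count against monitor units, which is where the metaheuristic search comes in.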

  1. Transverse-To-Longitudinal Photocathode Distribution Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Halavanau, A.; Qiang, G.; Ha, G.

In this paper, we present a tunable picosecond-scale bunch train generation technique combining a microlens array (MLA) transverse laser shaper and a transverse-to-longitudinal emittance exchange (EEX) beamline. The modulated beamlet array is formed at the photocathode with the MLA setup. The resulting patterned electron beam is accelerated to 50 MeV and transported to the entrance of the EEX setup. A quadrupole channel is used to adjust the transverse spacing of the beamlet array upstream of the EEX, thereby enabling the generation of a bunch train with tunable separation downstream of the EEX beamline. Additionally, the MLA is mounted on a rotation stage, which provides additional flexibility to produce high-frequency beam density modulation downstream of the EEX. Experimental results obtained at the Argonne Wakefield Accelerator (AWA) facility are presented and compared with numerical simulations.
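The central trick can be shown with a toy transfer matrix: an EEX beamline has vanishing diagonal blocks in the (x, x', z, δ) phase space, so a transverse beamlet separation re-emerges as a longitudinal (time) separation. The pure-swap coupling block used here is an illustrative idealization, not the AWA beamline's actual symplectic transfer matrix.

```python
import numpy as np

# Idealized exchange map: diagonal 2x2 blocks are zero, so transverse
# coordinates (x, x') map into longitudinal ones (z, delta) and vice versa.
B = np.eye(2)                      # toy coupling block (pure swap)
Z = np.zeros((2, 2))
M = np.block([[Z, B], [B, Z]])

# Two beamlets identical except for a 1 mm transverse separation.
p1 = np.array([0.0, 0.0, 0.0, 0.0])
p2 = np.array([1e-3, 0.0, 0.0, 0.0])

q1, q2 = M @ p1, M @ p2
dz = q2[2] - q1[2]                 # transverse spacing becomes a z separation
print(dz)  # → 0.001
```

This is why the upstream quadrupole channel, which only changes the transverse beamlet spacing, tunes the bunch-train separation downstream of the EEX.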

  2. A compact linear accelerator based on a scalable microelectromechanical-system RF-structure

    DOE PAGES

    Persaud, A.; Ji, Q.; Feinberg, E.; ...

    2017-06-08

Here, a new approach for a compact radio-frequency (RF) accelerator structure is presented. The new accelerator architecture is based on the Multiple Electrostatic Quadrupole Array Linear Accelerator (MEQALAC) structure that was first developed in the 1980s. The MEQALAC utilized RF resonators producing the accelerating fields and providing for higher beam currents through parallel beamlets focused using arrays of electrostatic quadrupoles (ESQs). While the early work obtained ESQs with lateral dimensions on the order of a few centimeters, using a printed circuit board (PCB) we reduce the characteristic dimension to the millimeter regime, while massively scaling up the potential number of parallel beamlets. Using scalable microelectromechanical-systems fabrication approaches, we are working on further reducing the characteristic dimension to the sub-millimeter regime. The technology is based on RF-acceleration components and ESQs implemented in the PCB or silicon wafers, where each beamlet passes through beam apertures in the wafer. The complete accelerator is then assembled by stacking these wafers. This approach has the potential for fast and inexpensive batch fabrication of the components and flexibility in system design for application-specific beam energies and currents. For prototyping the accelerator architecture, the components have been fabricated using the PCB. In this paper, we present proof-of-concept results for the principal components using the PCB: RF acceleration and ESQ focusing. Finally, ongoing developments on implementing components in silicon and scaling of the accelerator technology to high currents and beam energies are discussed.

  3. A compact linear accelerator based on a scalable microelectromechanical-system RF-structure

    NASA Astrophysics Data System (ADS)

    Persaud, A.; Ji, Q.; Feinberg, E.; Seidl, P. A.; Waldron, W. L.; Schenkel, T.; Lal, A.; Vinayakumar, K. B.; Ardanuc, S.; Hammer, D. A.

    2017-06-01

    A new approach for a compact radio-frequency (RF) accelerator structure is presented. The new accelerator architecture is based on the Multiple Electrostatic Quadrupole Array Linear Accelerator (MEQALAC) structure that was first developed in the 1980s. The MEQALAC utilized RF resonators producing the accelerating fields and providing for higher beam currents through parallel beamlets focused using arrays of electrostatic quadrupoles (ESQs). While the early work obtained ESQs with lateral dimensions on the order of a few centimeters, using a printed circuit board (PCB), we reduce the characteristic dimension to the millimeter regime, while massively scaling up the potential number of parallel beamlets. Using Microelectromechanical systems scalable fabrication approaches, we are working on further reducing the characteristic dimension to the sub-millimeter regime. The technology is based on RF-acceleration components and ESQs implemented in the PCB or silicon wafers where each beamlet passes through beam apertures in the wafer. The complete accelerator is then assembled by stacking these wafers. This approach has the potential for fast and inexpensive batch fabrication of the components and flexibility in system design for application specific beam energies and currents. For prototyping the accelerator architecture, the components have been fabricated using the PCB. In this paper, we present proof of concept results of the principal components using the PCB: RF acceleration and ESQ focusing. Ongoing developments on implementing components in silicon and scaling of the accelerator technology to high currents and beam energies are discussed.

  4. A compact linear accelerator based on a scalable microelectromechanical-system RF-structure.

    PubMed

    Persaud, A; Ji, Q; Feinberg, E; Seidl, P A; Waldron, W L; Schenkel, T; Lal, A; Vinayakumar, K B; Ardanuc, S; Hammer, D A

    2017-06-01

    A new approach for a compact radio-frequency (RF) accelerator structure is presented. The new accelerator architecture is based on the Multiple Electrostatic Quadrupole Array Linear Accelerator (MEQALAC) structure that was first developed in the 1980s. The MEQALAC utilized RF resonators producing the accelerating fields and providing for higher beam currents through parallel beamlets focused using arrays of electrostatic quadrupoles (ESQs). While the early work obtained ESQs with lateral dimensions on the order of a few centimeters, using a printed circuit board (PCB), we reduce the characteristic dimension to the millimeter regime, while massively scaling up the potential number of parallel beamlets. Using Microelectromechanical systems scalable fabrication approaches, we are working on further reducing the characteristic dimension to the sub-millimeter regime. The technology is based on RF-acceleration components and ESQs implemented in the PCB or silicon wafers where each beamlet passes through beam apertures in the wafer. The complete accelerator is then assembled by stacking these wafers. This approach has the potential for fast and inexpensive batch fabrication of the components and flexibility in system design for application specific beam energies and currents. For prototyping the accelerator architecture, the components have been fabricated using the PCB. In this paper, we present proof of concept results of the principal components using the PCB: RF acceleration and ESQ focusing. Ongoing developments on implementing components in silicon and scaling of the accelerator technology to high currents and beam energies are discussed.

  5. Electron bunch structure in energy recovery linac with high-voltage dc photoelectron gun

    NASA Astrophysics Data System (ADS)

    Saveliev, Y. M.; Jackson, F.; Jones, J. K.; McKenzie, J. W.

    2016-09-01

    The internal structure of electron bunches generated in an injector line with a dc photoelectron gun is investigated. Experiments were conducted on the ALICE (accelerators and lasers in combined experiments) energy recovery linac at Daresbury Laboratory. At a relatively low dc gun voltage of 230 kV, the bunch normally consisted of two beamlets with different electron energies, as well as transverse and longitudinal characteristics. The beamlets are formed at the head and the tail of the bunch. At a higher gun voltage of 325 kV, the beam substructure is much less pronounced and could be observed only at nonoptimal injector settings. Experiments and computer simulations demonstrated that the bunch structure develops during the initial beam acceleration in the superconducting rf booster cavity and can be alleviated either by increasing the gun voltage to the highest possible level or by controlling the beam acceleration from the gun voltage in the first accelerating structure.

  6. SU-D-19A-06: The Effect of Beam Parameters On Very High-Energy Electron Radiotherapy: A Planning Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Palma, B; Bazalova, M; Qu, B

    Purpose: We evaluated the effect of very high-energy electron (VHEE) beam parameters on the planning of a lung cancer case by means of Monte Carlo simulations. Methods: We simulated VHEE radiotherapy plans using the EGSnrc/BEAMnrc-DOSXYZnrc code. We selected a lung cancer case that was treated with 6 MV photon VMAT to be planned with VHEE. We studied the effect of beam energy (80 MeV, 100 MeV, and 120 MeV), number of equidistant beams (16 or 32), and beamlet sizes (3 mm, 5 mm, or 7 mm) on PTV coverage, sparing of organs at risk (OARs), and dose conformity. Inverse-planning optimization was performed in a research version of RayStation (RaySearch Laboratories AB) using identical objective functions and constraints for all VHEE plans. Results: Similar PTV coverage and dose conformity were achieved by all the VHEE plans. The 100 MeV and 120 MeV VHEE plans were equivalent to each other and superior to the 80 MeV plan in terms of OAR sparing. Using 16 rather than 32 equidistant beams resulted in a mean difference in average dose of 2.4% (0%–7.7%) between the two plans. The use of a 3 mm beamlet size systematically reduced the dose to all the OARs. Based on these results we selected the 100 MeV, 16-beam, 3 mm beamlet size plan to compare against VMAT. The selected VHEE plan was more conformal than VMAT and improved OAR sparing (heart and trachea received 125% and 177% lower dose, respectively), especially in the low-dose region. Conclusion: We determined the VHEE beam parameters that maximize OAR dose sparing and dose conformity relative to the actually delivered VMAT plan of a lung cancer case. The selected parameters could be used for the planning of other treatment sites with similar size, shape, and location. For larger targets, a larger beamlet size might be used without significantly increasing the dose. B Palma: None. M Bazalova: None. B Hardemark: Employee, RaySearch Americas. E Hynning: Employee, RaySearch Americas. B Qu: None. B Loo Jr.: Research support, RaySearch, Varian. P Maxim: Research support, RaySearch, Varian.

  7. Fast online Monte Carlo-based IMRT planning for the MRI linear accelerator

    NASA Astrophysics Data System (ADS)

    Bol, G. H.; Hissoiny, S.; Lagendijk, J. J. W.; Raaymakers, B. W.

    2012-03-01

    The MRI accelerator, a combination of a 6 MV linear accelerator with a 1.5 T MRI, facilitates continuous patient anatomy updates regarding translations, rotations and deformations of targets and organs at risk. Accounting for these demands high speed, online intensity-modulated radiotherapy (IMRT) re-optimization. In this paper, a fast IMRT optimization system is described which combines a GPU-based Monte Carlo dose calculation engine for online beamlet generation and a fast inverse dose optimization algorithm. Tightly conformal IMRT plans are generated for four phantom cases and two clinical cases (cervix and kidney) in the presence of the magnetic fields of 0 and 1.5 T. We show that for the presented cases the beamlet generation and optimization routines are fast enough for online IMRT planning. Furthermore, there is no influence of the magnetic field on plan quality and complexity, and equal optimization constraints at 0 and 1.5 T lead to almost identical dose distributions.

  8. Spatial light modulator array with heat minimization and image enhancement features

    DOEpatents

    Jain, Kanti [Briarcliff Manor, NY; Sweatt, William C [Albuquerque, NM; Zemel, Marc [New Rochelle, NY

    2007-01-30

    An enhanced spatial light modulator (ESLM) array, a microelectronics patterning system and a projection display system using such an ESLM for heat-minimization and resolution enhancement during imaging, and the method for fabricating such an ESLM array. The ESLM array includes, in each individual pixel element, a small pixel mirror (reflective region) and a much larger pixel surround. Each pixel surround includes diffraction-grating regions and resolution-enhancement regions. During imaging, a selected pixel mirror reflects a selected-pixel beamlet into the capture angle of a projection lens, while the diffraction grating of the pixel surround redirects heat-producing unused radiation away from the projection lens. The resolution-enhancement regions of selected pixels provide phase shifts that increase effective modulation-transfer function in imaging. All of the non-selected pixel surrounds redirect all radiation energy away from the projection lens. All elements of the ESLM are fabricated by deposition, patterning, etching and other microelectronic process technologies.

  9. Compact neutron generator

    DOEpatents

    Leung, Ka-Ngo; Lou, Tak Pui

    2005-03-22

    A compact neutron generator has at its outer circumference a toroidal-shaped plasma chamber in which a tritium (or other) plasma is generated. An RF antenna is wrapped around the plasma chamber. A plurality of tritium ion beamlets are extracted through spaced extraction apertures of a plasma electrode on the inner surface of the toroidal plasma chamber and directed inwardly toward the center of the neutron generator. The beamlets pass through spaced acceleration and focusing electrodes to a neutron generating target at the center of the neutron generator. The target is typically made of titanium tubing. Water is flowed through the tubing for cooling. The beam can be pulsed rapidly to achieve ultrashort neutron bursts. The target may be moved rapidly up and down so that the average power deposited on the surface of the target may be kept at a reasonable level. The neutron generator can produce fast neutrons from a T-T reaction which can be used for luggage and cargo interrogation applications. A luggage or cargo inspection system has a pulsed T-T neutron generator or source at the center, surrounded by associated gamma detectors and other components for identifying explosives or other contraband.

  10. Inherent smoothness of intensity patterns for intensity modulated radiation therapy generated by simultaneous projection algorithms

    NASA Astrophysics Data System (ADS)

    Xiao, Ying; Michalski, Darek; Censor, Yair; Galvin, James M.

    2004-07-01

    The efficient delivery of intensity modulated radiation therapy (IMRT) depends on finding optimized beam intensity patterns that produce dose distributions, which meet given constraints for the tumour as well as any critical organs to be spared. Many optimization algorithms that are used for beamlet-based inverse planning are susceptible to large variations of neighbouring intensities. Accurately delivering an intensity pattern with a large number of extrema can prove impossible given the mechanical limitations of standard multileaf collimator (MLC) delivery systems. In this study, we apply Cimmino's simultaneous projection algorithm to the beamlet-based inverse planning problem, modelled mathematically as a system of linear inequalities. We show that using this method allows us to arrive at a smoother intensity pattern. Including nonlinear terms in the simultaneous projection algorithm to deal with dose-volume histogram (DVH) constraints does not compromise this property from our experimental observation. The smoothness properties are compared with those from other optimization algorithms which include simulated annealing and the gradient descent method. The simultaneous property of these algorithms is ideally suited to parallel computing technologies.
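    The simultaneous projection idea can be sketched for a toy system of linear inequalities; the equal weights, relaxation parameter, and explicit non-negativity projection below are illustrative choices, not the paper's exact formulation:

```python
import numpy as np

def cimmino(A, b, x0, n_iter=500, relax=1.0):
    """Cimmino's simultaneous projection for the system A @ x <= b.

    Each iteration averages the projections of the current iterate onto
    every violated half-space, which tends to yield smooth solutions.
    """
    x = x0.astype(float).copy()
    m = A.shape[0]
    w = np.full(m, 1.0 / m)             # equal weights on the constraints
    row_norm2 = np.sum(A**2, axis=1)
    for _ in range(n_iter):
        resid = A @ x - b               # positive entries mark violations
        step = np.where(resid > 0, resid / row_norm2, 0.0)
        x = x - relax * (w * step) @ A
        x = np.maximum(x, 0.0)          # beamlet intensities stay non-negative
    return x

# Toy example: find non-negative intensities x satisfying A @ x <= b
A = np.array([[1.0, 2.0], [2.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([4.0, 4.0, 0.0, 0.0])
x = cimmino(A, b, np.array([5.0, 5.0]))
print(np.all(A @ x <= b + 1e-6))  # → True
```

Because every constraint pulls on the iterate simultaneously, the updates are trivially parallelizable, which is the property the abstract highlights.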

  11. Demonstration of Fast, Single-Shot Photocathode QE Mapping Method Using MLA Pattern Beam

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wisniewski, E. E.; Conde, M.; Doran, D. S.

    Quantum efficiency (QE) is the chief figure of merit in the characterization of photocathodes. Semiconductor photocathodes, especially when used in high rep-rate photoinjectors, are known to show QE degradation over time and must be replaced. The total QE is the basic diagnostic, which is widely used and easy to obtain. However, a QE map indicating variations of QE across the cathode surface has greater utility: it can quickly diagnose problems of QE inhomogeneity. Most QE mapping techniques require hours to complete and are thus disruptive to a user facility schedule. A fast, single-shot method has been proposed using a micro-lens array (MLA) generated QE map. In this paper we report the implementation of the method at the Argonne Wakefield Accelerator facility. The MLA is used to project an array of beamlets onto the photocathode. The resulting photoelectron beam, in the form of an array of electron beamlets, is imaged at a YAG screen. Four synchronized measurements are made and the results used to produce a QE map of the photocathode.
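    The per-beamlet QE arithmetic implied here can be sketched as follows; the 248 nm drive wavelength, the 3×3 array size, and the charge and laser-energy values are illustrative assumptions, not numbers from the experiment:

```python
import numpy as np

# QE per beamlet: electrons emitted per incident photon,
# QE = (Q / e) / (E_laser / (h * nu)).
h = 6.626e-34          # Planck constant, J s
c = 2.998e8            # speed of light, m/s
e = 1.602e-19          # elementary charge, C
wavelength = 248e-9    # drive-laser wavelength, m (assumed)
photon_energy = h * c / wavelength      # J per photon, ~5 eV in the UV

# hypothetical 3x3 beamlet array: charge (C) and laser energy (J) per beamlet
charge = np.array([[10, 12, 9], [11, 8, 10], [9, 10, 11]]) * 1e-12
laser = np.full((3, 3), 1e-6)

qe_map = (charge / e) / (laser / photon_energy)
print(np.round(qe_map.max() * 100, 3), "% peak QE")
```

A real analysis would divide a beamlet-segmented charge image by a matching map of the per-lenslet laser energy, but the element-wise ratio above is the whole computation.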

  12. Microcharge neutralization transport experiments and simulations for ion-driven inertial confinement fusion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Olson, C.L.; Hanson, D.L.; Poukey, J.W.

    Space charge neutralization for intense beams for inertial confinement fusion is usually assumed to be perfect. However, small charge clumps in the beam will not be totally charge neutralized, and the residual net minimum potential set by electron trapping (eφ ≈ ½ m_e v_i², where m_e is the electron mass and v_i is the ion velocity) may lead to a substantial microdivergence. Experiments on the SABRE accelerator and simulations with the IPROP computer code are being performed to assess this mechanism. The authors have successfully created a 5 mrad beam on the SABRE accelerator, by expanding the beam (a process consistent with Liouville's theorem) and, by passing the beam through a plate with pinholes, they have created low divergence beamlets to study this mechanism. Results clearly show: (1) at low pressures, trapping does neutralize the beamlets, but only down to eφ ≈ ½ m_e v_i²; and (2) at higher pressures (≈ 0.1-1 Torr), plasma shielding does remove the effect.
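    The trapped-electron potential floor eφ ≈ ½ m_e v_i² is easy to evaluate; the ion species and kinetic energy below are illustrative assumptions for the arithmetic, not the SABRE beam parameters:

```python
# Residual net potential from electron trapping: e*phi ≈ (1/2) m_e v_i^2,
# equivalently e*phi = (m_e / m_ion) * E_ion for a non-relativistic ion.
m_e = 9.109e-31      # electron mass, kg
e   = 1.602e-19      # elementary charge, C
amu = 1.661e-27      # atomic mass unit, kg

E_ion_eV = 10e6              # assumed ion kinetic energy, eV (10 MeV)
m_ion = 7 * amu              # assumed Li-7 ion
v_i = (2 * E_ion_eV * e / m_ion) ** 0.5     # ion speed, m/s
phi = 0.5 * m_e * v_i**2 / e                # residual potential, volts
print(f"v_i = {v_i:.2e} m/s, residual potential ≈ {phi:.0f} V")
```

The ratio form makes the scaling transparent: the un-neutralized potential is smaller than the ion kinetic energy by exactly the electron-to-ion mass ratio, here a few hundred volts out of 10 MV.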

  13. TH-C-BRD-02: Analytical Modeling and Dose Calculation Method for Asymmetric Proton Pencil Beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gelover, E; Wang, D; Hill, P

    2014-06-15

    Purpose: A dynamic collimation system (DCS), which consists of two pairs of orthogonal trimmer blades driven by linear motors, has been proposed to decrease the lateral penumbra in pencil beam scanning proton therapy. The DCS reduces lateral penumbra by intercepting the proton pencil beam near the lateral boundary of the target in the beam's eye view. The resultant trimmed pencil beams are asymmetric and laterally shifted, and therefore existing pencil beam dose calculation algorithms are not capable of trimmed beam dose calculations. This work develops a method to model and compute dose from trimmed pencil beams when using the DCS. Methods: MCNPX simulations were used to determine the dose distributions expected from various trimmer configurations using the DCS. Using these data, the lateral distribution for individual beamlets was modeled with a 2D asymmetric Gaussian function. The integral depth dose (IDD) of each configuration was also modeled by combining the IDD of an untrimmed pencil beam with a linear correction factor. The convolution of these two terms, along with the Highland approximation to account for lateral growth of the beam along the depth direction, allows a trimmed pencil beam dose distribution to be analytically generated. The algorithm was validated by computing dose for a single energy layer 5×5 cm² treatment field, defined by the trimmers, using both the proposed method and MCNPX beamlets. Results: The Gaussian modeled asymmetric lateral profiles along the principal axes match the MCNPX data very well (R² ≥ 0.95 at the depth of the Bragg peak). For the 5×5 cm² treatment plan created with both the modeled and MCNPX pencil beams, the passing rate of the 3D gamma test was 98% using a standard threshold of 3%/3 mm. Conclusion: An analytical method capable of accurately computing asymmetric pencil beam dose when using the DCS has been developed.
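    One common form of 2D asymmetric Gaussian, of the kind usable for such trimmed lateral profiles, is a piecewise Gaussian with different widths on either side of the peak along each principal axis; the widths and grid below are arbitrary illustrative values, not fitted DCS parameters:

```python
import numpy as np

def asym_gauss_1d(x, mu, sigma_lo, sigma_hi):
    """Piecewise Gaussian: different widths below/above the peak position."""
    sigma = np.where(x < mu, sigma_lo, sigma_hi)
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def trimmed_beamlet(x, y, mu=(0.0, 0.0), sx=(2.0, 1.0), sy=(2.0, 2.0), amp=1.0):
    """2D asymmetric Gaussian lateral profile; a trimmer on the +x side is
    mimicked by the narrower upper width sx[1] (units: mm, illustrative)."""
    return amp * asym_gauss_1d(x, mu[0], *sx) * asym_gauss_1d(y, mu[1], *sy)

xx, yy = np.meshgrid(np.linspace(-10, 10, 201), np.linspace(-10, 10, 201))
dose = trimmed_beamlet(xx, yy)
print(dose.max())  # → 1.0 (peak at the mean position)
```

Fitting such a function to simulated profiles amounts to estimating the peak position plus two width parameters per axis, after which the profile can be convolved with an IDD term as the abstract describes.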

  14. SU-F-J-65: Prediction of Patient Setup Errors and Errors in the Calibration Curve from Prompt Gamma Proton Range Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Albert, J; Labarbe, R; Sterpin, E

    2016-06-15

    Purpose: To understand the extent to which prompt gamma camera measurements can be used to predict the residual proton range due to setup errors and errors in the calibration curve. Methods: We generated ten variations on a default calibration curve (CC) and ten corresponding range maps (RM). Starting with the default RM, we chose a square array of N beamlets, which were then rotated by a random angle θ and shifted by a random vector s. We added a 5% distal Gaussian noise to each beamlet in order to introduce discrepancies that exist between the ranges predicted from the prompt gamma measurements and those simulated with Monte Carlo algorithms. For each RM, s and θ, along with an offset u in the CC, were optimized using a simple Euclidean distance between the default ranges and the ranges produced by the given RM. Results: The application of our method led to a maximal overrange of 2.0 mm and underrange of 0.6 mm on average. Compared to the situations where s, θ, and u were ignored, these values were larger: 2.1 mm and 4.3 mm. In order to quantify the need for setup error corrections, we also performed computations in which u was corrected for, but s and θ were not. This yielded 3.2 mm and 3.2 mm. The average computation time for 170 beamlets was 65 seconds. Conclusion: These results emphasize the necessity of correcting for setup errors and errors in the calibration curve. The simplicity and speed of our method make it a good candidate for implementation as a tool for in-room adaptive therapy. This work also demonstrates that prompt gamma range measurements can indeed be useful in the effort to reduce range errors. Given these results, and barring further refinements, this approach is a promising step towards adaptive proton radiotherapy.
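    The Euclidean-distance fit can be sketched in one dimension with a toy range model; the linear-plus-quadratic range function, the noise level, and the brute-force grid search below stand in for the paper's actual range maps and optimizer, purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def range_map(p, shift, offset):
    # toy range model: linear trend plus mild quadratic heterogeneity;
    # the quadratic term makes the setup shift and calibration offset separable
    return 150.0 + 2.0 * (p - shift) + 0.05 * (p - shift) ** 2 + offset

positions = np.linspace(-20, 20, 17)                 # beamlet positions, mm
measured = range_map(positions, shift=3.0, offset=-2.5)
measured = measured + rng.normal(0, 0.5, measured.size)  # mimic distal noise

# brute-force least-squares search over setup shift s and calibration offset u
shifts = np.linspace(-5, 5, 101)
offsets = np.linspace(-5, 5, 101)
best = min(((np.sum((range_map(positions, s, u) - measured) ** 2), s, u)
            for s in shifts for u in offsets), key=lambda t: t[0])
_, s_hat, u_hat = best
print(f"estimated shift {s_hat:.1f} mm, calibration offset {u_hat:.1f}")
```

Minimizing the squared Euclidean distance between measured and model-predicted ranges recovers both parameters to within the noise, which is the mechanism the abstract exploits in 2D with an additional rotation angle.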

  15. SU-E-T-175: Clinical Evaluations of Monte Carlo-Based Inverse Treatment Plan Optimization for Intensity Modulated Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chi, Y; Li, Y; Tian, Z

    2015-06-15

    Purpose: Pencil-beam or superposition-convolution type dose calculation algorithms are routinely used in inverse plan optimization for intensity modulated radiation therapy (IMRT). However, due to their limited accuracy in some challenging cases, e.g. lung, the resulting dose may lose its optimality after being recomputed using an accurate algorithm, e.g. Monte Carlo (MC). It is the objective of this study to evaluate the feasibility and advantages of a new method to include MC in the treatment planning process. Methods: We developed a scheme to iteratively perform MC-based beamlet dose calculations and plan optimization. In the MC stage, a GPU-based dose engine was used and the particle number sampled from a beamlet was proportional to its optimized fluence from the previous step. We tested this scheme in four lung cancer IMRT cases. For each case, the original plan dose, the plan dose re-computed by MC, and the dose optimized by our scheme were obtained. Clinically relevant dosimetric quantities in these three plans were compared. Results: Although the original plan achieved a satisfactory PTV dose coverage, after re-computing doses using the MC method, it was found that the PTV D95% was reduced by 4.60%–6.67%. After re-optimizing these cases with our scheme, the PTV coverage was improved to the same level as in the original plan, while the critical OAR coverages were maintained at clinically acceptable levels. Regarding the computation time, it took on average 144 sec per case using only one GPU card, including both the MC-based beamlet dose calculation and the treatment plan optimization. Conclusion: The achieved dosimetric gains and high computational efficiency indicate the feasibility and advantages of the proposed MC-based IMRT optimization method. Comprehensive validations in more patient cases are in progress.
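    The alternation between MC beamlet calculation and fluence optimization can be sketched with a toy dose-influence matrix: statistical noise shrinks as 1/√N with the sampled particle number, and a projected-gradient step plays the role of the plan optimizer. All matrices, targets, and counts are illustrative stand-ins, not the paper's engine:

```python
import numpy as np

rng = np.random.default_rng(1)

# A_true[i, j]: dose to voxel i per unit fluence of beamlet j (toy numbers)
A_true = np.array([[1.0, 0.2, 0.0],
                   [0.5, 1.0, 0.1],
                   [0.1, 1.0, 0.5],
                   [0.0, 0.2, 1.0]])
target = np.array([1.0, 1.5, 1.5, 1.0])

def mc_beamlet_dose(n_particles):
    # statistical error falls off as 1/sqrt(N), as in a real MC dose engine
    noise = rng.normal(0.0, 1.0, A_true.shape) / np.sqrt(n_particles)[None, :]
    return A_true + noise

fluence = np.ones(3)
for outer in range(5):
    # particle number sampled per beamlet proportional to its current fluence
    n = np.maximum(100, (1e4 * fluence / fluence.sum()).astype(int))
    A = mc_beamlet_dose(n)
    for _ in range(2000):   # projected-gradient fluence optimization
        fluence = np.maximum(0.0, fluence - 1e-2 * A.T @ (A @ fluence - target))
print(np.round(A_true @ fluence, 2))
```

Concentrating particles on the high-fluence beamlets is what keeps the per-iteration MC cost low while the noisy influence matrix still converges toward a plan whose true dose matches the target.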

  16. Novel low-kVp beamlet system for choroidal melanoma

    PubMed Central

    Esquivel, Carlos; Fuller, Clifton D; Waggener, Robert G; Wong, Adrian; Meltz, Martin; Blough, Melissa; Eng, Tony Y; Thomas, Charles R

    2006-01-01

    Background Treatment of choroidal melanoma with radiation often involves placement of customized brachytherapy eye-plaques. However, the dosimetric properties inherent in source-based radiotherapy preclude facile dose optimization to critical ocular structures. Consequently, we have constructed a novel system for utilizing small beam low-energy radiation delivery, the Beamlet Low-kVp X-ray, or "BLOKX" system. This technique relies on an isocentric rotational approach to deliver dose to target volumes within the eye, while potentially sparing normal structures. Methods Monte Carlo N-Particle (MCNP) transport code version 5.0(14) was used to simulate photon interaction with normal and tumor tissues within modeled right eye phantoms. Five modeled dome-shaped tumors, with a diameter and apical height of 8 mm and 6 mm respectively, were iteratively simulated at distinct positions with respect to the macula. A single fixed 9 × 9 mm² beamlet, and a comparison COMS protocol plaque containing eight I-125 seeds (apparent activity of 8 mCi) placed on the scleral surface of the eye adjacent to the tumor, were utilized to determine dosimetric parameters at tumor and adjacent tissues. After MCNP simulation, comparison of dose distribution at each of the 5 tumor positions for each modality (BLOKX vs. eye-plaque) was performed. Results Tumor-base doses ranged from 87.1 to 102.8 Gy for the BLOKX procedure, and from 335.3 to 338.6 Gy for the eye-plaque procedure. A reduction of dose of at least 69% to the tumor base was noted when using the BLOKX. The BLOKX technique showed a significant reduction of dose, 89.8%, to the macula compared to the episcleral plaque. A minimum 71.0% decrease in dose to the optic nerve occurred when the BLOKX was used. Conclusion The BLOKX technique allows a more favorable dose distribution in comparison to standard COMS brachytherapy, as simulated using iterative Monte Carlo mathematical modeling. Future series to determine clinical utility of such an approach are warranted. PMID:16965624

  17. SU-E-T-422: Fast Analytical Beamlet Optimization for Volumetric Intensity-Modulated Arc Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chan, Kenny S K; Lee, Louis K Y; Xing, L

    2015-06-15

    Purpose: To implement a fast optimization algorithm on a CPU/GPU heterogeneous computing platform and to obtain an optimal fluence for a given target dose distribution from pre-calculated beamlets in an analytical approach. Methods: The 2D target dose distribution was modeled as an n-dimensional vector and estimated by a linear combination of independent basis vectors. The basis set was composed of the pre-calculated beamlet dose distributions at every 6 degrees of gantry angle, and the cost function was set as the squared magnitude of the vector difference between the target and the estimated dose distribution. The optimal weighting of the basis, which corresponds to the optimal fluence, was obtained analytically by the least squares method. Those basis vectors with a positive weighting were selected for entering into the next level of optimization. In total, 7 levels of optimization were implemented in the study. Ten head-and-neck and ten prostate carcinoma cases were selected for the study and mapped to a round water phantom with a diameter of 20 cm. The Matlab computation was performed in a heterogeneous programming environment with an Intel i7 CPU and an NVIDIA GeForce 840M GPU. Results: In all selected cases, the estimated dose distribution was in good agreement with the given target dose distribution, and their correlation coefficients were found to be in the range of 0.9992 to 0.9997. Their root-mean-square error was monotonically decreasing and converging after 7 cycles of optimization. The computation took only about 10 seconds, and the optimal fluence maps at each gantry angle throughout an arc were quickly obtained. Conclusion: An analytical approach is derived for finding the optimal fluence for a given target dose distribution, and a fast optimization algorithm implemented on the CPU/GPU heterogeneous computing environment greatly reduces the optimization time.
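    The analytic least-squares step with positive-weight selection can be sketched as follows; the random basis matrix stands in for the pre-calculated beamlet dose distributions, and the 7-level loop mirrors the abstract's scheme without claiming to reproduce its exact implementation:

```python
import numpy as np

rng = np.random.default_rng(2)

# Columns of B: pre-calculated beamlet dose distributions (one per angle);
# d: the target dose distribution flattened to a vector. All values are toy.
n_vox, n_beamlets = 50, 12
B = rng.random((n_vox, n_beamlets))
w_true = np.maximum(0.0, rng.normal(1.0, 0.8, n_beamlets))  # some weights zero
d = B @ w_true

active = np.arange(n_beamlets)
for level in range(7):                       # 7 levels, as in the abstract
    w, *_ = np.linalg.lstsq(B[:, active], d, rcond=None)  # analytic solution
    active = active[w > 0]                   # keep positively weighted bases

w_full = np.zeros(n_beamlets)
w, *_ = np.linalg.lstsq(B[:, active], d, rcond=None)
w_full[active] = w
est = B @ w_full
r = np.corrcoef(est, d)[0, 1]
print(f"correlation with target: {r:.4f}")
```

Each level is a closed-form least-squares solve rather than an iterative search, which is why the full 7-level cascade remains fast even for many basis vectors.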

  18. Numerical Assessment of the Diagnostic Capabilities of the Instrumented Calorimeter for SPIDER (STRIKE)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dalla Palma, M.; Pasqualotto, R.; Rizzolo, A.

    An important feature of the ITER project is represented by additional heating via injection of neutral beams from accelerated negative ions. To study and optimise their production, the SPIDER test facility is under construction in Padova, with the aim of testing beam characteristics and verifying proper operation of the source. STRIKE (Short-Time Retractable Instrumented Kalorimeter Experiment) is a diagnostic to characterise the SPIDER negative ion beam during short operation (several seconds). During long pulse operations, STRIKE is parked off-beam in the vacuum vessel. The most important measurements are beam uniformity, beamlet divergence and stripping losses. STRIKE is directly exposed to the beam and is formed of 16 tiles, one for each beamlet group. The measurements are provided by thermal cameras, current sensors, thermocouples and electrostatic sensors. This paper presents the investigation of the influence on the response of STRIKE of the thermal characteristics of the tile material, the exposure angle, and the features of some dedicated diagnostics. The uniformity of the beam will be studied by measurements of the current flowing through each tile and by thermal cameras. Simulations show that it will be possible to verify experimentally whether the beam meets the ITER requirement on the maximum allowed beam non-uniformity (below ±10%). The simulations also include the influence of the beam halo, and the effect of off-perveance conditions has been studied. To estimate the beamlet divergence, STRIKE can be moved along the beam direction to two different distances from the accelerator. The optimal positions have been defined taking into account design constraints. The effect of stripping on the comparison between currents and heat loads has been assessed; this will allow an experimental estimate of stripping to be obtained. Electrostatic simulations have provided the suitable tile biasing voltage needed to reabsorb secondary particles into the same tile as the one from which they were emitted.
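    The uniformity check against the ±10% requirement reduces to comparing each tile's current with the mean; the 16 current values below are hypothetical, chosen only to illustrate the computation:

```python
import numpy as np

# Beam uniformity from per-tile currents (16 STRIKE tiles; values are made up)
currents = np.array([1.02, 0.98, 1.01, 0.97, 1.05, 0.99, 1.00, 1.03,
                     0.96, 1.04, 1.00, 0.98, 1.02, 0.99, 1.01, 0.97])  # mA
mean = currents.mean()
nonuniformity = (currents - mean) / mean       # fractional deviation per tile
print(f"max non-uniformity = {np.abs(nonuniformity).max() * 100:.1f}%")
print("meets the +/-10% requirement:", bool(np.abs(nonuniformity).max() < 0.10))
```

The same fractional-deviation map can be computed pixel-by-pixel from the thermal-camera images; the tile currents simply give a coarser, 16-point version of it.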

  19. Contralateral Breast Dose After Whole-Breast Irradiation: An Analysis by Treatment Technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, Terence M.; Moran, Jean M., E-mail: jmmoran@med.umich.edu; Hsu, Shu-Hui

    2012-04-01

    Purpose: To investigate the contralateral breast dose (CBD) across a continuum of breast-conservation therapy techniques. Methods and Materials: An anthropomorphic phantom was CT-simulated, and six treatment plans were generated: open tangents, tangents with an external wedge on the lateral beam, tangents with lateral and medial external wedges, a simple segment plan (three segments per tangent), a complex segmental intensity-modulated radiotherapy (IMRT) plan (five segments per tangent), and a beamlet IMRT plan (>100 segments). For all techniques, the breast on the phantom was irradiated to 5000 cGy. Contralateral breast dose was measured at a uniform depth at the center and each quadrant using thermoluminescent detectors. Results: Contralateral breast dose varied with position and was 50 ± 7.3 cGy in the inner half, 24 ± 4.1 cGy at the center, and 16 ± 2.2 cGy in the outer half for the open tangential plan. Compared with an average dose of 31 cGy across all points for the open field, the average doses were: simple segment 32 cGy (range, 99-105% of the open technique), complex segment 34 cGy (range, 103-117%), beamlet IMRT 34 cGy (range, 103-124%), lateral wedge only 46 cGy (range, 133-175%), and medial and lateral wedge 96 cGy (range, 282-370%). Conclusions: Single or dual wedge techniques resulted in the highest CBD increases compared with open tangents. To obtain the desired homogeneity in the treated breast while minimizing CBD, segmental and IMRT techniques should be encouraged over external physical compensators.

  20. HiPEP Ion Optics System Evaluation Using Gridlets

    NASA Technical Reports Server (NTRS)

    Williams, John D.; Farnell, Cody C.; Laufer, D. Mark; Martinez, Rafael A.

    2004-01-01

    Experimental measurements are presented for sub-scale ion optics systems comprised of 7 and 19 aperture pairs with geometrical features that are similar to the HiPEP ion optics system. Effects of hole diameter and grid-to-grid spacing are presented as functions of applied voltage and beamlet current. Recommendations are made for the beamlet current range where the ion optics system can be safely operated without experiencing direct impingement of high energy ions on the accelerator grid surface. Measurements are also presented of the accelerator grid voltage where beam plasma electrons backstream through the ion optics system. Results of numerical simulations obtained with the ffx code are compared to both the impingement limit and backstreaming measurements. An emphasis is placed on identifying differences between measurements and simulation predictions to highlight areas where more research is needed. Relatively large effects are observed in simulations when the discharge chamber plasma properties and ion optics geometry are varied. Parameters investigated using simulations include the applied voltages, grid spacing, hole-to-hole spacing, doubles-to-singles ratio, plasma potential, and electron temperature; and estimates are provided for the sensitivity of impingement limits on these parameters.
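    The perveance-limited beamlet current discussed here is commonly estimated with the Child-Langmuir scaling for a circular aperture; the xenon propellant, 2 kV total voltage, and unity diameter-to-gap ratio below are illustrative assumptions, not the HiPEP operating point:

```python
import math

# Child-Langmuir estimate of the per-aperture (beamlet) current limit:
# I_CL = (pi * eps0 / 9) * sqrt(2 e / m) * V_T^(3/2) * (d_s / l_e)^2,
# a standard scaling for the impingement-limited perveance of ion optics.
eps0 = 8.854e-12              # vacuum permittivity, F/m
e = 1.602e-19                 # elementary charge, C
m_xe = 131.3 * 1.661e-27      # xenon ion mass, kg
V_T = 2000.0                  # assumed total accelerating voltage, V
ds_over_le = 1.0              # assumed screen hole diameter / effective gap

I_cl = (math.pi * eps0 / 9) * math.sqrt(2 * e / m_xe) * V_T**1.5 * ds_over_le**2
print(f"Child-Langmuir beamlet current limit ≈ {I_cl * 1e3:.2f} mA")
```

Operating well below this space-charge ceiling is what keeps beamlets from defocusing onto the accelerator grid, which is the impingement limit the gridlet measurements map out as a function of voltage and beamlet current.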

  1. 3-ω damage threshold evaluation of final optics components using Beamlet Mule and off-line testing

    NASA Astrophysics Data System (ADS)

    Kozlowski, Mark R.; Maricle, Stephen M.; Mouser, Ron P.; Schwartz, Sheldon; Wegner, Paul J.; Weiland, Timothy L.

    1999-07-01

    A statistics-based model is being developed to predict the laser-damage-limited lifetime of UV optical components on the NIF laser. In order to provide data for the model, laser damage experiments were performed on the Beamlet laser system at LLNL. An early prototype NIF focus lens was exposed to twenty 351 nm pulses at an average fluence of 5 J/cm2, 3 ns. Using a high resolution optic inspection system, a total of 353 damage sites was detected within the 1160 cm2 beam aperture. Through inspections of the lens before, after and, in some cases, during the campaign, pulse-to-pulse damage growth rates were measured for damage initiating both on the surface and at bulk inclusions. Growth rates as high as 79 μm/pulse were observed for damage initiating at pre-existing scratches in the surface. For most damage sites on the optic, both surface and bulk, the damage growth rate was approximately 10 μm/pulse.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kozlowski, M.F.; Maricle, S.; Mouser, R.

    A statistics-based model is being developed to predict the laser-damage-limited lifetime of UV optical components on the NIF laser. In order to provide data for the model, laser damage experiments were performed on the Beamlet laser system at LLNL. An early prototype NIF focus lens was exposed to twenty 351 nm pulses at an average fluence of 5 J/cm2, 3 ns. Using a high resolution optic inspection system, a total of 353 damage sites was detected within the 1160 cm2 beam aperture. Through inspections of the lens before, after and, in some cases, during the campaign, pulse-to-pulse damage growth rates were measured for damage initiating both on the surface and at bulk inclusions. Growth rates as high as 79 μm/pulse (surface diameter) were observed for damage initiating at pre-existing scratches in the surface. For most damage sites on the optic, both surface and bulk, the damage growth rate was approximately 10 μm/pulse. The lens was also used in Beamlet for a subsequent 1053 nm/526 nm campaign. The 351 nm-initiated damage continued to grow during that campaign, although at a generally lower growth rate.

  3. Feasibility of a simple method of hybrid collimation for megavoltage grid therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Almendral, Pedro; Mancha, Pedro J.; Roberto, Daniel

    2013-05-15

    Purpose: Megavoltage grid therapy is currently delivered with step-and-shoot multisegment techniques or using a high attenuation block with divergent holes. However, the commercial availability of grid blocks is limited, their construction is difficult, and step-and-shoot techniques require longer treatment times and are not practical with some multileaf collimators. This work studies the feasibility of a hybrid collimation system for grid therapy that does not require multiple segments and can be easily implemented with widely available technical means. Methods: The authors have developed a system to generate a grid of beamlets by the simultaneous use of two perpendicular sets of equally spaced leaves that project stripe patterns in orthogonal directions. One of them is generated with the multileaf collimator integrated in the accelerator and the other with an in-house made collimator constructed with a low melting point alloy commonly available at radiation oncology departments. The characteristics of the grid fields for 6 and 18 MV have been studied with a shielded diode, an unshielded diode, and radiochromic film. Results: The grid obtained with the hybrid collimation is similar to some of the grids used clinically with respect to the beamlet size (about 1 cm) and the percentage of open beam (1/4 of the total field). The grid fields are less penetrating than the open fields of the same energy. Depending on the depth and the direction of the profiles (diagonal or along the principal axes), the measured valley-to-peak dose ratios range from 5% to 16% for 6 MV and from 9% to 20% for 18 MV. All the detectors yield similar results in the measurement of profiles and percent depth dose, but the shielded diode seems to overestimate the output factors. Conclusions: The combination of two stripe pattern collimators in orthogonal directions is a feasible method to obtain two-dimensional arrays of beamlets and has potential usefulness as an efficient way to deliver grid therapy. The implementation of this method is technically simpler than the construction of a conventional grid block.
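The valley-to-peak dose ratios quoted in this record can be computed from a measured profile in a few lines; the sketch below uses a toy profile (values assumed, not from the paper):

```python
import numpy as np

# Illustrative sketch: valley-to-peak dose ratio of a grid-field profile,
# the quantity reported as 5-16% (6 MV) and 9-20% (18 MV) in the abstract.
def valley_to_peak_ratio(profile):
    """Ratio of the minimum (valley) to the maximum (peak) dose in a profile."""
    profile = np.asarray(profile, dtype=float)
    return profile.min() / profile.max()

# Toy profile alternating open (peak) and blocked (valley) columns:
profile = [100, 12, 98, 10, 101, 11, 99]
vpr = valley_to_peak_ratio(profile)
```

In practice the profile would be a diode or film scan along a principal axis or diagonal, and the ratio varies with depth as the abstract describes.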

  4. Feasibility of a simple method of hybrid collimation for megavoltage grid therapy.

    PubMed

    Almendral, Pedro; Mancha, Pedro J; Roberto, Daniel

    2013-05-01

    Megavoltage grid therapy is currently delivered with step-and-shoot multisegment techniques or using a high attenuation block with divergent holes. However, the commercial availability of grid blocks is limited, their construction is difficult, and step-and-shoot techniques require longer treatment times and are not practical with some multileaf collimators. This work studies the feasibility of a hybrid collimation system for grid therapy that does not require multiple segments and can be easily implemented with widely available technical means. The authors have developed a system to generate a grid of beamlets by the simultaneous use of two perpendicular sets of equally spaced leaves that project stripe patterns in orthogonal directions. One of them is generated with the multileaf collimator integrated in the accelerator and the other with an in-house made collimator constructed with a low melting point alloy commonly available at radiation oncology departments. The characteristics of the grid fields for 6 and 18 MV have been studied with a shielded diode, an unshielded diode, and radiochromic film. The grid obtained with the hybrid collimation is similar to some of the grids used clinically with respect to the beamlet size (about 1 cm) and the percentage of open beam (1/4 of the total field). The grid fields are less penetrating than the open fields of the same energy. Depending on the depth and the direction of the profiles (diagonal or along the principal axes), the measured valley-to-peak dose ratios range from 5% to 16% for 6 MV and from 9% to 20% for 18 MV. All the detectors yield similar results in the measurement of profiles and percent depth dose, but the shielded diode seems to overestimate the output factors. The combination of two stripe pattern collimators in orthogonal directions is a feasible method to obtain two-dimensional arrays of beamlets and has potential usefulness as an efficient way to deliver grid therapy. 
The implementation of this method is technically simpler than the construction of a conventional grid block.

  5. Automated generation of IMRT treatment plans for prostate cancer patients with metal hip prostheses: Comparison of different planning strategies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Voet, Peter W. J.; Dirkx, Maarten L. P.; Breedveld, Sebastiaan

    2013-07-15

    Purpose: To compare IMRT planning strategies for prostate cancer patients with metal hip prostheses. Methods: All plans were generated fully automatically (i.e., no human trial-and-error interactions) using iCycle, the authors' in-house developed algorithm for multicriterial selection of beam angles and optimization of fluence profiles, allowing objective comparison of planning strategies. For 18 prostate cancer patients (eight with bilateral hip prostheses, ten with a right-sided unilateral prosthesis), two planning strategies were evaluated: (i) full exclusion of beams containing beamlets that would deliver dose to the target after passing a prosthesis (IMRTremove) and (ii) exclusion of those beamlets only (IMRTcut). Plans with optimized coplanar and noncoplanar beam arrangements were generated. Differences in PTV coverage and sparing of organs at risk (OARs) were quantified. The impact of beam number on plan quality was evaluated. Results: Especially for patients with bilateral hip prostheses, IMRTcut significantly improved rectum and bladder sparing compared to IMRTremove. For 9-beam coplanar plans, rectum V60Gy was reduced by 17.5% ± 15.0% (maximum 37.4%, p = 0.036) and rectum Dmean by 9.4% ± 7.8% (maximum 19.8%, p = 0.036). Further improvements in OAR sparing were achievable with noncoplanar beam setups, reducing rectum V60Gy by another 4.6% ± 4.9% (p = 0.012) for noncoplanar 9-beam IMRTcut plans. Large reductions in rectum dose delivery were also observed when increasing the number of beam directions in the plans. For bilateral implants, the rectum V60Gy was 37.3% ± 12.1% for coplanar 7-beam plans and was reduced on average by 13.5% (maximum 30.1%, p = 0.012) for 15 directions. Conclusions: iCycle was able to automatically generate high quality plans for prostate cancer patients with prostheses. Excluding only beamlets that passed through the prostheses (the IMRTcut strategy) significantly improved OAR sparing. Noncoplanar beam arrangements and, to a larger extent, increasing the number of treatment beams further improved plan quality.

  6. The mosaic structure of plasma bulk flows in the Earth's magnetotail

    NASA Technical Reports Server (NTRS)

    Ashour-Abdalla, M.; Richard, R. L.; Zelenyi, L. M.; Peroomian, V.; Bosqued, J. M.

    1995-01-01

    Moments of plasma distributions observed in the magnetotail vary on different time scales. In this paper we attempt to explain the observed variability on intermediate timescales of approximately 10-20 min that results from the simultaneous energization and spatial structuring of solar wind plasma in the distant magnetotail. These processes stimulate the formation of a system of spatially disjointed, highly accelerated filaments (beamlets) in the tail. We use the results from large-scale kinetic modeling of magnetotail formation from a plasma mantle source to calculate moments of ion distribution functions throughout the tail. Statistical restrictions related to the limited number of particles in our system naturally reduce the spatial resolution of our results, but we show that our model is valid on intermediate spatial scales Δx × Δz of approximately 1 R_E × 1000 km. For these spatial scales the resulting pattern, which resembles a mosaic, appears to be quite variable. The complexity of the pattern is related to the spatial interference between beamlets accelerated at various locations within the distant tail, which mirror in the strong near-Earth magnetic field. Global motion of the magnetotail results in the displacement of spacecraft with respect to this mosaic pattern and can produce variations in all of the moments (especially the x-component of the bulk velocity) on intermediate timescales. The results obtained enable us to view the magnetotail plasma as consisting of two different populations: a tailward-Earthward system of highly accelerated beamlets interfering with each other, and an energized quasithermal population which gradually builds as the Earth is approached. In the near-Earth tail, these populations merge into a hot quasi-isotropic ion population typical of the near-Earth plasma sheet.
The transformation of plasma sheet boundary layer (PSBL) beam energy into central plasma sheet (CPS) quasi-thermal energy occurs in the absence of collisions or noise. This paper also clarifies the relationship between the global scale where an MHD description might be appropriate and the lower intermediate scales where MHD fails and large-scale kinetic theory should be used.
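The bulk-velocity variability described above comes from taking first moments of the ion distribution in each spatial bin; a minimal sketch (synthetic two-beamlet data, all numbers illustrative) shows how two interfering beamlets can masquerade as a modest net bulk flow:

```python
import numpy as np

# Sketch (assumed setup): the first velocity moment of particles collected in
# one spatial bin (~1 R_E x 1000 km in the paper's model). Values illustrative.
def bulk_velocity_x(vx, weights=None):
    """Density-weighted mean x-velocity of the particles in a bin (first moment)."""
    return np.average(vx, weights=weights)

rng = np.random.default_rng(0)
# Two interfering beamlet populations in one bin: earthward and tailward beams.
vx = np.concatenate([rng.normal(+800.0, 50.0, 500),   # km/s, earthward beamlet
                     rng.normal(-600.0, 50.0, 500)])  # km/s, tailward beamlet
v_bulk = bulk_velocity_x(vx)  # a moderate net flow that hides both beams
```

A spacecraft sampling such a bin would report an intermediate bulk velocity even though no single particle population moves at that speed, which is the mosaic effect the abstract describes.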

  7. Parallel processing of embossing dies with ultrafast lasers

    NASA Astrophysics Data System (ADS)

    Jarczynski, Manfred; Mitra, Thomas; Brüning, Stephan; Du, Keming; Jenke, Gerald

    2018-02-01

    Functionalization of surfaces equips products and components with new features such as hydrophilic behavior, adjustable gloss level, and light management properties. Small feature sizes demand diffraction-limited spots and fluence adapted to different materials. With the availability of high power, fast-repeating ultrashort pulsed lasers and efficient optical processing heads delivering diffraction-limited spot sizes of around 10 μm, it is feasible to achieve fluences higher than adequate patterning requires. Hence, parallel processing is becoming of interest to increase throughput and allow mass production of micromachined surfaces. The first step on the roadmap of parallel processing for cylinder embossing dies was realized with an eight-spot processing head based on a ns fiber laser with passive optical beam splitting, individual spot switching by acousto-optical modulation, and advanced imaging. Patterning of cylindrical embossing dies shows a high efficiency of nearly 80%, with diffraction-limited and equally spaced spots with pitches down to 25 μm, achieved by compression using cascaded prism arrays. Due to the nanosecond laser pulses, the ablation shows the surrounding material deposition typical of a hot process. In the next step, the processing head was adapted to a picosecond laser source, and the 500 W fiber laser was replaced by an ultrashort pulsed laser with 300 W, 12 ps and a repetition frequency of up to 6 MHz. This paper presents details of the processing head design and the analysis of ablation rates and patterns on steel, copper and brass dies. Furthermore, it gives an outlook on scaling the parallel processing head from eight to 16 individually switched beamlets to increase processing throughput and optimize utilization of the available ultrashort pulsed laser energy.

  8. Fundamental radiological and geometric performance of two types of proton beam modulated discrete scanning systems.

    PubMed

    Farr, J B; Dessy, F; De Wilde, O; Bietzer, O; Schönenberg, D

    2013-07-01

    The purpose of this investigation was to compare and contrast the measured fundamental properties of two new types of modulated proton scanning systems. This provides a basis for clinical expectations based on the scanned beam quality and a benchmark for computational models. Because the relatively small beam and fast scanning made characterization challenging, a secondary purpose was to develop and apply new approaches where necessary. The following performances of the proton scanning systems were investigated: beamlet alignment, static in-air beamlet size and shape, scanned in-air penumbra, scanned fluence map accuracy, geometric alignment of the scanning system to isocenter, maximum field size, lateral and longitudinal field uniformity of a 1 L cubic uniform field, output stability over time, gantry angle invariance, monitoring system linearity, and reproducibility. A range of detectors was used: film, ionization chambers, lateral multielement and longitudinal multilayer ionization chambers, and a scintillation screen combined with a digital video camera. Characterization of the scanned fluence maps was performed with a software analysis tool. The resulting measurements and analysis indicated that the two types of delivery systems performed within specification for those aspects investigated. Significant differences were observed between the two types of scanning systems: one type exhibits a smaller spot size and associated penumbra than the other. The differential is minimal at maximum energy and increases as the energy decreases. Additionally, the large-spot system showed an increase in dose precision to a static target with layer rescanning, whereas the small-spot system did not. The measured results from the two types of modulated scanning systems were consistent with their designs under the conditions tested. The most significant difference between the types of system was their proton spot size and associated resolution, factors of magnetic optics and vacuum length. The need for and benefit of multielement detectors and high-resolution sensors were also shown. The use of a fluence map analytical software tool was particularly effective in characterizing the dynamic proton energy-layer scanning.

  9. Direct aperture optimization using an inverse form of back-projection.

    PubMed

    Zhu, Xiaofeng; Cullip, Timothy; Tracton, Gregg; Tang, Xiaoli; Lian, Jun; Dooley, John; Chang, Sha X

    2014-03-06

    Direct aperture optimization (DAO) has been used to produce high dosimetric quality intensity-modulated radiotherapy (IMRT) treatment plans with fast treatment delivery by directly modeling the multileaf collimator segment shapes and weights. To improve plan quality and reduce treatment time for our in-house treatment planning system, we implemented a new DAO approach that does not use a global objective function. An index concept is introduced as an inverse form of back-projection, as used in the CT multiplicative algebraic reconstruction technique (MART). The index, introduced for IMRT optimization in this work, is analogous to the multiplicand in MART. The index is defined as the ratio of the optimal value to the current value. It is assigned to each voxel and beamlet to optimize the fluence map. The indices for beamlets and segments are used to optimize multileaf collimator (MLC) segment shapes and segment weights, respectively. Preliminary data show that, without sacrificing dosimetric quality, the implementation of the DAO reduced average IMRT treatment time from 13 min to 8 min for the prostate, and from 15 min to 9 min for the head and neck, using our in-house treatment planning system PlanUNC. The DAO approach has also shown promise in optimizing rotational IMRT with burst mode in a head and neck test case.
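The multiplicative index idea can be sketched in a few lines; the dose matrix, the back-projection weighting, and the convergence loop below are our illustrative reading of the MART analogy, not the PlanUNC implementation:

```python
import numpy as np

# Sketch of a MART-like multiplicative index update: each beamlet weight is
# scaled by the ratio of desired to currently delivered dose (the "index")
# back-projected from the voxels it irradiates. Names/matrices are illustrative.
def index_update(weights, dose_matrix, desired, n_iter=50):
    """dose_matrix[v, b] = dose to voxel v per unit weight of beamlet b."""
    w = weights.astype(float).copy()
    for _ in range(n_iter):
        current = dose_matrix @ w
        ratio = desired / np.maximum(current, 1e-12)    # per-voxel index
        # back-project: each beamlet takes the dose-weighted mean index of its voxels
        beamlet_index = (dose_matrix.T @ ratio) / dose_matrix.sum(axis=0)
        w *= beamlet_index
    return w

# Toy problem: 3 voxels, 2 beamlets; the desired dose is exactly achievable.
A = np.array([[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]])
target = np.array([2.0, 2.0, 2.0])
w = index_update(np.ones(2), A, target)
```

For this toy case the update converges to uniform beamlet weights of 2, delivering the target dose exactly; a real system would apply the analogous index to segment weights as the abstract describes.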

  10. Finite elements numerical codes as primary tool to improve beam optics in NIO1

    NASA Astrophysics Data System (ADS)

    Baltador, C.; Cavenago, M.; Veltri, P.; Serianni, G.

    2017-08-01

    The RF negative ion source NIO1, built at Consorzio RFX in Padua (Italy), is aimed at investigating general issues of ion source physics in view of the full-size ITER injector MITICA, as well as DEMO-relevant solutions such as energy recovery and alternative neutralization systems, crucial for neutral beam injectors in future fusion experiments. NIO1 has been designed to produce nine H⁻ beamlets (in a 3×3 pattern) of 15 mA each at 60 keV, using a three-electrode system downstream of the plasma source. At the moment the source is at an early operational stage and only operation at low power and low beam energy is possible. In particular, NIO1 has a set of SmCo co-extraction electron suppression magnets (CESM) in the extraction grid (EG) that is too strong and will be replaced by a weaker set of ferrite magnets. A completely new set of magnets will also be designed and mounted on the new EG that will be installed next year, replacing the present one. In this paper, the finite element code OPERA 3D is used to investigate the effects of the three sets of magnets on beamlet optics. A comparison of numerical results with measurements will be provided where possible.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Halavanau, A.; Ha, G.

    Intercepting multi-aperture masks (e.g., a pepper pot or multislit mask) combined with a downstream transverse-density diagnostic (e.g., based on optical transition radiation or employing scintillating media) are commonly used for characterizing the phase space of charged particle beams and the associated emittances. The required data analysis relies on precise calculation of the RMS sizes and positions of the beamlets originating from the mask, which drift up to the analyzing diagnostic. The Voronoi diagram is an efficient method for splitting a plane into subsets according to the distances to given seed points. The application of the method to analyzing data from pepper pot and multislit mask based measurements is validated via numerical simulation and applied to experimental data acquired at the Argonne Wakefield Accelerator (AWA) facility. We also discuss the application of Voronoi diagrams to quantifying the distortion of transversely modulated beams.
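A nearest-seed assignment is one way to realize the Voronoi splitting described above; this sketch (geometry, intensities, and names assumed, not the authors' code) labels screen pixels by beamlet and computes a per-beamlet RMS size:

```python
import numpy as np

# Sketch (assumed setup): assign screen pixels to beamlets by nearest beamlet
# centroid -- a Voronoi partition -- then compute per-beamlet RMS sizes, as
# needed for pepper-pot / multislit emittance analysis.
def voronoi_labels(pixels_xy, seeds_xy):
    """Label each pixel with the index of its nearest seed (its Voronoi cell)."""
    d2 = ((pixels_xy[:, None, :] - seeds_xy[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1)

def beamlet_rms(pixels_xy, intensity, labels, k):
    """Intensity-weighted RMS x-size of beamlet k within its Voronoi cell."""
    sel = labels == k
    x, w = pixels_xy[sel, 0], intensity[sel]
    mean = np.average(x, weights=w)
    return np.sqrt(np.average((x - mean) ** 2, weights=w))

# Toy image: two beamlets centered at x = 0 and x = 10 on the y = 0 line.
pixels = np.array([[-1.0, 0.0], [0.0, 0.0], [1.0, 0.0],
                   [9.0, 0.0], [10.0, 0.0], [11.0, 0.0]])
seeds = np.array([[0.0, 0.0], [10.0, 0.0]])
labels = voronoi_labels(pixels, seeds)
sigma_x = beamlet_rms(pixels, np.ones(len(pixels)), labels, 0)
```

Libraries such as SciPy provide full Voronoi constructions, but for pixel labeling the nearest-seed rule above is equivalent and sufficient.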

  12. SU-E-T-368: Evaluating Dosimetric Outcome of Modulated Photon Radiotherapy (XMRT) Optimization for Head and Neck Patients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McGeachy, P; Villarreal-Barajas, JE; Khan, R

    2015-06-15

    Purpose: The dosimetric outcome of optimized treatment plans obtained by modulating the photon beamlet energy and fluence was investigated on a small cohort of four head and neck (H and N) patients. This novel optimization technique is denoted XMRT, for modulated photon radiotherapy. The dosimetric plans from XMRT for H and N treatment were compared to conventional 6 MV intensity modulated radiotherapy (IMRT) optimization plans. Methods: An arrangement of two non-coplanar and five coplanar beams was used for all four H and N patients. Both XMRT and IMRT were subject to the same optimization algorithm, with XMRT optimization allowing both 6 and 18 MV beamlets while IMRT was restricted to 6 MV only. The optimization algorithm was based on a linear programming approach with partial-volume constraints implemented via the conditional value-at-risk method. H and N constraints were based on those in the Radiation Therapy Oncology Group 1016 protocol. XMRT and IMRT solutions were assessed using metrics suggested by International Commission on Radiation Units and Measurements report 83. The Gurobi solver was used in conjunction with the CVX package to solve each optimization problem. Dose calculations and analysis were done in CERR using Monte Carlo dose calculation with VMC++. Results: Both XMRT and IMRT solutions met all clinical criteria. Trade-offs were observed between improved dose uniformity to the primary target volume (PTV1) and increased dose to some of the surrounding healthy organs for XMRT compared to IMRT. On average, IMRT improved dose to the contralateral parotid gland and spinal cord, while XMRT improved dose to the brainstem and mandible. Conclusion: Bi-energy XMRT optimization for H and N patients provides benefits in terms of improved dose uniformity to the primary target and reduced dose to some healthy structures, at the expense of increased dose to other healthy structures when compared with IMRT.

  13. SU-F-T-513: Dosimetric Validation of Spatially Fractionated Radiotherapy Using Gel Dosimetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Papanikolaou, P; Watts, L; Kirby, N

    2016-06-15

    Purpose: Spatially fractionated radiation therapy, also known as GRID therapy, is used to treat large solid tumors by irradiating the target with a single dose of 10–20 Gy through spatially distributed beamlets. We have investigated the use of a 3D gel for dosimetric characterization of GRID therapy. Methods: GRID therapy is an external beam analog of volumetric brachytherapy, whereby we produce a distribution of hot and cold dose columns inside the tumor volume. Such a distribution can be produced with a block or by using a checker-like pattern with the MLC. We have studied both types of GRID delivery. A cube-shaped acrylic phantom was filled with polymer gel and served as a 3D dosimeter. The phantom was scanned and the CT images were used to produce two plans in Pinnacle, one with the grid block and one with the MLC-defined grid. A 6 MV beam was used for each plan with a prescription of 1500 cGy at dmax. The irradiated phantom was scanned in a 3T MRI scanner. Results: 3D dose maps were derived from the MR scans of the gel dosimeter and were found to be in good agreement with the predicted dose distribution from the RTP system. Gamma analysis showed a passing rate of 93% for 5% dose and 2 mm DTA scoring criteria. Both relative and absolute dose profiles are in good agreement, except in the peripheral beamlets, where the gel measured a slightly higher dose, possibly because of changing head scatter conditions that the RTP is not fully accounting for. Our results have also been benchmarked against ionization chamber measurements. Conclusion: We have investigated the use of a polymer gel for the 3D dosimetric characterization and evaluation of GRID therapy. Our results demonstrated that the planning system can predict fairly accurately the dose distribution for GRID type therapy.

  14. Coherent Transition Radiation Generated from Transverse Electron Density Modulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Halavanau, A.; Piot, P.; Tyukhtin, A. V.

    Coherent transition radiation (CTR) of a given frequency is commonly generated with longitudinal electron bunch trains. In this paper, we present a study of CTR properties produced from simultaneous transverse and longitudinal electron density modulation. We demonstrate via numerical simulations a simple technique to generate THz-scale frequencies from mm-scale transversely separated electron beamlets formed into a ps-scale bunch train. The results and a potential experimental setup are discussed.

  15. Numerical Modeling of Ion Dynamics in a Carbon Nanotube Field-Ionized Thruster

    DTIC Science & Technology

    2011-12-01

    Figure 13: equipotential plot, Ez as a function of z and r, Jreq = 300 kA/m2, space charge off ... Equipotential plots, Ez as a function of z and r, Jreq = 300 kA/m2, space charge on; plots are taken at time intervals of 0.05 ns ... on the accelerating grids; under-perveance results in crossover, overlap of neighboring beamlets, and impingement on downstream surfaces ...

  16. Ion beam lithography system

    DOEpatents

    Leung, Ka-Ngo

    2005-08-02

    A maskless plasma-formed ion beam lithography tool provides for patterning of sub-50 nm features on large area flat or curved substrate surfaces. The system is very compact and does not require an accelerator column and electrostatic beam scanning components. The patterns are formed by switching beamlets on or off from a two electrode blanking system with the substrate being scanned mechanically in one dimension. This arrangement can provide a maskless nano-beam lithography tool for economic and high throughput processing.

  17. On-line Adaptive Radiation Treatment of Prostate Cancer

    DTIC Science & Technology

    2009-01-01

    ... For intensity modulated radiation therapy (IMRT) plans, the beamlet weight can be re-optimized on a daily basis to minimize the dose to the OAR ... Thongphiew D, Wang Z, Mathayomchan B, Chankong V, Yoo S, et al. On-line re-optimization of prostate IMRT plans for adaptive radiation therapy. Phys Med Biol ... The treatment planning method for VMAT, however, is not mature. We are developing a robust VMAT treatment planning method which incorporates ...

  18. Improvements of the magnetic field design for SPIDER and MITICA negative ion beam sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chitarin, G., E-mail: chitarin@igi.cnr.it; University of Padova, Dept. of Management and Engineering, Strad. S. Nicola 3, 36100 Vicenza; Agostinetti, P.

    2015-04-08

    The design of the magnetic field configuration in the SPIDER and MITICA negative ion beam sources has evolved considerably during the past four years. This evolution was driven by three factors: 1) the experimental results of the large RF-driven ion sources at IPP, which have provided valuable indications on the optimal magnetic configurations for reliable RF plasma source operation and for large negative ion current extraction; 2) the comprehensive beam optics and heat load simulations, which showed that the magnetic field configuration in the accelerator is crucial for keeping the heat load due to electrons on the accelerator grids within tolerable limits, without compromising the optics of the negative ion beam in the foreseen operating scenarios; 3) the progress of the detailed mechanical design of the accelerator, which stimulated the evaluation of different solutions for the correction of beamlet deflections of various origins and for beamlet aiming. On this basis, new requirements and solution concepts for the magnetic field configuration in the SPIDER and MITICA beam sources have been progressively introduced and updated until the design converged. The paper presents how these concepts have been integrated into a final design solution based on a horizontal “long-range” field (a few mT) in combination with a “local” vertical field of some tens of mT on the acceleration grids.

  19. Determination of neutral beam injection accelerator grid deformation using beam emission measurements

    NASA Astrophysics Data System (ADS)

    Nightingale, M. P. S.; Kugel, H.; Gee, S. J.; Price, M. N.

    1999-01-01

    Theoretical modeling of 1-2 MW positive hydrogen ion neutral injectors developed at Oak Ridge National Laboratory (ORNL) has suggested that the plasma grid temperature could rise by up to 180 °C at pulse lengths above 0.5 s, leading to a grid deformation on the order of 5 mm, with a consequent change in focal length (from 4 to 2 m) and beamlet focusing. One of these injectors (on loan from ORNL) was used to achieve record β values on the Small Tight Aspect Ratio Tokamak at Culham, and two more are to be used on the Mega-Ampere Spherical Tokamak (MAST) at pulse lengths of up to 5 s. Since the grid modeling has never been tested experimentally, a method for diagnosing changes in beam transport as a function of pulse length using light emitted by the beam is now under development at Culham to see if grid modifications are required for MAST. Initial experimental results, carried out using a 50 A 30 keV hydrogen beam, are presented (including comparison with thermocouple data using an EK98 graphite beam stop). These confirm that emission measurement should allow the accelerator focal length and beamlet divergence to be determined to accuracies of better than ±0.45 m and ±0.2°, respectively (compared to nominal values of 4 m and 1.2°).

  20. Inertial confinement fusion quarterly report, October--December 1992. Volume 3, No. 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dixit, S.N.

    1992-12-31

    This report contains papers on the following topics: The Beamlet Front End: prototype of a new pulse generation system; imaging biological objects with x-ray lasers; coherent XUV generation via high-order harmonic generation in rare gases; theory of high-order harmonic generation; two-dimensional computer simulations of ultra-intense, short-pulse laser-plasma interactions; neutron detectors for measuring the fusion burn history of ICF targets; the recirculator; and LASNEX evolves to exploit computer industry advances.

  1. Code OK3 - An upgraded version of OK2 with beam wobbling function

    NASA Astrophysics Data System (ADS)

    Ogoyski, A. I.; Kawata, S.; Popov, P. H.

    2010-07-01

    For computer simulations of heavy ion beam (HIB) irradiation onto a target of arbitrary shape and structure in heavy ion fusion (HIF), the code OK2 was developed and presented in Computer Physics Communications 161 (2004). Code OK3 is an upgrade of OK2 that adds an important capability: wobbling-beam illumination. The wobbling beam offers a unique route to a smooth inertial fusion target implosion, so that sufficient fusion energy is released to support a future fusion reactor.
    New version program summary
    Program title: OK3
    Catalogue identifier: ADST_v3_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADST_v3_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 221 517
    No. of bytes in distributed program, including test data, etc.: 2 471 015
    Distribution format: tar.gz
    Programming language: C++
    Computer: PC (Pentium 4, 1 GHz or more recommended)
    Operating system: Windows or UNIX
    RAM: 2048 MBytes
    Classification: 19.7
    Catalogue identifier of previous version: ADST_v2_0
    Journal reference of previous version: Comput. Phys. Comm. 161 (2004) 143
    Does the new version supersede the previous version?: Yes
    Nature of problem: In heavy ion fusion (HIF), ion cancer therapy, material processing, etc., precise beam energy deposition is essential [1]. Codes OK1 and OK2 were developed to simulate heavy ion beam energy deposition in three-dimensional, arbitrarily shaped targets [2, 3]. Wobbling-beam illumination is important for smoothing the beam energy deposition nonuniformity in HIF, so that a uniform target implosion is realized and sufficient fusion output energy is released.
    Solution method: OK3 builds on OK1 and OK2 [2, 3]. The code simulates multi-beam illumination of a target of arbitrary shape and structure, including the beam wobbling function.
    Reasons for new version: OK3 is based on OK2 [3] and uses the same algorithm with several improvements, the most important being the beam wobbling function.
    Summary of revisions: In OK3, each beam is subdivided into many bunches, and the displacement of each bunch centre from the initial beam direction is calculated. OK3 allows the beamlet number to vary from bunch to bunch, which reduces the calculation error, especially for very complicated mesh structures with large internal holes. The target temperature rises during energy deposition. Some procedures have been made faster. Energy conservation is checked at each step of the calculation and corrected if necessary.
    New procedures in OK3: Procedure BeamCenterRot( ) rotates the beam axis around the impinging direction of each beam. Procedure BeamletRot( ) rotates the beamlet axes belonging to each beam. Procedure Rotation( ) sets the coordinates of rotated beams and beamlets in the chamber and pellet systems. Procedure BeamletOut( ) calculates the energy lost by ions that do not impinge on the target. Procedure TargetT( ) sets the temperature of the target layer of energy deposition during irradiation. Procedure ECL( ) checks the energy conservation law at each step of the deposition process, and procedure ECLt( ) performs the final check at the end of deposition.
    Modified procedures in OK3: Procedure InitBeam( ) initializes the beam radius and the coefficients A1, A2, A3, A4, and A5 for Gauss-distributed beams [2]; it is extended in OK3 to set beams with radii from 1 to 20 mm. Procedure kBunch( ) is modified to allow the beamlet number to vary from bunch to bunch during deposition. Procedures ijkSp( ) and Hole( ) are modified to run faster. Procedures Espl( ) and ChechE( ) are modified to increase the calculation accuracy. Procedure SD( ) calculates the total relative root-mean-square (RMS) deviation and the total relative peak-to-valley (PTV) deviation of the energy deposition nonuniformity; it is not included in OK2 because of its limited applicability (spherical targets only), and is taken from OK1 and adapted to OK3.
    Running time: The execution time depends on the pellet mesh number and on the number of beams in the simulated illumination, as well as on the beam characteristics (beam radius on the pellet surface, beam subdivision, projectile particle energy, and so on). In almost all practical runs performed, the typical running time for one beam deposition is about 30 s on a PC with a Pentium 4, 2.4 GHz CPU.
    References:
    [1] A.I. Ogoyski, et al., Heavy ion beam irradiation non-uniformity in inertial fusion, Phys. Lett. A 315 (2003) 372-377.
    [2] A.I. Ogoyski, et al., Code OK1 - Simulation of multi-beam irradiation on a spherical target in heavy ion fusion, Comput. Phys. Comm. 157 (2004) 160-172.
    [3] A.I. Ogoyski, et al., Code OK2 - A simulation code of ion-beam illumination on an arbitrary shape and structure target, Comput. Phys. Comm. 161 (2004) 143-150.
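The RMS and PTV deviations computed by procedure SD( ) are standard nonuniformity measures. A minimal sketch using the common definitions (deviations normalized to the mean deposition; OK3's exact normalization may differ):

```python
import math

def deposition_nonuniformity(e):
    """Relative RMS and peak-to-valley deviation of an energy deposition
    map, given as a flat list of per-cell deposited energies."""
    mean = sum(e) / len(e)
    rms = math.sqrt(sum((x - mean) ** 2 for x in e) / len(e)) / mean
    ptv = (max(e) - min(e)) / mean
    return rms, ptv

# A perfectly uniform deposition gives zero for both measures.
print(deposition_nonuniformity([2.0, 2.0, 2.0]))  # (0.0, 0.0)
```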

  2. Distribution of the background gas in the MITICA accelerator

    NASA Astrophysics Data System (ADS)

    Sartori, E.; Dal Bello, S.; Serianni, G.; Sonato, P.

    2013-02-01

    MITICA is the ITER neutral beam test facility to be built in Padova for the generation of a 40 A D- ion beam with a 16×5×16 array of 1280 beamlets accelerated to 1 MV. The background gas pressure distribution and the particle flows inside the MITICA accelerator are critical for stripping losses, generation of secondary particles, and beam non-uniformities. To keep the stripping losses in the extraction and acceleration stages reasonably low, the source pressure should be 0.3 Pa or less. The gas flow in the MITICA accelerator is being studied using a 3D finite element code named Avocado. The gas-wall interaction model is based on the cosine law, and the whole vacuum system geometry is represented by a view factor matrix built from the surface discretization and the gas property definitions; the pressure distribution and mutual fluxes are then solved linearly. In this paper the results of a numerical simulation are presented, showing the steady-state pressure distribution inside the accelerator when gas enters the system at room temperature. The accelerator model is limited to a horizontal slice 400 mm high (1/4 of the accelerator height). The pressure profile at the solid walls and along the beamlet axis is obtained, allowing evaluation and discussion of the background gas distribution and its nonuniformity. The particle fluxes at the inlet and outlet boundaries (the grounded grid apertures and the lateral conductances, respectively) are also discussed.
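The linear view-factor solve described above can be illustrated with a toy flux balance: in free molecular flow, each surface facet re-emits what it receives according to cosine-law view factors, so the steady state satisfies a linear system. A minimal sketch with a hypothetical two-facet geometry (the actual Avocado code works with a full surface discretization and reconstructs pressures from the fluxes):

```python
def solve_fluxes(F, inlet, iters=500):
    """Fixed-point solve of the linear flux balance
        Gamma_i = inlet_i + sum_j F[j][i] * Gamma_j,
    where F[j][i] is the view factor from facet j to facet i
    (row sums <= 1, so the iteration contracts)."""
    n = len(inlet)
    g = inlet[:]
    for _ in range(iters):
        g = [inlet[i] + sum(F[j][i] * g[j] for j in range(n)) for i in range(n)]
    return g

# Two facets each sending half of their outgoing flux to the other;
# a unit source feeds facet 0.
F = [[0.0, 0.5], [0.5, 0.0]]
print(solve_fluxes(F, [1.0, 0.0]))
```

For this geometry the exact solution is [4/3, 2/3]: facet 0 carries its own inlet plus half of facet 1's re-emission, and so on.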

  3. Treatment planning, optimization, and beam delivery techniques for intensity modulated proton therapy

    NASA Astrophysics Data System (ADS)

    Sengbusch, Evan R.

    The physical properties of proton interactions in matter give protons a theoretical advantage over photons in radiation therapy for cancer treatment, yet they are seldom used relative to photons. The primary barriers to wider acceptance of proton therapy are the technical feasibility, size, and price of proton therapy systems. Several aspects of the proton therapy landscape are investigated, and new techniques for treatment planning, optimization, and beam delivery are presented. The results suggest a means by which proton therapy can be delivered more efficiently, more effectively, and to a much larger proportion of eligible patients. An analysis of the existing proton therapy market was performed: personal interviews with over 30 radiation oncology leaders were conducted regarding the current and future use of proton therapy, and global proton therapy market projections are presented. These results serve as motivation and guidance for the subsequent development of treatment system designs and of treatment planning, optimization, and beam delivery methods. A major factor in the size and cost of proton treatment systems is the maximum energy of the accelerator. Historically, 250 MeV has been the accepted value, but there is minimal quantitative evidence in the literature supporting this standard. A retrospective study of 100 patients is presented that quantifies the maximum proton kinetic energy requirements for cancer treatment, and the impact of those results on treatment system size, cost, and neutron production is discussed. The study is then expanded to include 100 cranial stereotactic radiosurgery (SRS) patients, and the results are discussed in the context of a proposed dedicated proton SRS treatment system. Finally, novel proton therapy optimization and delivery techniques are presented. Algorithms are developed that optimize treatment plans over beam angle, spot size, spot spacing, beamlet weight, the number of delivered beamlets, and the number of delivery angles. These methods are evaluated in treatment planning studies including left-sided whole breast irradiation, lung stereotactic body radiotherapy, nasopharyngeal carcinoma, and whole brain radiotherapy with hippocampal avoidance. Improvements in efficiency and efficacy relative to traditional proton therapy and intensity modulated photon radiation therapy are discussed.
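The beamlet-weight subproblem mentioned above, choosing nonnegative weights so the summed beamlet doses match a prescription, can be sketched with projected gradient descent. This is a toy sketch with a made-up dose matrix; the dissertation's algorithms also optimize beam angle, spot size, spacing, and the number of beamlets and delivery angles:

```python
def optimize_weights(D, target, steps=2000, lr=0.05):
    """Minimise sum_v (dose_v - target_v)^2 over nonnegative beamlet
    weights w, where dose_v = sum_b D[v][b] * w[b].  Projection onto
    w >= 0 is just clamping after each gradient step."""
    nv, nb = len(D), len(D[0])
    w = [0.0] * nb
    for _ in range(steps):
        dose = [sum(D[v][b] * w[b] for b in range(nb)) for v in range(nv)]
        for b in range(nb):
            grad = 2 * sum((dose[v] - target[v]) * D[v][b] for v in range(nv))
            w[b] = max(0.0, w[b] - lr * grad)  # gradient step + projection
    return w

# Two beamlets, two voxels; an exact fit exists at w = [1, 2].
D = [[1.0, 0.0], [0.0, 1.0]]
print(optimize_weights(D, [1.0, 2.0]))  # converges toward [1.0, 2.0]
```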

  4. Beam tracking phase tomography with laboratory sources

    NASA Astrophysics Data System (ADS)

    Vittoria, F. A.; Endrizzi, M.; Kallon, G. K. N.; Hagen, C. K.; Diemoz, P. C.; Zamir, A.; Olivo, A.

    2018-04-01

    An X-ray phase-contrast laboratory system based on the beam-tracking method is presented. Beam tracking creates micro-beamlets of radiation by placing a structured mask before the sample and analyses them with a detector of sufficient resolution. The system is used in a tomographic configuration to measure the three-dimensional distributions of the linear attenuation coefficient, of the refractive index decrement (the difference from unity of the real part of the refractive index), and of the local scattering power of specimens. The complementarity of the three signals is investigated, together with their potential use for material discrimination.

  5. Astrophysical blast wave data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Riley, Nathan; Geissel, Matthias; Lewis, Sean M

    2015-03-01

    The data described in this document consist of image files of shadowgraphs of astrophysically relevant laser driven blast waves. Supporting files include Mathematica notebooks containing design calculations, tabulated experimental data and notes, and relevant publications from the open research literature. The data was obtained on the Z-Beamlet laser from July to September 2014. Selected images and calculations will be published as part of a PhD dissertation and in associated publications in the open research literature, with Sandia credited as appropriate. The authors are not aware of any restrictions that could affect the release of the data.

  6. Negative ion beam characterisation in BATMAN by mini-STRIKE: Improved design and new measurements

    NASA Astrophysics Data System (ADS)

    Serianni, G.; Bonomo, F.; Brombin, M.; Cervaro, V.; Chitarin, G.; Cristofaro, S.; Delogu, R.; De Muri, M.; Fasolo, D.; Fonnesu, N.; Franchin, L.; Franzen, P.; Ghiraldelli, R.; Molon, F.; Muraro, A.; Pasqualotto, R.; Ruf, B.; Schiesko, L.; Tollin, M.; Veltri, P.

    2015-04-01

    The ITER project requires additional heating provided by two injectors of neutral beams resulting from the neutralisation of accelerated negative ions. To study and optimise negative ion production, the SPIDER test facility (particle energy 100 keV; beam current 50 A) is under construction in Padova, with the aim of testing beam characteristics and verifying proper source operation. The SPIDER beam will be characterised by the instrumented calorimeter STRIKE, whose main components are one-directional carbon fibre carbon composite tiles. Some prototype tiles were employed in 2012 as a small-scale version (mini-STRIKE) of the entire system to investigate the features of the beam from BATMAN at IPP-Garching. As the BATMAN beamlets are superposed at the measurement position, about 1 m from the grounded grid, an actively cooled copper mask is located in front of the tiles; holes in the mask create an artificial beamlet structure. Recently mini-STRIKE has been updated, taking into account the results obtained in the first campaign. In particular, the spatial resolution of the system has been improved by increasing the number of holes in the copper mask. Moreover, a custom measurement system for the thermocouple signals has been realized and employed in BATMAN in view of its use in SPIDER. The present contribution describes the new design of the system as well as the thermocouple measurement system and its field test. A new series of measurements has been carried out in BATMAN, and the beam characterisation under different experimental conditions is presented.

  7. Negative ion beam characterisation in BATMAN by mini-STRIKE: Improved design and new measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Serianni, G., E-mail: gianluigi.serianni@igi.cnr.it; Brombin, M.; Cervaro, V.

    The ITER project requires additional heating provided by two injectors of neutral beams resulting from the neutralisation of accelerated negative ions. To study and optimise negative ion production, the SPIDER test facility (particle energy 100 keV; beam current 50 A) is under construction in Padova, with the aim of testing beam characteristics and verifying proper source operation. The SPIDER beam will be characterised by the instrumented calorimeter STRIKE, whose main components are one-directional carbon fibre carbon composite tiles. Some prototype tiles were employed in 2012 as a small-scale version (mini-STRIKE) of the entire system to investigate the features of the beam from BATMAN at IPP-Garching. As the BATMAN beamlets are superposed at the measurement position, about 1 m from the grounded grid, an actively cooled copper mask is located in front of the tiles; holes in the mask create an artificial beamlet structure. Recently mini-STRIKE has been updated, taking into account the results obtained in the first campaign. In particular, the spatial resolution of the system has been improved by increasing the number of holes in the copper mask. Moreover, a custom measurement system for the thermocouple signals has been realized and employed in BATMAN in view of its use in SPIDER. The present contribution describes the new design of the system as well as the thermocouple measurement system and its field test. A new series of measurements has been carried out in BATMAN, and the beam characterisation under different experimental conditions is presented.

  8. A 9700-hour durability test of a five centimeter diameter ion thruster

    NASA Technical Reports Server (NTRS)

    Nakanishi, S.; Finke, R. C.

    1973-01-01

    A modified Hughes SIT-5 thruster was life-tested at the Lewis Research Center. The final 2700 hours of the test are described with a charted history of thruster operating parameters and off-normal events. Performance and operating characteristics were nearly constant throughout the test except for neutralizer heater power requirements and accelerator drain current. A post-shutdown inspection revealed sputter erosion of ion chamber components and flaking of sputtered metal. Several flakes caused beamlet divergence and anomalous grid erosion, which led to termination of the test. All sputter erosion sources were identified.

  9. Development of a versatile multiaperture negative ion source

    NASA Astrophysics Data System (ADS)

    Cavenago, M.; Kulevoy, T.; Petrenko, S.; Serianni, G.; Antoni, V.; Bigi, M.; Fellin, F.; Recchia, M.; Veltri, P.

    2012-02-01

    A 60 kV ion source (9 beamlets of 15 mA each of H-) and plasma generators are being developed at Consorzio RFX and INFN-LNL, for their versatility in experimental campaigns and for training. Unlike most experimental sources, the design aimed at continuous operation. Magnetic configuration can achieve a minimum |B| trap, smoothly merged with the extraction filter. Modular design allows for quick substitution and upgrading of parts such as the extraction and postacceleration grids or the electrodes in contact with plasma. Experiments with a radio frequency plasma generator and Faraday cage inside the plasma are also described.

  10. Development of a versatile multiaperture negative ion source.

    PubMed

    Cavenago, M; Kulevoy, T; Petrenko, S; Serianni, G; Antoni, V; Bigi, M; Fellin, F; Recchia, M; Veltri, P

    2012-02-01

    A 60 kV ion source (9 beamlets of 15 mA each of H(-)) and plasma generators are being developed at Consorzio RFX and INFN-LNL, for their versatility in experimental campaigns and for training. Unlike most experimental sources, the design aimed at continuous operation. Magnetic configuration can achieve a minimum ∣B∣ trap, smoothly merged with the extraction filter. Modular design allows for quick substitution and upgrading of parts such as the extraction and postacceleration grids or the electrodes in contact with plasma. Experiments with a radio frequency plasma generator and Faraday cage inside the plasma are also described.

  11. Experimental measurement of the 4-d transverse phase space map of a heavy ion beam

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hopkins, H S

    1997-12-01

    The development and use of a new diagnostic instrument for characterizing intense, heavy ion beams is reported. The instrument, the ''Gated Beam Imager'' (GBI), was designed for use on the Lawrence Livermore National Laboratory Heavy Ion Fusion Project's ''Small Recirculator'', an integrated, scaled physics experiment and engineering development project for studying the transport and control of intense heavy ion beams as inertial fusion drivers in the production of electric power. The GBI allows rapid measurement and calculation of a heavy ion beam's characteristics, including all first and second moments of the transverse phase space distribution, the transverse emittance, envelope parameters, and beam centroid. With appropriate gating, the GBI produces a time history of the beam, resulting in a 4-D phase-space and time ''map'' of the beam. A unique capability of the GBI over existing diagnostic instruments is its ability to measure the ''cross'' moments between the two transverse orthogonal directions. Non-zero cross moments in the alternating-gradient lattice of the Small Recirculator indicate focusing-element rotational misalignments that contribute to beam emittance growth. This emittance growth, while having the same effect on the ability to focus the beam as growth caused by nonlinear effects, is in principle removable by an appropriate number of focusing elements. The instrument uses the pepperpot method: a plate with many pinholes is introduced into the beam, and the images of the resulting beamlets are observed on a detector after an appropriate drift distance. To provide adequate optical signal and repeatability, the detector is a microchannel plate (MCP) with a phosphor readout screen. The heavy ions in the pepperpot beamlets are stopped in the MCP's thin front metal anode, and the resulting secondary electron signal is amplified and proximity-focused onto the phosphor while preserving the spatial and intensity characteristics of the heavy ion beamlets. Used in this manner, the MCP is a sensitive, accurate, and long-lasting detector, resistant to the signal degradation experienced by previous methods of intense heavy ion beam detection and imaging. The performance of the GBI was benchmarked against existing mechanical emittance diagnostics and against sophisticated beam transport numerical simulation codes to demonstrate its usefulness as a diagnostic tool. A method of beam correction to remove the effects of quadrupole focusing-element rotational misalignments is proposed, using data obtainable from the GBI. An optimizing code was written to determine the parameters of the correction-system elements based on GBI input; its results for the Small Recirculator beam are reported.
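The pepperpot reduction described above, turning beamlet images into phase-space moments, can be sketched in one transverse plane (the GBI additionally measures the cross moments between the two transverse planes). Hole positions, spot centroids, and the drift length below are illustrative:

```python
import math

def pepperpot_moments(holes, spots, weights, drift):
    """Second moments and rms emittance from 1-D pepperpot data: a
    beamlet leaving hole position x lands at `spot` after `drift`,
    giving divergence xp = (spot - hole) / drift; moments are
    intensity-weighted over beamlets."""
    tot = sum(weights)
    xps = [(s - h) / drift for h, s in zip(holes, spots)]
    mx = sum(w * x for w, x in zip(weights, holes)) / tot
    mxp = sum(w * xp for w, xp in zip(weights, xps)) / tot
    xx = sum(w * (x - mx) ** 2 for w, x in zip(weights, holes)) / tot
    xpxp = sum(w * (xp - mxp) ** 2 for w, xp in zip(weights, xps)) / tot
    xxp = sum(w * (x - mx) * (xp - mxp)
              for w, x, xp in zip(weights, holes, xps)) / tot
    emit = math.sqrt(max(xx * xpxp - xxp ** 2, 0.0))  # rms emittance
    return xx, xpxp, xxp, emit

# Spots displaced proportionally to hole position: a purely diverging
# beam, whose rms emittance is therefore (numerically) zero.
print(pepperpot_moments([-1.0, 0.0, 1.0], [-1.2, 0.0, 1.2],
                        [1.0, 1.0, 1.0], 2.0))
```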

  12. Technical Note: A treatment plan comparison between dynamic collimation and a fixed aperture during spot scanning proton therapy for brain treatment

    PubMed Central

    Smith, Blake; Gelover, Edgar; Moignier, Alexandra; Wang, Dongxu; Flynn, Ryan T.; Lin, Liyong; Kirk, Maura; Solberg, Tim; Hyer, Daniel E.

    2016-01-01

    Purpose: To quantitatively assess the advantages of an energy-layer-specific dynamic collimation system (DCS) versus a per-field fixed aperture for spot scanning proton therapy (SSPT). Methods: Five brain cancer patients previously planned and treated with SSPT were replanned using an in-house treatment planning system capable of modeling collimated and uncollimated proton beamlets. The uncollimated plans, which served as the baseline for comparison, reproduced the target coverage and organ-at-risk sparing of the clinically delivered plans. The collimator opening for the fixed-aperture plans was determined from the combined cross sections of the target in the beam’s eye view over all energy layers, with an additional margin equal to the maximum beamlet displacement for the respective energy layer. The DCS-based plans were created by selecting appropriate collimator positions for each row of beam spots during a raster-style scanning pattern, optimized to maximize the dose contribution to the target while limiting the dose delivered to adjacent normal tissue. Results: The reduction of mean dose to normal tissue adjacent to the target, defined as a 10 mm ring surrounding the target, averaged 13.65% (range: 11.8%–16.9%) for the DCS plans and 5.18% (2.9%–7.1%) for the fixed-aperture plans. The conformity index, defined as the ratio of the volume of the 50% isodose line to the target volume, improved on average by 21.35% (19.4%–22.6%) and 8.38% (4.7%–12.0%) for the DCS and fixed-aperture plans, respectively. Conclusions: The ability of the DCS to provide collimation at each energy layer yielded better conformity than the fixed-aperture plans. PMID:27487886
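The conformity index used in the paper (volume enclosed by the 50% isodose line divided by the target volume) reduces to a voxel count on a dose grid. A minimal sketch, assuming equal voxel volumes so counts stand in for volumes:

```python
def conformity_index(dose, target_mask, prescription):
    """CI = (volume receiving >= 50% of prescription) / (target volume),
    with voxel counts standing in for volumes on a uniform grid."""
    v50 = sum(1 for d in dose if d >= 0.5 * prescription)
    vt = sum(target_mask)
    return v50 / vt

# Five voxels, two of them in the target; three exceed the 50% isodose,
# so CI = 3 / 2 (values above 1 indicate dose spilling outside the target).
print(conformity_index([110.0, 100.0, 60.0, 40.0, 10.0],
                       [1, 1, 0, 0, 0], 100.0))  # 1.5
```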

  13. Chamber transport for heavy ion fusion

    NASA Astrophysics Data System (ADS)

    Olson, Craig L.

    2014-01-01

    A brief review is given of research on chamber transport for HIF (heavy ion fusion) dating from the first HIF Workshop in 1976 to the present. Chamber transport modes are categorized into ballistic transport modes and channel-like modes. Four major HIF reactor studies are summarized (HIBALL-II, HYLIFE-II, Prometheus-H, OSIRIS), with emphasis on the chamber transport environment. In general, many beams are used to provide the required symmetry and to permit focusing to the required small spots. Target parameters are then discussed, with a summary of the individual heavy ion beam parameters required for HIF. The beam parameters are then classified as to their line charge density and perveance, with special emphasis on the perveance limits for radial space charge spreading, for the space charge limiting current, and for the magnetic (Alfven) limiting current. The major experiments on ballistic transport (SFFE, Sabre beamlets, GAMBLE II, NTX, NDCX) are summarized, with specific reference to the axial electron trapping limit for charge neutralization. The major experiments on channel-like transport (GAMBLE II channel, GAMBLE II self-pinch, LBNL channels, GSI channels) are discussed. The status of current research on HIF chamber transport is summarized, and the value of future NDCX-II transport experiments for the future of HIF is noted.

  14. Ion accelerator systems for high power 30 cm thruster operation

    NASA Technical Reports Server (NTRS)

    Aston, G.

    1982-01-01

    Two- and three-grid accelerator systems for high power ion thruster operation were investigated. Two-grid translation tests show that the 30 cm thruster SHAG grid set spacing overcompensates for the thruster's radial plasma density variation; by incorporating only enough grid compensation to maintain grid hole axial alignment, beam current gains as large as 50% can be realized. Three-grid translation tests performed with a simulated 30 cm thruster discharge chamber show that substantial beamlet steering can be reliably effected by decelerator grid translation alone, at net-to-total voltage ratios as low as 0.05.

  15. A 9700-hour durability test of a five centimeter diameter ion thruster

    NASA Technical Reports Server (NTRS)

    Nakanishi, S.; Finke, R. C.

    1973-01-01

    A modified Hughes SIT-5 thruster has been life-tested at the Lewis Research Center. The final 2700 hours of the test are described with a charted history of thruster operating parameters and off-normal events. Performance and operating characteristics were nearly constant throughout the test except for neutralizer heater power requirements and accelerator drain current. A post-shutdown inspection revealed sputter erosion of ion chamber components and flaking of sputtered metal. Several flakes caused beamlet divergence and anomalous grid erosion, which led to termination of the test. All sputter erosion sources have been identified, and promising sputter-resistant components are currently being evaluated.

  16. Sensitivity of 30-cm mercury bombardment ion thruster characteristics to accelerator grid design

    NASA Technical Reports Server (NTRS)

    Rawlin, V. K.

    1978-01-01

    The design of ion optics for bombardment thrusters strongly influences overall performance and lifetime. The operation of a 30 cm thruster with accelerator grid open area fractions ranging from 43 to 24 percent was evaluated and compared with experimental and theoretical results. Ion optics properties measured included the beam current extraction capability; the minimum accelerator grid voltage needed to prevent backstreaming; the ion beamlet diameter as a function of radial position on the grid and of accelerator grid hole diameter; and the location of the high-energy, high-angle ion beam edge. Discharge chamber properties evaluated were propellant utilization efficiency, minimum discharge power per beam ampere, and minimum discharge voltage.

  17. Use of the focusing multi-slit ion optical system at RUssian Diagnostic Injector (RUDI)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Listopad, A.; Davydenko, V.; Ivanov, A.

    2012-02-15

    The upgrade of the diagnostic neutral beam injector RUDI in 2010 was performed to increase the beam density at the focal plane in accordance with the requirements of charge-exchange recombination spectroscopy diagnostics. A new focusing ion-optical system (IOS) with slit beamlets and an enlarged aperture was optimized for a 50% higher nominal beam current and reduced angular divergence with respect to the previous multi-aperture IOS version. The upgraded injector provides a beam current up to 3 A; the measured beam divergence in the direction along the slits is 0.35 deg. Additionally, the plasma generator was modified to extend the beam pulse to 8 s.

  18. Bolus-dependent dosimetric effect of positioning errors for tangential scalp radiotherapy with helical tomotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lobb, Eric, E-mail: eclobb2@gmail.com

    2014-04-01

    The dosimetric effect of errors in patient position is studied on-phantom as a function of simulated bolus thickness, to assess the need for bolus in scalp radiotherapy with tomotherapy. A treatment plan is generated on a cylindrical phantom, mimicking a scalp radiotherapy technique that uses primarily tangential beamlets. A planning target volume with embedded scalp-like clinical target volumes (CTVs) is planned to a uniform dose of 200 cGy. Translational errors in phantom position are introduced in 1-mm increments and the dose is recomputed from the original sinogram. For each error the maximum dose, minimum dose, clinical target dose homogeneity index (HI), and dose-volume histogram (DVH) are presented for simulated bolus thicknesses from 0 to 10 mm. Baseline HI values for all bolus thicknesses were in the 5.5 to 7.0 range, increasing to a maximum of 18.0 to 30.5 for the largest positioning errors when 0 to 2 mm of bolus is used. Utilizing 5 mm of bolus resulted in a maximum HI value of 9.5 for the largest positioning errors. Using 0 to 2 mm of bolus resulted in minimum and maximum dose values of 85% to 94% and 118% to 125% of the prescription dose, respectively; with 5 mm of bolus these values were 98.5% and 109.5%. DVHs showed minimal changes in CTV dose coverage when using 5 mm of bolus, even for the largest positioning errors. CTV dose homogeneity becomes increasingly sensitive to errors in patient position as bolus thickness decreases when treating the scalp with primarily tangential beamlets. Performing a radial expansion of the scalp CTV into 5 mm of bolus material minimizes dosimetric sensitivity to positioning errors as large as 5 mm and is therefore recommended.

  19. UV-laser-based longitudinal illuminated diffuser (LID) incorporating diffractive and Lambertian reflectance for the disinfection of beverages

    NASA Astrophysics Data System (ADS)

    Lizotte, Todd

    2010-08-01

    A novel laser beam shaping system was designed to demonstrate the potential of using high power UV laser sources for large scale disinfection of liquids used in the production of food products, such as juices, beer, milk, and other beverages. The design incorporates a patented assembly of optical components, including a diffractive beam splitting/shaping element and a faceted pyramidal or conically shaped Lambertian diffuser made from compression-molded PTFE compounds. When properly sintered to an appropriate density, for example between 1.10 and 1.40 grams per cubic centimeter, the compressed PTFE compounds show ~99% reflectance at wavelengths from 300 nm to 1500 nm and ~98.5% reflectance from 250 nm to 2000 nm [1]. The unique diffuser configuration also benefits from the fact that PTFE compounds do not degrade under ultraviolet radiation, as do barium sulfate materials and silver or aluminized mirror coatings [2]. These components are contained within a hermetically sealed quartz tube. Once the device is assembled, a laser beam is directed through one end of the tube; this entrance window takes the form of a computer-generated diffractive splitter or other diffractive shaping element that splits the laser beam into a series of spot beamlets, circular rings, or other geometric shapes. As the split beamlets or rings cascade downward, they illuminate various points along the tapered PTFE cone or faceted pyramidal form; striking the surface, each diffuses in a Lambertian reflectance pattern, creating a pseudo-uniform circumferential illuminator along the length of the enclosing quartz tube. The compact tubular structure, termed the Longitudinal Illuminated Diffuser (LID), provides a unique UV disinfection source that can be placed within a centrifugal reactor or a pipe-based reactor chamber. This paper reviews the overall design principle, key component design parameters, and preliminary analytic and bench operational test results.

  20. Incorporating uncertainty and motion in Intensity Modulated Radiation Therapy treatment planning

    NASA Astrophysics Data System (ADS)

    Martin, Benjamin Charles

    In radiation therapy, one seeks to destroy a tumor while minimizing the damage to surrounding healthy tissue. Intensity Modulated Radiation Therapy (IMRT) uses overlapping beams of x-rays that add up to a high dose within the target and a lower dose in the surrounding healthy tissue. IMRT relies on optimization techniques to create high-quality treatments. Unfortunately, the achievable conformality is limited by the need to ensure coverage even if there is organ movement or deformation. Currently, margins are added around the tumor to ensure coverage based on an assumed motion range. This approach does not ensure high-quality treatments. In the standard IMRT optimization problem, an objective function measures the deviation of the dose from the clinical goals, and the optimization finds the beamlet intensities that minimize the objective function. When modeling uncertainty, the dose delivered by a given set of beamlet intensities is a random variable, and thus the objective function is also a random variable. In our stochastic formulation we minimize the expected value of this objective function. We developed a problem formulation that is both flexible and fast enough for use on real clinical cases. While working on accelerating the stochastic optimization, we developed a technique of voxel sampling. Voxel sampling is a randomized-algorithm approach to steepest descent that estimates the gradient by calculating the dose to only a fraction of the voxels within the patient. When combined with an automatic sampling-rate adaptation technique, voxel sampling produced an order-of-magnitude speedup in IMRT optimization. We also developed extensions of our results to Intensity Modulated Proton Therapy (IMPT). Due to the physics of proton beams, the stochastic formulation yields visibly different and better plans than standard optimization. The results of our research have been incorporated into OPT4D, an IMRT and IMPT optimization software package that we developed.
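
    The voxel-sampling technique described above can be sketched as a stochastic steepest-descent loop in which each gradient estimate uses only a random subset of voxels. The quadratic objective, influence matrix, sampling rate, and step size below are all hypothetical, chosen only to illustrate the idea:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical influence matrix: dose[voxel] = D @ intensities
n_voxels, n_beamlets = 1000, 50
D = rng.random((n_voxels, n_beamlets))
prescribed = np.ones(n_voxels)  # toy prescription: unit dose in every voxel

def sampled_gradient(x, sample_frac=0.1):
    """Estimate the gradient of f(x) = mean((D x - prescribed)**2)
    from a random subset of voxels (the voxel-sampling idea)."""
    idx = rng.choice(n_voxels, size=int(sample_frac * n_voxels), replace=False)
    residual = D[idx] @ x - prescribed[idx]
    return 2.0 * D[idx].T @ residual / idx.size

x = np.zeros(n_beamlets)
for _ in range(500):
    # Project onto x >= 0: beamlet intensities cannot be negative.
    x = np.maximum(x - 1e-3 * sampled_gradient(x), 0.0)

full_objective = np.mean((D @ x - prescribed) ** 2)  # evaluated on all voxels
```

In this toy run, each step touches only 10% of the voxels, yet the full-volume objective still decreases; the automatic sampling-rate adaptation mentioned in the record would adjust `sample_frac` on the fly.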

  1. Laser-induced damage and fracture in fused silica vacuum windows

    NASA Astrophysics Data System (ADS)

    Campbell, John H.; Hurst, Patricia A.; Heggins, Dwight D.; Steele, William A.; Bumpas, Stanley E.

    1997-05-01

    Laser-induced damage that initiates catastrophic fracture has been observed in large fused silica lenses that also serve as vacuum barriers in high-fluence positions on the Nova and Beamlet lasers. In nearly all cases, damage occurs on the vacuum side of the lens. The damage can lead to catastrophic crack growth if the flaw size exceeds the critical flaw size for SiO2. If the elastic stored energy in the lens is high enough, the lens will fracture into many pieces, resulting in an implosion. The consequences of such an implosion can be severe, particularly for large vacuum systems. Three parameters control the degree of fracture in the vacuum barrier window: (1) the elastic stored energy, (2) the ratio of the window thickness to flaw depth, and (3) secondary crack propagation. Fracture experiments have been carried out on 15-cm-diameter fused silica windows that contain surface flaws caused by laser damage. The results of these experiments, combined with data from window failures on Beamlet and Nova, have been used to develop design criteria for a 'fail-safe' lens. Specifically, the window must be made thick enough that the peak tensile stress is less than 500 psi and the corresponding ratio of the thickness to critical flaw size is less than 6. Under these conditions a properly mounted window, upon failure, will break into only two pieces and will not implode. One caveat to these design criteria is that the air must leak through the window before secondary crack growth occurs. Finite-element stress calculations of a window before and immediately following fracture into two pieces show that the elastic stored energy is redistributed if the fragments 'lock' in place and thereby bridge the opening. In such cases, the peak stresses at the flaw site can increase, leading to further crack growth.
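
    The two design criteria quoted above reduce to a simple conditional check; the following sketch mirrors the stated thresholds (the helper function and example inputs are hypothetical, not from the paper):

```python
def window_is_fail_safe(peak_tensile_psi, thickness, critical_flaw_size):
    """Check the two quoted criteria: peak tensile stress under 500 psi and
    a thickness-to-critical-flaw-size ratio under 6 (hypothetical helper)."""
    return peak_tensile_psi < 500.0 and thickness / critical_flaw_size < 6.0

# Hypothetical windows (thickness and flaw size in the same length units):
safe_window = window_is_fail_safe(peak_tensile_psi=350.0, thickness=4.0,
                                  critical_flaw_size=1.0)
risky_window = window_is_fail_safe(peak_tensile_psi=800.0, thickness=4.0,
                                   critical_flaw_size=1.0)
```

Note the caveat from the record still applies: even a window passing both checks relies on air leaking through before secondary crack growth occurs.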

  2. Monte Carlo simulations of patient dose perturbations in rotational-type radiotherapy due to a transverse magnetic field: A tomotherapy investigation

    PubMed Central

    Yang, Y. M.; Geurts, M.; Smilowitz, J. B.; Sterpin, E.; Bednarz, B. P.

    2015-01-01

    Purpose: Several groups are exploring the integration of magnetic resonance (MR) image guidance with radiotherapy to reduce tumor position uncertainty during photon radiotherapy. The therapeutic gain from reducing tumor position uncertainty using intrafraction MR imaging during radiotherapy could be partially offset if the negative effects of magnetic field-induced dose perturbations are not appreciated or accounted for. The authors hypothesize that a more rotationally symmetric modality such as helical tomotherapy will permit a systematic mediation of these dose perturbations. This investigation offers a unique look at the dose perturbations due to a homogeneous transverse magnetic field during the delivery of Tomotherapy® Treatment System plans under varying degrees of rotational beamlet symmetry. Methods: The authors accurately reproduced treatment plan beamlet and patient configurations using the Monte Carlo code geant4. This code has a thoroughly benchmarked electromagnetic particle transport physics package well-suited for the radiotherapy energy regime. The three approved clinical treatment plans for this study were for a prostate, head and neck, and lung treatment. The dose heterogeneity index metric was used to quantify the effect of the dose perturbations to the target volumes. Results: The authors demonstrate the ability to reproduce the clinical dose–volume histograms (DVH) to within 4% dose agreement at each DVH point for the target volumes and most planning structures, and therefore, are able to confidently examine the effects of transverse magnetic fields on the plans. The authors investigated field strengths of 0.35, 0.7, 1, 1.5, and 3 T. Changes to the dose heterogeneity index of 0.1% were seen in the prostate and head-and-neck cases, reflecting negligible dose perturbations to the target volumes; in contrast, a change from 5.5% to 20.1% was observed in the lung case. 
Conclusions: This study demonstrated that the effect of external magnetic fields can be mitigated by exploiting a more rotationally symmetric treatment modality. PMID:25652485

  3. Technical Note: A treatment plan comparison between dynamic collimation and a fixed aperture during spot scanning proton therapy for brain treatment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Blake, E-mail: bsmith34@wisc.edu; Gelover,

    Purpose: To quantitatively assess the advantages of an energy-layer-specific dynamic collimation system (DCS) versus a per-field fixed aperture for spot scanning proton therapy (SSPT). Methods: Five brain cancer patients previously planned and treated with SSPT were replanned using an in-house treatment planning system capable of modeling collimated and uncollimated proton beamlets. The uncollimated plans, which served as a baseline for comparison, reproduced the target coverage and organ-at-risk sparing of the clinically delivered plans. The collimator opening for the fixed-aperture-based plans was determined from the combined cross sections of the target in the beam’s eye view over all energy layers, which included an additional margin equivalent to the maximum beamlet displacement for the respective energy of that energy layer. The DCS-based plans were created by selecting appropriate collimator positions for each row of beam spots during a raster-style scanning pattern; the positions were optimized to maximize the dose contribution to the target while limiting the dose delivered to adjacent normal tissue. Results: The reduction of mean dose to normal tissue adjacent to the target, as defined by a 10 mm ring surrounding the target, averaged 13.65% (range: 11.8%–16.9%) and 5.18% (2.9%–7.1%) for the DCS and fixed aperture plans, respectively. The conformity index, as defined by the ratio of the volume of the 50% isodose line to the target volume, yielded an average improvement of 21.35% (19.4%–22.6%) and 8.38% (4.7%–12.0%) for the DCS and fixed aperture plans, respectively. Conclusions: The ability of the DCS to provide collimation at each energy layer yielded better conformity in comparison to fixed aperture plans.
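
    The conformity index used in the record above (volume enclosed by the 50% isodose surface divided by the target volume) is straightforward to compute from a dose grid. A minimal sketch on a toy 3-D dose array; the geometry, dose values, and prescription are invented for illustration:

```python
import numpy as np

def conformity_index(dose, target_mask, prescription, isodose_frac=0.5):
    """Ratio of the volume receiving >= isodose_frac * prescription
    to the target volume; voxel counts stand in for volumes."""
    treated = np.count_nonzero(dose >= isodose_frac * prescription)
    return treated / np.count_nonzero(target_mask)

# Toy geometry: exponential dose fall-off around a small spherical target.
z, y, x = np.mgrid[:40, :40, :40]
r = np.sqrt((x - 20.0) ** 2 + (y - 20.0) ** 2 + (z - 20.0) ** 2)
dose = 60.0 * np.exp(-r / 10.0)   # hypothetical dose distribution
target = r <= 5.0                 # hypothetical target volume
ci = conformity_index(dose, target, prescription=30.0)  # hypothetical Rx
```

A value of 1 would mean the 50% isodose surface exactly matches the target; the large value in this toy case reflects the slow dose fall-off, which is the quantity the DCS improves.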

  4. Staging of RF-accelerating Units in a MEMS-based Ion Accelerator

    NASA Astrophysics Data System (ADS)

    Persaud, A.; Seidl, P. A.; Ji, Q.; Feinberg, E.; Waldron, W. L.; Schenkel, T.; Ardanuc, S.; Vinayakumar, K. B.; Lal, A.

    Multiple Electrostatic Quadrupole Array Linear Accelerators (MEQALACs) provide an opportunity to realize compact radio-frequency (RF) accelerator structures that can deliver very high beam currents. MEQALACs have been previously realized with acceleration gap distances and beam aperture sizes of the order of centimeters. Through advances in Micro-Electro-Mechanical Systems (MEMS) fabrication, MEQALACs can now be scaled down to the sub-millimeter regime and batch processed on wafer substrates. In this paper we show first results from using three RF stages in a compact MEMS-based ion accelerator. The results presented show proof-of-concept with accelerator structures formed from printed circuit boards using a 3 × 3 beamlet arrangement and noble gas ions at 10 keV. We present a simple model to describe the measured results. We also discuss some of the scaling behaviour of a compact MEQALAC. The MEMS-based approach enables a low-cost, highly versatile accelerator covering a wide range of currents (10 μA to 100 mA) and beam energies (100 keV to several MeV). Applications include ion-beam analysis, mass spectrometry, materials processing, and at very high beam powers, plasma heating.

  5. Staging of RF-accelerating Units in a MEMS-based Ion Accelerator

    DOE PAGES

    Persaud, A.; Seidl, P. A.; Ji, Q.; ...

    2017-10-26

    Multiple Electrostatic Quadrupole Array Linear Accelerators (MEQALACs) provide an opportunity to realize compact radio-frequency (RF) accelerator structures that can deliver very high beam currents. MEQALACs have been previously realized with acceleration gap distances and beam aperture sizes of the order of centimeters. Through advances in Micro-Electro-Mechanical Systems (MEMS) fabrication, MEQALACs can now be scaled down to the sub-millimeter regime and batch processed on wafer substrates. In this paper we show first results from using three RF stages in a compact MEMS-based ion accelerator. The results presented show proof-of-concept with accelerator structures formed from printed circuit boards using a 3 × 3 beamlet arrangement and noble gas ions at 10 keV. We present a simple model to describe the measured results. We also discuss some of the scaling behaviour of a compact MEQALAC. The MEMS-based approach enables a low-cost, highly versatile accelerator covering a wide range of currents (10 μA to 100 mA) and beam energies (100 keV to several MeV). Applications include ion-beam analysis, mass spectrometry, materials processing, and at very high beam powers, plasma heating.

  7. Ion beam collimating grid to reduce added defects

    DOEpatents

    Lindquist, Walter B.; Kearney, Patrick A.

    2003-01-01

    A collimating grid for an ion source, located after the exit grid. The collimating grid collimates the ion beamlets, suppressing beam spread and limiting beam divergence during transients and steady-state operation. The additional exit or collimating grid prevents beam divergence during turn-on and turn-off and prevents ions from hitting the periphery of the target, where there is re-deposited material, or from missing the target and hitting the wall of the vessel, where there is deposited material, thereby preventing defects from being deposited on a substrate to be coated. Thus, the addition of a collimating grid to an ion source ensures that the ion beam will hit and be confined to a specific target area.

  8. A SIMPLE METHOD FOR MEASURING THE ELECTRON-BEAM MAGNETIZATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Halavanau, A.; Qiang, G.; Wisniewski, E.

    2016-10-18

    There are a number of projects that require magnetized beams, such as electron cooling or aiding in “flat” beam transforms. Here we explore a simple technique to characterize the magnetization, observed through the angular momentum of magnetized beams. These beams are produced through photoemission. The generating drive laser first passes through microlens arrays (fly-eye light condensers) to form a transversely modulated pulse incident on the photocathode surface [1]. The resulting charge distribution is then accelerated from the photocathode. We explore the evolution of the pattern via the relative shearing of the beamlets, providing information about the angular momentum. This method is illustrated through numerical simulations, and preliminary measurements carried out at the Argonne Wakefield Accelerator (AWA) facility are presented.

  9. Analysis and experimental demonstration of conformal adaptive phase-locked fiber array for laser communications and beam projection applications

    NASA Astrophysics Data System (ADS)

    Liu, Ling

    The primary goal of this research is the analysis, development, and experimental demonstration of an adaptive phase-locked fiber array system for free-space optical communications and laser beam projection applications. To our knowledge, the developed adaptive phase-locked system composed of three fiber collimators (subapertures) with tip-tilt wavefront phase control at each subaperture represents the first reported fiber array system that implements both phase-locking control and adaptive wavefront tip-tilt control capabilities. This research has also resulted in the following innovations: (a) The first experimental demonstration of a phase-locked fiber array with tip-tilt wavefront aberration compensation at each fiber collimator; (b) Development and demonstration of the fastest currently reported stochastic parallel gradient descent (SPGD) system capable of operation at 180,000 iterations per second; (c) The first experimental demonstration of a laser communication link based on a phase-locked fiber array; (d) The first successful experimental demonstration of turbulence and jitter-induced phase distortion compensation in a phase-locked fiber array optical system; (e) The first demonstration of laser beam projection onto an extended target with a randomly rough surface using a conformal adaptive fiber array system. Fiber array optical systems, the subject of this study, can overcome some of the drawbacks of conventional monolithic large-aperture transmitter/receiver optical systems that are usually heavy, bulky, and expensive. The primary experimental challenges in the development of the adaptive phase-locked fiber array included precise (<5 microrad) alignment of the fiber collimators and development of fast (100 kHz-class) phase-locking and wavefront tip-tilt control systems. 
The precise alignment of the fiber collimator array is achieved through a specially developed initial coarse alignment tool based on high precision piezoelectric picomotors and a dynamic fine alignment mechanism implemented with specially designed and manufactured piezoelectric fiber positioners. Phase-locking of the fiber collimators is performed by controlling the phases of the output beams (beamlets) using integrated polarization-maintaining (PM) fiber-coupled LiNbO3 phase shifters. The developed phase-locking controllers are based on either the SPGD algorithm or the multi-dithering technique. Subaperture wavefront phase tip-tilt control is realized using piezoelectric fiber positioners that are controlled using a computer-based SPGD controller. Both coherent (phase-locked) and incoherent beam combining in the fiber array system are analyzed theoretically and experimentally. Two special fiber-based beam-combining testbeds have been built to demonstrate the technical feasibility of phase-locking compensation prior to free-space operation. In addition, the reciprocity of counter-propagating beams in a phase-locked fiber array system has been investigated. Coherent beam combining in a phase-locking system with wavefront phase tip-tilt compensation at each subaperture is successfully demonstrated when laboratory-simulated turbulence and wavefront jitters are present in the propagation path of the beamlets. In addition, coherent beam combining with a non-cooperative extended target in the control loop is successfully demonstrated.
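
    The SPGD controller referenced above can be illustrated in a few lines: all channels receive small parallel dithers, a single scalar metric is read twice, and every channel steps in proportion to the metric change times its own dither. The quadratic "power-in-the-bucket" metric, gain, and dither amplitude below are toy stand-ins for the real photodetector signal and hardware:

```python
import numpy as np

rng = np.random.default_rng(1)

def metric(phases):
    """Toy power-in-the-bucket metric: 1.0 when all beamlet phases align."""
    return np.abs(np.exp(1j * phases).sum()) ** 2 / phases.size ** 2

phases = rng.uniform(-np.pi, np.pi, size=3)  # three fiber collimators
gain, dither = 30.0, 0.05                    # SPGD gain and dither amplitude

for _ in range(3000):
    delta = dither * rng.choice([-1.0, 1.0], size=phases.size)  # parallel dither
    dJ = metric(phases + delta) - metric(phases - delta)        # two metric reads
    phases += gain * dJ * delta                                 # SPGD update
```

Because only one scalar metric is measured, the same loop scales to many subapertures, which is why the 180,000 iterations-per-second figure matters for real-time compensation.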

  10. Time-resolved dosimetry using a pinpoint ionization chamber as quality assurance for IMRT and VMAT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Louwe, Robert J. W., E-mail: rob.louwe@ccdbh.org.nz; Satherley, Thomas; Day, Rebecca A.

    Purpose: To develop a method to verify the dose delivery in relation to the individual control points of intensity modulated radiotherapy (IMRT) and volumetric modulated arc therapy (VMAT) using an ionization chamber. In addition to more effective problem solving during patient-specific quality assurance (QA), the aim is to eventually map out the limitations in the treatment chain and enable a targeted improvement of the treatment technique in an efficient way. Methods: Pretreatment verification was carried out for 255 treatment plans that included a broad range of treatment indications in two departments using the equipment of different vendors. In-house-developed software was used to enable calculation of the dose delivery for the individual beamlets in the treatment planning system (TPS), for data acquisition, and for analysis of the data. The observed deviations were related to various delivery and measurement parameters such as gantry angle, field size, and the position of the detector with respect to the field edge to distinguish between error sources. Results: The average deviation of the integral fraction dose during pretreatment verification of the planning target volume dose was −2.1% ± 2.2% (1 SD), −1.7% ± 1.7% (1 SD), and 0.0% ± 1.3% (1 SD) for IMRT at the Radboud University Medical Center (RUMC), VMAT (RUMC), and VMAT at the Wellington Blood and Cancer Centre, respectively. Verification of the dose to organs at risk gave very similar results but was generally subject to a larger measurement uncertainty due to the position of the detector at a high dose gradient. The observed deviations could be related to limitations of the TPS beam models, attenuation of the treatment couch, as well as measurement errors. The apparent systematic error of about −2% in the average deviation of the integral fraction dose in the RUMC results could be explained by the limitations of the TPS beam model in the calculation of the beam penumbra. 
    Conclusions: This study showed that time-resolved dosimetry using an ionization chamber is feasible and can be largely automated, which limits the additional time required compared to integrated dose measurements. It provides a unique QA method which enables identification and quantification of the contribution of various error sources during IMRT and VMAT delivery.

  11. Beam-specific planning volumes for scattered-proton lung radiotherapy

    NASA Astrophysics Data System (ADS)

    Flampouri, S.; Hoppe, B. S.; Slopsema, R. L.; Li, Z.

    2014-08-01

    This work describes the clinical implementation of a beam-specific planning treatment volume (bsPTV) calculation for lung cancer proton therapy and its integration into the treatment planning process. Uncertainties incorporated in the calculation of the bsPTV included setup errors, machine delivery variability, breathing effects, inherent proton range uncertainties and combinations of the above. Margins were added for translational and rotational setup errors and breathing motion variability during the course of treatment as well as for their effect on the proton range of each treatment field. The effect of breathing motion and deformation on the proton range was calculated from 4D computed tomography data. Range uncertainties were considered by taking into account the individual voxel HU uncertainty along each proton beamlet. Beam-specific treatment volumes generated for 12 patients were used: a) as planning targets, b) for routine plan evaluation, c) to aid beam angle selection and d) to create beam-specific margins for organs at risk to ensure sparing. The alternative planning technique based on the bsPTVs produced similar target coverage as the conventional proton plans while better sparing the surrounding tissues. Conventional proton plans were evaluated by comparing the dose distributions per beam with the corresponding bsPTV. The bsPTV volume as a function of beam angle revealed some unexpected sources of uncertainty and could help the planner choose more robust beams. Beam-specific planning volume for the spinal cord was used for dose distribution shaping to ensure organ sparing laterally and distally to the beam.

  12. Focused electron and ion beam systems

    DOEpatents

    Leung, Ka-Ngo; Reijonen, Jani; Persaud, Arun; Ji, Qing; Jiang, Ximan

    2004-07-27

    An electron beam system is based on a plasma generator in a plasma ion source with an accelerator column. The electrons are extracted from a plasma cathode in a plasma ion source, e.g. a multicusp plasma ion source. The beam can be scanned in both the x and y directions, and the system can be operated with multiple beamlets. A compact focused ion or electron beam system has a plasma ion source and an all-electrostatic beam acceleration and focusing column. The ion source is a small chamber with the plasma produced by radio-frequency (RF) induction discharge. The RF antenna is wound outside the chamber and connected to an RF supply. Ions or electrons can be extracted from the source. A multi-beam system has several sources of different species and an electron beam source.

  13. Experimental demonstration of spatially coherent beam combining using optical parametric amplification.

    PubMed

    Kurita, Takashi; Sueda, Keiichi; Tsubakimoto, Koji; Miyanaga, Noriaki

    2010-07-05

    We experimentally demonstrated coherent beam combining using optical parametric amplification with a nonlinear crystal pumped by a random-phased multiple-beam array of the second harmonic of a Nd:YAG laser at a 10-Hz repetition rate. In the proof-of-principle experiment, the phase jump between two pump beams was precisely controlled by a motorized actuator. For the demonstration of multiple-beam combining, a random phase plate was used to create random-phased beamlets as a pump pulse. Far-field patterns of the pump, the signal, and the idler indicated that spatially coherent signal beams were obtained in both cases. This approach allows scaling of the intensity of optical parametric chirped pulse amplification up to the exawatt level while maintaining diffraction-limited beam quality.

  14. Single-shot terahertz time-domain spectroscopy in pulsed high magnetic fields.

    PubMed

    Noe, G Timothy; Katayama, Ikufumi; Katsutani, Fumiya; Allred, James J; Horowitz, Jeffrey A; Sullivan, David M; Zhang, Qi; Sekiguchi, Fumiya; Woods, Gary L; Hoffmann, Matthias C; Nojiri, Hiroyuki; Takeda, Jun; Kono, Junichiro

    2016-12-26

    We have developed a single-shot terahertz time-domain spectrometer to perform optical-pump/terahertz-probe experiments in pulsed, high magnetic fields up to 30 T. The single-shot detection scheme for measuring a terahertz waveform incorporates a reflective echelon to create time-delayed beamlets across the intensity profile of the optical gate beam before it spatially and temporally overlaps with the terahertz radiation in a ZnTe detection crystal. After imaging the gate beam onto a camera, we can retrieve the terahertz time-domain waveform by analyzing the resulting image. To demonstrate the utility of our technique, we measured cyclotron resonance absorption of optically excited carriers in the terahertz frequency range in intrinsic silicon at high magnetic fields, with results that agree well with published values.

  15. Advanced ion thruster and electrochemical launcher research

    NASA Technical Reports Server (NTRS)

    Wilbur, P. J.

    1983-01-01

    The theoretical model of orificed hollow cathode operation predicted experimentally observed cathode performance with reasonable accuracy. The deflection and divergence characteristics of ion beamlets emanating from a two-grid optics system as a function of the relative offset of the screen and accel grid hole axes were described. Ion currents associated with discharge chamber operation were controlled to improve ion thruster performance markedly. Limitations imposed by basic physical laws on reductions in screen grid hole size and grid spacing for ion optics systems were described. The influence of stray magnetic fields in the vicinity of a neutralizer on the performance of that neutralizer was demonstrated. The ion current density extracted from a thruster was enhanced by injecting electrons into the region between its ion accelerating grids. Theoretical analysis of the electrothermal ramjet concept of launching space-bound payloads at high acceleration levels is described. The operation of this system is broken down into two phases. In the light gas gun phase the payload is accelerated to the velocity at which the ramjet phase can commence. Preliminary models of operation are examined and shown to yield overall energy efficiencies for a typical Earth-escape launch of 60 to 70%. When shock losses are incorporated, these efficiencies are still observed to remain at the relatively high values of 40 to 50%.

  16. SU-E-T-17: A Mathematical Model for PinPoint Chamber Correction in Measuring Small Fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, T; Zhang, Y; Li, X

    2014-06-01

    Purpose: For small-field dosimetry, such as measuring the cone output factor for stereotactic radiosurgery, ion chambers often underestimate the dose, due to both the volume averaging effect and the lack of electron equilibrium. The purpose of this work is to develop a mathematical model, specifically for the pinpoint chamber, to calculate the correction factors corresponding to different types of small fields, including single cone-based circular fields and non-standard composite fields. Methods: A PTW 0.015cc PinPoint chamber was used in the study. Its response in a given field was modeled as the total contribution of many small beamlets, each with a different response factor depending on the relative strength, radial distance to the chamber axis, and the beam angle. To get these factors, 12 cone-shaped circular fields (5mm, 7.5mm, 10mm, 12.5mm, 15mm, 20mm, 25mm, 30mm, 35mm, 40mm, 50mm, 60mm) were irradiated and measured with the PinPoint chamber. For each field size, hundreds of readings were recorded for every 2mm chamber shift in the horizontal plane. These readings were then compared with the theoretical doses obtained with Monte Carlo calculation. A penalized-least-square optimization algorithm was developed to determine the beamlet response factors. After the parameter fitting, the established mathematical model was validated with the same MC code for other non-circular fields. Results: The optimization algorithm used for parameter fitting was stable and the resulting response factors were smooth in the spatial domain. After correction with the mathematical model, the chamber readings matched the Monte Carlo calculations for all tested fields to within 2%. Conclusion: A novel mathematical model has been developed for the PinPoint chamber for dosimetric measurement of small fields. The current model is applicable only when the beam axis is perpendicular to the chamber axis. It can be applied to non-standard composite fields. Further validation with other types of detectors is being conducted.
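
    A penalized-least-squares fit of the kind described above, matching detector readings to reference doses through per-beamlet response factors with a smoothness penalty, can be sketched as ridge-style regression with a second-difference penalty. The design matrix, noise level, and penalty weight below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical setup: each chamber reading is a weighted sum of beamlet doses.
n_readings, n_beamlets = 200, 30
A = rng.random((n_readings, n_beamlets))                 # beamlet weights per shift
true_response = 1.0 - 0.3 * np.linspace(0.0, 1.0, n_beamlets) ** 2
readings = A @ true_response + 0.01 * rng.standard_normal(n_readings)

# Second-difference penalty keeps the fitted response factors spatially smooth,
# mirroring the "smooth in the spatial domain" result reported above.
D2 = np.diff(np.eye(n_beamlets), n=2, axis=0)
lam = 1.0
response = np.linalg.solve(A.T @ A + lam * D2.T @ D2, A.T @ readings)
```

The closed-form normal-equation solve stands in for the paper's iterative optimizer; the penalty term is what stabilizes the fit when individual beamlet factors are weakly constrained by the data.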

  17. Calculation and Prediction of the Effect of Respiratory Motion on Whole Breast Radiation Therapy Dose Distributions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cao Junsheng; Roeske, John C.; Chmura, Steve J.

    2009-07-01

    The standard treatment technique used for whole-breast irradiation can result in undesirable dose distributions in the treatment site, leading to skin reaction/fibrosis and pulmonary and cardiac toxicities. Hence, the technique has evolved from the conventional wedged technique (CWT) to segment intensity-modulated radiation therapy (SIMRT) and beamlet IMRT (IMRT). However, these newer techniques feature more highly modulated dose distributions that may be affected by respiration. The purpose of this work was to conduct a simple study of the clinical impact of respiratory motion on breast radiotherapy dose distributions for the three treatment planning techniques. The ultimate goal was to determine which patients would benefit most from the use of motion management. Eight patients with early-stage breast cancer underwent a free-breathing (FB) computed tomography (CT) simulation, with medial and lateral markers placed on the skin. Two additional CT scans were obtained at the end of inspiration (EI) and the end of expiration (EE). The FB CT scan was used to develop treatment plans using each technique. Each plan was then applied to the EI and EE CT scans. Compared with the FB CT scan, the medial markers moved up to 1.8 cm in the anterior-superior direction at the end of inspiration, and on average 8 mm. The CWT and SIMRT techniques were not 'sensitive' to respiratory motion, because the percentage of the clinical target volume (CTV) receiving 95% of the prescription dose (V{sub 95%}) remained constant for both techniques. For patients who had large respiratory motion, indicated by marker movement >0.6 cm, differences in CTV coverage at V100% between FB and EI for beamlet IMRT plans were greater than 10% and up to 18%. A linear model was developed to relate the dosimetric coverage difference introduced by respiration to the motion information. With this model, the dosimetric coverage difference introduced by respiratory motion can be evaluated during patient CT simulation, and an appropriate treatment method can be chosen after the simulation.
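
    A linear model of the kind described above, relating marker motion at simulation to the respiration-induced coverage difference, is a one-line least-squares fit. The motion/coverage pairs below are invented for illustration, not the paper's data:

```python
import numpy as np

# Hypothetical (marker motion [cm], V100% coverage difference [%]) pairs
motion = np.array([0.2, 0.4, 0.6, 0.9, 1.2, 1.5, 1.8])
coverage_diff = np.array([0.5, 1.1, 3.0, 6.2, 9.8, 13.5, 17.1])

slope, intercept = np.polyfit(motion, coverage_diff, 1)  # least-squares line

def predicted_coverage_loss(marker_motion_cm):
    """Predict the coverage difference from motion measured at simulation."""
    return slope * marker_motion_cm + intercept
```

In the workflow described, the predicted loss for a patient's measured marker motion would be compared against a clinical threshold to decide whether motion management is warranted.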

  18. Concepts for the magnetic design of the MITICA neutral beam test facility ion accelerator.

    PubMed

    Chitarin, G; Agostinetti, P; Marconato, N; Marcuzzi, D; Sartori, E; Serianni, G; Sonato, P

    2012-02-01

    The Megavolt ITER Injector and Concept Advancement (MITICA) neutral beam injector test facility will consist of an RF-driven negative-ion source and an electrostatic accelerator designed to produce a negative-ion beam with energy up to 1 MeV. The beam is then neutralized in order to obtain a focused 17 MW neutral beam. The magnetic configuration inside the accelerator is of crucial importance for the achievement of good beam efficiency, with the early deflection of the co-extracted and stripped electrons, and also of the required beam-optics quality, with the correction of undesired ion beamlet deflections. Several alternative magnetic design concepts have been considered, comparing the magnetic and beam-optics simulation results in detail and evidencing the advantages and drawbacks of each solution from both the physics and engineering points of view.

  19. Two-Photon Imaging with Diffractive Optical Elements

    PubMed Central

    Watson, Brendon O.; Nikolenko, Volodymyr; Yuste, Rafael

    2009-01-01

    Two-photon imaging has become a useful tool for optical monitoring of neural circuits, but it requires high laser power and serial scanning of each pixel in a sample. This results in slow imaging rates, limiting the measurements of fast signals such as neuronal activity. To improve the speed and signal-to-noise ratio of two-photon imaging, we introduce a simple modification of a two-photon microscope, using a diffractive optical element (DOE) which splits the laser beam into several beamlets that can simultaneously scan the sample. We demonstrate the advantages of DOE scanning by enhancing the speed and sensitivity of two-photon calcium imaging of action potentials in neurons from neocortical brain slices. DOE scanning can easily improve the detection of time-varying signals in two-photon and other non-linear microscopic techniques. PMID:19636390

  20. Single-shot terahertz time-domain spectroscopy in pulsed high magnetic fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Noe, II, G. Timothy; Katayama, Ikufumi; Katsutani, Fumiya

    Here, we have developed a single-shot terahertz time-domain spectrometer to perform optical-pump/terahertz-probe experiments in pulsed, high magnetic fields up to 30 T. The single-shot detection scheme for measuring a terahertz waveform incorporates a reflective echelon to create time-delayed beamlets across the intensity profile of the optical gate beam before it spatially and temporally overlaps with the terahertz radiation in a ZnTe detection crystal. After imaging the gate beam onto a camera, we can retrieve the terahertz time-domain waveform by analyzing the resulting image. To demonstrate the utility of our technique, we measured cyclotron resonance absorption of optically excited carriers in the terahertz frequency range in intrinsic silicon at high magnetic fields, with results that agree well with published values.

  1. Single-shot terahertz time-domain spectroscopy in pulsed high magnetic fields

    DOE PAGES

    Noe, II, G. Timothy; Katayama, Ikufumi; Katsutani, Fumiya; ...

    2016-12-22

    Here, we have developed a single-shot terahertz time-domain spectrometer to perform optical-pump/terahertz-probe experiments in pulsed, high magnetic fields up to 30 T. The single-shot detection scheme for measuring a terahertz waveform incorporates a reflective echelon to create time-delayed beamlets across the intensity profile of the optical gate beam before it spatially and temporally overlaps with the terahertz radiation in a ZnTe detection crystal. After imaging the gate beam onto a camera, we can retrieve the terahertz time-domain waveform by analyzing the resulting image. To demonstrate the utility of our technique, we measured cyclotron resonance absorption of optically excited carriers in the terahertz frequency range in intrinsic silicon at high magnetic fields, with results that agree well with published values.
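
    The image-to-waveform retrieval described in these records can be sketched in a few lines: each echelon step delays one beamlet, so one image axis becomes the time axis. The column-averaging, the 25 fs step delay, and the differential electro-optic signal used below are illustrative assumptions, not details from the paper.

```python
import numpy as np

def retrieve_waveform(img_thz, img_ref, step_delay_fs=25.0):
    """Sketch of single-shot retrieval: each image column maps to one echelon
    step, i.e. one gate-beam time delay (step_delay_fs is an assumed value)."""
    sig = img_thz.mean(axis=0)   # average over the beam profile in each column
    ref = img_ref.mean(axis=0)   # reference image taken without the THz field
    dE = (sig - ref) / ref       # differential EO signal, proportional to E_THz(t)
    t = np.arange(dE.size) * step_delay_fs
    return t, dE

# synthetic demo: a weak sinusoidal THz modulation on a flat gate beam
ref = np.ones((8, 100))
field = 0.05 * np.sin(2 * np.pi * np.arange(100) / 20.0)
t, dE = retrieve_waveform(ref * (1 + field), ref)
```

    A real retrieval also needs spatial calibration of the echelon image and of the electro-optic response; this sketch only conveys the column-to-delay mapping.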

  2. Full characterization of an attosecond pulse generated using an infrared driver

    PubMed Central

    Zhang, Chunmei; Brown, Graham G.; Kim, Kyung Taec; Villeneuve, D. M.; Corkum, P. B.

    2016-01-01

    The physics of attosecond pulse generation requires infrared driving wavelengths to reach the soft X-ray region. However, with longer driving wavelengths the harmonic conversion efficiency drops significantly, which makes conventional attosecond pulse measurement by streaking very difficult due to the low photoionization cross section in the soft X-ray region. In-situ measurement was developed for precisely this purpose. We use in-situ measurement to characterize, in both space and time, an attosecond pulse produced by ultrafast wavefront rotation of a 1.8 μm fundamental beam. We confirm what models suggest: each beamlet is an isolated attosecond pulse in the time domain. We obtain a nearly constant, flat wavefront curvature through the whole photon energy range. The measurement method is scalable to the soft X-ray spectral region. PMID:27230961

  3. Method of constructing dished ion thruster grids to provide hole array spacing compensation

    NASA Technical Reports Server (NTRS)

    Banks, B. A. (Inventor)

    1976-01-01

    The center-to-center spacings of a photoresist pattern for an array of holes applied to a thin metal sheet are increased by uniformly stretching the thin metal sheet in all directions along the plane of the sheet. The uniform stretching is provided by securely clamping the periphery of the sheet and applying an annular force against the face of the sheet, within the periphery of the sheet and around the photoresist pattern. The technique is used in the construction of ion thruster grid units where the outer or downstream grid is subjected to uniform stretching prior to convex molding. The technique provides alignment of the holes of grid pairs so as to direct the ion beamlets in a direction parallel to the axis of the grid unit and thereby provide optimization of the available thrust.

  4. Overview of Heavy Ion Fusion Accelerator Research in the U. S.

    NASA Astrophysics Data System (ADS)

    Friedman, Alex

    2002-12-01

    This article provides an overview of current U.S. research on accelerators for Heavy Ion Fusion, that is, inertial fusion driven by intense beams of heavy ions with the goal of energy production. The concept, beam requirements, approach, and major issues are introduced. An overview of a number of new experiments is presented. These include: the High Current Experiment now underway at Lawrence Berkeley National Laboratory; studies of advanced injectors (and in particular an approach based on the merging of multiple beamlets), being investigated experimentally at Lawrence Livermore National Laboratory; the Neutralized (chamber) Transport Experiment being assembled at Lawrence Berkeley National Laboratory; and smaller experiments at the University of Maryland and at Princeton Plasma Physics Laboratory. The comprehensive program of beam simulations and theory is outlined. Finally, prospects and plans for further development of this promising approach to fusion energy are discussed.

  5. Cleanliness for the NIF 1ω Laser Amplifiers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spaeth, M. L.; Manes, K. R.; Honig, J.

    During the years before the National Ignition Facility (NIF) laser system, a set of generally accepted cleaning procedures had been developed for the large 1ω amplifiers of an inertial confinement fusion laser, and up until 1999 similar procedures were planned for NIF. Several parallel sets of test results were obtained from 1992 to 1999 for large amplifiers using these accepted cleaning procedures in the Beamlet physics test bed and in the Amplifier Module Prototype Laboratory (AMPLAB), a four-slab-high prototype large amplifier structure. Both of these showed damage to their slab surfaces that, if projected to operating conditions for NIF, would lead to higher than acceptable slab-refurbishment rates. Finally, this study tracks the search for the smoking-gun origin of this damage and describes the solution employed in NIF for avoiding flashlamp-induced aerosol damage to its 1ω amplifier slabs.

  6. Cleanliness for the NIF 1ω Laser Amplifiers

    DOE PAGES

    Spaeth, M. L.; Manes, K. R.; Honig, J.

    2017-03-23

    During the years before the National Ignition Facility (NIF) laser system, a set of generally accepted cleaning procedures had been developed for the large 1ω amplifiers of an inertial confinement fusion laser, and up until 1999 similar procedures were planned for NIF. Several parallel sets of test results were obtained from 1992 to 1999 for large amplifiers using these accepted cleaning procedures in the Beamlet physics test bed and in the Amplifier Module Prototype Laboratory (AMPLAB), a four-slab-high prototype large amplifier structure. Both of these showed damage to their slab surfaces that, if projected to operating conditions for NIF, would lead to higher than acceptable slab-refurbishment rates. Finally, this study tracks the search for the smoking-gun origin of this damage and describes the solution employed in NIF for avoiding flashlamp-induced aerosol damage to its 1ω amplifier slabs.

  7. Multi-beam linear accelerator EVT

    NASA Astrophysics Data System (ADS)

    Teryaev, Vladimir E.; Kazakov, Sergey Yu.; Hirshfield, Jay L.

    2016-09-01

    A novel electron multi-beam accelerator is presented. The accelerator, abbreviated EVT (Electron Voltage Transformer), belongs to the class of two-beam accelerators. It combines an RF generator and, essentially, an accelerator within the same vacuum envelope. Drive beamlets and an accelerated beam are modulated in RF modulators, and the bunches then pass into an accelerating structure comprising inductively tuned cavities that are uncoupled from one another, where energy transfer from the drive beams to the accelerated beam occurs. Bunch phasing is achieved by choosing appropriate distances between the gaps of adjacent cavities. Preliminary results of numerical simulations and the initial specification of an EVT operating in S-band, with a 60 kV gun and generating a 2.7 A, 1.1 MV beam at its output, are presented. A relatively high efficiency of 67% and a high design average power suggest that EVT can find use in industrial applications.
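
    The bunch-phasing idea (choosing gap-to-gap distances so that bunches arrive in phase) can be illustrated with a simple synchronism estimate. The numbers below (S-band at 3 GHz, a 60 keV beam) follow the abstract, but the one-RF-period spacing rule is an illustrative assumption, not the paper's actual cavity layout.

```python
import math

C = 299_792_458.0    # speed of light, m/s
ME_C2 = 510_998.95   # electron rest energy, eV

def beta(kinetic_eV):
    """Relativistic velocity factor v/c for a given kinetic energy."""
    gamma = 1.0 + kinetic_eV / ME_C2
    return math.sqrt(1.0 - 1.0 / gamma**2)

def cavity_spacing(kinetic_eV, f_hz, n_periods=1):
    """Distance a bunch travels in n_periods RF periods: a spacing that keeps
    successive gaps in phase (simplified synchronism condition)."""
    return n_periods * beta(kinetic_eV) * C / f_hz

d = cavity_spacing(60e3, 3e9)   # ~4.5 cm for a 60 keV beam at 3 GHz
```

    As the beam gains energy along the structure, beta grows, so the synchronous gap spacing grows with it; that is the design freedom the abstract alludes to.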

  8. Evolution of Gas Cell Targets for Magnetized Liner Inertial Fusion Experiments at the Sandia National Laboratories PECOS Test Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paguio, R. R.; Smith, G. E.; Taylor, J. L.

    Z-Beamlet (ZBL) experiments conducted at the PECOS test facility at Sandia National Laboratories (SNL) investigated the nonlinear processes in laser-plasma interaction (laser-plasma instabilities, LPI) that complicate the deposition of laser energy through enhanced absorption, backscatter, filamentation, and beam spray, which can occur in large-scale laser-heated gas cell targets. These targets and experiments were designed to provide better insight into the physics of the laser preheat stage of the Magnetized Liner Inertial Fusion (MagLIF) scheme being tested on the SNL Z-machine. The experiments aim to understand the tradeoffs between laser spot size, laser pulse shape, laser entrance hole (LEH) window thickness, and fuel density for laser preheat. Gas cell target design evolution and fabrication adaptations to accommodate the evolving experiment and scientific requirements are also described in this paper.

  9. Multi-beam linear accelerator EVT

    DOE PAGES

    Teryaev, Vladimir E.; Kazakov, Sergey Yu.; Hirshfield, Jay L.

    2016-03-29

    A novel electron multi-beam accelerator is presented. The accelerator, abbreviated EVT (Electron Voltage Transformer), belongs to the class of two-beam accelerators. It combines an RF generator and, essentially, an accelerator within the same vacuum envelope. Drive beamlets and an accelerated beam are modulated in RF modulators, and the bunches then pass into an accelerating structure comprising inductively tuned cavities that are uncoupled from one another, where energy transfer from the drive beams to the accelerated beam occurs. Bunch phasing is achieved by choosing appropriate distances between the gaps of adjacent cavities. Preliminary results of numerical simulations and the initial specification of an EVT operating in S-band, with a 60 kV gun and generating a 2.7 A, 1.1 MV beam at its output, are presented. Furthermore, a relatively high efficiency of 67% and a high design average power suggest that EVT can find use in industrial applications.

  10. Evolution of Gas Cell Targets for Magnetized Liner Inertial Fusion Experiments at the Sandia National Laboratories PECOS Test Facility

    DOE PAGES

    Paguio, R. R.; Smith, G. E.; Taylor, J. L.; ...

    2017-12-04

    Z-Beamlet (ZBL) experiments conducted at the PECOS test facility at Sandia National Laboratories (SNL) investigated the nonlinear processes in laser-plasma interaction (laser-plasma instabilities, LPI) that complicate the deposition of laser energy through enhanced absorption, backscatter, filamentation, and beam spray, which can occur in large-scale laser-heated gas cell targets. These targets and experiments were designed to provide better insight into the physics of the laser preheat stage of the Magnetized Liner Inertial Fusion (MagLIF) scheme being tested on the SNL Z-machine. The experiments aim to understand the tradeoffs between laser spot size, laser pulse shape, laser entrance hole (LEH) window thickness, and fuel density for laser preheat. Gas cell target design evolution and fabrication adaptations to accommodate the evolving experiment and scientific requirements are also described in this paper.

  11. Design and characterization of electron beam focusing for X-ray generation in novel medical imaging architecture

    PubMed Central

    Bogdan Neculaes, V.; Zou, Yun; Zavodszky, Peter; Inzinna, Louis; Zhang, Xi; Conway, Kenneth; Caiafa, Antonio; Frutschy, Kristopher; Waters, William; De Man, Bruno

    2014-01-01

    A novel electron beam focusing scheme for medical X-ray sources is described in this paper. Most vacuum-based medical X-ray sources today employ a tungsten filament operated in a temperature-limited regime, with electrostatic focusing tabs for limited-range beam optics. This paper presents the electron beam optics designed for the first distributed X-ray source in the world for Computed Tomography (CT) applications. This distributed source includes 32 electron beamlets in a common vacuum chamber, with 32 circular dispenser cathodes operated in a space-charge-limited regime, where the initial circular beam is transformed into an elliptical beam before being collected at the anode. The electron beam optics designed and validated here are at the heart of the first Inverse Geometry CT system, with potential benefits in terms of improved image quality and dramatic X-ray dose reduction for the patient. PMID:24826066

  12. Multiaperture ion beam extraction from gas-dynamic electron cyclotron resonance source of multicharged ions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sidorov, A.; Dorf, M.; Zorin, V.

    2008-02-15

    An electron cyclotron resonance ion source with a quasi-gas-dynamic regime of plasma confinement (ReGIS), constructed at the Institute of Applied Physics, Russia, provides opportunities for extracting intense, high-brightness multicharged ion beams. Despite the short plasma lifetime in the magnetic trap of a ReGIS, the degree of multiple ionization may be significantly enhanced by increasing the power and frequency of the applied microwave radiation. The present work focuses on studying the quality of the intense beam from this source by the pepper-pot method. A single-beamlet emittance measured by the pepper-pot method was found to be ~70 π mm mrad, and the total extracted beam current obtained at 14 kV extraction voltage was ~25 mA. The results of numerical simulations of ion beam extraction are found to be in good agreement with experimental data.
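
    A pepper-pot measurement reduces to sampling pairs (x, x'): hole positions give x, and beamlet centroid shifts on the downstream screen divided by the drift length give x'. The rms emittance then follows from second moments. The sketch below runs on synthetic coordinates; the beam size, divergence, and correlation are made-up numbers, not data from this record.

```python
import numpy as np

def rms_emittance(x, xp):
    """Unnormalized rms emittance from sampled (position, divergence) pairs:
    eps = sqrt(<x^2><x'^2> - <x x'>^2), using centered moments."""
    dx, dxp = x - x.mean(), xp - xp.mean()
    return np.sqrt((dx**2).mean() * (dxp**2).mean() - (dx * dxp).mean()**2)

# synthetic beam: 2 mm rms size, 5 mrad rms uncorrelated divergence
rng = np.random.default_rng(0)
x = rng.normal(0.0, 2e-3, 100_000)               # positions, m
xp = 0.1 * x + rng.normal(0.0, 5e-3, 100_000)    # divergences, rad (correlated)
eps = rms_emittance(x, xp)   # ~1e-5 m rad, i.e. ~10 mm mrad
```

    The correlated term 0.1*x cancels out of the rms emittance by construction; only the uncorrelated divergence spread contributes, which is why pepper-pot data can separate beam size from intrinsic divergence.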

  13. Achievement and improvement of the JT-60U negative ion source for JT-60 Super Advanced (invited)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kojima, A.; Hanada, M.; Tanaka, Y.

    2010-02-15

    Development of the large negative ion source has progressed for the high-energy, high-power, long-pulse neutral beam injector for JT-60 Super Advanced. Countermeasures have been studied and tested for the critical issues of grid heat load and voltage holding capability. As for the heat load of the acceleration grids, direct interception of D⁻ ions was reduced by adjusting the beamlet steering. As a result, the heat load was reduced below the allowable level for long-pulse injections. As for the voltage holding capability, the local electric field was mitigated by tuning the gap lengths between the large-area acceleration grids in the accelerator. As a result, the voltage holding capability was improved up to the rated value of 500 kV. To investigate the voltage holding capability during beam acceleration, a beam acceleration test is ongoing with the new extended gap.

  14. Developing the RAL front end test stand source to deliver a 60 mA, 50 Hz, 2 ms H- beam

    NASA Astrophysics Data System (ADS)

    Faircloth, Dan; Lawrie, Scott; Letchford, Alan; Gabor, Christoph; Perkins, Mike; Whitehead, Mark; Wood, Trevor; Tarvainen, Olli; Komppula, Jani; Kalvas, Taneli; Dudnikov, Vadim; Pereira, Hugo; Izaola, Zunbeltz; Simkin, John

    2013-02-01

    All the Front End Test Stand (FETS) beam requirements have been achieved, but not simultaneously [1]. At 50 Hz repetition rates, beam current droop becomes unacceptable for pulse lengths longer than 1 ms. This is a fundamental limitation of the present source design. Previous researchers [2] have demonstrated that using a physically larger Penning surface plasma source should overcome these limitations. The scaled-source development strategy is outlined in this paper. A study of time-varying plasma behavior has been performed using a V-UV spectrometer. Initial experiments to test scaled plasma volumes are outlined. A dedicated plasma and extraction test stand (VESPA, Vessel for Extraction and Source Plasma Analysis) is being developed to allow new source and extraction designs to be appraised. The experimental work is backed up by modeling and simulations: a detailed ANSYS thermal model has been developed, and IBSimu is being used to design the extraction and beam transport. A novel 3D plasma modeling code using beamlets is being developed by Cobham Vector Fields using SCALA OPERA; early source modeling results are very promising. Hardware on FETS is also being developed in preparation to run the scaled source. A new 2 ms, 50 Hz, 25 kV pulsed extraction voltage power supply has been constructed, and a new discharge power supply is being designed. The design of the post-acceleration electrode assembly has been improved.

  15. Optical beam classification using deep learning: a comparison with rule- and feature-based classification

    NASA Astrophysics Data System (ADS)

    Alom, Md. Zahangir; Awwal, Abdul A. S.; Lowe-Webb, Roger; Taha, Tarek M.

    2017-08-01

    Deep-learning methods are gaining popularity because of their state-of-the-art performance in image classification tasks. In this paper, we explore classification of laser-beam images from the National Ignition Facility (NIF) using a novel deep-learning approach. NIF is the world's largest, most energetic laser. It has nearly 40,000 optics that precisely guide, reflect, amplify, and focus 192 laser beams onto a fusion target. NIF utilizes four petawatt lasers called the Advanced Radiographic Capability (ARC) to produce backlighting X-ray illumination to capture the implosion dynamics of NIF experiments with picosecond temporal resolution. In the current operational configuration, four independent short-pulse ARC beams are created and combined in a split-beam configuration in each of two NIF apertures at the entry of the pre-amplifier. The sub-aperture beams then propagate through the NIF beampath up to the ARC compressor. Each ARC beamlet is separately compressed with a dedicated set of four gratings and recombined as sub-apertures for transport to the parabola vessel, where the beams are focused by parabolic mirrors and pointed to the target. Small angular errors in the compressor gratings can cause the sub-aperture beams to diverge from one another and prevent accurate alignment through the transport section between the compressor and parabolic mirrors. This is an off-normal condition that must be detected and corrected. The goal of the off-normal check is to determine whether the ARC beamlets are sufficiently overlapped into a merged single spot or have diverged into two distinct spots. Thus, the objective of the current work is three-fold: developing a simple algorithm to perform off-normal classification, exploring the use of a Convolutional Neural Network (CNN) for the same task, and understanding the inter-relationship of the two approaches. The CNN recognition results are compared with other machine-learning approaches, such as Deep Neural Networks (DNN) and Support Vector Machines (SVM). The experimental results show around 96% classification accuracy using the CNN; the CNN approach also provides recognition results comparable to the present feature-based off-normal detection. The feature-based solution was developed to capture the expertise of a human expert in classifying the images. The misclassified results are further studied to explain the differences and discover any discrepancies or inconsistencies in the current classification.
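
    As a concrete baseline for the off-normal check itself (merged single spot vs. two distinct spots), a simple rule-based classifier can threshold the image and count connected bright regions. This is a hypothetical illustration of the rule-based branch of the comparison, not the paper's CNN or its actual feature set; the threshold fraction and the synthetic Gaussian spots are assumptions.

```python
import numpy as np

def count_spots(image, thresh_frac=0.5):
    """Count bright connected regions (4-connectivity) above a relative threshold."""
    mask = image > thresh_frac * image.max()
    seen = np.zeros_like(mask, dtype=bool)
    n = 0
    for i, j in zip(*np.nonzero(mask)):
        if seen[i, j]:
            continue
        n += 1
        stack = [(i, j)]          # flood-fill this spot
        while stack:
            a, b = stack.pop()
            if a < 0 or b < 0 or a >= mask.shape[0] or b >= mask.shape[1]:
                continue
            if not mask[a, b] or seen[a, b]:
                continue
            seen[a, b] = True
            stack += [(a + 1, b), (a - 1, b), (a, b + 1), (a, b - 1)]
    return n

def off_normal(image):
    """'diverged' if two or more distinct spots are found, else 'merged'."""
    return "diverged" if count_spots(image) >= 2 else "merged"

def spot(cx, cy, size=64, sigma=4.0):
    """Synthetic Gaussian beam spot for testing."""
    y, x = np.mgrid[0:size, 0:size]
    return np.exp(-((x - cx)**2 + (y - cy)**2) / (2 * sigma**2))

print(off_normal(spot(32, 32)))                 # merged
print(off_normal(spot(20, 32) + spot(44, 32)))  # diverged
```

    A CNN would learn this decision boundary (and tolerate noise, halos, and partial overlap) directly from labeled images, which is the comparison the paper makes.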

  16. FusionArc optimization: a hybrid volumetric modulated arc therapy (VMAT) and intensity modulated radiation therapy (IMRT) planning strategy.

    PubMed

    Matuszak, Martha M; Steers, Jennifer M; Long, Troy; McShan, Daniel L; Fraass, Benedick A; Romeijn, H Edwin; Ten Haken, Randall K

    2013-07-01

    To introduce a hybrid volumetric modulated arc therapy/intensity modulated radiation therapy (VMAT/IMRT) optimization strategy called FusionArc that combines the delivery efficiency of single-arc VMAT with the potentially desirable intensity modulation possible with IMRT. A beamlet-based inverse planning system was enhanced to combine the advantages of VMAT and IMRT into one comprehensive technique. In the hybrid strategy, baseline single-arc VMAT plans are optimized, and then the current cost-function gradients with respect to the beamlets are used to define a metric (called the gradient factor) for predicting which beam angles would benefit from further intensity modulation. Beams with the highest gradient factors are converted from VMAT apertures to IMRT fluence, and the optimization proceeds with the mixed variable set until convergence or until additional beams are selected for conversion. One phantom and two clinical cases were used to validate the gradient factor and characterize the FusionArc strategy. Comparisons were made between standard IMRT, single-arc VMAT, and FusionArc plans with one to five IMRT/hybrid beams. The gradient factor was found to be highly predictive of the VMAT angles that would benefit plan quality the most from beam modulation. Over the three cases studied, a FusionArc plan with three converted beams achieved superior dosimetric quality, with reductions in final cost ranging from 26.4% to 48.1% compared to single-arc VMAT. Additionally, the three-beam FusionArc plans required 22.4%-43.7% fewer MU/Gy than a seven-beam IMRT plan. While the FusionArc plans with five converted beams offer larger reductions in final cost (32.9%-55.2% compared to single-arc VMAT), the decrease in MU/Gy relative to IMRT was noticeably smaller, at 12.2%-18.5%. A hybrid VMAT/IMRT strategy was implemented to find a high-quality compromise between gantry-angle and intensity-based degrees of freedom. This optimization method will allow patients to be planned simultaneously for dosimetric quality and delivery efficiency without switching between delivery techniques. Example phantom and clinical cases suggest that the conversion of only three VMAT segments to modulated beams may result in a good combination of quality and efficiency.
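
    The beam-selection step can be sketched as: score each VMAT beam by the magnitude of the cost-function gradient over its beamlets, then convert the top-scoring beams to IMRT fluence. The abstract does not give the exact formula for the gradient factor, so the per-beam mean absolute gradient used here is an assumption for illustration only.

```python
import numpy as np

def gradient_factor(grad, beam_of_beamlet):
    """Assumed per-beam score: mean |dCost/dBeamlet| over the beam's beamlets."""
    return {b: np.abs(grad[beam_of_beamlet == b]).mean()
            for b in np.unique(beam_of_beamlet)}

def select_beams_for_imrt(grad, beam_of_beamlet, k=3):
    """Pick the k beams whose further modulation is predicted to help the most."""
    scores = gradient_factor(grad, beam_of_beamlet)
    return sorted(scores, key=scores.get, reverse=True)[:k]

# toy example: three beams with two beamlets each
beam_ids = np.array([0, 0, 1, 1, 2, 2])
grads = np.array([0.1, 0.1, 1.0, 1.0, 0.5, 0.5])
chosen = select_beams_for_imrt(grads, beam_ids, k=2)   # beams 1 and 2
```

    A large gradient magnitude means the optimizer still "wants" to change those beamlets but the VMAT aperture constraints prevent it, which is why such beams gain the most from conversion to free fluence.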

  17. Pulsed-power-driven cylindrical liner implosions of laser preheated fuel magnetized with an axial field

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Slutz, S. A.; Herrmann, M. C.; Vesey, R. A.

    2010-05-15

    The radial convergence required to reach fusion conditions is considerably higher for cylindrical than for spherical implosions, since the volume is proportional to r² versus r³, respectively. Fuel magnetization and preheat significantly lower the required radial convergence, enabling cylindrical implosions to become an attractive path toward generating fusion conditions. Numerical simulations are presented indicating that significant fusion yields may be obtained by pulsed-power-driven implosions of cylindrical metal liners onto magnetized (>10 T) and preheated (100-500 eV) deuterium-tritium (DT) fuel. Yields exceeding 100 kJ could be possible on Z at 25 MA, while yields exceeding 50 MJ could be possible with a more advanced pulsed power machine delivering 60 MA. These implosions occur on a much shorter time scale than previously proposed implosions, about 100 ns as compared to about 10 μs for magnetized target fusion (MTF) [I. R. Lindemuth and R. C. Kirkpatrick, Nucl. Fusion 23, 263 (1983)]. Consequently, the optimal initial fuel density (1-5 mg/cc) is considerably higher than for MTF (≈1 μg/cc). Thus the final fuel density is high enough to axially trap most of the alpha particles for cylinders of approximately 1 cm in length with a purely axial magnetic field, i.e., no closed field configuration is required for ignition. According to the simulations, an initial axial magnetic field is partially frozen into the highly conducting preheated fuel and is compressed to more than 100 MG. This final field is strong enough to inhibit both electron thermal conduction and the escape of alpha particles in the radial direction. Analytical and numerical calculations indicate that the DT can be heated to 200-500 eV with 5-10 kJ of green laser light, which could be provided by the Z-Beamlet laser. The magneto-Rayleigh-Taylor (MRT) instability poses the greatest threat to this approach to fusion. Two-dimensional Lasnex simulations indicate that the liner walls must have a substantial initial thickness (10-20% of the radius) so that they maintain integrity throughout the implosion. Z and Z-Beamlet experiments are now being planned to test the various components of this concept, e.g., the laser heating of the fuel and the robustness of liner implosions to the MRT instability.
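
    The quoted field compression (>10 T initially, >100 MG at stagnation) is consistent with flux conservation in a highly conducting fuel column: for a frozen-in axial field, B·πr² is constant, so B scales with the square of the radial convergence. The numbers below are illustrative and assume perfect flux conservation, which the simulations only approximate.

```python
# Frozen-in axial flux: B * pi * r^2 = const  =>  B_f = B_0 * (r0/rf)^2
B0 = 30.0        # initial axial field, tesla (illustrative, above the >10 T floor)
CR = 20.0        # assumed radial convergence ratio r0/rf
Bf = B0 * CR**2  # 12000 T = 120 MG, consistent with the ">100 MG" in the abstract
```

    In the actual simulations the field is only partially frozen in, so resistive losses reduce the final field somewhat below this ideal scaling.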

  18. Performance limits of ion extraction systems with non-circular apertures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shagayda, A., E-mail: shagayda@gmail.com; Madeev, S.

    A three-dimensional computer simulation is used to determine the perveance limitations of ion extraction systems with non-circular apertures. The objective of the study is to analyze the possibilities for improving the mechanical strength of ion optics made of carbon-carbon composite materials. Non-circular grid apertures are better suited to the physical structure of carbon-carbon composite materials than the conventionally used circular holes in a hexagonal pattern, because they allow fewer fibers to be cut. However, the slit-type accelerating systems usually regarded as the main alternative to conventional ion optics have an intolerably narrow range of operating perveance values at which there is no direct ion impingement on the acceleration grid. This paper presents a comparative analysis of a number of ion optical systems with non-circular apertures and conventional ion optical systems with circular apertures. It is shown that a relatively wide perveance range without direct ion impingement may be obtained with apertures shaped as a square with rounded corners. Numerical simulations show that this geometry may have a perveance range equivalent to the traditional geometry with circular apertures while being more mechanically robust. In addition, such important characteristics as the effective transparency for both ions and neutral atoms, the height of the potential barrier reflecting the downstream plasma electrons, and the angular divergence of the beamlet can also be very close to those of the optics with circular apertures.

  19. Performance limits of ion extraction systems with non-circular apertures.

    PubMed

    Shagayda, A; Madeev, S

    2016-04-01

    A three-dimensional computer simulation is used to determine the perveance limitations of ion extraction systems with non-circular apertures. The objective of the study is to analyze the possibilities for improving the mechanical strength of ion optics made of carbon-carbon composite materials. Non-circular grid apertures are better suited to the physical structure of carbon-carbon composite materials than the conventionally used circular holes in a hexagonal pattern, because they allow fewer fibers to be cut. However, the slit-type accelerating systems usually regarded as the main alternative to conventional ion optics have an intolerably narrow range of operating perveance values at which there is no direct ion impingement on the acceleration grid. This paper presents a comparative analysis of a number of ion optical systems with non-circular apertures and conventional ion optical systems with circular apertures. It is shown that a relatively wide perveance range without direct ion impingement may be obtained with apertures shaped as a square with rounded corners. Numerical simulations show that this geometry may have a perveance range equivalent to the traditional geometry with circular apertures while being more mechanically robust. In addition, such important characteristics as the effective transparency for both ions and neutral atoms, the height of the potential barrier reflecting the downstream plasma electrons, and the angular divergence of the beamlet can also be very close to those of the optics with circular apertures.
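
    Perveance (I/V^{3/2}) limits of an extraction aperture can be put in context with the one-dimensional Child-Langmuir law, which sets the space-charge-limited current density across the screen-accel gap. The sketch below is a planar 1D estimate for a hypothetical xenon beamlet; real limits depend on sheath shape and aperture geometry, which is exactly what the 3D simulations in these records resolve.

```python
import math

EPS0 = 8.854187817e-12            # vacuum permittivity, F/m
Q = 1.602176634e-19               # elementary charge, C
M_XE = 131.293 * 1.66053907e-27   # xenon ion mass, kg (assumed propellant)

def child_langmuir_current(V, d, r):
    """Space-charge-limited beamlet current (A): 1D Child-Langmuir current
    density across a gap d at total voltage V, times a circular aperture area."""
    j = (4.0 * EPS0 / 9.0) * math.sqrt(2.0 * Q / M_XE) * V**1.5 / d**2
    return j * math.pi * r**2

I = child_langmuir_current(V=1500.0, d=1e-3, r=1e-3)   # roughly 1 mA per beamlet
```

    The V^{3/2} dependence is why the limit is usually stated as a perveance: I/V^{3/2} is a geometry-only constant in this 1D model.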

  20. An adaptive optics system for solid-state laser systems used in inertial confinement fusion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Salmon, J.T.; Bliss, E.S.; Byrd, J.L.

    1995-09-17

    Using adaptive optics, the authors have obtained nearly diffraction-limited 5 kJ, 3 ns output pulses at 1.053 μm from the Beamlet demonstration system for the National Ignition Facility (NIF). The peak Strehl ratio was improved from 0.009 to 0.50, as estimated from measured wavefront errors. They have also measured the relaxation of the thermally induced aberrations in the main beam line over a period of 4.5 hours. Peak-to-valley aberrations range from 6.8 waves at 1.053 μm within 30 minutes after a full system shot to 3.9 waves after 4.5 hours. The adaptive optics system must have enough range to correct accumulated thermal aberrations from several shots in addition to the immediate shot-induced error. Accumulated wavefront errors in the beam line will affect both the design of the adaptive optics system for NIF and the performance of that system.

  1. First experiments with the negative ion source NIO1.

    PubMed

    Cavenago, M; Serianni, G; De Muri, M; Agostinetti, P; Antoni, V; Baltador, C; Barbisan, M; Baseggio, L; Bigi, M; Cervaro, V; Degli Agostini, F; Fagotti, E; Kulevoy, T; Ippolito, N; Laterza, B; Minarello, A; Maniero, M; Pasqualotto, R; Petrenko, S; Poggi, M; Ravarotto, D; Recchia, M; Sartori, E; Sattin, M; Sonato, P; Taccogna, F; Variale, V; Veltri, P; Zaniol, B; Zanotto, L; Zucchetti, S

    2016-02-01

    Neutral Beam Injectors (NBIs), which need to be strongly optimized in the perspective of the DEMO reactor, require a thorough understanding of the negative ion source used and of the multi-beamlet optics. A relatively compact radio frequency (rf) ion source, named NIO1 (Negative Ion Optimization 1), with 9 beam apertures for a total H⁻ current of 130 mA and a 60 kV acceleration voltage, was installed at Consorzio RFX, including a high voltage deck and an X-ray shield, to provide a test bench for source optimizations in support of the ITER NBI test facility. The NIO1 status and plasma experiments with both air and hydrogen as filling gas are described. The transition from a weak plasma to an inductively coupled plasma is clearly evident for the former gas and may be triggered by raising the rf power (over 0.5 kW) at low pressure (at or below 2 Pa). The transition in hydrogen plasma requires more rf power (over 1.5 kW).

  2. Inertial Confinement Fusion Annual Report 1997

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Correll, D

    The ICF Annual Report provides documentation of the achievements of the LLNL ICF Program during the fiscal year by the use of two formats: (1) an Overview that is a narrative summary of important results for the fiscal year and (2) a compilation of the articles that previously appeared in the ICF Quarterly Report that year. Both the Overview and Quarterly Report are also on the Web at http://lasers.llnl.gov/lasers/pubs/icfq.html. Beginning in Fiscal Year 1997, the fourth quarter issue of the ICF Quarterly was no longer printed as a separate document but rather included in the ICF Annual. This change provided a more efficient process of documenting our accomplishments without unnecessary duplication of printing. In addition we introduced a new document, the ICF Program Monthly Highlights. Starting with the September 1997 issue and each month following, the Monthly Highlights will provide a brief description of noteworthy activities of interest to our DOE sponsors and our stakeholders. The underlying theme for LLNL's ICF Program research continues to be defined within DOE's Defense Programs missions and goals. In support of these missions and goals, the ICF Program advances research and technology development in major interrelated areas that include fusion target theory and design, target fabrication, target experiments, and laser and optical science and technology. While in pursuit of its goal of demonstrating thermonuclear fusion ignition and energy gain in the laboratory, the ICF Program provides research and development opportunities in fundamental high-energy-density physics and supports the necessary research base for the possible long-term application of inertial fusion energy for civilian power production. ICF technologies continue to have spin-off applications for additional government and industrial use.
In addition to these topics, the ICF Annual Report covers non-ICF funded, but related, laser research and development and associated applications. We also provide a short summary of the quarterly activities within Nova laser operations, Beamlet laser operations, and National Ignition Facility laser design. LLNL's ICF Program falls within DOE's national ICF program, which includes the Nova and Beamlet (LLNL), OMEGA (University of Rochester Laboratory for Laser Energetics), Nike (Naval Research Laboratory), and Trident (Los Alamos National Laboratory) laser facilities. The Particle Beam Fusion Accelerator (Z) and Saturn pulsed-power facilities are at Sandia National Laboratories. General Atomics, Inc., develops and provides many of the targets for the above experimental facilities. Many of the ICF Annual Report articles are co-authored with our colleagues from these other ICF institutions.

  3. Experimental demonstration of fusion-relevant conditions in magnetized liner inertial fusion

    DOE PAGES

    Gomez, Matthew R.; Slutz, Stephen A.; Sefkow, Adam B.; ...

    2014-10-06

    This Letter presents results from the first fully integrated experiments testing the magnetized liner inertial fusion concept [S. A. Slutz et al., Phys. Plasmas 17, 056303 (2010)], in which a cylinder of deuterium gas with a preimposed axial magnetic field of 10 T is heated by Z Beamlet, a 2.5 kJ, 1 TW laser, and magnetically imploded by a 19 MA current with 100 ns rise time on the Z facility. Despite a predicted peak implosion velocity of only 70 km/s, the fuel reaches a stagnation temperature of approximately 3 keV, with Te ≈ Ti, and produces up to 2 × 10^12 thermonuclear DD neutrons. In this study, X-ray emission indicates a hot fuel region with full width at half maximum ranging from 60 to 120 μm over a 6 mm height and lasting approximately 2 ns. The number of secondary deuterium-tritium neutrons observed was greater than 10^10, indicating significant fuel magnetization given that the estimated radial areal density of the plasma is only 2 mg/cm^2.

  4. Dual-resolution dose assessments for proton beamlet using MCNPX 2.6.0

    NASA Astrophysics Data System (ADS)

    Chao, T. C.; Wei, S. C.; Wu, S. W.; Tung, C. J.; Tu, S. J.; Cheng, H. W.; Lee, C. C.

    2015-11-01

    The purpose of this study is to assess proton dose distributions in dual-resolution phantoms using MCNPX 2.6.0. The dual-resolution phantom uses higher resolution near the Bragg peak, in areas of large dose gradient, or at heterogeneous interfaces, and lower resolution elsewhere. MCNPX 2.6.0 was installed in Ubuntu 10.04 with MPI for parallel computing. FMesh1 tallies, which are specially designed for voxel phantoms and convert fluence to dose deposition, were used to record the energy deposition. 60 and 120 MeV narrow proton beams were directed into coarse-, dual-, and fine-resolution phantoms with pure water, water-bone-water, and water-air-water setups. The doses in coarse-resolution phantoms are underestimated owing to the partial volume effect. The dose distributions in dual- and high-resolution phantoms agreed well with each other, and dual-resolution phantoms were at least 10 times more efficient than fine-resolution ones. Because the secondary particle range is much longer in air than in water, the dose in a low-density region may be underestimated if the resolution or calculation grid is not small enough.
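
    The partial volume effect described above is easy to reproduce in a toy model. The sketch below is illustrative only, not an MCNPX calculation: the peak shape, voxel sizes, and refinement window are assumed. It averages a sharply peaked depth-dose profile into coarse voxels, then refines only the peak region, as a dual-resolution grid would:

```python
# Illustrative sketch (not MCNPX): averaging a sharply peaked depth-dose
# profile over coarse voxels underestimates the Bragg peak dose, while a
# dual-resolution grid recovers it by refining only the peak region.
import numpy as np

z = np.linspace(0.0, 10.0, 1000)   # depth (cm), fine sampling
# toy Bragg-peak-like profile: flat entrance dose plus a narrow peak at 8 cm
dose = 1.0 + 4.0 * np.exp(-((z - 8.0) ** 2) / (2 * 0.05 ** 2))

def voxel_average(profile, n_voxels):
    """Average a finely sampled profile into n_voxels equal-width voxels."""
    return profile.reshape(n_voxels, -1).mean(axis=1)

coarse = voxel_average(dose, 20)              # 0.5 cm voxels everywhere
# dual resolution: keep 0.5 cm voxels away from the peak, refine near z = 8 cm
peak_region = dose[(z > 7.5) & (z < 8.5)]
dual_peak = voxel_average(peak_region, 50)    # fine voxels in the peak only

print(f"true peak dose:  {dose.max():.2f}")
print(f"coarse estimate: {coarse.max():.2f}")   # washed out by averaging
print(f"dual estimate:   {dual_peak.max():.2f}")  # close to the true peak
```

    The coarse grid reports a much lower peak because the narrow peak is averaged over a voxel far wider than the peak itself; refining only the peak region recovers the true maximum at a fraction of the cost of a uniformly fine grid.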

  5. PASOTRON high-energy microwave source

    NASA Astrophysics Data System (ADS)

    Goebel, Dan M.; Schumacher, Robert W.; Butler, Jennifer M.; Hyman, Jay, Jr.; Santoru, Joseph; Watkins, Ron M.; Harvey, Robin J.; Dolezal, Franklin A.; Eisenhart, Robert L.; Schneider, Authur J.

    1992-04-01

    A unique, high-energy microwave source, called PASOTRON (Plasma-Assisted Slow-wave Oscillator), has been developed. The PASOTRON utilizes a long-pulse E-gun and a plasma-filled slow-wave structure (SWS) to produce high-energy pulses from a simple, lightweight device that requires no externally produced magnetic fields. Long pulses are obtained from a novel E-gun that employs a low-pressure glow discharge to provide a stable, high current-density electron source. The electron accelerator consists of a high-perveance, multi-aperture array. The E-beam is operated in the ion-focused regime, where the plasma filling the SWS space-charge neutralizes the beam and the self-pinch force compresses the beamlets and increases the beam current density. A scale-model PASOTRON, operating as a backward-wave oscillator in C-band with a 100-kV E-beam, has produced output powers in the 3 to 5 MW range and pulse lengths of over 100 μs, corresponding to an integrated energy per pulse of up to 500 J. The E-beam to microwave-radiation power conversion efficiency is about 20%.

  6. Hybrid dose calculation: a dose calculation algorithm for microbeam radiation therapy

    NASA Astrophysics Data System (ADS)

    Donzelli, Mattia; Bräuer-Krisch, Elke; Oelfke, Uwe; Wilkens, Jan J.; Bartzsch, Stefan

    2018-02-01

    Microbeam radiation therapy (MRT) is still a preclinical approach in radiation oncology that uses planar micrometre-wide beamlets with extremely high peak doses, separated by a few hundred micrometre wide low-dose regions. Abundant preclinical evidence demonstrates that MRT spares normal tissue more effectively than conventional radiation therapy, at equivalent tumour control. In order to launch the first clinical trials, accurate and efficient dose calculation methods are an indispensable prerequisite. In this work a hybrid dose calculation approach is presented that is based on a combination of Monte Carlo and kernel based dose calculation. In various examples the performance of the algorithm is compared to purely Monte Carlo and purely kernel based dose calculations. The accuracy of the developed algorithm is comparable to conventional pure Monte Carlo calculations. In particular for inhomogeneous materials the hybrid dose calculation algorithm outperforms purely convolution based dose calculation approaches. It is demonstrated that the hybrid algorithm can efficiently calculate even complicated pencil beam and cross-firing beam geometries. The required calculation times are substantially lower than for pure Monte Carlo calculations.
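
    A kernel-based dose step of the kind the hybrid algorithm builds on can be sketched in one dimension. This is an illustrative toy, not the authors' algorithm: the microbeam width, kernel shape, and length scales are all assumed. Energy released by the beamlet is spread laterally by convolution with a dose-deposition kernel, producing the characteristic peak-and-valley profile of MRT:

```python
# Minimal 1-D sketch of a kernel-superposition dose step: energy released
# along a planar microbeam is spread laterally by convolving with a toy
# dose-deposition kernel (all widths and scales are assumed values).
import numpy as np

x = np.linspace(-0.5, 0.5, 201)                  # lateral position (mm)
terma = np.where(np.abs(x) < 0.025, 1.0, 0.0)    # ~50 um wide microbeam slot
kernel = np.exp(-np.abs(x) / 0.05)               # toy scatter kernel
kernel /= kernel.sum()                           # normalize to unit energy

dose = np.convolve(terma, kernel, mode="same")   # peak plus scattered tails
peak = dose[np.abs(x) < 0.025].mean()
valley = dose[np.abs(x) > 0.3].mean()
print(f"peak/valley dose ratio: {peak / valley:.0f}")
```

    The peak-to-valley dose ratio computed this way is the standard MRT figure of merit; in the hybrid scheme the kernel step handles this smooth scatter component while Monte Carlo handles the parts that convolution models poorly, such as material interfaces.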

  7. Experimental Mapping and Benchmarking of Magnetic Field Codes on the LHD Ion Accelerator

    NASA Astrophysics Data System (ADS)

    Chitarin, G.; Agostinetti, P.; Gallo, A.; Marconato, N.; Nakano, H.; Serianni, G.; Takeiri, Y.; Tsumori, K.

    2011-09-01

    For the validation of the numerical models used for the design of the Neutral Beam Test Facility for ITER in Padua [1], an experimental benchmark against a full-size device has been sought. The LHD BL2 injector [2] has been chosen as a first benchmark, because the BL2 Negative Ion Source and Beam Accelerator are geometrically similar to SPIDER, even though BL2 does not include current bars and ferromagnetic materials. A comprehensive 3D magnetic field model of the LHD BL2 device has been developed based on the same assumptions used for SPIDER. In parallel, a detailed experimental magnetic map of the BL2 device has been obtained using a suitably designed 3D adjustable structure for the fine positioning of the magnetic sensors inside 27 of the 770 beamlet apertures. The calculated values have been compared to the experimental data. The work has confirmed the quality of the numerical model, and has also provided useful information on the magnetic non-uniformities due to the edge effects and to the tolerance on permanent magnet remanence.

  8. Transverse beam splitting made operational: Key features of the multiturn extraction at the CERN Proton Synchrotron

    NASA Astrophysics Data System (ADS)

    Huschauer, A.; Blas, A.; Borburgh, J.; Damjanovic, S.; Gilardoni, S.; Giovannozzi, M.; Hourican, M.; Kahle, K.; Le Godec, G.; Michels, O.; Sterbini, G.; Hernalsteens, C.

    2017-06-01

    Following a successful commissioning period, the multiturn extraction (MTE) at the CERN Proton Synchrotron (PS) has been applied for the fixed-target physics programme at the Super Proton Synchrotron (SPS) since September 2015. This exceptional extraction technique was proposed to replace the long-serving continuous transfer (CT) extraction, which has the drawback of inducing high activation in the ring. MTE exploits the principles of nonlinear beam dynamics to perform loss-free beam splitting in the horizontal phase space. Over multiple turns, the resulting beamlets are then transferred to the downstream accelerator. The operational deployment of MTE was rendered possible by the full understanding and mitigation of different hardware limitations and by redesigning the extraction trajectories and nonlinear optics, which was required due to the installation of a dummy septum to reduce the activation of the magnetic extraction septum. This paper focuses on these key features including the use of the transverse damper and the septum shadowing, which allowed a transition from the MTE study to a mature operational extraction scheme.

  9. Propagation of radially polarized multi-cosine Gaussian Schell-model beams in non-Kolmogorov turbulence

    NASA Astrophysics Data System (ADS)

    Tang, Miaomiao; Zhao, Daomu; Li, Xinzhong; Wang, Jingge

    2018-01-01

    Recently, we introduced a new class of radially polarized beams with a multi-cosine Gaussian Schell-model (MCGSM) correlation function based on partially coherent theory (Tang et al., 2017). In this manuscript, we extend that work to study statistical properties such as the spectral density, the degree of coherence, the degree of polarization, and the state of polarization of the beam propagating in isotropic turbulence with a non-Kolmogorov power spectrum. Analytical formulas for the cross-spectral density matrix elements of a radially polarized MCGSM beam in non-Kolmogorov turbulence are derived. Numerical results show that the lattice-like intensity pattern of the beam, which remains propagation-invariant in free space, is destroyed by the turbulence once the beam has propagated sufficiently far from the source. It is also shown that the polarization properties are mainly affected by the source correlation functions, while changes in the turbulence statistics have a relatively small effect. In addition, the polarization state exhibits a self-splitting property, and each beamlet evolves into a radially polarized structure upon propagation.

  10. Fluid simulation of relativistic electron beam driven wakefield in a cold plasma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bera, Ratan Kumar; Sengupta, Sudip; Das, Amita

    Excitation of a wakefield in a cold homogeneous plasma, driven by an ultra-relativistic electron beam, is studied in one dimension using fluid simulation techniques. For a homogeneous rigid beam having density (n{sub b}) less than or equal to half the plasma density (n{sub 0}), simulation results are found to be in good agreement with the analytical work of Rosenzweig [Phys. Rev. Lett. 58, 555 (1987)]. Here, Rosenzweig's work has been analytically extended to regimes where the ratio of beam density to plasma density is greater than half, and the results have been verified using simulation. Further, in contrast to Rosenzweig's work, if the beam is allowed to evolve in a self-consistent manner, several interesting features are observed in simulation, viz. splitting of the beam into beamlets (for l{sub b} > λ{sub p}) and compression of the beam (for l{sub b} < λ{sub p}), l{sub b} and λ{sub p}, respectively, being the initial beam length and plasma wavelength.
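
    The splitting criterion quoted above depends only on the plasma wavelength λ{sub p} = 2πc/ω{sub p}, which follows directly from the plasma density. A back-of-envelope sketch (the density value is assumed purely for illustration):

```python
# Back-of-envelope check of the beam-splitting criterion: for a cold plasma
# of density n0, lambda_p = 2*pi*c/omega_p with
# omega_p = sqrt(n0 * e^2 / (eps0 * m_e)).
# Beams longer than lambda_p split into beamlets; shorter ones compress.
import math

E = 1.602176634e-19      # electron charge (C)
ME = 9.1093837015e-31    # electron mass (kg)
EPS0 = 8.8541878128e-12  # vacuum permittivity (F/m)
C = 2.99792458e8         # speed of light (m/s)

def plasma_wavelength(n0_per_m3):
    omega_p = math.sqrt(n0_per_m3 * E**2 / (EPS0 * ME))
    return 2 * math.pi * C / omega_p

n0 = 1e24                               # assumed plasma density (m^-3)
lam_p = plasma_wavelength(n0)
print(f"lambda_p = {lam_p * 1e6:.1f} um")
for l_b in (0.5 * lam_p, 2.0 * lam_p):
    regime = "splits into beamlets" if l_b > lam_p else "is compressed"
    print(f"beam of length {l_b * 1e6:.1f} um {regime}")
```
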

  11. Poster — Thur Eve — 69: Computational Study of DVH-guided Cancer Treatment Planning Optimization Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghomi, Pooyan Shirvani; Zinchenko, Yuriy

    2014-08-15

    Purpose: To compare methods of incorporating Dose Volume Histogram (DVH) curves into treatment planning optimization. Method: The performance of three methods, namely, the conventional Mixed Integer Programming (MIP) model, a convex moment-based constrained optimization approach, and an unconstrained convex moment-based penalty approach, is compared using anonymized data of a prostate cancer patient. Three plans were generated using the corresponding optimization models. Four Organs at Risk (OARs) and one Tumor were involved in the treatment planning. The OARs and Tumor were discretized into a total of 50,221 voxels. The number of beamlets was 943. We used the commercially available optimization software Gurobi and Matlab to solve the models. Plan comparison was done by recording the model runtime followed by visual inspection of the resulting dose volume histograms. Conclusion: We demonstrate the effectiveness of the moment-based approaches in replicating the set of prescribed DVH curves. The unconstrained convex moment-based penalty approach is concluded to have the greatest potential to reduce the computational effort and holds promise of a substantial computational speed-up.
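
    A minimal sketch of an unconstrained moment-based penalty of this general kind is given below. It is an assumption-level illustration, not the authors' model: the dose-influence matrices, moment orders, and prescribed values are invented. Each structure's DVH is summarized by generalized dose means m_p^(1/p), and the fluence weights are driven toward the prescribed values by plain gradient descent:

```python
# Toy unconstrained moment-based penalty (illustrative sketch): each
# structure's DVH is summarized by generalized dose means of orders p = 1, 2,
# and fluence weights w minimize the squared deviations from prescriptions.
import numpy as np

rng = np.random.default_rng(0)
n = 30                                          # number of beamlets (assumed)
A_tumour = rng.uniform(0.5, 1.0, size=(100, n)) # toy dose-influence matrices
A_oar = rng.uniform(0.0, 0.2, size=(100, n))
# prescribed generalized means m_p**(1/p), per structure (invented values)
plans = [(A_tumour, {1: 60.0, 2: 60.1}), (A_oar, {1: 8.0, 2: 8.1})]

w = np.ones(n)                                  # fluence weights
lr = 5e-3
for _ in range(3000):
    grad = np.zeros(n)
    for A, goals in plans:
        d = A @ w                               # dose in this structure
        for p, goal in goals.items():
            m = np.mean(d ** p)                 # p-th dose moment
            gm = m ** (1.0 / p)                 # generalized mean
            # chain rule for the penalty term (gm - goal)**2
            grad += 2 * (gm - goal) * m ** (1.0 / p - 1.0) * (d ** (p - 1)) @ A / len(d)
    w = np.maximum(w - lr * grad, 0.0)          # fluence stays non-negative

print("tumour mean dose:", round(float(np.mean(A_tumour @ w)), 1))
print("OAR mean dose:   ", round(float(np.mean(A_oar @ w)), 1))
```

    Because every penalty term is smooth in w, the whole objective can be handed to any gradient-based solver, which is what makes the penalty form attractive compared to MIP's binary DVH counters.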

  12. Experimental Mapping and Benchmarking of Magnetic Field Codes on the LHD Ion Accelerator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chitarin, G.; University of Padova, Dept. of Management and Engineering, strad. S. Nicola, 36100 Vicenza; Agostinetti, P.

    2011-09-26

    For the validation of the numerical models used for the design of the Neutral Beam Test Facility for ITER in Padua [1], an experimental benchmark against a full-size device has been sought. The LHD BL2 injector [2] has been chosen as a first benchmark, because the BL2 Negative Ion Source and Beam Accelerator are geometrically similar to SPIDER, even though BL2 does not include current bars and ferromagnetic materials. A comprehensive 3D magnetic field model of the LHD BL2 device has been developed based on the same assumptions used for SPIDER. In parallel, a detailed experimental magnetic map of the BL2 device has been obtained using a suitably designed 3D adjustable structure for the fine positioning of the magnetic sensors inside 27 of the 770 beamlet apertures. The calculated values have been compared to the experimental data. The work has confirmed the quality of the numerical model, and has also provided useful information on the magnetic non-uniformities due to the edge effects and to the tolerance on permanent magnet remanence.

  13. SU-E-T-187: Collimation Methods in Spot Scanning Proton Therapy: A Treatment Plan Comparison Between a Fixed Aperture and a Dynamic Collimation System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, B; Gelover, E; Wang, D

    2015-06-15

    Purpose: Low-energy treatments during spot scanning proton therapy (SSPT) suffer from poor conformity due to increased spot size. Collimation devices can reduce the lateral penumbra of a proton therapy dose distribution and improve the overall plan quality. The purpose of this work was to study the advantages of individual energy-layer collimation, which is unique to a recently proposed Dynamic Collimation System (DCS), in comparison to a standard, fixed aperture that allows only a single shape for all energy layers. Methods: Three brain patients previously planned and treated with SSPT were re-planned using an in-house treatment planning system capable of modeling collimated and un-collimated proton beamlets. The un-collimated plans, which served as a baseline for comparison, reproduced the target coverage of the clinically delivered plans. The collimator opening for the aperture based plans included a 0.6 cm expansion of the largest cross section of the target in the Beam’s Eye View, while the DCS based plans were created by optimizing the collimator position for beam spots near the periphery of the target in each energy layer. Results: The reduction of mean dose to normal tissue adjacent to the target, as defined by a 10 mm ring, averaged 9.13% and 3.48% for the DCS and aperture plans, respectively. The conformity index, as defined by the ratio of the volume of the 50% isodose line to the target volume, yielded an average improvement of 16.42% and 8.16% for the DCS and aperture plans, respectively. Conclusion: Collimation reduces the dose to normal tissue adjacent to the target and increases dose conformity to the target region for low-energy SSPT. The ability of the DCS to provide collimation to each energy layer yields better conformity in comparison to fixed aperture plans. This work was partially funded by IBA (Ion Beam Applications S.A.)
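
    The conformity index used above, i.e. the ratio of the volume enclosed by the 50% isodose line to the target volume, can be computed directly from a dose grid. The sketch below is hypothetical: the spherical target and step-function dose distribution are invented for illustration only.

```python
# Conformity index sketch: volume receiving >= 50% of the prescription,
# divided by the target volume (values near 1 are ideal; larger values
# mean dose spills into surrounding normal tissue).
import numpy as np

def conformity_index(dose, target_mask, prescription):
    """dose: 3-D dose grid; target_mask: boolean grid; prescription: Gy."""
    isodose_volume = np.count_nonzero(dose >= 0.5 * prescription)
    target_volume = np.count_nonzero(target_mask)
    return isodose_volume / target_volume

# toy example: spherical target inside a cubic phantom, dose spilling
# two voxels beyond the target radius
z, y, x = np.mgrid[-20:21, -20:21, -20:21]
r = np.sqrt(x**2 + y**2 + z**2)
target = r <= 10.0
dose = np.where(r <= 12.0, 60.0, 0.0)
print(f"CI = {conformity_index(dose, target, 60.0):.2f}")
```
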

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Y. M., E-mail: ymingy@gmail.com; Bednarz, B.; Svatos, M.

    Purpose: The future of radiation therapy will require advanced inverse planning solutions to support single-arc, multiple-arc, and “4π” delivery modes, which present unique challenges in finding an optimal treatment plan over a vast search space, while still preserving dosimetric accuracy. The successful clinical implementation of such methods would benefit from Monte Carlo (MC) based dose calculation methods, which can offer improvements in dosimetric accuracy when compared to deterministic methods. The standard method for MC based treatment planning optimization leverages the accuracy of the MC dose calculation and efficiency of well-developed optimization methods, by precalculating the fluence to dose relationship within a patient with MC methods and subsequently optimizing the fluence weights. However, the sequential nature of this implementation is computationally time consuming and memory intensive. Methods to reduce the overhead of the MC precalculation have been explored in the past, demonstrating promising reductions of computational time overhead, but with limited impact on the memory overhead due to the sequential nature of the dose calculation and fluence optimization. The authors propose an entirely new form of “concurrent” Monte Carlo treatment plan optimization: a platform which optimizes the fluence during the dose calculation, reduces wasted computation time being spent on beamlets that weakly contribute to the final dose distribution, and requires only a low memory footprint to function. In this initial investigation, the authors explore the key theoretical and practical considerations of optimizing fluence in such a manner. Methods: The authors present a novel derivation and implementation of a gradient descent algorithm that allows for optimization during MC particle transport, based on highly stochastic information generated through particle transport of very few histories. 
A gradient rescaling and renormalization algorithm, and the concept of momentum from stochastic gradient descent were used to address obstacles unique to performing gradient descent fluence optimization during MC particle transport. The authors have applied their method to two simple geometrical phantoms, and one clinical patient geometry to examine the capability of this platform to generate conformal plans as well as assess its computational scaling and efficiency, respectively. Results: The authors obtain a reduction of at least 50% in total histories transported in their investigation compared to a theoretical unweighted beamlet calculation and subsequent fluence optimization method, and observe a roughly fixed optimization time overhead consisting of ∼10% of the total computation time in all cases. Finally, the authors demonstrate a negligible increase in memory overhead of ∼7–8 MB to allow for optimization of a clinical patient geometry surrounded by 36 beams using their platform. Conclusions: This study demonstrates a fluence optimization approach, which could significantly improve the development of next generation radiation therapy solutions while incurring minimal additional computational overhead.
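
    The optimization ingredients named above, i.e. gradient descent on fluence weights from highly stochastic gradient estimates, stabilized by momentum and gradient rescaling, can be sketched as follows. This is a simplified stand-in, not the authors' implementation: the dose-influence matrix, the voxel-subsampling stand-in for "very few histories", and all hyperparameters are assumed.

```python
# Sketch of stochastic fluence optimization: each iteration sees only a tiny,
# noisy sample of voxels (mimicking few transported histories); momentum and
# gradient rescaling keep the noisy updates stable.
import numpy as np

rng = np.random.default_rng(1)
A = rng.uniform(0.0, 1.0, size=(500, 40))   # toy dose-influence matrix
d_goal = np.full(500, 50.0)                 # uniform prescribed dose (Gy)

w = np.ones(40)                             # fluence weights
velocity = np.zeros(40)
beta, step = 0.9, 0.05                      # momentum and step size (assumed)
for _ in range(4000):
    # noisy gradient: subsample a handful of voxels per iteration
    idx = rng.integers(0, 500, size=8)
    residual = A[idx] @ w - d_goal[idx]
    grad = A[idx].T @ residual
    grad /= np.linalg.norm(grad) + 1e-12    # rescale to a unit-norm step
    velocity = beta * velocity + (1 - beta) * grad
    w = np.maximum(w - step * velocity, 0.0)

rmse = np.sqrt(np.mean((A @ w - d_goal) ** 2))
print(f"RMS dose error: {rmse:.2f} Gy")
```

    Even though each gradient estimate is built from only 8 of 500 voxels, the momentum-averaged, rescaled updates converge to a plan close to the least-squares optimum, which is the essential point of optimizing during, rather than after, the stochastic dose calculation.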

  15. Effect of beam types on the scintillations: a review

    NASA Astrophysics Data System (ADS)

    Baykal, Yahya; Eyyuboglu, Halil T.; Cai, Yangjian

    2009-02-01

    When different incidences are launched in atmospheric turbulence, it is known that the intensity fluctuations exhibit different characteristics. In this paper we review our work on the evaluation of the scintillation index of general beam types when such optical beams propagate in horizontal atmospheric links in the weak fluctuations regime. The variation of scintillation indices versus the source and medium parameters is examined for flat-topped-Gaussian, cosh-Gaussian, cos-Gaussian, annular, elliptical Gaussian, circular (i.e., stigmatic) and elliptical (i.e., astigmatic) dark hollow, lowest-order Bessel-Gaussian and laser array beams. For the flat-topped-Gaussian beam, scintillation is larger than the single Gaussian beam scintillation when the source sizes are much less than the Fresnel zone, but becomes smaller for source sizes much larger than the Fresnel zone. The cosh-Gaussian beam has lower on-axis scintillations at smaller source sizes and longer propagation distances as compared to Gaussian beams, where focusing imposes more reduction on the cosh-Gaussian beam scintillations than on those of the Gaussian beam. Intensity fluctuations of a cos-Gaussian beam show favorable behaviour against a Gaussian beam at lower propagation lengths. At longer propagation lengths, the annular beam becomes advantageous. In focused cases, the scintillation index of the annular beam is lower than the scintillation index of Gaussian and cos-Gaussian beams starting at earlier propagation distances. Cos-Gaussian beams are advantageous at relatively large source sizes while the reverse is valid for annular beams. Scintillations of a stigmatic or astigmatic dark hollow beam can be smaller when compared to stigmatic or astigmatic Gaussian, annular and flat-topped beams under conditions that are closely related to the beam parameters. 
Intensity fluctuations of an elliptical Gaussian beam can also be smaller than those of a circular Gaussian beam depending on the propagation length and the ratio of the beam waist size along the long axis to that along the short axis (i.e., astigmatism). Comparing against the fundamental Gaussian beam on an equal source size and equal power basis, it is observed that the scintillation index of the lowest-order Bessel-Gaussian beam is lower at large source sizes and large width parameters. However, for excessively large width parameters and beyond certain propagation lengths, the advantage of the lowest-order Bessel-Gaussian beam seems to be lost. Compared to the Gaussian beam, the laser array beam exhibits less scintillation at long propagation ranges and at some midrange radial displacement parameters. When compared among themselves, laser array beams tend to have reduced scintillations for a larger number of beamlets, longer wavelengths, midrange radial displacement parameters, intermediate Gaussian source sizes, larger inner scales and smaller outer scales of turbulence. Beyond these trends, the exact number of beamlets used does not seem to be a major factor in the improvement of the scintillations.

  16. Concurrent Monte Carlo transport and fluence optimization with fluence adjusting scalable transport Monte Carlo

    PubMed Central

    Svatos, M.; Zankowski, C.; Bednarz, B.

    2016-01-01

    Purpose: The future of radiation therapy will require advanced inverse planning solutions to support single-arc, multiple-arc, and “4π” delivery modes, which present unique challenges in finding an optimal treatment plan over a vast search space, while still preserving dosimetric accuracy. The successful clinical implementation of such methods would benefit from Monte Carlo (MC) based dose calculation methods, which can offer improvements in dosimetric accuracy when compared to deterministic methods. The standard method for MC based treatment planning optimization leverages the accuracy of the MC dose calculation and efficiency of well-developed optimization methods, by precalculating the fluence to dose relationship within a patient with MC methods and subsequently optimizing the fluence weights. However, the sequential nature of this implementation is computationally time consuming and memory intensive. Methods to reduce the overhead of the MC precalculation have been explored in the past, demonstrating promising reductions of computational time overhead, but with limited impact on the memory overhead due to the sequential nature of the dose calculation and fluence optimization. The authors propose an entirely new form of “concurrent” Monte Carlo treatment plan optimization: a platform which optimizes the fluence during the dose calculation, reduces wasted computation time being spent on beamlets that weakly contribute to the final dose distribution, and requires only a low memory footprint to function. In this initial investigation, the authors explore the key theoretical and practical considerations of optimizing fluence in such a manner. Methods: The authors present a novel derivation and implementation of a gradient descent algorithm that allows for optimization during MC particle transport, based on highly stochastic information generated through particle transport of very few histories. 
A gradient rescaling and renormalization algorithm, and the concept of momentum from stochastic gradient descent were used to address obstacles unique to performing gradient descent fluence optimization during MC particle transport. The authors have applied their method to two simple geometrical phantoms, and one clinical patient geometry to examine the capability of this platform to generate conformal plans as well as assess its computational scaling and efficiency, respectively. Results: The authors obtain a reduction of at least 50% in total histories transported in their investigation compared to a theoretical unweighted beamlet calculation and subsequent fluence optimization method, and observe a roughly fixed optimization time overhead consisting of ∼10% of the total computation time in all cases. Finally, the authors demonstrate a negligible increase in memory overhead of ∼7–8 MB to allow for optimization of a clinical patient geometry surrounded by 36 beams using their platform. Conclusions: This study demonstrates a fluence optimization approach, which could significantly improve the development of next generation radiation therapy solutions while incurring minimal additional computational overhead. PMID:27277051

  17. Scintillation and bit error rate analysis of a phase-locked partially coherent flat-topped array laser beam in oceanic turbulence.

    PubMed

    Yousefi, Masoud; Kashani, Fatemeh Dabbagh; Golmohammady, Shole; Mashal, Ahmad

    2017-12-01

    In this paper, the performance of underwater wireless optical communication (UWOC) links based on partially coherent flat-topped (PCFT) array laser beams has been investigated in detail. Because they provide high power, array laser beams are employed to increase the range of UWOC links. To characterize the effects of oceanic turbulence on the propagation behavior of the considered beam, an analytical expression for the cross-spectral density matrix elements and a semi-analytical one for the fourth-order statistical moment have been derived using the extended Huygens-Fresnel principle. Then, based on these expressions, the on-axis scintillation index of the mentioned beam propagating through weak oceanic turbulence has been calculated. Furthermore, in order to quantify the performance of the UWOC link, the average bit error rate (BER) has also been evaluated. The effects of some source factors and turbulent ocean parameters on the behavior of the scintillation index and the BER have been studied in detail. The results of this investigation indicate that, in comparison with the Gaussian array beam, when the source size of the beamlets is larger than the first Fresnel zone, the PCFT array laser beam with higher flatness order is found to have a lower scintillation index and hence a lower BER. Specifically, in the sense of scintillation index reduction, using PCFT array laser beams has a considerable benefit in comparison with single PCFT or Gaussian laser beams and also Gaussian array beams. All the simulation results of this paper are shown in graphs and analyzed in detail.

  18. Impact of pelvic nodal irradiation with intensity-modulated radiotherapy on treatment of prostate cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Price, Robert A.; Hannoun-Levi, Jean-Michel; Horwitz, Eric

    2006-10-01

    Purpose: The aim of this study was to evaluate the feasibility of treating the pelvic lymphatic regions during prostate intensity-modulated radiotherapy (IMRT) with respect to our routine acceptance criteria. Methods and Materials: A series of 10 previously treated prostate patients were randomly selected and the pelvic lymphatic regions delineated on the fused magnetic resonance/computed tomography data sets. A targeting progression was formed from the prostate and proximal seminal vesicles only to the inclusion of all pelvic lymphatic regions and presacral region, resulting in 5 planning scenarios of increasing geometric difficulty. IMRT plans were generated for each stage for two accelerator manufacturers. Dose volume histogram data were analyzed with respect to dose to the planning target volumes, rectum, bladder, bowel, and normal tissue. Analysis was performed for the number of segments required, monitor units, 'hot spots,' and treatment time. Results: Both rectal endpoints were met for all targets. Bladder endpoints were not met and the bowel endpoint was met in 40% of cases with the inclusion of the extended and presacral lymphatics. A significant difference was found in the number of segments and monitor units with targeting progression and between accelerators, with the smaller beamlets yielding poorer results. Treatment times between the 2 linacs did not exhibit a clinically significant difference when compared. Conclusions: Many issues should be considered with pelvic lymphatic irradiation during IMRT delivery for prostate cancer including dose per fraction, normal structure dose/volume limits, planning target volumes generation, localization, treatment time, and increased radiation leakage. We would suggest that, at a minimum, the endpoints used in this work be evaluated before beginning IMRT pelvic nodal irradiation.

  19. Amorphization of hard crystalline materials by electrosprayed nanodroplet impact

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gamero-Castaño, Manuel, E-mail: mgameroc@uci.edu; Torrents, Anna; Borrajo-Pelaez, Rafael

    2014-11-07

    A beam of electrosprayed nanodroplets impacting on single-crystal silicon amorphizes a thin surface layer of a thickness comparable to the diameter of the drops. The phase transition occurs at projectile velocities exceeding a threshold, and is caused by the quenching of material melted by the impacts. This article demonstrates that the amorphization of silicon is a general phenomenon, as nanodroplets impacting at sufficient velocity also amorphize other covalently bonded crystals. In particular, we bombard single-crystal wafers of Si, Ge, GaAs, GaP, InAs, and SiC in a range of projectile velocities, and characterize the samples via electron backscatter diffraction and transmission electron microscopy to determine the aggregation state under the surface. InAs requires the lowest projectile velocity to develop an amorphous layer, followed by Ge, Si, GaAs, and GaP. SiC is the only semiconductor that remains fully crystalline, likely due to the relatively low velocities of the beamlets used in this study. The resiliency of each crystal to amorphization correlates well with the specific energy needed to melt it, except for Ge, which requires projectile velocities higher than expected.

  20. Demonstration of space-resolved x-ray Thomson scattering capability for warm dense matter experiments on the Z accelerator

    DOE PAGES

    Ao, T.; Harding, E. C.; Bailey, J. E.; ...

    2016-01-13

    Experiments on the Sandia Z pulsed-power accelerator demonstrated the ability to produce warm dense matter (WDM) states with unprecedented uniformity, duration, and size, which are ideal for investigations of fundamental WDM properties. For the first time, space-resolved x-ray Thomson scattering (XRTS) spectra from shocked carbon foams were recorded on Z. The large (>20 MA) electrical current produced by Z was used to launch Al flyer plates up to 25 km/s. The impact of the flyer plate on a CH2 foam target produced a shocked state with an estimated pressure of 0.75 Mbar, density of 0.52 g/cm3, and temperature of 4.3 eV. Both unshocked and shocked portions of the foam target were probed with 6.2 keV x-rays produced by focusing the Z-Beamlet laser onto a nearby Mn foil. The data comprise three spatially distinct spectra that were simultaneously captured with a single spectrometer with high spectral (4.8 eV) and spatial (190 μm) resolution. Furthermore, these spectra provide detailed information on three target locations: the laser spot, the unshocked foam, and the shocked foam.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peng Hansheng

    The ICF Program in China has made significant progress through the efforts of multiple laboratories in the past years. The eight-beam SG-II laser facility, upgraded from the two-beam SG-I facility, is nearly completed for 1.05 {mu}m light output and is about to be operated for experiments. Some benchmark experiments have been conducted for disk targets. Advanced diagnostic techniques, such as an x-ray microscope with a 7-{mu}m spatial resolution and x-ray framing cameras with a temporal resolution better than 65 ps, have been developed. Lower-energy pumping with a prepulse technique for a Ne-like Ti laser at 32.6 nm has succeeded, and shadowgraphy of a fine mesh has been demonstrated with the Ti laser beam. A national project, the SG-III laser facility, has been proposed to produce 60 kJ of blue light for target physics experiments and is being conceptually designed. New laser technology, including multipass amplification, large-aperture plasma electrode switches, and laser glass with fewer platinum grains, has been developed to meet the requirements of the SG-III Project. The Technical Integration Line (TIL), a scientific prototype beamlet of SG-III, will be built first in the next few years.

  2. Determination of output factor for 6 MV small photon beam: comparison between Monte Carlo simulation technique and microDiamond detector

    NASA Astrophysics Data System (ADS)

    Krongkietlearts, K.; Tangboonduangjit, P.; Paisangittisakul, N.

    2016-03-01

    Radiation techniques are constantly evolving to improve quality of life for cancer patients. In particular, two modern techniques, intensity modulated radiation therapy (IMRT) and volumetric modulated arc therapy (VMAT), are quite promising. They comprise many small beams (beamlets) with various intensities to achieve the intended radiation dose to the tumor and minimal dose to the nearby normal tissue. This study investigates whether the microDiamond detector (PTW), a synthetic single-crystal diamond detector, is suitable for small-field output factor measurement. The results were compared with those measured by the stereotactic field detector (SFD) and with Monte Carlo simulation (EGSnrc/BEAMnrc/DOSXYZ). The Monte Carlo simulation was calibrated using the percentage depth dose and dose profile measured by the photon field detector (PFD) for the 10×10 cm2 field size at 100 cm SSD. The calculated and measured values are consistent, differing by no more than 1%. The output factors obtained from the microDiamond detector were compared with those of the SFD and the Monte Carlo simulation, and the results demonstrate a percentage difference of less than 2%.
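    An output factor, as measured here, reduces to a ratio of detector readings between the small field and the 10×10 cm2 reference field, and the agreement metric is a simple percentage difference. A sketch with hypothetical readings (not the study's data):

```python
def output_factor(reading_field, reading_ref):
    # Output factor: detector signal for the field of interest
    # normalized to the 10x10 cm2 reference field reading
    return reading_field / reading_ref

def pct_diff(a, b):
    # Percentage difference of a relative to b
    return 100.0 * abs(a - b) / b

# Hypothetical readings (arbitrary units) for one small field
of_diamond = output_factor(0.820, 1.000)   # microDiamond
of_mc      = output_factor(0.808, 1.000)   # Monte Carlo
diff = pct_diff(of_diamond, of_mc)
print(f"{diff:.2f}%")   # under the 2% agreement level reported above
```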

  3. Reconstruction of Axial Energy Deposition in Magnetic Liner Inertial Fusion Based on PECOS Shadowgraph Unfolds Using the AMR Code FLASH

    NASA Astrophysics Data System (ADS)

    Adams, Marissa; Jennings, Christopher; Slutz, Stephen; Peterson, Kyle; Gourdain, Pierre; U. Rochester-Sandia Collaboration

    2017-10-01

    Magnetic Liner Inertial Fusion (MagLIF) experiments incorporate a laser to preheat a deuterium-filled capsule before compression via a magnetically imploding liner. In this work, we focus on the blast wave formed in the fuel during the laser preheat component of MagLIF, where approximately 1 kJ of energy is deposited axially into the capsule over 3 ns before implosion. To model blast waves directly relevant to experiments such as MagLIF, we inferred deposited energy from shadowgraphy of laser-only experiments performed at the PECOS target chamber using the Z-Beamlet laser. These energy profiles were used to initialize 2-dimensional simulations using the adaptive mesh refinement code FLASH. Gradients or asymmetries in the energy deposition may seed instabilities that alter the fuel's distribution, or promote mix, as the blast wave interacts with the liner wall. The AMR capabilities of FLASH allow us to study the development and dynamics of these instabilities within the fuel and their effect on the liner before implosion. Sandia National Laboratories is managed by National Technology & Engineering Solutions of Sandia (NTESS), LLC, a subsidiary of Honeywell International, Inc., for the U.S. DOE's NNSA under contract DE-NA0003525.

  4. Ion collector design for an energy recovery test proposal with the negative ion source NIO1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Variale, V., E-mail: vincenzo.variale@ba.infn.it; Cavenago, M.; Agostinetti, P.

    2016-02-15

    Commercial viability of thermonuclear fusion power plants depends also on minimizing the recirculation power used to operate the reactor. The neutral beam injector (NBI) remains one of the most important methods for plasma heating and control. For the future fusion power plant project DEMO, an NBI wall-plug efficiency of at least 0.45 is required, while the efficiency of present NBI designs is about 0.25. The D{sup −} beam from a negative ion source is partially neutralized by a gas cell, which leaves more than 40% of the energy in residual beams (D{sup −} and D{sup +}), so that an ion beam energy recovery system can significantly contribute to optimizing efficiency. Recently, the test negative ion source NIO1 (60 keV, 9 beamlets with 15 mA H{sup −} each) has been designed and built at RFX (Padua) to optimize negative ion production efficiency and beam quality. In this paper, a proposal to use the NIO1 source also for a beam energy recovery test experiment is presented, and a preliminary design of a negative ion beam collector with simulations of beam energy recovery is discussed.

  5. A wire calorimeter for the SPIDER beam: Experimental tests and feasibility study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pasqualotto, R., E-mail: roberto.pasqualotto@igi.cnr.it; Serianni, G.; Veltri, P.

    2015-04-08

    To study and optimize negative ion production and acceleration, in view of the use of neutral beam injectors in the ITER project, the SPIDER test facility (particle energy 100 keV; beam current 50 A, distributed over 1280 beamlets) is under construction in Padova, with the aim of testing beam characteristics and verifying proper source operation by means of several diagnostic systems. An array of tungsten wires, directly exposed to the beam and consequently heated to high temperature, is used in similar experiments at IPP-Garching to study the beam optics, one of the most important issues, in a qualitative way. The present contribution describes an experimental investigation of the behavior of tungsten wires under high heat loads in vacuum. Samples of tungsten wires are heated by electrical currents and the emitted light is measured by a camera in the 400-1100 nm wavelength range, which is proposed as a calibration tool. Simultaneously, the voltage applied to the wire is measured to study the dependency of emissivity on temperature. The feasibility of a wire calorimeter for SPIDER is finally assessed; to this purpose, the expected behavior of tungsten exposed to the two-dimensional beam profile of SPIDER is numerically addressed.

  6. GPU-based ultra-fast dose calculation using a finite size pencil beam model.

    PubMed

    Gu, Xuejun; Choi, Dongju; Men, Chunhua; Pan, Hubert; Majumdar, Amitava; Jiang, Steve B

    2009-10-21

    Online adaptive radiation therapy (ART) is an attractive concept that promises the ability to deliver an optimal treatment in response to the inter-fraction variability in patient anatomy. However, it has yet to be realized due to technical limitations. Fast dose deposit coefficient calculation is a critical component of the online planning process that is required for plan optimization of intensity-modulated radiation therapy (IMRT). Computer graphics processing units (GPUs) are well suited to provide the requisite fast performance for the data-parallel nature of dose calculation. In this work, we develop a dose calculation engine based on a finite-size pencil beam (FSPB) algorithm and a GPU parallel computing framework. The developed framework can accommodate any FSPB model. We test our implementation in the case of a water phantom and the case of a prostate cancer patient with varying beamlet and voxel sizes. All testing scenarios achieved speedup ranging from 200 to 400 times when using a NVIDIA Tesla C1060 card in comparison with a 2.27 GHz Intel Xeon CPU. The computational time for calculating dose deposition coefficients for a nine-field prostate IMRT plan with this new framework is less than 1 s. This indicates that the GPU-based FSPB algorithm is well suited for online re-planning for adaptive radiotherapy.
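    The dose deposition coefficients referred to above form a matrix mapping beamlet intensities to voxel doses, and the per-voxel accumulation is the data-parallel step a GPU accelerates. A minimal CPU sketch of that step (toy sizes and random coefficients; in the paper's framework each entry would come from an FSPB kernel):

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_beamlets = 1000, 64     # toy sizes; clinical cases are far larger

# Dose deposition coefficient matrix: dose to voxel i per unit intensity
# of beamlet j (random stand-ins here, FSPB kernel values in practice)
D = rng.random((n_voxels, n_beamlets))
w = rng.random(n_beamlets)          # beamlet intensities from the optimizer

dose = D @ w                        # the per-voxel accumulation the GPU parallelizes
print(dose.shape)
```

    On a GPU, each voxel's row-dot-product is an independent thread, which is why the speedups scale so well with problem size.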

  7. Prediction of scaling physics laws for proton acceleration with extended parameter space of the NIF ARC

    NASA Astrophysics Data System (ADS)

    Bhutwala, Krish; Beg, Farhat; Mariscal, Derek; Wilks, Scott; Ma, Tammy

    2017-10-01

    The Advanced Radiographic Capability (ARC) laser at the National Ignition Facility (NIF) at Lawrence Livermore National Laboratory is the world's most energetic short-pulse laser. It comprises four beamlets, each of substantial energy (~1.5 kJ), extended short-pulse duration (10-30 ps), and large focal spot (>=50% of energy in a 150 µm spot). This allows ARC to achieve proton and light ion acceleration via the Target Normal Sheath Acceleration (TNSA) mechanism, but it is not yet known how proton beam characteristics scale with ARC-regime laser parameters. As theory has also not yet been validated for laser-generated protons at ARC-regime laser parameters, we attempt to formulate the scaling physics of proton beam characteristics as a function of laser energy, intensity, focal spot size, pulse length, target geometry, etc., through a review of relevant proton acceleration experiments from laser facilities across the world. These predicted scaling laws should then guide target design and future diagnostics for desired proton beam experiments on the NIF ARC. This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344 and funded by the LLNL LDRD program under tracking code 17-ERD-039.

  8. Multifocal multiphoton microscopy with adaptive optical correction

    NASA Astrophysics Data System (ADS)

    Coelho, Simao; Poland, Simon; Krstajic, Nikola; Li, David; Monypenny, James; Walker, Richard; Tyndall, David; Ng, Tony; Henderson, Robert; Ameer-Beg, Simon

    2013-02-01

    Fluorescence lifetime imaging microscopy (FLIM) is a well established approach for measuring dynamic signalling events inside living cells, including detection of protein-protein interactions. The improved optical penetration of infrared light compared with linear excitation, due to reduced Rayleigh scattering and low absorption, has provided imaging depths of up to 1 mm in brain tissue, but significant image degradation occurs as samples distort (aberrate) the infrared excitation beam. Multiphoton time-correlated single photon counting (TCSPC) FLIM is a method for obtaining functional, high resolution images of biological structures. In order to achieve good statistical accuracy, TCSPC typically requires long acquisition times. We report the development of a multifocal multiphoton microscope (MMM), termed MegaFLI. Beam parallelization, performed via a 3D Gerchberg-Saxton (GS) algorithm using a spatial light modulator (SLM), increases the TCSPC count rate in proportion to the number of beamlets produced. A weighted 3D GS algorithm is employed to improve homogeneity. An added benefit is the implementation of flexible and adaptive optical correction. Adaptive optics performed by means of Zernike polynomials is used to correct for system-induced aberrations. Here we present results with significant improvement in throughput obtained using a novel complementary metal-oxide-semiconductor (CMOS) 1024-pixel single-photon avalanche diode (SPAD) array, opening the way to truly high-throughput FLIM.
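    The weighted GS step can be sketched with FFTs standing in for SLM-to-focal-plane propagation. This is a generic 2D toy (the grid size, spot positions, and iteration count are arbitrary choices, not MegaFLI parameters); the per-spot weights are updated each iteration to equalize the beamlet intensities:

```python
import numpy as np

def weighted_gs(shape, spots, n_iter=30, seed=0):
    """Weighted Gerchberg-Saxton: find a phase-only mask whose far field
    (modelled as an FFT) concentrates light at the given spot pixels with
    near-uniform intensity. Toy 2D sketch of the weighted-GS idea."""
    rng = np.random.default_rng(seed)
    amp_in = np.ones(shape)                      # uniform illumination of the SLM
    phase = rng.uniform(0, 2 * np.pi, shape)
    w = np.ones(len(spots))                      # per-spot weights
    for _ in range(n_iter):
        far = np.fft.fft2(amp_in * np.exp(1j * phase))
        spot_amp = np.array([abs(far[y, x]) for y, x in spots])
        w *= spot_amp.mean() / (spot_amp + 1e-12)   # boost dim spots (weighting step)
        target = np.zeros(shape, dtype=complex)
        for (y, x), wk in zip(spots, w):
            target[y, x] = wk * np.exp(1j * np.angle(far[y, x]))
        phase = np.angle(np.fft.ifft2(target))   # keep phase, discard amplitude
    return phase

spots = [(8, 8), (8, 24), (24, 8), (24, 24)]     # 2x2 beamlet pattern
phase = weighted_gs((32, 32), spots)
far = np.abs(np.fft.fft2(np.exp(1j * phase))) ** 2
print(sorted(round(far[y, x] / far.max(), 2) for y, x in spots))
```

    The weighting step is what distinguishes this from plain GS: without it, the spot intensities converge but remain unequal.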

  9. SU-F-T-380: Comparing the Effect of Respiration On Dose Distribution Between Conventional Tangent Pair and IMRT Techniques for Adjuvant Radiotherapy in Early Stage Breast Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, M; Ramaseshan, R

    2016-06-15

    Purpose: In this project, we compared the conventional tangent pair technique to the IMRT technique by analyzing the dose distribution. We also investigated the effect of respiration on planning target volume (PTV) dose coverage in both techniques. Methods: To implement the IMRT technique, a template-based planning protocol, dose constraints, and a treatment process were developed. Two open fields with optimized field weights were combined with two beamlet optimization fields in the IMRT plans. We compared the dose distribution between the standard tangential pair and IMRT. The improvement in dose distribution was measured by parameters such as the conformity index, homogeneity index, and coverage index. Another endpoint was whether the IMRT technique would reduce planning time for staff. The effect of the patient's respiration on dose distribution was also estimated. Four-dimensional computed tomography (4DCT) for different phases of the breathing cycle was used to evaluate the effect of respiration on the IMRT-planned dose distribution. Results: We accumulated 10 patients who underwent 4DCT and were planned with both techniques. Based on the preliminary analysis, the dose distribution in the IMRT technique was better than in the conventional tangent pair technique. Furthermore, the effect of respiration in the IMRT plan was not significant, as evident from the 95% isodose line coverage of the PTV drawn on all phases of 4DCT. Conclusion: Based on the 4DCT images, the breathing effect on dose distribution was smaller than we expected. We suspect that there are two reasons. First, the PTV movement due to respiration was not significant, possibly because we used a tilted breast board to set up patients. Second, the open fields with optimized field weights in the IMRT technique might reduce the breathing effect on dose distribution. A further investigation is necessary.
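    The plan-quality parameters mentioned, conformity, homogeneity, and coverage indices, can each be computed from a dose array and the PTV mask. The particular definitions below (a Paddick-style conformity index and a simple max-dose homogeneity index) are common choices but an assumption here, since the abstract does not give its formulas:

```python
import numpy as np

def plan_indices(dose, ptv_mask, presc):
    """Illustrative plan-quality indices; definitions vary in the literature,
    so these particular forms are assumptions, not this study's exact ones."""
    v_iso = np.sum(dose >= presc)                # voxels at/above prescription
    v_ptv = np.sum(ptv_mask)
    v_ptv_iso = np.sum((dose >= presc) & ptv_mask)
    ci = v_ptv_iso**2 / (v_iso * v_ptv)          # Paddick-style conformity index
    coverage = v_ptv_iso / v_ptv                 # coverage index
    hi = dose[ptv_mask].max() / presc            # simple homogeneity index
    return ci, coverage, hi

# Toy dose array (Gy) and PTV mask for a 60 Gy prescription
dose = np.array([58, 60, 62, 61, 59, 45, 63, 40], dtype=float)
ptv = np.array([1, 1, 1, 1, 1, 0, 0, 0], dtype=bool)
ci, cov, hi = plan_indices(dose, ptv, presc=60.0)
print(ci, cov, hi)
```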

  10. WE-AB-207B-07: Dose Cloud: Generating “Big Data” for Radiation Therapy Treatment Plan Optimization Research

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Folkerts, MM; University of California San Diego, La Jolla, California; Long, T

    Purpose: To provide a tool to generate large sets of realistic virtual patient geometries and beamlet doses for treatment optimization research. This tool enables countless studies exploring the fundamental interplay between patient geometry, objective functions, weight selections, and achievable dose distributions for various algorithms and modalities. Methods: Generating realistic virtual patient geometries requires a small set of real patient data. We developed a normalized patient shape model (PSM) which captures organ and target contours in a correspondence-preserving manner. Using PSM-processed data, we perform principal component analysis (PCA) to extract major modes of variation from the population. These PCA modes can be shared without exposing patient information. The modes are re-combined with different weights to produce sets of realistic virtual patient contours. Because virtual patients lack imaging information, we developed a shape-based dose calculation (SBD) relying on the assumption that the region inside the body contour is water. SBD utilizes a 2D fluence-convolved scatter kernel, derived from Monte Carlo simulations, and can compute both the full dose for a given set of fluence maps and a dose matrix (dose per fluence pixel) for many modalities. Combining the shape model with SBD provides the data needed for treatment plan optimization research. Results: We used the PSM to capture organ and target contours for 96 prostate cases, extracted the first 20 PCA modes, and generated 2048 virtual patient shapes by randomly sampling mode scores. Nearly half of the shapes were discarded for failing anatomical checks; the remaining 1124 were used to compute dose matrices via SBD and a standard 7-beam protocol. As a proof of concept, and to generate data for later study, we performed fluence map optimization emphasizing PTV coverage. 
Conclusions: We successfully developed and tested a tool for creating customizable sets of virtual patients suitable for large-scale radiation therapy optimization research.
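    The shape-sampling step described above (PCA on correspondence-preserving contours, then random recombination of mode scores) can be sketched in a few lines. The arrays below are toy stand-ins for the 96-case, 20-mode setup; real input would be PSM-processed contour coordinates, not random numbers:

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy stand-in for PSM output: 96 patients x 200 corresponding contour coordinates
X = rng.normal(size=(96, 200))

mean = X.mean(axis=0)
U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)  # PCA via SVD
modes = Vt[:20]                                          # first 20 modes of variation
scores_std = s[:20] / np.sqrt(len(X) - 1)                # per-mode standard deviations

# Generate one virtual patient shape by randomly sampling mode scores
scores = rng.normal(scale=scores_std)
virtual_shape = mean + scores @ modes
print(virtual_shape.shape)
```

    Anatomical plausibility checks, as in the paper, would then filter out shapes whose sampled scores produce invalid geometry.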

  11. SU-E-T-151: Breathing Synchronized Delivery (BSD) Planning for RapidArc Treatment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, W; Chen, M; Jiang, S

    2015-06-15

    Purpose: To propose a workflow for breathing synchronized delivery (BSD) planning for RapidArc treatment. Methods: The workflow includes three stages: screening/simulation, planning, and delivery. In the screening/simulation stage, a 4D CT with the corresponding breathing pattern is acquired for each of the selected patients, who are able to follow their own breathing pattern. In the planning stage, one breathing phase is chosen as the reference, and contours are delineated on the reference image. Deformation maps to the other phases are computed along with contour propagation. Based on the control points of the initial 3D plan for the reference phase and the respiration trace, the correlation of the leaf sequence and gantry angles with the respiration phases is determined. The beamlet matrices are calculated for the corresponding breathing phase and deformed to the reference phase. Using the 4D dose evaluation tool and the original 3D plan DVH criteria, the leaf sequence is further optimized to meet the planning objectives and the machine constraints. In the delivery stage, the patients are instructed to follow their own programmed breathing patterns, and all other parts are the same as conventional RapidArc delivery. Results: Our plan analysis is based on comparison of the 3D plan with a static target (SD), the 3D plan with motion delivery (MD), and the BSD plan. Cyclic motion of range 0 cm to 3 cm was simulated for phantoms and lung CT. The gain of the BSD plan over MD is significant and concordant for both simulation and lung 4DCT, indicating the benefits of 4D planning. Conclusion: Our study shows that the BSD plan can approach the SD plan quality. However, such a BSD scheme relies on the patient being able to follow, during radiation delivery, the same breathing curve that is used in the planning stage. Funded by Varian Medical Systems.

  12. LSNR Airborne LIDAR Mapping System Design and Early Results (Invited)

    NASA Astrophysics Data System (ADS)

    Shrestha, K.; Carter, W. E.; Slatton, K. C.

    2009-12-01

    Low signal-to-noise ratio (LSNR) detection techniques allow for implementation of airborne light detection and ranging (LIDAR) instrumentation aboard platforms with prohibitive power, size, and weight restrictions. The University of Florida has developed the Coastal Area Tactical-mapping System (CATS), a prototype LSNR LIDAR system capable of single photon laser ranging. CATS is designed to operate in a fixed-wing aircraft flying 600 m above ground level, producing 532 nm, 480 ps, 3 μJ output pulses at 8 kHz. To achieve continuous coverage of the terrain with 20 cm spatial resolution in a single pass, a 10x10 array of laser beamlets is scanned. A Risley prism scanner (two rotating V-coated optical wedges) allows the array of laser beamlets to be deflected in a variety of patterns, including conical, spiral, and lines at selected angles to the direction of flight. Backscattered laser photons are imaged onto a 100 channel (10x10 segmented-anode) photomultiplier tube (PMT) with a micro-channel plate (MCP) amplifier. Each channel of the PMT is connected to a multi-stop 2 GHz event timer. Here we report on tests in which ranges for known targets were accumulated for repeated laser shots and statistical analyses were applied to evaluate range accuracy, minimum separation distance, bathymetric mapping depth, and atmospheric scattering. Ground-based field test results have yielded 10 cm range accuracy and sub-meter feature identification at variable scan settings. These experiments also show that a secondary surface can be detected at a distance of 15 cm from the first. Range errors in secondary surface identification for six separate trials were within 7.5 cm, or within the timing resolution limit of the system. Operating at multi-photon sensitivity may have value for situations in which high ambient noise precludes single-photon sensitivity. Low reflectivity targets submerged in highly turbid waters can cause detection issues. 
CATS offers the capability to adjust the sensitivity of the sensor by changing the PMT supply voltage. For heavily turbid water, the multi-photon state (2300 V, 2.5*10^5 gain) was not sufficient for feature identification. Extraction of the bottom signal in a heavily turbid suspension necessitated maximum MCP-PMT gain (2500 V, 8*10^5 gain). Extrapolation of bathymetric test results suggests that the density of data points from the sea bottom should be sufficient to establish near-shore depths (up to 5 m) at a spatial resolution of 1 meter in moderately turbid water. Initial airborne tests over fresh water lakes in central Florida indicate that scan patterns containing near-nadir laser points produce strong returns from the surface of the water that cause oscillations in the PMT, preventing the detection of the lake bottom in shallow clear water. These results suggest that it may be necessary to tilt the sensor head in its mount, or use a scan pattern that does not include nadir points, such as a circular scan, for bathymetric mapping. Additional tests are ongoing to optimize the performance of the CATS LSNR airborne LIDAR system for both high spatial resolution terrain mapping and shallow water bathymetric mapping.
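    The ranging arithmetic behind the quoted numbers is simple: the multi-stop event timer records round-trip photon flight times, and one timing bin sets the range resolution. A sketch (the 2 GHz timer rate is taken from the system description above; the conversion itself is just range = c·t/2):

```python
C = 299_792_458.0            # speed of light, m/s

def range_from_bins(bins, timer_hz=2e9):
    """Convert an event-timer reading (round-trip time in timer bins)
    to one-way range in meters."""
    t = bins / timer_hz
    return C * t / 2.0

# One 2 GHz timer bin (0.5 ns) corresponds to ~7.5 cm of range,
# matching the quoted timing-resolution limit on secondary-surface errors.
print(round(range_from_bins(1) * 100, 2), "cm")
```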

  13. Structured illumination to spatially map chromatin motions.

    PubMed

    Bonin, Keith; Smelser, Amanda; Moreno, Naike Salvador; Holzwarth, George; Wang, Kevin; Levy, Preston; Vidi, Pierre-Alexandre

    2018-05-01

    We describe a simple optical method that creates structured illumination of a photoactivatable probe and apply this method to characterize chromatin motions in nuclei of live cells. A laser beam coupled to a diffractive optical element at the back focal plane of an excitation objective generates an array of near diffraction-limited beamlets with FWHM of 340 ± 30 nm, which simultaneously photoactivate a 7 × 7 matrix pattern of GFP-labeled histones, with spots 1.70 μm apart. From the movements of the photoactivated spots, we map chromatin diffusion coefficients at multiple microdomains of the cell nucleus. The results show correlated motions of nearest chromatin microdomain neighbors, whereas chromatin movements are uncorrelated at the global scale of the nucleus. The method also reveals a DNA damage-dependent decrease in chromatin diffusion. The diffractive optical element instrumentation can be easily and cheaply implemented on commercial inverted fluorescence microscopes to analyze adherent cell culture models. A protocol to measure chromatin motions in nonadherent human hematopoietic stem and progenitor cells is also described. We anticipate that the method will contribute to the identification of the mechanisms regulating chromatin mobility, which influences most genomic processes and may underlie the biogenesis of genomic translocations associated with hematologic malignancies. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
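    Mapping diffusion coefficients from the photoactivated spot movements reduces, in the simplest case, to a mean-squared-displacement estimate per microdomain, using the 2D relation MSD(τ) = 4Dτ. A sketch on a simulated Brownian track (all numbers are illustrative, not the paper's measurements):

```python
import numpy as np

def diffusion_coefficient(track, dt):
    """Apparent 2D diffusion coefficient from a spot trajectory via
    MSD(tau) = 4*D*tau, using the one-step MSD (simplest estimator)."""
    steps = np.diff(track, axis=0)
    msd1 = np.mean(np.sum(steps**2, axis=1))   # MSD at lag = dt
    return msd1 / (4.0 * dt)

# Simulated Brownian track with a known D (um^2/s), sampled every 0.5 s
rng = np.random.default_rng(2)
D_true, dt, n = 1e-3, 0.5, 5000
track = np.cumsum(rng.normal(scale=np.sqrt(2 * D_true * dt), size=(n, 2)), axis=0)
D_est = diffusion_coefficient(track, dt)
print(f"D ~ {D_est:.2e} um^2/s")   # close to the simulated 1e-3
```

    In practice one would fit MSD over several lag times to separate free diffusion from confined or directed motion.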

  14. The 15 cm mercury ion thruster research 1975

    NASA Technical Reports Server (NTRS)

    Wilbur, P. J.

    1975-01-01

    Doubly charged ion current measurements in the beam of a SERT II thruster are shown to introduce corrections which bring its calculated thrust into close agreement with that measured during flight testing. A theoretical model of doubly charged ion production and loss in mercury electron bombardment thrusters is discussed and is shown to yield doubly-to-singly charged ion density ratios that agree with experimental measurements obtained on a 15 cm diameter thruster over a range of operating conditions. Single cusp magnetic field thruster operation is discussed and measured ion beam profiles, performance data, doubly charged ion densities, and discharge plasma characteristics are presented for a range of operating conditions and thruster geometries. Variations in the characteristics of this thruster are compared to those observed in the divergent field thruster and the cusped field thruster is shown to yield flatter ion beam profiles at about the same discharge power and propellant utilization operating point. An ion optics test program is described and the measured effects of grid system dimensions on ion beamlet half angle and diameter are examined. The effectiveness of hollow cathode startup using a thermionically emitting filament within the cathode is examined over a range of mercury flow rates and compared to results obtained with a high voltage tickler startup technique. Results of cathode plasma property measurement tests conducted within the cathode are presented.
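    The doubly-charged-ion thrust correction mentioned at the start follows from the fact that, at fixed accelerating voltage, thrust per unit beam current scales as 1/sqrt(q). A sketch of the standard textbook correction factor (the 10% doubly charged current fraction is hypothetical, not a value measured in this work):

```python
from math import sqrt

def thrust_correction(i_single, i_double):
    """Standard thrust correction factor for a two-species ion beam:
    at the same beam current and net voltage, a doubly charged ion
    contributes only 1/sqrt(2) of the thrust of a singly charged one.
    Illustrative textbook relation, not taken from this report."""
    return (i_single + i_double / sqrt(2)) / (i_single + i_double)

# Hypothetical beam with 10% of the current carried by doubly charged ions
alpha = thrust_correction(0.9, 0.1)
print(round(alpha, 4))
```

    Applying such a factor to a SERT II-class thrust calculation lowers the calculated thrust by a few percent, which is the direction of the correction described above.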

  15. Optics for multimode lasers with elongated depth of field

    NASA Astrophysics Data System (ADS)

    Laskin, Alexander; Laskin, Vadim; Ostrun, Aleksei

    2017-02-01

    Modern multimode high-power lasers are widely used in industrial applications, and control of their radiation, especially by focusing, is of great importance. Because of the relatively low optical quality of multimode laser radiation, characterized by high values of the Beam Parameter Product (BPP) or M², the depth of field when focusing is narrow. At the same time, laser technologies like deep-penetration welding and cutting of thick metal sheets benefit from an elongated depth of field in the area of the focal plane; therefore, increasing the zone of minimized spot size along the optical axis is an important technical task. As a solution, it is suggested to apply refractive optical systems that split an initial laser beam into several beamlets, which are focused in different foci separated along the optical axis, providing reliable control of the energy portion in each separate focus, independently of beam size or mode structure. With multi-focus optics, the length of the material processing zone along the optical axis is defined by the distances between the separate foci, which are determined by the optical design and can be chosen according to the requirements of a particular laser technology. The stability of the distances between foci provides stability of the technological process. This paper describes some design features of refractive multi-focus optics; examples of real implementations and experimental results are presented as well.

  16. Derivative-free generation and interpolation of convex Pareto optimal IMRT plans

    NASA Astrophysics Data System (ADS)

    Hoffmann, Aswin L.; Siem, Alex Y. D.; den Hertog, Dick; Kaanders, Johannes H. A. M.; Huizenga, Henk

    2006-12-01

    In inverse treatment planning for intensity-modulated radiation therapy (IMRT), beamlet intensity levels in fluence maps of high-energy photon beams are optimized. Treatment plan evaluation criteria are used as objective functions to steer the optimization process. Fluence map optimization can be considered a multi-objective optimization problem, for which a set of Pareto optimal solutions exists: the Pareto efficient frontier (PEF). In this paper, a constrained optimization method is pursued to iteratively estimate the PEF up to some predefined error. We use the property that the PEF is convex for a convex optimization problem to construct piecewise-linear upper and lower bounds to approximate the PEF from a small initial set of Pareto optimal plans. A derivative-free Sandwich algorithm is presented in which these bounds are used with three strategies to determine the location of the next Pareto optimal solution such that the uncertainty in the estimated PEF is maximally reduced. We show that an intelligent initial solution for a new Pareto optimal plan can be obtained by interpolation of fluence maps from neighbouring Pareto optimal plans. The method has been applied to a simplified clinical test case using two convex objective functions to map the trade-off between tumour dose heterogeneity and critical organ sparing. All three strategies produce representative estimates of the PEF. The new algorithm is particularly suitable for dynamic generation of Pareto optimal plans in interactive treatment planning.
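    The Sandwich idea, chords between known Pareto points bounding a convex frontier from above and extended neighbouring chords bounding it from below, can be shown on a toy two-objective frontier. The frontier g(x) = 1/x and the probe points below are assumptions for illustration, not the paper's test case:

```python
def g(x):                      # toy convex Pareto frontier: objective 2 vs objective 1
    return 1.0 / x

def chord(p, q, x):            # line through two frontier points, evaluated at x
    (x0, y0), (x1, y1) = p, q
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

# Sandwich step on segment [2, 3]: the chord over the segment is an upper bound
# of a convex frontier; extending the chords of the neighbouring segments gives
# a derivative-free lower bound.
xs = [1.0, 2.0, 3.0, 4.0]
pts = [(x, g(x)) for x in xs]
mid = 2.5                                        # probe point inside segment [2, 3]
upper = chord(pts[1], pts[2], mid)
lower = max(chord(pts[0], pts[1], mid),          # left chord extended rightwards
            chord(pts[2], pts[3], mid))          # right chord extended leftwards
print(upper - lower)   # bound gap: the uncertainty that drives the next sample
```

    The algorithm sketched here would place the next Pareto optimal plan where this gap is largest, shrinking the estimated frontier error fastest.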

  17. Measurements of Electron Temperature and Density Profiles of Plasmas Produced by Nike KrF Laser for Laser Plasma Instability (LPI) Research

    NASA Astrophysics Data System (ADS)

    Oh, Jaechul; Weaver, J. L.; Obenschain, S. P.; Schmitt, A. J.; Kehne, D. M.; Karasik, M.; Chan, L.-Y.; Serlin, V.; Phillips, L.

    2012-10-01

    Experiments [J. Oh et al., GO5.4, APS DPP (2010); J. L. Weaver et al., GO5.3, APS DPP (2010)] using the Nike KrF laser observed LPI signatures from CH plasmas at laser intensities above ~1x10^15 W/cm^2. Knowing the spatial profiles of temperature (Te) and density (ne) in the underdense coronal region (0 < n < nc/4) of the plasma is essential to understanding the LPI observations. However, numerical simulation was the only way to access these profiles in the previous experiments. In the current Nike LPI experiment, a side-on grid imaging refractometer (GIR) [R. S. Craxton et al., Phys. Fluids B 5, 4419 (1993)] is being deployed to measure the underdense plasma profiles. The GIR will resolve Te and ne in space, taking a 2D snapshot of probe laser (λ = 263 nm, δt = 10 psec) beamlets (50 μm spacing) refracted by the plasma at a selected time during the laser illumination. Time-resolved spectrometers with an absolute-intensity-calibrated photodiode array and a streak camera will simultaneously monitor light emission from the plasma in spectral ranges relevant to Raman (SRS) and two-plasmon decay (TPD) instabilities. An experimental study of the effects of the plasma profiles on LPI initiation will be presented.

  18. The development of an injection system for a compact H(-) cyclotron, the concomitant measurement of injected beam properties and the experimental characterization of the spiral inflector

    NASA Astrophysics Data System (ADS)

    Dehnel, Morgan Patrick

    1998-11-01

    This thesis addresses two major problems. One is of interest to commercial cyclotron manufacturers and the other is of interest to the accelerator physics community. The industrial problem was to produce a compact and modular ion source and injection system for the new TR13 H- cyclotron, which is capable of transporting and injecting a high quality and well matched beam into the cyclotron. The accelerator physics problem was to advance the science of inflector ion optical design, analysis and troubleshooting from the realm of pure simulation to the realm of measurement and experimentation. The industrial problem was solved by designing candidate injection systems in parallel with the TR13 cyclotron design. These systems were fabricated and then experimentally optimized along with the ion source on a 1 MeV test cyclotron. This work resulted in a set of ion source and injection systems with well documented and understood properties. The recommended solution for the TR13 was a cost effective injection system composed of only two axially rotated quadrupole magnets. The accelerator physics problem is the lack of measured cyclotron inflector optical data and beam related properties in the immediate vicinity of a cyclotron inflector. This required the development of an experimental technique to overcome the numerous technical difficulties associated with making measurements near a device as inaccessible as a cyclotron inflector. A diverse assembly of equipment and procedures was required: a well understood injection system, a pinhole collimator for producing beamlets for ray-tracing, a specially configured center region to expose the inflector to view, a system of scintillators in close proximity to the inflector for producing visible beamspots, a TV camera and frame grabber to record images and a set of image analysis and data processing procedures. 
The results obtained using this technique were: (a) measured constraints on the coefficients of an inflector's transport matrix, (b) measurement of the beam's centering, size, shape and orientation in phase space at the entrance and exit of an inflector, (c) measurements of beam displacement as a function of field and energy perturbations at an inflector exit and (d) comparison of an inflector simulation code's capabilities against detailed measured data. Such properties of a beam have not heretofore been determined experimentally.

  19. TU-EF-304-11: Therapeutic Benefits of Collimation in Spot Scanning Proton Therapy in the Treatment of Brain Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moignier, A; Gelover, E; Wang, D

    Purpose: A dynamic collimation system (DCS) based on two orthogonal pairs of mobile trimmer blades has recently been proposed to reduce the lateral penumbra in spot scanning proton therapy (SSPT). The purpose of this work is to quantify the therapeutic benefit of using the DCS for SSPT of brain cancer by comparing un-collimated and collimated treatment plans. Methods: Un-collimated and collimated brain treatment plans were created for five patients, previously treated with SSPT, using an in-house treatment planning system capable of modeling collimated and un-collimated beamlets. Un-collimated plans reproduced the clinically delivered plans in terms of target coverage and organ-at-risk (OAR) sparing, whereas collimated plans were re-optimized to improve the organ-at-risk sparing while maintaining target coverage. Physical and biological comparison metrics such as dose distribution conformity, mean and maximum doses, normal tissue complication probability (NTCP) and risk of secondary brain cancer were used to evaluate the plans. Results: The DCS systematically improved the dose distribution conformity while preserving the target coverage. The average reduction of the mean dose to the 10-mm ring surrounding the target and the healthy brain were 7.1% (95% CI: 4.2%–9.9%; p<0.01) and 14.3% (95% CI: 7.8%–20.8%; p<0.01), respectively. This yielded an average reduction of 12.0% (95% CI: 8.2%–15.7%; p<0.01) for the brain necrosis NTCP using the Flickinger model, and 14.2% (95% CI: 7.7%–20.8%; p<0.01) for the risk of secondary brain cancer. The average maximum dose reductions for the brainstem, chiasm, optic nerves, cochleae and pituitary gland when comparing un-collimated and collimated plans were 14.3%, 10.4%, 11.2%, 13.0%, 12.9% and 3.4%, respectively. Evaluating individual plans using the Lyman-Kutcher-Burman NTCP model also yielded improvements.
Conclusion: The lateral penumbra reduction performed by the DCS increases the normal tissue sparing capabilities of SSPT for brain tumor treatment while preserving the target coverage. This research was financially supported by Ion Beam Applications S.A. (IBA, Louvain-La-Neuve, Belgium).
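For reference, the Lyman-Kutcher-Burman NTCP model named above can be sketched in a few lines. The parameter values in the test are illustrative placeholders, not the ones used in this study.

```python
import math

def lkb_ntcp(doses, volumes, n, m, td50):
    """Lyman-Kutcher-Burman NTCP from a differential DVH.
    doses: bin doses (Gy); volumes: fractional volumes (summing to 1).
    n: volume-effect parameter, m: slope, td50: 50% tolerance dose."""
    # Generalized equivalent uniform dose (gEUD) reduces the DVH
    # to a single effective uniform dose.
    geud = sum(v * d ** (1.0 / n) for d, v in zip(doses, volumes)) ** n
    t = (geud - td50) / (m * td50)
    # Probit (cumulative normal) dose-response.
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))
```

A uniform dose equal to TD50 gives NTCP = 0.5 by construction; lowering the dose (as the DCS does for OARs) moves the probit argument negative and lowers NTCP.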

  20. Multi-GPU implementation of a VMAT treatment plan optimization algorithm.

    PubMed

    Tian, Zhen; Peng, Fei; Folkerts, Michael; Tan, Jun; Jia, Xun; Jiang, Steve B

    2015-06-01

    Volumetric modulated arc therapy (VMAT) optimization is a computationally challenging problem due to its large data size, high degrees of freedom, and many hardware constraints. High-performance graphics processing units (GPUs) have been used to speed up the computations. However, a GPU's relatively small memory size cannot handle cases with a large dose-deposition coefficient (DDC) matrix, e.g., those with a large target size, multiple targets, multiple arcs, and/or small beamlet size. The main purpose of this paper is to report an implementation of a column-generation-based VMAT algorithm, previously developed in the authors' group, on a multi-GPU platform to solve the memory limitation problem. While the column-generation-based VMAT algorithm has been previously developed, the GPU implementation details have not been reported. Hence, another purpose is to present detailed techniques employed for GPU implementation. The authors also would like to utilize this particular problem as an example to study the feasibility of using a multi-GPU platform to solve large-scale problems in medical physics. The column-generation approach generates VMAT apertures sequentially by solving a pricing problem (PP) and a master problem (MP) iteratively. In the authors' method, the sparse DDC matrix is first stored on a CPU in coordinate list (COO) format. On the GPU side, this matrix is split into four submatrices according to beam angles, which are stored on four GPUs in compressed sparse row format. Computation of the beamlet price, the first step in the PP, is accomplished using multiple GPUs. A fast inter-GPU data transfer scheme is accomplished using peer-to-peer access. The remaining steps of the PP and the MP are implemented on a CPU or a single GPU due to their modest problem scale and computational loads. The Barzilai-Borwein algorithm with a subspace-step scheme is adopted here to solve the MP. 
A head and neck (H&N) cancer case is then used to validate the authors' method. The authors also compare their multi-GPU implementation with three different single GPU implementation strategies, i.e., truncating DDC matrix (S1), repeatedly transferring DDC matrix between CPU and GPU (S2), and porting computations involving DDC matrix to CPU (S3), in terms of both plan quality and computational efficiency. Two more H&N patient cases and three prostate cases are used to demonstrate the advantages of the authors' method. The authors' multi-GPU implementation can finish the optimization process within ∼ 1 min for the H&N patient case. S1 leads to an inferior plan quality although its total time was 10 s shorter than the multi-GPU implementation due to the reduced matrix size. S2 and S3 yield the same plan quality as the multi-GPU implementation but take ∼4 and ∼6 min, respectively. High computational efficiency was consistently achieved for the other five patient cases tested, with VMAT plans of clinically acceptable quality obtained within 23-46 s. Conversely, to obtain clinically comparable or acceptable plans for all six of these VMAT cases that the authors have tested in this paper, the optimization time needed in a commercial TPS system on CPU was found to be in an order of several minutes. The results demonstrate that the multi-GPU implementation of the authors' column-generation-based VMAT optimization can handle the large-scale VMAT optimization problem efficiently without sacrificing plan quality. The authors' study may serve as an example to shed some light on other large-scale medical physics problems that require multi-GPU techniques.

  1. Poster - 52: Smoothing constraints in Modulated Photon Radiotherapy (XMRT) fluence map optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McGeachy, Philip; Villarreal-Barajas, Jose Eduardo

    Purpose: Modulated Photon Radiotherapy (XMRT), which simultaneously optimizes photon beamlet energy (6 and 18 MV) and fluence, has recently shown dosimetric improvement in comparison to conventional IMRT. That said, the degree of smoothness of the resulting fluence maps (FMs) has yet to be investigated and could impact the deliverability of XMRT. This study investigates FM smoothness and imposes smoothing constraints in the fluence map optimization. Methods: Smoothing constraints were modeled in the XMRT algorithm with the sum of positive gradient (SPG) technique. XMRT solutions, with and without SPG constraints, were generated for a clinical prostate scan using standard dosimetric prescriptions, constraints, and a seven-beam coplanar arrangement. The smoothness, with and without SPG constraints, was assessed from the absolute and relative maximum SPG scores for each fluence map. Dose volume histograms were utilized when evaluating the impact on the dose distribution. Results: Imposing SPG constraints reduced the absolute and relative maximum SPG values by factors of up to 5 and 2, respectively, when compared with their non-SPG-constrained counterparts. This leads to a more seamless conversion of FMs to their respective MLC sequences. The improved smoothness resulted in an increase in organ-at-risk (OAR) dose; however, the increase is not clinically significant. Conclusions: For a clinical prostate case, there was a noticeable improvement in the smoothness of the XMRT FMs when SPG constraints were applied, with a minor increase in dose to OARs. This increase in OAR dose is not clinically meaningful.
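A common definition of the SPG score is sketched below; it is a minimal assumption of what the abstract's "sum of positive gradient" metric computes, not the authors' exact implementation.

```python
import numpy as np

def spg(fluence_map):
    """Sum of positive gradients (SPG) of a fluence map: for each MLC
    leaf-pair row, sum the positive steps encountered left to right
    (including the initial rise from zero). This equals the minimal
    beam-on time of a unidirectional sliding-window delivery of that
    row, so a lower SPG means a smoother, more deliverable map."""
    diffs = np.diff(fluence_map, axis=1, prepend=0.0)
    return float(np.sum(np.clip(diffs, 0.0, None)))
```

A spiky profile and a smooth profile with the same peak can differ sharply in SPG, which is exactly why the metric is useful as a smoothing constraint.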

  2. Laser parametric instability experiments of a 3ω, 15 kJ, 6-ns laser pulse in gas-filled hohlraums at the Ligne d'Intégration Laser facility

    NASA Astrophysics Data System (ADS)

    Rousseaux, C.; Huser, G.; Loiseau, P.; Casanova, M.; Alozy, E.; Villette, B.; Wrobel, R.; Henry, O.; Raffestin, D.

    2015-02-01

    Experimental investigations of stimulated Raman (SRS) and Brillouin (SBS) scattering have been performed at the Ligne d'Intégration Laser facility (LIL, CEA-Cesta, France). The laser parametric instabilities (LPI) are driven by firing four laser beamlets (one quad) into millimeter-size, gas-filled hohlraum targets. A quad delivers an energy on target of 15 kJ at 3ω in a 6-ns shaped laser pulse. The quad is focused by means of 3ω gratings and is optically smoothed with a kinoform phase plate and with smoothing-by-spectral-dispersion-like laser bandwidth of 2 GHz and/or 14 GHz. Open- and closed-geometry hohlraums have been used, all being filled with 1 atm of neopentane (C5H12) gas. For SRS and SBS studies, the light backscattered into the focusing optics is analyzed with spectral and time resolution. Near-backscattered light at 3ω and transmitted light at 3ω are also monitored in the open-geometry case. Depending on the target geometry (plasma length and hydrodynamic evolution of the plasma), it is shown that, at maximum laser intensities of about 9 × 10^14 W/cm^2, Raman reflectivity increases noticeably, up to 30% in 4-mm long plasmas, while SBS stays below 10%. Consequently, laser transmission through long plasmas drops to about 10% of the incident energy. Adding 14 GHz bandwidth to the laser always reduces LPI reflectivities, although the reduction is not dramatic.

  3. Advanced electric propulsion and space plasma contactor research

    NASA Technical Reports Server (NTRS)

    Wilbur, Paul J.

    1987-01-01

    A theory of the plasma contacting process is described and experimental results obtained using three different hollow cathode-based plasma contactors are presented. The existence of a sheath across which the bulk of the voltage drop associated with the contacting process occurs is demonstrated. Test results are shown to agree with a model of a spherical, space-charge-limited double sheath. The concept of ignited mode contactor operation is discussed, which is shown to enhance contactor efficiency when it is collecting electrons. An investigation of the potentials in the plasma plumes downstream of contactors operating at typical conditions is presented. Results of tests performed on hollow cathodes operating at high interelectrode pressures (up to about 1000 Torr) on ammonia are presented and criteria that are necessary to ensure that the cathode will operate properly in this regime are presented. These results suggest that high pressure hollow cathode operation is difficult to achieve and that special care must be taken to assure that the electron emission region remains diffuse and attached to the low work function insert. Experiments conducted to verify results obtained previously using a ring cusp ion source equipped with a moveable anode are described and test results are reported. A theoretical study of hollow cathode operation at high electron emission currents is presented. Preliminary experiments using the constrained sheath optics concept to achieve ion extraction under conditions of high beam current density, low net accelerating voltage and well-collimated beamlet formation are discussed.

  4. A 7.2 keV spherical crystal backlighter system for Sandia's Z Pulsed Power Facility

    NASA Astrophysics Data System (ADS)

    Schollmeier, M.; Knapp, P. F.; Ampleford, D. J.; Loisel, G. P.; Robertson, G.; Shores, J. E.; Smith, I. C.; Speas, C. S.; Porter, J. L.; McBride, R. D.

    2016-10-01

    Many experiments on Sandia's Z facility, a 30 MA, 100 ns rise-time, pulsed-power driver, use a monochromatic quartz crystal imaging backlighter system at 1.865 keV (Si Heα) or 6.151 keV (Mn Heα) x-ray energy to radiograph an imploding liner (cylindrical tube) or wire array. The x-ray source is generated by the Z-Beamlet Laser (ZBL), which provides up to 4.5 kJ at 527 nm during a 6 ns window. Radiographs of an imploding thick-walled Beryllium liner at a convergence ratio of about 20 [CR = R_in(0)/R_in(t)] were too opaque to identify the inner surface of the liner with high confidence, demonstrating the need for a higher-energy x-ray backlighter between 6 and 10 keV. We present the design, test and first application of a Ge(335) spherical crystal x-ray backlighter system using the 7.242 keV Co Heα resonance line. The system operates at an almost identical Bragg angle as the existing 1.865 and 6.151 keV backlighters, enhancing our capabilities such as two-color, two-frame radiography, without changing detector shielding hardware. SAND No: SAND2016-6724 A. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corp., a wholly owned subsidiary of Lockheed Martin Corp., for the U.S. DoE NNSA under contract DE-AC04-94AL85000.
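The near-normal Bragg angle of the Ge(335) backlighter can be cross-checked with Bragg's law. The lattice constant and hc conversion below are standard reference values used only for this illustrative check, not numbers taken from the record.

```python
import math

HC = 12.39842  # hc in keV·Å

def bragg_angle(energy_kev, two_d):
    """First-order Bragg angle (degrees) from lambda = 2d sin(theta)."""
    lam = HC / energy_kev
    return math.degrees(math.asin(lam / two_d))

# Ge(335): d = a / sqrt(h^2 + k^2 + l^2), with a ≈ 5.658 Å for germanium.
two_d_ge335 = 2 * 5.658 / math.sqrt(3**2 + 3**2 + 5**2)
theta = bragg_angle(7.242, two_d_ge335)  # Co He-alpha backlighter energy
```

The result lands in the low-80s of degrees, i.e. near-backscatter incidence, which is the regime in which spherical crystal imagers of this kind operate with low aberrations.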

  5. A first characterization of the NIO1 particle beam by means of a diagnostic calorimeter

    NASA Astrophysics Data System (ADS)

    Pimazzoni, A.; Cavenago, M.; Cervaro, V.; Fasolo, D.; Serianni, G.; Tollin, M.; Veltri, P.

    2017-08-01

    Powerful neutral beam injectors (NBI) are required as heating and current drive systems for tokamaks like ITER. The development of negative ion sources and accelerators (40 A, 1 MeV D- beam) in particular is a crucial point, and many issues still require a better understanding. In this framework, the experiment NIO1 (9 beamlets of 15 mA H- each, 60 kV) operated at Consorzio RFX started operation in 2014 [1]. Both its RF negative ion source (up to 2.5 kW) and its beamline are equipped with many diagnostics [2]. For the early tests on the extraction system, oxygen has been used as well as hydrogen due to its higher electronegativity, which allows reaching currents large enough to test the beam diagnostics even without caesium injection. In particular, a 1D-CFC (carbon-fibre-carbon composite) tile is used as a calorimeter to determine the beam power deposition by observing the rear surface of the tile with an infra-red camera; the same design is applied as for STRIKE [3], one of the diagnostics of SPIDER (the ITER-like ion source prototype [4]), whose facility is currently under construction at Consorzio RFX. From this diagnostic it is also possible to assess the beam divergence and thus the beam optics. The present contribution describes the characterization of the NIO1 particle beam by means of temperature and current measurements with different source and accelerator parameters.

  6. A 7.2 keV spherical x-ray crystal backlighter for two-frame, two-color backlighting at Sandia's Z Pulsed Power Facility

    NASA Astrophysics Data System (ADS)

    Schollmeier, M. S.; Knapp, P. F.; Ampleford, D. J.; Harding, E. C.; Jennings, C. A.; Lamppa, D. C.; Loisel, G. P.; Martin, M. R.; Robertson, G. K.; Shores, J. E.; Smith, I. C.; Speas, C. S.; Weis, M. R.; Porter, J. L.; McBride, R. D.

    2017-10-01

    Many experiments on Sandia National Laboratories' Z Pulsed Power Facility—a 30 MA, 100 ns rise-time, pulsed-power driver—use a monochromatic quartz crystal backlighter system at 1.865 keV (Si Heα) or 6.151 keV (Mn Heα) x-ray energy to radiograph an imploding liner (cylindrical tube) or wire array z-pinch. The x-ray source is generated by the Z-Beamlet laser, which provides two 527-nm, 1 kJ, 1-ns laser pulses. Radiographs of imploding, thick-walled beryllium liners at convergence ratios CR above 15 [CR = r_i(0)/r_i(t)] using the 6.151-keV backlighter system were too opaque to identify the inner radius r_i of the liner with high confidence, demonstrating the need for a higher-energy x-ray radiography system. Here, we present a 7.242 keV backlighter system using a Ge(335) spherical crystal with the Co Heα resonance line. This system operates at a similar Bragg angle as the existing 1.865 keV and 6.151 keV backlighters, enhancing our capabilities for two-color, two-frame radiography without modifying the system integration at Z. The first data taken at Z include 6.2-keV and 7.2-keV two-color radiographs as well as radiographs of low-convergence (CR about 4-5), high-areal-density liner implosions.

  7. A 7.2 keV spherical x-ray crystal backlighter for two-frame, two-color backlighting at Sandia’s Z Pulsed Power Facility

    DOE PAGES

    Schollmeier, M. S.; Knapp, P. F.; Ampleford, D. J.; ...

    2017-10-10

    Many experiments on Sandia National Laboratories' Z Pulsed Power Facility—a 30 MA, 100 ns rise-time, pulsed-power driver—use a monochromatic quartz crystal backlighter system at 1.865 keV (Si Heα) or 6.151 keV (Mn Heα) x-ray energy to radiograph an imploding liner (cylindrical tube) or wire array z-pinch. The x-ray source is generated by the Z-Beamlet laser, which provides two 527-nm, 1 kJ, 1-ns laser pulses. Radiographs of imploding, thick-walled beryllium liners at convergence ratios CR above 15 [CR = r_i(0)/r_i(t)] using the 6.151-keV backlighter system were too opaque to identify the inner radius r_i of the liner with high confidence, demonstrating the need for a higher-energy x-ray radiography system. Here, we present a 7.242 keV backlighter system using a Ge(335) spherical crystal with the Co Heα resonance line. This system operates at a similar Bragg angle as the existing 1.865 keV and 6.151 keV backlighters, enhancing our capabilities for two-color, two-frame radiography without modifying the system integration at Z. The first data taken at Z include 6.2-keV and 7.2-keV two-color radiographs as well as radiographs of low-convergence (CR about 4-5), high-areal-density liner implosions.

  8. A 7.2 keV spherical x-ray crystal backlighter for two-frame, two-color backlighting at Sandia’s Z Pulsed Power Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schollmeier, M. S.; Knapp, P. F.; Ampleford, D. J.

    Many experiments on Sandia National Laboratories' Z Pulsed Power Facility—a 30 MA, 100 ns rise-time, pulsed-power driver—use a monochromatic quartz crystal backlighter system at 1.865 keV (Si Heα) or 6.151 keV (Mn Heα) x-ray energy to radiograph an imploding liner (cylindrical tube) or wire array z-pinch. The x-ray source is generated by the Z-Beamlet laser, which provides two 527-nm, 1 kJ, 1-ns laser pulses. Radiographs of imploding, thick-walled beryllium liners at convergence ratios CR above 15 [CR = r_i(0)/r_i(t)] using the 6.151-keV backlighter system were too opaque to identify the inner radius r_i of the liner with high confidence, demonstrating the need for a higher-energy x-ray radiography system. Here, we present a 7.242 keV backlighter system using a Ge(335) spherical crystal with the Co Heα resonance line. This system operates at a similar Bragg angle as the existing 1.865 keV and 6.151 keV backlighters, enhancing our capabilities for two-color, two-frame radiography without modifying the system integration at Z. The first data taken at Z include 6.2-keV and 7.2-keV two-color radiographs as well as radiographs of low-convergence (CR about 4-5), high-areal-density liner implosions.

  9. Study of Perturbations on High Mach Number Blast Waves in Various Gasses

    NASA Astrophysics Data System (ADS)

    Edens, A.; Adams, R.; Rambo, P.; Shores, J.; Smith, I.; Atherton, B.; Ditmire, T.

    2006-10-01

    We have performed a series of experiments examining the properties of high Mach number blast waves. Experiments were conducted on the Z-Beamlet laser [1] at Sandia National Laboratories. We created blast waves in the laboratory by using 10 J-1000 J laser pulses to illuminate millimeter-scale solid targets immersed in gas. Our experiments studied the validity of theories put forward by Vishniac and Ryu [2-4] to explain the dynamics of perturbations on astrophysical blast waves. These experiments consisted of an examination of the evolution of perturbations of known primary mode number induced on the surface of blast waves by means of regularly spaced wire arrays. The temporal evolution of the amplitude of the induced perturbations relative to the mean radius of the blast wave was fit to a power law in time. Measurements were taken for a number of different mode numbers and background gases, and the results show qualitative agreement with previously published theories for the hydrodynamics of thin-shell blast waves. The results for perturbations in nitrogen gas have recently been published [5]. [1] P. K. Rambo, I. C. Smith, J. L. Porter, et al., Applied Optics 44, 2421 (2005). [2] D. Ryu and E. T. Vishniac, Astrophysical Journal 313, 820 (1987). [3] D. Ryu and E. T. Vishniac, Astrophysical Journal 368, 411 (1991). [4] E. T. Vishniac, Astrophysical Journal 274, 152 (1983). [5] A. D. Edens, T. Ditmire, J. F. Hansen, et al., Physical Review Letters 95 (2005).
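Fitting the relative perturbation amplitude to a power law in time, as described above, reduces to linear least squares in log-log space. The sketch below uses synthetic data with a hypothetical decay exponent, not the measured values from the experiment.

```python
import numpy as np

def fit_power_law(t, amplitude_ratio):
    """Fit a/R = c * t**alpha by linear least squares in log-log space,
    as one would when fitting perturbation growth on a blast wave."""
    slope, intercept = np.polyfit(np.log(t), np.log(amplitude_ratio), 1)
    return slope, np.exp(intercept)

# Synthetic data: a decaying mode with alpha = -0.3 plus small
# multiplicative noise (both values are illustrative assumptions).
rng = np.random.default_rng(1)
t = np.linspace(1.0, 10.0, 20)
ratio = 2.0 * t**-0.3 * np.exp(rng.normal(0.0, 0.01, t.size))
alpha, c = fit_power_law(t, ratio)
```

The recovered exponent alpha is the quantity compared against the Vishniac-Ryu predictions.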

  10. Demonstration of lithography patterns using reflective e-beam direct write

    NASA Astrophysics Data System (ADS)

    Freed, Regina; Sun, Jeff; Brodie, Alan; Petric, Paul; McCord, Mark; Ronse, Kurt; Haspeslagh, Luc; Vereecke, Bart

    2011-04-01

    Traditionally, e-beam direct write lithography has been too slow for most lithography applications. E-beam direct write lithography has been used for mask writing rather than wafer processing, since the maximum blur requirements limit column beam current, which drives e-beam throughput. Printing small features at a fine pitch with an e-beam tool requires a sacrifice in processing time unless one significantly increases the total number of beams on a single writing tool. Because of the uncertainty with regard to the optical lithography roadmap beyond the 22 nm technology node, the semiconductor equipment industry is in the process of designing and testing e-beam lithography tools with the potential for high volume wafer processing. For this work, we report on the development and current status of a new maskless, direct write e-beam lithography tool which has the potential for high volume lithography at and below the 22 nm technology node. A Reflective Electron Beam Lithography (REBL) tool is being developed for high throughput electron beam direct write maskless lithography. The system is targeting critical patterning steps at the 22 nm node and beyond at a capital cost equivalent to conventional lithography. Reflective Electron Beam Lithography incorporates a number of novel technologies to generate and expose lithographic patterns with a throughput and footprint comparable to current 193 nm immersion lithography systems. A patented, reflective electron optic or Digital Pattern Generator (DPG) enables the unique approach. The Digital Pattern Generator is a CMOS ASIC chip with an array of small, independently controllable lens elements (lenslets), which act as an array of electron mirrors. In this way, the REBL system is capable of generating the pattern to be written using massively parallel exposure by ~1 million beams at extremely high data rates (~1 Tbps). 
A rotary stage concept using a rotating platen carrying multiple wafers optimizes the writing strategy of the DPG to achieve the capability of high throughput for sparse pattern wafer levels. The lens elements on the DPG are fabricated at IMEC (Leuven, Belgium) under IMEC's CMORE program. The CMOS fabricated DPG contains ~ 1,000,000 lens elements, allowing for 1,000,000 individually controllable beamlets. A single lens element consists of 5 electrodes, each of which can be set at controlled voltage levels to either absorb or reflect the electron beam. A system using a linear movable stage and the DPG integrated into the electron optics module was used to expose patterns on device representative wafers. Results of these exposure tests are discussed.

  11. On the implications of the classical ergodic theorems: analysis of developmental processes has to focus on intra-individual variation.

    PubMed

    Molenaar, Peter C M

    2008-01-01

    It is argued that general mathematical-statistical theorems imply that standard statistical analysis techniques of inter-individual variation are invalid to investigate developmental processes. Developmental processes have to be analyzed at the level of individual subjects, using time series data characterizing the patterns of intra-individual variation. It is shown that standard statistical techniques based on the analysis of inter-individual variation appear to be insensitive to the presence of arbitrarily large degrees of inter-individual heterogeneity in the population. An important class of nonlinear epigenetic models of neural growth is described which can explain the occurrence of such heterogeneity in brain structures and behavior. Links with models of developmental instability are discussed. A simulation study based on a chaotic growth model illustrates the invalidity of standard analysis of inter-individual variation, whereas time series analysis of intra-individual variation is able to recover the true state of affairs. (c) 2007 Wiley Periodicals, Inc.

  12. Inverse planning in the age of digital LINACs: station parameter optimized radiation therapy (SPORT)

    NASA Astrophysics Data System (ADS)

    Xing, Lei; Li, Ruijiang

    2014-03-01

    The last few years have seen a number of technical and clinical advances which give rise to a need for innovations in dose optimization and delivery strategies. Technically, a new generation of digital linacs has become available which offers features such as programmable motion between station parameters and high dose-rate Flattening Filter Free (FFF) beams. Current inverse planning methods are designed for traditional machines and cannot accommodate these features of new generation linacs without compromising either dose conformality and/or delivery efficiency. Furthermore, SBRT is becoming increasingly important, which elevates the need for more efficient delivery and improved dose distributions. Here we will give an overview of our recent work in SPORT designed to harness the digital linacs and highlight the essential components of SPORT. We will summarize the pros and cons of traditional beamlet-based optimization (BBO) and direct aperture optimization (DAO) and introduce a new type of algorithm, compressed sensing (CS)-based inverse planning, that is capable of automatically removing redundant segments during optimization and providing a plan with high deliverability in the presence of a large number of station control points (potentially non-coplanar, non-isocentric, and even multi-isocenter). We show that the CS approach takes the interplay between planning and delivery into account and allows us to balance dose optimality and delivery efficiency in a controlled way, providing a viable framework to address various unmet demands of the new generation linacs. A few specific implementation strategies of SPORT in the forms of fixed-gantry and rotational arc delivery are also presented.
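The segment-pruning behaviour of CS-based planning comes from an l1 (sparsity) penalty on the optimization variables. The sketch below is a generic stand-in, not the SPORT algorithm itself: ISTA (proximal gradient with soft-thresholding) applied to min ||Ax - d||^2/2 + λ||x||_1 with nonnegative intensities, on a random toy dose matrix.

```python
import numpy as np

def ista(A, d, lam, n_iter=5000):
    """Iterative soft-thresholding for min ||Ax - d||^2/2 + lam*||x||_1
    subject to x >= 0 (intensities cannot be negative)."""
    L = np.linalg.norm(A, 2) ** 2   # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ x - d)
        # Proximal step: soft-threshold + nonnegativity. Components that
        # contribute little to the fit are driven exactly to zero, which
        # is the "automatic removal of redundant segments" effect.
        x = np.maximum(x - (g + lam) / L, 0.0)
    return x

# Toy problem: the target dose d is generated by only three of the
# thirty candidate variables (all sizes and values are illustrative).
rng = np.random.default_rng(2)
A = rng.normal(size=(40, 30))
x_true = np.zeros(30)
x_true[[3, 11, 27]] = [2.0, 1.5, 3.0]
d = A @ x_true
x = ista(A, d, lam=0.5)
```

The recovered x fits the target while keeping only a handful of active components, in contrast to an unpenalized least-squares fit.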

  13. Sparsity constrained split feasibility for dose-volume constraints in inverse planning of intensity-modulated photon or proton therapy

    NASA Astrophysics Data System (ADS)

    Penfold, Scott; Zalas, Rafał; Casiraghi, Margherita; Brooke, Mark; Censor, Yair; Schulte, Reinhard

    2017-05-01

    A split feasibility formulation for the inverse problem of intensity-modulated radiation therapy treatment planning with dose-volume constraints included in the planning algorithm is presented. It involves a new type of sparsity constraint that enables the inclusion of a percentage-violation constraint in the model problem and its handling by continuous (as opposed to integer) methods. We propose an iterative algorithmic framework for solving such a problem by applying the feasibility-seeking CQ-algorithm of Byrne combined with the automatic relaxation method that uses cyclic projections. Detailed implementation instructions are furnished. Functionality of the algorithm was demonstrated through the creation of an intensity-modulated proton therapy plan for a simple 2D C-shaped geometry and also for a realistic base-of-skull chordoma treatment site. Monte Carlo simulations of proton pencil beams of varying energy were conducted to obtain dose distributions for the 2D test case. A research release of the Pinnacle 3 proton treatment planning system was used to extract pencil beam doses for a clinical base-of-skull chordoma case. In both cases the beamlet doses were calculated to satisfy dose-volume constraints according to our new algorithm. Examination of the dose-volume histograms following inverse planning with our algorithm demonstrated that it performed as intended. The application of our proposed algorithm to dose-volume constraint inverse planning was successfully demonstrated. Comparison with optimized dose distributions from the research release of the Pinnacle 3 treatment planning system showed the algorithm could achieve equivalent or superior results.
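The core iteration behind the framework above, Byrne's CQ algorithm, is short enough to sketch. This is not the paper's full method (no percentage-violation sparsity constraint, no automatic relaxation with cyclic projections), just the underlying feasibility-seeking step on a toy problem made consistent by construction.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy inverse plan: A maps 4 beamlet intensities to 6 voxel doses.
# The dose window Q is built around a known feasible plan so that the
# split feasibility problem is consistent (all values illustrative).
A = rng.random((6, 4))
x_feasible = np.array([1.0, 0.5, 2.0, 1.5])
d0 = A @ x_feasible
lower, upper = d0 - 0.5, d0 + 0.5

def cq_algorithm(A, lower, upper, n_iter=10000):
    """Byrne's CQ feasibility-seeking iteration
        x_{k+1} = P_C(x_k + gamma * A^T (P_Q(A x_k) - A x_k)),
    with C the nonnegative orthant (deliverable intensities) and Q the
    per-voxel dose box. Any gamma in (0, 2/||A||^2) guarantees
    convergence when the problem is consistent."""
    gamma = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        y = A @ x
        step = A.T @ (np.clip(y, lower, upper) - y)   # P_Q minus identity
        x = np.clip(x + gamma * step, 0.0, None)      # P_C
    return x

x = cq_algorithm(A, lower, upper)
dose = A @ x
```

The iterate stays nonnegative throughout, and the delivered dose converges into the prescribed window.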

  14. Tools of Individual Evaluation and Prestige Recognition in Spain: How Sexenio 'Mints the Golden Coin of Authority'

    ERIC Educational Resources Information Center

    Marini, Giulio

    2018-01-01

    Individual experiences in dealing with individual evaluations are studied through a national documental analysis and qualitative interviews. The analysis considers three main individual assessments designed to measure individual credentials or performance: "sexenio" (research and third mission), "quinquenio" (teaching) and…

  15. Methods for network meta-analysis of continuous outcomes using individual patient data: a case study in acupuncture for chronic pain.

    PubMed

    Saramago, Pedro; Woods, Beth; Weatherly, Helen; Manca, Andrea; Sculpher, Mark; Khan, Kamran; Vickers, Andrew J; MacPherson, Hugh

    2016-10-06

    Network meta-analysis methods, which are an extension of the standard pair-wise synthesis framework, allow for the simultaneous comparison of multiple interventions and consideration of the entire body of evidence in a single statistical model. There are well-established advantages to using individual patient data to perform network meta-analysis and methods for network meta-analysis of individual patient data have already been developed for dichotomous and time-to-event data. This paper describes appropriate methods for the network meta-analysis of individual patient data on continuous outcomes. This paper introduces and describes network meta-analysis of individual patient data models for continuous outcomes using the analysis of covariance framework. Comparisons are made between this approach and change score and final score only approaches, which are frequently used and have been proposed in the methodological literature. A motivating example on the effectiveness of acupuncture for chronic pain is used to demonstrate the methods. Individual patient data on 28 randomised controlled trials were synthesised. Consistency of endpoints across the evidence base was obtained through standardisation and mapping exercises. Individual patient data availability avoided the use of non-baseline-adjusted models, allowing instead for analysis of covariance models to be applied and thus improving the precision of treatment effect estimates while adjusting for baseline imbalance. The network meta-analysis of individual patient data using the analysis of covariance approach is advocated to be the most appropriate modelling approach for network meta-analysis of continuous outcomes, particularly in the presence of baseline imbalance. Further methods developments are required to address the challenge of analysing aggregate level data in the presence of baseline imbalance.
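The analysis-of-covariance adjustment at the heart of this approach can be illustrated on a single two-arm trial. A minimal sketch, assuming simulated individual patient data; the multi-trial network layer and random effects of the actual method are omitted:

```python
import numpy as np

def ancova_treatment_effect(baseline, final, treated):
    # ANCOVA on individual patient data: regress the final score on an
    # intercept, the baseline score, and a treatment indicator; the
    # coefficient on the indicator is the baseline-adjusted effect.
    X = np.column_stack([np.ones_like(final), baseline, treated])
    beta, *_ = np.linalg.lstsq(X, final, rcond=None)
    return beta[2]

# Simulated two-arm trial (hypothetical data): true adjusted effect = -1.0.
rng = np.random.default_rng(0)
baseline = rng.normal(50.0, 10.0, 200)
treated = np.repeat([0.0, 1.0], 100)
final = 5.0 + 0.6 * baseline - 1.0 * treated + rng.normal(0.0, 0.1, 200)
effect = ancova_treatment_effect(baseline, final, treated)
```

Unlike a final-score-only analysis, the baseline covariate absorbs chance imbalance between arms, which is the precision gain the paper attributes to having individual patient data.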

  16. Multi-GPU implementation of a VMAT treatment plan optimization algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tian, Zhen, E-mail: Zhen.Tian@UTSouthwestern.edu, E-mail: Xun.Jia@UTSouthwestern.edu, E-mail: Steve.Jiang@UTSouthwestern.edu; Folkerts, Michael; Tan, Jun

Purpose: Volumetric modulated arc therapy (VMAT) optimization is a computationally challenging problem due to its large data size, high degrees of freedom, and many hardware constraints. High-performance graphics processing units (GPUs) have been used to speed up the computations. However, a GPU’s relatively small memory cannot hold a large dose-deposition coefficient (DDC) matrix, e.g., in cases with a large target size, multiple targets, multiple arcs, and/or a small beamlet size. The main purpose of this paper is to report an implementation of a column-generation-based VMAT algorithm, previously developed in the authors’ group, on a multi-GPU platform to solve the memory limitation problem. While the column-generation-based VMAT algorithm has been previously developed, the GPU implementation details have not been reported. Hence, another purpose is to present detailed techniques employed for GPU implementation. The authors also would like to utilize this particular problem as an example problem to study the feasibility of using a multi-GPU platform to solve large-scale problems in medical physics. Methods: The column-generation approach generates VMAT apertures sequentially by solving a pricing problem (PP) and a master problem (MP) iteratively. In the authors’ method, the sparse DDC matrix is first stored on the CPU in coordinate list (COO) format. On the GPU side, this matrix is split into four submatrices according to beam angles, which are stored on four GPUs in compressed sparse row format. Computation of beamlet price, the first step in PP, is accomplished using multiple GPUs. A fast inter-GPU data transfer scheme is accomplished using peer-to-peer access. The remaining steps of the PP and MP are implemented on the CPU or a single GPU due to their modest problem scale and computational loads. The Barzilai-Borwein algorithm with a subspace step scheme is adopted to solve the MP problem.
A head and neck (H and N) cancer case is then used to validate the authors’ method. The authors also compare their multi-GPU implementation with three different single-GPU implementation strategies, i.e., truncating the DDC matrix (S1), repeatedly transferring the DDC matrix between CPU and GPU (S2), and porting computations involving the DDC matrix to the CPU (S3), in terms of both plan quality and computational efficiency. Two more H and N patient cases and three prostate cases are used to demonstrate the advantages of the authors’ method. Results: The authors’ multi-GPU implementation can finish the optimization process within ∼1 min for the H and N patient case. S1 leads to inferior plan quality, although its total time was 10 s shorter than the multi-GPU implementation due to the reduced matrix size. S2 and S3 yield the same plan quality as the multi-GPU implementation but take ∼4 and ∼6 min, respectively. High computational efficiency was consistently achieved for the other five patient cases tested, with VMAT plans of clinically acceptable quality obtained within 23–46 s. Conversely, to obtain clinically comparable or acceptable plans for all six of these VMAT cases tested in this paper, the optimization time needed by a commercial treatment planning system running on the CPU was found to be on the order of several minutes. Conclusions: The results demonstrate that the multi-GPU implementation of the authors’ column-generation-based VMAT optimization can handle the large-scale VMAT optimization problem efficiently without sacrificing plan quality. The authors’ study may serve as an example to shed some light on other large-scale medical physics problems that require multi-GPU techniques.
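The DDC-partitioning step (COO storage on the CPU, one per-angle CSR submatrix per GPU) can be illustrated on the CPU side, with SciPy's sparse types standing in for the GPU storage. The function name and the beamlet-to-angle-group mapping below are illustrative, not from the paper:

```python
import numpy as np
from scipy import sparse

def split_ddc_by_angle(ddc_coo, angle_group_of_beamlet, n_groups):
    # Split a COO dose-deposition matrix (voxels x beamlets) into one CSR
    # submatrix per beam-angle group, mimicking one submatrix per GPU.
    n_voxels, n_beamlets = ddc_coo.shape
    rows, cols, vals = ddc_coo.row, ddc_coo.col, ddc_coo.data
    subs = []
    for g in range(n_groups):
        local_cols = np.where(angle_group_of_beamlet == g)[0]
        remap = -np.ones(n_beamlets, dtype=int)   # global -> local column index
        remap[local_cols] = np.arange(len(local_cols))
        mask = angle_group_of_beamlet[cols] == g  # entries belonging to group g
        sub = sparse.coo_matrix(
            (vals[mask], (rows[mask], remap[cols[mask]])),
            shape=(n_voxels, len(local_cols))).tocsr()
        subs.append(sub)
    return subs

# Tiny example: 3 voxels, 4 beamlets, beamlets alternating between 2 groups.
D = np.array([[0.0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11]])
groups = np.array([0, 1, 0, 1])
subs = split_ddc_by_angle(sparse.coo_matrix(D), groups, 2)
```

Each submatrix keeps the full voxel dimension, so per-group dose contributions can be computed independently and summed, which is what makes the beamlet-price computation parallelizable across devices.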

  17. Individualism: a valid and important dimension of cultural differences between nations.

    PubMed

    Schimmack, Ulrich; Oishi, Shigehiro; Diener, Ed

    2005-01-01

    Oyserman, Coon, and Kemmelmeier's (2002) meta-analysis suggested problems in the measurement of individualism and collectivism. Studies using Hofstede's individualism scores show little convergent validity with more recent measures of individualism and collectivism. We propose that the lack of convergent validity is due to national differences in response styles. Whereas Hofstede statistically controlled for response styles, Oyserman et al.'s meta-analysis relied on uncorrected ratings. Data from an international student survey demonstrated convergent validity between Hofstede's individualism dimension and horizontal individualism when response styles were statistically controlled, whereas uncorrected scores correlated highly with the individualism scores in Oyserman et al.'s meta-analysis. Uncorrected horizontal individualism scores and meta-analytic individualism scores did not correlate significantly with nations' development, whereas corrected horizontal individualism scores and Hofstede's individualism dimension were significantly correlated with development. This pattern of results suggests that individualism is a valid construct for cross-cultural comparisons, but that the measurement of this construct needs improvement.

  18. Individual Differences, Intelligence, and Behavior Analysis

    ERIC Educational Resources Information Center

    Williams, Ben; Myerson, Joel; Hale, Sandra

    2008-01-01

    Despite its avowed goal of understanding individual behavior, the field of behavior analysis has largely ignored the determinants of consistent differences in level of performance among individuals. The present article discusses major findings in the study of individual differences in intelligence from the conceptual framework of a functional…

  19. WE-AB-209-05: Development of an Ultra-Fast High Quality Whole Breast Radiotherapy Treatment Planning System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sheng, Y; Li, T; Yoo, S

    2016-06-15

Purpose: To enable near-real-time (<20 sec) and interactive planning without compromising quality for whole breast RT treatment planning using tangential fields. Methods: Whole breast RT plans from 20 patients treated with single energy (SE, 6MV, 10 patients) or mixed energy (ME, 6/15MV, 10 patients) were randomly selected for model training. An additional 20 cases were used as a validation cohort. The planning process for a new case consists of three fully automated steps: 1. Energy Selection. A classification model automatically selects the energy level. To build the energy selection model, principal component analysis (PCA) was applied to the digitally reconstructed radiographs (DRRs) of the training cases to extract the anatomy-energy relationship. 2. Fluence Estimation. Once the energy is selected, a random forest (RF) model generates the initial fluence. This model summarizes the relationship between the patient anatomy’s shape-based features and the output fluence. 3. Fluence Fine-tuning. This step balances the overall dose contribution throughout the whole breast tissue by automatically selecting reference points and applying centrality correction. Fine-tuning works at the beamlet level until the dose distribution meets clinical objectives. Prior to finalization, physicians can also make patient-specific trade-offs between target coverage and high-dose volumes. The proposed method was validated by comparing auto-plans with manually generated clinical-plans using the Wilcoxon signed-rank test. Results: In 19/20 cases the model suggested the same energy combination as clinical-plans. The target volume coverage V100% was 78.1±4.7% for auto-plans, and 79.3±4.8% for clinical-plans (p=0.12). Volumes receiving 105% Rx were 69.2±78.0cc for auto-plans compared to 83.9±87.2cc for clinical-plans (p=0.13). The mean V10Gy, V20Gy of the ipsilateral lung was 24.4±6.7%, 18.6±6.0% for auto-plans and 24.6±6.7%, 18.9±6.1% for clinical-plans (p=0.04, <0.001).
Total computational time for auto-plans was <20 s. Conclusion: We developed an automated method that generates breast radiotherapy plans with accurate energy selection, similar target volume coverage, reduced hotspot volumes, and a significant reduction in planning time, allowing for near-real-time planning.
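Step 1 above (PCA on DRR features followed by classification) can be sketched as follows. The nearest-centroid classifier is a simplified stand-in for the paper's trained energy-selection model, and all feature data in the usage example are synthetic:

```python
import numpy as np

def fit_pca(X, k):
    # Top-k principal components of training feature vectors
    # (e.g. flattened DRR images), via SVD of the centered data.
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]

def select_energy(X_train, labels, x_new, k=2):
    # Project onto the top-k principal components, then pick the energy
    # class with the nearest centroid (a minimal stand-in for the paper's
    # classification model, which is not specified in detail here).
    mu, W = fit_pca(X_train, k)
    Z = (X_train - mu) @ W.T
    z = (x_new - mu) @ W.T
    classes = np.unique(labels)
    centroids = np.array([Z[labels == c].mean(axis=0) for c in classes])
    return classes[np.argmin(np.linalg.norm(centroids - z, axis=1))]

# Synthetic anatomy features: two well-separated clusters for SE vs ME.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (10, 20)),
               rng.normal(8.0, 1.0, (10, 20))])
labels = np.array(["SE"] * 10 + ["ME"] * 10)
choice = select_energy(X, labels, np.full(20, 8.0))
```

The PCA step is what keeps the classifier tractable: it reduces each high-dimensional DRR to a handful of anatomy-describing coordinates before any decision is made.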

  20. Characterization of individual mouse cerebrospinal fluid proteomes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Jeffrey S.; Angel, Thomas E.; Chavkin, Charles

    2014-03-20

Analysis of cerebrospinal fluid (CSF) offers key insight into the status of the central nervous system. Characterization of murine CSF proteomes can provide a valuable resource for studying central nervous system injury and disease in animal models. However, the small volume of CSF in mice has thus far limited individual mouse proteome characterization. Through non-terminal CSF extractions in C57Bl/6 mice and high-resolution liquid chromatography-mass spectrometry analysis of individual murine samples, we report the most comprehensive proteome characterization of individual murine CSF to date. Utilizing stringent protein inclusion criteria that required the identification of at least two unique peptides (1% false discovery rate at the peptide level), we identified a total of 566 unique proteins, including 128 proteins from three individual CSF samples that have been previously identified in brain tissue. Our methods and analysis provide a mechanism for individual murine CSF proteome analysis.
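The two-unique-peptide inclusion criterion described above is straightforward to express in code; the protein and peptide names in the example are hypothetical:

```python
from collections import defaultdict

def proteins_meeting_criterion(identifications, min_unique_peptides=2):
    # Apply the inclusion criterion from the text: keep only proteins
    # identified by at least `min_unique_peptides` distinct peptide
    # sequences (FDR filtering is assumed to have happened upstream).
    peptides_per_protein = defaultdict(set)
    for protein, peptide in identifications:
        peptides_per_protein[protein].add(peptide)
    return sorted(p for p, seqs in peptides_per_protein.items()
                  if len(seqs) >= min_unique_peptides)

# Hypothetical (protein, peptide sequence) identifications; APOE is seen
# twice but with only one unique peptide, so it is excluded.
hits = [("ALB", "LVNEVTEFAK"), ("ALB", "YLYEIAR"),
        ("APOE", "LGPLVEQGR"), ("APOE", "LGPLVEQGR")]
passing = proteins_meeting_criterion(hits)
```

Using a set per protein is the key detail: repeated spectra of the same peptide must not count as independent evidence.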

  1. Use of Neuroanatomical Pattern Classification to Identify Subjects in At-Risk Mental States of Psychosis and Predict Disease Transition

    PubMed Central

    Koutsouleris, Nikolaos; Meisenzahl, Eva M.; Davatzikos, Christos; Bottlender, Ronald; Frodl, Thomas; Scheuerecker, Johanna; Schmitt, Gisela; Zetzsche, Thomas; Decker, Petra; Reiser, Maximilian; Möller, Hans-Jürgen; Gaser, Christian

    2014-01-01

    Context Identification of individuals at high risk of developing psychosis has relied on prodromal symptomatology. Recently, machine learning algorithms have been successfully used for magnetic resonance imaging–based diagnostic classification of neuropsychiatric patient populations. Objective To determine whether multivariate neuroanatomical pattern classification facilitates identification of individuals in different at-risk mental states (ARMS) of psychosis and enables the prediction of disease transition at the individual level. Design Multivariate neuroanatomical pattern classification was performed on the structural magnetic resonance imaging data of individuals in early or late ARMS vs healthy controls (HCs). The predictive power of the method was then evaluated by categorizing the baseline imaging data of individuals with transition to psychosis vs those without transition vs HCs after 4 years of clinical follow-up. Classification generalizability was estimated by cross-validation and by categorizing an independent cohort of 45 new HCs. Setting Departments of Psychiatry and Psychotherapy, Ludwig-Maximilians-University, Munich, Germany. Participants The first classification analysis included 20 early and 25 late at-risk individuals and 25 matched HCs. The second analysis consisted of 15 individuals with transition, 18 without transition, and 17 matched HCs. Main Outcome Measures Specificity, sensitivity, and accuracy of classification. Results The 3-group, cross-validated classification accuracies of the first analysis were 86% (HCs vs the rest), 91% (early at-risk individuals vs the rest), and 86% (late at-risk individuals vs the rest). The accuracies in the second analysis were 90% (HCs vs the rest), 88% (individuals with transition vs the rest), and 86% (individuals without transition vs the rest). Independent HCs were correctly classified in 96% (first analysis) and 93% (second analysis) of cases. 
Conclusions Different ARMSs and their clinical outcomes may be reliably identified on an individual basis by assessing patterns of whole-brain neuroanatomical abnormalities. These patterns may serve as valuable biomarkers for the clinician to guide early detection in the prodromal phase of psychosis. PMID:19581561

  2. Analyzing Developmental Processes on an Individual Level Using Nonstationary Time Series Modeling

    ERIC Educational Resources Information Center

    Molenaar, Peter C. M.; Sinclair, Katerina O.; Rovine, Michael J.; Ram, Nilam; Corneal, Sherry E.

    2009-01-01

    Individuals change over time, often in complex ways. Generally, studies of change over time have combined individuals into groups for analysis, which is inappropriate in most, if not all, studies of development. The authors explain how to identify appropriate levels of analysis (individual vs. group) and demonstrate how to estimate changes in…

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ranganathan, V; Kumar, P; Bzdusek, K

Purpose: We propose a novel data-driven method to predict the achievability of clinical objectives upfront before invoking the IMRT optimization. Methods: A new metric called “Geometric Complexity (GC)” is used to estimate the achievability of clinical objectives. Here, GC is a measure of the number of “unmodulated” beamlets, or rays, that intersect both the region of interest (ROI) and the target volume. We first compute the geometric complexity ratio (GCratio) between the GC of a ROI (say, parotid) in a reference plan and the GC of the same ROI in a given plan. The GCratio of a ROI indicates the relative geometric complexity of the ROI as compared to the same ROI in the reference plan. Hence GCratio can be used to predict whether a defined clinical objective associated with the ROI can be met by the optimizer for a given case. A higher GCratio indicates a lesser likelihood for the optimizer to achieve the clinical objective defined for a given ROI. Similarly, a lower GCratio indicates a higher likelihood for the optimizer to achieve the clinical objective defined for the given ROI. We have evaluated the proposed method on four head and neck cases using the Pinnacle3 (version 9.10.0) Treatment Planning System (TPS). Results: Of the 28 clinical objectives from the four head and neck cases included in the study, 25 were in agreement with the prediction, an agreement of about 89% between predicted and obtained results. The Pearson correlation test shows a positive correlation between predicted and obtained results (Correlation = 0.82, r2 = 0.64, p < 0.005). Conclusion: The study demonstrates the feasibility of the proposed method in head and neck cases for predicting the achievability of clinical objectives with reasonable accuracy.
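The GC and GCratio quantities can be sketched directly from their definitions. Boolean ray-intersection masks are assumed as input, and since the abstract does not fix the normalisation direction of the ratio, current-plan-over-reference-plan is assumed here (so that a larger value means a geometrically harder objective):

```python
import numpy as np

def geometric_complexity(ray_hits_roi, ray_hits_target):
    # GC of a ROI: count of unmodulated rays that intersect both the ROI
    # and the target volume (boolean arrays indexed by ray).
    return int(np.count_nonzero(ray_hits_roi & ray_hits_target))

def gc_ratio(gc_in_reference_plan, gc_in_given_plan):
    # Higher GCratio -> the clinical objective for this ROI is predicted
    # to be harder for the optimizer to achieve. (Direction of the ratio
    # is an assumption; the abstract leaves it implicit.)
    return gc_in_given_plan / gc_in_reference_plan

# Toy masks over 4 candidate rays: 2 rays cross both the ROI and target.
roi_hits = np.array([True, True, False, True])
target_hits = np.array([True, False, False, True])
gc = geometric_complexity(roi_hits, target_hits)
```

The appeal of the metric is that it needs only ray tracing through the contours, so the prediction is available before any fluence optimization is run.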

  4. Improvements of the versatile multiaperture negative ion source NIO1

    NASA Astrophysics Data System (ADS)

    Cavenago, M.; Serianni, G.; De Muri, M.; Veltri, P.; Antoni, V.; Baltador, C.; Barbisan, M.; Brombin, M.; Galatá, A.; Ippolito, N.; Kulevoy, T.; Pasqualotto, R.; Petrenko, S.; Pimazzoni, A.; Recchia, M.; Sartori, E.; Taccogna, F.; Variale, V.; Zaniol, B.; Barbato, P.; Baseggio, L.; Cervaro, V.; Fasolo, D.; Franchin, L.; Ghiraldelli, R.; Laterza, B.; Maniero, M.; Martini, D.; Migliorato, L.; Minarello, A.; Molon, F.; Moro, G.; Patton, T.; Ravarotto, D.; Rizzieri, R.; Rizzolo, A.; Sattin, M.; Stivanello, F.; Zucchetti, S.

    2017-08-01

The ion source NIO1 (Negative Ion Optimization 1) was developed and installed as a reduced-size model of the multi-aperture sources used in neutral beam injectors. NIO1 beam optics is optimized for a 135 mA H- current (subdivided into 9 beamlets) at a Vs = 60 kV extraction voltage, with an electron-to-ion current ratio Rj of up to 2. Depending on the gas pressure used, NIO1 has so far been operated with Vs < 25 kV for beam extraction and Vs = 60 kV for insulation tests. The distinction between capacitively coupled plasma (E-mode, consistent with a low plasma electron density ne) and inductively coupled plasma (H-mode, requiring larger ne) was clearly related to several experimental signatures, and was confirmed for several gases when the applied radiofrequency power exceeds a given threshold Pt (with hysteresis). For hydrogen, Pt was reduced below 1 kW with a clean rf window and molybdenum liners on the other walls; for oxygen, Pt ≤ 400 W. Beams of H- and O- were separately extracted; since no caesium has yet been introduced into the source, the expected ion currents are lower than 5 mA; this requires a lower acceleration voltage Vs (to keep the same perveance). The NIO1 caesium oven was separately tested and Cs dispensers are in development. Increasing the current in the magnetic filter circuit, modifying its shape, and increasing the bias voltage were helpful in reducing Rj (still very large up to now: about 150 for oxygen and 40 for hydrogen), in qualitative agreement with theoretical and numerical models. A second bias voltage was tested for hydrogen. Beam footprints and a spectral emission sample are shown.
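The parenthetical remark about keeping the same perveance can be made concrete: perveance is P = I / V^(3/2), so holding P fixed while the current drops from the 135 mA design value to the ~5 mA uncaesiated value determines the reduced voltage. A small sketch (the currents are taken from the abstract; the scaling law is the standard Child-Langmuir-type relation, not a formula given in the paper):

```python
def perveance(current_a, voltage_v):
    # Beam perveance P = I / V**(3/2); matched beam optics keeps P constant.
    return current_a / voltage_v ** 1.5

def matched_voltage(ref_current_a, ref_voltage_v, new_current_a):
    # Voltage keeping perveance fixed when the current changes:
    # V = V0 * (I / I0)**(2/3).
    return ref_voltage_v * (new_current_a / ref_current_a) ** (2.0 / 3.0)

# NIO1 design point: 135 mA at 60 kV; at 5 mA the matched voltage is
# 60 kV * (5/135)**(2/3) = 60 kV / 9, i.e. about 6.7 kV.
v_5ma = matched_voltage(0.135, 60e3, 0.005)
```

This is consistent with the abstract's note that uncaesiated operation requires a lower acceleration voltage, well under the 25 kV used so far.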

  5. Measurement of Radiation Symmetry in Z-Pinch Driven Hohlraums

    NASA Astrophysics Data System (ADS)

    Hanson, David L.

    2001-10-01

The z-pinch driven hohlraum (ZPDH) is a promising approach to high-yield inertial confinement fusion currently being characterized in experiments on the Sandia Z accelerator [1]. In this concept [2], x rays are produced by an axial z-pinch in a primary hohlraum at each end of a secondary hohlraum. A fusion capsule in the secondary is imploded by a symmetric x-ray flux distribution, effectively smoothed by wall reemission during transport to the capsule position. Capsule radiation symmetry, a critical issue in the design of such a system, is influenced by hohlraum geometry, wall motion and time-dependent albedo, as well as power balance and pinch timing between the two z-pinch x-ray sources. In initial symmetry studies on Z, we used solid low density burnthrough spheres to diagnose highly asymmetric, single-sided-drive hohlraum geometries. We then applied this technique to the more symmetric double z-pinch geometry [3]. As a result of design improvements, radiation flux symmetry in Z double-pinch wire array experiments now exceeds the measurement sensitivity of this self-backlit foam ball symmetry diagnostic (15% max-min flux asymmetry). To diagnose radiation symmetry at the 2 - 5% level attainable with our present ZPDH designs, we are using high-energy x rays produced by the recently-completed Z-Beamlet laser backlighter for point-projection imaging of thin-wall implosion and symmetry capsules. We will present the results of polar flux symmetry measurements on Z for several ZPDH capsule geometries together with radiosity and radiation-hydrodynamics simulations for comparison. [1] M. E. Cuneo et al., Phys. Plasmas 8, 2257 (2001); [2] J. H. Hammer et al., Phys. Plasmas 6, 2129 (1999); [3] D. L. Hanson et al., Bull. Am. Phys. Soc. 45, 360 (2000).

  6. Treatment of acne scarring with a novel fractionated, dual‐wavelength, picosecond‐domain laser incorporating a novel holographic beam‐splitter

    PubMed Central

    Schomacker, Kevin T.; Basilavecchio, Lisa D.; Plugis, Jessica M.; Bhawalkar, Jayant D.

    2017-01-01

Background and Objectives Fractional treatment with a dual-wavelength 1,064 and 532 nm picosecond‐domain laser, delivering a 10 × 10 array of highly focused beamlets via a holographic optic, was investigated for the treatment of acne scars. Study Twenty‐seven of 31 subjects completed the study; 19 were treated using 1,064 nm and 8 were treated at 532 nm, all receiving four monthly treatments. Blinded evaluation of digital images by three physician evaluators, comparing pre‐ and 3‐month post‐treatment images, measured efficacy using a 10‐point scale. Subject self‐assessments of treatment effects were also recorded. Safety was measured by recording subject discomfort scores and adverse effects. Results Blinded reviewers correctly identified the baseline image in 61 of the 81 image sets (75%); baseline acne scar scores were 1.8 ± 0.7 and 1.8 ± 0.5 for the 1,064 and 532 nm cohorts, and decreased to 1.1 ± 0.5 (P < 0.001) and 1.1 ± 0.0 (P < 0.005), respectively. Post‐treatment erythema, mild edema, and petechiae were the only side effects noted. Conclusion The 1,064 and 532 nm picosecond‐domain laser incorporating a 10 × 10 holographic beam‐splitting handpiece was found to be safe and effective for the treatment of facial acne scars. The treatments were well tolerated and the subjects experienced little to no downtime. Lasers Surg. Med. 49:796–802, 2017. © 2017 The Authors. Lasers in Surgery and Medicine Published by Wiley Periodicals, Inc. PMID:28960395

  7. Optical damage testing at the Z-Backlighter facility at Sandia National Laboratories

    NASA Astrophysics Data System (ADS)

    Kimmel, Mark; Rambo, Patrick; Broyles, Robin; Geissel, Matthias; Schwarz, Jens; Bellum, John; Atherton, Briggs

    2009-10-01

To enable laser-based radiography of high energy density physics events on the Z-Accelerator [4,5] at Sandia National Laboratories, a facility known as the Z-Backlighter has been developed. Two Nd:Phosphate glass lasers are used to create x-rays and/or proton beams capable of this radiographic diagnosis: Z-Beamlet (a multi-kilojoule laser operating at 527 nm in a few nanoseconds) and Z-Petawatt (a several hundred joule laser operating at 1054 nm in the subpicosecond regime) [1,2]. At the energy densities used in these systems, it is necessary to use high damage threshold optical materials, some of which are poorly characterized (especially for sub-picosecond pulses). For example, Sandia has developed a meter-class dielectric coating capability for system optics. Damage testing can be performed by external facilities for nanosecond 532 nm pulses, measuring high reflector coating damage thresholds >80 J/cm2 and antireflection coating damage thresholds >20 J/cm2 [3]. However, available external testing capabilities do not use femtosecond/picosecond-scale laser pulses. To this end, we have constructed a sub-picosecond-laser-based optical damage test system. The damage tester also allows for testing in a vacuum vessel, which is relevant since many optics in the Z-Backlighter system are used in vacuum. This paper will present the results of laser-induced damage testing performed both in atmosphere and in vacuum, with 1054 nm sub-picosecond laser pulses. Optical materials/coatings discussed are: bare fused silica and protected gold used for benchmarking; BK7; Zerodur; protected silver; and dielectric optical coatings (hafnia/silica layer pairs) produced by Sandia's in-house meter-class coating capability.

  8. Negative ion beam development at Cadarache (invited)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simonin, A.; Bucalossi, J.; Desgranges, C.

    1996-03-01

Neutral beam injection (NBI) is one of the candidates for plasma heating and current drive in the new generation of large magnetic fusion devices (ITER). In order to produce the required deuterium atom beams with energies of 1 MeV and powers of tens of MW, negative D⁻ ion beams are required. For this purpose, multiampere D⁻ beam production and 1 MeV electrostatic acceleration is being studied at Cadarache. The SINGAP experiment, a 1 MeV 0.1 A D⁻ multisecond beam accelerator facility, has recently started operation. It is equipped with a Pagoda ion source, a multiaperture 60 keV preaccelerator and a 1 MV 120 mA power supply. The particular feature of SINGAP is that the postaccelerator merges the 60 keV beamlets, aiming at accelerating the whole beam to 1 MeV in a single gap. The 1 MV level was obtained in less than 2 weeks, the accumulated voltage on-time being ≈22 min. A second test bed, MANTIS, is devoted to the development of multiampere D⁻ sources. It is capable of driving discharges with currents up to 2500 A at arc voltages up to 150 V. A large multicusp source has been tested in pure volume and cesiated operation. With cesium seeding, an accelerated D⁻ beam current density of up to 5.2 mA/cm² (2 A of D⁻) was obtained. A modification of the extractor is underway in order to improve this performance. A 3D Monte Carlo code has been developed to simulate the negative ion transport in magnetized plasma sources and optimize the magnetic field configuration of the large area D⁻ sources. © 1996 American Institute of Physics.

  9. Test of 1D carbon-carbon composite prototype tiles for the SPIDER diagnostic calorimeter

    NASA Astrophysics Data System (ADS)

    Serianni, G.; Pimazzoni, A.; Canton, A.; Palma, M. Dalla; Delogu, R.; Fasolo, D.; Franchin, L.; Pasqualotto, R.; Tollin, M.

    2017-08-01

Additional heating will be provided to the thermonuclear fusion experiment ITER by injection of neutral beams from accelerated negative ions. In the SPIDER test facility, under construction at Consorzio RFX in Padova (Italy), the production of negative ions will be studied and optimised. To this purpose the STRIKE (Short-Time Retractable Instrumented Kalorimeter Experiment) diagnostic will be used to characterise the SPIDER beam during short operation (several seconds) and to verify whether the beam meets the ITER requirement regarding the maximum allowed beam non-uniformity (below ±10%). The most important measurements performed by STRIKE are beam uniformity, beamlet divergence and stripping losses. The major components of STRIKE are 16 1D-CFC (Carbon matrix-Carbon Fibre reinforced Composite) tiles, observed on the rear side by a thermal camera. The requirements of the 1D CFC material include a large thermal conductivity along the tile thickness (at least 10 times larger than in the other directions); low specific heat and density; uniform parameters over the tile surface; and the capability to withstand localised heat loads resulting in steep temperature gradients. 1D CFC is thus a highly anisotropic and delicate material, not commercially available, and prototypes are being specifically realised. This contribution gives an overview of the tests performed on the CFC prototype tiles, aimed at verifying their thermal behaviour. The spatial uniformity of the parameters and the ratio between the thermal conductivities are assessed by means of a power laser at Consorzio RFX. Dedicated linear and non-linear simulations are carried out to interpret the experiments and to estimate the thermal conductivities; these simulations are described and a comparison of the experimental data with the simulation results is presented.

  10. Minimizing scatter-losses during pre-heat for magneto-inertial fusion targets

    NASA Astrophysics Data System (ADS)

    Geissel, Matthias; Harvey-Thompson, Adam J.; Awe, Thomas J.; Bliss, David E.; Glinsky, Michael E.; Gomez, Matthew R.; Harding, Eric; Hansen, Stephanie B.; Jennings, Christopher; Kimmel, Mark W.; Knapp, Patrick; Lewis, Sean M.; Peterson, Kyle; Schollmeier, Marius; Schwarz, Jens; Shores, Jonathon E.; Slutz, Stephen A.; Sinars, Daniel B.; Smith, Ian C.; Speas, C. Shane; Vesey, Roger A.; Weis, Matthew R.; Porter, John L.

    2018-02-01

The size, temporal and spatial shape, and energy content of a laser pulse for the pre-heat phase of magneto-inertial fusion affect the ability to penetrate the window of the laser-entrance-hole and to heat the fuel behind it. High laser intensities and dense targets are subject to laser-plasma-instabilities (LPI), which can lead to an effective loss of pre-heat energy or to pronounced heating of areas that should stay unexposed. While this problem has been the subject of many studies over the last decades, the investigated parameters were typically geared towards traditional laser driven Inertial Confinement Fusion (ICF) with densities either at 10% and above or at 1% and below the laser's critical density, electron temperatures of 3-5 keV, and laser intensities near (or in excess of) 1 × 10¹⁵ W/cm². In contrast, Magnetized Liner Inertial Fusion (MagLIF) [Slutz et al., Phys. Plasmas 17, 056303 (2010) and Slutz and Vesey, Phys. Rev. Lett. 108, 025003 (2012)] currently operates at 5% of the laser's critical density using much thicker windows (1.5-3.5 μm) than the sub-micron thick windows of traditional ICF hohlraum targets. This article describes the Pecos target area at Sandia National Laboratories using the Z-Beamlet Laser Facility [Rambo et al., Appl. Opt. 44(12), 2421 (2005)] as a platform to study laser induced pre-heat for magneto-inertial fusion targets, and the related progress for Sandia's MagLIF program. Forward and backward scattered light were measured and minimized at larger spatial scales with lower densities, temperatures, and powers compared to LPI studies available in literature.

  11. Sci—Thur PM: Planning and Delivery — 03: Automated delivery and quality assurance of a modulated electron radiation therapy plan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Connell, T; Papaconstadopoulos, P; Alexander, A

    2014-08-15

Modulated electron radiation therapy (MERT) offers the potential to improve healthy tissue sparing through increased dose conformity. Challenges remain, however, in accurate beamlet dose calculation, plan optimization, collimation method and delivery accuracy. In this work, we investigate the accuracy and efficiency of an end-to-end MERT plan and automated-delivery workflow for the electron boost portion of a previously treated whole breast irradiation case. Dose calculations were performed using Monte Carlo methods and beam weights were determined using a research-based treatment planning system capable of inverse optimization. The plan was delivered to radiochromic film placed in a water equivalent phantom for verification, using an automated motorized tertiary collimator. The automated delivery, which covered 4 electron energies, 196 subfields and 6183 total MU, was completed in 25.8 minutes, including 6.2 minutes of beam-on time with the remainder of the delivery time spent on collimator leaf motion and the automated interfacing with the accelerator in service mode. The delivery time could be reduced by 5.3 minutes with minor electron collimator modifications, and the beam-on time could be reduced by an estimated factor of 2–3 through redesign of the scattering foils. Comparison of the planned and delivered film dose gave 3%/3 mm gamma pass rates of 62.1, 99.8, 97.8, 98.3, and 98.7 percent for the 9, 12, 16, 20 MeV, and combined energy deliveries, respectively. Good results were also seen in the delivery verification performed with a MapCHECK 2 device. The results showed that accurate and efficient MERT delivery is possible with current technologies.
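The 3%/3 mm gamma criterion used for the film comparison can be illustrated with a minimal one-dimensional global-gamma implementation. Real film analysis is two-dimensional and typically interpolates the reference distribution; the dose profiles in the test below are toy data:

```python
import numpy as np

def gamma_pass_rate(ref_dose, meas_dose, spacing_mm,
                    dose_tol=0.03, dist_tol_mm=3.0):
    # Global 1-D gamma analysis: for each measured point, gamma is the
    # minimum over reference points of
    #   sqrt( (dx / dist_tol)^2 + (dD / (dose_tol * Dmax))^2 ),
    # and the pass rate is the fraction of points with gamma <= 1.
    x = np.arange(len(ref_dose)) * spacing_mm
    d_norm = dose_tol * ref_dose.max()   # "global" dose normalization
    gammas = []
    for xi, di in zip(x, meas_dose):
        g2 = ((x - xi) / dist_tol_mm) ** 2 + ((ref_dose - di) / d_norm) ** 2
        gammas.append(np.sqrt(g2.min()))
    return float(np.mean(np.array(gammas) <= 1.0))
```

The distance term is what distinguishes gamma from a plain dose-difference test: a point may pass either by agreeing in dose or by lying within the distance tolerance of a point that does.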

  12. Beam orientation optimization for intensity-modulated radiation therapy using mixed integer programming

    NASA Astrophysics Data System (ADS)

    Yang, Ruijie; Dai, Jianrong; Yang, Yong; Hu, Yimin

    2006-08-01

    The purpose of this study is to extend an algorithm proposed for beam orientation optimization in classical conformal radiotherapy to intensity-modulated radiation therapy (IMRT) and to evaluate the algorithm's performance in IMRT scenarios. In addition, the effect of the candidate pool of beam orientations, in terms of beam orientation resolution and starting orientation, on the optimized beam configuration, plan quality and optimization time is also explored. The algorithm is based on the technique of mixed integer linear programming, in which binary and positive float variables are employed to represent candidates for beam orientation and beamlet weights in beam intensity maps. Both beam orientations and beam intensity maps are simultaneously optimized in the algorithm with a deterministic method. Several different clinical cases were used to test the algorithm and the results show that both target coverage and critical structure sparing were significantly improved for the plans with optimized beam orientations compared to those with equi-spaced beam orientations. The calculation time was less than an hour for the cases with 36 binary variables on a PC with a Pentium IV 2.66 GHz processor. It is also found that decreasing beam orientation resolution to 10° greatly reduced the size of the candidate pool of beam orientations without significant influence on the optimized beam configuration and plan quality, while selecting different starting orientations had a large influence. Our study demonstrates that the algorithm can be applied to IMRT scenarios, and better beam orientation configurations can be obtained using this algorithm. Furthermore, the optimization efficiency can be greatly increased through proper selection of beam orientation resolution and starting beam orientation while preserving the optimized beam configuration and plan quality.
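    The mixed-integer structure (binary variables picking orientations, continuous variables setting beamlet weights) can be illustrated with a deliberately tiny toy problem. Exhaustive enumeration and a coarse weight grid stand in for a real MILP solver's branch-and-bound, and every dose number below is invented:

```python
from itertools import combinations, product

# Toy beamlet influence data: dose per unit weight delivered by each
# candidate orientation (degrees) to [target voxel, organ-at-risk voxel].
INFLUENCE = {
    0:   (1.0, 0.6),
    90:  (0.9, 0.1),
    180: (1.0, 0.5),
    270: (0.8, 0.2),
}
PRESCRIPTION = 2.0  # desired target dose
OAR_PENALTY = 1.0   # price per unit organ-at-risk dose

def plan(n_beams=2):
    """Jointly pick orientations (the 0/1 variables) and beamlet
    weights (the continuous variables), as in the MILP formulation."""
    weight_grid = [w / 10 for w in range(21)]  # weights 0.0 .. 2.0
    best = (float("inf"), None, None)
    for beams in combinations(INFLUENCE, n_beams):            # binary part
        for weights in product(weight_grid, repeat=n_beams):  # continuous part
            target = sum(w * INFLUENCE[b][0] for b, w in zip(beams, weights))
            oar = sum(w * INFLUENCE[b][1] for b, w in zip(beams, weights))
            cost = (target - PRESCRIPTION) ** 2 + OAR_PENALTY * oar
            if cost < best[0]:
                best = (cost, beams, weights)
    return best

cost, beams, weights = plan()
print(beams)  # the two orientations with the best target/OAR trade-off
```

In this toy instance the optimizer prefers the 90° and 270° beams because they reach the target while depositing the least dose in the organ at risk, which mirrors why optimized orientations beat equi-spaced ones in the study.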

  13. Single-Photon LIDAR for Vegetation Analysis

    NASA Astrophysics Data System (ADS)

    Rosette, J.; Field, C.; Nelson, R. F.; Decola, P.; Cook, B. D.; Degnan, J. J.

    2011-12-01

    Lidar is now an established and recognised technology which has been widely applied to assist forest inventory, monitoring and management. Small footprint lidar systems produce dense 'point clouds' from intercepted surfaces which, after classification of ground and vegetation returns, can be related to important forest biophysical parameters such as biomass or carbon. Within the context of NASA's Carbon Monitoring System (CMS) initiative (NASA, 2010), the prototype 100 beam, single-photon, scanning lidar, developed by Sigma Space Corporation, USA, is tested to assess the potential of this sensor for vegetation analysis. This emerging lidar technology is currently generally operated at green wavelengths (532 nm) and, like more conventional discrete return NIR lidar sensors, produces point clouds of intercepted surfaces. However, the high pulse repetition rate (20 kHz) and multibeam approach produces an unprecedented measurement rate (up to 2 million pixels per second) and a correspondingly high point density. Furthermore, the single photon sensitivity enables the technique to be more easily extended to high altitudes and therefore larger swath widths. Additionally, CW diode laser pumping and a low laser pulse energy (6 μJ at 532 nm) favour an extended laser lifetime while the much lower energy per beamlet (~50 nJ) ensures eye safety despite operating at a visible wavelength. Furthermore, the short laser pulse duration (0.7 ns) allows the surface to be located with high vertical precision. Although the 532 nm green wavelength lies near the peak of the solar output, the spatial and temporal coherence of the surface returns, combined with stringent instrument specifications (small detector field of view and narrow optical band-pass filter), allow solid surfaces to be distinguished from the solar background during daylight operations.
However, for extended volumetric scatterers such as tree canopies, some amount of solar noise is likely to be mixed in with valid biomass returns. This has potential implications for the accurate identification of the vegetation profile, particularly for rough transition zones such as the canopy top. This research aims to improve understanding of the ability to extract lidar metrics and forest biomass from datasets where solar noise is present. Studies of this nature will inform future photon-counting satellite lidar sensors such as NASA's ICESat II, scheduled for launch at the beginning of 2016. This objective is achieved through a comparison of the new sensor capabilities with archival discrete return lidar data and recent field measurements in the eastern USA which are used to map biomass. Since such sensors have the potential to facilitate large area lidar coverage, this may extend the capabilities of biomass mapping and monitoring at regional or national scales. Reference: NASA, 2010. NASA Carbon Monitoring System Initiative. Available online at: http://cce.nasa.gov/cce/cms/index.html.
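    The headline instrument numbers in this abstract are self-consistent and can be checked directly; the factor of c/2 converts the round-trip pulse duration into a one-way range figure:

```python
C = 2.99792458e8   # speed of light, m/s

beamlets = 100
pulse_rate_hz = 20e3                        # 20 kHz pulse repetition rate
pixel_rate = beamlets * pulse_rate_hz       # measurements per second -> 2e6

range_res_m = C * 0.7e-9 / 2                # 0.7 ns pulse -> ~0.10 m one-way precision

# 6 uJ split over 100 beamlets is ~60 nJ each, presumably reduced to the
# quoted ~50 nJ per beamlet by optical losses (an assumption on our part).
per_beamlet_nj = 6e-6 / beamlets * 1e9

print(f"{pixel_rate:.0f} pixels/s, ~{range_res_m * 100:.0f} cm vertical precision")
```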

  14. Cluster analysis and subgrouping to investigate inter-individual variability to non-invasive brain stimulation: a systematic review.

    PubMed

    Pellegrini, Michael; Zoghi, Maryam; Jaberzadeh, Shapour

    2018-01-12

    Cluster analysis and other subgrouping techniques have risen in popularity in recent years in non-invasive brain stimulation research in the attempt to investigate inter-individual variability - the question of why some individuals respond, as traditionally expected, to non-invasive brain stimulation protocols and others do not. Cluster analysis and subgrouping techniques have been used to categorise individuals, based on their response patterns, as responders or non-responders. There is, however, a lack of consensus and consistency on the most appropriate technique to use. This systematic review aimed to provide a systematic summary of the cluster analysis and subgrouping techniques used to date and suggest recommendations moving forward. Twenty studies were included that utilised subgrouping techniques, while seven of these additionally utilised cluster analysis techniques. The results of this systematic review appear to indicate that statistical cluster analysis techniques are effective in identifying subgroups of individuals based on response patterns to non-invasive brain stimulation. This systematic review also reports a lack of consensus amongst researchers on the most effective subgrouping technique and the criteria used to determine whether an individual is categorised as a responder or a non-responder. This systematic review provides a step-by-step guide to carrying out statistical cluster analyses and subgrouping techniques to provide a framework for analysis when developing further insights into the contributing factors of inter-individual variability in response to non-invasive brain stimulation.
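    The two families of techniques the review contrasts can be sketched side by side: a fixed-criterion rule versus a data-driven two-cluster split. The post/pre response ratios and the 1.0 cutoff below are invented for illustration and are not from any study in the review:

```python
def threshold_subgroup(ratios, cutoff=1.0):
    """Fixed-criterion rule: post/pre ratio above the cutoff = responder."""
    return ["responder" if r > cutoff else "non-responder" for r in ratios]

def kmeans_1d_two_clusters(ratios, iters=50):
    """Data-driven rule: a minimal 1-D, two-cluster k-means on the ratios."""
    c_lo, c_hi = min(ratios), max(ratios)   # initialize centers at the extremes
    lo, hi = [], []
    for _ in range(iters):
        lo = [r for r in ratios if abs(r - c_lo) <= abs(r - c_hi)]
        hi = [r for r in ratios if abs(r - c_lo) > abs(r - c_hi)]
        if lo: c_lo = sum(lo) / len(lo)     # update centers to cluster means
        if hi: c_hi = sum(hi) / len(hi)
    return lo, hi   # lower-responding and higher-responding groups

ratios = [0.8, 0.9, 1.0, 1.4, 1.5, 1.6]     # hypothetical post/pre MEP ratios
print(threshold_subgroup(ratios))
print(kmeans_1d_two_clusters(ratios))
```

Note how the two rules can disagree at the boundary (here the 1.0 individual), which is exactly the consistency problem the review highlights.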

  15. Isotope ratio analysis of individual sub-micrometer plutonium particles with inductively coupled plasma mass spectrometry.

    PubMed

    Esaka, Fumitaka; Magara, Masaaki; Suzuki, Daisuke; Miyamoto, Yutaka; Lee, Chi-Gyu; Kimura, Takaumi

    2010-12-15

    Information on plutonium isotope ratios in individual particles is of great importance for nuclear safeguards, nuclear forensics and so on. Although secondary ion mass spectrometry (SIMS) is successfully utilized for the analysis of individual uranium particles, the isobaric interference of americium-241 with plutonium-241 makes it difficult to obtain accurate isotope ratios in individual plutonium particles. In the present work, an analytical technique combining chemical separation and inductively coupled plasma mass spectrometry (ICP-MS) is developed and applied to isotope ratio analysis of individual sub-micrometer plutonium particles. The ICP-MS results for individual plutonium particles prepared from a standard reference material (NBL SRM-947) indicate that the use of a desolvation system for sample introduction improves the precision of isotope ratios. In addition, the accuracy of the ²⁴¹Pu/²³⁹Pu isotope ratio is much improved, owing to the chemical separation of plutonium and americium. In conclusion, the performance of the proposed ICP-MS technique is sufficient for the analysis of individual plutonium particles. Copyright © 2010 Elsevier B.V. All rights reserved.

  16. Individual Differences, Intelligence, and Behavior Analysis

    PubMed Central

    Williams, Ben; Myerson, Joel; Hale, Sandra

    2008-01-01

    Despite its avowed goal of understanding individual behavior, the field of behavior analysis has largely ignored the determinants of consistent differences in level of performance among individuals. The present article discusses major findings in the study of individual differences in intelligence from the conceptual framework of a functional analysis of behavior. In addition to general intelligence, we discuss three other major aspects of behavior in which individuals differ: speed of processing, working memory, and the learning of three-term contingencies. Despite recent progress in our understanding of the relations among these aspects of behavior, numerous issues remain unresolved. Researchers need to determine which learning tasks predict individual differences in intelligence and which do not, and then identify the specific characteristics of these tasks that make such prediction possible. PMID:18831127

  17. Meta-Analysis of a Continuous Outcome Combining Individual Patient Data and Aggregate Data: A Method Based on Simulated Individual Patient Data

    ERIC Educational Resources Information Center

    Yamaguchi, Yusuke; Sakamoto, Wataru; Goto, Masashi; Staessen, Jan A.; Wang, Jiguang; Gueyffier, Francois; Riley, Richard D.

    2014-01-01

    When some trials provide individual patient data (IPD) and the others provide only aggregate data (AD), meta-analysis methods for combining IPD and AD are required. We propose a method that reconstructs the missing IPD for AD trials by a Bayesian sampling procedure and then applies an IPD meta-analysis model to the mixture of simulated IPD and…

  18. Individual-Level Influences on Perceptions of Neighborhood Disorder: A Multilevel Analysis

    ERIC Educational Resources Information Center

    Latkin, Carl A.; German, Danielle; Hua, Wei; Curry, Aaron D.

    2009-01-01

    Health outcomes are associated with aggregate neighborhood measures and individual neighborhood perceptions. In this study, the authors sought to delineate individual, social network, and spatial factors that may influence perceptions of neighborhood disorder. Multilevel regression analysis showed that neighborhood perceptions were more negative…

  19. The Application of Social Network Analysis to Team Sports

    ERIC Educational Resources Information Center

    Lusher, Dean; Robins, Garry; Kremer, Peter

    2010-01-01

    This article reviews how current social network analysis might be used to investigate individual and group behavior in sporting teams. Social network analysis methods permit researchers to explore social relations between team members and their individual-level qualities simultaneously. As such, social network analysis can be seen as augmenting…

  20. Different type 2 diabetes risk assessments predict dissimilar numbers at ‘high risk’: a retrospective analysis of diabetes risk-assessment tools

    PubMed Central

    Gray, Benjamin J; Bracken, Richard M; Turner, Daniel; Morgan, Kerry; Thomas, Michael; Williams, Sally P; Williams, Meurig; Rice, Sam; Stephens, Jeffrey W

    2015-01-01

    Background: Use of a validated risk-assessment tool to identify individuals at high risk of developing type 2 diabetes is currently recommended. It is under-reported, however, whether a different risk tool alters the predicted risk of an individual. Aim: This study explored any differences between commonly used validated risk-assessment tools for type 2 diabetes. Design and setting: Cross-sectional analysis of individuals who participated in a workplace-based risk assessment in Carmarthenshire, South Wales. Method: Retrospective analysis of 676 individuals (389 females and 287 males) who participated in a workplace-based diabetes risk-assessment initiative. Ten-year risk of type 2 diabetes was predicted using the validated QDiabetes®, Leicester Risk Assessment (LRA), FINDRISC, and Cambridge Risk Score (CRS) algorithms. Results: Differences between the risk-assessment tools were apparent following retrospective analysis of individuals. CRS categorised the highest proportion (13.6%) of individuals at ‘high risk’ followed by FINDRISC (6.6%), QDiabetes (6.1%), and, finally, the LRA was the most conservative risk tool (3.1%). Following further analysis by sex, over one-quarter of males were categorised at high risk using CRS (25.4%), whereas a greater percentage of females were categorised as high risk using FINDRISC (7.8%). Conclusion: The adoption of a different valid risk-assessment tool can alter the predicted risk of an individual and caution should be used to identify those individuals who really are at high risk of type 2 diabetes. PMID:26541180

  1. Different type 2 diabetes risk assessments predict dissimilar numbers at 'high risk': a retrospective analysis of diabetes risk-assessment tools.

    PubMed

    Gray, Benjamin J; Bracken, Richard M; Turner, Daniel; Morgan, Kerry; Thomas, Michael; Williams, Sally P; Williams, Meurig; Rice, Sam; Stephens, Jeffrey W

    2015-12-01

    Use of a validated risk-assessment tool to identify individuals at high risk of developing type 2 diabetes is currently recommended. It is under-reported, however, whether a different risk tool alters the predicted risk of an individual. This study explored any differences between commonly used validated risk-assessment tools for type 2 diabetes. Cross-sectional analysis of individuals who participated in a workplace-based risk assessment in Carmarthenshire, South Wales. Retrospective analysis of 676 individuals (389 females and 287 males) who participated in a workplace-based diabetes risk-assessment initiative. Ten-year risk of type 2 diabetes was predicted using the validated QDiabetes(®), Leicester Risk Assessment (LRA), FINDRISC, and Cambridge Risk Score (CRS) algorithms. Differences between the risk-assessment tools were apparent following retrospective analysis of individuals. CRS categorised the highest proportion (13.6%) of individuals at 'high risk' followed by FINDRISC (6.6%), QDiabetes (6.1%), and, finally, the LRA was the most conservative risk tool (3.1%). Following further analysis by sex, over one-quarter of males were categorised at high risk using CRS (25.4%), whereas a greater percentage of females were categorised as high risk using FINDRISC (7.8%). The adoption of a different valid risk-assessment tool can alter the predicted risk of an individual and caution should be used to identify those individuals who really are at high risk of type 2 diabetes. © British Journal of General Practice 2015.

  2. The singular nature of auditory and visual scene analysis in autism

    PubMed Central

    Lin, I.-Fan; Shirama, Aya; Kato, Nobumasa

    2017-01-01

    Individuals with autism spectrum disorder often have difficulty acquiring relevant auditory and visual information in daily environments, despite not being diagnosed as hearing impaired or having low vision. Recent psychophysical and neurophysiological studies have shown that autistic individuals have highly specific individual differences at various levels of information processing, including feature extraction, automatic grouping and top-down modulation in auditory and visual scene analysis. Comparison of the characteristics of scene analysis between auditory and visual modalities reveals some essential commonalities, which could provide clues about the underlying neural mechanisms. Further progress in this line of research may suggest effective methods for diagnosing and supporting autistic individuals. This article is part of the themed issue ‘Auditory and visual scene analysis'. PMID:28044025

  3. Physical activity, self-efficacy, and health-related quality of life in persons with multiple sclerosis: analysis of associations between individual-level changes over one year.

    PubMed

    Motl, Robert W; McAuley, Edward; Wynn, Daniel; Sandroff, Brian; Suh, Yoojin

    2013-03-01

    Physical activity and self-efficacy represent behavioral and psychological factors, respectively, that are compromised in persons with multiple sclerosis (MS), but might be modifiable through intervention and result in better health-related quality of life (HRQOL). The present study adopted a panel research design and examined the associations between individual-level changes in physical activity, self-efficacy, and HRQOL over a one-year period in persons with MS. The sample consisted of 269 persons with relapsing-remitting MS who completed the Godin Leisure-Time Questionnaire (GLTEQ), Multiple Sclerosis Self-Efficacy (MSSE) Scale, and Multiple Sclerosis Quality of Life-29 (MSIS-29) Scale on two occasions that were separated by 1 year. The data were analyzed using panel analysis in Mplus 3.0. The initial panel analysis indicated that individual-level change in physical activity was associated with individual-level change in both physical and psychological HRQOL. The subsequent panel analysis indicated that (a) individual-level change in self-efficacy for functioning with MS was associated with individual-level change in physical HRQOL, whereas individual-level change in self-efficacy for control was associated with individual-level change in psychological HRQOL; (b) individual-level change in self-efficacy for functioning with MS, but not self-efficacy for control, mediated the association between individual-level change in physical activity and physical HRQOL; and (c) individual-level change in self-efficacy for controlling MS was the strongest predictor of individual-level change in HRQOL. Physical activity and self-efficacy both might be important targets of subsequent behavioral and self-management interventions for improving the HRQOL of persons with MS, although self-efficacy is seemingly more important than physical activity.

  4. Interaction of Participant Characteristics and Type of AAC with Individuals with ASD: A Meta-Analysis

    ERIC Educational Resources Information Center

    Ganz, Jennifer B.; Mason, Rose A.; Goodwyn, Fara D.; Boles, Margot B.; Heath, Amy K.; Davis, John L.

    2014-01-01

    Individuals with autism spectrum disorders (ASD) and complex communication needs often rely on augmentative and alternative communication (AAC) as a means of functional communication. This meta-analysis investigated how individual characteristics moderate effectiveness of three types of aided AAC: the Picture Exchange Communication System (PECS),…

  5. Language Learning of Gifted Individuals: A Content Analysis Study

    ERIC Educational Resources Information Center

    Gokaydin, Beria; Baglama, Basak; Uzunboylu, Huseyin

    2017-01-01

    This study aims to carry out a content analysis of the studies on language learning of gifted individuals and determine the trends in this field. Articles on language learning of gifted individuals published in the Scopus database were examined based on certain criteria including type of publication, year of publication, language, research…

  6. Individual Profiling Using Text Analysis

    DTIC Science & Technology

    2016-04-15

    AFRL-AFOSR-UK-TR-2016-0011 — Individual Profiling using Text Analysis (140333). Mark Stevenson, University of Sheffield, Department of Psychology. Final report covering 15 Sep 2014 to 14 Sep 2015.

  7. Is job a viable unit of analysis? A multilevel analysis of demand-control-support models.

    PubMed

    Morrison, David; Payne, Roy L; Wall, Toby D

    2003-07-01

    The literature has ignored the fact that the demand-control (DC) and demand-control-support (DCS) models of stress are about jobs and not individuals' perceptions of their jobs. Using multilevel modeling, the authors report results of individual- and job-level analyses from a study of over 6,700 people in 81 different jobs. Support for additive versions of the models came when individuals were the unit of analysis. DC and DCS models are only helpful for understanding the effects of individual perceptions of jobs and their relationship to psychological states. When job perceptions are aggregated and their relationship to the collective experience of jobholders is assessed, the models prove of little value. Role set may be a better unit of analysis.

  8. Novel near-infrared sampling apparatus for single kernel analysis of oil content in maize.

    PubMed

    Janni, James; Weinstock, B André; Hagen, Lisa; Wright, Steve

    2008-04-01

    A method of rapid, nondestructive chemical and physical analysis of individual maize (Zea mays L.) kernels is needed for the development of high value food, feed, and fuel traits. Near-infrared (NIR) spectroscopy offers a robust nondestructive method of trait determination. However, traditional NIR bulk sampling techniques cannot be applied successfully to individual kernels. Obtaining optimized single kernel NIR spectra for applied chemometric predictive analysis requires a novel sampling technique that can account for the heterogeneous forms, morphologies, and opacities exhibited in individual maize kernels. In this study such a novel technique is described and compared to less effective means of single kernel NIR analysis. Results of the application of a partial least squares (PLS) derived model for predictive determination of percent oil content per individual kernel are shown.
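    To make the calibration idea concrete, the sketch below fits a single-band linear model; ordinary least squares on one absorbance band stands in here for the multivariate PLS model of the abstract, and all absorbance/oil numbers are invented:

```python
def fit_line(x, y):
    """Least-squares slope and intercept for a one-band calibration."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return my - slope * mx, slope

# Hypothetical training data: NIR absorbance at one band vs. kernel oil content (%)
absorbance = [0.10, 0.15, 0.20, 0.25, 0.30]
oil_pct = [3.1, 4.0, 5.2, 6.1, 6.9]

intercept, slope = fit_line(absorbance, oil_pct)

def predict_oil(a):
    """Predict per-kernel oil content (%) from a new absorbance reading."""
    return intercept + slope * a
```

A real PLS model would instead project the full spectrum onto a few latent components before regressing, which is what lets it cope with the heterogeneous kernel morphologies the abstract describes.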

  9. Individual Differences in Dynamic Functional Brain Connectivity across the Human Lifespan.

    PubMed

    Davison, Elizabeth N; Turner, Benjamin O; Schlesinger, Kimberly J; Miller, Michael B; Grafton, Scott T; Bassett, Danielle S; Carlson, Jean M

    2016-11-01

    Individual differences in brain functional networks may be related to complex personal identifiers, including health, age, and ability. Dynamic network theory has been used to identify properties of dynamic brain function from fMRI data, but the majority of analyses and findings remain at the level of the group. Here, we apply hypergraph analysis, a method from dynamic network theory, to quantify individual differences in brain functional dynamics. Using a summary metric derived from the hypergraph formalism, hypergraph cardinality, we investigate individual variations in two separate, complementary data sets. The first data set ("multi-task") consists of 77 individuals engaging in four consecutive cognitive tasks. We observe that hypergraph cardinality exhibits variation across individuals while remaining consistent within individuals between tasks; moreover, the analysis of one of the memory tasks revealed a marginally significant correspondence between hypergraph cardinality and age. This finding motivated a similar analysis of the second data set ("age-memory"), in which 95 individuals, aged 18-75, performed a memory task with a similar structure to the multi-task memory task. With the increased age range in the age-memory data set, the correspondence between hypergraph cardinality and age becomes significant. We discuss these results in the context of the well-known finding linking age with network structure, and suggest that hypergraph analysis should serve as a useful tool in furthering our understanding of the dynamic network structure of the brain.
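    One plausible reading of the hypergraph construction (a hedged sketch, not necessarily the authors' exact pipeline): treat each functional edge's weight over time as a series, group edges whose series co-evolve above a correlation threshold, and count the multi-edge groups as hyperedges; hypergraph cardinality is then the number of such groups. The threshold, the use of positive correlation only, and all data below are illustrative assumptions:

```python
from collections import Counter

def pearson(a, b):
    """Pearson correlation between two equal-length time series."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = (sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b)) ** 0.5
    return num / den

def hypergraph_cardinality(edge_series, threshold=0.9):
    """Count groups (connected components of size >= 2) of co-evolving edges."""
    n = len(edge_series)
    parent = list(range(n))
    def find(i):                      # union-find with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i in range(n):
        for j in range(i + 1, n):
            if pearson(edge_series[i], edge_series[j]) > threshold:
                parent[find(i)] = find(j)
    sizes = Counter(find(i) for i in range(n))
    return sum(1 for s in sizes.values() if s >= 2)

series = [[1, 2, 3, 4, 5],     # edges 0 and 1 co-evolve
          [2, 4, 6, 8, 10],
          [5, 4, 3, 2, 1],     # edges 2 and 3 co-evolve (anti-phase to 0/1)
          [10, 8, 6, 4, 2]]
print(hypergraph_cardinality(series))  # 2 hyperedges
```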

  10. Individualized Positive Behavior Support in School Settings: A Meta-Analysis

    ERIC Educational Resources Information Center

    Goh, Ailsa E.; Bambara, Linda M.

    2012-01-01

    This meta-analysis examined school-based intervention research based on functional behavioral assessment (FBA) to determine the effectiveness of key individualized positive behavior support (IPBS) practices in school settings. In all, 83 studies representing 145 participants were included in the meta-analysis. Intervention, maintenance, and…

  11. Identifying functional reorganization of spelling networks: an individual peak probability comparison approach

    PubMed Central

    Purcell, Jeremy J.; Rapp, Brenda

    2013-01-01

    Previous research has shown that damage to the neural substrates of orthographic processing can lead to functional reorganization during reading (Tsapkini et al., 2011); in this research we ask if the same is true for spelling. To examine the functional reorganization of spelling networks we present a novel three-stage Individual Peak Probability Comparison (IPPC) analysis approach for comparing the activation patterns obtained during fMRI of spelling in a single brain-damaged individual with dysgraphia to those obtained in a set of non-impaired control participants. The first analysis stage characterizes the convergence in activations across non-impaired control participants by applying a technique typically used for characterizing activations across studies: Activation Likelihood Estimate (ALE) (Turkeltaub et al., 2002). This method was used to identify locations that have a high likelihood of yielding activation peaks in the non-impaired participants. The second stage provides a characterization of the degree to which the brain-damaged individual's activations correspond to the group pattern identified in Stage 1. This involves performing a Mahalanobis distance analysis (Tsapkini et al., 2011) that compares each of a control group's peak activation locations to the nearest peak generated by the brain-damaged individual. The third stage evaluates the extent to which the brain-damaged individual's peaks are atypical relative to the range of individual variation among the control participants. This IPPC analysis allows for a quantifiable, statistically sound method for comparing an individual's activation pattern to the patterns observed in a control group and, thus, provides a valuable tool for identifying functional reorganization in a brain-damaged individual with impaired spelling. Furthermore, this approach can be applied more generally to compare any individual's activation pattern with that of a set of other individuals. PMID:24399981
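    Stage 2's distance computation can be sketched in two dimensions; real activation peaks are 3-D voxel coordinates, but the 2-D version below keeps the covariance inversion explicit, and all coordinates are invented:

```python
def mahalanobis_2d(point, cloud):
    """Mahalanobis distance from a 2-D point to a cloud of 2-D points."""
    n = len(cloud)
    mx = sum(p[0] for p in cloud) / n
    my = sum(p[1] for p in cloud) / n
    # Sample covariance of the cloud
    sxx = sum((p[0] - mx) ** 2 for p in cloud) / (n - 1)
    syy = sum((p[1] - my) ** 2 for p in cloud) / (n - 1)
    sxy = sum((p[0] - mx) * (p[1] - my) for p in cloud) / (n - 1)
    det = sxx * syy - sxy * sxy          # determinant for the 2x2 inverse
    dx, dy = point[0] - mx, point[1] - my
    d2 = (syy * dx * dx - 2 * sxy * dx * dy + sxx * dy * dy) / det
    return d2 ** 0.5

# Hypothetical control-group peak locations and a patient's nearest peak
control_peaks = [(0, 0), (1, 0), (0, 1), (1, 1)]
print(mahalanobis_2d((1.5, 0.5), control_peaks))  # distance in SDs of the cloud
```

Unlike raw Euclidean distance, this scales each direction by the control group's own variability, which is what makes "atypical relative to the range of individual variation" quantifiable.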

  12. Meta-Analysis of the Effect of Exercise Programs for Individuals with Intellectual Disabilities

    ERIC Educational Resources Information Center

    Shin, In-Soo; Park, Eun-Young

    2012-01-01

    The purpose of this study was to examine the effects of physical exercise programs on individuals with intellectual disabilities (ID). This meta-analysis analyzed 67 effect sizes and 14 studies and calculated the standardized mean difference in effect size. The unit of analysis for overall effects was the study, and the sub-group analysis focused…

  13. Resolution Improvement and Pattern Generator Development for theMaskless Micro-Ion-Beam Reduction Lithography System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Ximan

    The shrinking of IC devices has followed Moore's Law, which states that the density of transistors on integrated circuits doubles about every two years, for over three decades. This great achievement has been obtained via continuous advances in lithography technology. With the adoption of complicated resolution enhancement technologies, such as phase-shift masks (PSM) and optical proximity correction (OPC), 193 nm optical lithography has enabled 45 nm printing via immersion methods. However, this achievement comes together with the skyrocketing cost of masks, which makes the production of low-volume application-specific ICs (ASICs) impractical. In order to provide an economical lithography approach for low- to medium-volume advanced IC fabrication, a maskless ion beam lithography method, called Maskless Micro-ion-beam Reduction Lithography (MMRL), has been developed at the Lawrence Berkeley National Laboratory. The development of the prototype MMRL system has been described by Dr. Vinh Van Ngo in his Ph.D. thesis, but the resolution realized on the prototype MMRL system was far from the design expectation. In order to improve the resolution of the MMRL system, the ion optical system has been investigated. By integrating a field-free limiting aperture into the optical column, reducing the electromagnetic interference and cleaning the RF plasma, the resolution has been improved to around 50 nm. Computational analysis indicates that the MMRL system can be operated with an exposure field size of 0.25 mm and a beam half angle of 1.0 mrad on the wafer plane. Ion-ion interactions have been studied with a two-particle physics model. The results are in excellent agreement with those published by other research groups. The charge-interaction analysis of MMRL shows that the ion-ion interactions must be reduced in order to obtain a throughput higher than 10 wafers per hour on 300-mm wafers.
In addition, two different maskless lithography strategies have been studied. The dependence of throughput on the exposure field size and the speed of the mechanical stage has been investigated. In order to perform maskless lithography, different micro-fabricated pattern generators have been developed for the MMRL system. Ion beamlet switching has been successfully demonstrated on the MMRL system. A positive bias voltage around 10 volts is sufficient to switch off the ion current on the micro-fabricated pattern generators. Some unexpected problems, such as high-energy secondary-electron radiation, were discovered during the experimental investigation. Thermal and structural analysis indicates that the aperture displacement error induced by thermal expansion can satisfy the 3δ CD requirement for lithography nodes down to 25 nm. The cross-talk effect near the surface and inside the apertures of the pattern generator has been simulated in a 3-D ray-tracing code. A new pattern generator design has been proposed to reduce the cross-talk effect. In order to eliminate the surface charging effect caused by the secondary electrons, a new beam-switching scheme in which the switching electrodes are immersed in the plasma has been demonstrated on a mechanically fabricated pattern generator.
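    For scale, the quoted 0.25 mm exposure field on a 300 mm wafer implies a demanding stepping rate at the 10 wafers-per-hour target. A back-of-envelope check, ignoring edge fields and stage overhead (both simplifying assumptions):

```python
import math

wafer_diam_mm = 300.0
field_mm = 0.25
wafers_per_hour = 10

wafer_area = math.pi * (wafer_diam_mm / 2) ** 2   # ~70,686 mm^2
fields_per_wafer = wafer_area / field_mm ** 2     # ~1.13e6 exposure fields
seconds_per_wafer = 3600 / wafers_per_hour        # 360 s per wafer
fields_per_second = fields_per_wafer / seconds_per_wafer

print(f"~{fields_per_second:.0f} fields/s needed")  # ~3142 fields/s
```

At thousands of fields per second, per-field exposure budgets are well under a millisecond, which is why both beamlet current (limited by ion-ion interactions) and stage/switching speed appear as throughput limits in the text.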

  14. Nanoliter hemolymph sampling and analysis of individual adult Drosophila melanogaster.

    PubMed

    Piyankarage, Sujeewa C; Featherstone, David E; Shippy, Scott A

    2012-05-15

    The fruit fly (Drosophila melanogaster) is an extensively used and powerful, genetic model organism. However, chemical studies using individual flies have been limited by the animal's small size. Introduced here is a method to sample nanoliter hemolymph volumes from individual adult fruit-flies for chemical analysis. The technique results in an ability to distinguish hemolymph chemical variations with developmental stage, fly sex, and sampling conditions. Also presented is the means for two-point monitoring of hemolymph composition for individual flies.

  15. A meta-analysis of differences in IQ profiles between individuals with Asperger's disorder and high-functioning autism.

    PubMed

    Chiang, Hsu-Min; Tsai, Luke Y; Cheung, Ying Kuen; Brown, Alice; Li, Huacheng

    2014-07-01

    A meta-analysis was performed to examine differences in IQ profiles between individuals with Asperger's disorder (AspD) and high-functioning autism (HFA). Fifty-two studies were included for this study. The results showed that (a) individuals with AspD had significantly higher full-scale IQ, verbal IQ (VIQ), and performance IQ (PIQ) than did individuals with HFA; (b) individuals with AspD had significantly higher VIQ than PIQ; and (c) VIQ was similar to PIQ in individuals with HFA. These findings seem to suggest that AspD and HFA are two different subtypes of autism. The implications of the present findings for DSM-5 Autism Spectrum Disorder are discussed.

  16. A Behavior Analysis of Individuals' Use of the Fairness Heuristic when Interacting with Groups and Organizations

    ERIC Educational Resources Information Center

    Goltz, Sonia M.

    2013-01-01

    In the present analysis the author utilizes the groups as patches model (Goltz, 2009, 2010) to extend fairness heuristic theory (Lind, 2001) in which the concept of fairness is thought to be a heuristic that allows individuals to match responses to consequences they receive from groups. In this model, individuals who are reviewing possible groups…

  17. Individual Participant Data Meta-Analysis of Mechanical Workplace Risk Factors and Low Back Pain

    PubMed Central

    Shannon, Harry S.; Wells, Richard P.; Walter, Stephen D.; Cole, Donald C.; Côté, Pierre; Frank, John; Hogg-Johnson, Sheilah; Langlois, Lacey E.

    2012-01-01

    Objectives. We used individual participant data from multiple studies to conduct a comprehensive meta-analysis of mechanical exposures in the workplace and low back pain. Methods. We conducted a systematic literature search and contacted an author of each study to request their individual participant data. Because outcome definitions and exposure measures were not uniform across studies, we conducted 2 substudies: (1) to identify sets of outcome definitions that could be combined in a meta-analysis and (2) to develop methods to translate mechanical exposure onto a common metric. We used generalized estimating equation regression to analyze the data. Results. The odds ratios (ORs) for posture exposures ranged from 1.1 to 2.0. Force exposure ORs ranged from 1.4 to 2.1. The magnitudes of the ORs differed according to the definition of low back pain, and heterogeneity was associated with both study-level and individual-level characteristics. Conclusions. We found small to moderate ORs for the association of mechanical exposures and low back pain, although the relationships were complex. The presence of individual-level OR modifiers in such an area can be best understood by conducting a meta-analysis of individual participant data. PMID:22390445
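
    The odds ratios reported above are study-level associations; as a minimal illustration of where such a number comes from (this is not the paper's GEE pooling method), the sketch below computes an odds ratio with a Woolf (log-method) 95% confidence interval from a 2x2 exposure-outcome table. All counts are invented for the example.

    ```python
    import math

    def odds_ratio(a, b, c, d):
        """Odds ratio and 95% CI (Woolf/log method) for a 2x2 table:
        a = exposed cases, b = exposed non-cases,
        c = unexposed cases, d = unexposed non-cases."""
        or_ = (a * d) / (b * c)
        se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
        lo = math.exp(math.log(or_) - 1.96 * se_log)
        hi = math.exp(math.log(or_) + 1.96 * se_log)
        return or_, lo, hi

    # Hypothetical counts: 40/200 exposed vs. 25/200 unexposed workers
    # report low back pain.
    res = odds_ratio(40, 160, 25, 175)
    print(res)
    ```

    With these invented counts the point estimate is (40x175)/(160x25) = 1.75, in the same range as the posture-exposure ORs the abstract reports.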

  18. Centre of pressure patterns in the golf swing: individual-based analysis.

    PubMed

    Ball, Kevin; Best, Russell

    2012-06-01

    Weight transfer has been identified as important in group-based analyses. The aim of this study was to extend this work by examining the importance of weight transfer in the golf swing on an individual basis. Five professional and amateur golfers performed 50 swings with the driver, hitting a ball into a net. The golfer's centre of pressure position and velocity, parallel with the line of shot, were measured by two force plates at eight swing events that were identified from high-speed video. The relationships between these parameters and club head velocity at ball contact were examined using regression statistics. The results did support the use of group-based analysis, with all golfers returning significant relationships. However, results were also individual-specific, with golfers returning different combinations of significant factors. Furthermore, factors not identified in group-based analysis were significant on an individual basis. The most consistent relationship was a larger weight transfer range associated with a larger club head velocity (p < 0.05). All golfers also returned at least one significant relationship with rate of weight transfer at swing events (p < 0.01). Individual-based analysis should form part of performance-based biomechanical analysis of sporting skills.

  19. Suicide among Young People and Adults in Ireland: Method Characteristics, Toxicological Analysis and Substance Abuse Histories Compared

    PubMed Central

    Larkin, Celine; Wall, Amanda; McAuliffe, Carmel; McCarthy, Jacklyn; Williamson, Eileen; Perry, Ivan J.

    2016-01-01

Objective Information on factors associated with suicide among young individuals in Ireland is limited. The aim of this study was to identify socio-demographic characteristics and circumstances of death associated with age among individuals who died by suicide. Methods The study examined 121 consecutive suicides (2007–2012) occurring in the south-eastern part of Ireland (Cork city and county). Data were obtained from coroners, family informants, and health care professionals. A comparison was made between 15-24-year-old and 25-34-year-old individuals. Socio-demographic characteristics of the deceased, methods of suicide, history of alcohol and drug abuse, and findings from toxicological analysis of blood and urine samples taken at post mortem were included. Pearson’s χ2 tests and binary logistic regression analysis were performed. Results Alcohol and/or drugs were detected through toxicological analysis for the majority of the total sample (79.5%), which did not differentiate between 15-24-year-old and 25-34-year-old individuals (74.1% and 86.2% respectively). Compared to 25-34-year-old individuals, 15-24-year-old individuals were more likely to engage in suicide by hanging (88.5%). Younger individuals were less likely to die by intentional drug overdose and carbon monoxide poisoning compared to older individuals. Younger individuals who died between Saturday and Monday were more likely to have had alcohol before dying. Substance abuse histories were similar in the two age groups. Conclusion Based on this research it is recommended that strategies to reduce substance abuse be applied among 25-34-year-old individuals at risk of suicide. The wide use of hanging in young people should be taken into consideration for future means restriction strategies. PMID:27898722

  20. Suicide among Young People and Adults in Ireland: Method Characteristics, Toxicological Analysis and Substance Abuse Histories Compared.

    PubMed

    Arensman, Ella; Bennardi, Marco; Larkin, Celine; Wall, Amanda; McAuliffe, Carmel; McCarthy, Jacklyn; Williamson, Eileen; Perry, Ivan J

    2016-01-01

Information on factors associated with suicide among young individuals in Ireland is limited. The aim of this study was to identify socio-demographic characteristics and circumstances of death associated with age among individuals who died by suicide. The study examined 121 consecutive suicides (2007-2012) occurring in the south-eastern part of Ireland (Cork city and county). Data were obtained from coroners, family informants, and health care professionals. A comparison was made between 15-24-year-old and 25-34-year-old individuals. Socio-demographic characteristics of the deceased, methods of suicide, history of alcohol and drug abuse, and findings from toxicological analysis of blood and urine samples taken at post mortem were included. Pearson's χ2 tests and binary logistic regression analysis were performed. Alcohol and/or drugs were detected through toxicological analysis for the majority of the total sample (79.5%), which did not differentiate between 15-24-year-old and 25-34-year-old individuals (74.1% and 86.2% respectively). Compared to 25-34-year-old individuals, 15-24-year-old individuals were more likely to engage in suicide by hanging (88.5%). Younger individuals were less likely to die by intentional drug overdose and carbon monoxide poisoning compared to older individuals. Younger individuals who died between Saturday and Monday were more likely to have had alcohol before dying. Substance abuse histories were similar in the two age groups. Based on this research it is recommended that strategies to reduce substance abuse be applied among 25-34-year-old individuals at risk of suicide. The wide use of hanging in young people should be taken into consideration for future means restriction strategies.

  1. Cluster and principal component analysis based on SSR markers of Amomum tsao-ko in Jinping County of Yunnan Province

    NASA Astrophysics Data System (ADS)

    Ma, Mengli; Lei, En; Meng, Hengling; Wang, Tiantao; Xie, Linyan; Shen, Dong; Xianwang, Zhou; Lu, Bingyue

    2017-08-01

Amomum tsao-ko is a commercial plant used for various purposes in the medicinal and food industries. For the present investigation, 44 germplasm samples were collected from Jinping County of Yunnan Province. Cluster analysis and two-dimensional principal component analysis (PCA) were used to represent the genetic relations among Amomum tsao-ko using simple sequence repeat (SSR) markers. Cluster analysis clearly distinguished the sample groups. Two major clusters were formed: the first (Cluster I) consisted of 34 individuals and the second (Cluster II) of 10 individuals, with Cluster I as the main group containing multiple sub-clusters. PCA also showed 2 groups: PCA Group 1 included 29 individuals and PCA Group 2 included 12 individuals, consistent with the results of the cluster analysis. The purpose of the present investigation was to provide information on the genetic relationships of Amomum tsao-ko germplasm resources in the main producing areas, and to provide a theoretical basis for the protection and utilization of Amomum tsao-ko resources.
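
    As an illustration of the kind of PCA grouping described above, here is a minimal numpy sketch on an invented 0/1 SSR band-presence matrix (not the study's data). The two simulated groups separate along the first principal component.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    # Hypothetical 0/1 SSR band-presence matrix: 10 individuals x 12 marker
    # alleles, with the first 6 rows drawn to resemble one group (high band
    # frequency) and the last 4 another (low band frequency).
    group1 = (rng.random((6, 12)) < 0.8).astype(float)
    group2 = (rng.random((4, 12)) < 0.2).astype(float)
    X = np.vstack([group1, group2])

    # PCA by SVD of the column-centred matrix.
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = U * s        # individuals projected onto the principal components
    pc1 = scores[:, 0]    # first PC separates the two simulated groups
    ```

    Plotting the first two columns of `scores` gives the kind of 2-dimensional PCA view the abstract refers to; real analyses of dominant marker data usually work from a genetic distance or similarity matrix rather than raw band scores.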

  2. Memory and Obstructive Sleep Apnea: A Meta-Analysis

    PubMed Central

    Wallace, Anna; Bucks, Romola S.

    2013-01-01

    Study Objectives: To examine episodic memory performance in individuals with obstructive sleep apnea (OSA). Design Meta-analysis was used to synthesize results from individual studies examining the impact of OSA on episodic memory performance. The performance of individuals with OSA was compared to healthy controls or normative data. Participants Forty-two studies were included, comprising 2,294 adults with untreated OSA and 1,364 healthy controls. Studies that recorded information about participants at baseline prior to treatment interventions were included in the analysis. Measurements Participants were assessed with tasks that included a measure of episodic memory: immediate recall, delayed recall, learning, and/or recognition memory. Results: The results of the meta-analyses provide evidence that individuals with OSA are significantly impaired when compared to healthy controls on verbal episodic memory (immediate recall, delayed recall, learning, and recognition) and visuo-spatial episodic memory (immediate and delayed recall), but not visual immediate recall or visuo-spatial learning. When patients were compared to norms, negative effects of OSA were found only in verbal immediate and delayed recall. Conclusions: This meta-analysis contributes to understanding of the nature of episodic memory deficits in individuals with OSA. Impairments to episodic memory are likely to affect the daily functioning of individuals with OSA. Citation Wallace A; Bucks RS. Memory and obstructive sleep apnea: a meta-analysis. SLEEP 2013;36(2):203-220. PMID:23372268
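
    The pooling step of a meta-analysis like this one can be sketched with a fixed-effect (inverse-variance) model. The per-study effect sizes and variances below are hypothetical, not values from the review.

    ```python
    import math

    def pool_fixed_effect(effects, variances):
        """Inverse-variance weighted (fixed-effect) pooled effect size with a
        95% CI; effects are per-study standardized mean differences."""
        weights = [1.0 / v for v in variances]
        pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
        se = math.sqrt(1.0 / sum(weights))
        return pooled, pooled - 1.96 * se, pooled + 1.96 * se

    # Hypothetical per-study effects (OSA group vs. controls, delayed recall);
    # negative values mean worse memory performance in the OSA group.
    effects = [-0.45, -0.30, -0.60]
    variances = [0.04, 0.02, 0.05]
    pooled, lo, hi = pool_fixed_effect(effects, variances)
    print(pooled, lo, hi)
    ```

    A random-effects model, which most published syntheses of this kind use when heterogeneity is present, adds a between-study variance term to each weight but follows the same weighted-average structure.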

  3. Changes in latent fingerprint examiners' markup between analysis and comparison.

    PubMed

    Ulery, Bradford T; Hicklin, R Austin; Roberts, Maria Antonia; Buscaglia, JoAnn

    2015-02-01

    After the initial analysis of a latent print, an examiner will sometimes revise the assessment during comparison with an exemplar. Changes between analysis and comparison may indicate that the initial analysis of the latent was inadequate, or that confirmation bias may have affected the comparison. 170 volunteer latent print examiners, each randomly assigned 22 pairs of prints from a pool of 320 total pairs, provided detailed markup documenting their interpretations of the prints and the bases for their comparison conclusions. We describe changes in value assessments and markup of features and clarity. When examiners individualized, they almost always added or deleted minutiae (90.3% of individualizations); every examiner revised at least some markups. For inconclusive and exclusion determinations, changes were less common, and features were added more frequently when the image pair was mated (same source). Even when individualizations were based on eight or fewer corresponding minutiae, in most cases some of those minutiae had been added during comparison. One erroneous individualization was observed: the markup changes were notably extreme, and almost all of the corresponding minutiae had been added during comparison. Latents assessed to be of value for exclusion only (VEO) during analysis were often individualized when compared to a mated exemplar (26%); in our previous work, where examiners were not required to provide markup of features, VEO individualizations were much less common (1.8%). Published by Elsevier Ireland Ltd.

  4. An Analysis of Consumer Nutrition in the Experimental Food Service System at Travis AFB

    DTIC Science & Technology

    1975-01-01

Actual System Users on Any Given Day; An Analysis of Individual Consumer Behavior; Customers Changing Ration Status; An Analysis of the Three New Food... AN ANALYSIS OF INDIVIDUAL CONSUMER BEHAVIOR: Thus far the principal concern has been with the daily meal

  5. Get Real in Individual Participant Data (IPD) Meta-Analysis: A Review of the Methodology

    ERIC Educational Resources Information Center

    Debray, Thomas P. A.; Moons, Karel G. M.; van Valkenhoef, Gert; Efthimiou, Orestis; Hummel, Noemi; Groenwold, Rolf H. H.; Reitsma, Johannes B.

    2015-01-01

    Individual participant data (IPD) meta-analysis is an increasingly used approach for synthesizing and investigating treatment effect estimates. Over the past few years, numerous methods for conducting an IPD meta-analysis (IPD-MA) have been proposed, often making different assumptions and modeling choices while addressing a similar research…

  6. Factor Analysis of the Aberrant Behavior Checklist in Individuals with Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Brinkley, Jason; Nations, Laura; Abramson, Ruth K.; Hall, Alicia; Wright, Harry H.; Gabriels, Robin; Gilbert, John R.; Pericak-Vance, Margaret A. O.; Cuccaro, Michael L.

    2007-01-01

Exploratory factor analysis (varimax and promax rotations) of the Aberrant Behavior Checklist-Community version (ABC) in 275 individuals with autism spectrum disorder (ASD) identified four- and five-factor solutions which accounted for greater than 70% of the variance. Confirmatory factor analysis (Lisrel 8.7) revealed indices of moderate fit for…

  7. JOINT AND INDIVIDUAL VARIATION EXPLAINED (JIVE) FOR INTEGRATED ANALYSIS OF MULTIPLE DATA TYPES.

    PubMed

    Lock, Eric F; Hoadley, Katherine A; Marron, J S; Nobel, Andrew B

    2013-03-01

    Research in several fields now requires the analysis of datasets in which multiple high-dimensional types of data are available for a common set of objects. In particular, The Cancer Genome Atlas (TCGA) includes data from several diverse genomic technologies on the same cancerous tumor samples. In this paper we introduce Joint and Individual Variation Explained (JIVE), a general decomposition of variation for the integrated analysis of such datasets. The decomposition consists of three terms: a low-rank approximation capturing joint variation across data types, low-rank approximations for structured variation individual to each data type, and residual noise. JIVE quantifies the amount of joint variation between data types, reduces the dimensionality of the data, and provides new directions for the visual exploration of joint and individual structure. The proposed method represents an extension of Principal Component Analysis and has clear advantages over popular two-block methods such as Canonical Correlation Analysis and Partial Least Squares. A JIVE analysis of gene expression and miRNA data on Glioblastoma Multiforme tumor samples reveals gene-miRNA associations and provides better characterization of tumor types.
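
    A one-pass sketch of the joint-plus-individual idea on simulated data, assuming two data blocks that share sample columns. This is not the full JIVE algorithm, which iterates, enforces orthogonality between the joint and individual terms, and selects ranks by permutation testing.

    ```python
    import numpy as np

    def low_rank(X, r):
        """Best rank-r approximation via truncated SVD."""
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        return (U[:, :r] * s[:r]) @ Vt[:r]

    rng = np.random.default_rng(1)
    n = 50  # shared samples (columns) across both data types

    joint = rng.standard_normal(n)    # pattern common to both blocks
    indiv1 = rng.standard_normal(n)   # pattern specific to block 1
    indiv2 = rng.standard_normal(n)   # pattern specific to block 2

    X1 = (np.outer(rng.standard_normal(20), joint)
          + 0.5 * np.outer(rng.standard_normal(20), indiv1)
          + 0.05 * rng.standard_normal((20, n)))
    X2 = (np.outer(rng.standard_normal(30), joint)
          + 0.5 * np.outer(rng.standard_normal(30), indiv2)
          + 0.05 * rng.standard_normal((30, n)))

    # Joint structure: rank-1 fit of the stacked blocks, split back per block.
    J = low_rank(np.vstack([X1, X2]), r=1)
    J1, J2 = J[:20], J[20:]

    # Individual structure: rank-1 fit of what the joint part leaves behind.
    A1 = low_rank(X1 - J1, r=1)
    A2 = low_rank(X2 - J2, r=1)

    # What remains after joint + individual terms is residual noise.
    resid = np.linalg.norm(X1 - J1 - A1) + np.linalg.norm(X2 - J2 - A2)
    ```

    On this simulated example the joint and individual rank-1 terms absorb nearly all of the variation, leaving a residual on the order of the injected noise, which mirrors the X = J + A + noise decomposition the abstract describes.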

  8. Individual Differences in Dynamic Functional Brain Connectivity across the Human Lifespan

    PubMed Central

    Davison, Elizabeth N.; Turner, Benjamin O.; Miller, Michael B.; Carlson, Jean M.

    2016-01-01

    Individual differences in brain functional networks may be related to complex personal identifiers, including health, age, and ability. Dynamic network theory has been used to identify properties of dynamic brain function from fMRI data, but the majority of analyses and findings remain at the level of the group. Here, we apply hypergraph analysis, a method from dynamic network theory, to quantify individual differences in brain functional dynamics. Using a summary metric derived from the hypergraph formalism—hypergraph cardinality—we investigate individual variations in two separate, complementary data sets. The first data set (“multi-task”) consists of 77 individuals engaging in four consecutive cognitive tasks. We observe that hypergraph cardinality exhibits variation across individuals while remaining consistent within individuals between tasks; moreover, the analysis of one of the memory tasks revealed a marginally significant correspondence between hypergraph cardinality and age. This finding motivated a similar analysis of the second data set (“age-memory”), in which 95 individuals, aged 18–75, performed a memory task with a similar structure to the multi-task memory task. With the increased age range in the age-memory data set, the correlation between hypergraph cardinality and age correspondence becomes significant. We discuss these results in the context of the well-known finding linking age with network structure, and suggest that hypergraph analysis should serve as a useful tool in furthering our understanding of the dynamic network structure of the brain. PMID:27880785

  9. Comparisons of synthesized and individual reinforcement contingencies during functional analysis.

    PubMed

    Fisher, Wayne W; Greer, Brian D; Romani, Patrick W; Zangrillo, Amanda N; Owen, Todd M

    2016-09-01

    Researchers typically modify individual functional analysis (FA) conditions after results are inconclusive (Hanley, Iwata, & McCord, 2003). Hanley, Jin, Vanselow, and Hanratty (2014) introduced a marked departure from this practice, using an interview-informed synthesized contingency analysis (IISCA). In the test condition, they delivered multiple contingencies simultaneously (e.g., attention and escape) after each occurrence of problem behavior; in the control condition, they delivered those same reinforcers noncontingently and continuously. In the current investigation, we compared the results of the IISCA with a more traditional FA in which we evaluated each putative reinforcer individually. Four of 5 participants displayed destructive behavior that was sensitive to the individual contingencies evaluated in the traditional FA. By contrast, none of the participants showed a response pattern consistent with the assumption of the IISCA. We discuss the implications of these findings on the development of accurate and efficient functional analyses. © 2016 Society for the Experimental Analysis of Behavior.

  10. An Experimental Evaluation of the Effects of a Realistic Job Preview on Marine Recruit Affect, Intentions and Behavior

    DTIC Science & Technology

    1979-09-01

individual level of analysis, met expectations were shown to be significantly related to attrition and propensity to withdraw. What are the implications of... Values; Trust and Honesty; Individual Level of Analysis; Met Expectations; Design and Procedures; Sub-analysis; Measures; Criteria Measures

  11. The Analysis of the Impact of Individual Weighting Factor on Individual Scores

    ERIC Educational Resources Information Center

    Kilic, Gulsen Bagci; Cakan, Mehtap

    2006-01-01

    In this study, category-based self and peer assessment were applied twice in a semester in an Elementary Science Teaching Methods course in order to assess individual contributions of group members to group projects as well as to analyze the impact of Individual Weighting Factors (IWF) on individual scores and individual grades. IWF were…

  12. Tools for Genomic and Transcriptomic Analysis of Microbes at Single-Cell Level

    PubMed Central

    Chen, Zixi; Chen, Lei; Zhang, Weiwen

    2017-01-01

Microbiologists traditionally study populations rather than individual cells, as it is generally assumed that the status of individual cells will be similar to that observed in the population. However, recent studies have shown that the individual behavior of each single cell can be quite different from that of the whole population, suggesting the importance of extending traditional microbiology studies to the single-cell level. With recent technological advances, such as flow cytometry, next-generation sequencing (NGS), and microspectroscopy, single-cell microbiology has greatly enhanced the understanding of individuality and heterogeneity of microbes in many biological systems. Notably, the application of multiple ‘omics’ in single-cell analysis has shed light on how individual cells perceive, respond, and adapt to the environment, how heterogeneity arises under external stress and finally determines the fate of the whole population, and how microbes survive under natural conditions. As single-cell analysis involves no axenic cultivation of the target microorganism, it has also been demonstrated as a valuable tool for dissecting the microbial ‘dark matter.’ In this review, current state-of-the-art tools and methods for genomic and transcriptomic analysis of microbes at the single-cell level are critically summarized, including single-cell isolation methods and experimental strategies of single-cell analysis with NGS. In addition, perspectives on future trends in technology development in the field of single-cell analysis are also presented. PMID:28979258

  13. A Meta-Analysis of Differences in IQ Profiles between Individuals with Asperger's Disorder and High-Functioning Autism

    ERIC Educational Resources Information Center

    Chiang, Hsu-Min; Tsai, Luke Y.; Cheung, Ying Kuen; Brown, Alice; Li, Huacheng

    2014-01-01

    A meta-analysis was performed to examine differences in IQ profiles between individuals with Asperger's disorder (AspD) and high-functioning autism (HFA). Fifty-two studies were included for this study. The results showed that (a) individuals with AspD had significantly higher full-scale IQ, verbal IQ (VIQ), and performance IQ (PIQ) than did…

  14. Analysis of Total Food Intake and Composition of Individual's Diet Based on the U.S. Department of Agriculture's 1994-96, 1998 Continuing Survey of Food Intakes by Individuals (CSFII) (2005, Final Report)

    EPA Science Inventory

    EPA released the final report, Analysis of Total Food Intake and Composition of Individual’s Diet Based on USDA’s 1994-1996, 98 Continuing Survey of Food Intakes by Individuals (CSFII). The consumption of food by the general population is a significant route of potential ...

  15. Preliminary investigation of vocal variation in the Mexican Spotted Owl (Strix occidentalis lucida): would vocal analysis of the four-note location call be a useful field tool for individual identification?

    Treesearch

    Wendy A. Kuntz; Peter B. Stacey

    1997-01-01

    Individual identification, especially in rare species, can provide managers with critical information about demographic processes. Traditionally, banding has been the only effective method of marking individuals. However, banding's drawbacks have led some researchers to suggest vocal analysis as an alternative. We explore this prospect for Mexican Spotted Owls (...

  16. Who are the obese? A cluster analysis exploring subgroups of the obese.

    PubMed

    Green, M A; Strong, M; Razak, F; Subramanian, S V; Relton, C; Bissell, P

    2016-06-01

Body mass index (BMI) can be used to classify individuals as obese in terms of their height and weight. However, such a distinction fails to account for the variation within this group across other factors such as health, demographic, and behavioural characteristics. The study aims to examine the existence of subgroups of obese individuals. Data were taken from the Yorkshire Health Study (2010-12), including information on demographic, health, and behavioural characteristics. Individuals with a BMI of ≥30 were included. A two-step cluster analysis was used to define groups of individuals who shared common characteristics. The cluster analysis found six distinct groups of individuals whose BMI was ≥30. These subgroups were: heavy-drinking males; young healthy females; the affluent and healthy elderly; the physically sick but happy elderly; the unhappy and anxious middle-aged; and a cluster with the poorest health. It is important to account for the important heterogeneity within individuals who are obese. Interventions introduced by clinicians and policymakers should not target obese individuals as a whole but should tailor strategies depending upon the subgroups that individuals belong to. © The Author 2015. Published by Oxford University Press on behalf of Faculty of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
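
    The study used a two-step cluster analysis (an SPSS procedure for mixed data types); as a rough stand-in, the sketch below runs plain k-means (Lloyd's algorithm) on invented standardized features to show how such subgroups fall out of the data.

    ```python
    import numpy as np

    def kmeans(X, k, iters=50, seed=0):
        """Plain Lloyd's k-means; returns cluster labels and centroids."""
        rng = np.random.default_rng(seed)
        centroids = X[rng.choice(len(X), size=k, replace=False)]
        for _ in range(iters):
            # Distance of every point to every centroid, then reassign.
            d = np.linalg.norm(X[:, None] - centroids[None], axis=2)
            labels = d.argmin(axis=1)
            for j in range(k):
                if (labels == j).any():
                    centroids[j] = X[labels == j].mean(axis=0)
        return labels, centroids

    # Hypothetical standardized features for obese respondents:
    # columns = age (z), weekly alcohol units (z), wellbeing score (z).
    rng = np.random.default_rng(42)
    young_drinkers = rng.normal([-1.0, 1.5, 0.5], 0.3, size=(40, 3))
    unwell_elderly = rng.normal([1.2, -0.5, -1.0], 0.3, size=(40, 3))
    X = np.vstack([young_drinkers, unwell_elderly])

    labels, _ = kmeans(X, k=2)
    ```

    In practice the number of clusters (six in the study) would be chosen by a fit criterion such as BIC, which is what distinguishes two-step clustering from this fixed-k sketch.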

  17. Multiple Interacting Risk Factors: On Methods for Allocating Risk Factor Interactions.

    PubMed

    Price, Bertram; MacNicoll, Michael

    2015-05-01

    A persistent problem in health risk analysis where it is known that a disease may occur as a consequence of multiple risk factors with interactions is allocating the total risk of the disease among the individual risk factors. This problem, referred to here as risk apportionment, arises in various venues, including: (i) public health management, (ii) government programs for compensating injured individuals, and (iii) litigation. Two methods have been described in the risk analysis and epidemiology literature for allocating total risk among individual risk factors. One method uses weights to allocate interactions among the individual risk factors. The other method is based on risk accounting axioms and finding an optimal and unique allocation that satisfies the axioms using a procedure borrowed from game theory. Where relative risk or attributable risk is the risk measure, we find that the game-theory-determined allocation is the same as the allocation where risk factor interactions are apportioned to individual risk factors using equal weights. Therefore, the apportionment problem becomes one of selecting a meaningful set of weights for allocating interactions among the individual risk factors. Equal weights and weights proportional to the risks of the individual risk factors are discussed. © 2015 Society for Risk Analysis.
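
    The game-theoretic allocation described above is the Shapley value: each factor is credited with its marginal contribution to total risk, averaged over all orderings of the factors. A minimal sketch with invented excess-risk numbers for two factors reproduces the equal-weights split of the interaction:

    ```python
    from itertools import permutations

    def shapley(players, risk):
        """Shapley allocation: average each factor's marginal contribution
        to total risk over all orderings of the factors."""
        shares = {p: 0.0 for p in players}
        perms = list(permutations(players))
        for order in perms:
            seen = set()
            for p in order:
                shares[p] += risk(seen | {p}) - risk(seen)
                seen.add(p)
        return {p: v / len(perms) for p, v in shares.items()}

    # Hypothetical excess relative risks: factor A alone 0.4, B alone 0.2,
    # and a positive interaction so that A and B together give 1.0.
    def risk(s):
        table = {frozenset(): 0.0, frozenset('A'): 0.4,
                 frozenset('B'): 0.2, frozenset('AB'): 1.0}
        return table[frozenset(s)]

    res = shapley('AB', risk)
    print(res)
    ```

    The result allocates 0.6 to A and 0.4 to B, i.e. each factor's own excess risk (0.4 and 0.2) plus half of the 0.4 interaction, matching the equal-weights equivalence the abstract reports for two factors.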

  18. The use of cognitive ability measures as explanatory variables in regression analysis.

    PubMed

    Junker, Brian; Schofield, Lynne Steuerle; Taylor, Lowell J

    2012-12-01

Cognitive ability measures are often taken as explanatory variables in regression analysis, e.g., as a factor affecting a market outcome such as an individual's wage, or a decision such as an individual's education acquisition. Cognitive ability is a latent construct; its true value is unobserved. Nonetheless, researchers often assume that a test score, constructed via standard psychometric practice from individuals' responses to test items, can be safely used in regression analysis. We examine problems that can arise, and suggest that an alternative approach, a "mixed effects structural equations" (MESE) model, may be more appropriate in many circumstances.

  19. Body sway, aim point fluctuation and performance in rifle shooters: inter- and intra-individual analysis.

    PubMed

    Ball, Kevin A; Best, Russell J; Wrigley, Tim V

    2003-07-01

    In this study, we examined the relationships between body sway, aim point fluctuation and performance in rifle shooting on an inter- and intra-individual basis. Six elite shooters performed 20 shots under competition conditions. For each shot, body sway parameters and four aim point fluctuation parameters were quantified for the time periods 5 s to shot, 3 s to shot and 1 s to shot. Three parameters were used to indicate performance. An AMTI LG6-4 force plate was used to measure body sway parameters, while a SCATT shooting analysis system was used to measure aim point fluctuation and shooting performance. Multiple regression analysis indicated that body sway was related to performance for four shooters. Also, body sway was related to aim point fluctuation for all shooters. These relationships were specific to the individual, with the strength of association, parameters of importance and time period of importance different for different shooters. Correlation analysis of significant regressions indicated that, as body sway increased, performance decreased and aim point fluctuation increased for most relationships. We conclude that body sway and aim point fluctuation are important in elite rifle shooting and performance errors are highly individual-specific at this standard. Individual analysis should be a priority when examining elite sports performance.

  20. The effectiveness of external sensory cues in improving functional performance in individuals with Parkinson's disease: a systematic review with meta-analysis.

    PubMed

    Cassimatis, Constantine; Liu, Karen P Y; Fahey, Paul; Bissett, Michelle

    2016-09-01

A systematic review with meta-analysis was performed to investigate the effect of external sensory cued therapy on activities of daily living (ADL) performance, including walking and daily tasks such as dressing, for individuals with Parkinson's disease (PD). A detailed computer-aided search of the literature was applied to MEDLINE, Cumulative Index to Nursing and Allied Health Literature, EMBASE and PubMed. Studies investigating the effects of external sensory cued therapy on ADL performance for individuals with PD in all stages of disease progression were collected. Relevant articles were critically reviewed and study results were synthesized by two independent researchers. A data-analysis method was used to extract data from the selected articles. A meta-analysis was carried out for all randomized controlled trials. Six studies with 243 individuals with PD were included in this review. All six studies yielded positive findings in favour of external sensory cues. The meta-analysis showed that ADL performance improved significantly after external sensory cued therapy (P=0.011) and at follow-up (P<0.001). The results of this review provide evidence of a general improvement in ADL performance in individuals with PD. It is recommended that clinicians incorporate external sensory cues into training programmes focused on improving daily task performance.

  1. Neurobiological changes of schizotypy: evidence from both volume-based morphometric analysis and resting-state functional connectivity.

    PubMed

    Wang, Yi; Yan, Chao; Yin, Da-zhi; Fan, Ming-xia; Cheung, Eric F C; Pantelis, Christos; Chan, Raymond C K

    2015-03-01

The current study sought to examine the underlying brain changes in individuals with high schizotypy by integrating networks derived from brain structural and functional imaging. Individuals with high schizotypy (n = 35) and low schizotypy (n = 34) controls were screened using the Schizotypal Personality Questionnaire and underwent brain structural and resting-state functional magnetic resonance imaging on a 3T scanner. Voxel-based morphometric analysis and graph theory-based functional network analysis were conducted. Individuals with high schizotypy showed reduced gray matter (GM) density in the insula and the dorsolateral prefrontal gyrus. The graph theoretical analysis showed that individuals with high schizotypy had global functional network properties similar to those of low schizotypy individuals. Several hubs of the functional network were identified in both groups, including the insula, the lingual gyrus, the postcentral gyrus, and the rolandic operculum. More hubs in the frontal lobe and fewer hubs in the occipital lobe were identified in individuals with high schizotypy. By comparing the functional connectivity between clusters with abnormal GM density and the whole brain, individuals with high schizotypy showed weaker functional connectivity between the left insula and the putamen, but stronger connectivity between the cerebellum and the medial frontal gyrus. Taken together, our findings suggest that individuals with high schizotypy present changes in terms of GM and resting-state functional connectivity, especially in the frontal lobe. © The Author 2014. Published by Oxford University Press on behalf of the Maryland Psychiatric Research Center. All rights reserved. For permissions, please email: journals.permissions@oup.com.

  2. Mapping vulnerability to bipolar disorder: a systematic review and meta-analysis of neuroimaging studies

    PubMed Central

    Fusar-Poli, Paolo; Howes, Oliver; Bechdolf, Andreas; Borgwardt, Stefan

    2012-01-01

    Background: Although early interventions in individuals with bipolar disorder may reduce the associated personal and economic burden, the neurobiologic markers of enhanced risk are unknown. Methods: Neuroimaging studies involving individuals at enhanced genetic risk for bipolar disorder (HR) were included in a systematic review. We then performed a region of interest (ROI) analysis and a whole-brain meta-analysis combined with a formal effect-sizes meta-analysis in a subset of studies. Results: There were 37 studies included in our systematic review. The overall sample for the systematic review included 1258 controls and 996 HR individuals. No significant differences were detected between HR individuals and controls in the selected ROIs: striatum, amygdala, hippocampus, pituitary and frontal lobe. The HR group showed increased grey matter volume compared with patients with established bipolar disorder. The HR individuals showed increased neural response in the left superior frontal gyrus, medial frontal gyrus and left insula compared with controls, independent of the functional magnetic resonance imaging task used. There were no publication biases. Sensitivity analysis confirmed the robustness of these results. Limitations: As the included studies were cross-sectional, it remains to be determined whether the observed neurofunctional and structural alterations represent risk factors that can be clinically used in preventive interventions for prodromal bipolar disorder. Conclusion: Accumulating structural and functional imaging evidence supports the existence of neurobiologic trait abnormalities in individuals at genetic risk for bipolar disorder at various scales of investigation. PMID:22297067

  3. Low-molecular-weight heparin for prevention of placenta-mediated pregnancy complications: protocol for a systematic review and individual patient data meta-analysis (AFFIRM)

    PubMed Central

    2014-01-01

    Background: Placenta-mediated pregnancy complications include pre-eclampsia, late pregnancy loss, placental abruption, and the small-for-gestational-age newborn. They are leading causes of maternal, fetal, and neonatal morbidity and mortality in developed nations. Women who have experienced these complications are at an elevated risk of recurrence in subsequent pregnancies. However, despite decades of research, no effective strategies to prevent recurrence had been identified until recently. We completed a pooled summary-based meta-analysis that strongly suggests that low-molecular-weight heparin reduces the risk of recurrent placenta-mediated complications. The proposed individual patient data meta-analysis builds on this successful collaboration. The project is called AFFIRM: An individual patient data meta-analysis oF low-molecular-weight heparin For prevention of placenta-medIated pRegnancy coMplications. Methods/Design: We conducted a systematic review to identify randomized controlled trials with a low-molecular-weight heparin intervention for the prevention of recurrent placenta-mediated pregnancy complications. Investigators and statisticians representing eight trials met to discuss the outcomes and analysis plan for an individual patient data meta-analysis. An additional trial has since been added for a total of nine eligible trials. The primary analyses from the original trials will be replicated for quality assurance prior to recoding the data from each trial and combining it into a common dataset for analysis. Using the anonymized combined data, we will conduct logistic regression and subgroup analyses aimed at identifying which women with previous pregnancy complications benefit most from treatment with low-molecular-weight heparin during pregnancy.
    Discussion: The goal of the proposed individual patient data meta-analysis is a thorough estimation of treatment effects in patients with prior individual placenta-mediated pregnancy complications and exploration of which complications are specifically prevented by low-molecular-weight heparin. Systematic review registration: PROSPERO (International Prospective Registry of Systematic Reviews), 23 December 2013, CRD42013006249. PMID:24969227

  4. Radiation analysis devices, radiation analysis methods, and articles of manufacture

    DOEpatents

    Roybal, Lyle Gene

    2010-06-08

    Radiation analysis devices include circuitry configured to determine respective radiation count data for a plurality of sections of an area of interest and combine the radiation count data of individual sections to determine whether a selected radioactive material is present in the area of interest. An amount of the radiation count data for an individual section is insufficient to determine whether the selected radioactive material is present in the individual section. An article of manufacture includes media comprising programming configured to cause processing circuitry to perform processing comprising determining one or more correction factors based on a calibration of a radiation analysis device, measuring radiation received by the radiation analysis device using the one or more correction factors, and presenting information relating to an amount of radiation measured by the radiation analysis device having one of a plurality of specified radiation energy levels of a range of interest.

  5. Varieties and Trends in Music Analysis: A Commentary on the Literature. Technical Note 3-72-14.

    ERIC Educational Resources Information Center

    Fink, Michael

    This commentary examines the primary types of music analysis. Although the study focuses mainly upon individual trends and schools of thought, some attention is given to the evolution of ideas within the field. Methodologies are considered in the light of their applicability to individual or classroom analysis by students. From the most…

  6. Analysis of sensitivity and uncertainty in an individual-based model of a threatened wildlife species

    Treesearch

    Bruce G. Marcot; Peter H. Singleton; Nathan H. Schumaker

    2015-01-01

    Sensitivity analyses—determining how prediction variables affect response variables—of individual-based models (IBMs) are few but important to the interpretation of model output. We present a sensitivity analysis of a spatially explicit IBM (HexSim) of a threatened species, the Northern Spotted Owl (NSO; Strix occidentalis caurina) in Washington...

  7. Identifying influential individuals on intensive care units: using cluster analysis to explore culture.

    PubMed

    Fong, Allan; Clark, Lindsey; Cheng, Tianyi; Franklin, Ella; Fernandez, Nicole; Ratwani, Raj; Parker, Sarah Henrickson

    2017-07-01

    The objective of this paper is to identify attribute patterns of influential individuals in intensive care units using unsupervised cluster analysis. Despite the acknowledgement that the culture of an organisation is critical to improving patient safety, specific methods to shift culture have not been explicitly identified. A social network analysis survey was conducted and a total of 100 surveys were gathered. Unsupervised cluster analysis was used to group individuals with similar attribute dimensions, highlighting three general types of influencers: well-rounded, knowledge and relational. Culture is created locally by individual influencers. Cluster analysis is an effective way to identify common characteristics among members of an intensive care unit team that are noted as highly influential by their peers. To change culture, identifying and then integrating the influencers in intervention development and dissemination may create more sustainable and effective culture change. Additional studies are ongoing to test the effectiveness of utilising these influencers to disseminate patient safety interventions. This study offers an approach that can be helpful in both identifying and understanding influential team members and may be an important aspect of developing methods to change organisational culture. © 2017 John Wiley & Sons Ltd.
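The unsupervised clustering step described in this record can be sketched with a minimal k-means routine; the attribute matrix and dimension names below are hypothetical stand-ins, not the study's actual survey data:

```python
import numpy as np

def kmeans(X, k, iters=20):
    """Minimal k-means: seeds centroids with the first k rows, then
    alternates nearest-centroid assignment and centroid updates."""
    centroids = X[:k].astype(float).copy()
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        # Assign each individual to its nearest centroid.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each centroid to the mean of its cluster.
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return centroids, labels

# Toy attribute matrix: rows are individuals, columns are hypothetical
# influence dimensions (e.g. a knowledge score and a relational score).
X = np.array([[0.0, 0.0], [10.0, 10.0], [0.0, 10.0],
              [0.5, 0.0], [10.0, 10.5], [0.5, 10.0]])
centroids, labels = kmeans(X, k=3)
# Rows 0/3, 1/4 and 2/5 land in the same clusters.
```

In practice one would standardize the survey dimensions first and choose k by inspecting cluster cohesion, as the cluster profiles are what get interpreted as influencer "types".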

  8. Social cohesion matters in health.

    PubMed

    Chuang, Ying-Chih; Chuang, Kun-Yang; Yang, Tzu-Hsuan

    2013-10-28

    The concept of social cohesion has invoked debate due to the vagueness of its definition and the limitations of current measurements. This paper attempts to examine the concept of social cohesion, develop measurements, and investigate the relationship between social cohesion and individual health. This study used a multilevel study design. The individual-level samples from 29 high-income countries were obtained from the 2000 World Value Survey (WVS) and the 2002 European Value Survey. National-level social cohesion statistics were obtained from Organization of Economic Cooperation and Development datasets, World Development Indicators, and Asian Development Bank key indicators for the year 2000, and from aggregating responses from the WVS. In total, 47,923 individuals were included in this study. Factor analysis was applied to identify dimensions of social cohesion, which were used as entities in the cluster analysis to generate a regime typology of social cohesion. Then, multilevel regression models were applied to assess the influences of social cohesion on an individual's self-rated health. Factor analysis identified five dimensions of social cohesion: social equality, social inclusion, social development, social capital, and social diversity. Then, the cluster analysis revealed five regimes of social cohesion. A multilevel analysis showed that respondents in countries with higher social inclusion, social capital, and social diversity were more likely to report good health above and beyond individual-level characteristics. This study is an innovative effort to incorporate different aspects of social cohesion. This study suggests that social cohesion was associated with individual self-rated health after controlling for individual characteristics. To achieve further advancement in population health, developed countries should consider policies that would foster a society with a high level of social inclusion, social capital, and social diversity. 
Future research could focus on identifying possible pathways by which social cohesion influences various health outcomes.

  9. Impact of human population history on distributions of individual-level genetic distance

    PubMed Central

    2005-01-01

    Summaries of human genomic variation shed light on human evolution and provide a framework for biomedical research. Variation is often summarised in terms of one or a few statistics (eg FST and gene diversity). Now that multilocus genotypes for hundreds of autosomal loci are available for thousands of individuals, new approaches are applicable. Recently, trees of individuals and other clustering approaches have demonstrated the power of an individual-focused analysis. We propose analysing the distributions of genetic distances between individuals. Each distribution, or common ancestry profile (CAP), is unique to an individual, and does not require a priori assignment of individuals to populations. Here, we consider a range of models of population history and, using coalescent simulation, reveal the potential insights gained from a set of CAPs. Information lies in the shapes of individual profiles -- sometimes captured by variance of individual CAPs -- and the variation across profiles. Analysis of short tandem repeat genotype data for over 1,000 individuals from 52 populations is consistent with dramatic differences in population histories across human groups. PMID:15814064
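A common ancestry profile of the kind proposed in this record can be sketched as each individual's vector of pairwise genetic distances. The toy genotypes and the Manhattan distance over allele counts below are illustrative assumptions; the study itself used short tandem repeat genotypes:

```python
import numpy as np

def cap_profiles(G):
    """Common ancestry profiles: for a genotype matrix G (individuals x loci,
    allele counts 0/1/2), return each individual's vector of Manhattan
    distances to every other individual."""
    D = np.abs(G[:, None, :] - G[None, :, :]).sum(axis=2)  # pairwise distances
    # Drop the self-distance (always 0) from each profile.
    return [np.delete(D[i], i) for i in range(len(G))]

# Toy genotypes for three individuals at three loci.
G = np.array([[0, 1, 2],
              [0, 1, 1],
              [2, 2, 0]])
profiles = cap_profiles(G)
# profiles[0] is individual 0's distances to individuals 1 and 2: [1, 5]
spread = [p.var() for p in profiles]  # one "shape" summary per profile
```

No population labels are needed: each profile (and summaries such as its variance) belongs to a single individual, which is the point of the CAP approach.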

  10. Scanning proton microprobe applied to analysis of individual aerosol particles from Amazon Basin

    NASA Astrophysics Data System (ADS)

    Gerab, Fábio; Artaxo, Paulo; Swietlicki, Erik; Pallon, Jan

    1998-03-01

    The development of the Scanning Proton Microprobe (SPM) offers a new possibility for individual aerosol particle studies. The SPM combines the elemental analysis capabilities of Particle Induced X-ray Emission (PIXE) with micrometric spatial resolution. In this work the Lund University SPM facility was used for elemental characterization of individual aerosol particles emitted to the atmosphere in the Brazilian Amazon Basin during gold mining activities by the so-called "gold shops".

  11. Dietary reconstruction from trace element analysis and dental microwear in an Early Medieval population from Gán (Galanta district, Slovakia).

    PubMed

    Bodoriková, Silvia; Tibenská, Kristína Domonkosová; Katina, Stanislav; Uhrová, Petra; Dörnhöferová, Michaela; Takács, Michal; Urminský, Jozef

    2013-01-01

    The aim of the study was to determine the diet of an historical human population using the trace elements in dental tissues and dental buccal microwear. Although 38 individuals had been buried in the cemetery, preservation of the remains did not allow analysis of all of them. A total of 13 individuals were analysed, of which the samples for trace-element analysis consisted of 12 permanent premolars from 12 individuals. Buccal microwear was studied in a sample of nine teeth from nine individuals. Both trace-element and microwear analyses were performed on eight individuals. All analyzed teeth were intact, with fully developed roots, without dental calculus and macro-abrasion. Concentrations of Sr, Zn, and Ca, and their ratios, were used to determine the relative proportions of plant and animal protein in the diet. Samples were analyzed using optical emission spectrometry with inductively coupled plasma. The values of the Sr and Zn concentrations indicate that the diet of the investigated population was of a mixed character, with approximately the same proportion of plants and meat in their food. Buccal microwear was studied in molds of buccal surfaces and observed at 100x magnification with a scanning electron microscope (SEM). Length and orientation of striations were determined with the SigmaScan Pro 5.0 image analysis program. The results obtained from microwear analysis correspond with those from trace-element analysis and showed that the population consumed a mixed diet. The density of the scratches indicates that the diet contained a considerable vegetable component. The high number of vertical scratches and their high average length suggest that individuals also consumed a large portion of meat. The results of both analyses showed that there were also individuals whose diet had probably been poor, i.e. richer in animal protein, which probably could be related to their health or social status in the population.

  12. Geographic determinants of individual obesity risk in Spain: A multilevel approach.

    PubMed

    Raftopoulou, Athina

    2017-02-01

    This paper seeks to understand the determinants of individual body weight status and obesity risk in Spain by concurrently examining individual and regional characteristics. The data are drawn from the National Health Survey of Spain for the year 2011-2012 (INE-National Statistical Institute of Spain) and contain information for a representative sample of 12,671 adults across 50 provinces in Spain. A multilevel analysis is carried out to examine the determinants of individual weight status and obesity, controlling not only for the individual effects and those of the immediate environment but also for the broader setting to which individuals and their immediate environment belong. Our findings suggest that attributes from all three levels of analysis have an effect on individual weight status and obesity. Lack of green spaces and criminality, taken as proxies of the social environment, positively affect individuals' BMI and women's obesity risk, respectively. Copyright © 2016 Elsevier B.V. All rights reserved.

  13. Pervasive and strong effects of plants on soil chemistry: a meta-analysis of individual plant 'Zinke' effects.

    PubMed

    Waring, Bonnie G; Álvarez-Cansino, Leonor; Barry, Kathryn E; Becklund, Kristen K; Dale, Sarah; Gei, Maria G; Keller, Adrienne B; Lopez, Omar R; Markesteijn, Lars; Mangan, Scott; Riggs, Charlotte E; Rodríguez-Ronderos, María Elizabeth; Segnitz, R Max; Schnitzer, Stefan A; Powers, Jennifer S

    2015-08-07

    Plant species leave a chemical signature in the soils below them, generating fine-scale spatial variation that drives ecological processes. Since the publication of a seminal paper on plant-mediated soil heterogeneity by Paul Zinke in 1962, a robust literature has developed examining effects of individual plants on their local environments (individual plant effects). Here, we synthesize this work using meta-analysis to show that plant effects are strong and pervasive across ecosystems on six continents. Overall, soil properties beneath individual plants differ from those of neighbours by an average of 41%. Although the magnitudes of individual plant effects exhibit weak relationships with climate and latitude, they are significantly stronger in deserts and tundra than forests, and weaker in intensively managed ecosystems. The ubiquitous effects of plant individuals and species on local soil properties imply that individual plant effects have a role in plant-soil feedbacks, linking individual plants with biogeochemical processes at the ecosystem scale. © 2015 The Author(s).

  14. Perspectives in Individualized Learning.

    ERIC Educational Resources Information Center

    Weisgerber, Robert A.

    The readings presented here are an analysis of selected factors underlying the process of individualized learning. The book is organized topically and moves from theoretical considerations toward an analysis of important educational components. The readings come from a cross section of experts representing the areas of learning theory, individual…

  15. Statistical Analysis of Human Body Movement and Group Interactions in Response to Music

    NASA Astrophysics Data System (ADS)

    Desmet, Frank; Leman, Marc; Lesaffre, Micheline; de Bruyn, Leen

    Quantification of time series that relate to physiological data is challenging for empirical music research. Up to now, most studies have focused on time-dependent responses of individual subjects in controlled environments. However, little is known about time-dependent responses of between-subject interactions in an ecological context. This paper provides new findings on the statistical analysis of group synchronicity in response to musical stimuli. Different statistical techniques were applied to time-dependent data obtained from an experiment on embodied listening in individual and group settings. Analyses of inter-group synchronicity are described. Dynamic Time Warping (DTW) and Cross Correlation Function (CCF) were found to be valid methods to estimate group coherence of the resulting movements. It was found that synchronicity of movements between individuals (human-human interactions) increases significantly in the social context. Moreover, Analysis of Variance (ANOVA) revealed that the type of music is the predominant factor in both the individual and the social context.
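The DTW measure used in this record can be illustrated with the standard dynamic-programming recursion. This is a minimal sketch on made-up movement traces, not the study's motion-capture data:

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic time warping distance between two 1-D movement series."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Best of match, insertion and deletion steps.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Two hypothetical movement traces: the second is the first delayed one step.
s1 = [0.0, 1.0, 2.0, 1.0, 0.0]
s2 = [0.0, 0.0, 1.0, 2.0, 1.0, 0.0]
# DTW absorbs the time shift, so the warped distance between s1 and s2 is 0.
```

This is precisely why DTW suits synchronicity studies: two subjects making the same gesture slightly out of phase still score as coherent, where a pointwise distance would not.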

  16. Does Group-Level Commitment Predict Employee Well-Being?: A Prospective Analysis.

    PubMed

    Clausen, Thomas; Christensen, Karl Bang; Nielsen, Karina

    2015-11-01

    To investigate the links between group-level affective organizational commitment (AOC) and individual-level psychological well-being, self-reported sickness absence, and sleep disturbances. A total of 5085 care workers from 301 workgroups in the Danish eldercare services participated in both waves of the study (T1 [2005] and T2 [2006]). The three outcomes were analyzed using linear multilevel regression analysis, multilevel Poisson regression analysis, and multilevel logistic regression analysis, respectively. Group-level AOC (T1) significantly predicted individual-level psychological well-being, self-reported sickness absence, and sleep disturbances (T2). The association between group-level AOC (T1) and psychological well-being (T2) was fully mediated by individual-level AOC (T1), and the associations between group-level AOC (T1) and self-reported sickness absence and sleep disturbances (T2) were partially mediated by individual-level AOC (T1). Group-level AOC is an important predictor of employee well-being in contemporary health care organizations.

  17. Chemometric analysis for extraction of individual fluorescence spectrum and lifetimes from a target mixture

    NASA Technical Reports Server (NTRS)

    Hallidy, William H. (Inventor); Chin, Robert C. (Inventor)

    1999-01-01

    The present invention is a system for chemometric analysis for the extraction of the individual component fluorescence spectra and fluorescence lifetimes from a target mixture. The present invention combines a processor with an apparatus for generating an excitation signal to transmit at a target mixture and an apparatus for detecting the emitted signal from the target mixture. The present invention extracts the individual fluorescence spectrum and fluorescence lifetime measurements from the frequency and wavelength data acquired from the emitted signal. The present invention uses an iterative solution that first requires the initialization of several decision variables and the initial approximation determinations of intermediate matrices. The iterative solution compares the decision variables for convergence to see if further approximation determinations are necessary. If the solution converges, the present invention then determines the reduced best fit error for the analysis of the individual fluorescence lifetime and the fluorescence spectrum before extracting the individual fluorescence lifetime and fluorescence spectrum from the emitted signal of the target mixture.

  18. Investigating Subjective Experience and the Influence of Weather Among Individuals With Fibromyalgia: A Content Analysis of Twitter.

    PubMed

    Delir Haghighi, Pari; Kang, Yong-Bin; Buchbinder, Rachelle; Burstein, Frada; Whittle, Samuel

    2017-01-19

    Little is understood about the determinants of symptom expression in individuals with fibromyalgia syndrome (FMS). While individuals with FMS often report environmental influences, including weather events, on their symptom severity, a consistent effect of specific weather conditions on FMS symptoms has yet to be demonstrated. Content analysis of a large number of messages by individuals with FMS on Twitter can provide valuable insights into variation in the fibromyalgia experience from a first-person perspective. The objective of our study was to use content analysis of tweets to investigate the association between weather conditions and fibromyalgia symptoms among individuals who tweet about fibromyalgia. Our second objective was to gain insight into how Twitter is used as a form of communication and expression by individuals with fibromyalgia and to explore and uncover thematic clusters and communities related to weather. Computerized sentiment analysis was performed to measure the association between negative sentiment scores (indicative of severe symptoms such as pain) and coincident environmental variables. Date, time, and location data for each individual tweet were used to identify corresponding climate data (such as temperature). We used graph analysis to investigate the frequency and distribution of domain-related terms exchanged in Twitter and their association strengths. A community detection algorithm was applied to partition the graph and detect different communities. We analyzed 140,432 tweets related to fibromyalgia from 2008 to 2014. There was a very weak positive correlation between humidity and negative sentiment scores (r=.009, P=.001). There was no significant correlation between other environmental variables and negative sentiment scores. The graph analysis showed that "pain" and "chronicpain" were the most frequently used terms. The Louvain method identified 6 communities. 
Community 1 was related to feelings and symptoms at the time (subjective experience). It also included a list of weather-related terms such as "weather," "cold," and "rain." According to our results, a uniform causal effect of weather variation on fibromyalgia symptoms at the group level remains unlikely. Any impact of weather on fibromyalgia symptoms may vary geographically or at an individual level. Future work will further explore geographic variation and interactions focusing on individual pain trajectories over time. ©Pari Delir Haghighi, Yong-Bin Kang, Rachelle Buchbinder, Frada Burstein, Samuel Whittle. Originally published in JMIR Public Health and Surveillance (http://publichealth.jmir.org), 19.01.2017.
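The weather-symptom association reported above is an ordinary paired correlation between an environmental variable and a sentiment score. A minimal sketch, with hypothetical humidity and sentiment values standing in for the study's data:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between paired observations."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

# Hypothetical paired samples: relative humidity at the tweet's time and
# place vs. the tweet's negative-sentiment score.
humidity = [30, 45, 50, 60, 70, 85]
sentiment = [-0.2, -0.1, -0.4, -0.3, -0.2, -0.5]
r = pearson_r(humidity, sentiment)
```

With n in the hundreds of thousands, even a tiny r such as the study's .009 reaches statistical significance, which is why the authors stress effect size over the P value.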

  19. Automated estimation of individual conifer tree height and crown diameter via Two-dimensional spatial wavelet analysis of lidar data

    Treesearch

    Michael J. Falkowski; Alistair M.S. Smith; Andrew T. Hudak; Paul E. Gessler; Lee A. Vierling; Nicholas L. Crookston

    2006-01-01

    We describe and evaluate a new analysis technique, spatial wavelet analysis (SWA), to automatically estimate the location, height, and crown diameter of individual trees within mixed conifer open canopy stands from light detection and ranging (lidar) data. Two-dimensional Mexican hat wavelets, over a range of likely tree crown diameters, were convolved with lidar...
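The core of SWA, correlating a canopy surface with a two-dimensional Mexican hat (Ricker) kernel and locating response peaks, can be sketched as follows. The toy canopy height model and the kernel half-width rule are illustrative assumptions, not the authors' parameterization:

```python
import numpy as np

def mexican_hat_2d(half, sigma):
    """2-D Mexican hat (Ricker) kernel on a (2*half+1)^2 grid."""
    ax = np.arange(-half, half + 1)
    xx, yy = np.meshgrid(ax, ax)
    r2 = (xx ** 2 + yy ** 2) / sigma ** 2
    return (1.0 - r2) * np.exp(-r2 / 2.0)

def wavelet_response(chm, sigma):
    """Correlate a canopy height model with the kernel (direct, zero-padded)."""
    k = mexican_hat_2d(3 * int(np.ceil(sigma)), sigma)
    pad = k.shape[0] // 2
    padded = np.pad(chm, pad)
    out = np.zeros(chm.shape)
    for i in range(chm.shape[0]):
        for j in range(chm.shape[1]):
            out[i, j] = np.sum(padded[i:i + k.shape[0], j:j + k.shape[1]] * k)
    return out

# Toy canopy height model with a single isolated "tree" at (5, 5).
chm = np.zeros((11, 11))
chm[5, 5] = 10.0
resp = wavelet_response(chm, sigma=1.5)
peak = np.unravel_index(resp.argmax(), resp.shape)
# The wavelet response peaks at the tree location, (5, 5).
```

Repeating this over a range of sigma values and taking, per location, the scale that maximizes the response is the mechanism by which SWA estimates crown diameter as well as position.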

  20. Analysis of Fat Intake Based on the U.S. Department of Agriculture's 1994-96, 1998 Continuing Survey of Food Intakes by Individuals (CSFII, Final Report)

    EPA Science Inventory

    EPA released the final report, Analysis of Fat Intake Based on USDA’s 1994-1996, 1998 Continuing Survey of Food Intakes by Individuals (CSFII, Final Report). For this report, the EPA conducted an analysis of fat consumption across the U.S. population based on data derived...

  1. Analyzing developmental processes on an individual level using nonstationary time series modeling.

    PubMed

    Molenaar, Peter C M; Sinclair, Katerina O; Rovine, Michael J; Ram, Nilam; Corneal, Sherry E

    2009-01-01

    Individuals change over time, often in complex ways. Generally, studies of change over time have combined individuals into groups for analysis, which is inappropriate in most, if not all, studies of development. The authors explain how to identify appropriate levels of analysis (individual vs. group) and demonstrate how to estimate changes in developmental processes over time using a multivariate nonstationary time series model. They apply this model to describe the changing relationships between a biological son and father and a stepson and stepfather at the individual level. The authors also explain how to use an extended Kalman filter with iteration and smoothing estimator to capture how dynamics change over time. Finally, they suggest further applications of the multivariate nonstationary time series model and detail the next steps in the development of statistical models used to analyze individual-level data.
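The filtering idea behind this approach can be illustrated with a scalar local-level Kalman filter. The authors' multivariate extended Kalman filter with iteration and smoothing is considerably richer, so this is only a minimal sketch on made-up dyadic scores:

```python
def kalman_local_level(ys, q=0.1, r=1.0, x0=0.0, p0=1.0):
    """Scalar local-level Kalman filter: tracks a drifting latent state
    (random walk with process variance q) from observations with noise
    variance r, so the estimated level can change over time."""
    x, p = x0, p0
    estimates = []
    for y in ys:
        # Predict: the state may have drifted since the last step.
        p = p + q
        # Update: blend the prediction with the new observation.
        gain = p / (p + r)
        x = x + gain * (y - x)
        p = (1.0 - gain) * p
        estimates.append(x)
    return estimates

# A hypothetical dyadic-interaction score whose level shifts mid-series,
# i.e. a nonstationary process the filter must follow.
ys = [0.9, 1.1, 1.0, 3.0, 3.1, 2.9]
est = kalman_local_level(ys)
# The filtered estimate rises to follow the shift in level.
```

Because the state is re-estimated at every step, the filter naturally accommodates individual-level change over time, which is exactly what pooling individuals into a group average obscures.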

  2. Effect of delayed auditory feedback on stuttering with and without central auditory processing disorders.

    PubMed

    Picoloto, Luana Altran; Cardoso, Ana Cláudia Vieira; Cerqueira, Amanda Venuti; Oliveira, Cristiane Moço Canhetti de

    2017-12-07

    To verify the effect of delayed auditory feedback on the speech fluency of individuals who stutter, with and without central auditory processing disorders. The participants were twenty individuals who stutter, aged 7 to 17 years, divided into two groups: the Stuttering Group with Auditory Processing Disorders (SGAPD), 10 individuals with central auditory processing disorders, and the Stuttering Group (SG), 10 individuals without central auditory processing disorders. Procedures were: fluency assessment with non-altered auditory feedback (NAF) and delayed auditory feedback (DAF), assessment of stuttering severity, and assessment of central auditory processing (CAP). Phono Tools software was used to cause a delay of 100 milliseconds in the auditory feedback. The Wilcoxon signed-rank test was used in the intragroup analysis and the Mann-Whitney test in the intergroup analysis. The DAF caused a statistically significant reduction in the SG: in the frequency score of stuttering-like disfluencies in the analysis of the Stuttering Severity Instrument, in the number of blocks and repetitions of monosyllabic words, and in the frequency of stuttering-like disfluencies of duration. Delayed auditory feedback did not cause statistically significant effects on the fluency of the SGAPD, the individuals who stutter with auditory processing disorders. The effect of delayed auditory feedback on the speech fluency of individuals who stutter differed between the two groups: fluency improved only in the individuals without an auditory processing disorder.

  3. Development of an Individualism-Collectivism Scale revisited: a Korean sample.

    PubMed

    Kim, Kitae; Cho, Bongsoon

    2011-04-01

    A 13-item Individualism-Collectivism scale comprising source of identity, goal priority, mode of social relation, and norm acceptance is presented. A validation of this scale was conducted using a survey of 773 Korean employees. An exploratory factor analysis and a second-order confirmatory factor analysis supported the measure as having theoretical face validity and acceptable internal consistency reliability. Among the four facets, goal priority most strongly predicted the general Individualism-Collectivism latent factor.

  4. A genome-wide analysis of putative functional and exonic variation associated with extremely high intelligence

    PubMed Central

    Spain, S L; Pedroso, I; Kadeva, N; Miller, M B; Iacono, W G; McGue, M; Stergiakouli, E; Smith, G D; Putallaz, M; Lubinski, D; Meaburn, E L; Plomin, R; Simpson, M A

    2016-01-01

    Although individual differences in intelligence (general cognitive ability) are highly heritable, molecular genetic analyses to date have had limited success in identifying specific loci responsible for its heritability. This study is the first to investigate exome variation in individuals of extremely high intelligence. Under the quantitative genetic model, sampling from the high extreme of the distribution should provide increased power to detect associations. We therefore performed a case–control association analysis with 1409 individuals drawn from the top 0.0003 (IQ >170) of the population distribution of intelligence and 3253 unselected population-based controls. Our analysis focused on putative functional exonic variants assayed on the Illumina HumanExome BeadChip. We did not observe any individual protein-altering variants that are reproducibly associated with extremely high intelligence and within the entire distribution of intelligence. Moreover, no significant associations were found for multiple rare alleles within individual genes. However, analyses using genome-wide similarity between unrelated individuals (genome-wide complex trait analysis) indicate that the genotyped functional protein-altering variation yields a heritability estimate of 17.4% (s.e. 1.7%) based on a liability model. In addition, investigation of nominally significant associations revealed fewer rare alleles associated with extremely high intelligence than would be expected under the null hypothesis. This observation is consistent with the hypothesis that rare functional alleles are more frequently detrimental than beneficial to intelligence. PMID:26239293

  5. A genome-wide analysis of putative functional and exonic variation associated with extremely high intelligence.

    PubMed

    Spain, S L; Pedroso, I; Kadeva, N; Miller, M B; Iacono, W G; McGue, M; Stergiakouli, E; Davey Smith, G; Putallaz, M; Lubinski, D; Meaburn, E L; Plomin, R; Simpson, M A

    2016-08-01

    Although individual differences in intelligence (general cognitive ability) are highly heritable, molecular genetic analyses to date have had limited success in identifying specific loci responsible for its heritability. This study is the first to investigate exome variation in individuals of extremely high intelligence. Under the quantitative genetic model, sampling from the high extreme of the distribution should provide increased power to detect associations. We therefore performed a case-control association analysis with 1409 individuals drawn from the top 0.0003 (IQ >170) of the population distribution of intelligence and 3253 unselected population-based controls. Our analysis focused on putative functional exonic variants assayed on the Illumina HumanExome BeadChip. We did not observe any individual protein-altering variants that are reproducibly associated with extremely high intelligence and within the entire distribution of intelligence. Moreover, no significant associations were found for multiple rare alleles within individual genes. However, analyses using genome-wide similarity between unrelated individuals (genome-wide complex trait analysis) indicate that the genotyped functional protein-altering variation yields a heritability estimate of 17.4% (s.e. 1.7%) based on a liability model. In addition, investigation of nominally significant associations revealed fewer rare alleles associated with extremely high intelligence than would be expected under the null hypothesis. This observation is consistent with the hypothesis that rare functional alleles are more frequently detrimental than beneficial to intelligence.

  6. Active holographic interconnects for interfacing volume storage

    NASA Astrophysics Data System (ADS)

    Domash, Lawrence H.; Schwartz, Jay R.; Nelson, Arthur R.; Levin, Philip S.

    1992-04-01

In order to achieve the promise of terabit/cm³ data storage capacity for volume holographic optical memory, two technological challenges must be met. Satisfactory storage materials must be developed, and input/output architectures able to match their capacity with corresponding data access rates must also be designed. To date the materials problem has received more attention than devices and architectures for access and addressing. Two philosophies of parallel data access to 3-D storage have been discussed. The bit-oriented approach, represented by recent work on two-photon memories, attempts to store bits at local sites within a volume without affecting neighboring bits. High speed acousto-optic or electro-optic scanners together with dynamically focused lenses not presently available would be required. The second philosophy is that volume optical storage is essentially holographic in nature, and that each data write or read is to be distributed throughout the material volume on the basis of angle multiplexing or other schemes consistent with the principles of holography. The requirements for free space optical interconnects for digital computers and fiber optic network switching interfaces are also closely related to this class of devices. Interconnects, beamlet generators, angle multiplexers, scanners, fiber optic switches, and dynamic lenses are all devices which may be implemented by holographic or microdiffractive devices of various kinds, which we shall refer to collectively as holographic interconnect devices. At present, holographic interconnect devices are either fixed holograms or spatial light modulators. Optically or computer generated holograms (submicron resolution, 2-D or 3-D, encoding 10¹³ bits, nearly 100% diffraction efficiency) can implement sophisticated mathematical design principles, but of course once fabricated they cannot be changed. Spatial light modulators offer high speed programmability but have limited resolution (512 × 512 pixels, encoding about 10⁶ bits of data) and limited diffraction efficiency. For any application, one must choose between high diffractive performance and programmability.

  7. Upgrade of the BATMAN test facility for H- source development

    NASA Astrophysics Data System (ADS)

    Heinemann, B.; Fröschle, M.; Falter, H.-D.; Fantz, U.; Franzen, P.; Kraus, W.; Nocentini, R.; Riedl, R.; Ruf, B.

    2015-04-01

The development of a radio frequency (RF) driven source for negative hydrogen ions for the neutral beam heating devices of fusion experiments has been successfully carried out at IPP since 1996 on the test facility BATMAN. The required ITER parameters have been achieved with the prototype source, consisting of a cylindrical driver on the back side of a racetrack-like expansion chamber. The extraction system, called "Large Area Grid" (LAG), was derived from a positive ion accelerator from ASDEX Upgrade (AUG), using its aperture size (ø 8 mm) and pattern but replacing the first two electrodes and masking down the extraction area to 70 cm². BATMAN is a well-diagnosed and highly flexible test facility which will be kept operational in parallel to the half-size ITER source test facility ELISE for further developments to improve the RF efficiency and the beam properties. It is therefore planned to upgrade BATMAN with a new ITER-like grid system (ILG) representing almost one ITER beamlet group, namely 5 × 14 apertures (ø 14 mm). In addition to the standard three-grid extraction system, a repeller electrode, positively charged against the grounded grid by 2 kV, can optionally be installed upstream of it. This is intended to affect the onset of the space charge compensation downstream of the grounded grid and to reduce the backstreaming of positive ions from the drift space into the ion source. For magnetic filter field studies a plasma grid current up to 3 kA will be available, as well as permanent magnets embedded into a diagnostic flange or in an external magnet frame. Furthermore, different source vessels and source configurations are under discussion for BATMAN, e.g. using the AUG type racetrack RF source as driver instead of the circular one, or modifying the expansion chamber for a more flexible position of the external magnet frame.

  8. Demonstration of thermonuclear conditions in magnetized liner inertial fusion experiments

    DOE PAGES

    Gomez, Matthew R.; Slutz, Stephen A.; Sefkow, Adam B.; ...

    2015-04-29

In this study, the magnetized liner inertial fusion concept [S. A. Slutz et al., Phys. Plasmas 17, 056303 (2010)] utilizes a magnetic field and laser heating to relax the pressure requirements of inertial confinement fusion. The first experiments to test the concept [M. R. Gomez et al., Phys. Rev. Lett. 113, 155003 (2014)] were conducted utilizing the 19 MA, 100 ns Z machine, the 2.5 kJ, 1 TW Z Beamlet laser, and the 10 T Applied B-field on Z system. Despite an estimated implosion velocity of only 70 km/s in these experiments, electron and ion temperatures at stagnation were as high as 3 keV, and thermonuclear deuterium-deuterium neutron yields up to 2 × 10¹² have been produced. X-ray emission from the fuel at stagnation had widths ranging from 50 to 110 μm over roughly 80% of the axial extent of the target (6–8 mm) and lasted approximately 2 ns. X-ray yields from these experiments are consistent with a stagnation density of the hot fuel equal to 0.2–0.4 g/cm³. In these experiments, up to 5 × 10¹⁰ secondary deuterium-tritium neutrons were produced. Given that the areal density of the plasma was approximately 1–2 mg/cm², this indicates the stagnation plasma was significantly magnetized, which is consistent with the anisotropy observed in the deuterium-tritium neutron spectra. Control experiments where the laser and/or magnetic field were not utilized failed to produce stagnation temperatures greater than 1 keV and primary deuterium-deuterium yields greater than 10¹⁰. An additional control experiment where the fuel contained a sufficient dopant fraction to substantially increase radiative losses also failed to produce a relevant stagnation temperature. The results of these experiments are consistent with a thermonuclear neutron source.

  9. Microionization chamber for reference dosimetry in IMRT verification: clinical implications on OAR dosimetric errors

    NASA Astrophysics Data System (ADS)

    Sánchez-Doblado, Francisco; Capote, Roberto; Leal, Antonio; Roselló, Joan V.; Lagares, Juan I.; Arráns, Rafael; Hartmann, Günther H.

    2005-03-01

Intensity modulated radiotherapy (IMRT) has become a treatment of choice in many oncological institutions. Small fields or beamlets with sizes of 1 to 5 cm² are now routinely used in IMRT delivery. Therefore small ionization chambers (IC) with sensitive volumes ≤0.1 cm³ are generally used for dose verification of an IMRT treatment. The measurement conditions during verification may be quite different from reference conditions normally encountered in clinical beam calibration, so dosimetry of these narrow photon beams pertains to the so-called non-reference conditions for beam calibration. This work aims at estimating the error made when measuring the organ at risk's (OAR) absolute dose by a micro ion chamber (μIC) in a typical IMRT treatment. The dose error comes from the assumption that the dosimetric parameters determining the absolute dose are the same as for the reference conditions. We have selected two clinical cases, treated by IMRT, for our dose error evaluations. Detailed geometrical simulation of the μIC and the dose verification set-up was performed. The Monte Carlo (MC) simulation allows us to calculate the dose measured by the chamber as a dose averaged over the air cavity within the ion-chamber active volume (D_air). The absorbed dose to water (D_water) is derived as the dose deposited inside the same volume, in the same geometrical position, filled and surrounded by water in the absence of the ion chamber. Therefore, the D_water/D_air dose ratio is the MC estimator of the total correction factor needed to convert the absorbed dose in air into the absorbed dose in water. The dose ratio was calculated for the μIC located at the isocentre within the OARs for both clinical cases. The clinical impact of the calculated dose error was found to be negligible for the studied IMRT treatments.
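The conversion described in this record can be illustrated with a minimal sketch (not the authors' code; the numeric values below are hypothetical placeholders): the MC-derived D_water/D_air ratio acts as a multiplicative correction on the chamber-measured dose.

```python
# Illustrative sketch of the non-reference-condition correction: the Monte
# Carlo simulation provides D_water and D_air for the same cavity volume,
# and their ratio converts the chamber's air-cavity dose to dose to water.
# All numbers are hypothetical.

def corrected_dose(d_air_reading, d_water_mc, d_air_mc):
    """Convert a chamber reading (dose to the air cavity) to dose to water
    using the MC-estimated total correction factor D_water/D_air."""
    k = d_water_mc / d_air_mc   # total correction factor from the MC simulation
    return d_air_reading * k

# Hypothetical example: a 1.00 Gy cavity dose with an MC ratio of 1.12.
dose = corrected_dose(d_air_reading=1.00, d_water_mc=1.12, d_air_mc=1.00)
```

When the measurement geometry matches reference conditions, the ratio reduces to the standard calibration factor; the paper's point is that for narrow IMRT beamlets the MC-computed ratio can differ from that assumption.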

  10. SU-E-T-502: Initial Results of a Comparison of Treatment Plans Produced From Automated Prioritized Planning Method and a Commercial Treatment Planning System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tiwari, P; Chen, Y; Hong, L

    2015-06-15

Purpose: We developed an automated treatment planning system based on a hierarchical goal programming approach. To demonstrate the feasibility of our method, we report the comparison of prostate treatment plans produced from the automated treatment planning system with those produced by a commercial treatment planning system. Methods: In our approach, we prioritized the goals of the optimization and solved one goal at a time. The purpose of prioritization is to ensure that higher-priority dose-volume planning goals are not sacrificed to improve lower-priority goals. The algorithm has four steps. The first step optimizes dose to the target structures, while sparing key sensitive organs from radiation. In the second step, the algorithm finds the best beamlet weights to reduce toxicity risks to normal tissue while holding the objective function achieved in the first step as a constraint, with a small amount of allowed slip. Likewise, the third and fourth steps introduce lower-priority normal tissue goals and beam smoothing. We compared with prostate treatment plans from Memorial Sloan Kettering Cancer Center developed using Eclipse, with a prescription dose of 72 Gy. A combination of linear, quadratic, and gEUD objective functions was used with a modified open source solver code (IPOPT). Results: Initial plan results on 3 different cases show that the automated planning system is capable of matching or improving on expert-driven Eclipse plans. Compared to the Eclipse planning system, the automated system produced up to 26% less mean dose to the rectum and 24% less mean dose to the bladder while having the same D95 (after matching) to the target. Conclusion: We have demonstrated that Pareto optimal treatment plans can be generated automatically without a trial-and-error process. The solver finds an optimal plan for the given patient, as opposed to database-driven approaches that set parameters based on geometry and population modeling.
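The prioritized, one-goal-at-a-time scheme described in this record can be sketched as a lexicographic optimization with slip. The toy below is illustrative only (the authors used an IPOPT-based solver on real beamlet weights); the objectives, the two-beamlet grid, and the slip value are all hypothetical.

```python
# Toy sketch of hierarchical goal programming: optimize the priority-1 goal,
# then optimize the priority-2 goal subject to the priority-1 optimum held
# as a constraint with a small allowed slip.

import itertools

def lexicographic_solve(f1, f2, candidates, slip=1e-3):
    best1 = min(f1(x) for x in candidates)                  # step 1: priority-1 optimum
    feasible = [x for x in candidates if f1(x) <= best1 + slip]
    return min(feasible, key=f2)                            # step 2: priority 2 within slip

# Hypothetical 2-beamlet problem on a coarse weight grid:
# f1 penalizes target underdose/overdose, f2 is dose through an OAR (beamlet 0).
grid = [(a / 10, b / 10) for a, b in itertools.product(range(11), repeat=2)]
f1 = lambda w: (w[0] + w[1] - 1.0) ** 2   # want total weight ~1 for coverage
f2 = lambda w: w[0]                        # minimize the OAR-crossing beamlet
w = lexicographic_solve(f1, f2, grid)      # -> shifts all weight to beamlet 1
```

The slip constraint is what keeps lower-priority goals from degrading the higher-priority optimum, which is the defining property of the prioritized approach described above.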

  11. SU-F-J-197: A Novel Intra-Beam Range Detection and Adaptation Strategy for Particle Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, M; Jiang, S; Shao, Y

    2016-06-15

Purpose: In-vivo range detection/verification is crucial in particle therapy for effective and safe delivery. The state-of-the-art techniques are not sufficient for in-vivo on-line range verification due to conflicts among patient dose, signal statistics and imaging time. We propose a novel intra-beam range detection and adaptation strategy for particle therapy. Methods: This strategy uses the planned mid-range spots as probing beams without adding extra radiation to patients. Such a choice of probing beams ensures the Bragg peaks remain inside the tumor even with significant range variation from the plan. It offers sufficient signal statistics for in-beam positron emission tomography (PET) due to the high positron activity of the therapeutic dose. The probing beam signal can be acquired and reconstructed using in-beam PET, which allows for delineation of the Bragg peaks and detection of range shift, with ease of detection enabled by single-layered spots. If the detected range shift is within a pre-defined tolerance, the remaining spots will be delivered as in the original plan. Otherwise, a fast re-optimization using range-shifted beamlets and accounting for the probing beam dose is applied to consider the tradeoffs posed by the online anatomy. Simulated planning and delivery studies were used to demonstrate the effectiveness of the proposed techniques. Results: Simulations with online range variations due to shifts of various foreign objects into the beam path showed successful delineation of the Bragg peaks as a result of delivering probing beams. Without on-line delivery adaptation, the dose distribution was significantly distorted. In contrast, delivery adaptation incorporating the detected range shift recovered the planned dose well. Conclusion: The proposed intra-beam range detection and adaptation utilizing the planned mid-range spots as probing beams, which illuminate the beam range with strong and accurate PET signals, is a safe, practical, yet effective approach to address range uncertainty issues in particle therapy.
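The go/no-go decision in this strategy can be illustrated with a minimal sketch: compare the distal peak of the reconstructed probing-beam activity profile against the planned one, and trigger re-optimization when the shift exceeds a tolerance. The profiles, bin spacing, and tolerance below are synthetic stand-ins (the actual system works on 3-D in-beam PET reconstructions).

```python
# Sketch of intra-beam range-shift detection from 1-D depth profiles.
# Synthetic data; a real system would use reconstructed PET activity.

def peak_shift_mm(planned, measured, spacing_mm):
    """Shift of the activity peak in mm (positive = beam ranges deeper)."""
    return (measured.index(max(measured)) - planned.index(max(planned))) * spacing_mm

def needs_adaptation(planned, measured, spacing_mm=1.0, tol_mm=2.0):
    """True when the detected shift exceeds the pre-defined tolerance."""
    return abs(peak_shift_mm(planned, measured, spacing_mm)) > tol_mm

planned  = [0, 1, 2, 5, 9, 4, 1, 0, 0, 0]   # hypothetical planned depth profile
measured = [0, 0, 1, 2, 5, 9, 4, 1, 0, 0]   # peak shifted one bin deeper
adapt = needs_adaptation(planned, measured, spacing_mm=3.0, tol_mm=2.0)
```

With 3 mm bins the detected 3 mm shift exceeds the 2 mm tolerance, so the remaining spots would be re-optimized rather than delivered as planned.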

  12. SU-C-BRD-07: Three-Dimensional Dose Reconstruction in the Presence of Inhomogeneities Using Fast EPID-Based Back-Projection Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ren, Q; Cao, R; Pei, X

    2015-06-15

Purpose: Three-dimensional dose verification can detect errors introduced by the treatment planning system (TPS) or differences between the planned and delivered dose distribution during treatment. The aim of the study is to extend a previously in-house developed three-dimensional dose reconstruction model for homogeneous phantoms to situations in which tissue inhomogeneities are present. Methods: The method was based on portal grey images from an electronic portal imaging device (EPID) and the relationship between beamlets and grey-scoring voxels at the position of the EPID. The relationship was expressed in the form of a grey response matrix that was quantified using thickness-dependent scatter kernels determined by a series of experiments. From the portal grey-value distribution measured by the EPID, the two-dimensional incident fluence distribution was reconstructed based on the grey response matrix using a fast iterative algorithm. The accuracy of this approach was verified using a four-field intensity-modulated radiotherapy (IMRT) plan for the treatment of lung cancer in an anthropomorphic phantom. Each field had between twenty and twenty-eight segments and was evaluated by comparing the reconstructed dose distribution with the measured dose. Results: The gamma-evaluation method was used with two evaluation criteria of dose difference and distance-to-agreement: 3%/3 mm and 2%/2 mm. The dose comparison for all irradiated fields showed a pass rate of 100% with the criterion of 3%/3 mm, and a pass rate higher than 92% with the criterion of 2%/2 mm. Conclusion: Our experimental results demonstrate that our method is capable of accurately reconstructing three-dimensional dose distributions in the presence of inhomogeneities. Using the method, the combined planning and treatment delivery process is verified, offering an easy-to-use tool for the verification of complex treatments.
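The core reconstruction step above, recovering the incident fluence f from EPID grey values g given a grey response matrix A (g = A f), can be sketched with a generic Landweber iteration. This is a stand-in for the paper's unspecified "fast iterative algorithm"; the matrix and fluence values below are toy data.

```python
# Sketch of iterative fluence reconstruction from EPID grey values:
# solve g = A @ f for f by gradient descent on ||A f - g||^2 (Landweber),
# with a non-negativity constraint since fluence cannot be negative.

import numpy as np

def reconstruct_fluence(A, g, n_iter=500, step=None):
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # step below 1/sigma_max^2 converges
    f = np.zeros(A.shape[1])
    for _ in range(n_iter):
        f = f + step * A.T @ (g - A @ f)         # gradient step
        f = np.clip(f, 0.0, None)                # enforce f >= 0
    return f

# Hypothetical 3-beamlet system: diagonal = direct response, off-diagonal = scatter.
A = np.array([[1.0, 0.2, 0.0],
              [0.1, 1.0, 0.1],
              [0.0, 0.2, 1.0]])
f_true = np.array([0.5, 1.0, 0.2])
f_rec = reconstruct_fluence(A, A @ f_true)       # recovers f_true closely
```

In the paper the response matrix encodes thickness-dependent scatter kernels, so its entries vary with patient anatomy; the solver structure, however, is the same.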

  13. TH-E-BRE-07: Development of Dose Calculation Error Predictors for a Widely Implemented Clinical Algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Egan, A; Laub, W

    2014-06-15

Purpose: Several shortcomings of the current implementation of the analytic anisotropic algorithm (AAA) may lead to dose calculation errors in highly modulated treatments delivered to highly heterogeneous geometries. Here we introduce a set of dosimetric error predictors that can be applied to a clinical treatment plan and patient geometry in order to identify high-risk plans. Once a problematic plan is identified, the treatment can be recalculated with a more accurate algorithm in order to better assess its viability. Methods: Here we focus on three distinct sources of dosimetric error in the AAA algorithm. First, due to a combination of discrepancies in small-field beam modeling as well as volume averaging effects, dose calculated through small MLC apertures can be underestimated, while that behind small MLC blocks can be overestimated. Second, due to the rectilinear scaling of the Monte Carlo generated pencil beam kernel, energy is not properly transported through heterogeneities near, but not impeding, the central axis of the beamlet. And third, AAA overestimates dose in regions of very low density (<0.2 g/cm³). We have developed an algorithm to detect the location and magnitude of each scenario within the patient geometry, namely the field-size index (FSI), the heterogeneous scatter index (HSI), and the low-density index (LDI), respectively. Results: The error indices successfully identify deviations between AAA and Monte Carlo dose distributions in simple phantom geometries. The algorithms are currently implemented in the MATLAB computing environment and are able to run on a typical RapidArc head and neck geometry in less than an hour. Conclusion: Because these error indices successfully identify each type of error in contrived cases, with sufficient benchmarking, this method can be developed into a clinical tool that may help estimate AAA dose calculation errors and indicate when it might be advisable to use Monte Carlo calculations.
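Of the three predictors described, the low-density case is the simplest to illustrate: flag voxels below the stated 0.2 g/cm³ threshold where AAA is expected to overestimate dose. The sketch below is hypothetical (the authors' MATLAB implementation is not public); the density grid is a toy stand-in for a CT-derived density map.

```python
# Illustrative sketch of a low-density index (LDI): the fraction and location
# of voxels whose density falls below the threshold at which AAA is reported
# to overestimate dose (0.2 g/cm^3 per the abstract). Toy density values.

import numpy as np

def low_density_index(density, threshold=0.2):
    """Return (fraction of voxels below threshold, boolean flag mask)."""
    mask = density < threshold
    return mask.mean(), mask

density = np.array([[1.00, 0.05, 0.30],
                    [0.15, 0.95, 1.05]])   # g/cm^3, hypothetical slice
ldi, flagged = low_density_index(density)  # 2 of 6 voxels flagged
```

A clinical version would additionally weight the flagged voxels by the dose or fluence passing through them, since only low-density regions the beam actually traverses contribute to the calculation error.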

  14. Demonstration of thermonuclear conditions in magnetized liner inertial fusion experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gomez, M. R.; Slutz, S. A.; Sefkow, A. B.

    2015-05-15

The magnetized liner inertial fusion concept [S. A. Slutz et al., Phys. Plasmas 17, 056303 (2010)] utilizes a magnetic field and laser heating to relax the pressure requirements of inertial confinement fusion. The first experiments to test the concept [M. R. Gomez et al., Phys. Rev. Lett. 113, 155003 (2014)] were conducted utilizing the 19 MA, 100 ns Z machine, the 2.5 kJ, 1 TW Z Beamlet laser, and the 10 T Applied B-field on Z system. Despite an estimated implosion velocity of only 70 km/s in these experiments, electron and ion temperatures at stagnation were as high as 3 keV, and thermonuclear deuterium-deuterium neutron yields up to 2 × 10¹² have been produced. X-ray emission from the fuel at stagnation had widths ranging from 50 to 110 μm over roughly 80% of the axial extent of the target (6–8 mm) and lasted approximately 2 ns. X-ray yields from these experiments are consistent with a stagnation density of the hot fuel equal to 0.2–0.4 g/cm³. In these experiments, up to 5 × 10¹⁰ secondary deuterium-tritium neutrons were produced. Given that the areal density of the plasma was approximately 1–2 mg/cm², this indicates the stagnation plasma was significantly magnetized, which is consistent with the anisotropy observed in the deuterium-tritium neutron spectra. Control experiments where the laser and/or magnetic field were not utilized failed to produce stagnation temperatures greater than 1 keV and primary deuterium-deuterium yields greater than 10¹⁰. An additional control experiment where the fuel contained a sufficient dopant fraction to substantially increase radiative losses also failed to produce a relevant stagnation temperature. The results of these experiments are consistent with a thermonuclear neutron source.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, V; Nguyen, D; Tran, A

Purpose: To develop and clinically implement 4π radiotherapy, an inverse optimization platform that maximally utilizes non-coplanar intensity modulated radiotherapy (IMRT) beams to significantly improve critical organ sparing. Methods: A 3D scanner was used to digitize the human and phantom subject surfaces, which were positioned in the computer assisted design (CAD) model of a TrueBeam machine to create a virtual geometrical model, based on which the feasible beam space was calculated for different tumor locations. Beamlets were computed for all feasible beams using convolution/superposition. A column generation algorithm was employed to optimize patient-specific beam orientations and fluence maps. Optimal routing through all selected beams was calculated by a level set method. The resultant plans were converted to XML files and delivered to phantoms in the TrueBeam developer mode. Finally, 4π plans were recomputed in Eclipse and manually delivered to recurrent GBM patients. Results: Compared to IMRT utilizing manually selected beams and volumetric modulated arc therapy plans, markedly improved dosimetry was observed using 4π for the brain, head and neck, liver, lung, and prostate patients. The improvements were due to significantly improved conformality and reduced high dose spillage to organs mediolateral to the PTV. The virtual geometrical model was experimentally validated. Safety margins with 99.9% confidence in collision avoidance were incorporated into the model, based on model accuracy estimates determined via 300 physical machine-to-phantom distance measurements. Automated delivery in the developer mode was completed in 10 minutes and was collision free. Manual 4π treatment of the GBM cases resulted in significant brainstem sparing and took 35–45 minutes including multiple images, which showed submillimeter cranial intrafractional motion. Conclusion: The mathematical modeling utilized in 4π is accurate enough to create and guide highly complex non-coplanar IMRT treatments that consistently and significantly outperform human-operator-created plans. Deliverability of such plans is clinically demonstrated. This work is funded by Varian Medical Systems and the NSF Graduate Research Fellowship DGE-1144087.

  16. Recent laser upgrades at Sandia’s Z-backlighter facility in order to accommodate new requirements for magnetized liner inertial fusion on the Z-machine

    DOE PAGES

    Schwarz, Jens; Rambo, Patrick; Armstrong, Darrell; ...

    2016-10-21

The Z-backlighter laser facility primarily consists of two high energy, high-power laser systems. Z-Beamlet laser (ZBL) (Rambo et al., Appl. Opt. 44, 2421 (2005)) is a multi-kJ-class, nanosecond laser operating at 1054 nm which is frequency doubled to 527 nm in order to provide x-ray backlighting of high energy density events on the Z-machine. Z-Petawatt (ZPW) (Schwarz et al., J. Phys.: Conf. Ser. 112, 032020 (2008)) is a petawatt-class system operating at 1054 nm delivering up to 500 J in 500 fs for backlighting and various short-pulse laser experiments (see also Figure 10 for a facility overview). With the development of the magnetized liner inertial fusion (MagLIF) concept on the Z-machine, the primary backlighting missions of ZBL and ZPW have been adjusted accordingly. As a result, we have focused our recent efforts on increasing the output energy of ZBL from 2 to 4 kJ at 527 nm by modifying the fiber front end to now include extra bandwidth (for stimulated Brillouin scattering suppression). The MagLIF concept requires a well-defined/behaved beam for interaction with the pressurized fuel. Hence we have made great efforts to implement an adaptive optics system on ZBL and have explored the use of phase plates. We are also exploring concepts to use ZPW as a backlighter for ZBL driven MagLIF experiments. Alternatively, ZPW could be used as an additional fusion fuel pre-heater or as a temporally flexible high energy pre-pulse. All of these concepts require the ability to operate the ZPW in a nanosecond long-pulse mode, in which the beam can co-propagate with ZBL. Finally, some of the proposed modifications are complete and most of them are well on their way.

  17. Performance of the full size nGEM detector for the SPIDER experiment

    NASA Astrophysics Data System (ADS)

    Muraro, A.; Croci, G.; Albani, G.; Claps, G.; Cavenago, M.; Cazzaniga, C.; Dalla Palma, M.; Grosso, G.; Murtas, F.; Pasqualotto, R.; Perelli Cippo, E.; Rebai, M.; Tardocchi, M.; Tollin, M.; Gorini, G.

    2016-03-01

The ITER neutral beam test facility under construction in Padova will host two experimental devices: SPIDER, a 100 kV negative H/D RF beam source, and MITICA, a full scale, 1 MeV deuterium beam injector. SPIDER will start operations in 2016 while MITICA is expected to start during 2019. Both devices feature a beam dump used to stop the produced deuteron beam. Detection of fusion neutrons produced between beam-deuterons and dump-implanted deuterons will be used as a means to resolve the horizontal beam intensity profile. The neutron detection system will be placed right behind the beam dump, as close to the neutron emitting surface as possible, thus providing the map of the neutron emission on the beam dump surface. The system uses nGEM neutron detectors. These are Gas Electron Multiplier detectors equipped with a cathode that also serves as a neutron-proton converter foil. The cathode is designed to ensure that most of the detected neutrons at a point of the nGEM surface are emitted from the corresponding beamlet footprint (with dimensions of about 40 × 22 mm²) on the dump front surface. The size of the nGEM detector for SPIDER is 352 mm × 200 mm. Several smaller size prototypes have been successfully made in recent years and the experience gained on these detectors has led to the production of the full size detector for SPIDER during 2014. This nGEM has a read-out board made of 256 pads (arranged in a 16 × 16 matrix), each with a dimension of 22 mm × 13 mm. This paper describes the production of this detector and its tests (in terms of beam profile reconstruction capability, uniformity over the active area, gamma rejection capability and time stability) performed on the ROTAX beam-line at the ISIS spallation source (Didcot, UK).

  18. Acceleration of plasma electrons by intense nonrelativistic ion and electron beams propagating in background plasma due to two-stream instability

    NASA Astrophysics Data System (ADS)

    Kaganovich, Igor D.

    2015-11-01

In this paper we study the effects of the two-stream instability on the propagation of intense nonrelativistic ion and electron beams in background plasma. Development of the two-stream instability between the beam ions and plasma electrons leads to beam breakup, a slowing down of the beam particles, acceleration of the plasma particles, and transfer of the beam energy to the plasma particles and wave excitations. Making use of the particle-in-cell codes EDIPIC and LSP and analytic theory, we have simulated the effects of the two-stream instability on beam propagation over a wide range of beam and plasma parameters. Because of the two-stream instability, the plasma electrons can be accelerated to velocities as high as twice the beam velocity. The resulting return current of the accelerated electrons may completely change the structure of the beam self-magnetic field, thereby changing its effect on the beam from focusing to defocusing. Therefore, previous theories of beam self-electromagnetic fields that did not take into account the effects of the two-stream instability must be significantly modified. This effect can be observed on the National Drift Compression Experiment-II (NDCX-II) facility by measuring the spot size of the extracted beamlet propagating through several meters of plasma. Particle-in-cell simulations, fluid simulations, and analytical theory also reveal the rich complexity of beam-plasma interaction phenomena: intermittency and multiple regimes of the two-stream instability in dc discharges; band structure of the growth rate of the two-stream instability of an electron beam propagating in a bounded plasma; and repeated acceleration of electrons in a finite system. In collaboration with E. Tokluoglu, D. Sydorenko, E. A. Startsev, J. Carlsson, and R. C. Davidson. Research supported by the U.S. Department of Energy.
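The statement that plasma electrons reach up to twice the beam velocity follows from the wave picture of the instability. For reference, the standard cold-beam result (a textbook expression, not quoted in the record above) for the maximum growth rate is:

```latex
% Maximum growth rate of the cold two-stream (beam-plasma) instability,
% valid for beam density n_b much less than plasma density n_p:
\gamma_{\max} = \frac{\sqrt{3}}{2^{4/3}}
  \left(\frac{n_b}{n_p}\right)^{1/3} \omega_{pe},
\qquad
\omega_{pe} = \sqrt{\frac{4\pi n_p e^2}{m_e}} \quad \text{(Gaussian units)}.
% Electrons starting near rest that become trapped in the saturated wave,
% whose phase velocity is close to the beam velocity v_b, can be carried
% up to roughly 2 v_b in the lab frame, consistent with the acceleration
% described in the abstract.
```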

  19. Theoretical Benefits of Dynamic Collimation in Pencil Beam Scanning Proton Therapy for Brain Tumors: Dosimetric and Radiobiological Metrics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moignier, Alexandra, E-mail: alexandra-moignier@uiowa.edu; Gelover, Edgar; Wang, Dongxu

Purpose: To quantify the dosimetric benefit of using a dynamic collimation system (DCS) for penumbra reduction during the treatment of brain tumors by pencil beam scanning proton therapy (PBS PT). Methods and Materials: Collimated and uncollimated brain treatment plans were created for 5 patients previously treated with PBS PT and retrospectively enrolled in an institutional review board-approved study. The in-house treatment planning system, RDX, was used to generate the plans because it is capable of modeling both collimated and uncollimated beamlets. The clinically delivered plans were reproduced with uncollimated plans in terms of target coverage and organ at risk (OAR) sparing to ensure a clinically relevant starting point, and collimated plans were generated to improve the OAR sparing while maintaining target coverage. Physical and biological comparison metrics, such as dose distribution conformity, mean and maximum doses, normal tissue complication probability, and risk of secondary brain cancer, were used to evaluate the plans. Results: The DCS systematically improved the dose distribution conformity while preserving the target coverage. The average reductions of the mean dose to the 10-mm ring surrounding the target and to the healthy brain were 13.7% (95% confidence interval [CI] 11.6%-15.7%; P<.0001) and 25.1% (95% CI 16.8%-33.4%; P<.001), respectively. This yielded an average reduction of 24.8% (95% CI 0.8%-48.8%; P<.05) for the brain necrosis normal tissue complication probability using the Flickinger model, and 25.1% (95% CI 16.8%-33.4%; P<.001) for the risk of secondary brain cancer. A general improvement of the OAR sparing was also observed. Conclusion: The lateral penumbra reduction afforded by the DCS increases the normal tissue sparing capabilities of PBS PT for brain cancer treatment while preserving target coverage.

  20. Recent laser upgrades at Sandia’s Z-backlighter facility in order to accommodate new requirements for magnetized liner inertial fusion on the Z-machine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schwarz, Jens; Rambo, Patrick; Armstrong, Darrell

    The Z-backlighter laser facility primarily consists of two high energy, high-power laser systems. Z-Beamlet laser (ZBL) (Rambo et al., Appl. Opt. 44, 2421 (2005)) is a multi-kJ-class, nanosecond laser operating at 1054 nm which is frequency doubled to 527 nm in order to provide x-ray backlighting of high energy density events on the Z-machine. Z-Petawatt (ZPW) (Schwarz et al., J. Phys.: Conf. Ser. 112, 032020 (2008)) is a petawatt-class system operating at 1054 nm delivering up to 500 J in 500 fs for backlighting and various short-pulse laser experiments (see also Figure 10 for a facility overview). With the development of the magnetized liner inertial fusion (MagLIF) concept on the Z-machine, the primary backlighting missions of ZBL and ZPW have been adjusted accordingly. As a result, we have focused our recent efforts on increasing the output energy of ZBL from 2 to 4 kJ at 527 nm by modifying the fiber front end to now include extra bandwidth (for stimulated Brillouin scattering suppression). The MagLIF concept requires a well-defined/behaved beam for interaction with the pressurized fuel. Hence we have made great efforts to implement an adaptive optics system on ZBL and have explored the use of phase plates. We are also exploring concepts to use ZPW as a backlighter for ZBL driven MagLIF experiments. Alternatively, ZPW could be used as an additional fusion fuel pre-heater or as a temporally flexible high energy pre-pulse. All of these concepts require the ability to operate the ZPW in a nanosecond long-pulse mode, in which the beam can co-propagate with ZBL. Finally, some of the proposed modifications are complete and most of them are well on their way.

  1. Metabolite Biometrics for the Differentiation of Individuals.

    PubMed

    Hair, Mindy E; Mathis, Adrianna I; Brunelle, Erica K; Halámková, Lenka; Halámek, Jan

    2018-04-17

    Sweat is a biological fluid present on the skin surface of every individual and is known to contain amino acids as well as other low molecular weight compounds. (1) Each individual is inherently different from every other based on certain factors including, but not limited to, his/her genetic makeup, environment, and lifestyle. As such, the biochemical composition of each person greatly differs. The concentrations of the biochemical content within an individual's sweat are largely controlled by metabolic processes within the body that fluctuate regularly based on attributes such as age, sex, and activity level. Therefore, the concentrations of these sweat components are person-specific and can be exploited, as presented here, to differentiate individuals based on trace amounts of sweat. For this concept, we analyzed three model compounds: lactate, urea, and glutamate. The average absorbance change from each compound in sweat was determined using three separate bioaffinity-based systems: lactate oxidase coupled with horseradish peroxidase (LOx-HRP), urease coupled with glutamate dehydrogenase (UR-GlDH), and glutamate dehydrogenase alone (GlDH). After optimizing the linear dependence of each assay on its respective analyte, analysis was performed on 50 mimicked sweat samples. Additionally, a collection and extraction method was developed and optimized by our group to evaluate authentic sweat samples from the skin surface of 25 individuals. A multivariate analysis of variance (MANOVA) test was performed to demonstrate that these three single-analyte enzymatic assays were effectively used to identify each person in both sample sets. This novel sweat analysis approach is capable of differentiating individuals, without the use of DNA, based on the collective responses from the chosen metabolic compounds in sweat. 
Applications for this newly developed, noninvasive analysis can include the field of forensic science in order to differentiate between individuals as well as the fields of homeland security and cybersecurity for personal authentication via unlocking mechanisms in smart devices that monitor metabolites. Through further development and analysis, this concept also has the potential to be clinically applicable in monitoring the health of individuals based on particular biomarker combinations.

  2. Evaluation of adding item-response theory analysis for evaluation of the European Board of Ophthalmology Diploma examination.

    PubMed

    Mathysen, Danny G P; Aclimandos, Wagih; Roelant, Ella; Wouters, Kristien; Creuzot-Garcher, Catherine; Ringens, Peter J; Hawlina, Marko; Tassignon, Marie-José

    2013-11-01

    To investigate whether introduction of item-response theory (IRT) analysis, in parallel to the 'traditional' statistical analysis methods available for performance evaluation of multiple T/F items as used in the European Board of Ophthalmology Diploma (EBOD) examination, has proved beneficial, and secondly, to study whether the overall assessment performance of the current written part of EBOD is sufficiently high (KR-20 ≥ 0.90) to be kept as examination format in future EBOD editions. 'Traditional' analysis methods for individual MCQ item performance comprise P-statistics, Rit-statistics and item discrimination, while overall reliability is evaluated through KR-20 for multiple T/F items. The additional set of statistical analysis methods for the evaluation of EBOD comprises mainly IRT analysis. These analysis techniques are used to monitor whether the introduction of negative marking for incorrect answers (since EBOD 2010) has a positive influence on the statistical performance of EBOD as a whole and its individual test items in particular. Item-response theory analysis demonstrated that item performance parameters should not be evaluated individually, but should be related to one another. Before the introduction of negative marking, the overall EBOD reliability (KR-20) was good though with room for improvement (EBOD 2008: 0.81; EBOD 2009: 0.78). After the introduction of negative marking, the overall reliability of EBOD improved significantly (EBOD 2010: 0.92; EBOD 2011: 0.91; EBOD 2012: 0.91). Although many statistical performance parameters are available to evaluate individual items, our study demonstrates that the overall reliability assessment remains the only crucial parameter to be evaluated allowing comparison. While individual item performance analysis is worthwhile to undertake as secondary analysis, drawing final conclusions seems to be more difficult. Performance parameters need to be related, as shown by IRT analysis. 
Therefore, IRT analysis has proved beneficial for the statistical analysis of EBOD. Introduction of negative marking has led to a significant increase in the reliability (KR-20 > 0.90), indicating that the current examination format can be kept for future EBOD examinations. © 2013 Acta Ophthalmologica Scandinavica Foundation. Published by John Wiley & Sons Ltd.
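    The KR-20 reliability coefficient that this record evaluates against the 0.90 threshold has a simple closed form: KR-20 = k/(k-1) * (1 - sum(p*q)/Var(total)), where k is the number of dichotomous items, p is the proportion answering each item correctly, and q = 1 - p. A minimal pure-Python sketch; the function name and toy score matrix are illustrative, not data from the EBOD examinations:

```python
def kr20(scores):
    """Kuder-Richardson formula 20 for dichotomous (True/False) items.

    scores: list of rows, one per examinee; each entry is 1 (correct) or 0.
    Uses the population variance of total scores, one common convention.
    """
    n_items = len(scores[0])
    n_people = len(scores)
    totals = [sum(row) for row in scores]
    mean_total = sum(totals) / n_people
    var_total = sum((t - mean_total) ** 2 for t in totals) / n_people
    # Sum of item variances p*(1-p) over all items.
    pq_sum = 0.0
    for j in range(n_items):
        p = sum(row[j] for row in scores) / n_people
        pq_sum += p * (1 - p)
    return (n_items / (n_items - 1)) * (1 - pq_sum / var_total)
```

On a Guttman-like toy matrix such as `[[1,1,1],[1,1,0],[1,0,0],[0,0,0]]` this returns 0.75, illustrating the "good though with room for improvement" range the abstract describes for the pre-2010 examinations.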

  3. The Science of the Individual

    ERIC Educational Resources Information Center

    Rose, L. Todd; Rouhani, Parisa; Fischer, Kurt W.

    2013-01-01

    Our goal is to establish a science of the individual, grounded in dynamic systems, and focused on the analysis of individual variability. Our argument is that individuals behave, learn, and develop in distinctive ways, showing patterns of variability that are not captured by models based on statistical averages. As such, any meaningful attempt to…

  4. Predicting Change in Postpartum Depression: An Individual Growth Curve Approach.

    ERIC Educational Resources Information Center

    Buchanan, Trey

    Recently, methodologists interested in examining problems associated with measuring change have suggested that developmental researchers should focus upon assessing change at both intra-individual and inter-individual levels. This study used an application of individual growth curve analysis to the problem of maternal postpartum depression.…

  5. Metacarpophalangeal pattern profile analysis in Leri-Weill dyschondrosteosis.

    PubMed

    Laurencikas, E; Soderman, E; Grigelioniene, G; Hagenäs, L; Jorulf, H

    2005-04-01

    To analyze the metacarpophalangeal profile (MCPP) in individuals with Leri-Weill dyschondrosteosis (LWD) and to assess its value as a possible contributor to early diagnosis. Hand profiles of 39 individuals with a diagnosis of LWD were calculated and analyzed. Discriminant analysis was applied to differentiate between LWD and normal individuals. There was a distinct pattern profile in LWD. Mean pattern profile showed two bone-shortening gradients, with increasing shortening from distal to proximal and from medial to lateral. Distal phalanx 2 was disproportionately long and the second metacarpal was disproportionately short. Discriminant analysis yielded correct classification in 72% of analyzed cases. MCPP is not age-related and the analysis can be applied at any age, facilitating early diagnosis of LWD. In view of its availability, low cost, and diagnostic value, MCPP analysis should be considered as a routine method in patients of short stature in whom LWD is suspected.

  6. Comparison of individual and composite field analysis using array detector for Intensity Modulated Radiotherapy dose verification.

    PubMed

    Saminathan, Sathiyan; Chandraraj, Varatharaj; Sridhar, C H; Manickam, Ravikumar

    2012-01-01

    To compare the measured and calculated individual and composite field planar dose distributions of Intensity Modulated Radiotherapy plans. The measurements were performed on a Clinac DHX linear accelerator with 6 MV photons using a Matrixx device and a solid water phantom. Twenty brain tumor patients were selected for this study. The IMRT plan was carried out for all the patients using the Eclipse treatment planning system. The verification plan was produced for every original plan using a CT scan of the Matrixx embedded in the phantom. Every verification field was measured by the Matrixx. The TPS calculated and measured dose distributions were compared for individual and composite fields. The percentage of gamma pixel match for the dose distribution patterns was evaluated using gamma histograms. For individual fields, the gamma pixel match was 95-98% for 41 fields (39%) and >98% for 59 fields (61%). For composite fields, the percentage of gamma pixel match was 95-98% for 5 patients and >98% for 12 patients; three patients showed a gamma pixel match of less than 95%. The comparison of percentage gamma pixel match for individual and composite fields showed more than 2.5% variation for 6 patients and more than 1% variation for 4 patients, while the remaining 10 patients showed less than 1% variation. The individual and composite field measurements showed good agreement with the TPS calculated dose distribution for the studied patients. Because measurement and data analysis for individual fields is a time-consuming process, composite field analysis may be sufficient for smaller-field dose distribution analysis with array detectors.
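    The gamma pixel match reported in this record comes from gamma analysis: each measured point passes if some calculated point lies within a combined dose-difference and distance-to-agreement tolerance (commonly 3%/3 mm). A simplified one-dimensional global-gamma sketch; the function name and profiles are illustrative, and clinical tools implement the full 2-D version with interpolation:

```python
import math

def gamma_pass_rate(measured, calculated, spacing_mm, dose_tol=0.03, dist_mm=3.0):
    """Fraction of measured points with gamma index <= 1 (global normalization).

    measured, calculated: dose profiles sampled on the same grid.
    spacing_mm: distance between adjacent samples.
    dose_tol: dose criterion as a fraction of the calculated maximum (3% default).
    dist_mm: distance-to-agreement criterion (3 mm default).
    """
    ref_max = max(calculated)
    passed = 0
    for i, dm in enumerate(measured):
        best = math.inf
        for j, dc in enumerate(calculated):
            dose_term = (dm - dc) / (dose_tol * ref_max)
            dist_term = (i - j) * spacing_mm / dist_mm
            best = min(best, math.hypot(dose_term, dist_term))
        if best <= 1.0:
            passed += 1
    return passed / len(measured)
```

Identical measured and calculated profiles give a pass rate of 1.0; a gross mismatch everywhere gives 0.0, with clinical results such as the 95-98% figures above falling in between.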

  7. Clinical utility of the Calgary Depression Scale for Schizophrenia in individuals at ultra-high risk of psychosis.

    PubMed

    Rekhi, Gurpreet; Ng, Wai Yee; Lee, Jimmy

    2018-03-01

    There is a pressing need for reliable and valid rating scales to assess and measure depression in individuals at ultra-high risk (UHR) of psychosis. The aim of this study was to examine the clinical utility of the Calgary Depression Scale for Schizophrenia (CDSS) in individuals at UHR of psychosis. 167 individuals at UHR of psychosis were included as participants in this study. The Structured Clinical Interview for DSM-IV Axis I Disorders, CDSS, Beck Anxiety Inventory and Global Assessment of Functioning were administered. A receiver operating characteristic (ROC) curve analysis and factor analyses were performed. Cronbach's alpha was computed. Correlations between CDSS factor scores and other clinical variables were examined. The median CDSS total score was 5.0 (IQR 1.0-9.0). The area under the ROC curve was 0.886 and Cronbach's alpha was 0.855. A score of 7 on the CDSS yielded the highest sensitivity and specificity in detecting depression in UHR individuals. Exploratory factor analysis of the CDSS yielded two factors: depression-hopelessness and self-depreciation-guilt, which was confirmed by confirmatory factor analysis. Further analysis showed that the depression-hopelessness factor predicted functioning, whereas the self-depreciation-guilt factor was related to the severity of the attenuated psychotic symptoms. In conclusion, the CDSS demonstrates good psychometric properties when used to evaluate depression in individuals at UHR of psychosis. Our study results also support a two-factor structure of the CDSS in UHR individuals. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  8. An Original Stepwise Multilevel Logistic Regression Analysis of Discriminatory Accuracy: The Case of Neighbourhoods and Health

    PubMed Central

    Wagner, Philippe; Ghith, Nermin; Leckie, George

    2016-01-01

    Background and Aim: Many multilevel logistic regression analyses of “neighbourhood and health” focus on interpreting measures of associations (e.g., odds ratio, OR). In contrast, multilevel analysis of variance is rarely considered. We propose an original stepwise analytical approach that distinguishes between “specific” (measures of association) and “general” (measures of variance) contextual effects. Performing two empirical examples we illustrate the methodology, interpret the results and discuss the implications of this kind of analysis in public health. Methods: We analyse 43,291 individuals residing in 218 neighbourhoods in the city of Malmö, Sweden in 2006. We study two individual outcomes (psychotropic drug use and choice of private vs. public general practitioner, GP) for which the relative importance of neighbourhood as a source of individual variation differs substantially. In Step 1 of the analysis, we evaluate the OR and the area under the receiver operating characteristic (AUC) curve for individual-level covariates (i.e., age, sex and individual low income). In Step 2, we assess general contextual effects using the AUC. Finally, in Step 3 the OR for a specific neighbourhood characteristic (i.e., neighbourhood income) is interpreted jointly with the proportional change in variance (i.e., PCV) and the proportion of ORs in the opposite direction (POOR) statistics. Results: For both outcomes, information on individual characteristics (Step 1) provides a low discriminatory accuracy (AUC = 0.616 for psychotropic drug use; AUC = 0.600 for choosing a private GP). Accounting for neighbourhood of residence (Step 2) only improved the AUC for choosing a private GP (+0.295 units). High neighbourhood income (Step 3) was strongly associated with choosing a private GP (OR = 3.50) but the PCV was only 11% and the POOR 33%. 
Conclusion: Applying an innovative stepwise multilevel analysis, we observed that, in Malmö, the neighbourhood context per se had a negligible influence on individual use of psychotropic drugs, but appears to strongly condition individual choice of a private GP. However, the latter was only modestly explained by the socioeconomic circumstances of the neighbourhoods. Our analyses are based on real data and provide useful information for understanding neighbourhood level influences in general and on individual use of psychotropic drugs and choice of GP in particular. However, our primary aim is to illustrate how to perform and interpret a multilevel analysis of individual heterogeneity in social epidemiology and public health. Our study shows that neighbourhood “effects” are not properly quantified by reporting differences between neighbourhood averages but rather by measuring the share of the individual heterogeneity that exists at the neighbourhood level. PMID:27120054
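    Two of the statistics in the stepwise approach above have simple direct computations: the AUC equals the Mann-Whitney probability that a randomly chosen case scores higher than a randomly chosen non-case, and the PCV is the relative drop in neighbourhood-level variance after adding a covariate. A minimal sketch with hypothetical function names; the toy variance components below are made-up numbers chosen to reproduce an 11% PCV, not values from the study:

```python
def auc(scores_pos, scores_neg):
    """AUC via the Mann-Whitney statistic: P(score_pos > score_neg), ties count 1/2."""
    wins = 0.0
    for sp in scores_pos:
        for sn in scores_neg:
            if sp > sn:
                wins += 1.0
            elif sp == sn:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

def pcv(var_null, var_adjusted):
    """Proportional change in (neighbourhood-level) variance between two models."""
    return (var_null - var_adjusted) / var_null
```

Perfectly separated risk scores give AUC = 1.0, a completely uninformative model gives 0.5, and values such as 0.616 above sit near the uninformative end of that range.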

  9. A Meta-Analysis of Individualized Instruction in Dental Education.

    ERIC Educational Resources Information Center

    Dacanay, Lakshmi S; Cohen, Peter A.

    1992-01-01

    Meta-analysis of 34 comparative studies of conventional vs. individualized instruction (II) found that most favored the latter, but with a small-to-moderate overall effect. Pacing had a significant effect, with teacher-paced instruction more effective than student-paced learning. On average, II required less time than conventional teaching. Additional research on this…

  10. "Scaffolding" through Talk in Groupwork Learning

    ERIC Educational Resources Information Center

    Panselinas, Giorgos; Komis, Vassilis

    2009-01-01

    In the present study, we develop and deploy a conceptual framework of "scaffolding" in groupwork learning, through the analysis of the pursuit of a learning goal over time. The analysis follows individuals' different experiences of an interaction as well as collective experiences, considering individual attainment as a result of a bi-directional…

  11. Analysis of Sensitivity and Uncertainty in an Individual-Based Model of a Threatened Wildlife Species

    EPA Science Inventory

    We present a multi-faceted sensitivity analysis of a spatially explicit, individual-based model (IBM) (HexSim) of a threatened species, the Northern Spotted Owl (Strix occidentalis caurina) on a national forest in Washington, USA. Few sensitivity analyses have been conducted on ...

  12. Stereoscopic video analysis of Anopheles gambiae behavior in the field: challenges and opportunities

    USDA-ARS?s Scientific Manuscript database

    Advances in our ability to localize and track individual swarming mosquitoes in the field via stereoscopic image analysis have enabled us to test long standing ideas about individual male behavior and directly observe coupling. These studies further our fundamental understanding of the reproductive ...

  13. An Etic-Emic Analysis of Individualism and Collectivism.

    ERIC Educational Resources Information Center

    Triandis, Harry C.; And Others

    1993-01-01

    An analysis of the responses of 1,614 adult subjects from 10 cultures shows that the Leung-Bond procedure provides ways of extracting both strong and weak etics relevant to individualism and weak etics relevant to collectivism. The most complete picture is obtained when both etics and emics are examined. (SLD)

  14. Students' Meaning Making in Classroom Discussions: The Importance of Peer Interaction

    ERIC Educational Resources Information Center

    Rudsberg, Karin; Östman, Leif; Aaro Östman, Elisabeth

    2017-01-01

    The aim is to investigate how encounters with peers affect an individual's meaning making in argumentation about socio-scientific issues, and how the individual's meaning making influences the argumentation at the collective level. The analysis is conducted using the analytical method "transactional argumentation analysis" (TAA) which…

  15. Stability of individual loudness functions obtained by magnitude estimation and production

    NASA Technical Reports Server (NTRS)

    Hellman, R. P.

    1981-01-01

    A correlational analysis of individual magnitude estimation and production exponents at the same frequency is performed, as is an analysis of individual exponents produced in different sessions by the same procedure across frequency (250, 1000, and 3000 Hz). Taken as a whole, the results show that individual exponent differences do not decrease by counterbalancing magnitude estimation with magnitude production and that individual exponent differences remain stable over time despite changes in stimulus frequency. Further results show that although individual magnitude estimation and production exponents do not necessarily obey the .6 power law, it is possible to predict the slope of an equal-sensation function averaged for a group of listeners from individual magnitude estimation and production data. On the assumption that individual listeners with sensorineural hearing also produce stable and reliable magnitude functions, it is also shown that the slope of the loudness-recruitment function measured by magnitude estimation and production can be predicted for individuals with bilateral losses of long duration. Results obtained in normal and pathological ears thus suggest that individual listeners can produce loudness judgements that reveal, although indirectly, the input-output characteristic of the auditory system.
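    An individual exponent of the kind analyzed above is conventionally estimated as the least-squares slope in log-log coordinates of magnitude judgments against stimulus intensity, since a power law L = k*I^b becomes a straight line with slope b after taking logarithms. A minimal sketch; the function name and data are illustrative, not from the study:

```python
import math

def fitted_exponent(intensities, magnitudes):
    """Least-squares slope of log10(magnitude) vs log10(intensity).

    For responses following a power law m = k * i**b, this recovers b.
    """
    xs = [math.log10(i) for i in intensities]
    ys = [math.log10(m) for m in magnitudes]
    mean_x = sum(xs) / len(xs)
    mean_y = sum(ys) / len(ys)
    cov_xy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    return cov_xy / var_x
```

Feeding in magnitudes generated exactly as intensity raised to the 0.6 power recovers an exponent of 0.6, the ".6 power law" value that, as the abstract notes, individual listeners do not necessarily obey.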

  16. The individual in mainstream health economics: a case of Persona Non-grata.

    PubMed

    Davis, John B; McMaster, Robert

    2007-09-01

    This paper is motivated by Davis' [14] theory of the individual in economics. Davis' analysis is applied to health economics, where the individual is conceived as a utility maximiser, although capable of regarding others' welfare through interdependent utility functions. Nonetheless, this provides a restrictive and flawed account, engendering a narrow and abstract conception of care grounded in Paretian value and Cartesian analytical frames. Instead, a richer account of the socially embedded individual is advocated, which employs collective intentionality analysis. This provides a sound foundation for research into an approach to health policy that promotes health as a basic human right.

  17. How can social network analysis contribute to social behavior research in applied ethology?

    PubMed

    Makagon, Maja M; McCowan, Brenda; Mench, Joy A

    2012-05-01

    Social network analysis is increasingly used by behavioral ecologists and primatologists to describe the patterns and quality of interactions among individuals. We provide an overview of this methodology, with examples illustrating how it can be used to study social behavior in applied contexts. Like most kinds of social interaction analyses, social network analysis provides information about direct relationships (e.g. dominant-subordinate relationships). However, it also generates a more global model of social organization that determines how individual patterns of social interaction relate to individual and group characteristics. A particular strength of this approach is that it provides standardized mathematical methods for calculating metrics of sociality across levels of social organization, from the population and group levels to the individual level. At the group level these metrics can be used to track changes in social network structures over time, evaluate the effect of the environment on social network structure, or compare social structures across groups, populations or species. At the individual level, the metrics allow quantification of the heterogeneity of social experience within groups and identification of individuals who may play especially important roles in maintaining social stability or information flow throughout the network.

  18. Shifting the balance: conceptualising empowerment in individuals with spinal cord injury.

    PubMed

    Rohatinsky, Noelle; Goodridge, Donna; Rogers, Marla R; Nickel, Darren; Linassi, Gary

    2017-03-01

    Empowerment is believed to be an essential element in self-management of disease and the promotion of self-efficacy, and can be defined as the ability of individuals to increase control over aspects of their lives. In contrast, powerlessness in individuals with chronic illness can occur when they perceive that they lack the capacity, authority or resources to affect an outcome. Individuals with spinal cord injuries (SCIs) are at risk for powerlessness and have the potential to become empowered, but these concepts have not been explored within their context. The purpose of this study was to explore how individuals with SCI enact the empowerment process using Lord's (1991) process of empowerment framework. This study used a secondary analysis of a data set obtained from a mixed methods study exploring access to health and social care for 23 persons with SCI in Saskatchewan, Canada. The primary study data were collected from September 2012 to January 2013. The secondary analysis of data utilised a deductive thematic analysis approach and findings were conceptualised and applied to a model that represents the shift in balance between powerlessness and empowerment in individuals with SCI. © 2016 John Wiley & Sons Ltd.

  19. Results from Evaluation of Three Commercial Off the shelf Face Recognition Systems on Chokepoint Dataset

    DTIC Science & Technology

    2014-09-01

    Level 2, or subject-based, analysis describes the performance of the system using the so-called “Doddington’s Zoo” categorization of individuals, which detects whether an individual belongs to an easier or a harder class of people that the system is able to recognize. Related reference: Marcialis and F. Roli, “An experimental analysis of the relationship between biometric template update and the Doddington’s zoo: A case study in face

  20. Factors associated with the amount of public home care received by elderly and intellectually disabled individuals in a large Norwegian municipality.

    PubMed

    Døhl, Øystein; Garåsen, Helge; Kalseth, Jorid; Magnussen, Jon

    2016-05-01

    This study reports an analysis of factors associated with home care use in a setting in which long-term care services are provided within a publicly financed welfare system. We considered two groups of home care recipients: elderly individuals and intellectually disabled individuals. Routinely collected data on users of public home care in the municipality of Trondheim in October 2012, including 2493 people aged 67 years or older and 270 intellectually disabled people, were used. Multivariate regression analysis was used to analyse the relationship between the time spent in direct contact with recipients by public healthcare personnel and perceived individual determinants of home care use (i.e. physical disability, cognitive impairment, diagnoses, age and gender, as well as socioeconomic characteristics). Physical disability and cognitive impairment are routinely registered for long-term care users through a standardised instrument that is used in all Norwegian municipalities. Factor analysis was used to aggregate the individual items into composite variables that were included as need variables. Both physical disability and cognitive impairment were strong predictors of the amount of received care for both elderly and intellectually disabled individuals. Furthermore, we found a negative interaction effect between physical disability and cognitive impairment for elderly home care users. For elderly individuals, we also found significant positive associations between weekly hours of home care and having comorbidity, living alone, living in a service flat and having a safety alarm. The reduction in the amount of care for elderly individuals living with a cohabitant was substantially greater for males than for females. For intellectually disabled individuals, receiving services involuntarily due to severe behavioural problems was a strong predictor of the amount of care received. 
Our analysis showed that routinely collected data capture important predictors of home care use and thus facilitate both short-term budgeting and long-term planning of home care services. © 2015 John Wiley & Sons Ltd.

  1. Interaction of participant characteristics and type of AAC with individuals with ASD: a meta-analysis.

    PubMed

    Ganz, Jennifer B; Mason, Rose A; Goodwyn, Fara D; Boles, Margot B; Heath, Amy K; Davis, John L

    2014-11-01

    Individuals with autism spectrum disorders (ASD) and complex communication needs often rely on augmentative and alternative communication (AAC) as a means of functional communication. This meta-analysis investigated how individual characteristics moderate effectiveness of three types of aided AAC: the Picture Exchange Communication System (PECS), speech-generating devices (SGDs), and other picture-based AAC. Effectiveness was measured via the Improvement Rate Difference. Results indicated that AAC has small to moderate effects on speech outcomes, and that SGDs appear to be most effective when considering any outcome measure with individuals with ASD without comorbid intellectual/developmental disorders (IDD). PECS appears to be most effective when considering any outcome measure with individuals with ASD and IDD. SGDs and PECS were the most effective type of AAC for preschoolers, when aggregating across outcome measures. No difference was found between systems for elementary-aged and older individuals.

  2. Extracting body image symptom dimensions among eating disorder patients: the Profile Analysis via Multidimensional Scaling (PAMS) approach.

    PubMed

    Olatunji, Bunmi O; Kim, Se-Kang; Wall, David

    2015-09-01

    The present study employs Profile Analysis via Multidimensional Scaling (PAMS), a procedure for extracting dimensions, in order to identify core eating disorder symptoms in a clinical sample. A large sample of patients with eating disorders (N=5193) presenting for treatment completed the Eating Disorders Inventory-2 (EDI-2; Garner, 1991), and PAMS was then employed to estimate individual profile weights that reflect the degree to which an individual's observed symptom profile approximates the pattern of the dimensions. The findings revealed three symptom dimensions: Body Thinness, Body Perfectionism, and Body Awareness. Subsequent analysis using individual level data illustrate that the PAMS profiles properly operate as prototypical profiles that encapsulate all individuals' response patterns. The implications of these dimensional findings for the assessment and diagnosis of eating disorders are discussed. Copyright © 2015 Elsevier Ltd. All rights reserved.

  3. Familial Analysis of Seropositivity to Trypanosoma cruzi and of Clinical Forms of Chagas Disease

    PubMed Central

    Silva-Grecco, Roseane L.; Balarin, Marly A. S.; Correia, Dalmo; Prata, Aluízio; Rodrigues, Virmondes

    2010-01-01

    A cross-sectional study was carried out in Água Comprida, MG, Brazil, a region previously endemic to Chagas disease whose vectorial transmission was interrupted around 20 years ago. A total of 998 individuals were examined for anti-Trypanosoma cruzi antibodies. Seropositivity was observed in 255 subjects (25.5%), and 743 subjects were negative. Forty-one families with 5–80 individuals with similar environmental conditions were selected for familial analysis. In 15 families, seropositivity to T. cruzi was observed in > 50% of individuals. The segregation analysis confirmed family aggregation for the seropositivity to the T. cruzi. Cardiac involvement was the major clinical form observed, and in six families, > 50% of the individuals displayed cardiopathy that may be attributed to T. cruzi infection. Our results support the hypothesis that there is a family aggregation for the seropositivity but without the effect of one major gene. PMID:20064994

  4. Study protocol for examining job strain as a risk factor for severe unipolar depression in an individual participant meta-analysis of 14 European cohorts.

    PubMed

    Madsen, Ida E H; Hannerz, Harald; Nyberg, Solja T; Magnusson Hanson, Linda L; Ahola, Kirsi; Alfredsson, Lars; Batty, G David; Bjorner, Jakob B; Borritz, Marianne; Burr, Hermann; Dragano, Nico; Ferrie, Jane E; Hamer, Mark; Jokela, Markus; Knutsson, Anders; Koskenvuo, Markku; Koskinen, Aki; Leineweber, Constanze; Nielsen, Martin L; Nordin, Maria; Oksanen, Tuula; Pejtersen, Jan H; Pentti, Jaana; Salo, Paula; Singh-Manoux, Archana; Suominen, Sakari; Theorell, Töres; Toppinen-Tanner, Salla; Vahtera, Jussi; Väänänen, Ari; Westerholm, Peter J M; Westerlund, Hugo; Fransson, Eleonor; Heikkilä, Katriina; Virtanen, Marianna; Rugulies, Reiner; Kivimäki, Mika

    2013-01-01

    Previous studies have shown that gainfully employed individuals with high work demands and low control at work (denoted "job strain") are at increased risk of common mental disorders, including depression. Most existing studies have, however, measured depression using self-rated symptom scales that do not necessarily correspond to clinically diagnosed depression. In addition, a meta-analysis from 2008 indicated publication bias in the field.   This study protocol describes the planned design and analyses of an individual participant data meta-analysis, to examine whether job strain is associated with an increased risk of clinically diagnosed unipolar depression based on hospital treatment registers.  The study will be based on data from approximately 120,000 individuals who participated in 14 studies on work environment and health in 4 European countries. The self-reported working conditions data will be merged with national registers on psychiatric hospital treatment, primarily hospital admissions. Study-specific risk estimates for the association between job strain and depression will be calculated using Cox regressions. The study-specific risk estimates will be pooled using random effects meta-analysis.   The planned analyses will help clarify whether job strain is associated with an increased risk of clinically diagnosed unipolar depression. As the analysis is based on pre-planned study protocols and an individual participant data meta-analysis, the pooled risk estimates will not be influenced by selective reporting and publication bias. However, the results of the planned study may only pertain to severe cases of unipolar depression, because of the outcome measure applied.

  5. Meta-analysis of individual registry results enhances international registry collaboration.

    PubMed

    Paxton, Elizabeth W; Mohaddes, Maziar; Laaksonen, Inari; Lorimer, Michelle; Graves, Stephen E; Malchau, Henrik; Namba, Robert S; Kärrholm, John; Rolfson, Ola; Cafri, Guy

    2018-03-28

    Background and purpose - Although common in medical research, meta-analysis has not been widely adopted in registry collaborations. A meta-analytic approach, in which each registry conducts a standardized analysis on its own data followed by a meta-analysis to calculate a weighted average of the estimates, allows collaboration without sharing patient-level data. The value of meta-analysis as an alternative to individual patient data analysis is illustrated in this study by comparing the risk of revision of porous tantalum cups versus other uncemented cups in primary total hip arthroplasties from Sweden, Australia, and a US registry (2003-2015). Patients and methods - For both the individual patient data analysis and the meta-analysis approach, a Cox proportional hazards model was fit for time to revision, comparing porous tantalum (n = 23,201) with other uncemented cups (n = 128,321). Covariates included age, sex, diagnosis, head size, and stem fixation. In the meta-analysis approach, the treatment effect size (i.e., the Cox model hazard ratio) was calculated within each registry and a weighted average of the individual registries' estimates was calculated. Results - The patient-level data analysis and meta-analytic approaches yielded the same results, with the porous tantalum cups having a higher risk of revision than other uncemented cups (HR (95% CI) 1.6 (1.4-1.7) and HR (95% CI) 1.5 (1.4-1.7), respectively). Adding the US cohort to the meta-analysis led to greater generalizability, increased precision of the treatment effect, and similar findings (HR (95% CI) 1.6 (1.4-1.7)) with increased risk for porous tantalum cups. Interpretation - The meta-analytic technique is a viable option for addressing privacy, security, and data ownership concerns, allowing more expansive registry collaboration, greater generalizability, and increased precision of treatment effects.
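    The registry-level workflow in this record, each registry sharing only a summary estimate that is then averaged with inverse-variance weights, can be sketched as a fixed-effect pooling. The figures below are hypothetical placeholders, not the registries' reported data:

```python
import math

# Each registry shares only its summary statistics (log HR and its SE),
# never patient-level data. All numbers here are invented for illustration.
registry_summaries = {
    "Sweden":    (math.log(1.6), 0.08),
    "Australia": (math.log(1.5), 0.09),
    "US":        (math.log(1.6), 0.11),
}

# Inverse-variance weights: more precise registries count for more.
w = {k: 1.0 / se ** 2 for k, (_, se) in registry_summaries.items()}
pooled_log_hr = sum(w[k] * registry_summaries[k][0] for k in w) / sum(w.values())
se_pooled = (1.0 / sum(w.values())) ** 0.5
lo, hi = (math.exp(pooled_log_hr + z * se_pooled) for z in (-1.96, 1.96))
print(round(math.exp(pooled_log_hr), 2), (round(lo, 2), round(hi, 2)))
```

    Adding a registry, as the abstract notes for the US cohort, increases the total weight and therefore narrows the pooled confidence interval.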

  6. Nanoparticle-based flow virometry for the analysis of individual virions

    PubMed Central

    Arakelyan, Anush; Fitzgerald, Wendy; Margolis, Leonid; Grivel, Jean-Charles

    2013-01-01

    While flow cytometry has been used to analyze the antigenic composition of individual cells, the antigenic makeup of viral particles is still characterized predominantly in bulk. Here, we describe a technology, “flow virometry,” that can be used for antigen detection on individual virions. The technology is based on binding magnetic nanoparticles to virions, staining the virions with monoclonal antibodies, separating the formed complexes with magnetic columns, and characterizing them with flow cytometers. We used this technology to study the distribution of two antigens (HLA-DR and LFA-1) that HIV-1 acquires from infected cells among individual HIV-1 virions. Flow virometry revealed that the antigenic makeup of virions from a single preparation is heterogeneous. This heterogeneity could not be detected with bulk analysis of viruses. Moreover, in two preparations of the same HIV-1 produced by different cells, the distribution of antigens among virions was different. In contrast, HIV-1 of two different HIV-1 genotypes replicating in the same cells became somewhat antigenically similar. This nanotechnology allows the study of virions in bodily fluids without virus propagation and in principle is not restricted to the analysis of HIV, but can be applied to the analysis of the individual surface antigenic makeup of any virus. PMID:23925291

  7. Evaluation of Adherence to Nutritional Intervention Through Trajectory Analysis.

    PubMed

    Sevilla-Villanueva, B; Gibert, K; Sanchez-Marre, M; Fito, M; Covas, M I

    2017-05-01

    Classical pre-post intervention studies are often analyzed with traditional statistics. Nutritional interventions, however, have small effects on metabolism, and traditional statistics are often insufficient to detect these subtle nutrient effects. Generally, such studies assume that participants adhered to the assigned dietary intervention and directly analyze its effects on the target parameters; the evaluation of adherence is thus omitted, even though participants sometimes do not actually adhere to the assigned dietary guidelines. For this reason, the trajectory map is proposed as a visual tool in which the dietary patterns of individuals can be followed during the intervention and related to the nutritional prescriptions. Trajectory analysis is also proposed, permitting both analyses: 1) adherence to the intervention and 2) intervention effects. The analysis projects the differences in the target parameters over the resulting trajectories between states at different time stamps, which may be considered either individually or by groups. The proposal has been applied to a real nutritional study, showing that some individuals adhere better than others and that some individuals in the control group modified their habits during the intervention. In addition, the intervention effects differ depending on the type of individual; some subgroups even show opposite responses to the same intervention.

  8. Tracking and Analyzing Individual Distress Following Terrorist Attacks Using Social Media Streams.

    PubMed

    Lin, Yu-Ru; Margolin, Drew; Wen, Xidao

    2017-08-01

    Risk research has theorized a number of mechanisms that might trigger, prolong, or potentially alleviate individuals' distress following terrorist attacks. These mechanisms are difficult to examine in a single study, however, because the social conditions of terrorist attacks are difficult to simulate in laboratory experiments and appropriate preattack baselines are difficult to establish with surveys. To address this challenge, we propose the use of computational focus groups and a novel analysis framework to analyze a social media stream that archives user history and location. The approach uses time-stamped behavior to quantify an individual's preattack behavior after an attack has occurred, enabling the assessment of time-specific changes in the intensity and duration of an individual's distress, as well as the assessment of individual and social-level covariates. To exemplify the methodology, we collected over 18 million tweets from 15,509 users located in Paris on November 13, 2015, and measured the degree to which they expressed anxiety, anger, and sadness after the attacks. The analysis resulted in findings that would be difficult to observe through other methods, such as that news media exposure had competing, time-dependent effects on anxiety, and that gender dynamics are complicated by baseline behavior. Opportunities for integrating computational focus group analysis with traditional methods are discussed. © 2017 Society for Risk Analysis.

  9. Analysis of Strengths, Weaknesses, Opportunities, and Threats as a Tool for Translating Evidence into Individualized Medical Strategies (I-SWOT)

    PubMed Central

    von Kodolitsch, Yskert; Bernhardt, Alexander M.; Robinson, Peter N.; Kölbel, Tilo; Reichenspurner, Hermann; Debus, Sebastian; Detter, Christian

    2015-01-01

    Background It is the physicians’ task to translate evidence and guidelines into medical strategies for individual patients. Until today, however, there is no formal tool that is instrumental to perform this translation. Methods We introduce the analysis of strengths (S) and weaknesses (W) related to therapy with opportunities (O) and threats (T) related to individual patients as a tool to establish an individualized (I) medical strategy (I-SWOT). The I-SWOT matrix identifies four fundamental types of strategy. These comprise “SO” maximizing strengths and opportunities, “WT” minimizing weaknesses and threats, “WO” minimizing weaknesses and maximizing opportunities, and “ST” maximizing strengths and minimizing threats. Each distinct type of strategy may be considered for individualized medical strategies. Results We describe four steps of I-SWOT to establish an individualized medical strategy to treat aortic disease. In the first step, we define the goal of therapy and identify all evidence-based therapeutic options. In a second step, we assess strengths and weaknesses of each therapeutic option in a SW matrix form. In a third step, we assess opportunities and threats related to the individual patient, and in a final step, we use the I-SWOT matrix to establish an individualized medical strategy through matching “SW” with “OT”. As an example we present two 30-year-old patients with Marfan syndrome with identical medical history and aortic pathology. As a result of I-SWOT analysis of their individual opportunities and threats, we identified two distinct medical strategies in these patients. Conclusion I-SWOT is a formal but easy to use tool to translate medical evidence into individualized medical strategies. PMID:27069939

  10. Analysis of Strengths, Weaknesses, Opportunities, and Threats as a Tool for Translating Evidence into Individualized Medical Strategies (I-SWOT).

    PubMed

    von Kodolitsch, Yskert; Bernhardt, Alexander M; Robinson, Peter N; Kölbel, Tilo; Reichenspurner, Hermann; Debus, Sebastian; Detter, Christian

    2015-06-01

    It is the physicians' task to translate evidence and guidelines into medical strategies for individual patients. Until today, however, there is no formal tool that is instrumental to perform this translation. We introduce the analysis of strengths (S) and weaknesses (W) related to therapy with opportunities (O) and threats (T) related to individual patients as a tool to establish an individualized (I) medical strategy (I-SWOT). The I-SWOT matrix identifies four fundamental types of strategy. These comprise "SO" maximizing strengths and opportunities, "WT" minimizing weaknesses and threats, "WO" minimizing weaknesses and maximizing opportunities, and "ST" maximizing strengths and minimizing threats. Each distinct type of strategy may be considered for individualized medical strategies. We describe four steps of I-SWOT to establish an individualized medical strategy to treat aortic disease. In the first step, we define the goal of therapy and identify all evidence-based therapeutic options. In a second step, we assess strengths and weaknesses of each therapeutic option in a SW matrix form. In a third step, we assess opportunities and threats related to the individual patient, and in a final step, we use the I-SWOT matrix to establish an individualized medical strategy through matching "SW" with "OT". As an example we present two 30-year-old patients with Marfan syndrome with identical medical history and aortic pathology. As a result of I-SWOT analysis of their individual opportunities and threats, we identified two distinct medical strategies in these patients. I-SWOT is a formal but easy to use tool to translate medical evidence into individualized medical strategies.

  11. The Struggle Between Liberties and Authorities in the Information Age.

    PubMed

    Taddeo, Mariarosaria

    2015-10-01

    The "struggle between liberties and authorities", as described by Mill, refers to the tension between individual rights and the rules restricting them that are imposed by public authorities exerting their power over civil society. In this paper I argue that contemporary information societies are experiencing a new form of such a struggle, which now involves liberties and authorities in the cyber-sphere and, more specifically, refers to the tension between cyber-security measures and individual liberties. Ethicists, political philosophers and political scientists have long debated how to strike an ethically sound balance between security measures and individual rights. I argue that such a balance can only be reached once individual rights are clearly defined, and that such a definition cannot prescind from an analysis of individual well-being in the information age. Hence, I propose an analysis of individual well-being which rests on the capability approach, and I then identify a set of rights that individuals should claim for themselves. Finally, I consider a criterion for balancing the proposed set of individual rights with cyber-security measures in the information age.

  12. Ambiguity and judgments of obese individuals: no news could be bad news.

    PubMed

    Ross, Kathryn M; Shivy, Victoria A; Mazzeo, Suzanne E

    2009-08-01

    Stigmatization towards obese individuals has not decreased despite the increasing prevalence of obesity. Nonetheless, stigmatization remains difficult to study, given concerns about social desirability. To address this issue, this study used paired comparisons and cluster analysis to examine how undergraduates (n=189) categorized scenarios describing the health-related behaviors of obese individuals. The cluster analysis found that the scenarios were categorized into two distinct clusters. The first cluster included all scenarios with health behaviors indicating high responsibility for body weight. These individuals were perceived as unattractive, lazy, less likeable, less disciplined, and more deserving of their condition compared to individuals in the second cluster, which included all scenarios with health behaviors indicating low responsibility for body weight. Four scenarios depicted obese individuals with ambiguous information regarding health behaviors; three out of these four individuals were categorized in the high-responsibility cluster. These findings suggested that participants viewed these individuals as negatively as those who were responsible for their condition. These results have practical implications for reducing obesity bias, as the etiology of obesity is typically not known in real-life situations.

  13. Trustful societies, trustful individuals, and health: An analysis of self-rated health and social trust using the World Value Survey.

    PubMed

    Jen, Min Hua; Sund, Erik R; Johnston, Ron; Jones, Kelvyn

    2010-09-01

    This study analyses the relationships between self-rated health and both individual and mean national social trust, focusing on a variant of Wilkinson's hypothesis that individuals will be less healthy the greater the lack of social cohesion in a country. It employs multilevel modelling on World Values Survey data across 69 countries, with a total sample of 160,436 individuals. The results show that self-rated health is positively linked to social trust at both the country and individual levels after controlling for individual socio-demographic and income variables plus individual social trust; increased trust is associated with better health. Moreover, this analysis of social trust gives some insight into the distinctive results for the former Soviet Bloc countries, which have high reported levels of poor health, alongside the Scandinavian countries, which have high levels of trust and better health. Our results support and extend the Wilkinson hypothesis that the level of trust, an indicator of social cohesion, is predictive of individuals' health. Copyright 2010 Elsevier Ltd. All rights reserved.

  14. Grouping individual independent BOLD effects: a new way to ICA group analysis

    NASA Astrophysics Data System (ADS)

    Duann, Jeng-Ren; Jung, Tzyy-Ping; Sejnowski, Terrence J.; Makeig, Scott

    2009-04-01

    A new group analysis method for summarizing task-related BOLD responses based on independent component analysis (ICA) is presented. In contrast to the previously proposed group ICA (gICA) method, which first combines multi-subject fMRI data in either the temporal or the spatial domain and applies ICA decomposition only once to the combined data to extract the task-related BOLD effects, the method presented here applies ICA decomposition to each individual subject's fMRI data to first find that subject's independent BOLD effects. The task-related independent BOLD component is then selected from the components resulting from the single-subject ICA decomposition and grouped across subjects to derive the group inference. In this new ICA group analysis (ICAga) method, one need not assume that the task-related BOLD time courses are identical across brain areas and subjects, as the grand ICA decomposition of spatially concatenated fMRI data does. Nor need one assume that, after spatial normalization, voxels at the same coordinates represent exactly the same functional or structural brain anatomy across subjects. Both assumptions are problematic given recent BOLD activation evidence. Further, because the independent BOLD effects are obtained from each individual subject, the ICAga method can better account for individual differences in the task-related BOLD effects, unlike the gICA approach, in which the task-related BOLD effects can only be accounted for by a single unified BOLD model across multiple subjects. As a result, the proposed ICAga method better fits the task-related BOLD effects at the individual level and thus allows more appropriate multisubject BOLD effects to be grouped in the group analysis.
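    The per-subject selection step this record describes (pick the task-related component from each subject's own ICA decomposition) can be illustrated with a toy selection rule, maximum absolute correlation with the task regressor. Both the rule and the data are illustrative assumptions, not the authors' exact criterion:

```python
import math

def pearson(x, y):
    """Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

def select_task_component(component_timecourses, task_regressor):
    """Pick the component whose time course best matches the task design
    (hypothetical rule: maximum absolute correlation)."""
    scores = [abs(pearson(tc, task_regressor)) for tc in component_timecourses]
    best = max(range(len(scores)), key=scores.__getitem__)
    return best, scores[best]

# Toy data: one subject's three ICA component time courses; the task
# regressor is a boxcar design (off-off-on-on, repeated).
task = [0, 0, 1, 1, 0, 0, 1, 1]
components = [
    [0.1, -0.2, 0.0, 0.1, -0.1, 0.2, 0.0, -0.1],   # noise-like
    [0.0, 0.1, 0.9, 1.1, 0.1, 0.0, 1.0, 0.9],      # task-locked
    [1.0, 0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3],      # slow drift
]
idx, score = select_task_component(components, task)
print(idx)  # → 1
```

    Repeating this selection per subject, and then pooling only the selected components, is what lets each subject keep an individual BOLD time course rather than a single unified model.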

  15. Determination of the Absolute Number of Cytokine mRNA Molecules within Individual Activated Human T Cells

    NASA Technical Reports Server (NTRS)

    Karr, Laurel J.; Marshall, Gwen; Hockett, Richard D.; Bucy, R. Pat; Curreri, Peter A. (Technical Monitor)

    2002-01-01

    A primary function of activated T cells is the expression and subsequent secretion of cytokines, which orchestrate the differentiation of other lymphocytes, modulate antigen presenting cell activity, and alter vascular endothelium to mediate an immune response. Since many features of immune regulation probably result from modest alterations of endogenous rates of multiple interacting processes, quantitative analysis of the frequency and specific activity of individual T cells is critically important. Using a coordinated set of quantitative methods, the absolute number of molecules of several key cytokine mRNA species in individual T cells has been determined. The frequency of human blood T cells activated in vitro by mitogens and recall protein antigens was determined by intracellular cytokine protein staining, in situ hybridization for cytokine mRNA, and by limiting dilution analysis for cytokine mRNA+ cells. The absolute number of mRNA molecules was simultaneously determined in both homogenates of the entire population of cells and in individual cells obtained by limiting dilution, using a quantitative, competitive RT-PCR assay. The absolute numbers of mRNA molecules in a population of cells divided by the frequency of individual positive cells, yielded essentially the same number of mRNA molecules per cell as direct analysis of individual cells by limiting dilution analysis. Mean numbers of mRNA per positive cell from both mitogen and antigen activated T cells, using these stimulation conditions, were 6000 for IL-2, 6300 for IFN-gamma, and 1600 for IL-4.
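    The cross-check described in this record, dividing the population total of mRNA molecules by the number of cytokine-positive cells, is simple arithmetic; a minimal sketch with hypothetical numbers (not the study's measurements):

```python
def mrna_per_cell(total_molecules, n_cells, positive_fraction):
    """Mean mRNA copies per positive cell: the bulk total divided by
    the number of positive cells (frequency times population size)."""
    return total_molecules / (n_cells * positive_fraction)

# Hypothetical example: 3e7 molecules measured by quantitative,
# competitive RT-PCR in a homogenate of 1e4 cells, where in situ
# hybridization finds 50% of cells positive:
print(mrna_per_cell(3.0e7, 1.0e4, 0.5))  # → 6000.0
```

    Agreement between this population-level quotient and direct limiting-dilution counts of single cells is exactly the consistency the abstract reports.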

  16. Applications of Photogrammetry for Analysis of Forest Plantations. Preliminary study: Analysis of individual trees

    NASA Astrophysics Data System (ADS)

    Mora, R.; Barahona, A.; Aguilar, H.

    2015-04-01

    This paper presents a method for using high-detail volumetric information, captured with a land-based photogrammetric survey, to obtain information about individual trees. Applying LIDAR analysis techniques, it is possible to measure the diameter at breast height, the height to the first branch (commercial height), the basal area, and the volume of an individual tree; given this information, one can calculate how much of the tree can be exploited as wood. The main objective is to develop a methodology for surveying an individual tree, capturing every side of the stem using a high-resolution digital camera and reference marks with GPS coordinates. The process is executed for several individuals of two species present in the metropolitan area of San Jose, Costa Rica, Delonix regia (Bojer) Raf. and Tabebuia rosea (Bertol.) DC., each with a different height, stem shape, and crown area. Using a photogrammetry suite, all the pictures are aligned and geo-referenced, and a dense point cloud is generated with enough detail to perform the required measurements, together with a solid three-dimensional model for volume measurement. This research opens the way to a capture methodology using a camera on close-range UAVs. An airborne platform would make it possible to capture every individual in a forest plantation; furthermore, if the analysis techniques applied in this research are automated, it will be possible to calculate the exploitable potential of a forest plantation with high precision and improve its management.
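    The diameter-at-breast-height measurement mentioned above can be approximated from a stem point cloud. The sketch below slices the cloud at 1.3 m and doubles the mean centroid distance, a cruder rule than the circle or cylinder fits used in real pipelines, applied to synthetic data:

```python
import math

def dbh_from_slice(points, breast_height=1.3, tol=0.05):
    """Estimate diameter at breast height (metres) from (x, y, z) stem
    points: keep a thin slice around 1.3 m and take twice the mean
    distance from the slice centroid. A crude sketch; production code
    would fit a circle or cylinder robustly."""
    ring = [(x, y) for x, y, z in points if abs(z - breast_height) <= tol]
    cx = sum(x for x, _ in ring) / len(ring)
    cy = sum(y for _, y in ring) / len(ring)
    r = sum(math.hypot(x - cx, y - cy) for x, y in ring) / len(ring)
    return 2.0 * r

# Synthetic stem: a ring of points on a 0.40 m diameter circle at z = 1.3 m
pts = [(0.2 * math.cos(t / 10.0 * 2 * math.pi),
        0.2 * math.sin(t / 10.0 * 2 * math.pi), 1.3) for t in range(10)]
print(round(dbh_from_slice(pts), 2))  # → 0.4
```

    The centroid-distance rule only works when the slice covers the full circumference, which is why the survey methodology insists on capturing every side of the stem.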

  17. Help Seeking in Online Collaborative Groupwork: A Multilevel Analysis

    ERIC Educational Resources Information Center

    Du, Jianxia; Xu, Jianzhong; Fan, Xitao

    2015-01-01

    This study examined predictive models for students' help seeking in the context of online collaborative groupwork. Results from multilevel analysis revealed that most of the variance in help seeking was at the individual student level, and multiple variables at the individual level were predictive of help-seeking behaviour. Help seeking was…

  18. Discrimination against Latina/os: A Meta-Analysis of Individual-Level Resources and Outcomes

    ERIC Educational Resources Information Center

    Lee, Debbiesiu L.; Ahn, Soyeon

    2012-01-01

    This meta-analysis synthesizes the findings of 60 independent samples from 51 studies examining racial/ethnic discrimination against Latina/os in the United States. The purpose was to identify individual-level resources and outcomes that most strongly relate to discrimination. Discrimination against Latina/os significantly results in outcomes…

  19. A Re-Examination of the Education Production Function Using Individual Participant Data

    ERIC Educational Resources Information Center

    Pigott, Therese D.; Williams, Ryan T.; Polanin, Joshua R.

    2011-01-01

    The focus and purpose of this research is to examine the benefits, limitations, and implications of Individual Participant Data (IPD) meta-analysis in education. Comprehensive research reviews in education have been limited to the use of aggregated data (AD) meta- analysis, techniques based on quantitatively combining information from studies on…

  20. CHARACTERIZATION OF LARGE PARTICLES AT A RURAL SITE IN THE EASTERN UNITED STATES: MASS DISTRIBUTION AND INDIVIDUAL PARTICLE ANALYSIS

    EPA Science Inventory

    A unique combination of an effective sampler and analysis of individual particles has been used in studying large particles (> 5 micrometers) at a rural site in Eastern United States. The sampler is a modified 'high volume' rotary inertial impactor, which consists of four collect...

  1. A Meta-Analysis of Smoking Cessation Interventions With Individuals in Substance Abuse Treatment or Recovery

    ERIC Educational Resources Information Center

    Prochaska, Judith J.; Delucchi, Kevin; Hall, Sharon M.

    2004-01-01

    This meta-analysis examined outcomes of smoking cessation interventions evaluated in 19 randomized controlled trials with individuals in current addictions treatment or recovery. Smoking and substance use outcomes at posttreatment and long-term follow-up (≥ 6 months) were summarized with random effects models. Intervention effects for smoking…

  2. A comprehensive literature review of haplotyping software and methods for use with unrelated individuals.

    PubMed

    Salem, Rany M; Wessel, Jennifer; Schork, Nicholas J

    2005-03-01

    Interest in the assignment and frequency analysis of haplotypes in samples of unrelated individuals has increased immeasurably as a result of the emphasis placed on haplotype analyses by, for example, the International HapMap Project and related initiatives. Although there are many available computer programs for haplotype analysis applicable to samples of unrelated individuals, many of these programs have limitations and/or very specific uses. In this paper, the key features of available haplotype analysis software for use with unrelated individuals, as well as pooled DNA samples from unrelated individuals, are summarised. Programs for haplotype analysis were identified through keyword searches on PUBMED and various internet search engines, a review of citations from retrieved papers and personal communications, up to June 2004. Priority was given to functioning computer programs, rather than theoretical models and methods. The available software was considered in light of a number of factors: the algorithm(s) used, algorithm accuracy, assumptions, the accommodation of genotyping error, implementation of hypothesis testing, handling of missing data, software characteristics and web-based implementations. Review papers comparing specific methods and programs are also summarised. Forty-six haplotyping programs were identified and reviewed. The programs were divided into two groups: those designed for individual genotype data (a total of 43 programs) and those designed for use with pooled DNA samples (a total of three programs). The accuracy of the programs is assessed using various criteria, and the programs are categorised and discussed in light of: algorithm and method, accuracy, assumptions, genotyping error, hypothesis testing, missing data, software characteristics and web implementation.
Many available programs have limitations (eg some cannot accommodate missing data) and/or are designed with specific tasks in mind (eg estimating haplotype frequencies rather than assigning most likely haplotypes to individuals). It is concluded that the selection of an appropriate haplotyping program for analysis purposes should be guided by what is known about the accuracy of estimation, as well as by the limitations and assumptions built into a program.

  3. Context matters: the impact of unit leadership and empowerment on nurses' organizational commitment.

    PubMed

    Laschinger, Heather K Spence; Finegan, Joan; Wilk, Piotr

    2009-05-01

    The aim of this study was to test a multilevel model linking unit-level leader-member exchange quality and structural empowerment to nurses' psychological empowerment and organizational commitment at the individual level of analysis. Few studies have examined the contextual effects of unit leadership on individual nurse outcomes. Workplace empowerment has been related to retention outcomes such as organizational commitment in several studies, but few have studied the impact of specific unit characteristics within which nurses work on these outcomes. We surveyed 3,156 nurses in 217 hospital units to test the multilevel model. A multilevel path analysis revealed significant individual and contextual effects on nurses' organizational commitment. Both unit-level leader-member exchange quality and structural empowerment had significant direct effects on individual-level psychological empowerment and organizational commitment. Psychological empowerment mediated the relationship between core self-evaluations and organizational commitment at the individual level of analysis. The contextual effects of positive supervisor relationships and their influence on empowering working conditions at the unit level and, subsequently, nurses' organizational commitment highlight the importance of leadership for creating conditions that result in a committed nursing workforce.

  4. Design of DNA pooling to allow incorporation of covariates in rare variants analysis.

    PubMed

    Guan, Weihua; Li, Chun

    2014-01-01

    Rapid advances in next-generation sequencing technologies facilitate genetic association studies of an increasingly wide array of rare variants. To capture rare or less common variants, a large number of individuals is needed. However, the cost of a large-scale study using whole-genome or exome sequencing is still high. DNA pooling can serve as a cost-effective approach, but with the potential limitation that the identity of individual genomes is lost, so individual characteristics and environmental factors cannot be adjusted for in the association analysis, which may result in power loss and a biased estimate of the genetic effect. For case-control studies, we propose a design strategy for pool creation and an analysis strategy that allows covariate adjustment using a multiple imputation technique. Simulations show that our approach obtains a reasonable estimate of the genotypic effect with only a slight loss of power compared with the much more expensive approach of sequencing individual genomes. Our design and analysis strategies enable more powerful and cost-effective sequencing studies of complex diseases while allowing the incorporation of covariate adjustment.
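    The multiple-imputation route this record takes ends with a standard combining step: estimates from the imputed datasets are pooled with Rubin's rules. The paper's pooled-DNA imputation model itself is not reproduced here; the sketch below shows only the combining arithmetic, with hypothetical numbers:

```python
import math

def rubin_pool(estimates, variances):
    """Combine point estimates and variances from m imputed datasets
    using Rubin's rules: pooled estimate is the mean, total variance is
    within-imputation variance plus inflated between-imputation variance."""
    m = len(estimates)
    qbar = sum(estimates) / m                                  # pooled estimate
    w = sum(variances) / m                                     # within-imputation
    b = sum((q - qbar) ** 2 for q in estimates) / (m - 1)      # between-imputation
    t = w + (1 + 1 / m) * b                                    # total variance
    return qbar, math.sqrt(t)

# Hypothetical log odds ratios for a rare variant from 5 imputations,
# each with (hypothetical) sampling variance 0.04:
est, se = rubin_pool([0.42, 0.45, 0.40, 0.47, 0.41], [0.04] * 5)
print(round(est, 3))  # → 0.43
```

    The (1 + 1/m) inflation of the between-imputation variance is what keeps the pooled standard error honest about the uncertainty introduced by imputing the lost individual genotypes.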

  5. Analysis of capture-recapture models with individual covariates using data augmentation

    USGS Publications Warehouse

    Royle, J. Andrew

    2009-01-01

    I consider the analysis of capture-recapture models with individual covariates that influence detection probability. Bayesian analysis of the joint likelihood is carried out using a flexible data augmentation scheme that facilitates analysis by Markov chain Monte Carlo methods, and a simple and straightforward implementation in freely available software. This approach is applied to a study of meadow voles (Microtus pennsylvanicus) in which auxiliary data on a continuous covariate (body mass) are recorded, and it is thought that detection probability is related to body mass. In a second example, the model is applied to an aerial waterfowl survey in which a double-observer protocol is used. The fundamental unit of observation is the cluster of individual birds, and the size of the cluster (a discrete covariate) is used as a covariate on detection probability.

  6. Bayesian change-point analysis reveals developmental change in a classic theory of mind task.

    PubMed

    Baker, Sara T; Leslie, Alan M; Gallistel, C R; Hood, Bruce M

    2016-12-01

    Although learning and development reflect changes situated in an individual brain, most discussions of behavioral change are based on the evidence of group averages. Our reliance on group-averaged data creates a dilemma. On the one hand, we need to use traditional inferential statistics. On the other hand, group averages are highly ambiguous when we need to understand change in the individual; the average pattern of change may characterize all, some, or none of the individuals in the group. Here we present a new method for statistically characterizing developmental change in each individual child we study. Using false-belief tasks, fifty-two children in two cohorts were repeatedly tested for varying lengths of time between 3 and 5 years of age. Using a novel Bayesian change point analysis, we determined both the presence and, just as importantly, the absence of change in individual longitudinal cumulative records. Whenever the analysis supports a change conclusion, it identifies in that child's record the most likely point at which change occurred. Results show striking variability in patterns of change and stability across individual children. We then group the individuals by their various patterns of change or no change. The resulting patterns provide scarce support for sudden changes in competence and shed new light on the concepts of "passing" and "failing" in developmental studies. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
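    The change-point idea can be illustrated with a maximum-likelihood sketch on a binary session record. This is a simplification of the Bayesian analysis described above (which also weighs the no-change hypothesis against a prior), and the record below is invented:

```python
import math

def bernoulli_ll(successes, n):
    """Log-likelihood of n Bernoulli trials at their MLE success rate."""
    if successes in (0, n):
        return 0.0
    p = successes / n
    return successes * math.log(p) + (n - successes) * math.log(1 - p)

def best_change_point(outcomes):
    """Find the split that most improves a two-rate model over a single
    constant rate. Returns (None, 0.0) when no split improves the fit,
    a crude stand-in for the Bayesian 'no change' conclusion."""
    n, s = len(outcomes), sum(outcomes)
    null_ll = bernoulli_ll(s, n)
    best_k, best_gain = None, 0.0
    for k in range(1, n):
        gain = (bernoulli_ll(sum(outcomes[:k]), k)
                + bernoulli_ll(sum(outcomes[k:]), n - k) - null_ll)
        if gain > best_gain:
            best_k, best_gain = k, gain
    return best_k, best_gain

# An invented record: a child failing early sessions, then passing:
record = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1]
print(best_change_point(record)[0])  # → 5
```

    Applied to each child's cumulative record separately, this per-individual fit is what allows grouping children by pattern of change rather than averaging change away.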

  7. [Application of multivariate analysis to the serum mineral and trace element content on differentiation of healthy subjects].

    PubMed

    Rodríguez Rodríguez, E; Henríquez Sánchez, P; López Blanco, F; Díaz Romero, C; Serra Majem, L

    2004-01-01

    Serum concentrations of Na, K, Ca, Mg, Fe, Cu, Zn, Se, Mn and P were determined in apparently healthy individuals representative of the population of the Canary Islands. Multivariate analysis was applied to the data matrix in order to differentiate the individuals according to several criteria, such as gender, age, island and province of residence, smoking and drinking habits, and physical exercise. A total of 395 serum samples (187 men and 208 women; mean age 38.4 +/- 20.0 years) were analyzed. Individual data on age, gender, weight, height, alcohol consumption, smoking habits and physical exercise were recorded using standardized questionnaires. Minerals were determined by flame emission spectrometry (Na and K) and atomic absorption spectrometry with an air/acetylene flame (Ca, Mg, Fe, Cu, Zn), hydride generation (Se) and graphite furnace (Mn); P was determined by a colorimetric method. The sex and age of the individuals influenced the serum concentrations of some minerals (Cu and Fe, and P and Se, respectively). The island of residence influenced the mean concentrations of most of the minerals analysed. Smoking and drinking habits did not seem to influence the mean mineral contents in an important manner. Physical exercise had a significant influence on the serum concentrations of P, Cu and Mn. Drinking water influenced the serum concentrations of the electrolytes Ca and Mg, but did not affect the concentrations of the trace elements. Applying discriminant analysis, individuals under 18 years were reasonably well differentiated from the rest (89% of individuals correctly classified). A tendency for differentiation of individuals according to the island of residence was also observed. Overall, multivariate analysis yielded only a low differentiation of individuals according to sex, province or island of residence, and habits or lifestyle. However, the adults were reasonably well differentiated from the children and adolescents, and the inhabitants of Lanzarote and La Palma tended to separate from the rest of the individuals in their province.

  8. A New Method for Non-destructive Measurement of Biomass, Growth Rates, Vertical Biomass Distribution and Dry Matter Content Based on Digital Image Analysis

    PubMed Central

    Tackenberg, Oliver

    2007-01-01

    Background and Aims Biomass is an important trait in functional ecology and growth analysis. The typical methods for measuring biomass are destructive. Thus, they do not allow the development of individual plants to be followed and they require many individuals to be cultivated for repeated measurements. Non-destructive methods do not have these limitations. Here, a non-destructive method based on digital image analysis is presented, addressing not only above-ground fresh biomass (FBM) and oven-dried biomass (DBM), but also vertical biomass distribution as well as dry matter content (DMC) and growth rates. Methods Scaled digital images of the plant silhouettes were taken for 582 individuals of 27 grass species (Poaceae). Above-ground biomass and DMC were measured using destructive methods. Using the image analysis software Zeiss KS 300, the projected area and the proportion of greenish pixels were calculated, and generalized linear models (GLMs) were developed with destructively measured parameters as dependent variables and parameters derived from image analysis as independent variables. A bootstrap analysis was performed to assess the number of individuals required for re-calibration of the models. Key Results The results of the developed models showed no systematic errors compared with traditionally measured values and explained most of their variance (R2 ≥ 0·85 for all models). The presented models can be directly applied to herbaceous grasses without further calibration. Applying the models to other growth forms might require a re-calibration, which can be based on only 10–20 individuals for FBM or DMC and on 40–50 individuals for DBM. Conclusions The methods presented are time and cost effective compared with traditional methods, especially if development or growth rates are to be measured repeatedly. 
Hence, they offer an alternative way of determining biomass, especially as they are non-destructive and address not only FBM and DBM, but also vertical biomass distribution and DMC. PMID:17353204
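The calibration step described above, regressing destructively measured biomass on image-derived predictors, can be illustrated with a simple least-squares fit. This is a sketch on hypothetical numbers, not the paper's GLMs or data:

```python
def fit_simple_ols(x, y):
    """Closed-form simple linear regression: y ≈ a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    a = my - b * mx
    return a, b

# Hypothetical calibration data: projected silhouette area (cm^2) vs
# destructively measured fresh biomass (g) for five individuals.
area    = [10.0, 20.0, 30.0, 40.0, 50.0]
biomass = [ 2.1,  4.0,  6.2,  7.9, 10.1]

a, b = fit_simple_ols(area, biomass)
print(f"FBM ≈ {a:.3f} + {b:.3f} * area")

# The calibrated model then predicts biomass for a new plant
# non-destructively, from its image alone:
print(a + b * 35.0)
```

Once calibrated (or re-calibrated on the 10–50 individuals the bootstrap analysis suggests), the model replaces harvesting for every subsequent measurement, which is what makes repeated growth-rate measurements on the same individuals possible.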

  9. Mortality and the business cycle: Evidence from individual and aggregated data.

    PubMed

    van den Berg, Gerard J; Gerdtham, Ulf G; von Hinke, Stephanie; Lindeboom, Maarten; Lissdaniels, Johannes; Sundquist, Jan; Sundquist, Kristina

    2017-12-01

    There has been much interest recently in the relationship between economic conditions and mortality, with some studies showing that mortality is pro-cyclical, while others find the opposite. Some suggest that the aggregation level of analysis (e.g. individual vs. regional) matters. We use both individual and aggregated data on a sample of 20-64-year-old Swedish men from 1993 to 2007. Our results show that the association between the business cycle and mortality does not depend on the level of analysis: the sign and magnitude of the parameter estimates are similar at the individual level and the aggregate (county) level, both showing pro-cyclical mortality. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. PURA syndrome: clinical delineation and genotype-phenotype study in 32 individuals with review of published literature

    PubMed Central

    Reijnders, Margot R F; Janowski, Robert; Alvi, Mohsan; Self, Jay E; van Essen, Ton J; Vreeburg, Maaike; Rouhl, Rob P W; Stevens, Servi J C; Stegmann, Alexander P A; Schieving, Jolanda; Pfundt, Rolph; van Dijk, Katinke; Smeets, Eric; Stumpel, Connie T R M; Bok, Levinus A; Cobben, Jan Maarten; Engelen, Marc; Mansour, Sahar; Whiteford, Margo; Chandler, Kate E; Douzgou, Sofia; Cooper, Nicola S; Tan, Ene-Choo; Foo, Roger; Lai, Angeline H M; Rankin, Julia; Green, Andrew; Lönnqvist, Tuula; Isohanni, Pirjo; Williams, Shelley; Ruhoy, Ilene; Carvalho, Karen S; Dowling, James J; Lev, Dorit L; Sterbova, Katalin; Lassuthova, Petra; Neupauerová, Jana; Waugh, Jeff L; Keros, Sotirios; Clayton-Smith, Jill; Smithson, Sarah F; Brunner, Han G; van Hoeckel, Ceciel; Anderson, Mel; Clowes, Virginia E; Siu, Victoria Mok; DDD study, The; Selber, Paulo; Leventer, Richard J; Nellaker, Christoffer; Niessing, Dierk; Hunt, David; Baralle, Diana

    2018-01-01

    Background De novo mutations in PURA have recently been described to cause PURA syndrome, a neurodevelopmental disorder characterised by severe intellectual disability (ID), epilepsy, feeding difficulties and neonatal hypotonia. Objectives To delineate the clinical spectrum of PURA syndrome and study genotype-phenotype correlations. Methods Diagnostic or research-based exome or Sanger sequencing was performed in individuals with ID. We systematically collected clinical and mutation data on newly ascertained PURA syndrome individuals, evaluated data of previously reported individuals and performed a computational analysis of photographs. We classified mutations based on predicted effect using 3D in silico models of crystal structures of Drosophila-derived Pur-alpha homologues. Finally, we explored genotype-phenotype correlations by analysis of both recurrent mutations as well as mutation classes. Results We report mutations in PURA (purine-rich element binding protein A) in 32 individuals, the largest cohort described so far. Evaluation of clinical data, including 22 previously published cases, revealed that all have moderate to severe ID and neonatal-onset symptoms, including hypotonia (96%), respiratory problems (57%), feeding difficulties (77%), exaggerated startle response (44%), hypersomnolence (66%) and hypothermia (35%). Epilepsy (54%) and gastrointestinal (69%), ophthalmological (51%) and endocrine problems (42%) were observed frequently. Computational analysis of facial photographs showed subtle facial dysmorphism. No strong genotype-phenotype correlation was identified by subgrouping mutations into functional classes. Conclusion We delineate the clinical spectrum of PURA syndrome with the identification of 32 additional individuals. The identification of one individual through targeted Sanger sequencing points towards the clinical recognisability of the syndrome. 
Genotype-phenotype analysis showed no significant correlation between mutation classes and disease severity. PMID:29097605

  11. Serum Anti-Glycan Antibody Biomarkers for Inflammatory Bowel Disease Diagnosis and Progression: A Systematic Review and Meta-analysis

    PubMed Central

    Kaul, Amit; Hutfless, Susan; Liu, Ling; Bayless, Theodore M.; Marohn, Michael R.; Li, Xuhang

    2011-01-01

    BACKGROUND Anti-glycan antibody serologic markers may serve as a useful adjunct in the diagnosis/prognosis of inflammatory bowel disease (IBD), including Crohn’s disease (CD) and ulcerative colitis (UC). This meta-analysis/systematic review aimed to evaluate the diagnostic value of anti-glycan biomarkers, as well as their association with IBD susceptibility gene variants, disease complications, and need for surgery in IBD. METHODS The diagnostic odds ratio (DOR), 95% confidence interval (CI), and sensitivity/specificity were used to compare the diagnostic value of individual and combined anti-glycan markers and their association with disease course (complication and/or need for surgery). RESULTS Fourteen studies were included in the systematic review and nine in the meta-analysis. Individually, ASCA had the highest DOR for differentiating IBD from healthy controls (DOR 21.1; CI 1.8-247.3; 2 studies), and CD from UC (DOR 10.2; CI 7.7-13.7; 7 studies). For combinations of ≥2 markers, the DOR was 2.8 (CI 2.2-3.6; 2 studies) for CD-related surgery, higher than for any individual marker, while the DOR for differentiating CD from UC was 10.2 (CI 5.6-18.5; 3 studies) and for complications was 2.8 (CI 2.2-3.7; 2 studies), similar to individual markers. CONCLUSIONS ASCA had the highest diagnostic value among individual anti-glycan markers. While ACCA had the highest association with complications, ASCA and ACCA were equally associated with need for surgery. Although in most individual studies a combination of ≥2 markers had better diagnostic value and a higher association with complications and need for surgery, in our meta-analysis the combination performed only slightly better than any individual marker. PMID:22294465
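The diagnostic odds ratio used throughout the abstract above is a simple function of a 2x2 table. A minimal sketch, with a Wald-type 95% CI on the log scale and hypothetical counts (not data from the review):

```python
import math

def diagnostic_odds_ratio(tp, fp, fn, tn):
    """Diagnostic odds ratio (TP*TN)/(FP*FN) with an approximate Wald
    95% CI on the log scale. A 0.5 continuity correction is applied
    when any cell is zero."""
    if 0 in (tp, fp, fn, tn):
        tp, fp, fn, tn = tp + 0.5, fp + 0.5, fn + 0.5, tn + 0.5
    dor = (tp * tn) / (fp * fn)
    se = math.sqrt(1 / tp + 1 / fp + 1 / fn + 1 / tn)
    lo = math.exp(math.log(dor) - 1.96 * se)
    hi = math.exp(math.log(dor) + 1.96 * se)
    return dor, (lo, hi)

# Hypothetical 2x2 table for one serologic marker (CD vs UC):
# 60 CD marker-positive, 40 CD marker-negative,
# 20 UC marker-positive, 80 UC marker-negative.
dor, ci = diagnostic_odds_ratio(tp=60, fp=20, fn=40, tn=80)
print(dor, ci)
```

A meta-analysis then pools the study-level log-DORs (e.g. inverse-variance weighted), which is why the abstract reports a single DOR and CI per marker across studies.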

  12. CERAMIC: Case-Control Association Testing in Samples with Related Individuals, Based on Retrospective Mixed Model Analysis with Adjustment for Covariates

    PubMed Central

    Zhong, Sheng; McPeek, Mary Sara

    2016-01-01

    We consider the problem of genetic association testing of a binary trait in a sample that contains related individuals, where we adjust for relevant covariates and allow for missing data. We propose CERAMIC, an estimating equation approach that can be viewed as a hybrid of logistic regression and linear mixed-effects model (LMM) approaches. CERAMIC extends the recently proposed CARAT method to allow samples with related individuals and to incorporate partially missing data. In simulations, we show that CERAMIC outperforms existing LMM and generalized LMM approaches, maintaining high power and correct type 1 error across a wider range of scenarios. CERAMIC results in a particularly large power increase over existing methods when the sample includes related individuals with some missing data (e.g., when some individuals with phenotype and covariate information have missing genotype), because CERAMIC is able to make use of the relationship information to incorporate partially missing data in the analysis while correcting for dependence. Because CERAMIC is based on a retrospective analysis, it is robust to misspecification of the phenotype model, resulting in better control of type 1 error and higher power than that of prospective methods, such as GMMAT, when the phenotype model is misspecified. CERAMIC is computationally efficient for genomewide analysis in samples of related individuals of almost any configuration, including small families, unrelated individuals and even large, complex pedigrees. We apply CERAMIC to data on type 2 diabetes (T2D) from the Framingham Heart Study. In a genome scan, 9 of the 10 smallest CERAMIC p-values occur in or near either known T2D susceptibility loci or plausible candidates, verifying that CERAMIC is able to home in on the important loci in a genome scan. PMID:27695091

  13. Understanding Individual-Level Change through the Basis Functions of a Latent Curve Model

    ERIC Educational Resources Information Center

    Blozis, Shelley A.; Harring, Jeffrey R.

    2017-01-01

    Latent curve models have become a popular approach to the analysis of longitudinal data. At the individual level, the model expresses an individual's response as a linear combination of what are called "basis functions" that are common to all members of a population and weights that may vary among individuals. This article uses…

  14. Aging and Variability of Individual Differences: A Longitudinal Analysis of Social, Psychological, and Physiological Indicators.

    ERIC Educational Resources Information Center

    Maddox, George L.; Douglass, Elizabeth B.

    This paper explores the relationship between age and individual differences. Two hypotheses were tested through the use of repeated measures of functioning in terms of social, psychological, and physiological parameters: (1) individual differences do not decrease with age, and (2) individuals tend to maintain the same rank in relation to age peers…

  15. Meta-Analysis of Human Factors Engineering Studies Comparing Individual Differences, Practice Effects and Equipment Design Variations.

    DTIC Science & Technology

    1985-02-21

    Approved for public release; distribution unlimited. Title: Meta-Analysis of Human Factors Engineering Studies Comparing Individual Differences, Practice Effects and Equipment Design Variations. Contents include: Background; Opportunity; Significance; History; Phase I Final Report (Literature Review, Formal Analysis, Results, Implications for Phase II).

  16. metaCCA: summary statistics-based multivariate meta-analysis of genome-wide association studies using canonical correlation analysis.

    PubMed

    Cichonska, Anna; Rousu, Juho; Marttinen, Pekka; Kangas, Antti J; Soininen, Pasi; Lehtimäki, Terho; Raitakari, Olli T; Järvelin, Marjo-Riitta; Salomaa, Veikko; Ala-Korpela, Mika; Ripatti, Samuli; Pirinen, Matti

    2016-07-01

    A dominant approach to genetic association studies is to perform univariate tests between genotype-phenotype pairs. However, analyzing related traits together increases statistical power, and certain complex associations become detectable only when several variants are tested jointly. Currently, modest sample sizes of individual cohorts, and restricted availability of individual-level genotype-phenotype data across the cohorts limit conducting multivariate tests. We introduce metaCCA, a computational framework for summary statistics-based analysis of a single or multiple studies that allows multivariate representation of both genotype and phenotype. It extends the statistical technique of canonical correlation analysis to the setting where original individual-level records are not available, and employs a covariance shrinkage algorithm to achieve robustness. Multivariate meta-analysis of two Finnish studies of nuclear magnetic resonance metabolomics by metaCCA, using standard univariate output from the program SNPTEST, shows an excellent agreement with the pooled individual-level analysis of original data. Motivated by strong multivariate signals in the lipid genes tested, we envision that multivariate association testing using metaCCA has great potential to provide novel insights from already published summary statistics from high-throughput phenotyping technologies. Code is available at https://github.com/aalto-ics-kepaco. Contacts: anna.cichonska@helsinki.fi or matti.pirinen@helsinki.fi. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.

  17. metaCCA: summary statistics-based multivariate meta-analysis of genome-wide association studies using canonical correlation analysis

    PubMed Central

    Cichonska, Anna; Rousu, Juho; Marttinen, Pekka; Kangas, Antti J.; Soininen, Pasi; Lehtimäki, Terho; Raitakari, Olli T.; Järvelin, Marjo-Riitta; Salomaa, Veikko; Ala-Korpela, Mika; Ripatti, Samuli; Pirinen, Matti

    2016-01-01

    Motivation: A dominant approach to genetic association studies is to perform univariate tests between genotype-phenotype pairs. However, analyzing related traits together increases statistical power, and certain complex associations become detectable only when several variants are tested jointly. Currently, modest sample sizes of individual cohorts, and restricted availability of individual-level genotype-phenotype data across the cohorts limit conducting multivariate tests. Results: We introduce metaCCA, a computational framework for summary statistics-based analysis of a single or multiple studies that allows multivariate representation of both genotype and phenotype. It extends the statistical technique of canonical correlation analysis to the setting where original individual-level records are not available, and employs a covariance shrinkage algorithm to achieve robustness. Multivariate meta-analysis of two Finnish studies of nuclear magnetic resonance metabolomics by metaCCA, using standard univariate output from the program SNPTEST, shows an excellent agreement with the pooled individual-level analysis of original data. Motivated by strong multivariate signals in the lipid genes tested, we envision that multivariate association testing using metaCCA has a great potential to provide novel insights from already published summary statistics from high-throughput phenotyping technologies. Availability and implementation: Code is available at https://github.com/aalto-ics-kepaco Contacts: anna.cichonska@helsinki.fi or matti.pirinen@helsinki.fi Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27153689
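In the simplest case of one genotype and one phenotype, the canonical correlation that metaCCA generalizes reduces to the ordinary Pearson correlation underlying the univariate tests described above. A minimal sketch of that special case, on hypothetical genotype dosages (0/1/2 copies of an allele) and phenotype values:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient. With a single genotype and a
    single phenotype, the canonical correlation reduces to this; metaCCA
    extends the idea to multivariate genotype and phenotype blocks
    using only summary statistics."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

# Hypothetical individual-level data for six subjects:
dosage    = [0, 0, 1, 1, 2, 2]
phenotype = [1.0, 1.2, 2.1, 1.9, 3.0, 3.2]
print(pearson_r(dosage, phenotype))
```

The point of metaCCA is precisely that this individual-level computation is often impossible across cohorts; it recovers the multivariate analogue from published univariate summary statistics instead.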

  18. The role of molecular genetic analysis in the diagnosis of primary ciliary dyskinesia.

    PubMed

    Kim, Raymond H; A Hall, David; Cutz, Ernest; Knowles, Michael R; Nelligan, Kathleen A; Nykamp, Keith; Zariwala, Maimoona A; Dell, Sharon D

    2014-03-01

    Primary ciliary dyskinesia (PCD) is an autosomal recessive genetic disorder of motile cilia. The diagnosis of PCD has previously relied on ciliary analysis with transmission electron microscopy or video microscopy. However, patients with PCD may have normal ultrastructural appearance, and ciliary analysis has limited accessibility. Alternatively, PCD can be diagnosed by demonstrating biallelic mutations in known PCD genes. Genetic testing is emerging as a diagnostic tool to complement ciliary analysis where interpretation and access may delay diagnosis. To determine the diagnostic yield of genetic testing of patients with a confirmed or suspected diagnosis of PCD in a multiethnic urban center. Twenty-eight individuals with confirmed PCD on transmission electron microscopy of ciliary ultrastructure and 24 individuals with a probable diagnosis of PCD based on a classical PCD phenotype and low nasal nitric oxide had molecular analysis of 12 genes associated with PCD. Of 49 subjects who underwent ciliary biopsy, 28 (57%) were diagnosed with PCD through an ultrastructural defect. Of the 52 individuals who underwent molecular genetic analysis, 22 (42%) individuals had two mutations in known PCD genes. Twenty-four previously unreported mutations in known PCD genes were observed. Combining both diagnostic modalities of biopsy and molecular genetics, the diagnostic yield increased to 69% compared with 57% based on biopsy alone. The diagnosis of PCD is challenging and has traditionally relied on ciliary biopsy, which is unreliable as the sole criterion for a definitive diagnosis. Molecular genetic analysis can be used as a complementary test to increase the diagnostic yield.

  19. Individual and Work-Related Factors Influencing Burnout of Mental Health Professionals: A Meta-Analysis

    ERIC Educational Resources Information Center

    Lim, Nayoung; Kim, Eun Kyoung; Kim, Hyunjung; Yang, Eunjoo; Lee, Sang Min

    2010-01-01

    The current study identifies and assesses individual and work-related factors as correlates of burnout among mental health professionals. Results of a meta-analysis indicate that age and work setting variables are the most significant indicators of emotional exhaustion and depersonalization. In terms of level of personal accomplishment, the age…

  20. An Investigative Assessment of the Need for Individual Learning Support for a Y9 Pupil with Learning Difficulties and ADHD.

    ERIC Educational Resources Information Center

    Pester, James

    2002-01-01

    A year nine pupil with attention deficit hyperactivity disorder in a residential school was not accessing the curriculum due to distractibility and aggressive behavior. Analysis of data gathered through individual interviews, daily staff reports, and structured behavioral observation indicates that the student functions more successfully…

  1. Sensitivity Analysis of Mixed Models for Incomplete Longitudinal Data

    ERIC Educational Resources Information Center

    Xu, Shu; Blozis, Shelley A.

    2011-01-01

    Mixed models are used for the analysis of data measured over time to study population-level change and individual differences in change characteristics. Linear and nonlinear functions may be used to describe a longitudinal response, individuals need not be observed at the same time points, and missing data, assumed to be missing at random (MAR),…

  2. THE CONTEXTUAL ANALYSIS OF SYMBOLISM IN LANGUAGE.

    ERIC Educational Resources Information Center

    LAFFAL, JULIUS

    A TECHNIQUE OF ANALYSIS OF SYMBOLISM IS PRESENTED, BASED ON THE IDEA THAT WORDS WHICH APPEAR IN CLOSE ASSOCIATION IN THE SPEECH OR WRITING OF AN INDIVIDUAL ARE PSYCHOLOGICALLY CLOSELY RELATED. THE LATENT MEANING, OR SYMBOLISM, OF A WORD IS ELUCIDATED BY SHOWING HOW CLOSE IT IS, CONCEPTUALLY, TO OTHER SELECTED WORDS OR THEMES IN THE INDIVIDUAL'S…

  3. Measurement Structure of the Trait Hope Scale in Persons with Spinal Cord Injury: A Confirmatory Factor Analysis

    ERIC Educational Resources Information Center

    Smedema, Susan Miller; Pfaller, Joseph; Moser, Erin; Tu, Wei-Mo; Chan, Fong

    2013-01-01

    Objective: To evaluate the measurement structure of the Trait Hope Scale (THS) among individuals with spinal cord injury. Design: Confirmatory factor analysis and reliability and validity analyses were performed. Participants: 242 individuals with spinal cord injury. Results: Results support the two-factor measurement model for the THS with agency…

  4. Time-to-Event Analysis of Individual Variables Associated with Nursing Students' Academic Failure: A Longitudinal Study

    ERIC Educational Resources Information Center

    Dante, Angelo; Fabris, Stefano; Palese, Alvisa

    2013-01-01

    Empirical studies and conceptual frameworks presented in the extant literature offer a static picture of academic failure. Time-to-event analysis, which captures the dynamism of individual factors by identifying when they determine failure so that timely strategies can be properly tailored, requires longitudinal studies, which are still lacking within the field. The…

  5. Attitudes toward the Unification of Western Europe and Cross-National Suicide Rates: Eight European Countries, 1973-1990.

    ERIC Educational Resources Information Center

    Fernquist, Robert M.

    2001-01-01

    Political integration theory (Durkheim) argues that when political crises occur, individuals band together to solve the problem at hand, which yields lower suicide rates. This analysis examines a different component of political integration-attitudes. Cross-sectional time series analysis reveals that attitudes individuals hold toward such an event…

  6. Functional Technology for Individuals with Intellectual Disabilities: Meta-Analysis of Mobile Device-Based Interventions

    ERIC Educational Resources Information Center

    Kim, Jemma; Kimm, Christina H.

    2017-01-01

    This study employs a meta-analysis of single-subject design research to investigate the efficacy of mobile device-based interventions for individuals with intellectual disabilities (ID) and to further examine possible variables that may moderate the intervention outcomes. A total of 23 studies, 78 participants, and 140 observed cases that met the…

  7. Combining individual participant and aggregated data in a meta-analysis with correlational studies.

    PubMed

    Pigott, Terri; Williams, Ryan; Polanin, Joshua

    2012-12-01

    This paper presents methods for combining individual participant data (IPD) with aggregated study level data (AD) in a meta-analysis of correlational studies. Although medical researchers have employed IPD in a wide range of studies, only a single example exists in the social sciences. New policies at the National Science Foundation requiring grantees to submit data archiving plans may increase social scientists' access to individual level data that could be combined with traditional meta-analysis. The methods presented here extend prior work on IPD to meta-analyses using correlational studies. The examples presented illustrate the synthesis of publicly available national datasets in education with aggregated study data from a meta-analysis examining the correlation of socioeconomic status measures and academic achievement. The major benefit of the inclusion of the individual level is that both within-study and between-study interactions among moderators of effect size can be estimated. Given the potential growth in data archives in the social sciences, we should see a corresponding increase in the ability to synthesize IPD and AD in a single meta-analysis, leading to a more complete understanding of how within-study and between-study moderators relate to effect size. Copyright © 2012 John Wiley & Sons, Ltd.

  8. Individual and population pharmacokinetic compartment analysis: a graphic procedure for quantification of predictive performance.

    PubMed

    Eksborg, Staffan

    2013-01-01

    Pharmacokinetic studies are important for optimizing drug dosing, but require proper validation of the pharmacokinetic procedures used. However, simple and reliable statistical methods suitable for evaluating the predictive performance of pharmacokinetic analysis are essentially lacking. The aim of the present study was to construct and evaluate a graphic procedure for quantification of the predictive performance of individual and population pharmacokinetic compartment analysis. Original data from previously published pharmacokinetic compartment analyses after intravenous, oral, and epidural administration, and digitized data, obtained from published scatter plots of observed vs predicted drug concentrations from population pharmacokinetic studies using the NPEM algorithm, the NONMEM computer program, and Bayesian forecasting procedures, were used for estimating the predictive performance according to the proposed graphical method and by the method of Sheiner and Beal. The graphical plot proposed in the present paper proved to be a useful tool for evaluation of the predictive performance of both individual and population compartment pharmacokinetic analysis. The proposed method is simple to use and gives valuable information concerning time- and concentration-dependent inaccuracies that might occur in individual and population pharmacokinetic compartment analysis. Predictive performance can be quantified by the fraction of concentration ratios within arbitrarily specified ranges, e.g. within the range 0.8-1.2.
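The summary statistic named at the end of the abstract, the fraction of predicted/observed concentration ratios inside a specified range, is straightforward to compute. A sketch on hypothetical concentrations (not the study's data):

```python
def prediction_accuracy_fraction(observed, predicted, lo=0.8, hi=1.2):
    """Fraction of predicted/observed concentration ratios falling
    inside [lo, hi], one way to quantify the predictive performance
    of a pharmacokinetic model."""
    ratios = [p / o for o, p in zip(observed, predicted)]
    inside = sum(1 for r in ratios if lo <= r <= hi)
    return inside / len(ratios)

# Hypothetical observed vs model-predicted drug concentrations (mg/L):
obs  = [1.0, 2.0, 4.0, 8.0, 10.0]
pred = [1.1, 1.9, 4.4, 6.0, 10.5]
print(prediction_accuracy_fraction(obs, pred))  # 4 of 5 ratios in range
```

Plotting the individual ratios against time or concentration, rather than only reporting the overall fraction, is what reveals the time- and concentration-dependent inaccuracies the abstract mentions.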

  9. The color(s) of human hair--forensic hair analysis with SpectraCube.

    PubMed

    Birngruber, Christoph; Ramsthaler, Frank; Verhoff, Marcel A

    2009-03-10

    Human hair is among the most common kinds of evidence secured at crime scenes. Although DNA analysis through STR-typing is possible in principle, it is not very promising for telogen hairs or single hairs. For the mixed traces frequently found in practice, composed of different hairs from an unknown number of individuals, mtDNA sequencing of each individual hair seems to be the only possible, if technically elaborate, solution. If it were possible to pool all hairs belonging to an individual prior to DNA analysis, then this effort could not only be reduced, but the number of hairs available for an STR approach could also be increased. Although it is possible to examine hair microscopically, this method must be considered unsuitable for pooling, since the results depend strongly on examiner experience, and the hair cannot always be correctly attributed to an individual. The goal of this study was to develop an objective non-DNA-contaminative pooling method for hair. To this end, the efficacy of spectral imaging as a method of obtaining information--beyond that obtained from a purely microscopic and morphological approach--for the identification of individuals was investigated. Three hairs each from 25 test persons (female: 18; male: 7) were examined with a SpectraCube system and a light microscope. Six spectra were calculated for each hair, and the hairs from each individual were compared not only to each other, but also to those of the other individuals. From a forensic vantage point, the examination showed, in particular, that individuals whose hair could not be distinguished on the basis of morphology could also not be accurately distinguished with the SpectraCube. The intra-individual differences were, in part, greater than the inter-individual differences. Altogether, the study shows that a person's hair color, as perceived, is composed of many naturally different, individual colors.

  10. The oral microbiome in human immunodeficiency virus (HIV)-positive individuals.

    PubMed

    Kistler, James O; Arirachakaran, Pratanporn; Poovorawan, Yong; Dahlén, Gunnar; Wade, William G

    2015-09-01

    Human immunodeficiency virus (HIV) infection is associated with a range of oral conditions, and increased numbers of disease-associated microbial species have previously been found in HIV-positive subjects. The aim of this study was to use next-generation sequencing to compare the composition of the oral microbiome in HIV-positive and -negative individuals. Plaque and saliva were collected from 37 HIV-positive individuals and 37 HIV-negative individuals, and their bacterial composition determined by pyrosequencing of partial 16S rRNA genes. A total of 855,222 sequences were analysed. The number of species-level operational taxonomic units (OTUs) detected was significantly lower in the saliva of HIV-positive individuals (mean = 303.3) than in that of HIV-negative individuals (mean = 365.5) (P < 0.0003). Principal coordinates analysis (PCoA) based on community membership (Jaccard index) and structure (Yue and Clayton measure of dissimilarity) showed significant separation of plaque and saliva samples [analysis of molecular variance (AMOVA), P < 0.001]. PCoA plots did not show any clear separation based on HIV status. However, AMOVA indicated that there was a significant difference in the community membership of saliva between HIV-positive and -negative groups (P = 0.001). Linear discriminant analysis effect size revealed an OTU identified as Haemophilus parainfluenzae to be significantly associated with HIV-positive individuals, whilst Streptococcus mitis/HOT473 was most significantly associated with HIV-negative individuals. In conclusion, this study has confirmed that the microbial composition of saliva and plaque is different. The oral microbiomes of HIV-positive and -negative individuals were found to be similar overall, although there were minor but significant differences in the composition of the salivary microbiota of the two groups.

  11. Spectroscopic analysis of solar and cosmic X-ray spectra. 1: The nature of cosmic X-ray spectra and proposed analytical techniques

    NASA Technical Reports Server (NTRS)

    Walker, A. B. C., Jr.

    1975-01-01

    Techniques for the study of the solar corona are reviewed as an introduction to a discussion of modifications required for the study of cosmic sources. Spectroscopic analysis of individual sources and the interstellar medium is considered. The latter was studied via analysis of its effect on the spectra of selected individual sources. The effects of various characteristics of the ISM, including the presence of grains, molecules, and ionization, are first discussed, and the development of ISM models is described. The expected spectral structure of individual cosmic sources is then reviewed with emphasis on supernovae remnants and binary X-ray sources. The observational and analytical requirements imposed by the characteristics of these sources are identified, and prospects for the analysis of abundances and the study of physical parameters within them are assessed. Prospects for the spectroscopic study of other classes of X-ray sources are also discussed.

  12. Average grip strength: a meta-analysis of data obtained with a Jamar dynamometer from individuals 75 years or more of age.

    PubMed

    Bohannon, Richard W; Bear-Lehman, Jane; Desrosiers, Johanne; Massy-Westropp, Nicola; Mathiowetz, Virgil

    2007-01-01

    Although strength diminishes with age, average values for grip strength have not been available heretofore for discrete strata after 75 years. The purpose of this meta-analysis was to provide average values for the left and right hands of men and women 75-79, 80-84, 85-89, and 90-99 years. Contributing to the analysis were 7 studies and 739 subjects with whom the Jamar dynamometer and standard procedures were employed. Based on the analysis, average values for the left and right hands of men and women in each age stratum were derived. The derived values can serve as a standard of comparison for individual patients. An individual whose grip strength is below the lower limit of the confidence intervals of each stratum can be confidently considered to have less than average grip strength.
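
The pooling behind such average values can be illustrated with a fixed-effect, inverse-variance weighted mean; the per-study summaries below are invented for illustration, not the paper's data:

```python
import math

# hypothetical per-study summaries for one stratum: (mean kg, SD, n)
studies = [(29.8, 8.1, 120), (31.2, 7.5, 95), (28.4, 9.0, 150)]

# inverse-variance (fixed-effect) pooling: weight = n / SD^2 = 1 / var(mean)
weights = [n / sd ** 2 for _, sd, n in studies]
pooled = sum(w * m for w, (m, _, _) in zip(weights, studies)) / sum(weights)
se = math.sqrt(1 / sum(weights))
ci = (pooled - 1.96 * se, pooled + 1.96 * se)
print(round(pooled, 2), [round(x, 2) for x in ci])
```

An individual patient's grip strength would then be compared against the lower confidence limit of the matching age/sex stratum.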

  13. Limitations in Using Multiple Imputation to Harmonize Individual Participant Data for Meta-Analysis.

    PubMed

    Siddique, Juned; de Chavez, Peter J; Howe, George; Cruden, Gracelyn; Brown, C Hendricks

    2018-02-01

    Individual participant data (IPD) meta-analysis is a meta-analysis in which the individual-level data for each study are obtained and used for synthesis. A common challenge in IPD meta-analysis is when variables of interest are measured differently in different studies. The term harmonization has been coined to describe the procedure of placing variables on the same scale in order to permit pooling of data from a large number of studies. Using data from an IPD meta-analysis of 19 adolescent depression trials, we describe a multiple imputation approach for harmonizing 10 depression measures across the 19 trials by treating those depression measures that were not used in a study as missing data. We then apply diagnostics to address the fit of our imputation model. Even after reducing the scale of our application, we were still unable to produce accurate imputations of the missing values. We describe those features of the data that made it difficult to harmonize the depression measures and provide some guidelines for using multiple imputation for harmonization in IPD meta-analysis.
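
The harmonization idea, treating a scale a trial never administered as missing and imputing it repeatedly from a scale it did administer, can be sketched as follows; the two-trial setup, the linear X-to-Y relationship, and all numbers are hypothetical:

```python
import random

random.seed(0)

# hypothetical: trial A measured scale X only; trial B measured both X and Y.
# Use B's X -> Y relationship to multiply impute Y for A's participants.
trial_b = [(x, 2.0 * x + 5 + random.gauss(0, 1))
           for x in [random.uniform(0, 10) for _ in range(200)]]
trial_a_x = [random.uniform(0, 10) for _ in range(50)]

# ordinary least squares for Y = a + b*X, fitted on trial B
n = len(trial_b)
mx = sum(x for x, _ in trial_b) / n
my = sum(y for _, y in trial_b) / n
b = (sum((x - mx) * (y - my) for x, y in trial_b)
     / sum((x - mx) ** 2 for x, _ in trial_b))
a = my - b * mx
resid_sd = (sum((y - (a + b * x)) ** 2 for x, y in trial_b) / (n - 2)) ** 0.5

# m imputations: prediction plus random residual noise, then pool the means
m = 20
imputed_means = []
for _ in range(m):
    ys = [a + b * x + random.gauss(0, resid_sd) for x in trial_a_x]
    imputed_means.append(sum(ys) / len(ys))
pooled_mean = sum(imputed_means) / m
print(round(pooled_mean, 1))
```

The paper's point is that diagnostics on exactly this kind of imputation model can reveal that the imputations are inaccurate, so the fit must be checked rather than assumed.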

  14. Facebook Facts: Breast Reconstruction Patient-Reported Outcomes Using Social Media.

    PubMed

    Tang, Sherry Y Q; Israel, Jacqueline S; Poore, Samuel O; Afifi, Ahmed M

    2018-05-01

    Social media are used for information sharing among patients with similar health conditions, and analysis of social media activity could inform clinical decision-making. The aim of this study was to use Facebook to evaluate a cohort of individuals' perceptions of and satisfaction with breast reconstruction. In this observational study, the authors collected and analyzed posts pertaining to autologous and implant-based breast reconstruction from active Facebook groups. Patient satisfaction data were categorized, and a thematic analysis of posts was conducted. Qualitative posts were grouped based on common themes and quantitatively compared using frequency and chi-square analysis. The authors evaluated 500 posts from two Facebook groups. Two hundred sixty-four posts referenced deep inferior epigastric perforator (DIEP) flap reconstruction and 117 were related to implant-based reconstruction. Among individuals referencing DIEP flap reconstruction, 52 percent were satisfied, compared with 20 percent of individuals who referenced satisfaction with implant-based reconstruction (p < 0.0001). Individuals posting about DIEP flaps reported a higher rate of unexpected side effects (p < 0.001) and numbness (p = 0.004). When referencing implant-based reconstruction, individuals reported significantly higher rates of infection, contracture, and implant failure (p < 0.001). Based on the authors' review of social media activity, individuals undergoing DIEP flap breast reconstruction expressed relatively high individual satisfaction despite difficult postoperative recovery. Individuals who referenced implant-based reconstruction mentioned infection and implant failure, leading to high rates of dissatisfaction. Social media appear to provide informational and emotional support to patients. Plastic surgeons can use social media to gather unbiased information of patients' experience to inform clinical conversation and guide clinical practice.
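
The frequency comparison reported here is a standard 2 × 2 chi-square test; the sketch below reconstructs approximate counts from the percentages quoted in the abstract (52% of 264 versus 20% of 117), so the exact statistic is illustrative only:

```python
# 2x2 chi-square on satisfaction counts approximated from the abstract
diep_sat, diep_n = round(0.52 * 264), 264   # 137 satisfied DIEP posts
impl_sat, impl_n = round(0.20 * 117), 117   # 23 satisfied implant posts

table = [[diep_sat, diep_n - diep_sat],
         [impl_sat, impl_n - impl_sat]]
row = [sum(r) for r in table]
col = [sum(c) for c in zip(*table)]
total = sum(row)

# Pearson chi-square: sum of (observed - expected)^2 / expected
chi2 = sum((table[i][j] - row[i] * col[j] / total) ** 2
           / (row[i] * col[j] / total)
           for i in range(2) for j in range(2))
print(round(chi2, 1))
```

With 1 degree of freedom, any value above 10.8 corresponds to p < 0.001, consistent with the reported p < 0.0001.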

  15. The interaction between individualism and wellbeing in predicting mortality: Survey of Health Ageing and Retirement in Europe.

    PubMed

    Okely, Judith A; Weiss, Alexander; Gale, Catharine R

    2018-02-01

    The link between greater wellbeing and longevity is well documented. The aim of the current study was to test whether this association is consistent across individualistic and collectivistic cultures. The sample consisted of 13,596 participants from 11 European countries, each of which was assigned an individualism score according to Hofstede et al.'s (Cultures and organizations: software of the mind, McGraw Hill, New York, 2010) cultural dimension of individualism. We tested whether individualism moderated the cross-sectional association between wellbeing and self-rated health or the longitudinal association between wellbeing and mortality risk. Our analysis revealed a significant interaction between individualism and wellbeing such that the association between wellbeing and self-rated health or risk of mortality from cardiovascular disease was stronger in more individualistic countries. However, the interaction between wellbeing and individualism was not significant in analysis predicting all-cause mortality. Further prospective studies are needed to confirm our finding and to explore the factors responsible for this culturally dependent effect.

  16. Quantifying Individual Brain Connectivity with Functional Principal Component Analysis for Networks.

    PubMed

    Petersen, Alexander; Zhao, Jianyang; Carmichael, Owen; Müller, Hans-Georg

    2016-09-01

    In typical functional connectivity studies, connections between voxels or regions in the brain are represented as edges in a network. Networks for different subjects are constructed at a given graph density and are summarized by some network measure such as path length. Examining these summary measures for many density values yields samples of connectivity curves, one for each individual. This has led to the adoption of basic tools of functional data analysis, most commonly to compare control and disease groups through the average curves in each group. Such group differences, however, neglect the variability in the sample of connectivity curves. In this article, the use of functional principal component analysis (FPCA) is demonstrated to enrich functional connectivity studies by providing increased power and flexibility for statistical inference. Specifically, individual connectivity curves are related to individual characteristics such as age and measures of cognitive function, thus providing a tool to relate brain connectivity with these variables at the individual level. This individual-level analysis opens a new perspective that goes beyond previous group-level comparisons. Using a large data set of resting-state functional magnetic resonance imaging scans, relationships between connectivity and two measures of cognitive function (episodic memory and executive function) were investigated. The group-based approach was implemented by dichotomizing the continuous cognitive variable and testing for group differences, resulting in no statistically significant findings. To demonstrate the new approach, FPCA was implemented, followed by linear regression models with cognitive scores as responses, identifying significant associations of connectivity in the right middle temporal region with both cognitive scores.
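
When every curve is observed on the same dense density grid, FPCA reduces to ordinary PCA of the discretised curves; the sketch below uses synthetic connectivity curves with one invented mode of variation, not the study's fMRI data:

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical sample: each row is one subject's connectivity curve
# (e.g. a path-length-like measure evaluated on 20 graph densities)
grid = np.linspace(0.05, 0.5, 20)
subjects = 100
scores_true = rng.normal(size=(subjects, 1))       # per-subject loading
curves = (1.0 / grid                               # shared mean trend
          + scores_true * np.sin(np.pi * grid)     # one mode of variation
          + rng.normal(0, 0.05, (subjects, 20)))   # observation noise

# FPCA on a dense common grid = PCA of the centred, discretised curves
centred = curves - curves.mean(axis=0)
_, s, vt = np.linalg.svd(centred, full_matrices=False)
fpc_scores = centred @ vt[0]           # each subject's score on the first FPC
explained = s[0] ** 2 / (s ** 2).sum()
print(round(explained, 2))
```

The per-subject `fpc_scores` can then serve as predictors in a regression with cognitive scores as responses, which is the individual-level analysis the paper advocates.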

  17. Individual differences in political ideology are effects of adaptive error management.

    PubMed

    Petersen, Michael Bang; Aarøe, Lene

    2014-06-01

    We apply error management theory to the analysis of individual differences in the negativity bias and political ideology. Using principles from evolutionary psychology, we propose a coherent theoretical framework for understanding (1) why individuals differ in their political ideology and (2) the conditions under which these individual differences influence and fail to influence the political choices people make.

  18. Analysis of the individual risk of altitude decompression sickness under repeated exposures

    NASA Technical Reports Server (NTRS)

    Kumar, K. Vasantha; Horrigan, David J.; Waligora, James M.; Gilbert, John H.

    1991-01-01

    In a case-control study, researchers examined the risk of decompression sickness (DCS) in individual subjects with a higher number of exposures. Of the 126 subjects, 42 showed one or more episodes of DCS. Examination of the exposure-DCS relationship by odds ratio showed a linear relationship. Stratification analysis showed that sex, tissue ratio, and the presence of Doppler microbubbles were confounders of this risk. A higher number of exposures increased the risk of DCS in this analysis.
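
An odds-ratio comparison of this kind is computed from a 2 × 2 table; the stratum counts below are assumed for illustration (they only match the abstract's totals of 42 cases among 126 subjects) and are not taken from the study:

```python
import math

# hypothetical 2x2 table: DCS cases vs non-cases by exposure stratum
high_dcs, high_no = 30, 40   # assumed higher-exposure stratum
low_dcs, low_no = 12, 44     # assumed lower-exposure stratum

# odds ratio and its 95% CI on the log-odds scale
or_ = (high_dcs / high_no) / (low_dcs / low_no)
se_log = math.sqrt(1 / high_dcs + 1 / high_no + 1 / low_dcs + 1 / low_no)
lo = math.exp(math.log(or_) - 1.96 * se_log)
hi = math.exp(math.log(or_) + 1.96 * se_log)
print(round(or_, 2), (round(lo, 2), round(hi, 2)))
```

A lower confidence limit above 1 indicates that the elevated risk in the higher-exposure stratum is statistically significant.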

  19. Candida parapsilosis biofilm identification by Raman spectroscopy.

    PubMed

    Samek, Ota; Mlynariková, Katarina; Bernatová, Silvie; Ježek, Jan; Krzyžánek, Vladislav; Šiler, Martin; Zemánek, Pavel; Růžička, Filip; Holá, Veronika; Mahelová, Martina

    2014-12-22

    Colonies of Candida parapsilosis on culture plates were probed directly in situ using Raman spectroscopy for rapid identification of specific strains separated by given time intervals (up to months apart). To classify the Raman spectra, data analysis was performed using principal component analysis (PCA). The analysis of the data sets generated during the scans of individual colonies reveals that, despite the inhomogeneity of the biological samples, unambiguous associations with individual strains (two biofilm-positive and two biofilm-negative) could be made.
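
The PCA classification step can be sketched as follows; the two synthetic "strains" are Gaussian peaks at invented Raman shifts, standing in for real spectra:

```python
import numpy as np

rng = np.random.default_rng(1)

# hypothetical spectra: two strains with slightly shifted Raman peaks
wavenumbers = np.linspace(600, 1800, 300)

def spectrum(center):
    return np.exp(-((wavenumbers - center) / 30) ** 2)

strain_a = spectrum(1002) + rng.normal(0, 0.02, (10, 300))
strain_b = spectrum(1030) + rng.normal(0, 0.02, (10, 300))
X = np.vstack([strain_a, strain_b])   # 20 spectra x 300 wavenumbers

# PCA via SVD of the mean-centred spectra
Xc = X - X.mean(axis=0)
_, _, vt = np.linalg.svd(Xc, full_matrices=False)
pc1 = Xc @ vt[0]                      # score on the first principal component

# the first PC score separates the two strains into opposite-sign groups
print(pc1[:10].mean() * pc1[10:].mean() < 0)  # True
```

Real colony spectra are far noisier, which is why the paper emphasises that the association still held despite sample inhomogeneity.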

  20. Candida parapsilosis Biofilm Identification by Raman Spectroscopy

    PubMed Central

    Samek, Ota; Mlynariková, Katarina; Bernatová, Silvie; Ježek, Jan; Krzyžánek, Vladislav; Šiler, Martin; Zemánek, Pavel; Růžička, Filip; Holá, Veronika; Mahelová, Martina

    2014-01-01

    Colonies of Candida parapsilosis on culture plates were probed directly in situ using Raman spectroscopy for rapid identification of specific strains separated by given time intervals (up to months apart). To classify the Raman spectra, data analysis was performed using principal component analysis (PCA). The analysis of the data sets generated during the scans of individual colonies reveals that, despite the inhomogeneity of the biological samples, unambiguous associations with individual strains (two biofilm-positive and two biofilm-negative) could be made. PMID:25535081

  1. Tobacco, Marijuana, and Alcohol Use in University Students: A Cluster Analysis

    PubMed Central

    Primack, Brian A.; Kim, Kevin H.; Shensa, Ariel; Sidani, Jaime E.; Barnett, Tracey E.; Switzer, Galen E.

    2012-01-01

    Objective Segmentation of populations may facilitate development of targeted substance abuse prevention programs. We aimed to partition a national sample of university students according to profiles based on substance use. Participants We used 2008–2009 data from the National College Health Assessment from the American College Health Association. Our sample consisted of 111,245 individuals from 158 institutions. Method We partitioned the sample using cluster analysis according to current substance use behaviors. We examined the association of cluster membership with individual and institutional characteristics. Results Cluster analysis yielded six distinct clusters. Three individual factors—gender, year in school, and fraternity/sorority membership—were the most strongly associated with cluster membership. Conclusions In a large sample of university students, we were able to identify six distinct patterns of substance abuse. It may be valuable to target specific populations of college-aged substance users based on individual factors. However, comprehensive intervention will require a multifaceted approach. PMID:22686360
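
A cluster partition of this kind can be sketched with plain k-means; the three seeded behaviour patterns below are hypothetical stand-ins for the survey's substance-use variables:

```python
import numpy as np

rng = np.random.default_rng(2)

# hypothetical profiles: columns = days/month of tobacco, marijuana,
# alcohol use; three seeded behaviour patterns replace the survey data
light = rng.normal([0.5, 0.2, 1.0], 0.3, (50, 3))
drinkers = rng.normal([1.0, 0.5, 12.0], 1.0, (50, 3))
poly = rng.normal([15.0, 10.0, 14.0], 2.0, (50, 3))
X = np.clip(np.vstack([light, drinkers, poly]), 0, None)

def kmeans(X, k, iters=50, seed=0):
    # plain Lloyd's algorithm; an empty cluster keeps its previous centre
    r = np.random.default_rng(seed)
    centers = X[r.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels, centers

labels, centers = kmeans(X, 3)
print(np.bincount(labels, minlength=3))
```

Cluster membership can then be cross-tabulated against individual and institutional characteristics, as the study does with gender, year in school, and fraternity/sorority membership.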

  2. Image Segmentation for Connectomics Using Machine Learning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tasdizen, Tolga; Seyedhosseini, Mojtaba; Liu, TIng

    Reconstruction of neural circuits at the microscopic scale of individual neurons and synapses, also known as connectomics, is an important challenge for neuroscience. While an important motivation of connectomics is providing anatomical ground truth for neural circuit models, the ability to decipher neural wiring maps at the individual cell level is also important in studies of many neurodegenerative diseases. Reconstruction of a neural circuit at the individual neuron level requires the use of electron microscopy images due to their extremely high resolution. Computational challenges include pixel-by-pixel annotation of these images into classes such as cell membrane, mitochondria and synaptic vesicles, and the segmentation of individual neurons. State-of-the-art image analysis solutions are still far from the accuracy and robustness of human vision, and biologists are still limited to studying small neural circuits using mostly manual analysis. In this chapter, we describe our image analysis pipeline that makes use of novel supervised machine learning techniques to tackle this problem.

  3. Reminiscence in dementia: a concept analysis.

    PubMed

    Dempsey, Laura; Murphy, Kathy; Cooney, Adeline; Casey, Dympna; O'Shea, Eamon; Devane, Declan; Jordan, Fionnuala; Hunter, Andrew

    2014-03-01

    This paper is a report of an analysis of the concept of reminiscence in dementia and highlights its uses as a therapeutic intervention for individuals with dementia. No single definition of reminiscence exists in the healthcare literature; however, the definitions offered have similar components. The term life review is commonly used when discussing reminiscence; however, the two terms differ in their goals, theory base and content. This concept analysis identified reminiscence as a process which occurs in stages, involving the recalling of early life events and interaction between individuals. The antecedents of reminiscence are age, life transitions, attention span, ability to recall, ability to vocalise and stressful situations. Reminiscence can lead to positive mental health, enhanced self-esteem and improved communication skills. It also facilitates preparation for death, increases interaction between people, prepares for the future and evaluates a past life. Reminiscence therapy is used extensively in dementia care, and evidence shows that when used effectively it helps individuals retain a sense of self-worth, identity and individuality.

  4. Masticatory process in individuals with maxillary and mandibular osteoporosis: electromyographic analysis.

    PubMed

    Siéssere, S; de Albuquerque Lima, N; Semprini, M; de Sousa, L G; Paulo Mardegan Issa, J; Aparecida Caldeira Monteiro, S; Cecílio Hallak Regalo, S

    2009-11-01

    The masseter and temporal muscles of patients with maxillary and mandibular osteoporosis were submitted to electromyographic analysis and compared with a control group. In conclusion, individuals with osteoporosis did not show significantly lower masticatory cycle performance and efficiency compared to the control group during the proposed mastications. This study aimed to examine electromyographically the masseter and temporal muscles of patients with maxillary and mandibular osteoporosis and compare these patients with control patients. Sixty individuals of both genders with an average age of 53.0 +/- 5 years took part in the study, distributed in two groups of 30 individuals each: (1) individuals with osteoporosis; (2) control patients, during habitual and non-habitual mastication. The electromyographic apparatus used was a Myosystem-BR1 (DataHomins Technology Ltda.) with five acquisition channels and active differential electrodes. Statistical analysis of the results was performed using SPSS version 15.0 (Chicago, IL, USA). The result of Student's t test indicated no significant differences (p > 0.05) between the normalized values of the ensemble average obtained in masticatory cycles in both groups. Based on the results of this study, it was concluded that individuals with osteoporosis did not show significantly lower masticatory cycle performance and efficiency compared to control subjects during habitual and non-habitual mastication. This result is important because it demonstrates that the complex physiological process of mastication remains functional in individuals with osteoporosis, even when the bones that compose the face are affected.

  5. Non-destructive evaluation of laboratory scale hydraulic fracturing using acoustic emission

    NASA Astrophysics Data System (ADS)

    Hampton, Jesse Clay

    The primary objective of this research is to develop techniques to characterize hydraulic fractures and fracturing processes using acoustic emission (AE) monitoring based on laboratory scale hydraulic fracturing experiments. Individual microcrack AE source characterization is performed to understand the failure mechanisms associated with small failures along pre-existing discontinuities and grain boundaries. Individual microcrack analysis methods include moment tensor inversion techniques to elucidate the mode of failure, crack slip and crack normal direction vectors, and relative volumetric deformation of an individual microcrack. Differentiation between individual microcrack analysis and AE cloud based techniques is studied in efforts to refine discrete fracture network (DFN) creation and regional damage quantification of densely fractured media. Regional damage estimations from combinations of individual microcrack analyses and AE cloud density plotting are used to investigate the usefulness of weighting cloud based AE analysis techniques with microcrack source data. Two granite types were used in several sample configurations, including multi-block systems. Laboratory hydraulic fracturing was performed with sample sizes ranging from 15 × 15 × 25 cm to 30 × 30 × 25 cm in both unconfined and true-triaxially confined stress states using different types of materials. Hydraulic fracture testing in rock block systems containing a large natural fracture was investigated in terms of AE response throughout fracture interactions. Investigations at differing scales showed the usefulness of individual microcrack characterization as well as DFN and cloud based techniques. Cloud based techniques weighted with individual microcrack characterization correlated well with post-test damage evaluations.

  6. Genetic Testing for Hereditary Breast Cancer: The Decision to Decline.

    PubMed

    White, V Brook; Walsh, Kendall K; Foss, Kimberly Showers; Amacker-North, Lisa; Lenarcic, Stacy; McNeely, Lindsay; White, Richard L

    2018-01-01

    Genetic testing is important for comprehensive cancer care. Commercial analysis of the BRCA1/2 genes has been available since 1996, and testing for hereditary breast and ovarian cancer syndrome is well established. The National Comprehensive Cancer Network (NCCN) guidelines identify individuals for whom BRCA1/2 analysis is appropriate and define management recommendations for mutation carriers. Despite recommendations, not all who meet NCCN criteria undergo genetic testing. We assess the frequency with which individuals meeting NCCN criteria decline BRCA1/2 analysis, as well as factors that affect the decision-making process. A retrospective chart review was performed from September 2013 through August 2014 of individuals who received genetic counseling at the Levine Cancer Institute. A total of 1082 individuals identified through the retrospective chart review met NCCN criteria for BRCA1/2 analysis. Of these, 267 (24.7%) did not pursue genetic testing. Of the nontested cohort, 59 (22.1%) were disinterested in testing and 108 (40.4%) were advised to gather additional genetic or medical information about their relatives before testing. The remaining 100 (37.5%) individuals were insured and desired to undergo genetic testing but were prohibited by the expense. Eighty-five of these 100 patients were responsible for the total cost of the test, whereas the remaining 15 faced a prohibitive copay expense. Financial concerns are a major deterrent to the pursuit of BRCA1/2 analysis among those who meet NCCN criteria, especially in patients diagnosed with breast or ovarian cancer. These findings highlight the need to address financial concerns for genetic testing in this high-risk population.

  7. What is more important for national well-being: money or autonomy? A meta-analysis of well-being, burnout, and anxiety across 63 societies.

    PubMed

    Fischer, Ronald; Boer, Diana

    2011-07-01

    What is more important: to provide citizens with more money or with more autonomy for their subjective well-being? In the current meta-analysis, the authors examined national levels of well-being on the basis of lack of psychological health, anxiety, and stress measures. Data are available for 63 countries, with a total sample of 420,599 individuals. Using a 3-level variance-known model, the authors found that individualism was a consistently better predictor than wealth, after controlling for measurement, sample, and temporal variations. Despite some emerging nonlinear trends and interactions between wealth and individualism, the overall pattern strongly suggests that greater individualism is consistently associated with more well-being. Wealth may influence well-being only via its effect on individualism. Implications of the findings for well-being research and applications are outlined. PsycINFO Database Record (c) 2011 APA, all rights reserved.

  8. [Forensic age determination in living individuals at the Institute of Legal Medicine in Berlin (Charité): analysis of the expert reports from 2001 to 2007].

    PubMed

    Schmidt, Sven; Knüfermann, Raidun; Tsokos, Michael; Schmeling, Andreas

    2009-01-01

    The analysis included the age reports provided by the Institute of Legal Medicine in Berlin (Charité) in the period from 2001 to 2007. A total of 416 age estimations were carried out, 289 in criminal and 127 in civil proceedings. 357 of the examined individuals were male, 59 were female. The vast majority of the individuals came from Vietnam. In 112 cases, there were no deviations between the indicated age and the estimated minimum age, while the actual age of the individuals was partly clearly above the estimated age. In 300 cases, there were discrepancies of up to 11 years between the indicated age and the estimated age. The study demonstrates that forensic age estimation in living individuals can make an important contribution to legal certainty.

  9. Anger profiles in social anxiety disorder.

    PubMed

    Versella, Mark V; Piccirillo, Marilyn L; Potter, Carrie M; Olino, Thomas M; Heimberg, Richard G

    2016-01-01

    Individuals with social anxiety disorder (SAD) exhibit elevated levels of anger and anger suppression, which are both associated with increased depression, diminished quality of life, and poorer treatment outcomes. However, little is known about how anger experiences differ among individuals with SAD and whether any heterogeneity might relate to negative outcomes. This investigation sought to empirically define anger profiles among 136 treatment-seeking individuals with SAD and to assess their association with distress and impairment. A latent class analysis was conducted utilizing the trait subscales of the State-Trait Anger Expression Inventory-2 as indicators of class membership. Analysis revealed four distinct anger profiles, with greatest distress and impairment generally demonstrated by individuals with elevated trait anger, a greater tendency to suppress the expression of anger, and diminished ability to adaptively control their anger expression. These results have implications for tailoring more effective interventions for socially anxious individuals. Copyright © 2015 Elsevier Ltd. All rights reserved.

  10. All individuals are not created equal; accounting for interindividual variation in fitting life-history responses to toxicants.

    PubMed

    Jager, Tjalling

    2013-02-05

    The individuals of a species are not equal. These differences frustrate experimental biologists and ecotoxicologists who wish to study the response of a species (in general) to a treatment. In the analysis of data, differences between model predictions and observations on individual animals are usually treated as random measurement error around the true response. These deviations, however, are mainly caused by real differences between the individuals (e.g., differences in physiology and in initial conditions). Understanding these intraspecies differences, and accounting for them in the data analysis, will improve our understanding of the response to the treatment we are investigating and allow for a more powerful, less biased, statistical analysis. Here, I explore a basic scheme for statistical inference to estimate parameters governing stress that allows individuals to differ in their basic physiology. This scheme is illustrated using a simple toxicokinetic-toxicodynamic model and a data set for growth of the springtail Folsomia candida exposed to cadmium in food. This article should be seen as proof of concept; a first step in bringing more realism into the statistical inference for process-based models in ecotoxicology.
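
A toxicokinetic model of the kind referred to here can be as simple as a one-compartment uptake-elimination equation; the sketch below lets two individuals differ only in an elimination rate constant, and all parameter values are hypothetical:

```python
import math

# one-compartment toxicokinetic sketch: internal concentration of a
# toxicant approaching steady state; k_e is individual-specific
def internal_conc(t, c_water, k_e, bcf=1.0):
    # solution of dC/dt = k_e * (BCF * c_water - C), with C(0) = 0
    return bcf * c_water * (1 - math.exp(-k_e * t))

# two individuals differing only in physiology (elimination rate)
fast, slow = 0.5, 0.1
print(round(internal_conc(10, 1.0, fast), 3),
      round(internal_conc(10, 1.0, slow), 3))  # 0.993 0.632
```

Letting such physiological parameters vary between individuals, rather than lumping all deviations into measurement error, is the core of the statistical scheme the paper proposes.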

  11. Working together versus working autonomously: a new power-dependence perspective on the individual-level of analysis.

    PubMed

    de Jong, Simon B

    2014-01-01

    Recent studies have indicated that it is important to investigate the interaction between task interdependence and task autonomy because this interaction can affect team effectiveness. However, only a limited number of studies have been conducted, and those studies focused solely on the team level of analysis. Moreover, there has also been a dearth of theoretical development. Therefore, this study develops and tests an alternative theoretical perspective in an attempt to understand if, and if so why, this interaction is important at the individual level of analysis. Based on interdependence theory and power-dependence theory, we expected that highly task-interdependent individuals who reported high task autonomy would be more powerful and better performers. In contrast, we expected that similarly high task-interdependent individuals who reported less task autonomy would be less powerful and would be weaker performers. These expectations were supported by multi-level and bootstrapping analyses performed on a multi-source dataset (self-, peer-, and manager-ratings) comprised of 182 employees drawn from 37 teams. More specifically, the interaction between task interdependence and task autonomy was γ = .128, p < .05 for power and γ = .166, p < .05 for individual performance. The 95% bootstrap interval ranged from .0038 to .0686.
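
A percentile bootstrap interval like the one reported (.0038 to .0686) can be sketched as follows; the data-generating model and the single-predictor slope are simplifications for illustration, not the paper's multi-level, multi-source model:

```python
import random

random.seed(3)

# synthetic data: performance depends on the interdependence x autonomy
# interaction (true coefficient 0.1, chosen arbitrarily)
n = 182
data = []
for _ in range(n):
    ti = random.gauss(0, 1)                   # task interdependence
    ta = random.gauss(0, 1)                   # task autonomy
    y = 0.1 * ti * ta + random.gauss(0, 1)    # performance
    data.append((ti, ta, y))

def interaction_coef(sample):
    # OLS slope of y on the product term ti*ta alone (simplified model)
    xs = [ti * ta for ti, ta, _ in sample]
    ys = [y for _, _, y in sample]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# resample individuals with replacement, refit, take percentile limits
boots = sorted(interaction_coef(random.choices(data, k=n))
               for _ in range(1000))
ci = (boots[24], boots[974])
print(round(ci[0], 3), round(ci[1], 3))
```

An interval that excludes zero supports the significance of the interaction, which is how such bootstrap results are typically read.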

  12. Modeling of Individual and Organizational Factors Affecting Traumatic Occupational Injuries Based on the Structural Equation Modeling: A Case Study in Large Construction Industries.

    PubMed

    Mohammadfam, Iraj; Soltanzadeh, Ahmad; Moghimbeigi, Abbas; Akbarzadeh, Mehdi

    2016-09-01

    Individual and organizational factors influence traumatic occupational injuries. The aim of the present study was a path analysis of the severity of occupational injuries based on individual and organizational factors. The present cross-sectional analytical study examined traumatic occupational injuries over a ten-year timeframe in 13 large Iranian construction industries. Modeling and data analysis were done using the structural equation modeling (SEM) approach and the IBM SPSS AMOS statistical software version 22.0, respectively. The mean age and working experience of the injured workers were 28.03 ± 5.33 and 4.53 ± 3.82 years, respectively. Construction and installation activities accounted for 64.4% and 18.1% of traumatic occupational injuries, respectively. The SEM findings showed that individual, organizational and accident-type factors had significant effects on the severity of occupational injuries (P < 0.05). Path analysis of occupational injuries based on the SEM reveals that individual and organizational factors and their indicator variables strongly influence the severity of traumatic occupational injuries, so they should be considered in order to reduce the severity of occupational accidents in large construction industries.

  13. Analysis of Heart Rate and Self-Injury with and without Restraint in an Individual with Autism

    ERIC Educational Resources Information Center

    Jennett, Heather; Hagopian, Louis P.; Beaulieu, Lauren

    2011-01-01

    The relation between self-injury and heart rate was analyzed for an individual who appeared anxious while engaging in self-injury. The analysis involved manipulating the presence or absence of restraint while simultaneously measuring heart rate. The following findings were obtained and replicated: (a) when some form of restraint was applied, heart…

  14. Planning Skills in Autism Spectrum Disorder across the Lifespan: A Meta-Analysis and Meta-Regression

    ERIC Educational Resources Information Center

    Olde Dubbelink, Linda M. E.; Geurts, Hilde M.

    2017-01-01

    Individuals with an autism spectrum disorder (ASD) are thought to encounter planning difficulties, but experimental research regarding the mastery of planning in ASD is inconsistent. By means of a meta-analysis of 50 planning studies with a combined sample size of 1755 individuals with and 1642 without ASD, we aim to determine whether planning…

  15. Behavioral Intervention Plans: Pedagogical and Legal Analysis of Issues

    ERIC Educational Resources Information Center

    Etscheidt, Susan

    2006-01-01

    Both law and pedagogy require that educators address behavior interfering with educational progress for students with disabilities. The reauthorized Individuals with Disabilities Education Improvement Act (IDEIA, 2004) and its predecessor, the Individuals with Disabilities Education Act (IDEA, 1997a), require individualized education program (IEP)…

  16. Students' meaning making in classroom discussions: the importance of peer interaction

    NASA Astrophysics Data System (ADS)

    Rudsberg, Karin; Östman, Leif; Aaro Östman, Elisabeth

    2017-09-01

    The aim is to investigate how encounters with peers affect an individual's meaning making in argumentation about socio-scientific issues, and how the individual's meaning making influences the argumentation at the collective level. The analysis is conducted using the analytical method "transactional argumentation analysis" (TAA) which enables in situ studies. TAA combines a transactional perspective on meaning making based on John Dewey's pragmatic philosophy with an argument analysis based on Toulmin's argument pattern. Here TAA is developed further to enable analysis that in detail clarifies the dynamic interplay between the individual and the collective—the intra- and the inter-personal dimensions—and the result of this interplay in terms of meaning making and learning. The empirical material in this study consists of a video-recorded lesson in a Swedish upper secondary school. The results show that the analysed student is influenced by peers when construing arguments, and thereby acts on others' reasoning when making meaning. Further, the results show that most of the additions made by the analysed student are taken further by peers in the subsequent discussion. This study shows how an individual's earlier experiences, knowledge and thinking contribute to the collective meaning making in the classroom.

  17. Using sperm morphometry and multivariate analysis to differentiate species of gray Mazama

    PubMed Central

    Duarte, José Maurício Barbanti

    2016-01-01

    There is genetic evidence that the two species of Brazilian gray Mazama, Mazama gouazoubira and Mazama nemorivaga, belong to different genera. This study identified significant differences that separated them into distinct groups, based on characteristics of the spermatozoa and ejaculate of both species. The characteristics that most clearly differentiated the species were ejaculate colour, white for M. gouazoubira and reddish for M. nemorivaga, and sperm head dimensions. Multivariate analysis of sperm head dimension and shape data accurately discriminated three groups for the species, with a total misclassification percentage of 0.71. The individual analysis, by animal, combined with the multivariate analysis also correctly discriminated all five animals (total misclassification percentage of 13.95%), and the canonical plot showed three different clusters: Cluster 1, including individuals of M. nemorivaga; Cluster 2, including two individuals of M. gouazoubira; and Cluster 3, including a single individual of M. gouazoubira. The results obtained in this work corroborate the hypothesis of the formation of new genera and species for gray Mazama. Moreover, the easily applied method described herein can be used as an auxiliary tool to identify sibling species in other taxonomic groups. PMID:28018612

  18. Design and rationale of a prospective, collaborative meta-analysis of all randomized controlled trials of angiotensin receptor antagonists in Marfan syndrome, based on individual patient data: A report from the Marfan Treatment Trialists' Collaboration

    PubMed Central

    Pitcher, Alex; Emberson, Jonathan; Lacro, Ronald V.; Sleeper, Lynn A.; Stylianou, Mario; Mahony, Lynn; Pearson, Gail D.; Groenink, Maarten; Mulder, Barbara J.; Zwinderman, Aeilko H.; De Backer, Julie; De Paepe, Anne M.; Arbustini, Eloisa; Erdem, Guliz; Jin, Xu Yu; Flather, Marcus D.; Mullen, Michael J.; Child, Anne H.; Forteza, Alberto; Evangelista, Arturo; Chiu, Hsin-Hui; Wu, Mei-Hwan; Sandor, George; Bhatt, Ami B.; Creager, Mark A.; Devereux, Richard B.; Loeys, Bart; Forfar, J. Colin; Neubauer, Stefan; Watkins, Hugh; Boileau, Catherine; Jondeau, Guillaume; Dietz, Harry C.; Baigent, Colin

    2015-01-01

    Rationale A number of randomized trials are underway, which will address the effects of angiotensin receptor blockers (ARBs) on aortic root enlargement and a range of other end points in patients with Marfan syndrome. If individual participant data from these trials were to be combined, a meta-analysis of the resulting data, totaling approximately 2,300 patients, would allow estimation across a number of trials of the treatment effects both of ARB therapy and of β-blockade. Such an analysis would also allow estimation of treatment effects in particular subgroups of patients on a range of end points of interest and would allow a more powerful estimate of the effects of these treatments on a composite end point of several clinical outcomes than would be available from any individual trial. Design A prospective, collaborative meta-analysis based on individual patient data from all randomized trials in Marfan syndrome of (i) ARBs versus placebo (or open-label control) and (ii) ARBs versus β-blockers will be performed. A prospective study design, in which the principal hypotheses, trial eligibility criteria, analyses, and methods are specified in advance of the unblinding of the component trials, will help to limit bias owing to data-dependent emphasis on the results of particular trials. The use of individual patient data will allow for analysis of the effects of ARBs in particular patient subgroups and for time-to-event analysis for clinical outcomes. The meta-analysis protocol summarized in this report was written on behalf of the Marfan Treatment Trialists' Collaboration and finalized in late 2012, without foreknowledge of the results of any component trial, and will be made available online (http://www.ctsu.ox.ac.uk/research/meta-trials). PMID:25965707

  19. Occam's shadow: levels of analysis in evolutionary ecology - where to next?

    USGS Publications Warehouse

    Cooch, E.G.; Cam, E.; Link, W.A.

    2002-01-01

    Evolutionary ecology is the study of evolutionary processes, and the ecological conditions that influence them. A fundamental paradigm underlying the study of evolution is natural selection. Although there are a variety of operational definitions for natural selection in the literature, perhaps the most general one is that which characterizes selection as the process whereby heritable variation in fitness associated with variation in one or more phenotypic traits leads to intergenerational change in the frequency distribution of those traits. The past 20 years have witnessed a marked increase in the precision and reliability of our ability to estimate one or more components of fitness and characterize natural selection in wild populations, owing particularly to significant advances in methods for analysis of data from marked individuals. In this paper, we focus on several issues that we believe are important considerations for the application and development of these methods in the context of addressing questions in evolutionary ecology. First, our traditional approach to estimation often rests upon analysis of aggregates of individuals, which in the wild may reflect increasingly non-random (selected) samples with respect to the trait(s) of interest. In some cases, analysis at the aggregate level, rather than the individual level, may obscure important patterns. While there are a growing number of analytical tools available to estimate parameters at the individual level, and which can cope (to varying degrees) with progressive selection of the sample, the advent of new methods does not reduce the need to consider carefully the appropriate level of analysis in the first place. Estimation should be motivated a priori by strong theoretical analysis. 
Doing so provides clear guidance, in terms of both (i) assisting in the identification of realistic and meaningful models to include in the candidate model set, and (ii) providing the appropriate context under which the results are interpreted. Second, while it is true that selection (as defined) operates at the level of the individual, the selection gradient is often (if not generally) conditional on the abundance of the population. As such, it may be important to consider estimating transition rates conditional on both the parameter values of the other individuals in the population (or at least their distribution), and population abundance. This will undoubtedly pose a considerable challenge, for both single- and multi-strata applications. It will also require renewed consideration of the estimation of abundance, especially for open populations. Third, selection typically operates on dynamic, individually varying traits. Such estimation may require characterizing fitness in terms of individual plasticity in one or more state variables, constituting analysis of the norms of reaction of individuals to variable environments. This can be quite complex, especially for traits that are under facultative control. Recent work has indicated that the pattern of selection on such traits is conditional on the relative rates of movement among, and frequency of, spatially heterogeneous habitats, suggesting that analyses of the evolution of life histories in open populations can be misleading in some cases.

  20. Individual analysis of inter and intragrain defects in electrically characterized polycrystalline silicon nanowire TFTs by multicomponent dark-field imaging based on nanobeam electron diffraction two-dimensional mapping

    NASA Astrophysics Data System (ADS)

    Asano, Takanori; Takaishi, Riichiro; Oda, Minoru; Sakuma, Kiwamu; Saitoh, Masumi; Tanaka, Hiroki

    2018-04-01

    We visualize the grain structures of individual nanosized thin-film transistors (TFTs), which are also electrically characterized, using an improved data-processing technique for dark-field image reconstruction from nanobeam electron diffraction maps. Our individual-crystal analysis establishes a one-to-one correspondence between TFTs with different grain boundary structures, such as random and coherent boundaries, and the characteristic degradations of ON-current and threshold voltage. Furthermore, the local crystalline uniformity inside a single grain is detected as a difference in the diffraction intensity distribution.

  1. [Social actors and phenomenologic modelling].

    PubMed

    Laflamme, Simon

    2012-05-01

    The phenomenological approach has a quasi-monopoly on analyses of the individual and of subjectivity in the social sciences. However, the conceptual apparatus associated with this approach is very restrictive: the human being must be understood as rational, conscious, intentional, interested, and autonomous. As a result, a large dimension of human activity cannot be taken into consideration, namely everything that does not fit these analytical categories (the nonrational, the nonconscious, etc.). Moreover, this approach cannot really move toward a relational analysis except between individuals predefined by its conceptual apparatus. This lack of complexity makes it difficult to establish links between phenomenology and systemic analysis, in which relation (and its derivatives, such as recursiveness, dialectic and correlation) plays an essential role. This article proposes a way for systemic analysis to apprehend the individual in all his complexity.

  2. Statistical analysis and handling of missing data in cluster randomized trials: a systematic review.

    PubMed

    Fiero, Mallorie H; Huang, Shuang; Oren, Eyal; Bell, Melanie L

    2016-02-09

    Cluster randomized trials (CRTs) randomize participants in groups rather than as individuals, and are key tools for assessing interventions in health research where treatment contamination is likely or individual randomization is not feasible. Two major potential pitfalls exist in CRTs: handling missing data and failing to account for clustering in the primary analysis. The aim of this review was to evaluate approaches to handling missing data and to the statistical analysis of the primary outcome in CRTs. We systematically searched for CRTs published between August 2013 and July 2014 using PubMed, Web of Science, and PsycINFO. For each trial, two independent reviewers assessed the extent of the missing data and the method(s) used for handling missing data in the primary and sensitivity analyses. We evaluated the primary analysis and determined whether it was at the cluster or individual level. Of the 86 included CRTs, 80 (93%) reported some missing outcome data. Among those, the median percentage of individuals with a missing outcome was 19% (range 0.5 to 90%). The most common way to handle missing data in the primary analysis was complete case analysis (44 trials, 55%), whereas 18 (22%) used mixed models, six (8%) used single imputation, four (5%) used unweighted generalized estimating equations, and two (2%) used multiple imputation. Fourteen (16%) trials reported a sensitivity analysis for missing data, but most assumed the same missing data mechanism as in the primary analysis. Overall, 67 (78%) trials accounted for clustering in the primary analysis. High rates of missing outcome data are present in the majority of CRTs, yet handling of missing data in practice remains suboptimal. Researchers and applied statisticians should apply missing data methods that are valid under plausible assumptions, in order to increase statistical power and reduce the possibility of bias. Sensitivity analyses with weakened assumptions about the missing data mechanism should be performed to explore the robustness of the results reported in the primary analysis.
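
    One simple way to account for clustering, mentioned as cluster-level analysis above, is to collapse each cluster to its mean and compare arms across cluster means rather than across individuals. The sketch below illustrates this with hypothetical data and a hand-rolled Welch t statistic; all cluster names and values are invented for illustration.

    ```python
    from statistics import mean, stdev

    # Hypothetical outcome data grouped by cluster (e.g. clinic), two arms.
    arm_a = {"c1": [4.1, 3.8, 4.4], "c2": [5.0, 4.9], "c3": [3.9, 4.2, 4.0]}
    arm_b = {"c4": [5.2, 5.5, 5.1], "c5": [4.8, 5.0], "c6": [5.6, 5.3, 5.4]}

    def cluster_means(arm):
        # Collapse each cluster to a single summary value.
        return [mean(values) for values in arm.values()]

    def welch_t(x, y):
        """Welch's t statistic for two independent samples."""
        vx, vy = stdev(x) ** 2, stdev(y) ** 2
        return (mean(x) - mean(y)) / ((vx / len(x) + vy / len(y)) ** 0.5)

    ma, mb = cluster_means(arm_a), cluster_means(arm_b)
    t = welch_t(ma, mb)
    print(f"arm A cluster means: {ma}")
    print(f"arm B cluster means: {mb}")
    print(f"Welch t on cluster means: {t:.2f}")
    ```

    The effective sample size here is the number of clusters, not the number of individuals, which is exactly why ignoring clustering overstates precision.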

  3. The impact of mother's literacy on child dental caries: Individual data or aggregate data analysis?

    PubMed

    Haghdoost, Ali-Akbar; Hessari, Hossein; Baneshi, Mohammad Reza; Rad, Maryam; Shahravan, Arash

    2017-01-01

    To evaluate the impact of mother's literacy on child dental caries based on a national oral health survey in Iran, and to investigate the possibility of ecological fallacy in aggregate data analysis. The data came from the second national oral health survey, carried out in 2004, which included 8725 six-year-old participants. The association of mother's literacy with caries occurrence (total DMF (Decayed, Missing, Filled) score > 0) in her child was assessed using individual data in a logistic regression model. Then the association between the percentage of literate mothers and the percentage of decayed teeth in each of the 30 provinces of Iran was assessed using aggregated data, retrieved from the second national oral health survey and, alternatively, from the census of the Statistical Center of Iran, in a linear regression model. The significance level was set at 0.05 for all analyses. Individual data analysis showed a statistically significant association between mother's literacy and children's decayed teeth (P = 0.02, odds ratio = 0.83). There was no statistically significant association between mother's literacy and child dental caries in the aggregate data analysis of the oral health survey (P = 0.79, B = 0.03) or of the census of the Statistical Center of Iran (P = 0.60, B = 0.14). Maternal literacy has a preventive effect against dental caries in children. Given the high percentage of illiterate parents in Iran, it is logical to consider oral health education methods that do not require reading or writing. Aggregate data analysis and individual data analysis yielded completely different results in this study.
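
    The individual-level association in such a study reduces to an odds ratio for the outcome given the exposure. A minimal sketch, using a hypothetical 2x2 table (the counts below are illustrative, not the survey's data):

    ```python
    def odds_ratio(a, b, c, d):
        """Odds ratio for exposure vs outcome from a 2x2 table.
        a: exposed with outcome, b: exposed without,
        c: unexposed with outcome, d: unexposed without."""
        return (a * d) / (b * c)

    # Hypothetical counts: mother's literacy (exposure) vs child caries (outcome).
    or_ = odds_ratio(3200, 1300, 3500, 1100)
    print(f"odds ratio: {or_:.2f}")  # a value < 1 suggests a protective association
    ```

    The ecological fallacy arises precisely because this individual-level odds ratio cannot be recovered from the province-level percentages used in the aggregate regression.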

  4. Multivariable Regression Analysis in Schistosoma mansoni-Infected Individuals in the Sudan Reveals Unique Immunoepidemiological Profiles in Uninfected, egg+ and Non-egg+ Infected Individuals.

    PubMed

    Elfaki, Tayseer Elamin Mohamed; Arndts, Kathrin; Wiszniewsky, Anna; Ritter, Manuel; Goreish, Ibtisam A; Atti El Mekki, Misk El Yemen A; Arriens, Sandra; Pfarr, Kenneth; Fimmers, Rolf; Doenhoff, Mike; Hoerauf, Achim; Layland, Laura E

    2016-05-01

    In the Sudan, Schistosoma mansoni infections are a major cause of morbidity in school-aged children, and infection rates are associated with available clean water sources. During infection, immune responses pass through Th1, Th2 and Treg phases, and patterns can relate to different stages of infection or immunity. This retrospective study evaluated immunoepidemiological aspects in 234 individuals (range 4-85 years old) from Kassala and Khartoum states in 2011. Systemic immune profiles (cytokines and immunoglobulins) and epidemiological parameters were surveyed in n = 110 persons presenting patent S. mansoni infections (egg+), n = 63 individuals positive for S. mansoni by PCR in sera but egg-negative (SmPCR+) and n = 61 people who were infection-free (Sm uninf). Immunoepidemiological findings were further investigated using two binary multivariable regression analyses. Nearly all egg+ individuals had no access to latrines and over 90% obtained water from the canal stemming from the Atbara River. With regard to age, infection and egg+ status were linked to the young and adolescent groups. In terms of immunology, S. mansoni infection per se was strongly associated with increased SEA-specific IgG4 but not IgE levels. IL-6, IL-13 and IL-10 were significantly elevated in patently infected individuals and positively correlated with egg load. In contrast, IL-2 and IL-1β were significantly lower in SmPCR+ individuals compared with the Sm uninf and egg+ groups, which was further confirmed by the multivariable regression analysis. Schistosomiasis remains an important public health problem in the Sudan, with a high number of patently infected individuals. In addition, SmPCR diagnostics revealed another cohort of infected individuals with a unique immunological profile, providing an avenue for future studies on non-patent infection states. 
Future studies should investigate the downstream signalling pathways/mechanisms of IL-2 and IL-1β as potential diagnostic markers in order to distinguish patent from non-patent individuals.

  5. An Analysis of Leisure Attitudes of the Individuals Participating in Dance Activities and the Relationship between Leisure Attitude and Life Satisfaction

    ERIC Educational Resources Information Center

    Gökyürek, Belgin

    2016-01-01

    This study sought to explore the leisure attitudes of the individuals participating in the dance activities, to compare them on the basis of various variables and to contribute to the understanding of the relationship between these attitudes and the life satisfaction of the individual. The research sample includes 302 individuals participating in…

  6. The Impacts of Individualization on Equity Educational Policies

    ERIC Educational Resources Information Center

    Francia, Guadalupe

    2013-01-01

    The present article has as its aim to illustrate and discuss the impacts of individualization strategies on equity educational policies through the analysis of individualized teaching strategies applied within the framework of educational priority policies in Sweden. The methodology used in our research work includes: (a) the study of research…

  7. Over-Education and Assortative Matching in Partnerships: A Theoretical Analysis

    ERIC Educational Resources Information Center

    Tampieri, Alessandro

    2016-01-01

    This paper argues that assortative matching may explain over-education. Education determines individuals' income and, due to the presence of assortative matching, the quality of partners in personal, social and working life. Thus, an individual acquires education to improve the expected partners' quality. However, since every individual of the…

  8. Family Perspectives on a Successful Transition to Adulthood for Individuals with Disabilities

    ERIC Educational Resources Information Center

    Henninger, Natalie A.; Taylor, Julie Lounds

    2014-01-01

    When researchers evaluate adult outcomes for individuals with intellectual and/or developmental disabilities (IDD), the perspective of families is not always considered. Parents of individuals with IDD ("N" = 198) answered an online survey about their definition of a successful transition to adulthood. Content analysis was used to…

  9. Individual Differences in Consumer Buying Patterns: A Behavioral Economic Analysis

    ERIC Educational Resources Information Center

    Cavalcanti, Paulo R.; Oliveira-Castro, Jorge M.; Foxall, Gordon R.

    2013-01-01

    Although previous studies have identified several regularities in buying behavior, no integrated view of individual differences related to such patterns has been yet proposed. The present research examined individual differences in patterns of buying behavior of fast-moving consumer goods, using panel data with information concerning purchases of…

  10. An Individual Differences Analysis of Double-Aspect Stimulus Perception.

    ERIC Educational Resources Information Center

    Forsyth, G. Alfred; Huber, R. John

    Any theory of information processing must address both what is processed and how that processing takes place. Most studies investigating variables which alter physical dimension utilization have ignored the large individual differences in selective attention or cue utilization. A paradigm was developed using an individual focus on information…

  11. Students' Attitudes towards Individuals with an Intellectual Disability

    ERIC Educational Resources Information Center

    Patel, Meera; Rose, John

    2014-01-01

    The aim of the study was to investigate attitudes held by a British student population towards individuals with an intellectual disability. Students participated in focus groups addressing their attitudes, behaviours and perceptions of individuals with an intellectual disability. Thematic analysis was the method used to identify emergent themes.…

  12. Understanding complex interactions using social network analysis.

    PubMed

    Pow, Janette; Gayen, Kaberi; Elliott, Lawrie; Raeside, Robert

    2012-10-01

    The aim of this paper is to raise awareness of social network analysis as a method to facilitate nursing research. The application of social network analysis to assessing network properties has yielded greater insight in many areas, including sociology, politics, business organisation and health care; however, its use in nursing has not received sufficient attention. We review the literature and illustrate the application of social network analysis using research examples. First, the value of social networks is discussed; then, by means of illustrative examples, the value of social network analysis to nursing is demonstrated. Social network analysis gives greater insight into social situations involving interactions between individuals, and has particular application to the study of interactions between nurses, and between nurses, patients and other actors. Social networks are systems in which people interact. Two quantitative techniques aid our understanding of these networks. The first is visualisation of the network. The second is centrality: individuals with high centrality are key communicators in a network. Applying social network analysis to nursing provides a simple method for understanding human interaction and how it might influence various health outcomes. It allows influential individuals (actors) to be identified. Their influence on the formation of social norms and communication can determine the extent to which new interventions or ways of thinking are accepted by a group. Thus, working with key individuals in a network could be critical to the success and sustainability of an intervention. Social network analysis can also help to assess the effectiveness of such interventions for the recipient and the service provider. © 2012 Blackwell Publishing Ltd.
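
    The centrality idea described above can be sketched with the simplest variant, degree centrality: each actor's number of direct ties, normalised by the number of other actors. The toy ward network below (actor names and ties are invented for illustration) identifies the key communicator:

    ```python
    from collections import defaultdict

    # Hypothetical undirected ties between actors on a ward.
    edges = [
        ("nurse_A", "nurse_B"), ("nurse_A", "patient_1"),
        ("nurse_A", "doctor_X"), ("nurse_B", "patient_2"),
        ("doctor_X", "patient_1"),
    ]

    def degree_centrality(edges):
        """Normalised degree centrality: ties / (n - 1) for each actor."""
        adj = defaultdict(set)
        for u, v in edges:
            adj[u].add(v)
            adj[v].add(u)
        n = len(adj)
        return {node: len(nbrs) / (n - 1) for node, nbrs in adj.items()}

    cent = degree_centrality(edges)
    key = max(cent, key=cent.get)  # the key communicator in this toy network
    print(key, cent[key])
    ```

    Other centrality measures (betweenness, closeness) refine this, but the principle is the same: position in the network, not individual attributes, determines influence.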

  13. Somatotype analysis of physically active individuals.

    PubMed

    Almeida, A H S; Santos, S A G; Castro, P J P; Rizzo, J A; Batista, G R

    2013-06-01

    The present study aimed to compare demographic variables, physical activity level, and health-related anthropometric indicators according to somatotype among physically active individuals. This descriptive cross-sectional study sampled 304 users of the jogging track at the Federal University of Pernambuco (UFPE) in Recife, state of Pernambuco, northeastern Brazil. Somatotypes were analyzed using the anthropometric technique proposed by Heath & Carter (1990). Physical activity level was assessed with the short version of the International Physical Activity Questionnaire (IPAQ). The health-related anthropometric indicators were body mass index (BMI), waist circumference (WC), waist-hip ratio (WHR), and conicity index (CI). Descriptive statistics were used to characterize the sample, followed by a multivariate analysis of variance (α = 0.05) to test for differences. In the somatotype analysis, women showed significantly higher endomorphy and lower ectomorphy than men. The age group ≤ 29 years had significantly lower endomorphy than the other age groups. Irregularly active individuals had significantly lower endomorphy. Individuals classified as obese or at risk by WHR, WC and CI had higher endomorphy and mesomorphy scores and lower ectomorphy scores. The somatotype of the physically active individuals in this study raises health concerns, mainly related to the high relative adiposity represented by endomorphy.
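
    The three health-related indicators named above are simple ratios of field measurements. A minimal sketch with illustrative inputs; the conicity index here follows Valdez's commonly cited form, CI = waist / (0.109 · sqrt(weight / height)), with waist and height in metres and weight in kilograms:

    ```python
    from math import sqrt

    def bmi(weight_kg, height_m):
        # Body mass index: kg per square metre of height.
        return weight_kg / height_m ** 2

    def whr(waist_m, hip_m):
        # Waist-hip ratio: a marker of central fat distribution.
        return waist_m / hip_m

    def conicity(waist_m, weight_kg, height_m):
        # Conicity index (Valdez): observed waist vs that of a cylinder
        # of the same weight and height.
        return waist_m / (0.109 * sqrt(weight_kg / height_m))

    w, h, waist, hip = 70.0, 1.75, 0.80, 0.95  # illustrative subject
    print(f"BMI: {bmi(w, h):.1f}  WHR: {whr(waist, hip):.2f}  "
          f"CI: {conicity(waist, w, h):.2f}")
    ```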

  14. Modelling of human exposure to air pollution in the urban environment: a GPS-based approach.

    PubMed

    Dias, Daniela; Tchepel, Oxana

    2014-03-01

    The main objective of this work was the development of a new modelling tool for quantifying human exposure to traffic-related air pollution within distinct microenvironments, using a novel approach to trajectory analysis of individuals. Mobile phones with Global Positioning System technology were used to collect individuals' daily trajectories at high temporal resolution, and a trajectory data mining and geo-spatial analysis algorithm was developed and implemented within a Geographical Information System to obtain time-activity patterns. These data were combined with air pollutant concentrations estimated for several microenvironments. In addition to outdoor concentrations, pollutant concentrations in distinct indoor microenvironments were characterised using a probabilistic approach. An example application for PM2.5 is presented and discussed. Daily average individual exposure had a mean value of 10.6 μg m(-3), with 5th-95th percentiles of 6.0-16.4 μg m(-3). Analysis of the results shows that point air quality measurements alone cannot explain the intra- and inter-individual variability of exposure levels. The methodology developed and implemented in this work provides the time-sequence of exposure events, making it possible to associate exposure with individual activities, and delivers the main statistics on an individual's air pollution exposure at high spatio-temporal resolution.

  15. Analysis of differentially expressed genes between fluoride-sensitive and fluoride-endurable individuals in midgut of silkworm, Bombyx mori.

    PubMed

    Qian, Heying; Li, Gang; He, Qingling; Zhang, Huaguang; Xu, Anying

    2016-08-15

    Fluoride tolerance is an economically important trait of the silkworm. Near-isogenic lines (NILs) for the dominant endurance to fluoride (Def) gene in Bombyx mori had been constructed previously. Here, we analyzed the midgut gene expression profiles of fluoride-sensitive and fluoride-endurable individuals of the Def NILs using high-throughput Illumina sequencing and bioinformatics tools, and identified differentially expressed genes between these individuals. A total of 3,612,399 and 3,567,631 clean tags were obtained for the libraries of fluoride-endurable and fluoride-sensitive individuals, corresponding to 32,933 and 43,976 distinct clean tags, respectively. Analysis indicates that 241 genes are differentially expressed between the two libraries; of these, 30 are up-regulated and 211 down-regulated in fluoride-endurable individuals. Pathway enrichment analysis demonstrates that genes related to ribosomes, pancreatic secretion, steroid biosynthesis, glutathione metabolism, and glycerolipid metabolism are down-regulated in fluoride-endurable individuals. qRT-PCR was conducted to confirm the DGE results. The present study analyzed the differential expression of related genes and sought to determine whether the crucial genes are related to fluoride detoxification, which might elucidate the effects of fluoride and open a new avenue in fluorosis research. Copyright © 2016 Elsevier B.V. All rights reserved.

  16. Species-specific treatment effects of helminth/HIV-1 co-infection: a systematic review and meta-analysis.

    PubMed

    Sangaré, Laura R; Herrin, Bradley R; John-Stewart, Grace; Walson, Judd L

    2011-10-01

    In sub-Saharan Africa, over 22 million people are estimated to be co-infected with both helminths and HIV-1. Several studies have suggested that de-worming individuals with HIV-1 may delay HIV-1 disease progression, and that the benefit of de-worming may vary by individual helminth species. We conducted a systematic review and meta-analysis of the published literature to determine the effect of treatment of individual helminth infections on markers of HIV-1 progression (CD4 count and HIV viral load). There was a trend towards an association between treatment for Schistosoma mansoni and a decrease in HIV viral load (Weighted mean difference (WMD)=-0·10; 95% Confidence interval (CI): -0·24, 0·03), although this association was not seen for Ascaris lumbricoides, hookworm or Trichuris trichiura. Treatment of A. lumbricoides, S. mansoni, hookworm or T. trichiura was not associated with a change in CD4 count. While pooled data from randomized trials suggested clinical benefit of de-worming for individual helminth species, these effects decreased when observational data were included in the pooled analysis. While further trials are needed to confirm the role of anthelmintic treatment in HIV-1 co-infected individuals, providing anthelmintics to individuals with HIV-1 may be a safe, inexpensive and practical intervention to slow progression of HIV-1.
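
    The pooled weighted mean difference reported above comes from inverse-variance weighting of the per-study estimates. A hedged sketch of the fixed-effect version (the study effects and standard errors below are illustrative, not the review's data):

    ```python
    from math import sqrt

    def pool_wmd(studies):
        """Fixed-effect inverse-variance pooling.
        studies: (effect, standard_error) pairs; returns (pooled, 95% CI)."""
        weights = [1 / se ** 2 for _, se in studies]
        pooled = sum(w * e for (e, _), w in zip(studies, weights)) / sum(weights)
        se = 1 / sqrt(sum(weights))
        return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

    trials = [(-0.15, 0.08), (-0.05, 0.10), (-0.12, 0.09)]  # hypothetical trials
    est, (lo, hi) = pool_wmd(trials)
    print(f"pooled WMD: {est:.2f}  95% CI: ({lo:.2f}, {hi:.2f})")
    ```

    A random-effects model, as typically used when trials are heterogeneous, adds a between-study variance component to each weight but follows the same weighting logic.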

  17. Species-specific treatment effects of helminth/HIV-1 co-infection: a systematic review and meta-analysis

    PubMed Central

    SANGARÉ, LAURA R.; HERRIN, BRADELY R.; JOHN-STEWART, GRACE; WALSON, JUDD L.

    2012-01-01

    In sub-Saharan Africa, over 22 million people are estimated to be co-infected with both helminths and HIV-1. Several studies have suggested that de-worming individuals with HIV-1 may delay HIV-1 disease progression, and that the benefit of de-worming may vary by individual helminth species. We conducted a systematic review and meta-analysis of the published literature to determine the effect of treatment of individual helminth infections on markers of HIV-1 progression (CD4 count and HIV viral load). There was a trend towards an association between treatment for Schistosoma mansoni and a decrease in HIV viral load (Weighted mean difference (WMD)=−0·10; 95% Confidence interval (CI): −0·24, 0·03), although this association was not seen for Ascaris lumbricoides, hookworm or Trichuris trichiura. Treatment of A. lumbricoides, S. mansoni, hookworm or T. trichiura was not associated with a change in CD4 count. While pooled data from randomized trials suggested clinical benefit of de-worming for individual helminth species, these effects decreased when observational data were included in the pooled analysis. While further trials are needed to confirm the role of anthelmintic treatment in HIV-1 co-infected individuals, providing anthelmintics to individuals with HIV-1 may be a safe, inexpensive and practical intervention to slow progression of HIV-1. PMID:21729353

  18. The prevalence, metabolic risk and effects of lifestyle intervention for metabolically healthy obesity: a systematic review and meta-analysis: A PRISMA-compliant article.

    PubMed

    Lin, Hanli; Zhang, Liqun; Zheng, Ruizhi; Zheng, Yishan

    2017-11-01

    We conducted a systematic review and meta-analysis, first to obtain a reliable estimate of the prevalence of metabolically healthy obesity (MHO) among obese individuals, then to assess the risk of developing metabolic abnormalities (MA) among MHO individuals, and finally to evaluate the effects of traditional lifestyle interventions on the metabolic profile of MHO subjects. The review was conducted following the PRISMA guideline, and original studies were searched up to December 31, 2016. The prevalence of MHO in obesity from each study was pooled using random-effects models. Relative risks (RRs) were pooled to determine the risk of developing MA for MHO compared with metabolically healthy normal-weight (MHNW) subjects. For the meta-analysis of intervention studies, the mean difference and standardized mean difference were estimated for each metabolic parameter within each study and then pooled using a random-effects model. Overall, 40 population-based studies reported the prevalence of MHO in obesity, and 12 cohort studies and 7 intervention studies were included in the meta-analysis. About 35.0% of obese individuals were metabolically healthy, with dramatic differences in prevalence among different areas. However, 0.49 (95% confidence interval [CI]: 0.38 to 0.60) of MHO individuals would develop one or more MA within 10 years. Compared with MHNW subjects, MHO subjects presented a higher risk of incident MA (pooled RR = 1.80, 95% CI: 1.53-2.11). Following intervention, there was a significant improvement in metabolic state for metabolically abnormal obese (MAO) subjects, whereas only diastolic blood pressure was reduced for MHO individuals. Almost one-third of obese individuals are metabolically healthy, but they remain at higher risk of advancing to an unhealthy state. Therefore, MHO individuals should still be advised to maintain or adopt a healthy lifestyle, so as to counterbalance the adverse effects of obesity. Copyright © 2017 The Authors. Published by Wolters Kluwer Health, Inc. All rights reserved.

  19. A study protocol to evaluate the relationship between outdoor air pollution and pregnancy outcomes

    PubMed Central

    2010-01-01

    Background The present study protocol is designed to assess the relationship between outdoor air pollution and low birth weight and preterm birth outcomes by performing a semi-ecological analysis. Semi-ecological design studies are widely used to assess effects of air pollution in humans. In this type of analysis, health outcomes and covariates are measured in individuals, and exposure assignments are usually based on air quality monitor stations. Therefore, estimating individual exposures is one of the major challenges when investigating these relationships with a semi-ecologic design. Methods/Design A semi-ecologic study consisting of a retrospective cohort study with ecologic assignment of exposure is applied. Health outcomes and covariates are collected at the Primary Health Care Center. Data from the pregnancy registry, clinical records and a specific questionnaire administered orally to the mothers of children born in the period 2007-2010 in the Portuguese Alentejo Litoral region are collected by the research team. Outdoor air pollution data are collected with a lichen diversity biomonitoring program, and individual pregnancy exposures are assessed with spatial geostatistical simulation, which provides the basis for uncertainty analysis of individual exposures. Awareness of outdoor air pollution uncertainty will improve the validity of individual exposure assignments for further statistical analysis with multivariate regression models. Discussion Exposure misclassification is an issue of concern in semi-ecological design. In this study, personal exposures are assigned to each pregnant woman using geocoded address data. A stochastic simulation method is applied to the lichen diversity value index measured at biomonitoring survey locations, in order to assess the spatial uncertainty of the index at each geocoded address. These methods assume a model for spatial autocorrelation of exposure and provide a distribution of exposures at each study location. We believe that the variability of simulated exposure values at geocoded addresses will improve knowledge of the variability of exposures, therefore improving the validity of the individual exposures used as input in subsequent statistical analysis. PMID:20950449

  20. Dual-process models of health-related behaviour and cognition: a review of theory.

    PubMed

    Houlihan, S

    2018-03-01

    The aim of this review was to synthesise a spectrum of theories incorporating dual-process models of health-related behaviour. Review of theory, adapted loosely from Cochrane-style systematic review methodology. Inclusion criteria were specified to identify all relevant dual-process models that explain decision-making in the context of decisions made about human health. Data analysis took the form of iterative template analysis (adapted from the conceptual synthesis framework used in other reviews of theory), and in this way theories were synthesised on the basis of shared theoretical constructs and causal pathways. Analysis and synthesis proceeded in turn, instead of moving uni-directionally from analysis of individual theories to synthesis of multiple theories. Namely, the reviewer considered and reconsidered individual theories and theoretical components in generating the narrative synthesis' main findings. Drawing on systematic review methodology, 11 electronic databases were searched for relevant dual-process theories. After de-duplication, 12,198 records remained. Screening of title and abstract led to the exclusion of 12,036 records, after which 162 full-text records were assessed. Of those, 21 records were included in the review. Moving back and forth between analysis of individual theories and the synthesis of theories grouped on the basis of theme or focus yielded additional insights into the orientation of a theory to an individual. Theories could be grouped in part on their treatment of an individual as an irrational actor, as social actor, as actor in a physical environment or as a self-regulated actor. 
Synthesising identified theories into a general dual-process model of health-related behaviour indicated that such behaviour is the result of both propositional and unconscious reasoning driven by an individual's response to internal cues (such as heuristics, attitude and affect), physical cues (social and physical environmental stimuli) as well as regulating factors (such as habit) that mediate between them. Copyright © 2017. Published by Elsevier Ltd.

  1. A study protocol to evaluate the relationship between outdoor air pollution and pregnancy outcomes.

    PubMed

    Ribeiro, Manuel C; Pereira, Maria J; Soares, Amílcar; Branquinho, Cristina; Augusto, Sofia; Llop, Esteve; Fonseca, Susana; Nave, Joaquim G; Tavares, António B; Dias, Carlos M; Silva, Ana; Selemane, Ismael; de Toro, Joaquin; Santos, Mário J; Santos, Fernanda

    2010-10-15

    The present study protocol is designed to assess the relationship between outdoor air pollution and low birth weight and preterm birth outcomes by performing a semi-ecological analysis. Semi-ecological design studies are widely used to assess effects of air pollution in humans. In this type of analysis, health outcomes and covariates are measured in individuals, and exposure assignments are usually based on air quality monitor stations. Therefore, estimating individual exposures is one of the major challenges when investigating these relationships with a semi-ecologic design. A semi-ecologic study consisting of a retrospective cohort study with ecologic assignment of exposure is applied. Health outcomes and covariates are collected at the Primary Health Care Center. Data from the pregnancy registry, clinical records and a specific questionnaire administered orally to the mothers of children born in the period 2007-2010 in the Portuguese Alentejo Litoral region are collected by the research team. Outdoor air pollution data are collected with a lichen diversity biomonitoring program, and individual pregnancy exposures are assessed with spatial geostatistical simulation, which provides the basis for uncertainty analysis of individual exposures. Awareness of outdoor air pollution uncertainty will improve the validity of individual exposure assignments for further statistical analysis with multivariate regression models. Exposure misclassification is an issue of concern in semi-ecological design. In this study, personal exposures are assigned to each pregnant woman using geocoded address data. A stochastic simulation method is applied to the lichen diversity value index measured at biomonitoring survey locations, in order to assess the spatial uncertainty of the index at each geocoded address. These methods assume a model for spatial autocorrelation of exposure and provide a distribution of exposures at each study location. We believe that the variability of simulated exposure values at geocoded addresses will improve knowledge of the variability of exposures, therefore improving the validity of the individual exposures used as input in subsequent statistical analysis.
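
The stochastic simulation step described in this protocol draws many equally probable realizations of a spatially autocorrelated field, so that the spread of simulated values at each address quantifies exposure uncertainty. The sketch below is a simplified, unconditional Gaussian simulation via Cholesky factorization (the study itself conditions on the lichen survey data); the coordinates and covariance range are invented.

```python
import numpy as np

rng = np.random.default_rng(42)

# hypothetical geocoded address coordinates in km (not the study's data)
addresses = rng.uniform(0.0, 10.0, size=(25, 2))

# exponential covariance model with an assumed 3 km range parameter
dist = np.linalg.norm(addresses[:, None, :] - addresses[None, :, :], axis=-1)
cov = np.exp(-dist / 3.0)

# draw 500 unconditional Gaussian realizations via Cholesky factorization
L = np.linalg.cholesky(cov + 1e-9 * np.eye(len(addresses)))
realizations = L @ rng.standard_normal((len(addresses), 500))

# the spread across realizations quantifies exposure uncertainty per address
exposure_sd = realizations.std(axis=1)
```

Conditioning each realization on the measured survey locations (e.g. via sequential Gaussian simulation) would shrink `exposure_sd` near monitored sites, which is the behaviour the protocol relies on.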

  2. Prevalence of cymothoid isopods (Crustacea, Isopoda) and proximate analysis of parasites and their host fishes, Southeastern India.

    PubMed

    Rajaram, Rajendran; Rakesh Kumar, Kulanthasamy; Vinothkumar, Shanmugam; Metillo, Ephrime B

    2018-06-01

    Cymothoid isopods (Crustacea, Isopoda) are considered a potential threat to the health of different fish species. In order to evaluate the prevalence of cymothoid isopods and the proximate composition of the parasites and their hosts, an investigation was carried out on fish species belonging to the families Hemiramphidae and Belonidae in the Palk Bay region, Southeastern India. A total of 1265 individual teleost fish belonging to the hemiramphid species Hemiramphus far (462), H. archipelagicus (78) and H. lutkie (277), and the belonid species Tylosurus crocodilus (448), were examined for cymothoid ectoparasitic infestation. Prevalence in H. far was the highest (39%) for the cymothoid Mothocya plagulophora, while T. crocodilus was most infested (13%) with Mothocya renardi; H. far and H. lutkie were not infested by M. renardi, while T. crocodilus was not infested by M. plagulophora. Proximate analysis showed reduced levels of protein in parasite-infested fish compared with non-infested individuals; carbohydrate and lipid concentrations were likewise lower in infested fish. Proximate analysis values in the two Mothocya parasites were similar, and these values were comparable to those of unaffected fish, indicating that the parasites were well nourished. The proximate analysis of the isopod parasite M. plagulophora showed 21.6 ± 7.7, 1.26 ± 0.05 and 5.49 ± 1.06% of protein, carbohydrate and lipid respectively, and in M. renardi, 21.09 ± 6.6, 1.32 ± 0.12 and 5.83 ± 0.72% respectively. Cadmium levels were similar between affected and non-affected fish individuals and among species. Pb levels were comparable among all T. crocodilus individuals, and Cd levels did not show much variation between affected and unaffected individuals in any of the four fish species.

  3. Social cohesion matters in health

    PubMed Central

    2013-01-01

    Introduction The concept of social cohesion has invoked debate due to the vagueness of its definition and the limitations of current measurements. This paper attempts to examine the concept of social cohesion, develop measurements, and investigate the relationship between social cohesion and individual health. Methods This study used a multilevel study design. The individual-level samples from 29 high-income countries were obtained from the 2000 World Value Survey (WVS) and the 2002 European Value Survey. National-level social cohesion statistics were obtained from Organization of Economic Cooperation and Development datasets, World Development Indicators, and Asian Development Bank key indicators for the year 2000, and from aggregating responses from the WVS. In total, 47,923 individuals were included in this study. Factor analysis was applied to identify dimensions of social cohesion, which were used as entities in a cluster analysis to generate a regime typology of social cohesion. Multilevel regression models were then applied to assess the influences of social cohesion on an individual's self-rated health. Results and discussion Factor analysis identified five dimensions of social cohesion: social equality, social inclusion, social development, social capital, and social diversity. Cluster analysis then revealed five regimes of social cohesion. A multilevel analysis showed that respondents in countries with higher social inclusion, social capital, and social diversity were more likely to report good health above and beyond individual-level characteristics. Conclusions This study is an innovative effort to incorporate different aspects of social cohesion. It suggests that social cohesion was associated with individual self-rated health after controlling for individual characteristics. 
To achieve further advancement in population health, developed countries should consider policies that would foster a society with a high level of social inclusion, social capital, and social diversity. Future research could focus on identifying possible pathways by which social cohesion influences various health outcomes. PMID:24165541

  4. Understanding the structure of skill through a detailed analysis of Individuals' performance on the Space Fortress game.

    PubMed

    Towne, Tyler J; Boot, Walter R; Ericsson, K Anders

    2016-09-01

    In this paper we describe a novel approach to the study of individual differences in acquired skilled performance in complex laboratory tasks based on an extension of the methodology of the expert-performance approach (Ericsson & Smith, 1991) to shorter periods of training and practice. In contrast to more traditional approaches that study the average performance of groups of participants, we explored detailed behavioral changes for individual participants across their development on the Space Fortress game. We focused on dramatic individual differences in learning and skill acquisition at the individual level by analyzing the archival game data of several interesting players to uncover the specific structure of their acquired skill. Our analysis revealed that even after maximal values for game-generated subscores were reached, the most skilled participant's behaviors such as his flight path, missile firing, and mine handling continued to be refined and improved (Participant 17 from Boot et al., 2010). We contrasted this participant's behavior with the behavior of several other participants and found striking differences in the structure of their performance, which calls into question the appropriateness of averaging their data. For example, some participants engaged in different control strategies such as "world wrapping" or maintaining a finely-tuned circular flight path around the fortress (in contrast to Participant 17's angular flight path). In light of these differences, we raise fundamental questions about how skill acquisition for individual participants should be studied and described. Our data suggest that a detailed analysis of individuals' data is an essential step for generating a general theory of skill acquisition that explains improvement at the group and individual levels. Copyright © 2016 Elsevier B.V. All rights reserved.

  5. Assessment of bruise age on dark-skinned individuals using tristimulus colorimetry.

    PubMed

    Thavarajah, D; Vanezis, P; Perrett, D

    2012-01-01

    Studies on the ageing of bruises have been reported in Caucasians or individuals of fair ethnicity. This study focuses on bruise changes in dark-skinned individuals using tristimulus colorimetry for forensic analysis. Eighteen subjects of South Indian or Sri Lankan ethnicity were recruited. Subjects were bruised using a vacuum pump, and daily colour measurements of the bruise were then taken with a tristimulus colorimeter. L*a*b* readings were recorded for a control area and for the bruise until it disappeared. Two Caucasians were used for comparison. This study showed that, using colorimetry, bruises on dark-skinned individuals can be measured and analysed even when the bruises are visually unclear. As the bruise is beneath the skin, the colour differences ΔL*, Δa* and Δb* were calculated. All values showed a trend, indicating that the L*a*b* measuring technique is a reliable method for analysing bruises on dark-skinned individuals. Comparisons of Asian and Caucasian subjects were performed; the largest difference was seen in the b* value. Statistical analysis showed that the ΔL* colour difference was the most consistent (95% CI -4.05 to -2.49), showing a significant difference between days 1-4 and 5-8. Objective assessment of bruises on dark-skinned individuals using the L*a*b* method gave reproducible results. Furthermore, the study showed that the yellowing of a bruise cannot be seen or measured with a tristimulus colorimeter on dark-skinned individuals due to the pigmentation of the skin. With further studies and more subjects, the age of bruises could potentially be assessed for use in forensic analysis.
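
The colour differences ΔL*, Δa* and Δb* used in this record are simple componentwise differences between the bruise and control readings, and the standard CIE76 colour difference ΔE combines them as a Euclidean distance. A minimal sketch (the L*a*b* triples are invented, not the study's measurements):

```python
import math

def colour_difference(bruise, control):
    """Componentwise CIELAB differences and the CIE76 delta-E (Euclidean distance)."""
    dL, da, db = (b - c for b, c in zip(bruise, control))
    delta_e = math.sqrt(dL ** 2 + da ** 2 + db ** 2)
    return (dL, da, db), delta_e

# hypothetical readings: a darker, redder bruise vs. the control skin area
(dL, da, db), dE = colour_difference((47.0, 4.0, 14.0), (50.0, 0.0, 14.0))
```

A negative ΔL* indicates the bruise is darker than the surrounding skin, which is why the study found ΔL* the most consistent signal on strongly pigmented skin.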

  6. Social Cognitive and Planned Behavior Variables Associated with Stages of Change for Physical Activity in Spinal Cord Injury: A Multivariate Analysis

    ERIC Educational Resources Information Center

    Keegan, John; Ditchman, Nicole; Dutta, Alo; Chiu, Chung-Yi; Muller, Veronica; Chan, Fong; Kundu, Madan

    2016-01-01

    Purpose: To apply the constructs of social cognitive theory (SCT) and the theory of planned behavior (TPB) to understand the stages of change (SOC) for physical activities among individuals with a spinal cord injury (SCI). Method: Ex post facto design using multivariate analysis of variance (MANOVA). The participants were 144 individuals with SCI…

  7. Using Public Libraries To Provide Technology Access for Individuals in Poverty: A Nationwide Analysis of Library Market Areas Using a Geographic Information System.

    ERIC Educational Resources Information Center

    Jue, Dean K.; Koontz, Christie M.; Magpantay, J. Andrew; Lance, Keith Curry; Seidl, Ann M.

    1999-01-01

    Assesses the distribution of poverty areas in the United States relative to public library outlet locations to begin discussion on the best possible public library funding and development policies that would serve individuals in poverty areas. Provides a comparative analysis of poverty relative to public library outlets using two common methods of…

  8. Rapid analysis of the microfibril angle of loblolly pine from two test sites using near-infrared analysis

    Treesearch

    ChiLeung So; Jennifer Myszewski; Thomas Elder; Les Groom

    2013-01-01

    Abstract There have been several recent studies employing near infrared (NIR) spectroscopy for the rapid determination of microfibril angle (MFA). However, only a few have utilized samples cut from individual rings of increment cores, and none have been as large as this present study, sampling over 600 trees from two test sites producing over 3000 individual ring...

  9. Analysis of Variables to Predict First Year Persistence Using Logistic Regression Analysis at the University of South Florida

    ERIC Educational Resources Information Center

    Miller, T. E.; Herreid, C. H.

    2008-01-01

    This article presents a project intended to produce a model for predicting the risk of attrition of individual students enrolled at the University of South Florida. The project is premised upon the principle that college student attrition is as highly individual and personal as any other aspect of the college-going experience. Students make…
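
Attrition-risk models of the kind this record describes are commonly logistic regressions over individual student attributes. The sketch below fits one by gradient descent on synthetic data; the predictors, coefficients, and data are all invented for illustration and are not the University of South Florida model.

```python
import numpy as np

rng = np.random.default_rng(1)

# synthetic standardized predictors, e.g. entry GPA and an engagement score
X = rng.normal(size=(200, 2))
true_w = np.array([1.5, -0.8])                  # assumed "true" coefficients
p_true = 1.0 / (1.0 + np.exp(-(X @ true_w)))
y = (rng.uniform(size=200) < p_true).astype(float)   # 1 = persisted

# fit logistic regression by gradient descent on the log-loss
w = np.zeros(2)
for _ in range(3000):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    w -= 0.1 * (X.T @ (p - y)) / len(y)         # gradient step

accuracy = float(((p > 0.5) == y.astype(bool)).mean())
```

In practice such a model outputs a per-student probability of persisting, which is exactly the "highly individual" risk score the project is premised on.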

  10. Analysis of an industry in transition.

    PubMed

    Baliga, B R; Johnson, B

    1986-12-01

    The health care industry is undergoing major structural changes. The significance of these changes for individual competitors moving toward the 1990s is not yet clear. This article assesses the implications of the current changes by applying Porter's industry structure and generic strategy frameworks to the health care industry. Present trends are compared to this analysis to highlight areas where individual hospitals might improve their competitive positioning.

  11. Discrimination of Closely-Spaced Geosynchronous Satellites - Phase Curve Analysis & New Small Business Innovative Research (SBIR) Efforts

    DTIC Science & Technology

    2010-09-01

    Discrimination of Closely-Spaced Geosynchronous Satellites – Phase Curve Analysis & New Small Business Innovative Research (SBIR) Efforts...such objects from one time epoch to another showcases the deficiencies in associating individual objects before and after the configuration change...1]) have emphasized examples of multiple satellites occupying the same geosynchronous slot, with individual satellites maneuvering about one another

  12. Sample pooling for real-time PCR detection and virulence determination of the footrot pathogen Dichelobacter nodosus.

    PubMed

    Frosth, Sara; König, Ulrika; Nyman, Ann-Kristin; Aspán, Anna

    2017-09-01

    Dichelobacter nodosus is the principal cause of ovine footrot, and strain virulence is an important factor in disease severity. Therefore, detection and virulence determination of D. nodosus are important for proper diagnosis of the disease. Today this is possible by real-time PCR analysis. Analysis of large numbers of samples is costly and laborious; therefore, pooling of individual samples is common in surveillance programs. However, pooling can reduce the sensitivity of the method. The aim of this study was to develop a pooling method for real-time PCR analysis that would allow sensitive detection and simultaneous virulence determination of D. nodosus. A total of 225 sheep from 17 flocks were sampled using ESwabs within the Swedish Footrot Control Program in 2014. Samples were first analysed individually and then in pools of five by real-time PCR assays targeting the 16S rRNA and aprV2/B2 genes of D. nodosus. Each pool consisted of four negative samples and one positive D. nodosus sample with varying amounts of the bacterium. In the individual analysis, 61 (27.1%) samples were positive in the 16S rRNA and aprV2/B2 PCR assays and 164 (72.9%) samples were negative. All samples positive in the aprV2/B2 PCR assay were of the aprB2 variant. The pooled analysis showed that all 41 pools were likewise positive for D. nodosus 16S rRNA and the aprB2 variant. The diagnostic sensitivity for pooled and individual samples was therefore similar. Our method includes concentration of the bacteria before DNA extraction, which may account for the maintenance of diagnostic sensitivity. The diagnostic sensitivity of the real-time PCR assays on pooled samples was comparable to that obtained for individually analysed samples. Even sub-clinical infections could be detected in the pooled samples, which is important for control of the disease. This method may therefore be implemented in footrot control programs, where it can replace analysis of individual samples.
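
Pooling one positive swab with four negatives dilutes the target roughly five-fold, and for a well-behaved assay each two-fold dilution costs about one quantification cycle (Ct). This is why pooling risks sensitivity near the detection limit, and why the concentration step mentioned above matters. A hedged back-of-envelope sketch (pool size and efficiency values are illustrative, not assay-specific):

```python
import math

def expected_ct_shift(pool_size, efficiency=1.0):
    """Expected Ct increase when one positive sample is diluted 1:pool_size.

    efficiency = 1.0 models perfect doubling per cycle (i.e. base 2 amplification).
    """
    return math.log(pool_size, 1.0 + efficiency)

shift = expected_ct_shift(5)   # pools of five, as in this study's design
```

With perfect efficiency the shift is log2(5) ≈ 2.3 cycles; a sample whose individual Ct is within ~2.3 cycles of the assay cutoff could therefore be missed in a pool unless the template is re-concentrated.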

  13. 3D structure of individual nanocrystals in solution by electron microscopy

    NASA Astrophysics Data System (ADS)

    Park, Jungwon; Elmlund, Hans; Ercius, Peter; Yuk, Jong Min; Limmer, David T.; Chen, Qian; Kim, Kwanpyo; Han, Sang Hoon; Weitz, David A.; Zettl, A.; Alivisatos, A. Paul

    2015-07-01

    Knowledge about the synthesis, growth mechanisms, and physical properties of colloidal nanoparticles has been limited by technical impediments. We introduce a method for determining three-dimensional (3D) structures of individual nanoparticles in solution. We combine a graphene liquid cell, high-resolution transmission electron microscopy, a direct electron detector, and an algorithm for single-particle 3D reconstruction originally developed for analysis of biological molecules. This method yielded two 3D structures of individual platinum nanocrystals at near-atomic resolution. Because our method derives the 3D structure from images of individual nanoparticles rotating freely in solution, it enables the analysis of heterogeneous populations of potentially unordered nanoparticles that are synthesized in solution, thereby providing a means to understand the structure and stability of defects at the nanoscale.

  14. The views and habits of the individuals with mental illness about physical activity and nutrition.

    PubMed

    Çelik Ince, Sevecen; Partlak Günüşen, Neslihan

    2018-05-07

    The aim of this study is to determine the views and habits of the individuals with mental illness on physical activities and nutrition behaviors. This study was carried out descriptive qualitative method. The sample of the study consisted of 15 individuals with mental illness. The data were collected with Socio-Demographic Information Form and Semi-Structured Interview Form and analyzed by content analysis. Four main themes emerged as the result of the analysis of the data. These themes are the barriers, facilitators, habits, and the needs. Mental health nurses should be aware of the barriers of individuals with mental illness. It is recommended that mental health nurses make interventions to encourage patients to have physical activity and healthy eating. © 2018 Wiley Periodicals, Inc.

  15. Neuroforecasting Aggregate Choice

    PubMed Central

    Knutson, Brian; Genevsky, Alexander

    2018-01-01

    Advances in brain-imaging design and analysis have allowed investigators to use neural activity to predict individual choice, while emerging Internet markets have opened up new opportunities for forecasting aggregate choice. Here, we review emerging research that bridges these levels of analysis by attempting to use group neural activity to forecast aggregate choice. A survey of initial findings suggests that components of group neural activity might forecast aggregate choice, in some cases even beyond traditional behavioral measures. In addition to demonstrating the plausibility of neuroforecasting, these findings raise the possibility that not all neural processes that predict individual choice forecast aggregate choice to the same degree. We propose that although integrative choice components may confer more consistency within individuals, affective choice components may generalize more broadly across individuals to forecast aggregate choice. PMID:29706726

  16. Psychometric Properties and Utility of the Social Vulnerability Questionnaire for Individuals with Intellectual and Developmental Disabilities.

    PubMed

    Fisher, Marisa H; Shivers, Carolyn M; Josol, Cynde K

    2018-06-05

    Although it is well-known that individuals with intellectual and developmental disabilities (IDD) are socially vulnerable, the field lacks valid assessments to identify risk factors for victimization. Parents/caregivers of 428 individuals with IDD (ages 12-53) completed the social vulnerability questionnaire (SVQ), a measure developed to assess specific aspects of social vulnerability among individuals with various forms of IDD. This study examined the psychometric structure of the SVQ (exploratory and confirmatory factor analysis), and the utility of the factors of the SVQ as predictors of diagnostic category (through discriminate function analysis). Results provide psychometric support for use of the SVQ and its factors for further research and as part of a clinical assessment battery to assess social vulnerability and to develop interventions.

  17. Understanding the Connection Between Traumatic Brain Injury and Alzheimer’s Disease: A Population-Based Medical Record Review Analysis

    DTIC Science & Technology

    2017-10-01

    individuals with a confirmed TBI to age- and sex-matched individuals from the population without a TBI. Target completion 6-24-months f. Determine...reviewed, yielding 1,428 confirmed cases (yield rate of 26%). 3 e. Match individuals with a confirmed TBI to age- and sex-matched individuals...a confirmed TBI to two (2) age- and sex-matched individuals from the population without a TBI. b. For TBI events that were associated with other

  18. 22q11.2 deletion syndrome in diverse populations.

    PubMed

    Kruszka, Paul; Addissie, Yonit A; McGinn, Daniel E; Porras, Antonio R; Biggs, Elijah; Share, Matthew; Crowley, T Blaine; Chung, Brian H Y; Mok, Gary T K; Mak, Christopher C Y; Muthukumarasamy, Premala; Thong, Meow-Keong; Sirisena, Nirmala D; Dissanayake, Vajira H W; Paththinige, C Sampath; Prabodha, L B Lahiru; Mishra, Rupesh; Shotelersuk, Vorasuk; Ekure, Ekanem Nsikak; Sokunbi, Ogochukwu Jidechukwu; Kalu, Nnenna; Ferreira, Carlos R; Duncan, Jordann-Mishael; Patil, Siddaramappa Jagdish; Jones, Kelly L; Kaplan, Julie D; Abdul-Rahman, Omar A; Uwineza, Annette; Mutesa, Leon; Moresco, Angélica; Obregon, María Gabriela; Richieri-Costa, Antonio; Gil-da-Silva-Lopes, Vera L; Adeyemo, Adebowale A; Summar, Marshall; Zackai, Elaine H; McDonald-McGinn, Donna M; Linguraru, Marius George; Muenke, Maximilian

    2017-04-01

    22q11.2 deletion syndrome (22q11.2 DS) is the most common microdeletion syndrome and is underdiagnosed in diverse populations. This syndrome has a variable phenotype and affects multiple systems, making early recognition imperative. In this study, individuals from diverse populations with 22q11.2 DS were evaluated clinically and by facial analysis technology. Clinical information from 106 individuals and images from 101 were collected from individuals with 22q11.2 DS from 11 countries; average age was 11.7 and 47% were male. Individuals were grouped into categories of African descent (African), Asian, and Latin American. We found that the phenotype of 22q11.2 DS varied across population groups. Only two findings, congenital heart disease and learning problems, were found in greater than 50% of participants. When comparing the clinical features of 22q11.2 DS in each population, the proportion of individuals within each clinical category was statistically different except for learning problems and ear anomalies (P < 0.05). However, when Africans were removed from analysis, six additional clinical features were found to be independent of ethnicity (P ≥ 0.05). Using facial analysis technology, we compared 156 Caucasians, Africans, Asians, and Latin American individuals with 22q11.2 DS with 156 age and gender matched controls and found that sensitivity and specificity were greater than 96% for all populations. In summary, we present the varied findings from global populations with 22q11.2 DS and demonstrate how facial analysis technology can assist clinicians in making accurate 22q11.2 DS diagnoses. This work will assist in earlier detection and in increasing recognition of 22q11.2 DS throughout the world. © 2017 Wiley Periodicals, Inc.
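
The greater-than-96% sensitivity and specificity figures in this record follow from the standard confusion-matrix definitions. A minimal sketch with illustrative counts (invented for the example, not the study's actual tallies):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# illustrative counts for 156 cases and 156 matched controls
sens, spec = sensitivity_specificity(tp=151, fn=5, tn=152, fp=4)
```

Here sensitivity is the fraction of true 22q11.2 DS cases the classifier flags, and specificity is the fraction of unaffected controls it correctly clears; both must stay high for a screening tool used across diverse populations.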

  19. Meta-analysis of quantitative pleiotropic traits for next-generation sequencing with multivariate functional linear models

    PubMed Central

    Chiu, Chi-yang; Jung, Jeesun; Chen, Wei; Weeks, Daniel E; Ren, Haobo; Boehnke, Michael; Amos, Christopher I; Liu, Aiyi; Mills, James L; Ting Lee, Mei-ling; Xiong, Momiao; Fan, Ruzong

    2017-01-01

    To analyze next-generation sequencing data, multivariate functional linear models are developed for a meta-analysis of multiple studies to connect genetic variant data to multiple quantitative traits adjusting for covariates. The goal is to take advantage of both meta-analysis and pleiotropic analysis in order to improve power and to carry out a unified association analysis of multiple studies and multiple traits of complex disorders. Three types of approximate F-distributions based on the Pillai–Bartlett trace, Hotelling–Lawley trace, and Wilks's Lambda are introduced to test for association between multiple quantitative traits and multiple genetic variants. Simulation analysis is performed to evaluate false-positive rates and power of the proposed tests. The proposed methods are applied to analyze lipid traits in eight European cohorts. It is shown that it is more advantageous to perform multivariate analysis than univariate analysis in general, and it is more advantageous to perform meta-analysis of multiple studies instead of analyzing the individual studies separately. The proposed models require individual observations. The value of the current paper can be seen at least for two reasons: (a) the proposed methods can be applied to studies that have individual genotype data; (b) the proposed methods can be used as a criterion for future work that uses summary statistics to build test statistics to meta-analyze the data. PMID:28000696

  20. Meta-analysis of quantitative pleiotropic traits for next-generation sequencing with multivariate functional linear models.

    PubMed

    Chiu, Chi-Yang; Jung, Jeesun; Chen, Wei; Weeks, Daniel E; Ren, Haobo; Boehnke, Michael; Amos, Christopher I; Liu, Aiyi; Mills, James L; Ting Lee, Mei-Ling; Xiong, Momiao; Fan, Ruzong

    2017-02-01

    To analyze next-generation sequencing data, multivariate functional linear models are developed for a meta-analysis of multiple studies to connect genetic variant data to multiple quantitative traits adjusting for covariates. The goal is to take advantage of both meta-analysis and pleiotropic analysis in order to improve power and to carry out a unified association analysis of multiple studies and multiple traits of complex disorders. Three types of approximate F-distributions based on the Pillai-Bartlett trace, Hotelling-Lawley trace, and Wilks's Lambda are introduced to test for association between multiple quantitative traits and multiple genetic variants. Simulation analysis is performed to evaluate false-positive rates and power of the proposed tests. The proposed methods are applied to analyze lipid traits in eight European cohorts. It is shown that it is more advantageous to perform multivariate analysis than univariate analysis in general, and it is more advantageous to perform meta-analysis of multiple studies instead of analyzing the individual studies separately. The proposed models require individual observations. The value of the current paper can be seen for at least two reasons: (a) the proposed methods can be applied to studies that have individual genotype data; (b) the proposed methods can be used as a criterion for future work that uses summary statistics to build test statistics to meta-analyze the data.
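    The three multivariate test criteria named in this record are simple functions of the eigenvalues of E⁻¹H, where H and E are the hypothesis and error cross-product matrices; each criterion is then mapped to an approximate F-distribution. A pure-Python sketch of the criteria themselves (the eigenvalues below are illustrative, not from the paper):

```python
def manova_statistics(eigvals):
    """Classical MANOVA criteria from the eigenvalues of E^{-1}H:
    Pillai-Bartlett trace, Hotelling-Lawley trace, and Wilks's Lambda."""
    pillai = sum(l / (1 + l) for l in eigvals)   # sum of lambda/(1+lambda)
    hotelling = sum(eigvals)                      # sum of lambda
    wilks = 1.0
    for l in eigvals:
        wilks *= 1 / (1 + l)                      # product of 1/(1+lambda)
    return pillai, hotelling, wilks

# Illustrative eigenvalues of E^{-1}H:
pillai, hotelling, wilks = manova_statistics([0.9, 0.2])
```

    All three criteria agree on the null (all eigenvalues zero gives Pillai = Hotelling = 0, Wilks = 1) but weight large eigenvalues differently, which is why the paper evaluates all three.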

  1. Effects of exercise on c-reactive protein in healthy patients and in patients with heart disease: A meta-analysis.

    PubMed

    Hammonds, Tracy L; Gathright, Emily C; Goldstein, Carly M; Penn, Marc S; Hughes, Joel W

    2016-01-01

    Decreases in circulating hsCRP have been associated with increased physical activity and exercise training, although the ability of exercise interventions to reduce hsCRP and which individuals benefit the most remains unclear. This meta-analysis evaluates the ability of exercise to reduce hsCRP levels in healthy individuals and in individuals with heart disease. A systematic review and meta-analysis was conducted that included exercise intervention trials from 1995 to 2012. Forty-three studies were included in the final analysis for a total of 3575 participants. Exercise interventions significantly reduced hsCRP (standardized mean difference -0.53 mg/L; 95% CI, -0.74 to -0.33). Results of sub-analysis revealed no significant difference in reductions in hsCRP between healthy adults and those with heart disease (p = .20). Heterogeneity between studies could not be attributed to age, gender, intervention length, intervention type, or inclusion of diet modification. Exercise interventions reduced hsCRP levels in adults irrespective of the presence of heart disease. Copyright © 2016 Elsevier Inc. All rights reserved.
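    The pooled effect reported here (a standardized mean difference with a 95% CI) follows standard inverse-variance weighting. A minimal fixed-effect sketch with hypothetical per-study effects and variances (three made-up studies, not the 43 analyzed):

```python
import math

def pool_fixed(effects, variances):
    """Inverse-variance (fixed-effect) pooled estimate with a 95% CI."""
    weights = [1 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1 / sum(weights))  # SE of the pooled estimate
    return pooled, pooled - 1.96 * se, pooled + 1.96 * se

# Hypothetical SMDs and variances from three studies:
est, lo, hi = pool_fixed([-0.6, -0.4, -0.55], [0.02, 0.03, 0.025])
```

    A random-effects model (as commonly used when heterogeneity is present) would additionally inflate each variance by a between-study component before weighting.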

  2. A method for environmental acoustic analysis improvement based on individual evaluation of common sources in urban areas.

    PubMed

    López-Pacheco, María G; Sánchez-Fernández, Luis P; Molina-Lozano, Herón

    2014-01-15

    Noise levels of common sources such as vehicles, whistles, sirens, car horns and crowd sounds are mixed in urban soundscapes. Nowadays, environmental acoustic analysis is performed on mixture signals recorded by monitoring systems. These mixtures hinder the individual source analysis that is useful for taking actions to reduce and control environmental noise. This paper aims to separate individual noise sources from recorded mixtures in order to evaluate the noise level of each estimated source. A method based on blind deconvolution and blind source separation in the wavelet domain is proposed. This approach provides a basis for improving the results obtained in monitoring and analysis of common noise sources in urban areas. The method is validated through experiments based on knowledge of the predominant noise sources in urban soundscapes. Actual recordings of common noise sources are used to acquire mixture signals with a microphone array in semi-controlled environments. The developed method demonstrated substantial performance improvements in the identification, analysis and evaluation of common urban sources. © 2013 Elsevier B.V. All rights reserved.
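    Separation in the wavelet domain starts from a discrete wavelet transform of each recording. A one-level Haar decomposition is the simplest example; this sketch shows only the transform step, not the authors' full deconvolution and separation pipeline:

```python
import math

def haar_step(signal):
    """One level of the Haar DWT: pairwise sums (approximation) and
    differences (detail), scaled by 1/sqrt(2) to preserve energy.
    Assumes an even-length input."""
    s = 1 / math.sqrt(2)
    approx = [s * (signal[i] + signal[i + 1]) for i in range(0, len(signal), 2)]
    detail = [s * (signal[i] - signal[i + 1]) for i in range(0, len(signal), 2)]
    return approx, detail

a, d = haar_step([4.0, 2.0, 5.0, 5.0])
# The transform is orthogonal: total energy is unchanged.
```

    Working in this domain concentrates transient sources (horns, whistles) into a few large detail coefficients, which is what makes separation there attractive.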

  3. Implications of supermarket access, neighbourhood walkability and poverty rates for diabetes risk in an employee population.

    PubMed

    Herrick, Cynthia J; Yount, Byron W; Eyler, Amy A

    2016-08-01

    Diabetes is a growing public health problem, and the environment in which people live and work may affect diabetes risk. The goal of the present study was to examine the association between multiple aspects of environment and diabetes risk in an employee population. This was a retrospective cross-sectional analysis. Home environment variables were derived using employees' zip code. Descriptive statistics were run on all individual- and zip-code-level variables, stratified by diabetes risk and worksite. A multivariable logistic regression analysis was then conducted to determine the strongest associations with diabetes risk. Data were collected from employee health fairs in a Midwestern health system, 2009-2012. The data set contains 25,227 unique individuals across four years of data. From this group, using an individual's first entry into the database, 15,522 individuals had complete data for analysis. The prevalence of high diabetes risk in this population was 2.3%. There was significant variability in individual- and zip-code-level variables across worksites. From the multivariable analysis, living in a zip code with a higher percentage of poverty and higher walk score was positively associated with high diabetes risk, while living in a zip code with higher supermarket density was associated with a reduction in high diabetes risk. Our study underscores the important relationship between poverty, home neighbourhood environment and diabetes risk, even in a relatively healthy employed population, and suggests a role for the employer in promoting health.

  4. CloudMan as a platform for tool, data, and analysis distribution.

    PubMed

    Afgan, Enis; Chapman, Brad; Taylor, James

    2012-11-27

    Cloud computing provides an infrastructure that facilitates large scale computational analysis in a scalable, democratized fashion. However, in this context it is difficult to ensure sharing of an analysis environment and associated data in a scalable and precisely reproducible way. CloudMan (usecloudman.org) enables individual researchers to easily deploy, customize, and share their entire cloud analysis environment, including data, tools, and configurations. With the enabled customization and sharing of instances, CloudMan can be used as a platform for collaboration. The presented solution improves accessibility of cloud resources, tools, and data to the level of an individual researcher and contributes toward reproducibility and transparency of research solutions.

  5. Identifying and Analyzing Preferences for the Next Decade of Astrophysics

    NASA Astrophysics Data System (ADS)

    Mesmer, Bryan; Weger, Kristin

    2018-06-01

    The Decadal Survey is conducted by the United States National Academies and summarizes the opinions of individuals in the astronomy community, used to recommend the next decade of prioritized astrophysics missions and activities. From a systems engineering and psychology perspective, the Decadal Survey process is interesting due to the large and diverse community being sampled, the diversity of preferences, and the group interactions that result in a common voice. When preparing input to be reviewed in such a process, it is important to recognize and understand both individual factors and group factors. By understanding these dynamics it is possible to better predict the likely outcome. This research looks to better understand the preferences of the astronomy community as they relate to the coming decade. Preferences are the desires held by an individual. Along with beliefs and alternatives, preferences are one of the three elements necessary to make a decision, according to normative decision analysis. Hence, by understanding preferences, and making assumptions about beliefs and available alternatives, one can determine what decision an individual ought to make through normative decision analysis. Due to the community focus of the Decadal Survey, it is important to understand the interactions of individuals that result in a group outcome. This is where game theory is an effective tool, enabling the mathematical analysis of interacting individuals. Before any analysis is performed, preferences must be captured and mathematically represented through value models, which is precisely what this research examines. This iPoster is associated with a questionnaire to better understand the preferences of individuals. The questionnaire will be promoted through the iPoster as well as by the authors at the conference. The questionnaire will attempt to gather data to enable the formation of value functions, resulting in a better understanding of the community's preferences.
The research is applicable to a wide range of similar community-driven recommendations, such as NSF proposal reviews.

  6. Lunar mission safety and rescue: Hazards analysis and safety requirements

    NASA Technical Reports Server (NTRS)

    1971-01-01

    The results are presented of the hazards analysis which was concerned only with hazards to personnel and not with loss of equipment or property. Hazards characterization includes the definition of a hazard, the hazard levels, and the hazard groups. The analysis methodology is described in detail. The methodology was used to prepare the top level functional flow diagrams, to perform the first level hazards assessment, and to develop a list of conditions and situations requiring individual hazard studies. The 39 individual hazard study results are presented in total.

  7. Genetic analysis of a four generation Indian family with Usher syndrome: a novel insertion mutation in MYO7A.

    PubMed

    Kumar, Arun; Babu, Mohan; Kimberling, William J; Venkatesh, Conjeevaram P

    2004-11-24

    Usher syndrome (USH) is a rare autosomal recessive disorder characterized by deafness and retinitis pigmentosa. The purpose of this study was to determine the genetic cause of USH in a four generation Indian family. Peripheral blood samples were collected from individuals for genomic DNA isolation. To determine the linkage of this family to known USH loci, microsatellite markers were selected from the candidate regions of known loci and used to genotype the family. Exon specific intronic primers for the MYO7A gene were used to amplify DNA samples from one affected individual from the family. PCR products were subsequently sequenced to detect the mutation. PCR-SSCP analysis was used to determine that the mutation segregated with the disease in the family and was not present in 50 control individuals. All affected individuals had a classic USH type I (USH1) phenotype which included deafness, vestibular dysfunction and retinitis pigmentosa. Pedigree analysis suggested an autosomal recessive mode of inheritance of USH in the family. Haplotype analysis suggested linkage of this family to the USH1B locus on chromosome 11q. DNA sequence analysis of the entire coding region of the MYO7A gene showed a novel insertion mutation c.2663_2664insA in a homozygous state in all affected individuals, resulting in truncation of the MYO7A protein. This is the first study from India which reports a novel MYO7A insertion mutation in a four generation USH family. The mutation is predicted to produce a truncated MYO7A protein. With the novel mutation reported here, the total number of USH-causing mutations in the MYO7A gene described to date reaches 75.

  8. Individual and contextual factors associated to the self-perception of oral health in Brazilian adults.

    PubMed

    Silva, Janmille Valdivino da; Oliveira, Angelo Giuseppe Roncalli da Costa

    2018-04-09

    To analyze how individual characteristics and the social context, together, are associated with self-perception of oral health. A multilevel cross-sectional study with data from the Brazilian National Health Survey 2013, the United Nations Development Program, and the National Registry of Health Establishments. The explanatory variables for the "oral health perception" outcome were grouped, according to the study framework, into biological characteristics (sex, color, age), proximal social determinants (literacy, household crowding, and socioeconomic stratification), and distal social determinants (expected years of schooling at age 18, Gini index, Human Development Index, and per capita income). Descriptive analysis was performed, along with bivariate and multilevel Poisson analyses, for the construction of the explanatory model of oral health perception. All analyses considered the sample weights. Both the biological characteristics and the proximal and distal social determinants were associated with the perception of oral health in the bivariate analysis. A higher prevalence of poor oral health was associated with fewer expected years of schooling (PR = 1.31), lower per capita income (PR = 1.45), higher income concentration (PR = 1.41), and worse human development (PR = 1.45). Inversely, oral health services in both primary and secondary care were negatively associated with poor oral health perception. All the biological and individual social characteristics, except literacy, made up the final explanatory model along with the distal social determinants of the Human Development Index and coverage of primary care in the multilevel analysis. Biological factors and individual and contextual social determinants were associated synergistically with the population's perception of oral health. It is necessary to improve individual living conditions and implement public social policies to improve the oral health of the population.
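    The prevalence ratios (PR) reported in this record compare the prevalence of poor self-perceived oral health between exposure groups; for a single binary exposure, the Poisson-model PR reduces to a ratio of proportions. A sketch with hypothetical counts chosen to reproduce a PR of 1.45, like the per-capita-income result (the counts themselves are not from the study):

```python
def prevalence_ratio(cases_exposed, n_exposed, cases_unexposed, n_unexposed):
    """PR = prevalence in the exposed group / prevalence in the unexposed group."""
    return (cases_exposed / n_exposed) / (cases_unexposed / n_unexposed)

# Hypothetical: 29% prevalence of poor oral health among lower-income
# respondents vs. 20% among higher-income respondents.
pr = prevalence_ratio(29, 100, 20, 100)
```

    Unlike an odds ratio, the PR stays interpretable when the outcome is common, which is why Poisson (rather than logistic) regression is used for prevalence studies like this one.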

  9. Analysis of prototypical narratives produced by aphasic individuals and cognitively healthy subjects

    PubMed Central

    Silveira, Gabriela; Mansur, Letícia Lessa

    2015-01-01

    Aphasia can globally or selectively affect comprehension and production of verbal and written language. Discourse analysis can aid language assessment and diagnosis. Objective: [1] To explore narratives that produce a number of valid indicators for diagnosing aphasia in speakers of Brazilian Portuguese. [2] To analyze the macrostructural aspects of the discourse of normal individuals. [3] To analyze the macrostructural aspects of the discourse of aphasic individuals. Methods: The macrostructural aspects of three narratives produced by aphasic individuals and cognitively healthy subjects were analyzed. Results: A total of 30 volunteers were examined, comprising 10 aphasic individuals (AG) and 20 healthy controls (CG). The CG included 5 males. The CG had a mean age of 38.9 years (SD=15.61) and mean schooling of 13 years (SD=2.67), whereas the AG had a mean age of 51.7 years (SD=17.3) and mean schooling of 9.1 years (SD=3.69). Participants were asked to narrate three fairy tales as a basis for analyzing the macrostructure of discourse. Comparison of the three narratives revealed no statistically significant difference in the number of propositions produced by the groups. A significant negative correlation was found between age and number of propositions produced. Also, statistically significant differences were observed in the number of propositions produced by the individuals in the CG and the AG for the three tales. Conclusion: It was concluded that the three tales are applicable for discourse assessment, containing a similar number of propositions and differentiating aphasic individuals from cognitively healthy subjects based on analysis of the macrostructure of discourse. PMID:29213973

  10. Understanding Individual Behavior: 6427.02.

    ERIC Educational Resources Information Center

    Norris, Jack A., Jr.

    A course was designed to study the causes of individual behavior through an analysis of the factors involved in the development of personality, with special emphasis on the individual's perception and unique response to his environment. The course is based on the premise that the learner will investigate how and why people behave and then attempt…

  11. Individual Autonomy, Law, and Technology: Should Soft Determinism Guide Legal Analysis?

    ERIC Educational Resources Information Center

    Cockfield, Arthur J.

    2010-01-01

    How one thinks about the relationship between individual autonomy (sometimes referred to as individual willpower or human agency) and technology can influence the way legal thinkers develop policy at the intersection of law and technology. Perspectives that fall toward the "machines control us" end of the spectrum may support more interventionist…

  12. A UTILITY THEORY OF OLD AGE.

    ERIC Educational Resources Information Center

    HAMLIN, ROY M.

    Herzberg's job satisfaction model serves as the basis for an analysis of old age. The pattern varies among individuals, but the capacity for organized behavior rather than random stress reduction supplies each individual with a task. The hypothesis is that if the older individual realizes utility in his years beyond 70, he will retain competence…

  13. An Implementation of the Action Space Concept in Behavioral Analysis.

    ERIC Educational Resources Information Center

    Higgs, Gary K.

    The Contact Action Space (CAS) of an individual, or group of individuals, has a significant impact on the location of activities and the organization of the use of space. Beginning with the most basic components of a CAS, the individual behavior pattern element is developed, and operational variations affecting alignment and configuration are…

  14. A Comparative Analysis of Individualism and Collectivism in Post-Industrial America and Post-Revolutionary China.

    ERIC Educational Resources Information Center

    Kraft, Richard J.

    Collectivist versus individualistic attitudes in China and the United States are compared with particular emphasis on the effects of these attitudes on educational objectives and practice in China. Individualism is interpreted to include attitudes such as personal liberty, individual initiative, moral relativism, and self-direction. This…

  15. Standing Postural Control in Individuals with Autism Spectrum Disorder: Systematic Review and Meta-Analysis

    ERIC Educational Resources Information Center

    Lim, Yi Huey; Partridge, Katie; Girdler, Sonya; Morris, Susan L.

    2017-01-01

    Impairments in postural control affect the development of motor and social skills in individuals with autism spectrum disorder (ASD). This review compared the effect of different sensory conditions on static standing postural control between ASD and neurotypical individuals. Results from 19 studies indicated a large difference in postural control…

  16. Intra- and Inter-Individual Variation in Self-Reported Code-Switching Patterns of Adult Multilinguals

    ERIC Educational Resources Information Center

    Dewaele, Jean-Marc; Li, Wei

    2014-01-01

    The present study is a large-scale quantitative analysis of intra-individual variation (linked to type of interlocutor) and inter-individual variation (linked to multilingualism, sociobiographical variables and three personality traits) in self-reported frequency of code-switching (CS) among 2116 multilinguals. We found a significant effect of…

  17. Parental Attachment, Separation-Individuation, and College Student Adjustment: A Structural Equation Analysis of Mediational Effects

    ERIC Educational Resources Information Center

    Mattanah, Jonathan F.; Hancock, Gregory R.; Brand, Bethany L.

    2004-01-01

    Secure parental attachment and healthy levels of separation-individuation have been consistently linked to greater college student adjustment. The present study proposes that the relation between parental attachment and college adjustment is mediated by healthy separation-individuation. The authors gathered data on maternal and paternal…

  18. The effects of context on multidimensional spatial cognitive models. Ph.D. Thesis - Arizona Univ.

    NASA Technical Reports Server (NTRS)

    Dupnick, E. G.

    1979-01-01

    Spatial cognitive models obtained by multidimensional scaling represent cognitive structure by defining alternatives as points in a coordinate space based on relevant dimensions such that interstimulus dissimilarities perceived by the individual correspond to distances between the respective alternatives. The dependence of spatial models on the context of the judgments required of the individual was investigated. Context, which is defined as a perceptual interpretation and cognitive understanding of a judgment situation, was analyzed and classified with respect to five characteristics: physical environment, social environment, task definition, individual perspective, and temporal setting. Four experiments designed to produce changes in the characteristics of context and to test the effects of these changes upon individual cognitive spaces are described with focus on experiment design, objectives, statistical analysis, results, and conclusions. The hypothesis is advanced that an individual can be characterized as having a master cognitive space for a set of alternatives. When the context changes, the individual appears to change the dimension weights to give a new spatial configuration. Factor analysis was used in the interpretation and labeling of cognitive space dimensions.

  19. The stability of individual patterns of autonomic responses to motion sickness stimulation

    NASA Technical Reports Server (NTRS)

    Cowings, Patricia S.; Toscano, William B.; Naifeh, Karen H.

    1990-01-01

    As part of a program to develop a treatment for motion sickness based on self-regulation of autonomic nervous system (ANS) activity, this study examined the stability of an individual's pattern of ANS responses to motion sickness stimulation on repeated occasions. Motion sickness symptoms were induced in 58 people during two rotating chair tests. Physiological responses measured were heart rate, finger pulse volume, respiration rate, and skin conductance. Using standard scores, the stability of responses of specific magnitudes across both tests was examined. Correlational analyses, analysis of variance, and a components-of-variance analysis all revealed marked, but quite stable, individual differences in ANS responses to both mild and severe motion sickness. These findings confirm the prior observation that people are sufficiently unique in their ANS responses to motion sickness provocation to make it necessary to individually tailor self-regulation training. Further, these data support the contention that individual ANS patterns are sufficiently consistent from test to test to serve as an objective indicator of individual motion sickness malaise levels.

  20. RNA-Seq Alignment to Individualized Genomes Improves Transcript Abundance Estimates in Multiparent Populations

    PubMed Central

    Munger, Steven C.; Raghupathy, Narayanan; Choi, Kwangbom; Simons, Allen K.; Gatti, Daniel M.; Hinerfeld, Douglas A.; Svenson, Karen L.; Keller, Mark P.; Attie, Alan D.; Hibbs, Matthew A.; Graber, Joel H.; Chesler, Elissa J.; Churchill, Gary A.

    2014-01-01

    Massively parallel RNA sequencing (RNA-seq) has yielded a wealth of new insights into transcriptional regulation. A first step in the analysis of RNA-seq data is the alignment of short sequence reads to a common reference genome or transcriptome. Genetic variants that distinguish individual genomes from the reference sequence can cause reads to be misaligned, resulting in biased estimates of transcript abundance. Fine-tuning of read alignment algorithms does not correct this problem. We have developed Seqnature software to construct individualized diploid genomes and transcriptomes for multiparent populations and have implemented a complete analysis pipeline that incorporates other existing software tools. We demonstrate in simulated and real data sets that alignment to individualized transcriptomes increases read mapping accuracy, improves estimation of transcript abundance, and enables the direct estimation of allele-specific expression. Moreover, when applied to expression QTL mapping we find that our individualized alignment strategy corrects false-positive linkage signals and unmasks hidden associations. We recommend the use of individualized diploid genomes over reference sequence alignment for all applications of high-throughput sequencing technology in genetically diverse populations. PMID:25236449

  1. White Matter Integrity Deficit Associated with Betel Quid Dependence.

    PubMed

    Yuan, Fulai; Zhu, Xueling; Kong, Lingyu; Shen, Huaizhen; Liao, Weihua; Jiang, Canhua

    2017-01-01

    Betel quid (BQ) is a commonly consumed psychoactive substance, which has been regarded as a human carcinogen. Long-term BQ chewing may cause Diagnostic and Statistical Manual of Mental Disorders-IV dependence symptoms, which can lead to decreased cognitive functions, such as attention and inhibition control. Although betel quid dependence (BQD) individuals have been reported with altered brain structure and function, there is little evidence showing white matter microstructure alteration in BQD individuals. The present study aimed to investigate altered white matter microstructure in BQD individuals using diffusion tensor imaging. Tract-based spatial statistics was used to analyze the data. Compared with healthy controls, BQD individuals exhibited higher mean diffusivity (MD) in the anterior thalamic radiation (ATR). Further analysis revealed that the ATR in BQD individuals showed lower fractional anisotropy (FA) than that in healthy controls. Correlation analysis showed that both the increase of MD and the reduction of FA in BQD individuals were associated with severity of BQ dependence. These results suggested that BQD may disrupt the balance between the prefrontal cortex and subcortical areas, causing declined inhibition control.

  2. Salmonella testing of pooled pre-enrichment broth cultures for screening multiple food samples.

    PubMed

    Price, W R; Olsen, R A; Hunter, J E

    1972-04-01

    A method has been described for testing multiple food samples for Salmonella without loss in sensitivity. The method pools multiple pre-enrichment broth cultures into single enrichment broths. The subsequent stages of the Salmonella analysis are not altered. The method was found applicable to several dry food materials including nonfat dry milk, dried egg albumin, cocoa, cottonseed flour, wheat flour, and shredded coconut. As many as 25 pre-enrichment broth cultures were pooled without apparent loss in the sensitivity of Salmonella detection as compared to individual sample analysis. The procedure offers a simple, yet effective, way to increase sample capacity in the Salmonella testing of foods, particularly where a large proportion of samples ordinarily is negative. It also permits small portions of pre-enrichment broth cultures to be retained for subsequent individual analysis if positive tests are found. Salmonella testing of pooled pre-enrichment broths provides increased consumer protection for a given amount of analytical effort as compared to individual sample analysis.
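    The efficiency gain from pooling follows Dorfman-style two-stage arithmetic: a pool of n independent samples tests positive with probability 1 − (1 − p)ⁿ when each sample is positive with prevalence p, and positive pools trigger individual retests. A sketch with an illustrative prevalence (the abstract reports no prevalence figure):

```python
def expected_tests_per_sample(p, n):
    """Dorfman two-stage pooling: one pooled test per n samples, plus
    n individual retests whenever the pool is positive. Returns the
    expected number of tests per sample."""
    pool_positive = 1 - (1 - p) ** n  # P(at least one positive in the pool)
    return 1 / n + pool_positive

# With 1% prevalence and pools of 25 (the paper's maximum pool size),
# far fewer than one test per sample is needed on average:
rate = expected_tests_per_sample(0.01, 25)
```

    This is why pooling pays off exactly when "a large proportion of samples ordinarily is negative", as the abstract notes; at high prevalence most pools retest and the advantage disappears.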

  3. Friendship Group Composition and Juvenile Institutional Misconduct.

    PubMed

    Reid, Shannon E

    2017-02-01

    The present study examines both the patterns of friendship networks and how these network characteristics relate to the risk factors of institutional misconduct for incarcerated youth. Using friendship networks collected from males incarcerated with California's Division of Juvenile Justice (DJJ), latent profile analysis was utilized to create homogeneous groups of friendship patterns based on alter attributes and network structure. The incarcerated youth provided 144 egocentric networks reporting 558 social network relationships. Latent profile analysis identified three network profiles: expected group (67%), new breed group (20%), and model citizen group (13%). The three network profiles were integrated into a multiple group analysis framework to examine the relative influence of individual-level risk factors on their rate of institutional misconduct. The analysis finds variation in predictors of institutional misconduct across profile types. These findings suggest that the close friendships of incarcerated youth are patterned across the individual characteristics of the youth's friends and that the friendship network can act as a moderator for individual risk factors for institutional misconduct.

  4. I feel like a scrambled egg in my head: an idiographic case study of meaning making and anger using interpretative phenomenological analysis.

    PubMed

    Eatough, Virginia; Smith, Jonathan A

    2006-03-01

    What does it feel like when one's meaning making is impoverished and threatens to break down? The aim of this study is to show how meaning making is achieved in the context of one's life and how this achievement is often a struggle for the individual. The study reports data from semi-structured interviews with a female participant, which were analysed using interpretative phenomenological analysis (IPA). This paper examines how cultural discourses and conventions are experienced and given meaning by the individual. First, the analysis demonstrates how dominant discourses are used to explain anger and aggression. These include hormones, alcohol, and the influence of past relationships on present action. Second, it examines how the participant's meaning making is often ambiguous and confused, and how she variously accepts and challenges available meanings. Finally, the analysis demonstrates how meaning making can break down and the consequences of this for the individual's sense of self.

  5. Analysis of biodiesel by high performance liquid chromatography using refractive index detector.

    PubMed

    Syed, Mahin Basha

    2017-01-01

    High-performance liquid chromatography (HPLC) was used to determine the compounds occurring during the production of biodiesel from karanja and jatropha oil. Methanol was used for fast monitoring of the conversion of karanja and jatropha oil triacylglycerols to fatty acid methyl esters and for quantitation of residual triacylglycerols (TGs) in the final biodiesel product. The individual sample compounds were identified using HPLC. Fatty acid methyl esters (FAMEs) in biodiesel blends were analyzed by HPLC using a refractive index detector and a UV detector at 238 nm. Individual triacylglycerols, diacylglycerols, monoacylglycerols, methyl esters of oleic, linoleic and linolenic acids, and free fatty acids were separated within 40 min. Hence HPLC with refractive index detection was found to be well suited for the analysis of biodiesel and for estimating the amount of FAMEs in biodiesel.

  6. Sensorimotor Control in Individuals With Idiopathic Neck Pain and Healthy Individuals: A Systematic Review and Meta-Analysis.

    PubMed

    de Zoete, Rutger M J; Osmotherly, Peter G; Rivett, Darren A; Farrell, Scott F; Snodgrass, Suzanne J

    2017-06-01

    (1) To identify reported tests used to assess sensorimotor control in individuals with idiopathic neck pain and (2) to investigate whether these tests can quantify differences between individuals with idiopathic neck pain and healthy individuals. Allied and Complementary Medicine Database, CINAHL, Cochrane Central Register of Controlled Trials, Embase, MEDLINE, Physiotherapy Evidence Database, Scopus, and SPORTDiscus. Studies reporting sensorimotor outcomes in individuals with idiopathic neck pain or healthy individuals were identified. There were 1,677 records screened independently by 2 researchers for eligibility: 43 studies were included in the review, with 30 of these studies included in the meta-analysis. Methodologic quality was determined using the Quality Assessment Tool for Observational Cohort and Cross-Sectional Studies. Data were extracted using a standardized extraction table. Sensorimotor control was most commonly assessed by joint position error and postural sway. Pooled means for joint position error after cervical rotation in individuals with neck pain (range, 2.2°-9.8°) differed significantly (P=.04) compared with healthy individuals (range, 1.66°-5.1°). Postural sway with eyes open ranged from 4.85 to 10.5 cm² (neck pain) and 3.5 to 6.6 cm² (healthy) (P=.16), and postural sway with eyes closed ranged from 2.51 to 16.6 cm² (neck pain) and 2.74 to 10.9 cm² (healthy) (P=.30). Individual studies, but not the meta-analysis, demonstrated differences between neck pain and healthy groups for postural sway. Other test conditions and other tests were not sufficiently investigated to enable pooling of data. The findings from this review suggest sensorimotor control testing may be clinically useful in individuals with idiopathic neck pain. 
However, results should be interpreted with caution because clinical differences were small; therefore, further cross-sectional research with larger samples is needed to determine the magnitude of the relation between sensorimotor control and pain and to assess any potential clinical significance. Copyright © 2016 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.

  7. "Our commonality is our past:" a qualitative analysis of re-entry community health workers' meaningful experiences.

    PubMed

    Bedell, Precious; Wilson, John L; White, Ann Marie; Morse, Diane S

    Re-entry community health workers (CHWs) are individuals who connect diverse community residents at risk for chronic health issues such as Hepatitis C virus and cardiovascular disease with post-prison healthcare and re-entry services. While the utilization of CHWs has been documented in other marginalized populations, there is little knowledge surrounding the work of re-entry CHWs with individuals released from incarceration. Specifically, CHWs' experiences and perceptions of the uniqueness of their efforts to link individuals to healthcare have not been documented systematically. This study explored what is meaningful to formerly incarcerated CHWs as they work with released individuals. The authors conducted a qualitative thematic analysis of twelve meaningful experiences written by re-entry CHWs employed by the Transitions Clinic Network who attended a CHW training program during a conference in San Francisco, CA. Study participants were encouraged to recount meaningful CHW experiences and motivations for working with re-entry populations in a manner consistent with journal-based qualitative analysis techniques. Narratives were coded using an iterative process and subsequently organized according to themes in ATLAS.ti. Study personnel reached consensus on coding and major themes. The narratives highlighted thought processes and meaning related to re-entry CHWs' work helping patients navigate complex social services for successful re-integration. Six major themes emerged from the analysis: advocacy and support, empathy relating to a personal history of incarceration, giving back, professional satisfaction and responsibilities, resiliency and educational advancement, and experiences of social inequities related to race. Re-entry CHWs described former incarceration, employment, and social justice as sources of meaning for helping justice-involved individuals receive effective, efficient, and high-quality healthcare. 
Health clinics for individuals released from incarceration provide a unique setting that links high risk patients to needed care and professionalizes career opportunities for formerly incarcerated re-entry CHWs. The commonality of past correctional involvement is a strong indicator of the meaning and perceived effectiveness re-entry CHWs find in working with individuals leaving prison. Expansion of reimbursable visits with re-entry CHWs in transitions clinics designed for re-entering individuals is worthy of further consideration.

  8. Prevalence of antisocial personality disorder among Chinese individuals receiving treatment for heroin dependence: a meta-analysis

    PubMed Central

    ZHONG, Baoliang; XIANG, Yutao; CAO, Xiaolan; LI, Yan; ZHU, Junhong; CHIU, Helen F. K.

    2014-01-01

    Background Studies from Western countries consistently report very high rates of comorbid Antisocial Personality Disorder (ASPD) among individuals with heroin addiction, but the reported proportion of Chinese individuals with heroin addiction who have co-morbid ASPD varies widely, possibly because Chinese clinicians do not consider personality issues when treating substance abuse problems. Aim To conduct a meta-analysis of studies that assessed the proportion of Chinese individuals with heroin dependence who have comorbid ASPD. Methods We searched for relevant studies in both Chinese databases (China National Knowledge Infrastructure, Wanfang Data Knowledge Service Platform, Taiwan Electronic Periodical Services) and western databases (PubMed, EMBASE, and PsycInfo). Two authors independently retrieved the literature, identified studies that met pre-defined inclusion and exclusion criteria, assessed the quality of included studies, and extracted the data used in the analysis. Statistical analysis was performed using StatsDirect 3.0 and R software. Results The search yielded 15 eligible studies with a total of 3692 individuals with heroin dependence. Only 2 of the studies were rated as high-quality studies. All studies were conducted in rehabilitation centers or hospitals. The pooled lifetime prevalence of ASPD in these subjects was 30% (95% CI: 23%-38%), but the heterogeneity of results across studies was great (I² = 95%, p<0.001). Men had a higher prevalence than women (44% vs. 21%), and injection heroin users had a higher prevalence than those who smoked heroin (44% vs. 27%). Studies that were methodologically stronger had higher reported prevalence of ASPD among heroin dependent individuals. Conclusions There are substantial methodological problems in the available literature about ASPD in Chinese individuals receiving treatment for heroin dependence, but we estimate that about one-third of them meet criteria for ASPD. 
Further work is needed to increase clinicians’ awareness of this issue; to compare the pathogenesis, treatment responsiveness and recidivism of those with and without ASPD; and to develop and test targeted interventions for this difficult-to-treat subgroup of individuals with heroin dependence. PMID:25477719
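
    The pooled prevalence above was computed in StatsDirect and R. As a rough illustration only (not the study's actual code), inverse-variance pooling of logit-transformed proportions, together with Cochran's Q and Higgins' I² for the between-study heterogeneity the authors report, can be sketched in Python; the function name and example counts are hypothetical:

```python
import math

def pooled_prevalence(events, totals):
    """Inverse-variance pooling of proportions on the logit scale,
    plus Cochran's Q and Higgins' I^2 for between-study heterogeneity."""
    logits, weights = [], []
    for e, n in zip(events, totals):
        p = e / n
        logits.append(math.log(p / (1 - p)))
        weights.append(1 / (1 / e + 1 / (n - e)))  # 1 / var(logit)
    w_sum = sum(weights)
    pooled_logit = sum(w * x for w, x in zip(weights, logits)) / w_sum
    q = sum(w * (x - pooled_logit) ** 2 for w, x in zip(weights, logits))
    df = len(logits) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0  # Higgins' I^2, in %
    pooled_p = 1 / (1 + math.exp(-pooled_logit))  # back-transform to a proportion
    return pooled_p, i2
```

    With identical study proportions the heterogeneity collapses to I² = 0; a very high I² such as the 95% reported here signals that a random-effects model, which additionally estimates a between-study variance, is the appropriate choice.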

  9. Somatotype of the individuals with lower extremity amputation and its association with cardiovascular risk.

    PubMed

    Mozumdar, Arupendra; Roy, Subrata K

    2008-03-01

    Anthropometric somatotyping is one of the methods to describe the shape of the human body, which shows some associations with an individual's health and disease condition, especially with cardiovascular diseases (CVD). Individuals with lower extremity amputation (LEA) are known to be more vulnerable to cardiovascular risk. The objectives of the present study are to report the somatotype of individuals having lower extremity amputation, to study the possible variation in somatotype between two groups of amputated individuals, and to study the association between cardiovascular disease risk factors and somatotype components among individuals with locomotor disability. A total of 102 adult male individuals with unilateral lower-extremity amputation residing in Calcutta and adjoining areas were investigated. The anthropometric data for somatotyping and data on cardiovascular risk traits (such as body mass index, blood pressure measurements, blood lipids) were collected. The somatotyping technique of Carter & Heath (1990) was followed. The results show high mean values of the endomorphy and mesomorphy components and a low mean value of the ectomorphy component among the amputated individuals having cardiovascular risks. The results of both discriminant analysis and logistic regression analysis show a significant relationship between somatotype components and CVD risk among the individuals with LEA. The findings of the present study support the findings of similar studies conducted on the normal population. Diagnosis of CVD risk condition through somatotyping can be utilized in prevention/treatment management for individuals with LEA.

  10. Contribution of Forensic Anthropology to Identification Process in Croatia: Examples of Victims Recovered in Wells

    PubMed Central

    Šlaus, Mario; Strinović, Davor; Petrovečki, Vedrana; Vyroubal, Vlasta

    2007-01-01

    Aim To describe the contribution of forensic anthropology to the recovery, analysis, and identification of victims from the 1991-1995 war in Croatia recovered in wells. Methods From 1996 to the present, human remains of a total of 61 individuals have been recovered from 13 wells. Six wells contained the remains of a single individual, one well contained the remains of 2 individuals, and 6 wells contained the remains of 3 or more individuals. The majority of wells, containing 90.2% (55/61) of recovered individuals, were located within a 4 km radius of the Croatian-Serbian border. Results Forensic anthropologists re-individualized 26/61 (42.6%) individuals out of skeletonized and commingled remains, provided basic biological data on sex, age-at-death, and stature in all identifications (n = 37), as well as established positive identification by recognizing unique skeletal features (antemortem fractures and skeletal evidence of antemortem surgical interventions) in 3/37 (8.1%) cases. Trauma analyses carried out by forensic anthropologists contributed to the determination of the cause of death in 38/61 (62.3%) individuals and to the probable cause of death in an additional 18/61 (29.5%) individuals. The most frequent (27/38, 71.0%) type of trauma causing death in individuals recovered from wells was a single gunshot wound. Conclusion Forensic anthropologists, collaborating closely with forensic pathologists, forensic odontologists, forensic radiologists, criminologists, and molecular biologists, contributed significantly to trauma analysis and identification of war victims recovered from wells. PMID:17696305

  11. Ethnic Diversity, Inter-group Attitudes and Countervailing Pathways of Positive and Negative Inter-group Contact: An Analysis Across Workplaces and Neighbourhoods.

    PubMed

    Laurence, James; Schmid, Katharina; Hewstone, Miles

    2018-01-01

    This study advances the current literature investigating the relationship between contextual out-group exposure, inter-group attitudes and the role of inter-group contact. Firstly, it introduces the concept of contact-valence into this relationship; that is, whether contact is experienced positively or negatively. Secondly, it presents a comparative analysis of how processes of out-group exposure and frequency of (valenced) contact affect prejudice across both neighbourhoods and workplaces. Applying path analysis modelling to a nationally-representative sample of white British individuals in England, we demonstrate, across both contexts, that increasing out-group exposure is associated with higher rates of both positively- and negatively-valenced contact. This results in exposure exhibiting both positive and negative indirect associations with prejudice via more frequent inter-group mixing. These countervailing contact-pathways help explain how out-group exposure is associated with inter-group attitudes. In neighbourhoods, increasing numbers of individuals experiencing positive-contact suppress an otherwise negative effect of neighbourhood diversity (driven partly by increasing numbers of individuals reporting negative contact). Across workplaces the effect differs such that increasing numbers of individuals experiencing negative-contact suppress an otherwise positive effect of workplace diversity (driven largely by increasing numbers of individuals experiencing positive contact).

  12. Glucocorticoid-induced hyperglycaemia in respiratory disease: a systematic review and meta-analysis.

    PubMed

    Breakey, S; Sharp, S J; Adler, A I; Challis, B G

    2016-12-01

    The relative risk of glucocorticoid-induced hyperglycaemia is poorly quantified. We undertook a meta-analysis to estimate the association between glucocorticoid treatment and hyperglycaemia, overall and separately in individuals with and without diabetes and underlying respiratory disease. We searched electronic databases for clinical trials of adults randomized to either glucocorticoid treatment or placebo. Eight articles comprising 2121 participants were identified. We performed a random effects meta-analysis to determine relative risks for the associations between glucocorticoid use and both hyperglycaemia and starting hypoglycaemic therapy. In all individuals, the relative risk of hyperglycaemia comparing glucocorticoid treatment with placebo was 1.72 [95% confidence interval (CI) 1.50-2.04; p < .001]. The relative risks in individuals with and those without diabetes were 2.10 (95% CI 0.92-5.02; p = .079) and 1.50 (95% CI 0.79-2.86; p = .22), respectively. In all individuals, the relative risk of hyperglycaemia requiring initiation of hypoglycaemic therapy, comparing glucocorticoid treatment with placebo, was 1.73 (95% CI 1.40-2.14; p < .001). In conclusion, glucocorticoid therapy increases the risk of hyperglycaemia in all individuals with underlying respiratory disease but not when diabetic status is analysed separately. © 2016 The Authors. Diabetes, Obesity and Metabolism published by John Wiley & Sons Ltd.
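
    The abstract does not name its exact random-effects estimator; DerSimonian-Laird is the standard choice and can serve as an illustrative sketch of how relative risks like those above are pooled across trials. The function name and the example 2×2 counts below are hypothetical, not the study data:

```python
import math

def dersimonian_laird_rr(trials):
    """Random-effects (DerSimonian-Laird) pooled relative risk with a 95% CI.
    Each trial is (events_treated, n_treated, events_control, n_control)."""
    logs, variances = [], []
    for a, n1, c, n2 in trials:
        logs.append(math.log((a / n1) / (c / n2)))
        variances.append(1 / a - 1 / n1 + 1 / c - 1 / n2)  # var of log RR
    w = [1 / v for v in variances]
    fixed = sum(wi * li for wi, li in zip(w, logs)) / sum(w)
    q = sum(wi * (li - fixed) ** 2 for wi, li in zip(w, logs))  # Cochran's Q
    c_factor = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(trials) - 1)) / c_factor)  # between-study variance
    w_star = [1 / (v + tau2) for v in variances]          # random-effects weights
    pooled = sum(wi * li for wi, li in zip(w_star, logs)) / sum(w_star)
    se = math.sqrt(1 / sum(w_star))
    return (math.exp(pooled),
            math.exp(pooled - 1.96 * se),
            math.exp(pooled + 1.96 * se))
```

    With homogeneous trials the estimated between-study variance τ² is zero and the estimator reduces to fixed-effect inverse-variance pooling; heterogeneity inflates τ² and widens the confidence interval.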

  13. Influence of erroneous patient records on population pharmacokinetic modeling and individual bayesian estimation.

    PubMed

    van der Meer, Aize Franciscus; Touw, Daniël J; Marcus, Marco A E; Neef, Cornelis; Proost, Johannes H

    2012-10-01

    Observational data sets can be used for population pharmacokinetic (PK) modeling. However, these data sets are generally less precisely recorded than experimental data sets. This article aims to investigate the influence of erroneous records on population PK modeling and individual maximum a posteriori Bayesian (MAPB) estimation. A total of 1123 patient records of neonates who were administered vancomycin were used for population PK modeling by iterative 2-stage Bayesian (ITSB) analysis. Cut-off values for weighted residuals were tested for exclusion of records from the analysis. A simulation study was performed to assess the influence of erroneous records on population modeling and individual MAPB estimation. The cut-off values for weighted residuals were also tested in the simulation study. Errors in registration had limited influence on the outcomes of population PK modeling but could have detrimental effects on individual MAPB estimation. A population PK model created from a data set with many registration errors has little influence on subsequent MAPB estimates for precisely recorded data. A weighted residual value of 2 for concentration measurements has good discriminative power for identification of erroneous records. ITSB analysis and its individual estimates are hardly affected by most registration errors. Large registration errors can be detected by the weighted residuals of concentration.
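
    A minimal sketch of the weighted-residual screen described above, assuming a simple per-observation standard deviation from the residual error model (the ITSB software computes these internally; the function name and the numbers used here are hypothetical):

```python
def flag_suspect_records(observed, predicted, sds, cutoff=2.0):
    """Return (index, weighted residual) for concentration measurements whose
    absolute weighted residual exceeds the cutoff; the study found a cutoff
    of 2 discriminative for erroneous records."""
    suspect = []
    for i, (obs, pred, sd) in enumerate(zip(observed, predicted, sds)):
        wres = (obs - pred) / sd
        if abs(wres) > cutoff:
            suspect.append((i, wres))
    return suspect
```

    Records flagged this way would be reviewed (or excluded) before refitting the population model, rather than silently dropped.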

  14. Modeling of Individual and Organizational Factors Affecting Traumatic Occupational Injuries Based on the Structural Equation Modeling: A Case Study in Large Construction Industries

    PubMed Central

    Mohammadfam, Iraj; Soltanzadeh, Ahmad; Moghimbeigi, Abbas; Akbarzadeh, Mehdi

    2016-01-01

    Background Individual and organizational factors are factors influencing traumatic occupational injuries. Objectives The aim of the present study was the short path analysis of the severity of occupational injuries based on individual and organizational factors. Materials and Methods The present cross-sectional analytical study was implemented on traumatic occupational injuries within a ten-year timeframe in 13 large Iranian construction industries. Modeling and data analysis were done using the structural equation modeling (SEM) approach and the IBM SPSS AMOS statistical software version 22.0, respectively. Results The mean age and working experience of the injured workers were 28.03 ± 5.33 and 4.53 ± 3.82 years, respectively. Construction and installation activities accounted for 64.4% and 18.1% of the traumatic occupational injuries, respectively. The SEM findings showed that individual, organizational and accident-type factors were significant predictors of occupational injury severity (P < 0.05). Conclusions Path analysis of occupational injuries based on SEM reveals that individual and organizational factors and their indicator variables strongly influence the severity of traumatic occupational injuries. These factors should therefore be addressed to reduce the severity of occupational accidents in large construction industries. PMID:27800465

  15. Primary, Secondary, and Meta-Analysis of Research

    ERIC Educational Resources Information Center

    Glass, Gene V.

    1976-01-01

    Examines data analysis at three levels: primary analysis is the original analysis of data in a research study; secondary analysis is the re-analysis of data for the purpose of answering the original research question with better statistical techniques, or answering new questions with old data; and meta-analysis refers to the statistical analysis of many analysis results from individual studies for…

  16. Investigation of 15q11-q13, 16p11.2 and 22q13 CNVs in Autism Spectrum Disorder Brazilian Individuals with and without Epilepsy

    PubMed Central

    Moreira, Danielle P.; Griesi-Oliveira, Karina; Bossolani-Martins, Ana L.; Lourenço, Naila C. V.; Takahashi, Vanessa N. O.; da Rocha, Kátia M.; Moreira, Eloisa S.; Vadasz, Estevão; Meira, Joanna Goes Castro; Bertola, Debora; O’Halloran, Eoghan; Magalhães, Tiago R.; Fett-Conte, Agnes C.; Passos-Bueno, Maria Rita

    2014-01-01

    Copy number variations (CNVs) are an important cause of ASD and those located at 15q11-q13, 16p11.2 and 22q13 have been reported as the most frequent. These CNVs exhibit variable clinical expressivity and those at 15q11-q13 and 16p11.2 also show incomplete penetrance. In the present work, through multiplex ligation-dependent probe amplification (MLPA) analysis of 531 ethnically admixed ASD-affected Brazilian individuals, we found that the combined prevalence of the 15q11-q13, 16p11.2 and 22q13 CNVs is 2.1% (11/531). Parental origin could be determined in 8 of the affected individuals, and revealed that 4 of the CNVs represent de novo events. Based on CNV prediction analysis from genome-wide SNP arrays, the size of those CNVs ranged from 206 kb to 2.27 Mb and those at 15q11-q13 were limited to the 15q13.3 region. In addition, this analysis also revealed 6 additional CNVs in 5 out of 11 affected individuals. Finally, we observed that the combined prevalence of CNVs at 15q13.3 and 22q13 in ASD-affected individuals with epilepsy (6.4%) was higher than that in ASD-affected individuals without epilepsy (1.3%; p<0.014). Therefore, our data show that the prevalence of CNVs at 15q13.3, 16p11.2 and 22q13 in Brazilian ASD-affected individuals is comparable to that estimated for ASD-affected individuals of pure or predominant European ancestry. Also, it suggests that the likelihood of a greater number of positive MLPA results might be found for the 15q13.3 and 22q13 regions by prioritizing ASD-affected individuals with epilepsy. PMID:25255310

  17. Investigation of 15q11-q13, 16p11.2 and 22q13 CNVs in autism spectrum disorder Brazilian individuals with and without epilepsy.

    PubMed

    Moreira, Danielle P; Griesi-Oliveira, Karina; Bossolani-Martins, Ana L; Lourenço, Naila C V; Takahashi, Vanessa N O; da Rocha, Kátia M; Moreira, Eloisa S; Vadasz, Estevão; Meira, Joanna Goes Castro; Bertola, Debora; O'Halloran, Eoghan; Magalhães, Tiago R; Fett-Conte, Agnes C; Passos-Bueno, Maria Rita

    2014-01-01

    Copy number variations (CNVs) are an important cause of ASD and those located at 15q11-q13, 16p11.2 and 22q13 have been reported as the most frequent. These CNVs exhibit variable clinical expressivity and those at 15q11-q13 and 16p11.2 also show incomplete penetrance. In the present work, through multiplex ligation-dependent probe amplification (MLPA) analysis of 531 ethnically admixed ASD-affected Brazilian individuals, we found that the combined prevalence of the 15q11-q13, 16p11.2 and 22q13 CNVs is 2.1% (11/531). Parental origin could be determined in 8 of the affected individuals, and revealed that 4 of the CNVs represent de novo events. Based on CNV prediction analysis from genome-wide SNP arrays, the size of those CNVs ranged from 206 kb to 2.27 Mb and those at 15q11-q13 were limited to the 15q13.3 region. In addition, this analysis also revealed 6 additional CNVs in 5 out of 11 affected individuals. Finally, we observed that the combined prevalence of CNVs at 15q13.3 and 22q13 in ASD-affected individuals with epilepsy (6.4%) was higher than that in ASD-affected individuals without epilepsy (1.3%; p<0.014). Therefore, our data show that the prevalence of CNVs at 15q13.3, 16p11.2 and 22q13 in Brazilian ASD-affected individuals is comparable to that estimated for ASD-affected individuals of pure or predominant European ancestry. Also, it suggests that the likelihood of a greater number of positive MLPA results might be found for the 15q13.3 and 22q13 regions by prioritizing ASD-affected individuals with epilepsy.

  18. Analysis of density and epitopes of D antigen on the surface of erythrocytes from DEL phenotypic individuals carrying the RHD1227A allele.

    PubMed

    Gu, Juan; Sun, An-Yuan; Wang, Xue-Dong; Shao, Chao-Peng; Li, Zheng; Huang, Li-Hua; Pan, Zhao-Lin; Wang, Qing-Ping; Sun, Guang-Ming

    2014-04-01

    The characteristics of the D antigen are important as they influence the immunogenicity of D variant cells. Several studies on antigenic sites have been reported in normal D positive, weak D and partial D cases, including a comprehensive analysis of DEL types in Caucasians. The aim of this study was to assess D antigen density and epitopes on the erythrocyte surface of Asian-type DEL phenotypic individuals carrying the RHD1227A allele in the Chinese population. A total of 154 DEL phenotypic individuals carrying the RHD1227A allele were identified through adsorption and elution tests and polymerase chain reaction analysis with sequence-specific primers in the Chinese population. D antigen density on the erythrocyte surface of these individuals was detected using a flow cytometric method. An erythrocyte sample with known D antigen density was used as a standard. Blood samples from D-negative and D-positive individuals were used as controls. In addition, D antigen epitopes on the erythrocyte surface of DEL individuals carrying the RHD1227A allele were investigated with 18 monoclonal anti-D antibodies specific for different D antigen epitopes. The means of the median fluorescence intensity of D antigen on the erythrocyte membrane surface of D-negative, D-positive and DEL individuals were 2.14±0.25, 193.61±11.43 and 2.45±0.82, respectively. The DEL samples were estimated to have approximately 22 D antigens per cell. The samples from all 154 DEL individuals reacted positively with the 18 monoclonal anti-D antibodies specific for different D antigen epitopes. In this study, D antigen density on the erythrocyte surface of DEL individuals carrying the RHD1227A allele was extremely low, with only a few antigen molecules per cell, but the D antigen epitopes were grossly complete.
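
    The flow-cytometric density estimate described above amounts to linear interpolation of median fluorescence intensity (MFI) between a D-negative control and a calibration standard of known antigen density. A sketch under that assumption follows; the standard density of 13,500 antigens per cell is a hypothetical value chosen for illustration (with the reported MFIs it reproduces the roughly 22 antigens per cell found for DEL cells):

```python
def antigen_density(mfi_sample, mfi_negative, mfi_standard, standard_density):
    """Estimate antigen copies per cell by linear interpolation of median
    fluorescence intensity between a negative control and a calibration
    standard of known antigen density."""
    signal = mfi_sample - mfi_negative   # background-corrected sample signal
    span = mfi_standard - mfi_negative   # background-corrected standard signal
    return signal / span * standard_density

# Reported MFIs: D-negative 2.14, D-positive standard 193.61, DEL 2.45.
# Assuming a (hypothetical) standard density of 13,500 antigens per cell:
del_density = antigen_density(2.45, 2.14, 193.61, 13500)  # ~22 antigens per cell
```

    The tiny DEL signal sits barely above the D-negative background, which is why adsorption-elution testing, rather than routine serology, is needed to detect these cells at all.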

  19. Laser mass spectrometry for DNA fingerprinting for forensic applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, C.H.; Tang, K.; Taranenko, N.I.

    The application of DNA fingerprinting has become very broad in forensic analysis, patient identification, diagnostic medicine, and wildlife poaching, since every individual's DNA structure is identical within all tissues of their body. DNA fingerprinting was initiated by the use of restriction fragment length polymorphisms (RFLP). In 1987, Nakamura et al. found that a variable number of tandem repeats (VNTR) often occurred in the alleles. The probability of different individuals having the same number of tandem repeats in several different alleles is very low. Thus, the identification of VNTRs from genomic DNA became a very reliable method for the identification of individuals. DNA fingerprinting is a reliable tool for forensic analysis. In DNA fingerprinting, knowledge of the sequence of tandem repeats and restriction endonuclease sites can provide the basis for identification. The major steps for conventional DNA fingerprinting include (1) specimen processing, (2) amplification of selected DNA segments by PCR, and (3) gel electrophoresis to do the final DNA analysis. In this work we propose to use laser desorption mass spectrometry for fast DNA fingerprinting. The process and advantages are discussed.

  20. Early intervention services for psychosis and time until application for disability income support: a survival analysis.

    PubMed

    Krupa, Terry; Oyewumi, Kola; Archie, Suzanne; Lawson, J Stuart; Nandlal, Joan; Conrad, Gretchen

    2012-10-01

    Ensuring the financial security of individuals recovering from first episode psychosis is imperative, but disability income programs can be powerful disincentives to employment, compromising the social and occupational aspects of recovery. Survival analysis and Cox regression analysis were used to examine the rate at which individuals served by early intervention for psychosis (EIP) services apply for government disability income benefits, and the factors that predict the rate of application. Health records for 558 individuals served by EIP programs were reviewed. Within the first year of receiving services, 30% will make application for disability income; 60% will do so by 5 years. Rate of application is predicted by rate of hospital admission, financial status and engagement in productivity roles at the time of entry to EIP service. The findings suggest the need to examine the extent to which the recovery goals of EIP services are undermined by early application for government income support. They also suggest the need to develop best practice guidelines for ensuring the economic security of the individuals served.
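
    The time-to-application estimates above rest on the Kaplan-Meier product-limit estimator (the study used standard survival-analysis software; this pure-Python version is only an illustrative sketch, and the data below are hypothetical). The cumulative proportion who have applied by time t is one minus the survival estimate:

```python
def kaplan_meier(times, events):
    """Product-limit survival estimate.  events[i] is True if the subject
    applied (the event occurred) at times[i], False if censored."""
    data = sorted(zip(times, events))
    n = len(data)
    s = 1.0
    curve = []  # (event time, survival probability just after it)
    i = 0
    while i < n:
        t = data[i][0]
        at_risk = n - i        # subjects still under observation at time t
        d = 0                  # events occurring at exactly time t
        while i < n and data[i][0] == t:
            d += data[i][1]    # True counts as 1 event
            i += 1
        if d:
            s *= 1 - d / at_risk
            curve.append((t, s))
    return curve
```

    For example, four subjects who all apply at months 1 through 4 give survival estimates 0.75, 0.50, 0.25 and 0.0; a censored subject (lost to follow-up without applying) shrinks the risk set without triggering a drop in the curve.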
