Science.gov

Sample records for acceleration dsa method

  1. Distribution in energies and acceleration times in DSA, and their effect on the cut-off

    NASA Astrophysics Data System (ADS)

    Brooks, A.; Protheroe, R. J.

    2001-08-01

    We have conducted Monte Carlo simulations of diffusive shock acceleration (DSA) to determine the distribution of times since injection taken to reach energy E > E0. This distribution of acceleration times for the case of momentum-dependent diffusion is compared with that given by Drury and Forman (1983) based on extrapolation of the exact result (Toptygin 1980) for the case of a diffusion coefficient that is independent of momentum. On the basis of this distribution we find, as suggested by Drury et al. (1999), that Monte Carlo simulations produce smoother cut-offs and pile-ups in the spectra of accelerated particles than expected from simple "box model" treatments of shock acceleration (e.g., Protheroe and Stanev 1999, Drury et al. 1999). This is particularly so for the case of synchrotron pile-ups, which we find are replaced by a small bump at an energy about a factor of 2 below the expected cut-off, followed by a smooth cut-off with particles extending to energies well beyond the expected cut-off energy.
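
    For illustration, the kind of calculation described above can be sketched as a toy test-particle Monte Carlo in Python. This is not the authors' code: the per-cycle energy gain, the escape probability, and the energy-proportional cycle time (a Bohm-like, momentum-dependent diffusion stand-in) are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def dsa_acceleration_times(n_particles=20000, E0=1.0, E_target=100.0,
                           gain=0.1, p_escape=0.05):
    """Toy test-particle DSA: each shock-crossing cycle multiplies the energy by
    (1 + gain); the particle escapes downstream with probability p_escape.
    The cycle duration is taken proportional to E (Bohm-like diffusion), so the
    time since injection needed to reach E >= E_target has a distribution."""
    times = []
    for _ in range(n_particles):
        E, t = E0, 0.0
        while E < E_target:
            if rng.random() < p_escape:   # advected downstream before reaching E_target
                break
            t += E                        # cycle time ~ diffusion coefficient ~ E (arbitrary units)
            E *= 1.0 + gain
        else:                             # loop finished without break: particle reached E_target
            times.append(t)
    return np.asarray(times)

t_acc = dsa_acceleration_times()
print(f"{t_acc.size} of 20000 particles reached the target energy")
print(f"mean / median acceleration time: {t_acc.mean():.1f} / {np.median(t_acc):.1f} (arb. units)")
```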

  2. Effect of Material Homogeneity on the Performance of DSA for Even-Parity Sn Methods

    SciTech Connect

    Azmy, Y.Y.; Morel, J.; Wareing, T.

    1999-09-27

    A spectral analysis is conducted for the Source Iteration (SI), and Diffusion Synthetic Acceleration (DSA) operators previously formulated for solving the Even-Parity Method (EPM) equations. In order to accommodate material heterogeneity, the analysis is performed for the Periodic Horizontal Interface (PHI) configuration. The dependence of the spectral radius on the optical thickness of the two PHI layers illustrates the deterioration in the rate of convergence with increasing material discontinuity, especially when one of the layers approaches a void. The rate at which this deterioration occurs is determined for a specific material discontinuity in order to demonstrate the conditional robustness of the EPM-DSA iterations. The results of the analysis are put in perspective via numerical tests with the DANTE code (McGhee et al., 1997), which exhibits a deterioration in the spectral radius consistent with the theory.
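
    For orientation, the textbook flat-mode Fourier-analysis results for a homogeneous medium (not the heterogeneous PHI analysis summarized above) are often quoted as follows; the PHI configuration degrades these rates as the two layers become more dissimilar.

```latex
% Classical homogeneous-medium, flat-mode results (orientation only):
%   source iteration converges at the scattering ratio c, while a consistently
%   discretized DSA bounds the spectral radius by roughly 0.2247 c.
\rho_{\mathrm{SI}} = c, \qquad \rho_{\mathrm{DSA}} \lesssim 0.2247\,c,
\qquad c = \frac{\Sigma_s}{\Sigma_t}.
```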

  3. Final report on DSA methods for monitoring alumina in aluminum reduction cells with cermet anodes

    NASA Astrophysics Data System (ADS)

    Windisch, C. F., Jr.

    1992-04-01

    The Sensors Development Program was conducted at the Pacific Northwest Laboratory (PNL) for the US Department of Energy, Office of Industrial Processes. The work was performed in conjunction with the Inert Electrodes Program at PNL. The objective of the Sensors Development Program in FY 1990 through FY 1992 was to determine whether methods based on digital signal analysis (DSA) could be used to measure alumina concentration in aluminum reduction cells. Specifically, this work was performed to determine whether useful correlations exist between alumina concentration and various DSA-derived quantification parameters, calculated for current and voltage signals from laboratory and field aluminum reduction cells. If appropriate correlations could be found, then the quantification parameters might be used to monitor and, consequently, help control the alumina concentration in commercial reduction cells. The control of alumina concentration is especially important for cermet anodes, which have exhibited instability and excessive wear at alumina concentrations removed from saturation.

  4. A non-rigid registration method for cerebral DSA images based on forward and inverse stretching - avoiding bilinear interpolation.

    PubMed

    Liu, Bin; Zhang, Bingbing; Wan, Chao; Dong, Yihuan

    2014-01-01

    In order to reduce motion artifacts caused by patient movement in cerebral DSA images, a non-rigid registration method based on a stretching transformation is presented in this paper. Unlike traditional methods, it does not need bilinear interpolation, which is time-consuming and can even produce gray values that did not originally exist. In this method, the mask image is rasterized to generate appropriate control points. The Energy of Histogram of Differences criterion is adopted as the similarity measure, and the Powell algorithm is utilized for acceleration. A forward stretching transformation is used to complete motion estimation, and an inverse stretching transformation generates the target image through a pixel mapping strategy. This method is effective in maintaining the topological relationships of the gray values before and after image deformation. The mask image retains clear and accurate contours, and the quality of the subtraction image after registration is favorable. This method can provide support for clinical treatment and diagnosis of cerebral disease. PMID:24212008
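
    A minimal sketch of the similarity-driven registration idea, using SciPy's Powell optimizer and an energy-of-histogram-of-differences criterion, is given below. For brevity the paper's forward/inverse stretching transform is replaced by a plain translation, and the images and parameters are synthetic illustrations rather than the published method.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift
from scipy.optimize import minimize

def ehd_energy(mask, contrast, bins=64):
    """'Energy of the histogram of differences': sum of squared bin probabilities
    of the difference image. It is largest when the difference histogram is
    tightly clustered, i.e. when mask and contrast image are well aligned."""
    diff = contrast.astype(float) - mask.astype(float)
    hist, _ = np.histogram(diff, bins=bins)
    p = hist / hist.sum()
    return float(np.sum(p ** 2))

def register_translation(mask, contrast):
    """Find the mask translation that maximizes the EHD criterion with Powell's
    derivative-free method (the paper optimizes a stretching transform instead)."""
    cost = lambda s: -ehd_energy(nd_shift(mask, s, order=1), contrast)
    return minimize(cost, x0=[0.0, 0.0], method="Powell").x

# Toy demo: a synthetic 'vessel' image and a shifted copy standing in for the mask.
contrast = np.zeros((128, 128))
contrast[60:68, 20:110] = 1.0
mask = nd_shift(contrast, (3.5, -2.0), order=1)
print("recovered shift:", np.round(register_translation(mask, contrast), 2))
```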

  5. Final report on DSA methods for monitoring alumina in aluminum reduction cells with cermet anodes. Inert Electrodes Program

    SciTech Connect

    Windisch, C.F. Jr.

    1992-04-01

    The Sensors Development Program was conducted at the Pacific Northwest Laboratory (PNL) for the US Department of Energy, Office of Industrial Processes. The work was performed in conjunction with the Inert Electrodes Program at PNL. The objective of the Sensors Development Program in FY 1990 through FY 1992 was to determine whether methods based on digital signal analysis (DSA) could be used to measure alumina concentration in aluminum reduction cells. Specifically, this work was performed to determine whether useful correlations exist between alumina concentration and various DSA-derived quantification parameters, calculated for current and voltage signals from laboratory and field aluminum reduction cells. If appropriate correlations could be found, then the quantification parameters might be used to monitor and, consequently, help control the alumina concentration in commercial reduction cells. The control of alumina concentration is especially important for cermet anodes, which have exhibited instability and excessive wear at alumina concentrations removed from saturation.

  6. Accelerator system and method of accelerating particles

    NASA Technical Reports Server (NTRS)

    Wirz, Richard E. (Inventor)

    2010-01-01

    An accelerator system and method that utilize dust as the primary mass flux for generating thrust are provided. The accelerator system can include an accelerator capable of operating in a self-neutralizing mode and having a discharge chamber and at least one ionizer capable of charging dust particles. The system can also include a dust particle feeder that is capable of introducing the dust particles into the accelerator. By applying a pulsed positive and negative charge voltage to the accelerator, the charged dust particles can be accelerated thereby generating thrust and neutralizing the accelerator system.

  7. Accelerated molecular dynamics methods

    SciTech Connect

    Perez, Danny

    2011-01-04

    The molecular dynamics method, although extremely powerful for materials simulations, is limited to time scales of roughly one microsecond or less. On longer time scales, dynamical evolution typically consists of infrequent events, which are usually activated processes. This course is focused on understanding infrequent-event dynamics, on methods for characterizing infrequent-event mechanisms and rate constants, and on methods for simulating long time scales in infrequent-event systems, emphasizing the recently developed accelerated molecular dynamics methods (hyperdynamics, parallel replica dynamics, and temperature accelerated dynamics). Some familiarity with basic statistical mechanics and molecular dynamics methods will be assumed.

  8. NEW ACCELERATION METHODS

    SciTech Connect

    Sessler, A.M.

    1984-07-01

    But a glance at the Livingston chart (Fig. 1) of accelerator particle energy as a function of time shows that the energy has increased steadily and exponentially. Equally significant is the fact that this increase is the envelope of diverse technologies. If one is to stay on, or even near, the Livingston curve in future years, then new acceleration techniques need to be developed. What are the new acceleration methods? In these two lectures I would like to sketch some of these new ideas. I am well aware that they will probably not result in high energy accelerators within this or the next decade, but conversely, it is likely that these ideas will form the basis for the accelerators of the next century. Anyway, the ideas are stimulating and suffice to show that accelerator physicists are not just 'engineers', but genuine scientists deserving to be welcomed into the company of high energy physicists. I believe that outsiders will find this field surprisingly fertile and certainly fun. To put it more personally, I very much enjoy working in this field and lecturing on it. There are a number of review articles which should be consulted for references to the original literature. In addition there are three books on the subject. Given this material, I feel free to not completely reference the material in the remainder of this article; consultation of the review articles and books will be adequate as an introduction to the literature, for references abound (hundreds are given). Lastly, by way of introduction, I should like to quote from the end of Ref. 2, for I think the remarks made there are most germane. Remember that the talk was addressed to accelerator physicists: 'Finally, it is often said, I think by physicists who are not well-informed, that accelerator builders have used up their capital and now are bereft of ideas, and as a result, high energy physics will eventually--rather soon, in fact--come to a halt. After all, one can't build too many machines greater than

  9. Manufacturability considerations for DSA

    NASA Astrophysics Data System (ADS)

    Farrell, Richard A.; Hosler, Erik R.; Schmid, Gerard M.; Xu, Ji; Preil, Moshe E.; Rastogi, Vinayak; Mohanty, Nihar; Kumar, Kaushik; Cicoria, Michael J.; Hetzer, David R.; DeVilliers, Anton

    2014-03-01

    Implementation of Directed Self-Assembly (DSA) as a viable lithographic technology for high volume manufacturing will require significant efforts to co-optimize the DSA process options and constraints with existing work flows. These work flows include established etch stacks, integration schemes, and design layout principles. The two foremost patterning schemes for DSA, chemoepitaxy and graphoepitaxy, each have their own advantages and disadvantages. Chemoepitaxy is well suited for regular repeating patterns, but has challenges when non-periodic design elements are required. As the line-space polystyrene-block-polymethylmethacrylate chemoepitaxy DSA processes mature, considerable progress has been made on reducing the density of topological (dislocation and disclination) defects, but little is known about the existence of 3D buried defects and their subsequent pattern transfer to underlayers. In this paper, we highlight the emergence of a specific type of buried bridging defect within our two 28 nm pitch DSA flows and summarize our efforts to characterize and eliminate the buried defects using process, materials, and plasma-etch optimization. We also discuss how the optimization and removal of the buried defects impacts both the process window and pitch multiplication and facilitates measurement of pattern roughness rectification, and we demonstrate hard-mask open within a back-end-of-line integration flow. Finally, since graphoepitaxy has intrinsic benefits in terms of design flexibility when compared to chemoepitaxy, we highlight our initial investigations on implementing high-chi block copolymer patterning using multiple graphoepitaxy flows to realize sub-20 nm pitch line-space patterns and discuss the benefits of using high-chi block copolymers for roughness reduction.

  10. Accelerated adaptive integration method.

    PubMed

    Kaus, Joseph W; Arrar, Mehrnoosh; McCammon, J Andrew

    2014-05-15

    Conformational changes that occur upon ligand binding may be too slow to observe on the time scales routinely accessible using molecular dynamics simulations. The adaptive integration method (AIM) leverages the notion that when a ligand is either fully coupled or decoupled, according to λ, barrier heights may change, making some conformational transitions more accessible at certain λ values. AIM adaptively changes the value of λ in a single simulation so that conformations sampled at one value of λ seed the conformational space sampled at another λ value. Adapting the value of λ throughout a simulation, however, does not resolve issues in sampling when barriers remain high regardless of the λ value. In this work, we introduce a new method, called Accelerated AIM (AcclAIM), in which the potential energy function is flattened at intermediate values of λ, promoting the exploration of conformational space as the ligand is decoupled from its receptor. We show, with both a simple model system (Bromocyclohexane) and the more complex biomolecule Thrombin, that AcclAIM is a promising approach to overcome high barriers in the calculation of free energies, without the need for any statistical reweighting or additional processors. PMID:24780083
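
    A schematic illustration of the flattening idea (not the AIM/AcclAIM implementation, which operates on molecular dynamics free-energy calculations) is a 1D Metropolis sampler on a lambda-coupled double well whose barrier is scaled down at intermediate lambda; all functional forms and parameters below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def potential(x, lam, flatten=True):
    """Schematic lambda-coupled double well. At lam = 0 and lam = 1 the barrier
    has full height; the AcclAIM-style modification scales the barrier down at
    intermediate lambda so the sampler can cross between wells."""
    U = 8.0 * (x**2 - 1.0) ** 2 + lam * x          # double well plus a lambda-dependent bias
    if flatten:
        U *= 1.0 - 0.8 * 4.0 * lam * (1.0 - lam)   # factor is 1 at lam = 0, 1 and 0.2 at lam = 0.5
    return U

def count_well_crossings(lam, n_steps=20000, beta=1.0, flatten=True):
    """Metropolis sampling of the 1D potential; counts sign changes of x as a
    crude measure of how often conformational transitions occur."""
    x, crossings, last_side = -1.0, 0, -1
    for _ in range(n_steps):
        x_new = x + rng.normal(scale=0.3)
        if rng.random() < np.exp(-beta * (potential(x_new, lam, flatten) - potential(x, lam, flatten))):
            x = x_new
        side = 1 if x > 0 else -1
        if side != last_side:
            crossings, last_side = crossings + 1, side
    return crossings

for flatten in (False, True):
    label = "flattened" if flatten else "original "
    print(label, "barrier at lambda = 0.5:", count_well_crossings(0.5, flatten=flatten), "well crossings")
```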

  11. Accelerated Adaptive Integration Method

    PubMed Central

    2015-01-01

    Conformational changes that occur upon ligand binding may be too slow to observe on the time scales routinely accessible using molecular dynamics simulations. The adaptive integration method (AIM) leverages the notion that when a ligand is either fully coupled or decoupled, according to λ, barrier heights may change, making some conformational transitions more accessible at certain λ values. AIM adaptively changes the value of λ in a single simulation so that conformations sampled at one value of λ seed the conformational space sampled at another λ value. Adapting the value of λ throughout a simulation, however, does not resolve issues in sampling when barriers remain high regardless of the λ value. In this work, we introduce a new method, called Accelerated AIM (AcclAIM), in which the potential energy function is flattened at intermediate values of λ, promoting the exploration of conformational space as the ligand is decoupled from its receptor. We show, with both a simple model system (Bromocyclohexane) and the more complex biomolecule Thrombin, that AcclAIM is a promising approach to overcome high barriers in the calculation of free energies, without the need for any statistical reweighting or additional processors. PMID:24780083

  12. Driving DSA into volume manufacturing

    NASA Astrophysics Data System (ADS)

    Somervell, Mark; Yamauchi, Takashi; Okada, Soichiro; Tomita, Tadatoshi; Nishi, Takanori; Kawakami, Shinichiro; Muramatsu, Makoto; Iijima, Etsuo; Rastogi, Vinayak; Nakano, Takeo; Iwao, Fumiko; Nagahara, Seiji; Iwaki, Hiroyuki; Dojun, Makiko; Yatsuda, Koichi; Tobana, Toshikatsu; Romo Negreira, Ainhoa; Parnell, Doni; Rathsack, Benjamen; Nafus, Kathleen; Peyre, Jean-Luc; Kitano, Takahiro

    2015-03-01

    Directed Self-Assembly (DSA) is being extensively evaluated for application in semiconductor process integration [1-7]. Since 2011, the number of publications on DSA at SPIE has exploded from roughly 26 to well over 80, indicating the groundswell of interest in the technology. Driving this interest are a number of attractive aspects of DSA, including the ability to form both line/space and hole patterns at dimensions below 15 nm, the ability to achieve pitch multiplication to extend optical lithography, and the relatively low cost of the processes when compared with EUV or multiple patterning options. Tokyo Electron Limited has focused its efforts on scaling many laboratory demonstrations to 300 mm wafers. Additionally, we have recognized that the use of DSA requires specific design considerations to create robust layouts. To this end, we have discussed the development of a DSA ecosystem that will make DSA a viable technology for our industry, and we have partnered with numerous companies to aid in the development of the ecosystem. This presentation will focus on our continuing role in developing the equipment required for DSA implementation, specifically discussing defectivity reduction on flows for making line-space and hole patterns, etch transfer of DSA patterns into substrates of interest, and integration of DSA processes into larger patterning schemes.

  13. A grey diffusion acceleration method for time-dependent radiative transfer calculations

    SciTech Connect

    Nowak, P.F.

    1991-07-01

    The equations of thermal radiative transfer describe the emission, absorption, and transport of photons in a material. As photons travel through the material they are absorbed and re-emitted in a Planckian distribution characterized by the material temperature. As a result of these processes, the material temperature can change, resulting in a change in the Planckian emission spectrum. When the coupling between the material and radiation is strong, as occurs when the material opacity or the time step is large, standard iterative techniques converge very slowly. As a result, nested iterative algorithms have been applied to the problem; one such algorithm uses multifrequency DSA to accelerate the convergence of the multifrequency transport iteration, followed by a grey transport acceleration (GTA) and a single-group DSA. Here we summarize a new method which uses a grey diffusion equation (GDA) to accelerate the multifrequency transport (S_N) solution directly. Results of Fourier analysis for both the continuous and discretized equations are discussed, and the computational efficiency of GDA is compared with the DSA and GTA nested algorithms.

  14. 4D-DSA and 4D fluoroscopy: preliminary implementation

    NASA Astrophysics Data System (ADS)

    Mistretta, C. A.; Oberstar, E.; Davis, B.; Brodsky, E.; Strother, C. M.

    2010-04-01

    We have described methods that allow highly accelerated MRI using under-sampled acquisitions and constrained reconstruction. One is a hybrid acquisition involving the constrained reconstruction of time-dependent information obtained from a separate scan of longer duration. We have developed reconstruction algorithms for DSA that allow use of a single injection to provide the temporal data required for flow visualization and the steady-state data required for construction of a 3D-DSA vascular volume. The result is time-resolved 3D volumes with a typical resolution of 512^3 at frame rates of 20-30 fps. Full manipulation of these images is possible during each stage of vascular filling, thereby allowing for simplified interpretation of vascular dynamics. For intravenous angiography this time-resolved 3D capability overcomes the vessel overlap problem that greatly limited the use of conventional intravenous 2D-DSA. Following further hardware development, it will also be possible to rotate fluoroscopic volumes for use as roadmaps that can be viewed at arbitrary angles without a need for gantry rotation. The most precise implementation of this capability requires the availability of biplane fluoroscopy data. Since the reconstruction of 3D volumes presently suppresses the contrast in the soft tissue, the possibility of using these techniques to derive complete indications of perfusion deficits based on cerebral blood volume (CBV), mean transit time (MTT), and time to peak (TTP) parameters requires further investigation. Using MATLAB post-processing, successful studies in animals and humans, done in conjunction with both intravenous and intra-arterial injections, have been completed. Real-time implementation is in progress.

  15. Multidimensional MHD Simulations Of DSA Using AstroBEAR

    NASA Astrophysics Data System (ADS)

    Edmon, Paul; Jones, T.; Mitran, S.; Cunningham, A.; Frank, A.

    2009-05-01

    We present a modification to the AstroBEAR (Astronomical Boundary Embedded Adaptive Refinement) MHD code (Cunningham et al. 2007) that allows it to treat time-dependent Diffusive Shock Acceleration (DSA) of cosmic rays in multiple dimensions, including dynamical feedback from the cosmic rays. Utilizing the power of Adaptive Mesh Refinement (AMR) in tandem with efficient methods for cosmic ray diffusion and advection, this allows us for the first time to explore the evolution of modified MHD shocks in more than one spatial dimension. Among the early applications of the code will be investigations of colliding and clumpy stellar winds, Type II supernova remnants, and cosmic-ray-driven instabilities. This work is supported at the University of Minnesota by NSF, NASA and the Minnesota Supercomputing Institute.

  16. ECG-synchronized DSA exposure control: improved cervicothoracic image quality

    SciTech Connect

    Kelly, W.M.; Gould, R.; Norman, D.; Brant-Zawadzki, M.; Cox, L.

    1984-10-01

    An electrocardiogram (ECG)-synchronized x-ray exposure sequence was used to acquire digital subtraction angiographic (DSA) images during 13 arterial injection studies of the aortic arch or carotid bifurcations. These gated images were compared with matched ungated DSA images acquired using the same technical factors, contrast material volume, and patient positioning. Subjective assessments by five experienced observers of edge definition, vessel conspicuousness, and overall diagnostic quality showed overall preference for one of the two acquisition methods in 69% of cases studied. Of these, the ECG-synchronized exposure series were rated superior in 76%. These results, as well as the relatively simple and inexpensive modifications required, suggest that routine use of ECG exposure control can facilitate improved arterial DSA evaluations of suspected cervicothoracic vascular disease.

  17. Lasers and new methods of particle acceleration

    SciTech Connect

    Parsa, Z.

    1998-02-01

    There has been great progress in the development of high-power laser technology. Harnessing its potential for particle accelerators is a challenge and of great interest for the development of future high-energy colliders. The author discusses some of the advances and new methods of acceleration, including plasma-based accelerators. The exponential increase in the sophistication and power of all aspects of accelerator development and operation has been remarkable. This success has been driven by the inherent interest in gaining a new and deeper understanding of the universe around us. With the limitations of conventional technology it may not be possible to meet the requirements of future accelerators, with their demands for ever higher energies and luminosities. It is believed that using existing technology one can build a linear collider with about 1 TeV center-of-mass energy. However, it would be very difficult (or impossible) to build linear colliders with energies much above one or two TeV without a new method of acceleration. Laser-driven high-gradient accelerators are becoming more realistic and are expected to provide a more compact and more economical alternative to conventional accelerators in the future. The author discusses some of the new methods of particle acceleration, including laser- and particle-beam-driven plasma-based accelerators and near- and far-field accelerators. He also discusses the enhanced IFEL (Inverse Free Electron Laser) and NAIBEA (Nonlinear Amplification of Inverse-Beamstrahlung Electron Acceleration) schemes, laser-driven photo-injectors, and the high-energy physics requirements.

  18. Proactive DSA application and implementation

    SciTech Connect

    Draelos, T.; Hamilton, V.; Istrail, G.

    1998-05-03

    Data authentication as provided by digital signatures is a well known technique for verifying data sent via untrusted network links. Recent work has extended digital signatures to allow jointly generated signatures using threshold techniques. In addition, new proactive mechanisms have been developed to protect the joint private key over long periods of time and to allow each of the parties involved to verify the actions of the other parties. In this paper, the authors describe an application in which proactive digital signature techniques are a particularly valuable tool. They describe the proactive DSA protocol and discuss the underlying software tools that they found valuable in developing an implementation. Finally, the authors briefly describe the protocol and note difficulties they experienced and continue to experience in implementing this complex cryptographic protocol.

  19. Incorporating DSA in multipatterning semiconductor manufacturing technologies

    NASA Astrophysics Data System (ADS)

    Badr, Yasmine; Torres, J. A.; Ma, Yuansheng; Mitra, Joydeep; Gupta, Puneet

    2015-03-01

    Multi-patterning (MP) is the process of record for many sub-10nm process technologies. The drive to higher densities has required the use of double and triple patterning for several layers, but this increases the cost of new processes, especially for low-volume products in which the mask set is a large percentage of the total cost. For that reason there has been a strong incentive to develop technologies like Directed Self Assembly (DSA), EUV, or E-beam direct write to reduce the total number of masks needed in a new technology node. Because of the nature of the technology, DSA cylinder graphoepitaxy only allows single-size holes in a single-patterning approach. However, by integrating DSA and MP into a hybrid DSA-MP process, it is possible to devise decomposition approaches that increase design flexibility, allowing different hole sizes or bar structures by independently changing the process for every patterning step. A simple approach to integrating multi-patterning with DSA is to perform DSA grouping and MP decomposition in sequence, whether grouping-then-decomposition or decomposition-then-grouping; each of the two sequences has its pros and cons. However, this paper describes why these intuitive approaches do not produce results of acceptable quality from the point of view of design compliance, and we highlight the need for custom DSA-aware MP algorithms.

  20. Tracking of Acceleration with HNJ Method

    SciTech Connect

    Ruggiero,A.

    2008-02-01

    After reviewing the principle of operation of acceleration with the method of Harmonic Number Jump (HNJ) in a Fixed-Field Alternating Gradient (FFAG) accelerator for protons and heavy ions, we report in this talk the results of computer simulations performed to assess the capability and the limits of the method in a variety of practical situations. Though the study is not yet completed, and there still remain other cases to be investigated, nonetheless the tracking results so far obtained are very encouraging, and confirm the validity of the method.

  1. Ultra low radiation dose digital subtraction angiography (DSA) imaging using low rank constraint

    NASA Astrophysics Data System (ADS)

    Niu, Kai; Li, Yinsheng; Schafer, Sebastian; Royalty, Kevin; Wu, Yijing; Strother, Charles; Chen, Guang-Hong

    2015-03-01

    In this work we developed a novel denoising algorithm for DSA image series. This algorithm takes advantage of the low rank nature of the DSA image sequences to enable a dramatic reduction in radiation and/or contrast doses in DSA imaging. Both spatial and temporal regularizers were introduced in the optimization algorithm to further reduce noise. To validate the method, in vivo animal studies were conducted with a Siemens Artis Zee biplane system using different radiation dose levels and contrast concentrations. Both conventionally processed DSA images and the DSA images generated using the novel denoising method were compared using absolute noise standard deviation and the contrast to noise ratio (CNR). With the application of the novel denoising algorithm for DSA, image quality can be maintained with a radiation dose reduction by a factor of 20 and/or a factor of 2 reduction in contrast dose. Image processing is completed on a GPU within a second for a 10s DSA data acquisition.
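
    The low-rank idea can be sketched as plain singular-value truncation of the frame stack; the published algorithm additionally uses spatial and temporal regularizers and an optimization formulation, which are omitted in this illustration.

```python
import numpy as np

def lowrank_denoise(frames, rank=3):
    """Minimal low-rank denoising of a DSA sequence: stack each frame as a column
    of a (pixels x time) matrix, keep only the leading singular components, and
    reshape back. The published method also uses spatial and temporal
    regularizers, which are omitted here."""
    n_t, h, w = frames.shape
    M = frames.reshape(n_t, h * w).T                 # pixels x time
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    M_lr = (U[:, :rank] * s[:rank]) @ Vt[:rank, :]   # truncated reconstruction
    return M_lr.T.reshape(n_t, h, w)

# Toy demo: a slowly 'filling vessel' plus heavy noise.
rng = np.random.default_rng(2)
t = np.linspace(0, 1, 30)[:, None, None]
vessel = np.zeros((64, 64))
vessel[30:34, 10:54] = 1.0
clean = t * vessel[None]                             # linear contrast uptake over 30 frames
noisy = clean + rng.normal(scale=0.5, size=clean.shape)
denoised = lowrank_denoise(noisy, rank=2)
print("RMSE noisy   :", float(np.sqrt(np.mean((noisy - clean) ** 2))))
print("RMSE denoised:", float(np.sqrt(np.mean((denoised - clean) ** 2))))
```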

  2. Improved cost-effectiveness of the block co-polymer anneal process for DSA

    NASA Astrophysics Data System (ADS)

    Pathangi, Hari; Stokhof, Maarten; Knaepen, Werner; Vaid, Varun; Mallik, Arindam; Chan, Boon Teik; Vandenbroeck, Nadia; Maes, Jan Willem; Gronheid, Roel

    2016-04-01

    This manuscript first presents a cost model comparing the cost of ownership of DSA and SAQP for a typical front-end-of-line (FEoL) line patterning exercise. We then proceed to a feasibility study of using a vertical furnace to batch anneal the block co-polymer for DSA applications. We show that the defect performance of such a batch anneal process is comparable to that of the process-of-record anneal methods. This increases the cost benefit of DSA compared to conventional multiple patterning approaches.

  3. Accelerated Learning: Madness with a Method.

    ERIC Educational Resources Information Center

    Zemke, Ron

    1995-01-01

    Accelerated learning methods have evolved into a variety of holistic techniques that involve participants in the learning process and overcome negative attitudes about learning. These components are part of the mix: the brain, learning environment, music, imaginative activities, suggestion, positive mental state, the arts, multiple intelligences,…

  4. Projected discrete ordinates methods for numerical transport problems

    SciTech Connect

    Larsen, E.W.

    1985-01-01

    A class of Projected Discrete-Ordinates (PDO) methods is described for obtaining iterative solutions of discrete-ordinates problems with convergence rates comparable to those observed using Diffusion Synthetic Acceleration (DSA). The spatially discretized PDO solutions are generally not equal to the DSA solutions, but unlike DSA, which requires great care in the use of spatial discretizations to preserve stability, the PDO solutions remain stable and rapidly convergent with essentially arbitrary spatial discretizations. Numerical results are presented which illustrate the rapid convergence and the accuracy of solutions obtained using PDO methods with commonplace differencing methods.

  5. Application of image fusion techniques in DSA

    NASA Astrophysics Data System (ADS)

    Ye, Feng; Wu, Jian; Cui, Zhiming; Xu, Jing

    2007-12-01

    Digital subtraction angiography (DSA) is an important technology in both medical diagnosis and interventional therapy, which can eliminate the interfering background and give prominence to blood vessels through computer processing. After contrast material is injected into an artery or vein, a physician produces fluoroscopic images. Using these digitized images, a computer subtracts a pre-injection mask image from the post-injection images, removing the background. By analyzing the characteristics of DSA medical images, this paper provides an image-fusion solution tailored to the application of DSA subtraction. We fuse the angiogram and subtraction images in order to obtain a new image that carries more information. The image fused by wavelet transform can display the blood vessels and background information clearly, and medical experts rated its effect highly.
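
    A minimal wavelet-fusion sketch using PyWavelets is shown below; the fusion rule (average the approximation band, keep the larger-magnitude detail coefficients) is a common choice and not necessarily the one used in the paper, and the images are synthetic.

```python
import numpy as np
import pywt

def fuse_wavelet(img_a, img_b, wavelet="haar", level=3):
    """Fuse two registered images in the wavelet domain: average the approximation
    coefficients and keep the larger-magnitude detail coefficient at each
    position (a common fusion rule, used here for illustration)."""
    ca = pywt.wavedec2(img_a, wavelet, level=level)
    cb = pywt.wavedec2(img_b, wavelet, level=level)
    fused = [(ca[0] + cb[0]) / 2.0]                            # approximation band
    for da, db in zip(ca[1:], cb[1:]):                         # detail bands, coarse to fine
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b) for a, b in zip(da, db)))
    return pywt.waverec2(fused, wavelet)

# Toy demo with a synthetic 'angiogram' (vessel + background) and 'subtraction' (vessel only).
rng = np.random.default_rng(3)
background = rng.normal(0.5, 0.05, (128, 128))
vessel = np.zeros((128, 128))
vessel[60:66, 10:118] = 1.0
angiogram = background + 0.3 * vessel
subtraction = vessel + rng.normal(0.0, 0.02, vessel.shape)
fused = fuse_wavelet(angiogram, subtraction)
print("fused image shape:", fused.shape)
```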

  6. An Accelerated Method for Soldering Testing

    SciTech Connect

    Han, Qingyou; Xu, Hanbing; Ried, Paul; Olson, Paul

    2007-01-01

    An accelerated method for testing die soldering has been developed. High-intensity ultrasonic vibrations were applied to simulate die casting conditions, such as high pressure and high molten metal velocity, on the pin. The soldering tendency of steels and coated pins was examined. The results suggest that in the low-carbon steel/Al system, the onset of soldering is 60 times faster with ultrasonic vibration than without. In the H13/A380 system, the onset of the soldering reaction is accelerated by a factor of 30 to 60. Coatings significantly reduce the soldering tendency. For the purposes of this study, several commercial coatings from Balzers demonstrated the potential for increasing the service life of core pins by a factor of 15 to 180.

  7. Influence of template fill in graphoepitaxy DSA

    NASA Astrophysics Data System (ADS)

    Doise, Jan; Bekaert, Joost; Chan, Boon Teik; Hong, SungEun; Lin, Guanyang; Gronheid, Roel

    2016-03-01

    Directed self-assembly (DSA) of block copolymers (BCP) is considered a promising patterning approach for the 7 nm node and beyond. Specifically, a grapho-epitaxy process using a cylindrical-phase BCP may offer an efficient solution for patterning randomly distributed contact holes with sub-resolution pitches, such as found in via and cut mask levels. In any grapho-epitaxy process, the pattern density impacts the template fill (local BCP thickness inside the template) and may cause defects due to over- or underfilling of the template. In order to tackle this issue thoroughly, the parameters that determine template fill and the influence of template fill on the resulting pattern should be investigated. In this work, using three process flow variations (with different template surface energies), template fill is experimentally characterized as a function of pattern density and film thickness. The impact of these parameters on template fill is highly dependent on the process flow, and thus on pre-pattern surface energy. Template fill has a considerable effect on the pattern transfer of the DSA contact holes into the underlying layer. Higher fill levels give rise to smaller contact holes and worse critical dimension uniformity. These results are important for DSA-aware design and show that fill is a crucial parameter in grapho-epitaxy DSA.

  8. Coarse-mesh diffusion synthetic acceleration in slab geometry

    SciTech Connect

    Kim, K.S.; Palmer, T.S.

    2000-07-01

    It has long been known that the success of a diffusion synthetic acceleration (DSA) scheme is very sensitive to the consistency between the discretization of the transport and diffusion acceleration equations. Acceleration schemes involving inconsistent discretizations have been successful, but no prescription is available that determines a priori an allowable degree of inconsistency. It is notable, however, that all current DSA schemes involve diffusion equations discretized on the spatial mesh used to solve the transport equations. Often the solution of a large number of low-order equations is an expensive part of the transport simulation. This motivates the desire to find stable and rapidly convergent acceleration schemes that are discretized on a mesh that is coarse relative to the transport mesh. The authors present here results showing that the low-order diffusion equation can be solved on a mesh coarser (by a factor of 2) than that used for the slab geometry transport equation. Their results show that coarse-mesh DSA is unconditionally stable and is as rapidly convergent as a DSA method discretized on the transport mesh. They have used Adams and Martin's modified four-step acceleration method (M4S) applied to the linear discontinuous (LD) finite element transport equations in slab geometry. To evaluate their procedure, they have performed a Fourier analysis to calculate theoretical spectral radii. They compare this analysis with convergence behavior observed in an implementation code for several model problems.
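
    Observed spectral radii of this kind are typically estimated from the ratio of successive iteration-error norms and then compared with the Fourier-analysis prediction. The sketch below does this for a generic fixed-point iteration with a known solution; it stands in for, but is not, the M4S/LD transport iteration of the paper.

```python
import numpy as np

def estimate_spectral_radius(apply_iteration, x_exact, x0, n_iter=40):
    """Estimate the asymptotic error-reduction factor (spectral radius) of a
    fixed-point iteration from the ratio of successive error norms, as is
    commonly done to check Fourier-analysis predictions."""
    x, errs = x0.copy(), []
    for _ in range(n_iter):
        x = apply_iteration(x)
        errs.append(np.linalg.norm(x - x_exact))
    errs = np.array(errs)
    return (errs[-1] / errs[-6]) ** (1.0 / 5.0)     # geometric mean of the last few ratios

# Toy stand-in for an accelerated source iteration: x <- b + A x with a known fixed point.
rng = np.random.default_rng(4)
A = 0.4 * np.eye(50) + 0.02 * rng.standard_normal((50, 50))
b = rng.standard_normal(50)
x_star = np.linalg.solve(np.eye(50) - A, b)
rho_est = estimate_spectral_radius(lambda x: b + A @ x, x_star, np.zeros(50))
print(f"estimated spectral radius: {rho_est:.3f}")
print(f"true spectral radius     : {max(abs(np.linalg.eigvals(A))):.3f}")
```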

  9. N7 logic via patterning using templated DSA: implementation aspects

    NASA Astrophysics Data System (ADS)

    Bekaert, J.; Doise, J.; Gronheid, R.; Ryckaert, J.; Vandenberghe, G.; Fenger, G.; Her, Y. J.; Cao, Y.

    2015-07-01

    In recent years, major advancements have been made in the directed self-assembly (DSA) of block copolymers (BCP). Insertion of DSA into IC fabrication is being seriously considered for the 7 nm node. At this node the DSA technology could alleviate the costs of multiple patterning and limit the number of masks required per layer. At imec, multiple approaches for inserting DSA into the 7 nm node are considered. One of the most straightforward approaches for implementation would be via patterning through templated DSA: a grapho-epitaxy flow using a cylindrical-phase BCP material resulting in contact-hole multiplication within a litho-defined pre-pattern. To be implemented for 7 nm node via patterning, not only must the appropriate process flow be available, but DSA-aware mask decomposition is also required. In this paper, several aspects of the imec approach to implementing templated DSA will be discussed, including experimental demonstration of density-effect mitigation, DSA hole-pattern transfer, double DSA patterning, and the creation of a compact DSA model. Using an actual 7 nm node logic layout, we derive DSA-friendly design rules in a logical way from a lithographer's viewpoint. A concrete assessment is provided of how DSA-friendly design could potentially reduce the number of via masks for a place-and-routed N7 logic pattern.

  10. Iterative convergence acceleration of neutral particle transport methods via adjacent-cell preconditioners

    SciTech Connect

    Azmy, Y.Y.

    1999-06-10

    The author proposes preconditioning as a viable acceleration scheme for the inner iterations of transport calculations in slab geometry. In particular, he develops Adjacent-Cell Preconditioners (AP) that have the same coupling stencil as cell-centered diffusion schemes. For lowest-order methods, e.g., Diamond Difference, Step, and the 0-order Nodal Integral Method (ONIM), cast in a Weighted Diamond Difference (WDD) form, he derives AP for thick (KAP) and thin (NAP) cells that for model problems are unconditionally stable and efficient. For the First-Order Nodal Integral Method (INIM) he derives a NAP that possesses similarly excellent spectral properties for model problems. The two most attractive features of the new technique are: (1) its cell-centered coupling stencil, which makes it more suitable for extension to multidimensional, higher-order situations than the standard edge-centered or point-centered Diffusion Synthetic Acceleration (DSA) methods; and (2) its decreasing spectral radius with increasing cell thickness, to the extent that immediate pointwise convergence, i.e., convergence in one iteration, can be achieved for problems with sufficiently thick cells. He implemented these methods, augmented with appropriate boundary conditions and mixing formulas for material heterogeneities, in the test code APID, which he uses to successfully verify the analytical spectral properties for homogeneous problems. Furthermore, he conducts numerical tests to demonstrate the robustness of KAP and NAP in the presence of sharp mesh or material discontinuities. He shows that the AP for WDD is highly resilient to such discontinuities, but for INIM a few cases occur in which the scheme does not converge; however, when it converges, AP greatly reduces the number of iterations required to achieve convergence.

  11. Coronary DSA: enhancing coronary tree visibility through discriminative learning and robust motion estimation

    NASA Astrophysics Data System (ADS)

    Zhu, Ying; Prummer, Simone; Chen, Terrence; Ostermeier, Martin; Comaniciu, Dorin

    2009-02-01

    Digital subtraction angiography (DSA) is a well-known technique for improving the visibility and perceptibility of blood vessels in the human body. Coronary DSA extends conventional DSA to dynamic 2D fluoroscopic sequences of coronary arteries which are subject to respiratory and cardiac motion. Effective motion compensation is the main challenge for coronary DSA. Without a proper treatment, both breathing and heart motion can cause unpleasant artifacts in coronary subtraction images, jeopardizing the clinical value of coronary DSA. In this paper, we present an effective method to separate the dynamic layer of background structures from a fluoroscopic sequence of the heart, leaving a clean layer of moving coronary arteries. Our method combines the techniques of learning-based vessel detection and robust motion estimation to achieve reliable motion compensation for coronary sequences. Encouraging results have been achieved on clinically acquired coronary sequences, where the proposed method considerably improves the visibility and perceptibility of coronary arteries undergoing breathing and cardiac movement. Perceptibility improvement is significant especially for very thin vessels. The potential clinical benefit is expected in the context of obese patients and deep angulation, as well as in the reduction of contrast dose in normal size patients.

  12. Cerebral vascular malformations: Time-resolved CT angiography compared to DSA

    PubMed Central

    Lum, Cheemun; Chakraborty, Santanu; dos Santos, Marlise P

    2015-01-01

    Purpose: To prospectively test the hypothesis that time-resolved CT angiography (TRCTA) on a Toshiba 320-slice CT scanner enables the same characterization of cerebral vascular malformations (CVM), including arteriovenous malformation (AVM), dural arteriovenous fistula (DAVF), pial arteriovenous fistula (PAVF) and developmental venous anomaly (DVA), as digital subtraction angiography (DSA). Materials and methods: Eighteen (eight male, 10 female) consecutive patients (11 AVM, four DAVF, one PAVF, and two DVA) underwent 19 TRCTA (Aquilion ONE, Toshiba) for suspected CVM diagnosed on routine CT or MRI. One patient with a dural AVF underwent TRCTA and DSA twice, before and after treatment. Of the 18 patients, 13 were followed with DSA (Artis, Siemens) within two months of TRCTA. Twenty-three sequential volume acquisitions of the whole head were acquired after injection of 50 ml contrast at a rate of 4 ml/sec. Two patients with DVA did not undergo DSA. Two TRCTA were not assessed because of technical problems. TRCTAs were independently reviewed by two neuroradiologists and DSA by two other neuroradiologists, and graded according to the Spetzler-Martin classification, Borden classification, overall diagnostic quality, and level of confidence. Weighted kappa coefficients (k) were calculated to compare readers' assessments of DSA vs TRCTA. Results: There was excellent (k = 0.83 and 1) to good (k = 0.56, 0.61, 0.65 and 0.67) agreement between the different possible pairs of neuroradiologists for the assessment of vascular malformations. Conclusion: TRCTA may be a sufficient noninvasive substitute for conventional DSA in certain clinical situations. PMID:26246101
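
    Weighted kappa agreement of the kind reported above can be computed with scikit-learn; the reader grades below are invented for illustration and are not the study data.

```python
from sklearn.metrics import cohen_kappa_score

# Illustrative reader gradings only (not the study data): ordinal grades assigned
# to the same set of cases by a DSA reader and a TRCTA reader.
grades_dsa   = [1, 2, 2, 3, 1, 4, 2, 3, 3, 1, 2, 4, 3]
grades_trcta = [1, 2, 3, 3, 1, 4, 2, 3, 2, 1, 2, 4, 3]

# Linearly weighted kappa penalizes disagreements in proportion to how far apart
# the ordinal grades are (quadratic weights are another common choice).
kappa = cohen_kappa_score(grades_dsa, grades_trcta, weights="linear")
print(f"weighted kappa: {kappa:.2f}")
```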

  13. An implementation of differential search algorithm (DSA) for inversion of surface wave data

    NASA Astrophysics Data System (ADS)

    Song, Xianhai; Li, Lei; Zhang, Xueqiang; Shi, Xinchun; Huang, Jianquan; Cai, Jianchao; Jin, Si; Ding, Jianping

    2014-12-01

    Surface wave dispersion analysis is widely used in geophysics to infer near-surface shear (S)-wave velocity profiles for a wide variety of applications. However, inversion of surface wave data is challenging for most local-search methods due to its high nonlinearity and multimodality. In this work, we propose and implement a new Rayleigh wave dispersion curve inversion scheme based on the differential search algorithm (DSA), one of the recently developed swarm-intelligence-based algorithms. DSA is inspired by the seasonal migration behavior of living species and is designed for highly nonlinear, multivariable, and multimodal optimization problems. The proposed inverse procedure is applied to nonlinear inversion of fundamental-mode Rayleigh wave dispersion curves for near-surface S-wave velocity profiles. To evaluate the calculation efficiency and stability of DSA, four noise-free and four noisy synthetic data sets are first inverted. Then, the performance of DSA is compared with that of genetic algorithms (GA) on two noise-free synthetic data sets. Finally, a real-world example from a waste disposal site in NE Italy is inverted to examine the applicability and robustness of the proposed approach on surface wave data, and the performance of DSA is again compared against that of GA to further evaluate the inverse procedure described here. Simulation results from both synthetic and actual field data demonstrate that the differential search algorithm (DSA), applied to nonlinear inversion of surface wave data, performs well in terms of both accuracy and convergence speed. The great advantages of DSA are that the algorithm is simple, robust, and easy to implement, with few control parameters to tune.
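
    The structure of such an inversion (a forward dispersion model inside a global stochastic search) is sketched below. SciPy does not ship the differential search algorithm, so differential evolution is used as a stand-in global optimizer, and the forward model is a crude placeholder rather than a real Rayleigh-wave dispersion solver.

```python
import numpy as np
from scipy.optimize import differential_evolution

def forward_dispersion(vs_layers, freqs):
    """Placeholder forward model mapping a layered S-wave velocity profile to a
    pseudo phase-velocity curve (a frequency-dependent weighted average in which
    low frequencies sample deeper layers). A real inversion would call a true
    Rayleigh-wave dispersion solver here."""
    depths = np.arange(1, len(vs_layers) + 1)
    weights = np.exp(-0.05 * np.outer(freqs, depths))
    weights /= weights.sum(axis=1, keepdims=True)
    return 0.92 * weights @ vs_layers

freqs = np.linspace(5, 50, 30)
vs_true = np.array([180.0, 260.0, 420.0, 600.0])     # m/s, shallow to deep
observed = forward_dispersion(vs_true, freqs)

# Global stochastic search for the profile that fits the observed curve. SciPy
# provides differential *evolution*, used here only as a stand-in for the
# differential search algorithm (DSA) of the paper.
misfit = lambda vs: float(np.sqrt(np.mean((forward_dispersion(np.asarray(vs), freqs) - observed) ** 2)))
result = differential_evolution(misfit, bounds=[(100, 800)] * 4, seed=5, tol=1e-8)
print("recovered Vs profile (m/s):", np.round(result.x, 1))
```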

  14. Accelerated Monte Carlo Methods for Coulomb Collisions

    NASA Astrophysics Data System (ADS)

    Rosin, Mark; Ricketson, Lee; Dimits, Andris; Caflisch, Russel; Cohen, Bruce

    2014-03-01

    We present a new highly efficient multi-level Monte Carlo (MLMC) simulation algorithm for Coulomb collisions in a plasma. The scheme, initially developed and used successfully for applications in financial mathematics, is applied here to kinetic plasmas for the first time. The method is based on a Langevin treatment of the Landau-Fokker-Planck equation and has a rich history derived from the works of Einstein and Chandrasekhar. The MLMC scheme successfully reduces the computational cost of achieving an RMS error ɛ in the numerical solution to collisional plasma problems from O(ɛ^-3) - for the standard state-of-the-art Langevin and binary collision algorithms - to a theoretically optimal O(ɛ^-2) scaling, when used in conjunction with an underlying Milstein discretization to the Langevin equation. In the test case presented here, the method accelerates simulations by factors of up to 100. We summarize the scheme, present some tricks for improving its efficiency yet further, and discuss the method's range of applicability. Work performed for US DOE by LLNL under contract DE-AC52-07NA27344 and by UCLA under grant DE-FG02-05ER25710.
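
    The essence of MLMC, coupling coarse and fine discretizations through shared random increments so that only a few expensive fine samples are needed, can be illustrated on a simple Langevin-type SDE; the Ornstein-Uhlenbeck process below stands in for, and is much simpler than, the Landau-Fokker-Planck collision dynamics of the paper.

```python
import numpy as np

rng = np.random.default_rng(6)

def ou_path_pair(T=1.0, n_fine=64, x0=1.0, theta=1.0, sigma=0.5):
    """One fine-step and one coarse-step Euler-Maruyama path of an
    Ornstein-Uhlenbeck process dX = -theta*X dt + sigma dW, driven by the SAME
    Brownian increments; this coupling keeps the correction variance small."""
    dt = T / n_fine
    dW = rng.normal(scale=np.sqrt(dt), size=n_fine)
    xf = xc = x0
    for i in range(0, n_fine, 2):
        xf += -theta * xf * dt + sigma * dW[i]
        xf += -theta * xf * dt + sigma * dW[i + 1]
        xc += -theta * xc * (2 * dt) + sigma * (dW[i] + dW[i + 1])   # coarse step reuses summed increments
    return xf, xc

# Two-level estimator of E[X(T)]: many cheap coarse samples plus a few coupled
# fine-minus-coarse corrections. (Here the coarse-only samples wastefully also
# simulate the fine path; a real code would simulate coarse paths alone.)
coarse = np.array([ou_path_pair()[1] for _ in range(20000)])
pairs = np.array([ou_path_pair() for _ in range(2000)])
correction = pairs[:, 0] - pairs[:, 1]
print("coarse-level estimate :", coarse.mean())
print("MLMC (coarse + corr.) :", coarse.mean() + correction.mean())
print("correction variance   :", correction.var(), "(small, so few fine samples are needed)")
```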

  15. Intravenous Digital Subtraction Angiography (DSA) of Hemodialysis Access Fistulae

    PubMed Central

    Allen, Gregory J.; Burnett, Keith R.; Vaziri, Nosratola D.; Friedenberg, Richard M.

    1986-01-01

    Hemodialysis access fistulae or grafts are subject to a variety of complications, including thrombosis, stenoses, and aneurysm or pseudoaneurysm formation. The usual radiologic methods to evaluate these problems consist of retrograde venous angiography or standard femoral or brachial arteriography. Both are invasive and may traumatize the artery or graft. Six patients with internal blood access were studied using digital subtraction angiography: five using a central venous injection and one with direct graft injection. Preliminary results indicate that intravenous digital subtraction angiography (IV-DSA) can depict the anatomy of an access fistula with adequate spatial resolution. Pathologic entities (stenoses, aneurysms) can be demonstrated, as well as other findings of uncertain clinical significance (kinks and webs). In addition, hemodynamic data can be inferred from the near-physiologic sequence of vessel opacification. Methods are in development that will allow determination of absolute blood flow in pertinent vessels via IV-DSA. There were no complications in this small series, and all examinations were performed on outpatients using standard technique. PMID:3537322

  16. High chi polymer development for DSA applications using RAFT technology

    NASA Astrophysics Data System (ADS)

    Sheehan, Michael T.; Farnham, William B.; Tran, Hoang V.; Londono, J. David; Brun, Yefim

    2013-03-01

    Directed self-assembly (DSA) of block copolymers is proving to be an interesting and innovative method to make three-dimensional periodic, uniform patterns useful in a variety of microelectronics applications. Attributes critical to acceptable DSA performance of block copolymers include molecular weight uniformity, final purity, and reproducibility in all the steps involved in producing the polymers. Reversible Addition Fragmentation Chain Transfer (RAFT) polymerization technology enables the production of such materials provided that careful process monitoring and compositional homogeneity measurement systems are employed. It is uniquely suited to construction of multiblocks with components of widely divergent surface energies and functionality. We describe a high chi diblock system comprising partially fluorinated methacrylates and substituted styrenics. While special new polymer separation strategies involving controlled polymer particle assembly in liquid media are required for some monomer systems and molecular weight regimes, we have been able to demonstrate high yield and compositionally homogeneous diblocks of lamellar and cylindrical morphology with polydispersities < 1.1. During purification processes, these diblock materials undergo assembly processes in liquid media, and with appropriate controls, this allows for removal of soluble homopolymer contaminants. SAXS analyses of solid polymer samples provide estimates of lamellar d-spacing, and a good correlation with molecular weight is shown. This system will be described.

  17. PARTICLE ACCELERATOR AND METHOD OF CONTROLLING THE TEMPERATURE THEREOF

    DOEpatents

    Neal, R.B.; Gallagher, W.J.

    1960-10-11

    A method and means are provided for controlling the temperature of a particle accelerator, and more particularly for maintaining a constant and uniform temperature throughout the accelerator. The novel feature of the invention resides in the provision of two individual heating applications to the accelerator structure. The first heating application is substantially a duplication of the heat created when the accelerator is energized; it is employed only when the accelerator is de-energized, thereby keeping the accelerator temperature constant over time whether the accelerator is energized or not. The second heating application is designed to add to either the first application or the energization heat in a manner that creates the same uniform temperature throughout all portions of the accelerator.

  18. Influence of litho patterning on DSA placement errors

    NASA Astrophysics Data System (ADS)

    Wuister, Sander; Druzhinina, Tamara; Ambesi, Davide; Laenens, Bart; Yi, Linda He; Finders, Jo

    2014-03-01

    Directed self-assembly of block copolymers is currently being investigated as a shrinking technique complementary to lithography. One of the critical issues with this technique is that DSA induces placement error. In this paper, the relation between confinement by lithography and the placement error induced by DSA is studied. Both 193i and EUV pre-patterns are created using a simple algorithm to confine two contact holes formed by DSA at a pitch of 45 nm. Full physical numerical simulations were used to compare the impact of the confinement on DSA-related placement error, pitch variations due to pattern variations, and phase-separation defects.

  19. Systems and methods for the magnetic insulation of accelerator electrodes in electrostatic accelerators

    DOEpatents

    Grisham, Larry R

    2013-12-17

    The present invention provides systems and methods for the magnetic insulation of accelerator electrodes in electrostatic accelerators. Advantageously, the systems and methods of the present invention improve the practically obtainable performance of these electrostatic accelerators by addressing, among other things, voltage holding problems and conditioning issues. The problems and issues are addressed by flowing electric currents along these accelerator electrodes to produce magnetic fields that envelope the accelerator electrodes and their support structures, so as to prevent very low energy electrons from leaving the surfaces of the accelerator electrodes and subsequently picking up energy from the surrounding electric field. In various applications, this magnetic insulation must only produce modest gains in voltage holding capability to represent a significant achievement.

  20. Advanced CD-SEM metrology for qualification of DSA patterns using coordinated line epitaxy (COOL) process

    NASA Astrophysics Data System (ADS)

    Kato, Takeshi; Konishi, Junko; Ikota, Masami; Yamaguchi, Satoru; Seino, Yuriko; Sato, Hironobu; Kasahara, Yusuke; Azuma, Tsukasa

    2016-03-01

    Directed self-assembly (DSA) applying chemical epitaxy is one of the promising lithographic solutions for next-generation semiconductor device manufacturing. In particular, DSA lithography using the coordinated line epitaxy (COOL) process is a candidate for the first generation of DSA applying PS-b-PMMA block copolymer (BCP) for sub-15nm dense line patterning. DSA can enhance pitch resolution and can mitigate CD errors to values much smaller than those of the originally exposed guiding patterns. On the other hand, local line placement error often results in a worse value, with distinctive trends depending on the process conditions. To address this issue, we introduce an enhanced measurement technology for DSA line patterns that distinguishes their locations, in order to evaluate the nature of edge placement and roughness corresponding to individual pattern locations using CD-SEM images. Additionally, correlations among the edge roughness of each line and each space are evaluated and discussed. This method can easily visualize features of complicated roughness to control the COOL process. As a result, we found the following. (1) Line placement error and line placement roughness of DSA differed slightly depending on their relative position to the chemical guide patterns. (2) In the middle-frequency region of PSD (Power Spectral Density) graphs, the shapes changed sensitively with the process conditions of chemical stripe guide size and anneal temperature. (3) Correlation coefficient analysis using PSD was able to clarify characteristics of latent defects corresponding to the physical and chemical properties of the BCP materials.

  1. Iterative acceleration methods for Monte Carlo and deterministic criticality calculations

    SciTech Connect

    Urbatsch, T.J.

    1995-11-01

    If you have ever given up on a nuclear criticality calculation and terminated it because it took so long to converge, you might find this thesis of interest. The author develops three methods for improving the fission source convergence in nuclear criticality calculations for physical systems with high dominance ratios for which convergence is slow. The Fission Matrix Acceleration Method and the Fission Diffusion Synthetic Acceleration (FDSA) Method are acceleration methods that speed fission source convergence for both Monte Carlo and deterministic methods. The third method is a hybrid Monte Carlo method that also converges for difficult problems where the unaccelerated Monte Carlo method fails. The author tested the feasibility of all three methods in a test bed consisting of idealized problems. He has successfully accelerated fission source convergence in both deterministic and Monte Carlo criticality calculations. By filtering statistical noise, he has incorporated deterministic attributes into the Monte Carlo calculations in order to speed their source convergence. He has used both the fission matrix and a diffusion approximation to perform unbiased accelerations. The Fission Matrix Acceleration method has been implemented in the production code MCNP and successfully applied to a real problem. When the unaccelerated calculations are unable to converge to the correct solution, they cannot be accelerated in an unbiased fashion. A Hybrid Monte Carlo method weds Monte Carlo and a modified diffusion calculation to overcome these deficiencies. The Hybrid method additionally possesses reduced statistical errors.
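
    The fission-matrix idea can be sketched as follows: tally a matrix whose (i, j) entry is the expected number of fission neutrons produced in region i per fission neutron born in region j, then take its dominant eigenvector as the converged source shape instead of waiting for power iteration on the full transport problem. The matrix below is a made-up toy, not an MCNP tally.

```python
import numpy as np

def dominant_eigvec(H, tol=1e-10, max_iter=1000):
    """Power iteration for the dominant eigenpair of a fission matrix H; the
    eigenvector approximates the converged fission source shape and the
    eigenvalue plays the role of k-effective."""
    s = np.ones(H.shape[0]) / H.shape[0]
    for _ in range(max_iter):
        s_new = H @ s
        k = s_new.sum()                  # eigenvalue estimate for this normalization
        s_new /= k
        if np.linalg.norm(s_new - s) < tol:
            break
        s = s_new
    return k, s_new

# Toy fission matrix for a 1D slab split into 10 regions: H[i, j] is the expected
# number of fission neutrons produced in region i per fission neutron born in
# region j; nearby regions exchange more neutrons.
n = 10
dist = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
H = np.exp(-dist / 1.5)
H /= H.sum(axis=0)                       # column-normalize ...
H *= 1.01                                # ... then scale to a slightly supercritical toy system
k_eff, source = dominant_eigvec(H)
print("k-effective estimate:", round(float(k_eff), 4))
print("fission source shape:", np.round(source / source.max(), 2))
```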

  2. 5 CFR 1315.5 - Accelerated payment methods.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    § 1315.5 Accelerated payment methods. (a) A single invoice under $2,500. Payments may be made as soon as the contract, proper invoice, receipt... (b) Small business (as defined in FAR 19.001 (48 CFR 19.001)). Agencies may pay... the payment due date.

  3. 5 CFR 1315.5 - Accelerated payment methods.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 5 Administrative Personnel 3 2010-01-01 2010-01-01 false Accelerated payment methods. 1315.5 Section 1315.5 Administrative Personnel OFFICE OF MANAGEMENT AND BUDGET OMB DIRECTIVES PROMPT PAYMENT § 1315.5 Accelerated payment methods. (a) A single invoice under $2,500. Payments may be made as soon as the contract, proper invoice, receipt...

  4. Image registration for DSA quality enhancement.

    PubMed

    Buzug, T M; Weese, J

    1998-01-01

    A generalized framework for histogram-based similarity measures is presented and applied to the image-enhancement task in digital subtraction angiography (DSA). The class of differentiable, strictly convex weighting functions is identified as suitable weightings of histograms for measuring the degree of clustering that goes along with registration. With respect to computation time, the energy similarity measure is the function of choice for the registration of mask and contrast image prior to subtraction. The robustness of the energy measure is studied for geometrical image distortions like rotation and scaling. Additionally, it is investigated how the histogram binning and inhomogeneous motion inside the templates influence the quality of the similarity measure. Finally, the registration success for the automated procedure is compared with the manually shift-corrected image pair of the head. PMID:9719851
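
    As an illustration of the histogram-based idea, the sketch below scores a candidate registration by the energy (sum of squared bin probabilities) of the difference-image histogram and searches integer shifts of the mask image; the bin count, the use of the difference image, and the exhaustive shift search are assumptions made for the sketch, not details taken from the paper.

        import numpy as np

        def energy_measure(mask, contrast, bins=64):
            # Energy of the normalized gray-value histogram of the difference image.
            # The strictly convex weighting f(p) = p**2 rewards clustering of gray
            # values, which is associated here with good registration (sketch only).
            diff = contrast.astype(float) - mask.astype(float)
            hist, _ = np.histogram(diff, bins=bins)
            p = hist / hist.sum()
            return np.sum(p ** 2)

        def best_shift(mask, contrast, max_shift=5):
            # Exhaustive search over integer translations of the mask image.
            best, best_dxdy = -np.inf, (0, 0)
            for dx in range(-max_shift, max_shift + 1):
                for dy in range(-max_shift, max_shift + 1):
                    shifted = np.roll(np.roll(mask, dx, axis=0), dy, axis=1)
                    e = energy_measure(shifted, contrast)
                    if e > best:
                        best, best_dxdy = e, (dx, dy)
            return best_dxdy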

  5. EUV patterned templates with grapho-epitaxy DSA at the N5/N7 logic nodes

    NASA Astrophysics Data System (ADS)

    Gronheid, Roel; Boeckx, Carolien; Doise, Jan; Bekaert, Joost; Karageorgos, Ioannis; Ruckaert, Julien; Chan, Boon Teik; Lin, Chenxi; Zou, Yi

    2016-03-01

    In this paper, approaches are explored for combining EUV with DSA for via layer patterning at the N7 and N5 logic nodes. Simulations indicate opportunity for significant LCDU improvement at the N7 node without impacting the required exposure dose. A templated DSA process based on NXE:3300 exposed EUV pre-patterns has been developed and supports the simulations. The main point of improvement concerns pattern placement accuracy with this process. It is described how metrology contributes to the measured placement error numbers. Further optimization of metrology methods for determining local placement errors is required. Via layer patterning at the N5 logic node is also considered. On top of LCDU improvement, the combination of EUV with DSA also allows a single-mask solution to be maintained at this technology node, due to the ability of the DSA process to repair merging vias. It is shown experimentally how shaping the templates for such via multiplication helps control placement accuracy. Peanut-shaped pre-patterns, which can be printed using EUV lithography, give significantly better placement accuracy control than elliptical pre-patterns.

  6. Physical verification and manufacturing of contact/via layers using grapho-epitaxy DSA processes

    NASA Astrophysics Data System (ADS)

    Torres, J. Andres; Sakajiri, Kyohei; Fryer, David; Granik, Yuri; Ma, Yuansheng; Krasnova, Polina; Fenger, Germain; Nagahara, Seiji; Kawakami, Shinichiro; Rathsack, Benjamen; Khaira, Gurdaman; de Pablo, Juan; Ryckaert, Julien

    2014-03-01

    This paper extends the state of the art by describing the practical materials challenges, as well as approaches to minimize their impact, in the manufacture of contact/via layers using a grapho-epitaxy directed self-assembly (DSA) process. Three full designs have been analyzed from the point of view of layout constructs. A construct is an atomic and repetitive section of the layout which can be analyzed in isolation. Results indicate that DSA's main benefit is its resilience to the shape of the guiding pattern across the process window. The results suggest that directed self-assembly can still be guaranteed even with high distortion of the guiding patterns when the guiding patterns have been designed properly for the target process. Focusing on a 14nm process based on 193i lithography, we present evidence of the need for DSA compliance methods and mask synthesis tools which consider pattern dependencies of adjacent structures a few microns away. Finally, an outlook on the guidelines and challenges for DSA copolymer mixtures and processes is given, highlighting the benefits of mixtures of homopolymer and diblock copolymer in reducing the number of defects of arbitrarily placed hole configurations.

  7. Pattern transfer of directed self-assembly (DSA) patterns for CMOS device applications

    NASA Astrophysics Data System (ADS)

    Tsai, Hsin-Yu; Miyazoe, Hiroyuki; Engelmann, Sebastian; Bangsaruntip, Sarunya; Lauer, Isaac; Bucchignano, Jim; Klaus, Dave; Gignac, Lynne; Joseph, Eric; Cheng, Joy; Sanders, Dan; Guillorn, Michael

    2013-03-01

    We present a study on the optimization of etch transfer processes for circuit-relevant patterning in the sub-30 nm pitch regime using directed self-assembly (DSA) line-space patterning. This work is focused on issues that impact the patterning of thin silicon fins and gate stack materials. Plasma power, chuck temperature, and end point strategy are discussed in terms of their effect on critical dimension (CD) control and pattern fidelity. A systematic study of post-plasma-etch annealing processes shows that both CD and line edge roughness (LER) in crystalline Si features can be further reduced while maintaining a suitable geometry for scaled FinFET devices. Results from DSA patterning of gate structures featuring a high-k dielectric, a metal nitride and poly-Si gate electrode, and a SiN capping layer are also presented. We conclude with the presentation of a strategy for realizing circuit patterns from groups of DSA patterned fins. These combined results further establish the viability of DSA pattern generation as a potential method for CMOS integrated circuit patterning beyond the 10 nm node.

  8. Accelerated Test Method for Corrosion Protective Coatings Project

    NASA Technical Reports Server (NTRS)

    Falker, John; Zeitlin, Nancy; Calle, Luz

    2015-01-01

    This project seeks to develop a new accelerated corrosion test method that predicts the long-term corrosion protection performance of spaceport structure coatings as accurately and reliably as current long-term atmospheric exposure tests. This new accelerated test method will shorten the time needed to evaluate the corrosion protection performance of coatings for NASA's critical ground support structures. Lifetime prediction for spaceport structure coatings has a 5-year qualification cycle using atmospheric exposure. Current accelerated corrosion tests often provide false positives and negatives for coating performance, do not correlate to atmospheric corrosion exposure results, and do not correlate with atmospheric exposure timescales for lifetime prediction.

  9. Advanced CD-SEM metrology for pattern roughness and local placement of lamellar DSA

    NASA Astrophysics Data System (ADS)

    Kato, Takeshi; Sugiyama, Akiyuki; Ueda, Kazuhiro; Yoshida, Hiroshi; Miyazaki, Shinji; Tsutsumi, Tomohiko; Kim, JiHoon; Cao, Yi; Lin, Guanyang

    2014-04-01

    Directed self-assembly (DSA) applying chemical epitaxy is one of the promising lithographic solutions for next-generation semiconductor device manufacturing. We introduced fingerprint edge roughness (FER) as an index for evaluating the edge roughness of non-guided lamellar fingerprint patterns, and found that it correlates with the line edge roughness (LER) of the lines assembled on chemical guiding patterns. In this work, we have evaluated both FER and LER at each process step of the LiNe DSA flow utilizing PS-b-PMMA block copolymers (BCP) assembled on chemical template wafers fabricated with a focus-exposure matrix (FEM). As a result, we found the following. (1) The line widths and space distances of the DSA patterns differ slightly from each other depending on their relative position to the chemical guide patterns. An appropriate condition exists in which all lines have the same dimensions, but that condition is not always the same for the spaces. (2) LER and LWR (line width roughness) of DSA patterns depend on neither the width nor the LER of the guide patterns. (3) LWR of DSA patterns is proportional to the width roughness of the fingerprint pattern. (4) FER is influenced not only by the BCP formulation but also by its film thickness. We introduce new methods to optimize the BCP formulation and process conditions by using FER measurement and local CD variation measurement. Publisher's Note: This paper, originally published on 2 April 2014, was replaced with a corrected/revised version on 14 May 2014.

  10. Multiscale DSA simulations for efficient hotspot analysis

    NASA Astrophysics Data System (ADS)

    Hori, Yoshihiro; Yoshimoto, Kenji; Taniguchi, Takashi; Ohshima, Masahiro

    2014-03-01

    In this study, we have investigated how to link "large-scale simulations with the simplified models" to "mesoscale simulations with the detailed models." For the simplified model, we applied the so-called generalized Ohta-Kawasaki (gOK) model. Our simulation flow was implemented in two steps: (1) parallel computations of block copolymer annealing with the simplified model, and (2) detailed analysis of the defects with self-consistent field theory (SCFT). The local volumetric densities of block copolymers calculated by the simplified model were used as an input for the SCFT. The SCFT simulations were then performed under constraints that drove the density field toward the one obtained from the simplified model. Using the resultant partition functions, we were able to obtain spatial distributions of the free chain ends and the connection points of the blocks. Note that the chain conformation of the block copolymer is an important but missing component of the simplified models; this multi-scale approach is expected to be useful for further understanding the origin and stability of DSA defects.
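
    For orientation, a minimal sketch of an Ohta-Kawasaki-type free-energy evaluation on a periodic 2D grid is given below; the double-well, gradient, and FFT-evaluated long-range terms follow the standard Ohta-Kawasaki form, but the coefficients and normalization are illustrative assumptions rather than the gOK parameterization used in the paper.

        import numpy as np

        def ok_free_energy(phi, eps=1.0, sigma=1.0, dx=1.0):
            # Local part: double-well bulk term plus square-gradient term.
            bulk = 0.25 * (1.0 - phi ** 2) ** 2
            gx, gy = np.gradient(phi, dx)
            grad = 0.5 * eps ** 2 * (gx ** 2 + gy ** 2)
            local = np.sum(bulk + grad) * dx * dx

            # Long-range (Coulomb-like) term via FFT: Green's function of the
            # Laplacian is 1/|k|^2; the k = 0 mode is dropped.
            dphi = phi - phi.mean()
            kx = np.fft.fftfreq(phi.shape[0], d=dx) * 2 * np.pi
            ky = np.fft.fftfreq(phi.shape[1], d=dx) * 2 * np.pi
            k2 = kx[:, None] ** 2 + ky[None, :] ** 2
            k2[0, 0] = np.inf
            phik = np.fft.fft2(dphi)
            longrange = 0.5 * sigma * np.sum(np.abs(phik) ** 2 / k2) / phi.size * dx * dx
            return local + longrange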

  11. Method Accelerates Training Of Some Neural Networks

    NASA Technical Reports Server (NTRS)

    Shelton, Robert O.

    1992-01-01

    Three-layer networks trained faster provided two conditions are satisfied: numbers of neurons in layers are such that majority of work done in synaptic connections between input and hidden layers, and number of neurons in input layer at least as great as number of training pairs of input and output vectors. Based on modified version of back-propagation method.

  12. A simple eigenfunction convergence acceleration method for Monte Carlo

    SciTech Connect

    Booth, Thomas E

    2010-11-18

    Monte Carlo transport codes typically use a power iteration method to obtain the fundamental eigenfunction. The standard convergence rate for the power iteration method is the ratio of the first two eigenvalues, that is, k2/k1. Modifications to the power method have accelerated the convergence by explicitly calculating the subdominant eigenfunctions as well as the fundamental. Calculating the subdominant eigenfunctions requires using particles of negative and positive weights and appropriately canceling the negative and positive weight particles. Incorporating both negative weights and a ± weight cancellation requires a significant change to current transport codes. This paper presents an alternative convergence acceleration method that does not require modifying the transport codes to deal with the problems associated with tracking and cancelling particles of ± weights. Instead, only positive weights are used in the acceleration method.
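
    A minimal sketch of the unaccelerated power iteration that the paper starts from is shown below; the fission-matrix formulation and the convergence tolerance are assumptions made for illustration, and the loop converges at a rate governed by the dominance ratio k2/k1.

        import numpy as np

        def power_iteration(F, tol=1e-8, max_iter=10000):
            # Plain power iteration for the dominant eigenpair of a fission matrix F.
            # When k2/k1 is close to 1 (high dominance ratio) this converges slowly,
            # which is the situation the acceleration methods address.
            s = np.ones(F.shape[0]) / F.shape[0]   # initial fission source guess
            k = 1.0
            for _ in range(max_iter):
                s_new = F @ s
                k_new = np.linalg.norm(s_new, 1)   # eigenvalue estimate
                s_new /= k_new
                if np.linalg.norm(s_new - s, 1) < tol:
                    return k_new, s_new
                k, s = k_new, s_new
            return k, s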

  13. Miniature plasma accelerating detonator and method of detonating insensitive materials

    DOEpatents

    Bickes, R.W. Jr.; Kopczewski, M.R.; Schwarz, A.C.

    1985-01-04

    The invention is a detonator for use with high explosives. The detonator comprises a pair of parallel rail electrodes connected to a power supply. By shorting the electrodes at one end, a plasma is generated and accelerated toward the other end to impact against explosives. A projectile can be arranged between the rails to be accelerated by the plasma. An alternative arrangement uses a coaxial electrode construction. The invention also relates to a method of detonating explosives. 3 figs.

  14. Miniature plasma accelerating detonator and method of detonating insensitive materials

    DOEpatents

    Bickes, Jr., Robert W.; Kopczewski, Michael R.; Schwarz, Alfred C.

    1986-01-01

    The invention is a detonator for use with high explosives. The detonator comprises a pair of parallel rail electrodes connected to a power supply. By shorting the electrodes at one end, a plasma is generated and accelerated toward the other end to impact against explosives. A projectile can be arranged between the rails to be accelerated by the plasma. An alternative arrangement uses a coaxial electrode construction. The invention also relates to a method of detonating explosives.

  15. GPU accelerated marine data visualization method

    NASA Astrophysics Data System (ADS)

    Li, Bo; Chen, Ge; Tian, Fenglin; Shao, Baomin; Ji, Pengbo

    2014-12-01

    The study of marine data visualization is of great value. Marine data, due to their large scale, random variation, and multi-resolution nature, are hard to visualize and analyze. Nowadays, constructing an ocean model and visualizing model results have become some of the most important research topics of `Digital Ocean'. In this paper, a spherical ray casting method is developed to improve the traditional ray-casting algorithm and to make efficient use of GPUs. For ocean current data, a 3D view-dependent line integral convolution method is used, in which the spatial frequency is adapted according to the distance from the camera. The study is based on a 3D virtual reality and visualization engine, namely the VV-Ocean. Some interactive operations are also provided to highlight interesting structures and the characteristics of the volumetric data. Finally, the marine data gathered in the East China Sea are displayed and analyzed. The results show that the method meets the requirements of real-time and interactive rendering.

  16. Acceleration of the transcorrelated method for solids

    NASA Astrophysics Data System (ADS)

    Sodeyama, Keitaro; Ochi, Masayuki; Sakuma, Rei; Tsuneyuki, Shinji

    2010-03-01

    To calculate the electronic structures of solids including electron correlation effects, we have developed the transcorrelated (TC) method. In the TC method, a many-body wave function is represented by a correlated wave function F φ, where φ is a single Slater determinant and F is a Jastrow factor, F = exp[-Σ_{i<j} u(x_i, x_j)]. Calculations with the TC method were feasible enough to determine the lattice constants and bulk moduli. However, they required a large computational time for solids, scaling as O(Nk^3 Nband^4). In this presentation, we will demonstrate that the CPU cost can be reduced by orders of magnitude after revising the algorithm.

  17. Acceleration of Meshfree Radial Point Interpolation Method on Graphics Hardware

    SciTech Connect

    Nakata, Susumu

    2008-09-01

    This article describes a parallel computational technique for accelerating the radial point interpolation method (RPIM), a meshfree method, using graphics hardware. RPIM is a meshfree partial differential equation solver that does not require a mesh structure for the analysis targets. In the method, the computation is divided into small processes suited to the parallel architecture of the graphics hardware and executed in a single-instruction multiple-data manner.

  18. Method for phosphate-accelerated bioremediation

    DOEpatents

    Looney, Brian B.; Lombard, Kenneth H.; Hazen, Terry C.; Pfiffner, Susan M.; Phelps, Tommy J.; Borthen, James W.

    1996-01-01

    An apparatus and method for supplying a vapor-phase nutrient to contaminated soil for in situ bioremediation. The apparatus includes a housing adapted for containing a quantity of the liquid nutrient, a conduit in fluid communication with the interior of the housing, means for causing a gas to flow through the conduit, and means for contacting the gas with the liquid so that a portion thereof evaporates and mixes with the gas. The mixture of gas and nutrient vapor is delivered to the contaminated site via a system of injection and extraction wells configured to the site. The mixture has a partial pressure of vaporized nutrient that is no greater than the vapor pressure of the liquid. If desired, the nutrient and/or the gas may be heated to increase the vapor pressure and the nutrient concentration of the mixture. Preferably, the nutrient is a volatile, substantially nontoxic and nonflammable organic phosphate that is a liquid at environmental temperatures, such as triethyl phosphate or tributyl phosphate.

  19. Nonlinear Acceleration Methods for Even-Parity Neutron Transport

    SciTech Connect

    W. J. Martin; C. R. E. De Oliveira; H. Park

    2010-05-01

    Convergence acceleration methods for even-parity transport were developed that have the potential to speed up transport calculations and provide a natural avenue for an implicitly coupled multiphysics code. An investigation was performed into the acceleration properties of the introduction of a nonlinear quasi-diffusion-like tensor in linear and nonlinear solution schemes. Using the tensor reduced matrix as a preconditioner for the conjugate gradients method proves highly efficient and effective. The results for the linear and nonlinear case serve as the basis for further research into the application in a full three-dimensional spherical-harmonics even-parity transport code. Once moved into the nonlinear solution scheme, the implicit coupling of the convergence accelerated transport method into codes for other physics can be done seamlessly, providing an efficient, fully implicitly coupled multiphysics code with high order transport.
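
    The preconditioned conjugate-gradient step described above can be sketched generically as follows; here M_inv stands for any callable applying an approximate inverse (in the paper it would be built from the tensor-reduced matrix), and the routine is a textbook PCG loop rather than the authors' implementation.

        import numpy as np

        def pcg(A, b, M_inv, tol=1e-10, max_iter=1000):
            # Preconditioned conjugate gradients for A x = b, A symmetric positive definite.
            x = np.zeros(len(b), dtype=float)
            r = b - A @ x
            z = M_inv(r)
            p = z.copy()
            rz = r @ z
            for _ in range(max_iter):
                Ap = A @ p
                alpha = rz / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                if np.linalg.norm(r) < tol:
                    break
                z = M_inv(r)                 # apply the preconditioner to the new residual
                rz_new = r @ z
                p = z + (rz_new / rz) * p
                rz = rz_new
            return x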

  20. Process highlights to enhance DSA contact patterning performances

    NASA Astrophysics Data System (ADS)

    Gharbi, A.; Tiron, R.; Argoud, M.; Chamiot-Maitral, G.; Fouquet, A.; Lapeyre, C.; Pimenta Barros, P.; Sarrazin, A.; Servin, I.; Delachat, F.; Bos, S.; Bérard-Bergery, S.; Hazart, J.; Chevalier, X.; Nicolet, C.; Navarro, C.; Cayrefourcq, I.; Bouanani, S.; Monget, C.

    2016-03-01

    In this paper, we focus on the directed self-assembly (DSA) application for contact hole (CH) patterning using polystyrene-b-poly(methyl methacrylate) (PS-b-PMMA) block copolymers (BCPs). By employing the DSA planarization process, we highlight the DSA advantages for CH shrink, repair, and multiplication, which are strongly needed to push the limits of currently used lithography. Meanwhile, we overcome the issue of pattern density-related defects that are encountered with the commonly used graphoepitaxy process flow. Our study also aims to evaluate DSA performance as a function of material properties and process conditions by monitoring the main key manufacturing process parameters: CD uniformity (CDU), placement error (PE), and defectivity (Hole Open Yield = HOY). Concerning the process, it is shown that control of surface affinity and optimization of the self-assembly annealing conditions significantly enhance CDU and PE. Regarding material properties, we show that the best BCP composition for CH patterning is a PS/PMMA total weight ratio of 70/30. Moreover, it is found that increasing the PS homopolymer content from 0.2% to 1% has no impact on DSA performance. Using a C35 BCP (cylinder-forming BCP of natural period L0 = 35nm), high DSA performance is achieved: CDU-3σ = 1.2nm, PE-3σ = 1.2nm, and HOY = 100%. The stability of the DSA process is also demonstrated through process follow-up on both patterned and unpatterned surfaces over several weeks. Finally, simulation results using a phase-field model based on the Ohta-Kawasaki energy functional are presented and discussed with regard to experiments.

  1. Fluctuation Flooding Method (FFM) for accelerating conformational transitions of proteins

    NASA Astrophysics Data System (ADS)

    Harada, Ryuhei; Takano, Yu; Shigeta, Yasuteru

    2014-03-01

    A powerful conformational sampling method for accelerating structural transitions of proteins, the "Fluctuation Flooding Method (FFM)," is proposed. In FFM, cycles of the following steps enhance the transitions: (i) extraction of largely fluctuating snapshots along anisotropic modes obtained from trajectories of multiple independent molecular dynamics (MD) simulations and (ii) conformational re-sampling of the snapshots via re-generation of initial velocities when re-starting the MD simulations. In an application to bacteriophage T4 lysozyme, FFM successfully accelerated the open-closed transition within 6 ns of simulation starting solely from the open state, whereas a 1-μs canonical MD simulation failed to sample such a rare event.

  2. Just in Time DSA the Hanford Nuclear Safety Basis Strategy

    SciTech Connect

    JACKSON, M.W.

    2002-06-01

    The U.S. Department of Energy, Richland Operations Office (RL) is responsible for 30 hazard category 2 and 3 nuclear facilities that are operated by its prime contractors, Fluor Hanford, Incorporated (FHI), Bechtel Hanford, Incorporated (BHI) and Pacific Northwest National Laboratory (PNNL). The publication of Title 10, Code of Federal Regulations, Part 830, Subpart B, Safety Basis Requirements (the Rule) in January 2001 requires that the Documented Safety Analyses (DSA) for these facilities be reviewed against the requirements of the Rule. Those DSAs that do not meet the requirements must either be upgraded to satisfy the Rule, or an exemption must be obtained. RL and its prime contractors have developed a Nuclear Safety Strategy that provides a comprehensive approach for supporting RL's efforts to meet its long-term objectives for hazard category 2 and 3 facilities while also meeting the requirements of the Rule. This approach will result in a reduction of the total number of safety basis documents that must be developed and maintained to support the remaining mission and closure of the Hanford Site and ensure that the documentation that must be developed will support: Compliance with the Rule; A ''Just-In-Time'' approach to development of Rule-compliant safety bases supported by temporary exemptions; and Consolidation of safety basis documents that support multiple facilities with a common mission (e.g. decontamination, decommissioning and demolition [DD&D], waste management, surveillance and maintenance). This strategy provides a clear path to transition the safety bases for the various Hanford facilities from support of operation and stabilization missions through DD&D to accelerate closure. This ''Just-In-Time'' Strategy can also be tailored for other DOE Sites, creating the potential for large cost savings and schedule reductions throughout the DOE complex.

  3. Feasibility of reduced-dose 3D/4D-DSA using a weighted edge preserving filter

    NASA Astrophysics Data System (ADS)

    Oberstar, Erick L.; Speidel, Michael A.; Davis, Brian J.; Strother, Charles; Mistretta, Charles

    2016-03-01

    A conventional 3D/4D digital subtraction angiogram (DSA) requires two rotational acquisitions (mask and fill) to compute the log-subtracted projections that are used to reconstruct a 3D/4D volume. Since all of the vascular information is contained in the fill acquisition, it is hypothesized that it is possible to reduce the x-ray dose of the mask acquisition substantially and still obtain subtracted projections adequate to reconstruct a 3D/4D volume with noise level comparable to a full dose acquisition. A full dose mask and fill acquisition were acquired from a clinical study to provide a known full dose reference reconstruction. Gaussian noise was added to the mask acquisition to simulate a mask acquisition acquired at 10% relative dose. Noise in the low-dose mask projections was reduced with a weighted edge preserving (WEP) filter designed to preserve bony edges while suppressing noise. 2D log-subtracted projections were computed from the filtered low-dose mask and full-dose fill projections, and then 3D/4D-DSA reconstruction algorithms were applied. Additional bilateral filtering was applied to the 3D volumes. The signal-to-noise ratio measured in the filtered 3D/4D-DSA volumes was compared to the full dose case. The average ratio of filtered low-dose SNR to full-dose SNR was 1.07 for the 3D-DSA and 1.05 for the 4D-DSA, indicating the method is a feasible approach to restoring SNR in DSA scans acquired with a low-dose mask. The method was also tested in a phantom study with full dose fill and 22% dose mask.
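
    A toy stand-in for the dose-reduction experiment is sketched below: Gaussian noise emulates the low-dose mask, a plain Gaussian filter stands in for the weighted edge-preserving filter, and the reported quantity is the ratio of filtered low-dose SNR to full-dose SNR; the image sizes, noise levels, ROI choices, and the filter itself are all assumptions, not values from the study.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        rng = np.random.default_rng(0)
        anatomy = 100.0
        signal = np.pad(np.ones((32, 32)) * 20.0, 112)                   # synthetic "vessel"
        mask_full = anatomy + rng.normal(0.0, 2.0, size=(256, 256))      # full-dose mask
        fill = anatomy + signal + rng.normal(0.0, 2.0, size=(256, 256))  # full-dose fill
        mask_low = anatomy + rng.normal(0.0, 6.0, size=(256, 256))       # noisier low-dose mask
        mask_low_filt = gaussian_filter(mask_low, sigma=1.5)             # surrogate denoising

        def snr(sub):
            # Mean vessel signal over background standard deviation (one common definition).
            return sub[112:144, 112:144].mean() / sub[:64, :64].std()

        ratio = snr(fill - mask_low_filt) / snr(fill - mask_full)
        print(f"filtered low-dose SNR / full-dose SNR = {ratio:.2f}")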

  4. 5 CFR 1315.5 - Accelerated payment methods.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... the payment due date. (b) Small business (as defined in FAR 19.001 (48 CFR 19.001)). Agencies may pay... 5 Administrative Personnel 3 2012-01-01 2012-01-01 false Accelerated payment methods. 1315.5 Section 1315.5 Administrative Personnel OFFICE OF MANAGEMENT AND BUDGET OMB DIRECTIVES PROMPT...

  5. 5 CFR 1315.5 - Accelerated payment methods.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... the payment due date. (b) Small business (as defined in FAR 19.001 (48 CFR 19.001)). Agencies may pay... 5 Administrative Personnel 3 2013-01-01 2013-01-01 false Accelerated payment methods. 1315.5 Section 1315.5 Administrative Personnel OFFICE OF MANAGEMENT AND BUDGET OMB DIRECTIVES PROMPT...

  6. An Accelerated Method for Testing Soldering Tendency of Core Pins

    SciTech Connect

    Han, Qingyou; Xu, Hanbing; Ried, Paul; Olson, Paul

    2010-01-01

    An accelerated method for testing die soldering has been developed. High-intensity ultrasonic vibration has been used to simulate die casting conditions such as high pressure and high impingement speed of molten metal on the pin. The soldering tendency of steels and coated pins has been examined. The results indicate that in the low carbon steel/Al system, the onset of soldering is 60 times faster with ultrasonic vibration than without it. In the H13/A380 system, the onset of the soldering reaction is accelerated by a factor of 30-60. Coating significantly reduces the soldering tendency of the core pins.

  7. Myocardial ischemia during intravenous DSA in patients with cardiac disease

    SciTech Connect

    Hesselink, J.R.; Hayman, L.A.; Chung, K.J.; McGinnis, B.D.; Davis, K.R.; Taveras, J.M.

    1984-12-01

    A prospective study was performed for 48 patients who had histories of angina and were referred for digital subtraction angiography (DSA). Cardiac disease was graded according to the American Heart Association (AHA) functional classification system. Each patient received 2-5 injections of 40-ml diatrizoate meglumine and diatrizoate sodium at 15 ml per second in the superior vena cava. Of the 28 patients in functional Classes I or II, 11% had angina and 32% had definite ischemic ECG changes after the DSA injections. Of the patients in functional Class III, 63% had angina and 58% had definite ischemic ECG changes after the injections. These observed cardiac effects following bolus injections of hypertonic ionic contrast media indicate that special precautions are necessary when performing intravenous DSA examinations on this group of high-risk patients.

  8. Acceleration of reverse analysis method using hyperbolic activation function

    NASA Astrophysics Data System (ADS)

    Pwasong, Augustine; Sathasivam, Saratha

    2015-10-01

    The hyperbolic activation function is examined for its ability to accelerate data mining performed with a technique named the reverse analysis method. In this paper, we describe how a Hopfield network performs better with the hyperbolic activation function and is able to induce logical rules from a large database using the reverse analysis method: given the values of the connections of a network, we can hope to determine what logical rules are entrenched in the database. We limit our analysis to Horn clauses.

  9. Template affinity role in CH shrink by DSA planarization

    NASA Astrophysics Data System (ADS)

    Tiron, R.; Gharbi, A.; Pimenta Barros, P.; Bouanani, S.; Lapeyre, C.; Bos, S.; Fouquet, A.; Hazart, J.; Chevalier, X.; Argoud, M.; Chamiot-Maitral, G.; Barnola, S.; Monget, C.; Farys, V.; Berard-Bergery, S.; Perraud, L.; Navarro, C.; Nicolet, C.; Hadziioannou, G.; Fleury, G.

    2015-03-01

    Density multiplication and contact shrinkage of patterned templates by directed self-assembly (DSA) of block copolymers (BCP) stand out as promising alternatives for overcoming the limitations of conventional lithography. The main goal of this paper is to investigate the potential of DSA to address contact and via level patterning with high resolution by performing either CD shrink or contact multiplication. Different DSA processes are benchmarked based on several success criteria, such as CD control, defectivity (missing holes), and placement control. More specifically, the methodology employed to measure DSA contact overlay and the impact of process parameters on placement error control are detailed. Using the 300mm pilot line available at LETI and Arkema's materials, our approach is based on the graphoepitaxy of PS-b-PMMA block copolymers. Our integration scheme, depicted in figure 1, is based on BCP self-assembly inside organic hard mask guiding patterns obtained using 193i lithography. The process is monitored at different steps: the generation of guiding patterns, the directed self-assembly of block copolymers and PMMA removal, and finally the transfer of PS patterns into the metallic underlayer by plasma etching. Furthermore, several process flows are investigated by tuning different material-related parameters, such as the intrinsic period of the block copolymer or its interaction with the guiding pattern surface (sidewall and bottom-side affinity). The final lithographic performance is finely optimized as a function of the self-assembly process parameters, such as the film thickness and bake (temperature and time). Finally, DSA performance as a function of guiding pattern density is investigated. Thus, for the best integration approach, defect-free isolated and dense patterns for both contact shrink and multiplication (doubling and more) have been achieved on the same processed wafer. These results show that contact hole shrink and

  10. The use of eDR-71xx for DSA defect review and automated classification

    NASA Astrophysics Data System (ADS)

    Pathangi, Hari; Van Den Heuvel, Dieter; Bayana, Hareen; Bouckou, Loemba; Brown, Jim; Parisi, Paolo; Gosain, Rohan

    2015-03-01

    The Liu-Nealey (LiNe) chemo-epitaxy directed self-assembly flow has been screened thoroughly in the past years in terms of defects. Various types of DSA-specific defects have been identified, and best-known methods have been developed to obtain sufficient S/N for defect inspection, to help understand the root causes of the various defect types, and to reduce defect levels in preparation for high-volume manufacturing. Within this process development, SEM review and defect classification play a key role. This paper provides an overview of the challenges that DSA brings to this metrology aspect as well, and presents successful solutions for automated defect review. In addition, a new Real Time Automated Defect Classification (RT-ADC) is introduced that can save up to 90% of the time required for manual defect classification. This will enable much larger sampling for defect review, resulting in a better understanding of the signatures and behaviors of various DSA-specific defect types, such as dislocations, 1-period bridges, and line wiggling.

  11. Parallel Monte Carlo Synthetic Acceleration methods for discrete transport problems

    NASA Astrophysics Data System (ADS)

    Slattery, Stuart R.

    This work researches and develops Monte Carlo Synthetic Acceleration (MCSA) methods as a new class of solution techniques for discrete neutron transport and fluid flow problems. Monte Carlo Synthetic Acceleration methods use a traditional Monte Carlo process to approximate the solution to the discrete problem as a means of accelerating traditional fixed-point methods. To apply these methods to neutronics and fluid flow and determine the feasibility of these methods on modern hardware, three complementary research and development exercises are performed. First, solutions to the SPN discretization of the linear Boltzmann neutron transport equation are obtained using MCSA with a difficult criticality calculation for a light water reactor fuel assembly used as the driving problem. To enable MCSA as a solution technique a group of modern preconditioning strategies are researched. MCSA when compared to conventional Krylov methods demonstrated improved iterative performance over GMRES by converging in fewer iterations when using the same preconditioning. Second, solutions to the compressible Navier-Stokes equations were obtained by developing the Forward-Automated Newton-MCSA (FANM) method for nonlinear systems based on Newton's method. Three difficult fluid benchmark problems in both convective and driven flow regimes were used to drive the research and development of the method. For 8 out of 12 benchmark cases, it was found that FANM had better iterative performance than the Newton-Krylov method by converging the nonlinear residual in fewer linear solver iterations with the same preconditioning. Third, a new domain decomposed algorithm to parallelize MCSA aimed at leveraging leadership-class computing facilities was developed by utilizing parallel strategies from the radiation transport community. The new algorithm utilizes the Multiple-Set Overlapping-Domain strategy in an attempt to reduce parallel overhead and add a natural element of replication to the algorithm. It

  12. Distributed Minimal Residual (DMR) method for acceleration of iterative algorithms

    NASA Technical Reports Server (NTRS)

    Lee, Seungsoo; Dulikravich, George S.

    1991-01-01

    A new method for enhancing the convergence rate of iterative algorithms for the numerical integration of systems of partial differential equations was developed. It is termed the Distributed Minimal Residual (DMR) method and it is based on general Krylov subspace methods. The DMR method differs from the Krylov subspace methods by the fact that the iterative acceleration factors are different from equation to equation in the system. At the same time, the DMR method can be viewed as an incomplete Newton iteration method. The DMR method was applied to Euler equations of gas dynamics and incompressible Navier-Stokes equations. All numerical test cases were obtained using either explicit four stage Runge-Kutta or Euler implicit time integration. The formulation for the DMR method is general in nature and can be applied to explicit and implicit iterative algorithms for arbitrary systems of partial differential equations.

  13. Reproduction of natural corrosion by accelerated laboratory testing methods

    SciTech Connect

    Luo, J.S.; Wronkiewicz, D.J.; Mazer, J.J.; Bates, J.K.

    1996-05-01

    Various laboratory corrosion tests have been developed to study the behavior of glass waste forms under conditions similar to those expected in an engineered repository. The data generated by laboratory experiments are useful for understanding corrosion mechanisms and for developing chemical models to predict the long-term behavior of glass. However, it is challenging to demonstrate that these test methods produce results that can be directly related to projecting the behavior of glass waste forms over time periods of thousands of years. One method to build confidence in the applicability of the test methods is to study the natural processes that have been taking place over very long periods in environments similar to those of the repository. In this paper, we discuss whether accelerated testing methods alter the fundamental mechanisms of glass corrosion by comparing the alteration patterns that occur in naturally altered glasses with those that occur in accelerated laboratory environments. This comparison is done by (1) describing the alteration of glasses reacted in nature over long periods of time and in accelerated laboratory environments and (2) establishing the reaction kinetics of naturally altered glass and laboratory reacted glass waste forms.

  14. Method for generating a plasma wave to accelerate electrons

    DOEpatents

    Umstadter, D.; Esarey, E.; Kim, J.K.

    1997-06-10

    The invention provides a method and apparatus for generating large amplitude nonlinear plasma waves, driven by an optimized train of independently adjustable, intense laser pulses. In the method, optimal pulse widths, interpulse spacing, and intensity profiles of each pulse are determined for each pulse in a series of pulses. A resonant region of the plasma wave phase space is found where the plasma wave is driven most efficiently by the laser pulses. The accelerator system of the invention comprises several parts: the laser system, with its pulse-shaping subsystem; the electron gun system, also called beam source, which preferably comprises photo cathode electron source and RF-LINAC accelerator; electron photo-cathode triggering system; the electron diagnostics; and the feedback system between the electron diagnostics and the laser system. The system also includes plasma source including vacuum chamber, magnetic lens, and magnetic field means. The laser system produces a train of pulses that has been optimized to maximize the axial electric field amplitude of the plasma wave, and thus the electron acceleration, using the method of the invention. 21 figs.

  15. Method for generating a plasma wave to accelerate electrons

    DOEpatents

    Umstadter, Donald; Esarey, Eric; Kim, Joon K.

    1997-01-01

    The invention provides a method and apparatus for generating large amplitude nonlinear plasma waves, driven by an optimized train of independently adjustable, intense laser pulses. In the method, optimal pulse widths, interpulse spacing, and intensity profiles of each pulse are determined for each pulse in a series of pulses. A resonant region of the plasma wave phase space is found where the plasma wave is driven most efficiently by the laser pulses. The accelerator system of the invention comprises several parts: the laser system, with its pulse-shaping subsystem; the electron gun system, also called beam source, which preferably comprises photo cathode electron source and RF-LINAC accelerator; electron photo-cathode triggering system; the electron diagnostics; and the feedback system between the electron diagnostics and the laser system. The system also includes plasma source including vacuum chamber, magnetic lens, and magnetic field means. The laser system produces a train of pulses that has been optimized to maximize the axial electric field amplitude of the plasma wave, and thus the electron acceleration, using the method of the invention.

  16. Half-range acceleration for one-dimensional transport problems

    SciTech Connect

    Zika, M.R.; Larsen, E.W.

    1998-12-31

    Researchers have devoted considerable effort to developing acceleration techniques for transport iterations in highly diffusive problems. The advantages and disadvantages of source iteration, rebalance, diffusion synthetic acceleration (DSA), transport synthetic acceleration (TSA), and projection acceleration methods are documented in the literature and will not be discussed here except to note that no single method has proven to be applicable to all situations. Here, the authors describe a new acceleration method that is based solely on transport sweeps, is algebraically linear (and is therefore amenable to a Fourier analysis), and yields a theoretical spectral radius bounded by one-third for all cases. This method does not introduce spatial differencing difficulties (as is the case for DSA) nor does its theoretical performance degrade as a function of mesh and material properties (as is the case for TSA). Practical simulations of the new method agree with the theoretical predictions, except for scattering ratios very close to unity. At this time, they believe that the discrepancy is due to the effect of boundary conditions. This is discussed further.

  17. Etch challenges for DSA implementation in CMOS via patterning

    NASA Astrophysics Data System (ADS)

    Pimenta Barros, P.; Barnola, S.; Gharbi, A.; Argoud, M.; Servin, I.; Tiron, R.; Chevalier, X.; Navarro, C.; Nicolet, C.; Lapeyre, C.; Monget, C.; Martinez, E.

    2014-03-01

    This paper reports on the etch challenges to overcome for the implementation of PS-b-PMMA block copolymer directed self-assembly (DSA) at the CMOS via patterning level. Our process is based on a graphoepitaxy approach, employing an industrial PS-b-PMMA block copolymer (BCP) from Arkema with a cylindrical morphology. The process consists of the following steps: a) DSA of block copolymers inside guiding patterns, b) PMMA removal, c) brush layer opening, and finally d) PS pattern transfer into typical MEOL or BEOL stacks. All results presented here were obtained on Leti's 300mm DSA pilot line. The first etch challenge to overcome for BCP transfer is removing all PMMA selectively to the PS block. In our process baseline, an acetic acid treatment is carried out to develop the PMMA domains. However, this wet development has shown some limitations in terms of resist compatibility and will not be appropriate for lamellar BCPs. That is why we also investigate the possibility of removing the PMMA by dry etching only. In this work the potential of dry PMMA removal using CO-based chemistries is shown and compared to wet development. The advantages and limitations of each approach are reported. The second crucial step is the etching of the brush layer (PS-r-PMMA) through a PS mask. We have optimized this step in order to preserve the PS patterns in terms of CD, hole features, and film thickness. Several integration flows with complex stacks are explored for contact shrinking by DSA. A study of CD uniformity was conducted to evaluate the capabilities of the DSA approach after graphoepitaxy and after etching.

  18. 300mm pilot line DSA contact hole process stability

    NASA Astrophysics Data System (ADS)

    Argoud, M.; Servin, I.; Gharbi, A.; Pimenta Barros, P.; Jullian, K.; Sanche, M.; Chamiot-Maitral, G.; Barnola, S.; Tiron, R.; Navarro, C.; Chevalier, X.; Nicolet, C.; Fleury, G.; Hadziioannou, G.; Asai, M.; Pieczulewski, C.

    2014-03-01

    Directed Self-Assembly (DSA) is today a credible alternative lithographic technology for the semiconductor industry [1]. In the coming years, DSA integration could become a standard complementary step to other lithographic techniques (193nm immersion, e-beam, extreme ultraviolet). Its main advantages are high pattern resolution (down to 10nm), the capability to decrease initial pattern edge roughness [2], the absorption of guide pattern size variation, no requirement for a high-resolution mask, and the use of standard fab equipment (tracks and etch tools). The potential of DSA must next be confirmed as viable for high-volume manufacturing. Developments are necessary to transfer this technology to 300mm wafers in order to demonstrate semiconductor fab compatibility [3-7]. The challenges especially concern the stability, both uniformity and defectivity, of the entire process, including tools and block copolymer (BCP) materials. To investigate DSA process stability, a 300mm pilot line with a DSA-dedicated track (SOKUDO DUO) is used at CEA-Leti. BCP morphologies with PMMA cylinders in a PS matrix are investigated (about 35nm natural period). BCP self-assembly in unpatterned and patterned (graphoepitaxy) surface configurations is considered in this study. The unpatterned configuration is initially used for process optimization to fix a process of record. This process of record is then monitored with a follow-up in order to validate its stability. Step optimization is applied to patterned (graphoepitaxy) configurations for the contact hole patterning application. A process window for the contact hole shrink process is defined, and process stability (CD uniformity and defectivity related to BCP lithography) is investigated.

  19. Convergence acceleration of the Proteus computer code with multigrid methods

    NASA Technical Reports Server (NTRS)

    Demuren, A. O.; Ibraheem, S. O.

    1992-01-01

    Presented here is the first part of a study to implement convergence acceleration techniques based on the multigrid concept in the Proteus computer code. A review is given of previous studies on the implementation of multigrid methods in computer codes for compressible flow analysis. Also presented is a detailed stability analysis of upwind and central-difference based numerical schemes for solving the Euler and Navier-Stokes equations. Results are given of a convergence study of the Proteus code on computational grids of different sizes. The results presented here form the foundation for the implementation of multigrid methods in the Proteus code.

  20. Surgical Methods for the Acceleration of the Orthodontic Tooth Movement.

    PubMed

    Almpani, Konstantinia; Kantarci, Alpdogan

    2016-01-01

    Surgical techniques for the acceleration of orthodontic tooth movement have been tested for more than 100 years in clinical practice. Since the original methods were extremely invasive and associated with increased tooth morbidity and various other drawbacks, research in this field has followed an episodic trend. Modern approaches represent a well-refined strategy in which the concept of the bony block has been abandoned and only the cortical plate around the teeth undergoing orthodontic movement is involved. Selective alveolar decortication has been a reproducible gold standard to this end. Its proposed mechanism is the induction of rapid orthodontic tooth movement through the involvement of the periodontal ligament. More recent techniques include further refinement of this procedure through less invasive techniques such as the use of piezoelectricity and corticision. This chapter focuses on the evolution of the surgical approaches and the mechanistic concepts underlying the biological process during surgically accelerated orthodontic tooth movement. PMID:26599122

  1. Development of an artificial climatic complex accelerated corrosion tester and investigation of complex accelerated corrosion test methods

    SciTech Connect

    Li, J.; Li, M.; Sun, Z. )

    1999-05-01

    During recent decades, accelerated corrosion test equipment and methods simulating atmospheric corrosion have been developed to incorporate the many factors involved in complex accelerated corrosion. A new accelerated corrosion tester was developed to simulate various kinds of atmospheric corrosion environments. The equipment can simulate atmospheric corrosion environments with up to eight factors and can carry out 18 kinds of standard corrosion and environmental tests.

  2. Convergence acceleration of the Proteus computer code with multigrid methods

    NASA Technical Reports Server (NTRS)

    Demuren, A. O.; Ibraheem, S. O.

    1995-01-01

    This report presents the results of a study to implement convergence acceleration techniques based on the multigrid concept in the two-dimensional and three-dimensional versions of the Proteus computer code. The first section presents a review of the relevant literature on the implementation of the multigrid methods in computer codes for compressible flow analysis. The next two sections present detailed stability analysis of numerical schemes for solving the Euler and Navier-Stokes equations, based on conventional von Neumann analysis and the bi-grid analysis, respectively. The next section presents details of the computational method used in the Proteus computer code. Finally, the multigrid implementation and applications to several two-dimensional and three-dimensional test problems are presented. The results of the present study show that the multigrid method always leads to a reduction in the number of iterations (or time steps) required for convergence. However, there is an overhead associated with the use of multigrid acceleration. The overhead is higher in 2-D problems than in 3-D problems, thus overall multigrid savings in CPU time are in general better in the latter. Savings of about 40-50 percent are typical in 3-D problems, but they are about 20-30 percent in large 2-D problems. The present multigrid method is applicable to steady-state problems and is therefore ineffective in problems with inherently unstable solutions.

  3. A Ray Casting Accelerated Method of Segmented Regular Volume Data

    NASA Astrophysics Data System (ADS)

    Zhu, Min; Guo, Ming; Wang, Liting; Dai, Yujin

    The volume data fields constructed from industrial computed tomography (ICT) images of large-scale defense industry products are large, and empty voxels occupy only a small fraction of them, so existing ray-casting acceleration methods are not very effective. In 3D-visualization fault diagnosis of such products, only part of the information in the volume data field helps the inspector locate faults inside the product, and the computational cost increases greatly if all of the volume data are reconstructed in 3D. A new ray-casting acceleration method based on segmented volume data is therefore proposed. A segmented-information volume data field is built from the segmentation result. Following the construction approach of existing hierarchical volume data structures, a hierarchical volume data structure based on the segmented information is constructed. With this structure, the construction parts selected by the user are identified automatically during ray casting; the other parts are treated as empty voxels, so the sampling step is adjusted dynamically, the number of sampling points is decreased, and the volume rendering speed is improved. Experimental results demonstrate the high efficiency and good display performance of the proposed method.
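
    The dynamic-step idea can be sketched as follows: a boolean mask of the user-selected construction parts drives the sampling step, so rays take coarse steps through non-selected (treated-as-empty) regions and fine steps inside the parts of interest; the step lengths, transfer function, and simple front-to-back compositing are illustrative assumptions rather than the paper's hierarchical implementation.

        import numpy as np

        def march_ray(volume, selected, origin, direction,
                      fine_step=0.5, coarse_step=4.0, n_max=2000):
            # `selected` is a boolean mask of the user-chosen construction parts;
            # non-selected voxels are skipped with a large step as if they were empty.
            pos = np.asarray(origin, float)
            d = np.asarray(direction, float)
            d /= np.linalg.norm(d)
            color, alpha = 0.0, 0.0
            for _ in range(n_max):
                idx = tuple(np.clip(pos.astype(int), 0, np.array(volume.shape) - 1))
                if selected[idx]:
                    sample = volume[idx]              # fine sampling inside parts of interest
                    a = min(1.0, sample)              # toy transfer function
                    color += (1.0 - alpha) * a * sample
                    alpha += (1.0 - alpha) * a
                    if alpha > 0.99:
                        break
                    pos += fine_step * d
                else:
                    pos += coarse_step * d            # skip "empty" region with a large step
                if np.any(pos < 0) or np.any(pos >= volume.shape):
                    break
            return color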

  4. Analytic Method to Estimate Particle Acceleration in Flux Ropes

    NASA Technical Reports Server (NTRS)

    Guidoni, S. E.; Karpen, J. T.; DeVore, C. R.

    2015-01-01

    The mechanism that accelerates particles to the energies required to produce the observed high-energy emission in solar flares is not well understood. Drake et al. (2006) proposed a kinetic mechanism for accelerating electrons in contracting magnetic islands formed by reconnection. In this model, particles that gyrate around magnetic field lines transit from island to island, increasing their energy by Fermi acceleration in those islands that are contracting. Based on these ideas, we present an analytic model to estimate the energy gain of particles orbiting around field lines inside a flux rope (2.5D magnetic island). We calculate the change in the velocity of the particles as the flux rope evolves in time. The method assumes a simple profile for the magnetic field of the evolving island; it can be applied to any case where flux ropes are formed. In our case, the flux-rope evolution is obtained from our recent high-resolution, compressible 2.5D MHD simulations of breakout eruptive flares. The simulations allow us to resolve in detail the generation and evolution of large-scale flux ropes as a result of sporadic and patchy reconnection in the flare current sheet. Our results show that the initial energy of particles can be increased by 2-5 times in a typical contracting island, before the island reconnects with the underlying arcade. Therefore, particles need to transit only from 3-7 islands to increase their energies by two orders of magnitude. These macroscopic regions, filled with a large number of particles, may explain the large observed rates of energetic electron production in flares. We conclude that this mechanism is a promising candidate for electron acceleration in flares, but further research is needed to extend our results to 3D flare conditions.
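
    The quoted island count follows directly from the per-island gain: with a gain factor g per contracting island, reaching a factor of 100 in energy takes about ln(100)/ln(g) islands, which the short check below evaluates for the 2-5x range given in the abstract.

        import math

        # Islands needed to raise particle energy by a factor of 100 when each
        # contracting island multiplies the energy by a factor between 2 and 5.
        for gain in (2, 3, 5):
            n = math.log(100) / math.log(gain)
            print(f"gain {gain}x per island -> about {n:.1f} islands")
        # gain 2x -> ~6.6 islands, gain 5x -> ~2.9 islands, i.e. roughly 3-7 islands.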

  5. Directed Self Assembly (DSA) compliant flow with immersion lithography: from material to design and patterning

    NASA Astrophysics Data System (ADS)

    Ma, Yuansheng; Wang, Yan; Word, James; Lei, Junjiang; Mitra, Joydeep; Torres, J. Andres; Hong, Le; Fenger, Germain; Khaira, Daman; Preil, Moshe; Yuan, Lei; Kye, Jongwook; Levinson, Harry J.

    2016-03-01

    In this paper, we present a DSA-compliant flow for contact/via layers with immersion lithography, assuming a grapho-epitaxy process for cylinder formation. We demonstrate that enabling DSA technology requires co-optimization among material, design, and lithography. We show that the number of DSA grouping constructs is countable for the gridded-design architecture. We use the Template Error Enhancement Factor (TEEF) to choose the DSA material, determine grouping design rules, and select the optimum guiding patterns. Our post-pxOPC imaging data show that it is promising to achieve a 2-mask solution with DSA for the contact/via layer using 193i at the 5nm node.

  6. Method and apparatus for varying accelerator beam output energy

    DOEpatents

    Young, Lloyd M.

    1998-01-01

    A coupled cavity accelerator (CCA) accelerates a charged particle beam with rf energy from a rf source. An input accelerating cavity receives the charged particle beam and an output accelerating cavity outputs the charged particle beam at an increased energy. Intermediate accelerating cavities connect the input and the output accelerating cavities to accelerate the charged particle beam. A plurality of tunable coupling cavities are arranged so that each one of the tunable coupling cavities respectively connect an adjacent pair of the input, output, and intermediate accelerating cavities to transfer the rf energy along the accelerating cavities. An output tunable coupling cavity can be detuned to variably change the phase of the rf energy reflected from the output coupling cavity so that regions of the accelerator can be selectively turned off when one of the intermediate tunable coupling cavities is also detuned.

  7. An accelerated training method for back propagation networks

    NASA Technical Reports Server (NTRS)

    Shelton, Robert O. (Inventor)

    1993-01-01

    The principal objective is to provide a training procedure for a feed-forward, back-propagation neural network which greatly accelerates the training process. A set of orthogonal singular vectors is determined from the input matrix such that the standard deviations of the projections of the input vectors along these singular vectors, as a set, are substantially maximized, thus providing an optimal means of presenting the input data. Novelty exists in the method of extracting from the set of input data a set of features which can serve to represent the input data in a simplified manner, thus greatly reducing the time and expense of training the system.
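
    The singular-vector preprocessing can be sketched with a plain SVD of the (centered) input matrix, as below; centering, the use of numpy's SVD, and the optional truncation are assumptions made for the sketch, which captures the idea of projecting inputs onto the directions of maximal spread rather than the patented procedure itself.

        import numpy as np

        def svd_features(X, n_features=None):
            # Project training inputs onto the leading singular vectors of the input
            # matrix; these directions maximize the spread (standard deviation) of
            # the projections and give a compact representation to feed the network.
            Xc = X - X.mean(axis=0)                      # center the input vectors
            U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
            if n_features is None:
                n_features = Vt.shape[0]
            return Xc @ Vt[:n_features].T                # projections onto top singular vectors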

  8. A surrogate accelerated multicanonical Monte Carlo method for uncertainty quantification

    NASA Astrophysics Data System (ADS)

    Wu, Keyi; Li, Jinglai

    2016-09-01

    In this work we consider a class of uncertainty quantification problems where the system performance or reliability is characterized by a scalar parameter y. The performance parameter y is random due to the presence of various sources of uncertainty in the system, and our goal is to estimate the probability density function (PDF) of y. We propose to use the multicanonical Monte Carlo (MMC) method, a special type of adaptive importance sampling algorithms, to compute the PDF of interest. Moreover, we develop an adaptive algorithm to construct local Gaussian process surrogates to further accelerate the MMC iterations. With numerical examples we demonstrate that the proposed method can achieve several orders of magnitudes of speedup over the standard Monte Carlo methods.

  9. Design strategy for integrating DSA via patterning in sub-7 nm interconnects

    NASA Astrophysics Data System (ADS)

    Karageorgos, Ioannis; Ryckaert, Julien; Tung, Maryann C.; Wong, H.-S. P.; Gronheid, Roel; Bekaert, Joost; Karageorgos, Evangelos; Croes, Kris; Vandenberghe, Geert; Stucchi, Michele; Dehaene, Wim

    2016-03-01

    In recent years, major advancements have been made in the directed self-assembly (DSA) of block copolymers (BCPs). As a result, the insertion of DSA for IC fabrication is being actively considered for the sub-7nm nodes. At these nodes the DSA technology could alleviate the costs of multiple patterning and limit the number of litho masks that would be required per metal layer. One of the most straightforward approaches for DSA implementation would be via patterning through templated DSA, where hole patterns are readily accessible through templated confinement of cylindrical-phase BCP materials. Our in-house studies show that decomposition of via layers in realistic circuits below the 7nm node would require several multi-patterning steps (or colors) using 193nm immersion lithography. Even the use of EUV might require double patterning at these dimensions, since the minimum via distance would be smaller than the EUV resolution. The grouping of vias through templated DSA can resolve local conflicts in high-density areas, so the number of required colors can be significantly reduced. Implementing this approach requires a DSA-aware mask decomposition. In this paper, our design approach for DSA via patterning in sub-7nm nodes is discussed. We propose options to expand the list of DSA-compatible via patterns (DSA letters) and define matching cost formulas for the optimal DSA-aware layout decomposition. The flowchart of our proposed tool is presented.

  10. Accelerated Mini-batch Randomized Block Coordinate Descent Method

    PubMed Central

    Zhao, Tuo; Yu, Mo; Wang, Yiming; Arora, Raman; Liu, Han

    2014-01-01

    We consider regularized empirical risk minimization problems. In particular, we minimize the sum of a smooth empirical risk function and a nonsmooth regularization function. When the regularization function is block separable, we can solve the minimization problems in a randomized block coordinate descent (RBCD) manner. Existing RBCD methods usually decrease the objective value by exploiting the partial gradient of a randomly selected block of coordinates in each iteration. Thus they need all data to be accessible so that the partial gradient of the selected block can be computed exactly. However, such a “batch” setting may be computationally expensive in practice. In this paper, we propose a mini-batch randomized block coordinate descent (MRBCD) method, which estimates the partial gradient of the selected block based on a mini-batch of randomly sampled data in each iteration. We further accelerate the MRBCD method by exploiting the semi-stochastic optimization scheme, which effectively reduces the variance of the partial gradient estimators. Theoretically, we show that for strongly convex functions, the MRBCD method attains lower overall iteration complexity than existing RBCD methods. As an application, we further trim the MRBCD method to solve the regularized sparse learning problems. Our numerical experiments show that the MRBCD method naturally exploits the sparsity structure and achieves better computational performance than existing methods. PMID:25620860
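
    As a rough illustration of the core idea, the sketch below applies a mini-batch randomized block coordinate proximal step to an L1-regularized least-squares (lasso) problem: each iteration samples a random block of coordinates and a random mini-batch of rows, forms the mini-batch partial gradient, and applies a soft-thresholding update. The step size, block count, and toy data are assumptions made for illustration, and the paper's semi-stochastic variance-reduction scheme is intentionally omitted.

        import numpy as np

        rng = np.random.default_rng(0)

        def soft_threshold(v, t):
            return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

        def mrbcd_lasso(X, y, lam, n_blocks=10, batch=32, step=0.05, iters=10000):
            """Mini-batch randomized block coordinate descent for
            (1/2n)||Xw - y||^2 + lam*||w||_1  (sketch: no variance reduction)."""
            n, d = X.shape
            w = np.zeros(d)
            blocks = np.array_split(np.arange(d), n_blocks)
            for _ in range(iters):
                B = blocks[rng.integers(n_blocks)]             # random coordinate block
                S = rng.choice(n, size=batch, replace=False)   # random mini-batch of samples
                resid = X[S] @ w - y[S]
                g_B = X[S][:, B].T @ resid / batch             # mini-batch partial gradient
                w[B] = soft_threshold(w[B] - step * g_B, step * lam)   # proximal (soft-threshold) step
            return w

        # toy usage: sparse ground truth with five active coefficients
        n, d = 500, 100
        X = rng.standard_normal((n, d))
        w_true = np.zeros(d); w_true[:5] = 1.0
        y = X @ w_true + 0.1 * rng.standard_normal(n)
        w_hat = mrbcd_lasso(X, y, lam=0.1)
        print(np.round(w_hat[:8], 2))   # the leading coefficients should stand out from the rest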

  11. Electrochemical treatment of tannery wastewater using DSA electrodes.

    PubMed

    Costa, Carla Regina; Botta, Clarice M R; Espindola, Evaldo L G; Olivi, Paulo

    2008-05-01

    In this work we studied the electrochemical treatment of a tannery wastewater using dimensionally stable anodes (DSA) containing tin, iridium, ruthenium, and titanium. The electrodes were prepared by thermal decomposition of the polymeric precursors. The electrolyses were performed under galvanostatic conditions, at room temperature. Effects of the oxide composition, current density, and effluent conductivity were investigated, and the current efficiency was calculated as a function of the time for the performed electrolyses. Results showed that all the studied electrodes led to a decrease in the content of both total phenolic compounds and total organic carbon (TOC), as well as lower absorbance in the UV-vis region. Toxicity tests using Daphnia similis demonstrated that the electrochemical treatment reduced the wastewater toxicity. The use of DSA type electrodes in the electrochemical treatment of tannery wastewater proved to be useful since it can promote a decrease in total phenolic compounds, TOC, absorbance, and toxicity. PMID:17931769

  12. Electromagnetic metamaterial simulations using a GPU-accelerated FDTD method

    NASA Astrophysics Data System (ADS)

    Seok, Myung-Su; Lee, Min-Gon; Yoo, SeokJae; Park, Q.-Han

    2015-12-01

    Metamaterials composed of artificial subwavelength structures exhibit extraordinary properties that cannot be found in nature. Designing artificial structures having exceptional properties plays a pivotal role in current metamaterial research. We present a new numerical simulation scheme for metamaterial research. The scheme is based on a graphic processing unit (GPU)-accelerated finite-difference time-domain (FDTD) method. The FDTD computation can be significantly accelerated when GPUs are used instead of only central processing units (CPUs). We explain how the fast FDTD simulation of large-scale metamaterials can be achieved through communication optimization in a heterogeneous CPU/GPU-based computer cluster. Our method also includes various advanced FDTD techniques: the non-uniform grid technique, the total-field/scattered-field (TFSF) technique, the auxiliary field technique for dispersive materials, the running discrete Fourier transform, and the complex structure setting. We demonstrate the power of our new FDTD simulation scheme by simulating the negative refraction of light in a coaxial waveguide metamaterial.
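
    The core update being accelerated is the standard Yee leapfrog. A minimal 1D free-space version in plain NumPy is sketched below for orientation (normalized units, a simple soft source, crude reflecting ends); the GPU kernels, cluster communication, TFSF injection, and dispersive-material machinery described in the abstract are not shown.

        import numpy as np

        # 1D free-space FDTD (Yee leapfrog), normalized units: c = 1, dx = 1, dt = courant * dx
        nx, nt, courant = 400, 1000, 0.5
        ez = np.zeros(nx)        # electric field at integer grid points
        hy = np.zeros(nx - 1)    # magnetic field at half grid points

        for n in range(nt):
            # update H from the curl of E (leapfrog half step)
            hy += courant * (ez[1:] - ez[:-1])
            # update E from the curl of H (interior points only; the ends act as crude reflecting walls)
            ez[1:-1] += courant * (hy[1:] - hy[:-1])
            # soft Gaussian source injected at the grid centre
            ez[nx // 2] += np.exp(-0.5 * ((n - 60) / 15.0) ** 2)

        print(np.abs(ez).max())   # field snapshot after nt steps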

  13. DSA via hole shrink for advanced node applications

    NASA Astrophysics Data System (ADS)

    Chi, Cheng; Liu, Chi-Chun; Meli, Luciana; Schmidt, Kristin; Xu, Yongan; DeSilva, Ekmini Anuja; Sanchez, Martha; Farrell, Richard; Cottle, Hongyun; Kawamura, Daiji; Singh, Lovejeet; Furukawa, Tsuyoshi; Lai, Kafai; Pitera, Jed W.; Sanders, Daniel; Hetzer, David R.; Metz, Andrew; Felix, Nelson; Arnold, John; Colburn, Matthew

    2016-04-01

    Directed self-assembly (DSA) of block copolymers (BCPs) has become a promising patterning technique for the 7nm node hole shrink process due to its material-controlled CD uniformity and process simplicity.[1] For this application, the cylinder-forming BCP system has been extensively investigated compared to its counterpart, the lamella-forming system, mainly because cylindrical BCPs will form multiple vias in non-circular guiding patterns (GPs), which is considered to be closer to technological needs.[2-5] The need to generate multiple DSA domains in a bar-shaped GP originates from the resolution limit of lithography, i.e., vias placed too close to each other will merge and short the circuit. In practice, multiple patterning and self-aligned via (SAV) processes have been implemented in semiconductor manufacturing to address this resolution issue.[6] The former approach separates one pattern layer with unresolvable dense features into several layers with resolvable features, while the latter approach simply utilizes the superposition of via bars and the pre-defined metal trench patterns in a thin hard mask layer to resolve individual vias, as illustrated in Fig 1 (upper). With proper design, using DSA to generate via bars with the SAV process could provide another approach to address the resolution issue.

  14. Discontinuous diffusion synthetic acceleration for S{sub n} transport on 2D arbitrary polygonal meshes

    SciTech Connect

    Turcksin, Bruno Ragusa, Jean C.

    2014-10-01

    In this paper, a Diffusion Synthetic Acceleration (DSA) technique applied to the S{sub n} radiation transport equation is developed using Piece-Wise Linear Discontinuous (PWLD) finite elements on arbitrary polygonal grids. The discretization of the DSA equations employs an Interior Penalty technique, as is classically done for the stabilization of the diffusion equation using discontinuous finite element approximations. The penalty method yields a system of linear equations that is Symmetric Positive Definite (SPD). Thus, solution techniques such as Preconditioned Conjugate Gradient (PCG) can be effectively employed. Algebraic MultiGrid (AMG) and Symmetric Gauss–Seidel (SGS) are employed as conjugate gradient preconditioners for the DSA system. AMG is shown to be significantly more efficient than SGS. Fourier analyses are carried out and we show that this discontinuous finite element DSA scheme is always stable and effective at reducing the spectral radius for iterative transport solves, even for grids with high-aspect ratio cells. Numerical results are presented for different grid types: quadrilateral, hexagonal, and polygonal grids as well as grids with local mesh adaptivity.
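
    Since the interior-penalty DSA system is symmetric positive definite, it can be solved with a preconditioned conjugate gradient iteration. The sketch below shows a generic PCG loop on a toy SPD matrix with a simple Jacobi (diagonal) preconditioner standing in for the AMG or SGS preconditioners used in the paper; the matrix, tolerance, and sizes are illustrative assumptions.

        import numpy as np

        def pcg(A, b, M_inv, tol=1e-10, max_it=500):
            """Preconditioned conjugate gradient for SPD A; M_inv applies the preconditioner."""
            x = np.zeros_like(b)
            r = b - A @ x
            z = M_inv(r)
            p = z.copy()
            rz = r @ z
            for k in range(max_it):
                Ap = A @ p
                alpha = rz / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                if np.linalg.norm(r) < tol * np.linalg.norm(b):
                    return x, k + 1
                z = M_inv(r)
                rz_new = r @ z
                p = z + (rz_new / rz) * p
                rz = rz_new
            return x, max_it

        # toy SPD system standing in for the discretized DSA diffusion operator
        n = 200
        A = (np.diag(2.0 * np.ones(n))
             + np.diag(-1.0 * np.ones(n - 1), 1)
             + np.diag(-1.0 * np.ones(n - 1), -1))
        b = np.ones(n)
        jacobi = lambda r: r / np.diag(A)   # Jacobi preconditioner (AMG or SGS would go here)
        x, iters = pcg(A, b, jacobi)
        print(iters, np.linalg.norm(A @ x - b))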

  15. Miniature plasma accelerating detonator and method of detonating insensitive materials

    SciTech Connect

    Bickes, R.W. Jr.; Kopczewski, M.R.; Schwarz, A.C.

    1986-11-11

    This patent describes a detonator assembly for initiating insensitive explosives or energetic materials. In the improvement described here the detonator assembly comprises: railgun accelerating means of a size sufficient to be used as a detonator for insensitive explosives or energetic materials in an amount of about 100 mg of explosives or less and capable of accelerating a plasma to detonation initiating velocities; and power supply means for supplying the power necessary to the railgun accelerating means to generate and accelerate the plasma.

  16. Accelerated weight histogram method for exploring free energy landscapes

    SciTech Connect

    Lindahl, V.; Lidmar, J.; Hess, B.

    2014-07-28

    Calculating free energies is an important and notoriously difficult task for molecular simulations. The rapid increase in computational power has made it possible to probe increasingly complex systems, yet extracting accurate free energies from these simulations remains a major challenge. Fully exploring the free energy landscape of, say, a biological macromolecule typically requires sampling large conformational changes and slow transitions. Often, the only feasible way to study such a system is to simulate it using an enhanced sampling method. The accelerated weight histogram (AWH) method is a new, efficient extended ensemble sampling technique which adaptively biases the simulation to promote exploration of the free energy landscape. The AWH method uses a probability weight histogram which allows for efficient free energy updates and results in an easy discretization procedure. A major advantage of the method is its general formulation, making it a powerful platform for developing further extensions and analyzing its relation to already existing methods. Here, we demonstrate its efficiency and general applicability by calculating the potential of mean force along a reaction coordinate for both a single dimension and multiple dimensions. We make use of a non-uniform, free energy dependent target distribution in reaction coordinate space so that computational efforts are not wasted on physically irrelevant regions. We present numerical results for molecular dynamics simulations of lithium acetate in solution and chignolin, a 10-residue long peptide that folds into a β-hairpin. We further present practical guidelines for setting up and running an AWH simulation.

  17. Accelerated weight histogram method for exploring free energy landscapes

    NASA Astrophysics Data System (ADS)

    Lindahl, V.; Lidmar, J.; Hess, B.

    2014-07-01

    Calculating free energies is an important and notoriously difficult task for molecular simulations. The rapid increase in computational power has made it possible to probe increasingly complex systems, yet extracting accurate free energies from these simulations remains a major challenge. Fully exploring the free energy landscape of, say, a biological macromolecule typically requires sampling large conformational changes and slow transitions. Often, the only feasible way to study such a system is to simulate it using an enhanced sampling method. The accelerated weight histogram (AWH) method is a new, efficient extended ensemble sampling technique which adaptively biases the simulation to promote exploration of the free energy landscape. The AWH method uses a probability weight histogram which allows for efficient free energy updates and results in an easy discretization procedure. A major advantage of the method is its general formulation, making it a powerful platform for developing further extensions and analyzing its relation to already existing methods. Here, we demonstrate its efficiency and general applicability by calculating the potential of mean force along a reaction coordinate for both a single dimension and multiple dimensions. We make use of a non-uniform, free energy dependent target distribution in reaction coordinate space so that computational efforts are not wasted on physically irrelevant regions. We present numerical results for molecular dynamics simulations of lithium acetate in solution and chignolin, a 10-residue long peptide that folds into a β-hairpin. We further present practical guidelines for setting up and running an AWH simulation.

  18. METHOD OF PRODUCING AND ACCELERATING AN ION BEAM

    NASA Technical Reports Server (NTRS)

    Foster, John E. (Inventor)

    2005-01-01

    A method of producing and accelerating an ion beam comprising the steps of providing a magnetic field with a cusp that opens in an outward direction along a centerline that passes through a vertex of the cusp; providing an ionizing gas that sprays outward through at least one capillary-like orifice in a plenum that is positioned such that the orifice is on the centerline in the cusp, outward of the vertex of the cusp; providing a cathode electron source, and positioning it outward of the orifice and off of the centerline; and positively charging the plenum relative to the cathode electron source such that the plenum functions as an anode. A hot filament may be used as the cathode electron source, and permanent magnets may be used to provide the magnetic field.

  19. DFM for defect-free DSA hole shrink process

    NASA Astrophysics Data System (ADS)

    Fukawatase, Ken; Yoshimoto, Kenji; Ohshima, Masahiro; Naka, Yoshihiro; Maeda, Shimon; Tanaka, Satoshi; Morita, Seiji; Aoyama, Hisako; Mimotogi, Shoji

    2014-03-01

    Application of the directed self-assembly (DSA) of block copolymer (PS-b-PMMA) to the hole shrink process has gained considerable attention because of the low cost and the potential for sub-lithographic patterning of contact, via and cut masks (Ref. [1-2] and references therein). In order to realize the DSA hole shrink process for manufacturing, however, one still has to resolve a few critical issues such as morphological defects and placement errors [3]. The morphological defect here indicates the PS residual layer lying between the vertical PMMA cylinder and the substrate, which prevents the PMMA cylinder from reaching the bottom surface. Such underlying defects cannot be observed by the conventional approach with top-down SEM images. In this study, we have utilized a simplified model, the so-called Ohta-Kawasaki (OK) model [4-5], to optimize the DSA hole shrink process. The advantages of the OK model are its considerably lower computational expense and reasonable accuracy. First, we demonstrated that the OK model could indeed predict complicated, three-dimensional morphologies of the diblock copolymer in the pre-patterned hole. All the results were computed within one minute, and they were reasonably comparable to those obtained from self-consistent field theory (SCFT) [6]. Then, we calibrated the model parameters against cross-sectional TEM images, minimizing the errors between the simulated thickness of the PS residual layer and the experimental data. The calibrated model was used for the optimization of the guide hole shape and for the exploration of the multi-cylinder case.

  20. Accelerated molecular dynamics methods: introduction and recent developments

    SciTech Connect

    Uberuaga, Blas Pedro; Voter, Arthur F; Perez, Danny; Shim, Y; Amar, J G

    2009-01-01

    reaction pathways may be important, we return instead to a molecular dynamics treatment, in which the trajectory itself finds an appropriate way to escape from each state of the system. Since a direct integration of the trajectory would be limited to nanoseconds, while we are seeking to follow the system for much longer times, we modify the dynamics in some way to cause the first escape to happen much more quickly, thereby accelerating the dynamics. The key is to design the modified dynamics in a way that does as little damage as possible to the probability for escaping along a given pathway - i.e., we try to preserve the relative rate constants for the different possible escape paths out of the state. We can then use this modified dynamics to follow the system from state to state, reaching much longer times than we could reach with direct MD. The dynamics within any one state may no longer be meaningful, but the state-to-state dynamics, in the best case, as we discuss in the paper, can be exact. We have developed three methods in this accelerated molecular dynamics (AMD) class, in each case appealing to TST, either implicitly or explicitly, to design the modified dynamics. Each of these methods has its own advantages, and we and others have applied these methods to a wide range of problems. The purpose of this article is to give the reader a brief introduction to how these methods work, and discuss some of the recent developments that have been made to improve their power and applicability. Note that this brief review does not claim to be exhaustive: various other methods aiming at similar goals have been proposed in the literature. For the sake of brevity, our focus will exclusively be on the methods developed by the group.

  1. Just in Time DSA-The Hanford Nuclear Safety Basis Strategy

    SciTech Connect

    Olinger, S. J.; Buhl, A. R.

    2002-02-26

    The U.S. Department of Energy, Richland Operations Office (RL) is responsible for 30 hazard category 2 and 3 nuclear facilities that are operated by its prime contractors, Fluor Hanford Incorporated (FHI), Bechtel Hanford, Incorporated (BHI) and Pacific Northwest National Laboratory (PNNL). The publication of Title 10, Code of Federal Regulations, Part 830, Subpart B, Safety Basis Requirements (the Rule) in January 2001 imposed the requirement that the Documented Safety Analyses (DSA) for these facilities be reviewed against the requirements of the Rule. Those DSA that do not meet the requirements must either be upgraded to satisfy the Rule, or an exemption must be obtained. RL and its prime contractors have developed a Nuclear Safety Strategy that provides a comprehensive approach for supporting RL's efforts to meet its long term objectives for hazard category 2 and 3 facilities while also meeting the requirements of the Rule. This approach will result in a reduction of the total number of safety basis documents that must be developed and maintained to support the remaining mission and closure of the Hanford Site and ensure that the documentation that must be developed will support: compliance with the Rule; a "Just-In-Time" approach to development of Rule-compliant safety bases supported by temporary exemptions; and consolidation of safety basis documents that support multiple facilities with a common mission (e.g. decontamination, decommissioning and demolition [DD&D], waste management, surveillance and maintenance). This strategy provides a clear path to transition the safety bases for the various Hanford facilities from support of operation and stabilization missions through DD&D to accelerate closure. This "Just-In-Time" Strategy can also be tailored for other DOE Sites, creating the potential for large cost savings and schedule reductions throughout the DOE complex.

  2. Diffusion Synthetic Acceleration for High-Order Discontinuous Finite Element SN Transport Schemes and Application to Locally Refined Unstructured Meshes

    SciTech Connect

    Yaqi Wang; Jean C. Ragusa

    2011-10-01

    Diffusion synthetic acceleration (DSA) schemes compatible with adaptive mesh refinement (AMR) grids are derived for the SN transport equations discretized using high-order discontinuous finite elements. These schemes are directly obtained from the discretized transport equations by assuming a linear dependence in angle of the angular flux along with an exact Fick's law and, therefore, are categorized as partially consistent. These schemes are akin to the symmetric interior penalty technique applied to elliptic problems and are all based on a second-order discontinuous finite element discretization of a diffusion equation (as opposed to a mixed or P1 formulation). Therefore, they only have the scalar flux as unknowns. A Fourier analysis has been carried out to determine the convergence properties of the three proposed DSA schemes for various cell optical thicknesses and aspect ratios. Out of the three DSA schemes derived, the modified interior penalty (MIP) scheme is stable and effective for realistic problems, even with distorted elements, but loses effectiveness for some highly heterogeneous configurations. The MIP scheme is also symmetric positive definite and can be solved efficiently with a preconditioned conjugate gradient method. Its implementation in an AMR SN transport code has been performed for both source iteration and GMRes-based transport solves, with polynomial orders up to 4. Numerical results are provided and show good agreement with the Fourier analysis results. Results on AMR grids demonstrate that the cost of DSA can be kept low on locally refined meshes.
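
    For orientation, the classical infinite-medium Fourier analysis of unaccelerated source iteration (1D transport, isotropic scattering) gives an iteration eigenvalue omega(lambda) = c*arctan(lambda)/lambda whose supremum is the scattering ratio c, which is why unaccelerated iterations stall as c approaches 1 and why DSA is needed. The short sketch below simply evaluates that textbook expression numerically; the paper's Fourier analysis of the discontinuous finite element DSA schemes is more involved and is not reproduced here.

        import numpy as np

        # Classical Fourier-analysis eigenvalue of unaccelerated source iteration,
        # omega(lam) = c * arctan(lam) / lam, with lam the Fourier frequency in
        # mean-free-path units; its supremum (at lam -> 0) is the scattering ratio c.
        for c in (0.5, 0.9, 0.99, 0.999):
            lam = np.linspace(1e-6, 100.0, 200000)
            omega = c * np.arctan(lam) / lam
            print(f"c = {c:6.3f}   estimated spectral radius = {omega.max():.4f}")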

  3. Apparatus and method for the acceleration of projectiles to hypervelocities

    DOEpatents

    Hertzberg, Abraham; Bruckner, Adam P.; Bogdanoff, David W.

    1990-01-01

    A projectile is initially accelerated to a supersonic velocity and then injected into a launch tube filled with a gaseous propellant. The projectile outer surface and launch tube inner surface form a ramjet having a diffuser, a combustion chamber and a nozzle. A catalytic coated flame holder projecting from the projectile ignites the gaseous propellant in the combustion chamber thereby accelerating the projectile in a subsonic combustion mode zone. The projectile then enters an overdriven detonation wave launch tube zone wherein further projectile acceleration is achieved by a formed, controlled overdriven detonation wave capable of igniting the gaseous propellant in the combustion chamber. Ultrahigh velocity projectile accelerations are achieved in a launch tube layered detonation zone having an inner sleeve filled with hydrogen gas. An explosive, which is disposed in the annular zone between the inner sleeve and the launch tube, explodes responsive to an impinging shock wave emanating from the diffuser of the accelerating projectile thereby forcing the inner sleeve inward and imparting an acceleration to the projectile. For applications wherein solid or liquid high explosives are employed, the explosion thereof forces the inner sleeve inward, forming a throat behind the projectile. This throat chokes flow behind, thereby imparting an acceleration to the projectile.

  4. Demonstration recommendations for accelerated testing of concrete decontamination methods

    SciTech Connect

    Dickerson, K.S.; Ally, M.R.; Brown, C.H.; Morris, M.I.; Wilson-Nichols, M.J.

    1995-12-01

    A large number of aging US Department of Energy (DOE) surplus facilities located throughout the US require deactivation, decontamination, and decommissioning. Although several technologies are available commercially for concrete decontamination, emerging technologies with potential to reduce secondary waste and minimize the impact and risk to workers and the environment are needed. In response to these needs, the Accelerated Testing of Concrete Decontamination Methods project team described the nature and extent of contaminated concrete within the DOE complex and identified applicable emerging technologies. Existing information used to describe the nature and extent of contaminated concrete indicates that the most frequently occurring radiological contaminants are ¹³⁷Cs, ²³⁸U (and its daughters), ⁶⁰Co, ⁹⁰Sr, and tritium. The total area of radionuclide-contaminated concrete within the DOE complex is estimated to be in the range of 7.9 × 10⁸ ft², or approximately 18,000 acres. Concrete decontamination problems were matched with emerging technologies to recommend demonstrations considered to provide the most benefit to decontamination of concrete within the DOE complex. Emerging technologies with the most potential benefit were biological decontamination, electro-hydraulic scabbling, electrokinetics, and microwave scabbling.

  5. Third order TRANSPORT with MAD (Methodical Accelerator Design) input

    SciTech Connect

    Carey, D.C.

    1988-09-20

    This paper describes computer-aided design codes for particle accelerators. Among the topics discussed are: input beam description; parameters and algebraic expressions; the physical elements; beam lines; operations; and third-order transfer matrix. (LSP)

  6. Method of accelerating photons by a relativistic plasma wave

    DOEpatents

    Dawson, John M.; Wilks, Scott C.

    1990-01-01

    Photons of a laser pulse have their group velocity accelerated in a plasma as they are placed on a downward density gradient of a plasma wave whose phase velocity nearly matches the group velocity of the photons. This acceleration results in a frequency upshift. If the unperturbed plasma has a slight density gradient in the direction of propagation, the photon frequencies can be continuously upshifted to significantly greater values.

  7. Diffusive Shock Acceleration and Reconnection Acceleration Processes

    NASA Astrophysics Data System (ADS)

    Zank, G. P.; Hunana, P.; Mostafavi, P.; Le Roux, J. A.; Li, Gang; Webb, G. M.; Khabarova, O.; Cummings, A.; Stone, E.; Decker, R.

    2015-12-01

    Shock waves, as shown by simulations and observations, can generate high levels of downstream vortical turbulence, including magnetic islands. We consider a combination of diffusive shock acceleration (DSA) and downstream magnetic-island-reconnection-related processes as an energization mechanism for charged particles. Observations of electron and ion distributions downstream of interplanetary shocks and the heliospheric termination shock (HTS) are frequently inconsistent with the predictions of classical DSA. We utilize a recently developed transport theory for charged particles propagating diffusively in a turbulent region filled with contracting and reconnecting plasmoids and small-scale current sheets. Particle energization associated with the anti-reconnection electric field, a consequence of magnetic island merging, and magnetic island contraction, are considered. For the former only, we find that (i) the spectrum is a hard power law in particle speed, and (ii) the downstream solution is constant. For downstream plasmoid contraction only, (i) the accelerated spectrum is a hard power law in particle speed; (ii) the particle intensity for a given energy peaks downstream of the shock, and the distance to the peak location increases with increasing particle energy, and (iii) the particle intensity amplification for a particular particle energy, f(x, c/c0)/f(0, c/c0), is not 1, as predicted by DSA, but increases with increasing particle energy. The general solution combines both the reconnection-induced electric field and plasmoid contraction. The observed energetic particle intensity profile observed by Voyager 2 downstream of the HTS appears to support a particle acceleration mechanism that combines both DSA and magnetic-island-reconnection-related processes.

  8. Development of wide area environment accelerator operation and diagnostics method

    NASA Astrophysics Data System (ADS)

    Uchiyama, Akito; Furukawa, Kazuro

    2015-08-01

    Remote operation and diagnostic systems for particle accelerators have been developed for beam operation and maintenance in various situations. Even though fully remote experiments are not necessary, the remote diagnosis and maintenance of the accelerator is required. Considering remote-operation operator interfaces (OPIs), the use of standard protocols such as the hypertext transfer protocol (HTTP) is advantageous, because system-dependent protocols are unnecessary between the remote client and the on-site server. Here, we have developed a client system based on WebSocket, which is a new protocol provided by the Internet Engineering Task Force for Web-based systems, as a next-generation Web-based OPI using the Experimental Physics and Industrial Control System Channel Access protocol. As a result of this implementation, WebSocket-based client systems have become available for remote operation. Also, as regards practical application, the remote operation of an accelerator via a wide area network (WAN) faces a number of challenges, e.g., the accelerator has both experimental device and radiation generator characteristics. Any error in remote control system operation could result in an immediate breakdown. Therefore, we propose the implementation of an operator intervention system for remote accelerator diagnostics and support that can obviate any differences between the local control room and remote locations. Here, remote-operation Web-based OPIs, which resolve security issues, are developed.
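
    A minimal sketch of the client side of such a WebSocket-based OPI is given below, using the third-party Python "websockets" package: the client opens a connection to a gateway, asks to monitor a process variable, and prints the updates the server pushes. The gateway URI, PV name, and JSON message format are hypothetical placeholders, not the interface described in the paper, and the server-side bridge to the EPICS Channel Access protocol is not shown.

        import asyncio
        import json

        import websockets  # third-party "websockets" package

        async def monitor_pv(uri="ws://ca-gateway.example:8080/ws", pv="ACC:BEAM:CURRENT"):
            # hypothetical gateway URI and PV name; the real server-side bridge to
            # EPICS Channel Access is site-specific and not shown here
            async with websockets.connect(uri) as ws:
                await ws.send(json.dumps({"action": "monitor", "pv": pv}))
                async for message in ws:          # the gateway pushes updates as JSON text frames
                    update = json.loads(message)
                    print(update.get("pv"), update.get("value"))

        if __name__ == "__main__":
            asyncio.run(monitor_pv())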

  9. Systems and methods for cylindrical hall thrusters with independently controllable ionization and acceleration stages

    DOEpatents

    Diamant, Kevin David; Raitses, Yevgeny; Fisch, Nathaniel Joseph

    2014-05-13

    Systems and methods may be provided for cylindrical Hall thrusters with independently controllable ionization and acceleration stages. The systems and methods may include a cylindrical channel having a center axial direction, a gas inlet for directing ionizable gas to an ionization section of the cylindrical channel, an ionization device that ionizes at least a portion of the ionizable gas within the ionization section to generate ionized gas, and an acceleration device distinct from the ionization device. The acceleration device may provide an axial electric field for an acceleration section of the cylindrical channel to accelerate the ionized gas through the acceleration section, where the axial electric field has an axial direction in relation to the center axial direction. The ionization section and the acceleration section of the cylindrical channel may be substantially non-overlapping.

  10. Sequential PTA of abdominal aorta. Haemodynamic evaluation and IV-DSA follow-up.

    PubMed

    Walstra, B R; Janevski, B K

    1987-04-01

    A case of sequential dilatation of a subtotal stenosis of the abdominal aorta in a young subject is reported. Initial and long-term success of the procedure is recorded using haemodynamic evaluation and intravenous digital subtraction angiography (IV-DSA) follow-up on an outpatient basis. In addition, the significance of biplane aortography with IV-DSA is illustrated. PMID:3033770

  11. 34 CFR 367.11 - What assurances must a DSA include in its application?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...) and (b), and consistent with 34 CFR 364.28, the DSA will seek to incorporate into and describe in the... section 704 of the Act and subpart C of 34 CFR part 364; and (g) The applicant has been designated by the... 34 Education 2 2014-07-01 2013-07-01 true What assurances must a DSA include in its...

  12. 34 CFR 367.11 - What assurances must a DSA include in its application?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...) and (b), and consistent with 34 CFR 364.28, the DSA will seek to incorporate into and describe in the... section 704 of the Act and subpart C of 34 CFR part 364; and (g) The applicant has been designated by the... 34 Education 2 2010-07-01 2010-07-01 false What assurances must a DSA include in its...

  13. 34 CFR 367.11 - What assurances must a DSA include in its application?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...) and (b), and consistent with 34 CFR 364.28, the DSA will seek to incorporate into and describe in the... section 704 of the Act and subpart C of 34 CFR part 364; and (g) The applicant has been designated by the... 34 Education 2 2013-07-01 2013-07-01 false What assurances must a DSA include in its...

  14. 34 CFR 367.11 - What assurances must a DSA include in its application?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...) and (b), and consistent with 34 CFR 364.28, the DSA will seek to incorporate into and describe in the... section 704 of the Act and subpart C of 34 CFR part 364; and (g) The applicant has been designated by the... 34 Education 2 2011-07-01 2010-07-01 true What assurances must a DSA include in its...

  15. Comparative Oxidative Stability of Fatty Acid Alkyl Esters by Accelerated Methods

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Several fatty acid alkyl esters were subjected to accelerated methods of oxidation, including EN 14112 (Rancimat method) and pressurized differential scanning calorimetry (PDSC). Structural trends elucidated from both methods that improved oxidative stability included decreasing the number of doubl...

  16. Method of and apparatus for accelerating a projectile

    DOEpatents

    Goldstein, Yeshayahu S. A.; Tidman, Derek A.

    1986-01-01

    A projectile is accelerated along a confined path by supplying a pulsed high pressure, high velocity plasma jet to the rear of the projectile as the projectile traverses the path. The jet enters the confined path at a non-zero angle relative to the projectile path. The pulse is derived from a dielectric capillary tube having an interior wall from which plasma forming material is ablated in response to a discharge voltage. The projectile can be accelerated in response to the kinetic energy in the plasma jet or in response to a pressure increase of gases in the confined path resulting from the heat added to the gases by the plasma.

  17. Characterization of a Direct Sample Analysis (DSA) Ambient Ionization Source

    NASA Astrophysics Data System (ADS)

    Winter, Gregory T.; Wilhide, Joshua A.; LaCourse, William R.

    2015-09-01

    Water cluster ion intensity and distribution is affected by source conditions in direct sample analysis (DSA) ionization. Parameters investigated in this paper include source nozzle diameter, gas flow rate, and source positions relative to the mass spectrometer inlet. Schlieren photography was used to image the gas flow profile exiting the nozzle. Smaller nozzle diameters and higher flow rates produced clusters of the type [H + (H2O)n]+ with greater n and higher intensity than larger nozzles and lower gas flow rates. At high gas flow rates, the gas flow profile widened compared with the original nozzle diameter. At lower flow rates, the amount of expansion was reduced, which suggests that lowering the flow rate may allow for improvements in sampling spatial resolution.

  18. Characterization of a Direct Sample Analysis (DSA) Ambient Ionization Source.

    PubMed

    Winter, Gregory T; Wilhide, Joshua A; LaCourse, William R

    2015-09-01

    Water cluster ion intensity and distribution is affected by source conditions in direct sample analysis (DSA) ionization. Parameters investigated in this paper include source nozzle diameter, gas flow rate, and source positions relative to the mass spectrometer inlet. Schlieren photography was used to image the gas flow profile exiting the nozzle. Smaller nozzle diameters and higher flow rates produced clusters of the type [H + (H2O)n]+ with greater n and higher intensity than larger nozzles and lower gas flow rates. At high gas flow rates, the gas flow profile widened compared with the original nozzle diameter. At lower flow rates, the amount of expansion was reduced, which suggests that lowering the flow rate may allow for improvements in sampling spatial resolution. PMID:26091890

  19. Ultrahigh impedance method to assess electrostatic accelerator performance

    NASA Astrophysics Data System (ADS)

    Lobanov, Nikolai R.; Linardakis, Peter; Tsifakis, Dimitrios

    2015-06-01

    This paper describes an investigation of problem-solving procedures to troubleshoot electrostatic accelerators. A novel technique to diagnose issues with high-voltage components is described. The main application of this technique is noninvasive testing of electrostatic accelerator high-voltage grading systems, measuring insulation resistance, or determining the volume and surface resistivity of insulation materials used in column posts and acceleration tubes. In addition, this technique allows verification of the continuity of the resistive divider assembly as a complete circuit, revealing whether an electrical path exists between equipotential rings, resistors, tube electrodes, and column post-to-tube conductors. It is capable of identifying and locating a "microbreak" in a resistor and of experimentally validating the transfer function of the high-impedance energy control element. A simple and practical fault-finding procedure has been developed based on fundamental principles. The experimental distributions of relative resistance deviations (ΔR/R) for both accelerating tubes and posts were collected during five scheduled accelerator maintenance tank openings during 2013 and 2014. Components with measured ΔR/R > ±2.5% were considered faulty and put through a detailed examination, with faults categorized. In total, thirty-four unique fault categories were identified, and most would not be identifiable without the new technique described. The most common failure mode was permanent and irreversible insulator current leakage that developed after exposure to the ambient environment. As a result of efficient in situ troubleshooting and fault-elimination techniques, the maximum values of |ΔR/R| are kept below 2.5% at the conclusion of maintenance procedures. The acceptance margin could be narrowed even further, by a factor of 2.5, by increasing the test voltage from 40 V up to 100 V. Based on experience over the last two years, resistor and insulator

  20. Scatterometry-based defect detection for DSA in-line process control

    NASA Astrophysics Data System (ADS)

    Chao, Robin; Liu, Chi-Chun; Bozdog, Cornel; Cepler, Aron; Sendelbach, Matthew; Cohen, Oded; Wolfling, Shay; Bailey, Todd; Felix, Nelson

    2015-03-01

    Successful implementation of directed self-assembly in high volume manufacturing is contingent upon the ability to control the new DSA-specific local defects such as "dislocations", "line-shifts", or "fingerprint-like" defects. Conventional defect inspection tools are limited either in resolution (brightfield optical methods) or in the area and number of defects they can investigate and review (SEM). Here we explore in depth a scatterometry-based technique that can bridge the gap between area throughput and detection resolution. First we establish the methodology for scatterometry-based defect detection, then we compare it to established methodologies. Careful experiments using scatterometry imaging confirm an ultimate defect-detection resolution of scatterometry-based techniques as low as <1% defects per sampled area, similar to CD-SEM based detection, while retaining a two-orders-of-magnitude higher area sampling rate.

  1. Evaluation of Non-contrast Dynamic MRA in Intracranial Arteriovenous Malformation (AVM): Comparison with time of flight (TOF) and digital subtraction angiography (DSA)

    PubMed Central

    Yu, Songlin; Yan, Lirong; Yao, Yuqiang; Wang, Shuo; Yang, Mingqi; Wang, Bo; Zhuo, Yan; Zhao, Jizong; Wang, Danny J. J.

    2014-01-01

    Purpose Digital subtraction angiography (DSA) remains the gold standard to diagnose intracranial arteriovenous malformations (AVMs) but is invasive. Existing magnetic resonance angiography (MRA) is suboptimal for assessing the hemodynamics of AVMs. The objective of this study was to evaluate the clinical utility of a novel noncontrast four-dimensional (4D) dynamic MRA (dMRA) in the evaluation of intracranial AVMs through comparison with DSA and time-of-flight (TOF) MRA. Materials and methods Nineteen patients (12 women, mean age 26.2±10.7 years) with intracranial AVMs were examined with 4D dMRA, TOF and DSA. The Spetzler–Martin grading scale was evaluated using each of the above three methods independently by two raters. Diagnostic confidence scores for three components of AVMs (feeding artery, nidus and draining vein) were also rated. Kendall's coefficient of concordance was calculated to evaluate the reliability between two raters within each modality (dMRA, TOF, TOF plus dMRA). The Wilcoxon signed-rank test was applied to compare the diagnostic confidence scores between each pair of the three modalities. Results dMRA was able to detect 16 out of 19 AVMs, and the ratings of AVM size and location matched those of DSA. The diagnostic confidence scores by dMRA were adequate for nidus (3.5/5), moderate for feeding arteries (2.5/5) and poor for draining veins (1.5/5). The hemodynamic information provided by dMRA improved diagnostic confidence scores by TOF MRA. Conclusion As a completely noninvasive method, 4D dMRA offers hemodynamic information with a temporal resolution of 50–100 ms for the evaluation of AVMs and can complement existing methods such as DSA and TOF MRA. PMID:22521994
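
    The two statistics named above are straightforward to compute. The sketch below evaluates Kendall's coefficient of concordance (simple form, no tie correction) and a paired Wilcoxon signed-rank test with SciPy on made-up confidence scores; the numbers are illustrative placeholders, not the study's data.

        import numpy as np
        from scipy.stats import rankdata, wilcoxon

        def kendalls_w(ratings):
            """Kendall's coefficient of concordance (no tie correction).
            ratings: array of shape (n_raters, n_subjects)."""
            m, n = ratings.shape
            ranks = np.apply_along_axis(rankdata, 1, ratings)   # rank each rater's scores
            col_sums = ranks.sum(axis=0)
            s = ((col_sums - col_sums.mean()) ** 2).sum()
            return 12.0 * s / (m ** 2 * (n ** 3 - n))

        # made-up confidence scores for two raters over ten cases (illustrative only)
        rater1 = np.array([3, 4, 2, 5, 3, 4, 2, 3, 4, 5], dtype=float)
        rater2 = np.array([3, 5, 2, 4, 3, 4, 1, 3, 4, 5], dtype=float)
        print("Kendall's W:", round(kendalls_w(np.vstack([rater1, rater2])), 3))

        # paired comparison of confidence scores between two modalities (illustrative only)
        dmra = np.array([3, 4, 2, 3, 3, 4, 2, 3, 3, 4], dtype=float)
        tof = np.array([2, 3, 1, 2, 2, 3, 3, 2, 2, 3], dtype=float)
        print(wilcoxon(dmra, tof))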

  2. Rapamycin Interferes With Postdepletion Regulatory T Cell Homeostasis and Enhances DSA Formation Corrected by CTLA4-Ig.

    PubMed

    Oh, B; Yoon, J; Farris, A; Kirk, A; Knechtle, S; Kwun, J

    2016-09-01

    Previously, we demonstrated that alemtuzumab induction with rapamycin as sole maintenance therapy is associated with an increased incidence of humoral rejection in human kidney transplant patients. To investigate the role of rapamycin in posttransplant humoral responses after T cell depletion, fully MHC mismatched hearts were transplanted into hCD52Tg mice, followed by alemtuzumab treatment with or without a short course of rapamycin. While untreated hCD52Tg recipients acutely rejected B6 hearts (n = 12), hCD52Tg recipients treated with alemtuzumab alone or in conjunction with rapamycin showed no acute rejection (MST > 100). However, the addition of rapamycin led to reduced beating quality over time and an increased incidence of vasculopathy. Furthermore, rapamycin supplementation resulted in an increased serum donor-specific antibody (DSA) level compared to alemtuzumab alone at postoperative days 50 and 100. Surprisingly, additional rapamycin treatment significantly reduced CD4(+) CD25(+) FoxP3(+) Treg cell numbers during treatment. In contrast, ICOS(+) PD-1(+) CD4 follicular helper T cells in the lymph nodes were significantly increased. Interestingly, CTLA4-Ig supplementation in conjunction with rapamycin corrected the rapamycin-induced acceleration of the posttransplant humoral response by directly modulating Tfh cells but not Treg cells. This suggests that rapamycin after T cell depletion could affect Treg cells, leading to an increase in Tfh cells and DSA production that can be reversed by CTLA4-Ig. PMID:26990829

  3. Implementation of templated DSA for via layer patterning at the 7nm node

    NASA Astrophysics Data System (ADS)

    Gronheid, Roel; Doise, Jan; Bekaert, Joost; Chan, Boon Teik; Karageorgos, Ioannis; Ryckaert, Julien; Vandenberghe, Geert; Cao, Yi; Lin, Guanyang; Somervell, Mark; Fenger, Germain; Fuchimoto, Daisuke

    2015-03-01

    In recent years major advancements have been made in the directed self-assembly (DSA) of block copolymers (BCP). Insertion of DSA for IC fabrication is being seriously considered for the 7nm node. At this node the DSA technology could alleviate the costs of double patterning and limit the number of masks that would be required per layer. At imec multiple approaches for inserting DSA into the 7nm node are considered. One of the most straightforward approaches for implementation would be via patterning through templated DSA (grapho-epitaxy), since hole patterns are readily accessible through templated hole patterning of cylindrical-phase BCP materials. Here, the pre-pattern template is first patterned into a spin-on hardmask stack. After optimizing the surface properties of the template, the desired hole patterns can be obtained by the BCP DSA process. For this approach to be implemented for 7nm node via patterning, not only does the appropriate process flow need to be available, but appropriate metrology (including for pattern placement accuracy) and DSA-aware mask decomposition are also required. In this paper the imec approach for 7nm node via patterning will be discussed.

  4. Comparative imaging study in ultrasound, MRI, CT, and DSA using a multimodality renal artery phantom

    SciTech Connect

    King, Deirdre M.; Fagan, Andrew J.; Moran, Carmel M.; Browne, Jacinta E.

    2011-02-15

    Purpose: A range of anatomically realistic multimodality renal artery phantoms consisting of vessels with varying degrees of stenosis was developed and evaluated using four imaging techniques currently used to detect renal artery stenosis (RAS). The spatial resolution required to visualize vascular geometry and the velocity detection performance required to adequately characterize blood flow in patients suffering from RAS are currently ill-defined, with the result that no one imaging modality has emerged as a gold standard technique for screening for this disease. Methods: The phantoms, which contained a range of stenosis values (0%, 30%, 50%, 70%, and 85%), were designed for use with ultrasound, magnetic resonance imaging, x-ray computed tomography, and x-ray digital subtraction angiography. The construction materials used were optimized with respect to their ultrasonic speed of sound and attenuation coefficient, MR relaxometry (T1, T2) properties, and Hounsfield number/x-ray attenuation coefficient, with a design capable of tolerating high-pressure pulsatile flow. Fiducial targets, incorporated into the phantoms to allow for registration of images among modalities, were chosen to minimize geometric distortions. Results: High quality distortion-free images of the phantoms with good contrast between vessel lumen, fiducial markers, and background tissue to visualize all stenoses were obtained with each modality. Quantitative assessments of the grade of stenosis revealed significant discrepancies between modalities, with each underestimating the stenosis severity for the higher-stenosed phantoms (70% and 85%) by up to 14%, with the greatest discrepancy attributable to DSA. Conclusions: The design and construction of a range of anatomically realistic renal artery phantoms containing varying degrees of stenosis is described. Images obtained using the main four diagnostic techniques used to detect RAS were free from artifacts and exhibited adequate contrast

  5. METHODS AND MEANS FOR OBTAINING HYDROMAGNETICALLY ACCELERATED PLASMA JET

    DOEpatents

    Marshall, J. Jr.

    1960-11-22

    A hydromagnetic plasma accelerator is described comprising in combination a center electrode, an outer electrode coaxial with the center electrode and defining an annular vacuum chamber therebetween, insulating closure means between the electrodes at one end, means for introducing an ionizable gas into the annular vacuum chamber near one end thereof, and means including a power supply for applying a voltage between the electrodes at the end having the closure means, the open ends of the electrodes being adapted for connection to an evacuated utilization chamber.

  6. GPU-accelerated discontinuous Galerkin methods on hybrid meshes

    NASA Astrophysics Data System (ADS)

    Chan, Jesse; Wang, Zheng; Modave, Axel; Remacle, Jean-Francois; Warburton, T.

    2016-08-01

    We present a time-explicit discontinuous Galerkin (DG) solver for the time-domain acoustic wave equation on hybrid meshes containing vertex-mapped hexahedral, wedge, pyramidal and tetrahedral elements. Discretely energy-stable formulations are presented for both Gauss-Legendre and Gauss-Legendre-Lobatto (Spectral Element) nodal bases for the hexahedron. Stable timestep restrictions for hybrid meshes are derived by bounding the spectral radius of the DG operator using order-dependent constants in trace and Markov inequalities. Computational efficiency is achieved under a combination of element-specific kernels (including new quadrature-free operators for the pyramid), multi-rate timestepping, and acceleration using Graphics Processing Units.
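
    A crude numerical analogue of the timestep argument is sketched below: estimate the spectral radius of a discrete spatial operator by power iteration and size an explicit step against it. The operator here is a toy 1D Laplacian with a forward-Euler stability limit, chosen only to keep the example self-contained; the paper instead derives analytic, order-dependent bounds for the DG operator on hybrid meshes via trace and Markov inequalities.

        import numpy as np

        rng = np.random.default_rng(1)

        def spectral_radius(A, iters=300):
            """Plain power iteration estimate of the spectral radius (dominant |eigenvalue|)."""
            v = rng.standard_normal(A.shape[0])
            v /= np.linalg.norm(v)
            rho = 0.0
            for _ in range(iters):
                w = A @ v
                rho = np.linalg.norm(w)
                v = w / rho
            return rho

        # stand-in discrete operator: 1D Laplacian with Dirichlet ends (a real DG
        # operator on a hybrid mesh would be applied matrix-free here)
        n = 200
        dx = 1.0 / (n + 1)
        A = (np.diag(-2.0 * np.ones(n))
             + np.diag(np.ones(n - 1), 1)
             + np.diag(np.ones(n - 1), -1)) / dx**2

        rho = spectral_radius(A)
        dt = 0.9 * 2.0 / rho    # forward-Euler stability limit dt <= 2/rho, with a safety factor
        print(f"spectral radius ~ {rho:.3e}, stable explicit dt ~ {dt:.3e}")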

  7. Linear Accelerators

    NASA Astrophysics Data System (ADS)

    Sidorin, Anatoly

    2010-01-01

    In linear accelerators the particles are accelerated by either electrostatic fields or oscillating Radio Frequency (RF) fields. Accordingly the linear accelerators are divided into three large groups: electrostatic, induction and RF accelerators. An overview of the different types of accelerators is given. Stability of longitudinal and transverse motion in the RF linear accelerators is briefly discussed. The methods of beam focusing in linacs are described.

  8. Linear Accelerators

    SciTech Connect

    Sidorin, Anatoly

    2010-01-05

    In linear accelerators the particles are accelerated by either electrostatic fields or oscillating Radio Frequency (RF) fields. Accordingly the linear accelerators are divided into three large groups: electrostatic, induction and RF accelerators. An overview of the different types of accelerators is given. Stability of longitudinal and transverse motion in the RF linear accelerators is briefly discussed. The methods of beam focusing in linacs are described.

  9. Particle acceleration by combined diffusive shock acceleration and downstream multiple magnetic island acceleration

    NASA Astrophysics Data System (ADS)

    Zank, G. P.; Hunana, P.; Mostafavi, P.; le Roux, J. A.; Li, Gang; Webb, G. M.; Khabarova, O.

    2015-09-01

    As a consequence of the evolutionary conditions [28; 29], shock waves can generate high levels of downstream vortical turbulence. Simulations [32-34] and observations [30; 31] support the idea that downstream magnetic islands (also called plasmoids or flux ropes) result from the interaction of shocks with upstream turbulence. Zank et al. [18] speculated that a combination of diffusive shock acceleration (DSA) and downstream reconnection-related effects associated with the dynamical evolution of a “sea of magnetic islands” would result in the energization of charged particles. Here, we utilize the transport theory [18; 19] for charged particles propagating diffusively in a turbulent region filled with contracting and reconnecting plasmoids and small-scale current sheets to investigate a combined DSA and downstream multiple magnetic island charged particle acceleration mechanism. We consider separately the effects of the anti-reconnection electric field that is a consequence of magnetic island merging [17], and magnetic island contraction [14]. For the merging plasmoid reconnection-induced electric field only, we find i) that the particle spectrum is a power law in particle speed, flatter than that derived from conventional DSA theory, and ii) that the solution is constant downstream of the shock. For downstream plasmoid contraction only, we find that i) the accelerated particle spectrum is a power law in particle speed, flatter than that derived from conventional DSA theory; ii) for a given energy, the particle intensity peaks downstream of the shock, and the peak location occurs further downstream of the shock with increasing particle energy, and iii) the particle intensity amplification for a particular particle energy, f(x, c/c0)/f(0, c/c0), is not 1, as predicted by DSA theory, but increases with increasing particle energy. These predictions can be tested against observations of electrons and ions accelerated at interplanetary shocks and the heliospheric

  10. Self-consistent Monte Carlo simulations of proton acceleration in coronal shocks: Effect of anisotropic pitch-angle scattering of particles

    NASA Astrophysics Data System (ADS)

    Afanasiev, A.; Battarbee, M.; Vainio, R.

    2015-12-01

    Context. Solar energetic particles observed in association with coronal mass ejections (CMEs) are produced by the CME-driven shock waves. The acceleration of particles is considered to be due to diffusive shock acceleration (DSA). Aims: We aim at a better understanding of DSA in the case of quasi-parallel shocks, in which self-generated turbulence in the shock vicinity plays a key role. Methods: We have developed and applied a new Monte Carlo simulation code for acceleration of protons in parallel coronal shocks. The code performs a self-consistent calculation of resonant interactions of particles with Alfvén waves based on the quasi-linear theory. In contrast to the existing Monte Carlo codes of DSA, the new code features the full quasi-linear resonance condition of particle pitch-angle scattering. This allows us to take the anisotropy of particle pitch-angle scattering into account, while the older codes implement an approximate resonance condition leading to isotropic scattering. We performed simulations with the new code and with an old code, applying the same initial and boundary conditions, and have compared the results provided by both codes with each other, and with the predictions of the steady-state theory. Results: We have found that anisotropic pitch-angle scattering leads to less efficient acceleration of particles than isotropic scattering. However, extrapolations to particle injection rates higher than those we were able to use suggest the capability of DSA to produce relativistic particles. The particle and wave distributions in the foreshock, as well as their time evolution, provided by our new simulation code are significantly different from the previous results and from the steady-state theory. Specifically, the mean free path in the simulations with the new code increases with energy, in contrast to the theoretical result.
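
    The elementary building block of such Monte Carlo codes, a stochastic pitch-angle scattering step, can be sketched as an Ito update of the diffusion equation in the pitch-angle cosine mu. The version below uses the simple isotropic form D(mu) = D0*(1 - mu^2) and a hard clip at |mu| = 1; it is a generic illustration, not the paper's quasi-linear, wave-coupled resonance treatment.

        import numpy as np

        rng = np.random.default_rng(0)

        def pitch_angle_step(mu, dt, d0=1.0):
            """One Ito step of pitch-angle diffusion d(mu) = D'(mu) dt + sqrt(2 D(mu)) dW,
            with the standard isotropic form D(mu) = d0 * (1 - mu^2)."""
            drift = -2.0 * d0 * mu * dt
            diff = np.sqrt(2.0 * d0 * np.maximum(1.0 - mu**2, 0.0) * dt)
            mu_new = mu + drift + diff * rng.standard_normal(mu.shape)
            return np.clip(mu_new, -1.0, 1.0)   # crude treatment of the |mu| = 1 boundaries

        # an initially beamed population isotropizes after a few scattering times
        mu = np.full(20000, 0.9)
        dt = 1e-3
        for _ in range(4000):
            mu = pitch_angle_step(mu, dt)
        print(mu.mean(), mu.std())   # mean -> 0 and std -> 1/sqrt(3) ~ 0.577 for an isotropic distribution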

  11. DSA patterning options for FinFET formation at 7nm node

    NASA Astrophysics Data System (ADS)

    Liu, Chi-Chun C.; Franke, Elliott; Lie, Fee Li; Sieg, Stuart; Tsai, Hsinyu; Lai, Kafai; Truong, Hoa; Farrell, Richard; Somervell, Mark; Sanders, Daniel; Felix, Nelson; Guillorn, Michael; Burns, Sean; Hetzer, David; Ko, Akiteru; Arnold, John; Colburn, Matthew

    2016-03-01

    Several 27nm-pitch directed self-assembly (DSA) processes targeting fin formation for FinFET device fabrication are studied in a 300mm pilot line environment, including chemoepitaxy for conventional fin arrays, graphoepitaxy for a customization approach, and a hybrid approach for self-aligned fin cut. The trade-offs between the DSA flows are discussed in terms of placement error, fin CD/profile uniformity, and design restrictions. Challenges in pattern transfer are observed and process optimizations are discussed. Finally, silicon fins with 100nm depth and on-target CD are demonstrated using the different DSA options with either a lithographic or a self-aligned customization approach.

  12. Combining Diffusive Shock Acceleration with Acceleration by Contracting and Reconnecting Small-scale Flux Ropes at Heliospheric Shocks

    NASA Astrophysics Data System (ADS)

    le Roux, J. A.; Zank, G. P.; Webb, G. M.; Khabarova, O. V.

    2016-08-01

    Computational and observational evidence is accruing that heliospheric shocks, as emitters of vorticity, can produce downstream magnetic flux ropes and filaments. This led Zank et al. to investigate a new paradigm whereby energetic particle acceleration near shocks is a combination of diffusive shock acceleration (DSA) with downstream acceleration by many small-scale contracting and reconnecting (merging) flux ropes. Using a model where flux-rope acceleration involves a first-order Fermi mechanism due to the mean compression of numerous contracting flux ropes, Zank et al. provide theoretical support for observations that power-law spectra of energetic particles downstream of heliospheric shocks can be harder than predicted by DSA theory and that energetic particle intensities should peak behind shocks instead of at shocks as predicted by DSA theory. In this paper, a more extended formalism of kinetic transport theory developed by le Roux et al. is used to further explore this paradigm. We describe how second-order Fermi acceleration, related to the variance in the electromagnetic fields produced by downstream small-scale flux-rope dynamics, modifies the standard DSA model. The results show that (i) this approach can qualitatively reproduce observations of particle intensities peaking behind the shock, thus providing further support for the new paradigm, and (ii) stochastic acceleration by compressible flux ropes tends to be more efficient than incompressible flux ropes behind shocks in modifying the DSA spectrum of energetic particles.

  13. High chi block copolymer DSA to improve pattern quality for FinFET device fabrication

    NASA Astrophysics Data System (ADS)

    Tsai, HsinYu; Miyazoe, Hiroyuki; Vora, Ankit; Magbitang, Teddie; Arellano, Noel; Liu, Chi-Chun; Maher, Michael J.; Durand, William J.; Dawes, Simon J.; Bucchignano, James J.; Gignac, Lynne; Sanders, Daniel P.; Joseph, Eric A.; Colburn, Matthew E.; Willson, C. Grant; Ellison, Christopher J.; Guillorn, Michael A.

    2016-03-01

    Directed self-assembly (DSA) with block-copolymers (BCP) is a promising lithography extension technique to scale below 30nm pitch with 193i lithography. Continued scaling toward 20nm pitch or below will require material system improvements from PS-b-PMMA. Pattern quality for DSA features, such as line edge roughness (LER), line width roughness (LWR), size uniformity, and placement, is key to DSA manufacturability. In this work, we demonstrate finFET devices fabricated with DSA-patterned fins and compare several BCP systems for continued pitch scaling. Organic-organic high chi BCPs at 24nm and 21nm pitches show improved low to mid-frequency LER/LWR after pattern transfer.

  14. Electrochemical cell design for the impedance studies of chlorine evolution at DSA® anodes.

    PubMed

    Silva, J F; Dias, A C; Araújo, P; Brett, C M A; Mendes, A

    2016-08-01

    A new electrochemical cell design suitable for the electrochemical impedance spectroscopy (EIS) studies of chlorine evolution on Dimensionally Stable Anodes (DSA®) has been developed. Despite being considered a powerful tool, EIS has rarely been used to study the kinetics of chlorine evolution at DSA anodes. Cell designs in the open literature are unsuitable for the EIS analysis at high DSA anode current densities for chlorine evolution because they allow gas accumulation at the electrode surface. Using the new cell, the impedance spectra of the DSA anode during chlorine evolution at high sodium chloride concentration (5 mol dm⁻³ NaCl) and high current densities (up to 140 mA cm⁻²) were recorded. Additionally, polarization curves and voltammograms were obtained showing little or no noise. EIS and polarization curves evidence the role of the adsorption step in the chlorine evolution reaction, compatible with the Volmer-Heyrovsky and Volmer-Tafel mechanisms. PMID:27587166

  15. The Lozanov Method for Accelerating the Learning of Foreign Languages.

    ERIC Educational Resources Information Center

    Stanton, H. E.

    1978-01-01

    Discusses the Lozanov Method of teaching foreign languages developed by Lozanov in Bulgaria. This method (also known as Suggestopedia) uses various techniques such as physical relaxation exercises, mental concentration, classical music, and ego-enhancing suggestions. (CFM)

  16. Computer control of large accelerators, design concepts and methods

    NASA Astrophysics Data System (ADS)

    Beck, F.; Gormley, M.

    1985-03-01

    Unlike most of the specialities treated in this volume, control system design is still an art, not a science. This presentation is an attempt to produce a primer for prospective practitioners of this art. A large modern accelerator requires a comprehensive control system for commissioning, machine studies, and day-to-day operation. Faced with the requirement to design a control system for such a machine, the control system architect has a bewildering array of technical devices and techniques at his disposal, and it is our aim in the following chapters to lead him through the characteristics of the problems he will have to face and the practical alternatives available for solving them. We emphasize good system architecture using commercially available hardware and software components, but in addition we discuss the actual control strategies which are to be implemented, since it is at the point of deciding what facilities shall be available that the complexity of the control system and its cost are implicitly decided.

  17. Computer control of large accelerators design concepts and methods

    SciTech Connect

    Beck, F.; Gormley, M.

    1984-05-01

    Unlike most of the specialities treated in this volume, control system design is still an art, not a science. These lectures are an attempt to produce a primer for prospective practitioners of this art. A large modern accelerator requires a comprehensive control system for commissioning, machine studies and day-to-day operation. Faced with the requirement to design a control system for such a machine, the control system architect has a bewildering array of technical devices and techniques at his disposal, and it is our aim in the following chapters to lead him through the characteristics of the problems he will have to face and the practical alternatives available for solving them. We emphasize good system architecture using commercially available hardware and software components, but in addition we discuss the actual control strategies which are to be implemented since it is at the point of deciding what facilities shall be available that the complexity of the control system and its cost are implicitly decided. 19 references.

  18. Applying ILT mask synthesis for co-optimizing design rules and DSA process characteristics

    NASA Astrophysics Data System (ADS)

    Dam, Thuc; Stanton, William

    2014-03-01

    During early stage development of a DSA process, there are many unknown interactions between design, DSA process, RET, and mask synthesis. The computational resolution of these unknowns can guide development towards a common process space whereby manufacturing success can be evaluated. This paper will demonstrate the use of existing Inverse Lithography Technology (ILT) to co-optimize the multitude of parameters. ILT mask synthesis will be applied to a varied hole design space in combination with a range of DSA model parameters under different illumination and RET conditions. The design will range from 40 nm pitch doublet to random DSA designs with larger pitches, while various effective DSA characteristics of shrink bias and corner smoothing will be assumed for the DSA model during optimization. The co-optimization of these design parameters and process characteristics under different SMO solutions and RET conditions (dark/bright field tones and binary/PSM mask types) will also help to provide a complete process mapping of possible manufacturing options. The lithographic performances for masks within the optimized parameter space will be generated to show a common process space with the highest possibility for success.

  19. Clean Slate Environmental Remediation DSA for 10 CFR 830 Compliance

    SciTech Connect

    James L. Traynor, Stephen L. Nicolosi, Michael L. Space, Louis F. Restrepo

    2006-08-01

    Clean Slate Sites II and III are scheduled for environmental remediation (ER) to remove elevated levels of radionuclides in soil. These sites are contaminated with legacy remains of non-nuclear yield nuclear weapons experiments at the Nevada Test Site, that involved high explosive, fissile, and related materials. The sites may also hold unexploded ordnance (UXO) from military training activities in the area over the intervening years. Regulation 10 CFR 830 (Ref. 1) identifies DOE-STD-1120-98 (Ref. 2) and 29 CFR 1910.120 (Ref. 3) as the safe harbor methodologies for performing these remediation operations. Of these methodologies, DOE-STD-1120-98 has been superseded by DOE-STD-1120-2005 (Ref. 4). The project adopted DOE-STD-1120-2005, which includes an approach for ER projects, in combination with 29 CFR 1910.120, as the basis documents for preparing the documented safety analysis (DSA). To securely implement the safe harbor methodologies, we applied DOE-STD-1027-92 (Ref. 5) and DOE-STD-3009-94 (Ref. 6), as needed, to develop a robust hazard classification and hazards analysis that addresses non-standard hazards such as radionuclides and UXO. The hazard analyses provided the basis for identifying Technical Safety Requirements (TSR) level controls. The DOE-STD-1186-2004 (Ref. 7) methodology showed that some controls warranted elevation to Specific Administrative Control (SAC) status. In addition to the Evaluation Guideline (EG) of DOE-STD-3009-94, we also applied the DOE G 420.1 (Ref. 8) annual, radiological dose, siting criterion to define a controlled area around the operation to protect the maximally exposed offsite individual (MOI).

  20. A comparison of acceleration methods for solving the neutron transport k-eigenvalue problem

    SciTech Connect

    Willert, Jeffrey; Park, H.; Knoll, D.A.

    2014-10-01

    Over the past several years a number of papers have been written describing modern techniques for numerically computing the dominant eigenvalue of the neutron transport criticality problem. These methods fall into two distinct categories. The first category of methods rewrites the multi-group k-eigenvalue problem as a nonlinear system of equations and solves the resulting system using either a Jacobian-Free Newton–Krylov (JFNK) method or Nonlinear Krylov Acceleration (NKA), a variant of Anderson Acceleration. These methods are generally successful in significantly reducing the number of transport sweeps required to compute the dominant eigenvalue. The second category of methods utilizes Moment-Based Acceleration (or High-Order/Low-Order (HOLO) Acceleration). These methods solve a sequence of modified diffusion eigenvalue problems whose solutions converge to the solution of the original transport eigenvalue problem. This second class of methods is, in our experience, always superior to the first, as most of the computational work is eliminated by the acceleration from the LO diffusion system. In this paper, we review each of these methods. Our computational results support our claim that the choice of which nonlinear solver to use, JFNK or NKA, should be secondary. The primary computational savings result from the implementation of a HOLO algorithm. We display computational results for a series of challenging multi-dimensional test problems.
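
    For readers unfamiliar with the first category of methods, the sketch below shows the basic idea of accelerating a fixed-point eigenvalue iteration with Anderson mixing (of which NKA is a variant). It is a toy dense-matrix analogue in Python, not a transport solver: the matrix A and the window size m are illustrative assumptions, and a production k-eigenvalue code would wrap transport sweeps rather than a matrix-vector product.

        import numpy as np

        def power_map(A):
            """Fixed-point map g(x) = A x / ||A x|| whose fixed point is the dominant eigenvector."""
            return lambda x: (A @ x) / np.linalg.norm(A @ x)

        def anderson(g, x0, m=3, tol=1e-8, maxit=500):
            """Anderson acceleration of the fixed-point iteration x = g(x)."""
            x, gx = x0, g(x0)
            f = gx - x
            G, F = [gx], [f]                      # short histories of g-values and residuals
            for k in range(maxit):
                if np.linalg.norm(f) < tol:
                    return x, k
                if len(F) > 1:
                    dF = np.column_stack([F[i+1] - F[i] for i in range(len(F) - 1)])
                    dG = np.column_stack([G[i+1] - G[i] for i in range(len(G) - 1)])
                    gamma = np.linalg.lstsq(dF, f, rcond=None)[0]
                    x = gx - dG @ gamma           # mix the previous g-evaluations
                else:
                    x = gx                        # first step: plain fixed-point update
                gx = g(x)
                f = gx - x
                G.append(gx); F.append(f)
                if len(F) > m + 1:                # keep only the last m differences
                    G.pop(0); F.pop(0)
            return x, maxit

        rng = np.random.default_rng(0)
        Q, _ = np.linalg.qr(rng.standard_normal((50, 50)))
        A = Q @ np.diag(np.linspace(0.5, 1.0, 50)) @ Q.T   # dominance ratio ~0.99: slow plain power iteration
        x, iters = anderson(power_map(A), np.ones(50) / np.sqrt(50.0))
        k_eff = x @ A @ x / (x @ x)                        # Rayleigh quotient ~ dominant eigenvalue
        print(iters, k_eff)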

  1. A comparison of acceleration methods for solving the neutron transport k-eigenvalue problem

    NASA Astrophysics Data System (ADS)

    Willert, Jeffrey; Park, H.; Knoll, D. A.

    2014-10-01

    Over the past several years a number of papers have been written describing modern techniques for numerically computing the dominant eigenvalue of the neutron transport criticality problem. These methods fall into two distinct categories. The first category of methods rewrites the multi-group k-eigenvalue problem as a nonlinear system of equations and solves the resulting system using either a Jacobian-Free Newton-Krylov (JFNK) method or Nonlinear Krylov Acceleration (NKA), a variant of Anderson Acceleration. These methods are generally successful in significantly reducing the number of transport sweeps required to compute the dominant eigenvalue. The second category of methods utilizes Moment-Based Acceleration (or High-Order/Low-Order (HOLO) Acceleration). These methods solve a sequence of modified diffusion eigenvalue problems whose solutions converge to the solution of the original transport eigenvalue problem. This second class of methods is, in our experience, always superior to the first, as most of the computational work is eliminated by the acceleration from the LO diffusion system. In this paper, we review each of these methods. Our computational results support our claim that the choice of which nonlinear solver to use, JFNK or NKA, should be secondary. The primary computational savings result from the implementation of a HOLO algorithm. We display computational results for a series of challenging multi-dimensional test problems.

  2. Accelerating molecular property calculations with nonorthonormal Krylov space methods.

    PubMed

    Furche, Filipp; Krull, Brandon T; Nguyen, Brian D; Kwon, Jake

    2016-05-01

    We formulate Krylov space methods for large eigenvalue problems and linear equation systems that take advantage of decreasing residual norms to reduce the cost of matrix-vector multiplication. The residuals are used as subspace basis without prior orthonormalization, which leads to generalized eigenvalue problems or linear equation systems on the Krylov space. These nonorthonormal Krylov space (nKs) algorithms are favorable for large matrices with irregular sparsity patterns whose elements are computed on the fly, because fewer operations are necessary as the residual norm decreases as compared to the conventional method, while errors in the desired eigenpairs and solution vectors remain small. We consider real symmetric and symplectic eigenvalue problems as well as linear equation systems and Sylvester equations as they appear in configuration interaction and response theory. The nKs method can be implemented in existing electronic structure codes with minor modifications and yields speed-ups of 1.2-1.8 in typical time-dependent Hartree-Fock and density functional applications without accuracy loss. The algorithm can compute entire linear subspaces simultaneously which benefits electronic spectra and force constant calculations requiring many eigenpairs or solution vectors. The nKs approach is related to difference density methods in electronic ground state calculations and particularly efficient for integral direct computations of exchange-type contractions. By combination with resolution-of-the-identity methods for Coulomb contractions, three- to fivefold speed-ups of hybrid time-dependent density functional excited state and response calculations are achieved. PMID:27155623
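
    As a rough illustration of the central idea, the sketch below builds a small subspace from unorthonormalized residual vectors and solves the resulting generalized (rather than standard) eigenvalue problem on that subspace; this mirrors the nKs strategy of skipping prior orthonormalization, although the actual implementation described in the paper (symplectic problems, integral-direct contractions, RI acceleration) is far richer. The test matrix, preconditioner, and convergence settings are illustrative assumptions.

        import numpy as np
        from scipy.linalg import eigh

        def nks_lowest(A, x0, max_dim=15, tol=1e-6):
            """Find the lowest eigenpair of a symmetric matrix A using residual vectors
            as a nonorthonormal subspace basis (Davidson-like, without orthonormalization)."""
            V = [x0 / np.linalg.norm(x0)]
            dA = np.diag(A)
            theta, x = 0.0, x0
            for _ in range(max_dim):
                B = np.column_stack(V)
                Asub = B.T @ A @ B
                Ssub = B.T @ B                     # overlap matrix (identity only for an orthonormal basis)
                w, y = eigh(Asub, Ssub)            # generalized eigenproblem on the subspace
                theta, x = w[0], B @ y[:, 0]
                r = A @ x - theta * x              # residual of the current Ritz pair
                if np.linalg.norm(r) < tol * max(1.0, abs(theta)):
                    break
                t = r / (dA - theta + 1e-12)       # diagonal (Davidson) preconditioner
                V.append(t / np.linalg.norm(t))    # appended without orthonormalizing against V
            return theta, x / np.linalg.norm(x)

        rng = np.random.default_rng(1)
        S = rng.standard_normal((200, 200))
        A = np.diag(np.arange(1.0, 201.0)) + 0.01 * (S + S.T)   # diagonally dominant test matrix
        theta, x = nks_lowest(A, rng.standard_normal(200))
        print(theta)   # close to the smallest eigenvalue of A

    In practice, real implementations guard against near-linear dependence of the residual basis (e.g., by restarts), which this toy omits.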

  3. Accelerating molecular property calculations with nonorthonormal Krylov space methods

    NASA Astrophysics Data System (ADS)

    Furche, Filipp; Krull, Brandon T.; Nguyen, Brian D.; Kwon, Jake

    2016-05-01

    We formulate Krylov space methods for large eigenvalue problems and linear equation systems that take advantage of decreasing residual norms to reduce the cost of matrix-vector multiplication. The residuals are used as subspace basis without prior orthonormalization, which leads to generalized eigenvalue problems or linear equation systems on the Krylov space. These nonorthonormal Krylov space (nKs) algorithms are favorable for large matrices with irregular sparsity patterns whose elements are computed on the fly, because fewer operations are necessary as the residual norm decreases as compared to the conventional method, while errors in the desired eigenpairs and solution vectors remain small. We consider real symmetric and symplectic eigenvalue problems as well as linear equation systems and Sylvester equations as they appear in configuration interaction and response theory. The nKs method can be implemented in existing electronic structure codes with minor modifications and yields speed-ups of 1.2-1.8 in typical time-dependent Hartree-Fock and density functional applications without accuracy loss. The algorithm can compute entire linear subspaces simultaneously which benefits electronic spectra and force constant calculations requiring many eigenpairs or solution vectors. The nKs approach is related to difference density methods in electronic ground state calculations and particularly efficient for integral direct computations of exchange-type contractions. By combination with resolution-of-the-identity methods for Coulomb contractions, three- to fivefold speed-ups of hybrid time-dependent density functional excited state and response calculations are achieved.

  4. A co-design method for parallel image processing accelerator based on DSP and FPGA

    NASA Astrophysics Data System (ADS)

    Wang, Ze; Weng, Kaijian; Cheng, Zhao; Yan, Luxin; Guan, Jing

    2011-11-01

    In this paper, we present a co-design method for a parallel image processing accelerator based on DSP and FPGA. The DSP is used as the application and operation subsystem to execute complex operations, in which algorithms are resolved into commands. The FPGA is used as a co-processing subsystem for regular data-parallel processing; operation commands and image data are transmitted to the FPGA for processing acceleration. A series of experiments has been carried out, and up to one half to three quarters of the processing time is saved, which indicates that the proposed accelerator consumes less time and achieves better performance than traditional systems.

  5. Accelerating molecular property calculations with nonorthonormal Krylov space methods

    DOE PAGES Beta

    Furche, Filipp; Krull, Brandon T.; Nguyen, Brian D.; Kwon, Jake

    2016-05-03

    Here, we formulate Krylov space methods for large eigenvalue problems and linear equation systems that take advantage of decreasing residual norms to reduce the cost of matrix-vector multiplication. The residuals are used as subspace basis without prior orthonormalization, which leads to generalized eigenvalue problems or linear equation systems on the Krylov space. These nonorthonormal Krylov space (nKs) algorithms are favorable for large matrices with irregular sparsity patterns whose elements are computed on the fly, because fewer operations are necessary as the residual norm decreases as compared to the conventional method, while errors in the desired eigenpairs and solution vectors remain small. We consider real symmetric and symplectic eigenvalue problems as well as linear equation systems and Sylvester equations as they appear in configuration interaction and response theory. The nKs method can be implemented in existing electronic structure codes with minor modifications and yields speed-ups of 1.2-1.8 in typical time-dependent Hartree-Fock and density functional applications without accuracy loss. The algorithm can compute entire linear subspaces simultaneously which benefits electronic spectra and force constant calculations requiring many eigenpairs or solution vectors. The nKs approach is related to difference density methods in electronic ground state calculations and particularly efficient for integral direct computations of exchange-type contractions. By combination with resolution-of-the-identity methods for Coulomb contractions, three- to fivefold speed-ups of hybrid time-dependent density functional excited state and response calculations are achieved.

  6. Constraint methods that accelerate free-energy simulations of biomolecules.

    PubMed

    Perez, Alberto; MacCallum, Justin L; Coutsias, Evangelos A; Dill, Ken A

    2015-12-28

    Atomistic molecular dynamics simulations of biomolecules are critical for generating narratives about biological mechanisms. The power of atomistic simulations is that these are physics-based methods that satisfy Boltzmann's law, so they can be used to compute populations, dynamics, and mechanisms. But physical simulations are computationally intensive and do not scale well to the sizes of many important biomolecules. One way to speed up physical simulations is by coarse-graining the potential function. Another way is to harness structural knowledge, often by imposing spring-like restraints. But harnessing external knowledge in physical simulations is problematic because knowledge, data, or hunches have errors, noise, and combinatoric uncertainties. Here, we review recent principled methods for imposing restraints to speed up physics-based molecular simulations that promise to scale to larger biomolecules and motions. PMID:26723628

  7. Constraint methods that accelerate free-energy simulations of biomolecules

    SciTech Connect

    Perez, Alberto; MacCallum, Justin L.; Coutsias, Evangelos A.; Dill, Ken A.

    2015-12-28

    Atomistic molecular dynamics simulations of biomolecules are critical for generating narratives about biological mechanisms. The power of atomistic simulations is that these are physics-based methods that satisfy Boltzmann’s law, so they can be used to compute populations, dynamics, and mechanisms. But physical simulations are computationally intensive and do not scale well to the sizes of many important biomolecules. One way to speed up physical simulations is by coarse-graining the potential function. Another way is to harness structural knowledge, often by imposing spring-like restraints. But harnessing external knowledge in physical simulations is problematic because knowledge, data, or hunches have errors, noise, and combinatoric uncertainties. Here, we review recent principled methods for imposing restraints to speed up physics-based molecular simulations that promise to scale to larger biomolecules and motions.

  8. GPU acceleration of particle-in-cell methods

    NASA Astrophysics Data System (ADS)

    Cowan, Benjamin; Cary, John; Meiser, Dominic

    2015-11-01

    Graphics processing units (GPUs) have become key components in many supercomputing systems, as they can provide more computations relative to their cost and power consumption than conventional processors. However, to take full advantage of this capability, they require a strict programming model which involves single-instruction multiple-data execution as well as significant constraints on memory accesses. To bring the full power of GPUs to bear on plasma physics problems, we must adapt the computational methods to this new programming model. We have developed a GPU implementation of the particle-in-cell (PIC) method, one of the mainstays of plasma physics simulation. This framework is highly general and enables advanced PIC features such as high order particles and absorbing boundary conditions. The main elements of the PIC loop, including field interpolation and particle deposition, are designed to optimize memory access. We describe the performance of these algorithms and discuss some of the methods used. Work supported by DARPA contract W31P4Q-15-C-0061 (SBIR).
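
    To make the memory-access issue concrete, the sketch below shows the two kernels named in the abstract, charge deposition (scatter) and field interpolation (gather), for linear cloud-in-cell weighting on a periodic 1-D grid. It is a plain NumPy illustration, not the authors' GPU code: on a GPU the scatter step (the np.add.at analogue) is precisely where atomics or particle sorting are needed to avoid write conflicts.

        import numpy as np

        def deposit_charge(xp, qp, dx, ng):
            """Cloud-in-cell (linear-weighting) charge deposition on a periodic grid.
            xp: particle positions, assumed already wrapped into [0, ng*dx)."""
            g = xp / dx
            i = np.floor(g).astype(int) % ng
            w = g - np.floor(g)                    # fractional distance to the left node
            rho = np.zeros(ng)
            np.add.at(rho, i, qp * (1.0 - w))      # scatter: many particles may hit one cell
            np.add.at(rho, (i + 1) % ng, qp * w)
            return rho / dx

        def gather_field(E, xp, dx):
            """Interpolate a grid field to particle positions with the same weights."""
            ng = E.size
            g = xp / dx
            i = np.floor(g).astype(int) % ng
            w = g - np.floor(g)
            return E[i] * (1.0 - w) + E[(i + 1) % ng] * w

        ng, dx = 64, 0.1
        xp = np.random.default_rng(2).uniform(0.0, ng * dx, size=10000)
        rho = deposit_charge(xp, np.full(xp.size, 1.0e-3), dx, ng)
        print(rho.sum() * dx)   # total charge is conserved: 10000 * 1e-3 = 10.0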

  9. Aitken-based acceleration methods for assessing convergence of multilayer neural networks.

    PubMed

    Pilla, R S; Kamarthi, S V; Lindsay, B G

    2001-01-01

    This paper first develops the ideas of the Aitken Δ² method to accelerate the rate of convergence of an error sequence (value of the objective function at each step) obtained by training a neural network with a sigmoidal activation function via the backpropagation algorithm. The Aitken method is exact when the error sequence is exactly geometric. However, theoretical and empirical evidence suggests that the best possible rate of convergence obtainable for such an error sequence is log-geometric. This paper develops a new invariant extended-Aitken acceleration method for accelerating log-geometric sequences. The resulting accelerated sequence enables one to predict the final value of the error function. These predictions can in turn be used to assess the distance between the current and final solution and thereby provide a stopping criterion for a desired accuracy. Each of the techniques described is applicable to a wide range of problems. The invariant extended-Aitken acceleration approach shows improved acceleration as well as outstanding prediction of the final error in the practical problems considered. PMID:18249928
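
    The classical Δ² transformation that the paper builds on is simple enough to state in a few lines. The sketch below (plain Python, illustrative values only) accelerates a geometric error sequence exactly, which is the case where the basic Aitken method is exact; the paper's invariant extended-Aitken variant for log-geometric sequences is not reproduced here.

        def aitken_delta2(x):
            """Aitken's delta-squared transform: a_n = x_n - (x_{n+1}-x_n)^2 / (x_{n+2}-2x_{n+1}+x_n)."""
            out = []
            for n in range(len(x) - 2):
                d1 = x[n + 1] - x[n]
                d2 = x[n + 2] - 2.0 * x[n + 1] + x[n]
                out.append(x[n + 2] if d2 == 0 else x[n] - d1 * d1 / d2)
            return out

        # Exactly geometric error sequence: e_n = limit + C * r**n.
        limit, C, r = 0.05, 2.0, 0.9
        errors = [limit + C * r**n for n in range(8)]
        print(aitken_delta2(errors))   # every accelerated term equals the limit 0.05

    Predicting the final error from early iterates, as described in the abstract, amounts to reading off this extrapolated limit and using it in a stopping rule.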

  10. Accelerated signal encoding and reconstruction using pixon method

    DOEpatents

    Puetter, Richard; Yahil, Amos; Pina, Robert

    2005-05-17

    The method identifies a Pixon element, which is a fundamental and indivisible unit of information, and a Pixon basis, which is the set of possible functions from which the Pixon elements are selected. The actual Pixon elements selected from this basis during the reconstruction process represent the smallest number of such units required to fit the data and the minimum number of parameters necessary to specify the image. The Pixon kernels can have arbitrary properties (e.g., shape, size, and/or position) as needed to best fit the data.

  11. Accelerated signal encoding and reconstruction using pixon method

    DOEpatents

    Puetter, Richard; Yahil, Amos

    2002-01-01

    The method identifies a Pixon element, which is a fundamental and indivisible unit of information, and a Pixon basis, which is the set of possible functions from which the Pixon elements are selected. The actual Pixon elements selected from this basis during the reconstruction process represent the smallest number of such units required to fit the data and the minimum number of parameters necessary to specify the image. The Pixon kernels can have arbitrary properties (e.g., shape, size, and/or position) as needed to best fit the data.

  12. Accelerated signal encoding and reconstruction using pixon method

    DOEpatents

    Puetter, Richard; Yahil, Amos

    2002-01-01

    The method identifies a Pixon element, which is a fundamental and indivisible unit of information, and a Pixon basis, which is the set of possible functions from which the Pixon elements are selected. The actual Pixon elements selected from this basis during the reconstruction process represent the smallest number of such units required to fit the data and the minimum number of parameters necessary to specify the image. The Pixon kernels can have arbitrary properties (e.g., shape, size, and/or position) as needed to best fit the data.

  13. Investigation on Accelerating Dust Storm Simulation via Domain Decomposition Methods

    NASA Astrophysics Data System (ADS)

    Yu, M.; Gui, Z.; Yang, C. P.; Xia, J.; Chen, S.

    2014-12-01

    Dust storm simulation is a data- and computing-intensive process that requires high efficiency and adequate computing resources. To speed up the process, high performance computing is widely adopted. By partitioning a large study area into small subdomains according to their geographic location and executing them on different computing nodes in a parallel fashion, the computing performance can be significantly improved. However, it remains an open question how to allocate these subdomain processes to computing nodes without introducing imbalanced task loads and unnecessary communications among computing nodes. Here we propose a domain decomposition and allocation framework that balances the computing cost and communication cost for each computing node to minimize total execution time and reduce overall communication cost for the entire system. The framework is tested in the NMM (Nonhydrostatic Mesoscale Model)-dust model, where a 72-hour evolution of the dust load is simulated. Performance results using the proposed scheduling method are compared with those using the default MPI scheduling. Results demonstrate that the system improves simulation performance by 20% to 80%.
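
    The allocation step described above can be sketched with a classical greedy heuristic: sort the subdomains by estimated compute cost and always hand the next one to the least-loaded node. This is only a load-balancing toy in Python (the cost figures are invented), and it ignores the communication term that the paper's framework additionally minimizes.

        import heapq

        def assign_subdomains(costs, n_nodes):
            """Longest-processing-time greedy assignment of subdomain costs to nodes."""
            heap = [(0.0, node) for node in range(n_nodes)]   # (current load, node id)
            heapq.heapify(heap)
            assignment = {}
            for sub, cost in sorted(enumerate(costs), key=lambda kv: kv[1], reverse=True):
                load, node = heapq.heappop(heap)              # least-loaded node so far
                assignment[sub] = node
                heapq.heappush(heap, (load + cost, node))
            return assignment

        costs = [8.0, 3.5, 6.2, 1.1, 7.4, 2.9, 5.0, 4.3]      # hypothetical per-subdomain compute costs
        print(assign_subdomains(costs, n_nodes=3))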

  14. Method for run time hardware code profiling for algorithm acceleration

    NASA Astrophysics Data System (ADS)

    Matev, Vladimir; de la Torre, Eduardo; Riesgo, Teresa

    2009-05-01

    In this paper we propose a method for run-time profiling of applications at the instruction level by analysis of loops. Instead of looking for coarse-grain blocks, we concentrate on fine-grain but still costly blocks in terms of execution time. Most code profiling is done in software by introducing code into the application under profile, which incurs a time overhead; in this work, data on the position of a loop, its body, its size, and its number of executions are stored and analysed using a small, non-intrusive hardware block. The paper describes the mapping of the system to runtime reconfigurable systems. The synthesis results for the fine-grain code detector block and its functional verification are also presented. To demonstrate the concept, the MediaBench multimedia benchmark running on the chosen development platform is used.

  15. Sequential electrochemical treatment of dairy wastewater using aluminum and DSA-type anodes.

    PubMed

    Borbón, Brenda; Oropeza-Guzman, Mercedes Teresita; Brillas, Enric; Sirés, Ignasi

    2014-01-01

    Dairy wastewater is characterized by a high content of hardly biodegradable dissolved, colloidal, and suspended organic matter. This work firstly investigates the performance of two individual electrochemical treatments, namely electrocoagulation (EC) and electro-oxidation (EO), in order to finally assess the mineralization ability of a sequential EC/EO process. EC with an Al anode was employed as a primary pretreatment for the conditioning of 800 mL of wastewater. A complete reduction of turbidity, as well as 90 and 81% of chemical oxygen demand (COD) and total organic carbon (TOC) removal, respectively, were achieved after 120 min of EC at 9.09 mA cm⁻². For EO, two kinds of dimensionally stable anode (DSA) electrodes (Ti/IrO₂-Ta₂O₅ and Ti/IrO₂-SnO₂-Sb₂O₅) were prepared by the Pechini method, obtaining homogeneous coatings with uniform composition and high roughness. The •OH formed at the DSA surface from H₂O oxidation were not detected by electron spin resonance. However, their indirect determination by means of H₂O₂ measurements revealed that Ti/IrO₂-SnO₂-Sb₂O₅ is able to produce partially physisorbed radicals. Since the characterization of the wastewater revealed the presence of indole derivatives, preliminary bulk electrolyses were done in ultrapure water containing 1 mM indole in sulfate and/or chloride media. The performance of EO with the Ti/IrO₂-Ta₂O₅ anode was evaluated from the TOC removal and the UV/Vis absorbance decay. The mineralization was very poor in 0.05 M Na₂SO₄, whereas it increased considerably at a greater Cl⁻ content, meaning that the oxidation mediated by electrogenerated species such as Cl₂, HClO, and/or ClO⁻ competes and even predominates over the •OH-mediated oxidation. The EO treatment of EC-pretreated dairy wastewater allowed obtaining a global 98% TOC removal, decreasing from 1,062 to <30 mg L⁻¹. PMID:24671400

  16. Apparatus and method for phosphate-accelerated bioremediation

    DOEpatents

    Looney, Brian B.; Pfiffner, Susan M.; Phelps, Tommy J.; Lombard, Kenneth H.; Hazen, Terry C.; Borthen, James W.

    1998-01-01

    An apparatus and method for supplying a vapor-phase nutrient to contaminated soil for in situ bioremediation. The apparatus includes a housing adapted for containing a quantity of the liquid nutrient, a conduit in communication with the interior of the housing, means for causing a gas to flow through the conduit, and means for contacting the gas with the liquid so that a portion thereof evaporates and mixes with the gas. The mixture of gas and nutrient vapor is delivered to the contaminated site via a system of injection and extraction wells configured to the site and provides for the use of a passive delivery system. The mixture has a partial pressure of vaporized nutrient that is no greater than the vapor pressure of the liquid. If desired, the nutrient and/or the gas may be heated to increase the vapor pressure and the nutrient concentration of the mixture. Preferably, the nutrient is a volatile, substantially nontoxic and nonflammable organic phosphate that is a liquid at environmental temperatures, such as triethyl phosphate or tributyl phosphate.

  17. Graphics processing unit acceleration of computational electromagnetic methods

    NASA Astrophysics Data System (ADS)

    Inman, Matthew

    The use of graphics processing units (GPUs) for scientific applications has been evolving and expanding for the past decade. GPUs provide an alternative to the CPU in the creation and execution of the numerical codes that are often relied upon to perform simulations in computational electromagnetics. While originally designed purely to display graphics on the user's monitor, GPUs today are essentially powerful floating-point co-processors that can be programmed not only to render complex graphics but also to perform the complex mathematical calculations often encountered in scientific computing. The GPUs currently being produced often contain hundreds of separate cores able to access large amounts of high-speed dedicated memory. By utilizing the power offered by such a specialized processor, it is possible to drastically speed up the calculations required in computational electromagnetics. This increase in speed allows GPU-based simulations to be used in a variety of situations in which computational time has heretofore been a limiting factor, such as educational courses. Teaching electromagnetics often relies upon simple example problems because of the simulation times needed to analyze more complex ones. By adapting the methods for use on the GPU, GPU-based simulations will be shown to allow demonstrations of more advanced problems than was previously practical. Modules will be developed for a wide variety of teaching situations, utilizing the speed of the GPU to demonstrate techniques and ideas that were previously unrealizable.

  18. Apparatus and method for phosphate-accelerated bioremediation

    DOEpatents

    Looney, B.B.; Pfiffner, S.M.; Phelps, T.J.; Lombard, K.H.; Hazen, T.C.; Borthen, J.W.

    1998-05-19

    An apparatus and method are provided for supplying a vapor-phase nutrient to contaminated soil for in situ bioremediation. The apparatus includes a housing adapted for containing a quantity of the liquid nutrient, a conduit in communication with the interior of the housing, means for causing a gas to flow through the conduit, and means for contacting the gas with the liquid so that a portion evaporates and mixes with the gas. The mixture of gas and nutrient vapor is delivered to the contaminated site via a system of injection and extraction wells configured to the site and provides for the use of a passive delivery system. The mixture has a partial pressure of vaporized nutrient that is no greater than the vapor pressure of the liquid. If desired, the nutrient and/or the gas may be heated to increase the vapor pressure and the nutrient concentration of the mixture. Preferably, the nutrient is a volatile, substantially nontoxic and nonflammable organic phosphate that is a liquid at environmental temperatures, such as triethyl phosphate or tributyl phosphate. 8 figs.

  19. Apparatus and method for phosphate-accelerated bioremediation

    DOEpatents

    Looney, B.B.; Phelps, T.J.; Hazen, T.C.; Pfiffner, S.M.; Lombard, K.H.; Borthen, J.W.

    1994-01-01

    An apparatus and method for supplying a vapor-phase nutrient to contaminated soil for in situ bioremediation. The apparatus includes a housing adapted for containing a quantity of the liquid nutrient, a conduit in fluid communication with the interior of the housing, means for causing a gas to flow through the conduit, and means for contacting the gas with the liquid so that a portion thereof evaporates and mixes with the gas. The mixture of gas and nutrient vapor is delivered to the contaminated site via a system of injection and extraction wells configured to the site. The mixture has a partial pressure of vaporized nutrient that is no greater than the vapor pressure of the liquid. If desired, the nutrient and/or the gas may be heated to increase the vapor pressure and the nutrient concentration of the mixture. Preferably, the nutrient is a volatile, substantially nontoxic and nonflammable organic phosphate that is a liquid at environmental temperatures, such as triethyl phosphate or tributyl phosphate.

  20. Method of correcting eddy current magnetic fields in particle accelerator vacuum chambers

    DOEpatents

    Danby, G.T.; Jackson, J.W.

    1990-03-19

    A method for correcting magnetic field aberrations produced by eddy currents induced in a particle accelerator vacuum chamber housing is provided wherein correction windings are attached to selected positions on the housing and the windings are energized by transformer action from secondary coils, which coils are inductively coupled to the poles of electro-magnets that are powered to confine the charged particle beam within a desired orbit as the charged particles are accelerated through the vacuum chamber by a particle-driving rf field. The power inductively coupled to the secondary coils varies as a function of variations in the power supplied by the particle-accelerating rf field to a beam of particles accelerated through the vacuum chamber, so the current in the energized correction coils is effective to cancel eddy current flux fields that would otherwise be induced in the vacuum chamber by power variations (dB/dt) in the particle beam.

  1. Method of correcting eddy current magnetic fields in particle accelerator vacuum chambers

    DOEpatents

    Danby, Gordon T.; Jackson, John W.

    1991-01-01

    A method for correcting magnetic field aberrations produced by eddy currents induced in a particle accelerator vacuum chamber housing is provided wherein correction windings are attached to selected positions on the housing and the windings are energized by transformer action from secondary coils, which coils are inductively coupled to the poles of electro-magnets that are powered to confine the charged particle beam within a desired orbit as the charged particles are accelerated through the vacuum chamber by a particle-driving rf field. The power inductively coupled to the secondary coils varies as a function of variations in the power supplied by the particle-accelerating rf field to a beam of particles accelerated through the vacuum chamber, so the current in the energized correction coils is effective to cancel eddy current flux fields that would otherwise be induced in the vacuum chamber by power variations in the particle beam.

  2. Multigrid lattice Boltzmann method for accelerated solution of elliptic equations

    NASA Astrophysics Data System (ADS)

    Patil, Dhiraj V.; Premnath, Kannan N.; Banerjee, Sanjoy

    2014-05-01

    A new solver for second-order elliptic partial differential equations (PDEs) based on the lattice Boltzmann method (LBM) and the multigrid (MG) technique is presented. Several benchmark elliptic equations are solved numerically with the inclusion of multiple grid-levels in two-dimensional domains at an optimal computational cost within the LB framework. The results are compared with the corresponding analytical solutions and numerical solutions obtained using the Stone's strongly implicit procedure. The classical PDEs considered in this article include the Laplace and Poisson equations with Dirichlet boundary conditions, with the latter involving both constant and variable coefficients. A detailed analysis of solution accuracy, convergence and computational efficiency of the proposed solver is given. It is observed that the use of a high-order stencil (for smoothing) improves convergence and accuracy for an equivalent number of smoothing sweeps. The effect of the type of scheduling cycle (V- or W-cycle) on the performance of the MG-LBM is analyzed. Next, a parallel algorithm for the MG-LBM solver is presented and then its parallel performance on a multi-core cluster is analyzed. Lastly, a practical example is provided wherein the proposed elliptic PDE solver is used to compute the electro-static potential encountered in an electro-chemical cell, which demonstrates the effectiveness of this new solver in complex coupled systems. Several orders of magnitude gains in convergence and parallel scaling for the canonical problems, and a factor of 5 reduction for the multiphysics problem are achieved using the MG-LBM.
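
    For orientation, the sketch below shows a bare-bones geometric two-grid correction cycle for the 1-D Poisson model problem, the textbook core of any multigrid solver. It is written in plain Python with weighted-Jacobi smoothing and a direct coarse solve, and it is not the lattice-Boltzmann-based MG solver of the paper (grid sizes and sweep counts are illustrative).

        import numpy as np

        def jacobi(u, f, h, sweeps, omega=2.0/3.0):
            """Weighted-Jacobi smoothing for -u'' = f with u[0] = u[-1] = 0."""
            for _ in range(sweeps):
                u = u.copy()
                u[1:-1] = (1.0 - omega) * u[1:-1] + omega * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
            return u

        def two_grid(u, f, h, nu1=2, nu2=2):
            """One cycle: pre-smooth, restrict residual, solve coarse problem, prolongate, post-smooth."""
            u = jacobi(u, f, h, nu1)
            r = np.zeros_like(u)
            r[1:-1] = f[1:-1] - (2.0 * u[1:-1] - u[:-2] - u[2:]) / h**2
            nc = (u.size - 3) // 2                        # coarse interior points (fine has 2*nc+1)
            rc = np.zeros(nc + 2)
            for j in range(1, nc + 1):                    # full-weighting restriction
                rc[j] = 0.25 * (r[2*j - 1] + 2.0 * r[2*j] + r[2*j + 1])
            H = 2.0 * h
            Ac = (2.0 * np.eye(nc) - np.eye(nc, k=1) - np.eye(nc, k=-1)) / H**2
            ec = np.zeros(nc + 2)
            ec[1:-1] = np.linalg.solve(Ac, rc[1:-1])      # direct coarse-grid solve
            e = np.zeros_like(u)
            for j in range(1, nc + 1):                    # linear prolongation of the coarse correction
                e[2*j] = ec[j]
            for j in range(1, nc + 2):
                e[2*j - 1] = 0.5 * (ec[j - 1] + ec[j])
            return jacobi(u + e, f, h, nu2)

        n = 63                                            # fine-grid interior points (2**6 - 1)
        h = 1.0 / (n + 1)
        x = np.linspace(0.0, 1.0, n + 2)
        f = np.pi**2 * np.sin(np.pi * x)
        u = np.zeros(n + 2)
        for _ in range(10):
            u = two_grid(u, f, h)
        print(np.max(np.abs(u - np.sin(np.pi * x))))      # small: only discretization error remains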

  3. A parameter identification method combining acceleration search (Partan) and continuous parameter tracking.

    NASA Technical Reports Server (NTRS)

    Jackson, G. A.

    1972-01-01

    A parameter identification method is presented which combines the best features of two well-established, existing methods: Continuous Parameter Tracking and Acceleration Search (Partan). In this paper the equations are developed for the general n-parameter identification problem, and results are given for a specific two parameter application.

  4. Grey transport acceleration method for time-dependent radiative transfer problems

    SciTech Connect

    Larsen, E.

    1988-10-01

    A new iterative method for solving the time-dependent multifrequency radiative transfer equations is described. The method is applicable to semi-implicit time discretizations that generate a linear steady-state multifrequency transport problem with pseudo-scattering within each time step. The standard "lambda" iteration method is shown to often converge slowly for such problems, and the new grey transport acceleration (GTA) method, based on accelerating the lambda method by employing a grey, or frequency-independent, transport equation, is developed. The GTA method is shown, theoretically by an iterative Fourier analysis, and experimentally by numerical calculations, to converge significantly faster than the lambda method. In addition, the GTA method is conceptually simple to implement for general differencing schemes, on either Eulerian or Lagrangian meshes. copyright 1988 Academic Press, Inc.

  5. GPU-accelerated indirect boundary element method for voxel model analyses with fast multipole method

    NASA Astrophysics Data System (ADS)

    Hamada, Shoji

    2011-05-01

    An indirect boundary element method (BEM) that uses the fast multipole method (FMM) was accelerated using graphics processing units (GPUs) to reduce the time required to calculate a three-dimensional electrostatic field. The BEM is designed to handle cubic voxel models and is specialized to consider square voxel walls as boundary surface elements. The FMM handles the interactions among the surface charge elements and directly outputs surface integrals of the fields over each individual element. The CPU code was originally developed for field analysis in human voxel models derived from anatomical images. FMM processes are programmed using the NVIDIA Compute Unified Device Architecture (CUDA) with double-precision floating-point arithmetic on the basis of a shared pseudocode template. The electric field induced by DC-current application between two electrodes is calculated for two models with 499,629 (model 1) and 1,458,813 (model 2) surface elements. The calculation times were measured with a four-GPU configuration (two NVIDIA GTX295 cards) with four CPU cores (an Intel Core i7-975 processor). The times required by a linear system solver are 31 s and 186 s for models 1 and 2, respectively. The speed-up ratios of the FMM range from 5.9 to 8.2 for model 1 and from 5.0 to 5.6 for model 2. The calculation speed for element-interaction in this BEM analysis was comparable to that of particle-interaction using FMM on a GPU.

  6. Detecting chaos in particle accelerators through the frequency map analysis method

    SciTech Connect

    Papaphilippou, Yannis

    2014-06-01

    The motion of beams in particle accelerators is dominated by a plethora of non-linear effects, which can enhance chaotic motion and limit their performance. The application of advanced non-linear dynamics methods for detecting and correcting these effects and thereby increasing the region of beam stability plays an essential role during the accelerator design phase as well as during operation. After describing the nature of non-linear effects and their impact on performance parameters of different particle accelerator categories, the theory of non-linear particle motion is outlined. Recent developments in the methods employed for the analysis of chaotic beam motion are detailed. In particular, the ability of the frequency map analysis method to detect chaotic motion and guide the correction of non-linear effects is demonstrated in particle tracking simulations as well as in experimental data.

  7. Predictive Simulation and Design of Materials by Quasicontinuum and Accelerated Dynamics Methods

    SciTech Connect

    Luskin, Mitchell; James, Richard; Tadmor, Ellad

    2014-03-30

    This project developed the hyper-QC multiscale method to make possible the computation of previously inaccessible space and time scales for materials with thermally activated defects. The hyper-QC method combines the spatial coarse-graining feature of a finite temperature extension of the quasicontinuum (QC) method (aka “hot-QC”) with the accelerated dynamics feature of hyperdynamics. The hyper-QC method was developed, optimized, and tested from a rigorous mathematical foundation.

  8. Kinetic Simulations of Particle Acceleration at Shocks

    SciTech Connect

    Caprioli, Damiano; Guo, Fan

    2015-07-16

    Collisionless shocks are mediated by collective electromagnetic interactions and are sources of non-thermal particles and emission. The full particle-in-cell approach and a hybrid approach are sketched, simulations of collisionless shocks are shown using a multicolor presentation. Results for SN 1006, a case involving ion acceleration and B field amplification where the shock is parallel, are shown. Electron acceleration takes place in planetary bow shocks and galaxy clusters. It is concluded that acceleration at shocks can be efficient: >15%; CRs amplify B field via streaming instability; ion DSA is efficient at parallel, strong shocks; ions are injected via reflection and shock drift acceleration; and electron DSA is efficient at oblique shocks.

  9. Accelerated Block Preconditioned Gradient method for large scale wave functions calculations in Density Functional Theory

    SciTech Connect

    Fattebert, J.-L.

    2010-01-20

    An Accelerated Block Preconditioned Gradient (ABPG) method is proposed to solve electronic structure problems in Density Functional Theory. This iterative algorithm is designed to solve directly the non-linear Kohn-Sham equations for accurate discretization schemes involving a large number of degrees of freedom. It makes use of an acceleration scheme similar to what is known as RMM-DIIS in the electronic structure community. The method is illustrated with examples of convergence for large scale applications using a finite difference discretization and multigrid preconditioning.

  10. DSA volumetric 3D reconstructions of intracranial aneurysms: A pictorial essay

    PubMed Central

    Cieściński, Jakub; Serafin, Zbigniew; Strześniewski, Piotr; Lasek, Władysław; Beuth, Wojciech

    2012-01-01

    The gold standard of cerebral vessel imaging remains digital subtraction angiography (DSA) performed in three projections. However, in specific clinical cases, many additional projections are required, or a complete visualization of a lesion may even be impossible with 2D angiography. Three-dimensional (3D) reconstructions of rotational angiography were reported to improve the performance of DSA significantly. In this pictorial essay, specific applications of this technique are presented in the management of intracranial aneurysms, including: preoperative aneurysm evaluation, intraoperative imaging, and follow-up. Volumetric reconstructions of 3D DSA are a valuable tool for cerebral vessel imaging. They play a vital role in the assessment of intracranial aneurysms, especially in the evaluation of the aneurysm neck and of aneurysm recanalization. PMID:22844309

  11. Three dimensional finite element methods: Their role in the design of DC accelerator systems

    SciTech Connect

    Podaru, Nicolae C.; Gottdang, A.; Mous, D. J. W.

    2013-04-19

    High Voltage Engineering has designed, built and tested a 2 MV dual irradiation system that will be applied for radiation damage studies and ion beam material modification. The system consists of two independent accelerators which support simultaneous proton and electron irradiation (energy range 100 keV - 2 MeV) of target sizes of up to 300 × 300 mm². Three dimensional finite element methods were used in the design of various parts of the system. The electrostatic solver was used to quantify essential parameters of the solid-state power supply generating the DC high voltage. The magnetostatic solver and ray tracing were used to optimize the electron/ion beam transport. Close agreement between design and measurements of the accelerator characteristics as well as beam performance indicates the usefulness of three dimensional finite element methods during accelerator system design.

  12. Three dimensional finite element methods: Their role in the design of DC accelerator systems

    NASA Astrophysics Data System (ADS)

    Podaru, Nicolae C.; Gottdang, A.; Mous, D. J. W.

    2013-04-01

    High Voltage Engineering has designed, built and tested a 2 MV dual irradiation system that will be applied for radiation damage studies and ion beam material modification. The system consists of two independent accelerators which support simultaneous proton and electron irradiation (energy range 100 keV - 2 MeV) of target sizes of up to 300 × 300 mm². Three dimensional finite element methods were used in the design of various parts of the system. The electrostatic solver was used to quantify essential parameters of the solid-state power supply generating the DC high voltage. The magnetostatic solver and ray tracing were used to optimize the electron/ion beam transport. Close agreement between design and measurements of the accelerator characteristics as well as beam performance indicates the usefulness of three dimensional finite element methods during accelerator system design.

  13. A case report and DSA findings of cerebral hemorrhage caused by syphilitic vasculitis.

    PubMed

    Zhang, Xia; Xiao, Guo-Dong; Xu, Xing-Shun; Zhang, Chun-Yuan; Liu, Chun-Feng; Cao, Yong-Jun

    2012-12-01

    Syphilis is now rare and easily misdiagnosed because of the wide use of antibiotics in clinical practice. We report a case of cerebral hemorrhage in a patient with hypertension that was initially diagnosed as hypertensive cerebral hemorrhage. However, Treponema pallidum particle agglutination and rapid plasma reagin tests of cerebrospinal fluid revealed the existence of neurosyphilis. Interestingly, digital subtraction angiography (DSA) showed severe stenosis in both middle cerebral arteries and the right anterior cerebral artery. The case reminds us to pay attention to syphilitic vasculitis in patients with cryptogenic stroke. DSA may sometimes play a critical role in the differential diagnosis of neurosyphilis. PMID:22198645

  14. [2011 Shanghai customer satisfaction report of DSA/X-ray equipment's after-service].

    PubMed

    Li, Bin; Qian, Jianguo; Cao, Shaoping; Zheng, Yunxin; Xu, Zitian; Wang, Lijun

    2012-11-01

    To improve manufacturers' after-sale service for medical equipment, the fifth Shanghai zone customer satisfaction survey was launched at the end of 2011. DSA/X-ray equipment was set up as an independent category for the first time. The survey shows that the customer satisfaction index (CSI) for DSA/X-ray equipment is higher than last year's, that the satisfaction scores for preventive maintenance and service contracts are lower than the others, and that the CSI of local brands is lower than that of imported brands. PMID:23461127

  15. Evaluation of micro-colorimetric lipid determination method with samples prepared using sonication and accelerated solvent extraction methods.

    PubMed

    Billa, Nanditha; Hubin-Barrows, Dylan; Lahren, Tylor; Burkhard, Lawrence P

    2014-02-01

    Two common laboratory extraction techniques were evaluated for routine use with the micro-colorimetric lipid determination method developed by Van Handel (1985) [2] and recently validated for small samples by Inouye and Lotufo (2006) [1]. With the accelerated solvent extraction method using chloroform:methanol solvent and the colorimetric lipid determination method, 28 of 30 samples had significant proportional bias (α=1%, determined using standard additions) and 1 of 30 samples had significant constant bias (α=1%, determined using Youden Blank measurements). With sonic extraction, 0 of 6 samples had significant proportional bias (α=1%) and 1 of 6 samples had significant constant bias (α=1%). These demonstrate that the accelerated solvent extraction method with chloroform:methanol solvent system creates an interference with the colorimetric assay method, and without accounting for the bias in the analysis, inaccurate measurements would be obtained. PMID:24401464

  16. A new experimental method for the accelerated characterization of composite materials

    NASA Technical Reports Server (NTRS)

    Yeow, Y. T.; Morris, D. H.; Brinson, H. F.

    1978-01-01

    The use of composite materials for a variety of practical structural applications is presented and the need for an accelerated characterization procedure is assessed. A new experimental and analytical method is presented which allows the prediction of long term properties from short term tests. Some preliminary experimental results are presented.

  17. An approach to accelerate iterative methods for solving nonlinear operator equations

    NASA Astrophysics Data System (ADS)

    Nedzhibov, Gyurhan H.

    2011-12-01

    We propose and analyze a generalization of a Steffensen-type acceleration method for extracting a locally unique solution of a nonlinear operator equation on a Banach space. To do so, we use a special choice of divided difference for operators. Convergence analysis and some applications of the obtained results are provided.
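
    In the scalar case, the classical Steffensen iteration that this work generalizes to operator equations can be written in a few lines. The sketch below is a plain Python illustration for a single nonlinear equation f(x) = 0 (the example function and starting point are arbitrary), not the Banach-space divided-difference construction analyzed in the paper.

        def steffensen(f, x0, tol=1e-12, maxit=50):
            """Scalar Steffensen iteration: x_{k+1} = x_k - f(x_k)**2 / (f(x_k + f(x_k)) - f(x_k)).
            Achieves Newton-like quadratic convergence without derivatives."""
            x = x0
            for k in range(maxit):
                fx = f(x)
                if abs(fx) < tol:
                    return x, k
                denom = f(x + fx) - fx           # divided-difference stand-in for f'(x) * f(x)
                if denom == 0.0:
                    break
                x = x - fx * fx / denom
            return x, maxit

        root, iters = steffensen(lambda x: x**3 - 2.0, 1.2)
        print(root, iters)    # converges to the cube root of 2 ~ 1.259921 in a few iterations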

  18. Means and method for the focusing and acceleration of parallel beams of charged particles

    DOEpatents

    Maschke, Alfred W.

    1983-07-05

    A novel apparatus and method for focussing beams of charged particles comprising planar arrays of electrostatic quadrupoles. The quadrupole arrays may comprise electrodes which are shared by two or more quadrupoles. Such quadrupole arrays are particularly adapted to providing strong focussing forces for high current, high brightness, beams of charged particles, said beams further comprising a plurality of parallel beams, or beamlets, each such beamlet being focussed by one quadrupole of the array. Such arrays may be incorporated in various devices wherein beams of charged particles are accelerated or transported, such as linear accelerators, klystron tubes, beam transport lines, etc.

  19. Acceleration algorithm for constant-statistics method applied to the nonuniformity correction of infrared sequences

    NASA Astrophysics Data System (ADS)

    Jara Chavez, A. G.; Torres Vicencio, F. O.

    2015-03-01

    Non-uniformity noise was, is, and will probably remain one of the most unwanted companions of infrared focal plane array (IRFPA) data. We present a higher-order filter, an enhancement of Constant Statistics (CS) theory, whose key advantage is its capacity to estimate the detector parameters and thus compensate for fixed pattern noise. This paper presents a technique to accelerate the convergence of CS (AACS: Acceleration Algorithm for Constant Statistics). The effectiveness of this method is demonstrated using simulated infrared video sequences and several real infrared video sequences obtained with two infrared cameras.
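
    The constant-statistics baseline that AACS accelerates is itself compact: assuming every detector sees, over time, signals with the same mean and standard deviation, the per-pixel offset and gain estimates are just the temporal mean and standard deviation of each pixel. The Python sketch below implements that baseline on a synthetic sequence (the gain/offset ranges are invented); the higher-order filtering and accelerated convergence of AACS are not reproduced.

        import numpy as np

        def constant_statistics_nuc(frames):
            """Constant-statistics non-uniformity correction.
            frames: array (T, H, W). Per-pixel offset = temporal mean, gain = temporal std."""
            offset = frames.mean(axis=0)
            gain = frames.std(axis=0)
            gain[gain == 0] = 1.0                       # guard against dead pixels
            return (frames - offset) / gain             # corrected sequence (zero mean, unit std per pixel)

        rng = np.random.default_rng(3)
        T, H, W = 200, 32, 32
        scene = rng.uniform(0.0, 1.0, size=(T, 1, 1)) * np.ones((T, H, W))   # spatially flat, time-varying scene
        gain_true = rng.uniform(0.8, 1.2, size=(H, W))
        offset_true = rng.uniform(-0.1, 0.1, size=(H, W))
        raw = gain_true * scene + offset_true            # fixed-pattern noise applied per pixel
        corrected = constant_statistics_nuc(raw)
        print(np.std(corrected[0]))   # near zero: the fixed pattern has been removed from each frame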

  20. New Image Reconstruction Methods for Accelerated Quantitative Parameter Mapping and Magnetic Resonance Angiography

    NASA Astrophysics Data System (ADS)

    Velikina, J. V.; Samsonov, A. A.

    2016-02-01

    Advanced MRI techniques often require sampling in additional (non-spatial) dimensions, such as time or parametric dimensions, which significantly lengthens scan time. Our purpose was to develop novel iterative image reconstruction methods that reduce the amount of acquired data in such applications by using prior knowledge about the signal in the extra dimensions. Efforts were made to accelerate two applications, namely time-resolved contrast-enhanced MR angiography and T1 mapping. Our results demonstrate that significant acceleration (up to 27x) may be achieved using the proposed iterative reconstruction techniques.

  1. Diffusive Shock Acceleration Simulations: Comparison with Particle Methods and Bow Shock Measurements

    NASA Astrophysics Data System (ADS)

    Kang, Hyesung; Jones, T. W.

    1995-07-01

    Direct comparisons of diffusive particle acceleration numerical simulations have been made against Monte Carlo and hybrid plasma simulations by Ellison et al. (1993) and against observations at the Earth's bow shock presented by Ellison et al. (1990). Toward this end we have introduced a new numerical scheme for injection of cosmic-ray particles out of the thermal plasma, modeled by way of the diffusive scattering process itself; that is, the diffusion and acceleration across the shock front of particles out of the suprathermal tail of the Maxwellian distribution. Our simulations take two forms. First, we have solved numerically the time-dependent diffusion-advection equation for the high-energy (cosmic-ray) protons in one-dimensional quasi-parallel shocks. Dynamical feedback between the particles and thermal plasma is included. The proton fluxes on both sides of the shock derived from our method are consistent with those calculated by Ellison et al. (1993). A similar test has compared our methods to published measurements at the Earth's bow shock when the interplanetary magnetic field was almost parallel to the solar wind velocity (Ellison et al. 1990). Again our results are in good agreement. Second, the same shock conditions have been simulated with the two-fluid version of diffusive shock acceleration theory by adopting injection rates and the closure parameters inferred from the diffusion-advection equation calculations. The acceleration efficiency and the shock structure calculated with the two-fluid method are in good agreement with those computed with the diffusion-advection method. Thus, we find that all of these computational methods (diffusion-advection, two-fluid, Monte Carlo, and hybrid) are in substantial agreement on the issues they can simultaneously address, so that the essential physics of diffusive particle acceleration is adequately contained within each. This is despite the fact that each makes what appear to be very different assumptions or
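
    The test-particle essence of the methods being compared can be conveyed with a very small Monte Carlo toy: relativistic particles repeatedly cross a shock, gaining a fractional momentum 4(u1 - u2)/(3c) per cycle and escaping downstream with probability 4 u2/c per cycle, which yields the standard DSA power law with integral spectral index 3/(r - 1) for compression ratio r = u1/u2. The Python sketch below is only this textbook toy (flow speeds and particle counts are arbitrary), not any of the diffusion-advection, two-fluid, Monte Carlo, or hybrid codes compared in the paper.

        import numpy as np

        def dsa_toy_spectrum(u1, u2, n_particles=200000, c=1.0, seed=0):
            """Test-particle DSA toy: return final momenta in units of the injection momentum."""
            rng = np.random.default_rng(seed)
            gain = 4.0 * (u1 - u2) / (3.0 * c)      # mean fractional momentum gain per shock-crossing cycle
            p_esc = 4.0 * u2 / c                    # probability of escaping downstream per cycle
            n_cycles = rng.geometric(p_esc, size=n_particles) - 1   # cycles completed before escape
            return (1.0 + gain) ** n_cycles

        u1, u2 = 0.05, 0.0125                       # compression ratio r = 4 (strong shock)
        p = dsa_toy_spectrum(u1, u2)
        # Integral spectrum N(>p) ~ p**(-3/(r-1)) = p**(-1) for r = 4.
        edges = np.logspace(0.0, 2.0, 9)
        n_above = np.array([(p > e).sum() for e in edges], dtype=float)
        slope = np.polyfit(np.log(edges), np.log(n_above), 1)[0]
        print(slope)                                 # close to -1

    Changing r changes the fitted slope toward -3/(r - 1), which is the test-particle limit on which the compared codes agree.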

  2. An improved method for statistical analysis of raw accelerator mass spectrometry data

    SciTech Connect

    Gutjahr, A.; Phillips, F.; Kubik, P.W.; Elmore, D.

    1987-01-01

    Hierarchical statistical analysis is an appropriate method for statistical treatment of raw accelerator mass spectrometry (AMS) data. Using Monte Carlo simulations we show that this method yields more accurate estimates of isotope ratios and analytical uncertainty than the generally used propagation of errors approach. The hierarchical analysis is also useful in design of experiments because it can be used to identify sources of variability. 8 refs., 2 figs.
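
    A minimal sketch of the contrast between a two-level (hierarchical) treatment and naive propagation of errors is given below for simulated repeat-run ratio data; the variance components, run counts, and isotope-ratio scale are invented for illustration and do not reproduce the authors' analysis.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # simulated raw AMS-style data: n_runs runs, each with n_rep repeat ratio measurements
    true_ratio, sigma_between, sigma_within = 1.0e-12, 2e-14, 5e-14
    n_runs, n_rep = 10, 20
    run_means = true_ratio + rng.normal(0.0, sigma_between, n_runs)
    data = run_means[:, None] + rng.normal(0.0, sigma_within, (n_runs, n_rep))

    # two-level (hierarchical) treatment: separate within- and between-run variability
    run_avg = data.mean(axis=1)
    ms_between = n_rep * run_avg.var(ddof=1)              # estimates sigma_w^2 + n_rep*sigma_b^2
    grand_mean = run_avg.mean()
    se_hier = np.sqrt(ms_between / (n_runs * n_rep))      # uncertainty of the grand mean

    # naive propagation of errors: treat all observations as independent
    se_naive = data.std(ddof=1) / np.sqrt(data.size)

    print(grand_mean, se_hier, se_naive)                  # se_naive understates the uncertainty
    ```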

  3. A simplified spherical harmonic method for coupled electron-photon transport calculations

    SciTech Connect

    Josef, J.A.

    1997-12-01

    In this thesis the author has developed a simplified spherical harmonic method (SP{sub N} method) and associated efficient solution techniques for 2-D multigroup electron-photon transport calculations. The SP{sub N} method has never before been applied to charged-particle transport. He has performed, for the first time, a Fourier analysis of the source iteration scheme and the P{sub 1} diffusion synthetic acceleration (DSA) scheme applied to the 2-D SP{sub N} equations. The theoretical analyses indicate that the source iteration and P{sub 1} DSA schemes are as effective for the 2-D SP{sub N} equations as for the 1-D S{sub N} equations. In addition, he has applied an angular multigrid acceleration scheme, and computationally demonstrated that it performs as well for the 2-D SP{sub N} equations as for the 1-D S{sub N} equations. It has previously been shown for 1-D S{sub N} calculations that this scheme is much more effective than the DSA scheme when scattering is highly forward-peaked. The author has investigated the applicability of the SP{sub N} approximation to two different physical classes of problems: satellite electronics shielding from geomagnetically trapped electrons, and electron beam problems.

  4. The Diffusive Shock Acceleration Myth

    NASA Astrophysics Data System (ADS)

    Gloeckler, G.; Fisk, L. A.

    2012-12-01

    It is generally accepted that diffusive shock acceleration (DSA) is the dominant mechanism for particle acceleration at shocks. This is despite the overwhelming observational evidence that is contrary to predictions of DSA models. For example, our most recent survey of hourly-averaged, spin-averaged proton distribution functions around 61 locally observed shocks in 2001 at 1 AU found that in 21 cases no particles were accelerated. Spectral indices (γ) of suprathermal tails on the velocity distributions around the 40 shocks that did accelerate particles showed none of the DSA-predicted correlations of γ with either the shock compression ratio or the angle between the shock normal and the magnetic field. Here we will present ACE/SWICS observations of three sets of 72 consecutive one-hour averaged velocity distributions (in each of 8 SWICS spin sectors). Each set includes passage of one or more shocks or strong compression regions. All spectra were properly transformed to the solar wind frame using the detailed, updated SWICS forward model, taking into account the hourly-averaged directions of the solar wind flow, the magnetic field and the ACE spin axis (http://www.srl.caltech.edu/ACE/ASC/). The suprathermal tails are observed to be a combination of locally accelerated and remote tails. The local tails are power laws. The remote tails are also power laws with rollovers at higher energies. When local tails are weak (as is the case especially upstream of strong shocks or compression regions) the remote tails also have a rollover at low energies due to modulation (transport effects). Among our main findings are that (1) the spectral indices of both the local and remote tails are -5 within the uncertainties of the measurements, as predicted by our pump acceleration mechanism, and (2) the velocity distributions are anisotropic with the perpendicular (to the magnetic field) pressure greater than the parallel pressure.

  5. Skeleton-based OPC application for DSA full chip mask correction

    NASA Astrophysics Data System (ADS)

    Schneider, L.; Farys, V.; Serret, E.; Fenouillet-Beranger, C.

    2015-09-01

    Recent industrial results around directed self-assembly (DSA) of block copolymers (BCP) have demonstrated the high potential of this technique [1-2]. Its main advantage is cost reduction thanks to a reduced number of lithographic steps. Meanwhile, the associated correction for mask creation must account for the introduction of this new technique while maintaining a high level of accuracy and reliability. To create a VIA (Vertical Interconnect Layer) layer, graphoepitaxy DSA can be used. The technique relies on the creation of confinement guides in which the BCP separates into distinct regions; the resulting patterns are etched to obtain an ordered series of VIA contacts. The printing of the guiding pattern requires the use of classical lithography. Optical proximity correction (OPC) is applied to obtain the best-suited guiding pattern that matches a specific design target. In this study, an original approach for DSA full chip mask optical proximity correction based on a skeleton representation of a guiding pattern is proposed. The cost function for the OPC process is based on minimizing the Central Placement Error (CPE), defined as the difference between an ideal skeleton target and a skeleton generated from a guiding contour. The high performance of this approach for DSA OPC full chip correction and its ability to minimize variability error on via placement is demonstrated and reinforced by comparison with a rigorous model. Finally, this skeleton approach is highlighted as an appropriate tool for design rule definition.

  6. ELECTROLYTIC DISINFECTION OF ESCHERICHIA COLI AND COLIFORM BACTERIA IN A BATCH CELL WITH DSA ELECTRODES

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Electrolytic treatment of dairy manure lagoon water using DSA electrodes is shown to produce a progressive disinfection of native coliforms and introduced E. coli. The disinfectant effect continues post-treatment for several minutes. To further examine the process, flow cytometry was employed to st...

  7. Contrast staining on CT after DSA in ischemic stroke patients progresses to infarction and rarely hemorrhages.

    PubMed

    Amans, Matthew R; Cooke, Daniel L; Vella, Maya; Dowd, Christopher F; Halbach, Van V; Higashida, Randall T; Hetts, Steven W

    2014-01-01

    Contrast staining of brain parenchyma identified on non-contrast CT performed after DSA in patients with acute ischemic stroke (AIS) is an incompletely understood imaging finding. We hypothesize contrast staining to be an indicator of brain injury and suspect the fate of involved parenchyma to be cerebral infarction. Seventeen years of AIS data were retrospectively analyzed for contrast staining. Charts were reviewed and outcomes of the stained parenchyma were identified on subsequent CT and MRI. Thirty-six of 67 patients meeting inclusion criteria (53.7%) had contrast staining on CT obtained within 72 hours after DSA. Brain parenchyma with contrast staining in patients with AIS most often evolved into cerebral infarction (81%). Hemorrhagic transformation was less likely in cases with staining compared with hemorrhagic transformation in the cohort that did not have contrast staining of the parenchyma on post DSA CT (6% versus 25%, respectively, OR 0.17, 95% CI 0.017 - 0.98, p = 0.02). Brain parenchyma with contrast staining on CT after DSA in AIS patients was likely to infarct and unlikely to hemorrhage. PMID:24556308

  8. Contrast Staining on CT after DSA in Ischemic Stroke Patients Progresses to Infarction and Rarely Hemorrhages

    PubMed Central

    Amans, Matthew R.; Cooke, Daniel L.; Vella, Maya; Dowd, Christopher F.; Halbach, Van V.; Higashida, Randall T.; Hetts, Steven W.

    2014-01-01

    Summary Contrast staining of brain parenchyma identified on non-contrast CT performed after DSA in patients with acute ischemic stroke (AIS) is an incompletely understood imaging finding. We hypothesize contrast staining to be an indicator of brain injury and suspect the fate of involved parenchyma to be cerebral infarction. Seventeen years of AIS data were retrospectively analyzed for contrast staining. Charts were reviewed and outcomes of the stained parenchyma were identified on subsequent CT and MRI. Thirty-six of 67 patients meeting inclusion criteria (53.7%) had contrast staining on CT obtained within 72 hours after DSA. Brain parenchyma with contrast staining in patients with AIS most often evolved into cerebral infarction (81%). Hemorrhagic transformation was less likely in cases with staining compared with hemorrhagic transformation in the cohort that did not have contrast staining of the parenchyma on post DSA CT (6% versus 25%, respectively, OR 0.17, 95% CI 0.017 – 0.98, p = 0.02). Brain parenchyma with contrast staining on CT after DSA in AIS patients was likely to infarct and unlikely to hemorrhage. PMID:24556308

  9. 34 CFR 367.11 - What assurances must a DSA include in its application?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 34 Education 2 2012-07-01 2012-07-01 false What assurances must a DSA include in its application? 367.11 Section 367.11 Education Regulations of the Offices of the Department of Education (Continued) OFFICE OF SPECIAL EDUCATION AND REHABILITATIVE SERVICES, DEPARTMENT OF EDUCATION INDEPENDENT LIVING SERVICES FOR OLDER INDIVIDUALS WHO ARE...

  10. 34 CFR 367.10 - How does a designated State agency (DSA) apply for an award?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 34 Education 2 2011-07-01 2010-07-01 true How does a designated State agency (DSA) apply for an award? 367.10 Section 367.10 Education Regulations of the Offices of the Department of Education... LIVING SERVICES FOR OLDER INDIVIDUALS WHO ARE BLIND What Are the Application Requirements? § 367.10...

  11. Centrifugal accelerator, system and method for removing unwanted layers from a surface

    DOEpatents

    Foster, Christopher A.; Fisher, Paul W.

    1995-01-01

    A cryoblasting process having a centrifugal accelerator for accelerating frozen pellets of argon or carbon dioxide toward a target area utilizes an accelerator throw wheel designed to induce, during operation, the creation of a low-friction gas bearing within internal passages of the wheel which would otherwise retard acceleration of the pellets as they move through the passages. An associated system and method for removing paint from a surface with cryoblasting techniques involves the treating, such as a preheating, of the painted surface to soften the paint prior to the impacting of frozen pellets thereagainst to increase the rate of paint removal. A system and method for producing large quantities of frozen pellets from a liquid material, such as liquid argon or carbon dioxide, for use in a cryoblasting process utilizes a chamber into which the liquid material is introduced in the form of a jet which disintegrates into droplets. A non-condensible gas, such as inert helium or air, is injected into the chamber at a controlled rate so that the droplets freeze into bodies of relatively high density.

  12. An improved method to accurately calibrate the gantry angle indicators of the radiotherapy linear accelerators

    NASA Astrophysics Data System (ADS)

    Chang, Liyun; Ho, Sheng-Yow; Du, Yi-Chun; Lin, Chih-Ming; Chen, Tainsong

    2007-06-01

    The calibration of the gantry angle indicator is an important and basic quality assurance (QA) item for the radiotherapy linear accelerator. In this study, we propose a new and practical method, which uses only the digital level, V-film, and general solid phantoms. By taking the star shot only, we can accurately calculate the true gantry angle according to the geometry of the film setup. The results on our machine showed that the gantry angle was shifted by -0.11° compared with the digital indicator, and the standard deviation was within 0.05°. This method can also be used for the simulator. In conclusion, this proposed method could be adopted as an annual QA item for mechanical QA of the accelerator.

  13. The three-cubic method: An optional online robot joint trajectory generator under velocity, acceleration, and wandering constraints

    SciTech Connect

    Tondu, B.; Bazaz, S.A.

    1999-09-01

    An original method called the three-cubic method is proposed to generate online robot joint trajectories interpolating given position points with associated velocities. The method is based on an acceleration profile composed of three cubic polynomial segments, which ensure zero acceleration at each intermediate point. Velocity and acceleration continuity is obtained, and this three-cubic combination allows an analytical solution to the minimum-time trajectory problem under maximum velocity and acceleration constraints. Possible wandering is detected and can be overcome. Furthermore, the analytical solution to the minimum-time trajectory problem leads to an online trajectory computation.

  14. A Monte Carlo Synthetic-Acceleration Method for Solving the Thermal Radiation Diffusion Equation

    SciTech Connect

    Evans, Thomas M; Mosher, Scott W; Slattery, Stuart

    2014-01-01

    We present a novel synthetic-acceleration-based Monte Carlo method for solving the equilibrium thermal radiation diffusion equation in three dimensions. The algorithm performance is compared against traditional solution techniques using a Marshak benchmark problem and a more complex multiple-material problem. Our results show that our Monte Carlo method is not only an effective solver for sparse matrix systems but also performs competitively with deterministic methods, including preconditioned Conjugate Gradient, while producing numerically identical results. We also discuss various aspects of preconditioning the method and its general applicability to broader classes of problems.
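
    As background for the sparse-matrix claim above, the sketch below implements the classical Ulam-von Neumann random-walk estimator for a linear system recast as a fixed point x = Hx + b; it is only the basic Monte Carlo kernel on which solvers of this kind build, not the authors' synthetic-acceleration algorithm, and the splitting, walk counts, and test matrix are assumptions.

    ```python
    import numpy as np

    def mc_linear_solve(H, b, n_walks=2000, rng=None):
        """Ulam-von Neumann Monte Carlo estimate of x solving x = H x + b.

        Assumes sum_j |H[i, j]| < 1 for every row, so random walks terminate with
        probability one.  Walks are absorbed with probability 1 - sum_j |H[i, j]|.
        """
        rng = rng or np.random.default_rng()
        n = len(b)
        absH = np.abs(H)
        row_sum = absH.sum(axis=1)
        x = np.zeros(n)
        for i in range(n):
            total = 0.0
            for _ in range(n_walks):
                state, w = i, 1.0
                while True:
                    total += w * b[state]                      # tally the source at each visit
                    if rng.random() > row_sum[state]:          # absorption: walk ends
                        break
                    nxt = rng.choice(n, p=absH[state] / row_sum[state])
                    w *= np.sign(H[state, nxt])                # weight carries the sign of H
                    state = nxt
            x[i] = total / n_walks
        return x

    # usage: Jacobi splitting of a diagonally dominant system into x = H x + b
    rng = np.random.default_rng(4)
    n = 20
    A = np.diag(np.full(n, 5.0)) + rng.uniform(-1.0, 1.0, (n, n)) / n
    rhs = rng.standard_normal(n)
    d = np.diag(A)
    H = -(A - np.diag(d)) / d[:, None]
    b = rhs / d
    x_mc = mc_linear_solve(H, b)
    ```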

  15. A STUDY ON APPLICABILITY OF GROUND RESPONSE ACCELERATION METHOD TO DEEP VERTICAL UNDERGROUND STRUCTURES

    NASA Astrophysics Data System (ADS)

    Matsumoto, Mai; Shiba, Yukio; Watanabe, Kazuaki

    This paper discusses the applicability of the ground response acceleration method to seismic analysis of deep vertical underground structures. To examine the applicability, an analysis of the relationships between the response of the ground and that of the shaft was conducted. The analysis showed that the vertical axial stress in the shaft did not correspond to the shear stress in the ground; accordingly, it was concluded that the axial stress is not evaluated correctly by the existing method. Therefore, to extend the applicability of the method, ground responses correlated with the axial stress were analyzed and a new method using these ground responses is proposed.

  16. A Monte Carlo synthetic-acceleration method for solving the thermal radiation diffusion equation

    SciTech Connect

    Evans, Thomas M.; Mosher, Scott W.; Slattery, Stuart R.; Hamilton, Steven P.

    2014-02-01

    We present a novel synthetic-acceleration-based Monte Carlo method for solving the equilibrium thermal radiation diffusion equation in three spatial dimensions. The algorithm performance is compared against traditional solution techniques using a Marshak benchmark problem and a more complex multiple material problem. Our results show that our Monte Carlo method is an effective solver for sparse matrix systems. For solutions converged to the same tolerance, it performs competitively with deterministic methods including preconditioned conjugate gradient and GMRES. We also discuss various aspects of preconditioning the method and its general applicability to broader classes of problems.

  17. Ant colony method to control variance reduction techniques in the Monte Carlo simulation of clinical electron linear accelerators

    NASA Astrophysics Data System (ADS)

    García-Pareja, S.; Vilches, M.; Lallena, A. M.

    2007-09-01

    The ant colony method is used to control the application of variance reduction techniques to the simulation of clinical electron linear accelerators used in cancer therapy. In particular, splitting and Russian roulette, two standard variance reduction methods, are considered. The approach can be applied to any accelerator in a straightforward way and, in addition, permits investigation of the "hot" regions of the accelerator, information that is essential for developing a source model for this therapy tool.

  18. Scatter correction of vessel dropout behind highly attenuating structures in 4D-DSA

    NASA Astrophysics Data System (ADS)

    Hermus, James; Mistretta, Charles; Szczykutowicz, Timothy P.

    2015-03-01

    In Computed Tomographic (CT) image reconstruction for four-dimensional digital subtraction angiography (4D-DSA), loss of vessel contrast has been observed behind highly attenuating anatomy, such as large contrast-filled aneurysms. Although this typically occurs only in a limited range of projection angles, the observed contrast time course can be altered. In this work we propose an algorithm to correct for highly attenuating anatomy within the fill projection data, i.e., aneurysms. The algorithm uses a 3D-SA volume to create a correction volume that is multiplied by the 4D-DSA volume in order to correct for signal dropout within the 4D-DSA volume. The algorithm was designed to correct for highly attenuating material in the fill volume only; however, with alterations to a single step of the algorithm, artifacts due to highly attenuating materials in the mask volume (i.e., dental implants) can be mitigated as well. We successfully applied our algorithm to a case of vessel dropout due to the presence of a large attenuating aneurysm. The performance was assessed visually: the affected vessel no longer dropped out on corrected 4D-DSA time frames. The correction was quantified by plotting the signal intensity along the vessel. Our analysis demonstrated that the correction does not alter vessel signal values outside of the vessel dropout region but does increase the vessel values within the dropout region, as expected. We have demonstrated that this correction algorithm acts to correct vessel dropout in areas with highly attenuating materials.
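
    A heavily simplified sketch of the multiplicative correction step described above is shown below; the threshold, boost factor, and the way the correction volume is derived from the 3D-SA reconstruction are placeholders, not the published algorithm.

    ```python
    import numpy as np

    def correct_vessel_dropout(dsa_4d, sa_3d, atten_threshold=0.8, boost=1.5):
        """Multiply 4D-DSA time frames by a correction volume derived from the 3D-SA
        reconstruction (illustrative sketch only; threshold and boost are placeholders).

        dsa_4d : (T, Z, Y, X) time-resolved DSA volume showing dropout
        sa_3d  : (Z, Y, X) static subtracted volume used to locate dense anatomy
        """
        correction = np.ones(sa_3d.shape)
        correction[sa_3d > atten_threshold] = boost   # compensate voxels shadowed by dense anatomy
        return dsa_4d * correction[None, ...]         # broadcast the correction over time
    ```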

  19. Toward sub-20nm pitch Fin patterning and integration with DSA

    NASA Astrophysics Data System (ADS)

    Sayan, Safak; Marzook, Taisir; Chan, BT; Vandenbroeck, Nadia; Singh, Arjun; Laidler, David; Sanchez, Efrain A.; Leray, Philippe; R. Delgadillo, Paulina; Gronheid, Roel; Vandenberghe, Geert; Clark, William; Juncker, Aurelie

    2016-03-01

    Directed Self Assembly (DSA) has gained increased momentum in recent years as a cost-effective means for extending lithography to sub-30nm pitch, primarily presenting itself as an alternative to mainstream 193i pitch division approaches such as SADP and SAQP. Towards these goals, IMEC has excelled at understanding and implementing directed self-assembly based on PS-b-PMMA block co-polymers (BCPs) using LiNe flow [1]. These efforts increase the understanding of how block copolymers might be implemented as part of HVM compatible DSA integration schemes. In recent contributions, we have proposed and successfully demonstrated two state-of-the-art CMOS process flows which employed DSA based on the PS-b-PMMA, LiNe flow at IMEC (pitch = 28 nm) to form FinFET arrays via both a `cut-last' and `cut-first' approach [2-4]. Therein, we described the relevant film stacks (hard mask and STI stacks) to achieve robust patterning and pattern transfer into IMEC's FEOL device film stacks. We also described some of the pattern placement and overlay challenges associated with these two strategies. In this contribution, we will present materials and processes for FinFET patterning and integration towards sub-20 nm pitch technology nodes. This presents a noteworthy challenge for DSA using BCPs as the ultimate resolution for PS-b-PMMA may not achieve such dimensions. The emphasis will continue to be towards patterning approaches, wafer alignment strategies, the effects of DSA processing on wafer alignment and overlay.

  20. Proposed method for internal electron therapy based on high-intensity laser acceleration

    NASA Astrophysics Data System (ADS)

    Tepper, Michal; Barkai, Uri; Gannot, Israel

    2015-05-01

    Radiotherapy is one of the main methods to treat cancer. However, due to the propagation pattern of high-energy photons in tissue and their inability to discriminate between healthy and malignant tissues, healthy tissues may also be damaged, causing undesired side effects. A possible method for internal electron therapy, based on laser acceleration of electrons inside the patient's body, is suggested. In this method, an optical waveguide, optimized for high intensities, is used to transmit the laser radiation and accelerate electrons toward the tumor. The radiation profile can be manipulated in order to create a patient-specific radiation treatment profile by changing the laser characteristics. The propagation pattern of electrons in tissues minimizes the side effects caused to healthy tissues. A simulation was developed to demonstrate the use of this method, calculating the trajectories of the accelerated electrons as a function of laser properties. The simulation was validated by comparison to theory, showing a good fit for laser intensities of up to 2×10^20 W/cm^2, and was then used to calculate suggested treatment profiles for two tumor test cases (with and without penetration to the tumor). The results show that treatment profiles can be designed to cover the tumor area with minimal damage to adjacent tissues.

  1. Advanced treatment planning methods for efficient radiation therapy with laser accelerated proton and ion beams

    SciTech Connect

    Schell, Stefan; Wilkens, Jan J.

    2010-10-15

    Purpose: Laser plasma acceleration can potentially replace large and expensive cyclotrons or synchrotrons for radiotherapy with protons and ions. On the way toward a clinical implementation, various challenges such as the maximum obtainable energy still remain to be solved. In any case, laser accelerated particles exhibit differences compared to particles from conventional accelerators. They typically have a wide energy spread and the beam is extremely pulsed (i.e., quantized) due to the pulsed nature of the employed lasers. The energy spread leads to depth dose curves that do not show a pristine Bragg peak but a wide high dose area, making precise radiotherapy impossible without an additional energy selection system. Problems with the beam quantization include the limited repetition rate and the number of accelerated particles per laser shot. This number might be too low, which requires a high repetition rate, or it might be too high, which requires an additional fluence selection system to reduce the number of particles. Trying to use laser accelerated particles in a conventional way such as spot scanning leads to long treatment times and a high amount of secondary radiation produced when blocking unwanted particles. Methods: The authors present methods of beam delivery and treatment planning that are specifically adapted to laser accelerated particles. In general, it is not necessary to fully utilize the energy selection system to create monoenergetic beams for the whole treatment plan. Instead, within wide parts of the target volume, beams with broader energy spectra can be used to simultaneously cover multiple axially adjacent spots of a conventional dose delivery grid as applied in intensity modulated particle therapy. If one laser shot produces too many particles, they can be distributed over a wider area with the help of a scattering foil and a multileaf collimator to cover multiple lateral spot positions at the same time. These methods are called axial and

  2. Accelerated screening methods for determining chemical and thermal stability of refrigerant-lubricant mixtures, Part 1: Method assessment. Final report

    SciTech Connect

    Kauffman, R.

    1993-04-01

    This report presents results of a literature search performed to identify analytical techniques suitable for accelerated screening of chemical and thermal stabilities of different refrigerant/lubricant combinations. The search focused on three areas: chemical stability data of HFC-134a and other non-chlorine containing refrigerant candidates; chemical stability data of CFC-12, HCFC-22, and other chlorine containing refrigerants; and accelerated thermal analytical techniques. Literature was catalogued and an abstract was written for each journal article or technical report. Several thermal analytical techniques were identified as candidates for development into accelerated screening tests. They are easy to operate, are common to most laboratories, and are expected to produce refrigerant/lubricant stability evaluations which agree with the current stability test ANSI/ASHRAE (American National Standards Institute/American Society of Heating, Refrigerating, and Air-Conditioning Engineers) Standard 97-1989, "Sealed Glass Tube Method to Test the Chemical Stability of Material for Use Within Refrigerant Systems." Initial results of one accelerated thermal analytical candidate, DTA, are presented for CFC-12/mineral oil and HCFC-22/mineral oil combinations. Also described is research which will be performed in Part II to optimize the selected candidate.

  3. A multipole accelerated desingularized method for computing nonlinear wave forces on bodies

    SciTech Connect

    Scorpio, S.M.; Beck, R.F.

    1996-12-31

    Nonlinear wave forces on offshore structures are investigated. The fluid motion is computed using an Euler-Lagrange time-domain approach. Nonlinear free-surface boundary conditions are stepped forward in time using an accurate and stable integration technique. The field equation with mixed boundary conditions that results at each time step is solved at N nodes using a desingularized boundary integral method with multipole acceleration. Multipole-accelerated solutions require O(N) computational effort and computer storage, while conventional solvers require O(N{sup 2}) effort and storage for an iterative solution and O(N{sup 3}) effort for direct inversion of the influence matrix. These methods are applied to the three-dimensional problem of wave diffraction by a vertical cylinder.

  4. Accelerated stress testing of thin film solar cells: Development of test methods and preliminary results

    NASA Technical Reports Server (NTRS)

    Lathrop, J. W.

    1985-01-01

    If thin film cells are to be considered a viable option for terrestrial power generation, their reliability attributes will need to be explored and confidence in their stability obtained through accelerated testing. Development of a thin film accelerated test program will be more difficult than was the case for crystalline cells because of the monolithic construction of the cells. Specially constructed test samples will need to be fabricated, requiring commitment to the concept of accelerated testing by the manufacturers. A new test schedule appropriate to thin film cells will need to be developed, which will be different from that used in connection with crystalline cells. Preliminary work has been started to seek thin film schedule variations of two of the simplest tests: unbiased temperature and unbiased temperature-humidity. Still to be examined are tests which involve the passage of current during temperature and/or humidity stress, either by biasing in the forward (or reverse) direction or by the application of light during stress. Investigation of these current (voltage) accelerated tests will involve development of methods of reliably contacting the thin conductive films during stress.

  5. Implicit Monte Carlo diffusion - an acceleration method for Monte Carlo time dependent radiative transfer simulations

    SciTech Connect

    Gentile, N A

    2000-10-01

    We present a method for accelerating time-dependent Monte Carlo radiative transfer calculations by using a discretization of the diffusion equation to calculate probabilities that are used to advance particles in regions with small mean free path. The method is demonstrated on problems with 1- and 2-dimensional orthogonal grids. It results in decreases in run time of more than an order of magnitude on these problems, while producing answers with accuracy comparable to pure IMC simulations. We call the method Implicit Monte Carlo Diffusion, which we abbreviate IMD.

  6. An accelerated method of computing nonlinear processes in instruments with longitudinal interaction

    NASA Astrophysics Data System (ADS)

    Pikunov, V. M.; Prokopev, V. E.; Sandalov, A. N.

    1985-04-01

    The use of the reference particle method for investigating nonlinear processes in instruments with longitudinal interaction is considered in an attempt to accelerate the computation of these processes. It is demonstrated that, coupled with interpolation formulas based on Kotelnikov series, the method yields effective numerical algorithms in the framework of discrete models of electron flux. A comparison of the method with a disk model of an electron flux for the case of a multiresonator klystron was performed for klystron bunchers with 50 to 80-percent efficiency. It is concluded that the computation time was reduced by a factor of 3-10 while maintaining satisfactory accuracy.

  7. On the Use of Accelerated Aging Methods for Screening High Temperature Polymeric Composite Materials

    NASA Technical Reports Server (NTRS)

    Gates, Thomas S.; Grayson, Michael A.

    1999-01-01

    A rational approach to the problem of accelerated testing of high temperature polymeric composites is discussed. The methods provided are considered tools useful in the screening of new materials systems for long-term application to extreme environments that include elevated temperature, moisture, oxygen, and mechanical load. The need for reproducible mechanisms, indicator properties, and real-time data are outlined as well as the methodologies for specific aging mechanisms.

  8. Stability analysis of multigrid acceleration methods for the solution of partial differential equations

    NASA Technical Reports Server (NTRS)

    Fay, John F.

    1990-01-01

    A calculation is made of the stability of various relaxation schemes for the numerical solution of partial differential equations. A multigrid acceleration method is introduced, and its effects on stability are explored. A detailed stability analysis of a simple case is carried out and verified by numerical experiment. It is shown that the use of multigrids can speed convergence by several orders of magnitude without adversely affecting stability.
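
    The effect of a coarse-grid correction on a simple relaxation scheme can be illustrated with the two-grid cycle below for the 1-D Poisson problem; the weighted-Jacobi smoother, injection restriction, and direct coarse solve are illustrative choices, not the schemes analyzed in the paper.

    ```python
    import numpy as np

    def jacobi_sweep(u, f, h, n_sweeps=3, omega=2.0/3.0):
        """Weighted-Jacobi relaxation for -u'' = f with homogeneous Dirichlet ends."""
        for _ in range(n_sweeps):
            u[1:-1] = (1.0-omega)*u[1:-1] + omega*0.5*(u[:-2] + u[2:] + h*h*f[1:-1])
        return u

    def coarse_solve(rc, hc):
        """Direct tridiagonal solve of the coarse-grid problem -e'' = rc."""
        m = len(rc)
        A = (2.0*np.eye(m-2) - np.eye(m-2, k=1) - np.eye(m-2, k=-1)) / hc**2
        e = np.zeros(m)
        e[1:-1] = np.linalg.solve(A, rc[1:-1])
        return e

    def two_grid_cycle(u, f, h):
        """Pre-smooth, coarse-grid correction, post-smooth (one cycle)."""
        u = jacobi_sweep(u, f, h)
        r = np.zeros_like(u)
        r[1:-1] = f[1:-1] + (u[:-2] - 2.0*u[1:-1] + u[2:]) / h**2   # residual of -u'' = f
        ec = coarse_solve(r[::2].copy(), 2.0*h)                     # restrict by injection, solve
        e = np.zeros_like(u)
        e[::2] = ec
        e[1:-1:2] = 0.5*(ec[:-1] + ec[1:])                          # prolong by linear interpolation
        return jacobi_sweep(u + e, f, h)

    # usage: -u'' = pi^2 sin(pi x) on [0, 1], exact solution sin(pi x)
    n = 129
    h = 1.0 / (n - 1)
    x = np.linspace(0.0, 1.0, n)
    f = np.pi**2 * np.sin(np.pi * x)
    u = np.zeros(n)
    for _ in range(20):
        u = two_grid_cycle(u, f, h)
    ```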

  9. Subspace accelerated inexact Newton method for large scale wave functions calculations in Density Functional Theory

    SciTech Connect

    Fattebert, J

    2008-07-29

    We describe an iterative algorithm to solve electronic structure problems in Density Functional Theory. The approach is presented as a Subspace Accelerated Inexact Newton (SAIN) solver for the non-linear Kohn-Sham equations. It is related to a class of iterative algorithms known as RMM-DIIS in the electronic structure community. The method is illustrated with examples of real applications using a finite difference discretization and multigrid preconditioning.

  10. Diffusion-synthetic acceleration given anisotropic scattering, general quadratures, and multidimensions

    SciTech Connect

    Adams, M.L. ); Wareing, T.A. )

    1993-01-01

    We study diffusion-synthetic acceleration (DSA) for within-group scattering iterations in discrete ordinates calculations. We consider analytic (not spatially discretized) equations in Cartesian coordinates with linearly anisotropic scattering. We place no restrictions on the discrete ordinates quadrature set. We assume an infinite homogeneous medium. Our main results are as follows: 1. DSA is unstable in two dimensions (2D) and three dimensions (3D), given forward-peaked scattering. It can be stabilized by taking extra transport sweeps each iteration. 2. Standard DSA is unstable, given any quadrature set that does not correctly integrate linear functions of angle. 3. Relative to one dimension (1D), DSA's performance is degraded in 2D and 3D.

  11. A hybrid data acquisition system for magnetic measurements of accelerator magnets

    SciTech Connect

    Wang, X.; Hafalia, R.; Joseph, J.; Lizarazo, J.; Martchevsky, M.; Sabbi, G. L.

    2011-06-03

    A hybrid data acquisition system was developed for magnetic measurement of superconducting accelerator magnets at LBNL. It consists of a National Instruments dynamic signal acquisition (DSA) card and two Metrolab fast digital integrator (FDI) cards. The DSA card records the induced voltage signals from the rotating probe, while the FDI cards record the flux increment integrated over a certain angular step. This allows comparison of the measurements performed with the two cards. In this note, the setup and test of the system are summarized. With a probe rotating at a speed of 0.5 Hz, the multipole coefficients of two magnets were measured with the hybrid system. The coefficients from the DSA and FDI cards agree with each other, indicating that the numerical integration of the raw voltage acquired by the DSA card is comparable to the performance of the FDI card in the current measurement setup.

  12. Novel methods in the Particle-In-Cell accelerator Code-Framework Warp

    SciTech Connect

    Vay, J-L; Grote, D. P.; Cohen, R. H.; Friedman, A.

    2012-12-26

    The Particle-In-Cell (PIC) Code-Framework Warp is being developed by the Heavy Ion Fusion Science Virtual National Laboratory (HIFS-VNL) to guide the development of accelerators that can deliver beams suitable for high-energy density experiments and implosion of inertial fusion capsules. It is also applied in various areas outside the Heavy Ion Fusion program to the study and design of existing and next-generation high-energy accelerators, including, for example, the study of electron cloud effects and laser wakefield acceleration. This study presents an overview of Warp's capabilities, summarizing recent original numerical methods that were developed by the HIFS-VNL (including PIC with adaptive mesh refinement, a large-timestep 'drift-Lorentz' mover for arbitrarily magnetized species, a relativistic Lorentz-invariant leapfrog particle pusher, simulations in Lorentz-boosted frames, and an electromagnetic solver with tunable numerical dispersion and efficient stride-based digital filtering), with special emphasis on the description of the mesh refinement capability. In addition, selected examples of the applications of the methods to the abovementioned fields are given.

  13. Influence of tungsten fiber's slow drift on the measurement of G with angular acceleration method

    NASA Astrophysics Data System (ADS)

    Luo, Jie; Wu, Wei-Huang; Xue, Chao; Shao, Cheng-Gang; Zhan, Wen-Ze; Wu, Jun-Fei; Milyukov, Vadim

    2016-08-01

    In the measurement of the gravitational constant G with angular acceleration method, the equilibrium position of torsion pendulum with tungsten fiber undergoes a linear slow drift, which results in a quadratic slow drift on the angular velocity of the torsion balance turntable under feedback control unit. The accurate amplitude determination of the useful angular acceleration signal with known frequency is biased by the linear slow drift and the coupling effect of the drifting equilibrium position and the room fixed gravitational background signal. We calculate the influences of the linear slow drift and the complex coupling effect on the value of G, respectively. The result shows that the bias of the linear slow drift on G is 7 ppm, and the influence of the coupling effect is less than 1 ppm.
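
    The way an unmodelled slow drift biases the amplitude estimate of a known-frequency signal can be reproduced with the toy least-squares fit below; the frequency, drift rate, and noise level are invented for illustration and are unrelated to the actual G apparatus.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    t = np.linspace(0.0, 2000.0, 4000)                  # seconds
    omega = 2.0 * np.pi * 0.01                          # known signal frequency (assumed)
    a_true, drift_rate = 1.0, 1e-4
    y = a_true*np.sin(omega*t) + drift_rate*t**2 + 0.05*rng.standard_normal(t.size)

    def fit_amplitude(t, y, omega, with_drift):
        """Least-squares amplitude of the component at omega, with or without drift terms."""
        cols = [np.sin(omega*t), np.cos(omega*t), np.ones_like(t)]
        if with_drift:
            cols += [t, t**2]                           # model the slow drift explicitly
        coef, *_ = np.linalg.lstsq(np.column_stack(cols), y, rcond=None)
        return np.hypot(coef[0], coef[1])

    print(fit_amplitude(t, y, omega, False))   # biased by the unmodelled drift
    print(fit_amplitude(t, y, omega, True))    # close to a_true
    ```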

  14. Influence of tungsten fiber's slow drift on the measurement of G with angular acceleration method.

    PubMed

    Luo, Jie; Wu, Wei-Huang; Xue, Chao; Shao, Cheng-Gang; Zhan, Wen-Ze; Wu, Jun-Fei; Milyukov, Vadim

    2016-08-01

    In the measurement of the gravitational constant G with angular acceleration method, the equilibrium position of torsion pendulum with tungsten fiber undergoes a linear slow drift, which results in a quadratic slow drift on the angular velocity of the torsion balance turntable under feedback control unit. The accurate amplitude determination of the useful angular acceleration signal with known frequency is biased by the linear slow drift and the coupling effect of the drifting equilibrium position and the room fixed gravitational background signal. We calculate the influences of the linear slow drift and the complex coupling effect on the value of G, respectively. The result shows that the bias of the linear slow drift on G is 7 ppm, and the influence of the coupling effect is less than 1 ppm. PMID:27587137

  15. GPU-Accelerated Finite Element Method for Modelling Light Transport in Diffuse Optical Tomography

    PubMed Central

    Schweiger, Martin

    2011-01-01

    We introduce a GPU-accelerated finite element forward solver for the computation of light transport in scattering media. The forward model is the computationally most expensive component of iterative methods for image reconstruction in diffuse optical tomography, and performance optimisation of the forward solver is therefore crucial for improving the efficiency of the solution of the inverse problem. The GPU forward solver uses a CUDA implementation that evaluates on the graphics hardware the sparse linear system arising in the finite element formulation of the diffusion equation. We present solutions for both time-domain and frequency-domain problems. A comparison with a CPU-based implementation shows significant performance gains of the graphics accelerated solution, with improvements of approximately a factor of 10 for double-precision computations, and factors beyond 20 for single-precision computations. The gains are also shown to be dependent on the mesh complexity, where the largest gains are achieved for high mesh resolutions. PMID:22013431
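
    The sparse linear system at the heart of such a forward solver can be illustrated with the 1-D finite-element assembly and iterative solve below (CPU, SciPy); a GPU solver of the kind described above evaluates the same sort of sparse operations on graphics hardware. The optical coefficients, grid, and source placement are illustrative assumptions.

    ```python
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import cg

    # 1-D linear-element assembly of -(kappa*u')' + mua*u = f, a steady diffusion-type
    # problem of the kind underlying DOT forward models (illustrative coefficients)
    n, L, kappa, mua = 200, 1.0, 0.03, 0.1
    h = L / n

    main = np.full(n + 1, 2.0*kappa/h + 2.0*mua*h/3.0)
    off = np.full(n, -kappa/h + mua*h/6.0)
    main[[0, -1]] = kappa/h + mua*h/3.0                 # boundary rows (natural boundary condition)
    A = sp.diags([off, main, off], [-1, 0, 1], format='csr')

    f = np.zeros(n + 1)
    f[n // 2] = 1.0                                     # unit point source at the central node

    u, info = cg(A, f)                                  # the sparse solve a GPU kernel would accelerate
    ```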

  16. MULTILEVEL ACCELERATION OF STOCHASTIC COLLOCATION METHODS FOR PDE WITH RANDOM INPUT DATA

    SciTech Connect

    Webster, Clayton G; Jantsch, Peter A; Teckentrup, Aretha L; Gunzburger, Max D

    2013-01-01

    Stochastic Collocation (SC) methods for stochastic partial differential equations (SPDEs) suffer from the curse of dimensionality, whereby increases in the stochastic dimension cause an explosion of computational effort. To combat these challenges, multilevel approximation methods seek to decrease computational complexity by balancing spatial and stochastic discretization errors. As a form of variance reduction, multilevel techniques have been successfully applied to Monte Carlo (MC) methods, but may be extended to accelerate other methods for SPDEs in which the stochastic and spatial degrees of freedom are decoupled. This article presents general convergence and computational complexity analysis of a multilevel method for SPDEs, demonstrating its advantages with regard to standard, single level approximation. The numerical results will highlight conditions under which multilevel sparse grid SC is preferable to the more traditional MC and SC approaches.

  17. A chain-of-states acceleration method for the efficient location of minimum energy paths

    SciTech Connect

    Hernández, E. R. Herrero, C. P.; Soler, J. M.

    2015-11-14

    We describe a robust and efficient chain-of-states method for computing Minimum Energy Paths (MEPs) associated to barrier-crossing events in poly-atomic systems, which we call the acceleration method. The path is parametrized in terms of a continuous variable t ∈ [0, 1] that plays the role of time. In contrast to previous chain-of-states algorithms such as the nudged elastic band or string methods, where the positions of the states in the chain are taken as variational parameters in the search for the MEP, our strategy is to formulate the problem in terms of the second derivatives of the coordinates with respect to t, i.e., the state accelerations. We show this to result in a very simple and efficient method for determining the MEP. We describe the application of the method to a series of test cases, including two low-dimensional problems and the Stone-Wales transformation in C{sub 60}.

  18. 450mm etch process development and process chamber evaluation using 193i DSA guided pattern

    NASA Astrophysics Data System (ADS)

    Collison, Wenli; Lin, Yii-Cheng; Dunn, Shannon; Takikawa, Hiroaki; Paris, James; Chen, Lucy; Detrick, Troy; Belen, Jun; Stojakovic, George; Goss, Michael; Fish, Norman; Park, Minjoon; Sun, Chih-Ming; Kelling, Mark; Lin, Pinyen

    2016-03-01

    In the Global 450mm Equipment Development Consortium (G450C), a 193i guided directed self-assembly (DSA) pattern has been used to create structures at the 14nm node and below. The first guided DSA patterned wafer was ready for etch process development within a month of the G450C's first 193i patterned wafer availability with one litho pass. Etch processes were scaled up from 300mm to 450mm for a 28nm pitch STI stack and a 40nm pitch M1 BEOL stack. The effects of various process parameters were investigated to fine tune each process. Overall process window has been checked and compared. Excellent process stability results were shown for current etch chambers.

  19. Method for the Accelerated Testing of the Durability of a Construction Binder using the Arrhenius Approach

    NASA Astrophysics Data System (ADS)

    Fridrichová, Marcela; Dvořák, Karel; Gazdič, Dominik

    2016-03-01

    The single most reliable indicator of a material's durability is its performance in long-term tests, which cannot always be carried out due to a limited time budget. The second option is to perform some kind of accelerated durability test. The aim of the work described in this article was to develop a method for the accelerated durability testing of binders. It was decided that the Arrhenius equation approach and the theory of chemical reaction kinetics would be applied in this case. The degradation process has been simplified to a single quantifiable parameter, compressive strength. A model hydraulic binder based on fluidised bed combustion ash (FBC ash) was chosen as the test subject for the development of the method. The model binder and its hydration products were tested by high-temperature X-ray diffraction analysis. The main hydration product of this binder was ettringite. Due to the thermodynamic instability of this mineral, it was possible to verify the proposed method via long-term testing. In order to accelerate the chemical reactions in the binder, four combinations of two temperatures (65 and 85°C) and two relative humidities (14 and 100%) were used. The upper temperature limit was chosen on the basis of the high-temperature X-ray results for the ettringite's decomposition. The calculation formulae for the accelerated durability tests were derived from data on the decrease in compressive strength under the four above-mentioned combinations of conditions. The mineralogical composition of the binder after degradation was also described: the final degradation product was gypsum under dry conditions and monosulphate under wet conditions. The validity of the method and formula was subsequently verified by means of long-term testing. A very good correspondence between the calculated and real values was achieved; the deviation of these values did not exceed 5%. The designed and verified method
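
    A small worked example of the Arrhenius scaling underlying such accelerated tests is given below; the activation energy, service temperature, and test temperature are placeholder values, not parameters from this study.

    ```python
    import numpy as np

    R = 8.314  # gas constant, J/(mol*K)

    def acceleration_factor(ea, t_use, t_test):
        """Arrhenius acceleration factor between a service and an elevated test temperature.

        ea     : activation energy of the degradation reaction in J/mol (assumed)
        t_use  : service temperature in K
        t_test : accelerated test temperature in K
        """
        return np.exp(ea / R * (1.0/t_use - 1.0/t_test))

    # e.g. an assumed Ea of 60 kJ/mol, service at 20 degC, test at 85 degC
    af = acceleration_factor(60e3, 293.15, 358.15)      # roughly 90x faster degradation
    print(af, af / 12.0)                                # one test month ~ af/12 years of service
    ```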

  20. Vibration-Based Method Developed to Detect Cracks in Rotors During Acceleration Through Resonance

    NASA Technical Reports Server (NTRS)

    Sawicki, Jerzy T.; Baaklini, George Y.; Gyekenyesi, Andrew L.

    2004-01-01

    In recent years, there has been an increasing interest in developing rotating machinery shaft crack-detection methodologies and online techniques. Shaft crack problems present a significant safety and loss hazard in nearly every application of modern turbomachinery. In many cases, the rotors of modern machines are rapidly accelerated from rest to operating speed, to reduce the excessive vibrations at the critical speeds. The vibration monitoring during startup or shutdown has been receiving growing attention (ref. 1), especially for machines such as aircraft engines, which are subjected to frequent starts and stops, as well as high speeds and acceleration rates. It has been recognized that the presence of angular acceleration strongly affects the rotor's maximum response to unbalance and the speed at which it occurs. Unfortunately, conventional nondestructive evaluation (NDE) methods have unacceptable limits in terms of their application for online crack detection. Some of these techniques are time consuming and inconvenient for turbomachinery service testing. Almost all of these techniques require that the vicinity of the damage be known in advance, and they can provide only local information, with no indication of the structural strength at a component or system level. In addition, the effectiveness of these experimental techniques is affected by the high measurement noise levels existing in complex turbomachine structures. Therefore, the use of vibration monitoring along with vibration analysis has been receiving increasing attention.

  1. Microwave-accelerated method for ultra-rapid extraction of Neisseria gonorrhoeae DNA for downstream detection.

    PubMed

    Melendez, Johan H; Santaus, Tonya M; Brinsley, Gregory; Kiang, Daniel; Mali, Buddha; Hardick, Justin; Gaydos, Charlotte A; Geddes, Chris D

    2016-10-01

    Nucleic acid-based detection of gonorrhea infections typically requires a two-step process involving isolation of the nucleic acid, followed by detection of the genomic target, often involving polymerase chain reaction (PCR)-based approaches. In an effort to improve on current detection approaches, we have developed a unique two-step microwave-accelerated approach for rapid extraction and detection of Neisseria gonorrhoeae (gonorrhea, GC) DNA. Our approach is based on the use of highly focused microwave radiation to rapidly lyse bacterial cells, release, and subsequently fragment microbial DNA. The DNA target is then detected by a process known as microwave-accelerated metal-enhanced fluorescence (MAMEF), an ultra-sensitive direct DNA detection analytical technique. In the current study, we show that highly focused microwaves at 2.45 GHz, using 12.3-mm gold film equilateral triangles, are able to rapidly lyse bacterial cells and fragment DNA in a time- and microwave power-dependent manner. Detection of the extracted DNA can be performed by MAMEF, without the need for DNA amplification, in less than 10 min total time or by other PCR-based approaches. Collectively, the use of a microwave-accelerated method for the release and detection of DNA represents a significant step forward toward the development of a point-of-care (POC) platform for detection of gonorrhea infections. PMID:27325503

  2. Preliminary determination of Newtonian gravitational constant with angular acceleration feedback method

    PubMed Central

    Xue, Chao; Quan, Li-Di; Yang, Shan-Qing; Wang, Bing-Peng; Wu, Jun-Fei; Shao, Cheng-Gang; Tu, Liang-Cheng; Milyukov, Vadim; Luo, Jun

    2014-01-01

    This paper describes the preliminary measurement of the Newtonian gravitational constant G with the angular acceleration feedback method at HUST. The apparatus has been built, and preliminary measurement performed, to test all aspects of the experimental design, particularly the feedback function, which was recently discussed in detail by Quan et al. The experimental results show that the residual twist angle of the torsion pendulum at the signal frequency introduces 0.4 ppm to the value of G. The relative uncertainty of the angular acceleration of the turntable is approximately 100 ppm, which is mainly limited by the stability of the apparatus. Therefore, the experiment has been modified with three features: (i) the height of the apparatus is reduced almost by half, (ii) the aluminium shelves were replaced with shelves made from ultra-low expansion material and (iii) a perfect compensation of the laboratory-fixed gravitational background will be carried out. With these improvements, the angular acceleration is expected to be determined with an uncertainty of better than 10 ppm, and a reliable value of G with 20 ppm or below will be obtained in the near future. PMID:25201996

  3. Krylov iterative methods and synthetic acceleration for transport in binary statistical media

    SciTech Connect

    Fichtl, Erin D; Warsa, James S; Prinja, Anil K

    2008-01-01

    In particle transport applications there are numerous physical constructs in which heterogeneities are randomly distributed. The quantity of interest in these problems is the ensemble average of the flux, or the average of the flux over all possible material 'realizations.' The Levermore-Pomraning closure assumes Markovian mixing statistics and allows a closed, coupled system of equations to be written for the ensemble averages of the flux in each material. Generally, binary statistical mixtures are considered in which there are two (homogeneous) materials and corresponding coupled equations. The solution process is iterative, but convergence may be slow as either or both materials approach the diffusion and/or atomic mix limits. A three-part acceleration scheme is devised to expedite convergence, particularly in the atomic mix-diffusion limit where computation is extremely slow. The iteration is first divided into a series of 'inner' material and source iterations to attenuate the diffusion and atomic mix error modes separately. Secondly, atomic mix synthetic acceleration is applied to the inner material iteration and S{sup 2} synthetic acceleration to the inner source iterations to offset the cost of doing several inner iterations per outer iteration. Finally, a Krylov iterative solver is wrapped around each iteration, inner and outer, to further expedite convergence. A spectral analysis is conducted and iteration counts and computing cost for the new two-step scheme are compared against those for a simple one-step iteration, to which a Krylov iterative method can also be applied.

  4. Anderson acceleration of the Jacobi iterative method: An efficient alternative to Krylov methods for large, sparse linear systems

    NASA Astrophysics Data System (ADS)

    Pratapa, Phanisri P.; Suryanarayana, Phanish; Pask, John E.

    2016-02-01

    We employ Anderson extrapolation to accelerate the classical Jacobi iterative method for large, sparse linear systems. Specifically, we utilize extrapolation at periodic intervals within the Jacobi iteration to develop the Alternating Anderson-Jacobi (AAJ) method. We verify the accuracy and efficacy of AAJ in a range of test cases, including nonsymmetric systems of equations. We demonstrate that AAJ possesses a favorable scaling with system size that is accompanied by a small prefactor, even in the absence of a preconditioner. In particular, we show that AAJ is able to accelerate the classical Jacobi iteration by over four orders of magnitude, with speed-ups that increase as the system gets larger. Moreover, we find that AAJ significantly outperforms the Generalized Minimal Residual (GMRES) method in the range of problems considered here, with the relative performance again improving with size of the system. Overall, the proposed method represents a simple yet efficient technique that is particularly attractive for large-scale parallel solutions of linear systems of equations.
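
    A minimal sketch of the alternating Anderson-Jacobi idea, weighted-Jacobi steps with an Anderson extrapolation every p-th iteration, is given below; the history depth, relaxation weight, and test matrix are illustrative assumptions rather than the authors' implementation.

    ```python
    import numpy as np

    def alternating_anderson_jacobi(A, b, m=5, p=4, omega=0.5, tol=1e-8, maxit=1000):
        """Weighted-Jacobi iteration with Anderson extrapolation every p-th step (sketch).

        m     : number of previous iterates kept for the extrapolation
        p     : extrapolation is applied every p steps, plain Jacobi otherwise
        omega : weighted-Jacobi relaxation parameter
        """
        d_inv = 1.0 / A.diagonal()
        x = np.zeros_like(b)
        xs, fs = [], []                                    # iterate and residual history
        for k in range(maxit):
            r = b - A @ x
            if np.linalg.norm(r) < tol * np.linalg.norm(b):
                return x, k
            f = d_inv * r                                  # Jacobi-preconditioned residual
            xs.append(x.copy()); fs.append(f.copy())
            xs, fs = xs[-m:], fs[-m:]
            if (k + 1) % p == 0 and len(fs) > 1:
                dF = np.column_stack([fs[i+1] - fs[i] for i in range(len(fs)-1)])
                dX = np.column_stack([xs[i+1] - xs[i] for i in range(len(xs)-1)])
                gamma, *_ = np.linalg.lstsq(dF, f, rcond=None)
                x = x + omega*f - (dX + omega*dF) @ gamma  # Anderson extrapolation step
            else:
                x = x + omega*f                            # plain weighted-Jacobi step
        return x, maxit

    # usage on a small diagonally dominant test system
    rng = np.random.default_rng(3)
    n = 500
    A = np.diag(np.full(n, 4.0)) + rng.uniform(-1.0, 1.0, (n, n)) / n
    b = rng.standard_normal(n)
    x, iters = alternating_anderson_jacobi(A, b)
    ```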

  5. A hybrid computing approach to accelerating the multiple scattering theory based ab initio methods

    NASA Astrophysics Data System (ADS)

    Wang, Yang; Stocks, G. Malcolm

    2014-03-01

    The multiple scattering theory method, also known as the Korringa-Kohn-Rostoker (KKR) method, is considered an elegant approach to ab initio electronic structure calculations for solids. Its convenience in accessing the one-electron Green function has led to the development of the locally self-consistent multiple scattering (LSMS) method, a linear-scaling ab initio method that allows electronic structure calculations for complex structures requiring tens of thousands of atoms in the unit cell. It is one of the few applications that have demonstrated petascale computing capability. In this presentation, we discuss our recent efforts in developing a hybrid computing approach for accelerating the full-potential electronic structure calculation. Specifically, in the framework of our existing LSMS code in FORTRAN 90/95, we exploit the many-core resources of GPGPU accelerators by implementing the compute-intensive functions (the calculation of multiple scattering matrices and the single-site solutions) in CUDA, and move those computational tasks to the GPGPUs when they are available. We explain in detail our approach to the CUDA programming and the code structure, and show the speed-up of the new hybrid code by comparing its performance on CPU/GPGPU and on CPU only. The work was supported in part by the Center for Defect Physics, a DOE-BES Energy Frontier Research Center.

  6. Accelerated CMR using zonal, parallel and prior knowledge driven imaging methods

    PubMed Central

    Kozerke, Sebastian; Plein, Sven

    2008-01-01

    Accelerated imaging is highly relevant for many CMR applications as competing constraints with respect to spatiotemporal resolution and tolerable scan times are frequently posed. Three approaches, all involving data undersampling to increase scan efficiencies, are discussed in this review. Zonal imaging can be considered a niche but nevertheless has found application in coronary imaging and CMR flow measurements. Current work on parallel-transmit systems is expected to revive the interest in zonal imaging techniques. The second and main approach to speeding up CMR sequences has been parallel imaging. A wide range of CMR applications has benefited from parallel imaging with reduction factors of two to three routinely applied for functional assessment, perfusion, viability and coronary imaging. Large coil arrays, as are becoming increasingly available, are expected to support reduction factors greater than three to four in particular in combination with 3D imaging protocols. Despite these prospects, theoretical work has indicated fundamental limits of coil encoding at clinically available magnetic field strengths. In that respect, alternative approaches exploiting prior knowledge about the object being imaged as such or jointly with parallel imaging have attracted considerable attention. Five to eight-fold scan accelerations in cine and dynamic CMR applications have been reported and image quality has been found to be favorable relative to using parallel imaging alone. With all acceleration techniques, careful consideration of the limits and the trade-off between acceleration and occurrence of artifacts that may arise if these limits are breached is required. In parallel imaging the spatially varying noise has to be considered when measuring contrast- and signal-to-noise ratios. Also, temporal fidelity in images reconstructed with prior knowledge driven methods has to be studied carefully. PMID:18534005

  7. Modified Anderson Method for Accelerating 3D-RISM Calculations Using Graphics Processing Unit.

    PubMed

    Maruyama, Yutaka; Hirata, Fumio

    2012-09-11

    A fast algorithm is proposed to solve the three-dimensional reference interaction site model (3D-RISM) theory on a graphics processing unit (GPU). 3D-RISM theory is a powerful tool for investigating biomolecular processes in solution; however, such calculations are often both memory-intensive and time-consuming. We sought to accelerate these calculations using GPUs, but to work around the problem of limited memory size in GPUs, we modified the less memory-intensive "Anderson method" to give faster convergence of 3D-RISM calculations. Using this method on a Tesla C2070 GPU, we reduced the total computational time by a factor of 8 (1.4 times from the modified Anderson method and 5.7 times from the GPU), compared to calculations on an Intel Xeon machine (eight cores, 3.33 GHz) with the conventional method. PMID:26605714

  8. Comparison of sampling methods for radiocarbon dating of carbonyls in air samples via accelerator mass spectrometry

    NASA Astrophysics Data System (ADS)

    Schindler, Matthias; Kretschmer, Wolfgang; Scharf, Andreas; Tschekalinskij, Alexander

    2016-05-01

    Three new methods to sample and prepare various carbonyl compounds for radiocarbon measurements were developed and tested. Two of these procedures utilized the Strecker synthetic method to form amino acids from carbonyl compounds with either sodium cyanide or trimethylsilyl cyanide. The third procedure used semicarbazide to form crystalline semicarbazones with the carbonyl compounds. The resulting amino acids and semicarbazones were then separated and purified using thin layer chromatography. The separated compounds were then combusted to CO2 and reduced to graphite to determine 14C content by accelerator mass spectrometry (AMS). All of these methods were also compared with the standard carbonyl compound sampling method wherein a compound is derivatized with 2,4-dinitrophenylhydrazine and then separated by high-performance liquid chromatography (HPLC).

  9. Accelerated molecular dynamics and equation-free methods for simulating diffusion in solids.

    SciTech Connect

    Deng, Jie; Zimmerman, Jonathan A.; Thompson, Aidan Patrick; Brown, William Michael; Plimpton, Steven James; Zhou, Xiao Wang; Wagner, Gregory John; Erickson, Lindsay Crowl

    2011-09-01

    Many of the most important and hardest-to-solve problems related to the synthesis, performance, and aging of materials involve diffusion through the material or along surfaces and interfaces. These diffusion processes are driven by motions at the atomic scale, but traditional atomistic simulation methods such as molecular dynamics are limited to very short timescales on the order of the atomic vibration period (less than a picosecond), while macroscale diffusion takes place over timescales many orders of magnitude larger. We have completed an LDRD project with the goal of developing and implementing new simulation tools to overcome this timescale problem. In particular, we have focused on two main classes of methods: accelerated molecular dynamics methods that seek to extend the timescale attainable in atomistic simulations, and so-called 'equation-free' methods that combine a fine scale atomistic description of a system with a slower, coarse scale description in order to project the system forward over long times.

  10. Challenges and opportunities in applying grapho-epitaxy DSA lithography to metal cut and contact/via applications

    NASA Astrophysics Data System (ADS)

    Ma, Yuansheng; Torres, J. Andres; Fenger, Germain; Granik, Yuri; Ryckaert, Julien; Vanderberghe, Geert; Bekaert, Joost; Word, James

    2014-10-01

    Directed self-assembly has become a very attractive technology for fin and contact/via applications. Some of the issues related to pattern placement error, defectivity rates and process integration are actively being addressed by the industry and have not faced significant roadblocks for contact-hole applications. While many DSA applications have been proposed, deploying DSA for fin structures competes in cost and variability control with SADP techniques. Given the 1D nature of fin structures, it is difficult to control fin placement with accuracy better than 4 nm at 3 sigma. In addition, a second patterning step is needed to remove the unwanted sections of the grating, leaving behind only the required fin structures, which limits its adoption. On the other hand, DSA applied to contact/via holes has demonstrated low defectivity rates due to improved polymerization and processing techniques, as well as adequate control to reduce the placement error due to thermal fluctuations during the annealing and cylinder formation process. For that reason, the results from contact/via layers can extend to metal cut layer printing with DSA grapho-epitaxy. In this paper, we show that DSA provides a promising cost-effective solution for technology scaling by reducing the mask count from N to N-1. It is shown that pxOPC may provide better guiding patterns than the conventional approach. In addition, practical grouping rules for DSA should avoid 2D grouping, avoid putting more than 3 features in a group with different pitches, and avoid grouping features with different sizes. Our recommendations to designers for DSA technology are the following: if the design is to be decomposed with 2 or more DSA masks, then the design rules should be set up so that, first, the minimum pitch matches the DSA material's own natural pitch and, second, for each DSA mask, singletons and bar-like grouping shapes with the DSA natural pitch are used as much as possible.

  11. An improved method for calibrating the gantry angles of linear accelerators.

    PubMed

    Higgins, Kyle; Treas, Jared; Jones, Andrew; Fallahian, Naz Afarin; Simpson, David

    2013-11-01

    Linear particle accelerators (linacs) are widely used in radiotherapy procedures; therefore, accurate calibrations of gantry angles must be performed to prevent the exposure of healthy tissue to excessive radiation. One of the common methods for calibrating these angles is the spirit level method. In this study, a new technique for calibrating the gantry angle of a linear accelerator was examined. A cubic phantom was constructed of Styrofoam with small lead balls embedded at specific locations in the foam block. Several x-ray images were taken of this phantom at various gantry angles using an electronic portal imaging device on the linac. The deviations of the gantry angles were determined by analyzing the images using a customized computer program written in ImageJ (National Institutes of Health). Gantry angles of 0, 90, 180, and 270 degrees were chosen and the results of both calibration methods were compared for each of these angles. The results revealed that the image method was more precise than the spirit level method. For the image method, the averages of the measured values for the selected angles of 0, 90, 180, and 270 degrees were found to be -0.086 ± 0.011, 90.018 ± 0.011, 180.178 ± 0.015, and 269.972 ± 0.006 degrees, respectively. The corresponding average values using the spirit level method were 0.2 ± 0.03, 90.2 ± 0.04, 180.1 ± 0.01, and 269.9 ± 0.05 degrees, respectively. Based on these findings, the new method was shown to be a reliable technique for calibrating the gantry angle. PMID:24077078
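    To illustrate the image-analysis step, the sketch below infers a gantry angle from the imaged centroids of two lead balls that are assumed to lie along a single vertical line in the phantom, so any tilt of the line joining them in the portal image is read as the angular deviation. The geometry, sign convention and pixel coordinates are hypothetical assumptions, not the authors' actual ImageJ procedure.

        import numpy as np

        def gantry_angle_from_markers(p_top, p_bottom, nominal_deg):
            """Estimate the gantry angle from two ball centroids (col, row) in the portal image.

            Assumes the two balls are physically aligned with the beam's vertical axis, so the
            imaged line joining them is a pure image column when there is no deviation.
            """
            vx = p_bottom[0] - p_top[0]                 # horizontal offset in pixels
            vy = p_bottom[1] - p_top[1]                 # image rows increase downward
            tilt_deg = np.degrees(np.arctan2(vx, vy))   # 0 when the balls align with a column
            return nominal_deg + tilt_deg

        # hypothetical centroids at a nominal 90-degree gantry angle
        print(gantry_angle_from_markers((512.4, 200.0), (511.9, 600.0), 90.0))   # ~89.93 deg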

  12. Detecting sea-level hazards: Simple regression-based methods for calculating the acceleration of sea level

    USGS Publications Warehouse

    Doran, Kara S.; Howd, Peter A.; Sallenger, Asbury H., Jr.

    2015-01-01

    Recent studies, and most of their predecessors, use tide gage data to quantify SL acceleration, A_SL(t). In the current study, three techniques were used to calculate acceleration from tide gage data, and of those examined, it was determined that the two techniques based on sliding a regression window through the time series are more robust compared to the technique that fits a single quadratic form to the entire time series, particularly if there is temporal variation in the magnitude of the acceleration. The single-fit quadratic regression method has been the most commonly used technique in determining acceleration in tide gage data. The inability of the single-fit method to account for time-varying acceleration may explain some of the inconsistent findings between investigators. Properly quantifying A_SL(t) from field measurements is of particular importance in evaluating numerical models of past, present, and future SLR resulting from anticipated climate change.
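    The sliding-window idea can be illustrated directly: fit a quadratic to the sea-level record inside a window centred on each time, and read the acceleration off the quadratic coefficient. The sketch below is a minimal NumPy version with an assumed 30-year window and synthetic data; it is not the authors' exact estimator or their treatment of gaps and noise.

        import numpy as np

        def sliding_acceleration(t, sl, window_years=30.0, min_points=10):
            """Time-varying sea-level acceleration from a sliding quadratic regression."""
            t = np.asarray(t, float)
            sl = np.asarray(sl, float)
            half = window_years / 2.0
            centers, accel = [], []
            for tc in t:
                mask = (t >= tc - half) & (t <= tc + half)
                if mask.sum() < min_points:
                    continue
                # fit sl = c0 + c1*(t - tc) + c2*(t - tc)^2; acceleration = 2*c2
                c2, c1, c0 = np.polyfit(t[mask] - tc, sl[mask], 2)
                centers.append(tc)
                accel.append(2.0 * c2)
            return np.array(centers), np.array(accel)

        # synthetic record: 2 mm/yr rise plus a constant acceleration of 0.01 mm/yr^2
        years = np.arange(1900, 2015, 1.0 / 12.0)
        sea_level = 2.0 * (years - 1900) + 0.5 * 0.01 * (years - 1900) ** 2
        print(sliding_acceleration(years, sea_level)[1].mean())   # ~0.01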

  13. Practical method and device for enhancing pulse contrast ratio for lasers and electron accelerators

    DOEpatents

    Zhang, Shukui; Wilson, Guy

    2014-09-23

    An apparatus and method for enhancing pulse contrast ratios for drive lasers and electron accelerators. The invention comprises a mechanical dual-shutter system wherein the shutters are placed sequentially in series in a laser beam path. Each shutter of the dual-shutter system has an individually operated trigger for opening and closing the shutter. Because the triggers are operated individually, the delay between opening and closing the first shutter and opening and closing the second shutter is variable, providing variable differential time windows and enhancement of the pulse contrast ratio.

  14. Feedback control of torsion balance in measurement of gravitational constant G with angular acceleration method

    SciTech Connect

    Quan, Li-Di; Xue, Chao; Shao, Cheng-Gang; Yang, Shan-Qing; Tu, Liang-Cheng; Luo, Jun; Wang, Yong-Ji

    2014-01-15

    The performance of the feedback control system is of central importance in the measurement of Newton's gravitational constant G with the angular acceleration method. In this paper, a PID (Proportion-Integration-Differentiation) feedback loop is discussed in detail. Experimental results show that, with the feedback control activated, the twist angle of the torsion balance is limited to 7.3×10^{-7} rad/√Hz at the signal frequency of 2 mHz, which contributes a 0.4 ppm uncertainty to the G value.
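    For readers unfamiliar with the control loop being analysed, the sketch below shows a generic discrete PID controller driving a crude first-order plant toward a zero twist angle. The gains, time step and plant model are purely illustrative assumptions; they are not the parameters of the actual torsion-balance servo.

        class PID:
            """Minimal discrete PID (Proportion-Integration-Differentiation) controller."""

            def __init__(self, kp, ki, kd, dt):
                self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
                self.integral = 0.0
                self.prev_error = 0.0

            def update(self, setpoint, measurement):
                error = setpoint - measurement
                self.integral += error * self.dt
                derivative = (error - self.prev_error) / self.dt
                self.prev_error = error
                # control output, e.g. the feedback torque applied to the balance
                return self.kp * error + self.ki * self.integral + self.kd * derivative

        # drive a toy first-order plant toward zero twist angle
        pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.01)
        angle = 1e-6
        for _ in range(1000):
            torque = pid.update(0.0, angle)
            angle += (torque - 0.2 * angle) * 0.01   # crude plant model, illustrative only
        print(angle)   # driven close to zero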

  15. Accelerated projected steepest descent method for nonlinear inverse problems with sparsity constraints

    NASA Astrophysics Data System (ADS)

    Teschke, Gerd; Borries, Claudia

    2010-02-01

    This paper is concerned with the construction of an iterative algorithm to solve nonlinear inverse problems with an ℓ1 constraint on x. One extensively studied method to obtain a solution of such an ℓ1-penalized problem is iterative soft-thresholding. Regrettably, such iteration schemes are computationally very intensive. A subtle alternative to iterative soft-thresholding is the projected gradient method that was quite recently proposed by Daubechies et al (2008 J. Fourier Anal. Appl. 14 764-92). The authors have shown that the proposed scheme is indeed numerically much thriftier. However, its current applicability is limited to linear inverse problems. In this paper we provide an extension of this approach to nonlinear problems. Adequately adapting the conditions on the (variable) thresholding parameter to the nonlinear nature, we can prove convergence in norm for this projected gradient method, with and without acceleration. A numerical verification is given in the context of nonlinear and non-ideal sensing. For this particular recovery problem we can achieve an impressive numerical performance (when comparing it to non-accelerated procedures).
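    As background to the schemes being compared, the sketch below shows plain iterative soft-thresholding (ISTA) for the linear problem min_x 0.5||Ax - y||^2 + lam*||x||_1, the baseline that both the projected gradient method and the paper's nonlinear extension aim to outperform. The step size, penalty and toy sparse-recovery problem are assumptions for illustration only.

        import numpy as np

        def soft_threshold(x, tau):
            """Component-wise soft-thresholding operator S_tau(x)."""
            return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

        def ista(A, y, lam, n_iter=500):
            """Iterative soft-thresholding for the l1-penalized linear inverse problem."""
            L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the data-fit gradient
            x = np.zeros(A.shape[1])
            for _ in range(n_iter):
                grad = A.T @ (A @ x - y)
                x = soft_threshold(x - grad / L, lam / L)
            return x

        # small synthetic sparse-recovery example
        rng = np.random.default_rng(0)
        A = rng.standard_normal((40, 100))
        x_true = np.zeros(100)
        x_true[[3, 20, 77]] = [1.0, -2.0, 0.5]
        x_hat = ista(A, A @ x_true, lam=0.05)
        print(np.flatnonzero(np.abs(x_hat) > 0.1))   # support should concentrate on {3, 20, 77}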

  16. A broadband fast multipole accelerated boundary element method for the three dimensional Helmholtz equation.

    PubMed

    Gumerov, Nail A; Duraiswami, Ramani

    2009-01-01

    The development of a fast multipole method (FMM) accelerated iterative solution of the boundary element method (BEM) for the Helmholtz equations in three dimensions is described. The FMM for the Helmholtz equation is significantly different for problems with low and high kD (where k is the wavenumber and D the domain size), and for large problems the method must be switched between levels of the hierarchy. The BEM requires several approximate computations (numerical quadrature, approximations of the boundary shapes using elements), and these errors must be balanced against approximations introduced by the FMM and the convergence criterion for iterative solution. These different errors must all be chosen in a way that, on the one hand, excess work is not done and, on the other, that the error achieved by the overall computation is acceptable. Details of translation operators for low and high kD, choice of representations, and BEM quadrature schemes, all consistent with these approximations, are described. A novel preconditioner using a low accuracy FMM accelerated solver as a right preconditioner is also described. Results of the developed solvers for large boundary value problems with 0.0001 ≲ kD ≲ 500 are presented and shown to perform close to theoretical expectations. PMID:19173406

  17. The effect of acceleration versus displacement methods on steady-state boundary forces

    NASA Technical Reports Server (NTRS)

    Mcghee, D. S.

    1992-01-01

    This study describes the acceleration and displacement methods for use in the recovery of coupled system boundary forces. A simple two-degree-of-freedom system has been used for illustration. The effect of the choice of method for use with indeterminate or over-constrained boundaries has been investigated. The study specifically looked at results from a simple two-dimensional beam problem using both methods. Much work has been done on the effect of Craig-Bampton modal truncation on system displacements and forces; however, little work has been done on system-level modal truncation. The findings of this study indicate that the effect of this system-level truncation is significant. This may be particularly true for the 35 Hz system cutoff frequency that is required by the space shuttle. From this study's findings, recommendations for areas of study with space shuttle payload systems are made.

  18. Acceleration of ensemble machine learning methods using many-core devices

    NASA Astrophysics Data System (ADS)

    Tamerus, A.; Washbrook, A.; Wyeth, D.

    2015-12-01

    We present a case study into the acceleration of ensemble machine learning methods using many-core devices in collaboration with Toshiba Medical Visualisation Systems Europe (TMVSE). The adoption of GPUs to execute a key algorithm in the classification of medical image data was shown to significantly reduce overall processing time. Using a representative dataset and pre-trained decision trees as input we will demonstrate how the decision forest classification method can be mapped onto the GPU data processing model. It was found that a GPU-based version of the decision forest method resulted in over 138 times speed-up over a single-threaded CPU implementation with further improvements possible. The same GPU-based software was then directly applied to a suitably formed dataset to benefit supervised learning techniques applied in High Energy Physics (HEP) with similar improvements in performance.

  19. The Krylov accelerated SIMPLE(R) method for flow problems in industrial furnaces

    NASA Astrophysics Data System (ADS)

    Vuik, C.; Saghir, A.; Boerstoel, G. P.

    2000-08-01

    Numerical modeling of the melting and combustion process is an important tool in gaining understanding of the physical and chemical phenomena that occur in a gas- or oil-fired glass-melting furnace. The incompressible Navier-Stokes equations are used to model the gas flow in the furnace. The discrete Navier-Stokes equations are solved by the SIMPLE(R) pressure-correction method. In these applications, many SIMPLE(R) iterations are necessary to obtain an accurate solution. In this paper, Krylov accelerated versions are proposed: GCR-SIMPLE(R). The properties of these methods are investigated for a simple two-dimensional flow. Thereafter, the efficiencies of the methods are compared for three-dimensional flows in industrial glass-melting furnaces.

  20. Groundwater modelling: Towards an estimation of the acceleration factors of iterative methods via an analysis of the transmissivity spatial variability

    NASA Astrophysics Data System (ADS)

    Benali, Abdelmajid

    2013-01-01

    When running a groundwater flow model, a recurrent and seemingly subsidiary question arises at the starting step of computations: what value of the acceleration parameter do we need to optimize the numerical solver? A method is proposed to provide a practical estimate of the optimal acceleration parameter via a geostatistical analysis of the spatial variability of the logarithm of the transmissivity field Y. The background of the approach is illustrated on the successive over-relaxation method (SOR) used either as a stand-alone solver, as a symmetric preconditioner (SSOR) to the conjugate gradient method, or as a smoother in multigrid methods. It shows that this optimum acceleration factor is a function of the standard deviation and the correlation length of Y. This provides an easy-to-use heuristic procedure to estimate the acceleration factors, which could even be incorporated in the software package. A case study illustrates the steps needed to perform this estimation.
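    The role of the acceleration (relaxation) factor is easy to see in a bare-bones SOR solver: the iteration count is very sensitive to omega, which is exactly the parameter the paper proposes to estimate from the statistics of log-transmissivity. The sketch below is a generic dense-matrix SOR on a 1D Laplacian test problem, purely for illustration; a groundwater code would apply the same update to its sparse flow equations.

        import numpy as np

        def sor(A, b, omega, tol=1e-10, max_iter=5000):
            """Successive over-relaxation for Ax = b with relaxation factor omega."""
            n = len(b)
            x = np.zeros(n)
            for it in range(max_iter):
                x_old = x.copy()
                for i in range(n):
                    sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
                    x[i] = (1.0 - omega) * x_old[i] + omega * (b[i] - sigma) / A[i, i]
                if np.linalg.norm(x - x_old, np.inf) < tol:
                    return x, it + 1
            return x, max_iter

        # 1D Laplacian test problem: convergence speed depends strongly on omega
        n = 50
        A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
        b = np.ones(n)
        for omega in (1.0, 1.5, 1.88):
            print(omega, sor(A, b, omega)[1], "iterations")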

  1. Application of the vector ɛ and ρ extrapolation methods in the acceleration of the Richardson-Lucy algorithm

    NASA Astrophysics Data System (ADS)

    Gao, Qiong; Jiang, Zongfu; Liao, Tianhe; Song, Kaiyang

    2010-11-01

    The vector ɛ and ρ extrapolation methods are applied in accelerating the convergence of the Richardson-Lucy (R-L) algorithm and its damped version. The theory and implementation are discussed in detail, and relevant numerical results are given, including the cases of noise-free images and images corrupted by Poisson noise. The results show that the vector ɛ and ρ extrapolations of order 9 can speed up the convergence quite efficiently, and the ρ(9) method is more powerful than the ɛ(9) method for noisy degraded images. The extra computational burden due to the extrapolation is limited, and is well paid back by the accelerated convergence. The performances of these two methods are compared with the well-known automatic acceleration method. For noise-free degraded images, the vector ɛ(9) and ρ(9) methods are more stable than the automatic method. For noisy degraded images, the damped R-L algorithm accelerated by the vector ρ(9) or automatic methods is more powerful, and the instability of the automatic method is restrained by the damping strategy. We attribute the instability of the automatic method in accelerating the normal R-L algorithm to numerical noise arising from its frequent application during the run.
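    For reference, the sketch below is the plain (unaccelerated) Richardson-Lucy iteration using FFT-based circular convolution; the vector ɛ and ρ extrapolations discussed in the abstract operate on sequences of iterates like these. The PSF is assumed to be stored with its peak at index 0, and the 1D toy example is an illustration only.

        import numpy as np

        def richardson_lucy(image, psf, n_iter=50):
            """Basic Richardson-Lucy deconvolution with circular (FFT) convolution."""
            psf = psf / psf.sum()
            otf = np.fft.rfftn(psf, s=image.shape)

            def conv(a, kernel_fft):
                return np.fft.irfftn(np.fft.rfftn(a, s=image.shape) * kernel_fft, s=image.shape)

            estimate = np.full(image.shape, image.mean(), dtype=float)
            for _ in range(n_iter):
                blurred = conv(estimate, otf)
                ratio = image / np.maximum(blurred, 1e-12)
                estimate *= conv(ratio, np.conj(otf))   # correlation with the (real) PSF
            return estimate

        # toy 1D example: blur three spikes with a wrapped Gaussian PSF, then deconvolve
        n = 128
        idx = np.arange(n)
        psf = np.exp(-0.5 * (np.minimum(idx, n - idx) / 2.0) ** 2)   # peak at index 0
        x_true = np.zeros(n)
        x_true[[30, 60, 90]] = [1.0, 2.0, 1.5]
        blurred = np.fft.irfft(np.fft.rfft(x_true) * np.fft.rfft(psf / psf.sum()), n)
        restored = richardson_lucy(blurred, psf, n_iter=200)
        print(restored[[30, 60, 90]])   # flux re-concentrates near the original spikes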

  2. Dental movement acceleration: Literature review by an alternative scientific evidence method

    PubMed Central

    Camacho, Angela Domínguez; Cujar, Sergio Andres Velásquez

    2014-01-01

    The aim of this study was to analyze the majority of publications using effective methods to speed up orthodontic treatment and determine which publications carry high evidence-based value. The literature published in PubMed from 1984 to 2013 was reviewed, in addition to well-known reports that were not classified under this database. To facilitate evidence-based decision making, guidelines such as the Consolidated Standards of Reporting Trials, Preferred Reporting Items for Systematic Reviews and Meta-Analyses, and Transparent Reporting of Evaluations with Non-randomized Designs checklist were used. The studies were initially divided into three groups: local application of cell mediators, physical stimuli, and techniques that took advantage of the regional acceleration phenomena. The articles were classified according to their level of evidence using an alternative method for orthodontic scientific article classification. 1a: Systematic Reviews (SR) of randomized clinical trials (RCTs), 1b: Individual RCT, 2a: SR of cohort studies, 2b: Individual cohort study, controlled clinical trials and low quality RCT, 3a: SR of case-control studies, 3b: Individual case-control study, low quality cohort study and short time following split mouth designs. 4: Case-series, low quality case-control study and non-systematic review, and 5: Expert opinion. The highest level of evidence for each group was: (1) local application of cell mediators: the highest level of evidence corresponds to a 3B level in Prostaglandins and Vitamin D; (2) physical stimuli: vibratory forces and low level laser irradiation have evidence level 2b, Electrical current is classified as 3b evidence-based level, Pulsed Electromagnetic Field is placed on the 4th level on the evidence scale; and (3) regional acceleration phenomena related techniques: for corticotomy the majority of the reports belong to level 4. Piezocision, dentoalveolar distraction, alveocentesis, monocortical tooth dislocation and ligament

  3. A precorrected-FFT method to accelerate the solution of the forward problem in magnetoencephalography

    NASA Astrophysics Data System (ADS)

    Tissari, Satu; Rahola, Jussi

    2003-02-01

    Accurate localization of brain activity recorded by magnetoencephalography (MEG) requires that the forward problem, i.e. the magnetic field caused by a dipolar source current in a homogeneous volume conductor, be solved precisely. We have used the Galerkin method with piecewise linear basis functions in the boundary element method to improve the solution of the forward problem. In addition, we have replaced the direct method, i.e. the LU decomposition, by a modern iterative method to solve the dense linear system of equations arising from the boundary element discretization. In this paper we describe a precorrected-FFT method which we have combined with the iterative method to accelerate the solution of the forward problem and to avoid the explicit formation of the dense coefficient matrix. For example, with a triangular mesh of 18000 triangles, the CPU time to solve the forward problem was decreased from 3.5 h to less than 5 min, and the computer memory requirements were decreased from 1.3 GB to 156 MB. The method makes it possible to quickly solve significantly larger problems on widely used workstations.

  4. A method of determining narrow energy spread electron beams from a laser plasma wakefield accelerator using undulator radiation

    SciTech Connect

    Gallacher, J. G.; Anania, M. P.; Brunetti, E.; Ersfeld, B.; Islam, M. R.; Reitsma, A. J. W.; Shanks, R. P.; Wiggins, S. M.; Jaroszynski, D. A.; Budde, F.; Debus, A.; Haupt, K.; Schwoerer, H.; Jaeckel, O.; Pfotenhauer, S.; Rohwer, E.; Schlenvoigt, H.-P.

    2009-09-15

    In this paper a new method of determining the energy spread of a relativistic electron beam from a laser-driven plasma wakefield accelerator by measuring radiation from an undulator is presented. This could be used to determine the beam characteristics of multi-GeV accelerators where conventional spectrometers are very large and cumbersome. Simultaneous measurement of the energy spectra of electrons from the wakefield accelerator in the 55-70 MeV range and the radiation spectra in the wavelength range of 700-900 nm of synchrotron radiation emitted from a 50-period undulator confirms a narrow energy spread for electrons accelerated over the dephasing distance, where beam loading leads to energy compression. Measured energy spreads of less than 1% indicate the potential of using a wakefield accelerator as a driver of future compact and brilliant ultrashort-pulse synchrotron sources and free-electron lasers that require high peak brightness beams.

  5. A new method of accelerated graph display in primary flight display based on FPGA

    NASA Astrophysics Data System (ADS)

    Kong, Quancun; Li, Chenggui; Zhang, Fengqing

    2006-11-01

    With the development of avionic technology, there is an increasing amount of information to be displayed on the Primary Flight Display (PFD) of the cockpit. Besides the higher requirement for accuracy, the reliability and real-time delivery of information must be ensured in some emergency situations. Therefore, it is important to further speed up graph generation and display. This paper describes a hardware-acceleration-based method to satisfy the higher requirements of the PFD for graph display. The new method is characterized by graphic layering, double-frame-buffer alternation and graphic synthesis, which greatly reduce the processor load and speed up graphic generation and display, thereby resolving the speed bottleneck in PFD graphic display.

  6. A GPU-accelerated adaptive discontinuous Galerkin method for level set equation

    NASA Astrophysics Data System (ADS)

    Karakus, A.; Warburton, T.; Aksel, M. H.; Sert, C.

    2016-01-01

    This paper presents a GPU-accelerated nodal discontinuous Galerkin method for the solution of two- and three-dimensional level set (LS) equation on unstructured adaptive meshes. Using adaptive mesh refinement, computations are localised mostly near the interface location to reduce the computational cost. Small global time step size resulting from the local adaptivity is avoided by local time-stepping based on a multi-rate Adams-Bashforth scheme. Platform independence of the solver is achieved with an extensible multi-threading programming API that allows runtime selection of different computing devices (GPU and CPU) and different threading interfaces (CUDA, OpenCL and OpenMP). Overall, a highly scalable, accurate and mass conservative numerical scheme that preserves the simplicity of LS formulation is obtained. Efficiency, performance and local high-order accuracy of the method are demonstrated through distinct numerical test cases.

  7. Proposition of an Accelerated Ageing Method for Natural Fibre/Polylactic Acid Composite

    NASA Astrophysics Data System (ADS)

    Zandvliet, Clio; Bandyopadhyay, N. R.; Ray, Dipa

    2015-10-01

    Natural fibre composites based on polylactic acid (PLA) are of special interest because they are made entirely from renewable resources and are biodegradable. Samples of jute/PLA composite and neat PLA made 6 years ago and kept on a shelf in a tropical climate show rapid ageing degradation. In this work, an accelerated ageing method for natural fibre/PLA composites is proposed and tested. Experiments were carried out with jute and flax fibre/PLA composites. The method was compared with the standard ISO 1037-06a. The residual flexural strength after the ageing test was compared with that of common wood-based panels and of naturally aged samples prepared 6 years ago.

  8. A diffusion synthetic acceleration scheme for rectangular geometries based on bilinear discontinuous finite elements

    SciTech Connect

    Turcksin, B.; Ragusa, J. C.

    2013-07-01

    A DSA technique to accelerate the iterative convergence of S_n transport solves is derived for bilinear discontinuous (BLD) finite elements on rectangular grids. The diffusion synthetic acceleration equations are discretized using BLD elements by adapting the Modified Interior Penalty technique, introduced in [4] for triangular grids. The MIP-DSA equations are SPD and thus are solved using a preconditioned CG technique. Fourier analyses and implementation of the technique in a BLD S_n transport code show that the technique is stable and effective. (authors)

  9. GPU-accelerated 3D neutron diffusion code based on finite difference method

    SciTech Connect

    Xu, Q.; Yu, G.; Wang, K.

    2012-07-01

    The finite difference method, a traditional numerical solution to the neutron diffusion equation, is considered simpler and more precise than coarse-mesh nodal methods, but its wide application is hindered by the huge memory and long computation times it requires. In recent years, the concept of general-purpose computation on GPUs has provided us with a powerful computational engine for scientific research. In this study, a GPU-accelerated multi-group 3D neutron diffusion code based on the finite difference method was developed. First, a clean-sheet neutron diffusion code (3DFD-CPU) was written in C++ on the CPU architecture, and later ported to GPUs under NVIDIA's CUDA platform (3DFD-GPU). The IAEA 3D PWR benchmark problem was calculated in the numerical test, where three different codes, including the original CPU-based sequential code, the HYPRE (High Performance Preconditioners)-based diffusion code and CITATION, were used as comparison points to test the efficiency and accuracy of the GPU-based program. The results demonstrate both high efficiency and adequate accuracy of the GPU implementation for the neutron diffusion equation. A speedup factor of about 46 times was obtained using NVIDIA's GeForce GTX470 GPU card against a 2.50 GHz Intel Quad Q9300 CPU processor. Compared with the HYPRE-based code running in parallel on an 8-core tower server, a speedup of about 2 could still be observed. More encouragingly, without any mathematical acceleration technology, the GPU implementation ran about 5 times faster than CITATION, which was accelerated by using the SOR method and Chebyshev extrapolation technique. (authors)
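    The kernel being accelerated is, at its core, a stencil update of the finite-difference diffusion equations. The sketch below solves a one-group, fixed-source 2D problem ( -D*lap(phi) + sigma_a*phi = S ) with Jacobi sweeps on the CPU; it is a minimal illustration of the type of loop that the paper ports to CUDA, not the authors' multi-group 3D code, and the cross sections and grid are arbitrary assumptions.

        import numpy as np

        def diffusion_2d(nx, ny, h, D, sigma_a, source, n_iter=5000):
            """One-group fixed-source neutron diffusion, uniform grid, zero-flux boundaries."""
            phi = np.zeros((nx, ny))
            denom = 4.0 * D / h**2 + sigma_a
            for _ in range(n_iter):
                new = phi.copy()
                # Jacobi update of the 5-point finite-difference equations
                new[1:-1, 1:-1] = (source[1:-1, 1:-1] + (D / h**2) *
                                   (phi[2:, 1:-1] + phi[:-2, 1:-1] +
                                    phi[1:-1, 2:] + phi[1:-1, :-2])) / denom
                phi = new
            return phi

        # uniform unit source in a 50 cm x 50 cm region, D = 1 cm, sigma_a = 0.02 /cm
        phi = diffusion_2d(51, 51, 1.0, 1.0, 0.02, np.ones((51, 51)))
        print(phi.max())   # peak flux at the centre of the region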

  10. A scientific and statistical analysis of accelerated aging for pharmaceuticals. Part 1: accuracy of fitting methods.

    PubMed

    Waterman, Kenneth C; Swanson, Jon T; Lippold, Blake L

    2014-10-01

    Three competing mathematical fitting models (a point-by-point estimation method, a linear fit method, and an isoconversion method) of chemical stability (related substance growth) when using high temperature data to predict room temperature shelf-life were employed in a detailed comparison. In each case, complex degradant formation behavior was analyzed by both exponential and linear forms of the Arrhenius equation. A hypothetical reaction was used where a drug (A) degrades to a primary degradant (B), which in turn degrades to a secondary degradation product (C). Calculated data with the fitting models were compared with the projected room-temperature shelf-lives of B and C, using one to four time points (in addition to the origin) for each of three accelerated temperatures. Isoconversion methods were found to provide more accurate estimates of shelf-life at ambient conditions. Of the methods for estimating isoconversion, bracketing the specification limit at each condition produced the best estimates and was considerably more accurate than when extrapolation was required. Good estimates of isoconversion produced similar shelf-life estimates fitting either linear or nonlinear forms of the Arrhenius equation, whereas poor isoconversion estimates favored one method or the other depending on which condition was most in error. PMID:25043838
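    The isoconversion idea reduces, in its simplest form, to fitting ln(t_iso) against 1/T and extrapolating to the storage temperature, since the time to reach a fixed degradant level scales inversely with an Arrhenius rate. The sketch below shows that bare-bones extrapolation with hypothetical accelerated data; it omits the statistical error analysis and model comparison that are the subject of the paper.

        import numpy as np

        def arrhenius_shelf_life(temps_C, iso_times_days, target_C=25.0):
            """Extrapolate an ambient shelf-life from isoconversion times at elevated temperatures."""
            T = np.asarray(temps_C, float) + 273.15
            ln_t = np.log(np.asarray(iso_times_days, float))
            # ln(t_iso) = a + b/T, with b = Ea/R because t_iso ~ 1/rate and rate ~ exp(-Ea/(R*T))
            b, a = np.polyfit(1.0 / T, ln_t, 1)
            return float(np.exp(a + b / (target_C + 273.15)))

        # hypothetical isoconversion times (days to reach the specification limit)
        print(arrhenius_shelf_life([50.0, 60.0, 70.0], [180.0, 60.0, 21.0]), "days at 25 C")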

  11. Accelerated Regularized Estimation of MR Coil Sensitivities Using Augmented Lagrangian Methods

    PubMed Central

    Ramani, Sathish; Fessler, Jeffrey A.

    2012-01-01

    Several magnetic resonance (MR) parallel imaging techniques require explicit estimates of the receive coil sensitivity profiles. These estimates must be accurate over both the object and its surrounding regions to avoid generating artifacts in the reconstructed images. Regularized estimation methods that involve minimizing a cost function containing both a data-fit term and a regularization term provide robust sensitivity estimates. However, these methods can be computationally expensive when dealing with large problems. In this paper, we propose an iterative algorithm based on variable splitting and the augmented Lagrangian method that estimates the coil sensitivity profile by minimizing a quadratic cost function. Our method, ADMM–Circ, reformulates the finite differencing matrix in the regularization term to enable exact alternating minimization steps. We also present a faster variant of this algorithm using intermediate updating of the associated Lagrange multipliers. Numerical experiments with simulated and real data sets indicate that our proposed method converges approximately twice as fast as the preconditioned conjugate gradient method (PCG) over the entire field-of-view. These concepts may accelerate other quadratic optimization problems. PMID:23192524

  12. The Accelerated Intake: A Method for Increasing Initial Attendance to Outpatient Cocaine Treatment.

    ERIC Educational Resources Information Center

    Festinger, David S.; And Others

    1996-01-01

    The effectiveness of offering same day appointments at an outpatient cocaine treatment program to increase intake attendance was examined. Seventy-eight clients were given standard or accelerated intake appointments. Significantly more clients who were given accelerated appointments attended the program. An accelerated intake procedure appears to…

  13. Sensitivity analysis in multipole-accelerated panel methods for potential flow

    NASA Technical Reports Server (NTRS)

    Leathrum, James F., Jr.

    1995-01-01

    In the design of an airframe, knowledge of the effect of changing the geometry on the resulting computations is necessary for design optimization. The geometry is defined in terms of a series of design variables, including design variables to define the wing planform, tail, canard, pylon, and nacelle. Design optimization in this research is based on how these design variables affect the potential flow. The potential flow is computed as a function of the geometry and location of a series of panels describing the airframe, which are in turn a function of the design variables. Multipole accelerated panel methods improve the computational complexity of the problem and thus are an attractive approach. To utilize the methods in design optimization, it was necessary to define the appropriate sensitivity derivatives. The overhead incurred from finding the sensitivity derivatives in conjunction with the original computation should be small. This research developed the background for multipole-accelerated panel methods and the framework for finding sensitivity derivatives in the methods. Potential flow panel codes are commonly used for powered-lift aerodynamic predictions for three dimensional geometries. Given an airframe which has been discretized into a series of panels to define the airframe geometry, potential is computed as a function of the influence of all panels on all other panels. This is a computationally intensive problem for which efficient solutions are desired to improve the computational time and to allow greater resolution by use of more panels. One such solution is the use of hierarchical multipole methods which entail approximations of the effects of far-field terms. Hierarchical multipole methods have become prevalent in molecular dynamics and gravitational physics, and have been introduced into the fields of capacitance calculations, computational fluid dynamics, and electromagnetics. The methods utilize multipole expansions to describe the effect of bodies (i

  14. Sensitivity evaluation of DSA-based parametric imaging using Doppler ultrasound in neurovascular phantoms

    NASA Astrophysics Data System (ADS)

    Balasubramoniam, A.; Bednarek, D. R.; Rudin, S.; Ionita, C. N.

    2016-03-01

    An evaluation of the relation between parametric imaging results obtained from Digital Subtraction Angiography (DSA) images and blood-flow velocity measured using Doppler ultrasound in patient-specific neurovascular phantoms is provided. A silicone neurovascular phantom containing the internal carotid artery, middle cerebral artery and anterior communicating artery was embedded in a tissue-equivalent gel. The gel prevented movement of the vessels when blood-mimicking fluid was pumped through them to obtain Colour Doppler images. The phantom was connected to a peristaltic pump, simulating physiological flow conditions. To obtain the parametric images, water was pumped through the phantom at various flow rates (100, 120 and 160 ml/min) and 10 ml contrast boluses were injected. DSA images were obtained at 10 frames/sec from the Toshiba C-arm and DSA image sequences were input into LabVIEW software to get parametric maps from time-density curves. The parametric maps were compared with velocities determined by Doppler ultrasound at the internal carotid artery. The velocities measured by the Doppler ultrasound were 38, 48 and 65 cm/s for flow rates of 100, 120 and 160 ml/min, respectively. For the 20% increase in flow rate, the percentage change of blood velocity measured by Doppler ultrasound was 26.3%. Correspondingly, there was a 20% decrease in Bolus Arrival Time (BAT) and a 14.3% decrease in Mean Transit Time (MTT), showing strong inverse correlation with the Doppler-measured velocity. The parametric imaging parameters are quite sensitive to velocity changes and are well correlated to the velocities measured by Doppler ultrasound.
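    The parametric maps mentioned above are built from per-pixel time-density curves. The sketch below shows one simple way to extract a bolus arrival time and a mean transit time from such a curve; the threshold-based BAT and first-moment MTT used here are illustrative definitions, not necessarily the estimators implemented in the authors' LabVIEW analysis.

        import numpy as np

        def bolus_parameters(times_s, density, arrival_frac=0.1):
            """Bolus arrival time and mean transit time from a DSA time-density curve."""
            t = np.asarray(times_s, float)
            d = np.asarray(density, float)
            d = d - d.min()                                    # remove the baseline offset
            bat = t[np.argmax(d >= arrival_frac * d.max())]    # first frame above 10% of peak
            mtt = float(np.sum(t * d) / np.sum(d))             # density-weighted mean time
            return bat, mtt

        # synthetic gamma-variate-like bolus sampled at 10 frames/s
        t = np.arange(0.0, 8.0, 0.1)
        curve = t ** 2 * np.exp(-t / 0.8)
        print(bolus_parameters(t, curve))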

  15. On the Use of Accelerated Test Methods for Characterization of Advanced Composite Materials

    NASA Technical Reports Server (NTRS)

    Gates, Thomas S.

    2003-01-01

    A rational approach to the problem of accelerated testing for material characterization of advanced polymer matrix composites is discussed. The experimental and analytical methods provided should be viewed as a set of tools useful in the screening of material systems for long-term engineering properties in aerospace applications. Consideration is given to long-term exposure in extreme environments that include elevated temperature, reduced temperature, moisture, oxygen, and mechanical load. Analytical formulations useful for predictive models that are based on the principles of time-based superposition are presented. The need for reproducible mechanisms, indicator properties, and real-time data are outlined as well as the methodologies for determining specific aging mechanisms.

  16. New methods for accelerating the convergence of molecular electronic integrals over exponential type orbitals

    NASA Astrophysics Data System (ADS)

    Safouhi, Hassan; Hoggan, Philip

    2003-01-01

    This review on molecular integrals for large electronic systems (MILES) places the problem of analytical integration over exponential-type orbitals (ETOs) in a historical context. After reference to the pioneering work, particularly by Barnett, Shavitt and Yoshimine, it focuses on recent progress towards rapid and accurate analytic solutions of MILES over ETOs. Software such as the hydrogenlike wavefunction package Alchemy by Yoshimine and collaborators is described. The review focuses on convergence acceleration of these highly oscillatory integrals and in particular it highlights suitable nonlinear transformations. Work by Levin and Sidi is described and applied to MILES. A step by step description of progress in the use of nonlinear transformation methods to obtain efficient codes is provided. The recent approach developed by Safouhi is also presented. The current state of the art in this field is summarized to show that ab initio analytical work over ETOs is now a viable option.

  17. GPU-accelerated Lattice Boltzmann method for anatomical extraction in patient-specific computational hemodynamics

    NASA Astrophysics Data System (ADS)

    Yu, H.; Wang, Z.; Zhang, C.; Chen, N.; Zhao, Y.; Sawchuk, A. P.; Dalsing, M. C.; Teague, S. D.; Cheng, Y.

    2014-11-01

    Existing research on patient-specific computational hemodynamics (PSCH) relies heavily on software for anatomical extraction of blood arteries. Data reconstruction and mesh generation have to be done using existing commercial software due to the gap between medical image processing and CFD, which increases the computational burden and introduces inaccuracy during data transformation, thus limiting the medical applications of PSCH. We use the lattice Boltzmann method (LBM) to solve the level-set equation over an Eulerian distance field and implicitly and dynamically segment the artery surfaces from radiological CT/MRI imaging data. The segments feed seamlessly into the LBM-based CFD computation of PSCH, so explicit mesh construction and extra data management are avoided. The LBM is ideally suited for GPU (graphics processing unit)-based parallel computing. The parallel acceleration over GPU achieves excellent performance in PSCH computation. An application study will be presented which segments an aortic artery from a chest CT dataset and models the PSCH of the segmented artery.

  18. Research on acceleration method of reactor physics based on FPGA platforms

    SciTech Connect

    Li, C.; Yu, G.; Wang, K.

    2013-07-01

    The physical design of new-concept reactors, which have complex structures, various materials and broad neutron energy spectra, has greatly increased the demands on calculation methods and the corresponding computing hardware. Along with widely used parallel algorithms, heterogeneous platform architectures have been introduced into numerical computations in reactor physics. Because of their natural parallel characteristics, CPU-FPGA architectures are often used to accelerate numerical computation. This paper studies the application and features of this kind of heterogeneous platform in numerical calculations of reactor physics through practical examples. The neutron diffusion module designed on the CPU-FPGA architecture achieves a speed-up factor of 11.2, demonstrating that applying this kind of heterogeneous platform to reactor physics is feasible. (authors)

  19. Flow modification in canine intracranial aneurysm model by an asymmetric stent: studies using digital subtraction angiography (DSA) and image-based computational fluid dynamics (CFD) analyses

    PubMed Central

    Hoi, Yiemeng; Ionita, Ciprian N.; Tranquebar, Rekha V.; Hoffmann, Kenneth R.; Woodward, Scott H.; Taulbee, Dale B.; Meng, Hui; Rudin, Stephen

    2011-01-01

    An asymmetric stent with low porosity patch across the intracranial aneurysm neck and high porosity elsewhere is designed to modify the flow to result in thrombogenesis and occlusion of the aneurysm and yet to reduce the possibility of also occluding adjacent perforator vessels. The purposes of this study are to evaluate the flow field induced by an asymmetric stent using both numerical and digital subtraction angiography (DSA) methods and to quantify the flow dynamics of an asymmetric stent in an in vivo aneurysm model. We created a vein-pouch aneurysm model on the canine carotid artery. An asymmetric stent was implanted at the aneurysm, with 25% porosity across the aneurysm neck and 80% porosity elsewhere. The aneurysm geometry, before and after stent implantation, was acquired using cone beam CT and reconstructed for computational fluid dynamics (CFD) analysis. Both steady-state and pulsatile flow conditions using the measured waveforms from the aneurysm model were studied. To reduce computational costs, we modeled the asymmetric stent effect by specifying a pressure drop over the layer across the aneurysm orifice where the low porosity patch was located. From the CFD results, we found the asymmetric stent reduced the inflow into the aneurysm by 51%, and appeared to create a stasis-like environment which favors thrombus formation. The DSA sequences also showed substantial flow reduction into the aneurysm. Asymmetric stents may be a viable image guided intervention for treating intracranial aneurysms with desired flow modification features. PMID:21666881

  20. High-Speed Digital Signal Processing Method for Detection of Repeating Earthquakes Using GPGPU-Acceleration

    NASA Astrophysics Data System (ADS)

    Kawakami, Taiki; Okubo, Kan; Uchida, Naoki; Takeuchi, Nobunao; Matsuzawa, Toru

    2013-04-01

    Repeating earthquakes occur on the same asperity at the plate boundary. These earthquakes have an important property: the seismic waveforms observed at the same observation site are very similar regardless of their occurrence time. The slip histories of repeating earthquakes can reveal the existence of asperities: the analysis of repeating earthquakes can detect the characteristics of the asperities and enable temporal and spatial monitoring of slip at the plate boundary. Moreover, we expect that the analysis of repeating earthquakes will contribute to medium-term prediction of earthquakes at the plate boundary. Although previous works mostly clarified the existence of asperities and repeating earthquakes, and the relationship between asperities and quasi-static slip areas, a stable and robust method for automatic detection of repeating earthquakes has not yet been established. Furthermore, in order to process the enormous amount of data (so-called big data), speeding up the signal processing is an important issue. Recently, GPUs (graphics processing units) have been used as acceleration tools for signal processing in various fields of study; this movement is called GPGPU (General Purpose computing on GPUs). In the last few years the performance of GPUs has kept improving rapidly, so that a PC (personal computer) with GPUs can act as a personal supercomputer. GPU computing provides a high-performance computing environment at a lower cost than before. Therefore, the use of GPUs contributes to a significant reduction of the execution time in signal processing of huge seismic data sets. In this study, we first applied band-limited Fourier phase correlation as a fast method of detecting repeating earthquakes. This method utilizes only band-limited phase information and yields the correlation values between two seismic signals. Secondly, we employ a coherence function using three orthogonal components (East-West, North-South, and Up-Down) of seismic data as a
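    The first ingredient named in the abstract, band-limited Fourier phase correlation, can be sketched compactly: only the phase of the cross-spectrum inside a chosen frequency band is retained, and its inverse transform peaks at the relative lag of the two records. The band edges, normalisation and synthetic waveforms below are illustrative assumptions, not the authors' processing parameters.

        import numpy as np

        def band_limited_phase_correlation(sig1, sig2, fs, band=(1.5, 5.0)):
            """Correlation value and circular lag (samples) from band-limited phase information."""
            n = len(sig1)
            F1, F2 = np.fft.rfft(sig1), np.fft.rfft(sig2)
            freqs = np.fft.rfftfreq(n, d=1.0 / fs)
            mask = (freqs >= band[0]) & (freqs <= band[1])
            cross = np.zeros(F1.shape, dtype=complex)
            cs = np.conj(F1[mask]) * F2[mask]
            cross[mask] = cs / np.maximum(np.abs(cs), 1e-20)      # keep phase only
            corr = np.fft.irfft(cross, n)
            corr /= np.max(np.fft.irfft(mask.astype(float), n))   # 1.0 for identical waveforms
            lag = int(np.argmax(corr))
            return float(corr.max()), lag

        # two noisy copies of the same wavelet, the second delayed by 15 samples
        rng = np.random.default_rng(1)
        t = np.arange(0.0, 20.0, 0.01)
        wavelet = np.exp(-((t - 5.0) / 0.5) ** 2) * np.sin(2.0 * np.pi * 3.0 * t)
        s1 = wavelet + 0.02 * rng.standard_normal(t.size)
        s2 = np.roll(wavelet, 15) + 0.02 * rng.standard_normal(t.size)
        print(band_limited_phase_correlation(s1, s2, fs=100.0))   # strong peak at a lag of 15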

  1. The cell-in-series method: A technique for accelerated electrode degradation in redox flow batteries

    DOE PAGESBeta

    Pezeshki, Alan M.; Sacci, Robert L.; Veith, Gabriel M.; Zawodzinski, Thomas A.; Mench, Matthew M.

    2015-11-21

    Here, we demonstrate a novel method to accelerate electrode degradation in redox flow batteries and apply this method to the all-vanadium chemistry. Electrode performance degradation occurred seven times faster than in a typical cycling experiment, enabling rapid evaluation of materials. This method also enables the steady-state study of electrodes. In this manner, it is possible to delineate whether specific operating conditions induce performance degradation; we found that both aggressively charging and discharging result in performance loss. Post-mortem x-ray photoelectron spectroscopy of the degraded electrodes was used to resolve the effects of state of charge (SoC) and current on the electrode surface chemistry. For the electrode material tested in this work, we found evidence that a loss of oxygen content on the negative electrode cannot explain decreased cell performance. Furthermore, the effects of decreased electrode and membrane performance on capacity fade in a typical cycling battery were decoupled from crossover; electrode and membrane performance decay were responsible for a 22% fade in capacity, while crossover caused a 12% fade.

  2. Accelerating Particle Filter Using Randomized Multiscale and Fast Multipole Type Methods.

    PubMed

    Shabat, Gil; Shmueli, Yaniv; Bermanis, Amit; Averbuch, Amir

    2015-07-01

    Particle filter is a powerful tool for state tracking using non-linear observations. We present a multiscale based method that accelerates the tracking computation by particle filters. Unlike the conventional way, which calculates weights over all particles in each cycle of the algorithm, we sample a small subset from the source particles using matrix decomposition methods. Then, we apply a function extension algorithm that uses a particle subset to recover the density function for all the rest of the particles not included in the chosen subset. The computational effort is substantial especially when multiple objects are tracked concurrently. The proposed algorithm significantly reduces the computational load. By using the Fast Gaussian Transform, the complexity of the particle selection step is reduced to a linear time in n and k, where n is the number of particles and k is the number of particles in the selected subset. We demonstrate our method on both simulated and on real data such as object tracking in video sequences. PMID:26352448
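    To make the baseline cost concrete, the sketch below shows one cycle of a conventional bootstrap particle filter, in which every particle is propagated and reweighted on each observation; this per-particle weighting over all n particles is the step the paper replaces with its multiscale subset-sampling and function-extension scheme. The model, noise levels and resampling rule here are arbitrary assumptions for a 1D toy tracker.

        import numpy as np

        def particle_filter_step(particles, weights, observation, transition, likelihood, rng):
            """One propagate/weight/resample cycle of a bootstrap particle filter."""
            particles = transition(particles, rng)                    # motion model
            weights = weights * likelihood(observation, particles)    # observation update
            weights /= weights.sum()
            n = len(particles)
            if 1.0 / np.sum(weights ** 2) < n / 2.0:                  # resample if ESS drops
                idx = rng.choice(n, size=n, p=weights)
                particles, weights = particles[idx], np.full(n, 1.0 / n)
            return particles, weights

        # toy 1D tracking: drifting state observed with Gaussian noise
        rng = np.random.default_rng(0)
        n = 2000
        particles = rng.standard_normal(n)
        weights = np.full(n, 1.0 / n)
        true_state = 0.0
        for _ in range(50):
            true_state += 0.1
            obs = true_state + 0.2 * rng.standard_normal()
            particles, weights = particle_filter_step(
                particles, weights, obs,
                transition=lambda p, r: p + 0.1 + 0.05 * r.standard_normal(p.size),
                likelihood=lambda y, p: np.exp(-0.5 * ((y - p) / 0.2) ** 2),
                rng=rng)
        print(float(np.sum(particles * weights)), "vs true", true_state)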

  3. The cell-in-series method: A technique for accelerated electrode degradation in redox flow batteries

    SciTech Connect

    Pezeshki, Alan M.; Sacci, Robert L.; Veith, Gabriel M.; Zawodzinski, Thomas A.; Mench, Matthew M.

    2015-11-21

    Here, we demonstrate a novel method to accelerate electrode degradation in redox flow batteries and apply this method to the all-vanadium chemistry. Electrode performance degradation occurred seven times faster than in a typical cycling experiment, enabling rapid evaluation of materials. This method also enables the steady-state study of electrodes. In this manner, it is possible to delineate whether specific operating conditions induce performance degradation; we found that both aggressively charging and discharging result in performance loss. Post-mortem x-ray photoelectron spectroscopy of the degraded electrodes was used to resolve the effects of state of charge (SoC) and current on the electrode surface chemistry. For the electrode material tested in this work, we found evidence that a loss of oxygen content on the negative electrode cannot explain decreased cell performance. Furthermore, the effects of decreased electrode and membrane performance on capacity fade in a typical cycling battery were decoupled from crossover; electrode and membrane performance decay were responsible for a 22% fade in capacity, while crossover caused a 12% fade.

  4. INSTRUMENTS AND METHODS OF INVESTIGATION: High-energy electron accelerators for industrial applications

    NASA Astrophysics Data System (ADS)

    Salimov, Rustam A.

    2000-02-01

    The principle of operation and the design of main parts of high-energy industrial electron accelerators are described. Accelerators based on high-voltage dc rectifiers are very efficient, compact and characterized by a high degree of unification of their main units. In total, more than 70 accelerators have been manufactured at the G I Budker Institute of Nuclear Physics, with over 20 of them for export.

  5. Challenges in LER/CDU metrology in DSA: placement error and cross-line correlations

    NASA Astrophysics Data System (ADS)

    Constantoudis, Vassilios; Kuppuswamy, Vijaya-Kumar M.; Gogolides, Evangelos; Pret, Alessandro V.; Pathangi, Hari; Gronheid, Roel

    2016-03-01

    DSA lithography poses new challenges in LER/LWR metrology due to its self-organized and pitch-based nature. To cope with these challenges, a novel characterization approach with new metrics and updating the older ones is required. To this end, we focus on two specific challenges of DSA line patterns: a) the large correlations between the left and right edges of a line (line wiggling, rms(LWR)

  6. Production and contribution of hydroxyl radicals between the DSA anode and water interface.

    PubMed

    Li, Guoting; Zhu, Meiya; Chen, Jing; Li, Yunxia; Zhang, Xiwang

    2011-01-01

    Hydroxyl radicals play a key role during electrochemical oxidation and photoelectrochemical oxidation. The production and effect of hydroxyl radicals at the interface between the DSA anode and water were investigated by examining the quenching effect of iso-propanol on Orange II decolorization. We observed that with an increase in the potential across the electrodes from 4 to 12 V at pH 7.0, the contribution percentage of hydroxyl radicals increased dramatically. More OH radicals were produced under acidic and alkaline conditions than under neutral conditions. At an electrode potential of 4 V, the contribution percentage of hydroxyl radicals was clearly higher at near-neutral pH, while the removal efficiency of Orange II was concurrently the lowest. Finally, for photocatalytic oxidation, electrochemical oxidation, and photoelectrochemical oxidation using the same DSA electrode, the effect of hydroxyl radicals proved to be dominant in photocatalytic oxidation but not in electrochemical oxidation, which implies the necessity of UV irradiation for electrochemical oxidation during water treatment. PMID:21790045

  7. Simultaneous acceleration of protons and electrons at nonrelativistic quasiparallel collisionless shocks.

    PubMed

    Park, Jaehong; Caprioli, Damiano; Spitkovsky, Anatoly

    2015-02-27

    We study diffusive shock acceleration (DSA) of protons and electrons at nonrelativistic, high Mach number, quasiparallel, collisionless shocks by means of self-consistent 1D particle-in-cell simulations. For the first time, both species are found to develop power-law distributions with the universal spectral index -4 in momentum space, in agreement with the prediction of DSA. We find that scattering of both protons and electrons is mediated by right-handed circularly polarized waves excited by the current of energetic protons via nonresonant hybrid (Bell) instability. Protons are injected into DSA after a few gyrocycles of shock drift acceleration (SDA), while electrons are first preheated via SDA, then energized via a hybrid acceleration process that involves both SDA and Fermi-like acceleration mediated by Bell waves, before eventual injection into DSA. Using the simulations we can measure the electron-proton ratio in accelerated particles, which is of paramount importance for explaining the cosmic ray fluxes measured on Earth and the multiwavelength emission of astrophysical objects such as supernova remnants, radio supernovae, and galaxy clusters. We find the normalization of the electron power law is ≲10^{-2} of the protons for strong nonrelativistic shocks. PMID:25768768

  8. Simultaneous Acceleration of Protons and Electrons at Nonrelativistic Quasiparallel Collisionless Shocks

    NASA Astrophysics Data System (ADS)

    Park, Jaehong; Caprioli, Damiano; Spitkovsky, Anatoly

    2015-02-01

    We study diffusive shock acceleration (DSA) of protons and electrons at nonrelativistic, high Mach number, quasiparallel, collisionless shocks by means of self-consistent 1D particle-in-cell simulations. For the first time, both species are found to develop power-law distributions with the universal spectral index -4 in momentum space, in agreement with the prediction of DSA. We find that scattering of both protons and electrons is mediated by right-handed circularly polarized waves excited by the current of energetic protons via nonresonant hybrid (Bell) instability. Protons are injected into DSA after a few gyrocycles of shock drift acceleration (SDA), while electrons are first preheated via SDA, then energized via a hybrid acceleration process that involves both SDA and Fermi-like acceleration mediated by Bell waves, before eventual injection into DSA. Using the simulations we can measure the electron-proton ratio in accelerated particles, which is of paramount importance for explaining the cosmic ray fluxes measured on Earth and the multiwavelength emission of astrophysical objects such as supernova remnants, radio supernovae, and galaxy clusters. We find the normalization of the electron power law is ≲10^{-2} of the protons for strong nonrelativistic shocks.

  9. Mask free intravenous 3D digital subtraction angiography (IV 3D-DSA) from a single C-arm acquisition

    NASA Astrophysics Data System (ADS)

    Li, Yinsheng; Niu, Kai; Yang, Pengfei; Aagaard-Kienitz, Beveley; Niemann, David B.; Ahmed, Azam S.; Strother, Charles; Chen, Guang-Hong

    2016-03-01

    Currently, clinical acquisition of IV 3D-DSA requires two separate scans: a mask scan without contrast medium and a filled scan with contrast injection. Having two separate scans adds radiation dose to the patient and increases the likelihood of inadvertent patient-motion-induced mis-registration and the associated mis-registration artifacts in IV 3D-DSA images. In this paper, a new technique, SMART-RECON, is introduced to generate IV 3D-DSA images from a single Cone Beam CT (CBCT) acquisition, eliminating the mask scan. The potential benefits of eliminating the mask scan are: (1) both radiation dose and scan time can be reduced by a factor of 2; (2) intra-sweep motion can be eliminated; (3) inter-sweep motion can be mitigated. Numerical simulations were used to validate the algorithm in terms of contrast recoverability and the ability to mitigate limited-view artifacts.

  10. Method for detecting moment connection fracture using high-frequency transients in recorded accelerations

    USGS Publications Warehouse

    Rodgers, J.E.; Elebi, M.

    2011-01-01

    The 1994 Northridge earthquake caused brittle fractures in steel moment frame building connections, despite causing little visible building damage in most cases. Future strong earthquakes are likely to cause similar damage to the many un-retrofitted pre-Northridge buildings in the western US and elsewhere. Without obvious permanent building deformation, costly intrusive inspections are currently the only way to determine if major fracture damage that compromises building safety has occurred. Building instrumentation has the potential to provide engineers and owners with timely information on fracture occurrence. Structural dynamics theory predicts and scale model experiments have demonstrated that sudden, large changes in structure properties caused by moment connection fractures will cause transient dynamic response. A method is proposed for detecting the building-wide level of connection fracture damage, based on observing high-frequency, fracture-induced transient dynamic responses in strong motion accelerograms. High-frequency transients are short (<1 s), sudden-onset waveforms with frequency content above 25 Hz that are visually apparent in recorded accelerations. Strong motion data and damage information from intrusive inspections collected from 24 sparsely instrumented buildings following the 1994 Northridge earthquake are used to evaluate the proposed method. The method's overall success rate for this data set is 67%, but this rate varies significantly with damage level. The method performs reasonably well in detecting significant fracture damage and in identifying cases with no damage, but fails in cases with few fractures. Combining the method with other damage indicators and removing records with excessive noise improves the ability to detect the level of damage. © 2010 Elsevier B.V. All rights reserved.

  11. A GPU Accelerated Discontinuous Galerkin Conservative Level Set Method for Simulating Atomization

    NASA Astrophysics Data System (ADS)

    Jibben, Zechariah J.

    This dissertation describes a process for interface capturing via an arbitrary-order, nearly quadrature free, discontinuous Galerkin (DG) scheme for the conservative level set method (Olsson et al., 2005, 2008). The DG numerical method is utilized to solve both advection and reinitialization, and executed on a refined level set grid (Herrmann, 2008) for effective use of processing power. Computation is executed in parallel utilizing both CPU and GPU architectures to make the method feasible at high order. Finally, a sparse data structure is implemented to take full advantage of parallelism on the GPU, where performance relies on well-managed memory operations. With solution variables projected into a kth order polynomial basis, a k + 1 order convergence rate is found for both advection and reinitialization tests using the method of manufactured solutions. Other standard test cases, such as Zalesak's disk and deformation of columns and spheres in periodic vortices, are also performed, showing several orders of magnitude improvement over traditional WENO level set methods. These tests also show the impact of reinitialization, which often increases shape and volume errors as a result of level set scalar trapping by normal vectors calculated from the local level set field. Accelerating advection via GPU hardware is found to provide a 30x speedup factor, comparing a 2.0 GHz Intel Xeon E5-2620 CPU in serial with an Nvidia Tesla K20 GPU, with speedup factors increasing with polynomial degree until shared memory is filled. A similar algorithm is implemented for reinitialization, which relies on heavier use of shared and global memory and as a result fills them more quickly and produces smaller speedups of 18x.

  12. A coupled ordinates method for solution acceleration of rarefied gas dynamics simulations

    SciTech Connect

    Das, Shankhadeep; Mathur, Sanjay R.; Alexeenko, Alina; Murthy, Jayathi Y.

    2015-05-15

    Non-equilibrium rarefied flows are frequently encountered in a wide range of applications, including atmospheric re-entry vehicles, vacuum technology, and microscale devices. Rarefied flows at the microscale can be effectively modeled using the ellipsoidal statistical Bhatnagar–Gross–Krook (ESBGK) form of the Boltzmann kinetic equation. Numerical solutions of these equations are often based on the finite volume method (FVM) in physical space and the discrete ordinates method in velocity space. However, existing solvers use a sequential solution procedure wherein the velocity distribution functions are implicitly coupled in physical space, but are solved sequentially in velocity space. This leads to explicit coupling of the distribution function values in velocity space and slows down convergence in systems with low Knudsen numbers. Furthermore, this also makes it difficult to solve multiscale problems or problems in which there is a large range of Knudsen numbers. In this paper, we extend the coupled ordinates method (COMET), previously developed to study participating radiative heat transfer, to solve the ESBGK equations. In this method, at each cell in the physical domain, distribution function values for all velocity ordinates are solved simultaneously. This coupled solution is used as a relaxation sweep in a geometric multigrid method in the spatial domain. Enhancements to COMET to account for the non-linearity of the ESBGK equations, as well as the coupled implementation of boundary conditions, are presented. The methodology works well with arbitrary convex polyhedral meshes, and is shown to give significantly faster solutions than the conventional sequential solution procedure. Acceleration factors of 5–9 are obtained for low to moderate Knudsen numbers on single processor platforms.

  13. A semiempirical method for the description of off-center ratios at depth from linear accelerators

    SciTech Connect

    Tsalafoutas, I.A.; Xenofos, S.; Yakoumakis, E.; Nikoletopoulos, S.

    2003-06-30

    A semiempirical method for the description of the off-center ratios (OCR) at depth from linear accelerators is presented, which is based on a method originally developed for cobalt-60 (⁶⁰Co) units. The OCR profile is obtained as the sum of two components: the first describes an OCR similar to that from a ⁶⁰Co unit, which approximates that resulting from the modification of the original x-ray intensity distribution by the flattening filter; the second takes into account the variable effect of the flattening filter on the dose profile for different depths and field sizes, by considering the existence of a block and employing the negative field concept. The above method is formulated in a mathematical expression, where the parameters involved are obtained by fitting to the measured OCRs. Using this method, OCRs for various depths and field sizes, from a Philips SL-20 for the 6-MV x-ray beam and a Siemens Primus 23, for both the 6-MV and 23-MV x-ray beams, were reproduced with good accuracy. Furthermore, OCRs for other fields and depths that were not included in the fitting procedure were calculated using linear interpolation to estimate the values of the parameters. The results indicate that this method can be used to calculate OCR profiles for a wide range of depths and field sizes from a measured set of data and may be used for monitor unit calculations for off-axis points using a standard geometry. It may also be useful as a quality control tool to verify the accuracy of profiles calculated by a treatment planning system when measured profiles are lacking.

  14. WDS/DSA Certification - International collaboration for a trustworthy research data infrastructure

    NASA Astrophysics Data System (ADS)

    Mokrane, Mustapha; Hugo, Wim; Harrison, Sandy

    2016-04-01

    Several certification frameworks for digital repositories already exist, including the German Institute for Standardization (DIN) standard 31644, the Trustworthy Repositories Audit and Certification (TRAC) criteria, and the International Organization for Standardization (ISO) standard 16363. In addition, the Data Seal of Approval (DSA) and WDS set up core certification mechanisms for trusted digital repositories in 2009, which are increasingly recognized as de facto standards. While DSA emerged in Europe in the Humanities and Social Sciences, WDS started as an international initiative with historical roots in the Earth and Space Sciences. Their catalogues of requirements and review procedures are based on the same principles of openness and transparency. A unique feature of the DSA and WDS certification is that it strikes a balance between simplicity, robustness, and the effort required to complete the process. A successful international cross-project collaboration was initiated between WDS and DSA under the umbrella of the Research Data Alliance (RDA), an international initiative started in 2013 to promote data interoperability, which provided a useful and neutral forum. A joint working group was established in early 2014 to reconcile and simplify the array of certification options and to improve and stimulate core certification for scientific data services. The outputs of this collaboration are a Catalogue of Common Requirements (https://goo.gl/LJZqDo) and a Catalogue of Common Procedures (https://goo.gl/vNR0q1), which will be implemented jointly by WDS and DSA.

  15. Accelerated in vitro durability testing of nonvascular Nitinol stents based on the electrical potential sensing method

    NASA Astrophysics Data System (ADS)

    Park, Chan-Hee; Tijing, Leonard D.; Pant, Hem Raj; Kim, Tae-Hyung; Amarjargal, Altangerel; Kim, Han Joo; Kim, Cheol Sang

    2013-09-01

    In this paper, we report an evaluation of the performance of a new stent durability tester based on the electrical potential sensing method, through accelerated in vitro testing of six different nonvascular Nitinol stents under simulated physiological conditions. The stents were subjected to a pulsatile loading of 33 Hz for a total of 62,726,400 cycles, at a constant temperature and pressure of 35±0.5 °C and 120±4 mmHg, respectively. The electrical potential of each stent was measured in real time and monitored for any changes in readings. After the test-to-fracture tests, the stents were inspected visually and by scanning electron microscopy. A sudden drop in the electrical potential readings suggests that a fracture has occurred, and the only two instances of fracture in our results were correctly detected by the device, with the fractures confirmed visually after the test. The excellent performance of the new method shows good potential for highly reliable and broadly applicable in vitro durability testing of different kinds and sizes of metallic stents.

  16. Comparison between CARIBIC Aerosol Samples Analysed by Accelerator-Based Methods and Optical Particle Counter Measurements

    NASA Astrophysics Data System (ADS)

    Martinsson, B. G.; Friberg, J.; Andersson, S. M.; Weigelt, A.; Hermann, M.; Assmann, D.; Voigtländer, J.; Brenninkmeijer, C. A. M.; van Velthoven, P. J. F.; Zahn, A.

    2014-08-01

    Inter-comparison of results from two kinds of aerosol systems in the CARIBIC (Civil Aircraft for the Regular Investigation of the atmosphere Based on an Instrument Container) passenger-aircraft-based observatory, operating during intercontinental flights at 9-12 km altitude, is presented. Aerosol from the lowermost stratosphere (LMS), the extra-tropical upper troposphere (UT) and the tropical mid troposphere (MT) was investigated. Aerosol particle volume concentration measured with an optical particle counter (OPC) is compared with analytical results of the sum of masses of all major and several minor constituents from aerosol samples collected with an impactor. Analyses were undertaken with the following accelerator-based methods: particle-induced X-ray emission (PIXE) and particle elastic scattering analysis (PESA). Data from 48 flights during 1 year are used, leading to a total of 106 individual comparisons. The ratios of the particle volume from the OPC to the total mass from the analyses were within a relatively narrow interval in 84% of the comparisons. Data points outside this interval are connected with inlet-related effects in clouds, large variability in aerosol composition, particle size distribution effects and some cases of non-ideal sampling. Overall, the comparison of these two CARIBIC measurements, based on vastly different methods, shows good agreement, implying that the chemical and size information can be combined in studies of the MT/UT/LMS aerosol.

  17. Comparison between CARIBIC aerosol samples analysed by accelerator-based methods and optical particle counter measurements

    NASA Astrophysics Data System (ADS)

    Martinsson, B. G.; Friberg, J.; Andersson, S. M.; Weigelt, A.; Hermann, M.; Assmann, D.; Voigtländer, J.; Brenninkmeijer, C. A. M.; van Velthoven, P. J. F.; Zahn, A.

    2014-04-01

    Inter-comparison of results from two kinds of aerosol systems in the CARIBIC (Civil Aircraft for the Regular Investigation of the atmosphere Based on an Instrument Container) passenger-aircraft-based observatory, operating during intercontinental flights at 9-12 km altitude, is presented. Aerosol from the lowermost stratosphere (LMS), the extra-tropical upper troposphere (UT) and the tropical mid troposphere (MT) was investigated. Aerosol particle volume concentration measured with an optical particle counter (OPC) is compared with analytical results of the sum of masses of all major and several minor constituents from aerosol samples collected with an impactor. Analyses were undertaken with the accelerator-based methods particle-induced X-ray emission (PIXE) and particle elastic scattering analysis (PESA). Data from 48 flights during one year are used, leading to a total of 106 individual comparisons. The ratios of the particle volume from the OPC to the total mass from the analyses were within a relatively narrow interval in 84% of the comparisons. Data points outside this interval are connected with inlet-related effects in clouds, large variability in aerosol composition, particle size distribution effects and some cases of non-ideal sampling. Overall, the comparison of these two CARIBIC measurements, based on vastly different methods, shows good agreement, implying that the chemical and size information can be combined in studies of the MT/UT/LMS aerosol.

  18. Modeling Focused Acceleration of Cosmic-Ray Particles by Stochastic Methods

    NASA Astrophysics Data System (ADS)

    Armstrong, C. K.; Litvinenko, Yuri E.; Craig, I. J. D.

    2012-10-01

    Schlickeiser & Shalchi suggested that a first-order Fermi mechanism of focused particle acceleration could be important in several astrophysical applications. In order to investigate focused acceleration, we express the Fokker-Planck equation as an equivalent system of stochastic differential equations. We simplify the system for a set of physically motivated parameters, extend the analytical theory, and determine the evolving particle distribution numerically. While our numerical results agree with the focused acceleration rate of Schlickeiser & Shalchi for a weakly anisotropic particle distribution, we establish significant limitations of the analytical approach. Momentum diffusion is found to be more significant than focused acceleration at early times. Most critically, the particle distribution rapidly becomes anisotropic, leading to a much slower momentum gain rate. We discuss the consequences of our results for the role of focused acceleration in astrophysics.
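
    As a concrete illustration of the stochastic approach mentioned above (rewriting a Fokker-Planck equation as an equivalent system of stochastic differential equations and integrating them numerically), the sketch below advances an ensemble of pseudo-particles in momentum with an Euler-Maruyama step. The momentum-diffusion law D(p) = D0*(p/p0)**2, the parameter values, and the pure diffusion setup are illustrative assumptions, not the model of the cited paper.

      import numpy as np

      # Euler-Maruyama integration of dp = A(p) dt + sqrt(2 D(p)) dW for an
      # ensemble of pseudo-particles; isotropic momentum diffusion only.
      rng = np.random.default_rng(0)
      n, dt, nsteps = 10_000, 1e-3, 5_000
      D0, p0 = 0.1, 1.0
      p = np.full(n, p0)

      for _ in range(nsteps):
          D = D0 * (p / p0) ** 2            # hypothetical diffusion coefficient, D ∝ p^2
          drift = 4.0 * D / p               # Ito drift dD/dp + 2D/p for isotropic p-diffusion
          p += drift * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal(n)

      # The ensemble of p values samples the evolving momentum distribution.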

  19. Method for direct measurement of cosmic acceleration by 21-cm absorption systems.

    PubMed

    Yu, Hao-Ran; Zhang, Tong-Jie; Pen, Ue-Li

    2014-07-25

    So far there is only indirect evidence that the Universe is undergoing an accelerated expansion. The evidence for cosmic acceleration is based on the observation of different objects at different distances and requires invoking the Copernican cosmological principle and Einstein's equations of motion. We examine the direct observability using recession velocity drifts (Sandage-Loeb effect) of 21-cm hydrogen absorption systems in upcoming radio surveys. This measures the change in velocity of the same objects separated by a time interval and is a model-independent measure of acceleration. We forecast that for a CHIME-like survey with a decade time span, we can detect the acceleration of a ΛCDM universe with 5σ confidence. This acceleration test requires modest data analysis and storage changes from the normal processing and cannot be recovered retroactively. PMID:25105607

  20. GPU acceleration of Runge Kutta-Fehlberg and its comparison with Dormand-Prince method

    NASA Astrophysics Data System (ADS)

    Seen, Wo Mei; Gobithaasan, R. U.; Miura, Kenjiro T.

    2014-07-01

    There is a significant reduction of processing time and speedup of performance in computer graphics with the emergence of Graphics Processing Units (GPUs). GPUs have been developed to surpass the Central Processing Unit (CPU) in terms of performance and processing speed. This evolution has opened up a new area in computing and research where the highly parallel GPU is used for non-graphical algorithms. Physical or phenomenal simulations and modelling can be accelerated through General Purpose Graphics Processing Unit (GPGPU) and Compute Unified Device Architecture (CUDA) implementations. These phenomena can be represented with mathematical models in the form of Ordinary Differential Equations (ODEs), which capture the rates of change between independent and dependent variables. ODEs are numerically integrated over time in order to simulate these behaviours. The classical Runge-Kutta (RK) scheme is the common method used to numerically solve ODEs. The Runge-Kutta-Fehlberg (RKF) scheme has been specially developed to provide an estimate of the principal local truncation error at each step, known as the embedded estimate technique. This paper delves into the implementation of the RKF scheme for GPU devices and compares its results with the Dormand-Prince method. A pseudo code is developed to show the implementation in detail. Hence, practitioners will be able to understand the data allocation on the GPU, the formation of RKF kernels and the flow of data to/from the GPU and CPU upon RKF kernel evaluation. The pseudo code is then written in the C language, and two ODE models are executed to show the achievable speedup as compared to the CPU implementation. The accuracy and efficiency of the proposed implementation method are discussed in the final section of this paper.
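
    To make the "embedded estimate" idea concrete: two Runge-Kutta solutions of different order are built from the same stage evaluations, and their difference serves as a local error estimate that drives the step-size control. For brevity, the sketch below uses the simple Heun-Euler 2(1) embedded pair rather than the full Runge-Kutta-Fehlberg 4(5) tableau discussed in the paper; the control logic is the same, and all names and tolerances are illustrative.

      import numpy as np

      def embedded_step(f, t, y, h, tol=1e-6):
          k1 = f(t, y)
          k2 = f(t + h, y + h * k1)
          y_high = y + 0.5 * h * (k1 + k2)      # 2nd-order (Heun) solution
          y_low = y + h * k1                    # embedded 1st-order (Euler) solution
          err = np.max(np.abs(y_high - y_low))  # local truncation error estimate
          accept = err <= tol
          # propose the next step size with a safety factor of 0.9
          h_new = h * min(2.0, max(0.1, 0.9 * (tol / max(err, 1e-16)) ** 0.5))
          return (t + h, y_high, h_new) if accept else (t, y, h_new)

      f = lambda t, y: -y                       # test problem dy/dt = -y
      t, y, h = 0.0, np.array([1.0]), 0.1
      while t < 5.0:
          t, y, h = embedded_step(f, t, y, min(h, 5.0 - t))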

  1. Recent advances in high-performance modeling of plasma-based acceleration using the full PIC method

    NASA Astrophysics Data System (ADS)

    Vay, J.-L.; Lehe, R.; Vincenti, H.; Godfrey, B. B.; Haber, I.; Lee, P.

    2016-09-01

    Numerical simulations have been critical in the recent rapid developments of plasma-based acceleration concepts. Among the various available numerical techniques, the particle-in-cell (PIC) approach is the method of choice for self-consistent simulations from first principles. The fundamentals of the PIC method were established decades ago, but improvements or variations are continuously being proposed. We report on several recent advances in PIC-related algorithms that are of interest for application to plasma-based accelerators, including (a) detailed analysis of the numerical Cherenkov instability and its remediation for the modeling of plasma accelerators in laboratory and Lorentz boosted frames, (b) analytic pseudo-spectral electromagnetic solvers in Cartesian and cylindrical (with azimuthal modes decomposition) geometries, and (c) novel analysis of Maxwell's solvers' stencil variation and truncation, in application to domain decomposition strategies and implementation of perfectly matched layers in high-order and pseudo-spectral solvers.

  2. Accelerated path integral methods for atomistic simulations at ultra-low temperatures

    NASA Astrophysics Data System (ADS)

    Uhl, Felix; Marx, Dominik; Ceriotti, Michele

    2016-08-01

    Path integral methods provide a rigorous and systematically convergent framework to include the quantum mechanical nature of atomic nuclei in the evaluation of the equilibrium properties of molecules, liquids, or solids at finite temperature. Such nuclear quantum effects are often significant for light nuclei already at room temperature, but become crucial at cryogenic temperatures such as those provided by superfluid helium as a solvent. Unfortunately, the cost of converged path integral simulations increases significantly upon lowering the temperature so that the computational burden of simulating matter at the typical superfluid helium temperatures becomes prohibitive. Here we investigate how accelerated path integral techniques based on colored noise generalized Langevin equations, in particular the so-called path integral generalized Langevin equation thermostat (PIGLET) variant, perform in this extreme quantum regime using as an example the quasi-rigid methane molecule and its highly fluxional protonated cousin, CH5+. We show that the PIGLET technique gives a speedup of two orders of magnitude in the evaluation of structural observables and quantum kinetic energy at ultralow temperatures. Moreover, we computed the spatial spread of the quantum nuclei in CH4 to illustrate the limits of using such colored noise thermostats close to the many body quantum ground state.

  3. Geometry of the steady-state approximation: Perturbation and accelerated convergence methods

    NASA Astrophysics Data System (ADS)

    Roussel, Marc R.; Fraser, Simon J.

    1990-07-01

    The time evolution of two model enzyme reactions is represented in phase space Γ. The phase flow is attracted to a unique trajectory, the slow manifold M, before it reaches the point equilibrium of the system. Locating M describes the slow time evolution precisely, and allows all rate constants to be obtained from steady-state data. The line set M is found by solution of a functional equation derived from the flow differential equations. For planar systems, the steady-state (SSA) and equilibrium (EA) approximations bound a trapping region containing M, and direct iteration and perturbation theory are formally equivalent solutions of the functional equation. The iteration's convergence is examined by eigenvalue methods. In many dimensions, the nullcline surfaces of the flow in Γ form a prism-shaped region containing M, but this prism is not a simple trap for the flow. Two of its edges are EA and SSA. Perturbation expansion and direct iteration are now no longer equivalent procedures; they are compared in a three-dimensional example. Convergence of the iterative scheme can be accelerated by a generalization of Aitken's δ² extrapolation, greatly reducing the global error. These operations can be carried out using an algebraic manipulation language. Formally, all these techniques can be carried out in many dimensions.
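
    As a reminder of the classical building block being generalized here, Aitken's δ² extrapolation estimates the limit of a linearly convergent sequence from three successive iterates. The fixed-point map in the sketch (g(x) = cos x) is a stand-in for illustration, not one of the enzyme models studied in the paper.

      import math

      def aitken(x0, x1, x2):
          """Aitken's delta-squared estimate of the limit of x0, x1, x2, ..."""
          denom = x2 - 2.0 * x1 + x0
          return x2 if denom == 0.0 else x0 - (x1 - x0) ** 2 / denom

      g = math.cos                      # linearly convergent fixed-point map
      x = 1.0
      for _ in range(5):
          x1, x2 = g(x), g(g(x))
          x = aitken(x, x1, x2)         # accelerated (Steffensen-style) update
      print(x)                          # approaches the fixed point 0.739085...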

  4. Accelerated path integral methods for atomistic simulations at ultra-low temperatures.

    PubMed

    Uhl, Felix; Marx, Dominik; Ceriotti, Michele

    2016-08-01

    Path integral methods provide a rigorous and systematically convergent framework to include the quantum mechanical nature of atomic nuclei in the evaluation of the equilibrium properties of molecules, liquids, or solids at finite temperature. Such nuclear quantum effects are often significant for light nuclei already at room temperature, but become crucial at cryogenic temperatures such as those provided by superfluid helium as a solvent. Unfortunately, the cost of converged path integral simulations increases significantly upon lowering the temperature so that the computational burden of simulating matter at the typical superfluid helium temperatures becomes prohibitive. Here we investigate how accelerated path integral techniques based on colored noise generalized Langevin equations, in particular the so-called path integral generalized Langevin equation thermostat (PIGLET) variant, perform in this extreme quantum regime using as an example the quasi-rigid methane molecule and its highly fluxional protonated cousin, CH5+. We show that the PIGLET technique gives a speedup of two orders of magnitude in the evaluation of structural observables and quantum kinetic energy at ultralow temperatures. Moreover, we computed the spatial spread of the quantum nuclei in CH4 to illustrate the limits of using such colored noise thermostats close to the many body quantum ground state. PMID:27497533

  5. Advocacy for the Archives and History Office of the SLAC National Accelerator Laboratory: Stages and Methods

    SciTech Connect

    Deken, Jean Marie; /SLAC

    2009-06-19

    Advocating for the good of the SLAC Archives and History Office (AHO) has not been a one-time affair, nor has it been a one-method procedure. It has required taking time to ascertain the current, and perhaps predict the future, climate of the Laboratory, and it has required developing and implementing a portfolio of approaches to the goal of building a stronger archive program by strengthening and appropriately expanding its resources. Among the successful tools in the AHO advocacy portfolio, the Archives Program Review Committee has been the most visible. The Committee and the role it serves, as well as other formal and informal advocacy efforts, are the focus of this case study. My remarks today will begin with a brief introduction to advocacy and outreach as I understand them, and with a description of the Archives and History Office's efforts to understand and work within the corporate culture of the SLAC National Accelerator Laboratory. I will then share with you some of the tools we have employed to advocate for the Archives and History Office programs and activities; and finally, I will talk about how well - or badly - those tools have served us over the past decade.

  6. GPU accelerated simulations of 3D deterministic particle transport using discrete ordinates method

    NASA Astrophysics Data System (ADS)

    Gong, Chunye; Liu, Jie; Chi, Lihua; Huang, Haowei; Fang, Jingyue; Gong, Zhenghu

    2011-07-01

    Graphics Processing Unit (GPU), originally developed for real-time, high-definition 3D graphics in computer games, now provides great capability in solving scientific applications. The basis of particle transport simulation is the time-dependent, multi-group, inhomogeneous Boltzmann transport equation. The numerical solution to the Boltzmann equation involves the discrete ordinates (Sn) method and the procedure of source iteration. In this paper, we present a GPU accelerated simulation of one energy group time-independent deterministic discrete ordinates particle transport in 3D Cartesian geometry (Sweep3D). The performance of the GPU simulations is reported with the simulations of vacuum boundary condition. The discussion of the relative advantages and disadvantages of the GPU implementation, the simulation on multi GPUs, the programming effort and code portability are also reported. The results show that the overall performance speedup of one NVIDIA Tesla M2050 GPU ranges from 2.56 compared with one Intel Xeon X5670 chip to 8.14 compared with one Intel Core Q6600 chip for no flux fixup. The simulation with flux fixup on one M2050 is 1.23 times faster than on one X5670.
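
    For orientation, the source-iteration procedure mentioned above alternates transport sweeps over all discrete ordinates with an update of the scattering source until the scalar flux stops changing. The sketch below is a deliberately minimal 1D slab, one-group, isotropic-scattering version with a diamond-difference sweep; the geometry, cross sections, and quadrature order are illustrative and unrelated to the Sweep3D benchmark.

      import numpy as np

      nx, N = 100, 8                     # spatial cells, number of ordinates (S_N)
      L, sig_t, sig_s, q = 10.0, 1.0, 0.5, 1.0
      dx = L / nx
      mu, w = np.polynomial.legendre.leggauss(N)   # ordinates and weights (weights sum to 2)

      phi = np.zeros(nx)                 # scalar flux
      for it in range(500):
          src = 0.5 * (sig_s * phi + q)  # isotropic emission density per unit mu
          phi_new = np.zeros(nx)
          for m in range(N):             # transport sweep for each ordinate
              psi_in = 0.0               # vacuum boundary
              cells = range(nx) if mu[m] > 0 else range(nx - 1, -1, -1)
              for i in cells:
                  # diamond-difference cell balance solved for the cell-average flux
                  psi_avg = (abs(mu[m]) * psi_in + 0.5 * dx * src[i]) \
                            / (abs(mu[m]) + 0.5 * dx * sig_t)
                  psi_in = 2.0 * psi_avg - psi_in      # outgoing edge flux
                  phi_new[i] += w[m] * psi_avg
          converged = np.max(np.abs(phi_new - phi)) < 1e-8
          phi = phi_new
          if converged:
              break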

  7. k-t Group sparse: a method for accelerating dynamic MRI.

    PubMed

    Usman, M; Prieto, C; Schaeffter, T; Batchelor, P G

    2011-10-01

    Compressed sensing (CS) is a data-reduction technique that has been applied to speed up the acquisition in MRI. However, the use of this technique in dynamic MR applications has been limited in terms of the maximum achievable reduction factor. In general, noise-like artefacts and bad temporal fidelity are visible in standard CS MRI reconstructions when high reduction factors are used. To increase the maximum achievable reduction factor, additional or prior information can be incorporated in the CS reconstruction. Here, a novel CS reconstruction method is proposed that exploits the structure within the sparse representation of a signal by enforcing the support components to be in the form of groups. These groups act like a constraint in the reconstruction. The information about the support region can be easily obtained from training data in dynamic MRI acquisitions. The proposed approach was tested in two-dimensional cardiac cine MRI with both downsampled and undersampled data. Results show that higher acceleration factors (up to 9-fold), with improved spatial and temporal quality, can be obtained with the proposed approach in comparison to the standard CS reconstructions. PMID:21394781
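
    To illustrate the group-sparsity idea in isolation: instead of shrinking transform coefficients individually, whole predefined groups of coefficients are kept or shrunk together. The function below is the generic group soft-thresholding (proximal) operator often used for this purpose; the cited method's exact reconstruction algorithm, sampling pattern, and sparsifying transform are not reproduced here.

      import numpy as np

      def group_soft_threshold(x, groups, lam):
          """Shrink each group of coefficients toward zero by its joint l2 norm."""
          out = np.zeros_like(x)
          for g in groups:                            # g is an array of indices into x
              norm = np.linalg.norm(x[g])
              if norm > lam:
                  out[g] = x[g] * (1.0 - lam / norm)  # keep the group, shrunk
          return out

      # toy example: 12 coefficients in 3 groups of 4
      x = np.array([3.0, 2.5, 2.8, 3.1, 0.1, -0.2, 0.05, 0.1, 1.5, -1.4, 1.6, -1.3])
      groups = [np.arange(0, 4), np.arange(4, 8), np.arange(8, 12)]
      print(group_soft_threshold(x, groups, lam=1.0))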

  8. GPU accelerated simulations of 3D deterministic particle transport using discrete ordinates method

    SciTech Connect

    Gong Chunye; Liu Jie; Chi Lihua; Huang Haowei; Fang Jingyue; Gong Zhenghu

    2011-07-01

    Graphics Processing Unit (GPU), originally developed for real-time, high-definition 3D graphics in computer games, now provides great capability in solving scientific applications. The basis of particle transport simulation is the time-dependent, multi-group, inhomogeneous Boltzmann transport equation. The numerical solution to the Boltzmann equation involves the discrete ordinates (Sn) method and the procedure of source iteration. In this paper, we present a GPU accelerated simulation of one energy group time-independent deterministic discrete ordinates particle transport in 3D Cartesian geometry (Sweep3D). The performance of the GPU simulations is reported with the simulations of vacuum boundary condition. The discussion of the relative advantages and disadvantages of the GPU implementation, the simulation on multi GPUs, the programming effort and code portability are also reported. The results show that the overall performance speedup of one NVIDIA Tesla M2050 GPU ranges from 2.56 compared with one Intel Xeon X5670 chip to 8.14 compared with one Intel Core Q6600 chip for no flux fixup. The simulation with flux fixup on one M2050 is 1.23 times faster than on one X5670.

  9. Comparison of Parallel MRI Reconstruction Methods for Accelerated 3D Fast Spin-Echo Imaging

    PubMed Central

    Xiao, Zhikui; Hoge, W. Scott; Mulkern, R.V.; Zhao, Lei; Hu, Guangshu; Kyriakos, Walid E.

    2014-01-01

    Parallel MRI (pMRI) achieves imaging acceleration by partially substituting gradient-encoding steps with spatial information contained in the component coils of the acquisition array. Variable-density subsampling in pMRI was previously shown to yield improved two-dimensional (2D) imaging in comparison to uniform subsampling, but has yet to be used routinely in clinical practice. In an effort to reduce acquisition time for 3D fast spin-echo (3D-FSE) sequences, this work explores a specific nonuniform sampling scheme for 3D imaging, subsampling along two phase-encoding (PE) directions on a rectilinear grid. We use two reconstruction methods—2D-GRAPPA-Operator and 2D-SPACE RIP—and present a comparison between them. We show that high-quality images can be reconstructed using both techniques. To evaluate the proposed sampling method and reconstruction schemes, results via simulation, phantom study, and in vivo 3D human data are shown. We find that fewer artifacts can be seen in the 2D-SPACE RIP reconstructions than in 2D-GRAPPA-Operator reconstructions, with comparable reconstruction times. PMID:18727083

  10. Theoretical investigation of electronic structure and charge transport property of 9,10-distyrylanthracene (DSA) derivatives with high solid-state luminescent efficiency.

    PubMed

    Wang, Lijuan; Xu, Bin; Zhang, Jibo; Dong, Yujie; Wen, Shanpeng; Zhang, Houyu; Tian, Wenjing

    2013-02-21

    The electronic structure and charge transport properties of 9,10-distyrylanthracene (DSA) and its derivatives with high solid-state luminescent efficiency were investigated by using density functional theory (DFT). The impact of substituents on the optimized structure, reorganization energy, ionization potential (IP) and electron affinity (EA), frontier orbitals, crystal packing, transfer integrals and charge mobility was explored based on Marcus theory. It was found that the hole mobility of DSA was 0.21 cm² V⁻¹ s⁻¹ while the electron mobility was 0.026 cm² V⁻¹ s⁻¹, values that are relatively high owing to the low reorganization energies and high transfer integrals. The calculated results showed that the charge transport properties of these compounds can be significantly tuned by introducing different substituents to DSA. When one electron-withdrawing group (a cyano group) was introduced into DSA, DSA-CN exhibited a hole mobility of 0.14 cm² V⁻¹ s⁻¹, of the same order as that of DSA. However, the electron mobility of DSA-CN decreased to 8.14 × 10⁻⁴ cm² V⁻¹ s⁻¹ due to the relatively large reorganization energy and disadvantageous transfer integral. The effect of electron-donating substituents was investigated by introducing a methoxy group and a tertiary butyl group into DSA. DSA-OCH₃ and DSA-TBU showed much lower charge mobility than DSA, resulting from the steric hindrance of the substituents. On the other hand, both of them exhibited balanced transport properties (for DSA-OCH₃, the hole and electron mobilities were 0.0026 and 0.0027 cm² V⁻¹ s⁻¹; for DSA-TBU, 0.045 and 0.012 cm² V⁻¹ s⁻¹) because of their similar transfer integrals for both holes and electrons. DSA and its derivatives are suggested to be among the most promising emissive materials for organic electroluminescent applications because of their high charge mobility and high solid-state luminescent efficiency. PMID:23319079
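
    For readers unfamiliar with the Marcus-theory machinery referred to above, the hopping rate between neighbouring molecules i and j is typically written as below, and the mobility then follows from the hopping rates via a diffusion constant and the Einstein relation. These are the standard textbook forms; the paper's exact working equations may differ in detail.

      k_{ij} = \frac{2\pi}{\hbar}\,|V_{ij}|^{2}\,\frac{1}{\sqrt{4\pi\lambda k_{B}T}}\,
               \exp\!\left[-\frac{(\Delta G^{0}+\lambda)^{2}}{4\lambda k_{B}T}\right],
      \qquad \mu = \frac{eD}{k_{B}T},

    where V_{ij} is the transfer integral, λ the reorganization energy, ΔG⁰ the free-energy difference (zero for self-exchange hops), and D the hopping diffusion constant built from the k_{ij}.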

  11. A Rapid, Convenient, and Precise Method for the Absolute Determination of the Acceleration of Gravity.

    ERIC Educational Resources Information Center

    Manche, Emanuel P.

    1979-01-01

    Describes a compact and portable apparatus for measuring, with a high degree of precision, the value of the gravitational acceleration g. The apparatus consists of a falling mercury drop and an electronic timing circuit. (GA)
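
    The principle behind such a free-fall apparatus reduces to elementary kinematics: a drop released from rest falls a distance d in a measured time t, so g = 2d/t². The numbers in the sketch are made up for illustration and are not measurements from the apparatus described.

      d = 0.500                      # fall distance in metres (hypothetical)
      t = 0.3194                     # electronically timed fall in seconds (hypothetical)
      g = 2.0 * d / t**2             # g = 2d/t^2 for release from rest
      print(f"g = {g:.3f} m/s^2")    # ~9.80 m/s^2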

  12. Effect of a Wire on the Electromagnetic Field in an Accelerating Cavity in the Coaxial-Wire Method

    NASA Astrophysics Data System (ADS)

    Toyomasu, Takanori; Izawa, Masaaki; Kamiya, Yukihide

    1995-01-01

    A wire used in the coaxial-wire method to measure the characteristics of an accelerating cavity cannot be treated as a perturbator. Using a pill-box model, we analytically studied the electromagnetic field of a cavity with a wire. By this analysis, the effect of the wire on the resonance frequencies and Q-values of the cavity modes was clarified.

  13. Research methods for parameters of accelerated low-energy proton beam

    NASA Astrophysics Data System (ADS)

    Bystritsky, V. M.; Dudkin, G. N.; Kyznetsov, S. I.; Nechaev, B. A.; Padalko, V. N.; Philippov, A. V.; Sadovsky, A. B.; Varlachev, V. A.; Zvaygintsev, O. A.

    2015-07-01

    To study the pd-reaction cross-section it is necessary to know the main parameters of the accelerated hydrogen ion beam with high accuracy. These parameters include: the ion energy dispersion; the content of neutrals; and the ratio of atomic to molecular ions of hydrogen in the flux of accelerated particles. This work is aimed at developing techniques for, and carrying out measurements of, the above-mentioned parameters of the low-energy proton beam.

  14. Optimization of accelerator parameters using normal form methods on high-order transfer maps

    SciTech Connect

    Snopok, Pavel; /Michigan State U.

    2007-05-01

    Methods of analysis of the dynamics of ensembles of charged particles in collider rings are developed. The following problems are posed and solved using normal form transformations and other methods of perturbative nonlinear dynamics: (1) Optimization of the Tevatron dynamics: (a) Skew quadrupole correction of the dynamics of particles in the Tevatron in the presence of the systematic skew quadrupole errors in dipoles; (b) Calculation of the nonlinear tune shift with amplitude based on the results of measurements and the linear lattice information; (2) Optimization of the Muon Collider storage ring: (a) Computation and optimization of the dynamic aperture of the Muon Collider 50 x 50 GeV storage ring using higher order correctors; (b) 750 x 750 GeV Muon Collider storage ring lattice design matching the Tevatron footprint. The normal form coordinates have a very important advantage over the particle optical coordinates: if the transformation can be carried out successfully (general restrictions for that are not much stronger than the typical restrictions imposed on the behavior of the particles in the accelerator), then the motion in the new coordinates has a very clean representation, allowing one to extract more information about the dynamics of particles, and they are very convenient for the purposes of visualization. All the problem formulations include the derivation of the objective functions, which are later used in the optimization process using various optimization algorithms. Algorithms used to solve the problems are specific to collider rings, and applicable to similar problems arising on other machines of the same type. The details of the long-term behavior of the systems are studied to ensure their stability for the desired number of turns. The algorithm of the normal form transformation is of great value for such problems as it gives much extra information about the disturbing factors. In addition to the fact that the dynamics of particles is represented

  15. Method for generating extreme ultraviolet with mather-type plasma accelerators for use in Extreme Ultraviolet Lithography

    DOEpatents

    Hassanein, Ahmed; Konkashbaev, Isak

    2006-10-03

    A device and method for generating extreme ultraviolet (short-wavelength) electromagnetic radiation use two intersecting plasma beams generated by two plasma accelerators. The intersection of the two plasma beams emits electromagnetic radiation, in particular radiation at extreme ultraviolet wavelengths. In the preferred orientation, two axially aligned counter-streaming plasmas collide to produce an intense source of electromagnetic radiation at the 13.5 nm wavelength. The Mather-type plasma accelerators can utilize tin- or lithium-covered electrodes. Tin, lithium or xenon can be used as the photon-emitting gas source.

  16. Towards a novel laser-driven method of exotic nuclei extraction-acceleration for fundamental physics and technology

    NASA Astrophysics Data System (ADS)

    Nishiuchi, M.; Sakaki, H.; Esirkepov, T. Zh.; Nishio, K.; Pikuz, T. A.; Faenov, A. Ya.; Skobelev, I. Yu.; Orlandi, R.; Pirozhkov, A. S.; Sagisaka, A.; Ogura, K.; Kanasaki, M.; Kiriyama, H.; Fukuda, Y.; Koura, H.; Kando, M.; Yamauchi, T.; Watanabe, Y.; Bulanov, S. V.; Kondo, K.; Imai, K.; Nagamiya, S.

    2016-04-01

    A combination of a petawatt laser and nuclear physics techniques can crucially facilitate the measurement of exotic nuclei properties. With numerical simulations and laser-driven experiments we show prospects for the Laser-driven Exotic Nuclei extraction-acceleration method proposed in [M. Nishiuchi et al., Phys. Plasmas 22, 033107 (2015)]: a femtosecond petawatt laser, irradiating a target bombarded by an external ion beam, extracts from the target, and accelerates to a few GeV, highly charged short-lived heavy exotic nuclei created in the target via nuclear reactions.

  17. An acceleration of the characteristics by a space-angle two-level method using surface discontinuity factors

    SciTech Connect

    Grassi, G.

    2006-07-01

    We present a non-linear space-angle two-level acceleration scheme for the method of characteristics (MOC). To the fine level, on which the MOC transport calculation is performed, we associate a more coarsely discretized phase space in which a low-order problem is solved as an acceleration step. Cross sections on the coarse level are obtained by a flux-volume homogenisation technique, which makes the acceleration non-linear. Discontinuity factors per surface are introduced as additional degrees of freedom on the coarse level in order to ensure the equivalence of the heterogeneous and the homogenised problems. After each fine transport iteration, a low-order transport problem is iteratively solved on the homogenised grid. The solution of this problem is then used to correct the angular moments of the flux resulting from the previous fine transport sweep. Numerical tests for a given benchmark have been performed. Results are discussed. (authors)

  18. Measurement of acceleration while walking as an automated method for gait assessment in dairy cattle.

    PubMed

    Chapinal, N; de Passillé, A M; Pastell, M; Hänninen, L; Munksgaard, L; Rushen, J

    2011-06-01

    The aims were to determine whether measures of acceleration of the legs and back of dairy cows while they walk could help detect changes in gait or locomotion associated with lameness and differences in the walking surface. In 2 experiments, 12 or 24 multiparous dairy cows were fitted with five 3-dimensional accelerometers, 1 attached to each leg and 1 to the back, and acceleration data were collected while cows walked in a straight line on concrete (experiment 1) or on both concrete and rubber (experiment 2). Cows were video-recorded while walking to assess overall gait, asymmetry of the steps, and walking speed. In experiment 1, cows were selected to maximize the range of gait scores, whereas no clinically lame cows were enrolled in experiment 2. For each accelerometer location, overall acceleration was calculated as the magnitude of the 3-dimensional acceleration vector and the variance of overall acceleration, as well as the asymmetry of variance of acceleration within the front and rear pair of legs. In experiment 1, the asymmetry of variance of acceleration in the front and rear legs was positively correlated with overall gait and the visually assessed asymmetry of the steps (r ≥ 0.6). Walking speed was negatively correlated with the asymmetry of variance of the rear legs (r=-0.8) and positively correlated with the acceleration and the variance of acceleration of each leg and back (r ≥ 0.7). In experiment 2, cows had lower gait scores [2.3 vs. 2.6; standard error of the difference (SED)=0.1, measured on a 5-point scale] and lower scores for asymmetry of the steps (18.0 vs. 23.1; SED=2.2, measured on a continuous 100-unit scale) when they walked on rubber compared with concrete, and their walking speed increased (1.28 vs. 1.22 m/s; SED=0.02). The acceleration of the front (1.67 vs. 1.72 g; SED=0.02) and rear (1.62 vs. 1.67 g; SED=0.02) legs and the variance of acceleration of the rear legs (0.88 vs. 0.94 g; SED=0.03) were lower when cows walked on rubber
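
    To make the acceleration summaries concrete, the sketch below computes, from hypothetical 3-axis accelerometer traces for a pair of legs, the overall acceleration (magnitude of the 3-D acceleration vector, in g), its variance, and one plausible asymmetry-of-variance measure. The exact asymmetry formula used in the study is not spelled out in the abstract, so the definition here is an assumption.

      import numpy as np

      rng = np.random.default_rng(1)
      # hypothetical 3-axis traces (in g) for the two front legs while walking
      front_left = rng.normal(0.0, 0.30, size=(1000, 3)) + np.array([0.0, 0.0, 1.0])
      front_right = rng.normal(0.0, 0.35, size=(1000, 3)) + np.array([0.0, 0.0, 1.0])

      def overall(acc_xyz):
          return np.linalg.norm(acc_xyz, axis=1)       # magnitude of the 3-D vector

      var_left = np.var(overall(front_left))
      var_right = np.var(overall(front_right))
      asymmetry = abs(var_left - var_right) / max(var_left, var_right)
      print(var_left, var_right, asymmetry)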

  19. Advanced 3D Poisson solvers and particle-in-cell methods for accelerator modeling

    NASA Astrophysics Data System (ADS)

    Serafini, David B.; McCorquodale, Peter; Colella, Phillip

    2005-01-01

    We seek to improve on the conventional FFT-based algorithms for solving the Poisson equation with infinite-domain (open) boundary conditions for large problems in accelerator modeling and related areas. In particular, improvements in both accuracy and performance are possible by combining several technologies: the method of local corrections (MLC); the James algorithm; and adaptive mesh refinement (AMR). The MLC enables the parallelization (by domain decomposition) of problems with large domains and many grid points. This improves on the FFT-based Poisson solvers typically used, as it does not require the all-to-all communication pattern that parallel 3D FFT algorithms require, which tends to be a performance bottleneck on current (and foreseeable) parallel computers. In initial tests, good scalability up to 1000 processors has been demonstrated for our new MLC solver. An essential component of our approach is a new version of the James algorithm for infinite-domain boundary conditions for the case of three dimensions. By using a simplified version of the fast multipole method in the boundary-to-boundary potential calculation, we improve on the performance of the Hockney algorithm typically used by reducing the number of grid points by a factor of 8, and the CPU costs by a factor of 3. This is particularly important for large problems where computer memory limits are a consideration. The MLC allows for the use of adaptive mesh refinement, which reduces the number of grid points and increases the accuracy in the Poisson solution. This improves on the uniform grid methods typically used in PIC codes, particularly in beam problems where the halo is large. Also, the number of particles per cell can be controlled more closely with adaptivity than with a uniform grid. To use AMR with particles is more complicated than using uniform grids. It affects depositing particles on the non-uniform grid, reassigning particles when the adaptive grid changes and maintaining the load

  20. HEART Pathway Accelerated Diagnostic Protocol Implementation: Prospective Pre-Post Interrupted Time Series Design and Methods

    PubMed Central

    Wells, Brian J

    2016-01-01

    Background: Most patients presenting to US Emergency Departments (ED) with chest pain are hospitalized for comprehensive testing. These evaluations cost the US health system >$10 billion annually, but have a diagnostic yield for acute coronary syndrome (ACS) of <10%. The history/ECG/age/risk factors/troponin (HEART) Pathway is an accelerated diagnostic protocol (ADP), designed to improve care for patients with acute chest pain by identifying patients for early ED discharge. Prior efficacy studies demonstrate that the HEART Pathway safely reduces cardiac testing, while maintaining an acceptably low adverse event rate. Objective: The purpose of this study is to determine the effectiveness of HEART Pathway ADP implementation within a health system. Methods: This controlled before-after study will accrue adult patients with acute chest pain, but without ST-segment elevation myocardial infarction on electrocardiogram, for two years and is expected to include approximately 10,000 patients. Outcome measures include hospitalization rate, objective cardiac testing rates (stress testing and angiography), length of stay, and rates of recurrent cardiac care for participants. Results: In pilot data, the HEART Pathway decreased hospitalizations by 21% and decreased hospital length of stay (median reduction of 12 hours), without increasing adverse events or recurrent care. At the writing of this paper, data have been collected on >5000 patient encounters. The HEART Pathway has been fully integrated into the health system's electronic medical records, providing real-time decision support to our providers. Conclusions: We hypothesize that the HEART Pathway will safely reduce healthcare utilization. This study could provide a model for delivering high-value care to the 8-10 million US ED patients with acute chest pain each year. Trial Registration: ClinicalTrials.gov NCT02056964; https://clinicaltrials.gov/ct2/show/NCT02056964 (Archived by WebCite at http://www.webcitation.org/6ccajsgyu) PMID:26800789

  1. Evaluation of Enhanced Sampling Provided by Accelerated Molecular Dynamics with Hamiltonian Replica Exchange Methods

    PubMed Central

    2015-01-01

    Many problems studied via molecular dynamics require accurate estimates of various thermodynamic properties, such as the free energies of different states of a system, which in turn requires well-converged sampling of the ensemble of possible structures. Enhanced sampling techniques are often applied to provide faster convergence than is possible with traditional molecular dynamics simulations. Hamiltonian replica exchange molecular dynamics (H-REMD) is a particularly attractive method, as it allows the incorporation of a variety of enhanced sampling techniques through modifications to the various Hamiltonians. In this work, we study the enhanced sampling of the RNA tetranucleotide r(GACC) provided by H-REMD combined with accelerated molecular dynamics (aMD), where a boosting potential is applied to torsions, and compare this to the enhanced sampling provided by H-REMD in which torsion potential barrier heights are scaled down to lower force constants. We show that H-REMD and multidimensional REMD (M-REMD) combined with aMD does indeed enhance sampling for r(GACC), and that the addition of the temperature dimension in the M-REMD simulations is necessary to efficiently sample rare conformations. Interestingly, we find that the rate of convergence can be improved in a single H-REMD dimension by simply increasing the number of replicas from 8 to 24 without increasing the maximum level of bias. The results also indicate that factors beyond replica spacing, such as round trip times and time spent at each replica, must be considered in order to achieve optimal sampling efficiency. PMID:24625009
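
    For context, the boosting potential referred to above is, in the standard accelerated-MD formulation, added whenever the (here, torsional) potential energy V(r) drops below a chosen threshold E; the specific threshold and acceleration parameter used in the cited simulations are not given here, and the expression below is only the generic form:

      \Delta V(\mathbf{r}) =
      \begin{cases}
        \dfrac{\bigl(E - V(\mathbf{r})\bigr)^{2}}{\alpha + E - V(\mathbf{r})}, & V(\mathbf{r}) < E,\\[1ex]
        0, & V(\mathbf{r}) \ge E,
      \end{cases}

    so the modified surface V(r) + ΔV(r) has lowered barriers while regions above E are left untouched.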

  2. Studies of charge neutral FCC Lattice Gas with Yukawa Interaction and Accelerated Cartesian Expansion method

    NASA Astrophysics Data System (ADS)

    Huang, He

    In this thesis, I present the results of studies of the structural properties and phase transition of a charge-neutral FCC lattice gas with Yukawa interaction, and discuss a novel fast calculation algorithm, the Accelerated Cartesian Expansion (ACE) method. In the first part of my thesis, I discuss the results of Monte Carlo simulations carried out to understand the finite temperature (phase transition) properties and the ground state structure of a Yukawa Lattice Gas (YLG) model. In this model the ions interact via the potential q_i q_j exp(-κ r_ij)/r_ij, where q_i and q_j are the charges of the ions located at the lattice sites i and j with position vectors R_i and R_j, r_ij = |R_i - R_j|, and κ is a measure of the range of the interaction, called the screening parameter. This model approximates an interesting quaternary system of great current thermoelectric interest called LAST-m, AgSbPb_mTe_{m+2}. I have also developed rapid calculation methods for the potential energy of a lattice gas system with periodic boundary conditions, based on the Ewald summation method, and coded the algorithm to compute the energies in the MC simulations. Some of the interesting results of the MC simulations are: (i) how the nature and strength of the phase transition depend on the range of interaction (Yukawa screening parameter κ); (ii) what the degeneracy of the ground state is for different values of the concentration of charges; and (iii) what the nature is of the two-stage disordering transition seen for certain values of x. In addition, based on the analysis of the surface energy of different nano-clusters formed near the transition temperature, the solidification process and the rate of production of these nano-clusters have been studied. In the second part of my thesis, we have developed two methods for rapidly computing potentials of the form R^(-ν). Both these methods are founded on addition theorems based on Taylor expansions. Taylor's series has a couple of inherent advantages: (i) it
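
    As a minimal illustration of the pair interaction defined above, the sketch below sums the screened Coulomb (Yukawa) energy over all pairs of a small toy configuration by direct summation; it deliberately omits the periodic boundary conditions and Ewald-type summation that the thesis actually uses.

      import numpy as np

      def yukawa_energy(positions, charges, kappa):
          """Direct-sum pair energy U = sum_{i<j} q_i q_j exp(-kappa r_ij) / r_ij."""
          U = 0.0
          n = len(charges)
          for i in range(n):
              for j in range(i + 1, n):
                  r = np.linalg.norm(positions[i] - positions[j])
                  U += charges[i] * charges[j] * np.exp(-kappa * r) / r
          return U

      pos = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [1.0, 1.0, 0.0]])
      q = np.array([+1.0, -1.0, -1.0, +1.0])     # charge-neutral toy configuration
      print(yukawa_energy(pos, q, kappa=1.0))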

  3. Interstellar Pickup Ion Acceleration in the Turbulent Magnetic Field at the Solar Wind Termination Shock Using a Focused Transport Approach

    NASA Astrophysics Data System (ADS)

    Ye, Junye; le Roux, Jakobus A.; Arthur, Aaron D.

    2016-08-01

    We study the physics of locally born interstellar pickup proton acceleration at the nearly perpendicular solar wind termination shock (SWTS) in the presence of a random magnetic field spiral angle using a focused transport model. Guided by Voyager 2 observations, the spiral angle is modeled with a q-Gaussian distribution. The spiral angle fluctuations, which are used to generate the perpendicular diffusion of pickup protons across the SWTS, play a key role in enabling efficient injection and rapid diffusive shock acceleration (DSA) when these particles follow field lines. Our simulations suggest that variation of both the shape (q-value) and the standard deviation (σ-value) of the q-Gaussian distribution significantly affect the injection speed, pitch-angle anisotropy, radial distribution, and the efficiency of the DSA of pickup protons at the SWTS. For example, increasing q and especially reducing σ enhances the DSA rate.
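
    For reference, a q-Gaussian of the Tsallis form is commonly written as below; it reduces to an ordinary Gaussian as q → 1 and develops heavier tails for q > 1. The parameterization and normalization used in the cited study may differ, so this is only the generic form:

      f_{q}(\delta\psi) \;\propto\; \left[1 + (q-1)\,\frac{\delta\psi^{2}}{2\sigma^{2}}\right]^{-\frac{1}{q-1}},

    where δψ denotes the fluctuation of the magnetic field spiral angle about its mean and σ sets the width.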

  4. INSTRUMENTS AND METHODS OF INVESTIGATION: An accelerator-driven system for the destruction of nuclear waste

    NASA Astrophysics Data System (ADS)

    Revol, Jean-Pierre

    2003-07-01

    Progress in particle accelerator technology makes it possible to use a proton accelerator to produce energy and to destroy nuclear waste efficiently. The energy amplifier (EA) proposed by Carlo Rubbia and his group is a subcritical fast neutron system driven by a proton accelerator. It is particularly attractive for destroying, through fission, transuranic elements produced by presently operating nuclear reactors. The EA could also efficiently and at minimal cost transform long-lived fission fragments using the concept of adiabatic resonance crossing (ARC), recently tested at CERN with the TARC experiment. The ARC concept can be extended to several other domains of application (production of radioactive isotopes for medicine and industry, neutron research applications, etc.).

  5. Application of the Euler-Lagrange method in determination of the coordinate acceleration

    NASA Astrophysics Data System (ADS)

    Sfarti, A.

    2016-05-01

    In a recent comment published in this journal (2015 Eur. J. Phys. 36 038001), Khrapko derived the relationship between coordinate acceleration and coordinate speed for the case of radial motion in Schwarzschild coordinates. We show an alternative derivation based on the Euler-Lagrange formalism. The Euler-Lagrange formalism has the advantage that it circumvents the tedious calculation of the Christoffel symbols and is more intuitive. Another aspect of our comment is that one should not attach much physical meaning to coordinate-dependent entities: general relativity is a coordinate-free theory, so a relationship between two coordinate-dependent quantities, such as the dependence of the coordinate acceleration on the coordinate speed, should not be given much importance. By contrast, the proper acceleration and proper speed are meaningful entities, and their relationship is relevant. The comment is intended for graduate students and for instructors who teach GR.
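
    For orientation, the relation at issue, for purely radial motion in Schwarzschild coordinates, has the familiar textbook form given below (with r_s = 2GM/c²); it is quoted here as the standard result rather than reproduced from either derivation discussed in the comment:

      \frac{d^{2}r}{dt^{2}} \;=\;
      -\frac{GM}{r^{2}}\left(1-\frac{r_{s}}{r}\right)
      \;+\; \frac{3GM}{c^{2}r^{2}}\,\frac{1}{1-r_{s}/r}\left(\frac{dr}{dt}\right)^{2},
      \qquad r_{s}\equiv\frac{2GM}{c^{2}},

    which makes explicit how the coordinate acceleration depends on the coordinate speed dr/dt.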

  6. THE ROLE OF CROSS-SHOCK POTENTIAL ON PICKUP ION SHOCK ACCELERATION IN THE FRAMEWORK OF FOCUSED TRANSPORT THEORY

    SciTech Connect

    Zuo, Pingbing; Zhang, Ming; Rassoul, Hamid K.

    2013-10-20

    The focused transport theory is appropriate for describing the injection and acceleration of low-energy particles at shocks, as an extension of diffusive shock acceleration (DSA). In this investigation, we aim to characterize the role of the cross-shock potential (CSP), which originates in the charge separation across the shock ramp, on pickup ion (PUI) acceleration at various types of shocks with a focused transport model. The simulation results of energy spectrum and spatial density distribution for the cases with and without the CSP included in the model are compared. With sufficient acceleration time, the focused transport acceleration finally falls into the DSA regime, with the power-law spectral index equal to the solution of the DSA theory. The CSP can affect the shape of the spectrum segment at lower energies, but it does not change the spectral index of the final power-law spectrum at high energies. It is found that the CSP controls the injection efficiency, which is the fraction of PUIs reaching the DSA regime. A stronger CSP jump results in a dramatically improved injection efficiency. Our simulation results also show that the injection efficiency of PUIs is mass-dependent, being lower for species with higher mass. In addition, the CSP is able to enhance the particle reflection upstream and produce a stronger intensity spike at the shock front. We conclude that the CSP is a non-negligible factor that affects the dynamics of PUIs at shocks.

  7. GPU-accelerated inverse identification of radiative properties of particle suspensions in liquid by the Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Ma, C. Y.; Zhao, J. M.; Liu, L. H.; Zhang, L.; Li, X. C.; Jiang, B. C.

    2016-03-01

    Inverse identification of the radiative properties of participating media is usually time consuming. In this paper, a GPU accelerated inverse identification model is presented to obtain the radiative properties of particle suspensions. The sample medium is placed in a cuvette and a narrow light beam is irradiated normally from the side. The forward three-dimensional radiative transfer problem is solved using a massively parallel Monte Carlo method implemented on a graphics processing unit (GPU), and a particle swarm optimization algorithm is applied to inversely identify the radiative properties of particle suspensions based on the measured bidirectional scattering distribution function (BSDF). The GPU-accelerated Monte Carlo simulation significantly reduces the solution time of the radiative transfer simulation and hence greatly accelerates the inverse identification process. A speedup of several hundred times is achieved compared with the CPU implementation. It is demonstrated, using both simulated BSDFs and experimentally measured BSDFs of microalgae suspensions, that the radiative properties of particle suspensions can be effectively identified with the GPU-accelerated algorithm and three-dimensional radiative transfer modelling.
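
    As a sketch of the optimization side of this workflow, a generic particle swarm optimization loop is shown below with a stand-in objective (a simple quadratic); in the paper the objective would be the misfit between measured and simulated BSDFs, with the Monte Carlo forward solve inside it, neither of which is reproduced here.

      import numpy as np

      rng = np.random.default_rng(0)

      def objective(x):                      # hypothetical misfit to minimize
          return np.sum(x**2, axis=1)

      n_particles, n_dims, n_iters = 30, 2, 200
      w, c1, c2 = 0.7, 1.5, 1.5              # inertia and acceleration coefficients
      x = rng.uniform(-5, 5, (n_particles, n_dims))
      v = np.zeros_like(x)
      pbest, pbest_val = x.copy(), objective(x)
      gbest = pbest[np.argmin(pbest_val)].copy()

      for _ in range(n_iters):
          r1, r2 = rng.random(x.shape), rng.random(x.shape)
          v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
          x = x + v
          val = objective(x)
          improved = val < pbest_val
          pbest[improved], pbest_val[improved] = x[improved], val[improved]
          gbest = pbest[np.argmin(pbest_val)].copy()

      print(gbest, pbest_val.min())          # best parameters found and their misfit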

  8. Accelerated/abbreviated test methods of the low-cost silicon solar array project. Study 4, task 3: Encapsulation

    NASA Technical Reports Server (NTRS)

    Kolyer, J. M.; Mann, N. R.

    1977-01-01

    Methods of accelerated and abbreviated testing were developed and applied to solar cell encapsulants. These encapsulants must provide protection for as long as 20 years outdoors at different locations within the United States. Consequently, encapsulants were exposed for increasing periods of time to the inherent climatic variables of temperature, humidity, and solar flux. Property changes in the encapsulants were observed. The goal was to predict long term behavior of encapsulants based upon experimental data obtained over relatively short test periods.

  9. Cosmetic corrosion of painted aluminum and steel automotive body sheet: Results from outdoor and accelerated laboratory test methods

    SciTech Connect

    Moran, J.P.; Ziman, P.R.; Egbert, M.W.

    1995-11-01

    In recent years, increasing attention has been given to the need to develop accelerated laboratory test methods for cosmetic corrosion of painted panels that realistically simulate in-service exposure. Much of that work has focused on steel substrates. The purpose of this research is to compare the corrosion performance of painted aluminum and steel sheet as determined from various laboratory methods and in-service exposure, and to develop a realistic accelerated test method for evaluating the cosmetic corrosion of painted aluminum. Several aluminum sheet products from the 2xxx, 5xxx, and 6xxx alloy series have been tested. The steel substrates are similar to those used in other programs. The test methods chosen represent a cross-section of methods common to the automotive and aluminum industries for evaluation of painted sheet metal products. The results indicate that there is considerable difference in the relative correlation of the various test methods to in-service exposure. In addition, there is considerable difference in the relative magnitudes and morphologies of corrosion, and occasionally in the relative rankings, as a function of test method. The influence of alloy composition and zinc phosphate coating weight is also discussed.

  10. Application of High-performance Visual Analysis Methods to Laser Wakefield Particle Acceleration Data

    SciTech Connect

    Rubel, Oliver; Prabhat, Mr.; Wu, Kesheng; Childs, Hank; Meredith, Jeremy; Geddes, Cameron G.R.; Cormier-Michel, Estelle; Ahern, Sean; Weber, Gunther H.; Messmer, Peter; Hagen, Hans; Hamann, Bernd; Bethel, E. Wes

    2008-08-28

    Our work combines and extends techniques from high-performance scientific data management and visualization to enable scientific researchers to gain insight from extremely large, complex, time-varying laser wakefield particle accelerator simulation data. We extend histogram-based parallel coordinates for use in visual information display as well as an interface for guiding and performing data mining operations, which are based upon multi-dimensional and temporal thresholding and data subsetting operations. To achieve very high performance on parallel computing platforms, we leverage FastBit, a state-of-the-art index/query technology, to accelerate data mining and multi-dimensional histogram computation. We show how these techniques are used in practice by scientific researchers to identify, visualize and analyze a particle beam in a large, time-varying dataset.
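
    FastBit supplies compressed bitmap indexes for exactly this kind of query; the short numpy sketch below only illustrates the multi-dimensional threshold selection and conditional histogramming that the pipeline accelerates. The field names, units, and threshold values are hypothetical.

```python
import numpy as np

# Synthetic stand-in for one time step of particle data (columns are hypothetical).
rng = np.random.default_rng(1)
n = 1_000_000
px = rng.normal(0.0, 1.0, n)        # longitudinal momentum (arbitrary units)
x = rng.uniform(0.0, 100.0, n)      # longitudinal position
px[:5000] += 12.0                   # inject a small "accelerated beam" population

# Multi-dimensional threshold query: high-momentum particles inside a spatial window.
sel = (px > 8.0) & (x > 20.0) & (x < 80.0)

# Conditional 2-D histogram, the building block behind histogram-based
# parallel coordinates: selected particles binned in (x, px).
hist, xedges, pedges = np.histogram2d(x[sel], px[sel], bins=(64, 64))
print(sel.sum(), "particles selected;", int(hist.max()), "in the densest bin")
```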

  11. Method and apparatus for measuring gravitational acceleration utilizing a high temperature superconducting bearing

    DOEpatents

    Hull, John R.

    2000-01-01

    Gravitational acceleration is measured in all spatial dimensions with improved sensitivity by utilizing a high temperature superconducting (HTS) gravimeter. The HTS gravimeter is comprised of a permanent magnet suspended in a spaced relationship from a high temperature superconductor, and a cantilever having a mass at its free end is connected to the permanent magnet at its fixed end. The permanent magnet and superconductor combine to form a bearing platform with extremely low frictional losses, and the rotational displacement of the mass is measured to determine gravitational acceleration. Employing a high temperature superconductor component has the significant advantage of having an operating temperature at or below 77K, whereby cooling may be accomplished with liquid nitrogen.

  12. New methods for high current fast ion beam production by laser-driven acceleration

    SciTech Connect

    Margarone, D.; Krasa, J.; Prokupek, J.; Velyhan, A.; Laska, L.; Jungwirth, K.; Mocek, T.; Korn, G.; Rus, B.; Torrisi, L.; Gammino, S.; Cirrone, P.; Cutroneo, M.; Romano, F.; Picciotto, A.; Serra, E.; Giuffrida, L.; Mangione, A.; Rosinski, M.; Parys, P.; and others

    2012-02-15

    An overview of the latest experimental campaigns on laser-driven ion acceleration performed at the PALS facility in Prague is given. Both the 2 TW, sub-nanosecond iodine laser system and the 20 TW, femtosecond Ti:sapphire laser recently installed at PALS were used in our experiments, which were performed in the intensity range 10{sup 16}-10{sup 19} W/cm{sup 2}. The main goal of our studies was to generate high-energy, high-current ion streams at relatively low laser intensities. The experimental investigations discussed here show promising results in terms of maximum ion energy and current density, which make laser-accelerated ion beams a candidate for new-generation ion sources to be employed in medicine, nuclear physics, matter physics, and industry.

  13. Method and Apparatus for measuring Gravitational Acceleration Utilizing a high Temperature Superconducting Bearing

    SciTech Connect

    Hull, John R.

    1998-11-06

    Gravitational acceleration is measured in all spatial dimensions with improved sensitivity by utilizing a high temperature superconducting (HTS) gravimeter. The HTS gravimeter is comprised of a permanent magnet suspended in a spaced relationship from a high temperature superconductor, and a cantilever having a mass at its free end is connected to the permanent magnet at its fixed end. The permanent magnet and superconductor combine to form a bearing platform with extremely low frictional losses, and the rotational displacement of the mass is measured to determine gravitational acceleration. Employing a high temperature superconductor component has the significant advantage of having an operating temperature at or below 77 K, whereby cooling may be accomplished with liquid nitrogen.

  14. In-situ measurements of the secondary electron yield in an accelerator environment: Instrumentation and methods

    NASA Astrophysics Data System (ADS)

    Hartung, W. H.; Asner, D. M.; Conway, J. V.; Dennett, C. A.; Greenwald, S.; Kim, J.-S.; Li, Y.; Moore, T. P.; Omanovic, V.; Palmer, M. A.; Strohman, C. R.

    2015-05-01

    The performance of a particle accelerator can be limited by the build-up of an electron cloud (EC) in the vacuum chamber. Secondary electron emission from the chamber walls can contribute to EC growth. An apparatus for in-situ measurements of the secondary electron yield (SEY) in the Cornell Electron Storage Ring (CESR) was developed in connection with EC studies for the CESR Test Accelerator program. The CESR in-situ system, in operation since 2010, allows for SEY measurements as a function of incident electron energy and angle on samples that are exposed to the accelerator environment, typically 5.3 GeV counter-rotating beams of electrons and positrons. The system was designed for periodic measurements to observe beam conditioning of the SEY with discrimination between exposure to direct photons from synchrotron radiation versus scattered photons and cloud electrons. The samples can be exchanged without venting the CESR vacuum chamber. Measurements have been done on metal surfaces and EC-mitigation coatings. The in-situ SEY apparatus and improvements to the measurement tools and techniques are described.

  15. A Method to Simulate Linear Stability of Impulsively Accelerated Density Interfaces in Ideal-MHD and Gas Dynamics

    SciTech Connect

    Ravi Samtaney

    2009-02-10

    We present a numerical method to solve the linear stability of impulsively accelerated density interfaces in two dimensions, such as those arising in the Richtmyer-Meshkov instability. The method uses an Eulerian approach and is based on an upwind method to compute the temporally evolving base state and a flux vector splitting method for the perturbations. The method is applicable to either gas dynamics or magnetohydrodynamics. Numerical examples are presented for cases in which a hydrodynamic shock interacts with a single or double density interface, and for a doubly shocked single density interface. Convergence tests show that the method is spatially second-order accurate for smooth flows, and between first- and second-order accurate for flows with shocks.

  16. Biochemical characterization of two haloalkane dehalogenases: DccA from Caulobacter crescentus and DsaA from Saccharomonospora azurea.

    PubMed

    Carlucci, Lauren; Zhou, Edward; Malashkevich, Vladimir N; Almo, Steven C; Mundorff, Emily C

    2016-04-01

    Two putative haloalkane dehalogenases (HLDs) of the HLD-I subfamily, DccA from Caulobacter crescentus and DsaA from Saccharomonospora azurea, have been identified based on sequence comparisons with functionally characterized HLD enzymes. The two genes were synthesized, functionally expressed in E. coli, and shown to have activity toward a panel of haloalkane substrates. DsaA has a moderate activity level and a preference for long (greater than 3 carbons) brominated substrates, but little activity toward chlorinated alkanes. DccA shows high activity with both long brominated and chlorinated alkanes. The structure of DccA was determined by X-ray crystallography and refined to 1.5 Å resolution. The enzyme has a large and open binding pocket with two well-defined access tunnels. A structural alignment of HLD-I subfamily members suggests that access tunnel size is a possible basis for the difference in substrate specificity. PMID:26833751

  17. Injection to Rapid Diffusive Shock Acceleration at Perpendicular Shocks in Partially Ionized Plasmas

    NASA Astrophysics Data System (ADS)

    Ohira, Yutaka

    2016-08-01

    We present a three-dimensional hybrid simulation of a collisionless perpendicular shock in a partially ionized plasma for the first time. In this simulation, the shock velocity and upstream ionization fraction are v_sh ≈ 1333 km s^-1 and f_i ∼ 0.5, which are typical values for isolated young supernova remnants (SNRs) in the interstellar medium. We confirm previous two-dimensional simulation results showing that downstream hydrogen atoms leak into the upstream region and are accelerated by the pickup process in the upstream region, and large magnetic field fluctuations are generated both in the upstream and downstream regions. In addition, we find that the magnetic field fluctuations have three-dimensional structures and the leaking hydrogen atoms are injected into the diffusive shock acceleration (DSA) at the perpendicular shock after the pickup process. The observed DSA can be interpreted as shock drift acceleration with scattering. In this simulation, particles are accelerated to v ∼ 100 v_sh ∼ 0.3c within ∼100 gyroperiods. The acceleration timescale is faster than that of DSA in parallel shocks. Our simulation results suggest that SNRs can accelerate cosmic rays to 10^15.5 eV (the knee) during the Sedov phase.

  19. Toward automatic detection of vessel stenoses in cerebral 3D DSA volumes

    NASA Astrophysics Data System (ADS)

    Mualla, F.; Pruemmer, M.; Hahn, D.; Hornegger, J.

    2012-05-01

    Vessel diseases are a very common cause of permanent organ damage, disability, and death. This fact necessitates further research into extracting meaningful and reliable medical information from 3D DSA volumes. Murray's law states that at each branch point of a lumen-based system, the sum of the minor branch diameters, each raised to the power x, is equal to the main branch diameter raised to the power x. The principle of minimum work and other factors, such as vessel type, impose typical values for the junction exponent x; therefore, deviations from these typical values may signal pathological cases. In this paper, we state the necessary and sufficient conditions for the existence and uniqueness of the solution for x. The second contribution is a scale- and orientation-independent set of features for stenosis classification. A support vector machine classifier was trained in the space of these features; only one branch was misclassified in a cross-validation on 23 branches. The two contributions fit into a pipeline for the automatic detection of cerebral vessel stenoses.
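
    The junction-exponent computation described above can be sketched numerically as follows, assuming (as in the existence and uniqueness conditions) that every minor-branch diameter is smaller than the main-branch diameter. The diameters in the example are illustrative; this is not the authors' pipeline code.

```python
import numpy as np
from scipy.optimize import brentq

def junction_exponent(d_main, d_minor, x_max=50.0):
    """Solve d_main**x = sum_i d_i**x for the junction exponent x.
    If every minor diameter is smaller than the main diameter, then
    g(x) = sum_i (d_i/d_main)**x - 1 decreases monotonically from
    (number of branches - 1) at x = 0 toward -1, so a unique root exists."""
    ratios = np.asarray(d_minor, dtype=float) / float(d_main)
    if np.any(ratios >= 1.0):
        raise ValueError("each minor diameter must be smaller than the main diameter")
    g = lambda x: np.sum(ratios ** x) - 1.0
    return brentq(g, 1e-9, x_max)

# Example bifurcation (diameters in mm, hypothetical values).
x = junction_exponent(3.0, [2.4, 2.2])
print(f"junction exponent x = {x:.3f}")  # Murray's law for laminar flow predicts x near 3
```

    A fitted exponent that deviates strongly from the range typical for the vessel type can then serve as one of the scale- and orientation-independent features fed to the stenosis classifier.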

  20. Laser driven ion accelerator

    DOEpatents

    Tajima, Toshiki

    2006-04-18

    A system and method of accelerating ions in an accelerator to optimize the energy produced by a light source. Several parameters may be controlled in constructing a target used in the accelerator system to adjust performance of the accelerator system. These parameters include the material, thickness, geometry and surface of the target.

  1. Laser driven ion accelerator

    DOEpatents

    Tajima, Toshiki

    2005-06-14

    A system and method of accelerating ions in an accelerator to optimize the energy produced by a light source. Several parameters may be controlled in constructing a target used in the accelerator system to adjust performance of the accelerator system. These parameters include the material, thickness, geometry and surface of the target.

  2. Insights into accelerated liposomal release of topotecan in plasma monitored by a non-invasive fluorescence spectroscopic method

    PubMed Central

    Fugit, Kyle D.; Jyoti, Amar; Upreti, Meenakshi; Anderson, Bradley D.

    2014-01-01

    A non-invasive fluorescence method was developed to monitor liposomal release kinetics of the anticancer agent topotecan (TPT) in physiological fluids and subsequently used to explore the cause of accelerated release in plasma. Analyses of fluorescence excitation spectra confirmed that unencapsulated TPT exhibits a red shift in its spectrum as pH is increased. This property was used to monitor TPT release from actively loaded liposomal formulations having a low intravesicular pH. Mathematical release models were developed to extract reliable rate constants for TPT release in aqueous solutions monitored by fluorescence and release kinetics obtained by HPLC. Using the fluorescence method, accelerated TPT release was observed in plasma as previously reported in the literature. Simulations to estimate the intravesicular pH were conducted to demonstrate that accelerated release correlated with alterations in the low intravesicular pH. This was attributed to the presence of ammonia in plasma samples rather than proteins and other plasma components generally believed to alter release kinetics in physiological samples. These findings shed light on the critical role that ammonia may play in contributing to the preclinical/clinical variability and performance seen with actively-loaded liposomal formulations of TPT and other weakly-basic anticancer agents. PMID:25456833

  3. Accelerated test techniques for micro-circuits: Evaluation of high temperature (473 K - 573 K) accelerated life test techniques as effective microcircuit screening methods

    NASA Technical Reports Server (NTRS)

    Johnson, G. M.

    1976-01-01

    The application of high temperature accelerated test techniques was shown to be an effective method of microcircuit defect screening. Comprehensive microcircuit evaluations and a series of high temperature (473 K to 573 K) life tests demonstrated that a freak or early-failure population of surface-contaminated devices could be completely screened in thirty-two hours of testing at an ambient temperature of 523 K. Equivalent screening at 398 K, as prescribed by current Military and NASA specifications, would have required in excess of 1,500 hours of testing. All testing was accomplished with a Texas Instruments 54L10, a low-power triple 3-input NAND gate manufactured with a titanium-tungsten (Ti-W), gold (Au) metallization system. A number of design and/or manufacturing anomalies were also noted with the Ti-W, Au metallization system. Further study of the exact nature and cause(s) of these anomalies is recommended prior to the use of microcircuits with Ti-W, Au metallization in long-life/high-reliability applications. Photomicrographs of tested circuits are included.
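
    The reported equivalence (thirty-two hours at 523 K versus more than 1,500 hours at 398 K) implies a definite thermal acceleration factor. If one assumes simple Arrhenius behavior for the dominant failure mechanism, an assumption the abstract does not state explicitly, the implied activation energy can be back-calculated as in the sketch below.

```python
import math

k_B = 8.617e-5                 # Boltzmann constant, eV/K
t_hot, T_hot = 32.0, 523.0     # screening time (h) and temperature (K) at high stress
t_ref, T_ref = 1500.0, 398.0   # equivalent screening time and temperature per the abstract

AF = t_ref / t_hot                                     # acceleration factor, roughly 47x
Ea = k_B * math.log(AF) / (1.0 / T_ref - 1.0 / T_hot)  # Arrhenius activation energy
print(f"acceleration factor ~{AF:.0f}x, implied Ea ~{Ea:.2f} eV")  # about 0.55 eV
```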

  4. A simplified spherical harmonic method for coupled electron-photon transport calculations

    SciTech Connect

    Josef, J.A.

    1996-12-01

    In this thesis we have developed a simplified spherical harmonic method (SP{sub N} method) and associated efficient solution techniques for 2-D multigroup electron-photon transport calculations. The SP{sub N} method has never before been applied to charged-particle transport. We have performed a first time Fourier analysis of the source iteration scheme and the P{sub 1} diffusion synthetic acceleration (DSA) scheme applied to the 2-D SP{sub N} equations. Our theoretical analyses indicate that the source iteration and P{sub 1} DSA schemes are as effective for the 2-D SP{sub N} equations as for the 1-D S{sub N} equations. Previous analyses have indicated that the P{sub 1} DSA scheme is unstable (with sufficiently forward-peaked scattering and sufficiently small absorption) for the 2-D S{sub N} equations, yet is very effective for the 1-D S{sub N} equations. In addition, we have applied an angular multigrid acceleration scheme, and computationally demonstrated that it performs as well for the 2-D SP{sub N} equations as for the 1-D S{sub N} equations. It has previously been shown for 1-D S{sub N} calculations that this scheme is much more effective than the DSA scheme when scattering is highly forward-peaked. We have investigated the applicability of the SP{sub N} approximation to two different physical classes of problems: satellite electronics shielding from geomagnetically trapped electrons, and electron beam problems. In the space shielding study, the SP{sub N} method produced solutions that are accurate within 10% of the benchmark Monte Carlo solutions, and often orders of magnitude faster than Monte Carlo. We have successfully modeled quasi-void problems and have obtained excellent agreement with Monte Carlo. We have observed that the SP{sub N} method appears to be too diffusive an approximation for beam problems. This result, however, is in agreement with theoretical expectations.
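
    The reason acceleration matters for such iterations can be seen in a toy model. The sketch below is an infinite-medium, one-group illustration only, not the SP_N source iteration or P_1 DSA scheme analyzed in the thesis: unaccelerated source iteration converges geometrically with a rate equal to the scattering ratio c, so convergence stalls as c approaches unity, which is precisely the regime DSA is designed to accelerate.

```python
import numpy as np

def source_iteration_errors(c, n_iter=50):
    """Infinite-medium, one-group toy model: phi_{k+1} = c*phi_k + q.
    The fixed point is phi = q/(1 - c) and the error shrinks by a factor c
    per sweep, so the spectral radius of unaccelerated source iteration
    approaches 1 as the scattering ratio c -> 1."""
    q, phi = 1.0, 0.0
    exact = q / (1.0 - c)
    errs = []
    for _ in range(n_iter):
        phi = c * phi + q
        errs.append(abs(phi - exact))
    return np.array(errs)

for c in (0.5, 0.9, 0.99):
    e = source_iteration_errors(c)
    print(f"scattering ratio c = {c}: observed error reduction per sweep = {e[-1] / e[-2]:.3f}")
```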

  5. Two-Screen Method for Determining Electron Beam Energy and Deflection from Laser Wakefield Acceleration

    SciTech Connect

    Pollock, B B; Ross, J S; Tynan, G R; Divol, L; Glenzer, S H; Leurent, V; Palastro, J P; Ralph, J E; Froula, D H; Clayton, C E; Marsh, K A; Pak, A E; Wang, T L; Joshi, C

    2009-04-24

    Laser Wakefield Acceleration (LWFA) experiments have been performed at the Jupiter Laser Facility, Lawrence Livermore National Laboratory. In order to unambiguously determine the output electron beam energy and deflection angle at the plasma exit, we have implemented a two-screen electron spectrometer. This system is comprised of a dipole magnet followed by two image plates. By measuring the electron beam deviation from the laser axis on each plate, both the energy and deflection angle at the plasma exit are determined through the relativistic equation of motion.

  6. Proposed method for high-speed plasma density measurement in proton-driven plasma wakefield acceleration

    SciTech Connect

    Tarkeshian, R.; Reimann, O.; Muggli, P.

    2012-12-21

    Recently a proton-bunch-driven plasma wakefield acceleration experiment using the CERN-SPS beam was proposed. Different types of plasma cells are under study, especially laser ionization, plasma discharge, and helicon sources. One of the key parameters is the spatial uniformity of the plasma density profile along the cell, which has to be within 1% of the nominal density (6 × 10{sup 14} cm{sup -3}). Here a setup based on a photomixing concept is proposed to measure the plasma cut-off frequency and thereby determine the plasma density.
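
    The photomixing diagnostic ultimately converts a measured cut-off frequency into a density through the plasma-frequency relation n_e = eps0 m_e (2 pi f_c)^2 / e^2. The short check below, which is only this textbook conversion and not the proposed measurement setup, shows that the nominal density corresponds to a cut-off near 0.22 THz.

```python
import math

eps0, m_e, q_e = 8.854e-12, 9.109e-31, 1.602e-19   # SI units

def cutoff_frequency(n_e_per_cm3):
    """Electron plasma (cut-off) frequency in Hz for a density given in cm^-3."""
    n_e = n_e_per_cm3 * 1e6                         # convert to m^-3
    omega_p = math.sqrt(n_e * q_e**2 / (eps0 * m_e))
    return omega_p / (2.0 * math.pi)

def density_from_cutoff(f_c_hz):
    """Invert the relation: n_e = eps0 * m_e * (2*pi*f_c)^2 / e^2, returned in cm^-3."""
    return eps0 * m_e * (2.0 * math.pi * f_c_hz) ** 2 / q_e**2 / 1e6

f_c = cutoff_frequency(6e14)
print(f"cut-off ~ {f_c / 1e9:.0f} GHz; density recovered: {density_from_cutoff(f_c):.2e} cm^-3")
```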

  7. Advanced quadrature sets and acceleration and preconditioning techniques for the discrete ordinates method in parallel computing environments

    NASA Astrophysics Data System (ADS)

    Longoni, Gianluca

    In the nuclear science and engineering field, radiation transport calculations play a key role in the design and optimization of nuclear devices. The linear Boltzmann equation describes the angular, energy, and spatial variations of the particle or radiation distribution. The discrete ordinates method (SN) is the most widely used technique for solving the linear Boltzmann equation. However, for realistic problems, the memory and computing-time requirements necessitate the use of supercomputers. This research is devoted to the development of new formulations of the SN method, especially for highly angular-dependent problems, in parallel environments. The present research work addresses two main issues affecting the accuracy and performance of SN transport theory methods: quadrature sets and acceleration techniques. New advanced quadrature techniques which allow for large numbers of angles with a capability for local angular refinement have been developed. These techniques have been integrated into the 3-D SN PENTRAN (Parallel Environment Neutral-particle TRANsport) code and applied to highly angular-dependent problems, such as CT-scan devices, which are widely used to obtain detailed 3-D images for industrial/medical applications. In addition, the accurate simulation of core physics and shielding problems with strong heterogeneities and transport effects requires the numerical solution of the transport equation. In general, the convergence rate of the solution methods for the transport equation is reduced for large problems with optically thick regions and scattering ratios approaching unity. To remedy this situation, new acceleration algorithms based on the Even-Parity Simplified SN (EP-SSN) method have been developed. A new stand-alone code system, PENSSn (Parallel Environment Neutral-particle Simplified SN), has been developed based on the EP-SSN method. The code is designed for parallel computing environments with spatial, angular, and hybrid (spatial/angular) domain decomposition.

  8. Mixed Modulation Method of PWM Inverter by Considering Acceleration Torque and Voltage Saturation for Speed Servo System

    NASA Astrophysics Data System (ADS)

    Takahashi, Kenji; Ohishi, Kiyoshi; Kanmachi, Tosiyuki

    The speed servo system of an AC motor should always have a rapid and smooth response without current ripple. For this purpose, this paper proposes a new mixed modulation method for the PWM inverter that considers acceleration torque and voltage saturation. A rapid and robust speed servo system often uses a high-gain speed controller and a high-gain current controller, and such a system often experiences voltage saturation in the transient state. This paper discusses the amplitude and THD of the output voltage under voltage saturation for two voltage modulation methods of the three-phase inverter: the carrier-comparison inverter using two-phase modulation (2ph. M) and the space voltage vector modulation (SVM) inverter. The carrier-comparison inverter using 2ph. M produces a large voltage but with large harmonic current, whereas the SVM inverter has a smooth voltage response with small harmonic current. The proposed method switches between the SVM and 2ph. M methods appropriately by considering acceleration torque and voltage saturation. The experimental results confirm the effectiveness of the proposed mixed modulation method for the PWM inverter.

  9. Accelerator-based neutron source for boron neutron capture therapy (BNCT) and method

    DOEpatents

    Yoon, Woo Y.; Jones, James L.; Nigg, David W.; Harker, Yale D.

    1999-01-01

    A source for boron neutron capture therapy (BNCT) comprises a body of photoneutron emitter that includes heavy water and is closely surrounded in heat-imparting relationship by target material; one or more electron linear accelerators for supplying electron radiation having energy of substantially 2 to 10 MeV and for impinging such radiation on the target material, whereby photoneutrons are produced and heat is absorbed from the target material by the body of photoneutron emitter. The heavy water is circulated through a cooling arrangement to remove heat. A tank, desirably cylindrical or spherical, contains the heavy water, and a desired number of the electron accelerators circumferentially surround the tank and the target material as preferably made up of thin plates of metallic tungsten. Neutrons generated within the tank are passed through a surrounding region containing neutron filtering and moderating materials and through neutron delimiting structure to produce a beam or beams of epithermal neutrons normally having a minimum flux intensity level of 1.0 × 10^9 neutrons per square centimeter per second. Such beam or beams of epithermal neutrons are passed through gamma ray attenuating material to provide the required epithermal neutrons for BNCT use.

  10. Accelerator-based neutron source for boron neutron capture therapy (BNCT) and method

    DOEpatents

    Yoon, W.Y.; Jones, J.L.; Nigg, D.W.; Harker, Y.D.

    1999-05-11

    A source for boron neutron capture therapy (BNCT) comprises a body of photoneutron emitter that includes heavy water and is closely surrounded in heat-imparting relationship by target material; one or more electron linear accelerators for supplying electron radiation having energy of substantially 2 to 10 MeV and for impinging such radiation on the target material, whereby photoneutrons are produced and heat is absorbed from the target material by the body of photoneutron emitter. The heavy water is circulated through a cooling arrangement to remove heat. A tank, desirably cylindrical or spherical, contains the heavy water, and a desired number of the electron accelerators circumferentially surround the tank and the target material as preferably made up of thin plates of metallic tungsten. Neutrons generated within the tank are passed through a surrounding region containing neutron filtering and moderating materials and through neutron delimiting structure to produce a beam or beams of epithermal neutrons normally having a minimum flux intensity level of 1.0 × 10^9 neutrons per square centimeter per second. Such beam or beams of epithermal neutrons are passed through gamma ray attenuating material to provide the required epithermal neutrons for BNCT use. 3 figs.

  11. 2D models of gas flow and ice grain acceleration in Enceladus' vents using DSMC methods

    NASA Astrophysics Data System (ADS)

    Tucker, Orenthal J.; Combi, Michael R.; Tenishev, Valeriy M.

    2015-09-01

    The gas distribution of the Enceladus water vapor plume and the terminal speeds of ejected ice grains are physically linked to its subsurface fissures and vents. It is estimated that the gas exits the fissures with speeds of ∼300-1000 m/s, while the micron-sized grains are ejected with speeds comparable to the escape speed (Schmidt, J. et al. [2008]. Nature 451, 685-688). We investigated the effects of isolated axisymmetric vent geometries on subsurface gas distributions, and in turn, the effects of gas drag on grain acceleration. Subsurface gas flows were modeled using a collision-limiter Direct Simulation Monte Carlo (DSMC) technique in order to consider a broad range of flow regimes (Bird, G. [1994]. Molecular Gas Dynamics and the Direct Simulation of Gas Flows. Oxford University Press, Oxford; Titov, E.V. et al. [2008]. J. Propul. Power 24(2), 311-321). The resulting DSMC gas distributions were used to determine the drag force for the integration of ice grain trajectories in a test particle model. Simulations were performed for diffuse flows in wide channels (Reynolds number ∼10-250) and dense flows in narrow tubular channels (Reynolds number ∼10^6). We compared gas properties like bulk speed and temperature, and the terminal grain speeds obtained at the vent exit with inferred values for the plume from Cassini data. In the simulations of wide fissures with dimensions similar to that of the Tiger Stripes the resulting subsurface gas densities of ∼10^14-10^20 m^-3 were not sufficient to accelerate even micron-sized ice grains to the Enceladus escape speed. In the simulations of narrow tubular vents with radii of ∼10 m, the much denser flows with number densities of 10^21-10^23 m^-3 accelerated micron-sized grains to bulk gas speeds of ∼600 m/s. Further investigations are required to understand the complex relationship between the vent geometry, gas source rate and the sizes and speeds of ejected grains.

  12. Non-LTE line-blanketed model atmospheres of hot stars. 1: Hybrid complete linearization/accelerated lambda iteration method

    NASA Technical Reports Server (NTRS)

    Hubeny, I.; Lanz, T.

    1995-01-01

    A new numerical method for computing non-Local Thermodynamic Equilibrium (non-LTE) model stellar atmospheres is presented. The method, called the hybrid complete linearization/accelerated lambda iteration (CL/ALI) method, combines advantages of both its constituents. Its rate of convergence is virtually as high as for the standard CL method, while the computer time per iteration is almost as low as for the standard ALI method. The method is formulated as the standard complete linearization, the only difference being that the radiation intensity at selected frequency points is not explicitly linearized; instead, it is treated by means of the ALI approach. The scheme offers a wide spectrum of options, ranging from the full CL to the full ALI method. We demonstrate that the method works optimally if the majority of frequency points are treated in the ALI mode, while the radiation intensity at a few (typically two to 30) frequency points is explicitly linearized. We show how this method can be applied to calculate metal line-blanketed non-LTE model atmospheres, by using the idea of 'superlevels' and 'superlines' introduced originally by Anderson (1989). We calculate several illustrative models taking into account several tens of thousands of lines of Fe III to Fe IV and show that the hybrid CL/ALI method provides a robust method for calculating non-LTE line-blanketed model atmospheres for a wide range of stellar parameters. The results for individual stellar types will be presented in subsequent papers in this series.

  13. The normalized weighting factor method: A novel technique for accelerating the convergence of high-resolution convective schemes

    SciTech Connect

    Darwish, M.D.; Moukalled, F.

    1996-09-01

    This article deals with the development of a new method for accelerating the solution of flow problems discretized using high-resolution convective schemes. The technique is based on the normalized variable and space formulation (NVSF) methodology and is denoted here by the normalized weighting-factor (NWF) method. In contrast with the well-known deferred-correction (DC) procedure, the NWF method is fully implicit and is derived by directly replacing the control-volume face values by their functional relationships in the discretized equation. The direct substitution is performed by the introduction of a variable, NWF, that accounts for the multiplicity of interpolation profiles in HR schemes. The new method is compared with the widely used DC procedure and is shown to be, on average, four times faster.

  14. Laser triggered injection of electrons in a laser wakefield accelerator with the colliding pulse method

    SciTech Connect

    Nakamura, K.; Fubiani, G.; Geddes, C.G.R.; Michel, P.; van Tilborg, J.; Toth, C.; Esarey, E.; Schroeder, C.B.; Leemans, W.P.

    2004-10-22

    An injection scheme for a laser wakefield accelerator that employs a counter-propagating laser pulse (colliding with the drive laser pulse used to generate the plasma wake) is discussed. The threshold laser intensity for electron injection into the wakefield was analyzed using a heuristic model based on phase-space island overlap. The analysis shows that injection can be performed using a modest counter-propagating laser intensity a{sub 1} < 0.5 for a drive laser intensity of a{sub 0} = 1.0. Preliminary experiments were performed using a drive beam and a colliding beam. Charge enhancement by the colliding pulse was observed. Increasing the signal-to-noise ratio by means of a preformed plasma channel is discussed.

  15. Neutron source, linear-accelerator fuel enricher and regenerator and associated methods

    DOEpatents

    Steinberg, Meyer; Powell, James R.; Takahashi, Hiroshi; Grand, Pierre; Kouts, Herbert

    1982-01-01

    A device for producing fissile material inside of fabricated nuclear elements so that they can be used to produce power in nuclear power reactors. Fuel elements, for example, of a LWR are placed in pressure tubes in a vessel surrounding a liquid lead-bismuth flowing columnar target. A linear-accelerator proton beam enters the side of the vessel and impinges on the dispersed liquid lead-bismuth columns and produces neutrons which radiate through the surrounding pressure tube assembly or blanket containing the nuclear fuel elements. These neutrons are absorbed by the natural fertile uranium-238 elements and are transformed to fissile plutonium-239. The fertile fuel is thus enriched in fissile material to a concentration whereby they can be used in power reactors. After use in the power reactors, dispensed depleted fuel elements can be reinserted into the pressure tubes surrounding the target and the nuclear fuel regenerated for further burning in the power reactor.

  16. Evaluation of Dynamic Mechanical Loading as an Accelerated Test Method for Ribbon Fatigue

    SciTech Connect

    Bosco, Nick; Silverman, Timothy J.; Wohlgemuth, John; Kurtz, Sarah; Inoue, Masanao; Sakurai, Keiichiro; Shioda, Tsuyoshi; Zenkoh, Hirofumi; Hirota, Kusato; Miyashita, Masanori; Tadanori, Tanahashi; Suzuki, Soh; Chen, Yifeng; Verlinden, Pierre J.

    2014-12-31

    Dynamic Mechanical Loading (DML) of photovoltaic modules is explored as a route to quickly fatigue copper interconnect ribbons. Results indicate that most of the interconnect ribbons may be strained through module mechanical loading to a level that will result in failure in a few hundred to thousands of cycles. Considering the speed at which DML may be applied, this translates into a few hours of testing. To evaluate the equivalence of DML to thermal cycling, parallel tests were conducted with thermal cycling. Preliminary analysis suggests that one +/-1 kPa DML cycle is roughly equivalent to one standard accelerated thermal cycle and approximately 175 of these cycles are equivalent to a 25-year exposure in Golden Colorado for the mechanism of module ribbon fatigue.

  17. Evaluation of Dynamic Mechanical Loading as an Accelerated Test Method for Ribbon Fatigue: Preprint

    SciTech Connect

    Bosco, N.; Silverman, T. J.; Wohlgemuth, J.; Kurtz, S.; Inoue, M.; Sakurai, K.; Shinoda, T.; Zenkoh, H.; Hirota, K.; Miyashita, M.; Tadanori, T.; Suzuki, S.

    2015-04-07

    Dynamic Mechanical Loading (DML) of photovoltaic modules is explored as a route to quickly fatigue copper interconnect ribbons. Results indicate that most of the interconnect ribbons may be strained through module mechanical loading to a level that will result in failure in a few hundred to thousands of cycles. Considering the speed at which DML may be applied, this translates into a few hours of testing. To evaluate the equivalence of DML to thermal cycling, parallel tests were conducted with thermal cycling. Preliminary analysis suggests that one +/-1 kPa DML cycle is roughly equivalent to one standard accelerated thermal cycle and approximately 175 of these cycles are equivalent to a 25-year exposure in Golden Colorado for the mechanism of module ribbon fatigue.

  18. Cavity digital control testing system by Simulink step operation method for TESLA linear accelerator and free electron laser

    NASA Astrophysics Data System (ADS)

    Czarski, Tomasz; Romaniuk, Ryszard S.; Pozniak, Krzysztof T.; Simrock, Stefan

    2004-07-01

    The cavity control system for the TESLA (TeV-Energy Superconducting Linear Accelerator) project is introduced in this paper. Field Programmable Gate Array (FPGA) technology has been applied to implement the digital controller stabilizing the cavity field gradient. The cavity SIMULINK model has been applied to test the hardware controller. The step operation method has been developed for testing the FPGA device coupled to the SIMULINK model of the analog real plant. The FPGA signal processing has been verified against the required algorithm of the reference MATLAB controller. Experimental results are presented for different cavity operational conditions.

  19. Monte Carlo-based fluorescence molecular tomography reconstruction method accelerated by a cluster of graphic processing units

    NASA Astrophysics Data System (ADS)

    Quan, Guotao; Gong, Hui; Deng, Yong; Fu, Jianwei; Luo, Qingming

    2011-02-01

    High-speed fluorescence molecular tomography (FMT) reconstruction for 3-D heterogeneous media is still one of the most challenging problems in diffusive optical fluorescence imaging. In this paper, we propose a fast FMT reconstruction method that is based on Monte Carlo (MC) simulation and accelerated by a cluster of graphics processing units (GPUs). Based on the Message Passing Interface standard, we modified the MC code for fast FMT reconstruction, and different Green's functions representing the flux distribution in the medium are calculated simultaneously by different GPUs in the cluster. A load-balancing method was also developed to increase the computational efficiency. By applying the Fréchet derivative, a Jacobian matrix is formed to reconstruct the distribution of the fluorochromes using the calculated Green's functions. Phantom experiments have shown that only 10 min are required to obtain reconstruction results with a cluster of 6 GPUs, rather than 6 h with a cluster of multiple dual-Opteron CPU nodes. Because of the high accuracy of the MC simulation and its suitability for 3-D heterogeneous media with refractive-index-mismatched boundaries, the GPU cluster-accelerated method provides a reliable approach to high-speed reconstruction for FMT imaging.

  20. Click chemistry approach to conventional vegetable tanning process: accelerated method with improved organoleptic properties.

    PubMed

    Krishnamoorthy, Ganesan; Ramamurthy, Govindaswamy; Sadulla, Sayeed; Sastry, Thotapalli Parvathaleswara; Mandal, Asit Baran

    2014-09-01

    Click chemistry approaches are tailored to generate molecular building blocks quickly and reliably by joining small units together selectively, covalently, stably, and irreversibly. Vegetable tannins such as hydrolyzable and condensed tannins are capable of producing rather stable radicals or inhibiting the propagation of radicals; they are prone to oxidation, such as photo- and auto-oxidation, and their antioxidant nature is well known. A lot remains to be done to understand the extent of the variation in leather stability, color variation (lightening and darkening reactions of leather), and poor resistance to water uptake over prolonged periods. In the present study, we report a click chemistry approach to an accelerated vegetable tanning process based on periodate-catalyzed formation of oxidized hydrolyzable and condensed tannins for high exhaustion with improved properties. The distribution of oxidized vegetable tannin, the thermal stability in terms of shrinkage temperature (T s) and denaturation temperature (T d), resistance to collagenolytic activities, and the organoleptic properties of the tanned leather, as well as its eco-friendly characteristics, were investigated. Scanning electron microscopic analysis indicates the tightness of the leather cross section. Differential scanning calorimetric analysis shows that the T d of the leather is greater than that of vegetable-tanned leather and equal to that of aldehyde-tanned leather. The leathers exhibited fullness, softness, good color, and good general appearance when compared to those tanned with non-oxidized vegetable tannin. The developed process benefits from a significant reduction in total solids and better biodegradability in the effluent, compared to non-oxidized vegetable tannins. PMID:24888617

  1. A method for the accelerated simulation of micro-embossed topographies in thermoplastic polymers

    NASA Astrophysics Data System (ADS)

    Taylor, Hayden; Hale, Melinda; Cheong Lam, Yee; Boning, Duane

    2010-06-01

    Users of hot micro-embossing often wish to simulate numerically the topographies produced by the process. We have previously demonstrated a fast simulation technique that encapsulates the embossed layer's viscoelastic properties using the response of its surface topography to a mechanical impulse applied at a single location. The simulated topography is the convolution of this impulse response with an iteratively found stamp-polymer contact-pressure distribution. Here, we show how the simulation speed can be radically increased by abstracting feature-rich embossing-stamp designs. The stamp is divided into a grid of regions, each characterized by feature shape, pitch and areal density. The simulation finds a contact-pressure distribution at the resolution of the grid, from which the completeness of pattern replication is predicted. For a 25 mm square device design containing microfluidic features down to 5 µm diameter, simulation can be completed within 10 s, as opposed to the 10^4 s expected if each stamp feature were represented individually. We verify the accuracy of our simulation procedure by comparison with embossing experiments. We also describe a way of abstracting designs at multiple levels of spatial resolution, further accelerating the simulation of patterns whose detail is contained in a small proportion of their area.
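
    The core of the fast simulation is the convolution of the point-load impulse response with the stamp-polymer contact-pressure distribution. The sketch below shows only that convolution step on a coarse grid; the impulse-response kernel and the pressure map are hypothetical stand-ins, and the iterative contact solver and the design-abstraction machinery described above are omitted.

```python
import numpy as np
from scipy.signal import fftconvolve

# Coarse grid over the embossed area (units arbitrary for this sketch).
n = 256
yy, xx = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]

# Hypothetical impulse response: surface deflection due to a unit point load,
# decaying with distance (a stand-in for the layer's viscoelastic kernel).
r = np.hypot(xx, yy) + 1.0
impulse_response = -1.0 / r

# Hypothetical contact-pressure map: pressure concentrated under one raised stamp feature.
pressure = np.zeros((n, n))
pressure[96:160, 96:160] = 1.0

# Simulated topography = convolution of the impulse response with the pressure map.
topography = fftconvolve(pressure, impulse_response, mode="same")
print(topography.shape, float(topography.min()), float(topography.max()))
```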

  2. Ultrasensitive detection method for primordial nuclides in copper with Accelerator Mass Spectrometry

    NASA Astrophysics Data System (ADS)

    Famulok, N.; Faestermann, T.; Fimiani, L.; Gómez-Guzmán, J. M.; Hain, K.; Korschinek, G.; Ludwig, P.; Schönert, S.

    2015-10-01

    The sensitivity of rare event physics experiments like neutrino or direct dark matter detection crucially depends on the background level. A significant background contribution originates from the primordial actinides thorium (Th) and uranium (U) and the progenies of their decay chains. The applicability of ultra-sensitive Accelerator Mass Spectrometry (AMS) for the direct detection of Th and U impurities in three copper samples is evaluated. Although AMS has been proven to reach outstanding sensitivities for long-lived isotopes, this technique has only very rarely been used to detect ultra low concentrations of primordial actinides. Here it is utilized for the first time to detect primordial Th and U in ultra pure copper serving as shielding material in low level detectors. The lowest concentrations achieved were (1.5 ± 0.6)·10^-11 g/g for Th and (8 ± 4)·10^-14 g/g for U which corresponds to (59 ± 24) and (1.0 ± 0.5) μBq/kg, respectively.

  3. Numerical methods for instability mitigation in the modeling of laser wakefield accelerators in a Lorentz-boosted frame

    SciTech Connect

    Vay, J.-L.; Geddes, C.G.R.; Cormier-Michel, E.; Grote, D.P.

    2011-07-01

    Modeling of laser-plasma wakefield accelerators in an optimal frame of reference has been shown to produce orders-of-magnitude speed-up of calculations from first principles. Obtaining these speedups required mitigation of a high-frequency instability that otherwise limits effectiveness. In this paper, methods are presented which mitigate the observed instability, including an electromagnetic solver with tunable coefficients, its extension to accommodate Perfectly Matched Layers and Friedman's damping algorithms, as well as an efficient large-bandwidth digital filter. It is observed that choosing the frame of the wake as the frame of reference allows for higher levels of filtering or damping than is possible in other frames for the same accuracy. Detailed testing also revealed the existence of a singular time step at which the instability level is minimized, independently of numerical dispersion. A combination of the techniques presented in this paper proves to be very efficient at controlling the instability, allowing for efficient direct modeling of 10 GeV class laser plasma accelerator stages. The methods developed in this paper may have broader application, to other Lorentz-boosted simulations and Particle-In-Cell simulations in general.

  4. Accelerated test methods for life prediction of hermetic motor insulation systems exposed to alternative refrigerant/lubricant mixtures. Final report

    SciTech Connect

    Ellis, P.F. II; Ferguson, A.F.

    1995-04-19

    In 1992, the Air-Conditioning and Refrigeration Technology Institute, Inc. (ARTI) contracted Radian Corporation to ascertain whether an improved accelerated test method or procedure could be developed that would allow prediction of the life of motor insulation materials used in hermetic motors for air-conditioning and refrigeration equipment operated with alternative refrigerant/lubricant mixtures. Phase 1 of the project, Conceptual Design of an accelerated test method and apparatus, was successfully completed in June 1993. The culmination of that effort was the concept of the Simulated Stator Unit (SSU) test. The objective of the Phase 2 limited proof-of-concept demonstration was to: answer specific engineering/design questions; design and construct an analog control sequencer and supporting apparatus; and conduct limited tests to determine the viability of the SSU test concept. This report reviews the SSU test concept, and describes the results through the conclusion of the proof-of-concept prototype tests in March 1995. The technical design issues inherent in transforming any conceptual design to working equipment have been resolved, and two test systems and controllers have been constructed. Pilot tests and three prototype tests have been completed, concluding the current phase of work. One prototype unit was tested without thermal stress loads. Twice daily insulation property measurements (IPMs) on this unit demonstrated that the insulation property measurements themselves did not degrade the SSU.

  5. An Experimental Study on the Fabrication of Glass-based Acceleration Sensor Body Using Micro Powder Blasting Method

    PubMed Central

    Park, Dong-Sam; Yun, Dae-Jin; Cho, Myeong-Woo; Shin, Bong-Cheol

    2007-01-01

    This study investigated the feasibility of the micro powder blasting technique for the micro fabrication of sensor structures in Pyrex glass to replace the existing silicon-based acceleration sensor fabrication processes. As preliminary experiments, the effects of the blasting pressure, the mass flow rate of the abrasive, and the number of nozzle scans on the erosion depth of Pyrex and soda lime glasses were examined. From the experimental results, optimal blasting conditions were selected for the Pyrex glass machining. The dimensions of the designed glass sensor were 1.7 × 1.7 × 0.6 mm for the vibrating mass and 2.9 × 0.7 × 0.2 mm for the cantilever beam. The machining results showed that the dimensional errors of the machined glass sensor ranged from 3 μm to 20 μm. These results imply that the micro powder blasting method can be applied to the micromachining of glass-based acceleration sensors to replace the existing method.

  6. Accelerated test methods for life prediction of hermetic motor insulation systems exposed to alternative refrigerant/lubricant mixtures

    NASA Astrophysics Data System (ADS)

    Ellis, P. F., II; Ferguson, A. F.

    1995-04-01

    In 1992, the Air-Conditioning and Refrigeration Technology Institute, Inc. (ARTI) contracted Radian Corporation to ascertain whether an improved accelerated test method or procedure could be developed that would allow prediction of the life of motor insulation materials used in hermetic motors for air-conditioning and refrigeration equipment operated with alternative refrigerant/lubricant mixtures. Phase 1 of the project, Conceptual Design of an accelerated test method and apparatus, was successfully completed in June 1993. The culmination of that effort was the concept of the Simulated Stator Unit (SSU) test. The objective of the Phase 2 limited proof-of-concept demonstration was to: answer specific engineering/design questions; design and construct an analog control sequencer and supporting apparatus; and conduct limited tests to determine the viability of the SSU test concept. This report reviews the SSU test concept, and describes the results through the conclusion of the proof-of-concept prototype tests in March 1995. The technical design issues inherent in transforming any conceptual design to working equipment have been resolved, and two test systems and controllers have been constructed. Pilot tests and three prototype tests have been completed, concluding the current phase of work. One prototype unit was tested without thermal stress loads. Twice daily insulation property measurements (IPM's) on this unit demonstrated that the insulation property measurements themselves did not degrade the SSU.

  7. Acceleration of reinforcement learning by policy evaluation using nonstationary iterative method.

    PubMed

    Senda, Kei; Hattori, Suguru; Hishinuma, Toru; Kohda, Takehisa

    2014-12-01

    Typical methods for solving reinforcement learning problems iterate two steps, policy evaluation and policy improvement. This paper proposes algorithms for the policy evaluation step that improve learning efficiency. The proposed algorithms are based on the Krylov Subspace Method (KSM), which is a nonstationary iterative method. The algorithms based on KSM are tens to hundreds of times more efficient than existing algorithms based on stationary iterative methods, and are far more efficient than has generally been expected. This paper clarifies what makes the KSM-based algorithms more efficient, with numerical examples and theoretical discussion. PMID:24733037
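
    For a fixed policy, policy evaluation reduces to the linear system (I - gamma*P)v = r, which is where Krylov solvers enter. The sketch below contrasts a Krylov (GMRES) solve with plain stationary fixed-point sweeps on a random toy Markov chain; it illustrates the general idea only and is not the authors' specific algorithms.

```python
import numpy as np
from scipy.sparse.linalg import gmres

rng = np.random.default_rng(0)
n_states, gamma = 200, 0.99

# Random Markov chain under a fixed policy: row-stochastic P and reward vector r.
P = rng.random((n_states, n_states))
P /= P.sum(axis=1, keepdims=True)
r = rng.random(n_states)

# Policy evaluation solves (I - gamma*P) v = r.
A = np.eye(n_states) - gamma * P
v_exact = np.linalg.solve(A, r)

# Nonstationary (Krylov) solve.
v_krylov, info = gmres(A, r)
print("GMRES error:", np.linalg.norm(v_krylov - v_exact), "info:", info)

# Stationary fixed-point iteration v <- r + gamma*P v contracts only like gamma per sweep.
v = np.zeros(n_states)
for _ in range(200):
    v = r + gamma * (P @ v)
print("error after 200 fixed-point sweeps:", np.linalg.norm(v - v_exact))
```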

  8. Note: An online testing method for lifetime projection of high power light-emitting diode under accelerated reliability test

    NASA Astrophysics Data System (ADS)

    Chen, Qi; Chen, Quan; Luo, Xiaobing

    2014-09-01

    In recent years, owing to the rapid development of high-power light-emitting diodes (LEDs), lifetime prediction and assessment have become a crucial issue. Although in situ measurement has been widely used for reliability testing in the laser diode community, it has not been commonly applied in the LED community. In this paper, an online testing method for LED lifetime projection under accelerated reliability testing is proposed and a prototype was built. Optical parametric data were collected. The systematic error and the measurement uncertainty were calculated to be within 0.2% and within 2%, respectively. With this online testing method, experimental data can be acquired continuously and a sufficient amount of data can be gathered; thus, the projection fitting accuracy can be improved (r^2 = 0.954) and the testing duration can be shortened.

  9. Silhouette method for hidden surface removal in computer holography and its acceleration using the switch-back technique.

    PubMed

    Matsushima, Kyoji; Nakamura, Masaki; Nakahara, Sumio

    2014-10-01

    A powerful technique is presented for occlusion processing in computer holography. The technique offers an improvement on the conventional silhouette method, which is a general wave optics-based occlusion processing method. The proposed technique dramatically reduces the computation time required for computer-generated holograms (CGH) of self-occluded objects. Performance measurements show that a full-parallax high-definition CGH composed of billions of pixels and a small CGH intended to be reconstructed in electro-holography can be computed in only 1.7 h and 4.5 s, respectively, without any hardware acceleration. Optical reconstruction of the high-definition CGH shows natural and continuous motion parallax in the self-occluded object. PMID:25322021

  10. Large full band gaps for photonic crystals in two dimensions computed by an inverse method with multigrid acceleration

    NASA Astrophysics Data System (ADS)

    Chern, R. L.; Chang, C. Chung; Chang, Chien C.; Hwang, R. R.

    2003-08-01

    In this study, two fast and accurate methods of inverse iteration with multigrid acceleration are developed to compute band structures of photonic crystals of general shape. In particular, we report two-dimensional photonic crystals of silicon air with an optimal full band gap of gap-midgap ratio Δω/ωmid=0.2421, which is 30% larger than ever reported in the literature. The crystals consist of a hexagonal array of circular columns, each connected to its nearest neighbors by slender rectangular rods. A systematic study with respect to the geometric parameters of the photonic crystals was made possible with the present method in drawing a three-dimensional band-gap diagram with reasonable computing time.

  11. Large full band gaps for photonic crystals in two dimensions computed by an inverse method with multigrid acceleration.

    PubMed

    Chern, R L; Chang, C Chung; Chang, Chien C; Hwang, R R

    2003-08-01

    In this study, two fast and accurate methods of inverse iteration with multigrid acceleration are developed to compute band structures of photonic crystals of general shape. In particular, we report two-dimensional photonic crystals of silicon air with an optimal full band gap of gap-midgap ratio Δω/ω_mid = 0.2421, which is 30% larger than ever reported in the literature. The crystals consist of a hexagonal array of circular columns, each connected to its nearest neighbors by slender rectangular rods. A systematic study with respect to the geometric parameters of the photonic crystals was made possible with the present method in drawing a three-dimensional band-gap diagram with reasonable computing time. PMID:14525145
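
    The solver above combines inverse iteration with multigrid-accelerated inner solves. The sketch below shows only generic shift-and-invert inverse iteration on a small symmetric test matrix, a hypothetical stand-in for the discretized Maxwell operator, with a dense factorization in place of the multigrid solve.

```python
import numpy as np

def inverse_iteration(A, shift=0.0, n_iter=50, seed=0):
    """Shift-and-invert inverse iteration: converges to the eigenpair of A
    whose eigenvalue lies closest to `shift`.  In the photonic-crystal solver
    the inner solve is multigrid-accelerated; here a dense solve is used."""
    rng = np.random.default_rng(seed)
    x = rng.random(A.shape[0])
    M = A - shift * np.eye(A.shape[0])
    for _ in range(n_iter):
        x = np.linalg.solve(M, x)
        x /= np.linalg.norm(x)
    return x @ A @ x, x          # Rayleigh quotient and eigenvector estimate

# Small symmetric test operator (hypothetical stand-in for the Maxwell operator).
rng = np.random.default_rng(1)
B = rng.random((50, 50))
A = 0.5 * (B + B.T) + 5.0 * np.eye(50)
lam, _ = inverse_iteration(A, shift=2.0)
print("eigenvalue nearest the shift:", lam)
```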

  12. Using combinatorial bioinformatics methods to analyze annual perspective changes of influenza viruses and to accelerate development of effective vaccines.

    PubMed

    Hu, Yu-Jen; Chow, Kuan-Chih; Liu, Ching-Chuan; Lin, Li-Jen; Wang, Sheng-Cheng; Wang, Shulhn-Der

    2015-08-01

    The standard World Health Organization procedure for vaccine development has provided a guideline for influenza viruses, but no systematic operational model. We recently designed a systemic analysis method to evaluate annual perspective sequence changes of influenza virus strains. We applied dnaml from PHYLIP 3.69, developed by Joseph Felsenstein of the University of Washington, and ClustalX2, developed by Larkin et al., for calculating, comparing, and localizing the most plausible vaccine epitopes. This study identified the changes in biological sequences and the associated alignment alterations, which would ultimately affect epitope structures, as well as the plausible hidden features to search for the most conserved and effective epitopes for vaccine development. Adding our newly designed systemic analysis method to supplement the WHO guidelines could accelerate the development of urgently needed vaccines that might concurrently combat several strains of viruses within a shorter period. PMID:26044364

  13. Accelerating seismic interpolation with a gradient projection method based on tight frame property of curvelet

    NASA Astrophysics Data System (ADS)

    Cao, Jingjie; Wang, Yanfei; Wang, Benfeng

    2015-08-01

    Seismic interpolation, as an efficient strategy of providing reliable wavefields, belongs to large-scale computing problems. The rapid increase of data volume in high dimensional interpolation requires highly efficient methods to relieve computational burden. Most methods adopt the L1 norm as a sparsity constraint of solutions in some transformed domain; however, the L1 norm is non-differentiable and gradient-type methods cannot be applied directly. On the other hand, methods for unconstrained L1 norm optimisation always depend on the regularisation parameter which needs to be chosen carefully. In this paper, a fast gradient projection method for the smooth L1 problem is proposed based on the tight frame property of the curvelet transform that can overcome these shortcomings. Some smooth L1 norm functions are discussed and their properties are analysed, then the Huber function is chosen to replace the L1 norm. The novelty of the proposed method is that the tight frame property of the curvelet transform is utilised to improve the computational efficiency. Numerical experiments on synthetic and real data demonstrate the validity of the proposed method which can be used in large-scale computing.
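
    To make the smoothed-L1 idea concrete, here is a minimal Python sketch of a Huber gradient step combined with projection onto the observed samples. It is an illustrative toy in which the identity transform (trivially a tight frame) stands in for the curvelet transform, and all parameter values are assumptions rather than those of the paper.

      import numpy as np

      def huber(x, delta=1e-3):
          """Smooth surrogate for |x|: quadratic near zero, linear elsewhere."""
          a = np.abs(x)
          return np.where(a <= delta, 0.5 * x**2 / delta, a - 0.5 * delta)

      def huber_grad(x, delta=1e-3):
          """Gradient of the Huber function (well defined everywhere)."""
          return np.clip(x / delta, -1.0, 1.0)

      def gradient_projection(y, mask, n_iter=100, lam=0.1, step=1.0, delta=1e-3):
          """Toy projected-gradient recovery of missing samples.
          y: observed data with zeros at missing positions; mask: 1 observed, 0 missing.
          The 'transform' here is the identity, standing in for the curvelet transform."""
          x = y.copy()
          for _ in range(n_iter):
              grad = lam * huber_grad(x, delta)   # gradient of the smoothed sparsity term
              x = x - step * grad                 # gradient step
              x = mask * y + (1 - mask) * x       # project onto the data-consistency set
          return x

      # usage: recover a sparse spike train with roughly half the samples missing
      rng = np.random.default_rng(0)
      truth = np.zeros(200); truth[[20, 90, 150]] = [1.0, -0.7, 0.5]
      mask = (rng.random(200) > 0.5).astype(float)
      rec = gradient_projection(mask * truth, mask)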

  14. On the equivalence of LIST and DIIS methods for convergence acceleration

    SciTech Connect

    Garza, Alejandro J.; Scuseria, Gustavo E.

    2015-04-28

    Self-consistent field extrapolation methods play a pivotal role in quantum chemistry and electronic structure theory. We, here, demonstrate the mathematical equivalence between the recently proposed family of LIST methods [Wang et al., J. Chem. Phys. 134, 241103 (2011); Y. K. Chen and Y. A. Wang, J. Chem. Theory Comput. 7, 3045 (2011)] and the general form of Pulay’s DIIS [Chem. Phys. Lett. 73, 393 (1980); J. Comput. Chem. 3, 556 (1982)] with specific error vectors. Our results also explain the differences in performance among the various LIST methods.
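
    For readers unfamiliar with Pulay's DIIS, the core extrapolation step fits in a few lines; the Python sketch below is the generic textbook form (minimise the norm of a linear combination of stored error vectors subject to the coefficients summing to one) and is not tied to any particular SCF code.

      import numpy as np

      def diis_extrapolate(trial_list, error_list):
          """Generic Pulay DIIS step: mix previous trial vectors so that the same
          linear combination of their error vectors has minimal norm, subject to
          the mixing coefficients summing to one (Lagrange-multiplier system)."""
          n = len(error_list)
          B = np.empty((n + 1, n + 1))
          B[-1, :] = B[:, -1] = -1.0
          B[-1, -1] = 0.0
          for i in range(n):
              for j in range(n):
                  B[i, j] = np.dot(error_list[i].ravel(), error_list[j].ravel())
          rhs = np.zeros(n + 1); rhs[-1] = -1.0
          coeffs = np.linalg.solve(B, rhs)[:n]
          return sum(c * t for c, t in zip(coeffs, trial_list))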

  15. Acceleration of k-Eigenvalue / Criticality Calculations using the Jacobian-Free Newton-Krylov Method

    SciTech Connect

    Dana Knoll; HyeongKae Park; Chris Newman

    2011-02-01

    We present a new approach for the k-eigenvalue problem using a combination of classical power iteration and the Jacobian-free Newton-Krylov method (JFNK). The method poses the k-eigenvalue problem as a fully coupled nonlinear system, which is solved by JFNK with an effective block preconditioning consisting of the power iteration and algebraic multigrid. We demonstrate the effectiveness and algorithmic scalability of the method on a 1-D, one-group problem and two 2-D, two-group problems, and provide a comparison to other efforts using similar algorithmic approaches.
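
    A minimal Python sketch of the fully coupled nonlinear formulation is given below, using SciPy's newton_krylov in place of a purpose-built JFNK solver and small dense matrices in place of transport operators. Everything here (matrices, sizes, normalisation constraint) is an illustrative assumption, and no physics-based preconditioning is included.

      import numpy as np
      from scipy.optimize import newton_krylov

      # Tiny dense stand-in operators: L (loss/leakage) and F (fission production).
      # In the paper these are transport operators; here they are illustrative 5x5
      # matrices with a well-defined dominant eigenpair.
      rng = np.random.default_rng(1)
      L = np.diag(rng.uniform(1.0, 2.0, 5))
      F = rng.uniform(0.0, 0.5, (5, 5))

      def residual(u):
          """Fully coupled nonlinear residual for the k-eigenvalue problem:
          (L - (1/k) F) phi = 0 together with a normalisation constraint on phi."""
          phi, k = u[:-1], u[-1]
          r_phi = L @ phi - (F @ phi) / k
          r_norm = phi @ phi - 1.0          # fixes the scale of the eigenvector
          return np.concatenate([r_phi, [r_norm]])

      u0 = np.concatenate([np.ones(5) / np.sqrt(5), [1.0]])  # power-iteration-like guess
      sol = newton_krylov(residual, u0, f_tol=1e-10)
      phi, k_eff = sol[:-1], sol[-1]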

  16. Linear-scaling multipole-accelerated Gaussian and finite-element Coulomb method

    NASA Astrophysics Data System (ADS)

    Watson, Mark A.; Kurashige, Yuki; Nakajima, Takahito; Hirao, Kimihiko

    2008-02-01

    A linear-scaling implementation of the Gaussian and finite-element Coulomb (GFC) method is presented for the rapid computation of the electronic Coulomb potential. The current work utilizes the fast multipole method (FMM) for the evaluation of the Poisson equation boundary condition. The FMM affords significant savings for small- and medium-sized systems and overcomes the bottleneck in the GFC method for very large systems. Compared to an exact analytical treatment of the boundary, more than 100-fold speedups are observed for systems with more than 1000 basis functions without any significant loss of accuracy. We present CPU times to demonstrate the effectiveness of the linear-scaling GFC method for both one-dimensional polyalanine chains and the challenging case of three-dimensional diamond fragments.

  17. Multiwavelet Discontinuous Galerkin Accelerated ELP Method for the Shallow Water Equations on the Cubed Sphere

    SciTech Connect

    White III, James B; Archibald, Richard K; Evans, Katherine J; Drake, John

    2011-01-01

    In this paper we present a new approach to increase the time-step size for an explicit discontinuous Galerkin numerical method. The attributes of this approach are demonstrated on standard tests for the shallow-water equations on the sphere. The addition of multiwavelets to discontinuous Galerkin method, which has the benefit of being scalable, flexible, and conservative, provides a hierarchical scale structure that can be exploited to improve computational efficiency in both the spatial and temporal dimensions. This paper explains how combining a multiwavelet discontinuous Galerkin method with exact linear part time-evolution schemes, which can remain stable for implicit-sized time steps, can help increase the time-step size for shallow water equations on the sphere.

  18. [Development of an accelerated method of determining the antibiotic sensitivity of Cl. perfringens type A].

    PubMed

    Zemlianitskaia, E P; Kurbanova, I Z; Sergeeva, T I

    1979-02-01

    An express method for determination of antibiotic sensitivity in the strains of Cl. perfringens of type A using Soviet dry nutrient media and antibiotics is proposed. The criteria for estimation of the level of the antibiotic sensitivity of the causative agent of gas gangrene in short periods on the basis of comparison of the data of the antibiotic agar diffusion procedure and the antibiotic MIC were worked out. Twelve antibiotics and 45 collection strains of Cl. perfringens of type A were used in the experiment. The antibiotic agar diffusion method with the use of the nutrient media, microbial load and cultivation conditions developed by the authors is recommended for tentative determination of the antibiotic sensitivity in Cl. perfringens of type A within 4 hours. The use of the agar diffusion method and determination of the antibiotic MIC provided complete estimation of the antibiotic sensitivity of Cl. perfringens of type A within not more than 24 hours. PMID:219770

  19. Accelerating the weighted histogram analysis method by direct inversion in the iterative subspace

    PubMed Central

    Zhang, Cheng; Lai, Chun-Liang; Pettitt, B. Montgomery

    2016-01-01

    The weighted histogram analysis method (WHAM) for free energy calculations is a valuable tool to produce free energy differences with the minimal errors. Given multiple simulations, WHAM obtains from the distribution overlaps the optimal statistical estimator of the density of states, from which the free energy differences can be computed. The WHAM equations are often solved by an iterative procedure. In this work, we use a well-known linear algebra algorithm which allows for more rapid convergence to the solution. We find that the computational complexity of the iterative solution to WHAM and the closely-related multiple Bennett acceptance ratio (MBAR) method can be improved by using the method of direct inversion in the iterative subspace. We give examples from a lattice model, a simple liquid and an aqueous protein solution. PMID:27453632
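
    For context, the baseline self-consistent WHAM iteration that such DIIS acceleration targets looks like the Python sketch below, written for reduced (unit-kT) bias energies and a discretised reaction coordinate; the array names and shapes are illustrative assumptions.

      import numpy as np

      def wham(hist, bias, n_samples, n_iter=2000, tol=1e-10):
          """Plain self-consistent WHAM iteration (the baseline that DIIS accelerates).
          hist[i, x]: counts of window i in bin x; bias[i, x]: reduced bias energy
          u_i(x) in kT units; n_samples[i]: number of samples from window i."""
          f = np.zeros(hist.shape[0])                   # reduced free energies of the windows
          counts = hist.sum(axis=0)
          p = counts / counts.sum()
          for _ in range(n_iter):
              denom = (n_samples[:, None] * np.exp(f[:, None] - bias)).sum(axis=0)
              p = counts / denom                        # estimate of the unbiased distribution
              f_new = -np.log((p[None, :] * np.exp(-bias)).sum(axis=1))
              f_new -= f_new[0]                         # fix the arbitrary additive constant
              if np.max(np.abs(f_new - f)) < tol:
                  break
              f = f_new
          return f, p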

  20. Monitoring method for neutron flux for a spallation target in an accelerator driven sub-critical system

    NASA Astrophysics Data System (ADS)

    Zhao, Qiang; He, Zhi-Yong; Yang, Lei; Zhang, Xue-Ying; Cui, Wen-Juan; Chen, Zhi-Qiang; Xu, Hu-Shan

    2016-07-01

    In this paper, we study a monitoring method for neutron flux for the spallation target used in an accelerator driven sub-critical (ADS) system, where a spallation target located vertically at the centre of a sub-critical core is bombarded vertically by high-energy protons from an accelerator. First, by considering the characteristics in the spatial variation of neutron flux from the spallation target, we propose a multi-point measurement technique, i.e. the spallation neutron flux should be measured at multiple vertical locations. To explain why the flux should be measured at multiple locations, we have studied neutron production from a tungsten target bombarded by a 250 MeV-proton beam with Geant4-based Monte Carlo simulations. The simulation results indicate that the neutron flux at the central location is up to three orders of magnitude higher than the flux at lower locations. Secondly, we have developed an effective technique in order to measure the spallation neutron flux with a fission chamber (FC), by establishing the relation between the fission rate measured by FC and the spallation neutron flux. Since this relation is linear for a FC, a constant calibration factor is used to derive the neutron flux from the measured fission rate. This calibration factor can be extracted from the energy spectra of spallation neutrons. Finally, we have evaluated the proposed calibration method for a FC in the environment of an ADS system. The results indicate that the proposed method functions very well. Supported by Strategic Priority Research Program of Chinese Academy of Sciences (XDA03010000 and XDA03030000) and the National Natural Science Foundation of China(91426301).
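
    The linear calibration described above can be illustrated with a few lines of Python: a spectrum-averaged fission cross section turns the measured fission rate into a flux estimate. All numerical values below (spectrum shape, cross section, number of fissile atoms, count rate) are placeholders, not values from the paper.

      import numpy as np

      energies = np.logspace(-2, 2, 50)                 # MeV, illustrative grid
      spectrum = np.exp(-energies / 2.0)                # assumed spallation-spectrum shape
      spectrum /= np.trapz(spectrum, energies)          # normalise to unit integral
      sigma_f = 1.8e-24 * np.ones_like(energies)        # cm^2, placeholder cross section
      n_atoms = 5.0e19                                  # fissile atoms in the FC deposit

      # Spectrum-averaged calibration factor linking fission rate to total flux:
      #   R_fission = C * phi_total,  with  C = N_atoms * <sigma_f>
      calib = n_atoms * np.trapz(sigma_f * spectrum, energies)

      measured_fission_rate = 2.0e3                     # counts/s from the fission chamber
      phi_total = measured_fission_rate / calib         # recovered flux, n / cm^2 / s
      print(f"estimated flux: {phi_total:.3e} n cm^-2 s^-1")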

  1. Evaluation of micro-colorimetric lipid determination method with samples prepared using sonication and accelerated solvent extraction methods

    EPA Science Inventory

    Two common laboratory extraction techniques were evaluated for routine use with the micro-colorimetric lipid determination method developed by Van Handel (1985) [E. Van Handel, J. Am. Mosq. Control Assoc. 1(1985) 302] and recently validated for small samples by Inouye and Lotufo ...

  2. Production-passage-time approximation: a new approximation method to accelerate the simulation process of enzymatic reactions.

    PubMed

    Kuwahara, Hiroyuki; Myers, Chris J

    2008-09-01

    Given the substantial computational requirements of stochastic simulation, approximation is essential for efficient analysis of any realistic biochemical system. This paper introduces a new approximation method to reduce the computational cost of stochastic simulations of an enzymatic reaction scheme which in biochemical systems often includes rapidly changing fast reactions with enzyme and enzyme-substrate complex molecules present in very small counts. Our new method removes the substrate dissociation reaction by approximating the passage time of the formation of each enzyme-substrate complex molecule which is destined to a production reaction. This approach skips the firings of unimportant yet expensive reaction events, resulting in a substantial acceleration in the stochastic simulations of enzymatic reactions. Additionally, since all the parameters used in our new approach can be derived by the Michaelis-Menten parameters which can actually be measured from experimental data, applications of this approximation can be practical even without having full knowledge of the underlying enzymatic reaction. Here, we apply this new method to various enzymatic reaction systems, resulting in a speedup of orders of magnitude in temporal behavior analysis without any significant loss in accuracy. Furthermore, we show that our new method can perform better than some of the best existing approximation methods for enzymatic reactions in terms of accuracy and efficiency. PMID:18662102
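
    The flavour of skipping unproductive binding/unbinding events can be conveyed with a reduced stochastic simulation in which substrate conversion fires with a Michaelis-Menten propensity. The Python sketch below is a generic simplification in the same spirit as the paper, not the authors' exact production-passage-time construction, and the rate constants are arbitrary.

      import numpy as np

      def ssa_mm(S0, E_total, k_cat, K_M, t_end, seed=0):
          """Stochastic simulation that lumps binding/unbinding into a single
          production step with a Michaelis-Menten propensity, skipping the fast,
          unproductive association/dissociation events of the full mechanism."""
          rng = np.random.default_rng(seed)
          t, S, P = 0.0, S0, 0
          times, products = [0.0], [0]
          while t < t_end and S > 0:
              a = k_cat * E_total * S / (K_M + S)   # effective production propensity
              t += rng.exponential(1.0 / a)          # waiting time to the next production
              S -= 1; P += 1
              times.append(t); products.append(P)
          return np.array(times), np.array(products)

      # usage with illustrative parameters
      t, p = ssa_mm(S0=500, E_total=10, k_cat=1.0, K_M=50.0, t_end=100.0)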

  3. Accelerating the Use of Weblogs as an Alternative Method to Deliver Case-Based Learning

    ERIC Educational Resources Information Center

    Chen, Charlie; Wu, Jiinpo; Yang, Samuel C.

    2008-01-01

    Weblog technology is an alternative medium to deliver the case-based method of learning business concepts. The social nature of this technology can potentially promote active learning and enhance analytical ability of students. The present research investigates the primary factors contributing to the adoption of Weblog technology by students to…

  4. An accelerated lambda iteration method for multilevel radiative transfer. III. Noncoherent electron scattering

    NASA Astrophysics Data System (ADS)

    Rybicki, G. B.; Hummer, D. G.

    1994-10-01

    Since the mass of the electron is very small relative to atomic masses, Thomson scattering of low-energy photons (hν ≪ mec²) by thermal electrons is strongly noncoherent: the scattered photons are Doppler-shifted by the fast thermal motions of the electrons. A method is developed here to evaluate the electron scattering emissivity from a given radiation field which is considerably faster than previous methods based on straightforward evaluation of the scattering integral. This procedure is implemented in our multilevel radiative code (MALI), which now takes full account of the effects of noncoherent electron scattering on level populations, as well as on the emergent spectrum. Calculations using model atmospheres of hot, low-gravity stars display not only the expected broad wings of strong emission lines but also effects arising from the scattering of photons across continuum edges. In extreme cases this leads to significant shifts of the ionization equilibrium of helium.

  5. Effects of numerical methods on comparisons between experiments and simulations of shock-accelerated mixing.

    SciTech Connect

    Rider, William; Kamm, J. R.; Tomkins, C. D.; Zoldi, C. A.; Prestridge, K. P.; Marr-Lyon, M.; Rightley, P. M.; Benjamin, R. F.

    2002-01-01

    We consider the detailed structures of mixing flows for Richtmyer-Meshkov experiments of Prestridge et al. [PRE 00] and Tomkins et al. [TOM 01] and examine the most recent measurements from the experimental apparatus. Numerical simulations of these experiments are performed with three different versions of high resolution finite volume Godunov methods. We compare experimental data with simulations for configurations of one and two diffuse cylinders of SF6 in air using integral measures as well as fractal analysis and continuous wavelet transforms. The details of the initial conditions have a significant effect on the computed results, especially in the case of the double cylinder. Additionally, these comparisons reveal sensitive dependence of the computed solution on the numerical method.

  6. Finite difference method accelerated with sparse solvers for structural analysis of the metal-organic complexes

    NASA Astrophysics Data System (ADS)

    Guda, A. A.; Guda, S. A.; Soldatov, M. A.; Lomachenko, K. A.; Bugaev, A. L.; Lamberti, C.; Gawelda, W.; Bressler, C.; Smolentsev, G.; Soldatov, A. V.; Joly, Y.

    2016-05-01

    The finite difference method (FDM) implemented in the FDMNES software [Phys. Rev. B, 2001, 63, 125120] was revised. Thorough analysis shows that the calculated diagonal in the FDM matrix consists of about 96% zero elements. Thus a sparse solver is more suitable for the problem than traditional Gaussian elimination on the diagonal neighbourhood. We tried several iterative sparse solvers, and the direct MUMPS solver with METIS ordering turned out to be the best. Compared to the Gaussian solver, the present method is up to 40 times faster and allows XANES simulations for complex systems even on personal computers. We show the applicability of the software to the metal-organic [Fe(bpy)3]2+ complex, both for the low-spin and the high-spin states populated after laser excitation.
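
    The payoff of switching from dense elimination to a sparse direct solver can be seen in a small SciPy sketch; the 2-D Laplacian below merely stands in for the FDM matrix of the paper, and SuperLU (SciPy's default) stands in for MUMPS.

      import numpy as np
      import scipy.sparse as sp
      import scipy.sparse.linalg as spla

      # Build a large, mostly-zero matrix of the kind described in the abstract
      # (a simple 2-D Laplacian stencil stands in for the FDM matrix).
      n = 200
      lap1d = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
      A = sp.kron(sp.eye(n), lap1d) + sp.kron(lap1d, sp.eye(n))   # n^2 x n^2, ~5 nonzeros/row
      b = np.ones(A.shape[0])

      x = spla.spsolve(A.tocsc(), b)   # sparse direct solve; a dense solve of the same
                                       # system would require storing ~(n^2)^2 entries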

  7. GPUs, a New Tool of Acceleration in CFD: Efficiency and Reliability on Smoothed Particle Hydrodynamics Methods

    PubMed Central

    Crespo, Alejandro C.; Dominguez, Jose M.; Barreiro, Anxo; Gómez-Gesteira, Moncho; Rogers, Benedict D.

    2011-01-01

    Smoothed Particle Hydrodynamics (SPH) is a numerical method commonly used in Computational Fluid Dynamics (CFD) to simulate complex free-surface flows. Simulations with this mesh-free particle method far exceed the capacity of a single processor. In this paper, as part of a dual-functioning code for either central processing units (CPUs) or Graphics Processor Units (GPUs), a parallelisation using GPUs is presented. The GPU parallelisation technique uses the Compute Unified Device Architecture (CUDA) of nVidia devices. Simulations with more than one million particles on a single GPU card exhibit speedups of up to two orders of magnitude over using a single-core CPU. It is demonstrated that the code achieves different speedups with different CUDA-enabled GPUs. The numerical behaviour of the SPH code is validated with a standard benchmark test case of dam break flow impacting on an obstacle where good agreement with the experimental results is observed. Both the achieved speed-ups and the quantitative agreement with experiments suggest that CUDA-based GPU programming can be used in SPH methods with efficiency and reliability. PMID:21695185

  8. A Method of Social Collaboration and Knowledge Sharing Acceleration for e-Learning System: The Distance Learning Network Scenario

    NASA Astrophysics Data System (ADS)

    Różewski, Przemysław

    Nowadays, e-learning systems take the form of the Distance Learning Network (DLN) due to widespread use and accessibility of the Internet and networked e-learning services. The focal point of DLN performance is efficiency of knowledge processing in asynchronous learning mode and facilitating cooperation between students. In addition, the DLN also devotes attention to social aspects of the learning process. In this paper, a method for DLN development is proposed. The main research objectives for the proposed method are the processes of acceleration of social collaboration and knowledge sharing in the DLN. The method introduces knowledge-disposed agents (who represent students in educational scenarios) that form a network of individuals aiming to increase their competence. For every agent the competence expansion process is formulated. Based on that outcome, the process of dynamic network formation is performed on the social and knowledge levels. The method utilizes formal apparatuses of competence set and network game theories combined with an agent system-based approach.

  9. Accelerated image reconstruction in fluorescence molecular tomography using a nonuniform updating scheme with momentum and ordered subsets methods

    NASA Astrophysics Data System (ADS)

    Zhu, Dianwen; Li, Changqing

    2016-01-01

    Fluorescence molecular tomography (FMT) is a significant preclinical imaging modality that has been actively studied in the past two decades. It remains a challenging task to obtain fast and accurate reconstruction of fluorescent probe distribution in small animals due to the large computational burden and the ill-posed nature of the inverse problem. We have recently studied a nonuniform multiplicative updating algorithm that combines with the ordered subsets (OS) method for fast convergence. However, increasing the number of OS leads to greater approximation errors and the speed gain from larger number of OS is limited. We propose to further enhance the convergence speed by incorporating a first-order momentum method that uses previous iterations to achieve optimal convergence rate. Using numerical simulations and a cubic phantom experiment, we have systematically compared the effects of the momentum technique, the OS method, and the nonuniform updating scheme in accelerating the FMT reconstruction. We found that the proposed combined method can produce a high-quality image using an order of magnitude less time.
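
    A toy Python version of the three ingredients discussed above (multiplicative updates, ordered subsets, and a momentum-style extrapolation) applied to a small nonnegative least-squares problem is sketched below. It is illustrative only and is not the authors' reconstruction algorithm or its convergence-optimal momentum schedule.

      import numpy as np

      def os_mult_update(A, b, n_epochs=20, n_subsets=4, momentum=0.5):
          """Nonnegative least-squares toy solver combining multiplicative updates,
          ordered subsets, and a simple momentum (extrapolation) step."""
          m, n = A.shape
          x = np.ones(n)
          x_prev = x.copy()
          subsets = np.array_split(np.arange(m), n_subsets)
          eps = 1e-12
          for _ in range(n_epochs):
              for idx in subsets:
                  As, bs = A[idx], b[idx]
                  ratio = (As.T @ bs) / (As.T @ (As @ x) + eps)
                  x_new = x * ratio                          # multiplicative update on one subset
                  x_extrap = x_new + momentum * (x_new - x_prev)
                  x_prev = x_new
                  x = np.maximum(x_extrap, 0.0)              # keep the iterate nonnegative
          return x

      # usage on a small synthetic problem with nonnegative data
      rng = np.random.default_rng(0)
      A = rng.random((60, 30)); x_true = np.abs(rng.random(30)); b = A @ x_true
      x_est = os_mult_update(A, b)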

  10. Circuit and Scattering Matrix Analysis of the Wire Measurement Method of Beam Impedance in Accelerating Structures

    SciTech Connect

    Jones, Roger M

    2003-05-23

    In order to measure the wakefield left behind by multiple bunches of energetic electrons we have previously used the ASSET facility in the SLC [1]. However, in order to produce a more rapid and cost-effective determination of the wakefields we have designed a wire experimental method to measure the beam impedance and, from its Fourier transform, the wakefields. In this paper we present studies of the effect of the wire on the properties of X-band structures under study for the JLC/NLC (Japanese Linear Collider/Next Linear Collider) project. Simulations are made on infinite and finite periodic structures. The results are discussed.

  11. Block co-polymer approach for CD uniformity and placement error improvement in DSA hole grapho-epitaxy process

    NASA Astrophysics Data System (ADS)

    Matsumiya, Tasuku; Kurosawa, Tsuyoshi; Yahagi, Masahito; Yamano, Hitoshi; Miyagi, Ken; Maehashi, Takaya; Suzuki, Issei; Kawaue, Akiya; Komuro, Yoshitaka; Hirayama, Taku; Ohmori, Katsumi

    2015-03-01

    Directed Self-Assembly (DSA) of Block Co-Polymer (BCP) combined with conventional lithography is regarded as one of the potential patterning solutions for manufacturing future-generation devices. Many studies have reported fabrication of aligned patterns by both grapho- and chemo-epitaxy for semiconductor applications1, 2. Hole shrink and multiplication by graphoepitaxy are among the DSA implementation candidates, given the relatively realistic process and the versatility of chip design. Critical challenges for hole shrink and multiplication using conventional poly(styrene-b-methyl methacrylate) (PS-b-PMMA) BCP have been reported, such as CD uniformity, placement error3 and defectivity. These challenging issues need to be overcome by improving not only the whole process but also the materials. From the material aspect, the surface treatment material for the guide structure and a process-friendly BCP material are key development items for graphoepitaxy. In this paper, a BCP approach, conventional PS-b-PMMA with additives and a new casting solvent as a PS-b-PMMA extension, is shown to improve CD uniformity and placement error, and the key factors and solutions from the BCP material standpoint are then discussed.

  12. Hybrid parallel code acceleration methods in full-core reactor physics calculations

    SciTech Connect

    Courau, T.; Plagne, L.; Ponicot, A.; Sjoden, G.

    2012-07-01

    When dealing with nuclear reactor calculation schemes, the need for three dimensional (3D) transport-based reference solutions is essential for both validation and optimization purposes. Considering a benchmark problem, this work investigates the potential of discrete ordinates (Sn) transport methods applied to 3D pressurized water reactor (PWR) full-core calculations. First, the benchmark problem is described. It involves a pin-by-pin description of a 3D PWR first core, and uses an 8-group cross-section library prepared with the DRAGON cell code. Then, a convergence analysis is performed using the PENTRAN parallel Sn Cartesian code. It discusses the spatial refinement and the associated angular quadrature required to properly describe the problem physics. It also shows that initializing the Sn solution with the EDF SPN solver COCAGNE reduces the number of iterations required to converge by nearly a factor of 6. Using a best estimate model, PENTRAN results are then compared to multigroup Monte Carlo results obtained with the MCNP5 code. Good consistency is observed between the two methods (Sn and Monte Carlo), with discrepancies that are less than 25 pcm for the keff, and less than 2.1% and 1.6% for the flux at the pin-cell level and for the pin-power distribution, respectively. (authors)

  13. Can Accelerators Accelerate Learning?

    NASA Astrophysics Data System (ADS)

    Santos, A. C. F.; Fonseca, P.; Coelho, L. F. S.

    2009-03-01

    The 'Young Talented' education program developed by the Brazilian State Funding Agency (FAPERJ) [1] makes it possible for students from public high schools to perform activities in scientific laboratories. In the Atomic and Molecular Physics Laboratory at the Federal University of Rio de Janeiro (UFRJ), the students are confronted with modern research tools like the 1.7 MV ion accelerator. Being a user-friendly machine, the accelerator is easily manageable by the students, who can perform simple hands-on activities, stimulating interest in physics, and getting the students close to modern laboratory techniques.

  14. PARTICLE ACCELERATOR

    DOEpatents

    Teng, L.C.

    1960-01-19

    A combination of two accelerators, a cyclotron and a ring-shaped accelerator which has a portion disposed tangentially to the cyclotron, is described. Means are provided to transfer particles from the cyclotron to the ring accelerator including a magnetic deflector within the cyclotron, a magnetic shield between the ring accelerator and the cyclotron, and a magnetic inflector within the ring accelerator.

  15. Powered by DFT: Screening methods that accelerate materials development for hydrogen in metals applications.

    PubMed

    Nicholson, Kelly M; Chandrasekhar, Nita; Sholl, David S

    2014-11-18

    CONSPECTUS: Not only is hydrogen critical for current chemical and refining processes, it is also projected to be an important energy carrier for future green energy systems such as fuel cell vehicles. Scientists have examined light metal hydrides for this purpose, which need to have both good thermodynamic properties and fast charging/discharging kinetics. The properties of hydrogen in metals are also important in the development of membranes for hydrogen purification. In this Account, we highlight our recent work aimed at the large scale screening of metal-based systems with either favorable hydrogen capacities and thermodynamics for hydrogen storage in metal hydrides for use in onboard fuel cell vehicles or promising hydrogen permeabilities relative to pure Pd for hydrogen separation from high temperature mixed gas streams using dense metal membranes. Previously, chemists have found that the metal hydrides need to hit a stability sweet spot: if the compound is too stable, it will not release enough hydrogen under low temperatures; if the compound is too unstable, the reaction may not be reversible under practical conditions. Fortunately, we can use DFT-based methods to assess this stability via prediction of thermodynamic properties, equilibrium reaction pathways, and phase diagrams for candidate metal hydride systems with reasonable accuracy using only proposed crystal structures and compositions as inputs. We have efficiently screened millions of mixtures of pure metals, metal hydrides, and alloys to identify promising reaction schemes via the grand canonical linear programming method. Pure Pd and Pd-based membranes have ideal hydrogen selectivities over other gases but suffer shortcomings such as sensitivity to sulfur poisoning and hydrogen embrittlement. Using a combination of detailed DFT, Monte Carlo techniques, and simplified models, we are able to accurately predict hydrogen permeabilities of metal membranes and screen large libraries of candidate alloys

  16. CUDA Fortran acceleration for the finite-difference time-domain method

    NASA Astrophysics Data System (ADS)

    Hadi, Mohammed F.; Esmaeili, Seyed A.

    2013-05-01

    A detailed description of programming the three-dimensional finite-difference time-domain (FDTD) method to run on graphical processing units (GPUs) using CUDA Fortran is presented. Two FDTD-to-CUDA thread-block mapping designs are investigated and their performances compared. Comparative assessment of trade-offs between GPU's shared memory and L1 cache is also discussed. This presentation is for the benefit of FDTD programmers who work exclusively with Fortran and are reluctant to port their codes to C in order to utilize GPU computing. The derived CUDA Fortran code is compared with an optimized CPU version that runs on a workstation-class CPU to present a realistic GPU to CPU run time comparison and thus help in making better informed investment decisions on FDTD code redesigns and equipment upgrades. All analyses are mirrored with CUDA C simulations to put in perspective the present state of CUDA Fortran development.

  17. Comparison of accelerated methods for the extraction of phenolic compounds from different vine-shoot cultivars.

    PubMed

    Delgado-Torre, M Pilar; Ferreiro-Vera, Carlos; Priego-Capote, Feliciano; Pérez-Juan, Pedro M; Luque de Castro, María Dolores

    2012-03-28

    Most research on the extraction of high-priced compounds from vineyard/wine byproducts has traditionally been focused on grape seeds and skins as raw materials. Vine-shoots can represent an additional source to those materials, the characteristics of which could depend on the cultivar. A comparative study of hydroalcoholic extracts from 18 different vineyard cultivars obtained by superheated liquid extraction (SHLE), microwave-assisted extraction (MAE), and ultrasound-assisted extraction (USAE) is here presented. The optimal working conditions for each type of extraction have been investigated by using multivariate experimental designs to maximize the yield of total phenolic compounds, measured by the Folin-Ciocalteu method, and control hydroxymethylfurfural because of the organoleptic properties of furanic derivatives and toxicity at given levels. The best values found for the influential variables on each extraction method were 80% (v/v) aqueous ethanol at pH 3, 180 °C, and 60 min for SHLE; 140 W and 5 min microwave irradiation for MAE; and 280 W, 50% duty cycle, and 7.5 min extraction for USAE. SHLE reported better extraction efficiencies as compared to the other two approaches, supporting the utility of SHLE for scaling-up the process. The extracts were dried in a rotary evaporator, reconstituted in 5 mL of methanol, and finally subjected to liquid-liquid extraction with n-hexane to remove nonpolar compounds that could complicate chromatographic separation. The methanolic fractions were analyzed by both LC-DAD and LC-TOF/MS, and the differences in composition according to the extraction conditions were studied. Compounds usually present in commercial wood extracts (mainly benzoic and hydroxycinnamic acids and aldehydes) were detected in vine-shoot extracts. PMID:22372567

  18. Investigation of using shrinking method in construction of Institute for Research in Fundamental Sciences Electron Linear Accelerator TW-tube (IPM TW-Linac tube)

    NASA Astrophysics Data System (ADS)

    Ghasemi, F.; Abbasi Davani, F.

    2015-06-01

    Due to Iran's growing need for accelerators in various applications, IPM's electron Linac project has been defined. This accelerator is a 15 MeV S-band traveling-wave accelerator which is being designed and constructed based on the klystron that has been built in Iran. Based on the design, the operating mode is π/2 and the accelerating chamber consists of two 60 cm long constant-impedance tubes and a 30 cm long buncher. Amongst all construction methods, the shrinking method was selected for construction of IPM's electron Linac tube because it has a simple procedure and there is no need for large vacuum or hydrogen furnaces. In this paper, different aspects of this method are investigated. According to the calculations, the linear ratio of frequency alteration to radius change is 787.8 MHz/cm, and the maximum deformation at the tube wall where the disks and the tube make contact is 2.7 μm. Applying the shrinking method to the construction of 8- and 24-cavity tubes results in satisfactory frequency and quality factor. The average deviations of the cavity frequencies of the 8- and 24-cavity tubes from the design values are 0.68 MHz and 1.8 MHz, respectively, before tuning, and 0.2 MHz and 0.4 MHz after tuning. The accelerating tubes, buncher, and high-power couplers of IPM's electron linac are constructed using the shrinking method.

  19. A GPU-accelerated semi-implicit ADI method for incompressible and compressible Navier-Stokes equations

    NASA Astrophysics Data System (ADS)

    Ha, Sanghyun; You, Donghyun

    2015-11-01

    Utility of the computational power of Graphics Processing Units (GPUs) is elaborated for solutions of both incompressible and compressible Navier-Stokes equations. A semi-implicit ADI finite-volume method for integration of the incompressible and compressible Navier-Stokes equations, which are discretized on a structured arbitrary grid, is parallelized for GPU computations using CUDA (Compute Unified Device Architecture). In the semi-implicit ADI finite-volume method, the nonlinear convection terms and the linear diffusion terms are integrated in time using a combination of an explicit scheme and an ADI scheme. Inversion of multiple tri-diagonal matrices is found to be the major challenge in GPU computations of the present method. Some of the algorithms for solving tri-diagonal matrices on GPUs are evaluated and optimized for GPU-acceleration of the present semi-implicit ADI computations of incompressible and compressible Navier-Stokes equations. Supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT and Future Planning Grant NRF-2014R1A2A1A11049599.
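
    The tridiagonal systems mentioned as the main bottleneck are classically solved with the Thomas algorithm, whose forward/backward recurrences are inherently sequential; the short Python sketch below shows that serial structure (the coefficients are illustrative).

      import numpy as np

      def thomas(a, b, c, d):
          """Solve a tridiagonal system with the Thomas algorithm.
          a: sub-diagonal (a[0] unused), b: diagonal, c: super-diagonal (c[-1] unused),
          d: right-hand side.  The sequential recurrence is what makes this step hard
          to parallelise and motivates specialised GPU tridiagonal solvers."""
          n = len(b)
          cp, dp = np.empty(n), np.empty(n)
          cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
          for i in range(1, n):
              denom = b[i] - a[i] * cp[i - 1]
              cp[i] = c[i] / denom
              dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
          x = np.empty(n)
          x[-1] = dp[-1]
          for i in range(n - 2, -1, -1):
              x[i] = dp[i] - cp[i] * x[i + 1]
          return x

      # usage: a 1-D implicit diffusion step of the kind an ADI sweep performs
      n, r = 100, 0.5
      a = np.full(n, -r); b = np.full(n, 1 + 2 * r); c = np.full(n, -r)
      x = thomas(a, b, c, np.ones(n))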

  20. Technical note: Acceleration of sparse operations for average-information REML analyses with supernodal methods and sparse-storage refinements.

    PubMed

    Masuda, Y; Aguilar, I; Tsuruta, S; Misztal, I

    2015-10-01

    The objective of this study was to remove bottlenecks generally found in a computer program for average-information REML. The refinements included improvements to setting-up mixed-model equations on a hash table with a faster hash function as sparse matrix storage, changing sparse structures in calculation of traces, and replacing a sparse matrix package using traditional methods (FSPAK) with a new package using supernodal methods (YAMS); the latter package quickly processed sparse matrices containing large, dense blocks. Comparisons included 23 models with data sets from broiler, swine, beef, and dairy cattle. Models included single-trait, multiple-trait, maternal, and random regression models with phenotypic data; selected models used genomic information in a single-step approach. Setting-up mixed model equations was completed without abnormal termination in all analyses. Calculations in traces were accelerated with a hash format, especially for models with a genomic relationship matrix, and the maximum speed was 67 times faster. Computations with YAMS were, on average, more than 10 times faster than with FSPAK and had greater advantages for large data and more complicated models including multiple traits, random regressions, and genomic effects. These refinements can be applied to general average-information REML programs. PMID:26523559

  1. Effects of fuel cetane number on the structure of diesel spray combustion: An accelerated Eulerian stochastic fields method

    NASA Astrophysics Data System (ADS)

    Jangi, Mehdi; Lucchini, Tommaso; Gong, Cheng; Bai, Xue-Song

    2015-09-01

    An Eulerian stochastic fields (ESF) method accelerated with the chemistry coordinate mapping (CCM) approach for modelling spray combustion is formulated, and applied to model diesel combustion in a constant volume vessel. In ESF-CCM, the thermodynamic states of the discretised stochastic fields are mapped into a low-dimensional phase space. Integration of the stiff chemical ODEs is performed in the phase space and the results are mapped back to the physical domain. After validating the ESF-CCM, the method is used to investigate the effects of fuel cetane number on the structure of diesel spray combustion. It is shown that, depending on the fuel cetane number, the liftoff length varies, which can lead to a change in combustion mode from classical diesel spray combustion to fuel-lean premixed burned combustion. Spray combustion with a shorter liftoff length exhibits the characteristics of the classical conceptual diesel combustion model proposed by Dec in 1997 (http://dx.doi.org/10.4271/970873), whereas in a case with a lower cetane number the liftoff length is much larger and the spray combustion probably occurs in a fuel-lean-premixed mode of combustion. Nevertheless, the transport budget at the liftoff location shows that stabilisation at all cetane numbers is governed primarily by the auto-ignition process.

  2. A new method of measuring the poloidal magnetic and radial electric fields in a tokamak using a laser-accelerated ion-beam trace probe.

    PubMed

    Yang, X Y; Chen, Y H; Lin, C; Wang, L; Xu, M; Wang, X G; Xiao, C J

    2014-11-01

    Both the poloidal magnetic field (Bp) and the radial electric field (Er) are significant in magnetic confinement devices. In this paper, a new method is proposed to diagnose both Bp and Er at the same time, named the Laser-accelerated Ion-beam Trace Probe (LITP). This method is based on the laser-accelerated ion beam, which has three properties: large energy spread, short pulse length, and multiple charge states. LITP can provide 1D profiles or 2D images of both Bp and Er. In this paper, we present the basic principle and some preliminary theoretical results. PMID:25430336

  3. BICEP's acceleration

    SciTech Connect

    Contaldi, Carlo R.

    2014-10-01

    The recent Bicep2 [1] detection of what is claimed to be primordial B-modes opens up the possibility of constraining not only the energy scale of inflation but also the detailed acceleration history that occurred during inflation. In turn this can be used to determine the shape of the inflaton potential V(φ) for the first time, if a single scalar inflaton is assumed to be driving the acceleration. We carry out a Monte Carlo exploration of inflationary trajectories given the current data. Using this method we obtain a posterior distribution of possible acceleration profiles ε(N) as a function of e-fold N and derived posterior distributions of the primordial power spectrum P(k) and potential V(φ). We find that the Bicep2 result, in combination with Planck measurements of total intensity Cosmic Microwave Background (CMB) anisotropies, induces a significant feature in the scalar primordial spectrum at scales k ∼ 10⁻³ Mpc⁻¹. This is in agreement with a previous detection of a suppression in the scalar power [2].

  4. A polarization-based frequency scanning interferometer and the signal processing acceleration method based on parallel processing architecture

    NASA Astrophysics Data System (ADS)

    Lee, Seung Hyun; Kim, Min Young

    FSI, one of the most promising optical surface measurement techniques, generally yields superior optical performance compared with other 3-dimensional measuring methods, as its hardware structure is fixed in operation and only the light frequency is scanned in a specific spectral band, without vertical scanning of the target surface or the objective lens. The FSI system collects a set of interference fringe images by changing the frequency of the light source. It then transforms the intensity data of the acquired images into frequency information and calculates the height profile of target objects with the help of FFT-based frequency analysis. However, it still suffers from optical noise from the target surface and relatively long processing times due to the number of images acquired in the frequency scanning phase. First, a polarization-based frequency scanning interferometry (PFSI) is proposed for robustness to optical noise. It consists of a tunable laser as the light source, a λ/4 plate in front of the reference mirror, a λ/4 plate in front of the target object, a polarizing beam splitter, a polarizer in front of the image sensor, a polarizer in front of the fiber-coupled light source, and a λ/2 plate between the PBS and the polarizer of the light source. Using the proposed system, the problem of low contrast in the acquired fringe images is solved by the polarization technique, and the light distribution of the object beam and reference beam can be controlled. Second, a signal processing acceleration method is proposed for PFSI, based on a parallel processing architecture consisting of parallel processing hardware and software such as the GPU (Graphics Processing Unit) and CUDA (Compute Unified Device Architecture). As a result, the processing time reaches the takt-time level of real-time processing. Finally, the proposed system is evaluated in terms of accuracy and processing speed through a series of experiments, and the results show the effectiveness of the proposed system and method.
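
    The core FSI signal-processing step (turning a fringe recorded against scanned optical frequency into a distance via an FFT) can be sketched in a few lines of Python; the scan range, frame count, and target distance below are arbitrary illustrative numbers.

      import numpy as np

      c = 3.0e8                                   # speed of light, m/s
      d_true = 1.5e-3                             # optical path difference to recover, m
      nu = np.linspace(0.0, 1.0e12, 1024)         # 1 THz scan range, 1024 frames (illustrative)

      # Interference intensity recorded at one pixel while the laser frequency is scanned
      intensity = 1.0 + 0.8 * np.cos(2 * np.pi * (2 * d_true / c) * nu)

      # FFT along the frequency-scan axis; the dominant bin encodes the OPD
      spectrum = np.abs(np.fft.rfft(intensity - intensity.mean()))
      k_peak = np.argmax(spectrum)
      d_nu = nu[-1] - nu[0]                       # total scanned bandwidth
      d_est = k_peak * c / (2 * d_nu)             # bin k corresponds to an OPD of k*c/(2*Δν)
      print(f"recovered OPD ≈ {d_est*1e3:.2f} mm (true {d_true*1e3:.2f} mm)")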

  6. The Advanced Composition Explorer Shock Database and Application to Particle Acceleration Theory

    NASA Technical Reports Server (NTRS)

    Parker, L. Neergaard; Zank, G. P.

    2015-01-01

    The theory of particle acceleration via diffusive shock acceleration (DSA) has been studied in depth by Gosling et al. (1981), van Nes et al. (1984), Mason (2000), Desai et al. (2003), Zank et al. (2006), among many others. Recently, Parker and Zank (2012, 2014) and Parker et al. (2014) using the Advanced Composition Explorer (ACE) shock database at 1 AU explored two questions: does the upstream distribution alone have enough particles to account for the accelerated downstream distribution and can the slope of the downstream accelerated spectrum be explained using DSA? As was shown in this research, diffusive shock acceleration can account for a large population of the shocks. However, Parker and Zank (2012, 2014) and Parker et al. (2014) used a subset of the larger ACE database. Recently, work has successfully been completed that allows for the entire ACE database to be considered in a larger statistical analysis. We explain DSA as it applies to single and multiple shocks and the shock criteria used in this statistical analysis. We calculate the expected injection energy via diffusive shock acceleration given upstream parameters defined from the ACE Solar Wind Electron, Proton, and Alpha Monitor (SWEPAM) data to construct the theoretical upstream distribution. We show the comparison of shock strength derived from diffusive shock acceleration theory to observations in the 50 keV to 5 MeV range from an instrument on ACE. Parameters such as shock velocity, shock obliquity, particle number, and time between shocks are considered. This study is further divided into single and multiple shock categories, with an additional emphasis on forward-forward multiple shock pairs. Finally with regard to forward-forward shock pairs, results comparing injection energies of the first shock, second shock, and second shock with previous energetic population will be given.
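
    For reference, the test-particle DSA relations that underlie such comparisons between predicted and observed spectral slopes are compact enough to compute directly; the Python sketch below uses the standard strong-shock formulae for a gas with adiabatic index 5/3 and illustrative Mach numbers.

      def dsa_test_particle(mach, gamma=5.0 / 3.0):
          """Standard test-particle DSA relations for a single shock:
          compression ratio r and phase-space spectral index q, with f(p) ~ p^-q."""
          r = (gamma + 1) * mach**2 / ((gamma - 1) * mach**2 + 2)
          q = 3 * r / (r - 1)
          return r, q

      for mach in (1.5, 3.0, 10.0):
          r, q = dsa_test_particle(mach)
          print(f"M={mach:4.1f}  r={r:4.2f}  q={q:5.2f}")   # q -> 4 as the shock becomes strong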

  7. PREFACE: Joint IPPP Durham/Cockcroft Institute/ICFA Workshop on Advanced QED methods for Future Accelerators

    NASA Astrophysics Data System (ADS)

    Bailey, I. R.; Barber, D. P.; Chattopadhyay, S.; Hartin, A.; Heinzl, T.; Hesselbach, S.; Moortgat-Pick, G. A.

    2009-11-01

    The joint IPPP Durham/Cockcroft Institute/ICFA workshop on advanced QED methods for future accelerators took place at the Cockcroft Institute in early March 2009. The motivation for the workshop was the need for a detailed consideration of the physics processes associated with beam-beam effects at the interaction points of future high-energy electron-positron colliders. There is a broad consensus within the particle physics community that the next international facility for experimental high-energy physics research beyond the Large Hadron Collider at CERN should be a high-luminosity electron-positron collider working at the TeV energy scale. One important feature of such a collider will be its ability to deliver polarised beams to the interaction point and to provide accurate measurements of the polarisation state during physics collisions. The physics collisions take place in very dense charge bunches in the presence of extremely strong electromagnetic fields of field strength of order of the Schwinger critical field strength of 4.4×10¹³ Gauss. These intense fields lead to depolarisation processes which need to be thoroughly understood in order to reduce uncertainty in the polarisation state at collision. To that end, this workshop reviewed the formalisms for describing radiative processes and the methods of calculation in the future strong-field environments. These calculations are based on the Furry picture of organising the interaction term of the Lagrangian. The means of deriving the transition probability of the most important of the beam-beam processes - Beamstrahlung - was reviewed. The workshop was honoured by the presentations of one of the founders, V N Baier, of the 'Operator method' - one means for performing these calculations. Other theoretical methods of performing calculations in the Furry picture, namely those due to A I Nikishov, V I Ritus et al., were reviewed and intense field quantum processes in fields of different form - namely those

  8. Cosmic ray acceleration at perpendicular shocks in supernova remnants

    SciTech Connect

    Ferrand, Gilles; Danos, Rebecca J.; Shalchi, Andreas; Safi-Harb, Samar; Edmon, Paul; Mendygral, Peter

    2014-09-10

    Supernova remnants (SNRs) are believed to accelerate particles up to high energies through the mechanism of diffusive shock acceleration (DSA). Except for direct plasma simulations, all modeling efforts must rely on a given form of the diffusion coefficient, a key parameter that embodies the interactions of energetic charged particles with magnetic turbulence. The so-called Bohm limit is commonly employed. In this paper, we revisit the question of acceleration at perpendicular shocks, by employing a realistic model of perpendicular diffusion. Our coefficient reduces to a power law in momentum for low momenta (of index α), but becomes independent of the particle momentum at high momenta (reaching a constant value κ∞ above some characteristic momentum pc). We first provide simple analytical expressions of the maximum momentum that can be reached at a given time with this coefficient. Then we perform time-dependent numerical simulations to investigate the shape of the particle distribution that can be obtained when the particle pressure back-reacts on the flow. We observe that for a given index α and injection level, the shock modifications are similar for different possible values of pc, whereas the particle spectra differ markedly. Of particular interest, low values of pc tend to remove the concavity once thought to be typical of non-linear DSA, and result in steep spectra, as required by recent high-energy observations of Galactic SNRs.
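
    The momentum dependence of the diffusion coefficient described above, and the standard DSA acceleration-time estimate it enters, can be sketched in Python as follows; the numerical values (κ∞, pc, shock speed, injection momentum) are arbitrary placeholders, and the same κ(p) is assumed on both sides of the shock for simplicity.

      import numpy as np

      def kappa(p, p_c=1.0, kappa_inf=1.0e24, alpha=1.0):
          """Diffusion coefficient of the form used in the abstract: a power law of
          index alpha below a characteristic momentum p_c, constant above it
          (all values are placeholders in arbitrary units)."""
          return np.where(p < p_c, kappa_inf * (p / p_c) ** alpha, kappa_inf)

      def t_acc(p_max, u1, u2, p_inj=1e-3, n=2000, **kw):
          """Standard DSA acceleration-time estimate,
             t_acc = 3/(u1 - u2) * integral from p_inj to p_max of
                     (kappa1/u1 + kappa2/u2) dp/p,
          assuming the same kappa(p) upstream and downstream."""
          p = np.logspace(np.log10(p_inj), np.log10(p_max), n)
          integrand = (kappa(p, **kw) / u1 + kappa(p, **kw) / u2) / p
          return 3.0 / (u1 - u2) * np.trapz(integrand, p)

      # usage: time to reach p_max for a shock of speed u1 and compression ratio 4
      u1 = 5.0e8            # cm/s, illustrative
      print(t_acc(p_max=10.0, u1=u1, u2=u1 / 4.0))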

  9. Adapting capillary gel electrophoresis as a sensitive, high-throughput method to accelerate characterization of nucleic acid metabolic enzymes

    PubMed Central

    Greenough, Lucia; Schermerhorn, Kelly M.; Mazzola, Laurie; Bybee, Joanna; Rivizzigno, Danielle; Cantin, Elizabeth; Slatko, Barton E.; Gardner, Andrew F.

    2016-01-01

    Detailed biochemical characterization of nucleic acid enzymes is fundamental to understanding nucleic acid metabolism, genome replication and repair. We report the development of a rapid, high-throughput fluorescence capillary gel electrophoresis method as an alternative to traditional polyacrylamide gel electrophoresis to characterize nucleic acid metabolic enzymes. The principles of assay design described here can be applied to nearly any enzyme system that acts on a fluorescently labeled oligonucleotide substrate. Herein, we describe several assays using this core capillary gel electrophoresis methodology to accelerate study of nucleic acid enzymes. First, assays were designed to examine DNA polymerase activities including nucleotide incorporation kinetics, strand displacement synthesis and 3′-5′ exonuclease activity. Next, DNA repair activities of DNA ligase, flap endonuclease and RNase H2 were monitored. In addition, a multicolor assay that uses four different fluorescently labeled substrates in a single reaction was implemented to characterize GAN nuclease specificity. Finally, a dual-color fluorescence assay to monitor coupled enzyme reactions during Okazaki fragment maturation is described. These assays serve as a template to guide further technical development for enzyme characterization or nucleoside and non-nucleoside inhibitor screening in a high-throughput manner. PMID:26365239

  10. Development of a non-thermal accelerated pulsed UV photolysis assisted digestion method for fresh and dried food samples

    NASA Astrophysics Data System (ADS)

    Solís, C.; Lagunas-Solar, M. C.; Perley, B. P.; Piña, C.; Aguilar, L. F.; Flocchini, R. G.

    2002-04-01

    A simple, fast digestion procedure for fresh and dried foods, using high-power pulsed UV photolysis in the presence of hydrogen peroxide, is being developed. The homogenized food samples were mixed with H 2O 2 or with a mixture of H 2O 2 and HNO 3, and irradiated for short times with a 248-nm UV excimer laser. After centrifugation, a clear, colorless solution was obtained and aliquots were deposited on Teflon filters for XRF and/or PIXE analyses. Standard reference materials (NIST Peach Leaves; Typical Diet) were also analyzed to compare recoveries and detection limits. Improvements in detection limits were observed, but a few trace elements (<1 ppm) were not reproducibly detected (Fe, Sr). This method proved to be practical for the accelerated digestion of food samples and preparing analytes in short-time intervals. In combination with PIXE and XRF, it allows high-sensitivity multi-elemental analyses for screening the nutritional elements and for food safety purposes regarding the potential presence of toxic elements. Further development to optimize and validate this procedure for a broader range of analytes is in progress.

  11. Plasma accelerators

    SciTech Connect

    Ruth, R.D.; Chen, P.

    1986-03-01

    In this paper we discuss plasma accelerators which might provide high gradient accelerating fields suitable for TeV linear colliders. In particular we discuss two types of plasma accelerators which have been proposed, the Plasma Beat Wave Accelerator and the Plasma Wake Field Accelerator. We show that the electric fields in the plasma for both schemes are very similar, and thus the dynamics of the driven beams are very similar. The differences appear in the parameters associated with the driving beams. In particular to obtain a given accelerating gradient, the Plasma Wake Field Accelerator has a higher efficiency and a lower total energy for the driving beam. Finally, we show for the Plasma Wake Field Accelerator that one can accelerate high quality low emittance beams and, in principle, obtain efficiencies and energy spreads comparable to those obtained with conventional techniques.

  12. Ant colony method to control variance reduction techniques in the Monte Carlo simulation of clinical electron linear accelerators of use in cancer therapy

    NASA Astrophysics Data System (ADS)

    García-Pareja, S.; Vilches, M.; Lallena, A. M.

    2010-01-01

    The Monte Carlo simulation of clinical electron linear accelerators requires large computation times to achieve the level of uncertainty required for radiotherapy. In this context, variance reduction techniques play a fundamental role in the reduction of this computational time. Here we describe the use of the ant colony method to control the application of two variance reduction techniques: splitting and Russian roulette. The approach can be applied to any accelerator in a straightforward way and permits increasing the efficiency of the simulation by a factor larger than 50.
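
    For readers less familiar with the two variance-reduction techniques being controlled, a minimal, generic Python sketch of particle splitting and Russian roulette is given below; the weights, importance ratios, and survival probabilities are illustrative, and this is not the ant-colony control logic of the paper.

      import numpy as np

      rng = np.random.default_rng(0)

      def splitting(weight, importance_ratio):
          """Split a particle entering a more important region into several copies,
          each carrying a reduced statistical weight (expected weight is conserved)."""
          n_copies = int(np.floor(importance_ratio))
          if rng.random() < importance_ratio - n_copies:
              n_copies += 1
          return [weight / importance_ratio] * n_copies

      def russian_roulette(weight, survival_prob=0.5):
          """Kill low-weight particles with probability 1 - survival_prob; survivors
          have their weight increased so the estimator remains unbiased."""
          if rng.random() < survival_prob:
              return weight / survival_prob
          return None   # particle terminated

      # usage
      copies = splitting(weight=1.0, importance_ratio=2.5)       # ~2-3 copies of weight 0.4
      survivor = russian_roulette(weight=0.05, survival_prob=0.2)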

  13. Hepatic Arterial Configuration in Relation to the Segmental Anatomy of the Liver; Observations on MDCT and DSA Relevant to Radioembolization Treatment

    SciTech Connect

    Hoven, Andor F. van den; Leeuwen, Maarten S. van; Lam, Marnix G. E. H.; Bosch, Maurice A. A. J. van den

    2015-02-15

    Purpose: Current anatomical classifications do not include all variants relevant for radioembolization (RE). The purpose of this study was to assess the individual hepatic arterial configuration and segmental vascularization pattern and to develop an individualized RE treatment strategy based on an extended classification. Methods: The hepatic vascular anatomy was assessed on MDCT and DSA in patients who received a workup for RE between February 2009 and November 2012. Reconstructed MDCT studies were assessed to determine the hepatic arterial configuration (origin of every hepatic arterial branch, branching pattern and anatomical course) and the hepatic segmental vascularization territory of all branches. Aberrant hepatic arteries were defined as hepatic arterial branches that did not originate from the celiac axis/CHA/PHA. Early branching patterns were defined as hepatic arterial branches originating from the celiac axis/CHA. Results: The hepatic arterial configuration and segmental vascularization pattern could be assessed in 110 of 133 patients. In 59 patients (54 %), no aberrant hepatic arteries or early branching was observed. Fourteen patients without aberrant hepatic arteries (13 %) had an early branching pattern. In the 37 patients (34 %) with aberrant hepatic arteries, five also had an early branching pattern. Sixteen different hepatic arterial segmental vascularization patterns were identified and described, differing by the presence of aberrant hepatic arteries, their respective vascular territory, and origin of the artery vascularizing segment four. Conclusions: The hepatic arterial configuration and segmental vascularization pattern show marked individual variability beyond well-known classifications of anatomical variants. We developed an individualized RE treatment strategy based on an extended anatomical classification.

  14. GPU-Accelerated Monte Carlo Electron Transport Methods: Development and Application for Radiation Dose Calculations Using Six GPU cards

    NASA Astrophysics Data System (ADS)

    Su, Lin; Du, Xining; Liu, Tianyu; Xu, X. George

    2014-06-01

    An electron-photon coupled Monte Carlo code ARCHER - Accelerated Radiation-transport Computations in Heterogeneous EnviRonments - is being developed at Rensselaer Polytechnic Institute as a software testbed for emerging heterogeneous high performance computers that utilize accelerators such as GPUs. This paper presents the preliminary code development and the testing involving radiation dose related problems. In particular, the paper discusses the electron transport simulations using the class-II condensed history method. The considered electron energy ranges from a few hundreds of keV to 30 MeV. For the photon part, photoelectric effect, Compton scattering and pair production were modeled. Voxelized geometry was supported. A serial CPU code was first written in C++. The code was then ported to the GPU using the CUDA C 5.0 standards. The hardware involved a desktop PC with an Intel Xeon X5660 CPU and six NVIDIA Tesla™ M2090 GPUs. The code was tested for a case of a 20 MeV electron beam incident perpendicularly on a water-aluminum-water phantom. The depth and lateral dose profiles were found to agree with results obtained from well tested MC codes. Using six GPU cards, 6×10⁶ electron histories were simulated within 2 seconds. In comparison, the same case running the EGSnrc and MCNPX codes required 1645 seconds and 9213 seconds, respectively. On-going work continues to test the code for different medical applications such as radiotherapy and brachytherapy.

  15. The Role of Diffusive Shock Acceleration on Nonequilibrium Ionization in Supernova Remnants

    NASA Astrophysics Data System (ADS)

    Patnaude, Daniel J.; Ellison, Donald C.; Slane, Patrick

    2009-05-01

    We present results of semianalytic calculations which show clear evidence for changes in the nonequilibrium ionization behind a supernova remnant forward shock undergoing efficient diffusive shock acceleration (DSA). The efficient acceleration of particles (i.e., cosmic rays (CRs)) lowers the shock temperature and raises the density of the shocked gas, thus altering the ionization state of the plasma in comparison to the test-particle (TP) approximation where CRs gain an insignificant fraction of the shock energy. The differences between the TP and efficient acceleration cases are substantial and occur for both slow and fast temperature equilibration rates: in cases of higher acceleration efficiency, particular ion states are more populated at lower electron temperatures. We also present results which show that, in the efficient shock acceleration case, higher ionization fractions are reached noticeably closer to the shock front than in the TP case, clearly indicating that DSA may enhance thermal X-ray production. We attribute this to the higher postshock densities which lead to faster electron temperature equilibration and higher ionization rates. These spatial differences should be resolvable with current and future X-ray missions, and can be used as diagnostics in estimating the acceleration efficiency in CR-modified shocks.

  16. A data processing method for determining instantaneous angular speed and acceleration of crankshaft in an aircraft engine-propeller system using a magnetic encoder

    NASA Astrophysics Data System (ADS)

    Yu, S. D.; Zhang, X.

    2010-05-01

    This paper presents a method for determining the instantaneous angular speed and instantaneous angular acceleration of the crankshaft in a reciprocating engine and propeller dynamical system from electrical pulse signals generated by a magnetic encoder. The method is based on accurate determination of the measured global mean angular speed and precise values of the times when the leading edges of individual magnetic teeth pass the magnetic sensor. Under a steady-state operating condition, a discrete series of deviation time versus shaft rotation angle, at uniform angular intervals, is obtained and used for accurate determination of the crankshaft speed and acceleration. The proposed method for identifying sub- and super-harmonic oscillations in the instantaneous angular speeds and accelerations is new and efficient. Experiments were carried out on a three-cylinder four-stroke Saito 450R model aircraft engine and a Solo propeller, using a 64-tooth Admotec KL2202 magnetic encoder and an HS-4 data acquisition system. Comparisons with an independent data processing scheme indicate that the proposed method yields noise-free instantaneous angular speeds and is superior to the finite-difference-based methods commonly used in the literature.
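
    The core computation described here converts tooth-edge arrival times into angle-resolved speed and acceleration. A minimal sketch of that idea, assuming 64 evenly spaced teeth and clean edge timestamps and omitting the paper's deviation-time and harmonic-identification refinements, is shown below.

```python
import numpy as np

# Instantaneous angular speed (IAS) and acceleration from encoder tooth-edge times.
n_teeth = 64
dtheta = 2.0 * np.pi / n_teeth                     # angle between leading edges (rad)

def ias_and_acc(edge_times):
    """edge_times: 1-D array of leading-edge arrival times (s), one per tooth passage."""
    dt = np.diff(edge_times)                       # time to traverse each tooth interval
    omega = dtheta / dt                            # IAS for each interval (rad/s)
    t_mid = 0.5 * (edge_times[:-1] + edge_times[1:])
    alpha = np.gradient(omega, t_mid)              # angular acceleration by central differences
    return t_mid, omega, alpha

# Synthetic example: mean speed of 6000 rpm with a small second-order speed fluctuation.
omega0 = 6000 * 2 * np.pi / 60
theta = np.arange(0, 20 * n_teeth) * dtheta
t = theta / omega0 + 1e-4 * np.sin(2 * theta) / omega0   # toy timing deviation
t_mid, omega, alpha = ias_and_acc(t)
print("mean IAS (rpm):", omega.mean() * 60 / (2 * np.pi))
```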

  17. Statistical correlation of the soil incubation and the accelerated laboratory extraction methods to estimate nitrogen release rates of slow- and controlled-release fertilizers.

    PubMed

    Medina, L Carolina; Sartain, Jerry; Obreza, Thomas; Hall, William L; Thiex, Nancy J

    2014-01-01

    Several technologies have been proposed to characterize the nutrient release patterns of enhanced-efficiency fertilizers (EEFs) during the last few decades. These technologies have been developed mainly by manufacturers and are product-specific, based on the regulation and analysis of each EEF product. Despite previous efforts to characterize nutrient release of slow-release fertilizer (SRF) and controlled-release fertilizer (CRF) materials, no official method exists to assess their nutrient release patterns. However, the increased production and distribution of EEFs in specialty and nonspecialty markets requires an appropriate method to verify nutrient claims and material performance. Nonlinear regression was used to establish a correlation between the data generated from a 180-day soil incubation-column leaching procedure and a 74 h accelerated laboratory extraction method, and to develop a model that can predict the 180-day nitrogen (N) release curve for a specific SRF or CRF product based on the data from the accelerated laboratory extraction method. Based on the R² > 0.90 obtained for most materials, results indicated that the data generated from the 74 h accelerated laboratory extraction method could be used to predict N release from the selected materials over 180 days, including those fertilizers that require biological activity for N release. PMID:25051612
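
    The abstract names nonlinear regression but not the fitted model. As a hedged sketch of the general approach (fit a release curve to the short accelerated-extraction data, then evaluate it over 180 days), the snippet below assumes a simple first-order release form and made-up data; the authors' actual regression model and time mapping may differ.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical first-order N-release model: cumulative release (%) versus time (days).
def release(t, n_max, k):
    return n_max * (1.0 - np.exp(-k * t))

# Toy "accelerated extraction" data covering ~2 h to 74 h (expressed in days); not real measurements.
t_accel = np.array([0.08, 0.17, 0.33, 0.67, 1.0, 2.0, 3.08])
n_accel = np.array([3.0, 6.0, 11.0, 19.0, 26.0, 42.0, 55.0])

params, _ = curve_fit(release, t_accel, n_accel, p0=[80.0, 0.3])
n_max, k = params
t_180 = np.linspace(0, 180, 50)
pred_180 = release(t_180, n_max, k)      # predicted 180-day release curve from the 74 h data
print(f"fitted N_max={n_max:.1f}%, k={k:.3f}/day; predicted 180-day release={pred_180[-1]:.1f}%")
```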

  18. YOUNG SUPERNOVAE AS EXPERIMENTAL SITES FOR STUDYING THE ELECTRON ACCELERATION MECHANISM

    SciTech Connect

    Maeda, Keiichi

    2013-01-10

    Radio emissions from young supernovae (≲1 year after the explosion) show a peculiar feature in the relativistic electron population at a shock wave, where their energy distribution is steeper than typically found in supernova remnants and than that predicted from the standard diffusive shock acceleration (DSA) mechanism. This has been especially well established for a class of stripped-envelope supernovae (SNe IIb/Ib/Ic), where a combination of high shock velocity and low circumstellar material density makes it easier to derive the intrinsic energy distribution than in other classes of SNe. We suggest that this apparent discrepancy reflects a situation where the low-energy electrons, before being accelerated by the DSA-like mechanism, are responsible for the radio synchrotron emission from young SNe, and that studying young SNe sheds light on the still-unresolved electron injection problem in the acceleration theory of cosmic rays. We suggest that the electron energy distribution could be flattened toward high energy, most likely around 100 MeV, which marks a transition from inefficient to efficient acceleration. Identifying this feature will be a major advance in understanding the electron acceleration mechanism. We suggest two further probes: (1) millimeter/submillimeter observations in the first year after the explosion and (2) X-ray observations at about one year and thereafter. We show that these are reachable by ALMA and Chandra for nearby SNe.

  19. Accelerated test methods for life prediction of hermetic motor insulation systems exposed to alternative refrigerant/lubricant mixtures. Phase 3: Reproducibility and discrimination testing. Final report

    SciTech Connect

    Ellis, P.F. II; Ferguson, A.F.; Fuentes, K.T.

    1996-05-06

    In 1992, the Air-Conditioning and Refrigeration Technology Institute, Inc. (ARTI) contracted Radian Corporation to ascertain whether an improved accelerated test method or procedure could be developed that would allow prediction of the life of motor insulation materials used in hermetic motors for air-conditioning and refrigeration equipment operated with alternative refrigerant/lubricant mixtures. This report presents the results of phase three concerning the reproducibility and discrimination testing.

  20. Synthesis of Ultradisperse Carbon Dioxide Powder with Plasma-Dynamic Method in the Coaxial Magneto-Plasma Accelerator

    NASA Astrophysics Data System (ADS)

    Golyanskaya, Evgeniya. O.; Sivkov, Aleksandr A.; Anikina, Zhanna S.

    2016-02-01

    One of the most promising topics in modern physics is high-temperature superconductivity. Analysis of high-temperature superconductors shows that almost all of them are complex copper-based oxides. Studies have demonstrated the possibility of synthesizing such materials in a coaxial magneto-plasma accelerator. The synthesized products were identified as Cu, Cu2O, and CuO, and their shape and size were characterized. The composition of the nanopowder obtained under laboratory conditions was also determined and confirmed by electron microscopy.

  1. A novel method to generate a self-accelerating Bessel-like beam based on graded index multimode optical fiber

    NASA Astrophysics Data System (ADS)

    Zhang, Yaxun; Liu, Chunlan; Yu, Zhang; Liu, Zhihai; Zhao, Enming; Yang, Jun; Yuan, Libo

    2015-09-01

    We propose and demonstrate a transverse self-accelerating Bessel-like beam generator based on a graded-index multimode optical fiber (GIF). The single-mode fiber and the graded-index multimode fiber are spliced with a defined offset. The offset Δx and the GIF length L determine the final properties of the Bessel-like beam; here the optimal values are Δx = 20 μm and L = 430 μm. The beam accelerates along the designed parabolic path up to 250 μm in the z direction and 40 μm in the x direction, giving a bending ratio of 16% (40 μm/250 μm, x/z). This transverse self-accelerating Bessel-like beam generator based on the graded-index multimode optical fiber constitutes a new development for high-precision micro-particle experiments and manipulation because of its simple structure, high integration and small size.

  2. Stochastic shock response spectrum decomposition method based on probabilistic definitions of temporal peak acceleration, spectral energy, and phase lag distributions of mechanical impact pyrotechnic shock test data

    NASA Astrophysics Data System (ADS)

    Hwang, James Ho-Jin; Duran, Adam

    2016-08-01

    Most of the time, pyrotechnic shock design and test requirements for space systems are provided as a Shock Response Spectrum (SRS) without the input time history. Since the SRS does not describe the input or the environment, a decomposition method is used to obtain the source time history. The main objective of this paper is to develop a decomposition method producing input time histories that can satisfy the SRS requirement, based on pyrotechnic shock test data measured from a mechanical impact test apparatus. At the heart of this decomposition method is the statistical representation of the pyrotechnic shock test data measured from the MIT Lincoln Laboratory (LL) designed Universal Pyrotechnic Shock Simulator (UPSS). Each pyrotechnic shock test record measured at the interface of a test unit has been analyzed to produce the temporal peak acceleration, Root Mean Square (RMS) acceleration, and the phase lag at each band center frequency. The maximum SRS of each filtered time history has been calculated to produce a relationship between the input and the response. Two new definitions are proposed as a result. The Peak Ratio (PR) is defined as the ratio between the maximum SRS and the temporal peak acceleration at each band center frequency. The ratio between the maximum SRS and the RMS acceleration is defined as the Energy Ratio (ER) at each band center frequency. Phase lag is estimated based on the time delay between the temporal peak acceleration at each band center frequency and the peak acceleration at the lowest band center frequency. This stochastic process has been applied to more than one hundred pyrotechnic shock test records to produce probabilistic definitions of the PR, ER, and the phase lag. The SRS is decomposed at each band center frequency using damped sinusoids, with the PR and the decays obtained by matching the ER of the damped sinusoids to the ER of the test data. The final step in this stochastic SRS decomposition process is the Monte Carlo (MC
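
    The Peak Ratio and Energy Ratio defined above are per-band ratios of the maximax SRS to the temporal peak and RMS of the band-filtered history. A minimal sketch of computing them is shown below, assuming the SRS values at each band center frequency have already been produced by standard SRS tooling and that a third-octave band-pass filter is an acceptable stand-in for the paper's filtering.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def band_metrics(accel, fs, f_centers, srs_max):
    """Per-band temporal peak, RMS, Peak Ratio (PR), and Energy Ratio (ER).

    accel:     measured acceleration time history (g)
    fs:        sample rate (Hz)
    f_centers: band center frequencies (Hz)
    srs_max:   maximax SRS at each center frequency, assumed precomputed elsewhere
    """
    pr, er = [], []
    for fc, srs in zip(f_centers, srs_max):
        # third-octave-like band-pass around the center frequency
        sos = butter(4, [fc / 2**(1/6), fc * 2**(1/6)], btype="band", fs=fs, output="sos")
        x = sosfiltfilt(sos, accel)
        peak = np.max(np.abs(x))                  # temporal peak acceleration in the band
        rms = np.sqrt(np.mean(x**2))              # RMS acceleration in the band
        pr.append(srs / peak)                     # Peak Ratio
        er.append(srs / rms)                      # Energy Ratio
    return np.array(pr), np.array(er)
```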

  3. Stability of CIGS Solar Cells and Component Materials Evaluated by a Step-Stress Accelerated Degradation Test Method: Preprint

    SciTech Connect

    Pern, F. J.; Noufi, R.

    2012-10-01

    A step-stress accelerated degradation testing (SSADT) method was employed for the first time to evaluate the stability of CuInGaSe2 (CIGS) solar cells and device component materials in four Al-framed test structures encapsulated with an edge sealant and three kinds of backsheet or moisture barrier film for moisture ingress control. The SSADT exposure used a 15°C and then a 15% relative humidity (RH) increment step, beginning from 40°C/40%RH (T/RH = 40/40) to 85°C/70%RH (85/70) as of the moment. The voluminous data acquired and processed as of total DH = 3956 h with 85/70 = 704 h produced the following results. The best CIGS solar cells in sample Set-1 with a moisture-permeable TPT backsheet showed essentially identical I-V degradation trend regardless of the Al-doped ZnO (AZO) layer thickness ranging from standard 0.12 μm to 0.50 μm on the cells. No clear 'stepwise' feature in the I-V parameter degradation curves corresponding to the SSADT T/RH/time profile was observed. Irregularity in I-V performance degradation pattern was observed with some cells showing early degradation at low T/RH < 55/55 and some showing large Voc, FF, and efficiency degradation due to increased series Rs (ohm-cm2) at T/RH ≥ 70/70. Results of (electrochemical) impedance spectroscopy (ECIS) analysis indicate degradation of the CIGS solar cells corresponded to increased series resistance Rs (ohm) and degraded parallel (minority carrier diffusion/recombination) resistance Rp, capacitance C, overall time constant Rp*C, and 'capacitor quality' factor (CPE-P), which were related to the cells' p-n junction properties. Heating at 85/70 appeared to benefit the CIGS solar cells as indicated by the largely recovered CPE-P factor. Device component materials, Mo on soda lime glass (Mo/SLG), bilayer ZnO (BZO), AlNi grid contact, and CdS/CIGS/Mo/SLG in test structures with TPT showed notable to significant degradation at T/RH ≥ 70/70. At T/RH = 85/70, substantial blistering of BZO layers on CIGS

  4. Fast, accurate photon beam accelerator modeling using BEAMnrc: A systematic investigation of efficiency enhancing methods and cross-section data

    SciTech Connect

    Fragoso, Margarida; Kawrakow, Iwan; Faddegon, Bruce A.; Solberg, Timothy D.; Chetty, Indrin J.

    2009-12-15

    In this work, an investigation of efficiency enhancing methods and cross-section data in the BEAMnrc Monte Carlo (MC) code system is presented. Additionally, BEAMnrc was compared with VMC++, another special-purpose MC code system that has recently been enhanced for the simulation of the entire treatment head. BEAMnrc and VMC++ were used to simulate a 6 MV photon beam from a Siemens Primus linear accelerator (linac) and phase space (PHSP) files were generated at 100 cm source-to-surface distance for the 10×10 and 40×40 cm² field sizes. The BEAMnrc parameters/techniques under investigation were grouped by (i) photon and bremsstrahlung cross sections, (ii) approximate efficiency improving techniques (AEITs), (iii) variance reduction techniques (VRTs), and (iv) a VRT (bremsstrahlung photon splitting) in combination with an AEIT (charged particle range rejection). The BEAMnrc PHSP file obtained without the efficiency enhancing techniques under study or, when not possible, with their default values (e.g., EXACT algorithm for the boundary crossing algorithm) and with the default cross-section data (PEGS4 and Bethe-Heitler) was used as the "base line" for accuracy verification of the PHSP files generated from the different groups described previously. Subsequently, a selection of the PHSP files was used as input for DOSXYZnrc-based water phantom dose calculations, which were verified against measurements. The performance of the different VRTs and AEITs available in BEAMnrc and of VMC++ was specified by the relative efficiency, i.e., by the efficiency of the MC simulation relative to that of the BEAMnrc base-line calculation. The highest relative efficiencies were ≈935 (≈111 min on a single 2.6 GHz processor) and ≈200 (≈45 min on a single processor) for the 10×10 cm² field size with 50 million histories and the 40×40 cm² field size with 100 million histories, respectively, using the VRT directional bremsstrahlung splitting (DBS) with no electron splitting.

  5. Fast, accurate photon beam accelerator modeling using BEAMnrc: A systematic investigation of efficiency enhancing methods and cross-section data

    PubMed Central

    Fragoso, Margarida; Kawrakow, Iwan; Faddegon, Bruce A.; Solberg, Timothy D.; Chetty, Indrin J.

    2009-01-01

    In this work, an investigation of efficiency enhancing methods and cross-section data in the BEAMnrc Monte Carlo (MC) code system is presented. Additionally, BEAMnrc was compared with VMC++, another special-purpose MC code system that has recently been enhanced for the simulation of the entire treatment head. BEAMnrc and VMC++ were used to simulate a 6 MV photon beam from a Siemens Primus linear accelerator (linac) and phase space (PHSP) files were generated at 100 cm source-to-surface distance for the 10×10 and 40×40 cm2 field sizes. The BEAMnrc parameters∕techniques under investigation were grouped by (i) photon and bremsstrahlung cross sections, (ii) approximate efficiency improving techniques (AEITs), (iii) variance reduction techniques (VRTs), and (iv) a VRT (bremsstrahlung photon splitting) in combination with an AEIT (charged particle range rejection). The BEAMnrc PHSP file obtained without the efficiency enhancing techniques under study or, when not possible, with their default values (e.g., EXACT algorithm for the boundary crossing algorithm) and with the default cross-section data (PEGS4 and Bethe–Heitler) was used as the “base line” for accuracy verification of the PHSP files generated from the different groups described previously. Subsequently, a selection of the PHSP files was used as input for DOSXYZnrc-based water phantom dose calculations, which were verified against measurements. The performance of the different VRTs and AEITs available in BEAMnrc and of VMC++ was specified by the relative efficiency, i.e., by the efficiency of the MC simulation relative to that of the BEAMnrc base-line calculation. The highest relative efficiencies were ∼935 (∼111 min on a single 2.6 GHz processor) and ∼200 (∼45 min on a single processor) for the 10×10 field size with 50 million histories and 40×40 cm2 field size with 100 million histories, respectively, using the VRT directional bremsstrahlung splitting (DBS) with no electron splitting. When

  6. Stability of CIGS solar cells and component materials evaluated by a step-stress accelerated degradation test method

    NASA Astrophysics Data System (ADS)

    Pern, F. J.; Noufi, R.

    2012-10-01

    A step-stress accelerated degradation testing (SSADT) method was employed for the first time to evaluate the stability of CuInGaSe2 (CIGS) solar cells and device component materials in four Al-framed test structures encapsulated with an edge sealant and three kinds of backsheet or moisture barrier film for moisture ingress control. The SSADT exposure used a 15°C and then a 15% relative humidity (RH) increment step, beginning from 40°C/40%RH (T/RH = 40/40) to 85°C/70%RH (85/70) as of the moment. The voluminous data acquired and processed as of total DH = 3956 h with 85/70 = 704 h produced the following results. The best CIGS solar cells in sample Set-1 with a moisture-permeable TPT backsheet showed essentially identical I-V degradation trend regardless of the Al-doped ZnO (AZO) layer thickness ranging from standard 0.12 μm to 0.50 μm on the cells. No clear "stepwise" feature in the I-V parameter degradation curves corresponding to the SSADT T/RH/time profile was observed. Irregularity in I-V performance degradation pattern was observed with some cells showing early degradation at low T/RH < 55/55 and some showing large Voc, FF, and efficiency degradation due to increased series Rs (ohm-cm2) at T/RH >= 70/70. Results of (electrochemical) impedance spectroscopy (ECIS) analysis indicate degradation of the CIGS solar cells corresponded to increased series resistance Rs (ohm) and degraded parallel (minority carrier diffusion/recombination) resistance Rp, capacitance C, overall time constant Rp*C, and "capacitor quality" factor (CPE-P), which were related to the cells' p-n junction properties. Heating at 85/70 appeared to benefit the CIGS solar cells as indicated by the largely recovered CPE-P factor. Device component materials, Mo on soda lime glass (Mo/SLG), bilayer ZnO (BZO), AlNi grid contact, and CdS/CIGS/Mo/SLG in test structures with TPT showed notable to significant degradation at T/RH >= 70/70. At T/RH = 85/70, substantial blistering of BZO layers on CIGS

  7. Particle acceleration and reconnection in the solar wind

    NASA Astrophysics Data System (ADS)

    Zank, G. P.; Hunana, P.; Mostafavi, P.; le Roux, J. A.; Webb, G. M.; Khabarova, O.; Cummings, A. C.; Stone, E. C.; Decker, R. B.

    2016-03-01

    An emerging paradigm for the dissipation of magnetic turbulence in the supersonic solar wind is via localized quasi-2D small-scale magnetic island reconnection processes. An advection-diffusion transport equation for a nearly isotropic particle distribution describes particle transport and energization in a region of interacting magnetic islands [1; 2]. The dominant charged particle energization processes are 1) the electric field induced by quasi-2D magnetic island merging, and 2) magnetic island contraction. The acceleration of charged particles in a "sea of magnetic islands" in a super-Alfvénic flow, and the energization of particles by combined diffusive shock acceleration (DSA) and downstream magnetic island reconnection processes are discussed.

  8. A GPU accelerated, discrete time random walk model for simulating reactive transport in porous media using colocation probability function based reaction methods

    NASA Astrophysics Data System (ADS)

    Barnard, J. M.; Augarde, C. E.

    2012-12-01

    The simulation of reactions in flow through unsaturated porous media is a more complicated process when using particle tracking based models than in continuum based models. In the former, particles are reacted on an individual particle-to-particle basis using either deterministic or probabilistic methods. This means that particle tracking methods, especially when simulations of reactions are included, are computationally intensive, as the reaction simulations require tens of thousands of nearest neighbour searches per time step. Despite this, particle tracking methods merit further study due to their ability to eliminate numerical dispersion and to simulate anomalous transport and incomplete mixing of reactive solutes. A new model has been developed using discrete time random walk particle tracking methods to simulate reactive mass transport in porous media, which includes a variation on the colocation probability function based reaction methods presented by Benson & Meerschaert (2008). Model development has also included code acceleration via graphics processing units (GPUs). The nature of particle tracking methods means that they are well suited to parallelization using GPUs. The architecture of GPUs is single instruction, multiple data (SIMD). This means that only one operation can be performed at any one time but can be performed on multiple data simultaneously. This allows for significant speed gains where long loops of independent operations are performed. Computationally expensive code elements, such as the nearest neighbour searches required by the reaction simulation, are therefore prime targets for GPU acceleration.
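
    As a hedged illustration of the ingredients described here (a random-walk advection-diffusion step followed by nearest-neighbour, colocation-probability reactions), a small serial sketch is given below; the kernel shape, reaction rule, and parameter values are assumptions, and the GPU port itself is not shown.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)
n_a = n_b = 2000
D, v, dt = 1e-9, 1e-6, 1.0          # diffusion (m^2/s), advection (m/s), time step (s); assumed
k_react, radius = 0.5, 2e-5         # assumed reaction probability scale and interaction radius (m)

A = rng.uniform(0, 1e-3, (n_a, 2))  # species A and B particle positions in a 1 mm box
B = rng.uniform(0, 1e-3, (n_b, 2))

def step(p):
    """Discrete-time random walk: advect in x, diffuse isotropically."""
    p[:, 0] += v * dt
    p += np.sqrt(2.0 * D * dt) * rng.standard_normal(p.shape)
    return p

for _ in range(100):
    A, B = step(A), step(B)
    tree = cKDTree(B)                               # nearest-neighbour search over B particles
    dist, j = tree.query(A, k=1)
    # colocation-style rule: reaction probability decays with A-B separation
    p_react = k_react * np.exp(-(dist / radius) ** 2)
    hit = rng.random(len(A)) < p_react
    # remove reacted pairs (the case of one B claimed by two A's is ignored for brevity)
    A = A[~hit]
    B = np.delete(B, np.unique(j[hit]), axis=0)

print("A remaining:", len(A), " B remaining:", len(B))
```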

  9. Accelerated Reader.

    ERIC Educational Resources Information Center

    Education Commission of the States, Denver, CO.

    This paper provides an overview of Accelerated Reader, a system of computerized testing and record-keeping that supplements the regular classroom reading program. Accelerated Reader's primary goal is to increase literature-based reading practice. The program offers a computer-aided reading comprehension and management program intended to motivate…

  10. Accelerated/abbreviated test methods, study 4 of task 3 (encapsulation) of the low-cost silicon solar array project

    NASA Technical Reports Server (NTRS)

    Kolyer, J. M.; Mann, N. R.

    1978-01-01

    Inherent weatherability is controlled by the three weather factors common to all exposure sites: insolation, temperature, and humidity. Emphasis was focused on the transparent encapsulant portion of miniature solar cell arrays by eliminating weathering effects on the substrate and circuitry (which are also parts of the encapsulant system). The most extensive data were for yellowing, which were measured conveniently and precisely. Considerable data also were obtained on tensile strength. Changes in these two properties after outdoor exposure were predicted very well from accelerated exposure data.

  11. Cascaded radiation pressure acceleration

    SciTech Connect

    Pei, Zhikun; Shen, Baifei E-mail: zhxm@siom.ac.cn; Zhang, Xiaomei E-mail: zhxm@siom.ac.cn; Wang, Wenpeng; Zhang, Lingang; Yi, Longqing; Shi, Yin; Xu, Zhizhan

    2015-07-15

    A cascaded radiation-pressure acceleration scheme is proposed. When an energetic proton beam is injected into an electrostatic field moving at light speed in a foil accelerated by light pressure, protons can be re-accelerated to much higher energy. An initial 3-GeV proton beam can be re-accelerated to 7 GeV while its energy spread is narrowed significantly, indicating a 4-GeV energy gain for one acceleration stage, as shown in one-dimensional simulations and analytical results. The validity of the method is further confirmed by two-dimensional simulations. This scheme provides a way to scale proton energy at the GeV level linearly with laser energy and is promising to obtain proton bunches at tens of gigaelectron-volts.

  12. What can we learn from inverse methods regarding the processes behind the acceleration and retreat of Helheim glacier (Greenland)?

    NASA Astrophysics Data System (ADS)

    Gagliardini, O.; Gillet-chaulet, F.; Martin, N.; Monnier, J.; Singh, J.

    2011-12-01

    Greenland outlet glaciers control the ice discharge toward the sea and the resulting contribution to sea level rise. The physical processes at the root of the observed acceleration and retreat (a decrease of the back force at the calving terminus, an increase of basal lubrication, and a decrease of lateral friction) are still not well understood. All three processes certainly play a role, but their relative contributions have not yet been quantified. Helheim glacier, located on the east coast of Greenland, has undergone an enhanced retreat since 2003, and this retreat was concurrent with accelerated ice flow. In this study, the flowline dataset including surface elevation, surface velocity and front position of Helheim from 2001 to 2006 is used to quantify the sensitivity to each of these processes. For this, we used the full-Stokes finite element ice flow model DassFlow/Ice, including adjoint code and a full 4D-Var data assimilation process in which the control variables are the basal and lateral friction parameters as well as the calving-front pressure. For each available date, the sensitivity to each process is first studied and an optimal distribution is then inferred from the surface measurements. Using this optimal distribution of the parameters, a transient simulation is performed over the whole dataset period. The relative contributions of basal friction, lateral friction and front back force are then discussed in the light of these new results.

  13. Contrast enhanced diffusion NMR: quantifying impurities in block copolymers for DSA

    NASA Astrophysics Data System (ADS)

    Wojtecki, Rudy; Porath, Ellie; Vora, Ankit; Nelson, Alshakim; Sanders, Daniel

    2016-03-01

    Block copolymers (BCPs) offer the potential to meet the demands of next-generation lithographic materials as they can self-assemble into scalable and tailorable nanometer-scale patterns. For these materials to find widespread adoption many challenges remain, including reproducible thin-film morphology, for which the purity of block copolymers is critical. One source of impurities is the reaction conditions used to synthesize block copolymers, which may result in the formation of homopolymer as a side product that can impact the quality and morphology of self-assembled features. Detection and characterization of these homopolymer impurities can be challenging by traditional methods of polymer characterization. We discuss an alternative NMR-based method for the detection of homopolymer impurities in block copolymers: contrast-enhanced diffusion ordered spectroscopy (CEDOSY). This experimental technique measures the diffusion coefficient of polymeric materials in solution, allowing the 'virtual' or spectroscopic separation of BCPs from homopolymer impurities. Furthermore, the contrast between the diffusion coefficients of BCPs and homopolymer impurities can be enhanced by taking advantage of the chemical mismatch of the two blocks to effectively increase the size of the BCP (and reduce its diffusion coefficient) through the formation of micelles using a cosolvent, while the size and diffusion coefficient of the homopolymer impurities remain unchanged. This enables the spectroscopic separation of even small amounts of homopolymer impurities that are similar in size to the BCPs. Herein, we present results using the CEDOSY technique with both a first-generation BCP system, poly(styrene)-b-poly(methyl methacrylate), and a second-generation high-χ system.
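
    The contrast enhancement rests on the Stokes-Einstein relation D = k_B T / (6 π η R_h): growing the copolymer into a micelle lowers its diffusion coefficient while the homopolymer's stays essentially unchanged. A tiny numerical illustration, using purely hypothetical hydrodynamic radii and a water-like solvent viscosity, follows.

```python
import math

# Stokes-Einstein estimate of diffusion contrast before and after micellization.
kB, T, eta = 1.380649e-23, 298.15, 8.9e-4          # J/K, K, Pa*s (water-like viscosity, assumed)

def D(radius_nm):
    """Diffusion coefficient (m^2/s) for a sphere of the given hydrodynamic radius."""
    return kB * T / (6.0 * math.pi * eta * radius_nm * 1e-9)

r_unimer, r_micelle, r_homopolymer = 5.0, 25.0, 5.0   # hypothetical radii in nm
print("contrast before micellization:", D(r_homopolymer) / D(r_unimer))    # ~1, hard to resolve
print("contrast after micellization: ", D(r_homopolymer) / D(r_micelle))   # ~5, resolvable in DOSY
```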

  14. KEK digital accelerator

    NASA Astrophysics Data System (ADS)

    Iwashita, T.; Adachi, T.; Takayama, K.; Leo, K. W.; Arai, T.; Arakida, Y.; Hashimoto, M.; Kadokura, E.; Kawai, M.; Kawakubo, T.; Kubo, Tomio; Koyama, K.; Nakanishi, H.; Okazaki, K.; Okamura, K.; Someya, H.; Takagi, A.; Tokuchi, A.; Wake, M.

    2011-07-01

    The High Energy Accelerator Research Organization KEK digital accelerator (KEK-DA) is a renovation of the KEK 500 MeV booster proton synchrotron, which was shut down in 2006. The existing 40 MeV drift tube linac and rf cavities have been replaced by an electron cyclotron resonance (ECR) ion source embedded in a 200 kV high-voltage terminal and induction acceleration cells, respectively. A DA is, in principle, capable of accelerating any species of ion in all possible charge states. The KEK-DA is characterized by specific accelerator components such as a permanent magnet X-band ECR ion source, a low-energy transport line, an electrostatic injection kicker, an extraction septum magnet operated in air, combined-function main magnets, and an induction acceleration system. The induction acceleration method, integrating modern pulse power technology and state-of-the-art digital control, is crucial for the rapid-cycle KEK-DA. The key issues of beam dynamics associated with low-energy injection of heavy ions are beam loss caused by electron capture and stripping as a result of interactions with residual gas molecules, and the closed orbit distortion resulting from relatively high remanent fields in the bending magnets. Attractive applications of this accelerator in materials and biological sciences are discussed.

  15. Soiling of building envelope surfaces and its effect on solar reflectance – Part II: Development of an accelerated aging method for roofing materials

    SciTech Connect

    Sleiman, Mohamad; Kirchstetter, Thomas W.; Berdahl, Paul; Gilbert, Haley E.; Quelen, Sarah; Marlot, Lea; Preble, Chelsea V.; Chen, Sharon; Montalbano, Amandine; Rosseler, Olivier; Akbari, Hashem; Levinson, Ronnen; Destaillats, Hugo

    2014-01-09

    Highly reflective roofs can decrease the energy required for building air conditioning, help mitigate the urban heat island effect, and slow global warming. However, these benefits are diminished by soiling and weathering processes that reduce the solar reflectance of most roofing materials. Soiling results from the deposition of atmospheric particulate matter and the growth of microorganisms, each of which absorb sunlight. Weathering of materials occurs with exposure to water, sunlight, and high temperatures. This study developed an accelerated aging method that incorporates features of soiling and weathering. The method sprays a calibrated aqueous soiling mixture of dust minerals, black carbon, humic acid, and salts onto preconditioned coupons of roofing materials, then subjects the soiled coupons to cycles of ultraviolet radiation, heat and water in a commercial weatherometer. Three soiling mixtures were optimized to reproduce the site-specific solar spectral reflectance features of roofing products exposed for 3 years in a hot and humid climate (Miami, Florida); a hot and dry climate (Phoenix, Arizona); and a polluted atmosphere in a temperate climate (Cleveland, Ohio). A fourth mixture was designed to reproduce the three-site average values of solar reflectance and thermal emittance attained after 3 years of natural exposure, which the Cool Roof Rating Council (CRRC) uses to rate roofing products sold in the US. This accelerated aging method was applied to 25 products (single ply membranes, factory and field applied coatings, tiles, modified bitumen cap sheets, and asphalt shingles) and reproduced in 3 days the CRRC's 3-year aged values of solar reflectance. In conclusion, this accelerated aging method can be used to speed the evaluation and rating of new cool roofing materials.

  16. EVIDENCE FOR PARTICLE ACCELERATION TO THE KNEE OF THE COSMIC RAY SPECTRUM IN TYCHO'S SUPERNOVA REMNANT

    SciTech Connect

    Eriksen, Kristoffer A.; Hughes, John P.; Badenes, Carles; Fesen, Robert; Ghavamian, Parviz; Moffett, David; Plucinksy, Paul P.; Slane, Patrick; Rakowski, Cara E.; Reynoso, Estela M.

    2011-02-20

    Supernova remnants (SNRs) have long been assumed to be the source of cosmic rays (CRs) up to the 'knee' of the CR spectrum at 10^15 eV, accelerating particles to relativistic energies in their blast waves by the process of diffusive shock acceleration (DSA). Since CR nuclei do not radiate efficiently, their presence must be inferred indirectly. Previous theoretical calculations and X-ray observations show that CR acceleration significantly modifies the structure of the SNR and greatly amplifies the interstellar magnetic field. We present new, deep X-ray observations of the remnant of Tycho's supernova (SN 1572, henceforth Tycho), which reveal a previously unknown, strikingly ordered pattern of non-thermal high-emissivity stripes in the projected interior of the remnant, with spacing that corresponds to the gyroradii of 10^14-10^15 eV protons. Spectroscopy of the stripes shows the plasma to be highly turbulent on the (smaller) scale of the Larmor radii of TeV energy electrons. Models of the shock amplification of magnetic fields produce structure on the scale of the gyroradius of the highest energy CRs present, but they do not predict the highly ordered pattern we observe. We interpret the stripes as evidence for acceleration of particles to near the knee of the CR spectrum in regions of enhanced magnetic turbulence, while the observed highly ordered pattern of these features provides a new challenge to models of DSA.

  17. Evidence for Particle Acceleration to the Knee of the Cosmic Ray Spectrum in Tycho's Supernova Remnant

    NASA Astrophysics Data System (ADS)

    Eriksen, Kristoffer A.; Hughes, John P.; Badenes, Carles; Fesen, Robert; Ghavamian, Parviz; Moffett, David; Plucinksy, Paul P.; Rakowski, Cara E.; Reynoso, Estela M.; Slane, Patrick

    2011-02-01

    Supernova remnants (SNRs) have long been assumed to be the source of cosmic rays (CRs) up to the "knee" of the CR spectrum at 10^15 eV, accelerating particles to relativistic energies in their blast waves by the process of diffusive shock acceleration (DSA). Since CR nuclei do not radiate efficiently, their presence must be inferred indirectly. Previous theoretical calculations and X-ray observations show that CR acceleration significantly modifies the structure of the SNR and greatly amplifies the interstellar magnetic field. We present new, deep X-ray observations of the remnant of Tycho's supernova (SN 1572, henceforth Tycho), which reveal a previously unknown, strikingly ordered pattern of non-thermal high-emissivity stripes in the projected interior of the remnant, with spacing that corresponds to the gyroradii of 10^14-10^15 eV protons. Spectroscopy of the stripes shows the plasma to be highly turbulent on the (smaller) scale of the Larmor radii of TeV energy electrons. Models of the shock amplification of magnetic fields produce structure on the scale of the gyroradius of the highest energy CRs present, but they do not predict the highly ordered pattern we observe. We interpret the stripes as evidence for acceleration of particles to near the knee of the CR spectrum in regions of enhanced magnetic turbulence, while the observed highly ordered pattern of these features provides a new challenge to models of DSA.

  18. TIGER2 with solvent energy averaging (TIGER2A): An accelerated sampling method for large molecular systems with explicit representation of solvent.

    PubMed

    Li, Xianfeng; Snyder, James A; Stuart, Steven J; Latour, Robert A

    2015-10-14

    The recently developed "temperature intervals with global exchange of replicas" (TIGER2) accelerated sampling method is found to have inaccuracies when applied to systems with explicit solvation. This inaccuracy is due to the energy fluctuations of the solvent, which cause the sampling method to be less sensitive to the energy fluctuations of the solute. In the present work, the problem of the TIGER2 method is addressed in detail and a modification to the sampling method is introduced to correct this problem. The modified method is called "TIGER2 with solvent energy averaging," or TIGER2A. This new method overcomes the sampling problem with the TIGER2 algorithm and is able to closely approximate Boltzmann-weighted sampling of molecular systems with explicit solvation. The difference in performance between the TIGER2 and TIGER2A methods is demonstrated by comparing them against analytical results for simple one-dimensional models, against replica exchange molecular dynamics (REMD) simulations for sampling the conformation of alanine dipeptide and the folding behavior of (AAQAA)3 peptide in aqueous solution, and by comparing their performance in sampling the behavior of hen egg-white lysozyme in aqueous solution. The new TIGER2A method solves the problem caused by solvent energy fluctuations in TIGER2 while maintaining the two important characteristics of TIGER2, i.e., (1) using multiple replicas sampled at different temperature levels to help systems efficiently escape from local potential energy minima and (2) enabling the number of replicas used for a simulation to be independent of the size of the molecular system, thus providing an accelerated sampling method that can be used to efficiently sample systems considered too large for the application of conventional temperature REMD. PMID:26472361

  19. TIGER2 with solvent energy averaging (TIGER2A): An accelerated sampling method for large molecular systems with explicit representation of solvent

    NASA Astrophysics Data System (ADS)

    Li, Xianfeng; Snyder, James A.; Stuart, Steven J.; Latour, Robert A.

    2015-10-01

    The recently developed "temperature intervals with global exchange of replicas" (TIGER2) accelerated sampling method is found to have inaccuracies when applied to systems with explicit solvation. This inaccuracy is due to the energy fluctuations of the solvent, which cause the sampling method to be less sensitive to the energy fluctuations of the solute. In the present work, the problem of the TIGER2 method is addressed in detail and a modification to the sampling method is introduced to correct this problem. The modified method is called "TIGER2 with solvent energy averaging," or TIGER2A. This new method overcomes the sampling problem with the TIGER2 algorithm and is able to closely approximate Boltzmann-weighted sampling of molecular systems with explicit solvation. The difference in performance between the TIGER2 and TIGER2A methods is demonstrated by comparing them against analytical results for simple one-dimensional models, against replica exchange molecular dynamics (REMD) simulations for sampling the conformation of alanine dipeptide and the folding behavior of (AAQAA)3 peptide in aqueous solution, and by comparing their performance in sampling the behavior of hen egg-white lysozyme in aqueous solution. The new TIGER2A method solves the problem caused by solvent energy fluctuations in TIGER2 while maintaining the two important characteristics of TIGER2, i.e., (1) using multiple replicas sampled at different temperature levels to help systems efficiently escape from local potential energy minima and (2) enabling the number of replicas used for a simulation to be independent of the size of the molecular system, thus providing an accelerated sampling method that can be used to efficiently sample systems considered too large for the application of conventional temperature REMD.
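
    The central idea is that a time-averaged solvent energy, rather than an instantaneous one, enters the acceptance decision, so solvent fluctuations no longer drown out the solute's. The snippet below is a generic Metropolis-style illustration of that idea under stated assumptions; the exact energy decomposition and replica bookkeeping of TIGER2A follow the paper, not this sketch.

```python
import numpy as np

kB = 0.0019872041  # Boltzmann constant in kcal/(mol*K)

def metropolis_accept(e_candidate, e_reference, T_base, rng=np.random.default_rng()):
    """Standard Metropolis test at the baseline temperature T_base (K)."""
    delta = e_candidate - e_reference
    return delta <= 0.0 or rng.random() < np.exp(-delta / (kB * T_base))

def effective_energy(e_solute, e_solvent_series):
    """Instantaneous solute energy plus a time-averaged solvent energy; an illustrative
    stand-in for the solvent-energy averaging idea, not the paper's exact decomposition."""
    return e_solute + float(np.mean(e_solvent_series))

# Toy usage: decide whether a replica quenched from a higher temperature level replaces
# the baseline replica (all energies in kcal/mol are made-up numbers).
e_cand = effective_energy(-120.3, [-4510.2, -4498.7, -4505.1])
e_ref = effective_energy(-118.9, [-4502.8, -4507.5, -4500.9])
print("accept candidate:", metropolis_accept(e_cand, e_ref, T_base=300.0))
```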

  20. A multimodality vascular imaging phantom with fiducial markers visible in DSA, CTA, MRA, and ultrasound.

    PubMed

    Cloutier, Guy; Soulez, Gilles; Qanadli, Salah D; Teppaz, Pierre; Allard, Louise; Qin, Zhao; Cloutier, François; Durand, Louis-Gilles

    2004-06-01

    The objective was to design a vascular phantom compatible with digital subtraction angiography, computerized tomography angiography, ultrasound and magnetic resonance angiography (MRA). Fiducial markers were implanted at precise known locations in the phantom to facilitate identification and orientation of plane views from three-dimensional (3-D) reconstructed images. A vascular conduit connected to tubing at the extremities of the phantom ran through an agar-based gel filling it. A vessel wall in latex was included around the conduit to avoid diffusion of contrast agents. Using a lost-material casting technique based on a low melting point metal, geometries of pathological vessels were modeled. During the experimental testing, fiducial markers were detectable in all modalities without distortion. No leak of gadolinium through the vascular wall was observed on MRA after 5 hours. Moreover, no significant deformation of the vascular conduit was noted during the fabrication process (confirmed by microtome slicing along the vessel). The potential use of the phantom for calibration, rescaling, and fusion of 3-D images obtained from the different modalities as well as its use for the evaluation of intra- and inter-modality comparative studies of imaging systems are discussed. In conclusion, the vascular phantom can allow accurate calibration of radiological imaging devices based on x-ray, magnetic resonance and ultrasound and quantitative comparisons of the geometric accuracy of the vessel lumen obtained with each of these methods on a given well defined 3-D geometry. PMID:15259645

  1. Finite Element Method Simulation of a New One-Chip-Style Quartz Crystal Motion Sensor with Two Functions of Gyro and Acceleration Detection

    NASA Astrophysics Data System (ADS)

    Koitabashi, Tatsuo; Kudo, Seiichi; Okada, Shigeya; Tomikawa, Yoshiro

    2001-09-01

    In this study, a new one-chip-style quartz crystal motion sensor which detects one-axis angular velocity and one-axis acceleration is proposed. Some characteristics of the sensor are simulated by the finite element method, along with some simulations of vibrational characteristics. The sensor is intended for use as a small wristwatch-type instrumentation unit to monitor motions of the human body. The dimensions of the prototype sensor are 16 mm in length, 6 mm in width and 0.3 mm in thickness. The sensor consists of two parts with different functions; one part is a flatly supported vibratory gyrosensor using a quartz crystal trident-type tuning fork resonator and the other is a frequency-changeable type acceleration sensor. The simulation results show that the gyrosensor part has good linearity of sensitivity, although it is also sensitive to an angular velocity component that it is fundamentally not intended to detect. It also shows good linearity of sensitivity for acceleration detection.

  2. Coarse-mesh diffusion synthetic acceleration of the scattering source iteration scheme for one-speed slab-geometry discrete ordinates problems

    NASA Astrophysics Data System (ADS)

    Santos, Frederico P.; Filho, Hermes Alves; Barros, Ricardo C.

    2013-10-01

    The scattering source iterative (SI) scheme is traditionally applied to converge fine-mesh numerical solutions to fixed-source discrete ordinates (SN) neutron transport problems. The SI scheme is very simple to implement from a computational viewpoint. However, the SI scheme may show a very slow convergence rate, mainly for diffusive media (low absorption) with several mean free paths in extent (low leakage). In this work we describe an acceleration technique based on an improved initial guess for the scattering source distribution within the slab. In other words, we use as the initial guess for the fine-mesh scattering source the coarse-mesh solution of the neutron diffusion equation with special boundary conditions to account for the classical SN prescribed boundary conditions, including vacuum boundary conditions. Therefore, we first implement a spectral nodal method that generates a coarse-mesh diffusion solution that is completely free from spatial truncation errors, and then we reconstruct this coarse-mesh solution within each spatial cell of the discretization grid to yield the initial guess for the fine-mesh scattering source in the first SN transport sweep (forward and backward) across the spatial grid. We consider a number of numerical experiments to illustrate the efficiency of the offered diffusion synthetic acceleration (DSA) technique.
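
    The acceleration described here amounts to warm-starting the scattering-source iteration. A minimal one-speed slab SN sketch of that idea follows; a crude infinite-medium flux estimate stands in for the coarse-mesh spectral nodal diffusion solve of the abstract, and all problem data are illustrative.

```python
import numpy as np

# 1-D slab, one-speed SN source iteration with diamond-difference sweeps.
sigma_t, sigma_s, q = 1.0, 0.9, 1.0          # total/scattering cross sections, flat fixed source
width, n_cells, n_ang = 20.0, 400, 8         # slab width (cm), mesh cells, S8 quadrature
dx = width / n_cells
mu, w = np.polynomial.legendre.leggauss(n_ang)

def sweep(src):
    """One transport sweep in both directions for a given isotropic emission density src."""
    phi = np.zeros(n_cells)
    for m in range(n_ang):
        psi_in = 0.0                          # vacuum boundaries
        cells = range(n_cells) if mu[m] > 0 else reversed(range(n_cells))
        for i in cells:
            psi_avg = (abs(mu[m]) * psi_in + 0.5 * dx * src[i]) / (abs(mu[m]) + 0.5 * dx * sigma_t)
            psi_in = 2.0 * psi_avg - psi_in   # diamond-difference outgoing angular flux
            phi[i] += w[m] * psi_avg
    return phi

def source_iteration(phi0, tol=1e-6, max_it=2000):
    phi = phi0.copy()
    for it in range(1, max_it + 1):
        phi_new = sweep(0.5 * (sigma_s * phi + q))   # isotropic scattering + fixed source
        if np.max(np.abs(phi_new - phi)) < tol * np.max(np.abs(phi_new)):
            return phi_new, it
        phi = phi_new
    return phi, max_it

flat_guess = np.zeros(n_cells)
warm_guess = np.full(n_cells, q / (sigma_t - sigma_s))   # infinite-medium flux as a diffusion-like guess
_, it_flat = source_iteration(flat_guess)
_, it_warm = source_iteration(warm_guess)
print(f"iterations from zero guess: {it_flat}, from diffusion-like guess: {it_warm}")
```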

  3. LINEAR ACCELERATOR

    DOEpatents

    Colgate, S.A.

    1958-05-27

    An improvement is presented in linear accelerators for charged particles with respect to stable focusing of the particle beam. The improvement consists of providing a radial electric field transverse to the accelerating electric fields and introducing the beam of particles into the field at an angle. The result of the foregoing is a beam which spirals about the axis of the acceleration path. The combination of the electric fields and the angular motion of the particles provides a stable and focused particle beam.

  4. Development of an accelerated solvent extraction, ultrasonic derivatisation LC-MS/MS method for the determination of the marker residues of nitrofurans in freshwater fish.

    PubMed

    Tao, Yanfei; Chen, Dongmei; Wei, Huimin; Yuanhu, Pan; Liu, Zhenli; Huang, Lingli; Wang, Yulian; Xie, Shuyu; Yuan, Zonghui

    2012-01-01

    A rapid method using accelerated solvent extraction (ASE) and ultrasound-enhanced derivatisation has been developed for the quantitative determination of metabolites of nitrofurans, namely 3-amino-2-oxazolidinone (AOZ), 5-morpholinomethyl-3-amino-2-oxazolidinone (AMOZ), 1-aminohydantoin (AHD) and semicarbazide (SEM), in muscle and skin of carp and finless eel. The target analytes were extracted using ASE, derivatised ultrasonically for 1 h and then purified by solid phase extraction. Averaged decision limits (CCα) and detection capabilities (CCβ) of the method were in the range of 0.07-0.13 and 0.31-0.49 µg kg⁻¹ in carp and finless eel, respectively. The accuracy in terms of recovery was in the range 77.2-97.4%. The simplified and traditional methods were compared with incurred residue samples. The simplified method reduced the derivatisation time and has been applied to the determination of nitrofuran residues in fish. PMID:22320705

  5. Fast perspective volume ray casting method using GPU-based acceleration techniques for translucency rendering in 3D endoluminal CT colonography.

    PubMed

    Lee, Taek-Hee; Lee, Jeongjin; Lee, Ho; Kye, Heewon; Shin, Yeong Gil; Kim, Soo Hong

    2009-08-01

    Recent advances in graphics processing unit (GPU) have enabled direct volume rendering at interactive rates. However, although perspective volume rendering for opaque isosurface is rapidly performed using conventional GPU-based method, perspective volume rendering for non-opaque volume such as translucency rendering is still slow. In this paper, we propose an efficient GPU-based acceleration technique of fast perspective volume ray casting for translucency rendering in computed tomography (CT) colonography. The empty space searching step is separated from the shading and compositing steps, and they are divided into separate processing passes in the GPU. Using this multi-pass acceleration, empty space leaping is performed exactly at the voxel level rather than at the block level, so that the efficiency of empty space leaping is maximized for colon data set, which has many curved or narrow regions. In addition, the numbers of shading and compositing steps are fixed, and additional empty space leapings between colon walls are performed to increase computational efficiency further near the haustral folds. Experiments were performed to illustrate the efficiency of the proposed scheme compared with the conventional GPU-based method, which has been known to be the fastest algorithm. The experimental results showed that the rendering speed of our method was 7.72 fps for translucency rendering of 1024×1024 colonoscopy image, which was about 3.54 times faster than that of the conventional method. Since our method performed the fully optimized empty space leaping for any kind of colon inner shapes, the frame-rate variations of our method were about two times smaller than that of the conventional method to guarantee smooth navigation. The proposed method could be successfully applied to help diagnose colon cancer using translucency rendering in virtual colonoscopy. PMID:19541296
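
    Per-ray, the technique combines front-to-back compositing with voxel-level empty-space leaping and early ray termination. A toy single-ray sketch of those three ingredients is shown below; it is a serial stand-in, not the multi-pass GPU implementation described in the paper, and all thresholds are assumptions.

```python
def composite_ray(density, color, opacity_scale=0.05, early_term=0.99):
    """Front-to-back compositing along one ray with empty-space leaping.

    density/color: per-sample values along the ray (lists or arrays of equal length).
    Returns accumulated color and opacity.
    """
    acc_c, acc_a = 0.0, 0.0
    for rho, c in zip(density, color):
        if rho == 0.0:                       # empty space: leap over the sample entirely
            continue
        a = min(1.0, rho * opacity_scale)    # sample opacity from density (assumed transfer function)
        acc_c += (1.0 - acc_a) * a * c       # front-to-back compositing
        acc_a += (1.0 - acc_a) * a
        if acc_a >= early_term:              # early ray termination once nearly opaque
            break
    return acc_c, acc_a

# Toy usage: a ray crossing empty lumen, then soft tissue, then a dense fold.
density = [0.0] * 50 + [2.0] * 10 + [8.0] * 5
color = [0.0] * 50 + [0.8] * 10 + [0.3] * 5
print(composite_ray(density, color))
```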

  6. HEAVY ION LINEAR ACCELERATOR

    DOEpatents

    Van Atta, C.M.; Beringer, R.; Smith, L.

    1959-01-01

    A linear accelerator of heavy ions is described. The basic contributions of the invention consist of a method and apparatus for obtaining high energy particles of an element with an increased charge-to-mass ratio. The method comprises the steps of ionizing the atoms of an element, accelerating the resultant ions to an energy substantially equal to one Mev per nucleon, stripping orbital electrons from the accelerated ions by passing the ions through a curtain of elemental vapor disposed transversely of the path of the ions to provide a second charge-to-mass ratio, and finally accelerating the resultant stripped ions to a final energy of at least ten Mev per nucleon.

  7. Acceleration switch

    DOEpatents

    Abbin, J.P. Jr.; Devaney, H.F.; Hake, L.W.

    1979-08-29

    The disclosure relates to an improved integrating acceleration switch of the type having a mass suspended within a fluid filled chamber, with the motion of the mass initially opposed by a spring and subsequently not so opposed.

  8. Acceleration switch

    DOEpatents

    Abbin, Jr., Joseph P.; Devaney, Howard F.; Hake, Lewis W.

    1982-08-17

    The disclosure relates to an improved integrating acceleration switch of the type having a mass suspended within a fluid filled chamber, with the motion of the mass initially opposed by a spring and subsequently not so opposed.

  9. ION ACCELERATOR

    DOEpatents

    Bell, J.S.

    1959-09-15

    An arrangement for the drift tubes in a linear accelerator is described whereby each drift tube acts to shield the particles from the influence of the accelerating field and focuses the particles passing through the tube. In one embodiment the drift tube is split longitudinally into quadrants supported along the axis of the accelerator by webs from a yoke, the quadrants, webs, and yoke being of magnetic material. A magnetic focusing action is produced by energizing a winding on each web to set up a magnetic field between adjacent quadrants. In the other embodiment the quadrants are electrically insulated from each other and have opposite-polarity voltages on adjacent quadrants to provide an electric focusing field for the particles, with the quadrants spaced sufficiently close to shield the particles within the tube from the accelerating electric field.

  10. LINEAR ACCELERATOR

    DOEpatents

    Christofilos, N.C.; Polk, I.J.

    1959-02-17

    Improvements in linear particle accelerators are described. A drift tube system for a linear ion accelerator reduces gap capacity between adjacent drift tube ends. This is accomplished by reducing the ratio of the diameter of the drift tube to the diameter of the resonant cavity. Concentration of magnetic field intensity at the longitudinal midpoint of the external surface of each drift tube is reduced by increasing the external drift tube diameter at the longitudinal center region.

  11. Shape optimization for DSA

    NASA Astrophysics Data System (ADS)

    Ouaknin, Gaddiel; Laachi, Nabil; Delaney, Kris; Fredrickson, Glenn; Gibou, Frederic

    2016-03-01

    Directed self-assembly using block copolymers for positioning vertical interconnect access in integrated circuits relies on the proper shape of a confined domain in which polymers will self-assemble into the targeted design. Finding that shape, i.e., solving the inverse problem, is currently mainly based on trial and error approaches. We introduce a level-set based algorithm that makes use of a shape optimization strategy coupled with self-consistent field theory to solve the inverse problem in an automated way. It is shown that optimal shapes are found for different targeted topologies with accurate placement and distances between the different components.

  12. Re-acceleration Model for Radio Relics with Spectral Curvature

    NASA Astrophysics Data System (ADS)

    Kang, Hyesung; Ryu, Dongsu

    2016-05-01

    Most of the observed features of radio gischt relics, such as spectral steepening across the relic width and a power-law-like integrated spectrum, can be adequately explained by a diffusive shock acceleration (DSA) model in which relativistic electrons are (re-)accelerated at shock waves induced in the intracluster medium. However, the steep spectral curvature in the integrated spectrum above ∼2 GHz detected in some radio relics, such as the Sausage relic in cluster CIZA J2242.8+5301, may not be interpreted by the simple radiative cooling of postshock electrons. In order to understand such steepening, we consider here a model in which a spherical shock sweeps through and then exits out of a finite-size cloud with fossil relativistic electrons. The ensuing integrated radio spectrum is expected to steepen much more than predicted for aging postshock electrons, since the re-acceleration stops after the cloud-crossing time. Using DSA simulations that are intended to reproduce radio observations of the Sausage relic, we show that both the integrated radio spectrum and the surface brightness profile can be fitted reasonably well, if a shock of speed u_s ∼ 2.5–2.8 × 10^3 km s^-1 and a sonic Mach number M_s ∼ 2.7–3.0 traverses a fossil cloud for ∼45 Myr, and the postshock electrons cool further for another ∼10 Myr. This attempt illustrates that steep curved spectra of some radio gischt relics could be modeled by adjusting the shape of the fossil electron spectrum and adopting the specific configuration of the fossil cloud.
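
    For reference, the standard test-particle DSA relations connect a sonic Mach number like the one quoted above to the shock compression ratio and the injected synchrotron spectral index; the snippet below evaluates those textbook relations only and does not reproduce the paper's time-dependent re-acceleration simulations.

```python
# Test-particle DSA relations for a gamma = 5/3 gas:
# compression ratio r, momentum power-law index q (f(p) ~ p^-q), and
# synchrotron spectral index alpha (S_nu ~ nu^-alpha) at injection.
gamma = 5.0 / 3.0

def dsa_indices(M_s):
    r = (gamma + 1.0) * M_s**2 / ((gamma - 1.0) * M_s**2 + 2.0)   # shock compression ratio
    q = 3.0 * r / (r - 1.0)                                       # momentum spectral index
    alpha = (q - 3.0) / 2.0                                       # injection synchrotron index
    return r, q, alpha

for M in (2.7, 3.0):
    r, q, alpha = dsa_indices(M)
    print(f"M_s = {M}: r = {r:.2f}, q = {q:.2f}, alpha = {alpha:.2f}")
```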

  13. INSTRUMENTS AND METHODS OF INVESTIGATION: Giant pulses of thermal neutrons in large accelerator beam dumps. Possibilities for experiments

    NASA Astrophysics Data System (ADS)

    Stavissky, Yurii Ya

    2006-12-01

    A short review is presented of the development in Russia of intense pulsed neutron sources for physical research: the pulsating fast reactors IBR-1, IBR-30, IBR-2 (Joint Institute for Nuclear Research, Dubna) and the neutron-radiation complex of the Moscow meson factory, the 'Troitsk Trinity' (RAS Institute for Nuclear Research, Troitsk, Moscow region). The possibility of generating giant neutron pulses in beam dumps of superhigh energy accelerators is discussed. In particular, the possibility of producing giant pulsed thermal neutron fluxes in modified beam dumps of the Large Hadron Collider (LHC) under construction at CERN is considered. It is shown that in the case of one-turn extraction of 7-TeV protons accumulated in the LHC main rings onto heavy targets with water or zirconium-hydride moderators placed in the front part of the LHC graphite beam-dump blocks, relatively short (from ~100 µs) thermal neutron pulses with a peak flux density of up to ~10^20 neutrons cm^-2 s^-1 may be produced every 10 hours. The possibility of applying such neutron pulses in physical research is discussed.

  14. Design of new block copolymer systems to achieve thick films with defect-free structures for applications of DSA into lithographic large nodes

    NASA Astrophysics Data System (ADS)

    Chevalier, X.; Coupillaud, P.; Lombard, G.; Nicolet, C.; Beausoleil, J.; Fleury, G.; Zelsmann, M.; Bezard, P.; Cunge, G.; Berron, J.; Sakavuyi, K.; Gharbi, A.; Tiron, R.; Hadziioannou, G.; Navarro, C.; Cayrefourcq, I.

    2016-03-01

    Properties of new block copolymer systems, specifically designed to reach large feature periods, are compared to those exhibited by classical PS-b-PMMA materials of the same dimensions. The studies conducted, including free-surface defect analysis, mild-plasma tomography experiments, graphoepitaxy-guided structures, and etch transfer…, indicate much better performance for the new system than for classical PS-b-PMMA in terms of achievable film thicknesses with perpendicular features, defect levels, and dimensional uniformity. These results clearly highlight unique and original solutions toward an early introduction of DSA technology into large lithographic nodes.

  15. Radioisotope Dating with Accelerators.

    ERIC Educational Resources Information Center

    Muller, Richard A.

    1979-01-01

    Explains a new method of detecting radioactive isotopes by counting their accelerated ions rather than the atoms that decay during the counting period. This method increases the sensitivity by several orders of magnitude, and allows one to find the ages of much older and smaller samples. (GA)

  16. Statistical methods for transverse beam position diagnostics with higher order modes in third harmonic 3.9 GHz superconducting accelerating cavities at FLASH

    NASA Astrophysics Data System (ADS)

    Zhang, Pei; Baboi, Nicoleta; Jones, Roger M.

    2014-01-01

    Beam-excited higher order modes (HOMs) can be used to provide beam diagnostics. Here we focus on 3.9 GHz superconducting accelerating cavities. In particular we study dipole mode excitation and its application to beam position determinations. In order to extract beam position information, linear regression can be used. Due to a large number of sampling points in the waveforms, statistical methods are used to effectively reduce the dimension of the system, such as singular value decomposition (SVD) and k-means clustering. These are compared with the direct linear regression (DLR) on the entire waveforms. A cross-validation technique is used to study the sample-independent precisions of the position predictions given by these three methods. An RMS prediction error in the beam position of approximately 50 μm can be achieved by DLR and SVD, while k-means clustering gives approximately 70 μm.
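
    A minimal sketch of the dimension-reduction idea, using synthetic waveforms rather than the FLASH HOM data: project each waveform onto a few right-singular vectors from an SVD and fit a linear model to the beam position. All array sizes, the dipole-mode signature, and the noise level are invented for the example.

```python
# Hedged illustration (synthetic data, not the FLASH measurements): SVD reduction
# of waveforms followed by a linear fit to the beam position.
import numpy as np

rng = np.random.default_rng(0)
n_shots, n_samples, k = 200, 1000, 10          # waveforms, points per waveform, SVD rank

true_pos = rng.uniform(-1.0, 1.0, n_shots)      # hypothetical beam offsets (mm)
signature = rng.standard_normal(n_samples)      # made-up dipole-mode waveform shape
X = np.outer(true_pos, signature) + 0.1 * rng.standard_normal((n_shots, n_samples))

# Truncated SVD: project the waveforms onto the first k right-singular vectors.
_, _, Vt = np.linalg.svd(X, full_matrices=False)
scores = X @ Vt[:k].T                           # n_shots x k reduced features

# Linear regression on the reduced features, with an intercept term.
A = np.hstack([scores, np.ones((n_shots, 1))])
coef, *_ = np.linalg.lstsq(A, true_pos, rcond=None)

pred = A @ coef
print("rms prediction error:", np.sqrt(np.mean((pred - true_pos) ** 2)))
```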

  17. GAMMA-RAY EMISSION OF ACCELERATED PARTICLES ESCAPING A SUPERNOVA REMNANT IN A MOLECULAR CLOUD

    SciTech Connect

    Ellison, Donald C.; Bykov, Andrei M. E-mail: byk@astro.ioffe.ru

    2011-04-20

    We present a model of gamma-ray emission from core-collapse supernovae (SNe) originating from the explosions of massive young stars. The fast forward shock of the supernova remnant (SNR) can accelerate particles by diffusive shock acceleration (DSA) in a cavern blown by a strong, pre-SN stellar wind. As a fundamental part of nonlinear DSA, some fraction of the accelerated particles escape the shock and interact with a surrounding massive dense shell producing hard photon emission. To calculate this emission, we have developed a new Monte Carlo technique for propagating the cosmic rays (CRs) produced by the forward shock of the SNR, into the dense, external material. This technique is incorporated in a hydrodynamic model of an evolving SNR which includes the nonlinear feedback of CRs on the SNR evolution, the production of escaping CRs along with those that remain trapped within the remnant, and the broadband emission of radiation from trapped and escaping CRs. While our combined CR-hydro-escape model is quite general and applies to both core collapse and thermonuclear SNe, the parameters we choose for our discussion here are more typical of SNRs from very massive stars whose emission spectra differ somewhat from those produced by lower mass progenitors directly interacting with a molecular cloud.

  18. Acceleration Studies

    NASA Technical Reports Server (NTRS)

    Rogers, Melissa J. B.

    1993-01-01

    Work to support the NASA MSFC Acceleration Characterization and Analysis Project (ACAP) was performed. Four tasks (analysis development, analysis research, analysis documentation, and acceleration analysis) were addressed by parallel projects. Work concentrated on preparation for and implementation of near real-time SAMS data analysis during the USMP-1 mission. User support documents and case specific software documentation and tutorials were developed. Information and results were presented to microgravity users. ACAP computer facilities need to be fully implemented and networked, data resources must be cataloged and accessible, future microgravity missions must be coordinated, and continued Orbiter characterization is necessary.

  19. Baryon Loading Efficiency and Particle Acceleration Efficiency of Relativistic Jets: Cases for Low Luminosity BL Lacs

    NASA Astrophysics Data System (ADS)

    Inoue, Yoshiyuki; Tanaka, Yasuyuki T.

    2016-09-01

    Relativistic jets launched by supermassive black holes, so-called active galactic nuclei (AGNs), are known as the most energetic particle accelerators in the universe. However, the baryon loading efficiency onto the jets from the accretion flows and their particle acceleration efficiencies have been veiled in mystery. With the latest data sets, we perform multi-wavelength spectral analysis of quiescent spectra of 13 TeV gamma-ray detected high-frequency-peaked BL Lacs (HBLs) following a one-zone static synchrotron self-Compton (SSC) model. We determine the minimum, cooling break, and maximum electron Lorentz factors following the diffusive shock acceleration (DSA) theory. We find that HBLs have P_B/P_e ∼ 6.3 × 10⁻³ and a radiative efficiency ε_rad,jet ∼ 6.7 × 10⁻⁴, where P_B and P_e are the Poynting and electron powers, respectively. By assuming 10 leptons per proton, the jet power relates to the black hole mass as P_jet/L_Edd ∼ 0.18, where P_jet and L_Edd are the jet power and the Eddington luminosity, respectively. Under our model assumptions, we further find that HBLs have a jet production efficiency of η_jet ∼ 1.5 and a mass loading efficiency of ξ_jet ≳ 5 × 10⁻². We also investigate the particle acceleration efficiency in the blazar zone by including the most recent Swift/BAT data. Our samples ubiquitously have particle acceleration efficiencies of η_g ∼ 10^4.5, which is too inefficient to accelerate particles up to the ultra-high-energy cosmic-ray (UHECR) regime. This implies that the UHECR acceleration sites should not be the blazar zones of quiescent low-power AGN jets, if one assumes the one-zone SSC model based on DSA theory.

  1. An effective method of UV-oxidation of dissolved organic carbon in natural waters for radiocarbon analysis by accelerator mass spectrometry

    NASA Astrophysics Data System (ADS)

    Xue, Yuejun; Ge, Tiantian; Wang, Xuchen

    2015-12-01

    Radiocarbon (14C) measurement of dissolved organic carbon (DOC) is a very powerful tool to study the sources, transformation and cycling of carbon in the ocean. The technique, however, still faces great challenges in achieving complete and successful oxidation of sufficient DOC with low blanks for high-precision carbon isotopic ratio analysis, largely due to the overwhelming proportion of salts and the low DOC concentrations in the ocean. In this paper, we report an effective UV-oxidation method for oxidizing DOC in natural waters for radiocarbon analysis by accelerator mass spectrometry (AMS). The UV-oxidation system and method show 95%±4% oxidation efficiency and high reproducibility for DOC in both river and seawater samples. The blanks associated with the method were also low (about 3 µg C), which is critical for 14C analysis. As a great advantage of the method, multiple water samples can be oxidized at the same time, so the sample processing time is reduced substantially compared with other UV-oxidation methods currently used in other laboratories. We have used the system and method for 14C studies of DOC in rivers, estuaries, and oceanic environments and have obtained promising results.

  2. A theoretical comparison of x-ray angiographic image quality using energy-dependent and conventional subtraction methods

    SciTech Connect

    Tanguay, Jesse; Kim, Ho Kyung; Cunningham, Ian A.

    2012-01-15

    Purpose: X-ray digital subtraction angiography (DSA) is widely used for vascular imaging. However, the need to subtract a mask image can result in motion artifacts and compromised image quality. The current interest in energy-resolving photon-counting (EPC) detectors offers the promise of eliminating motion artifacts and other advanced applications using a single exposure. The authors describe a method of assessing the iodine signal-to-noise ratio (SNR) that may be achieved with energy-resolved angiography (ERA) to enable a direct comparison with other approaches including DSA and dual-energy angiography for the same patient exposure. Methods: A linearized noise-propagation approach, combined with linear expressions of dual-energy and energy-resolved imaging, is used to describe the iodine SNR. The results were validated by a Monte Carlo calculation for all three approaches and compared visually for dual-energy and DSA imaging using a simple angiographic phantom with a CsI-based flat-panel detector. Results: The linearized SNR calculations show excellent agreement with Monte Carlo results. While dual-energy methods require an increased tube heat load of 2x to 4x compared to DSA, and photon-counting detectors are not yet ready for angiographic imaging, the available iodine SNR for both methods as tested is within 10% of that of conventional DSA for the same patient exposure over a wide range of patient thicknesses and iodine concentrations. Conclusions: While the energy-based methods are not necessarily optimized and further improvements are likely, the linearized noise-propagation analysis provides the theoretical framework of a level playing field for optimization studies and comparison with conventional DSA. It is concluded that both dual-energy and photon-counting approaches have the potential to provide similar angiographic image quality to DSA.
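
    For orientation, the toy calculation below applies first-order (linearized) noise propagation to a conventional DSA log-subtraction signal; the attenuation coefficient, iodine thickness, and photon counts are placeholder values, not the paper's imaging model or detector parameters.

```python
# Minimal sketch (illustrative numbers, not the paper's model): linearized noise
# propagation for the iodine signal in conventional DSA log-subtraction.
import numpy as np

mu_iodine = 20.0   # hypothetical effective iodine attenuation coefficient (1/cm)
t_iodine = 0.01    # hypothetical iodine thickness in a vessel (cm)
n_mask = 1.0e4     # detected photons per pixel, mask image
n_fill = 1.0e4     # detected photons per pixel, contrast-filled image

# DSA signal: difference of log-images; the iodine contribution is mu * t.
signal = mu_iodine * t_iodine

# For Poisson counts N, var(ln N) ~ 1/N to first order, so the subtracted-image
# variance is the sum of the two per-image terms.
noise = np.sqrt(1.0 / n_mask + 1.0 / n_fill)

print("iodine SNR per pixel ≈", signal / noise)
```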

  3. Plasma accelerator

    DOEpatents

    Wang, Zhehui; Barnes, Cris W.

    2002-01-01

    An apparatus has been invented for accelerating a plasma, having coaxially positioned, constant-diameter, cylindrical electrodes that are modified to converge (for a positive-polarity inner electrode and a negatively charged outer electrode) at the plasma output end of the annulus between the electrodes, in order to achieve improved particle flux per unit of power.

  4. Accelerated Achievement

    ERIC Educational Resources Information Center

    Ford, William J.

    2010-01-01

    This article focuses on the accelerated associate degree program at Ivy Tech Community College (Indiana) in which low-income students will receive an associate degree in one year. The three-year pilot program is funded by a $2.3 million grant from the Lumina Foundation for Education in Indianapolis and a $270,000 grant from the Indiana Commission…

  5. ACCELERATION INTEGRATOR

    DOEpatents

    Pope, K.E.

    1958-01-01

    This patent relates to an improved acceleration integrator, and more particularly to apparatus of this nature which is gyrostabilized. The device may be used to sense the attainment by an airborne vehicle of a predetermined velocity or distance along a given vector path. In its broad aspects, the acceleration integrator utilizes a magnetized element rotatably driven by a synchronous motor and having a cylindrical flux gap, and a restrained eddy-current drag cap disposed to move into the gap. The angular velocity imparted to the rotatable cap shaft is transmitted in a positive manner to the magnetized element through a servo feedback loop. The resultant angular velocity of the cap is in this manner proportional to the acceleration of the housing, and means may be used to measure the velocity and operate switches at a pre-set magnitude. To make the above-described device sensitive to acceleration in only one direction, the magnetized element forms the spinning inertia element of a free gyroscope, and the outer housing functions as a gimbal of a gyroscope.

  6. Particle acceleration

    NASA Technical Reports Server (NTRS)

    Vlahos, L.; Machado, M. E.; Ramaty, R.; Murphy, R. J.; Alissandrakis, C.; Bai, T.; Batchelor, D.; Benz, A. O.; Chupp, E.; Ellison, D.

    1986-01-01

    Data are compiled from the Solar Maximum Mission and Hinotori satellites, particle detectors in several satellites, ground based instruments, and balloon flights in order to answer fundamental questions relating to: (1) the requirements for the coronal magnetic field structure in the vicinity of the energization source; (2) the height (above the photosphere) of the energization source; (3) the time of energization; (4) the transition between coronal heating and flares; (5) evidence for purely thermal, purely nonthermal and hybrid type flares; (6) the time characteristics of the energization source; (7) whether every flare accelerates protons; (8) the location of the interaction site of the ions and relativistic electrons; (9) the energy spectra for ions and relativistic electrons; (10) the relationship between particles at the Sun and interplanetary space; (11) evidence for more than one acceleration mechanism; (12) whether there is a single mechanism that will accelerate particles to all energies and also heat the plasma; and (13) how fast the existing mechanisms accelerate electrons up to several MeV and ions to 1 GeV.

  7. An accurate Rb density measurement method for a plasma wakefield accelerator experiment using a novel Rb reservoir

    NASA Astrophysics Data System (ADS)

    Öz, E.; Batsch, F.; Muggli, P.

    2016-09-01

    A method to accurately measure the density of Rb vapor is described. We plan on using this method for the Advanced Wakefield (AWAKE) (Assmann et al., 2014 [1]) project at CERN, which will be the world's first proton-driven plasma wakefield experiment. The method is similar to the hook method (Marlow, 1967 [2]) and has been described in great detail in the work by Hill et al. (1986) [3]. In this method a cosine fit is applied to the interferogram to obtain a relative accuracy on the order of 1% for the vapor density-length product. A single-mode, fiber-based, Mach-Zehnder interferometer will be built and used near the ends of the 10-meter-long AWAKE plasma source to be able to make accurate relative density measurements between these two locations. This can then be used to infer the vapor density gradient along the AWAKE plasma source and also to adjust it to the value desired for the plasma wakefield experiment. Here we describe the plan in detail and show preliminary results obtained using a prototype 8 cm long novel Rb vapor cell.
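
    A hedged sketch of the cosine-fit step mentioned above, applied to synthetic fringes rather than real interferograms; the fringe model, noise level, and initial guesses are assumptions made only for this example.

```python
# Hedged sketch (synthetic fringes, not the AWAKE data): fitting a cosine to an
# interferogram-like signal to extract the fringe frequency.
import numpy as np
from scipy.optimize import curve_fit

def fringe(x, amp, freq, phase, offset):
    return amp * np.cos(2.0 * np.pi * freq * x + phase) + offset

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 2000)                    # detector coordinate (arb. units)
data = fringe(x, 1.0, 25.0, 0.3, 0.5) + 0.05 * rng.standard_normal(x.size)

p0 = [1.0, 24.0, 0.0, 0.5]                         # rough initial guess
popt, pcov = curve_fit(fringe, x, data, p0=p0)
freq_err = np.sqrt(pcov[1, 1])
print(f"fitted fringe frequency: {popt[1]:.3f} ± {freq_err:.3f}")
```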

  8. Acute Antibody-Mediated Rejection in Presence of MICA-DSA and Successful Renal Re-Transplant with Negative-MICA Virtual Crossmatch

    PubMed Central

    Ming, Yingzi; Hu, Juan; Luo, Qizhi; Ding, Xiang; Luo, Weiguang; Zhuang, Quan; Zou, Yizhou

    2015-01-01

    The presence of donor-specific alloantibodies (DSAs) against the MICA antigen results in a high risk of antibody-mediated rejection (AMR) of a transplanted kidney, especially in patients receiving a re-transplant. We describe the incidence of acute C4d+ AMR in a patient who had received a first kidney transplant with a zero HLA antigen mismatch. Retrospective analyses of post-transplant T and B cell crossmatches were negative, but a high level of MICA alloantibody was detected in sera collected both before and after transplant. The recipient carried a DSA against MICA*018, the mismatched MICA antigen of the first allograft. Flow cytometry and cytotoxicity tests with five samples of freshly isolated human umbilical vein endothelial cells demonstrated the alloantibody nature of the patient's MICA-DSA. Prior to the second transplant, a MICA virtual crossmatch and T and B cell crossmatches were used to identify a suitable donor. The patient received a second kidney transplant, and the allograft was functioning well at one-year follow-up. Our study indicates that a MICA virtual crossmatch is important in the selection of a kidney donor if the recipient has been sensitized with MICA antigens. PMID:26024219

  9. An accelerated lambda iteration method for multilevel radiative transfer. I - Non-overlapping lines with background continuum

    NASA Technical Reports Server (NTRS)

    Rybicki, G. B.; Hummer, D. G.

    1991-01-01

    A method is presented for solving multilevel transfer problems when nonoverlapping lines and background continuum are present and active continuum transfer is absent. An approximate lambda operator is employed to derive linear, 'preconditioned', statistical-equilibrium equations. A method is described for finding the diagonal elements of the 'true' numerical lambda operator, and therefore for obtaining the coefficients of the equations. Iterations of the preconditioned equations, in conjunction with the transfer equation's formal solution, are used to solve linear equations. Some multilevel problems are considered, including an eleven-level neutral helium atom. Diagonal and tridiagonal approximate lambda operators are utilized in the problems to examine the convergence properties of the method, and it is found to be effective for the line transfer problems.
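
    The following sketch shows the core of an accelerated lambda iteration with a diagonal approximate operator for a two-level-atom source function S = (1 - ε)Λ[S] + εB; the Λ matrix here is a synthetic stand-in for the true numerical lambda operator the paper constructs, so all numbers are purely illustrative.

```python
# Minimal ALI sketch with a diagonal approximate operator (synthetic Lambda matrix).
import numpy as np

n, eps = 100, 1e-3                       # depth points, photon destruction probability
B = np.ones(n)                           # Planck function (normalized)

idx = np.arange(n)
K = np.exp(-np.abs(idx[:, None] - idx[None, :]) / 5.0)
Lam = 0.95 * K / K.sum(axis=1, keepdims=True)    # rows sum to 0.95: some photon escape
Lam_star = np.diag(np.diag(Lam))                 # diagonal approximate lambda operator

S = B.copy()
for it in range(1000):
    J = Lam @ S                                  # formal solution: mean intensity
    residual = (1.0 - eps) * J + eps * B - S
    # Preconditioned update: solve [I - (1-eps) Lambda*] dS = residual
    dS = np.linalg.solve(np.eye(n) - (1.0 - eps) * Lam_star, residual)
    S += dS
    if np.max(np.abs(dS) / S) < 1e-8:
        break
print("stopped after", it + 1, "iterations")
```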

  10. Accelerating spatially non-uniform update for sparse target recovery in fluorescence molecular tomography by ordered subsets and momentum methods

    NASA Astrophysics Data System (ADS)

    Zhu, Dianwen; Li, Changqing

    2015-03-01

    Fluorescence molecular tomography (FMT) is a significant preclinical imaging modality that has been actively studied in the past two decades. However, it remains a challenging task to obtain fast and accurate reconstruction of fluorescent probe distribution in small animals due to the large computational burden and the ill-posed nature of the inverse problem. We have recently studied a non-uniform multiplicative updating algorithm, and obtained some further speed gain with the ordered subsets (OS) method. However, increasing the number of OS leads to larger approximation errors, and the speed gain from a larger number of subsets is marginal. In this paper, we propose to further enhance the convergence speed by incorporating a first-order momentum method that uses previous iterations to achieve a quadratic convergence rate. Using a cubic phantom experiment, we have shown that the proposed method indeed leads to a much faster convergence.
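
    As a rough illustration of combining ordered subsets with a first-order momentum (FISTA-style) update, the sketch below solves a small synthetic nonnegative least-squares problem; the forward matrix, subset count, step size, and iteration count are assumptions, not the FMT system or update rule used in the paper.

```python
# Illustrative sketch (synthetic system, not the FMT forward model): ordered-subsets
# gradient updates with a Nesterov-style momentum term and nonnegativity projection.
import numpy as np

rng = np.random.default_rng(2)
m, n, n_subsets = 400, 200, 4
A = rng.standard_normal((m, n))
x_true = np.zeros(n); x_true[rng.choice(n, 10, replace=False)] = 1.0  # sparse target
y = A @ x_true

subsets = np.array_split(np.arange(m), n_subsets)
step = 1.0 / (n_subsets * np.linalg.norm(A, 2) ** 2)   # conservative for scaled subset gradient
x, x_prev, t = np.zeros(n), np.zeros(n), 1.0

for epoch in range(100):
    for rows in subsets:
        # Momentum (extrapolation) point, as in FISTA-type schemes.
        t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        z = x + ((t - 1.0) / t_next) * (x - x_prev)
        # Subset gradient, scaled up to approximate the full gradient.
        g = n_subsets * A[rows].T @ (A[rows] @ z - y[rows])
        x_prev, x, t = x, np.maximum(z - step * g, 0.0), t_next

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```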

  11. Biomedical accelerator mass spectrometry

    NASA Astrophysics Data System (ADS)

    Freeman, Stewart P. H. T.; Vogel, John S.

    1995-05-01

    Ultrasensitive SIMS with accelerator-based spectrometers has recently begun to be applied to biomedical problems. Certain very long-lived radioisotopes of very low natural abundances can be used to trace metabolism at environmental dose levels (≥ zmol in mg samples). 14C in particular can be employed to label a myriad of compounds. Competing technologies typically require super-environmental doses that can perturb the system under investigation, followed by uncertain extrapolation to the low dose regime. 41Ca and 26Al are also used as elemental tracers. Given the sensitivity of the accelerator method, care must be taken to avoid contamination of the mass spectrometer and the apparatus employed in prior sample handling, including chemical separation. This infant field comprises the efforts of a dozen accelerator laboratories. The Center for Accelerator Mass Spectrometry has been particularly active. In addition to collaborating with groups further afield, we are researching the kinematics and binding of genotoxins in-house, and we support innovative uses of our capability in the disciplines of chemistry, pharmacology, nutrition and physiology within the University of California. The field can be expected to grow further given the numerous potential applications and the efforts of several groups and companies to integrate the accelerator technology more fully into biomedical research programs; the development of miniaturized accelerator systems and ion sources capable of interfacing to conventional HPLC and GMC, etc. apparatus for complementary chemical analysis is anticipated for biomedical laboratories.

  12. Validation and application of a multi-residue method, using accelerated solvent extraction followed by gas chromatography, for pesticides quantification in soil.

    PubMed

    Leyva-Morales, J B; Valdez-Torres, J B; Bastidas-Bastidas, P J; Betancourt-Lozano, M

    2015-01-01

    A multi-residue method was developed to determine different types of pesticides in soils. Extraction was carried out under elevated pressure and temperature by accelerated solvent extraction (dichloromethane:acetone, 50:50, v/v). The pesticides were determined by gas chromatography with several selective detectors: electron capture detector, pulsed flame photometric detector and thermionic specific detector. The following parameters were determined: limit of detection, limit of quantification, equipment linearity (working interval), method linearity, as well as method accuracy and precision. The average recoveries ranged between 76 and 106%, with the exception of chlorothalonil, which had an average recovery of 46%. Additionally, detection limits from 0.9 to 7.6 ng g⁻¹ and quantification limits from 3.00 to 25.47 ng g⁻¹ were estimated. In terms of linearity and precision, the results obtained were in the ranges considered adequate (R² ≥ 0.98 and coefficient of variation (CV) ≤ 20%), with the exception of aldrin (R² = 0.946, CV = 35.79%), lindane (R² = 0.917, CV = 32.91%) and chlorothalonil (R² = 0.8184, CV = 81.35%). The proposed method was used to evaluate pesticides in real soil samples, detecting concentrations over 1000 ng g⁻¹ for some pesticides. The method was successfully validated and allows the rapid determination of pesticides in soil. PMID:26041247

  13. Comparison of accelerated solvent extraction and quick, easy, cheap, effective, rugged and safe method for extraction and determination of pharmaceuticals in vegetables.

    PubMed

    Chuang, Ya-Hui; Zhang, Yingjie; Zhang, Wei; Boyd, Stephen A; Li, Hui

    2015-07-24

    Land application of biosolids and irrigation with reclaimed water in agricultural production could result in accumulation of pharmaceuticals in vegetable produce. To better assess the potential human health impact from long-term consumption of pharmaceutical-contaminated vegetables, it is important to accurately quantify the amount of pharmaceuticals accumulated in vegetables. In this study, a quick, easy, cheap, effective, rugged and safe (QuEChERS) method was developed and optimized to extract multiple classes of pharmaceuticals from vegetables, which were subsequently quantified by liquid chromatography coupled to tandem mass spectrometry. For the eleven target pharmaceuticals in celery and lettuce, the extraction recovery of the QuEChERS method ranged from 70.1 to 118.6% with relative standard deviation <20%, and the method detection limit was achieved at the level of nanograms of pharmaceuticals per gram of vegetables. The results revealed that the performance of the QuEChERS method was comparable to, or better than, that of the accelerated solvent extraction (ASE) method for extraction of pharmaceuticals from plants. The two optimized extraction methods were applied to quantify the uptake of pharmaceuticals by celery and lettuce grown hydroponically. The results showed that all eleven target pharmaceuticals could be absorbed by the vegetables from water. Compared to the ASE method, the QuEChERS method offers the advantages of shorter sample preparation time, reduced costs, and lower consumption of organic solvents. The established QuEChERS method could be used to determine the accumulation of multiple classes of pharmaceutical residues in vegetables and other plants, which is needed to evaluate the quality and safety of agricultural produce consumed by humans. PMID:26065569

  14. Accurate, efficient, and scalable parallel simulation of mesoscale electrostatic/magnetostatic problems accelerated by a fast multipole method

    NASA Astrophysics Data System (ADS)

    Jiang, Xikai; Karpeev, Dmitry; Li, Jiyuan; de Pablo, Juan; Hernandez-Ortiz, Juan; Heinonen, Olle

    Boundary integrals arise in many electrostatic and magnetostatic problems. In computational modeling of these problems, although the integral is performed only on the boundary of a domain, its direct evaluation requires O(N²) operations, where N is the number of unknowns on the boundary. The O(N²) scaling impedes wider use of the boundary integral method in the scientific and engineering communities. We have developed a parallel computational approach that utilizes the Fast Multipole Method to evaluate the boundary integral in O(N) operations. To demonstrate the accuracy, efficiency, and scalability of our approach, we consider two test cases. In the first case, we solve a boundary value problem for a ferroelectric/ferromagnetic volume in free space using a hybrid finite element-boundary integral method. In the second case, we solve an electrostatic problem involving the polarization of dielectric objects in free space using the boundary element method. The results from the test cases show that our parallel approach can enable highly efficient and accurate simulations of mesoscale electrostatic/magnetostatic problems. Computing resources were provided by Blues, a high-performance cluster operated by the Laboratory Computing Resource Center at Argonne National Laboratory. Work at Argonne was supported by U.S. DOE, Office of Science under Contract No. DE-AC02-06CH11357.

  15. Recent Advances in Plasma Acceleration

    SciTech Connect

    Hogan, Mark

    2007-03-19

    The costs and the time scales of colliders intended to reach the energy frontier are such that it is important to explore new methods of accelerating particles to high energies. Plasma-based accelerators are particularly attractive because they are capable of producing accelerating fields that are orders of magnitude larger than those used in conventional colliders. In these accelerators a drive beam, either laser or particle, produces a plasma wave (wakefield) that accelerates charged particles. The ultimate utility of plasma accelerators will depend on sustaining ultra-high accelerating fields over a substantial length to achieve a significant energy gain. More than 42 GeV energy gain was achieved in an 85 cm long plasma wakefield accelerator driven by a 42 GeV electron drive beam in the Final Focus Test Beam (FFTB) Facility at SLAC. Most of the beam electrons lose energy to the plasma wave, but some electrons in the back of the same beam pulse are accelerated with a field of ≈52 GV/m. This effectively doubles their energy, producing the energy gain of the 3 km long SLAC accelerator in less than a meter for a small fraction of the electrons in the injected bunch. Prospects for a drive-witness bunch configuration and high-gradient positron acceleration experiments planned for the SABER facility will be discussed.

  16. Accelerating the discontinuous Galerkin method for seismic wave propagation simulations using the graphic processing unit (GPU)—single-GPU implementation

    NASA Astrophysics Data System (ADS)

    Mu, Dawei; Chen, Po; Wang, Liqiang

    2013-02-01

    We have successfully ported an arbitrary high-order discontinuous Galerkin (ADER-DG) method for solving the three-dimensional elastic seismic wave equation on unstructured tetrahedral meshes to an Nvidia Tesla C2075 GPU using the Nvidia CUDA programming model. On average our implementation obtained a speedup factor of about 24.3 for the single-precision version of our GPU code and a speedup factor of about 12.8 for the double-precision version of our GPU code when compared with the double precision serial CPU code running on one Intel Xeon W5880 core. When compared with the parallel CPU code running on two, four and eight cores, the speedup factor of our single-precision GPU code is around 12.9, 6.8 and 3.6, respectively. In this article, we give a brief summary of the ADER-DG method, a short introduction to the CUDA programming model and a description of our CUDA implementation and optimization of the ADER-DG method on the GPU. To our knowledge, this is the first study that explores the potential of accelerating the ADER-DG method for seismic wave-propagation simulations using a GPU.

  17. Accelerating the performance of a novel meshless method based on collocation with radial basis functions by employing a graphical processing unit as a parallel coprocessor

    NASA Astrophysics Data System (ADS)

    Owusu-Banson, Derek

    In recent times, a variety of industries, applications and numerical methods, including the meshless method, have enjoyed a great deal of success by utilizing the graphical processing unit (GPU) as a parallel coprocessor. These benefits often include performance improvement over previous implementations. Furthermore, applications running on graphics processors enjoy superior performance per dollar and performance per watt compared with implementations built exclusively on traditional central processing technologies. The GPU was originally designed for graphics acceleration, but the modern GPU, known as the General Purpose Graphical Processing Unit (GPGPU), can be used for scientific and engineering calculations. The GPGPU consists of a massively parallel array of integer and floating point processors. There are typically hundreds of processors per graphics card, with dedicated high-speed memory. This work describes an application written by the author, titled GaussianRBF, to show the implementation and results of a novel meshless method that incorporates collocation of the Gaussian radial basis function by utilizing the GPU as a parallel coprocessor. Key phases of the proposed meshless method have been executed on the GPU using the NVIDIA CUDA software development kit. In particular, the matrix fill and solution phases have been carried out on the GPU, along with some post-processing. This approach resulted in a decreased processing time compared to a similar algorithm implemented on the CPU while maintaining the same accuracy.
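
    A tiny Kansa-type collocation example (CPU-only NumPy, not the GaussianRBF GPU code) showing what collocation with a Gaussian radial basis function means in practice, i.e., the matrix-fill and solve phases that the GPU version accelerates. The shape parameter, grid, and test problem are arbitrary choices for this sketch.

```python
# Hedged sketch: Gaussian RBF collocation for u'' = f on [0, 1] with u(0) = u(1) = 0.
import numpy as np

def phi(x, c, eps):
    """Gaussian RBF matrix and its second derivative with respect to x."""
    r = x[:, None] - c[None, :]
    g = np.exp(-(eps * r) ** 2)
    return g, (4.0 * eps**4 * r**2 - 2.0 * eps**2) * g

n, eps = 20, 8.0                          # centers and shape parameter (tuning-sensitive)
x = np.linspace(0.0, 1.0, n)              # collocation points double as centers
f = -np.pi**2 * np.sin(np.pi * x)         # chosen so the exact solution is sin(pi x)

G, G2 = phi(x, x, eps)
A = G2.copy()
A[0], A[-1] = G[0], G[-1]                 # replace PDE rows by boundary conditions
b = f.copy()
b[0] = b[-1] = 0.0

coef, *_ = np.linalg.lstsq(A, b, rcond=None)   # matrix solve (ill-conditioned, so lstsq)
u = G @ coef
print("max error vs sin(pi x):", np.max(np.abs(u - np.sin(np.pi * x))))
```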

  18. Accelerating electrostatic interaction calculations with graphical processing units based on new developments of Ewald method using non-uniform fast Fourier transform.

    PubMed

    Yang, Sheng-Chun; Wang, Yong-Lei; Jiao, Gui-Sheng; Qian, Hu-Jun; Lu, Zhong-Yuan

    2016-01-30

    We present new algorithms to improve the performance of the ENUF method (F. Hedman, A. Laaksonen, Chem. Phys. Lett. 425, 2006, 142), which is essentially Ewald summation using the non-uniform FFT (NFFT) technique. A NearDistance algorithm is developed to greatly reduce the neighbor list size in the real-space computation. In the reciprocal-space computation, a new algorithm is developed for NFFT for the evaluation of electrostatic interaction energies and forces. Both real-space and reciprocal-space computations are further accelerated by using graphical processing units (GPUs) with CUDA technology. In particular, the use of CUNFFT (NFFT based on CUDA) greatly reduces the reciprocal-space computation. In order to reach the best performance of this method, we propose a procedure for the selection of optimal parameters with controlled accuracies. With the choice of suitable parameters, we show that our method is a good alternative to the standard Ewald method, with the same computational precision but a dramatically higher computational efficiency. PMID:26584145

  19. Compact accelerator

    DOEpatents

    Caporaso, George J.; Sampayan, Stephen E.; Kirbie, Hugh C.

    2007-02-06

    A compact linear accelerator having at least one strip-shaped Blumlein module which guides a propagating wavefront between first and second ends and controls the output pulse at the second end. Each Blumlein module has first, second, and third planar conductor strips, with a first dielectric strip between the first and second conductor strips, and a second dielectric strip between the second and third conductor strips. Additionally, the compact linear accelerator includes a high voltage power supply connected to charge the second conductor strip to a high potential, and a switch for switching the high potential in the second conductor strip to at least one of the first and third conductor strips so as to initiate a propagating reverse polarity wavefront(s) in the corresponding dielectric strip(s).

  20. An accelerated non-Gaussianity based multichannel predictive deconvolution method with the limited supporting region of filters

    NASA Astrophysics Data System (ADS)

    Li, Zhong-xiao; Li, Zhen-chun

    2016-09-01

    Multichannel predictive deconvolution can be conducted in overlapping temporal and spatial data windows to solve the 2D predictive filter for multiple removal. Generally, the 2D predictive filter removes multiples better than the 1D predictive filter, at the cost of more computation time. In this paper we first use a cross-correlation strategy to determine the limited supporting region of the filters, that is, the part of the filter coefficient space whose coefficients play a major role in multiple removal. To solve for the 2D predictive filter, the traditional multichannel predictive deconvolution uses the least squares (LS) algorithm, which requires that primaries and multiples be orthogonal. To relax the orthogonality assumption, the iterative reweighted least squares (IRLS) algorithm and the fast iterative shrinkage thresholding (FIST) algorithm have been used to solve the 2D predictive filter in multichannel predictive deconvolution with a non-Gaussianity maximization (L1 norm minimization) constraint on the primaries. The FIST algorithm has been demonstrated to be a faster alternative to the IRLS algorithm. In this paper we introduce the FIST algorithm to solve for the filter coefficients in the limited supporting region of the filters. Compared with FIST-based multichannel predictive deconvolution without the limited supporting region, the proposed method reduces the computational burden effectively while achieving similar accuracy. Additionally, the proposed method better balances multiple removal and primary preservation than traditional LS-based multichannel predictive deconvolution and FIST-based single-channel predictive deconvolution. Synthetic and field data sets demonstrate the effectiveness of the proposed method.
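
    For reference, a generic FISTA-type routine for an L1-regularized least-squares problem is sketched below on synthetic data; it is the same class of solver referred to above, but the operator here is a random matrix with an arbitrary regularization weight, not a predictive-deconvolution filter system.

```python
# Generic fast iterative shrinkage-thresholding sketch for min ||Ax - y||^2 + lam*||x||_1.
import numpy as np

def soft_threshold(v, tau):
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def fista(A, y, lam, n_iter=300):
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    z, t = x.copy(), 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ z - y)
        x_new = soft_threshold(z - grad / L, lam / L)
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        z = x_new + ((t - 1.0) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x

rng = np.random.default_rng(3)
A = rng.standard_normal((200, 500))
x_true = np.zeros(500); x_true[rng.choice(500, 15, replace=False)] = 1.0
y = A @ x_true
lam = 0.1 * np.max(np.abs(A.T @ y))            # arbitrary regularization weight
x_hat = fista(A, y, lam)
print("entries above threshold:", np.count_nonzero(np.abs(x_hat) > 0.1))
```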

  1. Accelerated hydrolysis method to estimate the amino acid content of wheat (Triticum durum Desf.) flour using microwave irradiation.

    PubMed

    Kabaha, Khaled; Taralp, Alpay; Cakmak, Ismail; Ozturk, Levent

    2011-04-13

    The technique of microwave-assisted acid hydrolysis was applied to wholegrain wheat (Triticum durum Desf. cv. Balcali 2000) flour in order to speed the preparation of samples for analysis. The resultant hydrolysates were chromatographed and quantified in an automated amino acid analyzer. The effect of different hydrolysis temperatures, times and sample weights was examined using flour dispersed in 6 N HCl. Within the range of values tested, the highest amino acid recoveries were generally obtained by setting the hydrolysis parameters to 150 °C, 3 h and 200 mg sample weight. These conditions struck an optimal balance between liberating amino acid residues from the wheat matrix and limiting their subsequent degradation or transformation. Compared to the traditional 24 h reflux method, the hydrolysates were prepared in dramatically less time, yet afforded comparable ninhydrin color yields. Under optimal hydrolysis conditions, the total amino acid recovery corresponded to at least 85.1% of the total protein content, indicating the efficient extraction of amino acids from the flour matrix. The findings suggest that this microwave-assisted method can be used to rapidly profile the amino acids of numerous wheat grain samples, and can be extended to the grain analysis of other cereal crops. PMID:21375298

  2. Validation of an accelerated solvent extraction liquid chromatography-tandem mass spectrometry method for Pacific ciguatoxin-1 in fish flesh and comparison with the mouse neuroblastoma assay.

    PubMed

    Wu, Jia Jun; Mak, Yim Ling; Murphy, Margaret B; Lam, James C W; Chan, Wing Hei; Wang, Mingfu; Chan, Leo L; Lam, Paul K S

    2011-07-01

    Ciguatera fish poisoning (CFP) is a global foodborne illness caused by consumption of seafood containing ciguatoxins (CTXs) originating from dinoflagellates such as Gambierdiscus toxicus. P-CTX-1 has been suggested to be the most toxic CTX, causing ciguatera at 0.1 μg/kg in the flesh of carnivorous fish. CTXs are structurally complex and difficult to quantify, but there is a need for analytical methods for CFP toxins in coral reef fishes to protect human health. In this paper, we describe a sensitive and rapid extraction method using accelerated solvent extraction combined with high-performance liquid chromatography-tandem mass spectrometry (HPLC-MS/MS) for the detection and quantification of P-CTX-1 in fish flesh. By the use of a more sensitive MS system (5500 QTRAP), the validated method has a limit of quantification (LOQ) of 0.01 μg/kg, linearity correlation coefficients above 0.99 for both solvent- and matrix-based standard solutions as well as matrix spike recoveries ranging from 49% to 85% in 17 coral reef fish species. Compared with previous methods, this method has better overall recovery, extraction efficiency and LOQ. Fish flesh from 12 blue-spotted groupers (Cephalopholis argus) was assessed for the presence of CTXs using HPLC-MS/MS analysis and the commonly used mouse neuroblastoma assay, and the results of the two methods were strongly correlated. This method is capable of detecting low concentrations of P-CTX-1 in fish at levels that are relevant to human health, making it suitable for monitoring of suspected ciguateric fish both in the environment and in the marketplace. PMID:21505950

  3. Some issues related to the novel spectral acceleration method for the fast computation of radiation/scattering from one-dimensional extremely large scale quasi-planar structures

    NASA Astrophysics Data System (ADS)

    Torrungrueng, Danai; Johnson, Joel T.; Chou, Hsi-Tseng

    2002-03-01

    The novel spectral acceleration (NSA) algorithm has been shown to produce an O(Ntot)-efficient iterative method of moments for the computation of radiation/scattering from both one-dimensional (1-D) and two-dimensional large-scale quasi-planar structures, where Ntot is the total number of unknowns to be solved. This method accelerates the matrix-vector multiplication in an iterative method of moments solution and divides contributions between points into "strong" (exact matrix elements) and "weak" (NSA algorithm) regions. The NSA method is based on a spectral representation of the electromagnetic Green's function and appropriate contour deformation, resulting in a fast multipole-like formulation in which contributions from large numbers of points to a single point are evaluated simultaneously. In the standard NSA algorithm the NSA parameters are derived on the basis of the assumption that the outermost possible saddle point, φs,max, along the real axis in the complex angular domain is small. For given height variations of quasi-planar structures, this assumption can be satisfied by adjusting the size of the strong region Ls. However, for quasi-planar structures with large height variations, the adjusted size of the strong region is typically large, resulting in significant increases in computational time for the computation of the strong-region contribution and degrading overall efficiency of the NSA algorithm. In addition, for the case of extremely large scale structures, studies based on the physical optics approximation and a flat surface assumption show that the given NSA parameters in the standard NSA algorithm may yield inaccurate results. In this paper, analytical formulas associated with the NSA parameters for an arbitrary value of φs,max are presented, resulting in more flexibility in selecting Ls to compromise between the computation of the contributions of the strong and weak regions. In addition, a "multilevel" algorithm

  4. The solution of radiative transfer problems in molecular bands without the LTE assumption by accelerated lambda iteration methods

    NASA Technical Reports Server (NTRS)

    Kutepov, A. A.; Kunze, D.; Hummer, D. G.; Rybicki, G. B.

    1991-01-01

    An iterative method based on the use of approximate transfer operators, which was designed initially to solve multilevel NLTE line formation problems in stellar atmospheres, is adapted and applied to the solution of the NLTE molecular band radiative transfer in planetary atmospheres. The matrices to be constructed and inverted are much smaller than those used in the traditional Curtis matrix technique, which makes possible the treatment of more realistic problems using relatively small computers. This technique converges much more rapidly than straightforward iteration between the transfer equation and the equations of statistical equilibrium. A test application of this new technique to the solution of NLTE radiative transfer problems for optically thick and thin bands (the 4.3 micron CO2 band in the Venusian atmosphere and the 4.7 and 2.3 micron CO bands in the earth's atmosphere) is described.

  5. An MLC-based version for the ecliptic method for the determination of backscatter into the beam monitor chambers in photon beams of medical accelerators.

    PubMed

    Nelli, Flavio Enrico

    2016-03-01

    A very simple method to measure the effect of the backscatter from secondary collimators into the beam monitor chambers in linear accelerators equipped with multi-leaf collimators (MLC) is presented here. The backscatter to the monitor chambers from the upper jaws of the secondary collimator was measured on three beam-matched linacs by means of three methods: this new methodology, the ecliptic method, and assessing the variation of the beam-on time per monitor unit with dose rate feedback disabled. This new methodology was used to assess the backscatter characteristics of asymmetric over-traveling jaws. Excellent agreement between the backscatter values measured using the new methodology introduced here and the ones obtained using the other two methods was established. The experimental values reported here differ by less than 1% from published data. The sensitivity of this novel technique allowed differences in backscatter due to the same opening of the jaws, when placed at different positions on the beam path, to be resolved. The introduction of the ecliptic method has made the determination of the backscatter to the monitor chambers an easy procedure. The method presented here for machines equipped with MLCs makes the determination of backscatter to the beam monitor chambers even easier, and suitable to characterize linacs equipped with over-traveling asymmetric secondary collimators. This experimental procedure could be simply implemented to fully characterize the backscatter output factor constituent when detailed dosimetric modeling of the machine's head is required. The methodology proved to be uncomplicated, accurate and suitable for clinical or experimental environments. PMID:26671445

  6. Optimization and validation of an accelerated laboratory extraction method to estimate nitrogen release patterns of slow- and controlled-release fertilizers.

    PubMed

    Medina, L Carolina; Sartain, Jerry B; Obreza, Thomas A; Hall, William L; Thiex, Nancy J

    2014-01-01

    Several technologies have been proposed to characterize the nutrient release and availability patterns of enhanced-efficiency fertilizers (EEFs), especially slow-release fertilizers (SRFs) and controlled-release fertilizers (CRFs), during the last few decades. These technologies have been developed mainly by manufacturers and are product-specific based on the regulation and analysis of each EEF product. Despite previous efforts to characterize EEF materials, no validated method exists to assess their nutrient release patterns. However, the increased use of EEFs in specialty and nonspecialty markets requires an appropriate method to verify nutrient claims and material performance. A series of experiments were conducted to evaluate the effect of temperature, fertilizer test portion size, and extraction time on the performance of a 74 h accelerated laboratory extraction method to measure SRF and CRF nutrient release profiles. Temperature was the only factor that influenced nutrient release rate, with a highly marked effect for phosphorus and to a lesser extent for nitrogen (N) and potassium. Based on the results, the optimal extraction temperature set was: Extraction No. 1, 2 h at 25 °C; Extraction No. 2, 2 h at 50 °C; Extraction No. 3, 20 h at 55 °C; and Extraction No. 4, 50 h at 60 °C. Ruggedness of the method was tested by evaluating the effect of small changes in seven selected factors on method behavior using a fractional multifactorial design. Overall, the method showed ruggedness for measuring N release rates of coated CRFs. PMID:25051611

  7. A new method of accelerated life testing based on the Grey System Theory for a model-based lithium-ion battery life evaluation system

    NASA Astrophysics Data System (ADS)

    Gu, Weijun; Sun, Zechang; Wei, Xuezhe; Dai, Haifeng

    2014-12-01

    The lack of data samples is the main difficulty for the lifetime study of a lithium-ion battery, especially for a model-based evaluation system. To determine the mapping relationship between the battery fading law and the different external factors, the testing of batteries should be implemented to the greatest extent possible. As a result, performing a battery lifetime study has become a notably time-consuming undertaking. Without reducing the number of testing items pre-specified within the test matrices of an accelerated life testing schedule, a grey model that can be used to predict the cycle numbers that result in a specific life-ending index is established in this paper. No aging mechanism is required for this model, which is exclusively a data-driven method obtained from a small quantity of actual testing data. For higher accuracy, a specific smoothing method is introduced, and the error between the predicted value and the actual value is also modeled using the same method. Verification with a lithium iron phosphate battery and a lithium manganese oxide battery demonstrated the ability of this grey model to reduce the number of test cycles required for the operational modes of various electric vehicles.
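
    A minimal GM(1,1) grey-model sketch is given below; the capacity-fade series is invented, and the implementation is a generic textbook version rather than the smoothed, error-modeled variant developed in the paper.

```python
# Minimal GM(1,1) grey-model sketch (illustrative capacity numbers, not the paper's data).
import numpy as np

def gm11_forecast(x0, n_ahead):
    x1 = np.cumsum(x0)                                     # accumulated generating operation
    z1 = 0.5 * (x1[1:] + x1[:-1])                          # background values
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]       # development coeff., grey input
    k = np.arange(len(x0) + n_ahead)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a      # whitened-equation solution
    return np.concatenate([[x1_hat[0]], np.diff(x1_hat)])  # restore the original series

capacity = np.array([1.00, 0.985, 0.972, 0.961, 0.951])    # hypothetical relative capacity
print(gm11_forecast(capacity, n_ahead=3))
```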

  8. Determination of polychlorinated biphenyls in fish: optimisation and validation of a method based on accelerated solvent extraction and gas chromatography-mass spectrometry.

    PubMed

    Ottonello, Giuliana; Ferrari, Angelo; Magi, Emanuele

    2014-01-01

    A simple and robust method for the determination of 18 polychlorinated biphenyls (PCBs) in fish was developed and validated. A mixture of acetone/n-hexane (1:1, v/v) was selected for accelerated solvent extraction (ASE). After the digestion of fat, the clean-up was carried out using solid phase extraction silica cartridges. Samples were analysed by GC-MS in selected ion monitoring (SIM) using three fragment ions for each congener (one quantifier and two qualifiers). PCB 155 and PCB 198 were employed as internal standards. The lowest limit of detection was observed for PCB 28 (0.4 ng/g lipid weight). The accuracy of the method was verified by means of the Certified Reference Material EDF-2525, and good results in terms of linearity (R² > 0.994) and recoveries (80-110%) were also achieved. Precision was evaluated by spiking blank samples at 4, 8 and 12 ng/g. Relative standard deviation values for repeatability and reproducibility were lower than 8% and 16%, respectively. The method was applied to the determination of PCBs in 80 samples belonging to four Mediterranean fish species. The proposed procedure is particularly effective because it provides good recoveries with lowered extraction time and solvent consumption; in fact, the total time of extraction is about 12 min per sample and, for the clean-up step, a total solvent volume of 13 mL is required. PMID:24001849

  9. Development of long-lived thick carbon stripper foils for high energy heavy ion accelerators by a heavy ion beam sputtering method

    NASA Astrophysics Data System (ADS)

    Muto, Hideshi; Ohshiro, Yukimitsu; Kawasaki, Katsunori; Oyaizu, Michihiro; Hattori, Toshiyuki

    2013-04-01

    In the past decade, we have developed extremely long-lived carbon stripper foils of 1-50 μg/cm² thickness prepared by a heavy ion beam sputtering method. These foils were mainly used for low energy heavy ion beams. Recently, high energy negative hydrogen and heavy ion accelerators have started to use carbon stripper foils of over 100 μg/cm² in thickness. However, the heavy ion beam sputtering method was unsuccessful in production of foils thicker than about 50 μg/cm² because of the collapse of carbon particle build-up from substrates during the sputtering process. The reproduction probability of the foils was less than 25%, and most of them had surface defects. However, these defects were successfully eliminated by introducing higher beam energies of sputtering ions and a substrate heater during the sputtering process. In this report we describe a highly reproducible method for making thick carbon stripper foils by heavy ion beam sputtering with a krypton ion beam.

  10. Development of long-lived thick carbon stripper foils for high energy heavy ion accelerators by a heavy ion beam sputtering method

    SciTech Connect

    Muto, Hideshi; Ohshiro, Yukimitsu; Kawasaki, Katsunori; Oyaizu, Michihiro; Hattori, Toshiyuki

    2013-04-19

    In the past decade, we have developed extremely long-lived carbon stripper foils of 1-50 μg/cm² thickness prepared by a heavy ion beam sputtering method. These foils were mainly used for low energy heavy ion beams. Recently, high energy negative hydrogen and heavy ion accelerators have started to use carbon stripper foils of over 100 μg/cm² in thickness. However, the heavy ion beam sputtering method was unsuccessful in production of foils thicker than about 50 μg/cm² because of the collapse of carbon particle build-up from substrates during the sputtering process. The reproduction probability of the foils was less than 25%, and most of them had surface defects. However, these defects were successfully eliminated by introducing higher beam energies of sputtering ions and a substrate heater during the sputtering process. In this report we describe a highly reproducible method for making thick carbon stripper foils by heavy ion beam sputtering with a krypton ion beam.

  11. Advanced concepts for acceleration

    SciTech Connect

    Keefe, D.

    1986-07-01

    Selected examples of advanced accelerator concepts are reviewed. Plasma accelerators such as the plasma beat wave accelerator, the plasma wake field accelerator, and the plasma grating accelerator are discussed, particularly as examples of concepts for accelerating relativistic electrons or positrons. Also covered are the pulsed electron-beam, pulsed laser accelerator, inverse Cherenkov accelerator, inverse free-electron laser, switched radial-line accelerators, and two-beam accelerator. Advanced concepts for ion acceleration discussed here include the electron ring accelerator, excitation of waves on intense electron beams, and two-wave combinations. (LEW)

  12. Accelerating the discontinuous Galerkin method for seismic wave propagation simulations using multiple GPUs with CUDA and MPI

    NASA Astrophysics Data System (ADS)

    Mu, Dawei; Chen, Po; Wang, Liqiang

    2013-12-01

    We have successfully ported an arbitrary high-order discontinuous Galerkin method for solving the three-dimensional isotropic elastic wave equation on unstructured tetrahedral meshes to multiple Graphic Processing Units (GPUs) using the Compute Unified Device Architecture (CUDA) of NVIDIA and Message Passing Interface (MPI) and obtained a speedup factor of about 28.3 for the single-precision version of our codes and a speedup factor of about 14.9 for the double-precision version. The GPU used in the comparisons is NVIDIA Tesla C2070 Fermi, and the CPU used is Intel Xeon W5660. To effectively overlap inter-process communication with computation, we separate the elements on each subdomain into inner and outer elements and complete the computation on outer elements and fill the MPI buffer first. While the MPI messages travel across the network, the GPU performs computation on inner elements, and all other calculations that do not use information of outer elements from neighboring subdomains. A significant portion of the speedup also comes from a customized matrix-matrix multiplication kernel, which is used extensively throughout our program. Preliminary performance analysis on our parallel GPU codes shows favorable strong and weak scalabilities.
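
    The communication/computation overlap described above can be sketched schematically with mpi4py; plain arrays stand in for the DG element data, and the update steps are placeholders rather than the actual solver kernels.

```python
# Schematic sketch of overlapping halo exchange with computation (not the authors' code).
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
left, right = (rank - 1) % size, (rank + 1) % size

inner = np.zeros(1000)                 # stand-ins for inner-element unknowns
outer = np.zeros(10)                   # stand-ins for outer (halo-adjacent) unknowns
recv_l, recv_r = np.empty(10), np.empty(10)

# 1. Fill send buffers from the outer elements and post nonblocking exchanges.
reqs = [comm.Isend(outer, dest=left), comm.Isend(outer, dest=right),
        comm.Irecv(recv_l, source=left), comm.Irecv(recv_r, source=right)]

# 2. While messages travel, do the work that needs no neighbor data.
inner += 1.0                           # placeholder for the inner-element update

# 3. Wait for the halo data, then finish the outer-element update.
MPI.Request.Waitall(reqs)
outer += 0.5 * (recv_l + recv_r)       # placeholder for the outer-element update
```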

  13. OpenACC acceleration of an unstructured CFD solver based on a reconstructed discontinuous Galerkin method for compressible flows

    DOE PAGESBeta

    Xia, Yidong; Lou, Jialin; Luo, Hong; Edwards, Jack; Mueller, Frank

    2015-02-09

    Here, an OpenACC directive-based graphics processing unit (GPU) parallel scheme is presented for solving the compressible Navier–Stokes equations on 3D hybrid unstructured grids with a third-order reconstructed discontinuous Galerkin method. The developed scheme requires the minimum code intrusion and algorithm alteration for upgrading a legacy solver with the GPU computing capability at very little extra effort in programming, which leads to a unified and portable code development strategy. A face coloring algorithm is adopted to eliminate the memory contention because of the threading of internal and boundary face integrals. A number of flow problems are presented to verify the implementation of the developed scheme. Timing measurements were obtained by running the resulting GPU code on one Nvidia Tesla K20c GPU card (Nvidia Corporation, Santa Clara, CA, USA) and compared with those obtained by running the equivalent Message Passing Interface (MPI) parallel CPU code on a compute node (consisting of two AMD Opteron 6128 eight-core CPUs (Advanced Micro Devices, Inc., Sunnyvale, CA, USA)). Speedup factors of up to 24× and 1.6× for the GPU code were achieved with respect to one and 16 CPU cores, respectively. The numerical results indicate that this OpenACC-based parallel scheme is an effective and extensible approach to port unstructured high-order CFD solvers to GPU computing.

  14. OpenACC acceleration of an unstructured CFD solver based on a reconstructed discontinuous Galerkin method for compressible flows

    SciTech Connect

    Xia, Yidong; Lou, Jialin; Luo, Hong; Edwards, Jack; Mueller, Frank

    2015-02-09

    Here, an OpenACC directive-based graphics processing unit (GPU) parallel scheme is presented for solving the compressible Navier–Stokes equations on 3D hybrid unstructured grids with a third-order reconstructed discontinuous Galerkin method. The developed scheme requires the minimum code intrusion and algorithm alteration for upgrading a legacy solver with the GPU computing capability at very little extra effort in programming, which leads to a unified and portable code development strategy. A face coloring algorithm is adopted to eliminate the memory contention because of the threading of internal and boundary face integrals. A number of flow problems are presented to verify the implementation of the developed scheme. Timing measurements were obtained by running the resulting GPU code on one Nvidia Tesla K20c GPU card (Nvidia Corporation, Santa Clara, CA, USA) and compared with those obtained by running the equivalent Message Passing Interface (MPI) parallel CPU code on a compute node (consisting of two AMD Opteron 6128 eight-core CPUs (Advanced Micro Devices, Inc., Sunnyvale, CA, USA)). Speedup factors of up to 24× and 1.6× for the GPU code were achieved with respect to one and 16 CPU cores, respectively. The numerical results indicate that this OpenACC-based parallel scheme is an effective and extensible approach to port unstructured high-order CFD solvers to GPU computing.

  15. Accelerators and the Accelerator Community

    SciTech Connect

    Malamud, Ernest; Sessler, Andrew

    2008-06-01

    In this paper, standing back--looking from afar--and adopting a historical perspective, the field of accelerator science is examined. How it grew, what are the forces that made it what it is, where it is now, and what it is likely to be in the future are the subjects explored. Clearly, a great deal of personal opinion is invoked in this process.

  16. Observations of Particle Acceleration Associated with Small-Scale Magnetic Islands Downstream of Interplanetary Shocks

    NASA Astrophysics Data System (ADS)

    Khabarova, Olga V.; Zank, Gary P.; Li, Gang; Malandraki, Olga E.; le Roux, Jakobus A.; Webb, Gary M.

    2016-04-01

    We have recently shown both theoretically (Zank et al. 2014, 2015; le Roux et al. 2015) and observationally (Khabarova et al. 2015) that dynamical small-scale magnetic islands play a significant role in local particle acceleration in the supersonic solar wind. We discuss here observational evidence for particle acceleration at shock waves that is enhanced by the recently proposed mechanism of particle energization by both island contraction and the reconnection electric field generated in merging or contracting magnetic islands downstream of the shocks (Zank et al. 2014, 2015; le Roux et al. 2015). Both observations and simulations suggest the formation of magnetic islands in the turbulent wake of heliospheric or interplanetary shocks (ISs) (Turner et al. 2013; Karimabadi et al. 2014; Chasapis et al. 2015). A combination of the DSA mechanism with acceleration by magnetic island dynamics explains why the spectra of energetic particles that are supposed to be accelerated at heliospheric shocks are sometimes harder than predicted by DSA theory (Zank et al. 2015). Moreover, such an approach allows us to explain and describe other unusual behaviour of accelerated particles, such as cases in which energetic particle flux intensity peaks are observed downstream of heliospheric shocks rather than directly at the shock, as DSA theory predicts. Zank et al. (2015) predicted the peak location to be behind the heliospheric termination shock (HTS) and showed that the distance from the shock to the peak depends on particle energy, which is in agreement with Voyager 2 observations. Similar particle behaviour is observed near strong ISs in the outer heliosphere as observed by Voyager 2. Observations show that heliospheric shocks are accompanied by current sheets, and that IS crossings always coincide with sharp changes in the IMF azimuthal angle and the IMF strength, which is typical for strong current sheets. The presence of current sheets in the vicinity of ISs acts to magnetically

  17. High frequency circular translation pin-on-disk method for accelerated wear testing of ultrahigh molecular weight polyethylene as a bearing material in total hip arthroplasty.

    PubMed

    Saikko, Vesa

    2015-01-21

    The temporal change of the direction of sliding relative to the ultrahigh molecular weight polyethylene (UHMWPE) component of prosthetic joints is known to be of crucial importance with respect to wear. One complete revolution of the resultant friction vector is commonly called a wear cycle. It was hypothesized that, in order to accelerate the wear test, the cycle frequency may be substantially increased if the circumference of the slide track is reduced in proportion, while the wear mechanisms remain realistic and no overheating takes place. This requires an additional slow motion mechanism with which the lubrication of the contact is maintained and wear particles are conveyed away from the contact. A three-station, dual-motion high frequency circular translation pin-on-disk (HF-CTPOD) device with a relative cycle frequency of 25.3 Hz and an average sliding velocity of 27.4 mm/s was designed. The pins circularly translated at high frequency (1.0 mm per cycle, 24.8 Hz, clockwise), and the disks at low frequency (31.4 mm per cycle, 0.5 Hz, counter-clockwise). In a 22 million cycle (10 day) test, the wear rate of conventional gamma-sterilized UHMWPE pins against polished CoCr disks in diluted serum was 1.8 mg per 24 h, which was six times higher than that in the established 1 Hz CTPOD device. The wear mechanisms were similar. Burnishing of the pin was the predominant feature. No overheating took place. With the dual-motion HF-CTPOD method, the wear testing of UHMWPE as a bearing material in total hip arthroplasty can be substantially accelerated without concerns about the validity of the wear simulation. PMID:25498368

  18. Development of hardware accelerator for molecular dynamics simulations: a computation board that calculates nonbonded interactions in cooperation with fast multipole method.

    PubMed

    Amisaki, Takashi; Toyoda, Shinjiro; Miyagawa, Hiroh; Kitamura, Kunihiro

    2003-04-15

    Evaluation of long-range Coulombic interactions still represents a bottleneck in the molecular dynamics (MD) simulations of biological macromolecules. Despite the advent of sophisticated fast algorithms, such as the fast multipole method (FMM), accurate simulations still demand a great amount of computation time due to the accuracy/speed trade-off inherently involved in these algorithms. Unless higher order multipole expansions, which are extremely expensive to evaluate, are employed, a large amount of the execution time is still spent in directly calculating particle-particle interactions within the nearby region of each particle. To reduce this execution time for pair interactions, we developed a computation unit (board), called MD-Engine II, that calculates nonbonded pairwise interactions using specially designed hardware. Four custom arithmetic processors and a processor for memory manipulation ("particle processor") are mounted on the computation board. The arithmetic processors are responsible for calculation of the pair interactions. The particle processor plays a central role in realizing efficient cooperation with the FMM. The results of a series of 50-ps MD simulations of a protein-water system (50,764 atoms) indicated that a more stringent setting of accuracy in FMM computation, compared with those previously reported, was required for accurate simulations over long time periods. Such a level of accuracy was efficiently achieved using the cooperative calculations of the FMM and MD-Engine II. On an Alpha 21264 PC, the FMM computation at a moderate but tolerable level of accuracy was accelerated by a factor of 16.0 using three boards. At a high level of accuracy, the cooperative calculation achieved a 22.7-fold acceleration over the corresponding conventional FMM calculation. In the cooperative calculations of the FMM and MD-Engine II, it was possible to achieve more accurate computation at a comparable execution time by incorporating larger nearby
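
    The "nearby region" work that the board offloads is, in essence, a cutoff-limited direct pairwise Coulomb evaluation, with everything beyond the cutoff left to the far-field FMM expansion. A minimal, purely illustrative CPU version of that near-field piece is sketched below (a brute-force O(N²) scan for brevity; the real hardware and FMM neighbor structures are far more elaborate, and the units and function names are assumptions).

        /* Naive near-field Coulomb sum: only pairs closer than a cutoff are
         * evaluated directly; pairs beyond the cutoff are assumed to be handled
         * by the far-field (FMM) part.  Illustrative only; Gaussian-type units. */
        #include <math.h>

        void near_field_coulomb(int n, const double (*pos)[3], const double *q,
                                double cutoff, double (*force)[3], double *energy)
        {
            double cut2 = cutoff * cutoff;
            *energy = 0.0;
            for (int i = 0; i < n; ++i)
                for (int k = 0; k < 3; ++k)
                    force[i][k] = 0.0;

            for (int i = 0; i < n; ++i) {
                for (int j = i + 1; j < n; ++j) {
                    double d[3], r2 = 0.0;
                    for (int k = 0; k < 3; ++k) {
                        d[k] = pos[i][k] - pos[j][k];
                        r2 += d[k] * d[k];
                    }
                    if (r2 >= cut2)          /* far field: left to the FMM */
                        continue;
                    double r    = sqrt(r2);
                    double e    = q[i] * q[j] / r;   /* pair energy        */
                    double fscl = e / r2;            /* |F| divided by r   */
                    *energy += e;
                    for (int k = 0; k < 3; ++k) {
                        force[i][k] += fscl * d[k];
                        force[j][k] -= fscl * d[k];
                    }
                }
            }
        }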

  19. Accelerator based epithermal neutron source

    NASA Astrophysics Data System (ADS)

    Taskaev, S. Yu.

    2015-11-01

    We review the current status of the development of accelerator sources of epithermal neutrons for boron neutron capture therapy (BNCT), a promising method of malignant tumor treatment. Particular attention is given to a source of epithermal neutrons based on a new type of charged particle accelerator: a tandem accelerator with vacuum insulation and a lithium neutron-producing target. It is also shown that the accelerator with specialized targets makes it possible to generate fast and monoenergetic neutrons, resonance and monoenergetic gamma-rays, alpha-particles, and positrons.

  20. A theoretical perspective on particle acceleration by interplanetary shocks and the Solar Energetic Particle problem

    NASA Astrophysics Data System (ADS)

    Verkhoglyadova, Olga P.; Zank, Gary P.; Li, Gang

    2015-02-01

    Understanding the physics of Solar Energetic Particle (SEP) events is of importance to the general question of particle energization throughout the cosmos as well as playing a role in the technologically critical impact of space weather on society. The largest, and often most damaging, events are the so-called gradual SEP events, generally associated with shock waves driven by coronal mass ejections (CMEs). We review the current state of knowledge about particle acceleration at evolving interplanetary shocks with application to SEP events that occur in the inner heliosphere. Starting with a brief outline of recent theoretical progress in the field, we focus on current observational evidence that challenges conventional models of SEP events, including complex particle energy spectra, the blurring of the distinction between gradual and impulsive events, and the difference inherent in particle acceleration at quasi-parallel and quasi-perpendicular shocks. We also review the important problem of the seed particle population and its injection into particle acceleration at a shock. We begin by discussing the properties and characteristics of non-relativistic interplanetary shocks, from their formation close to the Sun to subsequent evolution through the inner heliosphere. The association of gradual SEP events with shocks is discussed. Several approaches to the energization of particles have been proposed, including shock drift acceleration, diffusive shock acceleration (DSA), acceleration by large-scale compression regions, acceleration by random velocity fluctuations (sometimes known as the "pump mechanism"), and others. We review these various mechanisms briefly and focus on the DSA mechanism. Much of our emphasis will be on our current understanding of the parallel and perpendicular diffusion coefficients for energetic particles and models of plasma turbulence in the vicinity of the shock. Because of its importance both to the DSA mechanism itself and to the particle
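
    For orientation, the standard test-particle DSA prediction against which observed SEP spectra are usually compared is the textbook relation between the power-law index and the shock compression ratio; this is general background, not a result of the paper above.

        % Test-particle DSA: power-law index set by the compression ratio r
        f(p) \propto p^{-q}, \qquad q = \frac{3r}{r - 1}
        % A strong gas shock (r = 4) gives q = 4, i.e. N(E) \propto E^{-2};
        % "harder than predicted" spectra fall off more slowly than this.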

  1. Laser acceleration and its future.

    PubMed

    Tajima, Toshiki

    2010-01-01

    Laser acceleration is based on the concept of marshalling collective fields that may be induced by a laser. In order to exceed the material breakdown field by a large factor, we employ the broken-down matter of plasma. While the generated wakefields resemble the fields in conventional accelerators in their structure (at least qualitatively), it is their extreme accelerating fields that distinguish the laser wakefield from others, amounting to tiny emittance and a compact accelerator. Current research largely focuses on how to master the control of the acceleration process on spatial and temporal scales several orders of magnitude smaller than in conventional methods. Efforts over the last several years have come to fruition in the generation of good beam properties with GeV energies on a table top, leading to many applications, such as ultrafast radiolysis, intraoperative radiation therapy, injection into X-ray free-electron lasers, and candidacy for future high-energy accelerators. PMID:20228616
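
    A rough sense of scale for the collective plasma fields mentioned above is given by the standard cold, nonrelativistic wave-breaking estimate for the wakefield amplitude; this is textbook background rather than a figure quoted in the abstract.

        % Cold, nonrelativistic wave-breaking field for plasma density n_0
        E_0 = \frac{m_e c\, \omega_p}{e} \simeq 96\,\sqrt{n_0\,[\mathrm{cm^{-3}}]}\ \mathrm{V/m}
        % e.g. n_0 = 10^{18}\ \mathrm{cm^{-3}} gives E_0 \approx 96\ \mathrm{GV/m},
        % roughly three orders of magnitude above conventional RF structure gradients.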

  2. Laser acceleration and its future

    PubMed Central

    Tajima, Toshiki

    2010-01-01

    Laser acceleration is based on the concept of marshalling collective fields that may be induced by a laser. In order to exceed the material breakdown field by a large factor, we employ the broken-down matter of plasma. While the generated wakefields resemble the fields in conventional accelerators in their structure (at least qualitatively), it is their extreme accelerating fields that distinguish the laser wakefield from others, amounting to tiny emittance and a compact accelerator. Current research largely focuses on how to master the control of the acceleration process on spatial and temporal scales several orders of magnitude smaller than in conventional methods. Efforts over the last several years have come to fruition in the generation of good beam properties with GeV energies on a table top, leading to many applications, such as ultrafast radiolysis, intraoperative radiation therapy, injection into X-ray free-electron lasers, and candidacy for future high-energy accelerators. PMID:20228616

  3. General purpose programmable accelerator board

    DOEpatents

    Robertson, Perry J.; Witzke, Edward L.

    2001-01-01

    A general purpose accelerator board and acceleration method comprising use of: one or more programmable logic devices; a plurality of memory blocks; bus interface for communicating data between the memory blocks and devices external to the board; and dynamic programming capabilities for providing logic to the programmable logic device to be executed on data in the memory blocks.

  4. Prediction of back-scatter radiations to a beam monitor chamber of medical linear accelerators by use of the digitized target-current-pulse analysis method.

    PubMed

    Suzuki, Yusuke; Hayashi, Naoki; Kato, Hideki; Fukuma, Hiroshi; Hirose, Yasujiro; Kawano, Makoto; Nishii, Yoshio; Nakamura, Masaru; Mukouyama, Takashi

    2013-01-01

    In small-field irradiation, the back-scattered radiation (BSR) affects the counts measured with a beam monitor chamber (BMC). In general, the effect of the BSR depends on the opened-jaw size. The effect is significantly large in small-field irradiation. Our purpose in this study was to predict the effect of BSR on LINAC output accurately with an improved target-current-pulse (TCP) technique. The pulse signals were measured with a system consisting of a personal computer and a digitizer. The pulse signals were analyzed with in-house software. The measured parameters were the number of pulses, the change in the waveform, and the integrated signal values of the TCPs. The TCPs were measured for various field sizes with four linear accelerators. For comparison, Yu's method, in which a universal counter was used, was re-examined. The results showed that the variance of the measurements by the new method was reduced to approximately 1/10 of the variance of the previous method. There was no significant variation in the number of pulses due to a change in the field size in the Varian Clinac series. However, a change in the integrated signal value was observed. This tendency differed from the results of previous investigations. Our prediction method is able to define the cutoff voltage for the TCP acquired by the digitizer. This functionality provides the capability of clearly classifying TCPs into signals and noise. In conclusion, our TCP analysis method can predict the effect of BSR on the BMC even for small-field irradiations. PMID:23096002
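
    The cutoff-voltage classification described above amounts to keeping only those digitized pulses whose peak exceeds a threshold and integrating their signal. The sketch below is a minimal illustration under assumed data structures (the flat sample array and pulse boundaries are hypothetical), not the authors' software.

        /* Count target-current pulses whose peak exceeds a cutoff voltage and
         * integrate the signal of the accepted pulses.  Pulses below the cutoff
         * are treated as noise.  Layout is a hypothetical placeholder. */
        typedef struct { int start; int len; } pulse_t;

        int classify_pulses(const double *samples, const pulse_t *pulses, int npulses,
                            double cutoff_volts, double dt, double *integrated_signal)
        {
            int accepted = 0;
            *integrated_signal = 0.0;

            for (int p = 0; p < npulses; ++p) {
                double peak = 0.0, area = 0.0;
                for (int i = 0; i < pulses[p].len; ++i) {
                    double v = samples[pulses[p].start + i];
                    if (v > peak) peak = v;
                    area += v * dt;
                }
                if (peak >= cutoff_volts) {   /* classified as signal */
                    ++accepted;
                    *integrated_signal += area;
                }                             /* else: classified as noise */
            }
            return accepted;                  /* number of signal pulses */
        }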

  5. Use of thermogravimetric analysis to develop accelerated test methods to investigate long-term environmental effects on fiber-reinforced plastics

    SciTech Connect

    Prian, L.; Pollard, R.; Shan, R.; Mastropietro, C.W.; Barkatt, A.; Gentry, T.R.; Bank, L.C.

    1997-12-31

    The development of accelerated test methods to characterize long-term environmental effects on fiber-reinforced plastics (FRPs) requires the use of physicochemical methods, as well as macromechanical measurements, in order to investigate the degradation processes and predict their course over long periods of time. Thermochemical and mechanical measurements were performed on a large number of FRPs exposed to neutral, basic, and acidic media between 23 and 80 °C over periods of 7 to 224 days. The resin matrices used in the present study included vinylester, polyester, and epoxy, and the fiber materials were silicate glass, aramid, and carbon. TGA was used to study the effects of aqueous media on FRPs. In particular, the relative weight loss upon heating the previously exposed material from 150 to 300 °C was found to be indicative of the extent of matrix depolymerization. Indications were obtained for correlation between this weight loss and the extent of degradation of various measures of mechanical strength. The measured weight change of the tested materials during exposure was found to reflect the extent of water absorption and could be related to the extent of the weight loss between 150 and 300 °C. In basic environments, weight loss, rather than gain, took place as a result of fiber dissolution.

  6. A consecutive preparation method based upon accelerated solvent extraction and high-speed counter-current chromatography for isolation of aesculin from Cortex fraxinus.

    PubMed

    Tong, Xing; Zhou, Ting; Xiao, Xiaohua; Li, Gongke

    2012-12-01

    A consecutive preparation method based upon accelerated solvent extraction (ASE) coupled with high-speed counter-current chromatography (HSCCC) was presented, and aesculin was obtained from Cortex fraxinus. The extraction conditions of ASE were optimized with response surface methodology; significant parameters such as the solvent system and its stability and the sample loading amount in HSCCC were also investigated. The original sample was first extracted with methanol at 105°C and 104 bar for 7 min using ASE; the extracts were then consecutively introduced into the HSCCC system and separated and purified with the same ethyl acetate/n-butanol/water (7:3:10, v/v/v) solvent system five times without further solvent exchange and equilibration. About 3.1 ± 0.2 mg/g was obtained each time, and a total of 15.4 mg/g of aesculin with purity over 95% was isolated from Cortex fraxinus. The results demonstrated that the consecutive preparation method is time- and solvent-saving and high-throughput; it is suitable for the isolation of aesculin from Cortex fraxinus and also has good potential for the separation and purification of active compounds from natural products. PMID:23225725

  7. Fully nonlinear time-domain simulation of a backward bent duct buoy floating wave energy converter using an acceleration potential method

    NASA Astrophysics Data System (ADS)

    Lee, Kyoung-Rok; Koo, Weoncheol; Kim, Moo-Hyun

    2013-12-01

    A floating Oscillating Water Column (OWC) wave energy converter, a Backward Bent Duct Buoy (BBDB), was simulated using a state-of-the-art, two-dimensional, fully-nonlinear Numerical Wave Tank (NWT) technique. The hydrodynamic performance of the floating OWC device was evaluated in the time domain. The acceleration potential method, with a fully updated kernel matrix calculation associated with a mode decomposition scheme, was implemented to obtain accurate estimates of the hydrodynamic force and displacement of a freely floating BBDB. The developed NWT was based on potential theory and the boundary element method with constant panels on the boundaries. The mixed Eulerian-Lagrangian (MEL) approach was employed to capture the nonlinear free surfaces inside the chamber that interacted with the pneumatic pressure induced by the time-varying airflow velocity at the air duct. A special viscous damping was applied to the chamber free surface to represent the viscous energy loss due to the BBDB's shape and motions. The viscous damping coefficient was selected by comparison with the experimental data. The calculated surface elevations inside and outside the chamber, with the tuned viscous damping, correlated reasonably well with the experimental data for various incident wave conditions. The conservation of the total wave energy in the computational domain was confirmed over the entire range of wave frequencies.

  8. Quantitative analysis of artifacts in 4D DSA: the relative contributions of beam hardening and scatter to vessel dropout behind highly attenuating structures

    NASA Astrophysics Data System (ADS)

    Hermus, James; Szczykutowicz, Timothy P.; Strother, Charles M.; Mistretta, Charles

    2014-03-01

    When performing Computed Tomographic (CT) image reconstruction on digital subtraction angiography (DSA) projections, loss of vessel contrast has been observed behind highly attenuating anatomy, such as dental implants and large contrast-filled aneurysms. Because this typically occurs only in a limited range of projection angles, the observed contrast time course can potentially be altered. In this work, we have developed a model for acquiring DSA projections that models both the polychromatic nature of the x-ray spectrum and the x-ray scattering interactions to investigate this problem. In our simulation framework, scatter and beam hardening contributions to vessel dropout can be analyzed separately. We constructed digital phantoms with large, clearly defined regions containing iodine contrast, bone, soft tissue, titanium (dental implants), or combinations of these materials. As the regions containing the materials were large and rectangular, when the phantoms were forward projected, the projections contained uniform regions of interest (ROI) and enabled accurate vessel dropout analysis. Two phantom models were used, one to model the case of a vessel behind a large contrast-filled aneurysm and the other to model a vessel behind a dental implant. Cases in which both beam hardening and scatter were turned off, only scatter was turned on, only beam hardening was turned on, and both scatter and beam hardening were turned on, were simulated for both phantom models. The analysis of these data showed that the contrast degradation is primarily due to scatter. When analyzing the aneurysm case, 90.25% of the vessel contrast was lost in the polychromatic scatter image, whereas only 50.5% of the vessel contrast was lost in the beam-hardening-only image. When analyzing the teeth case, 44.2% of the vessel contrast was lost in the polychromatic scatter image and only 26.2% of the vessel contrast was lost in the beam-hardening-only image.
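
    The contrast-loss percentages quoted above can be expressed from mean ROI values of the vessel and its background in an ideal (monochromatic, scatter-free) versus a degraded reconstruction. The function below is a simple, assumed form of that metric for illustration; the study's exact ROI definitions are not reproduced here.

        /* Fractional vessel contrast loss between an ideal reconstruction and a
         * degraded one (scatter and/or beam hardening enabled), computed from
         * precomputed mean ROI values.  Returns the loss in percent.
         * The metric definition is an assumption, not taken from the paper. */
        double vessel_contrast_loss_percent(double vessel_ideal, double bg_ideal,
                                            double vessel_degraded, double bg_degraded)
        {
            double contrast_ideal    = vessel_ideal    - bg_ideal;
            double contrast_degraded = vessel_degraded - bg_degraded;
            return 100.0 * (1.0 - contrast_degraded / contrast_ideal);
        }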

  9. A sensitive and validated HPLC method for the determination of cyromazine and melamine in herbal and edible plants using accelerated solvent extraction and cleanup with SPE.

    PubMed

    Ge, Xusheng; Wu, Xingqiang; Liang, Shuxuan; Sun, Hanwen

    2014-08-01

    A highly sensitive method was developed for the determination of the residues of cyromazine (CYR) and its metabolite, melamine (MEL), in herbal and edible plant samples by using reversed phase high-performance liquid chromatography-diode-array detection (RP-HPLC-DAD) with accelerated solvent extraction and solid phase extraction cleanup. The conditions of separation and detection were investigated and optimized. A Waters C18 column (250 × 4.6 mm i.d., 5 µm) was used for the RP-HPLC, with a mobile phase composed of 0.1% trifluoroacetic acid solution and methanol (85:15, v/v, pH 2.6). Under the optimized conditions, good linearity was achieved with a correlation coefficient of 0.9998. The limits of quantification of the method were 2.15 µg/kg for CYR and 2.51 µg/kg for MEL, as much as three orders of magnitude below the maximum residue limits. The recovery values at three spiked concentrations were in the range of 96.2-107.1% with relative standard deviations (RSDs) of 1.1-5.7% for CYR, and 92.7-104.9% with RSDs of 1.7-6.1% for MEL. The proposed method allows detection of CYR and MEL at µg/kg levels. The method was validated by liquid chromatography-tandem mass spectrometry, and can be used for the routine determination of CYR and MEL in herbal and edible plant samples with the characteristics of speed, high sensitivity and accuracy, and low consumption of reagents. PMID:23845887

  10. Advanced accelerator methods: The cyclotrino

    SciTech Connect

    Welch, J.J.; Bertsche, K.J.; Friedman, P.G.; Morris, D.E.; Muller, R.A.

    1987-04-01

    Several new and unusual advanced techniques used in the small cyclotron are described. The cyclotron is run at low energy, using negative ions and at high harmonics. Electrostatic focusing is used exclusively. The ion source and injection system are located in the center; the present source unfortunately does not provide enough current, but the new system design should solve this problem. An electrostatic extractor that runs at low voltage, under 5 kV, and a microchannel plate detector that is able to discriminate low-energy ions from the ¹⁴C are used. The resolution is sufficient for ¹⁴C dating, and a higher-intensity source should allow dating of a milligram-size sample of 30,000-year-old material with less than 10% uncertainty.

  11. Attention's Accelerator.

    PubMed

    Reinhart, Robert M G; McClenahan, Laura J; Woodman, Geoffrey F

    2016-06-01

    How do people get attention to operate at peak efficiency in high-pressure situations? We tested the hypothesis that the general mechanism that allows this is the maintenance of multiple target representations in working and long-term memory. We recorded subjects' event-related potentials (ERPs) indexing the working memory and long-term memory representations used to control attention while performing visual search. We found that subjects used both types of memories to control attention when they performed the visual search task with a large reward at stake, or when they were cued to respond as fast as possible. However, under normal circumstances, one type of target memory was sufficient for slower task performance. The use of multiple types of memory representations appears to provide converging top-down control of attention, allowing people to step on the attentional accelerator in a variety of high-pressure situations. PMID:27056975

  12. Basic concepts in plasma accelerators.

    PubMed

    Bingham, Robert

    2006-03-15

    In this article, we present the underlying physics and the present status of high gradient and high-energy plasma accelerators. With the development of compact short-pulse high-brightness lasers and electron and positron beams, new areas of study for laser/particle beam-matter interactions are opening up. A number of methods are being pursued vigorously to achieve ultra-high acceleration gradients. These include the plasma beat wave accelerator (PBWA) mechanism, which uses conventional long-pulse (approximately 100 ps) modest-intensity lasers (I approximately 10¹⁴-10¹⁶ W cm⁻²); the laser wakefield accelerator (LWFA), which uses the new breed of compact high-brightness lasers (<1 ps) and intensities >10¹⁸ W cm⁻²; the self-modulated laser wakefield accelerator (SMLWFA) concept, which combines elements of stimulated Raman forward scattering (SRFS) and electron acceleration by nonlinear plasma waves; and the plasma wakefield accelerator, in which wakefields are excited by relativistic electron and positron bunches. In the ultra-high intensity regime, laser/particle beam-plasma interactions are highly nonlinear and relativistic, leading to new phenomena such as plasma wakefield excitation for particle acceleration, relativistic self-focusing and guiding of laser beams, high-harmonic generation, and acceleration of electrons, positrons, protons and photons. Fields greater than 1 GV cm⁻¹ have been generated, with monoenergetic particle beams accelerated to about 100 MeV over millimetre distances recorded. Plasma wakefields driven by both electron and positron beams at the Stanford linear accelerator centre (SLAC) facility have accelerated the tail of the beams. PMID:16483948

  13. Preliminary energy-filtering neutron imaging with time-of-flight method on PKUNIFTY: A compact accelerator based neutron imaging facility at Peking University

    NASA Astrophysics Data System (ADS)

    Wang, Hu; Zou, Yubin; Wen, Weiwei; Lu, Yuanrong; Guo, Zhiyu

    2016-07-01

    Peking University Neutron Imaging Facility (PKUNIFTY) is based on an accelerator-driven neutron source with a repetition period of 10 ms and a pulse duration of 0.4 ms, which has a rather low Cd ratio. To improve the effective Cd ratio and thus the detection capability of the facility, energy-filtering neutron imaging was realized with an intensified CCD camera and the time-of-flight (TOF) method. The time structure of the pulsed neutron source was first simulated with Geant4, and the simulation result was evaluated against experiment. Both simulation and experimental results indicated that fast and epithermal neutrons were concentrated in the first 0.8 ms of each pulse period, while in the interval of 0.8-2.0 ms only thermal neutrons existed. Based on this result, neutron images with and without energy filtering were acquired, and they showed that the detection capability of PKUNIFTY was improved by setting the exposure interval to 0.8-2.0 ms, especially for materials with strong moderating capability.
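
    Energy filtering by time of flight, as described above, reduces to keeping only counts whose delay after the start of the source pulse period falls inside the thermal window. The sketch below uses the 0.8-2.0 ms window quoted in the abstract but an assumed event-list representation; the actual facility gates an intensified CCD camera rather than counting individual events.

        /* Accumulate only events whose arrival time within the 10 ms pulse period
         * falls inside the thermal-neutron window (0.8-2.0 ms in the abstract).
         * The event structure is a hypothetical placeholder. */
        typedef struct { double t_ms; int x; int y; } neutron_event_t;

        int gate_thermal_events(const neutron_event_t *events, int nevents,
                                double t_lo_ms, double t_hi_ms,
                                unsigned *image, int width)
        {
            int kept = 0;
            for (int i = 0; i < nevents; ++i) {
                double t = events[i].t_ms;   /* time since start of the pulse period */
                if (t >= t_lo_ms && t < t_hi_ms) {
                    image[events[i].y * width + events[i].x] += 1;
                    ++kept;
                }
            }
            return kept;   /* events accumulated into the energy-filtered image */
        }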

  14. Evaluation of non-volatile metabolites in beer stored at high temperature and utility as an accelerated method to predict flavour stability.

    PubMed

    Heuberger, Adam L; Broeckling, Corey D; Sedin, Dana; Holbrook, Christian; Barr, Lindsay; Kirkpatrick, Kaylyn; Prenni, Jessica E

    2016-06-01

    Flavour stability is vital to the brewing industry, as beer is often stored for an extended time under variable conditions. Developing an accelerated model to evaluate brewing techniques that affect flavour stability is an important area of research. Here, we performed metabolomics on non-volatile compounds in beer stored at 37 °C for 1 to 14 days for two beer types: an amber ale and an India pale ale. The experiment showed that high-temperature storage influences non-volatile metabolites, including the purine 5-methylthioadenosine (5-MTA). In a second experiment, three brewing techniques were evaluated for improved flavour stability: use of antioxidant crowns, chelation of pro-oxidants, and varying plant content in hops. Sensory analysis determined that the hop method was associated with improved flavour stability, and this was consistent with reduced 5-MTA under both regular and high-temperature storage. Future studies are warranted to understand the influence of 5-MTA on flavour and aging within different beer types. PMID:26830592

  15. Comparison of the Effects of Two Auditory Methods by Mother and Fetus on the Results of Non-Stress Test (Baseline Fetal Heart Rate and Number of Accelerations) in Pregnant Women: A Randomized Controlled Trial

    PubMed Central

    Khoshkholgh, Roghaie; Keshavarz, Tahereh; Moshfeghy, Zeinab; Akbarzadeh, Marzieh; Asadi, Nasrin; Zare, Najaf

    2016-01-01

    Objective: To compare the effects of two auditory methods, by mother and fetus, on the results of NST in 2011-2012. Materials and methods: In this single-blind clinical trial, 213 pregnant women with gestational age of 37-41 weeks who had no pregnancy complications were randomly divided into 3 groups (auditory intervention for mother, auditory intervention for fetus, and control), each containing 71 subjects. In the intervention groups, music was played during the second 10 minutes of the NST. The three groups were compared regarding baseline fetal heart rate and number of accelerations in the first and second 10 minutes of the NST. The data were analyzed using one-way ANOVA, Kruskal-Wallis, and paired T-test. Results: The results showed no significant difference among the three groups regarding baseline fetal heart rate in the first (p = 0.945) and second (p = 0.763) 10 minutes. However, a significant difference was found among the three groups concerning the number of accelerations in the second 10 minutes. Also, a significant difference was observed in the number of accelerations in the auditory-intervention-for-mother group (p = 0.013) and the auditory-intervention-for-fetus group (p < 0.001). The difference between the number of accelerations in the first and second 10 minutes was also statistically significant (p = 0.002). Conclusion: Music intervention affected the number of accelerations, which is an indicator of fetal health. However, further studies on the issue are required. PMID:27385971

  16. Imaging using accelerated heavy ions

    SciTech Connect

    Chu, W.T.

    1982-05-01

    Several methods for imaging using accelerated heavy ion beams are being investigated at Lawrence Berkeley Laboratory. Using the HILAC (Heavy-Ion Linear Accelerator) as an injector, the Bevalac can accelerate fully stripped atomic nuclei from carbon (Z = 6) to krypton (Z = 36), and partly stripped ions up to uranium (Z = 92). Radiographic studies to date have been conducted with helium (from the 184-inch cyclotron), carbon, oxygen, and neon beams. Useful ranges in tissue of 40 cm or more are available. To investigate the potential of heavy-ion projection radiography and computed tomography (CT), several methods and instruments have been studied.

  17. Accelerated Profile HMM Searches

    PubMed Central

    Eddy, Sean R.

    2011-01-01

    Profile hidden Markov models (profile HMMs) and probabilistic inference methods have made important contributions to the theory of sequence database homology search. However, practical use of profile HMM methods has been hindered by the computational expense of existing software implementations. Here I describe an acceleration heuristic for profile HMMs, the “multiple segment Viterbi” (MSV) algorithm. The MSV algorithm computes an optimal sum of multiple ungapped local alignment segments using a striped vector-parallel approach previously described for fast Smith/Waterman alignment. MSV scores follow the same statistical distribution as gapped optimal local alignment scores, allowing rapid evaluation of significance of an MSV score and thus facilitating its use as a heuristic filter. I also describe a 20-fold acceleration of the standard profile HMM Forward/Backward algorithms using a method I call “sparse rescaling”. These methods are assembled in a pipeline in which high-scoring MSV hits are passed on for reanalysis with the full HMM Forward/Backward algorithm. This accelerated pipeline is implemented in the freely available HMMER3 software package. Performance benchmarks show that the use of the heuristic MSV filter sacrifices negligible sensitivity compared to unaccelerated profile HMM searches. HMMER3 is substantially more sensitive and 100- to 1000-fold faster than HMMER2. HMMER3 is now about as fast as BLAST for protein searches. PMID:22039361
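
    To make the idea behind the MSV filter concrete, the sketch below scores the best single ungapped local alignment segment between a position-specific query score matrix and a digitized target sequence. This is a scalar, simplified stand-in only: the actual MSV algorithm scores an optimal sum of multiple ungapped segments and is striped and vectorized, and the score-matrix layout here is a hypothetical placeholder, not the HMMER3 data structure.

        /* Best single ungapped local alignment segment.  score is a row-major
         * (qlen x nalpha) position-specific score matrix; target holds residue
         * indices in [0, nalpha).  Simplified illustration, not the MSV filter. */
        float best_ungapped_segment(const float *score, int qlen, int nalpha,
                                    const int *target, int tlen)
        {
            float best = 0.0f;
            /* Each diagonal d fixes the offset between query and target positions. */
            for (int d = -(qlen - 1); d < tlen; ++d) {
                float run = 0.0f;                 /* running score of current segment */
                for (int q = 0; q < qlen; ++q) {
                    int t = q + d;
                    if (t < 0 || t >= tlen)
                        continue;                 /* off the ends of the target */
                    run += score[q * nalpha + target[t]];
                    if (run < 0.0f) run = 0.0f;   /* restart segment (local alignment) */
                    if (run > best) best = run;
                }
            }
            return best;
        }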

  18. Near-Surface Sensing of Vegetative Heavy Metal Stress: Method Development for an Accelerated Assessment of Mine Tailing Waste and Remediation Efforts

    NASA Astrophysics Data System (ADS)

    Lee, M. T.; Gottfried, M.; Berglund, E.; Rodriguez, G.; Ceckanowicz, D. J.; Cutter, N.; Badgeley, J.

    2014-12-01

    The boom and bust history of mineral extraction in the American southwest is visible today in tens of thousands of abandoned and slowly decaying mine installations that scar the landscape. Mine tailing piles, mounds of crushed mineral ore, often contain significant quantities of heavy metal elements which may leach into surrounding soils, surface water and ground water. Chemical analysis of contaminated soils is a tedious and time-consuming process. Regional assessment of heavy metal contamination for treatment prioritization would be greatly accelerated by the development of near-surface imaging indices of heavy-metal vegetative stress in western grasslands. Further, the method would assist in measuring the ongoing effectiveness of phytoremedatian and phytostabilization efforts. To test feasibility we ground truthed nine phytoremediated and two control sites sites along the mine-impacted Kerber Creek watershed in Saguache County, Colorado. Total metal concentration was determined by XRF for both plant and soil samples. Leachable metals were extracted from soil samples following US EPA method 1312. Plants were identified, sorted into roots, shoots and leaves, and digested via microwave acid extraction. Metal concentrations were determined with high accuracy by ICP-OES analysis. Plants were found to contain significantly higher concentrations of heavy metals than surrounding soils, particularly for manganese (Mn), iron (Fe), copper (Cu), zinc (Zn), barium (Ba), and lead (Pb). Plant species accumulated and distributed metals differently, yet most showed translocation of metals from roots to above ground structures. Ground analysis was followed by near surface imaging using an unmanned aerial vehicle equipped with visible/near and shortwave infrared (0.7 to 1.5 μm) cameras. Images were assessed for spectral shifts indicative of plant stress and attempts made to correlate results with measured soil and plant metal concentrations.

  19. Acceleration modules in linear induction accelerators

    NASA Astrophysics Data System (ADS)

    Wang, Shao-Heng; Deng, Jian-Jun

    2014-05-01

    The Linear Induction Accelerator (LIA) is a unique type of accelerator that is capable of accelerating kilo-Ampere charged particle current to tens of MeV energy. The present development of LIAs in MHz burst mode and their successful application to a synchrotron have broadened the scope of LIA applications. Although the transformer model is widely used to explain the acceleration mechanism of LIAs, for many modern LIAs it is not appropriate to consider the induction electric field as the field that accelerates the charged particles. We have examined the transition of the magnetic cores' functions during the evolution of LIA acceleration modules, distinguished transformer-type and transmission-line-type LIA acceleration modules, and reconsidered several related issues based on the transmission-line-type LIA acceleration module. This clarified understanding should help in the further development and design of LIA acceleration modules.

  20. Development and evaluation of convergent and accelerated penalized SPECT image reconstruction methods for improved dose–volume histogram estimation in radiopharmaceutical therapy

    PubMed Central

    Cheng, Lishui; Hobbs, Robert F.; Sgouros, George; Frey, Eric C.

    2014-01-01

    Purpose: Three-dimensional (3D) dosimetry has the potential to provide better prediction of response of normal tissues and tumors and is based on 3D estimates of the activity distribution in the patient obtained from emission tomography. Dose–volume histograms (DVHs) are an important summary measure of 3D dosimetry and a widely used tool for treatment planning in radiation therapy. Accurate estimates of the radioactivity distribution in space and time are desirable for accurate 3D dosimetry. The purpose of this work was to develop and demonstrate the potential of penalized SPECT image reconstruction methods to improve DVH estimates obtained from 3D dosimetry methods. Methods: The authors developed penalized image reconstruction methods, using maximum a posteriori (MAP) formalism, which intrinsically incorporate regularization in order to control noise and, unlike linear filters, are designed to retain sharp edges. Two priors were studied: one is a 3D hyperbolic prior, termed single-time MAP (STMAP), and the second is a 4D hyperbolic prior, termed cross-time MAP (CTMAP), using both the spatial and temporal information to control noise. The CTMAP method assumed perfect registration between the estimated activity distributions and projection datasets from the different time points. Accelerated and convergent algorithms were derived and implemented. A modified NURBS-based cardiac-torso phantom with a multicompartment kidney model and organ activities and parameters derived from clinical studies were used in a Monte Carlo simulation study to evaluate the methods. Cumulative dose-rate volume histograms (CDRVHs) and cumulative DVHs (CDVHs) obtained from the phantom and from SPECT images reconstructed with both the penalized algorithms and OS-EM were calculated and compared both qualitatively and quantitatively. The STMAP method was applied to patient data and CDRVHs obtained with STMAP and OS-EM were compared qualitatively. Results: The results showed that the
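
    The hyperbolic prior mentioned above is an edge-preserving penalty that behaves quadratically for small voxel differences and linearly for large ones, which is what lets it smooth noise while retaining sharp edges. A common form of that potential is shown below for illustration; the exact parameterization used in the paper may differ.

        /* Hyperbolic (edge-preserving) potential applied to a voxel difference t.
         * Approximately t^2 / 2 for |t| << delta and delta * |t| for |t| >> delta,
         * so noise is smoothed while sharp edges are retained.
         * Generic textbook form; the paper's parameterization may differ. */
        #include <math.h>

        double hyperbolic_potential(double t, double delta)
        {
            double u = t / delta;
            return delta * delta * (sqrt(1.0 + u * u) - 1.0);
        }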