Effect of Material Homogeneity on the Performance of DSA for Even-Parity S_{n} Methods
Azmy, Y.Y.; Morel, J.; Wareing, T.
1999-09-27
A spectral analysis is conducted for the Source Iteration (SI) and Diffusion Synthetic Acceleration (DSA) operators previously formulated for solving the Even-Parity Method (EPM) equations. In order to accommodate material heterogeneity, the analysis is performed for the Periodic Horizontal Interface (PHI) configuration. The dependence of the spectral radius on the optical thickness of the two PHI layers illustrates the deterioration in the rate of convergence with increasing material discontinuity, especially when one of the layers approaches a void. The rate at which this deterioration occurs is determined for a specific material discontinuity in order to demonstrate the conditional robustness of the EPM-DSA iterations. The results of the analysis are put in perspective via numerical tests with the DANTE code (McGhee et al., 1997), which exhibits a deterioration in the spectral radius consistent with the theory.
NASA Astrophysics Data System (ADS)
Sarma, Chandra; Bunday, Benjamin; Cepler, Aron; Dziura, Ted; Kim, JiHoon; Lin, Guanyang; Yin, Jian
2014-04-01
One of the major challenges associated with insertion of a directed self-assembly (DSA) patterning process in high volume manufacturing (HVM) is finding a non-destructive, yield-compatible, consistent critical dimension (CD) metrology process. Current CD scanning electron microscopy (CD-SEM) top-down approaches do not provide the profile information for DSA patterns, which is paramount in determining the subsequent pattern transfer process (etch, for example). SEMATECH, in cooperation with some of the leaders of the metrology and DSA materials supply chain, has led an effort to address such metrology challenges in DSA. We have developed and evaluated several techniques (including a scatterometry-based method) that are potentially very attractive for determining DSA pattern profiles and embedded bridging in such patterns without resorting to destructive cross-section imaging. We show how such processes could be fine-tuned to enable their insertion for DSA pattern characterization in an HVM environment.
Final report on DSA methods for monitoring alumina in aluminum reduction cells with cermet anodes
NASA Astrophysics Data System (ADS)
Windisch, C. F., Jr.
1992-04-01
The Sensors Development Program was conducted at the Pacific Northwest Laboratory (PNL) for the US Department of Energy, Office of Industrial Processes. The work was performed in conjunction with the Inert Electrodes Program at PNL. The objective of the Sensors Development Program in FY 1990 through FY 1992 was to determine whether methods based on digital signal analysis (DSA) could be used to measure alumina concentration in aluminum reduction cells. Specifically, this work was performed to determine whether useful correlations exist between alumina concentration and various DSA-derived quantification parameters, calculated for current and voltage signals from laboratory and field aluminum reduction cells. If appropriate correlations could be found, then the quantification parameters might be used to monitor and, consequently, help control the alumina concentration in commercial reduction cells. The control of alumina concentration is especially important for cermet anodes, which have exhibited instability and excessive wear at alumina concentrations removed from saturation.
NASA Astrophysics Data System (ADS)
Xu, Jing; Wu, Jian; Feng, Daming; Cui, Zhiming
Serious vascular diseases such as carotid stenosis, aneurysm, and vascular malformation may lead to brain stroke, which is the third leading cause of death and the number one cause of disability. In the clinical diagnosis and treatment of cerebral vascular diseases, effective detection and description of the vascular structure in two-dimensional angiography sequence images, i.e., blood vessel skeleton extraction, has long been a difficult problem. This paper discusses two-dimensional blood vessel skeleton extraction based on the level set method. We first preprocess the DSA image: an anti-concentration diffusion model is used for effective enhancement, and an improved Otsu local threshold segmentation technique based on regional division is used for binarization. Vascular skeleton extraction is then performed using the group marching method (GMM) with fast sweeping. Experiments show that our approach not only improves the time complexity but also yields good extraction results.
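The binarization step above uses an improved local Otsu method; the standard global Otsu threshold on which such variants build can be sketched as follows. This is a minimal NumPy version for illustration, with a toy bimodal "image" standing in for an enhanced DSA frame:

```python
import numpy as np

def otsu_threshold(image, nbins=256):
    """Standard (global) Otsu threshold: maximize between-class variance."""
    hist, edges = np.histogram(image.ravel(), bins=nbins)
    p = hist.astype(float) / hist.sum()            # bin probabilities
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                              # class-0 (background) weight
    w1 = 1.0 - w0
    mu0 = np.cumsum(p * centers)                   # unnormalized class-0 mean
    mu_t = mu0[-1]                                 # global mean
    valid = (w0 > 0) & (w1 > 0)                    # guard against empty classes
    sigma_b = np.zeros(nbins)
    sigma_b[valid] = (mu_t * w0[valid] - mu0[valid]) ** 2 / (w0[valid] * w1[valid])
    return centers[np.argmax(sigma_b)]

# Toy bimodal "image": dark background plus bright vessel-like pixels
rng = np.random.default_rng(0)
img = np.concatenate([rng.normal(50, 5, 9000), rng.normal(200, 10, 1000)])
t = otsu_threshold(img)
binary = img > t                                   # vessel mask
```

The local variant in the paper applies this per region rather than globally; the region-division rule is not specified in the abstract, so only the global form is shown.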
Accelerator system and method of accelerating particles
NASA Technical Reports Server (NTRS)
Wirz, Richard E. (Inventor)
2010-01-01
An accelerator system and method that utilize dust as the primary mass flux for generating thrust are provided. The accelerator system can include an accelerator capable of operating in a self-neutralizing mode and having a discharge chamber and at least one ionizer capable of charging dust particles. The system can also include a dust particle feeder that is capable of introducing the dust particles into the accelerator. By applying a pulsed positive and negative charge voltage to the accelerator, the charged dust particles can be accelerated thereby generating thrust and neutralizing the accelerator system.
Accelerated Adaptive Integration Method
2015-01-01
Conformational changes that occur upon ligand binding may be too slow to observe on the time scales routinely accessible using molecular dynamics simulations. The adaptive integration method (AIM) leverages the notion that when a ligand is either fully coupled or fully decoupled, according to the coupling parameter λ, barrier heights may change, making some conformational transitions more accessible at certain λ values. AIM adaptively changes the value of λ in a single simulation so that conformations sampled at one value of λ seed the conformational space sampled at another λ value. Adapting the value of λ throughout a simulation, however, does not resolve issues in sampling when barriers remain high regardless of the λ value. In this work, we introduce a new method, called Accelerated AIM (AcclAIM), in which the potential energy function is flattened at intermediate values of λ, promoting the exploration of conformational space as the ligand is decoupled from its receptor. We show, with both a simple model system (bromocyclohexane) and the more complex biomolecule thrombin, that AcclAIM is a promising approach to overcome high barriers in the calculation of free energies, without the need for any statistical reweighting or additional processors. PMID:24780083
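The barrier-flattening idea behind AcclAIM can be illustrated with a toy model (not the authors' implementation): a Metropolis walk on a double-well potential whose barrier is scaled down at intermediate λ, mimicking the flattened potential. The schedule and all parameters below are hypothetical choices for the demo:

```python
import numpy as np

def barrier_scale(lam):
    """Hypothetical AcclAIM-style schedule: full barrier at the endpoints
    (lam = 0 or 1), strongly flattened at intermediate lam."""
    return 1.0 - 0.9 * 4.0 * lam * (1.0 - lam)    # 1.0 at the ends, 0.1 at 0.5

def count_crossings(lam, steps=20000, seed=1):
    """Metropolis walk on a double well U(x) = s(lam) * 5 * (x^2 - 1)^2
    (barrier ~5 kT when unscaled); count transitions between the wells."""
    rng = np.random.default_rng(seed)
    s = barrier_scale(lam)
    U = lambda x: s * 5.0 * (x * x - 1.0) ** 2
    x, well, crossings = -1.0, -1, 0
    for _ in range(steps):
        xp = x + rng.normal(0.0, 0.3)
        dU = U(xp) - U(x)
        if dU <= 0.0 or rng.random() < np.exp(-dU):   # Metropolis, kT = 1
            x = xp
        if x > 0.8 and well == -1:
            well, crossings = 1, crossings + 1
        elif x < -0.8 and well == 1:
            well, crossings = -1, crossings + 1
    return crossings

# Flattening the barrier at intermediate lam yields far more well-to-well
# transitions than sampling with the full barrier.
crossings_flat = count_crossings(0.5)
crossings_full = count_crossings(0.0)
```

The point of the sketch is only the qualitative effect AcclAIM exploits: conformational transitions sampled at flattened intermediate λ can seed states that are rarely reached at the endpoints.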
A grey diffusion acceleration method for time-dependent radiative transfer calculations
Nowak, P.F.
1991-07-01
The equations of thermal radiative transfer describe the emission, absorption, and transport of photons in a material. As photons travel through the material they are absorbed and re-emitted in a Planckian distribution characterized by the material temperature. As a result of these processes, the material temperature can change, resulting in a change in the Planckian emission spectrum. When the coupling between the material and radiation is strong, as occurs when the material opacity or the time step is large, standard iterative techniques converge very slowly. As a result, nested iterative algorithms have been applied to the problem. One such algorithm uses multifrequency DSA to accelerate the convergence of the multifrequency transport iteration; another uses a grey transport acceleration (GTA) followed by a single-group DSA. Here we summarize a new method, grey diffusion acceleration (GDA), which uses a grey diffusion equation to directly accelerate the multifrequency transport (S_N) iteration. Results of Fourier analysis for both the continuous and discretized equations are discussed, and the computational efficiency of GDA is compared with that of the DSA and GTA nested algorithms. 5 refs., 1 fig., 1 tab.
A full-chip DSA correction framework
NASA Astrophysics Data System (ADS)
Wang, Wei-Long; Latypov, Azat; Zou, Yi; Coskun, Tamer
2014-03-01
The graphoepitaxy DSA process relies on lithographically created confinement wells to perform directed self-assembly in a thin film of block copolymer. These self-assembled patterns are then etch-transferred into the substrate. Conventional DUV immersion or EUV lithography is still required to print the confinement wells, and the residual lithographic patterning errors propagate to the final patterns created by the DSA process. DSA proximity correction (PC), in addition to OPC, is essential to obtain accurate confinement well shapes that resolve the final DSA patterns precisely. In this study, we propose a novel correction flow that integrates our co-optimization algorithms, a rigorous 2-D DSA simulation engine, and an OPC tool. This flow enables us to optimize our process and integration and provides guidance for design optimization. We also show that novel RET techniques such as DSA-aware assist feature generation can be used to improve the process window. The feasibility of our DSA correction framework on a large layout with promising correction accuracy has been demonstrated, and a robust and efficient correction algorithm is confirmed by rigorous verification studies. We also explore how knowledge of DSA natural pitches and lithography printing constraints provides good guidance for establishing DSA-friendly designs. Finally, the application of our full-chip computational DSA correction framework to several real designs of contact-like holes is discussed, and we summarize the challenges associated with computational DSA technology.
Few Projections Astrotomography: 2-CLEAN Dsa Reconstruction
NASA Astrophysics Data System (ADS)
Agafonov, Michail
The radioastronomical approach to the tomographic problem, whose main elements we published in 1989-1990, was developed and presented as the 2-CLEAN DSA method of reconstruction [1, 2]. Radio images of the Crab Nebula were reconstructed from lunar occultation data at 750 and 178 MHz [3]. The basis of the method is a solution using a synthesized beam (SB). To remove the distortion caused by the SB sidelobes, realizations of the CLEAN algorithm are used. Research on solution convergence and on the use of the ST-CLEAN and TC-CLEAN algorithms to determine the permissible solution area is summarized. The method allows 2D reconstruction in a wide spatial frequency band {0, Fb} when the number of projections is only about 0.1 of that needed for the usual tomographic approach. It forms the foundation of the information-computerized technology of few-projections tomography (ICT 2-CLEAN DSA) and is based exclusively on the radio astronomy literature. 1. Agafonov, M.I. 1997, in ASP Conf. Ser. 125, ADASS VI, ed. G. Hunt & H.E. Payne, 202. 2. Agafonov, M.I. 1998, in ASP Conf. Ser. 145, ADASS VII, ed. R. Albrecht, R.N. Hook & H.A. Bushouse, 58. 3. Agafonov, M.I., Ivanov, V.P., & Podvojskaya, O.A. 1990, SvA, 34, 275.
Incorporating DSA in multipatterning semiconductor manufacturing technologies
NASA Astrophysics Data System (ADS)
Badr, Yasmine; Torres, J. A.; Ma, Yuansheng; Mitra, Joydeep; Gupta, Puneet
2015-03-01
Multi-patterning (MP) is the process of record for many sub-10 nm process technologies. The drive to higher densities has required the use of double and triple patterning for several layers, but this increases the cost of new processes, especially for low-volume products in which the mask set is a large percentage of the total cost. For that reason there has been a strong incentive to develop technologies like directed self-assembly (DSA), EUV, or e-beam direct write to reduce the total number of masks needed in a new technology node. Because of the nature of the technology, DSA cylinder graphoepitaxy allows only single-size holes in a single-patterning approach. However, by integrating DSA and MP into a hybrid DSA-MP process, it is possible to devise decomposition approaches that increase design flexibility, allowing different hole sizes or bar structures by independently changing the process for every patterning step. A simple approach to integrating multi-patterning with DSA is to perform DSA grouping and MP decomposition in sequence, either grouping-then-decomposition or decomposition-then-grouping; each of the two sequences has its pros and cons. However, this paper describes why these intuitive approaches do not produce results of acceptable quality from the point of view of design compliance, and we highlight the need for custom DSA-aware MP algorithms.
NASA Astrophysics Data System (ADS)
Latypov, Azat; Coskun, Tamer H.; Garner, Grant; Preil, Moshe; Schmid, Gerard; Xu, Ji; Zou, Yi
2014-03-01
Further enhancements to Monte Carlo and Self-Consistent Field Theory Directed Self-Assembly (DSA) simulation capabilities implemented in GLOBALFOUNDRIES are presented and discussed, along with the results of their applications. We present the simulation studies of DSA in graphoepitaxy confinement wells, where the DSA process parameters are varied in order to determine the optimal set of parameters resulting in a robust and etch transferrable phase morphology. A novel concept of DSA-aware assist features for the optical lithography process is presented and demonstrated in simulations. The results of the DSA simulations and studies for the DSA process using a blend of homopolymers and diblock copolymers are also presented and compared with the simulated diblock copolymer systems.
Tracking of Acceleration with HNJ Method
Ruggiero,A.
2008-02-01
After reviewing the principle of operation of acceleration with the method of Harmonic Number Jump (HNJ) in a Fixed-Field Alternating Gradient (FFAG) accelerator for protons and heavy ions, we report in this talk the results of computer simulations performed to assess the capability and the limits of the method in a variety of practical situations. Though the study is not yet completed, and there still remain other cases to be investigated, nonetheless the tracking results so far obtained are very encouraging, and confirm the validity of the method.
Tracking of Acceleration with HNJ Method
Ruggiero, A.G.
2007-11-05
After reviewing the principle of operation of acceleration with the method of Harmonic Number Jump (HNJ) in a Fixed-Field Alternating-Gradient (FFAG) accelerator for protons and heavy ions, we report in this talk the results of computer simulations performed to assess the capability and the limits of the method in a variety of practical situations. Though the study is not yet completed, and there still remain other cases to be investigated, nonetheless the tracking results so far obtained are very encouraging, and confirm the validity of the method.
Improved cost-effectiveness of the block co-polymer anneal process for DSA
NASA Astrophysics Data System (ADS)
Pathangi, Hari; Stokhof, Maarten; Knaepen, Werner; Vaid, Varun; Mallik, Arindam; Chan, Boon Teik; Vandenbroeck, Nadia; Maes, Jan Willem; Gronheid, Roel
2016-04-01
This manuscript first presents a cost model comparing the cost of ownership of DSA and SAQP for a typical front-end-of-line (FEOL) line patterning exercise. We then proceed to a feasibility study of using a vertical furnace to batch-anneal the block co-polymer for DSA applications. We show that the defect performance of such a batch anneal process is comparable to that of the process-of-record anneal methods. This increases the cost benefit of DSA compared to conventional multiple patterning approaches.
A simple method of accelerating monotonic sequences
NASA Astrophysics Data System (ADS)
Sarkar, B.; Bhattacharyya, K.
1993-03-01
A converse of the well-known Cesàro method is demonstrated to successfully accelerate various monotonic sequences of practical concern. The method is simple, regular, and particularly apt for low-order data. Pilot calculations highlighting its workability in varying practical contexts involve atomic lattice constants (cubic), typical nuclear attraction integrals in molecular calculations, and critical parameters in phase transitions.
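The abstract does not spell out the converse-Cesàro construction itself; as a generic illustration of sequence acceleration in the same spirit, here is the standard Aitken Δ² transformation (a different, textbook technique) applied to a monotonic fixed-point sequence:

```python
import math

def aitken(seq):
    """Aitken delta-squared acceleration of a convergent sequence."""
    out = []
    for s0, s1, s2 in zip(seq, seq[1:], seq[2:]):
        d = s2 - 2.0 * s1 + s0
        out.append(s2 - (s2 - s1) ** 2 / d if d != 0.0 else s2)
    return out

# Monotonic fixed-point sequence x_{n+1} = sqrt(2 + x_n) -> 2 (linear rate ~1/4)
x = [0.0]
for _ in range(8):
    x.append(math.sqrt(2.0 + x[-1]))

acc = aitken(x)
err_raw = abs(x[-1] - 2.0)       # error of the plain sequence
err_acc = abs(acc[-1] - 2.0)     # error after acceleration
```

For linearly convergent sequences like this one, Aitken's transform gains several digits from only the last three terms, which is the sense in which acceleration methods are "apt for low-order data."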
What promotes directed self-assembly (DSA)?
NASA Astrophysics Data System (ADS)
Nakagawa, S. T.
2016-09-01
A low-energy electron beam (EB) can create self-interstitial atoms (SIAs) in a solid and can cause directed self-assembly (DSA), e.g. {3 1 1} SIA platelets in c-Si. The crystalline structure of this planar defect is known from experiment to consist of SIAs that form well-aligned <1 1 0> atomic rows on each (3 1 1) plane. To simulate the experiment we distributed Frenkel pairs (FPs) randomly in bulk c-Si. Then, using molecular dynamics (MD) simulation, we reproduced the experimental result, in which SIAs are trapped at metastable sites in the bulk. With increasing pre-doped FP concentration, the number of SIAs that participate in DSA tends to increase but is then slightly suppressed. On the other hand, when the FP concentration is less than 3%, a cooperative motion of target atoms was characterized using the long-range-order (LRO) parameter. Here we investigated the correlation between DSA and that cooperative motion by adding the case of intrinsic c-Si. We confirmed that the cooperative motion slightly promotes DSA by assisting the migration of SIAs toward metastable sites as long as the FP concentration is less than 3%; however, it is essentially independent of DSA.
Accelerated simulation methods for plasma kinetics
NASA Astrophysics Data System (ADS)
Caflisch, Russel
2016-11-01
Collisional kinetics is a multiscale phenomenon due to the disparity between the continuum (fluid) and collisional (particle) length scales. This paper describes a class of simulation methods for gases and plasmas, and acceleration techniques for improving their speed and accuracy. Starting from the Landau-Fokker-Planck equation for plasmas, the focus is on a binary collision model that is solved using a Direct Simulation Monte Carlo (DSMC) method. Acceleration of this method is achieved by coupling the particle method to a continuum fluid description. The velocity distribution function f is represented as a combination of a Maxwellian M (the thermal component) and a set of discrete particles fp (the kinetic component). For systems that are close to (local) equilibrium, this reduces the number N of simulated particles required to represent f for a given level of accuracy. We present two methods for exploiting this representation. In the first method, equilibration of particles in fp, as well as disequilibration of particles from M, due to the collision process, is represented by a thermalization/dethermalization step that employs an entropy criterion. Efficiency of the representation is greatly increased by the inclusion of particles with negative weights. This significantly complicates the simulation, but the second method is a tractable approach for negatively weighted particles. The accelerated simulation method is compared with the standard PIC-DSMC method for both spatially homogeneous problems, such as the bump-on-tail instability, and inhomogeneous problems, such as nonlinear Landau damping.
Accelerated Learning: Madness with a Method.
ERIC Educational Resources Information Center
Zemke, Ron
1995-01-01
Accelerated learning methods have evolved into a variety of holistic techniques that involve participants in the learning process and overcome negative attitudes about learning. These components are part of the mix: the brain, learning environment, music, imaginative activities, suggestion, positive mental state, the arts, multiple intelligences,…
Coarse mesh and one-cell block inversion based diffusion synthetic acceleration
NASA Astrophysics Data System (ADS)
Kim, Kang-Seog
DSA (Diffusion Synthetic Acceleration) has been developed to accelerate the S_N transport iteration. We have developed solution techniques for the diffusion equations of FLBLD (Fully Lumped Bilinear Discontinuous), SCB (Simple Corner Balance) and UCB (Upstream Corner Balance) modified 4-step DSA in x-y geometry. Our first multi-level method includes a block Gauss-Seidel iteration for the discontinuous diffusion equation, uses the continuous diffusion equation derived from the asymptotic analysis, and avoids void cell calculation. We implemented this multi-level procedure and performed model problem calculations. The results showed that the FLBLD, SCB, and UCB modified 4-step DSA schemes with this multi-level technique are unconditionally stable and rapidly convergent. We suggested a simplified multi-level technique for FLBLD, SCB, and UCB modified 4-step DSA. This new procedure does not include iterations on the diffusion calculation or the residual calculation. Fourier analysis results showed that this new procedure is as rapidly convergent as conventional modified 4-step DSA. We developed new DSA procedures coupled with 1-CI (one-cell block inversion) transport which can be easily parallelized. We showed that 1-CI based DSA schemes preceded by SI (Source Iteration) are efficient and rapidly convergent for LD (Linear Discontinuous) and LLD (Lumped Linear Discontinuous) in slab geometry and for BLD (Bilinear Discontinuous) and FLBLD in x-y geometry. For 1-CI based DSA without SI in slab geometry, the results showed that this procedure is very efficient and effective for all cases. We also showed that 1-CI based DSA in x-y geometry is not effective for thin mesh spacings, but is effective and rapidly convergent for intermediate and thick mesh spacings. We demonstrated that the diffusion equation discretized on a coarse mesh can be employed to accelerate the transport equation. Our results showed that coarse mesh DSA is unconditionally stable and is as rapidly convergent
Evolutionary optimization methods for accelerator design
NASA Astrophysics Data System (ADS)
Poklonskiy, Alexey A.
Many problems from the fields of accelerator physics and beam theory can be formulated as optimization problems and, as such, solved using optimization methods. Despite the growing efficiency of optimization methods, the adoption of modern optimization techniques in these fields is rather limited. Evolutionary Algorithms (EAs) form a relatively new and actively developed family of optimization methods. They possess many attractive features such as ease of implementation, modest requirements on the objective function, good tolerance to noise, robustness, and the ability to perform a global search efficiently. In this work we study the application of EAs to problems from accelerator physics and beam theory. We review the most commonly used methods of unconstrained optimization and describe GATool, the evolutionary algorithm and software package used in this work, in detail. Then we use a set of test problems to assess its performance in terms of computational resources, quality of the obtained result, and the tradeoff between them. We justify the choice of GATool as a heuristic method to generate cutoff values for the COSY-GO rigorous global optimization package for the COSY Infinity scientific computing package. We design a model of their mutual interaction and demonstrate that the quality of the result obtained by GATool increases as the information about the search domain is refined, which supports the usefulness of this model. We discuss GATool's performance on problems suffering from static and dynamic noise and study useful strategies of GATool parameter tuning for these and other difficult problems. We review the challenges of constrained optimization with EAs and the methods commonly used to overcome them. We describe REPA, a new constrained optimization method based on repairing, in detail, including the properties of its two repairing techniques, REFIND and REPROPT. We assess REPROPT's performance on the standard constrained
Influence of template fill in graphoepitaxy DSA
NASA Astrophysics Data System (ADS)
Doise, Jan; Bekaert, Joost; Chan, Boon Teik; Hong, SungEun; Lin, Guanyang; Gronheid, Roel
2016-03-01
Directed self-assembly (DSA) of block copolymers (BCP) is considered a promising patterning approach for the 7 nm node and beyond. Specifically, a grapho-epitaxy process using a cylindrical-phase BCP may offer an efficient solution for patterning randomly distributed contact holes with sub-resolution pitches, such as found in via and cut mask levels. In any grapho-epitaxy process, the pattern density impacts the template fill (the local BCP thickness inside the template) and may cause defects due to over- or underfilling of the template. In order to tackle this issue thoroughly, the parameters that determine template fill and the influence of template fill on the resulting pattern should be investigated. In this work, using three process flow variations (with different template surface energy), template fill is experimentally characterized as a function of pattern density and film thickness. The impact of these parameters on template fill is highly dependent on the process flow, and thus on pre-pattern surface energy. Template fill has a considerable effect on the pattern transfer of the DSA contact holes into the underlying layer. Higher fill levels give rise to smaller contact holes and worse critical dimension uniformity. These results are important towards DSA-aware design and show that fill is a crucial parameter in grapho-epitaxy DSA.
Projected discrete ordinates methods for numerical transport problems
Larsen, E.W.
1985-01-01
A class of Projected Discrete-Ordinates (PDO) methods is described for obtaining iterative solutions of discrete-ordinates problems with convergence rates comparable to those observed using Diffusion Synthetic Acceleration (DSA). The spatially discretized PDO solutions are generally not equal to the DSA solutions, but unlike DSA, which requires great care in the use of spatial discretizations to preserve stability, the PDO solutions remain stable and rapidly convergent with essentially arbitrary spatial discretizations. Numerical results are presented which illustrate the rapid convergence and the accuracy of solutions obtained using PDO methods with commonplace differencing methods.
Methods of geometrical integration in accelerator physics
NASA Astrophysics Data System (ADS)
Andrianov, S. N.
2016-12-01
In this paper we consider a method of geometric integration for the long-time evolution of a particle beam in cyclic accelerators, based on the matrix representation of the particle-evolution operator. This method allows us to calculate the corresponding beam evolution in terms of two-dimensional matrices, including nonlinear effects. The ideology of geometric integration introduces into the corresponding computational algorithms the amendments necessary for preserving the qualitative properties of maps presented in the form of truncated series generated by the evolution operator. The formalism extends to both polarized and intense beams. Examples of practical applications are described.
N7 logic via patterning using templated DSA: implementation aspects
NASA Astrophysics Data System (ADS)
Bekaert, J.; Doise, J.; Gronheid, R.; Ryckaert, J.; Vandenberghe, G.; Fenger, G.; Her, Y. J.; Cao, Y.
2015-07-01
In recent years, major advancements have been made in the directed self-assembly (DSA) of block copolymers (BCP). Insertion of DSA into IC fabrication is seriously considered for the 7 nm node. At this node DSA technology could alleviate the costs of multiple patterning and limit the number of masks required per layer. At imec, multiple approaches for inserting DSA into the 7 nm node are considered. One of the most straightforward approaches for implementation would be via patterning through templated DSA: a grapho-epitaxy flow using a cylindrical-phase BCP material, resulting in contact hole multiplication within a litho-defined pre-pattern. To be implemented for 7 nm node via patterning, not only does the appropriate process flow need to be available, but DSA-aware mask decomposition is also required. In this paper, several aspects of the imec approach to implementing templated DSA are discussed, including experimental demonstration of density-effect mitigation, DSA hole pattern transfer, double DSA patterning, and the creation of a compact DSA model. Using an actual 7 nm node logic layout, we derive DSA-friendly design rules in a logical way from a lithographer's viewpoint. A concrete assessment is provided of how DSA-friendly design could potentially reduce the number of via masks for a place-and-routed N7 logic pattern.
NASA Astrophysics Data System (ADS)
Cuccoli, Fabrizio; Facheris, Luca; Vaselli, Orlando
2006-09-01
A simple method is presented for estimating the gas emission flux from spot-source fields, based on IR laser measurements and atmospheric diffusion models. The method relies on a proper arrangement of the optical links around the emission area, over which the determination of the gas integral concentration is required. The first objective of such measurements is to tune the parameters of a basic diffusion model; the second is to estimate the gas emission flux by applying the tuned model to experimental measurements. After discussing the proposed model and method, experimental data obtained from some CO2-rich natural discharges in Tuscany (Central Italy) are presented.
SU-E-T-270: Diffusion Synthetic Acceleration for Linear Boltzmann Transport Equation
Chen, G; Hong, X; Gao, H
2015-06-15
Purpose: The linear Boltzmann transport equation (LBTE) is as accurate as the Monte Carlo method (MC) for dose calculation in photon/particle therapy (LBTE is a deterministic, Eulerian formulation, while MC is a statistical, Lagrangian description). An advantage of LBTE is that numerous acceleration techniques can be utilized. This work explores the acceleration of LBTE via diffusion synthetic acceleration (DSA). Methods: For simplicity, the two-dimensional, steady-state, within-group LBTE is considered, with two angular dimensions and two spatial dimensions. The discrete ordinates method is developed for solving this integro-differential equation. The angular variables are discretized using a level-symmetric quadrature set on the unit sphere. The spatial variables are discretized on a structured grid based on the diamond scheme. The source iteration method (SI) is used to solve the discretized system. Since SI converges slowly in the optically thick and highly scattering regime, DSA is developed to accelerate it. The motivation for DSA is that the diffusion equation (DE) is a good approximation of LBTE in this regime, and DE is much cheaper to solve than LBTE since it involves only the spatial variables. Thus, each DSA iteration adds to the SI step a computationally negligible DE step: first solve DE with the SI residual as the source term, and then correct the SI solution with the DE solution. Results: DSA was benchmarked and compared with SI. The difference between the two methods was within 0.12%, which verifies the accuracy of DSA, while DSA demonstrated a great advantage in speed, e.g., reducing the iteration count to 6% and 4%, respectively, for cases with scattering-to-absorption ratios of 100 and 1,000 that commonly occur in clinical dose calculation. Conclusion: DSA has been developed as one of many possible means of accelerating the numerical solution of LBTE for dose calculation. The authors were partially supported by the NSFC
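The SI-plus-DSA scheme can be sketched in one spatial dimension. The following is a minimal illustration, not the authors' 2D implementation: a within-group 1D slab S_N solver with diamond-difference sweeps, where the optional DSA step solves a finite-difference diffusion equation for the iteration residual. The Dirichlet boundary condition on the correction and all problem parameters (slab size, cross sections, quadrature order) are simplifying choices for the demo:

```python
import numpy as np

def solve_transport(n=80, L=20.0, sig_t=1.0, c=0.99, q=1.0,
                    n_ang=8, tol=1e-6, max_iter=10000, use_dsa=False):
    """Within-group 1D slab S_N solver: source iteration (SI), optionally
    accelerated by diffusion synthetic acceleration (DSA).
    Diamond-difference sweeps, vacuum boundaries, isotropic scattering."""
    h = L / n
    sig_s = c * sig_t                       # scattering cross section
    sig_a = sig_t - sig_s                   # absorption cross section
    mu, w = np.polynomial.legendre.leggauss(n_ang)   # weights sum to 2
    phi = np.zeros(n)

    # Diffusion operator for the DSA correction (Dirichlet f = 0 at edges)
    D = 1.0 / (3.0 * sig_t)
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = 2.0 * D / h**2 + sig_a
        if i > 0:
            A[i, i - 1] = -D / h**2
        if i < n - 1:
            A[i, i + 1] = -D / h**2

    for it in range(1, max_iter + 1):
        S = 0.5 * (sig_s * phi + q)         # isotropic emission density
        phi_half = np.zeros(n)
        for mu_m, w_m in zip(mu, w):        # transport sweep over all angles
            a = abs(mu_m) / h
            psi_in = 0.0                    # vacuum boundary
            cells = range(n) if mu_m > 0 else range(n - 1, -1, -1)
            for i in cells:                 # diamond-difference update
                psi_out = (S[i] + (a - 0.5 * sig_t) * psi_in) / (a + 0.5 * sig_t)
                phi_half[i] += w_m * 0.5 * (psi_in + psi_out)
                psi_in = psi_out
        if use_dsa:
            # Low-order step: diffusion solve on the iteration residual
            f = np.linalg.solve(A, sig_s * (phi_half - phi))
            phi_new = phi_half + f
        else:
            phi_new = phi_half
        if np.max(np.abs(phi_new - phi)) < tol * np.max(np.abs(phi_new)):
            return phi_new, it
        phi = phi_new
    return phi, max_iter

phi_si, it_si = solve_transport(use_dsa=False)
phi_dsa, it_dsa = solve_transport(use_dsa=True)
```

With a scattering ratio c = 0.99, plain SI needs on the order of a thousand iterations (its error contracts by roughly c per sweep), while the DSA-corrected iteration converges in a few tens, which is the behavior the abstract reports at larger scale.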
A variational perspective on accelerated methods in optimization.
Wibisono, Andre; Wilson, Ashia C; Jordan, Michael I
2016-11-22
Accelerated gradient methods play a central role in optimization, achieving optimal rates in many settings. Although many generalizations and extensions of Nesterov's original acceleration method have been proposed, it is not yet clear what is the natural scope of the acceleration concept. In this paper, we study accelerated methods from a continuous-time perspective. We show that there is a Lagrangian functional that we call the Bregman Lagrangian, which generates a large class of accelerated methods in continuous time, including (but not limited to) accelerated gradient descent, its non-Euclidean extension, and accelerated higher-order gradient methods. We show that the continuous-time limit of all of these methods corresponds to traveling the same curve in spacetime at different speeds. From this perspective, Nesterov's technique and many of its generalizations can be viewed as a systematic way to go from the continuous-time curves generated by the Bregman Lagrangian to a family of discrete-time accelerated algorithms.
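As a concrete discrete-time instance of the acceleration studied above, the following minimal sketch compares plain gradient descent with Nesterov-style momentum (the constant-momentum variant for strongly convex quadratics); this is an illustration of the acceleration phenomenon, not the Bregman-Lagrangian machinery itself, and all names and parameter choices are ours.

```python
import numpy as np

def gradient_descent(grad, x0, eta, steps):
    x = x0.copy()
    for _ in range(steps):
        x -= eta * grad(x)
    return x

def nesterov(grad, x0, eta, beta, steps):
    # Gradient step at the look-ahead point, then momentum extrapolation.
    x, y = x0.copy(), x0.copy()
    for _ in range(steps):
        x_next = y - eta * grad(y)
        y = x_next + beta * (x_next - x)
        x = x_next
    return x

# Ill-conditioned quadratic f(x) = 0.5 * x^T diag(a) x, condition number 1e4.
a = np.array([1.0, 1.0e4])
f = lambda x: 0.5 * np.dot(a, x * x)
grad = lambda x: a * x
x0 = np.ones(2)
eta = 1.0 / a.max()                                  # step size 1/L
kappa = a.max() / a.min()
beta = (np.sqrt(kappa) - 1.0) / (np.sqrt(kappa) + 1.0)

x_gd = gradient_descent(grad, x0, eta, 2000)
x_ag = nesterov(grad, x0, eta, beta, 2000)
```

After 2000 iterations plain gradient descent has barely reduced the poorly conditioned coordinate (contraction 1 - 1/kappa per step), while the momentum iterate converges at roughly the accelerated 1 - 1/sqrt(kappa) rate.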
NASA Astrophysics Data System (ADS)
Zhu, Ying; Prummer, Simone; Chen, Terrence; Ostermeier, Martin; Comaniciu, Dorin
2009-02-01
Digital subtraction angiography (DSA) is a well-known technique for improving the visibility and perceptibility of blood vessels in the human body. Coronary DSA extends conventional DSA to dynamic 2D fluoroscopic sequences of coronary arteries which are subject to respiratory and cardiac motion. Effective motion compensation is the main challenge for coronary DSA. Without a proper treatment, both breathing and heart motion can cause unpleasant artifacts in coronary subtraction images, jeopardizing the clinical value of coronary DSA. In this paper, we present an effective method to separate the dynamic layer of background structures from a fluoroscopic sequence of the heart, leaving a clean layer of moving coronary arteries. Our method combines the techniques of learning-based vessel detection and robust motion estimation to achieve reliable motion compensation for coronary sequences. Encouraging results have been achieved on clinically acquired coronary sequences, where the proposed method considerably improves the visibility and perceptibility of coronary arteries undergoing breathing and cardiac movement. Perceptibility improvement is significant especially for very thin vessels. The potential clinical benefit is expected in the context of obese patients and deep angulation, as well as in the reduction of contrast dose in normal size patients.
Azmy, Y.Y.
1999-06-10
The author proposes preconditioning as a viable acceleration scheme for the inner iterations of transport calculations in slab geometry. In particular he develops Adjacent-Cell Preconditioners (AP) that have the same coupling stencil as cell-centered diffusion schemes. For lowest order methods, e.g., Diamond Difference, Step, and 0-order Nodal Integral Method (ONIM), cast in a Weighted Diamond Difference (WDD) form, he derives AP for thick (KAP) and thin (NAP) cells that for model problems are unconditionally stable and efficient. For the First-Order Nodal Integral Method (INIM) he derives a NAP that possesses similarly excellent spectral properties for model problems. The two most attractive features of the new technique are: (1) its cell-centered coupling stencil, which makes it more adequate for extension to multidimensional, higher order situations than the standard edge-centered or point-centered Diffusion Synthetic Acceleration (DSA) methods; and (2) its decreasing spectral radius with increasing cell thickness, to the extent that immediate pointwise convergence, i.e., in one iteration, can be achieved for problems with sufficiently thick cells. He implemented these methods, augmented with appropriate boundary conditions and mixing formulas for material heterogeneities, in the test code APID that he uses to successfully verify the analytical spectral properties for homogeneous problems. Furthermore, he conducts numerical tests to demonstrate the robustness of the KAP and NAP in the presence of sharp mesh or material discontinuities. He shows that the AP for WDD is highly resilient to such discontinuities, but for INIM a few cases occur in which the scheme does not converge; however, when it converges, AP greatly reduces the number of iterations required to achieve convergence.
NASA Astrophysics Data System (ADS)
Vermandel, Maximilien; Kulik, Carine; Leclerc, Xavier; Rousseau, Jean; Vasseur, Christian
2002-05-01
This study proposes a new method for matching vascular imaging modalities without the use of an external frame or external landmarks. We first perform a 3D reconstruction of a piece of the cerebral vascular tree using Magnetic Resonance Angiography (MRA). Then, this structure is projected onto the Digital Subtraction Angiography (DSA) images until its best position and orientation are found. As the 3D structure is known in the MRA reference frame, this method enables us to match information from DSA and MRA. Complete matching between the MRA set and all the DSA images, in many incidences, has been obtained. For the DSA images, the epipolar constraint has been verified between all the incidences. This approach brings a very original method to medical imaging, making visualization and quantification of vascular information easier and more efficient.
Determination of higher order accelerations by a functional method
NASA Astrophysics Data System (ADS)
Tudosie, C.
A functional method is developed for the simultaneous determination of all the linear accelerations which appear in the differential equation of motion of a material system. The method introduces variable angular accelerations of different orders, called direct connection functions, which allow passing from a linear acceleration of a certain order to that of a higher order. Feedback functions are also introduced, which allow passing from a linear acceleration of a certain order to those of lower orders. This method is applicable to accelerations which occur when passenger trains move rapidly around a curve, and to the vertical vibrations of trucks and tractors.
PARTICLE ACCELERATOR AND METHOD OF CONTROLLING THE TEMPERATURE THEREOF
Neal, R.B.; Gallagher, W.J.
1960-10-11
A method and means are offered for controlling the temperature of a particle accelerator, and more particularly for maintaining a constant and uniform temperature throughout the accelerator. The novel feature of the invention resides in the provision of two individual heating applications to the accelerator structure. The first heating application is substantially a duplication of the accelerator heat created by energization; it is employed only when the accelerator is de-energized, thereby keeping the accelerator temperature constant in time whether the accelerator is energized or not. The second heating application is designed to add to either the first application or to the energization heat in such a manner as to create the same uniform temperature throughout all portions of the accelerator.
Grisham, Larry R
2013-12-17
The present invention provides systems and methods for the magnetic insulation of accelerator electrodes in electrostatic accelerators. Advantageously, the systems and methods of the present invention improve the practically obtainable performance of these electrostatic accelerators by addressing, among other things, voltage holding problems and conditioning issues. The problems and issues are addressed by flowing electric currents along these accelerator electrodes to produce magnetic fields that envelope the accelerator electrodes and their support structures, so as to prevent very low energy electrons from leaving the surfaces of the accelerator electrodes and subsequently picking up energy from the surrounding electric field. In various applications, this magnetic insulation must only produce modest gains in voltage holding capability to represent a significant achievement.
Wang, Mao Qiang; Duan, Feng; Yuan, Kai; Zhang, Guo Dong; Yan, Jieyu; Wang, Yan
2017-01-01
Purpose To describe findings in prostatic arteries (PAs) at digital subtraction angiography (DSA) and cone-beam computed tomography (CT) that allow identification of benign prostatic hyperplasia and to determine the value added with the use of cone-beam CT. Materials and Methods This retrospective single-institution study was approved by the institutional review board, and the requirement for written informed consent was waived. From February 2009 to December 2014, a total of 148 patients (mean age ± standard deviation, 70.5 years ± 14.5) underwent DSA of the internal iliac arteries and cone-beam CT with a flat-detector angiographic system before they underwent prostate artery embolization. Both the DSA and cone-beam CT images were evaluated by two interventional radiologists to determine the number of independent PAs and their origins and anastomoses with adjacent arteries. The exact McNemar test was used to compare the detection rate of the PAs and the anastomoses with DSA and with cone-beam CT. Results The PA anatomy was evaluated successfully by means of cone-beam CT in conjunction with DSA in all patients. Of the 296 pelvic sides, 274 (92.6%) had only one PA. The most frequent PA origin was the common gluteal-pudendal trunk with the superior vesical artery in 118 (37.1%), followed by the anterior division of the internal iliac artery in 99 (31.1%), and the internal pudendal artery in 77 (24.2%) pelvic sides. In 67 (22.6%) pelvic sides, anastomoses to adjacent arteries were documented. The numbers of PA origins and anastomoses, respectively, that could be identified were significantly higher with cone-beam CT (301 of 318 [94.7%] and 65 of 67 [97.0%]) than with DSA (237 [74.5%] and 39 [58.2%], P < .05). Cone-beam CT provided essential information that was not available with DSA in 90 of 148 (60.8%) patients. Conclusion Cone-beam CT is a useful adjunctive technique to DSA for identification of the PA anatomy and provides information that helps with treatment planning.
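The exact McNemar test used above compares paired detection rates through the discordant pairs alone. A minimal sketch of the two-sided exact form follows; the counts in the usage line are illustrative, not the study's raw tables.

```python
from math import comb

def mcnemar_exact(b, c):
    """Two-sided exact McNemar test.
    b, c: counts of discordant pairs (items detected by one method only)."""
    n = b + c
    k = min(b, c)
    # Under H0 the discordant pairs split as Bin(n, 1/2); double the smaller tail.
    p = 2.0 * sum(comb(n, i) for i in range(k + 1)) * 0.5 ** n
    return min(p, 1.0)

# Illustrative counts: 28 origins seen only with cone-beam CT vs 2 only with DSA.
p = mcnemar_exact(28, 2)
```

A p-value far below .05, as here, indicates a significant difference in detection rates despite the paired design.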
Image registration for DSA quality enhancement.
Buzug, T M; Weese, J
1998-01-01
A generalized framework for histogram-based similarity measures is presented and applied to the image-enhancement task in digital subtraction angiography (DSA). The class of differentiable, strictly convex weighting functions is identified as suitable weightings of histograms for measuring the degree of clustering that goes along with registration. With respect to computation time, the energy similarity measure is the function of choice for the registration of mask and contrast image prior to subtraction. The robustness of the energy measure is studied for geometrical image distortions like rotation and scaling. Additionally, it is investigated how the histogram binning and inhomogeneous motion inside the templates influence the quality of the similarity measure. Finally, the registration success for the automated procedure is compared with the manually shift-corrected image pair of the head.
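The energy similarity measure described above amounts to summing the squared entries of the joint gray-value histogram of mask and contrast image: correct registration concentrates the joint histogram into few bins and maximizes the measure. A toy sketch follows (integer shifts with wrap-around via np.roll, synthetic images); real DSA registration operates on templates with subpixel shifts, so treat the names and search strategy here as ours.

```python
import numpy as np

def energy_measure(img_a, img_b, bins=8):
    """Sum of squared joint-histogram probabilities (the "energy" measure)."""
    hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(),
                                bins=bins, range=[[0, bins], [0, bins]])
    p = hist / hist.sum()
    return np.sum(p * p)

def register(mask, contrast, search=4):
    """Exhaustive search for the integer shift maximizing the energy."""
    return max(((dy, dx) for dy in range(-search, search + 1)
                         for dx in range(-search, search + 1)),
               key=lambda s: energy_measure(np.roll(mask, s, axis=(0, 1)),
                                            contrast))

rng = np.random.default_rng(0)
mask = rng.integers(0, 8, size=(32, 32)).astype(float)
contrast = np.roll(mask, (2, 3), axis=(0, 1))     # known misalignment
```

At the true shift the joint histogram collapses onto a one-to-one gray-value mapping, so the energy peaks there and the search recovers the misalignment.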
Iterative acceleration methods for Monte Carlo and deterministic criticality calculations
Urbatsch, T.J.
1995-11-01
If you have ever given up on a nuclear criticality calculation and terminated it because it took so long to converge, you might find this thesis of interest. The author develops three methods for improving the fission source convergence in nuclear criticality calculations for physical systems with high dominance ratios for which convergence is slow. The Fission Matrix Acceleration Method and the Fission Diffusion Synthetic Acceleration (FDSA) Method are acceleration methods that speed fission source convergence for both Monte Carlo and deterministic methods. The third method is a hybrid Monte Carlo method that also converges for difficult problems where the unaccelerated Monte Carlo method fails. The author tested the feasibility of all three methods in a test bed consisting of idealized problems. He has successfully accelerated fission source convergence in both deterministic and Monte Carlo criticality calculations. By filtering statistical noise, he has incorporated deterministic attributes into the Monte Carlo calculations in order to speed their source convergence. He has used both the fission matrix and a diffusion approximation to perform unbiased accelerations. The Fission Matrix Acceleration method has been implemented in the production code MCNP and successfully applied to a real problem. When the unaccelerated calculations are unable to converge to the correct solution, they cannot be accelerated in an unbiased fashion. A Hybrid Monte Carlo method weds Monte Carlo and a modified diffusion calculation to overcome these deficiencies. The Hybrid method additionally possesses reduced statistical errors.
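The idea behind fission matrix acceleration can be seen on a toy problem: when the dominance ratio (ratio of the second eigenvalue to the first) is close to 1, plain power iteration on the fission source converges slowly, whereas directly solving the eigenproblem of an (estimated) fission matrix yields the converged source at once. The sketch below uses a synthetic symmetric matrix with a prescribed dominance ratio, not MCNP's fission matrix estimator.

```python
import numpy as np

def power_iteration(H, tol=1e-8, max_it=20000):
    """Plain fission-source (power) iteration; returns source and iteration count."""
    s = np.ones(H.shape[0]) / H.shape[0]
    for it in range(1, max_it + 1):
        s_new = H @ s
        s_new /= np.linalg.norm(s_new)
        if np.linalg.norm(s_new - s) < tol:
            return s_new, it
        s = s_new
    return s, max_it

# Synthetic symmetric "fission matrix" with dominance ratio 0.995.
rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.normal(size=(20, 20)))
eigs = np.linspace(0.1, 0.99, 20)
eigs[-1], eigs[-2] = 1.0, 0.995
H = Q @ np.diag(eigs) @ Q.T

s_slow, n_it = power_iteration(H)
# "Acceleration": solve the eigenproblem of the estimated matrix directly.
w, V = np.linalg.eigh(H)
s_fast = V[:, np.argmax(w)]
```

The unaccelerated iteration needs thousands of cycles at this dominance ratio, while the eigen-solve of the estimated matrix delivers the same source immediately.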
EUV patterned templates with grapho-epitaxy DSA at the N5/N7 logic nodes
NASA Astrophysics Data System (ADS)
Gronheid, Roel; Boeckx, Carolien; Doise, Jan; Bekaert, Joost; Karageorgos, Ioannis; Ruckaert, Julien; Chan, Boon Teik; Lin, Chenxi; Zou, Yi
2016-03-01
In this paper, approaches are explored for combining EUV with DSA for via-layer patterning at the N7 and N5 logic nodes. Simulations indicate an opportunity for significant LCDU improvement at the N7 node without impacting the required exposure dose. A templated DSA process based on NXE:3300-exposed EUV pre-patterns has been developed and supports the simulations. The main point of improvement concerns pattern placement accuracy with this process. It is described how metrology contributes to the measured placement error numbers; further optimization of metrology methods for determining local placement errors is required. Via-layer patterning at the N5 logic node is also considered. On top of LCDU improvement, the combination of EUV with DSA allows a single-mask solution to be maintained at this technology node, owing to the ability of the DSA process to repair merging vias. It is experimentally shown how shaping of templates for such via multiplication helps in placement accuracy control. Peanut-shaped pre-patterns, which can be printed using EUV lithography, give significantly better placement accuracy control than elliptical pre-patterns.
Wave propagation in turbulent media: use of convergence acceleration methods.
Baram, A; Tsadka, S; Azar, Z; Tur, M
1988-06-01
We propose the use of convergence acceleration methods for the evaluation of integral expressions of an oscillatory nature, often encountered in the study of optical wave propagation in the turbulent atmosphere. These techniques offer substantial savings in computation time with appreciable gain in accuracy. As an example, we apply the Levin u acceleration scheme to the problem of remote sensing of transversal wind profiles.
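Convergence acceleration of slowly convergent oscillatory sums can be illustrated with Aitken's Delta-squared process, a simpler relative of the Levin u transform used in the paper, applied here to the alternating harmonic series whose limit is ln 2. The fixed four-pass iteration is our choice for the sketch.

```python
import math

def aitken(s):
    """One pass of Aitken's Delta-squared transform over a list of partial sums."""
    out = []
    for i in range(len(s) - 2):
        d1, d2 = s[i + 1] - s[i], s[i + 2] - s[i + 1]
        out.append(s[i + 2] - d2 * d2 / (d2 - d1))
    return out

# Partial sums of the alternating harmonic series: 1 - 1/2 + 1/3 - ... -> ln 2.
s, total = [], 0.0
for n in range(1, 16):
    total += (-1) ** (n + 1) / n
    s.append(total)

acc = s
for _ in range(4):          # iterate the transform a few times
    acc = aitken(acc)
best = acc[-1]
```

Fifteen raw terms leave an error of a few percent; four passes of the transform push it below a millionth of that, the kind of gain in accuracy per unit computation the abstract refers to.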
Advanced CD-SEM metrology for pattern roughness and local placement of lamellar DSA
NASA Astrophysics Data System (ADS)
Kato, Takeshi; Sugiyama, Akiyuki; Ueda, Kazuhiro; Yoshida, Hiroshi; Miyazaki, Shinji; Tsutsumi, Tomohiko; Kim, JiHoon; Cao, Yi; Lin, Guanyang
2014-04-01
Directed self-assembly (DSA) applying chemical epitaxy is one of the promising lithographic solutions for next-generation semiconductor device manufacturing. We introduced Fingerprint Edge Roughness (FER) as an index to evaluate the edge roughness of non-guided lamellar fingerprint patterns, and found its correlation with the Line Edge Roughness (LER) of the lines assembled on the chemical guiding patterns. In this work, we have evaluated both FER and LER at each process step of the LiNe DSA flow, utilizing PS-b-PMMA block copolymers (BCP) assembled on chemical template wafers fabricated with a Focus Exposure Matrix (FEM). We found the following. (1) Line widths and space distances of the DSA patterns differ slightly from each other depending on their relative position against the chemical guide patterns; an appropriate condition exists at which all lines have the same dimensions, but that condition is not always the same for the spaces. (2) LER and LWR (Line Width Roughness) of DSA patterns depend on neither the width nor the LER of the guide patterns. (3) LWR of DSA patterns is proportional to the width roughness of the fingerprint pattern. (4) FER is influenced not only by the BCP formulation, but also by its film thickness. We introduced new methods to optimize the BCP formulation and process conditions by using FER measurement and local CD variation measurement. Publisher's Note: This paper, originally published on 2 April 2014, was replaced with a corrected/revised version on 14 May 2014. If you downloaded the original PDF but are unable to access the revision, please contact SPIE Digital Library Customer Service for assistance.
Accelerated Test Method for Corrosion Protective Coatings Project
NASA Technical Reports Server (NTRS)
Falker, John; Zeitlin, Nancy; Calle, Luz
2015-01-01
This project seeks to develop a new accelerated corrosion test method that predicts the long-term corrosion protection performance of spaceport structure coatings as accurately and reliably as current long-term atmospheric exposure tests. This new accelerated test method will shorten the time needed to evaluate the corrosion protection performance of coatings for NASA's critical ground support structures. Lifetime prediction for spaceport structure coatings has a 5-year qualification cycle using atmospheric exposure. Current accelerated corrosion tests often provide false positives and negatives for coating performance, do not correlate to atmospheric corrosion exposure results, and do not correlate with atmospheric exposure timescales for lifetime prediction.
NASA Astrophysics Data System (ADS)
Peters, Andrew J.; Lawson, Richard A.; Nation, Benjamin D.; Ludovice, Peter J.; Henderson, Clifford L.
2014-03-01
Directed self-assembly (DSA) of block copolymers (BCPs) is a promising method for producing the sub-20nm features required for future semiconductor device scaling, but many questions still surround the issue of defect levels in DSA processes. Knowledge of the free energy associated with a defect is critical to estimating the limiting equilibrium defect density that may be achievable in such a process. In this work, a coarse grained molecular dynamics (MD) model is used to study the free energy of a dislocation pair defect via thermodynamic integration. MD models with realistic potentials allow for more accurate simulations of the inherent polymer behavior without the need to guess modes of molecular movement and without oversimplifying atomic interactions. The free energy of such a defect as a function of the Flory-Huggins parameter (χ) and the total degree of polymerization (N) for the block copolymer is also calculated. It is found that high pitch multiplying underlayers do not show significant decreases in defect free energy relative to a simple pitch doubling underlayer. It is also found that χN is not the best descriptor for correlating defect free energy since simultaneous variation in chain length (N) and χ value while maintaining a constant χN product produces significantly different defect free energies. Instead, the defect free energy seems to be directly correlated to the χ value of the diblock copolymer used. This means that as higher χ systems are produced and utilized for DSA, the limiting defect level will likely decrease even though DSA processes may still operate at similar χN values to achieve ever smaller feature sizes.
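Thermodynamic integration of the kind used for these defect free energies estimates a free energy difference as the integral over a mixing parameter lambda of the ensemble average of dU/dlambda. A toy check on a single harmonic degree of freedom follows, where exact Gaussian sampling stands in for the MD model and the answer is known analytically (Delta F = 0.5 kT ln(k1/k0)); the setup and names are ours, not the paper's coarse-grained model.

```python
import numpy as np

def thermo_integration(k0=1.0, k1=4.0, kT=1.0, n_lambda=11, n_samp=20000, seed=3):
    """Estimate Delta F between harmonic potentials U = 0.5*k*x^2 by
    integrating <dU/dlambda> over the mixing parameter lambda."""
    rng = np.random.default_rng(seed)
    lams = np.linspace(0.0, 1.0, n_lambda)
    means = []
    for lam in lams:
        k = (1.0 - lam) * k0 + lam * k1
        x = rng.normal(0.0, np.sqrt(kT / k), n_samp)    # exact Boltzmann sampling
        means.append(np.mean(0.5 * (k1 - k0) * x * x))  # dU/dlambda = 0.5*(k1-k0)*x^2
    means = np.array(means)
    # Trapezoidal quadrature over lambda.
    return float(np.sum(0.5 * (means[1:] + means[:-1]) * np.diff(lams)))

dF = thermo_integration()
exact = 0.5 * np.log(4.0)       # 0.5 * kT * ln(k1/k0)
```

The estimate agrees with the analytic value to within the statistical and quadrature error, which is the consistency check one also applies before trusting the method on a defect free energy.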
Accelerated panel methods using the fast multipole method
NASA Technical Reports Server (NTRS)
Leathrum, James F., Jr.
1994-01-01
potential, and the doublet potential is related to the z-component of the source velocity, all values can be derived from the same expansion by taking a series of partial derivatives. This requires more expansion terms to be kept since terms are lost in the process of taking partial derivatives. Thus to maintain accuracy for the doublet computation, more terms are required than if just evaluating for sources. The resulting Fast Multipole code should then parallelize better than classical panel methods due to the locality of data dependencies found in the Fast Multipole Method. Theoretically the parallelized code should execute in O(log N) time with O(N) processors, though this is not practical. Ongoing work includes implementing the parallel accelerated panel method, including methods to improve the load balancing of the problem by taking advantage of the known geometry of panels, and to incorporate sensitivity analysis into the algorithm.
Method Accelerates Training Of Some Neural Networks
NASA Technical Reports Server (NTRS)
Shelton, Robert O.
1992-01-01
Three-layer networks are trained faster provided two conditions are satisfied: the numbers of neurons in the layers are such that the majority of the work is done in the synaptic connections between the input and hidden layers, and the number of neurons in the input layer is at least as great as the number of training pairs of input and output vectors. Based on a modified version of the back-propagation method.
Miniature plasma accelerating detonator and method of detonating insensitive materials
Bickes, Jr., Robert W.; Kopczewski, Michael R.; Schwarz, Alfred C.
1986-01-01
The invention is a detonator for use with high explosives. The detonator comprises a pair of parallel rail electrodes connected to a power supply. By shorting the electrodes at one end, a plasma is generated and accelerated toward the other end to impact against explosives. A projectile can be arranged between the rails to be accelerated by the plasma. An alternative arrangement is to a coaxial electrode construction. The invention also relates to a method of detonating explosives.
Miniature plasma accelerating detonator and method of detonating insensitive materials
Bickes, R.W. Jr.; Kopczewski, M.R.; Schwarz, A.C.
1985-01-04
The invention is a detonator for use with high explosives. The detonator comprises a pair of parallel rail electrodes connected to a power supply. By shorting the electrodes at one end, a plasma is generated and accelerated toward the other end to impact against explosives. A projectile can be arranged between the rails to be accelerated by the plasma. An alternative arrangement is to a coaxial electrode construction. The invention also relates to a method of detonating explosives. 3 figs.
Process highlights to enhance DSA contact patterning performances
NASA Astrophysics Data System (ADS)
Gharbi, A.; Tiron, R.; Argoud, M.; Chamiot-Maitral, G.; Fouquet, A.; Lapeyre, C.; Pimenta Barros, P.; Sarrazin, A.; Servin, I.; Delachat, F.; Bos, S.; Bérard-Bergery, S.; Hazart, J.; Chevalier, X.; Nicolet, C.; Navarro, C.; Cayrefourcq, I.; Bouanani, S.; Monget, C.
2016-03-01
In this paper, we focus on the directed self-assembly (DSA) application for contact hole (CH) patterning using polystyrene-b-poly(methyl methacrylate) (PS-b-PMMA) block copolymers (BCPs). By employing the DSA planarization process, we highlight the DSA advantages for CH shrink, repair, and multiplication, which are much needed to push forward the limits of currently used lithography. Meanwhile, we overcome the issue of pattern-density-related defects that are encountered with the commonly used graphoepitaxy process flow. Our study also aims to evaluate DSA performance as a function of material properties and process conditions by monitoring the main key manufacturing process parameters: CD uniformity (CDU), placement error (PE), and defectivity (Hole Open Yield = HOY). Concerning process, it is shown that control of the surface affinity and optimization of the self-assembly annealing conditions significantly enhance CDU and PE. Regarding material properties, we show that the best BCP composition for CH patterning is a 70/30 PS/PMMA total weight ratio. Moreover, increasing the PS homopolymer content from 0.2% to 1% is found to have no impact on DSA performance. Using a C35 BCP (cylinder-forming BCP of natural period L0 = 35 nm), high DSA performance is achieved: CDU-3σ = 1.2 nm, PE-3σ = 1.2 nm, and HOY = 100%. The stability of the DSA process is also demonstrated through process follow-up on both patterned and unpatterned surfaces over several weeks. Finally, simulation results using a phase-field model based on the Ohta-Kawasaki energy functional are presented and discussed with regard to the experiments.
Method for phosphate-accelerated bioremediation
Looney, Brian B.; Lombard, Kenneth H.; Hazen, Terry C.; Pfiffner, Susan M.; Phelps, Tommy J.; Borthen, James W.
1996-01-01
An apparatus and method for supplying a vapor-phase nutrient to contaminated soil for in situ bioremediation. The apparatus includes a housing adapted for containing a quantity of the liquid nutrient, a conduit in fluid communication with the interior of the housing, means for causing a gas to flow through the conduit, and means for contacting the gas with the liquid so that a portion thereof evaporates and mixes with the gas. The mixture of gas and nutrient vapor is delivered to the contaminated site via a system of injection and extraction wells configured to the site. The mixture has a partial pressure of vaporized nutrient that is no greater than the vapor pressure of the liquid. If desired, the nutrient and/or the gas may be heated to increase the vapor pressure and the nutrient concentration of the mixture. Preferably, the nutrient is a volatile, substantially nontoxic and nonflammable organic phosphate that is a liquid at environmental temperatures, such as triethyl phosphate or tributyl phosphate.
Nonlinear Acceleration Methods for Even-Parity Neutron Transport
W. J. Martin; C. R. E. De Oliveira; H. Park
2010-05-01
Convergence acceleration methods for even-parity transport were developed that have the potential to speed up transport calculations and provide a natural avenue for an implicitly coupled multiphysics code. An investigation was performed into the acceleration properties of the introduction of a nonlinear quasi-diffusion-like tensor in linear and nonlinear solution schemes. Using the tensor reduced matrix as a preconditioner for the conjugate gradients method proves highly efficient and effective. The results for the linear and nonlinear case serve as the basis for further research into the application in a full three-dimensional spherical-harmonics even-parity transport code. Once moved into the nonlinear solution scheme, the implicit coupling of the convergence accelerated transport method into codes for other physics can be done seamlessly, providing an efficient, fully implicitly coupled multiphysics code with high order transport.
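Using a reduced matrix as a preconditioner for conjugate gradients, as above, follows the standard preconditioned-CG template. A generic sketch is given below, with a Jacobi (diagonal) preconditioner standing in for the quasi-diffusion tensor operator; the test matrix and names are ours.

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, max_it=1000):
    """Preconditioned conjugate gradients for SPD A.
    M_inv applies the preconditioner inverse to a residual vector."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for it in range(1, max_it + 1):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x, it
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, max_it

# SPD test matrix: 1-D Laplacian plus a widely varying diagonal,
# mimicking an ill-conditioned discretized operator.
n = 200
rng = np.random.default_rng(0)
d = rng.uniform(1.0, 1.0e4, n)
A = (np.diag(d + 2.0) + np.diag(-np.ones(n - 1), 1)
     + np.diag(-np.ones(n - 1), -1))
b = np.ones(n)

x_plain, n_plain = pcg(A, b, lambda r: r)                  # unpreconditioned CG
x_prec, n_prec = pcg(A, b, lambda r: r / np.diag(A))       # Jacobi preconditioner
```

Even this crude diagonal preconditioner clusters the spectrum near 1 and cuts the iteration count by an order of magnitude; a preconditioner that captures the diffusive part of the operator, as in the work above, does correspondingly better.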
NASA Astrophysics Data System (ADS)
Morita, Hiroshi; Norizoe, Yuki
2015-03-01
Recently, the directed self-assembly (DSA) method has attracted attention as a next-generation lithography technique. We performed DPD simulations to analyze the self-assembly process of block copolymers in DSA using the OCTA system (for details, see http://octa.jp). Using DPD simulation, we can obtain the phase-separated structures, composed of block copolymer chains, at each moment. Because these structures are built from polymer chains, chain-level analyses can be carried out on them. In this paper, we study the dynamics of chain-end particles in the defect annihilation process in order to understand the dynamics of block copolymer self-assembly in DSA. Our analysis shows that the end particles move ahead of the change in the domain structure during defect annihilation.
Fluctuation Flooding Method (FFM) for accelerating conformational transitions of proteins
NASA Astrophysics Data System (ADS)
Harada, Ryuhei; Takano, Yu; Shigeta, Yasuteru
2014-03-01
A powerful conformational sampling method for accelerating structural transitions of proteins, the "Fluctuation Flooding Method (FFM)," is proposed. In FFM, cycles of the following steps enhance the transitions: (i) extraction of largely fluctuating snapshots along anisotropic modes obtained from trajectories of multiple independent molecular dynamics (MD) simulations, and (ii) conformational re-sampling of the snapshots via regeneration of initial velocities when restarting the MD simulations. In an application to bacteriophage T4 lysozyme, FFM successfully accelerated the open-closed transition within 6 ns of simulation starting solely from the open state, whereas a 1-μs canonical MD simulation failed to sample such a rare event.
Ultrasonic agitation method for accelerating batch leaching tests
Caldwell, R.J.; Stegemann, J.A.; Chao, C.C.
1996-12-31
A method has been developed which uses ultrasonic cavitation to accelerate batch leaching tests. Batch leaching tests, in which attainment of an equilibrium between the solid sample and liquid leachant is desired, usually involve particle size reduction and mixing to hasten mass transfer of soluble compounds. In the study discussed here, mixing in the form of ultrasonic cavitation was used to supply an intense level of agitation. Breaking the liquid boundary layer surrounding individual waste particles ensured a maximum concentration gradient between the solid and liquid phases and accelerated attainment of steady state concentrations. Evaluation of the acceleration technique was made through comparison of leachate quality of stabilized/solidified (S/S) residue samples tested using the Wastewater Technology Centre's (WTC) equilibrium extraction (EE) and an ultrasonically agitated version of the same test method (UEE). The sample preparation, liquid-to-solid ratio, extraction fluid, etc., specified in the EE method were held constant for the EE and UEE samples, while the duration and method of agitation was altered for the UEE samples. To date, this evaluation has been made using five metal finishing residues, which were selected based on their elevated concentrations of regulated contaminants. The results of the evaluations are presented and suggestions are made as to the applicability of this accelerated test method.
Template affinity role in CH shrink by DSA planarization
NASA Astrophysics Data System (ADS)
Tiron, R.; Gharbi, A.; Pimenta Barros, P.; Bouanani, S.; Lapeyre, C.; Bos, S.; Fouquet, A.; Hazart, J.; Chevalier, X.; Argoud, M.; Chamiot-Maitral, G.; Barnola, S.; Monget, C.; Farys, V.; Berard-Bergery, S.; Perraud, L.; Navarro, C.; Nicolet, C.; Hadziioannou, G.; Fleury, G.
2015-03-01
Density multiplication and contact shrinkage of patterned templates by directed self-assembly (DSA) of block copolymers (BCP) stand out as a promising alternative to overcome the limitations of conventional lithography. The main goal of this paper is to investigate the potential of DSA to address contact and via level patterning with high resolution by performing either CD shrink or contact multiplication. Different DSA processes are benchmarked based on several success criteria, such as CD control, defectivity (missing holes), and placement control. More specifically, the methodology employed to measure DSA contact overlay and the impact of process parameters on placement error control is detailed. Using the 300mm pilot line available in LETI and Arkema's materials, our approach is based on the graphoepitaxy of PS-b-PMMA block copolymers. Our integration scheme, depicted in figure 1, is based on BCP self-assembly inside organic hard-mask guiding patterns obtained using 193i lithography. The process is monitored at different steps: the generation of guiding patterns, the directed self-assembly of block copolymers and PMMA removal, and finally the transfer of PS patterns into the metallic underlayer by plasma etching. Furthermore, several process flows are investigated by tuning different material-related parameters, such as the block copolymer intrinsic period or the interaction with the guiding pattern surface (sidewall and bottom-side affinity). The final lithographic performances are finely optimized as a function of the self-assembly process parameters, such as the film thickness and bake (temperature and time). Finally, DSA performances as a function of guiding pattern density are investigated. Thus, for the best integration approach, defect-free isolated and dense patterns for both contact shrink and multiplication (doubling and more) have been achieved on the same processed wafer. These results show that contact hole shrink and
Chloride-free set accelerated cement compositions and methods
Fry, S.E.; Totten, P.L.; Childs, J.D.; Lindsey, D.W.
1992-07-07
This patent describes a method of cementing a conduit in a well bore penetrating a subterranean formation. It comprises introducing a cement composition into the space between the conduit and the walls of the well bore, the cement composition consisting essentially of hydraulic cement, water and a set time accelerator.
5 CFR 1315.5 - Accelerated payment methods.
Code of Federal Regulations, 2014 CFR
2014-01-01
For interim payments under cost-reimbursement service contracts, agency heads may make payments … (5 CFR 1315.5, Administrative Personnel; Office of Management and Budget directives, Prompt Payment).
5 CFR 1315.5 - Accelerated payment methods.
Code of Federal Regulations, 2011 CFR
2011-01-01
For interim payments under cost-reimbursement service contracts, agency heads may make payments … (5 CFR 1315.5, Administrative Personnel; Office of Management and Budget directives, Prompt Payment).
5 CFR 1315.5 - Accelerated payment methods.
Code of Federal Regulations, 2012 CFR
2012-01-01
For interim payments under cost-reimbursement service contracts, agency heads may make payments … (5 CFR 1315.5, Administrative Personnel; Office of Management and Budget directives, Prompt Payment).
5 CFR 1315.5 - Accelerated payment methods.
Code of Federal Regulations, 2010 CFR
2010-01-01
For interim payments under cost-reimbursement service contracts, agency heads may make payments … (5 CFR 1315.5, Administrative Personnel; Office of Management and Budget directives, Prompt Payment).
5 CFR 1315.5 - Accelerated payment methods.
Code of Federal Regulations, 2013 CFR
2013-01-01
For interim payments under cost-reimbursement service contracts, agency heads may make payments … (5 CFR 1315.5, Administrative Personnel; Office of Management and Budget directives, Prompt Payment).
Measurement of acceleration: a new method of monitoring neuromuscular function.
Viby-Mogensen, J; Jensen, E; Werner, M; Nielsen, H K
1988-01-01
A new method for monitoring neuromuscular function based on measurement of acceleration is presented. The rationale behind the method is Newton's second law, stating that the acceleration is directly proportional to the force. For measurement of acceleration, a piezo-electric ceramic wafer was used. When this piezo electrode was fixed to the thumb, an electrical signal proportional to the acceleration was produced whenever the thumb moved in response to nerve stimulation. The electrical signal was registered and analysed in a Myograph 2000 neuromuscular transmission monitor. In 35 patients anaesthetized with halothane, train-of-four ratios measured with the accelerometer (ACT-TOF) were compared with simultaneous mechanical train-of-four ratios (FDT-TOF). Control ACT-TOF ratios were significantly higher than control FDT-TOF ratios: 116 +/- 12 and 98 +/- 4 (mean +/- s.d.), respectively. In five patients not given any relaxant during the anaesthetic procedure (20-60 min), both responses were remarkably constant. In 30 patients given vecuronium, a close linear relationship was found during recovery between ACT-TOF and FDT-TOF ratios. It is concluded that the method fulfils the basic requirements for a simple and reliable clinical monitoring tool.
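The acceleromyography principle described above lends itself to a short illustration. The sketch below is our own toy example, not the Myograph 2000 implementation, and the numeric peak accelerations are hypothetical; it computes a train-of-four (TOF) ratio from four peak thumb accelerations, relying only on Newton's second law (at constant mass, acceleration is proportional to the evoked force):

```python
def tof_ratio(peak_accels):
    """Train-of-four ratio from four peak thumb accelerations (a is proportional to F)."""
    if len(peak_accels) != 4:
        raise ValueError("train-of-four requires exactly four twitch responses")
    return peak_accels[3] / peak_accels[0]

# Hypothetical peak accelerations (m/s^2) showing 'fade' under partial block
ratio = tof_ratio([9.8, 8.1, 6.9, 5.9])
```

A TOF ratio near 1.0 indicates no fade; lower values indicate residual neuromuscular block.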
Accelerated augmented Lagrangian method for few-view CT reconstruction
NASA Astrophysics Data System (ADS)
Wu, Junfeng; Mou, Xuanqin
2012-03-01
Recently, iterative reconstruction algorithms with total variation (TV) regularization have shown tremendous power in image reconstruction from few-view projection data, but they are much more computationally demanding. In this paper, we propose an accelerated augmented Lagrangian method (ALM) for few-view CT reconstruction with total variation regularization. Experimental phantom results demonstrate that the proposed method not only reconstructs high-quality images from few-view projection data but also converges quickly to the optimal solution.
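The augmented Lagrangian idea behind this abstract can be sketched on a 1-D toy problem. The following ADMM-style iteration is our own illustration with assumed parameters (lam, rho), not the authors' CT code; it solves min_x 0.5*||x - b||^2 + lam*||Dx||_1, the TV-regularized form that few-view CT generalizes by adding a projection operator:

```python
import numpy as np

def tv_denoise_admm(b, lam=0.1, rho=1.0, iters=200):
    n = len(b)
    D = np.diff(np.eye(n), axis=0)            # forward-difference operator
    A = np.eye(n) + rho * (D.T @ D)           # SPD matrix for the x-update
    x = b.copy()
    z = np.zeros(n - 1)
    u = np.zeros(n - 1)
    for _ in range(iters):
        x = np.linalg.solve(A, b + rho * D.T @ (z - u))        # x-update
        w = D @ x + u
        z = np.sign(w) * np.maximum(np.abs(w) - lam / rho, 0)  # soft threshold
        u += D @ x - z                                         # dual update
    return x

rng = np.random.default_rng(0)
truth = np.concatenate([np.zeros(25), np.ones(25)])   # piecewise-constant signal
noisy = truth + 0.1 * rng.standard_normal(50)
rec = tv_denoise_admm(noisy)
```

The split variable z carries the TV term, and the quadratic x-update is an easy linear solve; in the CT setting the identity in A is replaced by the normal operator of the projection matrix.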
Bilateral thalamic infarction and DSA demonstrated AOP after thrombosis
Cao, Wenjie; Dong, Qiang; Li, Linxin; Dong, Yi
2012-01-01
Bilateral paramedian thalamic stroke is a special ischemic pattern that results from occlusion of the artery of Percheron (AOP), a rare anatomic variant of the paramedian arteries. We report a case of bilateral thalamic infarctions, with a dramatic improvement after thrombolysis. DSA demonstrated recanalization of AOP with possible unreported variation. PMID:23986825
Distributed Minimal Residual (DMR) method for acceleration of iterative algorithms
NASA Technical Reports Server (NTRS)
Lee, Seungsoo; Dulikravich, George S.
1991-01-01
A new method for enhancing the convergence rate of iterative algorithms for the numerical integration of systems of partial differential equations was developed. Termed the Distributed Minimal Residual (DMR) method, it is based on general Krylov subspace methods. The DMR method differs from standard Krylov subspace methods in that the iterative acceleration factors differ from equation to equation in the system. At the same time, the DMR method can be viewed as an incomplete Newton iteration method. The DMR method was applied to the Euler equations of gas dynamics and the incompressible Navier-Stokes equations. All numerical test cases were run using either explicit four-stage Runge-Kutta or Euler implicit time integration. The formulation of the DMR method is general in nature and can be applied to explicit and implicit iterative algorithms for arbitrary systems of partial differential equations.
Reproduction of natural corrosion by accelerated laboratory testing methods
Luo, J.S.; Wronkiewicz, D.J.; Mazer, J.J.; Bates, J.K.
1996-05-01
Various laboratory corrosion tests have been developed to study the behavior of glass waste forms under conditions similar to those expected in an engineered repository. The data generated by laboratory experiments are useful for understanding corrosion mechanisms and for developing chemical models to predict the long-term behavior of glass. However, it is challenging to demonstrate that these test methods produce results that can be directly related to projecting the behavior of glass waste forms over time periods of thousands of years. One method to build confidence in the applicability of the test methods is to study the natural processes that have been taking place over very long periods in environments similar to those of the repository. In this paper, we discuss whether accelerated testing methods alter the fundamental mechanisms of glass corrosion by comparing the alteration patterns that occur in naturally altered glasses with those that occur in accelerated laboratory environments. This comparison is done by (1) describing the alteration of glasses reacted in nature over long periods of time and in accelerated laboratory environments and (2) establishing the reaction kinetics of naturally altered glass and laboratory reacted glass waste forms.
Spectral methods and sum acceleration algorithms. Final report
Boyd, J.
1995-03-01
The principal investigator pursued his investigation of numerical algorithms during the period of the grant. The attached list of publications is too lengthy to describe in detail; however, the author calls attention to the four articles on sequence acceleration and fourteen more on spectral methods, which fulfill the goals of the original proposal. He also continued his research on nonlinear waves and wrote a dozen papers on that topic as well.
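As a concrete instance of the sum-acceleration theme of this report, Aitken's Delta-squared (Shanks) transformation can be applied to the slowly converging alternating series for ln 2. This is a standard textbook example, not taken from the report itself:

```python
import math

def partial_sums(n):
    """Partial sums of ln 2 = 1 - 1/2 + 1/3 - ..."""
    s, out = 0.0, []
    for k in range(1, n + 1):
        s += (-1) ** (k + 1) / k
        out.append(s)
    return out

def aitken(s):
    """Aitken Delta-squared: s_k - (ds_k)^2 / d2s_k over consecutive triples."""
    return [s[k] - (s[k + 1] - s[k]) ** 2 / (s[k + 2] - 2 * s[k + 1] + s[k])
            for k in range(len(s) - 2)]

s = partial_sums(12)
t = aitken(s)
raw_err = abs(s[-1] - math.log(2))   # slow convergence: error ~ 1/(2n)
acc_err = abs(t[-1] - math.log(2))   # noticeably smaller after one transform
```

One pass of the transformation already beats the raw partial sums by well over an order of magnitude at n = 12; the transformation can be iterated for further gains.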
Half-range acceleration for one-dimensional transport problems
Zika, M.R.; Larsen, E.W.
1998-12-31
Researchers have devoted considerable effort to developing acceleration techniques for transport iterations in highly diffusive problems. The advantages and disadvantages of source iteration, rebalance, diffusion synthetic acceleration (DSA), transport synthetic acceleration (TSA), and projection acceleration methods are documented in the literature and will not be discussed here except to note that no single method has proven to be applicable to all situations. Here, the authors describe a new acceleration method that is based solely on transport sweeps, is algebraically linear (and is therefore amenable to a Fourier analysis), and yields a theoretical spectral radius bounded by one-third for all cases. This method does not introduce spatial differencing difficulties (as is the case for DSA) nor does its theoretical performance degrade as a function of mesh and material properties (as is the case for TSA). Practical simulations of the new method agree with the theoretical predictions, except for scattering ratios very close to unity. At this time, they believe that the discrepancy is due to the effect of boundary conditions. This is discussed further.
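The spectral radius quoted here is, in practice, measured from the contraction of iteration errors. Below is a minimal sketch of that measurement: a generic linear fixed-point iteration stands in for a transport sweep, and the test matrix and random seeds are arbitrary choices of ours, not anything from the paper:

```python
import numpy as np

def estimate_spectral_radius(M, iters=200, seed=1):
    """Estimate rho(M) from the ratio of successive error norms of x_{k+1} = M x_k + b."""
    rng = np.random.default_rng(seed)
    e = rng.standard_normal(M.shape[0])     # error vector; b cancels out of the error equation
    ratio = 0.0
    for _ in range(iters):
        e_new = M @ e
        ratio = np.linalg.norm(e_new) / np.linalg.norm(e)
        e = e_new / np.linalg.norm(e_new)   # renormalize to avoid underflow
    return ratio

# Mimic unaccelerated source iteration, whose error-contraction factor
# approaches the scattering ratio c, with a matrix of known radius c.
c = 0.9
Q, _ = np.linalg.qr(np.random.default_rng(2).standard_normal((5, 5)))
M = c * Q                                   # orthogonal Q: every eigenvalue has modulus c
rho = estimate_spectral_radius(M)
```

For an acceleration scheme like the one described, the same measurement applied to the accelerated iteration should return a value bounded by one-third.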
Method for generating a plasma wave to accelerate electrons
Umstadter, D.; Esarey, E.; Kim, J.K.
1997-06-10
The invention provides a method and apparatus for generating large amplitude nonlinear plasma waves, driven by an optimized train of independently adjustable, intense laser pulses. In the method, optimal pulse widths, interpulse spacing, and intensity profiles of each pulse are determined for each pulse in a series of pulses. A resonant region of the plasma wave phase space is found where the plasma wave is driven most efficiently by the laser pulses. The accelerator system of the invention comprises several parts: the laser system, with its pulse-shaping subsystem; the electron gun system, also called beam source, which preferably comprises photo cathode electron source and RF-LINAC accelerator; electron photo-cathode triggering system; the electron diagnostics; and the feedback system between the electron diagnostics and the laser system. The system also includes plasma source including vacuum chamber, magnetic lens, and magnetic field means. The laser system produces a train of pulses that has been optimized to maximize the axial electric field amplitude of the plasma wave, and thus the electron acceleration, using the method of the invention. 21 figs.
GPU Accelerated Spectral Element Methods: 3D Euler equations
NASA Astrophysics Data System (ADS)
Abdi, D. S.; Wilcox, L.; Giraldo, F.; Warburton, T.
2015-12-01
A GPU-accelerated nodal discontinuous Galerkin method for the solution of the three-dimensional Euler equations is presented. The Euler equations are nonlinear hyperbolic equations that are widely used in Numerical Weather Prediction (NWP). Therefore, acceleration of the method plays an important practical role, not only in producing daily forecasts faster but also in obtaining more accurate (high-resolution) results. The equation sets used in our atmospheric model NUMA (non-hydrostatic unified model of the atmosphere) take into consideration non-hydrostatic effects that become more important at high resolution. We use algorithms suited to the single instruction multiple thread (SIMT) architecture of GPUs to accelerate the solution by an order of magnitude (20x) relative to the CPU implementation. For portability to heterogeneous computing environments, we use a new programming language, OCCA, which can be cross-compiled to either OpenCL, CUDA or OpenMP at runtime. Finally, the accuracy and performance of our GPU implementations are verified using several benchmark problems representative of different scales of atmospheric dynamics.
Particle acceleration at shocks - A Monte Carlo method
NASA Technical Reports Server (NTRS)
Kirk, J. G.; Schneider, P.
1987-01-01
A Monte Carlo method is presented for the problem of acceleration of test particles at relativistic shocks. The particles are assumed to diffuse in pitch angle as a result of scattering off magnetic irregularities frozen into the fluid. Several tests are performed using the analytic results available for both relativistic and nonrelativistic shock speeds. The acceleration at relativistic shocks under the influence of radiation losses is investigated, including the effects of a momentum dependence in the diffusion coefficient. The results demonstrate the usefulness of the technique in those situations in which the diffusion approximation cannot be employed, such as when relativistic bulk motion is considered, when particles are permitted to escape at the boundaries, and when the effects of the finite length of the particle mean free path are important.
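The flavor of such Monte Carlo calculations can be conveyed with a drastically simplified, nonrelativistic toy model (a Bell-style escape/gain random walk of our own, not the paper's pitch-angle diffusion code): each shock-crossing cycle multiplies a particle's momentum by a factor g and the particle escapes downstream with probability p_esc, which yields a power-law spectrum N(>p) proportional to p^(ln(1-p_esc)/ln g):

```python
import numpy as np

rng = np.random.default_rng(0)
g, p_esc, n_part = 1.1, 0.2, 200_000    # assumed gain factor and escape probability
p = np.ones(n_part)                     # momenta in units of the injection momentum
active = np.ones(n_part, dtype=bool)
while active.any():
    escaping = active & (rng.random(n_part) < p_esc)   # lost downstream this cycle
    active &= ~escaping
    p[active] *= g                      # Fermi gain for particles that return upstream

theory = np.log(1 - p_esc) / np.log(g)  # expected integral slope, about -2.34
# measure the slope of N(>p) between p = 2 and p = 8
lo, hi = (p > 2).mean(), (p > 8).mean()
slope = float(np.log(hi / lo) / np.log(8.0 / 2.0))
```

The fitted slope agrees with the analytic value up to the discreteness of the geometric momentum steps; the paper's full scheme replaces the fixed per-cycle gain and escape probability with pitch-angle diffusion across a relativistic shock.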
GPU Accelerated Discontinuous Galerkin Methods for Shallow Water Equations
NASA Astrophysics Data System (ADS)
Gandham, Rajesh; Medina, David; Warburton, Timothy
2015-07-01
We discuss the development, verification, and performance of a GPU-accelerated discontinuous Galerkin method for the solution of the two-dimensional nonlinear shallow water equations. The shallow water equations are hyperbolic partial differential equations widely used in the simulation of tsunami wave propagation. Our algorithms are tailored to take advantage of the single instruction multiple data (SIMD) architecture of graphics processing units. The time integration is accelerated by local time stepping based on a multi-rate Adams-Bashforth scheme. A total variation bounded limiter is adopted for nonlinear stability of the numerical scheme. This limiter is coupled with a mass- and momentum-conserving positivity-preserving limiter for the special treatment of dry or partially wet elements in the triangulation. Accuracy, robustness and performance are demonstrated with the aid of test cases. We compare the performance of the kernels expressed in a portable threading language, OCCA, when cross-compiled with OpenCL, CUDA, and OpenMP at runtime.
Convergence acceleration of the Proteus computer code with multigrid methods
NASA Technical Reports Server (NTRS)
Demuren, A. O.; Ibraheem, S. O.
1992-01-01
Presented here is the first part of a study to implement convergence acceleration techniques based on the multigrid concept in the Proteus computer code. A review is given of previous studies on the implementation of multigrid methods in computer codes for compressible flow analysis. Also presented is a detailed stability analysis of upwind and central-difference based numerical schemes for solving the Euler and Navier-Stokes equations. Results are given of a convergence study of the Proteus code on computational grids of different sizes. The results presented here form the foundation for the implementation of multigrid methods in the Proteus code.
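The multigrid concept referred to here can be illustrated with a minimal two-grid cycle for the 1-D Poisson equation. This is our own sketch with weighted-Jacobi smoothing, injection restriction, and a direct coarse solve; it is not the Proteus implementation:

```python
import numpy as np

def jacobi(u, f, h, sweeps, w=2.0 / 3.0):
    """Weighted-Jacobi smoothing for -u'' = f with zero Dirichlet BCs."""
    for _ in range(sweeps):
        u[1:-1] = (1 - w) * u[1:-1] + w * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
    return u

def two_grid(u, f, h):
    u = jacobi(u, f, h, 3)                                  # pre-smooth
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] + (u[:-2] - 2 * u[1:-1] + u[2:]) / (h * h)  # residual
    rc = r[::2].copy()                                      # restrict (injection)
    nc = len(rc) - 1
    Ac = (2 * np.eye(nc - 1) - np.eye(nc - 1, k=1)
          - np.eye(nc - 1, k=-1)) / (2 * h) ** 2            # coarse operator
    ec = np.zeros_like(rc)
    ec[1:-1] = np.linalg.solve(Ac, rc[1:-1])                # coarse error solve
    u += np.interp(np.arange(len(u)), np.arange(0, len(u), 2), ec)  # prolong
    return jacobi(u, f, h, 3)                               # post-smooth

n = 64
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)
f = np.pi ** 2 * np.sin(np.pi * x)       # exact solution u = sin(pi x)
u = np.zeros(n + 1)
for _ in range(10):
    u = two_grid(u, f, h)
err = float(np.max(np.abs(u - np.sin(np.pi * x))))
```

A full multigrid method recurses the coarse solve instead of inverting directly; the acceleration reported in the study comes from exactly this separation of smooth and oscillatory error components.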
The Re-acceleration of Galactic Electrons at the Heliospheric Termination Shock
NASA Astrophysics Data System (ADS)
Prinsloo, P. L.; Potgieter, M. S.; Strauss, R. D.
2017-02-01
Observations by the Voyager spacecraft in the outer heliosphere presented several challenges for the paradigm of diffusive shock acceleration (DSA) at the solar wind termination shock (TS). In this study, the viability of DSA as a re-acceleration mechanism for galactic electrons is investigated using a comprehensive cosmic-ray modulation model. The results demonstrate that the efficiency of DSA depends strongly on the shape of the electron spectra incident at the TS, which in turn depends on the features of the local interstellar spectrum. Modulation processes such as drifts therefore also influence the re-acceleration process. It is found that re-accelerated electrons make appreciable contributions to intensities in the heliosphere and that increases caused by DSA at the TS are comparable to intensity enhancements observed by Voyager 1 ahead of the TS crossing. The modeling results are interpreted as support for DSA as a re-acceleration mechanism for galactic electrons at the TS.
Ladd, Lauren M.; Tirkes, Temel; Tann, Mark; Agarwal, David M.; Johnson, Matthew S.; Tahir, Bilal; Sandrasegaran, Kumaresan
2016-01-01
Background/Aims The diagnosis and treatment plan for hepatocellular carcinoma (HCC) can be made from radiologic imaging. However, lesion detection may vary depending on the imaging modality. This study aims to evaluate the sensitivities of hepatic multidetector computed tomography (MDCT), magnetic resonance imaging (MRI), and digital subtraction angiography (DSA) in the detection of HCC and the consequent management impact on potential liver transplant patients. Methods One hundred and sixteen HCC lesions were analyzed in 41 patients who received an orthotopic liver transplant (OLT). All of the patients underwent pretransplantation hepatic DSA, MDCT, and/or MRI. The imaging results were independently reviewed retrospectively in a blinded fashion by two interventional and two abdominal radiologists. The liver explant pathology was used as the gold standard for assessing each imaging modality. Results The sensitivity for overall HCC detection was higher for cross-sectional imaging using MRI (51.5%, 95% confidence interval [CI]=36.2-58.4%) and MDCT (49.8%, 95% CI=43.7-55.9%) than for DSA (41.7%, 95% CI=36.2-47.3%) (P=0.05). The difference in false-positive rate was not statistically significant between MRI (22%), MDCT (29%), and DSA (29%) (P=0.67). The sensitivity was significantly higher for detecting right lobe lesions than left lobe lesions for all modalities (MRI: 56.1% vs. 43.1%, MDCT: 55.0% vs. 42.0%, and DSA: 46.9% vs. 33.9%; all P<0.01). The sensitivities of the three imaging modalities were also higher for lesions ≥2 cm vs. <2 cm (MRI: 73.4% vs. 32.7%, MDCT: 66.9% vs. 33.8%, and DSA: 62.2% vs. 24.1%; all P<0.01). The interobserver correlation was rated as very good to excellent. Conclusion The sensitivity for detecting HCC is higher for MRI and MDCT than for DSA, and so cross-sectional imaging modalities should be used to evaluate OLT candidacy. PMID:27987537
A complex family of class-II restriction endonucleases, DsaI-VI, in Dactylococcopsis salina.
Laue, F; Evans, L R; Jarsch, M; Brown, N L; Kessler, C
1991-01-02
A series of class-II restriction endonucleases (ENases) was discovered in the halophilic, phototrophic, gas-vacuolated cyanobacterium Dactylococcopsis salina sp. nov. The six novel enzymes are characterized by the following recognition sequences and cut positions (↓): 5'-C↓CRYGG-3' (DsaI); 5'-GG↓CC-3' (DsaII); 5'-R↓GATCY-3' (DsaIII); 5'-G↓GWCC-3' (DsaIV); 5'-↓CCNGG-3' (DsaV); and 5'-GTMKAC-3' (DsaVI), where W = A or T, M = A or C, K = G or T, and N = A, G, C or T. In addition, traces of further possible activity were detected. DsaI has a novel sequence specificity and DsaV is an isoschizomer of ScrFI, but with a novel cut specificity. A purification procedure was established to separate all six ENases, resulting in their isolation free of contaminating nuclease activities. DsaI cleavage is influenced by N6-methyladenine residues [derived from the Escherichia coli-encoded DNA methyltransferase (MTase) M.Eco damI] within the overlapping sequence, 5'-CCRYMGGATC-3'; DsaV hydrolysis is inhibited by a C-5-methylcytosine residue in its recognition sequence (5'-CMCNGG-3'), generated in some DsaV sites by the E. coli-encoded MTase, M.Eco dcmI.
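Degenerate recognition sequences like these can be searched mechanically by expanding the IUPAC ambiguity codes the abstract defines. The small routine below is our own illustration (the helper name find_sites is hypothetical); it locates the DsaI site, 5'-CCRYGG-3' cut after the first C, on the top strand:

```python
# IUPAC degeneracy codes used in the abstract
IUPAC = {"A": "A", "C": "C", "G": "G", "T": "T",
         "R": "AG", "Y": "CT", "W": "AT", "M": "AC", "K": "GT", "N": "ACGT"}

def find_sites(seq, recognition, cut_offset):
    """Return (site_start, top-strand cut position) for each match."""
    hits = []
    m = len(recognition)
    for i in range(len(seq) - m + 1):
        if all(base in IUPAC[code]
               for base, code in zip(seq[i:i + m], recognition)):
            hits.append((i, i + cut_offset))
    return hits

# DsaI recognizes CCRYGG and cuts one base into the site
dsa1_hits = find_sites("TTCCACGGAA", "CCRYGG", 1)
```

The same call with "GGCC" and offset 2 models DsaII; a real digest tool would also track the bottom-strand cut and resulting overhangs.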
Design strategy for integrating DSA via patterning in sub-7 nm interconnects
NASA Astrophysics Data System (ADS)
Karageorgos, Ioannis; Ryckaert, Julien; Tung, Maryann C.; Wong, H.-S. P.; Gronheid, Roel; Bekaert, Joost; Karageorgos, Evangelos; Croes, Kris; Vandenberghe, Geert; Stucchi, Michele; Dehaene, Wim
2016-03-01
In recent years, major advancements have been made in the directed self-assembly (DSA) of block copolymers (BCPs). As a result, the insertion of DSA into IC fabrication is being actively considered for the sub-7 nm nodes. At these nodes the DSA technology could alleviate the costs of multiple patterning and limit the number of litho masks required per metal layer. One of the most straightforward approaches for DSA implementation would be via patterning through templated DSA, where hole patterns are readily accessible through templated confinement of cylindrical-phase BCP materials. Our in-house studies show that decomposition of via layers in realistic circuits below the 7 nm node would require many multi-patterning steps (or colors) when using 193 nm immersion lithography. Even the use of EUV might require double patterning at these dimensions, since the minimum via distance would be smaller than the EUV resolution. The grouping of vias through templated DSA can resolve local conflicts in high-density areas. This way, the number of required colors can be significantly reduced. Implementing this approach requires a DSA-aware mask decomposition. In this paper, our design approach for DSA via patterning at sub-7 nm nodes is discussed. We propose options to expand the list of DSA-compatible via patterns (DSA letters) and we define matching cost formulas for optimal DSA-aware layout decomposition. The flowchart of our proposed tool is presented.
Analytic Method to Estimate Particle Acceleration in Flux Ropes
NASA Technical Reports Server (NTRS)
Guidoni, S. E.; Karpen, J. T.; DeVore, C. R.
2015-01-01
The mechanism that accelerates particles to the energies required to produce the observed high-energy emission in solar flares is not well understood. Drake et al. (2006) proposed a kinetic mechanism for accelerating electrons in contracting magnetic islands formed by reconnection. In this model, particles that gyrate around magnetic field lines transit from island to island, increasing their energy by Fermi acceleration in those islands that are contracting. Based on these ideas, we present an analytic model to estimate the energy gain of particles orbiting around field lines inside a flux rope (2.5D magnetic island). We calculate the change in the velocity of the particles as the flux rope evolves in time. The method assumes a simple profile for the magnetic field of the evolving island; it can be applied to any case where flux ropes are formed. In our case, the flux-rope evolution is obtained from our recent high-resolution, compressible 2.5D MHD simulations of breakout eruptive flares. The simulations allow us to resolve in detail the generation and evolution of large-scale flux ropes as a result of sporadic and patchy reconnection in the flare current sheet. Our results show that the initial energy of particles can be increased by 2-5 times in a typical contracting island, before the island reconnects with the underlying arcade. Therefore, particles need to transit only from 3-7 islands to increase their energies by two orders of magnitude. These macroscopic regions, filled with a large number of particles, may explain the large observed rates of energetic electron production in flares. We conclude that this mechanism is a promising candidate for electron acceleration in flares, but further research is needed to extend our results to 3D flare conditions.
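The abstract's island-count estimate follows from simple arithmetic: if each contracting island multiplies the particle energy by a factor g between 2 and 5, then n = ceil(ln 100 / ln g) islands suffice for a two-orders-of-magnitude gain. A quick check of that claim:

```python
import math

# Energy gain factor per contracting island (2-5x per the abstract);
# needed[g] = number of islands required for a 100x total gain.
needed = {g: math.ceil(math.log(100) / math.log(g)) for g in (2, 3, 5)}
```

The result reproduces the quoted range of 3-7 islands across the stated per-island gains.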
NASA Astrophysics Data System (ADS)
Kim, JiHoon; Yin, Jian; Cao, Yi; Her, YoungJun; Petermann, Claire; Wu, Hengpeng; Shan, Jianhui; Tsutsumi, Tomohiko; Lin, Guanyang
2015-03-01
Significant progress on 300 mm wafer-level DSA (directed self-assembly) performance stability and pattern quality has been demonstrated in recent years. DSA technology is now widely regarded as a leading complementary patterning technique for future-node integrated circuit (IC) device manufacturing. We first published the SMART™ DSA flow in 2012. In 2013, we demonstrated that SMART™ DSA pattern quality is comparable, in pattern uniformity on a 300 mm wafer, to that generated using traditional multiple patterning techniques. In addition, we also demonstrated that less than 1.5 nm/3σ LER (line edge roughness) is achievable for a 16 nm half-pitch DSA line/space pattern through the SMART™ DSA process. In this publication, we will report the impact of key pre-pattern features and processing conditions on SMART™ DSA performance. The 300 mm wafer process window, CD uniformity, and pattern LER/LWR after etch transfer into a carbon hard mask will be discussed as well.
Convergence acceleration of the Proteus computer code with multigrid methods
NASA Technical Reports Server (NTRS)
Demuren, A. O.; Ibraheem, S. O.
1995-01-01
This report presents the results of a study to implement convergence acceleration techniques based on the multigrid concept in the two-dimensional and three-dimensional versions of the Proteus computer code. The first section presents a review of the relevant literature on the implementation of the multigrid methods in computer codes for compressible flow analysis. The next two sections present detailed stability analysis of numerical schemes for solving the Euler and Navier-Stokes equations, based on conventional von Neumann analysis and the bi-grid analysis, respectively. The next section presents details of the computational method used in the Proteus computer code. Finally, the multigrid implementation and applications to several two-dimensional and three-dimensional test problems are presented. The results of the present study show that the multigrid method always leads to a reduction in the number of iterations (or time steps) required for convergence. However, there is an overhead associated with the use of multigrid acceleration. The overhead is higher in 2-D problems than in 3-D problems, thus overall multigrid savings in CPU time are in general better in the latter. Savings of about 40-50 percent are typical in 3-D problems, but they are about 20-30 percent in large 2-D problems. The present multigrid method is applicable to steady-state problems and is therefore ineffective in problems with inherently unstable solutions.
Method and apparatus for varying accelerator beam output energy
Young, Lloyd M.
1998-01-01
A coupled cavity accelerator (CCA) accelerates a charged particle beam with rf energy from a rf source. An input accelerating cavity receives the charged particle beam and an output accelerating cavity outputs the charged particle beam at an increased energy. Intermediate accelerating cavities connect the input and the output accelerating cavities to accelerate the charged particle beam. A plurality of tunable coupling cavities are arranged so that each one of the tunable coupling cavities respectively connect an adjacent pair of the input, output, and intermediate accelerating cavities to transfer the rf energy along the accelerating cavities. An output tunable coupling cavity can be detuned to variably change the phase of the rf energy reflected from the output coupling cavity so that regions of the accelerator can be selectively turned off when one of the intermediate tunable coupling cavities is also detuned.
NASA Astrophysics Data System (ADS)
Cheng, Jing; Lawson, Richard A.; Yeh, Wei-Ming; Tolbert, Laren M.; Henderson, Clifford L.
2011-04-01
Directed self-assembly (DSA) of block copolymers has gained significant attention in recent years as a possible alternative for large-area fabrication of future sub-30 nm lithographic patterns. To achieve this patterning, at least three critical pieces are needed: (1) a block copolymer with sufficient immiscibility between the two blocks to drive phase separation at the low molecular weights required to achieve such small phase domains, (2) a method for selectively removing one of the blocks after phase separation to form a relief pattern, and (3) a method for producing the templated surfaces used to guide and register the phase-separated patterns on the substrate of interest. Current methods for producing the patterned substrate template, whether chemoepitaxial or graphoepitaxial in nature, are generally complex, involving a large number of steps that are not easily applied to a variety of different substrate surfaces. For example, numerous substrates have been studied to provide neutral wettability to styrene-methacrylate (PS-b-PMMA) block copolymers, such as random styrene-methacrylate copolymer films (PS-r-PMMA) or self-assembled monolayer (SAM) modified surfaces, which induce perpendicularly oriented morphologies for PS-b-PMMA self-assembly. In chemical epitaxy processes, a layer of photoresist is generally then coated on such a neutral substrate film and patterned to be commensurate with the periodicity of the PS-b-PMMA being used. The open (i.e., space) regions in the resist are then exposed to alter their chemistry, e.g., with soft X-ray or oxygen plasma exposures, to achieve a hydrophilicity that preferentially wets PMMA. Finally, the resist is stripped and the block copolymer is coated and assembled on the template surface. Such multi-step processes would obviously not be preferred if alternatives existed. As a step toward the goal of making DSA processes simpler, a photodefinable substrate film that
Discontinuous diffusion synthetic acceleration for Sn transport on 2D arbitrary polygonal meshes
NASA Astrophysics Data System (ADS)
Turcksin, Bruno; Ragusa, Jean C.
2014-10-01
In this paper, a Diffusion Synthetic Acceleration (DSA) technique applied to the Sn radiation transport equation is developed using Piece-Wise Linear Discontinuous (PWLD) finite elements on arbitrary polygonal grids. The discretization of the DSA equations employs an Interior Penalty technique, as is classically done for the stabilization of the diffusion equation using discontinuous finite element approximations. The penalty method yields a system of linear equations that is Symmetric Positive Definite (SPD). Thus, solution techniques such as Preconditioned Conjugate Gradient (PCG) can be effectively employed. Algebraic MultiGrid (AMG) and Symmetric Gauss-Seidel (SGS) are employed as conjugate gradient preconditioners for the DSA system. AMG is shown to be significantly more efficient than SGS. Fourier analyses are carried out and we show that this discontinuous finite element DSA scheme is always stable and effective at reducing the spectral radius for iterative transport solves, even for grids with high-aspect ratio cells. Numerical results are presented for different grid types: quadrilateral, hexagonal, and polygonal grids as well as grids with local mesh adaptivity.
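The solver chain described above (conjugate gradient on the SPD DSA system, with a symmetric Gauss-Seidel preconditioner as one of the options) can be sketched as follows. The tridiagonal model matrix is an illustrative stand-in, not the actual interior-penalty PWLD discretization:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve_triangular

# Model SPD system: a 1D diffusion (tridiagonal) matrix stands in for the
# interior-penalty DSA matrix, which is not reproduced here.
n = 200
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

# Symmetric Gauss-Seidel preconditioner: M = (D+L) D^{-1} (D+U)
D = sp.diags(A.diagonal()).tocsr()
DL = sp.tril(A, k=0, format="csr")   # D + L (lower triangle incl. diagonal)
DU = sp.triu(A, k=0, format="csr")   # D + U (upper triangle incl. diagonal)

def sgs_apply(r):
    # Apply M^{-1} via two triangular solves
    w = spsolve_triangular(DL, r, lower=True)
    return spsolve_triangular(DU, D @ w, lower=False)

def pcg(A, b, M_inv, tol=1e-10, maxiter=500):
    """Preconditioned conjugate gradient for an SPD matrix A."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

x = pcg(A, b, sgs_apply)
```

An AMG preconditioner (reported as the more efficient option in the paper) would replace `sgs_apply` with a multigrid V-cycle, e.g. from an external AMG package.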
Electromagnetic metamaterial simulations using a GPU-accelerated FDTD method
NASA Astrophysics Data System (ADS)
Seok, Myung-Su; Lee, Min-Gon; Yoo, SeokJae; Park, Q.-Han
2015-12-01
Metamaterials composed of artificial subwavelength structures exhibit extraordinary properties that cannot be found in nature. Designing artificial structures having exceptional properties plays a pivotal role in current metamaterial research. We present a new numerical simulation scheme for metamaterial research. The scheme is based on a graphic processing unit (GPU)-accelerated finite-difference time-domain (FDTD) method. The FDTD computation can be significantly accelerated when GPUs are used instead of only central processing units (CPUs). We explain how the fast FDTD simulation of large-scale metamaterials can be achieved through communication optimization in a heterogeneous CPU/GPU-based computer cluster. Our method also includes various advanced FDTD techniques: the non-uniform grid technique, the total-field/scattered-field (TFSF) technique, the auxiliary field technique for dispersive materials, the running discrete Fourier transform, and the complex structure setting. We demonstrate the power of our new FDTD simulation scheme by simulating the negative refraction of light in a coaxial waveguide metamaterial.
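The core FDTD update loop is simple enough to sketch in one dimension. A GPU-accelerated version keeps the same leapfrog updates but holds the field arrays on the device (e.g. as CuPy arrays); the grid size, source shape, and Courant number below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

# Minimal 1D vacuum FDTD (Yee scheme) in normalized units.
nx, nt = 400, 600
ez = np.zeros(nx)        # E_z on integer grid points
hy = np.zeros(nx - 1)    # H_y on staggered half-points
S = 0.5                  # Courant number c*dt/dx (<= 1 for 1D stability)

for t in range(nt):
    hy += S * np.diff(ez)                    # curl E -> update H
    ez[1:-1] += S * np.diff(hy)              # curl H -> update E
    ez[nx // 2] += np.exp(-((t - 30.0) / 10.0) ** 2)  # soft Gaussian source
```

The advanced techniques listed in the abstract (TFSF injection, dispersive-material auxiliary fields, non-uniform grids) layer additional terms onto these same two update statements.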
METHOD OF PRODUCING AND ACCELERATING AN ION BEAM
NASA Technical Reports Server (NTRS)
Foster, John E. (Inventor)
2005-01-01
A method of producing and accelerating an ion beam comprising the steps of providing a magnetic field with a cusp that opens in an outward direction along a centerline that passes through a vertex of the cusp; providing an ionizing gas that sprays outward through at least one capillary-like orifice in a plenum that is positioned such that the orifice is on the centerline in the cusp, outward of the vertex of the cusp; providing a cathode electron source, and positioning it outward of the orifice and off of the centerline; and positively charging the plenum relative to the cathode electron source such that the plenum functions as an anode. A hot filament may be used as the cathode electron source, and permanent magnets may be used to provide the magnetic field.
Just in Time DSA-The Hanford Nuclear Safety Basis Strategy
Olinger, S. J.; Buhl, A. R.
2002-02-26
The U.S. Department of Energy, Richland Operations Office (RL) is responsible for 30 hazard category 2 and 3 nuclear facilities that are operated by its prime contractors, Fluor Hanford Incorporated (FHI), Bechtel Hanford, Incorporated (BHI) and Pacific Northwest National Laboratory (PNNL). The publication of Title 10, Code of Federal Regulations, Part 830, Subpart B, Safety Basis Requirements (the Rule) in January 2001 imposed the requirement that the Documented Safety Analyses (DSA) for these facilities be reviewed against the requirements of the Rule. Those DSA that do not meet the requirements must either be upgraded to satisfy the Rule, or an exemption must be obtained. RL and its prime contractors have developed a Nuclear Safety Strategy that provides a comprehensive approach for supporting RL's efforts to meet its long-term objectives for hazard category 2 and 3 facilities while also meeting the requirements of the Rule. This approach will result in a reduction of the total number of safety basis documents that must be developed and maintained to support the remaining mission and closure of the Hanford Site, and ensure that the documentation that must be developed will support: compliance with the Rule; a "Just-In-Time" approach to development of Rule-compliant safety bases supported by temporary exemptions; and consolidation of safety basis documents that support multiple facilities with a common mission (e.g. decontamination, decommissioning and demolition [DD&D], waste management, surveillance and maintenance). This strategy provides a clear path to transition the safety bases for the various Hanford facilities from support of operation and stabilization missions through DD&D to accelerate closure. This "Just-In-Time" strategy can also be tailored for other DOE sites, creating the potential for large cost savings and schedule reductions throughout the DOE complex.
NASA Astrophysics Data System (ADS)
Davis, Brian; Oberstar, Erick; Royalty, Kevin; Schafer, Sebastian; Strother, Charles; Mistretta, Charles
2015-03-01
Static C-Arm CT 3D FDK baseline reconstructions (3D-DSA) are unable to provide temporal information to radiologists. 4D-DSA provides a time series of 3D volumes by implementing a constrained-image reconstruction (a thresholded 3D-DSA) that utilizes the temporal dynamics in the 2D projections. The volumetric limiting spatial resolution (VLSR) of 4D-DSA is quantified and compared to a 3D-DSA reconstruction using the same 3D-DSA parameters. The effects of two 4D-DSA parameters were investigated over significant ranges: the size of the 2D blurring kernel applied to the projections, and the threshold applied to the 3D-DSA when generating the constraining image of a scanned phantom (SPH) and an electronic phantom (EPH). The SPH consisted of a 76-micron tungsten wire encased in a 47 mm O.D. plastic, radially concentric, thin-walled support structure. An 8-second/248-frame/198° scan protocol acquired the raw projection data. VLSR was determined from averaged MTF curves generated from each 2D transverse slice of every (248) 4D temporal frame (3D). 4D results for the SPH and EPH were compared to the 3D-DSA. Analysis of the 3D-DSA resulted in a VLSR of 2.28 and 1.69 lp/mm for the EPH and SPH, respectively. 2D kernel sizes of either 10x10 or 20x20 pixels, with a threshold of 10% of the 3D-DSA as a constraining image, provided a 4D-DSA VLSR nearest to the 3D-DSA. The 4D-DSA algorithms yielded 2.21 and 1.67 lp/mm, with percent errors of 3.1% and 1.2% for the EPH and SPH, respectively, as compared to the 3D-DSA. This research indicates that 4D-DSA is capable of retaining the resolution of the 3D-DSA.
Improved image fusion method based on NSCT and accelerated NMF.
Wang, Juan; Lai, Siyu; Li, Mingdong
2012-01-01
In order to improve algorithm efficiency and performance, a technique for image fusion based on the Non-subsampled Contourlet Transform (NSCT) domain and an Accelerated Non-negative Matrix Factorization (ANMF)-based algorithm is proposed in this paper. Firstly, the registered source images are decomposed in multi-scale and multi-direction using the NSCT method. Then, the ANMF algorithm is executed on low-frequency sub-images to get the low-pass coefficients. The low frequency fused image can be generated faster in that the update rules for W and H are optimized and less iterations are needed. In addition, the Neighborhood Homogeneous Measurement (NHM) rule is performed on the high-frequency part to achieve the band-pass coefficients. Finally, the ultimate fused image is obtained by integrating all sub-images with the inverse NSCT. The simulated experiments prove that our method indeed promotes performance when compared to PCA, NSCT-based, NMF-based and weighted NMF-based algorithms.
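The W/H update rules mentioned above are the core of any NMF-based fusion step. A minimal sketch of the plain multiplicative updates (Lee-Seung form) is shown below; the paper's accelerated variant optimizes these same updates to need fewer iterations, which is not reproduced here, and the small random matrix is an illustrative stand-in for low-frequency sub-image coefficients:

```python
import numpy as np

# Plain multiplicative-update NMF factorizing a non-negative matrix V ~= W @ H.
rng = np.random.default_rng(0)
V = rng.random((20, 16))        # stand-in for low-frequency sub-image data
k = 4                           # factorization rank (assumed)
W = rng.random((20, k)) + 1e-3
H = rng.random((k, 16)) + 1e-3

err0 = np.linalg.norm(V - W @ H)
for _ in range(200):
    H *= (W.T @ V) / (W.T @ W @ H + 1e-12)   # multiplicative H update
    W *= (V @ H.T) / (W @ H @ H.T + 1e-12)   # multiplicative W update
err = np.linalg.norm(V - W @ H)
```

The updates keep W and H entrywise non-negative by construction and monotonically decrease the Frobenius reconstruction error.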
On some Aitken-like acceleration of the Schwarz method
NASA Astrophysics Data System (ADS)
Garbey, M.; Tromeur-Dervout, D.
2002-12-01
In this paper we present a family of domain decomposition methods based on Aitken-like acceleration of the Schwarz method, seen as an iterative procedure with a linear rate of convergence. We first present the so-called Aitken-Schwarz procedure for linear differential operators. The solver can be a direct solver when applied to the Helmholtz problem with a five-point finite difference scheme on regular grids. We then introduce the Steffensen-Schwarz variant, which is an iterative domain decomposition solver that can be applied to linear and nonlinear problems. We show that these solvers have reasonable numerical efficiency compared to classical fast solvers for the Poisson problem or to multigrid for more general linear and nonlinear elliptic problems. However, the salient feature of our method is that our algorithm has high tolerance to slow networks in the context of distributed parallel computing and is attractive, generally speaking, for use with computer architectures whose performance is limited by memory bandwidth rather than the flop performance of the CPU. This is nowadays the case for most parallel computers using the RISC processor architecture. We will illustrate this highly desirable property of our algorithm with large-scale computing experiments.
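The Aitken idea itself is easy to demonstrate on a scalar fixed-point iteration with a linear rate of convergence; the Aitken-Schwarz method applies the same Delta-squared extrapolation to the Schwarz interface iterates. A minimal sketch (Steffensen's scheme, on an illustrative cosine fixed point):

```python
import math

def g(x):
    return math.cos(x)  # fixed point x* ~= 0.7390851, linear convergence

def plain(x, n):
    # n ordinary fixed-point steps
    for _ in range(n):
        x = g(x)
    return x

def aitken(x, n):
    # each sweep uses two g-evaluations plus the Delta-squared correction
    for _ in range(n):
        x1, x2 = g(x), g(g(x))
        denom = x2 - 2.0 * x1 + x
        if abs(denom) < 1e-15:   # already converged to machine precision
            return x2
        x = x - (x1 - x) ** 2 / denom
    return x

x_star = 0.7390851332151607
```

A handful of Aitken sweeps reach machine-level accuracy where the plain iteration, with its linear rate, is still far away.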
Accelerated molecular dynamics methods: introduction and recent developments
Uberuaga, Blas Pedro; Voter, Arthur F; Perez, Danny; Shim, Y; Amar, J G
2009-01-01
reaction pathways may be important, we return instead to a molecular dynamics treatment, in which the trajectory itself finds an appropriate way to escape from each state of the system. Since a direct integration of the trajectory would be limited to nanoseconds, while we are seeking to follow the system for much longer times, we modify the dynamics in some way to cause the first escape to happen much more quickly, thereby accelerating the dynamics. The key is to design the modified dynamics in a way that does as little damage as possible to the probability for escaping along a given pathway - i.e., we try to preserve the relative rate constants for the different possible escape paths out of the state. We can then use this modified dynamics to follow the system from state to state, reaching much longer times than we could reach with direct MD. The dynamics within any one state may no longer be meaningful, but the state-to-state dynamics, in the best case, as we discuss in the paper, can be exact. We have developed three methods in this accelerated molecular dynamics (AMD) class, in each case appealing to TST, either implicitly or explicitly, to design the modified dynamics. Each of these methods has its own advantages, and we and others have applied these methods to a wide range of problems. The purpose of this article is to give the reader a brief introduction to how these methods work, and discuss some of the recent developments that have been made to improve their power and applicability. Note that this brief review does not claim to be exhaustive: various other methods aiming at similar goals have been proposed in the literature. For the sake of brevity, our focus will exclusively be on the methods developed by the group.
A Novel Method for Vertical Acceleration Noise Suppression of a Thrust-Vectored VTOL UAV.
Li, Huanyu; Wu, Linfeng; Li, Yingjie; Li, Chunwen; Li, Hangyu
2016-12-02
Acceleration is of great importance in motion control for unmanned aerial vehicles (UAVs), especially during the takeoff and landing stages. However, the measured acceleration is inevitably polluted by severe noise. Therefore, a proper noise suppression procedure is required. This paper presents a novel method to reduce the noise in the measured vertical acceleration for a thrust-vectored tail-sitter vertical takeoff and landing (VTOL) UAV. In the new procedure, a Kalman filter is first applied to estimate the UAV mass by using the information in the vertical thrust and measured acceleration. The UAV mass is then used to compute an estimate of UAV vertical acceleration. The estimated acceleration is finally fused with the measured acceleration to obtain the minimum variance estimate of vertical acceleration. By doing this, the new approach incorporates the thrust information into the acceleration estimate. The method is applied to the data measured in a VTOL UAV takeoff experiment. Two other denoising approaches developed by former researchers are also tested for comparison. The results demonstrate that the new method is able to suppress the acceleration noise substantially. It also maintains the real-time performance in the final estimated acceleration, which is not seen in the former denoising approaches. The acceleration treated with the new method can be readily used in the motion control applications for UAVs to achieve improved accuracy.
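The two-stage procedure described above can be sketched on synthetic data: a scalar (extended) Kalman filter estimates the vehicle mass from thrust and measured vertical acceleration, and the model acceleration T/m - g is then fused with the measurement by minimum-variance weighting. All numerical values here are illustrative assumptions, not flight data from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
g0 = 9.81
m_true = 2.0                                      # vehicle mass, kg (assumed)
T = 22.0 + 0.5 * rng.standard_normal(500)         # vertical thrust samples, N
a_true = T / m_true - g0
a_meas = a_true + 0.8 * rng.standard_normal(500)  # noisy accelerometer

# Stage 1: EKF on the (constant) mass, measurement model a = T/m - g
m_hat, P, R = 1.5, 1.0, 0.8 ** 2
for Tk, ak in zip(T, a_meas):
    H = -Tk / m_hat ** 2                 # Jacobian d(T/m - g)/dm
    S = H * P * H + R
    K = P * H / S
    m_hat += K * (ak - (Tk / m_hat - g0))
    P *= 1.0 - K * H

# Stage 2: minimum-variance fusion of model and measured acceleration
var_model, var_meas = 0.1, 0.8 ** 2      # assumed error variances
w = var_meas / (var_model + var_meas)
a_fused = w * (T / m_hat - g0) + (1.0 - w) * a_meas
```

Because the thrust information enters through the model term, the fused estimate suppresses accelerometer noise without the lag of a purely low-pass filter.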
Apparatus and method for the acceleration of projectiles to hypervelocities
Hertzberg, Abraham; Bruckner, Adam P.; Bogdanoff, David W.
1990-01-01
A projectile is initially accelerated to a supersonic velocity and then injected into a launch tube filled with a gaseous propellant. The projectile outer surface and launch tube inner surface form a ramjet having a diffuser, a combustion chamber and a nozzle. A catalytic coated flame holder projecting from the projectile ignites the gaseous propellant in the combustion chamber thereby accelerating the projectile in a subsonic combustion mode zone. The projectile then enters an overdriven detonation wave launch tube zone wherein further projectile acceleration is achieved by a formed, controlled overdriven detonation wave capable of igniting the gaseous propellant in the combustion chamber. Ultrahigh velocity projectile accelerations are achieved in a launch tube layered detonation zone having an inner sleeve filled with hydrogen gas. An explosive, which is disposed in the annular zone between the inner sleeve and the launch tube, explodes responsive to an impinging shock wave emanating from the diffuser of the accelerating projectile thereby forcing the inner sleeve inward and imparting an acceleration to the projectile. For applications wherein solid or liquid high explosives are employed, the explosion thereof forces the inner sleeve inward, forming a throat behind the projectile. This throat chokes flow behind, thereby imparting an acceleration to the projectile.
Accelerating ab initio molecular dynamics simulations by linear prediction methods
NASA Astrophysics Data System (ADS)
Herr, Jonathan D.; Steele, Ryan P.
2016-09-01
Acceleration of ab initio molecular dynamics (AIMD) simulations can be reliably achieved by extrapolation of electronic data from previous timesteps. Existing techniques utilize polynomial least-squares regression to fit previous steps' Fock or density matrix elements. In this work, the recursive Burg 'linear prediction' technique is shown to be a viable alternative to polynomial regression, and the extrapolation-predicted Fock matrix elements were three orders of magnitude closer to converged elements. Accelerations of 1.8-3.4× were observed in test systems, and in all cases, linear prediction outperformed polynomial extrapolation. Importantly, these accelerations were achieved without reducing the MD integration timestep.
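The baseline the paper compares against, polynomial least-squares extrapolation from the previous few timesteps, can be sketched directly; the smooth cosine test signal below is an illustrative stand-in for a Fock matrix element trajectory, not actual AIMD data:

```python
import numpy as np

def extrapolate(history, order=2):
    # Fit a low-order polynomial to the previous steps, then evaluate it
    # one step past the end of the window.
    t = np.arange(len(history))
    coeffs = np.polyfit(t, history, order)
    return np.polyval(coeffs, len(history))

signal = np.cos(0.05 * np.arange(40))   # steps 0..39 of a smooth quantity
pred = extrapolate(signal[-8:], order=2)
true_next = np.cos(0.05 * 40)           # the value at the next step
reuse_last = signal[-1]                 # naive "reuse previous step" guess
```

Burg linear prediction replaces the polynomial fit with recursively estimated autoregressive coefficients, which (per the abstract) predicts the next elements substantially more accurately.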
Demonstration recommendations for accelerated testing of concrete decontamination methods
Dickerson, K.S.; Ally, M.R.; Brown, C.H.; Morris, M.I.; Wilson-Nichols, M.J.
1995-12-01
A large number of aging US Department of Energy (DOE) surplus facilities located throughout the US require deactivation, decontamination, and decommissioning. Although several technologies are available commercially for concrete decontamination, emerging technologies with potential to reduce secondary waste and minimize the impact and risk to workers and the environment are needed. In response to these needs, the Accelerated Testing of Concrete Decontamination Methods project team described the nature and extent of contaminated concrete within the DOE complex and identified applicable emerging technologies. Existing information used to describe the nature and extent of contaminated concrete indicates that the most frequently occurring radiological contaminants are 137Cs, 238U (and its daughters), 60Co, 90Sr, and tritium. The total area of radionuclide-contaminated concrete within the DOE complex is estimated to be in the range of 7.9 × 10^8 ft^2, or approximately 18,000 acres. Concrete decontamination problems were matched with emerging technologies to recommend demonstrations considered to provide the most benefit to decontamination of concrete within the DOE complex. Emerging technologies with the most potential benefit were biological decontamination, electro-hydraulic scabbling, electrokinetics, and microwave scabbling.
Lu, Dong; Li, Cheng-Li; Lv, Wei-Fu; Ni, Ming; Deng, Ke-Xue; Zhou, Chun-Ze; Xiao, Jing-Kun; Zhang, Zhen-Feng; Zhang, Xing-Ming
2017-01-01
The aim of the present study was to compare multislice computed tomography angiography (MSCTA) and digital subtraction angiography (DSA) in the diagnosis of aortic dissection. In total, 49 patients with aortic lesions received enhanced computed tomography scanning, and three-dimensional (3D) images were reconstructed by volume rendering (VR), maximum intensity projection (MIP), multiplanar reformation (MPR) and curved planar reconstruction (CPR). The display rate of the entry tear site, intimal flap, and true and false lumens from each reconstruction method was calculated. For 30 patients with DeBakey type III aortic dissection, the entry tear site and size of the first intimal flap, the aortic maximum diameter at the orifice of the left subclavian artery (LSCA), the distance between the first entry tear site and the orifice of the LSCA, and the maximum diameters of the aortic true and false lumens were measured prior to implantation of endovascular covered stent-grafts. Data obtained by MSCTA and DSA were then compared. For the entry tear site, MPR, CPR and VR provided a display rate of 95.92, 95.92 and 18.37%, respectively, and the display rate of the intimal flap was 100% in the three methods. MIP did not directly display the entry tear site and intimal flap. For true and false lumens, MPR, CPR, and VR showed a display rate of 100%, while MIP only provided a display rate of 67.35%. When MSCTA was compared with DSA, there was a significant difference in the display of entry site number and position (P<0.05), whereas no significant difference was shown in the measurement of the aortic maximum diameter at the orifice of the LSCA and the maximum diameters of the true and false lumens (P>0.05). In conclusion, among the 3D post-processing reconstruction methods of MSCTA used, MPR and CPR were optimal, followed by VR and MIP. MSCTA may be the preferable imaging method for diagnosing aortic dissection and preoperatively evaluating endovascular covered stent-graft treatment. PMID:28352308
Third order TRANSPORT with MAD (Methodical Accelerator Design) input
Carey, D.C.
1988-09-20
This paper describes computer-aided design codes for particle accelerators. Among the topics discussed are: input beam description; parameters and algebraic expressions; the physical elements; beam lines; operations; and the third-order transfer matrix.
Method of accelerating photons by a relativistic plasma wave
Dawson, John M.; Wilks, Scott C.
1990-01-01
Photons of a laser pulse have their group velocity accelerated in a plasma as they are placed on a downward density gradient of a plasma wave of which the phase velocity nearly matches the group velocity of the photons. This acceleration results in a frequency upshift. If the unperturbed plasma has a slight density gradient in the direction of propagation, the photon frequencies can be continuously upshifted to significantly greater values.
Development of wide area environment accelerator operation and diagnostics method
NASA Astrophysics Data System (ADS)
Uchiyama, Akito; Furukawa, Kazuro
2015-08-01
Remote operation and diagnostic systems for particle accelerators have been developed for beam operation and maintenance in various situations. Even though fully remote experiments are not necessary, the remote diagnosis and maintenance of the accelerator is required. Considering remote-operation operator interfaces (OPIs), the use of standard protocols such as the hypertext transfer protocol (HTTP) is advantageous, because system-dependent protocols are unnecessary between the remote client and the on-site server. Here, we have developed a client system based on WebSocket, which is a new protocol provided by the Internet Engineering Task Force for Web-based systems, as a next-generation Web-based OPI using the Experimental Physics and Industrial Control System Channel Access protocol. As a result of this implementation, WebSocket-based client systems have become available for remote operation. Also, as regards practical application, the remote operation of an accelerator via a wide area network (WAN) faces a number of challenges, e.g., the accelerator has both experimental device and radiation generator characteristics. Any error in remote control system operation could result in an immediate breakdown. Therefore, we propose the implementation of an operator intervention system for remote accelerator diagnostics and support that can obviate any differences between the local control room and remote locations. Here, remote-operation Web-based OPIs, which resolve security issues, are developed.
Diamant, Kevin David; Raitses, Yevgeny; Fisch, Nathaniel Joseph
2014-05-13
Systems and methods may be provided for cylindrical Hall thrusters with independently controllable ionization and acceleration stages. The systems and methods may include a cylindrical channel having a center axial direction, a gas inlet for directing ionizable gas to an ionization section of the cylindrical channel, an ionization device that ionizes at least a portion of the ionizable gas within the ionization section to generate ionized gas, and an acceleration device distinct from the ionization device. The acceleration device may provide an axial electric field for an acceleration section of the cylindrical channel to accelerate the ionized gas through the acceleration section, where the axial electric field has an axial direction in relation to the center axial direction. The ionization section and the acceleration section of the cylindrical channel may be substantially non-overlapping.
Development of a fast voltage control method for electrostatic accelerators
NASA Astrophysics Data System (ADS)
Lobanov, Nikolai R.; Linardakis, Peter; Tsifakis, Dimitrios
2014-12-01
The concept of a novel fast voltage control loop for tandem electrostatic accelerators is described. This control loop utilises high-frequency components of the ion beam current intercepted by the image slits to generate a correction voltage that is applied to the first few gaps of the low- and high-energy acceleration tubes adjoining the high-voltage terminal. New techniques for the direct measurement of the transfer function of an ultra-high impedance structure, such as an electrostatic accelerator, have been developed. For the first time, the transfer function for the fast feedback loop has been measured directly. Slow voltage variations are stabilised with a common corona control loop, and the relationship between the transfer functions for the slow and new fast control loops required for optimum operation is discussed. The main source of terminal voltage instabilities, which are due to variation of the charging current caused by mechanical oscillations of the charging chains, has been analysed.
Method of and apparatus for accelerating a projectile
Goldstein, Yeshayahu S. A.; Tidman, Derek A.
1986-01-01
A projectile is accelerated along a confined path by supplying a pulsed high pressure, high velocity plasma jet to the rear of the projectile as the projectile traverses the path. The jet enters the confined path at a non-zero angle relative to the projectile path. The pulse is derived from a dielectric capillary tube having an interior wall from which plasma forming material is ablated in response to a discharge voltage. The projectile can be accelerated in response to the kinetic energy in the plasma jet or in response to a pressure increase of gases in the confined path resulting from the heat added to the gases by the plasma.
Comparative Oxidative Stability of Fatty Acid Alkyl Esters by Accelerated Methods
Technology Transfer Automated Retrieval System (TEKTRAN)
Several fatty acid alkyl esters were subjected to accelerated methods of oxidation, including EN 14112 (Rancimat method) and pressurized differential scanning calorimetry (PDSC). Structural trends elucidated from both methods that improved oxidative stability included decreasing the number of doubl...
Ultrahigh impedance method to assess electrostatic accelerator performance
NASA Astrophysics Data System (ADS)
Lobanov, Nikolai R.; Linardakis, Peter; Tsifakis, Dimitrios
2015-06-01
This paper describes an investigation of problem-solving procedures to troubleshoot electrostatic accelerators. A novel technique to diagnose issues with high-voltage components is described. The main application of this technique is noninvasive testing of electrostatic accelerator high-voltage grading systems, measuring insulation resistance, or determining the volume and surface resistivity of insulation materials used in column posts and acceleration tubes. In addition, this technique allows verification of the continuity of the resistive divider assembly as a complete circuit, revealing if an electrical path exists between equipotential rings, resistors, tube electrodes, and column post-to-tube conductors. It is capable of identifying and locating a "microbreak" in a resistor and the experimental validation of the transfer function of the high-impedance energy control element. A simple and practical fault-finding procedure has been developed based on fundamental principles. The experimental distributions of relative resistance deviations (ΔR/R) for both accelerating tubes and posts were collected during five scheduled accelerator maintenance tank openings during 2013 and 2014. Components with measured ΔR/R > ±2.5% were considered faulty and put through a detailed examination, with faults categorized. In total, thirty-four unique fault categories were identified, and most would not be identifiable without the new technique described. The most common failure mode was permanent and irreversible insulator current leakage that developed after being exposed to the ambient environment. As a result of efficient in situ troubleshooting and fault-elimination techniques, the maximum values of |ΔR/R| are kept below 2.5% at the conclusion of maintenance procedures. The acceptance margin could be narrowed even further, by a factor of 2.5, by increasing the test voltage from 40 V up to 100 V. Based on experience over the last two years, resistor and insulator
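The screening step described above, flagging components whose relative resistance deviation exceeds the 2.5% acceptance margin, reduces to a simple comparison; the nominal value and the measured chain below are invented for illustration, not data from the paper:

```python
import numpy as np

def flag_faulty(R_measured, R_nominal, threshold=0.025):
    # Relative deviation dR/R of each component from nominal; flag
    # everything whose magnitude exceeds the acceptance threshold.
    dev = (np.asarray(R_measured) - R_nominal) / R_nominal
    return np.flatnonzero(np.abs(dev) > threshold).tolist(), dev

R_nom = 400.0  # nominal per-gap grading resistance, MOhm (assumed)
chain = [400.2, 399.1, 400.9, 385.0, 400.5, 412.3]  # one low, one high outlier
faulty, dev = flag_faulty(chain, R_nom)
```

Here the components at indices 3 and 5 (deviations of about -3.8% and +3.1%) would be pulled for detailed examination.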
Enablement of DSA for VIA layer with a metal SIT process flow
NASA Astrophysics Data System (ADS)
Schneider, L.; Farys, V.; Serret, E.; Fenouillet-Beranger, C.
2016-03-01
For technologies beyond 10 nm, 1D gridded designs are commonly used. This practice is common particularly in the case of Self-Aligned Double Patterning (SADP) metal processes, where Vertical Interconnect Access (VIA) connecting metal line layers are placed along a discrete grid, thus limiting the number of VIA pitches. In order to create a VIA layer, graphoepitaxy Directed Self-Assembly (DSA) is the prevailing candidate. The technique relies on the creation of a confinement guide using optical microlithography methods, in which the block copolymer (BCP) is allowed to separate into distinct regions. The resulting patterns are etched to obtain an ordered VIA layer. Guiding pattern variations directly impact the placement of the target, and one must ensure that this does not interfere with circuit performance. To prevent flaws, design rules are set. In this study, for the first time, an original framework is presented to find a consistent set of design rules for enabling the use of DSA in a production flow using Self-Aligned Double Patterning (SADP) for metal line layer printing. In order to meet electrical requirements, the intersecting area between VIA and metal lines must be sufficient to ensure correct electrical connection. The intersecting area is driven by both VIA placement variability and metal line printing variability. Based on multiple process assumptions for a 10 nm node, the Monte Carlo method is used to set a maximum threshold for VIA placement error. In addition, to determine a consistent set of design rules, representative test structures have been created and tested with our in-house placement estimator: the topological skeleton of the guiding pattern [1]. Using this technique, structures with deviation above the maximum tolerated threshold are considered infeasible and the appropriate set of design rules is extracted. In a final step, the design rules are verified with further test structures that are randomly generated using
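A Monte Carlo estimate of the kind used above, the probability that the VIA/metal-line intersecting area stays above a minimum under Gaussian VIA placement error, can be sketched in a few lines. All dimensions, the error sigma, and the 80% overlap criterion are illustrative assumptions, not the paper's 10 nm-node values:

```python
import numpy as np

rng = np.random.default_rng(42)

line_w = 16.0                        # metal line width, nm (assumed)
via_w = 18.0                         # square VIA side, nm (assumed)
sigma = 2.0                          # placement-error sigma across the line, nm
min_area = 0.8 * via_w * line_w      # required intersecting area, nm^2

dy = sigma * rng.standard_normal(200_000)          # across-line placement error
lo = np.maximum(dy - via_w / 2.0, -line_w / 2.0)   # overlap interval bounds
hi = np.minimum(dy + via_w / 2.0, line_w / 2.0)
area = via_w * np.clip(hi - lo, 0.0, None)         # full VIA width along the line
yield_est = float(np.mean(area >= min_area))
```

Sweeping `sigma` (or the guide-pattern deviation feeding it) until `yield_est` drops below the required parametric yield gives the maximum tolerated placement-error threshold.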
NASA Astrophysics Data System (ADS)
Sidorin, Anatoly
2010-01-01
In linear accelerators, particles are accelerated by either electrostatic fields or oscillating radio-frequency (RF) fields. Accordingly, linear accelerators are divided into three large groups: electrostatic, induction, and RF accelerators. An overview of the different types of accelerators is given. The stability of longitudinal and transverse motion in RF linear accelerators is briefly discussed, and the methods of beam focusing in linacs are described.
Implementation of templated DSA for via layer patterning at the 7nm node
NASA Astrophysics Data System (ADS)
Gronheid, Roel; Doise, Jan; Bekaert, Joost; Chan, Boon Teik; Karageorgos, Ioannis; Ryckaert, Julien; Vandenberghe, Geert; Cao, Yi; Lin, Guanyang; Somervell, Mark; Fenger, Germain; Fuchimoto, Daisuke
2015-03-01
In recent years, major advancements have been made in the directed self-assembly (DSA) of block copolymers (BCP). Insertion of DSA into IC fabrication is seriously considered for the 7nm node, where DSA technology could reduce the costs of double patterning and limit the number of masks required per layer. At imec, multiple approaches for inserting DSA at the 7nm node are under consideration. One of the most straightforward approaches would be via patterning through templated DSA (grapho-epitaxy), since hole patterns are readily accessible through templated hole patterning of cylindrical-phase BCP materials. Here, the pre-pattern template is first patterned into a spin-on hardmask stack. After optimizing the surface properties of the template, the desired hole patterns can be obtained by the BCP DSA process. For this approach to be implemented for 7nm node via patterning, not only the appropriate process flow but also appropriate metrology (including pattern placement accuracy) and DSA-aware mask decomposition are required. In this paper, the imec approach for 7nm node via patterning is discussed.
NASA Astrophysics Data System (ADS)
Zank, G. P.; Hunana, P.; Mostafavi, P.; le Roux, J. A.; Li, Gang; Webb, G. M.; Khabarova, O.
2015-09-01
As a consequence of the evolutionary conditions [28; 29], shock waves can generate high levels of downstream vortical turbulence. Simulations [32-34] and observations [30; 31] support the idea that downstream magnetic islands (also called plasmoids or flux ropes) result from the interaction of shocks with upstream turbulence. Zank et al. [18] speculated that a combination of diffusive shock acceleration (DSA) and downstream reconnection-related effects associated with the dynamical evolution of a “sea of magnetic islands” would result in the energization of charged particles. Here, we utilize the transport theory [18; 19] for charged particles propagating diffusively in a turbulent region filled with contracting and reconnecting plasmoids and small-scale current sheets to investigate a combined DSA and downstream multiple-magnetic-island charged particle acceleration mechanism. We consider separately the effects of the anti-reconnection electric field that is a consequence of magnetic island merging [17], and of magnetic island contraction [14]. For the merging-plasmoid reconnection-induced electric field only, we find (i) that the particle spectrum is a power law in particle speed, flatter than that derived from conventional DSA theory, and (ii) that the solution is constant downstream of the shock. For downstream plasmoid contraction only, we find that (i) the accelerated particle spectrum is a power law in particle speed, flatter than that derived from conventional DSA theory; (ii) for a given energy, the particle intensity peaks downstream of the shock, and the peak location occurs further downstream with increasing particle energy; and (iii) the particle intensity amplification for a particular particle energy, f(x, c/c0)/f(0, c/c0), is not 1, as predicted by DSA theory, but increases with increasing particle energy. These predictions can be tested against observations of electrons and ions accelerated at interplanetary shocks and the heliospheric
Comparative imaging study in ultrasound, MRI, CT, and DSA using a multimodality renal artery phantom
King, Deirdre M.; Fagan, Andrew J.; Moran, Carmel M.; Browne, Jacinta E.
2011-02-15
Purpose: A range of anatomically realistic multimodality renal artery phantoms consisting of vessels with varying degrees of stenosis was developed and evaluated using four imaging techniques currently used to detect renal artery stenosis (RAS). The spatial resolution required to visualize vascular geometry and the velocity detection performance required to adequately characterize blood flow in patients suffering from RAS are currently ill-defined, with the result that no one imaging modality has emerged as a gold standard technique for screening for this disease. Methods: The phantoms, which contained a range of stenosis values (0%, 30%, 50%, 70%, and 85%), were designed for use with ultrasound, magnetic resonance imaging, x-ray computed tomography, and x-ray digital subtraction angiography. The construction materials used were optimized with respect to their ultrasonic speed of sound and attenuation coefficient, MR relaxometry (T1, T2) properties, and Hounsfield number/x-ray attenuation coefficient, with a design capable of tolerating high-pressure pulsatile flow. Fiducial targets, incorporated into the phantoms to allow for registration of images among modalities, were chosen to minimize geometric distortions. Results: High quality distortion-free images of the phantoms with good contrast between vessel lumen, fiducial markers, and background tissue to visualize all stenoses were obtained with each modality. Quantitative assessments of the grade of stenosis revealed significant discrepancies between modalities, with each underestimating the stenosis severity for the higher-stenosed phantoms (70% and 85%) by up to 14%, with the greatest discrepancy attributable to DSA. Conclusions: The design and construction of a range of anatomically realistic renal artery phantoms containing varying degrees of stenosis is described. Images obtained using the main four diagnostic techniques used to detect RAS were free from artifacts and exhibited adequate contrast
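The stenosis grading underlying these comparisons is a simple diameter ratio. A small sketch (hypothetical diameters, chosen only to mirror a discrepancy of the magnitude reported above; not the study's measurement procedure):

```python
def percent_stenosis(d_stenosis, d_reference):
    """Diameter-based percent stenosis, as commonly quantified in
    angiographic measurements: 100 * (1 - d_min / d_ref)."""
    if d_reference <= 0:
        raise ValueError("reference diameter must be positive")
    return 100.0 * (1.0 - d_stenosis / d_reference)

# A modality that blurs the lumen edge overestimates the minimum
# diameter and thus underestimates the grade, e.g. a true 85% stenosis
# read as 71% (roughly a 14-point underestimate):
true_grade = percent_stenosis(0.15, 1.0)   # 85.0
measured = percent_stenosis(0.29, 1.0)     # 71.0
print(true_grade - measured)
```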
Electron acceleration with advanced injection methods at the ASTRA laser
NASA Astrophysics Data System (ADS)
Poder, Kristjan; Carreira-Lopes, Nelson; Wood, Jonathan; Cole, Jason; Dangor, Bucker; Foster, Peta; Gopal, Ram; Kamperidis, Christos; Kononenko, Olena; Mangles, Stuart; Olgun, Halil; Palmer, Charlotte; Symes, Daniel; Pattathil, Rajeev; Najmudin, Zulfikar; Imperial College London Team; Central Laser Facility Collaboration; Tata Institute of Fundamental Research Collaboration; DESY Collaboration
2015-11-01
Recent electron acceleration results from the ASTRA laser facility are presented. Experiments were performed using both the 40 TW ASTRA and the 350 TW ASTRA-Gemini laser. Fundamental electron beam properties relating to its quality were investigated both experimentally and with PIC simulations. For increased control over such parameters, various injection mechanisms such as self-injection and ionization injection were employed. Particular interest is given to the dynamics of ionization injected electrons in strongly driven wakes.
GPU-accelerated discontinuous Galerkin methods on hybrid meshes
NASA Astrophysics Data System (ADS)
Chan, Jesse; Wang, Zheng; Modave, Axel; Remacle, Jean-Francois; Warburton, T.
2016-08-01
We present a time-explicit discontinuous Galerkin (DG) solver for the time-domain acoustic wave equation on hybrid meshes containing vertex-mapped hexahedral, wedge, pyramidal and tetrahedral elements. Discretely energy-stable formulations are presented for both Gauss-Legendre and Gauss-Legendre-Lobatto (Spectral Element) nodal bases for the hexahedron. Stable timestep restrictions for hybrid meshes are derived by bounding the spectral radius of the DG operator using order-dependent constants in trace and Markov inequalities. Computational efficiency is achieved under a combination of element-specific kernels (including new quadrature-free operators for the pyramid), multi-rate timestepping, and acceleration using Graphics Processing Units.
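The timestep restriction above hinges on bounding the spectral radius of the DG operator. As a generic illustration (plain power iteration on a toy symmetric operator, not the paper's order-dependent trace/Markov bounds), the spectral radius of any matrix-free operator can be estimated and used to scale the timestep:

```python
import numpy as np

def spectral_radius(apply_op, n, iters=200, seed=0):
    """Power-iteration estimate of the spectral radius of a linear
    operator, given only matrix-vector products apply_op(v)."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(n)
    v /= np.linalg.norm(v)
    rho = 0.0
    for _ in range(iters):
        w = apply_op(v)
        rho = np.linalg.norm(w)
        if rho == 0.0:
            return 0.0
        v = w / rho
    return rho

# Toy stand-in for a discrete spatial operator: a symmetric 1D Laplacian.
n = 100
h = 1.0 / (n + 1)
main, off = 2.0 * np.ones(n), -1.0 * np.ones(n - 1)
A = (np.diag(main) + np.diag(off, 1) + np.diag(off, -1)) / h**2

rho = spectral_radius(lambda v: A @ v, n)
dt = 1.0 / rho  # explicit schemes require dt <= C / rho, C scheme-dependent
print(f"estimated spectral radius = {rho:.3e}, dt scale ~ {dt:.3e}")
```

In a real DG code the operator is applied matrix-free, which is exactly why a bound (rather than an explicit eigendecomposition) is attractive.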
METHODS AND MEANS FOR OBTAINING HYDROMAGNETICALLY ACCELERATED PLASMA JET
Marshall, J. Jr.
1960-11-22
A hydromagnetic plasma accelerator is described comprising, in combination, a center electrode, an outer electrode coaxial with the center electrode and defining an annular vacuum chamber therebetween, insulating closure means between the electrodes at one end, means for introducing an ionizable gas into the annular vacuum chamber near one end thereof, and means including a power supply for applying a voltage between the electrodes at the end having the closure means, the open ends of the electrodes being adapted for connection to an evacuated utilization chamber.
Impact of BCP asymmetry on DSA patterning performance
NASA Astrophysics Data System (ADS)
Williamson, Lance; Kim, JiHoon; Cao, Yi; Lin, Guanyang; Gronheid, Roel; Nealey, Paul F.
2015-03-01
Directed self-assembly (DSA) of lamellae-forming block copolymers (BCP) via chemo-epitaxy is a potential lithographic solution to achieve patterns of dense features. Progress to date demonstrates encouraging results, but in order to better understand the role of all parameters, each factor needs to be analyzed systematically. Small changes in the volume fraction of a lamellae-forming BCP have been shown to change the connectivity of unguided domains. When an asymmetric lamellae-forming BCP is assembled on chemical patterns generated with the LiNe flow, the patterning performance and defect modes change depending on whether the majority or minority volume-fraction phase is guided by the chemical pattern. Asymmetric BCP formulations were generated by blending homopolymer with a symmetric BCP. The patterning performance of the BCP formulations was assessed for different pattern pitches, guide stripe widths, backfill materials, and annealing times. Optical defect inspection and SEM review are used to track the majority defect mode for each formulation. Formulation-dependent trends in defect modes show the importance of optimizing the BCP formulation in order to minimize defectivity.
NASA Astrophysics Data System (ADS)
le Roux, J. A.; Zank, G. P.; Webb, G. M.; Khabarova, O. V.
2016-08-01
Computational and observational evidence is accruing that heliospheric shocks, as emitters of vorticity, can produce downstream magnetic flux ropes and filaments. This led Zank et al. to investigate a new paradigm whereby energetic particle acceleration near shocks is a combination of diffusive shock acceleration (DSA) with downstream acceleration by many small-scale contracting and reconnecting (merging) flux ropes. Using a model where flux-rope acceleration involves a first-order Fermi mechanism due to the mean compression of numerous contracting flux ropes, Zank et al. provide theoretical support for observations that power-law spectra of energetic particles downstream of heliospheric shocks can be harder than predicted by DSA theory and that energetic particle intensities should peak behind shocks instead of at shocks as predicted by DSA theory. In this paper, a more extended formalism of kinetic transport theory developed by le Roux et al. is used to further explore this paradigm. We describe how second-order Fermi acceleration, related to the variance in the electromagnetic fields produced by downstream small-scale flux-rope dynamics, modifies the standard DSA model. The results show that (i) this approach can qualitatively reproduce observations of particle intensities peaking behind the shock, thus providing further support for the new paradigm, and (ii) stochastic acceleration by compressible flux ropes tends to be more efficient than incompressible flux ropes behind shocks in modifying the DSA spectrum of energetic particles.
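For reference, the conventional DSA prediction that both of these studies compare against is the standard steady-state power-law spectrum (a textbook DSA result, not derived in this record):

```latex
f(c) \propto c^{-q}, \qquad q = \frac{3r}{r-1}, \qquad r = \frac{u_1}{u_2} \;\, (\text{shock compression ratio}),
```

so a strong shock (r = 4) gives q = 4; spectra "harder" (flatter) than DSA correspond to q < 3r/(r-1), which is what the downstream flux-rope mechanisms described above produce.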
High chi block copolymer DSA to improve pattern quality for FinFET device fabrication
NASA Astrophysics Data System (ADS)
Tsai, HsinYu; Miyazoe, Hiroyuki; Vora, Ankit; Magbitang, Teddie; Arellano, Noel; Liu, Chi-Chun; Maher, Michael J.; Durand, William J.; Dawes, Simon J.; Bucchignano, James J.; Gignac, Lynne; Sanders, Daniel P.; Joseph, Eric A.; Colburn, Matthew E.; Willson, C. Grant; Ellison, Christopher J.; Guillorn, Michael A.
2016-03-01
Directed self-assembly (DSA) with block copolymers (BCP) is a promising lithography extension technique to scale below 30nm pitch with 193i lithography. Continued scaling toward 20nm pitch or below will require material systems beyond PS-b-PMMA. Pattern quality of DSA features, including line edge roughness (LER), line width roughness (LWR), size uniformity, and placement, is key to DSA manufacturability. In this work, we demonstrate finFET devices fabricated with DSA-patterned fins and compare several BCP systems for continued pitch scaling. Organic-organic high-chi BCPs at 24nm and 21nm pitches show improved low- to mid-frequency LER/LWR after pattern transfer.
[2011 Shanghai customer satisfaction report of DSA/X-ray equipment's after-service].
Li, Bin; Qian, Jianguo; Cao, Shaoping; Zheng, Yunxin; Xu, Zitian; Wang, Lijun
2012-11-01
To improve manufacturers' medical equipment after-sale service, the fifth Shanghai-zone customer satisfaction survey was launched at the end of 2011, with DSA/X-ray equipment set up as an independent category for the first time. The survey shows that the customer satisfaction index (CSI) of DSA/X-ray equipment is higher than last year's, that the satisfaction scores for preventive maintenance and service contracts are lower than those of other service items, and that the CSI of local brands is lower than that of imported brands.
Clean Slate Environmental Remediation DSA for 10 CFR 830 Compliance
James L. Traynor, Stephen L. Nicolosi, Michael L. Space, Louis F. Restrepo
2006-08-01
Clean Slate Sites II and III are scheduled for environmental remediation (ER) to remove elevated levels of radionuclides in soil. These sites are contaminated with the legacy remains of non-nuclear-yield nuclear weapons experiments at the Nevada Test Site that involved high explosives, fissile materials, and related materials. The sites may also hold unexploded ordnance (UXO) from military training activities in the area over the intervening years. Regulation 10 CFR 830 (Ref. 1) identifies DOE-STD-1120-98 (Ref. 2) and 29 CFR 1910.120 (Ref. 3) as the safe-harbor methodologies for performing these remediation operations. Of these, DOE-STD-1120-98 has been superseded by DOE-STD-1120-2005 (Ref. 4). The project adopted DOE-STD-1120-2005, which includes an approach for ER projects, in combination with 29 CFR 1910.120, as the basis documents for preparing the documented safety analysis (DSA). To securely implement the safe-harbor methodologies, we applied DOE-STD-1027-92 (Ref. 5) and DOE-STD-3009-94 (Ref. 6), as needed, to develop a robust hazard classification and hazards analysis that addresses non-standard hazards such as radionuclides and UXO. The hazard analyses provided the basis for identifying Technical Safety Requirements (TSR) level controls. The DOE-STD-1186-2004 (Ref. 7) methodology showed that some controls warranted elevation to Specific Administrative Control (SAC) status. In addition to the Evaluation Guideline (EG) of DOE-STD-3009-94, we also applied the DOE G 420.1 (Ref. 8) annual radiological dose siting criterion to define a controlled area around the operation to protect the maximally exposed offsite individual (MOI).
Computer control of large accelerators design concepts and methods
Beck, F.; Gormley, M.
1984-05-01
Unlike most of the specialities treated in this volume, control system design is still an art, not a science. These lectures are an attempt to produce a primer for prospective practitioners of this art. A large modern accelerator requires a comprehensive control system for commissioning, machine studies, and day-to-day operation. Faced with the requirement to design a control system for such a machine, the control system architect has a bewildering array of technical devices and techniques at his disposal, and it is our aim in the following chapters to lead him through the characteristics of the problems he will have to face and the practical alternatives available for solving them. We emphasize good system architecture using commercially available hardware and software components, but in addition we discuss the actual control strategies to be implemented, since it is at the point of deciding what facilities shall be available that the complexity of the control system and its cost are implicitly decided.
ACCELERATED METHOD FOR THE DETERMINATION OF E. COLI
A method was developed for preparing indicator paper slips for the determination of E. coli by the ’Bactostrip’ method (Foerg) when appraising the...quality of milk. The essence of the method is the exposure of E. coli on a strip of paper impregnated with a nutrient medium containing an indicator (triphenyltetrazolium chloride or other). (Author)
Accelerated in vitro release testing methods for extended release parenteral dosage forms
Shen, Jie; Burgess, Diane J.
2012-01-01
Objectives This review highlights current methods and strategies for accelerated in vitro drug release testing of extended release parenteral dosage forms such as polymeric microparticulate systems, lipid microparticulate systems, in situ depot-forming systems, and implants. Key findings Extended release parenteral dosage forms are typically designed to maintain the effective drug concentration over periods of weeks, months or even years. Consequently, “real-time” in vitro release tests for these dosage forms are often run over a long time period. Accelerated in vitro release methods can provide rapid evaluation and therefore are desirable for quality control purposes. To this end, different accelerated in vitro release methods using United States Pharmacopoeia (USP) apparatus have been developed. Different mechanisms of accelerating drug release from extended release parenteral dosage forms, along with the accelerated in vitro release testing methods currently employed are discussed. Conclusions Accelerated in vitro release testing methods with good discriminatory ability are critical for quality control of extended release parenteral products. Methods that can be used in the development of in vitro-in vivo correlation (IVIVC) are desirable, however for complex parenteral products this may not always be achievable. PMID:22686344
An Accelerated Linearized Alternating Direction Method of Multipliers
2014-02-01
The idea of analyzing (1.8) in order to solve (1.1) is essentially the augmented Lagrangian method (ALM) of Hestenes [26] and Powell [44] (it is...originally called the method of multipliers in [26, 44]; see also the textbooks, e.g., [5, 41, 6]). The ALM is a special case of the Douglas-Rachford splitting method [19, 16, 32], which is also an instance of the proximal point algorithm [17, 46]. The iteration complexity of an inexact version of ALM
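For context, the classical ALM referenced in this fragment can be sketched in a few lines for an equality-constrained quadratic (illustrative problem and penalty parameter; not taken from the cited report):

```python
import numpy as np

def alm(A, b, rho=10.0, iters=50):
    """Minimal augmented Lagrangian method (method of multipliers) for
    min 0.5*||x||^2  subject to  A x = b.
    Each x-update minimizes 0.5||x||^2 + y^T(Ax - b) + (rho/2)||Ax - b||^2
    exactly (a linear solve); then the multiplier y is updated."""
    m, n = A.shape
    x, y = np.zeros(n), np.zeros(m)
    H = np.eye(n) + rho * A.T @ A   # Hessian of the augmented Lagrangian in x
    for _ in range(iters):
        x = np.linalg.solve(H, A.T @ (rho * b - y))
        y = y + rho * (A @ x - b)   # dual ascent on the constraint residual
    return x

A = np.array([[1.0, 1.0]])
b = np.array([2.0])
x = alm(A, b)
print(x)  # minimum-norm solution of x1 + x2 = 2 is (1, 1)
```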
A comparison of acceleration methods for solving the neutron transport k-eigenvalue problem
Willert, Jeffrey; Park, H.; Knoll, D.A.
2014-10-01
Over the past several years a number of papers have been written describing modern techniques for numerically computing the dominant eigenvalue of the neutron transport criticality problem. These methods fall into two distinct categories. The first category of methods rewrite the multi-group k-eigenvalue problem as a nonlinear system of equations and solve the resulting system using either a Jacobian-Free Newton–Krylov (JFNK) method or Nonlinear Krylov Acceleration (NKA), a variant of Anderson Acceleration. These methods are generally successful in significantly reducing the number of transport sweeps required to compute the dominant eigenvalue. The second category of methods utilize Moment-Based Acceleration (or High-Order/Low-Order (HOLO) Acceleration). These methods solve a sequence of modified diffusion eigenvalue problems whose solutions converge to the solution of the original transport eigenvalue problem. This second class of methods is, in our experience, always superior to the first, as most of the computational work is eliminated by the acceleration from the LO diffusion system. In this paper, we review each of these methods. Our computational results support our claim that the choice of which nonlinear solver to use, JFNK or NKA, should be secondary. The primary computational savings result from the implementation of a HOLO algorithm. We display computational results for a series of challenging multi-dimensional test problems.
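The unaccelerated baseline that both method categories improve upon is plain power iteration on the k-eigenvalue problem. A minimal dense-matrix sketch (toy operators standing in for the transport and fission operators; illustrative only, not the authors' code):

```python
import numpy as np

def k_power_iteration(L, F, tol=1e-10, max_iters=500):
    """Unaccelerated power iteration for the k-eigenvalue problem
    L phi = (1/k) F phi -- the baseline that JFNK/NKA and HOLO
    methods are designed to accelerate."""
    n = L.shape[0]
    phi = np.ones(n)
    k = 1.0
    for _ in range(max_iters):
        src = F @ phi / k                  # fission source with current k
        phi_new = np.linalg.solve(L, src)  # stand-in for a transport sweep
        k_new = k * (F @ phi_new).sum() / (F @ phi).sum()
        if abs(k_new - k) < tol:
            return k_new, phi_new
        phi, k = phi_new, k_new
    return k, phi

# Tiny two-group-style toy operators (illustrative numbers only).
L = np.array([[1.0, 0.0], [-0.5, 1.2]])
F = np.array([[0.4, 0.8], [0.0, 0.0]])
k, phi = k_power_iteration(L, F)
print(f"k-effective ~ {k:.6f}")
```

Each iteration costs one solve with L; the acceleration schemes in the paper aim precisely to cut the number of such sweeps.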
Accelerating molecular property calculations with nonorthonormal Krylov space methods
NASA Astrophysics Data System (ADS)
Furche, Filipp; Krull, Brandon T.; Nguyen, Brian D.; Kwon, Jake
2016-05-01
We formulate Krylov space methods for large eigenvalue problems and linear equation systems that take advantage of decreasing residual norms to reduce the cost of matrix-vector multiplication. The residuals are used as subspace basis without prior orthonormalization, which leads to generalized eigenvalue problems or linear equation systems on the Krylov space. These nonorthonormal Krylov space (nKs) algorithms are favorable for large matrices with irregular sparsity patterns whose elements are computed on the fly, because fewer operations are necessary as the residual norm decreases as compared to the conventional method, while errors in the desired eigenpairs and solution vectors remain small. We consider real symmetric and symplectic eigenvalue problems as well as linear equation systems and Sylvester equations as they appear in configuration interaction and response theory. The nKs method can be implemented in existing electronic structure codes with minor modifications and yields speed-ups of 1.2-1.8 in typical time-dependent Hartree-Fock and density functional applications without accuracy loss. The algorithm can compute entire linear subspaces simultaneously which benefits electronic spectra and force constant calculations requiring many eigenpairs or solution vectors. The nKs approach is related to difference density methods in electronic ground state calculations and particularly efficient for integral direct computations of exchange-type contractions. By combination with resolution-of-the-identity methods for Coulomb contractions, three- to fivefold speed-ups of hybrid time-dependent density functional excited state and response calculations are achieved.
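The core nKs idea, using raw residuals as the subspace basis so that the projected problem becomes a small *generalized* eigenproblem, can be sketched as follows (a simplified dense-matrix illustration without the preconditioning an electronic-structure code would use; `nks_lowest_eig` is a hypothetical name):

```python
import numpy as np

def nks_lowest_eig(A, v0, iters=20, tol=1e-8):
    """Sketch of a nonorthonormal Krylov-space (nKs) solve for the lowest
    eigenpair of a symmetric matrix A: residual vectors are used directly
    as the subspace basis (normalized but NOT orthonormalized), so the
    projected problem is the generalized eigenproblem
    (S^T A S) y = theta (S^T S) y with a non-identity overlap S^T S."""
    basis = [v0 / np.linalg.norm(v0)]
    theta, x = 0.0, basis[0]
    for _ in range(iters):
        S = np.column_stack(basis)
        H = S.T @ A @ S            # projected operator
        M = S.T @ S                # overlap (metric) of the raw basis
        vals, Y = np.linalg.eig(np.linalg.solve(M, H))
        i = np.argmin(vals.real)
        theta = vals[i].real
        x = S @ Y[:, i].real
        x /= np.linalg.norm(x)
        r = A @ x - theta * x      # the residual extends the subspace
        if np.linalg.norm(r) < tol or len(basis) >= A.shape[0]:
            break
        basis.append(r / np.linalg.norm(r))
    return theta, x

# Lowest eigenvalue of a small symmetric test matrix:
A = np.diag([1.0, 2.0, 3.0, 4.0, 5.0])
theta, x = nks_lowest_eig(A, np.ones(5))
print(theta)
```

Skipping orthonormalization is what lets the matrix-vector work shrink as residual norms decrease, at the price of the extra overlap matrix M in the small projected problem.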
Sequential electrochemical treatment of dairy wastewater using aluminum and DSA-type anodes.
Borbón, Brenda; Oropeza-Guzman, Mercedes Teresita; Brillas, Enric; Sirés, Ignasi
2014-01-01
Dairy wastewater is characterized by a high content of hardly biodegradable dissolved, colloidal, and suspended organic matter. This work first investigates the performance of two individual electrochemical treatments, namely electrocoagulation (EC) and electro-oxidation (EO), in order to finally assess the mineralization ability of a sequential EC/EO process. EC with an Al anode was employed as a primary pretreatment for the conditioning of 800 mL of wastewater. A complete reduction of turbidity, as well as 90 and 81% removal of chemical oxygen demand (COD) and total organic carbon (TOC), respectively, were achieved after 120 min of EC at 9.09 mA cm(-2). For EO, two kinds of dimensionally stable anode (DSA) electrodes (Ti/IrO₂-Ta₂O₅ and Ti/IrO₂-SnO₂-Sb₂O₅) were prepared by the Pechini method, obtaining homogeneous coatings with uniform composition and high roughness. The (·)OH radicals formed at the DSA surface from H₂O oxidation were not detected by electron spin resonance; however, their indirect determination by means of H₂O₂ measurements revealed that Ti/IrO₂-SnO₂-Sb₂O₅ is able to produce partially physisorbed radicals. Since the characterization of the wastewater revealed the presence of indole derivatives, preliminary bulk electrolyses were done in ultrapure water containing 1 mM indole in sulfate and/or chloride media. The performance of EO with the Ti/IrO₂-Ta₂O₅ anode was evaluated from the TOC removal and the UV/Vis absorbance decay. The mineralization was very poor in 0.05 M Na₂SO₄, whereas it increased considerably at a greater Cl(-) content, meaning that the oxidation mediated by electrogenerated species such as Cl₂, HClO, and/or ClO(-) competes with and even predominates over the (·)OH-mediated oxidation. The EO treatment of EC-pretreated dairy wastewater yielded an overall 98% TOC removal, with TOC decreasing from 1,062 to <30 mg L(-1).
Constraint methods that accelerate free-energy simulations of biomolecules
MacCallum, Justin L.; Dill, Ken A.
2015-01-01
Atomistic molecular dynamics simulations of biomolecules are critical for generating narratives about biological mechanisms. The power of atomistic simulations is that these are physics-based methods that satisfy Boltzmann’s law, so they can be used to compute populations, dynamics, and mechanisms. But physical simulations are computationally intensive and do not scale well to the sizes of many important biomolecules. One way to speed up physical simulations is by coarse-graining the potential function. Another way is to harness structural knowledge, often by imposing spring-like restraints. But harnessing external knowledge in physical simulations is problematic because knowledge, data, or hunches have errors, noise, and combinatoric uncertainties. Here, we review recent principled methods for imposing restraints to speed up physics-based molecular simulations that promise to scale to larger biomolecules and motions. PMID:26723628
GPU Acceleration of Particle-In-Cell Methods
NASA Astrophysics Data System (ADS)
Cowan, Benjamin; Cary, John; Sides, Scott
2016-10-01
Graphics processing units (GPUs) have become key components in many supercomputing systems, as they can provide more computations relative to their cost and power consumption than conventional processors. However, to take full advantage of this capability, they require a strict programming model which involves single-instruction multiple-data execution as well as significant constraints on memory accesses. To bring the full power of GPUs to bear on plasma physics problems, we must adapt the computational methods to this new programming model. We have developed a GPU implementation of the particle-in-cell (PIC) method, one of the mainstays of plasma physics simulation. This framework is highly general and enables advanced PIC features such as high order particles and absorbing boundary conditions. The main elements of the PIC loop, including field interpolation and particle deposition, are designed to optimize memory access. We describe the performance of these algorithms and discuss some of the methods used. Work supported by DARPA Contract No. W31P4Q-16-C-0009.
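The particle deposition step mentioned above can be sketched in miniature. This is not the GPU kernel described in the abstract; it is a minimal serial 1D cloud-in-cell deposition, and the function name and periodic-grid setup are illustrative assumptions:

```python
import numpy as np

def deposit_charge(positions, weights, grid_n, dx):
    """Linear (cloud-in-cell) charge deposition onto a periodic 1D grid.

    Each particle splits its weight between its two nearest grid points
    in proportion to proximity; on a GPU this scatter is the step whose
    memory accesses must be carefully organized.
    """
    rho = np.zeros(grid_n)
    for x, w in zip(positions, weights):
        s = x / dx
        i = int(np.floor(s))
        frac = s - i
        rho[i % grid_n] += w * (1.0 - frac)
        rho[(i + 1) % grid_n] += w * frac
    return rho

# A particle midway between nodes 0 and 1 splits its weight evenly.
rho = deposit_charge([0.5], [1.0], grid_n=4, dx=1.0)
```

Field interpolation (the gather) is the transpose of this scatter; GPU implementations typically reorganize both (e.g. by sorting particles by cell) to avoid write conflicts.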
GPU acceleration of particle-in-cell methods
NASA Astrophysics Data System (ADS)
Cowan, Benjamin; Cary, John; Meiser, Dominic
2015-11-01
Graphics processing units (GPUs) have become key components in many supercomputing systems, as they can provide more computations relative to their cost and power consumption than conventional processors. However, to take full advantage of this capability, they require a strict programming model which involves single-instruction multiple-data execution as well as significant constraints on memory accesses. To bring the full power of GPUs to bear on plasma physics problems, we must adapt the computational methods to this new programming model. We have developed a GPU implementation of the particle-in-cell (PIC) method, one of the mainstays of plasma physics simulation. This framework is highly general and enables advanced PIC features such as high order particles and absorbing boundary conditions. The main elements of the PIC loop, including field interpolation and particle deposition, are designed to optimize memory access. We describe the performance of these algorithms and discuss some of the methods used. Work supported by DARPA contract W31P4Q-15-C-0061 (SBIR).
Time Acceleration Methods for Advection on the Cubed Sphere
Archibald, Richard K; Evans, Katherine J; White III, James B; Drake, John B
2009-01-01
Climate simulation will not grow to the ultrascale without new algorithms to overcome the scalability barriers blocking existing implementations. Until recently, climate simulations concentrated on the question of whether the climate is changing. The emphasis is now shifting to impact assessments, mitigation and adaptation strategies, and regional details. Such studies will require significant increases in spatial resolution and model complexity while maintaining adequate throughput. The barrier to progress is the resulting decrease in time step without increasing single-thread performance. In this paper we demonstrate how to overcome this time barrier for the first standard test defined for the shallow-water equations on a sphere. This paper explains how combining a multiwavelet discontinuous Galerkin method with exact linear part time-evolution schemes can overcome the time barrier for advection equations on a sphere. The discontinuous Galerkin method is a high-order method that is conservative, flexible, and scalable. The addition of multiwavelets to discontinuous Galerkin provides a hierarchical scale structure that can be exploited to improve computational efficiency in both the spatial and temporal dimensions. Exact linear part time-evolution schemes are explicit schemes that remain stable for implicit-size time steps.
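The "exact linear part" idea can be illustrated in its simplest setting, constant-coefficient periodic advection. This is a schematic Fourier-space version, not the multiwavelet discontinuous Galerkin scheme of the paper; the function name and setup are assumptions:

```python
import numpy as np

def advect_exact_linear(u0, c, L, t):
    """Advance the periodic advection equation u_t + c u_x = 0 by time t
    using the exact evolution of the linear part in Fourier space.

    Each Fourier mode is multiplied by exp(-i c k t), so the step is
    exact and stable for any t: there is no CFL time-step restriction.
    """
    n = len(u0)
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
    return np.real(np.fft.ifft(np.fft.fft(u0) * np.exp(-1j * c * k * t)))

x = np.linspace(0, 1, 64, endpoint=False)
u0 = np.sin(2 * np.pi * x)
u1 = advect_exact_linear(u0, c=1.0, L=1.0, t=0.25)  # quarter-period shift
```

The exact solution is u(x, t) = u0(x - c t), so u1 reproduces the shifted initial profile to machine precision, regardless of the time-step size.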
Multifunctional hardmask neutral layer for directed self-assembly (DSA) patterning
NASA Astrophysics Data System (ADS)
Guerrero, Douglas J.; Hockey, Mary Ann; Wang, Yubao; Calderas, Eric
2013-03-01
Micro-phase separation for directed self-assembly (DSA) can be executed successfully only when the substrate surface on which the block co-polymer (BCP) is coated has properties that are ideal for attraction to each polymer type. The neutral underlayer (NUL) is an essential and critical component in DSA feasibility. Properties conducive for BCP patterning are primarily dependent on "brush" or "crosslinked" random co-polymer underlayers. Most DSA flows also require a lithography step (reflection control) and pattern transfer schemes at the end of the patterning process. A novel multifunctional hardmask neutral layer (HM NL) was developed to provide reflection control, surface energy matching, and pattern transfer capabilities in a grapho-epitaxy DSA process flow. It was found that the ideal surface energy for the HM NL is in the range of 38-45 dyn/cm. The robustness of the HM NL against exposure to process solvents and developers was identified. Process characteristics of the BCP (thickness, bake time and temperature) on the HM NL were defined. Using the HM NL instead of three distinct layers - bottom anti-reflective coating (BARC) and neutral and hardmask layers - in DSA line-space pitch tripling and contact hole shrinking processes was demonstrated. Finally, the capability of the HM NL to transfer a pattern into a 100-nm spin-on carbon (SOC) layer was shown.
Accelerated signal encoding and reconstruction using pixon method
Puetter, Richard; Yahil, Amos; Pina, Robert
2005-05-17
The method identifies a Pixon element, which is a fundamental and indivisible unit of information, and a Pixon basis, which is the set of possible functions from which the Pixon elements are selected. The actual Pixon elements selected from this basis during the reconstruction process represent the smallest number of such units required to fit the data, and hence the minimum number of parameters necessary to specify the image. The Pixon kernels can have arbitrary properties (e.g., shape, size, and/or position) as needed to best fit the data.
Accelerated signal encoding and reconstruction using pixon method
Puetter, Richard; Yahil, Amos
2002-01-01
The method identifies a Pixon element, which is a fundamental and indivisible unit of information, and a Pixon basis, which is the set of possible functions from which the Pixon elements are selected. The actual Pixon elements selected from this basis during the reconstruction process represent the smallest number of such units required to fit the data, and hence the minimum number of parameters necessary to specify the image. The Pixon kernels can have arbitrary properties (e.g., shape, size, and/or position) as needed to best fit the data.
Investigation on Accelerating Dust Storm Simulation via Domain Decomposition Methods
NASA Astrophysics Data System (ADS)
Yu, M.; Gui, Z.; Yang, C. P.; Xia, J.; Chen, S.
2014-12-01
Dust storm simulation is a data- and computing-intensive process that requires high efficiency and adequate computing resources. To speed up the process, high-performance computing is widely adopted. By partitioning a large study area into small subdomains according to their geographic location and executing them on different computing nodes in parallel, computing performance can be significantly improved. However, how to allocate these subdomain processes to computing nodes without introducing imbalanced task loads and unnecessary communication among nodes remains an open question. Here we propose a domain decomposition and allocation framework that carefully weighs the computing cost and communication cost of each computing node to minimize total execution time and reduce overall communication cost for the entire system. The framework is tested with the NMM (Nonhydrostatic Mesoscale Model)-dust model, in which a 72-hour dust load process is simulated. Performance results using the proposed scheduling method are compared with those obtained using the default MPI scheduling. The results demonstrate that the system improves simulation performance by 20% to 80%.
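The load-balancing half of such an allocation problem can be sketched with a greedy longest-processing-time heuristic. This ignores the communication-cost term that the proposed framework also weighs; the function name and cost values are illustrative assumptions:

```python
import heapq

def allocate_subdomains(costs, n_nodes):
    """Greedy LPT allocation: assign each subdomain (heaviest first)
    to the currently least-loaded node, balancing computing cost
    across nodes.  costs[i] is the estimated cost of subdomain i."""
    heap = [(0.0, node, []) for node in range(n_nodes)]
    heapq.heapify(heap)
    for cost, sub in sorted(((c, i) for i, c in enumerate(costs)), reverse=True):
        load, node, subs = heapq.heappop(heap)  # least-loaded node
        subs.append(sub)
        heapq.heappush(heap, (load + cost, node, subs))
    return {node: subs for load, node, subs in heap}

# Four subdomains with costs 4, 3, 3, 2 split evenly over two nodes.
alloc = allocate_subdomains([4, 3, 3, 2], n_nodes=2)
```

A scheduler like the one the abstract describes would extend the objective with inter-subdomain communication volume, preferring to co-locate neighboring subdomains on the same node.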
Kinetic Simulations of Particle Acceleration at Shocks
Caprioli, Damiano; Guo, Fan
2015-07-16
Collisionless shocks are mediated by collective electromagnetic interactions and are sources of non-thermal particles and emission. The full particle-in-cell approach and a hybrid approach are sketched, and simulations of collisionless shocks are shown using a multicolor presentation. Results are shown for SN 1006, a case involving ion acceleration and B-field amplification where the shock is parallel. Electron acceleration takes place in planetary bow shocks and galaxy clusters. It is concluded that acceleration at shocks can be efficient (>15%); CRs amplify the B field via streaming instability; ion DSA is efficient at parallel, strong shocks; ions are injected via reflection and shock drift acceleration; and electron DSA is efficient at oblique shocks.
[Accelerated radioisotope method of determining cholera vibrios' sensitivity to antibiotics].
Korol', V V; Podosinnikova, L S; Golubinskiĭ, E P; Rublev, B D
1980-05-01
Estimation of the protein biosynthesis rate was used for rapid determination of Vibrio cholerae sensitivity to tetracycline and chloramphenicol by comparing the radioactivity of bacterial cells in samples with and without the antibiotics. For the sensitivity determination, the strains were grown for 30 minutes at 37 degrees C in nutrient media containing a 14C-amino acid and the antibiotic. The measurements indicated at least a 10-fold difference in the amount of the amino acid assimilated by sensitive strains in the presence versus the absence of the antibiotic, whereas label incorporation into antibiotic-resistant strains did not differ under these conditions. Complete agreement was observed between the results of parallel testing of strain antibiotic sensitivity by the rapid and routine methods.
Method of correcting eddy current magnetic fields in particle accelerator vacuum chambers
Danby, G.T.; Jackson, J.W.
1990-03-19
A method for correcting magnetic field aberrations produced by eddy currents induced in a particle accelerator vacuum chamber housing is provided wherein correction windings are attached to selected positions on the housing and the windings are energized by transformer action from secondary coils, which coils are inductively coupled to the poles of electro-magnets that are powered to confine the charged particle beam within a desired orbit as the charged particles are accelerated through the vacuum chamber by a particle-driving rf field. The power inductively coupled to the secondary coils varies as a function of variations in the power supplied by the particle-accelerating rf field to a beam of particles accelerated through the vacuum chamber, so the current in the energized correction coils is effective to cancel eddy current flux fields that would otherwise be induced in the vacuum chamber by power variations (dB/dt) in the particle beam.
Method of correcting eddy current magnetic fields in particle accelerator vacuum chambers
Danby, Gordon T.; Jackson, John W.
1991-01-01
A method for correcting magnetic field aberrations produced by eddy currents induced in a particle accelerator vacuum chamber housing is provided wherein correction windings are attached to selected positions on the housing and the windings are energized by transformer action from secondary coils, which coils are inductively coupled to the poles of electro-magnets that are powered to confine the charged particle beam within a desired orbit as the charged particles are accelerated through the vacuum chamber by a particle-driving rf field. The power inductively coupled to the secondary coils varies as a function of variations in the power supplied by the particle-accelerating rf field to a beam of particles accelerated through the vacuum chamber, so the current in the energized correction coils is effective to cancel eddy current flux fields that would otherwise be induced in the vacuum chamber by power variations in the particle beam.
Apparatus and method for phosphate-accelerated bioremediation
Looney, B.B.; Pfiffner, S.M.; Phelps, T.J.; Lombard, K.H.; Hazen, T.C.; Borthen, J.W.
1998-05-19
An apparatus and method are provided for supplying a vapor-phase nutrient to contaminated soil for in situ bioremediation. The apparatus includes a housing adapted for containing a quantity of the liquid nutrient, a conduit in communication with the interior of the housing, means for causing a gas to flow through the conduit, and means for contacting the gas with the liquid so that a portion evaporates and mixes with the gas. The mixture of gas and nutrient vapor is delivered to the contaminated site via a system of injection and extraction wells configured to the site and provides for the use of a passive delivery system. The mixture has a partial pressure of vaporized nutrient that is no greater than the vapor pressure of the liquid. If desired, the nutrient and/or the gas may be heated to increase the vapor pressure and the nutrient concentration of the mixture. Preferably, the nutrient is a volatile, substantially nontoxic and nonflammable organic phosphate that is a liquid at environmental temperatures, such as triethyl phosphate or tributyl phosphate. 8 figs.
Apparatus and method for phosphate-accelerated bioremediation
Looney, B.B.; Phelps, T.J.; Hazen, T.C.; Pfiffner, S.M.; Lombard, K.H.; Borthen, J.W.
1994-01-01
An apparatus and method for supplying a vapor-phase nutrient to contaminated soil for in situ bioremediation. The apparatus includes a housing adapted for containing a quantity of the liquid nutrient, a conduit in fluid communication with the interior of the housing, means for causing a gas to flow through the conduit, and means for contacting the gas with the liquid so that a portion thereof evaporates and mixes with the gas. The mixture of gas and nutrient vapor is delivered to the contaminated site via a system of injection and extraction wells configured to the site. The mixture has a partial pressure of vaporized nutrient that is no greater than the vapor pressure of the liquid. If desired, the nutrient and/or the gas may be heated to increase the vapor pressure and the nutrient concentration of the mixture. Preferably, the nutrient is a volatile, substantially nontoxic and nonflammable organic phosphate that is a liquid at environmental temperatures, such as triethyl phosphate or tributyl phosphate.
Apparatus and method for phosphate-accelerated bioremediation
Looney, Brian B.; Pfiffner, Susan M.; Phelps, Tommy J.; Lombard, Kenneth H.; Hazen, Terry C.; Borthen, James W.
1998-01-01
An apparatus and method for supplying a vapor-phase nutrient to contaminated soil for in situ bioremediation. The apparatus includes a housing adapted for containing a quantity of the liquid nutrient, a conduit in communication with the interior of the housing, means for causing a gas to flow through the conduit, and means for contacting the gas with the liquid so that a portion thereof evaporates and mixes with the gas. The mixture of gas and nutrient vapor is delivered to the contaminated site via a system of injection and extraction wells configured to the site and provides for the use of a passive delivery system. The mixture has a partial pressure of vaporized nutrient that is no greater than the vapor pressure of the liquid. If desired, the nutrient and/or the gas may be heated to increase the vapor pressure and the nutrient concentration of the mixture. Preferably, the nutrient is a volatile, substantially nontoxic and nonflammable organic phosphate that is a liquid at environmental temperatures, such as triethyl phosphate or tributyl phosphate.
Graphics processing unit acceleration of computational electromagnetic methods
NASA Astrophysics Data System (ADS)
Inman, Matthew
The use of graphics processing units (GPUs) for scientific applications has been evolving and expanding for the past decade. GPUs provide an alternative to the CPU for creating and executing the numerical codes often relied upon to perform simulations in computational electromagnetics. While originally designed purely to display graphics on the user's monitor, GPUs today are essentially powerful floating-point co-processors that can be programmed not only to render complex graphics but also to perform the complex mathematical calculations often encountered in scientific computing. The GPUs being produced currently often contain hundreds of separate cores able to access large amounts of high-speed dedicated memory. By utilizing the power offered by such a specialized processor, it is possible to drastically speed up the calculations required in computational electromagnetics. This increase in speed allows GPU-based simulations in a variety of situations where computational time has heretofore been a limiting factor, such as in educational courses. Teaching electromagnetics often relies on simple example problems because of the simulation times needed to analyze more complex ones. The use of GPU-based simulations will be shown to allow demonstrations of more advanced problems than previously possible by adapting the methods for use on the GPU. Modules will be developed for a wide variety of teaching situations, utilizing the speed of the GPU to demonstrate various techniques and ideas previously unrealizable.
Pattern fidelity improvement of chemo-epitaxy DSA process for high-volume manufacturing
NASA Astrophysics Data System (ADS)
Muramatsu, Makoto; Nishi, Takanori; You, Gen; Saito, Yusuke; Ido, Yasuyuki; Ito, Kiyohito; Tobana, Toshikatsu; Hosoya, Masanori; Chen, Weichien; Nakamura, Satoru; Somervell, Mark; Kitano, Takahiro
2016-03-01
Directed self-assembly (DSA) is one of the candidates for next-generation lithography. Over the past few years, cylindrical and lamellar structures dictated by the block co-polymer (BCP) composition have been investigated for use in patterning contact holes or lines, and Tokyo Electron Limited (TEL is a registered trademark or a trademark of Tokyo Electron Limited in Japan and/or other countries) has presented the evaluation results and the advantages of each [1-5]. In this report, we present the latest results of defect-reduction work on a model line/space system. In particular, the results suggest that the defectivity of the neutral layer has a large impact on the defectivity of the DSA patterns. LER/LWR reduction results will also be presented, with a focus on the improvements made during the etch transfer of the DSA patterns into the underlayer.
Detecting chaos in particle accelerators through the frequency map analysis method.
Papaphilippou, Yannis
2014-06-01
The motion of beams in particle accelerators is dominated by a plethora of non-linear effects, which can enhance chaotic motion and limit their performance. The application of advanced non-linear dynamics methods for detecting and correcting these effects and thereby increasing the region of beam stability plays an essential role during the accelerator design phase but also their operation. After describing the nature of non-linear effects and their impact on performance parameters of different particle accelerator categories, the theory of non-linear particle motion is outlined. The recent developments on the methods employed for the analysis of chaotic beam motion are detailed. In particular, the ability of the frequency map analysis method to detect chaotic motion and guide the correction of non-linear effects is demonstrated in particle tracking simulations but also experimental data.
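The core measurement behind frequency map analysis, extracting the fundamental frequency (tune) from turn-by-turn tracking data and watching it drift, can be sketched with a plain FFT peak. Production implementations (e.g. NAFF) refine the estimate well beyond one FFT bin; everything below is an illustrative simplification:

```python
import numpy as np

def fundamental_frequency(signal):
    """Estimate the dominant frequency (tune, in turns^-1) of a
    turn-by-turn signal from its FFT magnitude peak.  Chaos is flagged
    when the tune drifts between two halves of the data (tune
    diffusion); regular motion keeps a constant tune."""
    spec = np.abs(np.fft.rfft(signal - np.mean(signal)))
    return np.argmax(spec) / len(signal)

n = 1024
turns = np.arange(n)
x = np.cos(2 * np.pi * 0.31 * turns)        # regular orbit, tune 0.31
nu1 = fundamental_frequency(x[: n // 2])    # tune in first half
nu2 = fundamental_frequency(x[n // 2 :])    # tune in second half
diffusion = abs(nu2 - nu1)                  # ~0 for regular motion
```

Repeating this over a grid of initial amplitudes yields the frequency map; large `diffusion` values mark the chaotic regions that limit the stable beam area.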
Grey transport acceleration method for time-dependent radiative transfer problems
Larsen, E.
1988-10-01
A new iterative method for solving the time-dependent multifrequency radiative transfer equations is described. The method is applicable to semi-implicit time discretizations that generate a linear steady-state multifrequency transport problem with pseudo-scattering within each time step. The standard "lambda" iteration method is shown to often converge slowly for such problems, and the new grey transport acceleration (GTA) method, based on accelerating the lambda method by employing a grey, or frequency-independent, transport equation, is developed. The GTA method is shown, theoretically by an iterative Fourier analysis and experimentally by numerical calculations, to converge significantly faster than the lambda method. In addition, the GTA method is conceptually simple to implement for general differencing schemes, on either Eulerian or Lagrangian meshes. Copyright 1988 Academic Press, Inc.
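The acceleration idea can be demonstrated on a zero-dimensional (infinite-medium) multigroup analogue. This is not Larsen's GTA applied to the transport equation; it only illustrates how solving a collapsed one-group ("grey") problem for the iteration error speeds up the lambda iteration. The uniform collapse weights and the rank-one scattering matrix are assumptions of the sketch:

```python
import numpy as np

def lambda_iteration(C, q, tol=1e-10, max_iter=10000):
    """Unaccelerated 'lambda' iteration for phi = C @ phi + q.
    Converges at the spectral radius of C: slow when the
    (pseudo-)scattering ratio is close to one."""
    phi = np.zeros_like(q)
    for k in range(max_iter):
        new = C @ phi + q
        if np.linalg.norm(new - phi) < tol:
            return new, k + 1
        phi = new
    return phi, max_iter

def grey_accelerated(C, q, tol=1e-10, max_iter=10000):
    """Grey-style acceleration sketch: after each sweep, solve a
    collapsed one-group problem for the error and add a flat
    correction.  Exact in one step when C acts on the flat mode."""
    G = len(q)
    w = np.ones(G) / G                       # grey collapse weights
    c_grey = w @ C @ np.ones(G)              # collapsed scattering ratio
    phi = np.zeros_like(q)
    for k in range(max_iter):
        half = C @ phi + q                   # lambda half-step
        resid = w @ (half - phi)             # collapsed residual
        phi_new = half + (c_grey * resid / (1.0 - c_grey)) * np.ones(G)
        if np.linalg.norm(phi_new - phi) < tol:
            return phi_new, k + 1
        phi = phi_new
    return phi, max_iter

C = 0.9 / 3 * np.ones((3, 3))   # scattering ratio 0.9: slow lambda iteration
q = np.array([1.0, 2.0, 3.0])
phi_l, it_l = lambda_iteration(C, q)
phi_a, it_a = grey_accelerated(C, q)
```

For this rank-one scattering matrix the grey correction removes the slowly converging flat error mode exactly, so the accelerated iteration converges in a couple of sweeps while the plain lambda iteration needs hundreds.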
Predictive Simulation and Design of Materials by Quasicontinuum and Accelerated Dynamics Methods
Luskin, Mitchell; James, Richard; Tadmor, Ellad
2014-03-30
This project developed the hyper-QC multiscale method to make possible the computation of previously inaccessible space and time scales for materials with thermally activated defects. The hyper-QC method combines the spatial coarse-graining feature of a finite temperature extension of the quasicontinuum (QC) method (aka “hot-QC”) with the accelerated dynamics feature of hyperdynamics. The hyper-QC method was developed, optimized, and tested from a rigorous mathematical foundation.
NASA Astrophysics Data System (ADS)
Schunert, Sebastian; Wang, Yaqi; Gleicher, Frederick; Ortensi, Javier; Baker, Benjamin; Laboure, Vincent; Wang, Congjian; DeHart, Mark; Martineau, Richard
2017-06-01
This work presents a flexible nonlinear diffusion acceleration (NDA) method that discretizes both the SN transport equation and the diffusion equation using the discontinuous finite element method (DFEM). The method is flexible in that the diffusion equation can be discretized on a coarser mesh with the only restriction that it is nested within the transport mesh and the FEM shape function orders of the two equations can be different. The consistency of the transport and diffusion solutions at convergence is defined by using a projection operator mapping the transport into the diffusion FEM space. The diffusion weak form is based on the modified incomplete interior penalty (MIP) diffusion DFEM discretization that is extended by volumetric drift, interior face, and boundary closure terms. In contrast to commonly used coarse mesh finite difference (CMFD) methods, the presented NDA method uses a full FEM discretized diffusion equation for acceleration. Suitable projection and prolongation operators arise naturally from the FEM framework. Via Fourier analysis and numerical experiments for a one-group, fixed source problem the following properties of the NDA method are established for structured quadrilateral meshes: (1) the presented method is unconditionally stable and effective in the presence of mild material heterogeneities if the same mesh and identical shape functions either of the bilinear or biquadratic type are used, (2) the NDA method remains unconditionally stable in the presence of strong heterogeneities, (3) the NDA method with bilinear elements extends the range of effectiveness and stability by a factor of two when compared to CMFD if a coarser diffusion mesh is selected. In addition, the method is tested for solving the C5G7 multigroup, eigenvalue problem using coarse and fine mesh acceleration. While NDA does not offer an advantage over CMFD for fine mesh acceleration, it reduces the iteration count required for convergence by almost a factor of two in
Razmjoo, Hasan; Peyman, Alireza; Rahimi, Ali; Modrek, Hoda Jafari
2017-01-01
Background: Keratoconus is a progressive degenerative disorder of the cornea in which structural changes cause the cornea to become thin and conical in shape. Recently, collagen cross-linking (CXL) has been introduced as an effective intervention in the management of progressive keratoconus. Accelerated CXL is a new protocol of this procedure which reduces corneal ultraviolet irradiation exposure time to 5 min. This study aimed to compare visual acuity, keratometry and topographic criteria of keratoconic eyes after conventional and accelerated CXL with a six-month follow-up. Materials and Methods: In this prospective interventional study we assessed the eyes of 40 patients. Patients were randomly divided into two groups: one group underwent accelerated (5 min) CXL and the other underwent conventional (30 min) CXL. Visual acuity, topographic criteria and keratometry were assessed preoperatively and 6 months postoperatively. Results: Of the 40 eyes assessed, 50% were right eyes (OD) and 50% were left eyes (OS). The mean age of patients was 22.10 years in the accelerated group and 22.80 years in the conventional group. Our results showed no significant differences in visual acuity, keratometric and topographic criteria between the two groups before intervention. Likewise, there was no significant difference in visual acuity, keratometric, refractive and topographic criteria after intervention. Conclusion: According to our survey, topographic criteria and keratometry improvement are the same under the accelerated and conventional protocols, so the accelerated protocol is suggested as a safe and effective option for the management of progressive keratoconus. PMID:28299302
Accelerated GPU simulation of compressible flow by the discontinuous evolution Galerkin method
NASA Astrophysics Data System (ADS)
Block, B. J.; Lukáčová-Medvid'ová, M.; Virnau, P.; Yelash, L.
2012-08-01
The aim of the present paper is to report on our recent results for GPU-accelerated simulations of compressible flows. For numerical simulation, the adaptive discontinuous Galerkin method with the multidimensional bicharacteristic-based evolution Galerkin operator has been used. For time discretization we have applied the explicit third-order Runge-Kutta method. Evaluation of the genuinely multidimensional evolution operator has been accelerated using the GPU implementation. We have obtained a speedup of up to 30 (in comparison to a single CPU core) for the calculation of the evolution Galerkin operator on a typical discretization mesh consisting of 16384 mesh cells.
Three dimensional finite element methods: Their role in the design of DC accelerator systems
NASA Astrophysics Data System (ADS)
Podaru, Nicolae C.; Gottdang, A.; Mous, D. J. W.
2013-04-01
High Voltage Engineering has designed, built and tested a 2 MV dual irradiation system that will be applied for radiation damage studies and ion beam material modification. The system consists of two independent accelerators which support simultaneous proton and electron irradiation (energy range 100 keV - 2 MeV) of target sizes of up to 300 × 300 mm2. Three dimensional finite element methods were used in the design of various parts of the system. The electrostatic solver was used to quantify essential parameters of the solid-state power supply generating the DC high voltage. The magnetostatic solver and ray tracing were used to optimize the electron/ion beam transport. Close agreement between design and measurements of the accelerator characteristics as well as beam performance indicate the usefulness of three dimensional finite element methods during accelerator system design.
A reproducible accelerated in vitro release testing method for PLGA microspheres.
Shen, Jie; Lee, Kyulim; Choi, Stephanie; Qu, Wen; Wang, Yan; Burgess, Diane J
2016-02-10
The objective of the present study was to develop a discriminatory and reproducible accelerated in vitro release method for long-acting PLGA microspheres with inner structure/porosity differences. Risperidone was chosen as a model drug. Qualitatively and quantitatively equivalent PLGA microspheres with different inner structure/porosity were obtained using different manufacturing processes. Physicochemical properties as well as degradation profiles of the prepared microspheres were investigated. Furthermore, in vitro release testing of the prepared risperidone microspheres was performed using the most common in vitro release methods (i.e., sample-and-separate and flow through) for this type of product. The obtained compositionally equivalent risperidone microspheres had similar drug loading but different inner structure/porosity. When microsphere particle size appeared similar, porous risperidone microspheres showed faster microsphere degradation and drug release compared with less porous microspheres. Both in vitro release methods investigated were able to differentiate risperidone microsphere formulations with differences in porosity under real-time (37 °C) and accelerated (45 °C) testing conditions. Notably, only the accelerated USP apparatus 4 method showed good reproducibility for highly porous risperidone microspheres. These results indicated that the accelerated USP apparatus 4 method is an appropriate fast quality control tool for long-acting PLGA microspheres (even with porous structures).
Tsalafoutas, I; Xenofos, S; Stamatelatos, I E
1997-01-01
A semiempirical method for the calculation of relative crossbeam dose profiles at depth is described. The parameters required to set up the formulae, and their dependence on field size and depth, are investigated. Using this method, measured crossbeam dose profiles at depth from two linear accelerators, a Philips (SL-18) and an AEC (Therac-6), are reproduced. The results indicate that the method is applicable within a wide range of depths and field sizes.
Benzi, Michele; Evans, Thomas M.; Hamilton, Steven P.; ...
2017-03-05
Here, we consider hybrid deterministic-stochastic iterative algorithms for the solution of large, sparse linear systems. Starting from a convergent splitting of the coefficient matrix, we analyze various types of Monte Carlo acceleration schemes applied to the original preconditioned Richardson (stationary) iteration. We expect that these methods will have considerable potential for resiliency to faults when implemented on massively parallel machines. We also establish sufficient conditions for the convergence of the hybrid schemes, and we investigate different types of preconditioners including sparse approximate inverses. Numerical experiments on linear systems arising from the discretization of partial differential equations are presented.
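The starting point of these hybrid schemes, a stationary Richardson iteration built from a matrix splitting, can be sketched as follows. The Jacobi splitting and names are illustrative assumptions, and the Monte Carlo estimation of the correction term described in the abstract is not included:

```python
import numpy as np

def richardson_from_splitting(A, b, n_iter=200):
    """Stationary (Richardson) iteration from the splitting A = M - N:
    x_{k+1} = x_k + M^{-1} (b - A x_k), converging when the iteration
    matrix M^{-1} N has spectral radius < 1.  Here M = diag(A), the
    Jacobi splitting; the paper's hybrid schemes instead estimate the
    correction term stochastically by Monte Carlo, which gives them
    potential resiliency to faults on massively parallel machines.
    """
    M_inv = 1.0 / np.diag(A)
    x = np.zeros_like(b)
    for _ in range(n_iter):
        x = x + M_inv * (b - A @ x)  # preconditioned residual correction
    return x

# Diagonally dominant system, so the Jacobi splitting is convergent.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = richardson_from_splitting(A, b)
```

Replacing `diag(A)` with a sparse approximate inverse changes the preconditioner, one of the variations the paper analyzes for its effect on convergence of the hybrid schemes.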
New estimation method of neutron skyshine for a high-energy particle accelerator
NASA Astrophysics Data System (ADS)
Oh, Joo-Hee; Jung, Nam-Suk; Lee, Hee-Seock; Ko, Seung-Kook
2016-09-01
A skyshine is the dominant component of the prompt radiation at off-site locations. Several experimental studies have been done to estimate the neutron skyshine at a few accelerator facilities. In this work, neutron transport from the source to off-site locations was simulated using the Monte Carlo codes FLUKA and PHITS. The transport paths were classified as skyshine, direct (transport), groundshine and multiple-shine to understand the contribution of each path and to develop a general evaluation method. The effect of each path was estimated in terms of the dose at far locations. The neutron dose was calculated using the neutron energy spectra obtained from each detector placed up to a maximum of 1 km from the accelerator. The highest altitude of the sky region in this simulation was set at 2 km from the floor of the accelerator facility. The initial model of this study was the 10 GeV electron accelerator PAL-XFEL. Different compositions and densities of air, soil and ordinary concrete were applied in this calculation, and their dependences were reviewed. The estimation method used in this study was compared with the well-known methods suggested by Rindi, Stevenson and Stepleton, and also with the simple code SHINE3. The results obtained using this method agreed well with those using Rindi's formula.
Amans, Matthew R.; Cooke, Daniel L.; Vella, Maya; Dowd, Christopher F.; Halbach, Van V.; Higashida, Randall T.; Hetts, Steven W.
2014-01-01
Contrast staining of brain parenchyma identified on non-contrast CT performed after DSA in patients with acute ischemic stroke (AIS) is an incompletely understood imaging finding. We hypothesize contrast staining to be an indicator of brain injury and suspect the fate of involved parenchyma to be cerebral infarction. Seventeen years of AIS data were retrospectively analyzed for contrast staining. Charts were reviewed and outcomes of the stained parenchyma were identified on subsequent CT and MRI. Thirty-six of 67 patients meeting inclusion criteria (53.7%) had contrast staining on CT obtained within 72 hours after DSA. Brain parenchyma with contrast staining in patients with AIS most often evolved into cerebral infarction (81%). Hemorrhagic transformation was less likely in cases with staining than in the cohort without contrast staining of the parenchyma on post-DSA CT (6% versus 25%, respectively; OR 0.17, 95% CI 0.017-0.98, p = 0.02). Brain parenchyma with contrast staining on CT after DSA in AIS patients was likely to infarct and unlikely to hemorrhage. PMID:24556308
Means and method for the focusing and acceleration of parallel beams of charged particles
Maschke, Alfred W.
1983-07-05
A novel apparatus and method for focusing beams of charged particles comprising planar arrays of electrostatic quadrupoles. The quadrupole arrays may comprise electrodes which are shared by two or more quadrupoles. Such quadrupole arrays are particularly adapted to providing strong focusing forces for high-current, high-brightness beams of charged particles, said beams further comprising a plurality of parallel beams, or beamlets, each such beamlet being focused by one quadrupole of the array. Such arrays may be incorporated in various devices wherein beams of charged particles are accelerated or transported, such as linear accelerators, klystron tubes, beam transport lines, etc.
Billa, Nanditha; Hubin-Barrows, Dylan; Lahren, Tylor; Burkhard, Lawrence P
2014-02-01
Two common laboratory extraction techniques were evaluated for routine use with the micro-colorimetric lipid determination method developed by Van Handel (1985) [2] and recently validated for small samples by Inouye and Lotufo (2006) [1]. With the accelerated solvent extraction method using chloroform:methanol solvent and the colorimetric lipid determination method, 28 of 30 samples had significant proportional bias (α=1%, determined using standard additions) and 1 of 30 samples had significant constant bias (α=1%, determined using Youden Blank measurements). With sonic extraction, 0 of 6 samples had significant proportional bias (α=1%) and 1 of 6 samples had significant constant bias (α=1%). These results demonstrate that the accelerated solvent extraction method with the chloroform:methanol solvent system creates an interference with the colorimetric assay method; without accounting for this bias in the analysis, inaccurate measurements would be obtained.
Scatter correction of vessel dropout behind highly attenuating structures in 4D-DSA
NASA Astrophysics Data System (ADS)
Hermus, James; Mistretta, Charles; Szczykutowicz, Timothy P.
2015-03-01
In Computed Tomographic (CT) image reconstruction for four-dimensional digital subtraction angiography (4D-DSA), loss of vessel contrast has been observed behind highly attenuating anatomy, such as large contrast-filled aneurysms. Although this typically occurs only in a limited range of projection angles, the observed contrast time course can be altered. In this work we propose an algorithm to correct for highly attenuating anatomy within the fill projection data, i.e. aneurysms. The algorithm uses a 3D-SA volume to create a correction volume that is multiplied by the 4D-DSA volume in order to correct for signal dropout within the 4D-DSA volume. The algorithm was designed to correct for highly attenuating material in the fill volume only; however, with alterations to a single step of the algorithm, artifacts due to highly attenuating materials in the mask volume (i.e. dental implants) can be mitigated as well. We successfully applied our algorithm to a case of vessel dropout due to the presence of a large attenuating aneurysm. The performance was assessed visually: the affected vessel no longer dropped out on corrected 4D-DSA time frames. The correction was quantified by plotting the signal intensity along the vessel. Our analysis demonstrated that our correction does not alter vessel signal values outside of the vessel dropout region but does increase the vessel values within the dropout region, as expected. We have demonstrated that this correction algorithm acts to correct vessel dropout in areas with highly attenuating materials.
A review of vector convergence acceleration methods, with applications to linear algebra problems
NASA Astrophysics Data System (ADS)
Brezinski, C.; Redivo-Zaglia, M.
In this article we try, in a few pages, to give an idea of convergence acceleration methods and extrapolation procedures for vector sequences, and to present some applications to linear algebra problems and to the treatment of the Gibbs phenomenon for Fourier series, in order to show their effectiveness. The interested reader is referred to the literature for more details. Due to space limitations, the bibliography includes only the more recent items; for older ones, we refer to Brezinski and Redivo-Zaglia (Extrapolation Methods: Theory and Practice, North-Holland, 1991). This book also contains, on a magnetic support, a library (in Fortran 77) of convergence acceleration algorithms and extrapolation methods.
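As a concrete illustration of this family, here is a minimal sketch of Minimal Polynomial Extrapolation (MPE), one of the standard vector extrapolation methods. This is a textbook formulation applied to a toy linear fixed-point iteration, not code from the cited Fortran 77 library; the test matrix and vectors are illustrative:

```python
import numpy as np

def mpe(X):
    """Minimal Polynomial Extrapolation: columns of X are the iterates
    x_0..x_k; returns the extrapolated limit of the vector sequence."""
    U = np.diff(X, axis=1)                       # u_j = x_{j+1} - x_j
    c, *_ = np.linalg.lstsq(U[:, :-1], -U[:, -1], rcond=None)
    c = np.append(c, 1.0)
    gamma = c / c.sum()                          # affine weights, sum to 1
    return X[:, :gamma.size] @ gamma

# fixed-point iteration x_{k+1} = G x_k + f with a known limit
G = np.array([[0.6, 0.2, 0.0],
              [0.1, 0.5, 0.2],
              [0.0, 0.2, 0.4]])
f = np.array([1.0, 2.0, 3.0])
X = np.zeros((3, 5))
for k in range(4):
    X[:, k + 1] = G @ X[:, k] + f
limit = np.linalg.solve(np.eye(3) - G, f)
s = mpe(X)
```

For a linear sequence whose minimal polynomial degree does not exceed the number of stored differences, MPE recovers the limit exactly, which is why only five iterates suffice here.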
Acceleration of Multidimensional Discrete Ordinates Methods Via Adjacent-Cell Preconditioners
Azmy, Y.Y.
2000-10-15
The adjacent-cell preconditioner (AP) formalism originally derived in slab geometry is extended to multidimensional Cartesian geometry for generic fixed-weight, weighted diamond difference neutron transport methods. This is accomplished for the thick-cell regime (KAP) and thin-cell regime (NAP). A spectral analysis of the resulting acceleration schemes demonstrates their excellent spectral properties for model problem configurations, characterized by a uniform mesh of infinite extent and homogeneous material composition, each in its own cell-size regime. Thus, the spectral radius of KAP vanishes as the computational cell size approaches infinity, but it exceeds unity for very thin cells, thereby implying instability. In contrast, NAP is stable and robust for all cell sizes, but its spectral radius vanishes more slowly as the cell size increases. For this reason, and to avoid potential complication in the case of cells that are thin in one dimension and thick in another, NAP is adopted in the remainder of this work. The most important feature of AP for practical implementation in production level codes is that it is cell centered, reducing the size of the algebraic system comprising the acceleration stage compared to face-centered schemes. Boundary conditions for finite extent problems and a mixing formula across material and cell-size discontinuity are derived and used to implement NAP in a test code, AHOT, and a production code, TORT. Numerical testing for algebraically linear iterative schemes for the cases embodied in Burre's Suite of Test Problems demonstrates the high efficiency of the new method in reducing the number of iterations required to achieve convergence, especially for optically thick cells where acceleration is most needed. Also, for algebraically nonlinear (adaptive) methods, AP generally performs better than the partial current rebalance method in TORT and the diffusion synthetic acceleration method in TWODANT. Finally, application of the AP
Donor-specific antibodies accelerate arteriosclerosis after kidney transplantation.
Hill, Gary S; Nochy, Dominique; Bruneval, Patrick; Duong van Huyen, J P; Glotz, Denis; Suberbielle, Caroline; Zuber, Julien; Anglicheau, Dany; Empana, Jean-Philippe; Legendre, Christophe; Loupy, Alexandre
2011-05-01
In biopsies of renal allografts, arteriosclerosis is often more severe than expected based on the age of the donor, even without a history of rejection vasculitis. To determine whether preformed donor-specific antibodies (DSAs) may contribute to the severity of arteriosclerosis, we examined protocol biopsies from patients with (n=40) or without (n=59) DSA after excluding those with any evidence of vasculitis. Among DSA-positive patients, arteriosclerosis significantly progressed between month 3 and month 12 after transplant (mean Banff cv score 0.65 ± 0.11 to 1.12 ± 0.10, P=0.014); in contrast, among DSA-negative patients, we did not detect a statistically significant progression during the same timeframe (mean Banff cv score 0.65 ± 0.11 to 0.81 ± 0.10, P=not significant). Available biopsies at later time points supported a rate of progression of arteriosclerosis in DSA-negative patients that was approximately one third that in DSA-positive patients. Accelerated arteriosclerosis was significantly associated with peritubular capillary leukocytic infiltration, glomerulitis, subclinical antibody-mediated rejection, and interstitial inflammation. In conclusion, these data support the hypothesis that donor-specific antibodies dramatically accelerate post-transplant progression of arteriosclerosis.
Centrifugal accelerator, system and method for removing unwanted layers from a surface
Foster, Christopher A.; Fisher, Paul W.
1995-01-01
A cryoblasting process having a centrifugal accelerator for accelerating frozen pellets of argon or carbon dioxide toward a target area utilizes an accelerator throw wheel designed to induce, during operation, the creation of a low-friction gas bearing within internal passages of the wheel which would otherwise retard acceleration of the pellets as they move through the passages. An associated system and method for removing paint from a surface with cryoblasting techniques involves the treating, such as a preheating, of the painted surface to soften the paint prior to the impacting of frozen pellets thereagainst to increase the rate of paint removal. A system and method for producing large quantities of frozen pellets from a liquid material, such as liquid argon or carbon dioxide, for use in a cryoblasting process utilizes a chamber into which the liquid material is introduced in the form of a jet which disintegrates into droplets. A non-condensible gas, such as inert helium or air, is injected into the chamber at a controlled rate so that the droplets freeze into bodies of relatively high density.
Tondu, B.; Bazaz, S.A.
1999-09-01
An original method, called the three-cubic method, is proposed to generate online robot joint trajectories interpolating given position points with associated velocities. The method is based on an acceleration profile composed of three cubic polynomial segments, which ensures zero acceleration at each intermediate point. Velocity and acceleration continuity is obtained, and this three-cubic combination admits an analytical solution to the minimum-time trajectory problem under maximum velocity and acceleration constraints. Possible wandering is detected and can be overcome. Furthermore, the analytical solution to the minimum-time trajectory problem enables online trajectory computation.
Hu, Zhen; Melton, Genevieve B.; Moeller, Nathan D.; Arsoniadis, Elliot G.; Wang, Yan; Kwaan, Mary R.; Jensen, Eric H.; Simon, Gyorgy J.
2016-01-01
Manual Chart Review (MCR) is an important but labor-intensive task for clinical research and quality improvement. In this study, aiming to accelerate the process of extracting postoperative outcomes from medical charts, we developed an automated postoperative complications detection application using structured electronic health record (EHR) data. We applied several machine learning methods to the detection of commonly occurring complications, including three subtypes of surgical site infection, pneumonia, urinary tract infection, sepsis, and septic shock. In particular, we applied one single-task and five multi-task learning methods and compared their detection performance. The models demonstrated high detection performance, which supports the feasibility of accelerating MCR. Specifically, one of the multi-task learning methods, propensity weighted observations (PWO), demonstrated the highest detection performance, with single-task learning a close second. PMID:28269941
Defect reduction and defect stability in IMEC's 14nm half-pitch chemo-epitaxy DSA flow
NASA Astrophysics Data System (ADS)
Gronheid, Roel; Rincon Delgadillo, Paulina; Pathangi, Hari; Van den Heuvel, Dieter; Parnell, Doni; Chan, Boon Teik; Lee, Yu-Tsung; Van Look, Lieve; Cao, Yi; Her, YoungJun; Lin, Guanyang; Harukawa, Ryota; Nagaswami, Venkat; D'Urzo, Lucia; Somervell, Mark; Nealey, Paul
2014-03-01
Directed Self-Assembly (DSA) of Block Co-Polymers (BCP) has become an intense field of study as a potential patterning solution for future generation devices. The most critical challenges that need to be understood and controlled include pattern placement accuracy, achieving low defectivity in DSA patterns and how to make chip designs DSA-friendly. The DSA program at imec includes efforts on these three major topics. Specifically, in this paper the progress in DSA defectivity within the imec program will be discussed. In previous work, defectivity levels of ~560 defects/cm² were reported and the root causes for these defects were identified, which included particle sources, material interactions and pre-pattern imperfections. The specific efforts that have been undertaken to reduce defectivity in the line/space chemoepitaxy DSA flow that is used for the imec defectivity studies are discussed. Specifically, control of neutral layer material and improved filtration during the block co-polymer manufacturing have enabled a significant reduction in the defect performance. In parallel, efforts have been ongoing to enhance the defect inspection capabilities and allow a high capture rate of the small defects. It is demonstrated that transfer of the polystyrene patterns into the underlying substrate is critical for detecting the DSA-relevant defect modes including microbridges and small dislocations. Such pattern transfer enhances the inspection sensitivity by ~10x. Further improvement through process optimization allows for substantial defectivity reduction.
Schell, Stefan; Wilkens, Jan J.
2010-10-15
Purpose: Laser plasma acceleration can potentially replace large and expensive cyclotrons or synchrotrons for radiotherapy with protons and ions. On the way toward a clinical implementation, various challenges such as the maximum obtainable energy still remain to be solved. In any case, laser accelerated particles exhibit differences compared to particles from conventional accelerators. They typically have a wide energy spread and the beam is extremely pulsed (i.e., quantized) due to the pulsed nature of the employed lasers. The energy spread leads to depth dose curves that do not show a pristine Bragg peak but a wide high dose area, making precise radiotherapy impossible without an additional energy selection system. Problems with the beam quantization include the limited repetition rate and the number of accelerated particles per laser shot. This number might be too low, which requires a high repetition rate, or it might be too high, which requires an additional fluence selection system to reduce the number of particles. Trying to use laser accelerated particles in a conventional way such as spot scanning leads to long treatment times and a high amount of secondary radiation produced when blocking unwanted particles. Methods: The authors present methods of beam delivery and treatment planning that are specifically adapted to laser accelerated particles. In general, it is not necessary to fully utilize the energy selection system to create monoenergetic beams for the whole treatment plan. Instead, within wide parts of the target volume, beams with broader energy spectra can be used to simultaneously cover multiple axially adjacent spots of a conventional dose delivery grid as applied in intensity modulated particle therapy. If one laser shot produces too many particles, they can be distributed over a wider area with the help of a scattering foil and a multileaf collimator to cover multiple lateral spot positions at the same time. These methods are called axial and
NASA Technical Reports Server (NTRS)
Lathrop, J. W.
1985-01-01
If thin film cells are to be considered a viable option for terrestrial power generation, their reliability attributes will need to be explored and confidence in their stability obtained through accelerated testing. Development of a thin film accelerated test program will be more difficult than was the case for crystalline cells because of the monolithic construction of the cells. Specially constructed test samples will need to be fabricated, requiring commitment to the concept of accelerated testing by the manufacturers. A new test schedule appropriate to thin film cells will need to be developed which will be different from that used in connection with crystalline cells. Preliminary work has been started to seek thin film schedule variations of two of the simplest tests: unbiased temperature and unbiased temperature-humidity. Still to be examined are tests which involve the passage of current during temperature and/or humidity stress, either by biasing in the forward (or reverse) direction or by the application of light during stress. Investigation of these current (voltage) accelerated tests will involve development of methods of reliably contacting the thin conductive films during stress.
NASA Astrophysics Data System (ADS)
Takei, K.; Kumai, K.; Kobayashi, Y.; Miyashiro, H.; Terada, N.; Iwahori, T.; Tanaka, T.
Testing methods to estimate the cycle life of lithium-ion batteries within a short period have been developed using a commercialized cell with a LiCoO2/hard carbon system. The degradation reactions with increasing cycles were suggested to occur predominantly above 4 V, based on the results of tests with divided operating-voltage ranges. In the extrapolation method using limited cycle data, a straight-line approximation was useful because the cycle performance is linear, but the error is at most 40% when only the initial short-cycle data are used. In the accelerated aging tests using charge and/or discharge rate as stress factors, large acceleration coefficients were obtained at high charge rates and under high-temperature thermal stress.
A multipole accelerated desingularized method for computing nonlinear wave forces on bodies
Scorpio, S.M.; Beck, R.F.
1996-12-31
Nonlinear wave forces on offshore structures are investigated. The fluid motion is computed using an Euler-Lagrange time-domain approach. Nonlinear free-surface boundary conditions are stepped forward in time using an accurate and stable integration technique. The field equation with mixed boundary conditions that results at each time step is solved at N nodes using a desingularized boundary integral method with multipole acceleration. Multipole-accelerated solutions require O(N) computational effort and computer storage, while conventional solvers require O(N²) effort and storage for an iterative solution and O(N³) effort for direct inversion of the influence matrix. These methods are applied to the three-dimensional problem of wave diffraction by a vertical cylinder.
Quan, Li-Di; Xue, Chao; Shao, Cheng-Gang; Yang, Shan-Qing; Tu, Liang-Cheng; Wang, Yong-Ji; Luo, Jun
2014-01-01
The performance of the feedback control system is of central importance in the measurement of Newton's gravitational constant G with the angular acceleration method. In this paper, a PID (Proportional-Integral-Derivative) feedback loop is discussed in detail. Experimental results show that, with the feedback control activated, the twist angle of the torsion balance is limited to [Formula: see text] at the signal frequency of 2 mHz, which contributes a [Formula: see text] uncertainty to the G value.
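The role of such a feedback loop can be illustrated with a minimal discrete PID sketch. The plant, gains, and setpoint below are purely illustrative (a generic first-order system, not the torsion-balance controller or its parameters):

```python
class PID:
    """Textbook discrete PID controller (illustrative gains only)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, err):
        self.integral += err * self.dt                  # I term accumulates
        deriv = (err - self.prev_err) / self.dt         # D term damps
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# hold a first-order plant x' = -x + u at the setpoint 1.0
dt = 0.01
pid = PID(kp=2.0, ki=1.0, kd=0.05, dt=dt)
x = 0.0
for _ in range(5000):                                   # 50 s simulated
    u = pid.update(1.0 - x)
    x += (-x + u) * dt                                  # forward Euler step
```

The integral term is what removes the steady-state error, which is the same reason integral action matters when a controller must pin a torsion balance to a fixed twist angle.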
On the Use of Accelerated Aging Methods for Screening High Temperature Polymeric Composite Materials
NASA Technical Reports Server (NTRS)
Gates, Thomas S.; Grayson, Michael A.
1999-01-01
A rational approach to the problem of accelerated testing of high temperature polymeric composites is discussed. The methods provided are considered tools useful in the screening of new materials systems for long-term application to extreme environments that include elevated temperature, moisture, oxygen, and mechanical load. The need for reproducible mechanisms, indicator properties, and real-time data are outlined as well as the methodologies for specific aging mechanisms.
Fattebert, J
2008-07-29
We describe an iterative algorithm to solve electronic structure problems in Density Functional Theory. The approach is presented as a Subspace Accelerated Inexact Newton (SAIN) solver for the non-linear Kohn-Sham equations. It is related to a class of iterative algorithms known as RMM-DIIS in the electronic structure community. The method is illustrated with examples of real applications using a finite difference discretization and multigrid preconditioning.
Kauffman, R.
1993-04-01
This report presents results of a literature search performed to identify analytical techniques suitable for accelerated screening of chemical and thermal stabilities of different refrigerant/lubricant combinations. The search focused on three areas: chemical stability data of HFC-134a and other non-chlorine-containing refrigerant candidates; chemical stability data of CFC-12, HCFC-22, and other chlorine-containing refrigerants; and accelerated thermal analytical techniques. The literature was catalogued and an abstract was written for each journal article or technical report. Several thermal analytical techniques were identified as candidates for development into accelerated screening tests. They are easy to operate, are common to most laboratories, and are expected to produce refrigerant/lubricant stability evaluations which agree with the current stability test, ANSI/ASHRAE (American National Standards Institute/American Society of Heating, Refrigerating, and Air-Conditioning Engineers) Standard 97-1989, "Sealed Glass Tube Method to Test the Chemical Stability of Material for Use Within Refrigerant Systems." Initial results from one accelerated thermal analytical candidate, DTA, are presented for CFC-12/mineral oil and HCFC-22/mineral oil combinations. Also described is the research which will be performed in Part II to optimize the selected candidate.
A hybrid data acquisition system for magnetic measurements of accelerator magnets
Wang, X.; Hafalia, R.; Joseph, J.; Lizarazo, J.; Martchevsky, M.; Sabbi, G. L.
2011-06-03
A hybrid data acquisition system was developed for magnetic measurements of superconducting accelerator magnets at LBNL. It consists of a National Instruments dynamic signal acquisition (DSA) card and two Metrolab fast digital integrator (FDI) cards. The DSA card records the induced voltage signals from the rotating probe, while the FDI cards record the flux increment integrated over a certain angular step. This allows comparison of the measurements performed with the two types of cards. In this note, the setup and testing of the system are summarized. With a probe rotating at a speed of 0.5 Hz, the multipole coefficients of two magnets were measured with the hybrid system. The coefficients from the DSA and FDI cards agree with each other, indicating that the numerical integration of the raw voltage acquired by the DSA card is comparable to the performance of the FDI card in the current measurement setup.
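The consistency check between the two acquisition paths can be sketched with a toy rotating-coil model. The flux waveform, rotation rate, and sampling parameters below are assumptions for illustration only; the point is that numerically integrating the sampled voltage (the DSA-style path) reproduces the flux increments per angular step (what an FDI-style integrator reports):

```python
import numpy as np

def trapz(y, x):
    """Simple trapezoidal rule (avoids NumPy version differences)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

f_rot = 0.5                          # assumed probe rotation frequency, Hz
w = 2.0 * np.pi * f_rot
t = np.linspace(0.0, 2.0, 20001)     # one revolution, dense sampling
phi = 1e-3 * np.cos(w * t)           # toy flux linkage seen by the coil, Wb
v = np.gradient(-phi, t)             # induced voltage samples, V = -dPhi/dt

step = 1000                          # samples per angular bin
flux_from_voltage = [trapz(-v[i:i + step + 1], t[i:i + step + 1])
                     for i in range(0, len(t) - 1, step)]
flux_increments = [phi[i + step] - phi[i] for i in range(0, len(t) - 1, step)]
```

Agreement of the two lists is the toy analogue of the DSA/FDI comparison reported in the note.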
Acceleration of curing of resin composite at the bottom surface using slow-start curing methods.
Yoshikawa, Takako; Morigami, Makoto; Sadr, Alireza; Tagami, Junji
2013-01-01
The aim of this study was to evaluate the effect of two slow-start curing methods on acceleration of the curing of resin composite specimens at the bottom surface. The light-cured resin composite was polymerized using one of three curing techniques: (1) 600 mW/cm² for 60 s, (2) 270 mW/cm² for 10 s + 0-s interval + 600 mW/cm² for 50 s, and (3) 270 mW/cm² for 10 s + 5-s interval + 600 mW/cm² for 50 s. After light curing, the Knoop hardness number was measured at the top and bottom surfaces of the resin specimens. The slow-start curing method with the 5-s interval caused greater acceleration of curing of the resin composite at the bottom surface of the specimens than the slow-start curing method with the 0-s interval. The light-cured resin composite, which had increased contrast ratios during polymerization, showed acceleration of curing at the bottom surface.
Verification of directed self-assembly (DSA) guide patterns through machine learning
NASA Astrophysics Data System (ADS)
Shim, Seongbo; Cai, Sibo; Yang, Jaewon; Yang, Seunghune; Choi, Byungil; Shin, Youngsoo
2015-03-01
Verification of full-chip DSA guide patterns (GPs) through simulation is not practical due to long runtime. We develop a decision function (or functions) that receives n geometry parameters of a GP as inputs and predicts whether the GP faithfully produces the desired contacts (good) or not (bad). We take a few sample GPs to construct the function; DSA simulations are performed for each GP to decide whether it is good or bad, and the decision is marked in n-dimensional space. The hyperplane that separates good marks from bad marks in that space is determined through a machine learning process and corresponds to our decision function. We try both a single global function that can be applied to any GP type and a series of functions in which each function is customized for a different GP type; they are then compared and assessed in 10nm technology.
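The separating-hyperplane step can be illustrated with a minimal linear-classifier sketch. Everything here is an assumption for illustration: synthetic geometry parameters and labels, and plain logistic regression standing in for whatever learning algorithm the authors used; the learned hyperplane w·x + b = 0 plays the role of the decision function:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training set: each GP described by n=2 geometry
# parameters (e.g. width, spacing); label 1 = "good", 0 = "bad".
X = rng.normal(size=(200, 2))
y = (X[:, 0] + 2.0 * X[:, 1] > 0).astype(float)   # separable toy labels

# Logistic regression by gradient descent; the fitted hyperplane
# w.x + b = 0 is the decision function.
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))        # predicted probabilities
    g = p - y                                     # gradient of log-loss
    w -= 0.1 * (X.T @ g) / len(y)
    b -= 0.1 * g.mean()

accuracy = ((X @ w + b > 0) == (y == 1)).mean()
```

Once fitted, evaluating the sign of w·x + b for a candidate GP is essentially free, which is what makes full-chip screening tractable compared with per-GP DSA simulation.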
Long-time atomistic dynamics through a new self-adaptive accelerated molecular dynamics method
NASA Astrophysics Data System (ADS)
Gao, N.; Yang, L.; Gao, F.; Kurtz, R. J.; West, D.; Zhang, S.
2017-04-01
A self-adaptive accelerated molecular dynamics method is developed to model infrequent atomic-scale events, especially those events that occur on a rugged free-energy surface. Key in the new development is the use of the total displacement of the system at a given temperature to construct a boost-potential, which is slowly increased to accelerate the dynamics. By allowing the system to evolve from one steady-state configuration to another by overcoming the transition state, this self-evolving approach makes it possible to explore the coupled motion of species that migrate on vastly different time scales. The migration of single vacancies (V) and small He-V clusters, and the growth of nano-sized He-V clusters in Fe, for times on the order of seconds, are studied by this new method. An interstitial-assisted mechanism is first explored for the migration of a helium-rich He-V cluster, while a new two-component Ostwald ripening mechanism is suggested for He-V cluster growth.
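The time-rescaling idea behind boost-potential methods can be sketched as follows. This uses the generic hyperdynamics-style factor exp(ΔV/kT) with made-up bias values; the paper's self-adaptive construction of the boost potential from total displacement is not reproduced here:

```python
import math

K_B = 8.617333262e-5          # Boltzmann constant, eV/K

def boost_factor(delta_v_ev, temperature_k):
    """Hyperdynamics-style time rescaling: one MD step taken on the
    boosted potential surface advances physical time by this factor."""
    return math.exp(delta_v_ev / (K_B * temperature_k))

# accumulate boosted physical time over a toy trajectory of bias values
dt_md = 1e-15                                   # 1 fs MD step
bias_series_ev = [0.30, 0.55, 0.42, 0.50]       # illustrative dV samples
t_phys = sum(dt_md * boost_factor(dv, 300.0) for dv in bias_series_ev)
```

Even modest bias energies at room temperature yield enormous factors, which is how femtosecond MD steps can add up to the seconds-scale times quoted in the abstract.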
Novel methods in the Particle-In-Cell accelerator Code-Framework Warp
Vay, J-L; Grote, D. P.; Cohen, R. H.; Friedman, A.
2012-12-26
The Particle-In-Cell (PIC) Code-Framework Warp is being developed by the Heavy Ion Fusion Science Virtual National Laboratory (HIFS-VNL) to guide the development of accelerators that can deliver beams suitable for high-energy density experiments and implosion of inertial fusion capsules. It is also applied in various areas outside the Heavy Ion Fusion program to the study and design of existing and next-generation high-energy accelerators, including the study of electron cloud effects and laser wakefield acceleration for example. This study presents an overview of Warp's capabilities, summarizing recent original numerical methods that were developed by the HIFS-VNL (including PIC with adaptive mesh refinement, a large-timestep 'drift-Lorentz' mover for arbitrarily magnetized species, a relativistic Lorentz invariant leapfrog particle pusher, simulations in Lorentz-boosted frames, an electromagnetic solver with tunable numerical dispersion and efficient stride-based digital filtering), with special emphasis on the description of the mesh refinement capability. In addition, selected examples of the applications of the methods to the abovementioned fields are given.
Influence of tungsten fiber's slow drift on the measurement of G with angular acceleration method.
Luo, Jie; Wu, Wei-Huang; Xue, Chao; Shao, Cheng-Gang; Zhan, Wen-Ze; Wu, Jun-Fei; Milyukov, Vadim
2016-08-01
In the measurement of the gravitational constant G with the angular acceleration method, the equilibrium position of a torsion pendulum with a tungsten fiber undergoes a linear slow drift, which results in a quadratic slow drift in the angular velocity of the torsion-balance turntable under the feedback control unit. The accurate amplitude determination of the useful angular acceleration signal with known frequency is biased by the linear slow drift and by the coupling effect of the drifting equilibrium position with the room-fixed gravitational background signal. We calculate the influences of the linear slow drift and of the complex coupling effect on the value of G, respectively. The result shows that the bias of the linear slow drift on G is 7 ppm, and the influence of the coupling effect is less than 1 ppm.
Banerjee, Amartya S.; Suryanarayana, Phanish; Pask, John E.
2016-01-21
Pulay's Direct Inversion in the Iterative Subspace (DIIS) method is one of the most widely used mixing schemes for accelerating the self-consistent solution of electronic structure problems. In this work, we propose a simple generalization of DIIS in which Pulay extrapolation is performed at periodic intervals rather than on every self-consistent field iteration, with linear mixing performed on all other iterations. We demonstrate through numerical tests on a wide variety of materials systems, in the framework of density functional theory, that the proposed generalization of Pulay's method significantly improves its robustness and efficiency.
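A minimal sketch of periodic Pulay mixing, with a generic linear fixed-point map standing in for a real Kohn-Sham SCF cycle; the history length, mixing parameter, and period are illustrative choices, not the values studied in the paper:

```python
import numpy as np

def periodic_pulay(g, x0, beta=0.5, period=3, hist=5, iters=100, tol=1e-12):
    """Fixed-point solve x = g(x): linear mixing on most iterations,
    Pulay (DIIS) extrapolation every `period`-th iteration."""
    x = np.asarray(x0, dtype=float).copy()
    X, F = [], []                          # short history of iterates/residuals
    for k in range(1, iters + 1):
        f = g(x) - x                       # residual of the fixed-point map
        if np.linalg.norm(f) < tol:
            break
        X.append(x.copy()); F.append(f.copy())
        X, F = X[-hist:], F[-hist:]
        if k % period == 0 and len(F) > 1:
            # minimize ||sum_i c_i F_i|| subject to sum_i c_i = 1,
            # eliminating the constraint via c_m = 1 - sum of the others
            D = np.column_stack([fi - F[-1] for fi in F[:-1]])
            c = np.linalg.lstsq(D, -F[-1], rcond=None)[0]
            c = np.append(c, 1.0 - c.sum())
            x = sum(ci * (xi + beta * fi) for ci, xi, fi in zip(c, X, F))
        else:
            x = x + beta * f               # plain linear (damped) mixing
    return x

# linear test map with a known fixed point (stands in for an SCF cycle)
G = np.diag([0.9, 0.5, -0.5])
b = np.array([1.0, 1.0, 1.0])
fixed = np.linalg.solve(np.eye(3) - G, b)
x = periodic_pulay(lambda y: G @ y + b, np.ones(3))
```

Setting `period=1` recovers standard per-iteration DIIS; larger periods trade a little acceleration for the robustness gain the abstract describes.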
Dependence of the Spectrum of Shock-Accelerated Ions on the Dynamics at the Shock Crossing
NASA Astrophysics Data System (ADS)
Gedalin, M.; Dröge, W.; Kartavykh, Y. Y.
2016-12-01
Diffusive shock acceleration (DSA) of ions occurs due to pitch-angle diffusion in the upstream and downstream regions of the shock and multiple crossings of the shock by these ions. The classical DSA theory implies continuity of the distribution at the shock transition and predicts a universal spectrum of accelerated particles, depending only on the ratio of the upstream and downstream fluid speeds. However, the ion dynamics at the shock front occurs within a collision-free region and is gyrophase dependent. The ion fluxes have to be continuous at the shock front. The matching conditions for the gyrophase-averaged distribution functions at the shock transition are formulated in terms of the transmission and reflection probabilities. These probabilities depend on the shock angle and the magnetic compression, as does the power spectrum of accelerated ions. The spectral index is expressed in terms of the reflectivity. The spectrum is typically harder than that predicted by the classical DSA theory.
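For reference, the universal spectrum of the classical theory mentioned above follows from the compression ratio alone; a small sketch of that textbook baseline (the paper's gyrophase-dependent transmission/reflection corrections are not reproduced here):

```python
def momentum_index(r):
    """Classical test-particle DSA: f(p) ~ p^(-q) with q = 3r/(r-1),
    where r = u1/u2 is the shock compression ratio."""
    return 3.0 * r / (r - 1.0)

def energy_index(r):
    """Equivalent index for relativistic particles, N(E) ~ E^(-s),
    with s = q - 2 = (r + 2)/(r - 1)."""
    return (r + 2.0) / (r - 1.0)
```

A strong shock (r = 4) gives q = 4 and s = 2; weaker shocks give steeper spectra, which is the baseline against which the "typically harder" result of the abstract is measured.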
A chain-of-states acceleration method for the efficient location of minimum energy paths
Hernández, E. R.; Herrero, C. P.; Soler, J. M.
2015-11-14
We describe a robust and efficient chain-of-states method for computing Minimum Energy Paths (MEPs) associated with barrier-crossing events in polyatomic systems, which we call the acceleration method. The path is parametrized in terms of a continuous variable t ∈ [0, 1] that plays the role of time. In contrast to previous chain-of-states algorithms such as the nudged elastic band or string methods, where the positions of the states in the chain are taken as variational parameters in the search for the MEP, our strategy is to formulate the problem in terms of the second derivatives of the coordinates with respect to t, i.e., the state accelerations. We show that this results in a very simple and efficient method for determining the MEP. We describe the application of the method to a series of test cases, including two low-dimensional problems and the Stone-Wales transformation in C60.
ERIC Educational Resources Information Center
Parks, Paula L.
2014-01-01
Most developmental community college students are not completing the composition sequence successfully. This mixed-methods study examined acceleration as a way to help developmental community college students complete the composition sequence more quickly and more successfully. Acceleration is a curricular redesign that includes challenging…
MULTILEVEL ACCELERATION OF STOCHASTIC COLLOCATION METHODS FOR PDE WITH RANDOM INPUT DATA
Webster, Clayton G; Jantsch, Peter A; Teckentrup, Aretha L; Gunzburger, Max D
2013-01-01
Stochastic Collocation (SC) methods for stochastic partial differential equations (SPDEs) suffer from the curse of dimensionality, whereby increases in the stochastic dimension cause an explosion of computational effort. To combat these challenges, multilevel approximation methods seek to decrease computational complexity by balancing spatial and stochastic discretization errors. As a form of variance reduction, multilevel techniques have been successfully applied to Monte Carlo (MC) methods, but they may be extended to accelerate other methods for SPDEs in which the stochastic and spatial degrees of freedom are decoupled. This article presents a general convergence and computational complexity analysis of a multilevel method for SPDEs, demonstrating its advantages over standard, single-level approximation. The numerical results highlight conditions under which multilevel sparse-grid SC is preferable to the more traditional MC and SC approaches.
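The level-splitting idea is easiest to see in a Monte Carlo setting. The toy sketch below (not from the article) estimates E[P] by telescoping over levels, E[P_L] = E[P_0] + sum over l of E[P_l - P_{l-1}], with a midpoint-rule "resolution" standing in for the spatial discretization of a PDE solve; the integrand, sample counts, and level definition are all illustrative assumptions:

```python
import numpy as np

def p_level(a, l):
    """'Spatial' approximation at level l: midpoint rule with 2**l points for
    P(a) = integral_0^1 sin(a*x) dx, a toy stand-in for a PDE solve with a
    random input a."""
    n = 2 ** l
    x = (np.arange(n) + 0.5) / n
    return np.sin(a * x).mean()

def mlmc(L=5, n_samples=2000):
    """Multilevel estimator of E[P] via the telescoping sum
    E[P_L] = E[P_0] + sum_{l=1}^{L} E[P_l - P_{l-1}],
    with an independent sample batch per level."""
    rng = np.random.default_rng(0)
    est = 0.0
    for l in range(L + 1):
        a = rng.uniform(0.0, 1.0, n_samples)
        corr = [p_level(ai, l) - (p_level(ai, l - 1) if l > 0 else 0.0)
                for ai in a]
        est += np.mean(corr)
    return est

est = mlmc()   # true value: integral_0^1 (1 - cos a)/a da, about 0.2398
```

Because the level corrections P_l - P_{l-1} shrink as l grows, most samples can be taken on coarse (cheap) levels, which is the cost-balancing argument the abstract makes for multilevel SC as well.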
Computational method to estimate Single Event Upset rates in an accelerator environment
NASA Astrophysics Data System (ADS)
Huhtinen, M.; Faccio, F.
2000-08-01
We present a new method to estimate Single Event Upsets (SEU) in a hadron accelerator environment, which is characterized by a complicated radiation spectrum. Our method is based on first principles, i.e. explicit generation and transport of nuclear fragments and detailed accounting of energy loss by ionization. However, instead of also simulating the behaviour of the circuit, we use a Weibull fit to experimental heavy-ion SEU data to quantify the SEU sensitivity of the circuit. Thus, in principle, we do not need to know details of the circuit, and our method is almost free of adjustable parameters; we only need a reasonable guess for the size of the Sensitive Volume (SV). We show by comparison with experimental data that our method predicts SEU cross sections for protons rather accurately. We then indicate with an example how the method could be applied to predict SEU rates at the forthcoming LHC experiments.
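A Weibull parametrization of the heavy-ion SEU cross section versus LET is the standard choice for the fit mentioned above. The sketch below illustrates the functional form on synthetic, hypothetical data (the LET values, units, and parameter values are invented for illustration, not taken from the paper):

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull_xs(let, sigma_sat, let0, width, shape):
    """Weibull form commonly fitted to heavy-ion SEU data:
    sigma(LET) = sigma_sat * (1 - exp(-((LET - LET0)/W)**s)) above the
    threshold LET0, and zero below it."""
    z = np.clip((let - let0) / width, 0.0, None)
    return sigma_sat * (1.0 - np.exp(-z ** shape))

# Hypothetical device data (LET in MeV*cm^2/mg; cross section in arbitrary units)
let = np.array([2.0, 5.0, 10.0, 20.0, 40.0, 60.0, 80.0])
xs = weibull_xs(let, 1.0, 1.0, 20.0, 1.5)   # synthetic noise-free "measurements"

popt, _ = curve_fit(weibull_xs, let, xs, p0=[0.8, 0.5, 15.0, 1.2])
```

Once the four parameters are fitted, the circuit's response is fully characterized, which is why the rate-prediction method needs no further circuit detail beyond the sensitive-volume size.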
NASA Astrophysics Data System (ADS)
Fridrichová, Marcela; Dvořák, Karel; Gazdič, Dominik
2016-03-01
The single most reliable indicator of a material's durability is its performance in long-term tests, which cannot always be carried out due to a limited time budget. The second option is to perform some kind of accelerated durability test. The aim of the work described in this article was to develop a method for the accelerated durability testing of binders. It was decided that the Arrhenius equation approach and the theory of chemical reaction kinetics would be applied in this case. The degradation process was simplified to a single quantifiable parameter: compressive strength. A model hydraulic binder based on fluidised bed combustion ash (FBC ash) was chosen as the test subject for the development of the method. The model binder and its hydration products were tested by high-temperature X-ray diffraction analysis. The main hydration product of this binder was ettringite. Due to the thermodynamic instability of this mineral, it was possible to verify the proposed method via long-term testing. In order to accelerate the chemical reactions in the binder, four combinations of two temperatures (65 and 85°C) and two relative humidities (14 and 100%) were used. The upper temperature limit was chosen on the basis of the high-temperature X-ray results on the decomposition of ettringite. The calculation formulae for the accelerated durability tests were derived from the decrease in compressive strength under the four above-mentioned combinations of conditions. The mineralogical composition of the binder after degradation was also described: the final degradation product was gypsum under dry conditions and monosulphate under wet conditions. The validity of the method and formula was subsequently verified by means of long-term testing. A very good correspondence between the calculated and real values was achieved, with deviations not exceeding 5%. The designed and verified method
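The Arrhenius step underlying such accelerated tests can be made concrete. The sketch below computes the acceleration factor between a service temperature and an elevated test temperature; the 60 kJ/mol activation energy is a hypothetical value, not one determined in this study:

```python
import math

R = 8.314  # universal gas constant, J/(mol*K)

def acceleration_factor(ea_kj_per_mol, t_service_c, t_test_c):
    """Arrhenius acceleration factor between a service temperature and an
    elevated test temperature: AF = exp[(Ea/R) * (1/T_service - 1/T_test)],
    with temperatures in kelvin."""
    t_service = t_service_c + 273.15
    t_test = t_test_c + 273.15
    return math.exp(ea_kj_per_mol * 1e3 / R * (1.0 / t_service - 1.0 / t_test))

# Hypothetical activation energy of 60 kJ/mol; service at 20 °C, tests at 65/85 °C
af_85 = acceleration_factor(60.0, 20.0, 85.0)
af_65 = acceleration_factor(60.0, 20.0, 65.0)
```

The factor tells how many hours of service one hour at the test temperature represents, which is how a short elevated-temperature campaign is mapped onto a long-term strength-loss prediction.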
Vibration-Based Method Developed to Detect Cracks in Rotors During Acceleration Through Resonance
NASA Technical Reports Server (NTRS)
Sawicki, Jerzy T.; Baaklini, George Y.; Gyekenyesi, Andrew L.
2004-01-01
In recent years, there has been an increasing interest in developing rotating machinery shaft crack-detection methodologies and online techniques. Shaft crack problems present a significant safety and loss hazard in nearly every application of modern turbomachinery. In many cases, the rotors of modern machines are rapidly accelerated from rest to operating speed, to reduce the excessive vibrations at the critical speeds. The vibration monitoring during startup or shutdown has been receiving growing attention (ref. 1), especially for machines such as aircraft engines, which are subjected to frequent starts and stops, as well as high speeds and acceleration rates. It has been recognized that the presence of angular acceleration strongly affects the rotor's maximum response to unbalance and the speed at which it occurs. Unfortunately, conventional nondestructive evaluation (NDE) methods have unacceptable limits in terms of their application for online crack detection. Some of these techniques are time consuming and inconvenient for turbomachinery service testing. Almost all of these techniques require that the vicinity of the damage be known in advance, and they can provide only local information, with no indication of the structural strength at a component or system level. In addition, the effectiveness of these experimental techniques is affected by the high measurement noise levels existing in complex turbomachine structures. Therefore, the use of vibration monitoring along with vibration analysis has been receiving increasing attention.
Melendez, Johan H.; Santaus, Tonya M.; Brinsley, Gregory; Kiang, Daniel; Mali, Buddha; Hardick, Justin; Gaydos, Charlotte A.; Geddes, Chris D.
2016-01-01
Nucleic acid-based detection of gonorrhea infections typically requires a two-step process: isolation of the nucleic acid, followed by detection of the genomic target, often with PCR-based approaches. In an effort to improve on current detection approaches, we have developed a unique two-step microwave-accelerated approach for rapid extraction and detection of Neisseria gonorrhoeae (GC) DNA. Our approach is based on the use of highly focused microwave radiation to rapidly lyse bacterial cells, release, and subsequently fragment microbial DNA. The DNA target is then detected by a process known as microwave-accelerated metal-enhanced fluorescence (MAMEF), an ultra-sensitive direct DNA detection technique. In the present study, we show that highly focused microwaves at 2.45 GHz, using 12.3 mm gold film equilateral triangles, are able to rapidly lyse both bacterial cells and fragment DNA in a time- and microwave power-dependent manner. Detection of the extracted DNA can be performed by MAMEF, without the need for DNA amplification, in less than 10 minutes total time, or by other PCR-based approaches. Collectively, the use of a microwave-accelerated method for the release and detection of DNA represents a significant step forward towards the development of a point-of-care (POC) platform for detection of gonorrhea infections. PMID:27325503
GPU-accelerated Monte Carlo simulation of particle coagulation based on the inverse method
NASA Astrophysics Data System (ADS)
Wei, J.; Kruis, F. E.
2013-09-01
Simulating particle coagulation using Monte Carlo methods is in general a challenging computational task due to its numerical complexity and computing cost. Currently, the lowest computing costs are obtained by using a graphics processing unit (GPU), originally developed for speeding up graphics processing in the consumer market. In this article we present a GPU implementation of a Monte Carlo method based on the inverse scheme for simulating particle coagulation. The abundant data parallelism embedded within the Monte Carlo method is explained, as it allows an efficient parallelization of the MC code on the GPU. Furthermore, the computational accuracy of the MC code on the GPU was validated against a benchmark, a CPU-based discrete-sectional method. To evaluate the performance gains of the GPU, the computing time on the GPU was compared against that of its sequential counterpart on the CPU. The measured speedups show that the GPU can accelerate the execution of the MC code by a factor of 10-100, depending on the chosen number of simulation particles. The algorithm shows a linear dependence of computing time on the number of simulation particles, which is a remarkable result in view of the O(n²) dependence of coagulation.
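The GPU implementation itself is beyond a short sketch, but the core event-driven Monte Carlo coagulation loop, and the kind of constant-kernel benchmark used for validation, can be illustrated serially (a simplified direct-simulation sketch, not the paper's inverse scheme; all parameters are illustrative):

```python
import numpy as np

def mc_coagulation(n0=20000, beta=1.0, t_end=1.0, seed=1):
    """Serial Monte Carlo of coagulation with a constant kernel beta: each
    event merges one pair of simulation particles; time advances by the
    exponential waiting time of the total pairwise rate (volume scaled so
    that the initial concentration is 1)."""
    rng = np.random.default_rng(seed)
    n, t = n0, 0.0
    while n > 1:
        total_rate = beta * n * (n - 1) / (2.0 * n0)
        dt = rng.exponential(1.0 / total_rate)
        if t + dt > t_end:
            break
        t += dt
        n -= 1                      # a coagulation event removes one particle
    return n

n_final = mc_coagulation()
# Constant-kernel benchmark: c(t) = c0 / (1 + beta*c0*t/2), so n/n0 -> 2/3 at t = 1
```

Every event touches one random pair, so a time step's work over many simulation particles is embarrassingly parallel, which is the data parallelism the article exploits on the GPU.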
NASA Astrophysics Data System (ADS)
Schindler, Matthias; Kretschmer, Wolfgang; Scharf, Andreas; Tschekalinskij, Alexander
2016-05-01
Three new methods to sample and prepare various carbonyl compounds for radiocarbon measurements were developed and tested. Two of these procedures utilized the Strecker synthetic method to form amino acids from carbonyl compounds with either sodium cyanide or trimethylsilyl cyanide. The third procedure used semicarbazide to form crystalline carbazones with the carbonyl compounds. The resulting amino acids and semicarbazones were then separated and purified using thin layer chromatography. The separated compounds were then combusted to CO2 and reduced to graphite to determine 14C content by accelerator mass spectrometry (AMS). All of these methods were also compared with the standard carbonyl compound sampling method wherein a compound is derivatized with 2,4-dinitrophenylhydrazine and then separated by high-performance liquid chromatography (HPLC).
Accelerated molecular dynamics and equation-free methods for simulating diffusion in solids.
Deng, Jie; Zimmerman, Jonathan A.; Thompson, Aidan Patrick; Brown, William Michael; Plimpton, Steven James; Zhou, Xiao Wang; Wagner, Gregory John; Erickson, Lindsay Crowl
2011-09-01
Many of the most important and hardest-to-solve problems related to the synthesis, performance, and aging of materials involve diffusion through the material or along surfaces and interfaces. These diffusion processes are driven by motions at the atomic scale, but traditional atomistic simulation methods such as molecular dynamics are limited to very short timescales on the order of the atomic vibration period (less than a picosecond), while macroscale diffusion takes place over timescales many orders of magnitude larger. We have completed an LDRD project with the goal of developing and implementing new simulation tools to overcome this timescale problem. In particular, we have focused on two main classes of methods: accelerated molecular dynamics methods that seek to extend the timescale attainable in atomistic simulations, and so-called 'equation-free' methods that combine a fine scale atomistic description of a system with a slower, coarse scale description in order to project the system forward over long times.
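The "equation-free" idea in the second class of methods can be illustrated with a toy coarse projective integrator (a generic textbook-style sketch with invented parameters, not one of the project's codes): short bursts of a fine-scale simulator estimate the slow time derivative, which is then used to project the system forward over a much larger step:

```python
import numpy as np

def fine_step(y, h, lam=-50.0, mu=-1.0):
    """Fine-scale simulator: explicit Euler for a stiff toy system in which a
    fast variable relaxes onto a slow one (a stand-in for atomistic dynamics)."""
    fast, slow = y
    return y + h * np.array([lam * (fast - slow), mu * slow])

def projective_step(y, h_fine=1e-3, n_inner=20, h_jump=0.05):
    """Coarse projective ('equation-free') step: run a short burst of fine
    steps to let the fast modes heal, estimate the slow time derivative from
    the last two burst states, then extrapolate over a large time jump."""
    for _ in range(n_inner):
        y_prev, y = y, fine_step(y, h_fine)
    dydt = (y - y_prev) / h_fine
    return y + h_jump * dydt

# Integrate to t ~ 1 with macro steps of n_inner*h_fine + h_jump = 0.07,
# far larger than the fast timescale 1/|lam| = 0.02 that limits plain Euler.
y, t = np.array([1.0, 1.0]), 0.0
while t < 1.0:
    y = projective_step(y)
    t += 0.07
```

The macro step is set by the slow dynamics rather than the fast relaxation, which is exactly the timescale gap the project's methods aim to bridge for diffusion in solids.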
Acceleration of low-energy ions at parallel shocks with a focused transport model
Zuo, Pingbing; Zhang, Ming; Rassoul, Hamid K.
2013-03-19
Here we present a test particle simulation on the injection and acceleration of low-energy suprathermal particles by parallel shocks with a focused transport model. The focused transport equation contains all necessary physics of shock acceleration, but avoids the limitation of diffusive shock acceleration (DSA) that requires a small pitch angle anisotropy. This simulation verifies that the particles with speeds of a fraction of to a few times the shock speed can indeed be directly injected and accelerated into the DSA regime by parallel shocks. At higher energies starting from a few times the shock speed, the energy spectrum of accelerated particles is a power law with the same spectral index as the solution of standard DSA theory, although the particles are highly anisotropic in the upstream region. The intensity, however, is different from that predicted by DSA theory, indicating a different level of injection efficiency. It is found that the shock strength, the injection speed, and the intensity of an electric cross-shock potential (CSP) jump can affect the injection efficiency of the low-energy particles. A stronger shock has a higher injection efficiency. In addition, if the speed of injected particles is above a few times the shock speed, the produced power-law spectrum is consistent with the prediction of standard DSA theory in both its intensity and spectrum index with an injection efficiency of 1. CSP can increase the injection efficiency through direct particle reflection back upstream, but it has little effect on the energetic particle acceleration once the speed of injected particles is beyond a few times the shock speed. Finally, this test particle simulation proves that the focused transport theory is an extension of DSA theory with the capability of predicting the efficiency of particle injection.
Doran, Kara S.; Howd, Peter A.; Sallenger, Asbury H.
2016-01-04
Recent studies, and most of their predecessors, use tide gage data to quantify sea-level (SL) acceleration, A_SL(t). In the current study, three techniques were used to calculate acceleration from tide gage data. Of those examined, the two techniques based on sliding a regression window through the time series proved more robust than the technique that fits a single quadratic form to the entire time series, particularly when there is temporal variation in the magnitude of the acceleration. The single-fit quadratic regression method has been the most commonly used technique for determining acceleration in tide gage data, and its inability to account for time-varying acceleration may explain some of the inconsistent findings between investigators. Properly quantifying A_SL(t) from field measurements is of particular importance in evaluating numerical models of past, present, and future sea-level rise (SLR) resulting from anticipated climate change.
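A minimal version of the sliding-regression-window technique (a generic sketch, not the authors' exact processing chain) fits a quadratic in a window centered on each time and reads the acceleration off the quadratic coefficient:

```python
import numpy as np

def sliding_acceleration(t, y, half_window):
    """Acceleration from a sliding quadratic regression: at each point, fit
    y = a + b*(t - tc) + (c/2)*(t - tc)**2 over |t - tc| <= half_window;
    the local acceleration is c = 2 * (fitted quadratic coefficient)."""
    acc = np.full(t.size, np.nan)
    for i in range(t.size):
        m = np.abs(t - t[i]) <= half_window
        if m.sum() < 5:                           # too few points for a stable fit
            continue
        coef = np.polyfit(t[m] - t[i], y[m], 2)   # highest power first
        acc[i] = 2.0 * coef[0]
    return acc

# Synthetic record with a known constant acceleration of 0.02 units/yr^2
t = np.linspace(0.0, 100.0, 201)
y = 3.0 * t + 0.5 * 0.02 * t ** 2
acc = sliding_acceleration(t, y, half_window=10.0)
```

Unlike a single quadratic fit over the whole record, this yields acceleration as a function of time, so temporal variation in A_SL(t) is visible rather than averaged away.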
Dynamic inversion method based on the time-staggered stereo-modeling scheme and its acceleration
NASA Astrophysics Data System (ADS)
Jing, Hao; Yang, Dinghui; Wu, Hao
2016-12-01
A set of second-order differential equations describing the space-time behaviour of the derivatives of displacement with respect to model parameters (i.e. waveform sensitivities) is obtained by taking the derivative of the original wave equations. The dynamic inversion method obtains the sensitivities of the seismic displacement field with respect to earth properties directly, by solving differential equations for them instead of constructing the sensitivities from the displacement field itself. In this study, we take a new perspective on the dynamic inversion method and use acceleration approaches to reduce its computational time and memory usage, improving its ability to perform high-resolution imaging. The dynamic inversion method, which can simultaneously use different waves and multicomponent observation data, is appropriate for directly inverting elastic parameters, medium density, or wave velocities. Full wavefield information is utilized as much as possible, at the expense of a larger amount of computation. To mitigate the computational burden, two accelerations are proposed from a computer-implementation point of view: one is source encoding, which uses a linear combination of all shots; the other is to reduce the amount of calculation in forward modeling. We applied a new finite-difference (FD) method to the dynamic inversion to improve the computational accuracy and speed. Numerical experiments indicated that the new FD method can effectively suppress the numerical dispersion caused by the discretization of the wave equations, resulting in enhanced computational efficiency with less memory cost for seismic modeling and inversion based on the full wave equations. We present inversion results for both checkerboard and Marmousi models to demonstrate the validity of the method, which remains convergent even when the initial model deviates substantially from the true model. Besides, parallel calculations can be easily
An improved method for calibrating the gantry angles of linear accelerators.
Higgins, Kyle; Treas, Jared; Jones, Andrew; Fallahian, Naz Afarin; Simpson, David
2013-11-01
Linear particle accelerators (linacs) are widely used in radiotherapy; therefore, accurate calibration of gantry angles must be performed to prevent the exposure of healthy tissue to excessive radiation. One of the common methods for calibrating these angles is the spirit-level method. In this study, a new technique for calibrating the gantry angle of a linear accelerator was examined. A cubic phantom was constructed of Styrofoam with small lead balls embedded at specific locations in the foam block. Several x-ray images of this phantom were taken at various gantry angles using an electronic portal imaging device on the linac. The deviations of the gantry angles were determined by analyzing the images with a customized program written in ImageJ (National Institutes of Health). Gantry angles of 0, 90, 180, and 270 degrees were chosen, and the results of both calibration methods were compared for each of these angles. The results revealed that the image method was more precise than the spirit-level method. For the image method, the averages of the measured values for the selected angles of 0, 90, 180, and 270 degrees were -0.086 ± 0.011, 90.018 ± 0.011, 180.178 ± 0.015, and 269.972 ± 0.006 degrees, respectively. The corresponding averages for the spirit-level method were 0.2 ± 0.03, 90.2 ± 0.04, 180.1 ± 0.01, and 269.9 ± 0.05 degrees, respectively. Based on these findings, the new method was shown to be a reliable technique for calibrating the gantry angle.
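The essence of the image method is extracting an angle from the imaged fiducial positions. A minimal sketch, with invented marker coordinates and a simplified two-ball geometry (the actual analysis used several embedded balls and an ImageJ program):

```python
import math

def gantry_angle_from_markers(p_top, p_bottom, nominal_deg):
    """Estimate the delivered gantry angle from the imaged pixel positions of
    two fiducial balls that are aligned along the beam axis at the nominal
    angle; any lateral offset between them shows up as an angular tilt."""
    dx = p_top[0] - p_bottom[0]          # lateral offset, pixels
    dy = p_bottom[1] - p_top[1]          # vertical separation (image y is down)
    return nominal_deg + math.degrees(math.atan2(dx, dy))

# Hypothetical EPID image: balls imaged at (512.0, 100.0) and (510.5, 600.0) px
angle = gantry_angle_from_markers((512.0, 100.0), (510.5, 600.0), 90.0)
```

Because the angle comes from a sub-pixel centroid separation over hundreds of pixels, the achievable angular precision is far finer than what a spirit level resolves, consistent with the smaller uncertainties reported for the image method.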
Pratapa, Phanisri P.; Suryanarayana, Phanish; Pask, John E.
2015-12-01
We employ Anderson extrapolation to accelerate the classical Jacobi iterative method for large, sparse linear systems. Specifically, we utilize extrapolation at periodic intervals within the Jacobi iteration to develop the Alternating Anderson–Jacobi (AAJ) method. We verify the accuracy and efficacy of AAJ in a range of test cases, including nonsymmetric systems of equations. We demonstrate that AAJ possesses a favorable scaling with system size that is accompanied by a small prefactor, even in the absence of a preconditioner. In particular, we show that AAJ is able to accelerate the classical Jacobi iteration by over four orders of magnitude, with speed-ups that increase as the system gets larger. Moreover, we find that AAJ significantly outperforms the Generalized Minimal Residual (GMRES) method in the range of problems considered here, with the relative performance again improving with size of the system. As a result, the proposed method represents a simple yet efficient technique that is particularly attractive for large-scale parallel solutions of linear systems of equations.
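A compact sketch of the alternating idea (an illustrative reimplementation, not the authors' code) applies a standard least-squares Anderson extrapolation every p-th weighted-Jacobi sweep:

```python
import numpy as np

def aaj(A, b, omega=0.6, p=4, m=4, tol=1e-10, maxit=500):
    """Alternating Anderson-Jacobi sketch: weighted Jacobi steps, with an
    Anderson extrapolation over the last m residual differences every p-th
    iteration."""
    Dinv = 1.0 / np.diag(A)
    x = np.zeros_like(b)
    X, F = [], []                       # iterate and residual histories
    for it in range(1, maxit + 1):
        f = Dinv * (b - A @ x)          # Jacobi-preconditioned residual
        if np.linalg.norm(b - A @ x) < tol * np.linalg.norm(b):
            return x, it
        X.append(x.copy()); F.append(f.copy())
        X, F = X[-m:], F[-m:]
        if it % p == 0 and len(F) > 1:
            # Anderson step: least-squares combination of stored residuals
            dX = np.column_stack([xi - X[0] for xi in X[1:]])
            dF = np.column_stack([fi - F[0] for fi in F[1:]])
            gamma, *_ = np.linalg.lstsq(dF, F[-1], rcond=None)
            x = X[-1] + omega * F[-1] - (dX + omega * dF) @ gamma
        else:
            x = x + omega * f           # weighted Jacobi sweep
    return x, maxit

# Demo on a small diagonally dominant system
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 5.0, 2.0],
              [0.0, 2.0, 6.0]])
b = np.array([1.0, 2.0, 3.0])
x_aaj, iters = aaj(A, b)
```

The Jacobi sweeps between extrapolations are purely local (a diagonal scaling and a matrix-vector product), which is what makes the scheme attractive for large-scale parallel solves.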
Contemporary methods of radiosurgery treatment with the Novalis linear accelerator system.
Chen, Joseph C T; Rahimian, Javad; Girvigian, Michael R; Miller, Michael J
2007-01-01
Radiosurgery has emerged as an indispensable component of the multidisciplinary approach to neoplastic, functional, and vascular diseases of the central nervous system. In recent years, a number of newly developed integrated systems have been introduced for radiosurgery and fractionated stereotactic radiotherapy treatments. These modern systems extend the flexibility of radiosurgical treatment in allowing the use of frameless image-guided radiation delivery as well as high-precision fractionated treatments. The Novalis linear accelerator system demonstrates adequate precision and reliability for cranial and extracranial radiosurgery, including functional treatments utilizing either frame-based or frameless image-guided methods.
Practical method and device for enhancing pulse contrast ratio for lasers and electron accelerators
Zhang, Shukui; Wilson, Guy
2014-09-23
An apparatus and method for enhancing pulse contrast ratios for drive lasers and electron accelerators. The invention comprises a mechanical dual-shutter system in which the shutters are placed sequentially, in series, in the laser beam path. Each shutter of the dual-shutter system has an individually operated trigger for opening and closing it. Because the triggers are operated individually, the delay between the opening and closing of the first shutter and that of the second shutter is variable, providing variable differential time windows and enhancement of the pulse contrast ratio.
Gumerov, Nail A; Duraiswami, Ramani
2009-01-01
The development of a fast multipole method (FMM) accelerated iterative solution of the boundary element method (BEM) for the Helmholtz equations in three dimensions is described. The FMM for the Helmholtz equation is significantly different for problems with low and high kD (where k is the wavenumber and D the domain size), and for large problems the method must be switched between levels of the hierarchy. The BEM requires several approximate computations (numerical quadrature, approximations of the boundary shapes using elements), and these errors must be balanced against approximations introduced by the FMM and the convergence criterion for iterative solution. These different errors must all be chosen in a way that, on the one hand, excess work is not done and, on the other, that the error achieved by the overall computation is acceptable. Details of translation operators for low and high kD, choice of representations, and BEM quadrature schemes, all consistent with these approximations, are described. A novel preconditioner using a low accuracy FMM accelerated solver as a right preconditioner is also described. Results of the developed solvers for large boundary value problems with 0.0001 ≲ kD ≲ 500 are presented and shown to perform close to theoretical expectations.
New accelerated charge methods using early destratification applied on flooded lead acid batteries
NASA Astrophysics Data System (ADS)
Mamadou, K.; Nguyen, T. M. P.; Lemaire-Potteau, E.; Glaize, C.; Alzieu, J.
A traditional charge process for flooded lead acid batteries (FLABs) generally lasts from 8 to 14 h. Nowadays, many applications of FLABs require a reduction of the charge duration, for instance a 4 h charge for FLABs in grid energy storage or a 1 h charge for FLABs in electric buses; these are called accelerated charge and fast charge, respectively. Such reductions of charge time imply the use of a new charge process. One way to reduce the charge duration is to perform an early destratification step without waiting for the end of charge. The new charge method proposed in this paper (the early destratification method, ED) focuses on reducing the charge time of FLABs by means of early destratification, which is performed and controlled using charge-acceptance measurement during the charge. The laboratory experiments presented here aim first to develop the charge-acceptance measurements, and then to compare the ED charge method with a traditional IUi charge process.
The Krylov accelerated SIMPLE(R) method for flow problems in industrial furnaces
NASA Astrophysics Data System (ADS)
Vuik, C.; Saghir, A.; Boerstoel, G. P.
2000-08-01
Numerical modeling of the melting and combustion process is an important tool in gaining understanding of the physical and chemical phenomena that occur in a gas- or oil-fired glass-melting furnace. The incompressible Navier-Stokes equations are used to model the gas flow in the furnace. The discrete Navier-Stokes equations are solved by the SIMPLE(R) pressure-correction method. In these applications, many SIMPLE(R) iterations are necessary to obtain an accurate solution. In this paper, Krylov accelerated versions are proposed: GCR-SIMPLE(R). The properties of these methods are investigated for a simple two-dimensional flow. Thereafter, the efficiencies of the methods are compared for three-dimensional flows in industrial glass-melting furnaces.
Dental movement acceleration: Literature review by an alternative scientific evidence method
Camacho, Angela Domínguez; Cujar, Sergio Andres Velásquez
2014-01-01
The aim of this study was to analyze the majority of publications on effective methods for speeding up orthodontic treatment and to determine which publications carry high evidence-based value. The literature published in PubMed from 1984 to 2013 was reviewed, in addition to well-known reports that were not classified under this database. To facilitate evidence-based decision making, guidelines such as the Consolidated Standards of Reporting Trials (CONSORT), Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA), and Transparent Reporting of Evaluations with Nonrandomized Designs (TREND) checklists were used. The studies were initially divided into three groups: local application of cell mediators, physical stimuli, and techniques that take advantage of the regional acceleratory phenomenon. The articles were classified according to their level of evidence using an alternative method for orthodontic scientific article classification: 1a: systematic reviews (SRs) of randomized clinical trials (RCTs); 1b: individual RCTs; 2a: SRs of cohort studies; 2b: individual cohort studies, controlled clinical trials, and low-quality RCTs; 3a: SRs of case-control studies; 3b: individual case-control studies, low-quality cohort studies, and short-term follow-up split-mouth designs; 4: case series, low-quality case-control studies, and non-systematic reviews; and 5: expert opinion. The highest level of evidence for each group was: (1) local application of cell mediators: level 3b for prostaglandins and vitamin D; (2) physical stimuli: vibratory forces and low-level laser irradiation have evidence level 2b, electrical current is classified at level 3b, and pulsed electromagnetic fields are placed at level 4 on the evidence scale; and (3) techniques related to the regional acceleratory phenomenon: for corticotomy, the majority of the reports belong to level 4. Piezocision, dentoalveolar distraction, alveocentesis, monocortical tooth dislocation and ligament
Acute Effect of Different Combined Stretching Methods on Acceleration and Speed in Soccer Players.
Amiri-Khorasani, Mohammadtaghi; Calleja-Gonzalez, Julio; Mogharabi-Manzari, Mansooreh
2016-04-01
The purpose of this study was to investigate the acute effect of different stretching methods, during a warm-up, on the acceleration and speed of soccer players. The acceleration performance of 20 collegiate soccer players (body height: 177.25 ± 5.31 cm; body mass: 65.10 ± 5.62 kg; age: 16.85 ± 0.87 years; BMI: 20.70 ± 5.54; experience: 8.46 ± 1.49 years) was evaluated after different warm-up procedures, using 10 and 20 m tests. Subjects performed five types of warm-up: static, dynamic, combined static + dynamic, combined dynamic + static, and no-stretching. Subjects were divided into five groups. Each group performed the five warm-up protocols on five non-consecutive days, with the protocol order randomly assigned. The protocols consisted of 4 min of jogging, a 1 min stretching program (except for the no-stretching protocol), and a 2 min rest period, followed by the 10 and 20 m sprint tests on the same day. The current findings showed significant differences in the 10 and 20 m tests after dynamic stretching compared with the static, combined, and no-stretching protocols. There were also significant differences between the combined stretching protocols compared with the static and no-stretching protocols. We concluded that soccer players performed better, with respect to acceleration and speed, after dynamic and combined stretching, as they were able to produce more force for a faster execution.
NASA Astrophysics Data System (ADS)
Balasubramoniam, A.; Bednarek, D. R.; Rudin, S.; Ionita, C. N.
2016-03-01
An evaluation of the relation between parametric imaging results obtained from Digital Subtraction Angiography (DSA) images and blood-flow velocity measured using Doppler ultrasound in patient-specific neurovascular phantoms is provided. A silicone neurovascular phantom containing the internal carotid artery, middle cerebral artery, and anterior communicating artery was embedded in a tissue-equivalent gel. The gel prevented movement of the vessels when blood-mimicking fluid was pumped through the phantom to obtain Colour Doppler images. The phantom was connected to a peristaltic pump, simulating physiological flow conditions. To obtain the parametric images, water was pumped through the phantom at various flow rates (100, 120 and 160 ml/min) and 10 ml contrast boluses were injected. DSA images were obtained at 10 frames/s from the Toshiba C-arm, and the DSA image sequences were input into LabVIEW software to generate parametric maps from time-density curves. The parametric maps were compared with velocities determined by Doppler ultrasound at the internal carotid artery. The velocities measured by Doppler ultrasound were 38, 48 and 65 cm/s for flow rates of 100, 120 and 160 ml/min, respectively. For the 20% increase in flow rate, the percentage change of blood velocity measured by Doppler ultrasound was 26.3%. Correspondingly, there was a 20% decrease in Bolus Arrival Time (BAT) and a 14.3% decrease in Mean Transit Time (MTT), showing a strong inverse correlation with the Doppler-measured velocity. The parametric imaging parameters are quite sensitive to velocity changes and are well correlated with the velocities measured by Doppler ultrasound.
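The BAT and MTT maps above are extracted per pixel from time-density curves. A simplified single-curve sketch of the two quantities (the 10%-of-peak arrival threshold and the synthetic bolus shape are illustrative assumptions, not the authors' exact algorithm):

```python
import numpy as np

def bat_and_mtt(t, density, frac=0.1):
    """Bolus arrival time (BAT): first time the curve exceeds a fixed
    fraction of its peak.  Mean transit time (MTT): time centroid of the
    curve.  Both shrink as flow velocity increases, hence the inverse
    correlation with Doppler velocity noted in the abstract."""
    t = np.asarray(t, dtype=float)
    d = np.asarray(density, dtype=float)
    bat = t[np.argmax(d >= frac * d.max())]   # first index over threshold
    mtt = (t * d).sum() / d.sum()             # first moment on a uniform grid
    return bat, mtt

# Synthetic gamma-variate-like bolus sampled at 10 frames/s
t = np.linspace(0.0, 10.0, 101)
d = np.maximum(t - 2.0, 0.0) ** 2 * np.exp(-(t - 2.0))
bat, mtt = bat_and_mtt(t, d)
```

A faster bolus compresses the curve toward earlier times, reducing both BAT and MTT, which is the behavior the phantom measurements exploit.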
Proposition of an Accelerated Ageing Method for Natural Fibre/Polylactic Acid Composite
NASA Astrophysics Data System (ADS)
Zandvliet, Clio; Bandyopadhyay, N. R.; Ray, Dipa
2015-10-01
Natural fibre composite based on polylactic acid (PLA) is of special interest because it is made entirely from renewable resources and is biodegradable. Samples of jute/PLA composite and of neat PLA made 6 years earlier and kept on a shelf in a tropical climate showed rapid ageing degradation. In this work, an accelerated ageing method for natural fibre/PLA composites is proposed and tested. Experiments were carried out with jute and flax fibre/PLA composites. The method was compared with the standard ISO 1037-06a. The residual flexural strength after the ageing test was compared with that of common wood-based panels and of the real aged samples prepared 6 years earlier.
A GPU-accelerated adaptive discontinuous Galerkin method for level set equation
NASA Astrophysics Data System (ADS)
Karakus, A.; Warburton, T.; Aksel, M. H.; Sert, C.
2016-01-01
This paper presents a GPU-accelerated nodal discontinuous Galerkin method for the solution of the two- and three-dimensional level set (LS) equation on unstructured adaptive meshes. Using adaptive mesh refinement, computations are localised mostly near the interface location to reduce the computational cost. The small global time step size resulting from the local adaptivity is avoided by local time-stepping based on a multi-rate Adams-Bashforth scheme. Platform independence of the solver is achieved with an extensible multi-threading programming API that allows runtime selection of different computing devices (GPU and CPU) and different threading interfaces (CUDA, OpenCL and OpenMP). Overall, a highly scalable, accurate and mass-conservative numerical scheme that preserves the simplicity of the LS formulation is obtained. Efficiency, performance and local high-order accuracy of the method are demonstrated through distinct numerical test cases.
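The multi-rate time integrator above builds on the Adams-Bashforth family of explicit multistep schemes. As a hedged, single-rate illustration of that family (not the paper's multi-rate scheme), a second-order Adams-Bashforth integrator looks like:

```python
def ab2(f, u0, t0, dt, nsteps):
    """Second-order Adams-Bashforth:
        u_{n+1} = u_n + dt * (3/2 f_n - 1/2 f_{n-1}).
    The first step is bootstrapped with forward Euler, since AB2 needs
    two previous right-hand-side evaluations."""
    u, t = u0, t0
    f_prev = f(t, u)
    u = u + dt * f_prev          # Euler bootstrap step
    t += dt
    for _ in range(nsteps - 1):
        f_now = f(t, u)
        u = u + dt * (1.5 * f_now - 0.5 * f_prev)
        f_prev = f_now
        t += dt
    return u

# du/dt = -u, u(0) = 1  ->  u(1) = exp(-1)
u_end = ab2(lambda t, u: -u, 1.0, 0.0, 0.001, 1000)
```

A multi-rate variant advances refined cells with smaller local steps of the same scheme, which is what lets adaptivity avoid a tiny global time step.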
A new method of accelerated graph display in primary flight display based on FPGA
NASA Astrophysics Data System (ADS)
Kong, Quancun; Li, Chenggui; Zhang, Fengqing
2006-11-01
With the development of avionic technology, there is an increasing amount of information to be displayed on the Primary Flight Display (PFD) of the cockpit. Besides higher accuracy, requirements for reliability and real-time display of information must be met in emergency situations. It is therefore important to further speed up graph generation and display. This paper describes a method, based on hardware acceleration, to satisfy the higher graph-display requirements of the PFD. The new method is characterized by graphic layering, double frame-buffer alternation, and graphic synthesis, which to a great extent reduce the load on the processor and speed up graphic generation and display, thus resolving the speed bottleneck in PFD graphic display.
The Accelerated Intake: A Method for Increasing Initial Attendance to Outpatient Cocaine Treatment.
ERIC Educational Resources Information Center
Festinger, David S.; And Others
1996-01-01
The effectiveness of offering same day appointments at an outpatient cocaine treatment program to increase intake attendance was examined. Seventy-eight clients were given standard or accelerated intake appointments. Significantly more clients who were given accelerated appointments attended the program. An accelerated intake procedure appears to…
In situ baking method for degassing of a kicker magnet in accelerator beam line
Kamiya, Junichiro; Ogiwara, Norio; Yanagibashi, Toru; Kinsho, Michikazu; Yasuda, Yuichi
2016-03-15
In this study, the authors propose a new in situ degassing method by which only the kicker magnets in the accelerator beam line are baked out, without raising the temperature of the vacuum chamber, to prevent unwanted thermal expansion of the chamber. By simply installing the heater and thermal radiation shield plates between the kicker magnet and the chamber wall, most of the heat flux from the heater is directed toward the kicker magnet. The result of the verification test showed that each part of the kicker magnet was heated above the target temperature with only a small rise in the vacuum chamber temperature. A graphite heater was selected in this application to bake out the kicker magnet in the beam line, to ensure reliability and easy maintainability of the heater. The vacuum characteristics of graphite were suitable for heater operation in the beam line. A preliminary heat-up test conducted in the accelerator beam line also showed that each part of the kicker magnet was successfully heated and that thermal expansion of the chamber was negligibly small.
Accelerated regularized estimation of MR coil sensitivities using augmented Lagrangian methods.
Allison, Michael J; Ramani, Sathish; Fessler, Jeffrey A
2013-03-01
Several magnetic resonance parallel imaging techniques require explicit estimates of the receive coil sensitivity profiles. These estimates must be accurate over both the object and its surrounding regions to avoid generating artifacts in the reconstructed images. Regularized estimation methods that involve minimizing a cost function containing both a data-fit term and a regularization term provide robust sensitivity estimates. However, these methods can be computationally expensive when dealing with large problems. In this paper, we propose an iterative algorithm based on variable splitting and the augmented Lagrangian method that estimates the coil sensitivity profile by minimizing a quadratic cost function. Our method, ADMM-Circ, reformulates the finite differencing matrix in the regularization term to enable exact alternating minimization steps. We also present a faster variant of this algorithm using intermediate updating of the associated Lagrange multipliers. Numerical experiments with simulated and real data sets indicate that our proposed method converges approximately twice as fast as the preconditioned conjugate gradient method over the entire field-of-view. These concepts may accelerate other quadratic optimization problems.
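The variable-splitting idea above can be sketched for a generic quadratic cost, min_x 0.5*||Ax-b||^2 + 0.5*lam*||Dx||^2, with the split z = Dx so each alternating minimization has a closed form. This is a hedged illustration of the augmented Lagrangian machinery only, not the paper's ADMM-Circ algorithm with its circulant reformulation of the finite-differencing matrix:

```python
import numpy as np

def admm_quadratic(A, b, D, lam, rho=1.0, iters=300):
    """Generic ADMM sketch for min_x 0.5||Ax-b||^2 + 0.5*lam*||Dx||^2
    via the split z = Dx, with penalty parameter rho and multiplier y."""
    x = np.zeros(A.shape[1])
    z = np.zeros(D.shape[0])
    y = np.zeros(D.shape[0])
    lhs = A.T @ A + rho * D.T @ D            # fixed x-update system matrix
    for _ in range(iters):
        x = np.linalg.solve(lhs, A.T @ b + D.T @ (rho * z - y))
        z = (rho * D @ x + y) / (lam + rho)  # closed-form z-update
        y = y + rho * (D @ x - z)            # dual (multiplier) ascent
    return x

# Small synthetic problem; compare against the closed-form solution
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 4))
b = rng.standard_normal(8)
D = np.eye(4)
x_admm = admm_quadratic(A, b, D, lam=0.5)
x_direct = np.linalg.solve(A.T @ A + 0.5 * np.eye(4), A.T @ b)
```

For the full-scale coil-sensitivity problem, the point of the splitting is that each subproblem stays cheap even when the direct normal-equations solve would not be.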
NASA Astrophysics Data System (ADS)
Hoi, Yiemeng; Ionita, Ciprian N.; Tranquebar, Rekha V.; Hoffmann, Kenneth R.; Woodward, Scott H.; Taulbee, Dale B.; Meng, Hui; Rudin, Stephen
2006-03-01
An asymmetric stent, with a low-porosity patch across the intracranial aneurysm neck and high porosity elsewhere, is designed to modify the flow to induce thrombogenesis and occlusion of the aneurysm while reducing the possibility of occluding adjacent perforator vessels. The purposes of this study are to evaluate the flow field induced by an asymmetric stent using both numerical and digital subtraction angiography (DSA) methods and to quantify the flow dynamics of an asymmetric stent in an in vivo aneurysm model. We created a vein-pouch aneurysm model on the canine carotid artery. An asymmetric stent was implanted at the aneurysm, with 25% porosity across the aneurysm neck and 80% porosity elsewhere. The aneurysm geometry, before and after stent implantation, was acquired using cone-beam CT and reconstructed for computational fluid dynamics (CFD) analysis. Both steady-state and pulsatile flow conditions, using waveforms measured from the aneurysm model, were studied. To reduce computational costs, we modeled the asymmetric stent effect by specifying a pressure drop over the layer across the aneurysm orifice where the low-porosity patch was located. From the CFD results, we found that the asymmetric stent reduced the inflow into the aneurysm by 51% and appeared to create a stasis-like environment that favors thrombus formation. The DSA sequences also showed substantial flow reduction into the aneurysm. Asymmetric stents may be a viable image-guided intervention for treating intracranial aneurysms with the desired flow-modification features.
Kallmerten, Amy E; Alexander, Abigail; Wager, Krista M; Jones, Graham B
2011-10-01
Nuclear imaging using positron emission tomography (PET) is a powerful technique with clinical applications that include oncology, cardiovascular disease and CNS disorders. Conventional chemical synthesis of the short half-life radionuclides used in the process, however, imposes numerous limitations on the scope of available ligands. By utilizing microwave-assisted synthesis methods, many of these limitations can be overcome, paving the way for the design of diverse families of agents with defined cellular targets. This review will survey recent developments in the field with emphasis on the period 2006-2011. Positron emission tomography has become one of the most powerful in vivo imaging modalities, capable of delivering mm^3 resolution of radiotracer distribution and metabolism [1]. When combined with anatomic imaging methods (MRI, CT), co-registered multimode images offer the potential to track metabolic and physiologic events in diseased states and to guide and accelerate clinical trials of investigational new drugs. The same methodology can also be used to evaluate first-pass pharmacokinetics/pharmacodynamics in early-stage drug discovery. Though powerful as a technique, only a limited number of drugs have seen clinical use, and to date only one, 2-fluoro-deoxy-D-glucose (FDG), has received FDA approval [2]. One of the drawbacks of PET imaging is the need for tracers labeled with an appropriate nuclide, and the half-lives of these agents place special constraints on the chemical synthesis. Among the most popular are 11C (t1/2 = 20.4 min) and 18F (t1/2 = 109.8 min) labeled compounds, and this has resulted in a resurgence of interest in the practical application of their chemistries [3,4]. This review will focus on microwave-mediated methods for accelerating the organic reactions used in the production of labeled PET image contrast agents, with emphasis on the five-year period 2006 to 2011.
A convergence accelerator of a linear system of equations based upon the power method
NASA Astrophysics Data System (ADS)
Dagan, A.
2001-03-01
This paper considers the convergence rate of an iterative numerical scheme as accelerated at the post-processor stage. The methodology adopted here is: (1) residual eigenmodes contained in the origin of the convex hull are eliminated; (2) the remaining residual terms are smoothed away by the main convergence algorithm. For this purpose, the polynomial matrix approach is employed for deriving the characteristic equation by two different methods. The first method is based on vector scaling and the second on the normal equations approach. The input for both methods is the solution difference between two consecutive iteration/cycle levels obtained from the main program. The singular value decomposition was employed for both methods due to the ill-conditioned structure of the matrices. The use of the explicit form of the Richardson extrapolation in the present work overrules the need to employ the Richardson iteration with a Leja ordering. The performance of these methods was compared with the GMRES algorithm for three representative problems: a two-dimensional boundary value problem for the Laplace equation, a three-dimensional multigrid potential solution over a sphere, and the one-dimensional steady-state Burgers equation. In all three examples both methods have the same rate of convergence as the GMRES method, or better, in terms of computer operation count. However, in terms of storage requirements, the method based upon vector scaling has a significant advantage over the normal equations approach as well as the GMRES method, in that only one vector of the N grid points is required.
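The core idea, eliminating the slowest residual eigenmode from the iterate history, can be sketched for a fixed-point iteration x_{k+1} = G x_k + c. The dominant eigenvalue mu of the error is estimated from successive iterate differences (a power-method estimate) and extrapolated away. This is a generic Richardson-style sketch under those assumptions, not the paper's polynomial-matrix or SVD-based scheme:

```python
import numpy as np

def power_accelerated_fixed_point(G, c, x0, sweeps=20):
    """Run x_{k+1} = G x_k + c, estimate the dominant error eigenvalue mu
    from differences of consecutive iterates, then extrapolate the
    dominant mode out of the final iterate."""
    x_prev = x0
    x = G @ x_prev + c
    for _ in range(sweeps):
        x_next = G @ x + c
        d_old, d_new = x - x_prev, x_next - x
        mu = (d_old @ d_new) / (d_old @ d_old)   # Rayleigh-quotient estimate
        x_prev, x = x, x_next
    return x + mu / (1.0 - mu) * (x - x_prev)    # eliminate dominant mode

# Contractive iteration with a known fixed point x* = (I - G)^-1 c
rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((5, 5)))
G = Q @ np.diag([0.9, 0.5, 0.3, 0.2, 0.1]) @ Q.T   # spectral radius 0.9
c = rng.standard_normal(5)
x_star = np.linalg.solve(np.eye(5) - G, c)
x_acc = power_accelerated_fixed_point(G, c, np.zeros(5))
```

With a dominant eigenvalue of 0.9, the plain iteration still carries a sizable error after ~20 sweeps; removing that one mode leaves only the faster-decaying remainder, which is the effect the abstract's post-processor exploits.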
NASA Astrophysics Data System (ADS)
Dizaji, Farzad F.; Marshall, Jeffrey S.
2016-11-01
Modeling the response of interacting particles, droplets, or bubbles to subgrid-scale fluctuations in turbulent flows is a long-standing challenge in multiphase flow simulations using the Reynolds-Averaged Navier-Stokes approach. The problem also arises for large-eddy simulation for sufficiently small values of the Kolmogorov-scale particle Stokes number. This paper expands on a recently proposed stochastic vortex structure (SVS) method for modeling of turbulence fluctuations for colliding or otherwise interacting particles. An accelerated version of the SVS method was developed using the fast multipole expansion and local Taylor expansion approach, which reduces computation speed by two orders of magnitude compared to the original SVS method. Detailed comparisons are presented showing close agreement of the energy spectrum and probability density functions of various fields between the SVS computational model, direct numerical simulation (DNS) results, and various theoretical and experimental results found in the literature. Results of the SVS method for particle collision rate and related measures of particle interaction exhibit excellent agreement with DNS predictions for homogeneous turbulent flows. The SVS method was also used with adhesive particles to simulate formation of particle agglomerates with different values of the particle Stokes and adhesion numbers, and various measures of the agglomerate structure are compared to the DNS results.
Shah, Ashesh; Coste, Jérôme; Lemaire, Jean-Jacques; Schkommodau, Erik; Taub, Ethan; Guzman, Raphael; Derost, Philippe; Hemm, Simone
2016-12-16
OBJECTIVE Despite the widespread use of deep brain stimulation (DBS) for movement disorders such as Parkinson's disease (PD), the exact anatomical target responsible for the therapeutic effect is still a subject of research. Intraoperative stimulation tests by experts consist of performing passive movements of the patient's arm or wrist while the amplitude of the stimulation current is increased. At each position, the amplitude that best alleviates rigidity is identified. Intrarater and interrater variations due to the subjective and semiquantitative nature of such evaluations have been reported. The aim of the present study was to evaluate the use of an acceleration sensor attached to the evaluator's wrist to assess the change in rigidity, hypothesizing that such a change will alter the speed of the passive movements. Furthermore, the combined analysis of such quantitative results with anatomy would generate a more reproducible description of the most effective stimulation sites. METHODS To test the reliability of the method, it was applied during postoperative follow-up examinations of 3 patients. To study the feasibility of intraoperative use, it was used during 9 bilateral DBS operations in patients suffering from PD. Changes in rigidity were calculated by extracting relevant outcome measures from the accelerometer data. These values were used to identify rigidity-suppressing stimulation current amplitudes, which were statistically compared with the amplitudes identified by the neurologist. Positions for the chronic DBS lead implantation that would have been chosen based on the acceleration data were compared with clinical choices. The data were also analyzed with respect to the anatomical location of the stimulating electrode. RESULTS Outcome measures extracted from the accelerometer data were reproducible for the same evaluator, thus providing a reliable assessment of rigidity changes during intraoperative stimulation tests. Of the 188 stimulation sites
On the Use of Accelerated Test Methods for Characterization of Advanced Composite Materials
NASA Technical Reports Server (NTRS)
Gates, Thomas S.
2003-01-01
A rational approach to the problem of accelerated testing for material characterization of advanced polymer matrix composites is discussed. The experimental and analytical methods provided should be viewed as a set of tools useful in the screening of material systems for long-term engineering properties in aerospace applications. Consideration is given to long-term exposure in extreme environments that include elevated temperature, reduced temperature, moisture, oxygen, and mechanical load. Analytical formulations useful for predictive models, based on the principles of time-based superposition, are presented. The need for reproducible mechanisms, indicator properties, and real-time data is outlined, as well as methodologies for determining specific aging mechanisms.
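Time-based superposition of the kind mentioned above is often implemented with a temperature shift factor a_T that maps time measured at an elevated test temperature onto an equivalent time at a reference temperature. A hedged sketch using an Arrhenius form and one common sign convention (the activation energy and temperatures are illustrative values only, not data from this work):

```python
import math

R = 8.314  # J/(mol K), universal gas constant

def arrhenius_shift_factor(T, T_ref, Ea):
    """Arrhenius shift factor: log10(a_T) = Ea/(ln(10)*R) * (1/T - 1/T_ref).
    With this convention a_T < 1 for T > T_ref: time at the hotter test
    temperature is 'compressed' relative to the reference master curve."""
    return 10.0 ** (Ea / (math.log(10.0) * R) * (1.0 / T - 1.0 / T_ref))

# Illustrative: Ea = 120 kJ/mol, data taken at 400 K shifted to a 350 K reference
aT = arrhenius_shift_factor(400.0, 350.0, 120e3)
```

Stacking shifted short-term curves from several temperatures builds the long-term master curve that an accelerated test program is after.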
NASA Astrophysics Data System (ADS)
Safouhi, Hassan; Hoggan, Philip
2003-01-01
This review on molecular integrals for large electronic systems (MILES) places the problem of analytical integration over exponential-type orbitals (ETOs) in a historical context. After reference to the pioneering work, particularly by Barnett, Shavitt and Yoshimine, it focuses on recent progress towards rapid and accurate analytic solutions of MILES over ETOs. Software such as the hydrogenlike wavefunction package Alchemy by Yoshimine and collaborators is described. The review focuses on convergence acceleration of these highly oscillatory integrals and in particular it highlights suitable nonlinear transformations. Work by Levin and Sidi is described and applied to MILES. A step by step description of progress in the use of nonlinear transformation methods to obtain efficient codes is provided. The recent approach developed by Safouhi is also presented. The current state of the art in this field is summarized to show that ab initio analytical work over ETOs is now a viable option.
NASA Astrophysics Data System (ADS)
Yu, H.; Wang, Z.; Zhang, C.; Chen, N.; Zhao, Y.; Sawchuk, A. P.; Dalsing, M. C.; Teague, S. D.; Cheng, Y.
2014-11-01
Existing research on patient-specific computational hemodynamics (PSCH) relies heavily on software for anatomical extraction of blood arteries. Data reconstruction and mesh generation have to be done using existing commercial software because of the gap between medical image processing and CFD, which increases the computational burden and introduces inaccuracy during data transformation, thus limiting the medical applications of PSCH. We use the lattice Boltzmann method (LBM) to solve the level-set equation over an Eulerian distance field and implicitly and dynamically segment the artery surfaces from radiological CT/MRI imaging data. The segments feed seamlessly into the LBM-based CFD computation of PSCH, so explicit mesh construction and extra data management are avoided. The LBM is ideally suited for GPU (graphics processing unit)-based parallel computing. The parallel acceleration over GPU achieves excellent performance in PSCH computation. An application study is presented which segments an aortic artery from a chest CT dataset and models the PSCH of the segmented artery.
Research on acceleration method of reactor physics based on FPGA platforms
Li, C.; Yu, G.; Wang, K.
2013-07-01
The physics design of new-concept reactors, which have complex structures, various materials, and broad neutron energy spectra, has greatly raised the requirements on calculation methods and the corresponding computing hardware. Along with widely used parallel algorithms, heterogeneous platform architectures have been introduced into numerical computation in reactor physics. Because of its natural parallel characteristics, the CPU-FPGA architecture is often used to accelerate numerical computation. This paper studies the application and features of this kind of heterogeneous platform in numerical calculations of reactor physics through practical examples. A neutron diffusion module designed on the CPU-FPGA architecture achieves an 11.2x speedup, demonstrating that it is feasible to apply this kind of heterogeneous platform to reactor physics. (authors)
Accelerated Discovery in Photocatalysis using a Mechanism-Based Screening Method.
Hopkinson, Matthew N; Gómez-Suárez, Adrián; Teders, Michael; Sahoo, Basudev; Glorius, Frank
2016-03-18
Herein, we report a conceptually novel mechanism-based screening approach to accelerate discovery in photocatalysis. In contrast to most screening methods, which consider reactions as discrete entities, this approach instead focuses on a single constituent mechanistic step of a catalytic reaction. Using luminescence spectroscopy to investigate the key quenching step in photocatalytic reactions, an initial screen of 100 compounds led to the discovery of two promising substrate classes. Moreover, a second, more focused screen provided mechanistic insights useful in developing proof-of-concept reactions. Overall, this fast and straightforward approach both facilitated the discovery and aided the development of new light-promoted reactions and suggests that mechanism-based screening strategies could become useful tools in the hunt for new reactivity.
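Luminescence quenching of the kind screened above is commonly quantified with the Stern-Volmer relation, I0/I = 1 + Ksv*[Q], where Ksv is the quenching constant. A minimal sketch of extracting Ksv by a linear fit (the data values are synthetic, and this is a generic textbook treatment, not the authors' screening protocol):

```python
import numpy as np

# Stern-Volmer: I0/I = 1 + Ksv*[Q]; Ksv is the slope of (I0/I - 1) vs [Q].
conc = np.array([0.0, 0.01, 0.02, 0.04, 0.08])   # quencher concentration, M
I0 = 1000.0                                      # unquenched emission intensity
I = I0 / (1.0 + 250.0 * conc)                    # synthetic data with Ksv = 250
ksv = np.polyfit(conc, I0 / I - 1.0, 1)[0]       # fitted slope recovers Ksv
```

A substrate that quenches the photocatalyst's emission strongly (large Ksv) is a promising hit in a mechanism-based screen of this type.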
Challenges in LER/CDU metrology in DSA: placement error and cross-line correlations
NASA Astrophysics Data System (ADS)
Constantoudis, Vassilios; Kuppuswamy, Vijaya-Kumar M.; Gogolides, Evangelos; Pret, Alessandro V.; Pathangi, Hari; Gronheid, Roel
2016-03-01
DSA lithography poses new challenges in LER/LWR metrology due to its self-organized and pitch-based nature. To cope with these challenges, a novel characterization approach with new metrics and updating the older ones is required. To this end, we focus on two specific challenges of DSA line patterns: a) the large correlations between the left and right edges of a line (line wiggling, rms(LWR)
Thin Foil Acceleration Method for Measuring the Unloading Isentropes of Shock-Compressed Matter
Asay, J.R.; Chhabildas, L.C.; Fortov, V.E.; Kanel, G.I.; Khishchenko, K.V.; Lomonosov, I.V.; Mehlhorn, T.; Razorenov, S.V.; Utkin, A.V.
1999-07-21
This work has been performed as part of the search for possible ways to utilize the capabilities of laser and particle beam techniques in shock wave and equation of state physics. The peculiarity of these techniques is that we have to deal with micron-thick targets and incident shock wave parameters that are not well reproducible, so all measurements should be high-resolution and performed in a single shot. Besides the Hugoniots, the experimental basis for creating equations of state includes isentropes corresponding to the unloading of shock-compressed matter. Experimental isentrope data are most important in the region of vaporization. With guns or explosive facilities, the unloading isentrope is recovered from a series of experiments in which the shock wave parameters in plates of standard low-impedance materials placed behind the sample are measured [1,2]. The specific internal energy and specific volume are calculated from the measured p(u) release curve, which corresponds to the Riemann integral. This approach is not well suited to experiments with beam techniques, where the incident shock waves are not well reproducible. The thick foil method [3] provides a few experimental points on the isentrope in one shot. When a higher shock impedance foil is placed on the surface of the material studied, the release occurs in steps whose durations correspond to the time for the shock wave to travel back and forth in the foil. The velocity during the different steps, combined with knowledge of the Hugoniot of the foil, allows us to determine a few points on the isentropic unloading curve. However, the method becomes insensitive when the low-pressure range of vaporization is reached in the course of the unloading. The isentrope in this region can be measured by recording the smooth acceleration of a thin witness-plate foil. With the mass of the foil known, measurements of the foil acceleration give the vapor pressure.
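The Riemann-integral reduction mentioned above has a standard textbook form: along a simple release wave, du = -dp/(rho*c), which combines with c^2 = -V^2 dp/dV|_s to give dV = -(du/dp)^2 dp, and the first law on the isentrope gives de = -p dV. A hedged numerical sketch of recovering V and e from a measured p(u) release curve (function and variable names are illustrative; units in the worked check are arbitrary):

```python
import numpy as np

def release_isentrope(p, u, V0, e0):
    """March down a measured release curve p(u), accumulating specific
    volume via dV = -(du/dp)^2 dp and specific internal energy via
    de = -p dV (trapezoidal in p)."""
    p = np.asarray(p, dtype=float)
    u = np.asarray(u, dtype=float)
    V = np.empty_like(p)
    e = np.empty_like(p)
    V[0], e[0] = V0, e0
    for i in range(1, len(p)):
        dp = p[i] - p[i - 1]
        dudp = (u[i] - u[i - 1]) / dp
        dV = -dudp ** 2 * dp
        V[i] = V[i - 1] + dV
        e[i] = e[i - 1] - 0.5 * (p[i] + p[i - 1]) * dV
    return V, e

# Worked check on a synthetic linear release path (arbitrary units):
p = np.linspace(10.0, 0.0, 11)      # pressure falling from 10 to 0
u = 0.1 * (10.0 - p)                # particle velocity grows on release
V, e = release_isentrope(p, u, V0=1.0, e0=1.0)
```

As expected physically, the material expands (V increases) and loses internal energy as the pressure relaxes.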
Park, Jaehong; Caprioli, Damiano; Spitkovsky, Anatoly
2015-02-27
We study diffusive shock acceleration (DSA) of protons and electrons at nonrelativistic, high Mach number, quasiparallel, collisionless shocks by means of self-consistent 1D particle-in-cell simulations. For the first time, both species are found to develop power-law distributions with the universal spectral index -4 in momentum space, in agreement with the prediction of DSA. We find that scattering of both protons and electrons is mediated by right-handed circularly polarized waves excited by the current of energetic protons via the nonresonant hybrid (Bell) instability. Protons are injected into DSA after a few gyrocycles of shock drift acceleration (SDA), while electrons are first preheated via SDA, then energized via a hybrid acceleration process that involves both SDA and Fermi-like acceleration mediated by Bell waves, before eventual injection into DSA. Using the simulations we can measure the electron-proton ratio in accelerated particles, which is of paramount importance for explaining the cosmic ray fluxes measured on Earth and the multiwavelength emission of astrophysical objects such as supernova remnants, radio supernovae, and galaxy clusters. We find the normalization of the electron power law is ≲ 10^-2 that of the protons for strong nonrelativistic shocks.
WDS/DSA Certification - International collaboration for a trustworthy research data infrastructure
NASA Astrophysics Data System (ADS)
Mokrane, Mustapha; Hugo, Wim; Harrison, Sandy
2016-04-01
, German Institute for Standardization (DIN) standard 31644, Trustworthy Repositories Audit and Certification (TRAC) criteria, and the International Organization for Standardization (ISO) standard 16363. In addition, the Data Seal of Approval (DSA) and WDS set up core certification mechanisms for trusted digital repositories in 2009, which are increasingly recognized as de facto standards. While DSA emerged in Europe in the humanities and social sciences, WDS started as an international initiative with historical roots in the Earth and space sciences. Their catalogues of requirements and review procedures are based on the same principles of openness and transparency. A unique feature of the DSA and WDS certifications is that they strike a balance between simplicity, robustness, and the effort required to complete them. A successful international cross-project collaboration was initiated between WDS and DSA under the umbrella of the Research Data Alliance (RDA), an international initiative started in 2013 to promote data interoperability, which provided a useful and neutral forum. A joint working group was established in early 2014 to reconcile and simplify the array of certification options and to improve and stimulate core certification for scientific data services. The outputs of this collaboration are a Catalogue of Common Requirements (https://goo.gl/LJZqDo) and a Catalogue of Common Procedures (https://goo.gl/vNR0q1), which will be implemented jointly by WDS and DSA.
Mask free intravenous 3D digital subtraction angiography (IV 3D-DSA) from a single C-arm acquisition
NASA Astrophysics Data System (ADS)
Li, Yinsheng; Niu, Kai; Yang, Pengfei; Aagaard-Kienitz, Beverly; Niemann, David B.; Ahmed, Azam S.; Strother, Charles; Chen, Guang-Hong
2016-03-01
Currently, clinical acquisition of IV 3D-DSA requires two separate scans: a mask scan without contrast medium and a filled scan with contrast injection. Having two separate scans adds radiation dose to the patient and increases the likelihood of inadvertent patient-motion-induced mis-registration and the associated mis-registration artifacts in IV 3D-DSA images. In this paper, a new technique, SMART-RECON, is introduced to generate IV 3D-DSA images from a single cone-beam CT (CBCT) acquisition, eliminating the mask scan. The potential benefits of eliminating the mask scan are: (1) both radiation dose and scan time can be reduced by a factor of 2; (2) intra-sweep motion can be eliminated; (3) inter-sweep motion can be mitigated. Numerical simulations were used to validate the algorithm in terms of contrast recoverability and the ability to mitigate limited-view artifacts.
Abdel-Aal, El-Sayed M; Akhtar, Humayoun; Rabalski, Iwona; Bryan, Michael
2014-02-01
Anthocyanins are important dietary components with diverse positive functions in human health. This study investigates the effects of accelerated solvent extraction (ASE) and microwave-assisted extraction (MAE) on anthocyanin composition and extraction efficiency from blue wheat, purple corn, and black rice, in comparison with the commonly used solvent extraction (CSE). A factorial experimental design was employed to study the effects of the ASE and MAE variables, and anthocyanin extracts were analyzed by spectrophotometry, high-performance liquid chromatography with diode array detection (DAD), and liquid chromatography-mass spectrometry. The extraction efficiency of ASE and MAE was comparable with CSE at the optimal conditions. The greatest extraction by ASE was achieved at 50 °C, 2500 psi, and 10 min using 5 cycles and 100% flush. For MAE, a combination of 70 °C, 300 W, and 10 min was the most effective in extracting anthocyanins from blue wheat and purple corn, compared with 50 °C, 1200 W, and 20 min for black rice. The anthocyanin composition of the grain extracts was influenced by the extraction method. The ASE method appears more appropriate for extracting anthocyanins from the colored grains, being comparable with the CSE method in terms of changes in anthocyanin composition; it caused fewer structural changes in anthocyanins than the MAE method. Changes in blue wheat anthocyanins were smaller than in purple corn or black rice, perhaps due to the absence of acylated anthocyanin compounds in blue wheat. The results show significant differences in anthocyanins among the 3 extraction methods, which indicates a need to standardize a method for valid comparisons among studies and for quality assurance purposes.
The accelerated intake: a method for increasing initial attendance to outpatient cocaine treatment.
Festinger, D S; Lamb, R J; Kirby, K C; Marlowe, D B
1996-01-01
We examined whether offering an accelerated (same-day) versus a standard (1- to 7-day delay) intake appointment increased initial attendance at an outpatient cocaine treatment program. Significantly more of the subjects who were offered an accelerated intake (59%) attended than those who were given a standard intake (33%), χ²(1, N = 78) = 4.198, p < .05. The accelerated intake procedure appears to be useful for enhancing enrollment in outpatient addiction treatment.
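The reported statistic can be checked with a quick sketch. The per-group sample sizes are not stated in this record, so an even 39/39 split of the 78 subjects is assumed here for illustration; with 23 of 39 attending in the accelerated group and 13 of 39 in the standard group, a 2x2 chi-square with Yates continuity correction lands close to the reported value.

```python
# Hypothetical reconstruction of the reported 2x2 test. The even 39/39 group
# split is an assumption; only N = 78 and the percentages appear in the record.
def chi_square_2x2(a, b, c, d, yates=True):
    """Chi-square statistic for a 2x2 table [[a, b], [c, d]] with optional
    Yates continuity correction."""
    n = a + b + c + d
    table = [[a, b], [c, d]]
    row = [a + b, c + d]
    col = [a + c, b + d]
    chi2 = 0.0
    for i in range(2):
        for j in range(2):
            expected = row[i] * col[j] / n
            diff = abs(table[i][j] - expected)
            if yates:
                diff = max(diff - 0.5, 0.0)
            chi2 += diff * diff / expected
    return chi2

chi2 = chi_square_2x2(23, 16, 13, 26)  # attended / did-not-attend per group
print(round(chi2, 3))                  # close to the reported 4.198
```

The uncorrected statistic is somewhat larger; the close match with the Yates-corrected value suggests the original analysis used the continuity correction.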
NASA Astrophysics Data System (ADS)
Kawakami, Taiki; Okubo, Kan; Uchida, Naoki; Takeuchi, Nobunao; Matsuzawa, Toru
2013-04-01
Repeating earthquakes occur on the same asperity at the plate boundary. These earthquakes have an important property: the seismic waveforms observed at the identical observation site are very similar regardless of their occurrence time. The slip histories of repeating earthquakes can reveal the existence of asperities: the analysis of repeating earthquakes can characterize the asperities and enable temporal and spatial monitoring of slip at the plate boundary. Moreover, we expect that such analysis will support medium-term prediction of earthquakes at the plate boundary. Although previous works mostly clarified the existence of asperities and repeating earthquakes, and the relationship between asperities and quasi-static slip areas, a stable and robust method for automatic detection of repeating earthquakes has not been established yet. Furthermore, in order to process the enormous data volumes involved (so-called big data), speeding up the signal processing is an important issue. Recently, the GPU (Graphics Processing Unit) has been used as an acceleration tool for signal processing in various fields of study. This movement is called GPGPU (General-Purpose computing on GPUs). In the last few years the performance of GPUs has kept improving rapidly; a PC (personal computer) with GPUs can act as a personal supercomputer. GPU computing gives us a high-performance computing environment at a lower cost than before. Therefore, the use of GPUs contributes to a significant reduction of the execution time in signal processing of huge seismic data sets. In this study, first, we applied band-limited Fourier phase correlation as a fast method of detecting repeating earthquakes. This method utilizes only band-limited phase information and yields the correlation values between two seismic signals. Secondly, we employ a coherence function using three orthogonal components (East-West, North-South, and Up-Down) of seismic data as a
NASA Astrophysics Data System (ADS)
Romo-Negreira, A.; Younkin, T. R.; Gronheid, R.; Demuynck, S.; Vandenbroeck, N.; Seo, T.; Guerrero, D. J.; Parnell, D.; Muramatsu, M.; Kawakami, S.; Yamauchi, T.; Nafus, K.; Somervell, M. H.
2014-03-01
An electrical test vehicle for directed self-assembly (DSA) sub-30 nm via interconnects has been fabricated, employing a soft-mask grapho-epitaxy contact-hole shrink. The resist pre-pattern was generated using 193i lithography on three different stacks, and the BCP assembly was evaluated with and without template affinity control on the resist pre-pattern. After the DSA shrink, the holes were transferred into a 100 nm oxide for standard tungsten metallization for electrical characterization.
Villena, Jorge Fernandez; Polimeridis, Athanasios G; Eryaman, Yigitcan; Adalsteinsson, Elfar; Wald, Lawrence L; White, Jacob K; Daniel, Luca
2016-11-01
A fast frequency-domain full-wave electromagnetic simulation method is introduced for the analysis of MRI coils loaded with realistic human body models. The approach is based on integral equation methods decomposed into two domains: 1) the RF coil array and shield, and 2) the human body region where the load is placed. The analysis of multiple coil designs is accelerated by introducing precomputed magnetic resonance Green functions (MRGFs), which describe how the particular body model used responds to the incident fields from external sources. These MRGFs, which are precomputed once for a given body model, can be combined with any integral equation solver and reused for the analysis of many coil designs. This approach provides a fast, yet comprehensive, analysis of coil designs, including the port S-parameters and the electromagnetic field distribution within the inhomogeneous body. The method solves the full-wave electromagnetic problem for a head array in a few minutes, achieving a speedup of over 150-fold with root mean square errors in the electromagnetic field maps smaller than 0.4% when compared to the unaccelerated integral-equation-based solver. This enables the characterization of a large number of RF coil designs in a reasonable time, which is a first step toward automatic optimization of multiple parameters in the design of transmit arrays, as illustrated in this paper, but also receive arrays.
Qiao, Jixin; Hou, Xiaolin; Steier, Peter; Nielsen, Sven; Golser, Robin
2015-07-21
An automated analytical method implemented in a flow injection (FI) system was developed for rapid determination of (236)U in 10 L seawater samples. (238)U was used as a chemical yield tracer for the whole procedure, in which extraction chromatography (UTEVA) was exploited to purify uranium after an effective iron hydroxide coprecipitation. Accelerator mass spectrometry (AMS) was applied to quantify the (236)U/(238)U ratio, and inductively coupled plasma mass spectrometry (ICPMS) was used to determine the absolute concentration of (238)U; from these, the concentration of (236)U was calculated. The key experimental parameters affecting the analytical effectiveness were investigated and optimized in order to achieve high chemical yields, simple and rapid analysis, and a low procedure background. In addition, the operational conditions for target preparation prior to the AMS measurement were optimized on the basis of the coprecipitation behavior of uranium with iron hydroxide. The analytical results indicate that the developed method is simple and robust, providing satisfactory chemical yields (80-100%) and high analysis speed (4 h/sample), which makes it an appealing alternative to conventional manual methods for (236)U determination in tracer applications.
The cell-in-series method: A technique for accelerated electrode degradation in redox flow batteries
Pezeshki, Alan M.; Sacci, Robert L.; Veith, Gabriel M.; Zawodzinski, Thomas A.; Mench, Matthew M.
2015-11-21
Here, we demonstrate a novel method to accelerate electrode degradation in redox flow batteries and apply this method to the all-vanadium chemistry. Electrode performance degradation occurred seven times faster than in a typical cycling experiment, enabling rapid evaluation of materials. This method also enables the steady-state study of electrodes. In this manner, it is possible to delineate whether specific operating conditions induce performance degradation; we found that both aggressively charging and discharging result in performance loss. Post-mortem x-ray photoelectron spectroscopy of the degraded electrodes was used to resolve the effects of state of charge (SoC) and current on the electrode surface chemistry. For the electrode material tested in this work, we found evidence that a loss of oxygen content on the negative electrode cannot explain decreased cell performance. Furthermore, the effects of decreased electrode and membrane performance on capacity fade in a typical cycling battery were decoupled from crossover; electrode and membrane performance decay were responsible for a 22% fade in capacity, while crossover caused a 12% fade.
Kim, Seung-Hyun; Kelly, Peter B; Clifford, Andrew J
2009-07-15
The high-throughput Zn reduction method was developed and optimized for various biological/biomedical accelerator mass spectrometry (AMS) applications with mg of C size samples. However, the high levels of background carbon from the high-throughput Zn reduction method were not suitable for sub-mg of C size samples in environmental, geochronology, and biological/biomedical AMS applications. This study investigated the effect of the background carbon mass (mc) and background 14C level (Fc) of the high-throughput Zn reduction method. Background mc was 0.011 mg of C and background Fc was 1.5445. Background subtraction, two-component mixing, and expanded formulas were used for background correction. All three formulas accurately corrected for backgrounds down to 0.025 mg of C in the aerosol standard (NIST SRM 1648a). Only the background subtraction and the two-component mixing formulas accurately corrected for backgrounds down to 0.1 mg of C in the IAEA-C6 and -C7 standards. After the background corrections, our high-throughput Zn reduction method was suitable for biological (diet)/biomedical (drug) and environmental (fine particulate matter) applications with sub-mg of C samples (≥0.1 mg of C), keeping a balance between throughput (270 samples/day/analyst) and the sensitivity/accuracy/precision of the AMS measurement. The development of a high-throughput method for examination of ≥0.1 mg of C size samples opens up a range of applications for 14C AMS studies. While other methods do exist for such samples, their low throughput has made them cost prohibitive for many applications.
An ultrasonic-accelerated oxidation method for determining the oxidative stability of biodiesel.
Avila Orozco, Francisco D; Sousa, Antonio C; Domini, Claudia E; Ugulino Araujo, Mario Cesar; Fernández Band, Beatriz S
2013-05-01
Biodiesel is considered an alternative energy source because it is produced from fats and vegetable oils by means of transesterification. It consists of fatty acid alkyl esters (FAAS), which have a great influence on biodiesel fuel properties and on the storage lifetime of biodiesel itself. Biodiesel storage stability is directly related to the oxidative stability parameter (induction time, IT), which is determined by means of the Rancimat® method. This method uses conductimetric monitoring and induces the degradation of the FAAS by heating the sample at a constant temperature. The European Committee for Standardization established a standard (EN 14214) for the oxidative stability of biodiesel, which requires a minimum induction period of 6 h as tested by the Rancimat® method at 110 °C. In this research, we aimed at developing a fast and simple alternative method to determine the induction time (IT) based on ultrasonic-accelerated oxidation of the FAAS. The sonodegradation of biodiesel samples was induced by means of an ultrasonic homogenizer fitted with an immersible horn at 480 W of power and 20 duty cycles. UV-Vis spectrometry was used to monitor the FAAS sonodegradation by measuring the absorbance at 270 nm every 2. Biodiesel samples from different feedstocks were studied in this work. In all cases, IT was established as the inflection point of the absorbance versus time curve. The induction time values of all biodiesel samples determined using the proposed method were in accordance with those measured through the Rancimat® reference method, showing R^2 = 0.998.
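Locating the induction time as the inflection point of an absorbance-versus-time curve can be sketched numerically. The sigmoid data below are synthetic stand-ins (the real method records absorbance at 270 nm during sonication); the inflection point is taken as the maximum of the first derivative.

```python
import math

# Synthetic absorbance-vs-time curve: a sigmoid centered at t = 30 min
# stands in for the measured 270 nm absorbance during sonication.
times = list(range(0, 61, 2))                  # minutes
absorbance = [1.0 / (1.0 + math.exp(-(t - 30) / 5.0)) for t in times]

# Inflection point = maximum of the first derivative (central differences).
slopes = [(absorbance[i + 1] - absorbance[i - 1]) / (times[i + 1] - times[i - 1])
          for i in range(1, len(times) - 1)]
it_index = max(range(len(slopes)), key=slopes.__getitem__) + 1
print(times[it_index])                         # induction time of this curve
```

On real, noisy data one would smooth the curve (or fit a sigmoid) before differentiating; the synthetic example simply recovers the known center of the sigmoid.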
A GPU Accelerated Discontinuous Galerkin Conservative Level Set Method for Simulating Atomization
NASA Astrophysics Data System (ADS)
Jibben, Zechariah J.
This dissertation describes a process for interface capturing via an arbitrary-order, nearly quadrature-free, discontinuous Galerkin (DG) scheme for the conservative level set method (Olsson et al., 2005, 2008). The DG numerical method is utilized to solve both advection and reinitialization, and executed on a refined level set grid (Herrmann, 2008) for effective use of processing power. Computation is executed in parallel utilizing both CPU and GPU architectures to make the method feasible at high order. Finally, a sparse data structure is implemented to take full advantage of parallelism on the GPU, where performance relies on well-managed memory operations. With solution variables projected into a kth-order polynomial basis, a (k + 1)-order convergence rate is found for both advection and reinitialization tests using the method of manufactured solutions. Other standard test cases, such as Zalesak's disk and deformation of columns and spheres in periodic vortices, are also performed, showing several orders of magnitude improvement over traditional WENO level set methods. These tests also show the impact of reinitialization, which often increases shape and volume errors as a result of level set scalar trapping by normal vectors calculated from the local level set field. Accelerating advection via GPU hardware is found to provide a 30x speedup factor comparing a 2.0 GHz Intel Xeon E5-2620 CPU in serial vs. an Nvidia Tesla K20 GPU, with speedup factors increasing with polynomial degree until shared memory is filled. A similar algorithm is implemented for reinitialization, which relies on heavier use of shared and global memory and as a result fills them more quickly and produces smaller speedups of 18x.
Rodgers, J.E.; Celebi, M.
2011-01-01
The 1994 Northridge earthquake caused brittle fractures in steel moment frame building connections, despite causing little visible building damage in most cases. Future strong earthquakes are likely to cause similar damage to the many un-retrofitted pre-Northridge buildings in the western US and elsewhere. Without obvious permanent building deformation, costly intrusive inspections are currently the only way to determine if major fracture damage that compromises building safety has occurred. Building instrumentation has the potential to provide engineers and owners with timely information on fracture occurrence. Structural dynamics theory predicts and scale model experiments have demonstrated that sudden, large changes in structure properties caused by moment connection fractures will cause transient dynamic response. A method is proposed for detecting the building-wide level of connection fracture damage, based on observing high-frequency, fracture-induced transient dynamic responses in strong motion accelerograms. High-frequency transients are short (<1 s), sudden-onset waveforms with frequency content above 25 Hz that are visually apparent in recorded accelerations. Strong motion data and damage information from intrusive inspections collected from 24 sparsely instrumented buildings following the 1994 Northridge earthquake are used to evaluate the proposed method. The method's overall success rate for this data set is 67%, but this rate varies significantly with damage level. The method performs reasonably well in detecting significant fracture damage and in identifying cases with no damage, but fails in cases with few fractures. Combining the method with other damage indicators and removing records with excessive noise improves the ability to detect the level of damage.
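The detection idea, high-passing the accelerogram and flagging short bursts of >25 Hz energy, can be sketched on synthetic data. This is not the authors' processing chain: the sampling rate, burst parameters, first-difference high-pass, and 5x RMS threshold below are all illustrative choices.

```python
import math

# Toy sketch of fracture-transient detection: a 2 Hz "strong motion" carrier
# plus a short 40 Hz burst standing in for the fracture-induced transient.
# A first difference acts as a crude high-pass; a real implementation would
# use a proper filter with a corner near 25 Hz.
fs = 200.0
t = [n / fs for n in range(int(5 * fs))]
accel = [math.sin(2 * math.pi * 2 * x) for x in t]
for n, x in enumerate(t):                      # inject burst over 3.0-3.2 s
    if 3.0 <= x < 3.2:
        accel[n] += 0.5 * math.sin(2 * math.pi * 40 * x)

hp = [accel[n] - accel[n - 1] for n in range(1, len(accel))]  # crude high-pass

win = int(0.05 * fs)                           # 50 ms RMS window
rms = [math.sqrt(sum(v * v for v in hp[n:n + win]) / win)
       for n in range(len(hp) - win)]
threshold = 5.0 * sum(rms) / len(rms)
onset = next(n for n, v in enumerate(rms) if v > threshold) / fs
print(round(onset, 2))                         # near the 3.0 s burst onset
```

The first difference strongly attenuates the 2 Hz carrier relative to the 40 Hz burst, so a simple moving-RMS threshold isolates the transient's onset.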
A coupled ordinates method for solution acceleration of rarefied gas dynamics simulations
Das, Shankhadeep; Mathur, Sanjay R.; Alexeenko, Alina; Murthy, Jayathi Y.
2015-05-15
Non-equilibrium rarefied flows are frequently encountered in a wide range of applications, including atmospheric re-entry vehicles, vacuum technology, and microscale devices. Rarefied flows at the microscale can be effectively modeled using the ellipsoidal statistical Bhatnagar–Gross–Krook (ESBGK) form of the Boltzmann kinetic equation. Numerical solutions of these equations are often based on the finite volume method (FVM) in physical space and the discrete ordinates method in velocity space. However, existing solvers use a sequential solution procedure wherein the velocity distribution functions are implicitly coupled in physical space, but are solved sequentially in velocity space. This leads to explicit coupling of the distribution function values in velocity space and slows down convergence in systems with low Knudsen numbers. Furthermore, this also makes it difficult to solve multiscale problems or problems in which there is a large range of Knudsen numbers. In this paper, we extend the coupled ordinates method (COMET), previously developed to study participating radiative heat transfer, to solve the ESBGK equations. In this method, at each cell in the physical domain, distribution function values for all velocity ordinates are solved simultaneously. This coupled solution is used as a relaxation sweep in a geometric multigrid method in the spatial domain. Enhancements to COMET to account for the non-linearity of the ESBGK equations, as well as the coupled implementation of boundary conditions, are presented. The methodology works well with arbitrary convex polyhedral meshes, and is shown to give significantly faster solutions than the conventional sequential solution procedure. Acceleration factors of 5–9 are obtained for low to moderate Knudsen numbers on single processor platforms.
GPU accelerated simulations of bluff body flows using vortex particle methods
NASA Astrophysics Data System (ADS)
Rossinelli, Diego; Bergdorf, Michael; Cottet, Georges-Henri; Koumoutsakos, Petros
2010-05-01
We present a GPU-accelerated solver for simulations of bluff body flows in 2D using a remeshed vortex particle method and the vorticity formulation of the Brinkman penalization technique to enforce boundary conditions. The efficiency of the method relies on fast and accurate particle-grid interpolations on GPUs for the remeshing of the particles and the computation of the field operators. The GPU implementation uses OpenGL to perform efficient particle-grid operations and a CUFFT-based solver for the Poisson equation with unbounded boundary conditions. The accuracy and performance of the GPU simulations and their relative advantages/drawbacks over CPU-based computations are reported for simulations of flows past an impulsively started circular cylinder at Reynolds numbers between 40 and 9500. The results indicate up to two orders of magnitude speedup of the GPU implementation over the respective CPU implementations. The accuracy of the GPU computations depends on the Re number of the flow. For Re up to 1000 there is little difference between GPU and CPU calculations, but this agreement deteriorates (albeit remaining within 5% in drag calculations) for higher Re numbers, as the single precision of the GPU adversely affects the accuracy of the simulations.
Method for direct measurement of cosmic acceleration by 21-cm absorption systems.
Yu, Hao-Ran; Zhang, Tong-Jie; Pen, Ue-Li
2014-07-25
So far there is only indirect evidence that the Universe is undergoing an accelerated expansion. The evidence for cosmic acceleration is based on the observation of different objects at different distances and requires invoking the Copernican cosmological principle and Einstein's equations of motion. We examine the direct observability using recession velocity drifts (Sandage-Loeb effect) of 21-cm hydrogen absorption systems in upcoming radio surveys. This measures the change in velocity of the same objects separated by a time interval and is a model-independent measure of acceleration. We forecast that for a CHIME-like survey with a decade time span, we can detect the acceleration of a ΛCDM universe with 5σ confidence. This acceleration test requires modest data analysis and storage changes from the normal processing and cannot be recovered retroactively.
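The expected signal size can be sketched for a flat ΛCDM model: over an observing interval Δt, the drift is Δv = c H0 Δt [1 - E(z)/(1+z)] with E(z) = sqrt(Ωm(1+z)^3 + ΩΛ). The parameter values below are illustrative, not the paper's survey forecast.

```python
import math

# Order-of-magnitude sketch of the Sandage-Loeb velocity drift in flat
# LambdaCDM. H0, Omega_m, z, and the time span are illustrative choices.
C_KM_S = 2.998e5       # speed of light, km/s
MPC_KM = 3.086e19      # km per megaparsec
YEAR_S = 3.156e7       # seconds per year

def velocity_drift_cm_s(z, years, H0=70.0, Om=0.3):
    """dv = c * H0 * dt * [1 - E(z)/(1+z)], returned in cm/s."""
    E = math.sqrt(Om * (1.0 + z) ** 3 + (1.0 - Om))
    H0_si = H0 / MPC_KM                       # 1/s
    dv_km_s = C_KM_S * H0_si * years * YEAR_S * (1.0 - E / (1.0 + z))
    return dv_km_s * 1e5                      # km/s -> cm/s

print(round(velocity_drift_cm_s(z=1.0, years=10), 2))  # a few cm/s per decade
```

The drift vanishes at z = 0 and is positive at moderate redshift in an accelerating universe, which is why a centimeter-per-second-per-decade velocity precision is the relevant scale for this test.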
Methods of Generating High-Quality Beams in Laser Wakefield Accelerators through Self-Injection
NASA Astrophysics Data System (ADS)
Davidson, Asher Warren
In the pursuit of discovering the fundamental laws and particles of nature, physicists have been colliding particles at ever increasing energy for almost a century. Lepton (electron and positron) colliders rely on linear accelerators (LINACs) because leptons radiate copious amounts of energy when accelerated in a circular machine. The size and cost of a linear collider is mainly determined by the acceleration gradient. Modern linear accelerators have gradients limited to 20-100 MeV/m because of the breakdown of the walls of the accelerator. Plasma-based acceleration is receiving much attention because a plasma wave with a phase velocity near the speed of light can support acceleration gradients at least three orders of magnitude larger than those in modern accelerators. There is no breakdown limit in a plasma since it is already ionized. Such a plasma wave can be excited by the radiation pressure of an intense short-pulse laser. This is called laser wakefield acceleration (LWFA). Much progress has been made in LWFA research in the past 30 years. Particle-in-cell (PIC) simulations have played a major part in this progress. The physics inherent in LWFA is nonlinear and three-dimensional in nature, and three-dimensional PIC simulations are computationally intensive. In this dissertation, we present and describe in detail a new algorithm that was introduced into the Particle-In-Cell Simulation Framework. We subsequently use this new quasi-three-dimensional algorithm to efficiently explore the parameter regimes of LWFA that are accessible for existing and near-term lasers. These regimes cannot be explored using full three-dimensional simulations even on leadership-class computing facilities. The simulations presented in this dissertation show that the nonlinear, self-guided regime of LWFA described through phenomenological scaling laws by Lu et al. in 2007 is still useful for accelerating electrons to energies greater than 10 GeV. (Abstract shortened by ProQuest.)
Dynamic feature extraction of coronary artery motion using DSA image sequences.
Puentes, J; Roux, C; Garreau, M; Coatrieux, J L
1998-12-01
This paper aims to define and describe features of the motion of coronary arteries in two and three dimensions, presented as geometrical parameters that identify motion patterns. The main left coronary artery centerlines, obtained from digital subtraction angiography (DSA) image sequences, are first reconstructed. Thereafter, global and local motion features are evaluated along the sequence. The global attributes are centerline and point trajectory lengths, displacement amplitude, and a virtual reference point, while the local attributes are displacement direction, perpendicular/radial components, rotation direction, and curvature and torsion. These kinetic features allow us to obtain a detailed quantitative description of the displacements of the arteries' centerlines, as well as the associated epicardium deformations. Modeling local attributes as quasi-homogeneous over segments enables us to propose a novel numeric-to-symbolic image transformation, which provides the facts required for knowledge-based motion interpretation. Experimental results using real data are consistent with cardiac dynamic behavior.
Recent advances in high-performance modeling of plasma-based acceleration using the full PIC method
NASA Astrophysics Data System (ADS)
Vay, J.-L.; Lehe, R.; Vincenti, H.; Godfrey, B. B.; Haber, I.; Lee, P.
2016-09-01
Numerical simulations have been critical in the recent rapid developments of plasma-based acceleration concepts. Among the various available numerical techniques, the particle-in-cell (PIC) approach is the method of choice for self-consistent simulations from first principles. The fundamentals of the PIC method were established decades ago, but improvements or variations are continuously being proposed. We report on several recent advances in PIC-related algorithms that are of interest for application to plasma-based accelerators, including (a) detailed analysis of the numerical Cherenkov instability and its remediation for the modeling of plasma accelerators in laboratory and Lorentz boosted frames, (b) analytic pseudo-spectral electromagnetic solvers in Cartesian and cylindrical (with azimuthal modes decomposition) geometries, and (c) novel analysis of Maxwell's solvers' stencil variation and truncation, in application to domain decomposition strategies and implementation of perfectly matched layers in high-order and pseudo-spectral solvers.
NASA Astrophysics Data System (ADS)
Shin, Wae-Gyeong; Lee, Soo-Hong
The reliability of automotive parts has become one of the most important concerns in the automotive industry. Small DC motors in particular have drawn attention because of their increasing adoption for passenger safety and convenience features. This study was performed to develop an accelerated life test method for small DC motors using the inverse power law model. The dominant failure mode of small DC motors is brush wear-out. The inverse power law model is applied effectively to electrical components to reduce testing time and to establish accelerated test conditions. The accelerated life test method was designed to hasten brush wear-out by increasing the motor voltage. The life distribution of the small DC motor was assumed to follow a Weibull distribution, and the life test time was calculated for a B10 life requirement at a 90% confidence level.
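The inverse power law and Weibull pieces of such a test plan can be sketched as follows; the exponent, voltages, and Weibull parameters below are illustrative stand-ins, not values from the study.

```python
import math

# Sketch of inverse-power-law / Weibull bookkeeping for a voltage-accelerated
# life test. All numeric values (n, voltages, eta, beta) are illustrative.
def acceleration_factor(v_use, v_acc, n):
    """Inverse power law: life ~ V^-n, so AF = (v_acc / v_use)^n."""
    return (v_acc / v_use) ** n

def weibull_b10(eta, beta):
    """B10 life: the time by which 10% of units are expected to fail,
    from the Weibull CDF F(t) = 1 - exp(-(t/eta)^beta)."""
    return eta * (-math.log(0.9)) ** (1.0 / beta)

af = acceleration_factor(v_use=12.0, v_acc=16.0, n=2.0)
b10 = weibull_b10(eta=1000.0, beta=2.0)        # hours
print(round(af, 3), round(b10, 1), round(b10 / af, 1))
```

With these illustrative numbers, running motors at 16 V for roughly B10/AF hours exercises the same brush wear as the full 12 V B10 target, which is the sense in which the accelerated condition shortens the test.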
NASA Astrophysics Data System (ADS)
Aldana, M.; Costanzo-Alvarez, V.; Gonzalez, C.; Gomez, L.
2009-05-01
During the last few years we have performed surface reservoir characterization at some Venezuelan oil fields using rock magnetic properties. We have tried to identify, at shallow levels, the "oil magnetic signature" of subjacent reservoirs. Recent data obtained from eastern Venezuela (San Juan field) emphasize the differences between rock magnetic data from eastern and western oil fields. These results support the hypothesis of different authigenic processes. To better characterize hydrocarbon microseepage in both cases, we apply a new method to analyze IRM curves in order to identify the main magnetic phases responsible for the observed magnetic susceptibility (MS) anomalies. This alternative method is based on a Direct Signal Analysis (DSA) of the IRM that identifies the number and type of magnetic components. According to this method, the IRM curve is decomposed as the sum of N elementary curves (modeled using the expression proposed by Robertson and France, 1994) whose mean coercivities vary within the interval of the measured magnetic field. The result is an adjusted spectral histogram from which the number of main contributions, their widths, and their mean coercivities, associated with the number and type of magnetic minerals, can be obtained. This analysis indicates that in the western fields the main magnetic mineralogy is magnetite. Conversely, in the eastern fields, the MS anomalies are mainly caused by the presence of Fe sulphides (i.e., greigite). These results support the hypothesis of two different processes. In the western fields a net electron transfer from the organic matter, degraded by hydrocarbon gas leakage, should occur, precipitating Fe(II) magnetic minerals (e.g., magnetite). On the other hand, high concentrations of H2S at shallow depth levels might allow the formation of secondary Fe sulphides in the eastern fields.
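A forward-model sketch of that decomposition, with each component modeled as a cumulative log-Gaussian following Robertson and France (1994); the two components below (a soft, magnetite-like phase and a harder, sulphide-like phase) use invented parameters, not fitted values from the study.

```python
import math

# Forward model for IRM decomposition: each magnetic phase contributes a
# cumulative log-Gaussian in log10(field). Component parameters here are
# illustrative stand-ins, not results from any real fit.
def irm_component(B_mT, sirm, logB_half, dp):
    """Cumulative log-Gaussian: sirm = saturation IRM of the phase,
    logB_half = log10 of the field at half saturation, dp = dispersion."""
    x = (math.log10(B_mT) - logB_half) / (dp * math.sqrt(2.0))
    return 0.5 * sirm * (1.0 + math.erf(x))

def irm_model(B_mT, components):
    return sum(irm_component(B_mT, *c) for c in components)

phases = [(1.0, math.log10(30.0), 0.3),    # soft phase, B1/2 = 30 mT
          (0.5, math.log10(300.0), 0.3)]   # hard phase, B1/2 = 300 mT

half1 = irm_model(30.0, phases)    # soft phase at its half-saturation field
total = irm_model(5000.0, phases)  # approaches the combined SIRM of 1.5
print(round(half1, 3), round(total, 3))
```

Fitting N such components to a measured acquisition curve (e.g., by nonlinear least squares) yields the spectral histogram of coercivity contributions described above, from which the magnetic mineralogy is inferred.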
GPU acceleration of Runge Kutta-Fehlberg and its comparison with Dormand-Prince method
NASA Astrophysics Data System (ADS)
Seen, Wo Mei; Gobithaasan, R. U.; Miura, Kenjiro T.
2014-07-01
The emergence of Graphics Processing Units (GPUs) has brought a significant reduction in processing time and a speedup of performance in computer graphics. GPUs have been developed to surpass the Central Processing Unit (CPU) in terms of performance and processing speed. This evolution has opened up a new area in computing and research where the highly parallel GPU is used for non-graphical algorithms. Physical or phenomenological simulations and modelling can be accelerated through General Purpose Graphic Processing Unit (GPGPU) and Compute Unified Device Architecture (CUDA) implementations. These phenomena can be represented with mathematical models in the form of Ordinary Differential Equations (ODEs), which capture the rate of change of dependent variables with respect to independent variables. ODEs are numerically integrated over time in order to simulate these behaviours. The classical Runge-Kutta (RK) scheme is the common method used to numerically solve ODEs. The Runge-Kutta-Fehlberg (RKF) scheme was specially developed to provide an estimate of the principal local truncation error at each step, known as the embedded estimate technique. This paper delves into the implementation of the RKF scheme for GPU devices and compares its results with the Dormand-Prince method. A pseudo code is developed to show the implementation in detail. Hence, practitioners will be able to understand the data allocation in the GPU, the formation of RKF kernels, and the flow of data to/from the GPU and CPU upon RKF kernel evaluation. The pseudo code is then written in the C language, and two ODE models are executed to show the achievable speedup as compared to a CPU implementation. The accuracy and efficiency of the proposed implementation method are discussed in the final section of this paper.
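The embedded-estimate idea is easy to show in serial form. The sketch below is one RKF45 step in plain Python (not the paper's CUDA implementation, which maps such stage evaluations onto GPU kernels): the fourth- and fifth-order solutions share six stage evaluations, and their difference gives the local error estimate used for step-size control.

```python
import math

# One Runge-Kutta-Fehlberg (RKF45) step with the standard Fehlberg tableau.
# Serial sketch for a scalar ODE y' = f(t, y).
def rkf45_step(f, t, y, h):
    k1 = f(t, y)
    k2 = f(t + h / 4, y + h * k1 / 4)
    k3 = f(t + 3 * h / 8, y + h * (3 * k1 + 9 * k2) / 32)
    k4 = f(t + 12 * h / 13, y + h * (1932 * k1 - 7200 * k2 + 7296 * k3) / 2197)
    k5 = f(t + h, y + h * (439 * k1 / 216 - 8 * k2
                           + 3680 * k3 / 513 - 845 * k4 / 4104))
    k6 = f(t + h / 2, y + h * (-8 * k1 / 27 + 2 * k2 - 3544 * k3 / 2565
                               + 1859 * k4 / 4104 - 11 * k5 / 40))
    y4 = y + h * (25 * k1 / 216 + 1408 * k3 / 2565 + 2197 * k4 / 4104 - k5 / 5)
    y5 = y + h * (16 * k1 / 135 + 6656 * k3 / 12825 + 28561 * k4 / 56430
                  - 9 * k5 / 50 + 2 * k6 / 55)
    return y5, abs(y5 - y4)    # higher-order solution and local error estimate

y5, err = rkf45_step(lambda t, y: y, 0.0, 1.0, 0.1)  # y' = y, exact: e^0.1
print(abs(y5 - math.exp(0.1)) < 1e-6, err < 1e-4)
```

In an adaptive driver, `err` is compared against a tolerance to accept or reject the step and to choose the next step size; on the GPU, many such independent steps (or many ODE systems) are evaluated in parallel.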
GPU accelerated simulations of 3D deterministic particle transport using discrete ordinates method
Gong Chunye; Liu Jie; Chi Lihua; Huang Haowei; Fang Jingyue; Gong Zhenghu
2011-07-01
Graphics Processing Units (GPUs), originally developed for real-time, high-definition 3D graphics in computer games, now provide great capability for solving scientific applications. The basis of particle transport simulation is the time-dependent, multi-group, inhomogeneous Boltzmann transport equation. The numerical solution of the Boltzmann equation involves the discrete ordinates (S{sub n}) method and the procedure of source iteration. In this paper, we present a GPU accelerated simulation of one-energy-group, time-independent, deterministic discrete ordinates particle transport in 3D Cartesian geometry (Sweep3D). The performance of the GPU simulations is reported for simulations with vacuum boundary conditions. The relative advantages and disadvantages of the GPU implementation, simulation on multiple GPUs, the programming effort, and code portability are also discussed. The results show that the overall performance speedup of one NVIDIA Tesla M2050 GPU ranges from 2.56 compared with one Intel Xeon X5670 chip to 8.14 compared with one Intel Core Q6600 chip for no flux fixup. The simulation with flux fixup on one M2050 is 1.23 times faster than on one X5670.
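The source iteration procedure mentioned above can be illustrated on a one-dimensional slab analogue of Sweep3D: sweep each discrete ordinate through the mesh with diamond differencing, then rebuild the isotropic scattering source from the new scalar flux, and repeat until convergence. The serial Python sketch below illustrates the iteration structure only, not the Sweep3D kernel; all cross sections and mesh data are made up.

```python
import numpy as np

# Minimal 1-D slab discrete-ordinates (S_N) solver with source
# iteration.  Each ordinate is swept through the mesh with diamond
# differencing; the scattering source is then updated from the new
# scalar flux.  Sweep3D parallelizes the analogous 3-D sweep.
N, I = 4, 100                     # ordinates, spatial cells
dx, sig_t, sig_s, q = 0.1, 1.0, 0.5, 1.0
mu, w = np.polynomial.legendre.leggauss(N)   # S_N angular quadrature

phi = np.zeros(I)                 # scalar flux
for it in range(200):
    src = 0.5 * (sig_s * phi + q)            # isotropic source
    phi_new = np.zeros(I)
    for n in range(N):
        psi_in = 0.0                          # vacuum boundary
        cells = range(I) if mu[n] > 0 else range(I - 1, -1, -1)
        for i in cells:
            # Diamond-difference cell balance for ordinate mu[n].
            psi = (src[i] * dx + 2 * abs(mu[n]) * psi_in) \
                  / (2 * abs(mu[n]) + sig_t * dx)
            psi_in = 2 * psi - psi_in         # outgoing edge flux
            phi_new[i] += w[n] * psi
    done = np.max(np.abs(phi_new - phi)) < 1e-8
    phi = phi_new
    if done:
        break
```

With scattering ratio c = sig_s/sig_t = 0.5, the iteration converges geometrically at roughly that rate; the mid-slab flux approaches the infinite-medium value q/(sig_t - sig_s) = 2, reduced slightly by boundary leakage.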
Huang, Jun; Goolcharran, Chimanlall; Ghosh, Krishnendu
2011-05-01
This paper presents the use of experimental design, optimization, and multivariate techniques to investigate the root cause of a tablet dissolution shift (slow-down) upon stability and to develop control strategies for a drug product during formulation and process development. The effectiveness and usefulness of these methodologies were demonstrated through two application examples. In both applications, dissolution slow-down was observed during a 4-week accelerated stability test under the 51°C/75%RH storage condition. In Application I, an experimental design was carried out to evaluate the interactions and effects of the design factors on the critical quality attribute (CQA) of dissolution upon stability. The design space was studied by design of experiments (DOE) and multivariate analysis to ensure the desired dissolution profile and minimal dissolution shift upon stability. Multivariate techniques, such as multi-way principal component analysis (MPCA) of the entire dissolution profiles upon stability, were performed to reveal batch relationships and to evaluate the impact of design factors on dissolution. In Application II, an experiment was conducted to study the impact of varying tablet breaking force on dissolution upon stability utilizing MPCA. It was demonstrated that the use of multivariate methods, defined as Quality by Design (QbD) principles and tools in the ICH Q8 guidance, provides an effective means to achieve a greater understanding of tablet dissolution upon stability.
Deken, Jean Marie; /SLAC
2009-06-19
Advocating for the good of the SLAC Archives and History Office (AHO) has not been a one-time affair, nor has it been a one-method procedure. It has required taking time to ascertain the current, and perhaps predict the future, climate of the Laboratory, and it has required developing and implementing a portfolio of approaches to the goal of building a stronger archive program by strengthening and appropriately expanding its resources. Among the successful tools in the AHO advocacy portfolio, the Archives Program Review Committee has been the most visible. The Committee and the role it serves, as well as other formal and informal advocacy efforts, are the focus of this case study. My remarks today will begin with a brief introduction to advocacy and outreach as I understand them, and with a description of the Archives and History Office's efforts to understand and work within the corporate culture of the SLAC National Accelerator Laboratory. I will then share with you some of the tools we have employed to advocate for the Archives and History Office programs and activities; and finally, I will talk about how well - or badly - those tools have served us over the past decade.
Accelerated path integral methods for atomistic simulations at ultra-low temperatures
NASA Astrophysics Data System (ADS)
Uhl, Felix; Marx, Dominik; Ceriotti, Michele
2016-08-01
Path integral methods provide a rigorous and systematically convergent framework to include the quantum mechanical nature of atomic nuclei in the evaluation of the equilibrium properties of molecules, liquids, or solids at finite temperature. Such nuclear quantum effects are often significant for light nuclei already at room temperature, but become crucial at cryogenic temperatures such as those provided by superfluid helium as a solvent. Unfortunately, the cost of converged path integral simulations increases significantly upon lowering the temperature so that the computational burden of simulating matter at the typical superfluid helium temperatures becomes prohibitive. Here we investigate how accelerated path integral techniques based on colored noise generalized Langevin equations, in particular the so-called path integral generalized Langevin equation thermostat (PIGLET) variant, perform in this extreme quantum regime using as an example the quasi-rigid methane molecule and its highly fluxional protonated cousin, CH5+. We show that the PIGLET technique gives a speedup of two orders of magnitude in the evaluation of structural observables and quantum kinetic energy at ultralow temperatures. Moreover, we computed the spatial spread of the quantum nuclei in CH4 to illustrate the limits of using such colored noise thermostats close to the many body quantum ground state.
Accelerated path integral methods for atomistic simulations at ultra-low temperatures.
Uhl, Felix; Marx, Dominik; Ceriotti, Michele
2016-08-07
Path integral methods provide a rigorous and systematically convergent framework to include the quantum mechanical nature of atomic nuclei in the evaluation of the equilibrium properties of molecules, liquids, or solids at finite temperature. Such nuclear quantum effects are often significant for light nuclei already at room temperature, but become crucial at cryogenic temperatures such as those provided by superfluid helium as a solvent. Unfortunately, the cost of converged path integral simulations increases significantly upon lowering the temperature so that the computational burden of simulating matter at the typical superfluid helium temperatures becomes prohibitive. Here we investigate how accelerated path integral techniques based on colored noise generalized Langevin equations, in particular the so-called path integral generalized Langevin equation thermostat (PIGLET) variant, perform in this extreme quantum regime using as an example the quasi-rigid methane molecule and its highly fluxional protonated cousin, CH5+. We show that the PIGLET technique gives a speedup of two orders of magnitude in the evaluation of structural observables and quantum kinetic energy at ultralow temperatures. Moreover, we computed the spatial spread of the quantum nuclei in CH4 to illustrate the limits of using such colored noise thermostats close to the many body quantum ground state.
GPU accelerated study of heat transfer and fluid flow by lattice Boltzmann method on CUDA
NASA Astrophysics Data System (ADS)
Ren, Qinlong
Lattice Boltzmann method (LBM) has been developed as a powerful numerical approach to simulate complex fluid flow and heat transfer phenomena during the past two decades. As a mesoscale method based on kinetic theory, LBM has several advantages compared with traditional numerical methods, such as the physical representation of microscopic interactions, the handling of complex geometries, and a highly parallel nature. The lattice Boltzmann method has been applied to solve various fluid behaviors and heat transfer processes like conjugate heat transfer, magnetic and electric fields, diffusion and mixing processes, chemical reactions, multiphase flow, phase change processes, non-isothermal flow in porous media, microfluidics, fluid-structure interactions in biological systems, and so on. In addition, as a non-body-conformal grid method, the immersed boundary method (IBM) can be applied to handle complex or moving geometries in the domain. The immersed boundary method can be coupled with the lattice Boltzmann method to study heat transfer and fluid flow problems: heat transfer and fluid flow are solved on Eulerian nodes by LBM, while the complex solid geometries are captured by Lagrangian nodes using the immersed boundary method. Parallel computing has been a popular topic for many decades as a means to accelerate computational speed in engineering and scientific fields. Today, almost all laptops and desktops have central processing units (CPUs) with multiple cores which can be used for parallel computing. However, the cost of CPUs with hundreds of cores is still high, which limits high-performance computing on personal computers. Graphics processing units (GPUs), originally used in computer video cards, have emerged as the most powerful high-performance workstations in recent years. Unlike CPUs, GPUs with thousands of cores are cheap. For example, the GPU (GeForce GTX TITAN) which is used in the current work has 2688 cores and the price is only 1
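The collide-and-stream structure that makes LBM so GPU-friendly fits in a few lines. Below is a minimal D2Q9 BGK sketch on a periodic domain, a CPU illustration of the per-node kernel one would offload to CUDA; the lattice, relaxation time, and initial condition are made up for the example.

```python
import numpy as np

# D2Q9 lattice: weights and discrete velocities.
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])

def equilibrium(rho, u):
    # Standard second-order equilibrium with lattice sound speed^2 = 1/3.
    cu = np.einsum('qd,xyd->qxy', c, u)
    usq = np.einsum('xyd,xyd->xy', u, u)
    return w[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

def lbm_step(f, tau=0.8):
    # Collide toward local equilibrium (BGK), then stream each
    # population along its lattice velocity (periodic boundaries).
    rho = f.sum(axis=0)
    u = np.einsum('qd,qxy->xyd', c, f) / rho[..., None]
    f = f + (equilibrium(rho, u) - f) / tau
    for q in range(9):
        f[q] = np.roll(f[q], shift=tuple(c[q]), axis=(0, 1))
    return f

# Uniform fluid at rest with a small perturbation on one population.
f = equilibrium(np.ones((16, 16)), np.zeros((16, 16, 2)))
f[1] += 0.01
for _ in range(50):
    f = lbm_step(f)
```

Both collision and streaming are purely local (or nearest-neighbor) operations, which is exactly why a node-per-thread GPU mapping works so well for this method.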
ERIC Educational Resources Information Center
Manche, Emanuel P.
1979-01-01
Describes a compact and portable apparatus for measuring, with a high degree of precision, the value of the gravitational acceleration g. The apparatus consists of a falling mercury drop and an electronic timing circuit. (GA)
ERIC Educational Resources Information Center
Huang, SuHua
2012-01-01
The mixed-method explanatory research design was employed to investigate the effectiveness of the Accelerated Reader (AR) program on middle school students' reading achievement and motivation. A total of 211 sixth to eighth-grade students provided quantitative data by completing an AR Survey. Thirty of the 211 students were randomly selected to…
Hassanein, Ahmed; Konkashbaev, Isak
2006-10-03
A device and method for generating extremely short-wave ultraviolet electromagnetic waves uses two intersecting plasma beams generated by two plasma accelerators. The intersection of the two plasma beams emits electromagnetic radiation, in particular radiation in the extreme ultraviolet wavelength range. In the preferred orientation, two axially aligned counter-streaming plasmas collide to produce an intense source of electromagnetic radiation at the 13.5 nm wavelength. The Mather-type plasma accelerators can utilize tin- or lithium-covered electrodes. Tin, lithium, or xenon can be used as the photon-emitting gas source.
Optimization of accelerator parameters using normal form methods on high-order transfer maps
Snopok, Pavel
2007-05-01
Methods of analysis of the dynamics of ensembles of charged particles in collider rings are developed. The following problems are posed and solved using normal form transformations and other methods of perturbative nonlinear dynamics: (1) Optimization of the Tevatron dynamics: (a) Skew quadrupole correction of the dynamics of particles in the Tevatron in the presence of the systematic skew quadrupole errors in dipoles; (b) Calculation of the nonlinear tune shift with amplitude based on the results of measurements and the linear lattice information; (2) Optimization of the Muon Collider storage ring: (a) Computation and optimization of the dynamic aperture of the Muon Collider 50 x 50 GeV storage ring using higher order correctors; (b) 750 x 750 GeV Muon Collider storage ring lattice design matching the Tevatron footprint. The normal form coordinates have a very important advantage over the particle optical coordinates: if the transformation can be carried out successfully (general restrictions for that are not much stronger than the typical restrictions imposed on the behavior of the particles in the accelerator), then the motion in the new coordinates has a very clean representation allowing one to extract more information about the dynamics of particles, and they are very convenient for the purposes of visualization. All the problem formulations include the derivation of the objective functions, which are later used in the optimization process using various optimization algorithms. Algorithms used to solve the problems are specific to collider rings, and applicable to similar problems arising on other machines of the same type. The details of the long-term behavior of the systems are studied to ensure their stability for the desired number of turns. The algorithm of the normal form transformation is of great value for such problems as it gives much extra information about the disturbing factors. In addition to the fact that the dynamics of particles is represented
Chapinal, N; de Passillé, A M; Pastell, M; Hänninen, L; Munksgaard, L; Rushen, J
2011-06-01
The aims were to determine whether measures of acceleration of the legs and back of dairy cows while they walk could help detect changes in gait or locomotion associated with lameness and differences in the walking surface. In 2 experiments, 12 or 24 multiparous dairy cows were fitted with five 3-dimensional accelerometers, 1 attached to each leg and 1 to the back, and acceleration data were collected while cows walked in a straight line on concrete (experiment 1) or on both concrete and rubber (experiment 2). Cows were video-recorded while walking to assess overall gait, asymmetry of the steps, and walking speed. In experiment 1, cows were selected to maximize the range of gait scores, whereas no clinically lame cows were enrolled in experiment 2. For each accelerometer location, overall acceleration was calculated as the magnitude of the 3-dimensional acceleration vector and the variance of overall acceleration, as well as the asymmetry of variance of acceleration within the front and rear pair of legs. In experiment 1, the asymmetry of variance of acceleration in the front and rear legs was positively correlated with overall gait and the visually assessed asymmetry of the steps (r ≥ 0.6). Walking speed was negatively correlated with the asymmetry of variance of the rear legs (r=-0.8) and positively correlated with the acceleration and the variance of acceleration of each leg and back (r ≥ 0.7). In experiment 2, cows had lower gait scores [2.3 vs. 2.6; standard error of the difference (SED)=0.1, measured on a 5-point scale] and lower scores for asymmetry of the steps (18.0 vs. 23.1; SED=2.2, measured on a continuous 100-unit scale) when they walked on rubber compared with concrete, and their walking speed increased (1.28 vs. 1.22 m/s; SED=0.02). The acceleration of the front (1.67 vs. 1.72 g; SED=0.02) and rear (1.62 vs. 1.67 g; SED=0.02) legs and the variance of acceleration of the rear legs (0.88 vs. 0.94 g; SED=0.03) were lower when cows walked on rubber
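For concreteness, the two accelerometer-derived quantities named in the abstract can be computed as follows; the asymmetry index shown is one plausible normalized-difference formula, since the abstract does not spell out the authors' exact definition.

```python
import numpy as np

def overall_acceleration(ax, ay, az):
    # Overall acceleration per sample: the magnitude of the
    # 3-dimensional acceleration vector, as described in the abstract.
    return np.sqrt(ax ** 2 + ay ** 2 + az ** 2)

def variance_asymmetry(var_left, var_right):
    # Asymmetry of the variance of acceleration within a leg pair.
    # A normalized absolute difference is one plausible definition
    # (hypothetical; the paper's exact formula is not given here).
    return abs(var_left - var_right) / (var_left + var_right)

# A perfectly symmetric pair of legs gives zero asymmetry.
sym = variance_asymmetry(0.9, 0.9)
```

Larger values of such an index would correspond to the asymmetric steps that correlated with lameness-related gait scores in experiment 1.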
Grassi, G.
2006-07-01
We present a non-linear space-angle two-level acceleration scheme for the method of characteristics (MOC). To the fine level on which the MOC transport calculation is performed, we associate a more coarsely discretized phase space in which a low-order problem is solved as an acceleration step. Cross sections on the coarse level are obtained by a flux-volume homogenisation technique, which entails the non-linearity of the acceleration. Discontinuity factors per surface are introduced as additional degrees of freedom on the coarse level in order to ensure the equivalence of the heterogeneous and the homogenised problems. After each fine transport iteration, a low-order transport problem is iteratively solved on the homogenised grid. The solution of this problem is then used to correct the angular moments of the flux resulting from the previous free transport sweep. Numerical tests for a given benchmark have been performed. Results are discussed. (authors)
Zuo, Pingbing; Zhang, Ming; Rassoul, Hamid K.
2013-10-03
The focused transport theory is appropriate to describe the injection and acceleration of low-energy particles at shocks as an extension of diffusive shock acceleration (DSA). In this investigation, we aim to characterize the role of the cross-shock potential (CSP), which originates in the charge separation across the shock ramp, in pickup ion (PUI) acceleration at various types of shocks with a focused transport model. The simulation results of energy spectrum and spatial density distribution for the cases with and without CSP added to the model are compared. With sufficient acceleration time, the focused transport acceleration finally falls into the DSA regime with the power-law spectral index equal to the solution of the DSA theory. The CSP can affect the shape of the spectrum segment at lower energies, but it does not change the spectral index of the final power-law spectrum at high energies. It is found that the CSP controls the injection efficiency, which is the fraction of PUIs reaching the DSA regime. A stronger CSP jump results in a dramatically improved injection efficiency. Our simulation results also show that the injection efficiency of PUIs is mass-dependent, being lower for species with a higher mass. Additionally, the CSP is able to enhance particle reflection upstream to produce a stronger intensity spike at the shock front. Lastly, we conclude that the CSP is a non-negligible factor that affects the dynamics of PUIs at shocks.
Baboi, Nicoleta
2002-09-19
Dipole modes are the main cause of transverse emittance dilution in the Japanese Linear Collider/Next Linear Collider (JLC/NLC). A diagnostic setup has been built in order to investigate them. The method is based on using a coaxial wire to excite and measure electromagnetic modes of accelerating structures. This method can offer a more efficient and less expensive procedure than the ASSET facility. Initial measurements have been made and are presented in this paper.
NASA Astrophysics Data System (ADS)
Revol, Jean-Pierre
2003-07-01
Progress in particle accelerator technology makes it possible to use a proton accelerator to produce energy and to destroy nuclear waste efficiently. The energy amplifier (EA) proposed by Carlo Rubbia and his group is a subcritical fast neutron system driven by a proton accelerator. It is particularly attractive for destroying, through fission, transuranic elements produced by presently operating nuclear reactors. The EA could also efficiently and at minimal cost transform long-lived fission fragments using the concept of adiabatic resonance crossing (ARC), recently tested at CERN with the TARC experiment. The ARC concept can be extended to several other domains of application (production of radioactive isotopes for medicine and industry, neutron research applications, etc.).
Application of the Euler-Lagrange method in determination of the coordinate acceleration
NASA Astrophysics Data System (ADS)
Sfarti, A.
2016-05-01
In a recent comment published in this journal (2015 Eur. J. Phys. 36 038001), Khrapko derived the relationship between coordinate acceleration and coordinate speed for the case of radial motion in Schwarzschild coordinates. We show an alternative derivation based on the Euler-Lagrange formalism, which has the advantage that it circumvents the tedious calculation of the Christoffel symbols and is more intuitive. Another aspect of our comment is that one should not attach much physical meaning to coordinate-dependent entities: GR is a coordinate-free theory, so a relationship between two coordinate-dependent entities, like the acceleration being dependent on speed, should not be given much importance. By contrast, the proper acceleration and proper speed are meaningful entities and their relationship is relevant. The comment is intended for graduate students and for the instructors who teach GR.
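For purely radial geodesic motion, the Euler-Lagrange route the comment advocates can be compressed to a few lines. The sketch below (with r_s = 2GM/c² and overdots denoting d/dτ) is a standard-textbook reconstruction of the kind of result at issue, not a quotation of either paper:

```latex
\mathcal{L} = \left(1-\frac{r_s}{r}\right)c^2\,\dot t^{\,2}
            - \left(1-\frac{r_s}{r}\right)^{-1}\dot r^{\,2}.
% t is cyclic, so its Euler-Lagrange equation gives a conserved quantity:
\left(1-\frac{r_s}{r}\right)\dot t = k .
% The Euler-Lagrange equation for r then yields, for radial geodesics,
\frac{d^2 r}{d\tau^2} = -\frac{GM}{r^2},
% and rewriting proper-time derivatives via dr/dt = \dot r / \dot t gives
% the coordinate acceleration as a function of coordinate speed:
\frac{d^2 r}{dt^2} = -\frac{GM}{r^2}\left(1-\frac{r_s}{r}\right)
\left[\,1 - \frac{3}{c^2\left(1-\frac{r_s}{r}\right)^{2}}
\left(\frac{dr}{dt}\right)^{2}\right].
```

Note how no Christoffel symbols appear: the conserved quantity from the cyclic coordinate t does the work that the geodesic equation would otherwise require.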
ERIC Educational Resources Information Center
GIBSON, ARTHUR R.; STEPHANS, THOMAS M.
ACCELERATION OF PUPILS AND SUBJECTS IS CONSIDERED A MEANS OF EDUCATING THE ACADEMICALLY GIFTED STUDENT. FIVE INTRODUCTORY ARTICLES PROVIDE A FRAMEWORK FOR THINKING ABOUT ACCELERATION. FIVE PROJECT REPORTS OF ACCELERATED PROGRAMS IN OHIO ARE INCLUDED. ACCELERATION IS NOW BEING REGARDED MORE FAVORABLY THAN FORMERLY, BECAUSE METHODS HAVE BEEN…
Tajima, Toshiki
2005-06-14
A system and method of accelerating ions in an accelerator to optimize the energy produced by a light source. Several parameters may be controlled in constructing a target used in the accelerator system to adjust performance of the accelerator system. These parameters include the material, thickness, geometry and surface of the target.
Tajima, Toshiki
2006-04-18
A system and method of accelerating ions in an accelerator to optimize the energy produced by a light source. Several parameters may be controlled in constructing a target used in the accelerator system to adjust performance of the accelerator system. These parameters include the material, thickness, geometry and surface of the target.
Injection to Rapid Diffusive Shock Acceleration at Perpendicular Shocks in Partially Ionized Plasmas
NASA Astrophysics Data System (ADS)
Ohira, Yutaka
2016-08-01
We present a three-dimensional hybrid simulation of a collisionless perpendicular shock in a partially ionized plasma for the first time. In this simulation, the shock velocity and upstream ionization fraction are v_sh ≈ 1333 km s^-1 and f_i ~ 0.5, which are typical values for isolated young supernova remnants (SNRs) in the interstellar medium. We confirm previous two-dimensional simulation results showing that downstream hydrogen atoms leak into the upstream region and are accelerated by the pickup process in the upstream region, and that large magnetic field fluctuations are generated both in the upstream and downstream regions. In addition, we find that the magnetic field fluctuations have three-dimensional structures and the leaking hydrogen atoms are injected into the diffusive shock acceleration (DSA) at the perpendicular shock after the pickup process. The observed DSA can be interpreted as shock drift acceleration with scattering. In this simulation, particles are accelerated to v ~ 100 v_sh ~ 0.3c within ~100 gyroperiods. The acceleration timescale is faster than that of DSA in parallel shocks. Our simulation results suggest that SNRs can accelerate cosmic rays to 10^15.5 eV (the knee) during the Sedov phase.
NASA Technical Reports Server (NTRS)
Kolyer, J. M.; Mann, N. R.
1977-01-01
Methods of accelerated and abbreviated testing were developed and applied to solar cell encapsulants. These encapsulants must provide protection for as long as 20 years outdoors at different locations within the United States. Consequently, encapsulants were exposed for increasing periods of time to the inherent climatic variables of temperature, humidity, and solar flux. Property changes in the encapsulants were observed. The goal was to predict long term behavior of encapsulants based upon experimental data obtained over relatively short test periods.
Accelerated stochastic and hybrid methods for spatial simulations of reaction diffusion systems
NASA Astrophysics Data System (ADS)
Rossinelli, Diego; Bayati, Basil; Koumoutsakos, Petros
2008-01-01
Spatial distributions characterize the evolution of reaction-diffusion models of several physical, chemical, and biological systems. We present two novel algorithms for the efficient simulation of these models: Spatial τ-Leaping ( Sτ-Leaping), employing a unified acceleration of the stochastic simulation of reaction and diffusion, and Hybrid τ-Leaping ( Hτ-Leaping), combining a deterministic diffusion approximation with a τ-Leaping acceleration of the stochastic reactions. The algorithms are validated by solving Fisher's equation and used to explore the role of the number of particles in pattern formation. The results indicate that the present algorithms have a nearly constant time complexity with respect to the number of events (reaction and diffusion), unlike the exact stochastic simulation algorithm which scales linearly.
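The core τ-leaping approximation common to both algorithms is to freeze the propensities over a leap of length τ and fire each reaction channel a Poisson-distributed number of times. The minimal non-spatial sketch below illustrates that step on a made-up reversible isomerization; it is not the Sτ- or Hτ-Leaping implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def tau_leap_step(x, propensities, stoich, tau):
    # Freeze propensities at the current state x, then fire each
    # reaction channel a Poisson(a_j * tau) number of times -- the
    # basic tau-leaping approximation underlying both algorithms.
    a = np.maximum([a_j(x) for a_j in propensities], 0.0)
    k = rng.poisson(a * tau)
    return x + stoich.T @ k        # net state change from all firings

# Toy system: A <-> B with forward rate 1.0*A and backward rate 0.5*B.
propensities = [lambda x: 1.0 * x[0], lambda x: 0.5 * x[1]]
stoich = np.array([[-1, 1],        # A -> B
                   [1, -1]])       # B -> A
x = np.array([1000, 0])
for _ in range(2000):
    x = tau_leap_step(x, propensities, stoich, tau=0.01)
```

Each leap replaces many individual exact-SSA events with two Poisson draws, which is the source of the near-constant cost per unit time noted in the abstract; the spatial variant additionally treats diffusion hops as extra channels.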
Pierpont, D. M.; Hicks, M. T.; Turner, P. L.; Watschke, T. M.
2005-11-01
For the successful commercialization of fuel cell technology, it is imperative that membrane electrode assembly (MEA) durability is understood and quantified. MEA lifetimes of 40,000 hours remain a key target for stationary power applications. Since it is impractical to wait 40,000 hours for durability results, it is critical to learn as much information as possible in as short a time period as possible to determine if an MEA sample will survive past its lifetime target. Consequently, 3M has utilized accelerated testing and statistical lifetime modeling tools to develop a methodology for evaluating MEA lifetime. Construction and implementation of a multi-cell test stand have allowed for multiple accelerated tests and stronger statistical data for learning about durability.
New methods for high current fast ion beam production by laser-driven acceleration.
Margarone, D; Krasa, J; Prokupek, J; Velyhan, A; Torrisi, L; Picciotto, A; Giuffrida, L; Gammino, S; Cirrone, P; Cutroneo, M; Romano, F; Serra, E; Mangione, A; Rosinski, M; Parys, P; Ryc, L; Limpouch, J; Laska, L; Jungwirth, K; Ullschmied, J; Mocek, T; Korn, G; Rus, B
2012-02-01
An overview of the latest experimental campaigns on laser-driven ion acceleration performed at the PALS facility in Prague is given. Both the 2 TW, sub-nanosecond iodine laser system and the 20 TW, femtosecond Ti:sapphire laser recently installed at PALS are used in our experiments, performed in the intensity range 10^16-10^19 W/cm^2. The main goal of our studies was to generate high-energy, high-current ion streams at relatively low laser intensities. The discussed experimental investigations show promising results in terms of maximum ion energy and current density, which make laser-accelerated ion beams a candidate for new-generation ion sources to be employed in medicine, nuclear physics, matter physics, and industry.
Hull, John R.
2000-01-01
Gravitational acceleration is measured in all spatial dimensions with improved sensitivity by utilizing a high temperature superconducting (HTS) gravimeter. The HTS gravimeter is comprised of a permanent magnet suspended in a spaced relationship from a high temperature superconductor, and a cantilever having a mass at its free end is connected to the permanent magnet at its fixed end. The permanent magnet and superconductor combine to form a bearing platform with extremely low frictional losses, and the rotational displacement of the mass is measured to determine gravitational acceleration. Employing a high temperature superconductor component has the significant advantage of having an operating temperature at or below 77K, whereby cooling may be accomplished with liquid nitrogen.
Methods used in WARP3d, a three-dimensional PIC/accelerator code
Grote, D.P.; Friedman, A.; Haber, I.
1997-02-28
WARP-3d(1,2), a three-dimensional PIC/accelerator code, has been developed over several years and has played a major role in the design and analysis of space-charge dominated beam experiments being carried out by the heavy-ion fusion programs at LLNL and LBNL. Major features of the code will be reviewed, including: residence corrections which allow large timesteps to be taken, electrostatic field solution with subgrid scale resolution of internal conductor boundaries, and a beat beam algorithm. Emphasis will be placed on new features and capabilities of the code, which include: a port to parallel processing environments, space-charge limited injection, and the linking of runs covering different sections of an accelerator. Representative applications in which the new features and capabilities are used will be presented along with the important results.
Accelerator-based neutron source for boron neutron capture therapy (BNCT) and method
Yoon, Woo Y.; Jones, James L.; Nigg, David W.; Harker, Yale D.
1999-01-01
A source for boron neutron capture therapy (BNCT) comprises a body of photoneutron emitter that includes heavy water and is closely surrounded in heat-imparting relationship by target material; one or more electron linear accelerators for supplying electron radiation having energy of substantially 2 to 10 MeV and for impinging such radiation on the target material, whereby photoneutrons are produced and heat is absorbed from the target material by the body of photoneutron emitter. The heavy water is circulated through a cooling arrangement to remove heat. A tank, desirably cylindrical or spherical, contains the heavy water, and a desired number of the electron accelerators circumferentially surround the tank and the target material as preferably made up of thin plates of metallic tungsten. Neutrons generated within the tank are passed through a surrounding region containing neutron filtering and moderating materials and through neutron delimiting structure to produce a beam or beams of epithermal neutrons normally having a minimum flux intensity level of 1.0×10^9 neutrons per square centimeter per second. Such beam or beams of epithermal neutrons are passed through gamma ray attenuating material to provide the required epithermal neutrons for BNCT use.
Accelerator-based neutron source for boron neutron capture therapy (BNCT) and method
Yoon, W.Y.; Jones, J.L.; Nigg, D.W.; Harker, Y.D.
1999-05-11
A source for boron neutron capture therapy (BNCT) comprises a body of photoneutron emitter that includes heavy water and is closely surrounded in heat-imparting relationship by target material; one or more electron linear accelerators for supplying electron radiation having energy of substantially 2 to 10 MeV and for impinging such radiation on the target material, whereby photoneutrons are produced and heat is absorbed from the target material by the body of photoneutron emitter. The heavy water is circulated through a cooling arrangement to remove heat. A tank, desirably cylindrical or spherical, contains the heavy water, and a desired number of the electron accelerators circumferentially surround the tank and the target material as preferably made up of thin plates of metallic tungsten. Neutrons generated within the tank are passed through a surrounding region containing neutron filtering and moderating materials and through neutron delimiting structure to produce a beam or beams of epithermal neutrons normally having a minimum flux intensity level of 1.0×10^9 neutrons per square centimeter per second. Such beam or beams of epithermal neutrons are passed through gamma ray attenuating material to provide the required epithermal neutrons for BNCT use. 3 figs.
2D models of gas flow and ice grain acceleration in Enceladus' vents using DSMC methods
NASA Astrophysics Data System (ADS)
Tucker, Orenthal J.; Combi, Michael R.; Tenishev, Valeriy M.
2015-09-01
The gas distribution of the Enceladus water vapor plume and the terminal speeds of ejected ice grains are physically linked to its subsurface fissures and vents. It is estimated that the gas exits the fissures with speeds of ~300-1000 m/s, while the micron-sized grains are ejected with speeds comparable to the escape speed (Schmidt, J. et al. [2008]. Nature 451, 685-688). We investigated the effects of isolated axisymmetric vent geometries on subsurface gas distributions, and in turn, the effects of gas drag on grain acceleration. Subsurface gas flows were modeled using a collision-limiter Direct Simulation Monte Carlo (DSMC) technique in order to consider a broad range of flow regimes (Bird, G. [1994]. Molecular Gas Dynamics and the Direct Simulation of Gas Flows. Oxford University Press, Oxford; Titov, E.V. et al. [2008]. J. Propul. Power 24(2), 311-321). The resulting DSMC gas distributions were used to determine the drag force for the integration of ice grain trajectories in a test particle model. Simulations were performed for diffuse flows in wide channels (Reynolds number ~10-250) and dense flows in narrow tubular channels (Reynolds number ~10^6). We compared gas properties like bulk speed and temperature, and the terminal grain speeds obtained at the vent exit, with inferred values for the plume from Cassini data. In the simulations of wide fissures with dimensions similar to those of the Tiger Stripes, the resulting subsurface gas densities of ~10^14-10^20 m^-3 were not sufficient to accelerate even micron-sized ice grains to the Enceladus escape speed. In the simulations of narrow tubular vents with radii of ~10 m, the much denser flows with number densities of 10^21-10^23 m^-3 accelerated micron-sized grains to the bulk gas speed of ~600 m/s. Further investigations are required to understand the complex relationship between the vent geometry, gas source rate, and the sizes and speeds of ejected grains.
Evaluation of Dynamic Mechanical Loading as an Accelerated Test Method for Ribbon Fatigue
Bosco, Nick; Silverman, Timothy J.; Wohlgemuth, John; Kurtz, Sarah; Inoue, Masanao; Sakurai, Keiichiro; Shioda, Tsuyoshi; Zenkoh, Hirofumi; Hirota, Kusato; Miyashita, Masanori; Tadanori, Tanahashi; Suzuki, Soh; Chen, Yifeng; Verlinden, Pierre J.
2014-12-31
Dynamic Mechanical Loading (DML) of photovoltaic modules is explored as a route to quickly fatigue copper interconnect ribbons. Results indicate that most of the interconnect ribbons may be strained through module mechanical loading to a level that will result in failure in a few hundred to thousands of cycles. Considering the speed at which DML may be applied, this translates into a few hours of testing. To evaluate the equivalence of DML to thermal cycling, parallel tests were conducted with thermal cycling. Preliminary analysis suggests that one +/-1 kPa DML cycle is roughly equivalent to one standard accelerated thermal cycle and approximately 175 of these cycles are equivalent to a 25-year exposure in Golden Colorado for the mechanism of module ribbon fatigue.
Evaluation of Dynamic Mechanical Loading as an Accelerated Test Method for Ribbon Fatigue: Preprint
Bosco, N.; Silverman, T. J.; Wohlgemuth, J.; Kurtz, S.; Inoue, M.; Sakurai, K.; Shinoda, T.; Zenkoh, H.; Hirota, K.; Miyashita, M.; Tadanori, T.; Suzuki, S.
2015-04-07
Dynamic Mechanical Loading (DML) of photovoltaic modules is explored as a route to quickly fatigue copper interconnect ribbons. Results indicate that most of the interconnect ribbons may be strained through module mechanical loading to a level that will result in failure in a few hundred to thousands of cycles. Considering the speed at which DML may be applied, this translates into a few hours of testing. To evaluate the equivalence of DML to thermal cycling, parallel tests were conducted with thermal cycling. Preliminary analysis suggests that one +/-1 kPa DML cycle is roughly equivalent to one standard accelerated thermal cycle and approximately 175 of these cycles are equivalent to a 25-year exposure in Golden Colorado for the mechanism of module ribbon fatigue.
Neutron source, linear-accelerator fuel enricher and regenerator and associated methods
Steinberg, Meyer; Powell, James R.; Takahashi, Hiroshi; Grand, Pierre; Kouts, Herbert
1982-01-01
A device for producing fissile material inside of fabricated nuclear elements so that they can be used to produce power in nuclear power reactors. Fuel elements, for example, of a LWR are placed in pressure tubes in a vessel surrounding a liquid lead-bismuth flowing columnar target. A linear-accelerator proton beam enters the side of the vessel and impinges on the dispersed liquid lead-bismuth columns and produces neutrons which radiate through the surrounding pressure tube assembly or blanket containing the nuclear fuel elements. These neutrons are absorbed by the natural fertile uranium-238 elements and are transformed to fissile plutonium-239. The fertile fuel is thus enriched in fissile material to a concentration whereby they can be used in power reactors. After use in the power reactors, dispensed depleted fuel elements can be reinserted into the pressure tubes surrounding the target and the nuclear fuel regenerated for further burning in the power reactor.
Liu, F.; Brown, I.; Phillips, H.; Biallas, George; Siggins, Timothy
1997-05-01
An important technique used for the suppression of surface flashover on high voltage DC ceramic insulators as well as for RF windows is that of providing some surface conduction to bleed off accumulated surface charge. We have used metal ion implantation to modify the surface of high voltage ceramic vacuum insulators to provide a uniform surface resistivity of approximately 5 x 10^10 Ω/square. A vacuum arc ion source based implanter was used to implant Pt at an energy of about 135 keV to doses of up to more than 5 x 10^16 ions/cm^2 into small ceramic test coupons and also into the inside surface of several ceramic accelerator columns 25 cm I.D. by 28 cm long. Here we describe the experimental set-up used to do the ion implantation and summarize the results of our exploratory work on implantation into test coupons as well as the implantations of the actual ceramic columns.
NASA Technical Reports Server (NTRS)
Hubeny, I.; Lanz, T.
1995-01-01
A new numerical method for computing non-Local Thermodynamic Equilibrium (non-LTE) model stellar atmospheres is presented. The method, called the hybrid complete linearization/accelerated lambda iteration (CL/ALI) method, combines advantages of both its constituents. Its rate of convergence is virtually as high as for the standard CL method, while the computer time per iteration is almost as low as for the standard ALI method. The method is formulated as the standard complete linearization, the only difference being that the radiation intensity at selected frequency points is not explicitly linearized; instead, it is treated by means of the ALI approach. The scheme offers a wide spectrum of options, ranging from the full CL to the full ALI method. We demonstrate that the method works optimally if the majority of frequency points are treated in the ALI mode, while the radiation intensity at a few (typically two to 30) frequency points is explicitly linearized. We show how this method can be applied to calculate metal line-blanketed non-LTE model atmospheres, by using the idea of 'superlevels' and 'superlines' introduced originally by Anderson (1989). We calculate several illustrative models taking into account several tens of thousands of lines of Fe III to Fe IV and show that the hybrid CL/ALI method provides a robust method for calculating non-LTE line-blanketed model atmospheres for a wide range of stellar parameters. The results for individual stellar types will be presented in subsequent papers in this series.
Kim, Byungyeon; Park, Byungjun; Lee, Seungrag; Won, Youngjae
2016-01-01
We demonstrated GPU accelerated real-time confocal fluorescence lifetime imaging microscopy (FLIM) based on the analog mean-delay (AMD) method. Our algorithm was verified for various fluorescence lifetimes and photon numbers. The GPU processing time was faster than the physical scanning time for images up to 800 × 800, and more than 149 times faster than a single core CPU. The frame rate of our system was demonstrated to be 13 fps for a 200 × 200 pixel image when observing maize vascular tissue. This system can be utilized for observing dynamic biological reactions, medical diagnosis, and real-time industrial inspection.
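The core of the analog mean-delay approach is that, for a waveform that is the convolution of an instrument response with a single-exponential decay, mean delays add, so the lifetime can be read off as a difference of first moments. A minimal synthetic sketch of that idea (all numbers here are made up for the demo; this is not the authors' GPU implementation):

```python
import numpy as np

# Synthetic illustration of the analog mean-delay (AMD) idea: for a
# convolution of an instrument response (IRF) with a single-exponential
# decay, mean delays add, so tau ~ <t>_signal - <t>_IRF.
t = np.linspace(0.0, 50.0, 5001)                 # ns, dt = 0.01 ns
tau_true = 3.0                                   # ns, assumed lifetime
irf = np.exp(-0.5 * ((t - 5.0) / 0.4) ** 2)      # Gaussian IRF centered at 5 ns
signal = np.convolve(irf, np.exp(-t / tau_true))[: t.size]

def mean_delay(y, t):
    """First moment of a waveform: sum(t*y)/sum(y)."""
    return np.sum(t * y) / np.sum(y)

tau_est = mean_delay(signal, t) - mean_delay(irf, t)   # close to tau_true
```

Because only two moments are needed per pixel, this estimator is cheap enough to parallelize trivially across pixels, which is what makes a GPU implementation attractive.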
Sanz, Darío Esteban; Alvarez, Guillermo Daniel; Nelli, Flavio Enrico
2007-03-21
A new method to measure the effect of the backscatter into the beam monitor chambers in linear accelerators is introduced from first principles. The technique, applicable to high-energy photon beams, is similar to the well-known telescopic method although here the heavy blocks are replaced by a very small, centred block on the shadow tray, thus the name 'ecliptic method'. This effect, caused mainly by backscattering from the secondary collimators, is known to be an output factor constituent and must be accounted for when detailed calculations involving the machine's head are required. Since its magnitude is generally small, experimental errors might obscure the behaviour of the phenomenon. Consequently, the procedure introduced goes along with an uncertainty assessment. Our theory was confirmed via measurements in cobalt-60 beams, where the studied effect does not contribute to the output factor. Measurements were also performed on our Saturne 41 linear accelerator and the results were qualitatively similar to those described elsewhere. The collimation systems were studied separately by varying one jaw setting while keeping the other at its maximum value. In the light of these results, we deduced an algorithm that can correlate the former data with the effect of backscattering to the beam monitor chambers for any rectangular field within 0.5%, which is of the order of the experimental uncertainty (0.6%). As we show, the experimental procedure is safe, simple, not invasive for the linac and requires only basic dosimetry equipment.
Huang, Susie Y; Witzel, Thomas; Wald, Lawrence L
2008-11-01
Control of the longitudinal magnetization in fast gradient-echo (GRE) sequences is an important factor in enabling the high efficiency of balanced steady-state free precession (bSSFP) sequences. We introduce a new method for accelerating the return of the longitudinal magnetization to the +z-axis that is independent of externally applied RF pulses and shows improved off-resonance performance. The accelerated radiation damping for increased spin equilibrium (ARISE) method uses an external feedback circuit to strengthen the radiation damping (RD) field. The enhanced RD field rotates the magnetization back to the +z-axis at a rate faster than T1 relaxation. The method is characterized in GRE phantom imaging at 3T as a function of feedback gain, phase, and duration, and compared with results from numerical simulations of the Bloch equations incorporating RD. A short period of feedback (10 ms) during a refocused interval of a crushed GRE sequence allowed greater than 99% recovery of the longitudinal magnetization when very little T2 relaxation had time to occur. An appropriate application might be to improve navigated sequences. Unlike conventional flip-back schemes, the ARISE "flip-back" is generated by the spins themselves, thereby offering a potentially useful building block for enhancing GRE sequences.
Quan, Guotao; Gong, Hui; Deng, Yong; Fu, Jianwei; Luo, Qingming
2011-02-01
High-speed fluorescence molecular tomography (FMT) reconstruction for 3-D heterogeneous media is still one of the most challenging problems in diffusive optical fluorescence imaging. In this paper, we propose a fast FMT reconstruction method that is based on Monte Carlo (MC) simulation and accelerated by a cluster of graphics processing units (GPUs). Based on the Message Passing Interface standard, we modified the MC code for fast FMT reconstruction, and different Green's functions representing the flux distribution in media are calculated simultaneously by different GPUs in the cluster. A load-balancing method was also developed to increase the computational efficiency. By applying the Fréchet derivative, a Jacobian matrix is formed to reconstruct the distribution of the fluorochromes using the calculated Green's functions. Phantom experiments have shown that only 10 min are required to get reconstruction results with a cluster of 6 GPUs, rather than 6 h with a cluster of multiple dual opteron CPU nodes. Because of the advantages of high accuracy and suitability for 3-D heterogeneity media with refractive-index-unmatched boundaries from the MC simulation, the GPU cluster-accelerated method provides a reliable approach to high-speed reconstruction for FMT imaging.
NASA Astrophysics Data System (ADS)
Quan, Guotao; Gong, Hui; Deng, Yong; Fu, Jianwei; Luo, Qingming
2011-02-01
High-speed fluorescence molecular tomography (FMT) reconstruction for 3-D heterogeneous media is still one of the most challenging problems in diffusive optical fluorescence imaging. In this paper, we propose a fast FMT reconstruction method that is based on Monte Carlo (MC) simulation and accelerated by a cluster of graphics processing units (GPUs). Based on the Message Passing Interface standard, we modified the MC code for fast FMT reconstruction, and different Green's functions representing the flux distribution in media are calculated simultaneously by different GPUs in the cluster. A load-balancing method was also developed to increase the computational efficiency. By applying the Fréchet derivative, a Jacobian matrix is formed to reconstruct the distribution of the fluorochromes using the calculated Green's functions. Phantom experiments have shown that only 10 min are required to get reconstruction results with a cluster of 6 GPUs, rather than 6 h with a cluster of multiple dual opteron CPU nodes. Because of the advantages of high accuracy and suitability for 3-D heterogeneity media with refractive-index-unmatched boundaries from the MC simulation, the GPU cluster-accelerated method provides a reliable approach to high-speed reconstruction for FMT imaging.
Ciccotti, Giovanni; Meloni, Simone
2011-04-07
We introduce a new method to simulate the physics of rare events. The method, an extension of the Temperature Accelerated Molecular Dynamics, comes in use when the collective variables introduced to characterize the rare events are either non-analytical or so complex that computing their derivative is not practical. We illustrate the functioning of the method by studying the homogeneous crystallization in a sample of Lennard-Jones particles. The process is studied by introducing a new collective variable that we call Effective Nucleus Size N. We have computed the free energy barriers and the size of critical nucleus, which result in agreement with data available in the literature. We have also performed simulations in the liquid domain of the phase diagram. We found a free energy curve monotonically growing with the nucleus size, consistent with the liquid domain.
NASA Astrophysics Data System (ADS)
Nouizi, F.; Erkol, H.; Luk, A.; Marks, M.; Unlu, M. B.; Gulsen, G.
2016-10-01
We previously introduced photo-magnetic imaging (PMI), an imaging technique that illuminates the medium under investigation with near-infrared light and measures the induced temperature increase using magnetic resonance thermometry (MRT). Using a multiphysics solver combining photon migration and heat diffusion, PMI models the spatiotemporal distribution of temperature variation and recovers high resolution optical absorption images using these temperature maps. In this paper, we present a new fast non-iterative reconstruction algorithm for PMI. This new algorithm uses analytic methods during the resolution of the forward problem and the assembly of the sensitivity matrix. We validate our new analytic-based algorithm with the first generation finite element method (FEM) based reconstruction algorithm previously developed by our team. The validation is performed using, first synthetic data and afterwards, real MRT measured temperature maps. Our new method accelerates the reconstruction process 30-fold when compared to a single iteration of the FEM-based algorithm.
NASA Astrophysics Data System (ADS)
Huré, J.-M.; Hersant, F.
2017-02-01
We compute the structure of a self-gravitating torus with polytropic equation of state (EOS) rotating in an imposed centrifugal potential. The Poisson solver is based on isotropic multigrid with optimal covering factor (fluid section-to-grid area ratio). We work at second order in the grid resolution for both finite difference and quadrature schemes. For soft EOS (i.e. polytropic index n ≥ 1), the underlying second order is naturally recovered for boundary values and any other integrated quantity sensitive to the mass density (mass, angular momentum, volume, virial parameter, etc.), i.e. errors vary with the number N of nodes per direction as ∼1/N^2. This is, however, not observed for purely geometrical quantities (surface area, meridional section area, volume), unless a subgrid approach is considered (i.e. boundary detection). Equilibrium sequences are also much better described, especially close to critical rotation. Yet another technical effort is required for hard EOS (n < 1), due to infinite mass density gradients at the fluid surface. We fix the problem by using kernel splitting. Finally, we propose an accelerated version of the self-consistent field (SCF) algorithm based on a node-by-node pre-conditioning of the mass density at each step. The computing time is reduced by a factor of 2 typically, regardless of the polytropic index. There is a priori no obstacle to applying these results and techniques to ellipsoidal configurations and even to 3D configurations.
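The quoted ∼1/N^2 error scaling is the usual signature of a second-order discretization: halving the grid spacing should cut the error by a factor of ∼4. A minimal generic illustration of that check (a central-difference example, unrelated to the paper's torus solver):

```python
import numpy as np

def laplacian_error(N):
    """Max error of the standard 3-point second derivative of sin(x)
    on [0, pi] with N intervals (the exact value is -sin(x))."""
    x = np.linspace(0.0, np.pi, N + 1)
    h = x[1] - x[0]
    u = np.sin(x)
    d2u = (u[:-2] - 2.0 * u[1:-1] + u[2:]) / h**2   # interior nodes only
    return np.max(np.abs(d2u + np.sin(x[1:-1])))

# Doubling N should reduce the error by ~4 for a second-order scheme.
ratio = laplacian_error(64) / laplacian_error(128)
```

The same doubling test is what reveals when a quantity (like the purely geometrical ones in the abstract) fails to inherit the scheme's formal order.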
Otsuka, Takao; Okimoto, Noriaki; Taiji, Makoto
2015-11-15
In the field of drug discovery, it is important to accurately predict the binding affinities between target proteins and drug applicant molecules. Many of the computational methods available for evaluating binding affinities have adopted molecular mechanics-based force fields, although they cannot fully describe protein-ligand interactions. A noteworthy computational method in development involves large-scale electronic structure calculations. Fragment molecular orbital (FMO) method, which is one of such large-scale calculation techniques, is applied in this study for calculating the binding energies between proteins and ligands. By testing the effects of specific FMO calculation conditions (including fragmentation size, basis sets, electron correlation, exchange-correlation functionals, and solvation effects) on the binding energies of complexes of the FK506-binding protein with 10 ligands, we have found that the standard FMO calculation condition, FMO2-MP2/6-31G(d), is suitable for evaluating the protein-ligand interactions. The correlation coefficient between the binding energies calculated with this FMO calculation condition and experimental values is determined to be R = 0.77. Based on these results, we also propose a practical scheme for predicting binding affinities by combining the FMO method with the quantitative structure-activity relationship (QSAR) model. The results of this combined method can be directly compared with experimental binding affinities. The FMO and QSAR combined scheme shows a higher correlation with experimental data (R = 0.91). Furthermore, we propose an acceleration scheme for the binding energy calculations using a multilayer FMO method focusing on the protein-ligand interaction distance. Our acceleration scheme, which uses FMO2-HF/STO-3G:MP2/6-31G(d) at R(int) = 7.0 Å, reduces computational costs, while maintaining accuracy in the evaluation of binding energy.
2014-07-01
Corrosion Screening of EV31A Magnesium and Other Magnesium Alloys Using Laboratory-Based Accelerated Corrosion and Electro-chemical Methods
Placzankis, Brian E.; Joseph P
Army Research Laboratory, Aberdeen Proving Ground, MD 21005-5066; ARL-TR-6899; July 2014
NASA Astrophysics Data System (ADS)
Feonychev, A. I.; Dolgikh, G. A.
2001-07-01
The effect of constant and time-dependent accelerations (vibrations) on the melt flow and heat and mass transfer in the process of crystal growth by the method of directional crystallization (Bridgman method) onboard spacecraft is numerically investigated. The mathematical formulation of the problem and the technique to solve it numerically are given. The time-averaged flow arising under the action of vibrations in a nonisothermal fluid is investigated. With the help of a rational choice of dimensionless similitude parameters, a generalized dependence on the intensity of melt flow is obtained for the radial segregation of dopants. This dependence is invariant with respect to the type of motive power and thermal boundary conditions in the region of very small velocities of melt flow (“creeping” flow), which are characteristic for microgravity conditions. The allowable levels of constant accelerations, as well as the frequency dependences of tolerable vibrations, are obtained for five typical semiconductor materials: Ge(Ga), GaAs(Te), InSb(Te), Si(P), and Si(B). It is shown that the radial segregation of dopant is much more sensitive to microaccelerations than the axial one. In the region of small velocities, the latter is determined by the duration of the transition regime, which depends on certain physical properties of the melt. New problems that resulted from the investigations performed are discussed.
NASA Astrophysics Data System (ADS)
Ellis, P. F., II; Ferguson, A. F.
1995-04-01
In 1992, the Air-Conditioning and Refrigeration Technology Institute, Inc. (ARTI) contracted Radian Corporation to ascertain whether an improved accelerated test method or procedure could be developed that would allow prediction of the life of motor insulation materials used in hermetic motors for air-conditioning and refrigeration equipment operated with alternative refrigerant/lubricant mixtures. Phase 1 of the project, Conceptual Design of an accelerated test method and apparatus, was successfully completed in June 1993. The culmination of that effort was the concept of the Simulated Stator Unit (SSU) test. The objective of the Phase 2 limited proof-of-concept demonstration was to: answer specific engineering/design questions; design and construct an analog control sequencer and supporting apparatus; and conduct limited tests to determine the viability of the SSU test concept. This report reviews the SSU test concept, and describes the results through the conclusion of the proof-of-concept prototype tests in March 1995. The technical design issues inherent in transforming any conceptual design to working equipment have been resolved, and two test systems and controllers have been constructed. Pilot tests and three prototype tests have been completed, concluding the current phase of work. One prototype unit was tested without thermal stress loads. Twice daily insulation property measurements (IPM's) on this unit demonstrated that the insulation property measurements themselves did not degrade the SSU.
Ellis, II, P F; Ferguson, A F
1995-04-19
In 1992, the Air-Conditioning and Refrigeration Technology Institute, Inc. (ARTI) contracted Radian Corporation to ascertain whether an improved accelerated test method or procedure could be developed that would allow prediction of the life of motor insulation materials used in hermetic motors for air-conditioning and refrigeration equipment operated with alternative refrigerant/lubricant mixtures. Phase 1 of the project, Conceptual Design of an accelerated test method and apparatus, was successfully completed in June 1993. The culmination of that effort was the concept of the Simulated Stator Unit (SSU) test. The objective of the Phase 2 limited proof-of-concept demonstration was to: answer specific engineering/design questions; design and construct an analog control sequencer and supporting apparatus; and conduct limited tests to determine the viability of the SSU test concept. This report reviews the SSU test concept, and describes the results through the conclusion of the proof-of-concept prototype tests in March 1995. The technical design issues inherent in transforming any conceptual design to working equipment have been resolved, and two test systems and controllers have been constructed. Pilot tests and three prototype tests have been completed, concluding the current phase of work. One prototype unit was tested without thermal stress loads. Twice daily insulation property measurements (IPMs) on this unit demonstrated that the insulation property measurements themselves did not degrade the SSU.
Can Accelerators Accelerate Learning?
NASA Astrophysics Data System (ADS)
Santos, A. C. F.; Fonseca, P.; Coelho, L. F. S.
2009-03-01
The 'Young Talented' education program developed by the Brazilian State Funding Agency (FAPERJ) [1] makes it possible for high-schools students from public high schools to perform activities in scientific laboratories. In the Atomic and Molecular Physics Laboratory at Federal University of Rio de Janeiro (UFRJ), the students are confronted with modern research tools like the 1.7 MV ion accelerator. Being a user-friendly machine, the accelerator is easily manageable by the students, who can perform simple hands-on activities, stimulating interest in physics, and getting the students close to modern laboratory techniques.
Teng, L.C.
1960-01-19
A combination of two accelerators, a cyclotron and a ring-shaped accelerator which has a portion disposed tangentially to the cyclotron, is described. Means are provided to transfer particles from the cyclotron to the ring accelerator including a magnetic deflector within the cyclotron, a magnetic shield between the ring accelerator and the cyclotron, and a magnetic inflector within the ring accelerator.
Accelerators, Colliders, and Snakes
NASA Astrophysics Data System (ADS)
Courant, Ernest D.
2003-12-01
The author traces his involvement in the evolution of particle accelerators over the past 50 years. He participated in building the first billion-volt accelerator, the Brookhaven Cosmotron, which led to the introduction of the "strong-focusing" method that has in turn led to the very large accelerators and colliders of the present day. The problems of acceleration of spin-polarized protons are also addressed, with discussions of depolarizing resonances and "Siberian snakes" as a technique for mitigating these resonances.
Hu, Yu-Jen; Chow, Kuan-Chih; Liu, Ching-Chuan; Lin, Li-Jen; Wang, Sheng-Cheng; Wang, Shulhn-Der
2015-08-01
The standard World Health Organization procedure for vaccine development has provided a guideline for influenza viruses, but no systematic operational model. We recently designed a systemic analysis method to evaluate annual perspective sequence changes of influenza virus strains. We applied dnaml of PHYLIP 3.69, developed by Joseph Felsenstein of the University of Washington, and ClustalX2, developed by Larkin et al., for calculating, comparing, and localizing the most plausible vaccine epitopes. This study identified the changes in biological sequences and associated alignment alterations, which would ultimately affect epitope structures, as well as the plausible hidden features to search for the most conserved and effective epitopes for vaccine development. Adding our newly designed systemic analysis method to supplement the WHO guidelines could accelerate the development of urgently needed vaccines that might concurrently combat several strains of viruses within a shorter period.
Yon, Lisa; Faulkner, Brian; Kanchanapangka, Sumolya; Chaiyabutr, Narongsak; Meepan, Sompast; Lasley, Bill
2010-01-01
Noninvasive hormone assays provide a way to determine an animal's health or reproductive status without the need for physical or chemical restraint, both of which create unnecessary stress for the animal, and can potentially alter the hormones being measured. Because hormone metabolism is highly species-specific, each assay must be validated for use in the species of interest. Validation of noninvasive steroid hormone assays has traditionally required the administration of relatively high doses of radiolabelled compounds (100 µCi or more of (14)C labeled hormone) to permit subsequent detection of the excreted metabolites in the urine and feces. Accelerator mass spectrometry (AMS) is sensitive to extremely low levels of rare isotopes such as (14)C, and provides a way to validate hormone assays using much lower levels of radioactivity than those traditionally employed. A captive Asian bull elephant was given 1 µCi of (14)C-testosterone intravenously, and an opportunistic urine sample was collected 2 hr after the injection. The sample was separated by HPLC and the (14)C in the fractions was detected by AMS to characterize the metabolites present in the urine. A previously established HPLC protocol was used, which permitted the identification of fractions into which testosterone sulfate, testosterone glucuronide, and the parent compound testosterone elute. Results from this study indicate that the majority of testosterone excreted in the urine of the Asian bull elephant is in the form of testosterone sulfate. A small amount of testosterone glucuronide is also excreted, but there is no parent compound present in the urine at all. These results underscore the need for enzymatic hydrolysis to prepare urine samples for hormone assay measurement. Furthermore, they highlight the importance of proper hormone assay validation in order to ensure accurate measurement of the desired hormone. Although this study demonstrated the utility of AMS for safer validation of
NASA Astrophysics Data System (ADS)
Olson, Allen H.
1987-08-01
The Simultaneous Iterative Reconstruction Technique (SIRT) is a variation of Richardson's method for solving linear systems with positive definite matrices, and can be used for solving any least squares problem. Previous SIRT methods used in tomography have suggested a constant normalization factor for the step size. With this normalization, the convergence rate of the eigencomponents decreases as the eigenvalue decreases, making these methods impractical for obtaining large bandwidth solutions. By allowing the normalization factor to change with each iteration, the error after k iterations is shown to be a k th order polynomial. The factors are then chosen to yield a Chebyshev polynomial so that the maximum error in the iterative method is minimized over a prescribed range of eigenvalues. Compared with k iterations using a constant normalization, the Chebyshev method requires only about √k iterations and has the property that all eigencomponents converge at the same rate. Simple expressions are given which permit the number of iterations to be determined in advance based upon the desired accuracy and bandwidth. A stable ordering of the Chebyshev factors is also given which minimizes the effects of numerical roundoff. Since a good upper bound for the maximum eigenvalue of the normal matrix is essential to the calculations, the well known 'power method with shift of origin' is combined with the Chebyshev method to estimate its value.
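The variable-step idea above can be sketched compactly: run Richardson iteration on the normal equations, with the step sizes taken as reciprocals of Chebyshev points over the eigenvalue range of A^T A, so the accumulated error polynomial is a scaled Chebyshev polynomial. This is a minimal sketch of the idea, not the author's implementation (and it uses the naive factor ordering, which is only safe for modest iteration counts):

```python
import numpy as np

def chebyshev_sirt(A, b, m, lam_min, lam_max):
    """Richardson iteration on A^T A x = A^T b with m step sizes set to
    reciprocals of the Chebyshev points on [lam_min, lam_max], the
    assumed eigenvalue range of A^T A."""
    c = 0.5 * (lam_max + lam_min)
    d = 0.5 * (lam_max - lam_min)
    x = np.zeros(A.shape[1])
    # Naive step ordering; large m needs the stable ordering the
    # abstract mentions to keep roundoff under control.
    for j in range(m):
        alpha = 1.0 / (c + d * np.cos(np.pi * (2 * j + 1) / (2 * m)))
        x = x + alpha * (A.T @ (b - A @ x))
    return x

# Test problem with a known eigenvalue range: singular values in [1, 3],
# so the spectrum of A^T A lies in [1, 9].
rng = np.random.default_rng(0)
U, _ = np.linalg.qr(rng.standard_normal((30, 8)))
V, _ = np.linalg.qr(rng.standard_normal((8, 8)))
A = U @ (np.linspace(1.0, 3.0, 8)[:, None] * V.T)
b = rng.standard_normal(30)
x = chebyshev_sirt(A, b, 30, 1.0, 9.0)
x_ref = np.linalg.lstsq(A, b, rcond=None)[0]
```

With the eigenvalue range known, the number of iterations m can indeed be fixed in advance from the desired accuracy, as the abstract states.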
Acceleration of k-Eigenvalue / Criticality Calculations using the Jacobian-Free Newton-Krylov Method
Dana Knoll; HyeongKae Park; Chris Newman
2011-02-01
We present a new approach for the k-eigenvalue problem using a combination of classical power iteration and the Jacobian-free Newton-Krylov method (JFNK). The method poses the k-eigenvalue problem as a fully coupled nonlinear system, which is solved by JFNK with an effective block preconditioning consisting of the power iteration and algebraic multigrid. We demonstrate effectiveness and algorithmic scalability of the method on a 1-D, one-group problem and two 2-D, two-group problems and provide comparison to other efforts using similar algorithmic approaches.
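The "fully coupled nonlinear system" formulation can be sketched in a few lines: treat the flux and k as one unknown vector, append a normalization constraint to close the system, and hand the residual to a Newton-Krylov solver. The two-group numbers below are illustrative (not from the paper), and SciPy's `newton_krylov` stands in for the authors' preconditioned JFNK:

```python
import numpy as np
from scipy.optimize import newton_krylov

# Illustrative two-group operators (made-up coefficients for the demo).
M = np.array([[0.60, 0.00],
              [-0.05, 0.30]])        # removal + downscatter operator
F = np.array([[0.30, 0.45],
              [0.00, 0.00]])         # fission operator (fast births only)

def residual(u):
    phi, k = u[:-1], u[-1]
    balance = M @ phi - (F @ phi) / k      # transport balance M phi = (1/k) F phi
    norm = 0.5 * (phi @ phi - 1.0)         # normalization fixes the eigenvector scale
    return np.append(balance, norm)

u0 = np.append(np.full(2, 1.0 / np.sqrt(2.0)), 1.0)   # flat flux, k = 1 guess
sol = newton_krylov(residual, u0, f_tol=1e-10)
phi, k = sol[:-1], sol[-1]                 # k converges to the dominant eigenvalue
```

For this 2x2 example the dominant eigenvalue of M^-1 F is 0.625, which is the k the solver lands on; in the paper's setting the same residual is evaluated matrix-free, with power iteration and multigrid as the preconditioner.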
On the equivalence of LIST and DIIS methods for convergence acceleration
NASA Astrophysics Data System (ADS)
Garza, Alejandro J.; Scuseria, Gustavo E.
2015-04-01
Self-consistent field extrapolation methods play a pivotal role in quantum chemistry and electronic structure theory. We, here, demonstrate the mathematical equivalence between the recently proposed family of LIST methods [Wang et al., J. Chem. Phys. 134, 241103 (2011); Y. K. Chen and Y. A. Wang, J. Chem. Theory Comput. 7, 3045 (2011)] and the general form of Pulay's DIIS [Chem. Phys. Lett. 73, 393 (1980); J. Comput. Chem. 3, 556 (1982)] with specific error vectors. Our results also explain the differences in performance among the various LIST methods.
On the equivalence of LIST and DIIS methods for convergence acceleration
Garza, Alejandro J.; Scuseria, Gustavo E.
2015-04-28
Self-consistent field extrapolation methods play a pivotal role in quantum chemistry and electronic structure theory. We, here, demonstrate the mathematical equivalence between the recently proposed family of LIST methods [Wang et al., J. Chem. Phys. 134, 241103 (2011); Y. K. Chen and Y. A. Wang, J. Chem. Theory Comput. 7, 3045 (2011)] and the general form of Pulay’s DIIS [Chem. Phys. Lett. 73, 393 (1980); J. Comput. Chem. 3, 556 (1982)] with specific error vectors. Our results also explain the differences in performance among the various LIST methods.
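The DIIS construction referenced in both records above is easy to state concretely: keep a history of trial vectors and their error vectors, find the affine combination (coefficients summing to 1) that minimizes the norm of the combined error, and step from the extrapolated point. A minimal sketch on a linear fixed-point problem (a generic demo, not an SCF code; the test matrix is an assumption chosen so the base iteration contracts):

```python
import numpy as np

def diis_solve(A, b, hist=8, iters=60, tol=1e-10):
    """Richardson iteration x <- x + (b - A x) accelerated with Pulay's
    DIIS.  For a linear problem the residual of an affine combination of
    iterates really is the same combination of their residuals, which is
    what makes the extrapolation effective."""
    x = np.zeros_like(b)
    xs, es = [], []
    for _ in range(iters):
        e = b - A @ x
        if np.linalg.norm(e) < tol:
            break
        xs.append(x.copy()); es.append(e)
        xs, es = xs[-hist:], es[-hist:]        # bounded history
        k = len(es)
        # Bordered Gram system: minimize ||sum c_i e_i|| s.t. sum c_i = 1.
        B = np.zeros((k + 1, k + 1))
        B[:k, :k] = [[ei @ ej for ej in es] for ei in es]
        B[k, :k] = B[:k, k] = 1.0
        rhs = np.zeros(k + 1); rhs[k] = 1.0
        c = np.linalg.lstsq(B, rhs, rcond=None)[0][:k]  # robust if Gram block is singular
        x_bar = sum(ci * xi for ci, xi in zip(c, xs))
        e_bar = sum(ci * ei for ci, ei in zip(c, es))
        x = x_bar + e_bar                       # step from the extrapolated point
    return x

# Mildly non-symmetric test matrix whose plain iteration already contracts.
rng = np.random.default_rng(1)
R = rng.standard_normal((40, 40))
A = np.eye(40) + 0.3 * R / np.linalg.norm(R, 2)
b = rng.standard_normal(40)
x = diis_solve(A, b)
```

The equivalence result in the abstract amounts to showing that each LIST variant is this same construction with a particular choice of error vector e_i.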
Energy Spectrum of Nonthermal Electrons Accelerated at a Plane Shock
NASA Astrophysics Data System (ADS)
Kang, Hyesung
2011-04-01
We calculate the energy spectra of cosmic ray (CR) protons and electrons at a plane shock with quasi-parallel magnetic fields, using time-dependent, diffusive shock acceleration (DSA) simulations, including energy losses via synchrotron emission and Inverse Compton (IC) scattering. A thermal leakage injection model and a Bohm type diffusion coefficient are adopted. The electron spectrum at the shock becomes steady after the DSA energy gains balance the synchrotron/IC losses, and it cuts off at the equilibrium momentum p_{eq}. In the postshock region the cutoff momentum of the electron spectrum decreases with the distance from the shock due to the energy losses and the thickness of the spatial distribution of electrons scales as p^{-1}. Thus the slope of the downstream integrated spectrum steepens by one power of p for p_{br}
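The equilibrium momentum described above can be written as a timescale balance. A schematic sketch with order-unity coefficients omitted (standard DSA scalings, not the paper's exact expressions):

```latex
% Bohm diffusion gives \kappa_B(p) \propto p/B, so the acceleration time grows with p,
% while the synchrotron/IC loss time shrinks with p:
t_{\rm acc}(p) \;\sim\; \frac{\kappa_B(p)}{u_s^{2}} \;\propto\; \frac{p}{B\,u_s^{2}},
\qquad
t_{\rm loss}(p) \;\sim\; \frac{p}{|\dot p|_{\rm syn/IC}} \;\propto\; \frac{1}{p\,B_{\rm eff}^{2}},
\qquad B_{\rm eff}^{2} \equiv B^{2} + B_{\rm ph}^{2}.
% Setting the two equal defines the cutoff:
t_{\rm acc}(p_{\rm eq}) = t_{\rm loss}(p_{\rm eq})
\;\Longrightarrow\;
p_{\rm eq} \;\propto\; u_s\,\frac{\sqrt{B}}{B_{\rm eff}} .
```

Here u_s is the shock speed and B_ph accounts for IC losses off the photon field; when IC is negligible this reduces to the familiar p_eq ∝ u_s B^{-1/2}.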
DSA.
Nunez, Yolanda P; Carrascosa, Alfonso V; González, Ramón; Polo, María C; Martínez-Rodríguez, Adolfo J
2005-09-07
Five mutants (obtained by UV mutagenesis) and the parent strain were selected to produce sparkling wines following the traditional, or champenoise, method. The wines were aged with the yeast for 9 months, with samples taken each month for analytical and sensory determinations. The wines produced with mutant strain IFI473I showed an accelerated release of proteins, amino acids, and polysaccharides. An analysis of the secreted polysaccharides revealed that mannose was the major sugar present. The effects of the products released by the yeasts on the foaming properties of the wines were determined by both sensory and instrumental analysis. In all cases, the wines produced with mutant strain IFI473I showed improved foaming properties compared to wines fermented without this strain. Similar results were obtained at a reduced aging time of 6 months, confirming the capacity of the IFI473I strain to carry out accelerated autolysis. These results demonstrate that mutant strain IFI473I can significantly reduce the production time of high-quality sparkling wines.
Particle Acceleration at Relativistic Shocks in Extragalactic Systems
NASA Astrophysics Data System (ADS)
Baring, Matthew G.; Summerlin, Errol J.
2009-11-01
Diffusive shock acceleration (DSA) at relativistic shocks is expected to be an important acceleration mechanism in a variety of astrophysical objects including extragalactic jets in active galactic nuclei and gamma ray bursts. These sources remain strong and interesting candidate sites for the generation of ultra-high energy cosmic rays. In this paper, key predictions of DSA at relativistic shocks that are salient to the issue of cosmic ray ion and electron production are outlined. Results from a Monte Carlo simulation of such diffusive acceleration in test-particle, relativistic, oblique, MHD shocks are presented. Simulation output is described for both large angle and small angle scattering scenarios, and a variety of shock obliquities including superluminal regimes when the de Hoffman-Teller frame does not exist. The distribution function power-law indices compare favorably with results from other techniques. They are found to depend sensitively on the mean magnetic field orientation in the shock, and the nature of MHD turbulence that propagates along fields in shock environs. An interesting regime of flat spectrum generation is addressed, providing evidence for its origin being due to shock drift acceleration. The impact of these theoretical results on gamma-ray burst and blazar science is outlined. Specifically, Fermi gamma-ray observations of these cosmic sources are already providing significant constraints on important environmental quantities for relativistic shocks, namely the frequency of scattering and the level of field turbulence.
NASA Astrophysics Data System (ADS)
Fasolka, Michael J.
2005-03-01
Increasingly, new materials are highly tailored towards specific applications, are formulated from many components, and exhibit behavior governed by a multitude of physical, chemical and processing factors. Accordingly, the discovery and optimization of materials are met by considerable challenges inherent to the understanding of large, complex parameter spaces. In this respect, combinatorial and high-throughput (C&HT) approaches are advantageous, since they present the ability to rapidly assess materials properties over large parameter ranges. The NIST Combinatorial Methods Center (NCMC, see www.nist.gov/combi) specializes in the development of quantitative C&HT measurement methods for materials research. In large part, the NCMC concentrates on continuous gradient (CG) combinatorial methods, which involve the fabrication and HT measurement of systems that gradually vary parameters over a single specimen, and which offer an alternative to the (often costly) robotics-driven C&HT paradigm used by the pharmaceutical industry. CG techniques are particularly suited for materials science since they naturally produce thorough maps (e.g. continuous phase diagrams) that relate materials properties to chemical, compositional, physical and processing parameters. This presentation focuses on NCMC research applied to the advancement of polymer-based nanotechnology. Topics to be discussed include CG techniques for the design and optimization of self-assembled systems, ultra-thin films, and intelligent surfaces; and HT methods for measuring thin film morphology and mechanical properties. In addition, the application of CG methods to the advancement of nanometrology, specifically scanned probe microscopy, will be discussed.
NASA Astrophysics Data System (ADS)
Spellings, Matthew; Marson, Ryan L.; Anderson, Joshua A.; Glotzer, Sharon C.
2017-04-01
Faceted shapes, such as polyhedra, are commonly found in systems of nanoscale, colloidal, and granular particles. Many interesting physical phenomena, like crystal nucleation and growth, vacancy motion, and glassy dynamics are challenging to model in these systems because they require detailed dynamical information at the individual particle level. Within the granular materials community the Discrete Element Method has been used extensively to model systems of anisotropic particles under gravity, with friction. We provide an implementation of this method intended for simulation of hard, faceted nanoparticles, with a conservative Weeks-Chandler-Andersen (WCA) interparticle potential, coupled to a thermodynamic ensemble. This method is a natural extension of classical molecular dynamics and enables rigorous thermodynamic calculations for faceted particles.
Accelerating the weighted histogram analysis method by direct inversion in the iterative subspace
Zhang, Cheng; Lai, Chun-Liang; Pettitt, B. Montgomery
2016-01-01
The weighted histogram analysis method (WHAM) for free energy calculations is a valuable tool for producing free energy differences with minimal errors. Given multiple simulations, WHAM obtains from the distribution overlaps the optimal statistical estimator of the density of states, from which the free energy differences can be computed. The WHAM equations are often solved by an iterative procedure. In this work, we use a well-known linear algebra algorithm which allows for more rapid convergence to the solution. We find that the computational complexity of the iterative solution to WHAM and the closely related multiple Bennett acceptance ratio (MBAR) method can be improved by using the method of direct inversion in the iterative subspace. We give examples from a lattice model, a simple liquid, and an aqueous protein solution. PMID:27453632
An Analytical Method to Calculate Phantom Scatter Factor for Photon Beam Accelerators
Birgani, Mohammad Javad Tahmasebi; Chegeni, Nahid; Behrooz, Mohammad Ali; Bagheri, Marziyeh; Danyaei, Amir; Shamsi, Azin
2017-01-01
Introduction: One of the important input factors in the commissioning of radiotherapy treatment planning systems is the phantom scatter factor (Sp), which requires the same collimator opening for all radiation fields. In this study, we propose an analytical method to overcome this issue. Methods: The measurements were performed using a Siemens Primus Plus with 6 MV photon energy for field sizes from 5×5 cm2 to 40×40 cm2. The phantom scatter factor was obtained by dividing the total scatter output factor (Scp) by the collimator scatter factor (Sc). Results: The mean percent difference between the measured and calculated Sp was 1.00% and -3.11% for the 5×5 and 40×40 cm2 field sizes, respectively. Conclusion: This method is applicable especially for the small fields used in IMRT, where measuring the collimator scatter factor is not reliable due to lateral electron disequilibrium. PMID:28243402
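The central relation is simply Sp = Scp / Sc evaluated at the same collimator opening. The sketch below applies it to hypothetical output factors normalized to the 10×10 cm2 reference field; the numbers are placeholders, not the paper's measurements.

```python
# Hypothetical output factors, normalized to the 10x10 cm^2 reference field,
# keyed by square-field side length in cm (illustrative values only).
scp = {5: 0.936, 10: 1.000, 20: 1.048, 40: 1.080}  # total scatter output factor S_cp
sc = {5: 0.962, 10: 1.000, 20: 1.022, 40: 1.038}   # collimator scatter factor S_c

# Phantom scatter factor S_p = S_cp / S_c at matched collimator opening
sp = {side: scp[side] / sc[side] for side in scp}
```

By construction Sp equals 1.0 at the reference field and grows with field size, since phantom scatter increases faster than head scatter.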
Multi-GPU accelerated three-dimensional FDTD method for electromagnetic simulation.
Nagaoka, Tomoaki; Watanabe, Soichi
2011-01-01
Numerical simulation with a numerical human model using the finite-difference time-domain (FDTD) method has recently been performed in a number of fields in biomedical engineering. To improve the method's calculation speed and realize large-scale computing with the numerical human model, we adapted three-dimensional FDTD code to a multi-GPU environment using the Compute Unified Device Architecture (CUDA). In this study, we used NVIDIA Tesla C2070 boards as GPGPUs. The performance of multiple GPUs is evaluated in comparison with that of a single GPU and a vector supercomputer. The calculation speed with four GPUs was approximately 3.5 times faster than with a single GPU, and was slightly (approximately 1.3 times) slower than with the supercomputer. The calculation speed of the three-dimensional FDTD method on GPUs improves significantly as the number of GPUs increases.
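For reference, the leapfrog update at the heart of any FDTD code, and the kernel that multi-GPU implementations partition across devices, looks like this in one dimension. Normalized units, a unit Courant number, and the Gaussian source are assumptions of this toy sketch, not details from the paper.

```python
import numpy as np

# Minimal 1-D FDTD (Yee) leapfrog sketch: E and H live on staggered grids
# and are updated in alternation from each other's spatial differences.
nz, nt = 200, 300
ez = np.zeros(nz)        # electric field at integer grid points
hy = np.zeros(nz - 1)    # magnetic field, staggered half a cell
for n in range(nt):
    hy += np.diff(ez)                                 # update H from curl of E
    ez[1:-1] += np.diff(hy)                           # update E from curl of H
    ez[nz // 2] += np.exp(-((n - 30.0) / 10.0) ** 2)  # soft Gaussian source
```

In 3-D the same pattern applies to six field components; GPU decompositions split the grid into slabs and exchange the boundary planes between devices each step.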
Application of the Multipoint Method to the Kinetics of Accelerator-Driven Systems
Ravetto, P.; Rostagno, M.M.; Bianchini, G.; Carta, M.; D'Angelo, A.
2004-09-15
The mathematical foundations of the multipoint method are illustrated and the method is developed for the neutron kinetics of multiplying systems to treat physical situations in which spatial and spectral effects can play an important role in transient conditions, and hence the classical point-kinetic model can become inadequate. In the present paper the method is specifically developed for source-driven systems, through a proper adaptation of the factorization-projection technique used to derive other classic kinetic models. The results presented for some test cases show the advantages that can be attained with respect to the standard point model, even when treating relevant spatial and spectral transients. It is then shown how the technique can be inserted into a quasi-static framework.
White III, James B; Archibald, Richard K; Evans, Katherine J; Drake, John
2011-01-01
In this paper we present a new approach to increase the time-step size for an explicit discontinuous Galerkin numerical method. The attributes of this approach are demonstrated on standard tests for the shallow-water equations on the sphere. The addition of multiwavelets to the discontinuous Galerkin method, which has the benefit of being scalable, flexible, and conservative, provides a hierarchical scale structure that can be exploited to improve computational efficiency in both the spatial and temporal dimensions. This paper explains how combining a multiwavelet discontinuous Galerkin method with exact linear part time-evolution schemes, which can remain stable for implicit-sized time steps, can help increase the time-step size for the shallow-water equations on the sphere.
NASA Astrophysics Data System (ADS)
Mohanty, Nihar; Ko, Akiteru; Cole, Christopher; Rastogi, Vinayak; Kumar, Kaushik; Schmid, Gerard; Farrell, Richard; Ryan, Todd; Hosler, Erik; Xu, Ji; Preil, Moshe
2014-03-01
In this paper, we demonstrate the unique advantage of dual-frequency mid-gap capacitively coupled plasma (m-CCP) in advanced-node patterning processes with regard to etch rate / depth uniformity and critical dimension (CD) control, in conjunction with a wider process window for aspect-ratio-dependent and microloading effects. Unlike non-planar plasma sources, the simple design of mid-gap CCPs enables both metal and non-metal hard-mask based patterning, which provides essential flexibility for conventional and DSA patterning. We present data on both conventional multi-patterning and DSA patterning for trenches / fins and holes. Rigorous CD control and CDU are shown to be crucial for multi-patterning, as deviations lead to undesirable odd-even delta and pitch walking. For DSA patterning, co-optimized Ne / Vdc of the dual-frequency CCP is demonstrated to be advantageous for higher organic-to-organic selectivity during co-polymer etching.
NASA Astrophysics Data System (ADS)
Singh, Arjun; Chan, Boon Teik; Parnell, Doni; Wu, Hengpeng; Yin, Jian; Cao, Yi; Gronheid, Roel
2015-03-01
The patterning potential of block copolymer (BCP) materials via various directed self-assembly (DSA) schemes has been demonstrated for over a decade. We have previously reported the HONEYCOMB flow; a process flow where we utilize Extreme Ultraviolet Lithography and Oxygen plasma to guide the assembly of cylindrical phase BCPs into regular hexagonal arrays of contact holes [1, 2]. In this work we report the development of a new process flow, the CHIPS flow, where we use ArFi lithography to print guiding patterns for the chemo-epitaxial DSA of BCPs. Using this process flow we demonstrate BCP assembly into hexagonal arrays with sub-25 nm half-pitch and discuss critical steps of the process flow. Additionally, we discuss the influence of under-layer surface energy on the DSA process window and report contact hole metrology results.
Investigation of Accelerated Methods for the Determination of Available Alkali in Pozzolans
1974-02-01
(four natural and two fly ash) were selected and identified as follows: a. Volcanic cinders (Vol C); b. Calcined Keasey Shale (CK Sh); c. Calcined ... Diatomaceous Shale (CD Sh); d. Calcined Tuff (CT); e. Fly Ash (FA I); f. Fly Ash (FA II). Each pozzolan was blended in a one-quart blender for four ... the acceptance testing of fly ash pozzolans. Method 3 is also recommended as an optional method for testing natural pozzolans which have been shown
Properties of the Feynman-alpha method applied to accelerator-driven subcritical systems.
Taczanowski, S; Domanska, G; Kopec, M; Janczyszyn, J
2005-01-01
A Monte Carlo study of the Feynman-alpha method, using a simple code simulating the multiplication chain and confined to the pertinent time-dependent phenomena, has been done. The significance of its key parameters (detector efficiency and dead time, k-source and spallation neutron multiplicities, required number of fissions, etc.) has been discussed. It has been demonstrated that this method can be insensitive to the properties of the zones surrounding the core, whereas it is strongly affected by the detector dead time. In turn, the influence of harmonics in the neutron field and of the dispersion of spallation neutrons has proven much less pronounced.
Accelerating the Use of Weblogs as an Alternative Method to Deliver Case-Based Learning
ERIC Educational Resources Information Center
Chen, Charlie; Wu, Jiinpo; Yang, Samuel C.
2008-01-01
Weblog technology is an alternative medium to deliver the case-based method of learning business concepts. The social nature of this technology can potentially promote active learning and enhance analytical ability of students. The present research investigates the primary factors contributing to the adoption of Weblog technology by students to…
Kuwahara, Hiroyuki; Myers, Chris J
2008-09-01
Given the substantial computational requirements of stochastic simulation, approximation is essential for efficient analysis of any realistic biochemical system. This paper introduces a new approximation method to reduce the computational cost of stochastic simulations of an enzymatic reaction scheme, which in biochemical systems often includes rapidly changing fast reactions with enzyme and enzyme-substrate complex molecules present in very small counts. Our new method removes the substrate dissociation reaction by approximating the passage time of the formation of each enzyme-substrate complex molecule that is destined for a production reaction. This approach skips the firings of unimportant yet expensive reaction events, resulting in a substantial acceleration of the stochastic simulations of enzymatic reactions. Additionally, since all the parameters used in our new approach can be derived from the Michaelis-Menten parameters, which can actually be measured from experimental data, applications of this approximation can be practical even without full knowledge of the underlying enzymatic reaction. Here, we apply this new method to various enzymatic reaction systems, resulting in a speedup of orders of magnitude in temporal behavior analysis without any significant loss in accuracy. Furthermore, we show that our new method can outperform some of the best existing approximation methods for enzymatic reactions in terms of accuracy and efficiency.
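As context for what is being accelerated: the exact stochastic simulation of the enzymatic scheme E + S ⇌ ES → E + P via Gillespie's algorithm fires every binding, dissociation, and catalysis event individually, which is what makes fast reactions expensive. The sketch below is a generic Gillespie SSA; the rate constants and molecule counts are illustrative assumptions, not values from the paper.

```python
import math
import random

def ssa(k1=1e-3, k2=1e-2, kcat=0.1, e=10, s=300, seed=1):
    """Exact SSA for E + S <-> ES -> E + P; returns (final time, product count)."""
    rng = random.Random(seed)
    es = p = 0
    t = 0.0
    while s + es > 0:                     # run until all substrate is consumed
        a = [k1 * e * s, k2 * es, kcat * es]   # reaction propensities
        a0 = sum(a)
        t += -math.log(rng.random()) / a0      # exponential waiting time
        r = rng.random() * a0                  # pick which reaction fires
        if r < a[0]:            # binding: E + S -> ES
            e, s, es = e - 1, s - 1, es + 1
        elif r < a[0] + a[1]:   # dissociation: ES -> E + S
            e, es, s = e + 1, es - 1, s + 1
        else:                   # catalysis: ES -> E + P
            e, es, p = e + 1, es - 1, p + 1
    return t, p
```

Every dissociation event here is a "wasted" firing that the paper's approximation skips by sampling the passage time to a productive complex directly.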
2008-03-01
9540P Cyclic Accelerated Corrosion Analysis of Nonchromate Conversion Coatings on Aluminum Alloys 2024, 2219, 5083, and 7075 Using DoD Paint Systems ... Performance Assessment in Accelerated Corrosion and Adhesion of CARC Prepared Aluminum Alloy 5059-H131 for Three Different Pretreatment Methods. Brian E. ... whether or not the alloy differences will warrant modifications to current pretreatment processes. Keywords: Corrosion, Aluminum, 5059-H131, Cyclic
2009-09-01
General Corrosion Resistance Comparisons of Medium- and High-Strength Aluminum Alloys for DoD Systems Using Laboratory-Based Accelerated Corrosion Methods. Brian E. Placzankis, Weapons and Materials Research Directorate. March 2006–October 2008.
NASA Astrophysics Data System (ADS)
Beaumont, Stéphane; Torfeh, Tarraf; Latreille, Romain; Ben Hdech, Yassine; Guedon, Jeanpierre
2011-03-01
The precision of a medical LINear ACcelerator (LINAC) gantry rotation angle is crucial for the radiation therapy process, especially in stereotactic radiosurgery, given the expected precision of the treatment, and in Image Guided Radiation Therapy (IGRT), where the mechanical stability is disturbed by the additional weight of the kV x-ray tube and detector. We present in this paper an extension of the Winston and Lutz test, initially dedicated to controlling the size and position of the LINAC isocenter, here adapted to test the gantry rotation angle with no additional portal images. This new method uses a test object patented by QualiFormeD and is integrated into the QUALIMAGIQ software platform, developed to automatically analyze images acquired for quality control of medical devices.
NASA Astrophysics Data System (ADS)
Rybicki, G. B.; Hummer, D. G.
1994-10-01
Since the mass of the electron is very small relative to atomic masses, Thomson scattering of low-energy photons (hν<
Rider, William; Kamm, J. R.; Tomkins, C. D.; Zoldi, C. A.; Prestridge, K. P.; Marr-Lyon, M.; Rightley, P. M.; Benjamin, R. F.
2002-01-01
We consider the detailed structures of mixing flows for Richtmyer-Meshkov experiments of Prestridge et al. [PRE 00] and Tomkins et al. [TOM 01] and examine the most recent measurements from the experimental apparatus. Numerical simulations of these experiments are performed with three different versions of high-resolution finite volume Godunov methods. We compare experimental data with simulations for configurations of one and two diffuse cylinders of SF6 in air using integral measures as well as fractal analysis and continuous wavelet transforms. The details of the initial conditions have a significant effect on the computed results, especially in the case of the double cylinder. Additionally, these comparisons reveal sensitive dependence of the computed solution on the numerical method.
NASA Astrophysics Data System (ADS)
Guda, A. A.; Guda, S. A.; Soldatov, M. A.; Lomachenko, K. A.; Bugaev, A. L.; Lamberti, C.; Gawelda, W.; Bressler, C.; Smolentsev, G.; Soldatov, A. V.; Joly, Y.
2016-05-01
The finite difference method (FDM) implemented in the FDMNES software [Phys. Rev. B, 2001, 63, 125120] was revised. Thorough analysis shows that the calculated FDM matrix consists of about 96% zero elements. Thus a sparse solver is more suitable for the problem than traditional Gaussian elimination for the diagonal neighbourhood. We tried several iterative sparse solvers, and the direct MUMPS solver with METIS ordering turned out to be the best. Compared to the Gaussian solver, the present method is up to 40 times faster and allows XANES simulations for complex systems already on personal computers. We show the applicability of the software for the metal-organic [Fe(bpy)3]2+ complex in both the low-spin and high-spin states populated after laser excitation.
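The payoff of respecting sparsity can be seen even in the simplest structured case: a tridiagonal system solved in O(n) by the Thomas algorithm instead of O(n^3) dense Gaussian elimination. This is a generic illustration, not FDMNES code; direct sparse solvers such as MUMPS generalize the same idea to arbitrary sparsity patterns.

```python
import numpy as np

def thomas(sub, diag, sup, rhs):
    """O(n) solve of a tridiagonal system: sub/sup are the off-diagonals
    (length n-1), diag the main diagonal (length n), rhs the right-hand side."""
    n = len(diag)
    cp = np.zeros(n - 1)     # modified super-diagonal
    dp = np.zeros(n)         # modified right-hand side
    cp[0] = sup[0] / diag[0]
    dp[0] = rhs[0] / diag[0]
    for i in range(1, n):    # forward elimination touching only 3 entries/row
        m = diag[i] - sub[i - 1] * cp[i - 1]
        if i < n - 1:
            cp[i] = sup[i] / m
        dp[i] = (rhs[i] - sub[i - 1] * dp[i - 1]) / m
    x = np.zeros(n)          # back substitution
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

A dense LU factorization of the same system would fill in and scale cubically; exploiting the known zero pattern is what the sparse-solver switch in the abstract buys at scale.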
Wehrle, Marius; Sulc, Miroslav; Vanícek, Jirí
2011-01-01
We explore three specific approaches for speeding up the calculation of quantum time correlation functions needed for time-resolved electronic spectra. The first relies on finding a minimum set of sufficiently accurate electronic surfaces. The second increases the time step required for convergence of exact quantum simulations by using different split-step algorithms to solve the time-dependent Schrödinger equation. The third approach lowers the number of trajectories needed for convergence of approximate semiclassical dynamics methods.
Crespo, Alejandro C; Dominguez, Jose M; Barreiro, Anxo; Gómez-Gesteira, Moncho; Rogers, Benedict D
2011-01-01
Smoothed Particle Hydrodynamics (SPH) is a numerical method commonly used in Computational Fluid Dynamics (CFD) to simulate complex free-surface flows. Simulations with this mesh-free particle method far exceed the capacity of a single processor. In this paper, as part of a dual-functioning code for either central processing units (CPUs) or Graphics Processor Units (GPUs), a parallelisation using GPUs is presented. The GPU parallelisation technique uses the Compute Unified Device Architecture (CUDA) of nVidia devices. Simulations with more than one million particles on a single GPU card exhibit speedups of up to two orders of magnitude over using a single-core CPU. It is demonstrated that the code achieves different speedups with different CUDA-enabled GPUs. The numerical behaviour of the SPH code is validated with a standard benchmark test case of dam break flow impacting on an obstacle where good agreement with the experimental results is observed. Both the achieved speed-ups and the quantitative agreement with experiments suggest that CUDA-based GPU programming can be used in SPH methods with efficiency and reliability.
Cicciarelli, James C.; Chang, Youngil; Koss, Michael; Hacke, Katrin; Kasahara, Noriyuki; Burns, Kevin M.; Min, David I.; Naraghi, Robert; Shah, Tariq
2017-01-01
The association between donor specific antibodies (DSA) and renal transplant rejection has been generally established, but there are cases when a DSA is present without rejection. We examined 73 renal transplant recipients biopsied for transplant dysfunction with DSA test results available: 23 patients diffusely positive for C4d (C4d+), 25 patients focally positive for C4d, and 25 patients negative for C4d (C4d−). We performed C1q and IgG subclass testing in our DSA+ and C4d+ patient group. Graft outcomes were determined for the C4d+ group. All 23 C4d+ patients had IgG DSA with an average of 12,500 MFI (cumulative DSA MFI). The C4d− patients had average DSA less than 500 MFI. Among the patients with C4d+ biopsies, 100% had IgG DSA, 70% had C1q+ DSA, and 83% had complement fixing IgG subclass antibodies. Interestingly, IgG4 was seen in 10 of the 23 recipients' sera, but always along with complement fixing IgG1, and we have previously seen excellent function in patients when IgG4 DSA exists alone. Cumulative DSA above 10,000 MFI were associated with C4d deposition and complement fixation. There was no significant correlation between graft loss and C1q positivity, and IgG subclass analysis seemed to be a better correlate for complement fixing antibodies in the C4d+ patient group. PMID:28182088
Two common laboratory extraction techniques were evaluated for routine use with the micro-colorimetric lipid determination method developed by Van Handel (1985) [E. Van Handel, J. Am. Mosq. Control Assoc. 1(1985) 302] and recently validated for small samples by Inouye and Lotufo ...
Contaldi, Carlo R.
2014-10-01
The recent Bicep2 [1] detection of what is claimed to be primordial B-modes opens up the possibility of constraining not only the energy scale of inflation but also the detailed acceleration history that occurred during inflation. In turn this can be used to determine the shape of the inflaton potential V(φ) for the first time, if a single scalar inflaton is assumed to be driving the acceleration. We carry out a Monte Carlo exploration of inflationary trajectories given the current data. Using this method we obtain a posterior distribution of possible acceleration profiles ε(N) as a function of e-fold N and derived posterior distributions of the primordial power spectrum P(k) and potential V(φ). We find that the Bicep2 result, in combination with Planck measurements of total intensity Cosmic Microwave Background (CMB) anisotropies, induces a significant feature in the scalar primordial spectrum at scales k ∼ 10^{-3} Mpc^{-1}. This is in agreement with a previous detection of a suppression in the scalar power [2].
NASA Astrophysics Data System (ADS)
Różewski, Przemysław
Nowadays, e-learning systems take the form of the Distance Learning Network (DLN) due to the widespread use and accessibility of the Internet and networked e-learning services. The focal point of DLN performance is the efficiency of knowledge processing in asynchronous learning mode and the facilitation of cooperation between students. In addition, the DLN devotes attention to social aspects of the learning process as well. In this paper, a method for DLN development is proposed. The main research objectives of the proposed method are the acceleration of social collaboration and knowledge sharing in the DLN. The method introduces knowledge-disposed agents (who represent students in educational scenarios) that form a network of individuals aiming to increase their competence. For every agent a competence expansion process is formulated. Based on that outcome, the process of dynamic network formation is performed on the social and knowledge levels. The method utilizes the formal apparatuses of competence set and network game theories combined with an agent-system-based approach.
NASA Astrophysics Data System (ADS)
Zhu, Dianwen; Li, Changqing
2016-01-01
Fluorescence molecular tomography (FMT) is a significant preclinical imaging modality that has been actively studied in the past two decades. It remains a challenging task to obtain fast and accurate reconstruction of fluorescent probe distribution in small animals due to the large computational burden and the ill-posed nature of the inverse problem. We have recently studied a nonuniform multiplicative updating algorithm that combines with the ordered subsets (OS) method for fast convergence. However, increasing the number of OS leads to greater approximation errors and the speed gain from larger number of OS is limited. We propose to further enhance the convergence speed by incorporating a first-order momentum method that uses previous iterations to achieve optimal convergence rate. Using numerical simulations and a cubic phantom experiment, we have systematically compared the effects of the momentum technique, the OS method, and the nonuniform updating scheme in accelerating the FMT reconstruction. We found that the proposed combined method can produce a high-quality image using an order of magnitude less time.
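The momentum idea, reusing previous iterations to speed convergence, can be illustrated on a toy least-squares stand-in for the reconstruction problem. All names, sizes, and rates below are illustrative assumptions; this is a heavy-ball-style sketch, not the authors' nonuniform multiplicative update.

```python
import numpy as np

# Toy inverse problem: recover x from b = A x (A random, stands in for the
# forward model; the FMT problem itself is far larger and ill-posed).
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
b = A @ rng.standard_normal(20)

def solve(momentum, iters=150):
    """Gradient descent on ||A x - b||^2 with an optional momentum term."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = Lipschitz const. of gradient
    x = np.zeros(20)
    v = np.zeros(20)
    for _ in range(iters):
        grad = A.T @ (A @ x - b)
        v = momentum * v - step * grad       # momentum accumulates past steps
        x = x + v
    return np.linalg.norm(A @ x - b)         # final residual
```

For the same iteration budget, the run with momentum reaches a markedly smaller residual than plain gradient descent (momentum=0), which is the effect the abstract exploits on top of the ordered-subsets speedup.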
Application of accelerated acquisition and highly constrained reconstruction methods to MR
NASA Astrophysics Data System (ADS)
Wang, Kang
2011-12-01
There are many Magnetic Resonance Imaging (MRI) applications that require rapid data acquisition. In conventional proton MRI, representative applications include real-time dynamic imaging, whole-chest pulmonary perfusion imaging, high resolution coronary imaging, MR T1 or T2 mapping, etc. The requirement for fast acquisition and novel reconstruction methods is either due to clinical demand for high temporal resolution, high spatial resolution, or both. Another important category in which fast MRI methods are highly desirable is imaging with hyperpolarized (HP) contrast media, such as HP 3He imaging for evaluation of pulmonary function, and imaging of HP 13C-labeled substrates for the study of in vivo metabolic processes. To address these needs, numerous MR undersampling methods have been developed and combined with novel image reconstruction techniques. This thesis aims to develop novel data acquisition and image reconstruction techniques for the following applications. (1) Ultrashort echo time spectroscopic imaging (UTESI). The need for acquiring many echo images in spectroscopic imaging with high spatial resolution usually results in extended scan times, and thus requires k-space undersampling and novel imaging reconstruction methods to overcome the artifacts related to the undersampling. (2) Dynamic hyperpolarized 13C spectroscopic imaging. HP 13C compounds exhibit non-equilibrium T1 decay and rapidly evolving spectral dynamics, and therefore it is vital to utilize the polarized signal wisely and efficiently to observe the entire temporal dynamics of the injected 13C compounds as well as the corresponding downstream metabolites. (3) Time-resolved contrast-enhanced MR angiography. The diagnosis of vascular diseases often requires large coverage of human body anatomies with high spatial resolution and sufficient temporal resolution for the separation of arterial phases from venous phases. The goal of simultaneously achieving high spatial and temporal resolution has
Nicholson, Kelly M; Chandrasekhar, Nita; Sholl, David S
2014-11-18
CONSPECTUS: Not only is hydrogen critical for current chemical and refining processes, it is also projected to be an important energy carrier for future green energy systems such as fuel cell vehicles. Scientists have examined light metal hydrides for this purpose, which need to have both good thermodynamic properties and fast charging/discharging kinetics. The properties of hydrogen in metals are also important in the development of membranes for hydrogen purification. In this Account, we highlight our recent work aimed at the large scale screening of metal-based systems with either favorable hydrogen capacities and thermodynamics for hydrogen storage in metal hydrides for use in onboard fuel cell vehicles or promising hydrogen permeabilities relative to pure Pd for hydrogen separation from high temperature mixed gas streams using dense metal membranes. Previously, chemists have found that the metal hydrides need to hit a stability sweet spot: if the compound is too stable, it will not release enough hydrogen under low temperatures; if the compound is too unstable, the reaction may not be reversible under practical conditions. Fortunately, we can use DFT-based methods to assess this stability via prediction of thermodynamic properties, equilibrium reaction pathways, and phase diagrams for candidate metal hydride systems with reasonable accuracy using only proposed crystal structures and compositions as inputs. We have efficiently screened millions of mixtures of pure metals, metal hydrides, and alloys to identify promising reaction schemes via the grand canonical linear programming method. Pure Pd and Pd-based membranes have ideal hydrogen selectivities over other gases but suffer shortcomings such as sensitivity to sulfur poisoning and hydrogen embrittlement. Using a combination of detailed DFT, Monte Carlo techniques, and simplified models, we are able to accurately predict hydrogen permeabilities of metal membranes and screen large libraries of candidate alloys
NASA Astrophysics Data System (ADS)
Deutsch, Joseph; Ma, Kaizung; Rapoport, Stanley I.
2006-03-01
A fast and efficient chemical ionization gas chromatography-mass spectrometry (CI-GC-MS) method for measuring myo-inositol in phosphatidylinositol (PtdIns) in rat brain has been developed. Previously, quantitation of PtdIns involved the release of myo-inositol by two enzymatic reactions using phospholipase C and alkaline phosphatase. The hydrolytic action of these enzymes was replaced by using commercially available 48% hydrofluoric acid (HF) at 80 °C for 30 min. The process can be carried out on the crude Folch extract of brain phospholipids without prior thin-layer chromatography (TLC) purification, thereby significantly increasing the speed of analysis. For quantification, unlabeled myo-inositol and labeled myo- and neo-inositol (internal standard) were converted to acetate derivatives and analyzed by CI-GC-MS.
GPU-accelerated Direct Sampling method for multiple-point statistical simulation
NASA Astrophysics Data System (ADS)
Huang, Tao; Li, Xue; Zhang, Ting; Lu, De-Tang
2013-08-01
Geostatistical simulation techniques have become a widely used tool for the modeling of oil and gas reservoirs and the assessment of uncertainty. The Direct Sampling (DS) algorithm is a recent multiple-point statistical simulation technique. It directly samples the training image (TI) during the simulation process by calculating distances between the TI patterns and the data events found in the simulation grid (SG). By omitting the prior storage of all the TI patterns in a database, the DS algorithm can be used to simulate categorical, continuous and multivariate variables. Three fundamental input parameters are required to define a DS application: the number of neighbors n, the acceptance threshold t and the fraction of the TI to scan f. For very large grids and complex spatial models with more severe parameter restrictions, the computational cost in terms of simulation time often becomes the bottleneck of practical applications. This paper focuses on an innovative implementation of the Direct Sampling method which exploits the benefits of graphics processing units (GPUs) to improve computational performance. Parallel schemes are applied to deal with two of the DS input parameters, n and f. Performance tests are carried out on large 3D grids and the results are compared with those obtained from simulations on central processing units (CPUs). The comparison indicates that the use of GPUs reduces the computation time by a factor of 10 to 100, depending on the input parameters. Moreover, the concept of the search ellipsoid can be conveniently combined with the flexible data template of the DS method, and our experimental results on sand channel reconstruction show that it improves the reproduction of long-range connectivity patterns.
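The sampling loop the abstract describes (random simulation path, data events of n informed neighbours, scanning a fraction f of the TI, accepting the first pattern whose distance falls below t) can be sketched in a minimal serial form. The function below is a hypothetical toy implementation for a 2D categorical variable, not the authors' GPU code; all names and defaults are illustrative assumptions:

```python
import numpy as np

def direct_sampling(ti, sg, n=4, t=0.1, f=0.5, seed=0):
    """Toy 2D Direct Sampling sketch for a categorical variable.

    ti : 2D training image.
    sg : 2D simulation grid; np.nan marks nodes to simulate.
    n  : max number of informed neighbours in the data event.
    t  : acceptance threshold on the pattern distance.
    f  : fraction of the TI to scan before keeping the best match.
    """
    rng = np.random.default_rng(seed)
    sg = sg.astype(float).copy()
    unknown = [idx for idx in np.ndindex(sg.shape) if np.isnan(sg[idx])]
    path = [unknown[k] for k in rng.permutation(len(unknown))]  # random path
    ti_nodes = list(np.ndindex(ti.shape))
    max_scan = max(1, int(f * len(ti_nodes)))
    for (i, j) in path:
        # Data event: lag vectors and values of the n closest informed nodes.
        informed = [(a - i, b - j, sg[a, b])
                    for (a, b) in np.ndindex(sg.shape) if not np.isnan(sg[a, b])]
        informed.sort(key=lambda e: e[0] ** 2 + e[1] ** 2)
        event = informed[:n]
        best_val = ti[tuple(rng.integers(s) for s in ti.shape)]
        best_dist = np.inf
        # Scan a random fraction f of the TI; accept the first pattern
        # whose mismatch fraction falls below the threshold t.
        for k in rng.permutation(len(ti_nodes))[:max_scan]:
            x, y = ti_nodes[k]
            mism, ok = [], True
            for (dx, dy, val) in event:
                xx, yy = x + dx, y + dy
                if not (0 <= xx < ti.shape[0] and 0 <= yy < ti.shape[1]):
                    ok = False
                    break
                mism.append(ti[xx, yy] != val)
            if not ok:
                continue
            dist = float(np.mean(mism)) if mism else 0.0
            if dist < best_dist:
                best_val, best_dist = ti[x, y], dist
            if dist <= t:
                break
        sg[i, j] = best_val  # paste the sampled TI value
    return sg
```

The GPU gains reported in the paper come from parallelising exactly the two inner loops this sketch runs serially: the distance evaluation over the n neighbours and the scan over the f-fraction of TI nodes.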
Delgado-Torre, M Pilar; Ferreiro-Vera, Carlos; Priego-Capote, Feliciano; Pérez-Juan, Pedro M; Luque de Castro, María Dolores
2012-03-28
Most research on the extraction of high-priced compounds from vineyard/wine byproducts has traditionally been focused on grape seeds and skins as raw materials. Vine-shoots can represent an additional source to those materials, the characteristics of which could depend on the cultivar. A comparative study of hydroalcoholic extracts from 18 different vineyard cultivars obtained by superheated liquid extraction (SHLE), microwave-assisted extraction (MAE), and ultrasound-assisted extraction (USAE) is presented here. The optimal working conditions for each type of extraction have been investigated by using multivariate experimental designs to maximize the yield of total phenolic compounds, measured by the Folin-Ciocalteu method, and to control hydroxymethylfurfural, because of the organoleptic properties of furanic derivatives and their toxicity at given levels. The best values found for the influential variables on each extraction method were 80% (v/v) aqueous ethanol at pH 3, 180 °C, and 60 min for SHLE; 140 W and 5 min microwave irradiation for MAE; and 280 W, 50% duty cycle, and 7.5 min extraction for USAE. SHLE gave better extraction efficiencies than the other two approaches, supporting the utility of SHLE for scaling up the process. The extracts were dried in a rotary evaporator, reconstituted in 5 mL of methanol, and finally subjected to liquid-liquid extraction with n-hexane to remove nonpolar compounds that could complicate chromatographic separation. The methanolic fractions were analyzed by both LC-DAD and LC-TOF/MS, and the differences in composition according to the extraction conditions were studied. Compounds usually present in commercial wood extracts (mainly benzoic and hydroxycinnamic acids and aldehydes) were detected in vine-shoot extracts.
NASA Astrophysics Data System (ADS)
Ghasemi, F.; Abbasi Davani, F.
2015-06-01
Due to Iran's growing need for accelerators in various applications, IPM's electron linac project has been defined. This accelerator is a 15 MeV S-band traveling-wave accelerator being designed and constructed around the klystron that has been built in Iran. Based on the design, the operating mode is π/2 and the accelerating chamber consists of two 60-cm-long constant-impedance tubes and a 30-cm-long buncher. Among the available construction methods, the shrinking method was selected for construction of IPM's electron linac tube because the procedure is simple and no large vacuum or hydrogen furnaces are needed. In this paper, different aspects of this method are investigated. According to the calculations, the linear ratio of frequency shift to radius change is 787.8 MHz/cm, and the maximum deformation at the tube wall where the disks and the tube make contact is 2.7 μm. Applying the shrinking method to the construction of 8- and 24-cavity tubes results in satisfactory frequency and quality factor. The average deviations of the cavity frequencies of the 8- and 24-cavity tubes from the design values are 0.68 MHz and 1.8 MHz, respectively, before tuning, and 0.2 MHz and 0.4 MHz after tuning. The accelerating tubes, buncher, and high-power couplers of IPM's electron linac were constructed using the shrinking method.
A method for measurement of ultratrace 79Se with accelerator mass spectrometry
NASA Astrophysics Data System (ADS)
Wang, Wei; Guan, Yongjing; He, Ming; Jiang, Shan; Wu, Shaoyong; Li, Chaoli
2010-04-01
79Se is a long-lived fission product with chemical and radiological toxicity. It is one of the radionuclides of interest in nuclear waste disposal due to its potential to migrate to the surface environment. Furthermore, 79Se is an ideal tracer in biomedicine. One of the major obstacles in the measurement of ultratrace 79Se with AMS is the strong interference from the isobaric nuclide 79Br. This paper presents a new ultra-sensitive method for 79Se measurements with AMS. The novel aspects of our procedures include the extraction of SeO2- molecular ions, which results in a suppression of the 79Br background by as much as about five orders of magnitude; the selection of Ag2SeO3 as the chemical form of Se in the target sample, which provides a relatively large and stable SeO2- beam current; and the renovation of the multi-anode detector, which allows 79Se to be better identified against the interfering nuclide 79Br. By using these procedures, a sensitivity of better than 1.0 × 10^-12 has been achieved for 79Se/Se measurement with the CIAE-AMS system. It is then possible to quantify the tracer 79Se in biological samples. We are now preparing to develop the 79Se-AMS biological tracer methodology.
Ings, Robert M J
2009-10-01
The concept of specifically determining the clinical pharmacokinetics of a compound using a very low nonpharmacologically active dose (microdose) with an abridged safety and chemistry, manufacturing and control package is relatively new. It is not without its controversy and it is still a subject of discussion. Here, the rationale and application of this approach are examined, together with the regulatory and bioanalytical framework. There are two bioanalytical methods commonly used for human microdosing studies: LC-MS/MS and accelerator MS (AMS). Each method has advantages and disadvantages with the choice of instrumentation being closely tied to the primary objective(s) of the study. If a rapid decision is required on the appropriateness of a pharmacokinetic profile or if a choice is needed from a series of compounds, especially before radiolabeled material is available, LC-MS/MS may be preferable. However, if extreme sensitivity is required, data are required on all drug-related material and metabolites, or a simultaneous intravenous microdose is used to determine absolute bioavailability (sometimes referred to as microtracing), AMS becomes the analytical method of choice. Examples are provided of microdosing studies utilizing both of these bioanalytical techniques. It is emphasized that microdosing is only one tool in the drug developer's tool box and it should be used in the context of all available data. However, when used appropriately, microdosing is a valuable tool, bridging between lead optimization and early clinical development.
NASA Astrophysics Data System (ADS)
Ha, Sanghyun; You, Donghyun
2015-11-01
Utility of the computational power of Graphics Processing Units (GPUs) is elaborated for solutions of both incompressible and compressible Navier-Stokes equations. A semi-implicit ADI finite-volume method for integration of the incompressible and compressible Navier-Stokes equations, which are discretized on a structured arbitrary grid, is parallelized for GPU computations using CUDA (Compute Unified Device Architecture). In the semi-implicit ADI finite-volume method, the nonlinear convection terms and the linear diffusion terms are integrated in time using a combination of an explicit scheme and an ADI scheme. Inversion of multiple tri-diagonal matrices is found to be the major challenge in GPU computations of the present method. Some of the algorithms for solving tri-diagonal matrices on GPUs are evaluated and optimized for GPU-acceleration of the present semi-implicit ADI computations of incompressible and compressible Navier-Stokes equations. Supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT and Future Planning Grant NRF-2014R1A2A1A11049599.
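The bottleneck named in the abstract is the inversion of many independent tridiagonal systems, one per grid line in each ADI sweep. The CPU baseline for a single such system is the sequential Thomas algorithm, sketched below as a self-contained illustration (on GPUs it is typically replaced by cyclic reduction or parallel cyclic reduction, since the two sweeps here are inherently serial); this is not the authors' CUDA code:

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system in O(N) with the Thomas algorithm.

    a : sub-diagonal   (length N, a[0] unused)
    b : main diagonal  (length N)
    c : super-diagonal (length N, c[-1] unused)
    d : right-hand side (length N)
    Assumes the system is diagonally dominant (no pivoting is done).
    """
    n = len(d)
    cp = np.empty(n)
    dp = np.empty(n)
    # Forward elimination sweep.
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    # Back-substitution sweep.
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

In an ADI solver, one such solve runs per grid line per direction; the GPU strategy evaluated in the paper amounts to executing thousands of these solves concurrently.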
NASA Astrophysics Data System (ADS)
Smith, Cindy D.
Common methods for commissioning linear accelerators often neglect beam data for small fields. Examining the methods of beam data collection and modeling for commissioning linear accelerators revealed little to no discussion of the protocols for fields smaller than 4 cm x 4 cm. This leads to decreased confidence levels in the dose calculations and associated monitor units (MUs) for Intensity Modulated Radiation Therapy (IMRT). The parameters of commissioning the Novalis linear accelerator (linac) on the Eclipse Treatment Planning System (TPS) led to the study of the challenges of collecting data for very small fields. The focus of this thesis is the examination of the protocols for output factor collection and their impact on dose calculations by the TPS for IMRT treatment plans. Improving the output factor collection methods led to significant improvement in absolute dose calculations, which correlated with the complexity of the plans.
The Advanced Composition Explorer Shock Database and Application to Particle Acceleration Theory
NASA Technical Reports Server (NTRS)
Parker, L. Neergaard; Zank, G. P.
2015-01-01
The theory of particle acceleration via diffusive shock acceleration (DSA) has been studied in depth by Gosling et al. (1981), van Nes et al. (1984), Mason (2000), Desai et al. (2003), Zank et al. (2006), among many others. Recently, Parker and Zank (2012, 2014) and Parker et al. (2014) using the Advanced Composition Explorer (ACE) shock database at 1 AU explored two questions: does the upstream distribution alone have enough particles to account for the accelerated downstream distribution and can the slope of the downstream accelerated spectrum be explained using DSA? As was shown in this research, diffusive shock acceleration can account for a large population of the shocks. However, Parker and Zank (2012, 2014) and Parker et al. (2014) used a subset of the larger ACE database. Recently, work has successfully been completed that allows for the entire ACE database to be considered in a larger statistical analysis. We explain DSA as it applies to single and multiple shocks and the shock criteria used in this statistical analysis. We calculate the expected injection energy via diffusive shock acceleration given upstream parameters defined from the ACE Solar Wind Electron, Proton, and Alpha Monitor (SWEPAM) data to construct the theoretical upstream distribution. We show the comparison of shock strength derived from diffusive shock acceleration theory to observations in the 50 keV to 5 MeV range from an instrument on ACE. Parameters such as shock velocity, shock obliquity, particle number, and time between shocks are considered. This study is further divided into single and multiple shock categories, with an additional emphasis on forward-forward multiple shock pairs. Finally with regard to forward-forward shock pairs, results comparing injection energies of the first shock, second shock, and second shock with previous energetic population will be given.
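In the test-particle limit, the DSA prediction referred to above is fixed entirely by the shock compression ratio r = u1/u2: the downstream distribution is a power law f(p) ∝ p^(-q) with q = 3r/(r-1), equivalently a differential intensity j(E) ∝ E^(-γ) with γ = (q-2)/2 = (r+2)/(2(r-1)) for nonrelativistic particles. The helpers below are a hypothetical illustration of these standard relations, not code from the ACE analysis:

```python
def dsa_momentum_index(r):
    """Power-law index q of the downstream distribution f(p) ~ p**(-q)
    predicted by test-particle diffusive shock acceleration for a shock
    with compression ratio r = u1/u2."""
    if r <= 1.0:
        raise ValueError("compression ratio must exceed 1")
    return 3.0 * r / (r - 1.0)

def dsa_energy_index(r):
    """Differential-intensity index gamma, j(E) ~ E**(-gamma), for
    nonrelativistic particles: gamma = (q - 2) / 2 = (r + 2) / (2 (r - 1))."""
    return (dsa_momentum_index(r) - 2.0) / 2.0
```

A strong gas-dynamic shock (r = 4) gives q = 4 and γ = 1; comparing such predicted slopes with the measured 50 keV to 5 MeV spectra is the kind of shock-by-shock test the statistical analysis performs.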
Kesy, Lena; Kopczyński, Przemysław; Baszczuk, Aleksandra; Kopczyński, Zygmunt
2014-04-01
Platelet-rich plasma is being increasingly used in modern medicine as a material that stimulates regeneration and accelerates tissue healing. Platelet-rich plasma is an autologous platelet concentrate obtained from the peripheral blood of the patient. The method of extraction is based on the isolation of platelets during centrifugation of whole blood drawn into anticoagulant. Owing to the difference in density among the various cellular components of blood, such as red blood cells, the buffy coat and platelet-poor plasma, separation into individual fractions is possible. At present, no optimal method of preparing platelet-rich plasma has been established. A number of commercial collection systems are available on the market, differing from each other in centrifugation parameters, the type of container into which blood is collected and the anticoagulant used. Unfortunately, this can lead to platelet-rich plasma with a varying number of platelets and leukocytes and, as a result, different concentrations of growth factors. This is important because studies show that a positive clinical effect depends on the quality of the platelet-rich plasma used.
NASA Astrophysics Data System (ADS)
Jangi, Mehdi; Lucchini, Tommaso; Gong, Cheng; Bai, Xue-Song
2015-09-01
An Eulerian stochastic fields (ESF) method accelerated with the chemistry coordinate mapping (CCM) approach for modelling spray combustion is formulated and applied to model diesel combustion in a constant-volume vessel. In ESF-CCM, the thermodynamic states of the discretised stochastic fields are mapped into a low-dimensional phase space. Integration of the stiff chemical ODEs is performed in the phase space and the results are mapped back to the physical domain. After validating the ESF-CCM, the method is used to investigate the effects of fuel cetane number on the structure of diesel spray combustion. It is shown that, depending on the fuel cetane number, the liftoff length varies, which can lead to a change in combustion mode from classical diesel spray combustion to fuel-lean premixed combustion. Spray combustion with a shorter liftoff length exhibits the characteristics of the classical conceptual diesel combustion model proposed by Dec in 1997 (http://dx.doi.org/10.4271/970873), whereas in a case with a lower cetane number the liftoff length is much larger and the spray combustion probably occurs in a fuel-lean premixed mode of combustion. Nevertheless, the transport budget at the liftoff location shows that stabilisation at all cetane numbers is governed primarily by the auto-ignition process.
NASA Astrophysics Data System (ADS)
Colazo, M.
2016-08-01
Argentina has 10 percent of the operating time available on the DSA 3 antenna of the European Space Agency, installed in Malargüe, Mendoza. Here we present the history of the project and the current activities toward the scientific use of the antenna.
Kaminski, Artur; Grazka, Ewelina; Jastrzebska, Anna; Marowska, Joanna; Gut, Grzegorz; Wojciechowski, Artur; Uhrynowska-Tyszkiewicz, Izabela
2012-08-01
Accelerated electron beam (EB) irradiation has been used successfully for many years to sterilise human tissue grafts in a number of tissue banks. Accelerated EB, in contrast to the more often used gamma photons, is a form of ionizing radiation characterized by lower penetration; however, it is more effective in producing ionisation, so the exposure time needed to reach the same level of sterility is shorter. There are several factors, including the dose and temperature of irradiation, the processing conditions, as well as the source of irradiation, that may influence the mechanical properties of a bone graft. The purpose of this study was to evaluate the effect of e-beam irradiation with doses of 25 or 35 kGy, performed on dry ice or at ambient temperature, on the mechanical properties of non-defatted or defatted compact bone grafts. Left and right femurs from six male cadaveric donors, aged 46 to 54 years, were transversely cut into slices of 10 mm height, parallel to the longitudinal axis of the bone. Compact bone rings were assigned to eight experimental groups according to the processing method (defatted or non-defatted), the e-beam irradiation dose (25 or 35 kGy) and the temperature conditions of irradiation (ambient temperature or dry ice). Axial compression testing was performed with a material testing machine. Results obtained for the elastic and plastic regions of the stress-strain curves examined by univariate analysis are described. Based on multivariate analysis including all groups, it was found that the temperature of e-beam irradiation and defatting had no consistent significant effect on the evaluated mechanical parameters of compact bone rings. In contrast, irradiation with both doses significantly decreased the ultimate strain and its derivative, toughness, while not affecting the ultimate stress (bone strength). As no deterioration of mechanical properties was observed in the elastic region, the reduction of the energy
1992-01-29
annular proton beam is extracted from the surface flashover of a lucite ring. The ring is concave with respect to the A-K gap, with a 3.3-cm radius...that the beam is expanding radially as it propagates through the gap (see fig. 3 middle). The divergence downstream of the second gap, measured at the...applied to determine the transport efficiency and other beam properties. These included nuclear activation to measure the total number of protons in
NASA Technical Reports Server (NTRS)
Jansen, Ralph
1995-01-01
Neural network systems were evaluated for use in predicting wear of mechanical systems. Three different neural network software simulation packages were utilized in order to create models of tribological wear tests. Representative simple, medium, and high complexity simulation packages were selected. Pin-on-disk, rub shoe, and four-ball tribological test data were used for training, testing, and verification of the neural network models. Results showed mixed success. The neural networks were able to predict results with some accuracy if the number of input variables was low or the amount of training data was high. Increased neural network complexity resulted in more accurate results; however, there was a point of diminishing returns. Medium complexity models were the best trade-off between accuracy and computing time requirements. A NASA Technical Memorandum and a Society of Tribologists and Lubrication Engineers paper are being published which detail the work.
NASA Astrophysics Data System (ADS)
Kawata, Masaaki; Mikami, Masuhiro
A canonical molecular dynamics (MD) simulation was accelerated by using an efficient implementation of the multiple-timestep integrator algorithm combined with the periodic fast multipole method (MEFMM) for both Coulombic and van der Waals interactions. Although a significant reduction in computational cost had been obtained previously by using the integrated method in which the MEFMM was used only to calculate Coulombic interactions (Kawata, M., and Mikami, M., 2000, J. Comput. Chem., in press), the extension of this method to include van der Waals interactions yielded further acceleration of the overall MD calculation by a factor of about two. Compared with conventional methods, such as the velocity-Verlet algorithm combined with the Ewald method (timestep of 0.25 fs), the speedup from the extended integrated method amounted to a factor of 500 for a 100 ps simulation. Therefore, the extended method substantially reduces the computational effort of large-scale MD simulations.
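The multiple-timestep idea the abstract relies on splits the forces by timescale: slowly varying (long-range) forces are applied as half-kicks at the outer timestep, while fast forces are integrated with velocity Verlet at a shorter inner timestep. The function below is a generic one-dimensional r-RESPA step written as an illustrative sketch under assumed unit mass; it is not the MEFMM implementation, and all names are hypothetical:

```python
def respa_step(x, v, fast_force, slow_force, dt_long, n_sub, mass=1.0):
    """One r-RESPA multiple-timestep step for a single degree of freedom.

    Slow (e.g. long-range) forces are applied as half-kicks at the outer
    timestep dt_long; fast forces are integrated with velocity Verlet at
    the inner timestep dt_long / n_sub.
    """
    dt_short = dt_long / n_sub
    v += 0.5 * dt_long * slow_force(x) / mass       # outer half-kick (slow)
    for _ in range(n_sub):                          # inner velocity-Verlet loop
        v += 0.5 * dt_short * fast_force(x) / mass
        x += dt_short * v
        v += 0.5 * dt_short * fast_force(x) / mass
    v += 0.5 * dt_long * slow_force(x) / mass       # outer half-kick (slow)
    return x, v
```

The payoff is that the expensive slow force (here, the MEFMM-evaluated nonbonded interactions) is computed once per outer step instead of once per inner step, which is the source of the speedups quoted in the abstract.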
Kader, Abdulrahman A; Kumar, Angamuthu; Krishna, Ananth; Zaman, Mohamed Nassimu
2006-12-01
We prospectively studied an accelerated phenotypic method by incorporating the double disk synergy test in the standard Kirby-Bauer disk diffusion susceptibility testing, to evaluate a protocol for the rapid detection of extended-spectrum beta-lactamases (ESBL) in urinary isolates of Escherichia coli (E. coli) and Klebsiella pneumoniae (K. pneumoniae). All ESBL-positive isolates were confirmed by the standard Clinical Laboratory Standards Institute (CLSI) confirmatory disk diffusion method. Between November 2004 and December 2005, a total of 6988 urine specimens were analyzed of which, 776 (11%) showed significant growth. They included E. coli in 577 cases (74%) and K. pneumoniae in 199 (25.6%). Of these, 63 E. coli (8%) and 15 K. pneumoniae (7.5%) were positive for ESBL by the accelerated and CLSI methods. Compared to the standard CLSI method, the accelerated method reduced the ESBL detection time from two days to one day. We conclude that the accelerated ESBL detection technique used by us in this study is a reliable and rapid method for detecting ESBL in urinary isolates of E. coli and K. pneumoniae.
John Womersley
2003-08-21
I describe the future accelerator facilities that are currently foreseen for electroweak scale physics, neutrino physics, and nuclear structure. I will explore the physics justification for these machines, and suggest how the case for future accelerators can be made.
Gwin, Joseph T; Chu, Jeffery J; Diamond, Solomon G; Halstead, P David; Crisco, Joseph J; Greenwald, Richard M
2010-01-01
The performance characteristics of football helmets are currently evaluated by simulating head impacts in the laboratory using a linear drop test method. To encourage development of helmets designed to protect against concussion, the National Operating Committee for Standards in Athletic Equipment recently proposed a new headgear testing methodology with the goal of more closely simulating in vivo head impacts. This proposed test methodology involves an impactor striking a helmeted headform, which is attached to a nonrigid neck. The purpose of the present study was to compare headform accelerations recorded according to the current (n=30) and proposed (n=54) laboratory test methodologies to head accelerations recorded in the field during play. In-helmet systems of six single-axis accelerometers were worn by the Dartmouth College men's football team during the 2005 and 2006 seasons (n=20,733 impacts; 40 players). The impulse response characteristics of a subset of laboratory test impacts (n=27) were compared with the impulse response characteristics of a matched sample of in vivo head accelerations (n=24). Second- and third-order underdamped, conventional, continuous-time process models were developed for each impact. These models were used to characterize the linear head/headform accelerations for each impact based on frequency domain parameters. Headform linear accelerations generated according to the proposed test method were less similar to in vivo head accelerations than headform accelerations generated by the current linear drop test method. The nonrigid neck currently utilized was not developed to simulate sport-related direct head impacts and appears to be a source of the discrepancy between frequency characteristics of in vivo and laboratory head/headform accelerations. In vivo impacts occurred 37% more frequently on helmet regions that are tested in the proposed standard than on helmet regions tested currently. This increase was largely due to the
Rodríguez, Francisca A; Mateo, María N; Aceves, Juan M; Rivero, Eligio P; González, Ignacio
2013-01-01
This work presents a study on degradation of indigo carmine dye in a filter-press type FM01-LC reactor using Sb2O5-doped Ti/IrO2-SnO2 dimensionally stable anode (DSA) electrodes. Micro- and macroelectrolysis studies were carried out using solutions of 0.8 mM indigo carmine in 0.05 M NaCl, which resemble blue denim laundry industrial wastewater. Microelectrolysis results show the behaviour of DSA electrodes in comparison with the behaviour of boron-doped diamond (BDD) electrodes. In general, dye degradation reactions are carried out indirectly through active chlorine generated on DSA, whereas in the case of BDD electrodes more oxidizing species are formed, mainly OH radicals, on the electrode surface. The well-characterized geometry, flow pattern and mass transport of the FM01-LC reactor used in macroelectrolysis experiments allowed the evaluation of the effect of hydrodynamic conditions on the chlorine-mediated degradation rate. Four values of Reynolds number (Re) (93, 371, 464 and 557) at four current densities (50, 100, 150 and 200 A/m2) were tested. The results show that the degradation rate is independent of Re at low current density (50 A/m2) but becomes dependent on the Re at high current density (200 A/m2). This behaviour shows the central role of mass transport and the reactor parameters and design. The low energy consumption (2.02 and 9.04 kWh/m3 for complete discolouration and chemical oxygen demand elimination at 50 A/m2, respectively) and the low cost of DSA electrodes compared to BDD make DSA electrodes promising for practical application in treating industrial textile effluents. In the present study, chlorinated organic compounds were not detected.
1985-01-01
percent (see Fig. 12). [Figure residue: abscissa, particle displacement from the accelerator axis; 16 mrad scale]...ARPA." 1. Proton accelerators. 2. Ion bombardment-Research-Soviet Union. 3. Linear accelerators. 4. Particle beams-Technique. I. United States...present author on the subject of generating and accelerating intense ion and neutral particle beams. The first report was The Development of High
Greenough, Lucia; Schermerhorn, Kelly M; Mazzola, Laurie; Bybee, Joanna; Rivizzigno, Danielle; Cantin, Elizabeth; Slatko, Barton E; Gardner, Andrew F
2016-01-29
Detailed biochemical characterization of nucleic acid enzymes is fundamental to understanding nucleic acid metabolism, genome replication and repair. We report the development of a rapid, high-throughput fluorescence capillary gel electrophoresis method as an alternative to traditional polyacrylamide gel electrophoresis to characterize nucleic acid metabolic enzymes. The principles of assay design described here can be applied to nearly any enzyme system that acts on a fluorescently labeled oligonucleotide substrate. Herein, we describe several assays using this core capillary gel electrophoresis methodology to accelerate study of nucleic acid enzymes. First, assays were designed to examine DNA polymerase activities including nucleotide incorporation kinetics, strand displacement synthesis and 3'-5' exonuclease activity. Next, DNA repair activities of DNA ligase, flap endonuclease and RNase H2 were monitored. In addition, a multicolor assay that uses four different fluorescently labeled substrates in a single reaction was implemented to characterize GAN nuclease specificity. Finally, a dual-color fluorescence assay to monitor coupled enzyme reactions during Okazaki fragment maturation is described. These assays serve as a template to guide further technical development for enzyme characterization or nucleoside and non-nucleoside inhibitor screening in a high-throughput manner.
NASA Astrophysics Data System (ADS)
Bailey, I. R.; Barber, D. P.; Chattopadhyay, S.; Hartin, A.; Heinzl, T.; Hesselbach, S.; Moortgat-Pick, G. A.
2009-11-01
The joint IPPP Durham/Cockcroft Institute/ICFA workshop on advanced QED methods for future accelerators took place at the Cockcroft Institute in early March 2009. The motivation for the workshop was the need for a detailed consideration of the physics processes associated with beam-beam effects at the interaction points of future high-energy electron-positron colliders. There is a broad consensus within the particle physics community that the next international facility for experimental high-energy physics research beyond the Large Hadron Collider at CERN should be a high-luminosity electron-positron collider working at the TeV energy scale. One important feature of such a collider will be its ability to deliver polarised beams to the interaction point and to provide accurate measurements of the polarisation state during physics collisions. The physics collisions take place in very dense charge bunches in the presence of extremely strong electromagnetic fields, with field strengths of the order of the Schwinger critical field strength of 4.4×10^13 Gauss. These intense fields lead to depolarisation processes which need to be thoroughly understood in order to reduce uncertainty in the polarisation state at collision. To that end, this workshop reviewed the formalisms for describing radiative processes and the methods of calculation in the future strong-field environments. These calculations are based on the Furry picture of organising the interaction term of the Lagrangian. The means of deriving the transition probability of the most important of the beam-beam processes, beamstrahlung, was reviewed. The workshop was honoured by presentations from V N Baier, one of the founders of the 'operator method', one means of performing these calculations. Other theoretical methods of performing calculations in the Furry picture, namely those due to A I Nikishov, V I Ritus et al, were reviewed, and intense field quantum processes in fields of different form - namely those
Hoven, Andor F. van den; Leeuwen, Maarten S. van; Lam, Marnix G. E. H.; Bosch, Maurice A. A. J. van den
2015-02-15
Purpose: Current anatomical classifications do not include all variants relevant for radioembolization (RE). The purpose of this study was to assess the individual hepatic arterial configuration and segmental vascularization pattern and to develop an individualized RE treatment strategy based on an extended classification. Methods: The hepatic vascular anatomy was assessed on MDCT and DSA in patients who received a workup for RE between February 2009 and November 2012. Reconstructed MDCT studies were assessed to determine the hepatic arterial configuration (origin of every hepatic arterial branch, branching pattern and anatomical course) and the hepatic segmental vascularization territory of all branches. Aberrant hepatic arteries were defined as hepatic arterial branches that did not originate from the celiac axis/CHA/PHA. Early branching patterns were defined as hepatic arterial branches originating from the celiac axis/CHA. Results: The hepatic arterial configuration and segmental vascularization pattern could be assessed in 110 of 133 patients. In 59 patients (54 %), no aberrant hepatic arteries or early branching was observed. Fourteen patients without aberrant hepatic arteries (13 %) had an early branching pattern. In the 37 patients (34 %) with aberrant hepatic arteries, five also had an early branching pattern. Sixteen different hepatic arterial segmental vascularization patterns were identified and described, differing by the presence of aberrant hepatic arteries, their respective vascular territory, and origin of the artery vascularizing segment four. Conclusions: The hepatic arterial configuration and segmental vascularization pattern show marked individual variability beyond well-known classifications of anatomical variants. We developed an individualized RE treatment strategy based on an extended anatomical classification.
Hertzberg, A.; Bruckner, A.P.; Mattick, A.T.; Bogdanoff, D.W.; Brackett, D.C.; McFall, K.A.
1987-01-01
This report describes work performed for the Department of Energy over the time period 1 June 1985 to 30 April 1987. The main areas of investigation are computational studies of gas and high explosive driven ramjet-in-tube concepts over the velocity range 3 - 20 km/sec, linear velocity multiplication over the velocity range 7 - 100+ km/sec and radiation emitted from impacts at closing velocities of 80 - 400 km/sec. This report presents the computational methods used, including benchmark proof tests of these methods, as well as results of the investigations. 41 refs., 62 figs., 11 tabs.
The possible evidence of the non-linear particle acceleration in Cas A from Planck data
NASA Astrophysics Data System (ADS)
Urošević, Dejan
2015-08-01
Arnaud et al. (2014, arXiv:1409.5746) have recently published their microwave survey of Galactic supernova remnants, based on observations made by the Planck telescope. The high-frequency radio data obtained by Planck reveal a clearly concave-up spectrum for the Galactic supernova remnant (SNR) Cas A. This is the expected form of the spectrum if the non-linear diffusive shock acceleration (DSA) process is active. The radio spectral index (flux density S_ν ∝ ν^(-α)) of Cas A at low and middle frequencies (< 30 GHz) has the value α = 0.77. At higher frequencies (between 30 GHz and 353 GHz) the spectrum becomes flatter, α ≈ 0.6. Under the assumption of test-particle DSA, as a first approximation, the corresponding compression ratio should increase from 3 (α = 0.77) to 3.5 (α = 0.6). This represents possible observational evidence for the existence of a modified shock wave.
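The quoted compression ratios follow from the standard test-particle DSA relations; a short sketch (the formulas are textbook DSA, not specific to this paper):

```python
def compression_ratio(alpha):
    """Shock compression ratio r implied by a radio spectral index alpha
    (S_nu ∝ nu^-alpha) in test-particle DSA: the electron index is
    p = 2*alpha + 1 and p = (r + 2)/(r - 1), so r = (2*alpha + 3)/(2*alpha)."""
    return (2.0 * alpha + 3.0) / (2.0 * alpha)

# The Cas A values quoted in the abstract:
r_low = compression_ratio(0.77)   # ~2.9, i.e. close to 3
r_high = compression_ratio(0.6)   # exactly 3.5
```

alpha = 0.77 gives r ≈ 2.95 (quoted as 3) and alpha = 0.6 gives r = 3.5, matching the abstract.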
NASA Astrophysics Data System (ADS)
Su, Lin; Du, Xining; Liu, Tianyu; Xu, X. George
2014-06-01
An electron-photon coupled Monte Carlo code ARCHER -
Woo, M K
2002-12-01
This work describes a method to obtain "star-shots" of the mechanical and optical isocenters of linear accelerators, similar to the star-shots of radiation isocenters normally obtained using films. In this method a digital camera is connected to a personal computer so that multiply exposed images can be taken at a fixed camera position. A mechanical pointer or a wire aligned along the optical axis can then be imaged by the camera. Multiple exposures at varying gantry angles are then superimposed on a digital image which can be analyzed by the computer to give a high-resolution star-shot. The method provides a convenient linear accelerator quality assurance procedure.
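Once spoke positions and gantry angles have been extracted from the superimposed image, the isocenter can be estimated as the least-squares intersection of the spoke lines. A minimal sketch with hypothetical coordinates (this is a generic geometric fit, not the paper's software):

```python
import math

def star_shot_center(points, angles_deg):
    """Least-squares intersection of star-shot spokes: spoke i passes
    through points[i] at gantry angle angles_deg[i]. Solves
    sum_i (I - d_i d_i^T) x = sum_i (I - d_i d_i^T) p_i for the point x
    minimizing the summed squared distance to all spoke lines."""
    a11 = a12 = a22 = b1 = b2 = 0.0
    for (px, py), theta in zip(points, (math.radians(a) for a in angles_deg)):
        dx, dy = math.cos(theta), math.sin(theta)
        # M = I - d d^T projects onto the normal of the spoke direction.
        m11, m12, m22 = 1.0 - dx * dx, -dx * dy, 1.0 - dy * dy
        a11 += m11; a12 += m12; a22 += m22
        b1 += m11 * px + m12 * py
        b2 += m12 * px + m22 * py
    det = a11 * a22 - a12 * a12          # 2x2 solve via Cramer's rule
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)

# Three spokes that all pass exactly through (1.0, 2.0):
center = star_shot_center([(1.0, 2.0), (2.0, 3.0), (1.0, 1.0)], [0.0, 45.0, 90.0])
```

With noisy real spokes the same solve returns the point of minimum total perpendicular distance, whose residuals give the star-shot radius.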
Combined generating-accelerating buncher for compact linear accelerators
NASA Astrophysics Data System (ADS)
Savin, E. A.; Matsievskiy, S. V.; Sobenin, N. P.; Sokolov, I. D.; Zavadtsev, A. A.
2016-09-01
The method of power extraction from a modulated electron beam described in a previous article [1] has been applied to a compact standing-wave electron linear accelerator feeding system, which does not require any connecting waveguides between the power source and the accelerator itself [2]. Generating and accelerating bunches meet in a hybrid accelerating cell operating in the TM020 mode; thus the accelerating module is placed on the axis of the generating module, which consists of pulsed high-voltage electron sources and electron dumps. This combination makes the accelerator very compact, which is valuable for modern applications such as portable inspection sources. Simulations and cold tests of the geometry are presented.
Medina, L Carolina; Sartain, Jerry; Obreza, Thomas; Leary, Emily; Hall, William L; Thiex, Nancy J
2014-01-01
Several technologies have been proposed to characterize the nutrient release patterns of enhanced-efficiency fertilizers (EEFs) during the last few decades. These technologies have been developed mainly by manufacturers and are product-specific based on the regulation and analysis of each EEF product. Despite previous efforts to characterize nutrient release of slow-release fertilizer (SRF) and controlled-release fertilizer (CRF) materials, no official method exists to assess their nutrient release patterns. However, the increased production and distribution of EEFs in specialty and nonspecialty markets requires an appropriate method to verify nutrient claims and material performance. Nonlinear regression was used to establish a correlation between the data generated from a 180-day soil incubation-column leaching procedure and 74 h accelerated lab extraction method, and to develop a model that can predict the 180-day nitrogen (N) release curve for a specific SRF and CRF product based on the data from the accelerated laboratory extraction method. Based on the R2 > 0.90 obtained for most materials, results indicated that the data generated from the 74 h accelerated lab extraction method could be used to predict N release from the selected materials during 180 days, including those fertilizers that require biological activity for N release.
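A hedged sketch of the underlying idea: fit a simple first-order release model to short-term accelerated-extraction data, then extrapolate to the 180-day horizon. The data and the model form here are illustrative only; the study's actual nonlinear regression models are not specified in this abstract.

```python
import math

# Hypothetical calibration data: cumulative N release (%) during a 74 h
# accelerated extraction, generated here from a first-order model so the
# fit can be checked (illustrative, not data from the study).
true_n0, true_k = 80.0, 0.5              # % releasable N, 1/day
t_days = [0.04, 0.13, 0.33, 0.71, 1.3, 2.0, 3.08]
release = [true_n0 * (1.0 - math.exp(-true_k * t)) for t in t_days]

# First-order release N(t) = N0*(1 - exp(-k t)): for each trial k the best
# N0 has a closed form, so only k needs a 1-D grid search.
best_sse, k_fit, n0_fit = float("inf"), None, None
for i in range(1, 2000):
    k = 0.001 * i
    basis = [1.0 - math.exp(-k * t) for t in t_days]
    n0 = sum(b * r for b, r in zip(basis, release)) / sum(b * b for b in basis)
    sse = sum((r - n0 * b) ** 2 for b, r in zip(basis, release))
    if sse < best_sse:
        best_sse, k_fit, n0_fit = sse, k, n0

# Extrapolate the fitted model to the 180-day column-leaching horizon:
pred_180 = n0_fit * (1.0 - math.exp(-k_fit * 180.0))
```

The study's reported R² > 0.90 refers to regressions of this general kind relating the 74 h extraction data to the 180-day incubation curves.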
YOUNG SUPERNOVAE AS EXPERIMENTAL SITES FOR STUDYING THE ELECTRON ACCELERATION MECHANISM
Maeda, Keiichi
2013-01-10
Radio emissions from young supernovae (≲1 year after the explosion) show a peculiar feature in the relativistic electron population at the shock wave: their energy distribution is steeper than typically found in supernova remnants and than that predicted from the standard diffusive shock acceleration (DSA) mechanism. This has been especially well established for a class of stripped-envelope supernovae (SNe IIb/Ib/Ic), where a combination of high shock velocity and low circumstellar material density makes it easier to derive the intrinsic energy distribution than in other classes of SNe. We suggest that this apparent discrepancy reflects a situation where low-energy electrons, before being accelerated by the DSA-like mechanism, are responsible for the radio synchrotron emission from young SNe, and that studying young SNe sheds light on the still-unresolved electron injection problem in the acceleration theory of cosmic rays. We suggest that the electron energy distribution could be flattened toward high energy, most likely around 100 MeV, which marks a transition from inefficient to efficient acceleration. Identifying this feature will be a major advance in understanding the electron acceleration mechanism. We suggest two further probes: (1) millimeter/submillimeter observations in the first year after the explosion and (2) X-ray observations at about one year and thereafter. We show that these are reachable by ALMA and Chandra for nearby SNe.
Shock waves and particle acceleration in clusters of galaxies
NASA Astrophysics Data System (ADS)
Ryu, Dongsu; Kang, Hyesung; Ha, Ji-Hoon
2017-01-01
During the formation of the large-scale structure of the universe, the intracluster medium (ICM), which fills the volume of galaxy clusters and is composed of hot, high-beta plasma, is continuously disturbed by major and minor mergers of clumps as well as by infall along filaments of the warm-hot intergalactic medium (WHIM). Such activities induce shock waves, which are observed in radio and X-ray, mostly in cluster outskirts. These shocks are collisionless, as in other astrophysical environments, and are thought to accelerate cosmic rays (CRs) via the diffusive shock acceleration (DSA) mechanism. Here, we present the properties of shocks in ICMs and their roles in the generation of nonthermal particles, studied with high-resolution simulations. We also discuss the implications for observations of diffuse radio emission from galaxy clusters, such as radio relics.
Ellis, P.F. II; Ferguson, A.F.; Fuentes, K.T.
1996-05-06
In 1992, the Air-Conditioning and Refrigeration Technology Institute, Inc. (ARTI) contracted Radian Corporation to ascertain whether an improved accelerated test method or procedure could be developed that would allow prediction of the life of motor insulation materials used in hermetic motors for air-conditioning and refrigeration equipment operated with alternative refrigerant/lubricant mixtures. This report presents the results of phase three concerning the reproducibility and discrimination testing.
2008-08-01
Long-Term Accelerated Corrosion and Adhesion Assessment of CARC Prepared Aluminum Alloy 5059-H131 Using Three Different Surface Preparation Methods. Keywords: corrosion, aluminum, 5059-H131, cyclic, GM 9540P, salt fog, adhesion, pull-off.
NASA Astrophysics Data System (ADS)
Golyanskaya, Evgeniya. O.; Sivkov, Aleksandr A.; Anikina, Zhanna S.
2016-02-01
One of the most promising trends in modern physics is high-temperature superconductivity. Analysis of high-temperature superconductors has revealed that almost all of them are complex copper-based oxides. Studies have shown the possibility of using a coaxial magneto accelerator for their synthesis. The products in the synthesized soot have been identified: Cu, Cu2O, and CuO, together with their shape and size. The composition of the nanopowder obtained under laboratory conditions has also been determined and confirmed by electron microscopy.
Silari, Marco
2007-01-01
A good knowledge of the radiation field present outside the shielding of high-energy particle accelerators is very important to be able to select the type of detectors (active and/or passive) to be employed for area monitoring and the type of personal dosemeter required for estimating the doses received by individuals. Around high-energy electron and proton accelerators the radiation field is usually dominated by neutrons and photons, with minor contributions from other charged particles. Under certain circumstances, muon radiation in the forward beam direction may also be present. Neutron dosimetry and spectrometry are of primary importance to characterise the radiation field and thus to correctly evaluate personnel exposure. Starting from the beam parameters important for radiation monitoring, the paper first briefly reviews the stray radiation fields encountered around high-energy accelerators and then addresses the relevant techniques employed for their monitoring. Recent developments to increase the response of neutron measuring devices beyond 10-20 MeV are illustrated. Instruments should be correctly calibrated either in reference monoenergetic radiation fields or in a field similar to the field in which they are used (workplace calibration). The importance of the instrument calibration is discussed and available neutron calibration facilities are briefly reviewed.
Colgate, S.A.
1958-05-27
An improvement is presented in linear accelerators for charged particles with respect to the stable focusing of the particle beam. The improvement consists of providing a radial electric field transverse to the accelerating electric fields and angularly introducing the beam of particles in the field. The results of the foregoing is to achieve a beam which spirals about the axis of the acceleration path. The combination of the electric fields and angular motion of the particles cooperate to provide a stable and focused particle beam.
2017-01-01
The identification of permissible HLA class II mismatches can prevent DSA in mismatched transplantation. The HLA-DR phenotype of recipients contributes to DSA formation by presenting allo-HLA-derived peptides to T-helper cells, which induces the differentiation of B cells into plasma cells. Comparing the binding affinity of self and nonself allo-HLA-derived peptides for recipients' HLA class II antigens may distinguish immunogenic HLA mismatches from nonimmunogenic ones. The binding affinities of allo-HLA-derived peptides to recipients' HLA-DR and HLA-DQ antigens were predicted using the NetMHCIIpan 3.1 server. HLA class II mismatches were classified based on whether they induced DSA and whether self or nonself peptide was predicted to bind with highest affinity to recipients' HLA-DR and HLA-DQ. Other mismatch characteristics (eplet, hydrophobic, electrostatic, and amino acid mismatch scores and PIRCHE-II) were evaluated. A significant association occurred between DSA formation and the predicted HLA-DR presentation of nonself peptides (P = 0.0169; accuracy = 80%; sensitivity = 88%; specificity = 63%). In contrast, mismatch characteristics did not differ significantly between mismatches that induced DSA and the ones that did not, except for PIRCHE-II (P = 0.0094). This methodology predicts DSA formation based on HLA mismatches and recipients' HLA-DR phenotype and may identify permissible HLA mismatches to help optimize HLA matching and guide donor selection. PMID:28331856
NASA Astrophysics Data System (ADS)
Hwang, James Ho-Jin; Duran, Adam
2016-08-01
Most of the time, pyrotechnic shock design and test requirements for space systems are provided as a Shock Response Spectrum (SRS) without the input time history. Since the SRS does not describe the input or the environment, a decomposition method is used to obtain the source time history. The main objective of this paper is to develop a decomposition method producing input time histories that can satisfy the SRS requirement based on the pyrotechnic shock test data measured from a mechanical impact test apparatus. At the heart of this decomposition method is the statistical representation of the pyrotechnic shock test data measured from the MIT Lincoln Laboratory (LL) designed Universal Pyrotechnic Shock Simulator (UPSS). Each pyrotechnic shock test data record measured at the interface of a test unit has been analyzed to produce the temporal peak acceleration, Root Mean Square (RMS) acceleration, and the phase lag at each band center frequency. The maximum SRS of each filtered time history has been calculated to produce a relationship between the input and the response. Two new definitions are proposed as a result. The Peak Ratio (PR) is defined as the ratio between the maximum SRS and the temporal peak acceleration at each band center frequency. The ratio between the maximum SRS and the RMS acceleration is defined as the Energy Ratio (ER) at each band center frequency. The phase lag is estimated based on the time delay between the temporal peak acceleration at each band center frequency and the peak acceleration at the lowest band center frequency. This stochastic process has been applied to more than one hundred pyrotechnic shock test data records to produce probabilistic definitions of the PR, ER, and the phase lag. The SRS is decomposed at each band center frequency using damped sinusoids with the PR and the decays obtained by matching the ER of the damped sinusoids to the ER of the test data. The final step in this stochastic SRS decomposition process is the Monte Carlo (MC
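The damped-sinusoid synthesis step can be sketched as follows (hypothetical amplitudes, decays and lags, not MIT LL data):

```python
import math

# A pyroshock time history synthesized as a sum of damped sinusoids,
# one per band center frequency, each with its own amplitude, decay
# rate and phase lag relative to the lowest band.
fs = 20000.0                                  # sample rate, Hz
times = [i / fs for i in range(1000)]         # 50 ms record
bands = [                                     # (f_c Hz, amp g, decay 1/s, lag s)
    (500.0, 80.0, 150.0, 0.000),
    (1000.0, 150.0, 250.0, 0.002),
    (2000.0, 300.0, 400.0, 0.004),
]

components = []
for fc, amp, decay, lag in bands:
    x = [amp * math.exp(-decay * (t - lag)) * math.sin(2 * math.pi * fc * (t - lag))
         if t >= lag else 0.0
         for t in times]
    components.append(x)
signal = [sum(vals) for vals in zip(*components)]

# Per-band temporal peak and RMS acceleration: the denominators of the
# paper's Peak Ratio (max SRS / temporal peak) and Energy Ratio
# (max SRS / RMS). The max SRS itself would come from an SRS algorithm
# (e.g., a ramp-invariant SDOF filter), which is omitted here.
peaks = [max(abs(v) for v in x) for x in components]
rms = [math.sqrt(sum(v * v for v in x) / len(x)) for x in components]
```

Matching the ER of each synthesized component to the ER measured from test data is what fixes the decay rates in the paper's procedure.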
Probing acceleration and turbulence at relativistic shocks in blazar jets
NASA Astrophysics Data System (ADS)
Baring, Matthew G.; Böttcher, Markus; Summerlin, Errol J.
2017-02-01
Diffusive shock acceleration (DSA) at relativistic shocks is widely thought to be an important acceleration mechanism in various astrophysical jet sources, including radio-loud active galactic nuclei such as blazars. Such acceleration can produce the non-thermal particles that emit the broad-band continuum radiation that is detected from extragalactic jets. An important recent development for blazar science is the ability of Fermi-Large Area Telescope spectroscopy to pin down the shape of the distribution of the underlying non-thermal particle population. This paper highlights how multiwavelength spectra spanning optical to X-ray to gamma-ray bands can be used to probe diffusive acceleration in relativistic, oblique, magnetohydrodynamic (MHD) shocks in blazar jets. Diagnostics on the MHD turbulence near such shocks are obtained using thermal and non-thermal particle distributions resulting from detailed Monte Carlo simulations of DSA. These probes are afforded by the characteristic property that the synchrotron νFν peak energy does not appear in the gamma-ray band above 100 MeV. We investigate self-consistently the radiative synchrotron and inverse Compton signatures of the simulated particle distributions. Important constraints on the diffusive mean free paths of electrons, and the level of electromagnetic field turbulence are identified for three different case study blazars, Mrk 501, BL Lacertae and AO 0235+164. The X-ray excess of AO 0235+164 in a flare state can be modelled as the signature of bulk Compton scattering of external radiation fields, thereby tightly constraining the energy-dependence of the diffusion coefficient for electrons. The concomitant interpretations that turbulence levels decline with remoteness from jet shocks, and the probable significant role for non-gyroresonant diffusion, are posited.
Cosmic Ray Acceleration by a Versatile Family of Galactic Wind Termination Shocks
NASA Astrophysics Data System (ADS)
Bustard, Chad; Zweibel, Ellen G.; Cotter, Cory
2017-01-01
There are two distinct breaks in the cosmic ray (CR) spectrum: the so-called “knee” around 3 × 10^15 eV and the so-called “ankle” around 10^18 eV. Diffusive shock acceleration (DSA) at supernova remnant (SNR) shock fronts is thought to accelerate galactic CRs to energies below the knee, while an extragalactic origin is presumed for CRs with energies beyond the ankle. CRs with energies between 3 × 10^15 and 10^18 eV, which we dub the “shin,” have an unknown origin. It has been proposed that DSA at galactic wind termination shocks, rather than at SNR shocks, may accelerate CRs to these energies. This paper uses the galactic wind model of Bustard et al. to analyze whether galactic wind termination shocks may accelerate CRs to shin energies within a reasonable acceleration time and whether such CRs can subsequently diffuse back to the Galaxy. We argue for acceleration times on the order of 100 Myr rather than a few billion years, as assumed in some previous works, and we discuss prospects for magnetic field amplification at the shock front. Ultimately, we generously assume that the magnetic field is amplified to equipartition. This formalism allows us to obtain analytic formulae, applicable to any wind model, for CR acceleration. Even with generous assumptions, we find that very high wind velocities are required to set up the necessary conditions for acceleration beyond 10^17 eV. We also estimate the luminosities of CRs accelerated by outflow termination shocks, including estimates for the Milky Way wind.
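The 100 Myr acceleration-time budget can be made concrete with the standard DSA acceleration timescale (a textbook Drury-type estimate, not this paper's specific formalism; the Bohm-limit diffusion coefficient is an added assumption):

```latex
t_{\mathrm{acc}} \simeq \frac{3}{u_1 - u_2}
  \left( \frac{\kappa_1}{u_1} + \frac{\kappa_2}{u_2} \right),
\qquad
\kappa_{\mathrm{Bohm}} = \frac{r_g c}{3} = \frac{E c}{3 Z e B},
```

Since the diffusion coefficient grows linearly with energy, requiring t_acc ≲ 100 Myr at a shock of speed u_1 caps the attainable energy at E_max ∝ Z e B u_1^2 t_acc, which is why both very high wind velocities and field amplification toward equipartition are needed to push beyond 10^17 eV.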
TIME-DEPENDENT DIFFUSIVE SHOCK ACCELERATION IN SLOW SUPERNOVA REMNANT SHOCKS
Tang, Xiaping; Chevalier, Roger A.
2015-02-20
Recent gamma-ray observations show that middle-aged supernova remnants (SNRs) interacting with molecular clouds can be sources of both GeV and TeV emission. Models involving reacceleration of preexisting cosmic rays (CRs) in the ambient medium and direct interaction between SNR and molecular clouds have been proposed to explain the observed gamma-ray emission. For the reacceleration process, standard diffusive shock acceleration (DSA) theory in the test particle limit produces a steady-state particle spectrum that is too flat compared to observations, which suggests that the high-energy part of the observed spectrum has not yet reached a steady state. We derive a time-dependent DSA solution in the test particle limit for situations involving reacceleration of preexisting CRs in the preshock medium. Simple estimates with our time-dependent DSA solution plus a molecular cloud interaction model can reproduce the overall shape of the spectra of IC 443 and W44 from GeV to TeV energies through pure π^0-decay emission. We allow for a power-law momentum dependence of the diffusion coefficient, finding that a power-law index of 0.5 is favored.
Abbin, Jr., Joseph P.; Devaney, Howard F.; Hake, Lewis W.
1982-08-17
The disclosure relates to an improved integrating acceleration switch of the type having a mass suspended within a fluid filled chamber, with the motion of the mass initially opposed by a spring and subsequently not so opposed.
Abbin, J.P. Jr.; Devaney, H.F.; Hake, L.W.
1979-08-29
The disclosure relates to an improved integrating acceleration switch of the type having a mass suspended within a fluid filled chamber, with the motion of the mass initially opposed by a spring and subsequently not so opposed.
Bell, J.S.
1959-09-15
An arrangement for the drift tubes in a linear accelerator is described whereby each drift tube acts to shield the particles from the influence of the accelerating field and focuses the particles passing through the tube. In one embodiment the drift tube is split longitudinally into quadrants supported along the axis of the accelerator by webs from a yoke, the quadrants, webs, and yoke being of magnetic material. A magnetic focusing action is produced by energizing a winding on each web to set up a magnetic field between adjacent quadrants. In the other embodiment the quadrants are electrically insulated from each other and have opposite polarity voltages on adjacent quadrants to provide an electric focusing field for the particles, with the quadrants spaced sufficiently close to shield the particles within the tube from the accelerating electric field.
Pern, F. J.; Noufi, R.
2012-10-01
A step-stress accelerated degradation testing (SSADT) method was employed for the first time to evaluate the stability of CuInGaSe2 (CIGS) solar cells and device component materials in four Al-framed test structures encapsulated with an edge sealant and three kinds of backsheet or moisture barrier film for moisture ingress control. The SSADT exposure used a 15°C and then a 15% relative humidity (RH) increment step, beginning from 40°C/40%RH (T/RH = 40/40) and reaching 85°C/70%RH (85/70) so far. The voluminous data acquired and processed as of a total DH = 3956 h, with 85/70 = 704 h, produced the following results. The best CIGS solar cells in sample Set-1 with a moisture-permeable TPT backsheet showed an essentially identical I-V degradation trend regardless of the Al-doped ZnO (AZO) layer thickness, ranging from the standard 0.12 μm to 0.50 μm on the cells. No clear 'stepwise' feature in the I-V parameter degradation curves corresponding to the SSADT T/RH/time profile was observed. Irregularity in the I-V performance degradation pattern was observed, with some cells showing early degradation at low T/RH < 55/55 and some showing large Voc, FF, and efficiency degradation due to increased series Rs (ohm-cm2) at T/RH ≥ 70/70. Results of (electrochemical) impedance spectroscopy (ECIS) analysis indicate that degradation of the CIGS solar cells corresponded to increased series resistance Rs (ohm) and degraded parallel (minority carrier diffusion/recombination) resistance Rp, capacitance C, overall time constant Rp*C, and 'capacitor quality' factor (CPE-P), which were related to the cells' p-n junction properties. Heating at 85/70 appeared to benefit the CIGS solar cells, as indicated by the largely recovered CPE-P factor. Device component materials, Mo on soda lime glass (Mo/SLG), bilayer ZnO (BZO), AlNi grid contact, and CdS/CIGS/Mo/SLG, in test structures with TPT showed notable to significant degradation at T/RH ≥ 70/70. At T/RH = 85/70, substantial blistering of BZO layers on CIGS
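The severity jump between stress steps can be illustrated with a conventional humidity-temperature acceleration model. This is a generic reliability-engineering sketch (Peck's model with illustrative exponent and activation energy), not a model fitted by this study:

```python
import math

BOLTZMANN_EV = 8.617e-5  # Boltzmann constant, eV/K

def peck_acceleration_factor(t1_c, rh1_pct, t2_c, rh2_pct, n=2.7, ea_ev=0.7):
    """Peck humidity-temperature model: acceleration factor of stress
    condition 2 relative to condition 1. The exponent n and activation
    energy Ea are generic illustrative values, not fitted to CIGS."""
    T1, T2 = t1_c + 273.15, t2_c + 273.15
    humidity_term = (rh2_pct / rh1_pct) ** n
    arrhenius_term = math.exp((ea_ev / BOLTZMANN_EV) * (1.0 / T1 - 1.0 / T2))
    return humidity_term * arrhenius_term

# Acceleration of the final 85C/70%RH step relative to the initial
# 40C/40%RH step of the SSADT profile:
af = peck_acceleration_factor(40.0, 40.0, 85.0, 70.0)
```

With these assumed parameters the final step stresses the samples roughly two orders of magnitude harder than the initial step, which is consistent with most of the observed degradation appearing at T/RH ≥ 70/70.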
Fragoso, Margarida; Kawrakow, Iwan; Faddegon, Bruce A; Solberg, Timothy D; Chetty, Indrin J
2009-12-01
In this work, an investigation of efficiency enhancing methods and cross-section data in the BEAMnrc Monte Carlo (MC) code system is presented. Additionally, BEAMnrc was compared with VMC++, another special-purpose MC code system that has recently been enhanced for the simulation of the entire treatment head. BEAMnrc and VMC++ were used to simulate a 6 MV photon beam from a Siemens Primus linear accelerator (linac) and phase space (PHSP) files were generated at 100 cm source-to-surface distance for the 10 × 10 and 40 × 40 cm² field sizes. The BEAMnrc parameters/techniques under investigation were grouped by (i) photon and bremsstrahlung cross sections, (ii) approximate efficiency improving techniques (AEITs), (iii) variance reduction techniques (VRTs), and (iv) a VRT (bremsstrahlung photon splitting) in combination with an AEIT (charged particle range rejection). The BEAMnrc PHSP file obtained without the efficiency enhancing techniques under study or, when not possible, with their default values (e.g., EXACT algorithm for the boundary crossing algorithm) and with the default cross-section data (PEGS4 and Bethe-Heitler) was used as the "base line" for accuracy verification of the PHSP files generated from the different groups described previously. Subsequently, a selection of the PHSP files was used as input for DOSXYZnrc-based water phantom dose calculations, which were verified against measurements. The performance of the different VRTs and AEITs available in BEAMnrc and of VMC++ was specified by the relative efficiency, i.e., by the efficiency of the MC simulation relative to that of the BEAMnrc base-line calculation. The highest relative efficiencies were approximately 935 (approximately 111 min on a single 2.6 GHz processor) and approximately 200 (approximately 45 min on a single processor) for the 10 × 10 cm² field size with 50 million histories and the 40 × 40 cm² field size with 100 million histories, respectively, using the VRT directional bremsstrahlung splitting
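The relative-efficiency figure of merit used here is the standard Monte Carlo efficiency ε = 1/(s²T); a minimal sketch with illustrative numbers (not the paper's data):

```python
def mc_efficiency(variance, cpu_time):
    """Monte Carlo efficiency epsilon = 1 / (s^2 * T): s^2 is the variance
    of the scored quantity and T the CPU time. A variance reduction or
    approximate technique pays off only if it increases epsilon."""
    return 1.0 / (variance * cpu_time)

def relative_efficiency(var_new, time_new, var_base, time_base):
    """Efficiency of an enhanced run relative to a base-line run."""
    return mc_efficiency(var_new, time_new) / mc_efficiency(var_base, time_base)

# Illustrative numbers only: reaching the same variance in 1/100th of the
# CPU time gives a relative efficiency of 100.
rel = relative_efficiency(1e-4, 1.0, 1e-4, 100.0)
```

This is why a technique that speeds up the run but inflates the variance can still lose: both factors enter the figure of merit.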
Christofilos, N.C.; Polk, I.J.
1959-02-17
Improvements in linear particle accelerators are described. A drift tube system for a linear ion accelerator reduces gap capacity between adjacent drift tube ends. This is accomplished by reducing the ratio of the diameter of the drift tube to the diameter of the resonant cavity. Concentration of magnetic field intensity at the longitudinal midpoint of the external surface of each drift tube is reduced by increasing the external drift tube diameter at the longitudinal center region.
Eriksen, Kristoffer A.; Hughes, John P.; Badenes, Carles; Fesen, Robert; Ghavamian, Parviz; Moffett, David; Plucinksy, Paul P.; Slane, Patrick; Rakowski, Cara E.; Reynoso, Estela M.
2011-02-20
Supernova remnants (SNRs) have long been assumed to be the source of cosmic rays (CRs) up to the 'knee' of the CR spectrum at 10^15 eV, accelerating particles to relativistic energies in their blast waves by the process of diffusive shock acceleration (DSA). Since CR nuclei do not radiate efficiently, their presence must be inferred indirectly. Previous theoretical calculations and X-ray observations show that CR acceleration significantly modifies the structure of the SNR and greatly amplifies the interstellar magnetic field. We present new, deep X-ray observations of the remnant of Tycho's supernova (SN 1572, henceforth Tycho), which reveal a previously unknown, strikingly ordered pattern of non-thermal high-emissivity stripes in the projected interior of the remnant, with spacing that corresponds to the gyroradii of 10^14-10^15 eV protons. Spectroscopy of the stripes shows the plasma to be highly turbulent on the (smaller) scale of the Larmor radii of TeV energy electrons. Models of the shock amplification of magnetic fields produce structure on the scale of the gyroradius of the highest energy CRs present, but they do not predict the highly ordered pattern we observe. We interpret the stripes as evidence for acceleration of particles to near the knee of the CR spectrum in regions of enhanced magnetic turbulence, while the observed highly ordered pattern of these features provides a new challenge to models of DSA.
Van Atta, C.M.; Beringer, R.; Smith, L.
1959-01-01
A linear accelerator of heavy ions is described. The basic contributions of the invention consist of a method and apparatus for obtaining high energy particles of an element with an increased charge-to-mass ratio. The method comprises the steps of ionizing the atoms of an element, accelerating the resultant ions to an energy substantially equal to one MeV per nucleon, stripping orbital electrons from the accelerated ions by passing the ions through a curtain of elemental vapor disposed transversely of the path of the ions to provide a second charge-to-mass ratio, and finally accelerating the resultant stripped ions to a final energy of at least ten MeV per nucleon.
NASA Technical Reports Server (NTRS)
Kolyer, J. M.; Mann, N. R.
1978-01-01
Inherent weatherability is controlled by the three weather factors common to all exposure sites: insolation, temperature, and humidity. Emphasis was focused on the transparent encapsulant portion of miniature solar cell arrays by eliminating weathering effects on the substrate and circuitry (which are also parts of the encapsulant system). The most extensive data were for yellowing, which was measured conveniently and precisely. Considerable data were also obtained on tensile strength. Changes in these two properties after outdoor exposure were predicted very well from accelerated exposure data.
NASA Astrophysics Data System (ADS)
Gagliardini, O.; Gillet-chaulet, F.; Martin, N.; Monnier, J.; Singh, J.
2011-12-01
Greenland outlet glaciers control the ice discharge toward the sea and the resulting contribution to sea level rise. The physical processes at the root of the observed acceleration and retreat (decrease of the back force at the calving terminus, increase of basal lubrication, and decrease of the lateral friction) are still not well understood. All three processes certainly play a role, but their relative contributions have not yet been quantified. Helheim glacier, located on the east coast of Greenland, has undergone an enhanced retreat since 2003, and this retreat was concurrent with accelerated ice flow. In this study, a flowline dataset including surface elevation, surface velocity and front position of Helheim from 2001 to 2006 is used to quantify the sensitivity of each of these processes. For that, we used the full-Stokes finite element ice flow model DassFlow/Ice, including adjoint code and a full 4D-Var data assimilation process in which the control variables are the basal and lateral friction parameters as well as the calving front pressure. For each available date, the sensitivity of each process is first studied and an optimal distribution is then inferred from the surface measurements. Using this optimal distribution of the parameters, a transient simulation is performed over the whole dataset period. The relative contributions of the basal friction, lateral friction and front back force are then discussed in light of these new results.
Sleiman, Mohamad; Kirchstetter, Thomas W.; Berdahl, Paul; Gilbert, Haley E.; Quelen, Sarah; Marlot, Lea; Preble, Chelsea V.; Chen, Sharon; Montalbano, Amandine; Rosseler, Olivier; Akbari, Hashem; Levinson, Ronnen; Destaillats, Hugo
2014-01-09
Highly reflective roofs can decrease the energy required for building air conditioning, help mitigate the urban heat island effect, and slow global warming. However, these benefits are diminished by soiling and weathering processes that reduce the solar reflectance of most roofing materials. Soiling results from the deposition of atmospheric particulate matter and the growth of microorganisms, each of which absorb sunlight. Weathering of materials occurs with exposure to water, sunlight, and high temperatures. This study developed an accelerated aging method that incorporates features of soiling and weathering. The method sprays a calibrated aqueous soiling mixture of dust minerals, black carbon, humic acid, and salts onto preconditioned coupons of roofing materials, then subjects the soiled coupons to cycles of ultraviolet radiation, heat and water in a commercial weatherometer. Three soiling mixtures were optimized to reproduce the site-specific solar spectral reflectance features of roofing products exposed for 3 years in a hot and humid climate (Miami, Florida); a hot and dry climate (Phoenix, Arizona); and a polluted atmosphere in a temperate climate (Cleveland, Ohio). A fourth mixture was designed to reproduce the three-site average values of solar reflectance and thermal emittance attained after 3 years of natural exposure, which the Cool Roof Rating Council (CRRC) uses to rate roofing products sold in the US. This accelerated aging method was applied to 25 products (single-ply membranes, factory- and field-applied coatings, tiles, modified bitumen cap sheets, and asphalt shingles) and reproduced in 3 days the CRRC's 3-year aged values of solar reflectance. In conclusion, this accelerated aging method can be used to speed the evaluation and rating of new cool roofing materials.
NASA Astrophysics Data System (ADS)
Li, Xianfeng; Snyder, James A.; Stuart, Steven J.; Latour, Robert A.
2015-10-01
The recently developed "temperature intervals with global exchange of replicas" (TIGER2) accelerated sampling method is found to have inaccuracies when applied to systems with explicit solvation. This inaccuracy is due to the energy fluctuations of the solvent, which cause the sampling method to be less sensitive to the energy fluctuations of the solute. In the present work, the problem of the TIGER2 method is addressed in detail and a modification to the sampling method is introduced to correct this problem. The modified method is called "TIGER2 with solvent energy averaging," or TIGER2A. This new method overcomes the sampling problem with the TIGER2 algorithm and is able to closely approximate Boltzmann-weighted sampling of molecular systems with explicit solvation. The difference in performance between the TIGER2 and TIGER2A methods is demonstrated by comparing them against analytical results for simple one-dimensional models, against replica exchange molecular dynamics (REMD) simulations for sampling the conformation of alanine dipeptide and the folding behavior of (AAQAA)3 peptide in aqueous solution, and by comparing their performance in sampling the behavior of hen egg-white lysozyme in aqueous solution. The new TIGER2A method solves the problem caused by solvent energy fluctuations in TIGER2 while maintaining the two important characteristics of TIGER2, i.e., (1) using multiple replicas sampled at different temperature levels to help systems efficiently escape from local potential energy minima and (2) enabling the number of replicas used for a simulation to be independent of the size of the molecular system, thus providing an accelerated sampling method that can be used to efficiently sample systems considered too large for the application of conventional temperature REMD.
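The core idea of the modification, solvent energy averaging, can be illustrated schematically. The snippet below is a hypothetical sketch, not the authors' algorithm: it shows only a Metropolis-style acceptance test at the baseline temperature in which the solvent contribution is a time average rather than an instantaneous value, so that solvent fluctuations no longer dominate the decision. Function names and units are illustrative.

```python
# Schematic sketch of the solvent-energy-averaging idea behind TIGER2A.
# NOT the published algorithm: real TIGER2/TIGER2A also quenches replicas
# to the baseline temperature and reassigns temperature levels after swaps.
import math, random

K_B = 0.0019872041  # Boltzmann constant in kcal/(mol K), common MD units

def effective_energy(e_solute, solvent_energy_samples):
    """Solute energy plus the *mean* of recent solvent energy samples."""
    return e_solute + sum(solvent_energy_samples) / len(solvent_energy_samples)

def accept_swap(e_candidate, e_baseline, t_base, rng=random.random):
    """Metropolis acceptance test performed at the baseline temperature."""
    de = e_candidate - e_baseline
    return de <= 0 or rng() < math.exp(-de / (K_B * t_base))
```

Averaging the solvent term shrinks its variance roughly in proportion to the number of samples, which is what restores the sensitivity of the acceptance test to the solute energy.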
NASA Technical Reports Server (NTRS)
Rogers, Melissa J. B.
1993-01-01
Work to support the NASA MSFC Acceleration Characterization and Analysis Project (ACAP) was performed. Four tasks (analysis development, analysis research, analysis documentation, and acceleration analysis) were addressed by parallel projects. Work concentrated on preparation for and implementation of near real-time SAMS data analysis during the USMP-1 mission. User support documents and case specific software documentation and tutorials were developed. Information and results were presented to microgravity users. ACAP computer facilities need to be fully implemented and networked, data resources must be cataloged and accessible, future microgravity missions must be coordinated, and continued Orbiter characterization is necessary.
Re-acceleration Model for Radio Relics with Spectral Curvature
NASA Astrophysics Data System (ADS)
Kang, Hyesung; Ryu, Dongsu
2016-05-01
Most of the observed features of radio gischt relics, such as spectral steepening across the relic width and a power-law-like integrated spectrum, can be adequately explained by a diffusive shock acceleration (DSA) model in which relativistic electrons are (re-)accelerated at shock waves induced in the intracluster medium. However, the steep spectral curvature in the integrated spectrum above ~2 GHz detected in some radio relics, such as the Sausage relic in cluster CIZA J2242.8+5301, may not be interpreted by the simple radiative cooling of postshock electrons. In order to understand such steepening, we consider here a model in which a spherical shock sweeps through and then exits out of a finite-size cloud with fossil relativistic electrons. The ensuing integrated radio spectrum is expected to steepen much more than predicted for aging postshock electrons, since the re-acceleration stops after the cloud-crossing time. Using DSA simulations that are intended to reproduce radio observations of the Sausage relic, we show that both the integrated radio spectrum and the surface brightness profile can be fitted reasonably well, if a shock of speed u_s ≈ 2.5-2.8 × 10^3 km s^-1 and a sonic Mach number M_s ≈ 2.7-3.0 traverses a fossil cloud for ~45 Myr, and the postshock electrons cool further for another ~10 Myr. This attempt illustrates that steep curved spectra of some radio gischt relics could be modeled by adjusting the shape of the fossil electron spectrum and adopting the specific configuration of the fossil cloud.
NASA Technical Reports Server (NTRS)
Vlahos, L.; Machado, M. E.; Ramaty, R.; Murphy, R. J.; Alissandrakis, C.; Bai, T.; Batchelor, D.; Benz, A. O.; Chupp, E.; Ellison, D.
1986-01-01
Data are compiled from the Solar Maximum Mission and Hinotori satellites, particle detectors in several satellites, ground based instruments, and balloon flights in order to answer fundamental questions relating to: (1) the requirements for the coronal magnetic field structure in the vicinity of the energization source; (2) the height (above the photosphere) of the energization source; (3) the time of energization; (4) the transition between coronal heating and flares; (5) evidence for purely thermal, purely nonthermal and hybrid type flares; (6) the time characteristics of the energization source; (7) whether every flare accelerates protons; (8) the location of the interaction site of the ions and relativistic electrons; (9) the energy spectra for ions and relativistic electrons; (10) the relationship between particles at the Sun and in interplanetary space; (11) evidence for more than one acceleration mechanism; (12) whether there is a single mechanism that will accelerate particles to all energies and also heat the plasma; and (13) how fast the existing mechanisms accelerate electrons up to several MeV and ions to 1 GeV.
Wang, Zhehui; Barnes, Cris W.
2002-01-01
There has been invented an apparatus for acceleration of a plasma having coaxially positioned, constant diameter, cylindrical electrodes which are modified to converge (for a positive polarity inner electrode and a negatively charged outer electrode) at the plasma output end of the annulus between the electrodes to achieve improved particle flux per unit of power.
ERIC Educational Resources Information Center
Ford, William J.
2010-01-01
This article focuses on the accelerated associate degree program at Ivy Tech Community College (Indiana) in which low-income students will receive an associate degree in one year. The three-year pilot program is funded by a $2.3 million grant from the Lumina Foundation for Education in Indianapolis and a $270,000 grant from the Indiana Commission…
Pope, K.E.
1958-01-01
This patent relates to an improved acceleration integrator and more particularly to apparatus of this nature which is gyrostabilized. The device may be used to sense the attainment by an airborne vehicle of a predetermined velocity or distance along a given vector path. In its broad aspects, the acceleration integrator utilizes a magnetized element rotatably driven by a synchronous motor and having a cylindrical flux gap, and a restrained eddy-current drag cup disposed to move into the gap. The angular velocity imparted to the rotatable cup shaft is transmitted in a positive manner to the magnetized element through a servo feedback loop. The resultant angular velocity of the cup is in this manner proportional to the acceleration of the housing, and means may be used to measure the velocity and operate switches at a pre-set magnitude. To make the above-described device sensitive to acceleration in only one direction, the magnetized element forms the spinning inertia element of a free gyroscope, and the outer housing functions as a gimbal of a gyroscope.
Radioisotope Dating with Accelerators.
ERIC Educational Resources Information Center
Muller, Richard A.
1979-01-01
Explains a new method of detecting radioactive isotopes by counting their accelerated ions rather than the atoms that decay during the counting period. This method increases the sensitivity by several orders of magnitude, and allows one to find the ages of much older and smaller samples. (GA)
NASA Astrophysics Data System (ADS)
Inoue, Yoshiyuki; Tanaka, Yasuyuki T.
2016-09-01
Relativistic jets launched by supermassive black holes, so-called active galactic nuclei (AGNs), are known as the most energetic particle accelerators in the universe. However, the baryon loading efficiency onto the jets from the accretion flows and their particle acceleration efficiencies have been veiled in mystery. With the latest data sets, we perform multi-wavelength spectral analysis of quiescent spectra of 13 TeV gamma-ray detected high-frequency-peaked BL Lacs (HBLs) following the one-zone static synchrotron self-Compton (SSC) model. We determine the minimum, cooling break, and maximum electron Lorentz factors following the diffusive shock acceleration (DSA) theory. We find that HBLs have P_B/P_e ≈ 6.3 × 10^-3 and a radiative efficiency ε_rad,jet ≈ 6.7 × 10^-4, where P_B and P_e are the Poynting and electron power, respectively. By assuming 10 leptons per proton, the jet power relates to the black hole mass as P_jet/L_Edd ≈ 0.18, where P_jet and L_Edd are the jet power and the Eddington luminosity, respectively. Under our model assumptions, we further find that HBLs have a jet production efficiency of η_jet ≈ 1.5 and a mass loading efficiency of ξ_jet ≳ 5 × 10^-2. We also investigate the particle acceleration efficiency in the blazar zone by including the most recent Swift/BAT data. Our samples ubiquitously have particle acceleration efficiencies of η_g ≈ 10^4.5, which is inefficient to accelerate particles up to the ultra-high-energy cosmic-ray (UHECR) regime. This implies that the UHECR acceleration sites should not be the blazar zones of quiescent low-power AGN jets, if one assumes the one-zone SSC model based on the DSA theory.
NASA Astrophysics Data System (ADS)
Stavissky, Yurii Ya
2006-12-01
A short review is presented of the development in Russia of intense pulsed neutron sources for physical research — the pulsating fast reactors IBR-1, IBR-30, IBR-2 (Joint Institute for Nuclear Research, Dubna), and the neutron-radiation complex of the Moscow meson factory — the 'Troitsk Trinity' (RAS Institute for Nuclear Research, Troitsk, Moscow region). The possibility of generating giant neutron pulses in beam dumps of superhigh energy accelerators is discussed. In particular, the possibility of producing giant pulsed thermal neutron fluxes in modified beam dumps of the Large Hadron Collider (LHC) under construction at CERN is considered. It is shown that in the case of one-turn extraction of 7-TeV protons accumulated in the LHC main rings onto heavy targets with water or zirconium-hydride moderators placed in the front part of the LHC graphite beam-dump blocks, every 10 hours relatively short (from ~100 µs) thermal neutron pulses with a peak flux density of up to ~10^20 neutrons cm^-2 s^-1 may be produced. The possibility of applying such neutron pulses in physical research is discussed.
F. Liu; I. Brown; L. Phillips; G. Biallas; T. Siggins
1997-05-01
An important technique used for the suppression of surface flashover on high voltage DC ceramic insulators as well as on RF windows is that of providing some surface conduction to bleed off accumulated surface charge. The authors have used metal ion implantation to modify the surface of high voltage ceramic vacuum insulators to provide a uniform surface resistivity of approximately 5 × 10^10 Ω per square. A vacuum arc ion source based implanter was used to implant Pt at an energy of about 135 keV to doses of more than 5 × 10^16 ions/cm^2 into small ceramic test coupons and also into the inside surface of several ceramic accelerator columns 25 cm I.D. by 28 cm long. Here they describe the experimental set-up used to do the ion implantation and summarize the results of their exploratory work on implantation into test coupons as well as the implantations of the actual ceramic columns.
Shandiz, Mahdi Heravian; Anvari, Kazem; Khalilzadeh, Mohammadmahdi
2015-01-01
Purpose: To maintain radiation oncology linear accelerators at an acceptable performance level, a reliable quality assurance (QA) program must be applied. Materials and Methods: QA protocols published by authoritative organizations, such as the American Association of Physicists in Medicine (AAPM), determine the quality control (QC) tests that should be performed on medical linear accelerators and the threshold levels for each test. The purpose of this study is to increase the accuracy and precision of selected QC tests in order to improve the quality of treatment, and also to increase the speed of the tests so that busy centers can adopt a reliable QA program. A new method has been developed for two of the QC tests: the optical distance indicator (ODI) QC test as a daily test, and the gantry angle QC test as a monthly test. This method uses an image-processing approach, utilizing snapshots taken by a CCD camera, to measure the source-to-surface distance (SSD) and gantry angle. Results: The new method for the ODI QC test has an accuracy of 99.95% with a standard deviation of 0.061 cm, and the new method for the gantry angle QC test has a precision of 0.43°. The automated method proposed for both tests yields highly accurate and precise results that are objective and unaffected by human error. Conclusion: The results are within the acceptable range for both QC tests according to AAPM Task Group 142. PMID:25874177
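A hedged illustration of how such automated measurements would be dispositioned: AAPM TG-142 lists a 2 mm tolerance for the daily distance-indicator (ODI) test and 1.0° for gantry angle indicators. The helper functions below are hypothetical, not part of the study's software.

```python
# Illustrative pass/fail check against AAPM TG-142 tolerances.
# Tolerances: distance indicator (ODI) 2 mm (daily); gantry angle 1.0 deg.
TOL_ODI_CM = 0.2        # 2 mm expressed in centimetres
TOL_GANTRY_DEG = 1.0

def odi_ok(measured_ssd_cm, nominal_ssd_cm=100.0):
    """True if the camera-measured SSD is within the TG-142 tolerance."""
    return abs(measured_ssd_cm - nominal_ssd_cm) <= TOL_ODI_CM

def gantry_ok(measured_deg, nominal_deg):
    """True if the measured gantry angle is within the TG-142 tolerance."""
    return abs(measured_deg - nominal_deg) <= TOL_GANTRY_DEG
```

With the reported measurement precisions (0.061 cm for SSD, 0.43° for gantry angle), the method's uncertainty sits comfortably inside these tolerance bands.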
Biomedical accelerator mass spectrometry
NASA Astrophysics Data System (ADS)
Freeman, Stewart P. H. T.; Vogel, John S.
1995-05-01
Ultrasensitive SIMS with accelerator based spectrometers has recently begun to be applied to biomedical problems. Certain very long-lived radioisotopes of very low natural abundance can be used to trace metabolism at environmental dose levels (≥ zmol in mg samples). 14C in particular can be employed to label a myriad of compounds. Competing technologies typically require super-environmental doses that can perturb the system under investigation, followed by uncertain extrapolation to the low-dose regime. 41Ca and 26Al are also used as elemental tracers. Given the sensitivity of the accelerator method, care must be taken to avoid contamination of the mass spectrometer and the apparatus employed in prior sample handling, including chemical separation. This infant field comprises the efforts of a dozen accelerator laboratories. The Center for Accelerator Mass Spectrometry has been particularly active. In addition to collaborating with groups further afield, we are researching the kinetics and binding of genotoxins in-house, and we support innovative uses of our capability in the disciplines of chemistry, pharmacology, nutrition and physiology within the University of California. The field can be expected to grow further given the numerous potential applications and the efforts of several groups and companies to integrate the accelerator technology more into biomedical research programs; the development of miniaturized accelerator systems and ion sources capable of interfacing to conventional HPLC, GC/MS, and similar apparatus for complementary chemical analysis is anticipated for biomedical laboratories.
NASA Astrophysics Data System (ADS)
Hur, Jin-Huek; Lee, Tae-Gu; Moon, Sun-Ae; Lee, Sang-Jae; Yoo, Hoseon; Moon, Seung-Jae; Lee, Jae-Heon
2008-09-01
The thermal reliability of a closed-type BLDC motor for a high-speed fan is analyzed by accelerated-life testing and numerical methods in this paper. Since the module and the motor part are integrated in a closed case, heat generated by the rotor in the motor and by electronic components in the PCB module cannot be effectively removed to the outside. Therefore, the module will easily fail due to high temperature. An experiment measuring the temperature and the surface heat flux of the electronic components is carried out to predict their surface temperature distributions and main heat sources. The accelerated-life test is used to formulate the life equation as a function of the environmental temperature. Moreover, the temperature of the PCB module differs from the environmental temperature, since the heat generated by the motor cannot be effectively dissipated owing to the motor's structure. Therefore a numerical method is used to predict the temperature of the PCB module, which is one of the life equation parameters, according to the environment. By numerically obtaining the maxima of the thermal stress and strain of the electronic components according to the operating environments with the temperature results, the fatigue cycle can be estimated.
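The abstract does not state the fitted life equation, but temperature-driven accelerated-life tests are conventionally described by an Arrhenius relation, L(T) = A·exp(Ea/(kT)). The sketch below shows only that conventional form with made-up parameter values, not the authors' fitted model.

```python
# Arrhenius-type acceleration factor for a temperature-accelerated life test.
# Parameter values are illustrative only, not fitted values from the paper.
import math

K_EV = 8.617333262e-5  # Boltzmann constant in eV/K

def acceleration_factor(t_use_k, t_stress_k, ea_ev):
    """Life(use temp) / Life(stress temp) under L(T) = A*exp(Ea/(k*T))."""
    return math.exp((ea_ev / K_EV) * (1.0 / t_use_k - 1.0 / t_stress_k))

# Example: assumed activation energy 0.7 eV, use at 40 C, stress at 85 C
af = acceleration_factor(313.15, 358.15, 0.7)
```

An acceleration factor of a few tens means that days of stress testing stand in for years of field operation, which is the point of accelerated-life testing.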
Quinto, Francesca; Golser, Robin; Lagos, Markus; Plaschke, Markus; Schäfer, Thorsten; Steier, Peter; Geckeis, Horst
2015-06-02
236U, 237Np, and Pu isotopes and 243Am were determined in ground- and seawater samples at levels below ppq (fg/g) with a maximum sample size of 250 g. Such high sensitivity was possible by using accelerator mass spectrometry (AMS) at the Vienna Environmental Research Accelerator (VERA) with extreme selectivity and recently improved efficiency and a significantly simplified separation chemistry. The use of nonisotopic tracers was investigated in order to allow for the determination of 237Np and 243Am, for which isotopic tracers either are rarely available or suffer from various isobaric mass interferences. In the present study, actinides were concentrated from the sample matrix via iron hydroxide coprecipitation and measured sequentially without previous chemical separation from each other. The analytical method was validated by the analysis of the Reference Material IAEA 443 and was applied to groundwater samples from the Colloid Formation and Migration (CFM) project at the deep underground rock laboratory of the Grimsel Test Site (GTS) and to natural water samples affected solely by global fallout. While the precision of the presented analytical method is somewhat limited by the use of nonisotopic spikes, the sensitivity allows for the determination of ~10^5 atoms in a sample. This provides, e.g., the capability to study the long-term release and retention of actinide tracers in field experiments as well as the transport of actinides in a variety of environmental systems by tracing contamination from global fallout.
Particle Accelerators in China
NASA Astrophysics Data System (ADS)
Zhang, Chuang; Fang, Shouxian
As the special machines that can accelerate charged particle beams to high energy by using electromagnetic fields, particle accelerators have been widely applied in scientific research and various areas of society. The development of particle accelerators in China started in the early 1950s. After a brief review of the history of accelerators, this article describes in the following sections: particle colliders, heavy-ion accelerators, high-intensity proton accelerators, accelerator-based light sources, pulsed power accelerators, small scale accelerators, accelerators for applications, accelerator technology development and advanced accelerator concepts. The prospects of particle accelerators in China are also presented.
Recent Advances in Plasma Acceleration
Hogan, Mark
2007-03-19
The costs and the time scales of colliders intended to reach the energy frontier are such that it is important to explore new methods of accelerating particles to high energies. Plasma-based accelerators are particularly attractive because they are capable of producing accelerating fields that are orders of magnitude larger than those used in conventional colliders. In these accelerators a drive beam, either laser or particle, produces a plasma wave (wakefield) that accelerates charged particles. The ultimate utility of plasma accelerators will depend on sustaining ultra-high accelerating fields over a substantial length to achieve a significant energy gain. More than 42 GeV energy gain was achieved in an 85 cm long plasma wakefield accelerator driven by a 42 GeV electron drive beam in the Final Focus Test Beam (FFTB) Facility at SLAC. Most of the beam electrons lose energy to the plasma wave, but some electrons in the back of the same beam pulse are accelerated with a field of ~52 GV/m. This effectively doubles their energy, producing the energy gain of the 3 km long SLAC accelerator in less than a meter for a small fraction of the electrons in the injected bunch. Prospects for a drive-witness bunch configuration and high-gradient positron acceleration experiments planned for the SABER facility will be discussed.
Caporaso, George J.; Sampayan, Stephen E.; Kirbie, Hugh C.
2007-02-06
A compact linear accelerator having at least one strip-shaped Blumlein module which guides a propagating wavefront between first and second ends and controls the output pulse at the second end. Each Blumlein module has first, second, and third planar conductor strips, with a first dielectric strip between the first and second conductor strips, and a second dielectric strip between the second and third conductor strips. Additionally, the compact linear accelerator includes a high voltage power supply connected to charge the second conductor strip to a high potential, and a switch for switching the high potential in the second conductor strip to at least one of the first and third conductor strips so as to initiate a propagating reverse polarity wavefront(s) in the corresponding dielectric strip(s).
NASA Astrophysics Data System (ADS)
Tajima, T.; Nakajima, K.; Mourou, G.
2017-02-01
The fundamental idea of Laser Wakefield Acceleration (LWFA) is reviewed. An ultrafast intense laser pulse drives a coherent wakefield with a relativistic amplitude robustly supported by the plasma. While the large amplitude of the wakefield involves collective resonant oscillations of the eigenmode of the entire plasma electrons, the wake phase velocity ~c and the ultrafastness of the laser pulse introduce the wake stability and rigidity. A large number of worldwide experiments show rapid progress of this concept's realization toward both the high-energy accelerator prospect and broad applications. The strong interest in this has been spurring and stimulating novel laser technologies, including the Chirped Pulse Amplification, the Thin Film Compression, the Coherent Amplification Network, and the Relativistic Mirror Compression. These in turn have created a conglomerate of novel science and technology with LWFA to form a new genre of high field science, with many parameters of merit in this field increasing exponentially lately. This science has triggered a number of worldwide research centers and initiatives. Associated physics of ion acceleration, X-ray generation, and astrophysical processes of ultrahigh energy cosmic rays are reviewed. Applications such as the X-ray free electron laser, cancer therapy, and radioisotope production are considered. A new avenue of LWFA using nanomaterials is also emerging.
The Brookhaven National Laboratory Accelerator Test Facility
Batchelor, K.
1992-01-01
The Brookhaven National Laboratory Accelerator Test Facility comprises a 50 MeV traveling wave electron linear accelerator utilizing a high-gradient, photo-excited, radiofrequency electron gun as an injector, and an experimental area for the study of new acceleration methods and advanced radiation sources using free electron lasers. Early operation of the linear accelerator system, including calculated and measured beam parameters, is presented together with the experimental program for accelerator physics and free electron laser studies.
Advanced concepts for acceleration
Keefe, D.
1986-07-01
Selected examples of advanced accelerator concepts are reviewed. Plasma accelerators such as the plasma beat-wave accelerator, the plasma wake-field accelerator, and the plasma grating accelerator are discussed, particularly as examples of concepts for accelerating relativistic electrons or positrons. Also covered are the pulsed electron-beam accelerator, the pulsed laser accelerator, the inverse Cherenkov accelerator, the inverse free-electron laser, switched radial-line accelerators, and the two-beam accelerator. Advanced concepts for ion acceleration discussed include the electron ring accelerator, excitation of waves on intense electron beams, and two-wave combinations. (LEW)
NASA Astrophysics Data System (ADS)
Nurmukhanbetova, A. K.; Goldberg, V. Z.; Nauruzbayev, D. K.; Rogachev, G. V.; Golovkov, M. S.; Mynbayev, N. A.; Artemov, S.; Karakhodjaev, A.; Kuterbekov, K.; Rakhymzhanov, A.; Berdibek, Zh.; Ivanov, I.; Tikhonov, A.; Zherebchevsky, V. I.; Torilov, S. Yu.; Tribble, R. E.
2017-03-01
To study resonance reactions of heavy ions at low energy, we have combined the Thick Target Inverse Kinematics method (TTIK) with a Time-of-Flight (TF) method. We used an extended target and TF to resolve the identification problems of the various possible nuclear processes inherent to the simplest popular version of TTIK. Investigations of the interaction of 15N with hydrogen and helium gas targets using this new approach are presented.
Accelerators and the Accelerator Community
Malamud, Ernest; Sessler, Andrew
2008-06-01
In this paper, standing back--looking from afar--and adopting a historical perspective, the field of accelerator science is examined. How it grew, what are the forces that made it what it is, where it is now, and what it is likely to be in the future are the subjects explored. Clearly, a great deal of personal opinion is invoked in this process.
NASA Astrophysics Data System (ADS)
Sugawara, Hirotake
2017-04-01
A propagator method (PM), a numerical technique to solve the Boltzmann equation (BE) for the electron velocity or energy distribution function (EVDF/EEDF) of electron swarms in gases, was customized to obtain the equilibrium solution quickly. The PM calculates the number of electrons in cells defined in velocity space using an operator called the propagator or Green's function. The propagator represents the intercellular transfer of electrons corresponding to the electron velocity change due to acceleration by the electric field and collisional events with gas molecules. The relaxation of the EVDF to its drift-equilibrium solution proceeds with iterative propagator operations on the EVDF. Merits of the PM are that the series expansion of the EVDF used in BE analyses is not required and that the time evolution of the electron swarm can be observed if necessary. On the other hand, when only the equilibrium solution of the EVDF is wanted, the relaxation can be accelerated numerically. A demonstration achieved a shortening of the computational time by about three orders of magnitude. Furthermore, this scheme was applied to calculations of a set of electron transport parameters required in fluid-model simulations, i.e. the effective ionization frequency, the centroid drift velocity and the longitudinal diffusion coefficient, using the zeroth-, first- and second-order moment equations derived from the BE. A detailed description of the PM calculation is presented.
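In a discretized setting, the iterative propagator operation reduces to repeated application of a column-stochastic transfer matrix to the cell-population vector until it stops changing. The toy example below is a minimal sketch of that fixed-point iteration, not Sugawara's customized PM; the 3-cell matrix is invented purely for illustration.

```python
# Minimal sketch of the propagator idea: the EVDF is a vector of electron
# counts per velocity cell, and one time step is the linear map n <- P @ n,
# where P encodes field acceleration and collisional transfer between cells.
# Iterating to a fixed point yields the drift-equilibrium distribution.
import numpy as np

def relax_to_equilibrium(P, n0, tol=1e-12, max_iter=100000):
    n = n0 / n0.sum()
    for _ in range(max_iter):
        n_next = P @ n
        n_next /= n_next.sum()      # keep total electron number normalized
        if np.abs(n_next - n).max() < tol:
            return n_next
        n = n_next
    return n

# Toy 3-cell propagator (columns = source cell, entries = transfer fractions)
P = np.array([[0.8, 0.2, 0.0],
              [0.2, 0.6, 0.3],
              [0.0, 0.2, 0.7]])
n_eq = relax_to_equilibrium(P, np.array([1.0, 0.0, 0.0]))
```

The equilibrium vector is the eigenvector of P with eigenvalue 1; the paper's acceleration schemes amount to reaching that fixed point in far fewer propagator applications.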
NASA Astrophysics Data System (ADS)
Xue, Yuejun; Ge, Tiantian; Wang, Xuchen
2015-12-01
Radiocarbon (14C) measurement of dissolved organic carbon (DOC) is a powerful tool for studying the sources, transformation and cycling of carbon in the ocean. The technique, however, still faces great challenges in achieving complete and successful oxidation of sufficient DOC with blanks low enough for high-precision carbon isotopic analysis, largely because of the overwhelming proportion of salts and the low DOC concentrations in the ocean. In this paper, we report an effective UV-oxidation method for oxidizing DOC in natural waters for radiocarbon analysis by accelerator mass spectrometry (AMS). The UV-oxidation system and method show 95%±4% oxidation efficiency and high reproducibility for DOC in both river and seawater samples. The blanks associated with the method were also low (about 3 µg C), which is critical for 14C analysis. A great advantage of the method is that multiple water samples can be oxidized at the same time, which reduces the sample processing time substantially compared with other UV-oxidation methods currently in use in other laboratories. We have used the system and method for 14C studies of DOC in rivers, estuaries, and oceanic environments and have obtained promising results.
Medina, L Carolina; Sartain, Jerry B; Obreza, Thomas A; Hall, William L; Thiex, Nancy J
2014-01-01
Several technologies have been proposed to characterize the nutrient release and availability patterns of enhanced-efficiency fertilizers (EEFs), especially slow-release fertilizers (SRFs) and controlled-release fertilizers (CRFs), during the last few decades. These technologies have been developed mainly by manufacturers and are product-specific, based on the regulation and analysis of each EEF product. Despite previous efforts to characterize EEF materials, no validated method exists to assess their nutrient release patterns. However, the increased use of EEFs in specialty and nonspecialty markets requires an appropriate method to verify nutrient claims and material performance. A series of experiments was conducted to evaluate the effect of temperature, fertilizer test portion size, and extraction time on the performance of a 74 h accelerated laboratory extraction method to measure SRF and CRF nutrient release profiles. Temperature was the only factor that influenced nutrient release rate, with a highly marked effect for phosphorus and a lesser effect for nitrogen (N) and potassium. Based on the results, the optimal extraction temperature set was: Extraction No. 1, 2 h at 25 °C; Extraction No. 2, 2 h at 50 °C; Extraction No. 3, 20 h at 55 °C; and Extraction No. 4, 50 h at 60 °C. Ruggedness of the method was tested by evaluating the effect of small changes in seven selected factors on method behavior using a fractional multifactorial design. Overall, the method showed ruggedness for measuring N release rates of coated CRFs.
NASA Astrophysics Data System (ADS)
Öz, E.; Batsch, F.; Muggli, P.
2016-09-01
A method to accurately measure the density of Rb vapor is described. We plan to use this method for the Advanced Wakefield (AWAKE) (Assmann et al., 2014 [1]) project at CERN, which will be the world's first proton-driven plasma wakefield experiment. The method is similar to the hook method (Marlow, 1967 [2]) and has been described in great detail in the work by Hill et al. (1986) [3]. In this method a cosine fit is applied to the interferogram to obtain a relative accuracy on the order of 1% for the vapor density-length product. A single-mode, fiber-based Mach-Zehnder interferometer will be built and used near the ends of the 10 m long AWAKE plasma source to make accurate relative density measurements between these two locations. These can then be used to infer the vapor density gradient along the AWAKE plasma source and also to adjust it to the value desired for the plasma wakefield experiment. Here we describe the plan in detail and show preliminary results obtained using a prototype 8 cm long novel Rb vapor cell.
NASA Technical Reports Server (NTRS)
Vongierke, H. E.; Brinkley, J. W.
1975-01-01
The degree to which impact acceleration is an important factor in space flight environments depends primarily upon the technology of capsule landing deceleration and the weight permissible for the associated hardware: parachutes or deceleration rockets, inflatable air bags, or other impact attenuation systems. The problem most specific to space medicine is the potential change of impact tolerance due to reduced bone mass and muscle strength caused by prolonged weightlessness and physical inactivity. Impact hazards, tolerance limits, and human impact tolerance related to space missions are described.
NASA Astrophysics Data System (ADS)
Zhu, Dianwen; Li, Changqing
2015-03-01
Fluorescence molecular tomography (FMT) is a significant preclinical imaging modality that has been actively studied in the past two decades. However, it remains a challenging task to obtain fast and accurate reconstruction of the fluorescent probe distribution in small animals due to the large computational burden and the ill-posed nature of the inverse problem. We have recently studied a non-uniform multiplicative updating algorithm and obtained some further speed gain with the ordered subsets (OS) method. However, increasing the number of OS leads to larger approximation errors, and the speed gain from larger numbers of OS is marginal. In this paper, we propose to further enhance the convergence speed by incorporating a first-order momentum method that uses previous iterates to achieve a quadratic convergence rate. Using a cubic phantom experiment, we have shown that the proposed method indeed leads to much faster convergence.
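The momentum idea can be illustrated generically: Nesterov-style momentum extrapolates from previous iterates, improving the worst-case rate of first-order methods from O(1/k) to O(1/k²) on smooth convex problems. The sketch below compares plain gradient descent with its momentum variant on a synthetic least-squares problem; it is not the authors' FMT reconstruction algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 50))
b = rng.standard_normal(100)
L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient

def grad(x):
    return A.T @ (A @ x - b)

def loss(x):
    return 0.5 * np.linalg.norm(A @ x - b) ** 2

# Plain gradient descent
x = np.zeros(50)
for _ in range(100):
    x = x - grad(x) / L

# Gradient descent with Nesterov momentum: extrapolate using the
# previous iterate before taking the gradient step.
y = xm = np.zeros(50)
t = 1.0
for _ in range(100):
    x_new = y - grad(y) / L
    t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
    y = x_new + ((t - 1.0) / t_new) * (x_new - xm)
    xm, t = x_new, t_new

assert loss(xm) < loss(x)   # momentum reaches a lower objective
```

The same extrapolation step can in principle be layered on multiplicative or OS-type updates, which is the combination the abstract describes.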
ERIC Educational Resources Information Center
Schneider, Jenifer Jasinski; King, James R.; Kozdras, Deborah; Minick, Vanessa; Welsh, James L.
2012-01-01
During a teaching methods field experience, we initiated several processes to facilitate pre-service teachers' reflection, empowerment, and performance as they learned to teach students. Through an ethno-theater presentation and subsequent revisions to an ethno-theater script, we turned the reflective lens on ourselves as we discovered instances…
Ichikawa, Kazuki; Morishita, Shinichi
2014-01-01
K-means clustering has been widely used to gain insight into biological systems from large-scale life science data. To quantify the similarities among biological data sets, Pearson correlation distance and standardized Euclidean distance are used most frequently; however, optimization methods have been largely unexplored. These two distance measurements are equivalent in the sense that they yield the same k-means clustering result for identical sets of k initial centroids. Thus, an efficient algorithm used for one is applicable to the other. Several optimization methods are available for the Euclidean distance and can be used for processing the standardized Euclidean distance; however, they are not customized for this context. We instead approached the problem by studying the properties of the Pearson correlation distance, and we invented a simple but powerful heuristic method for markedly pruning unnecessary computation while retaining the final solution. Tests using real biological data sets with 50-60K vectors of dimensions 10-2001 (~400 MB in size) demonstrated marked reduction in computation time for k = 10-500 in comparison with other state-of-the-art pruning methods such as Elkan's and Hamerly's algorithms. The BoostKCP software is available at http://mlab.cb.k.u-tokyo.ac.jp/~ichikawa/boostKCP/.
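The equivalence the abstract relies on can be verified directly: for z-scored vectors of dimension n, squared Euclidean distance is an affine function of Pearson correlation, ||x − y||² = 2n(1 − r), so k-means under either distance produces the same assignments for the same initial centroids. A minimal numerical check:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = rng.standard_normal(n)
y = rng.standard_normal(n)

def zscore(v):
    # Standardize to zero mean, unit (population) standard deviation
    return (v - v.mean()) / v.std()

xs, ys = zscore(x), zscore(y)
r = np.corrcoef(x, y)[0, 1]          # Pearson correlation
d2 = np.sum((xs - ys) ** 2)          # standardized squared Euclidean distance

# ||xs - ys||^2 = n + n - 2*n*r = 2n(1 - r)
assert np.isclose(d2, 2 * n * (1 - r))
```

This is why, as the text notes, any pruning method valid for the standardized Euclidean distance transfers to the Pearson correlation distance and vice versa.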
Accelerator based epithermal neutron source
NASA Astrophysics Data System (ADS)
Taskaev, S. Yu.
2015-11-01
We review the current status of the development of accelerator sources of epithermal neutrons for boron neutron capture therapy (BNCT), a promising method of malignant tumor treatment. Particular attention is given to a source of epithermal neutrons based on a new type of charged-particle accelerator: a tandem accelerator with vacuum insulation and a lithium neutron-producing target. It is also shown that the accelerator with specialized targets makes it possible to generate fast and monoenergetic neutrons, resonance and monoenergetic gamma-rays, alpha-particles, and positrons.
Laser acceleration and its future
Tajima, Toshiki
2010-01-01
Laser acceleration is based on the concept of marshalling the collective fields that may be induced by a laser. In order to exceed the material breakdown field by a large factor, we employ the broken-down matter of plasma. While the generated wakefields resemble the fields of conventional accelerators in their structure (at least qualitatively), it is their extreme accelerating fields that distinguish the laser wakefield from others, yielding tiny emittance and a compact accelerator. Current research largely concerns how to master control of the acceleration process on spatial and temporal scales several orders of magnitude smaller than in the conventional method. The efforts of the last several years have come to fruition in the generation of beams with good properties at GeV energies on a table top, leading to many applications, such as ultrafast radiolysis, intraoperative radiation therapy, injection into X-ray free electron lasers, and a candidate for future high-energy accelerators. PMID:20228616
Chuang, Ya-Hui; Zhang, Yingjie; Zhang, Wei; Boyd, Stephen A; Li, Hui
2015-07-24
Land application of biosolids and irrigation with reclaimed water in agricultural production could result in accumulation of pharmaceuticals in vegetable produce. To better assess the potential human health impact of long-term consumption of pharmaceutical-contaminated vegetables, it is important to accurately quantify the amount of pharmaceuticals accumulated in vegetables. In this study, a quick, easy, cheap, effective, rugged and safe (QuEChERS) method was developed and optimized to extract multiple classes of pharmaceuticals from vegetables, which were subsequently quantified by liquid chromatography coupled to tandem mass spectrometry. For the eleven target pharmaceuticals in celery and lettuce, the extraction recovery of the QuEChERS method ranged from 70.1 to 118.6% with relative standard deviation <20%, and the method detection limit was at the level of nanograms of pharmaceuticals per gram of vegetables. The results revealed that the performance of the QuEChERS method was comparable to, or better than, that of the accelerated solvent extraction (ASE) method for extraction of pharmaceuticals from plants. The two optimized extraction methods were applied to quantify the uptake of pharmaceuticals by celery and lettuce grown hydroponically. The results showed that all eleven target pharmaceuticals could be absorbed by the vegetables from water. Compared to the ASE method, the QuEChERS method offers the advantages of shorter sample preparation time, reduced costs, and smaller amounts of organic solvent. The established QuEChERS method can be used to determine the accumulation of multiple classes of pharmaceutical residues in vegetables and other plants, which is needed to evaluate the quality and safety of agricultural produce consumed by humans.
Becker, J; Brunckhorst, E; Schmidt, R
2007-11-07
When radiotherapy with photon energies greater than 10 MV is performed, neutrons contaminate the photon beam. In this paper the neutron contamination of the 15 MV photon mode of the Siemens Primus accelerator was studied. The Monte Carlo code MCNPX was used for the description of the treatment head and treatment room. The Monte Carlo results were verified by studying the photon depth dose curve and beam profiles in a water phantom. After these verifications, the locations of neutron production were studied and the neutron source spectrum and strength were calculated. The neutron response of the paired Mg/Ar and MgB/Ar ionization chamber system was calculated and experimentally verified for two experimental set-ups. The paired chamber system allowed us to measure neutrons inside the field borders and permitted rapid, pointwise measurements, in contrast to other methods of neutron detection.
General purpose programmable accelerator board
Robertson, Perry J.; Witzke, Edward L.
2001-01-01
A general purpose accelerator board and acceleration method comprising use of: one or more programmable logic devices; a plurality of memory blocks; bus interface for communicating data between the memory blocks and devices external to the board; and dynamic programming capabilities for providing logic to the programmable logic device to be executed on data in the memory blocks.
Choi, Kihwan; Li, Ruijiang; Nam, Haewon; Xing, Lei
2014-06-21
As a solution for iterative CT image reconstruction, first-order methods are prominent for their large-scale capability and fast convergence rate [Formula: see text]. In practice, a CT system matrix with a large condition number may lead to slow convergence despite the theoretically promising upper bound. The aim of this study is to develop a Fourier-based scaling technique to enhance the convergence speed of first-order methods applied to CT image reconstruction. Instead of working in the projection domain, we transform the projection data and construct a data fidelity model in Fourier space. Inspired by the filtered backprojection formalism, the data are appropriately weighted in Fourier space. We formulate an optimization problem based on weighted least-squares in Fourier space and total-variation (TV) regularization in image space for parallel-beam, fan-beam and cone-beam CT geometries. To achieve the maximum computational speed, the optimization problem is solved using a fast iterative shrinkage-thresholding algorithm with backtracking line search and a GPU implementation of projection/backprojection. The performance of the proposed algorithm is demonstrated through a series of digital simulation and experimental phantom studies. The results are compared with existing TV-regularized techniques based on statistics-based weighted least-squares as well as the basic algebraic reconstruction technique. The proposed Fourier-based compressed sensing (CS) method significantly improves both the image quality and the convergence rate compared to the existing CS techniques.
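The fast iterative shrinkage-thresholding algorithm (FISTA) mentioned above can be sketched on a simpler stand-in problem: l1-regularized least squares solved by accelerated proximal gradient with a soft-thresholding step. Problem sizes and the regularization weight are made up; the actual CT code uses TV regularization, backtracking, and GPU projectors.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((80, 120))
x_true = np.zeros(120)
x_true[:5] = 3.0                        # sparse ground truth
b = A @ x_true
lam = 0.1                               # l1 weight (assumed)
L = np.linalg.norm(A, 2) ** 2           # gradient Lipschitz constant

def soft(v, t):
    # Proximal operator of the l1 norm (soft-thresholding)
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = y = np.zeros(120)
t = 1.0
for _ in range(1000):
    x_new = soft(y - A.T @ (A @ y - b) / L, lam / L)
    t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
    y = x_new + ((t - 1.0) / t_new) * (x_new - x)
    x, t = x_new, t_new

# The sparse ground truth is recovered closely
assert np.linalg.norm(x - x_true) / np.linalg.norm(x_true) < 0.1
```

Replacing the soft-thresholding step with a TV proximal step, and the least-squares term with a Fourier-weighted one, yields the structure the abstract describes.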
NASA Astrophysics Data System (ADS)
Khabarova, Olga V.; Zank, Gary P.; Li, Gang; Malandraki, Olga E.; le Roux, Jakobus A.; Webb, Gary M.
2016-04-01
We have recently shown both theoretically (Zank et al. 2014, 2015; le Roux et al. 2015) and observationally (Khabarova et al. 2015) that dynamical small-scale magnetic islands play a significant role in local particle acceleration in the supersonic solar wind. We discuss here observational evidence for particle acceleration at shock waves that is enhanced by the recently proposed mechanism of particle energization by both island contraction and the reconnection electric field generated in merging or contracting magnetic islands downstream of the shocks (Zank et al. 2014, 2015; le Roux et al. 2015). Both observations and simulations support the formation of magnetic islands in the turbulent wake of heliospheric or interplanetary shocks (ISs) (Turner et al. 2013; Karimabadi et al. 2014; Chasapis et al. 2015). A combination of the DSA mechanism with acceleration by magnetic island dynamics explains why the spectra of energetic particles thought to be accelerated at heliospheric shocks are sometimes harder than predicted by DSA theory (Zank et al. 2015). Moreover, such an approach allows us to explain and describe other unusual behaviour of accelerated particles, such as when energetic particle flux intensity peaks are observed downstream of heliospheric shocks instead of directly at the shock as DSA theory predicts. Zank et al. (2015) predicted the peak location to be behind the heliospheric termination shock (HTS) and showed that the distance from the shock to the peak depends on particle energy, which is in agreement with Voyager 2 observations. Similar particle behaviour is observed near strong ISs in the outer heliosphere as observed by Voyager 2. Observations show that heliospheric shocks are accompanied by current sheets, and that IS crossings always coincide with sharp changes in the IMF azimuthal angle and the IMF strength, which is typical for strong current sheets. The presence of current sheets in the vicinity of ISs acts to magnetically
NASA Astrophysics Data System (ADS)
Wang, Qiao; Zhou, Wei; Cheng, Yonggang; Ma, Gang; Chang, Xiaolin
2017-04-01
A line integration method (LIM) is proposed to calculate the domain integrals for 3D problems. In the proposed method, the domain integrals are transformed into boundary integrals, and only line integrals along straight lines need to be computed. A background cell structure is applied to further simplify the line integrals and improve the accuracy. The method creates elements only on the boundary, and the integration lines are created from the boundary elements. The procedure is quite suitable for the boundary element method, and we have applied it to 3D problems. Directly applying the method is time-consuming, since the computational complexity is O(NM), where N and M are the numbers of nodes and lines, respectively. To overcome this problem, the fast multipole method is used with the LIM for large-scale computation. The numerical results show that the proposed method is efficient and accurate.
Lee, Seungyeoun; Son, Donghee; Yu, Wenbao; Park, Taesung
2016-12-01
Although a large number of genetic variants have been identified as associated with common diseases through genome-wide association studies, limitations still exist in explaining the missing heritability. One approach to solving this missing-heritability problem is to investigate gene-gene interactions rather than taking a single-locus approach. For gene-gene interaction analysis, the multifactor dimensionality reduction (MDR) method has been widely applied, since the constructive induction algorithm of MDR efficiently reduces high-order dimensions to one dimension by classifying multi-level genotypes into high- and low-risk groups. The MDR method has been extended to various phenotypes and has been improved to provide a significance test for gene-gene interactions. In this paper, we propose a simple method, called accelerated failure time (AFT) UM-MDR, in which the idea of unified model-based MDR is extended to the survival phenotype by incorporating AFT-MDR into the classification step. The proposed AFT UM-MDR method is compared with AFT-MDR through simulation studies, and a short discussion is given.
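The constructive-induction step of MDR can be sketched in a few lines: genotype combinations at two SNPs are collapsed into a one-dimensional high/low-risk attribute by comparing each cell's case/control ratio with the overall ratio. The data below are synthetic with a planted interaction; this illustrates the classification step only, not the proposed AFT UM-MDR.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
snp1 = rng.integers(0, 3, n)            # genotypes coded 0/1/2
snp2 = rng.integers(0, 3, n)

# Synthetic case/control phenotype with an interaction effect at (2, 2)
p = 0.3 + 0.4 * ((snp1 == 2) & (snp2 == 2))
case = rng.random(n) < p

overall_ratio = case.sum() / (~case).sum()
high_risk = np.zeros((3, 3), dtype=bool)
for g1 in range(3):
    for g2 in range(3):
        cell = (snp1 == g1) & (snp2 == g2)
        cases, ctrls = case[cell].sum(), (~case)[cell].sum()
        # Label a cell high-risk if its case/control ratio exceeds the overall ratio
        high_risk[g1, g2] = cases / max(ctrls, 1) > overall_ratio

# The interacting genotype combination is flagged high-risk
assert high_risk[2, 2]
```

The resulting binary attribute is the "one dimension" the abstract refers to; for survival phenotypes, the case/control comparison is replaced by an AFT-based criterion.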
NASA Astrophysics Data System (ADS)
Mu, Dawei; Chen, Po; Wang, Liqiang
2013-02-01
We have successfully ported an arbitrary high-order discontinuous Galerkin (ADER-DG) method for solving the three-dimensional elastic seismic wave equation on unstructured tetrahedral meshes to an Nvidia Tesla C2075 GPU using the Nvidia CUDA programming model. On average our implementation obtained a speedup factor of about 24.3 for the single-precision version of our GPU code and a speedup factor of about 12.8 for the double-precision version of our GPU code when compared with the double precision serial CPU code running on one Intel Xeon W5880 core. When compared with the parallel CPU code running on two, four and eight cores, the speedup factor of our single-precision GPU code is around 12.9, 6.8 and 3.6, respectively. In this article, we give a brief summary of the ADER-DG method, a short introduction to the CUDA programming model and a description of our CUDA implementation and optimization of the ADER-DG method on the GPU. To our knowledge, this is the first study that explores the potential of accelerating the ADER-DG method for seismic wave-propagation simulations using a GPU.
Kabaha, Khaled; Taralp, Alpay; Cakmak, Ismail; Ozturk, Levent
2011-04-13
The technique of microwave-assisted acid hydrolysis was applied to wholegrain wheat (Triticum durum Desf. cv. Balcali 2000) flour in order to speed the preparation of samples for analysis. The resultant hydrolysates were chromatographed and quantified in an automated amino acid analyzer. The effect of different hydrolysis temperatures, times and sample weights was examined using flour dispersed in 6 N HCl. Within the range of values tested, the highest amino acid recoveries were generally obtained by setting the hydrolysis parameters to 150 °C, 3 h and 200 mg sample weight. These conditions struck an optimal balance between liberating amino acid residues from the wheat matrix and limiting their subsequent degradation or transformation. Compared to the traditional 24 h reflux method, the hydrolysates were prepared in dramatically less time, yet afforded comparable ninhydrin color yields. Under optimal hydrolysis conditions, the total amino acid recovery corresponded to at least 85.1% of the total protein content, indicating the efficient extraction of amino acids from the flour matrix. The findings suggest that this microwave-assisted method can be used to rapidly profile the amino acids of numerous wheat grain samples, and can be extended to the grain analysis of other cereal crops.
Accelerated kinetics of amorphous silicon using an on-the-fly off-lattice kinetic Monte-Carlo method
NASA Astrophysics Data System (ADS)
Joly, Jean-Francois; El-Mellouhi, Fedwa; Beland, Laurent Karim; Mousseau, Normand
2011-03-01
The time evolution of a series of well-relaxed amorphous silicon models was simulated using the kinetic Activation-Relaxation Technique (kART), an on-the-fly off-lattice kinetic Monte Carlo method. This novel algorithm uses the ART nouveau algorithm to generate activated events and links them with local topologies. It was shown to work well for crystals with few defects, but this is the first time it has been used to study an amorphous material. A parallel implementation allows us to increase the speed of the event-generation phase. After each KMC step, new searches are initiated for each new topology encountered. Well-relaxed amorphous silicon models of 1000 atoms described by a modified version of the empirical Stillinger-Weber potential were used as the starting point for the simulations. Initial results show that the method is faster by orders of magnitude compared to conventional MD simulations at temperatures up to 500 K. Vacancy-type defects were also introduced into this system, and their stability and lifetimes were calculated.
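The core of any kinetic Monte Carlo scheme, including off-lattice variants like kART, is the event loop: pick an event with probability proportional to its rate, then advance the clock by an exponentially distributed residence time. A generic sketch with made-up rates (not kART itself):

```python
import numpy as np

rng = np.random.default_rng(0)

# Event rates (1/s): e.g. two frequent hop-like events and one rare event
rates = np.array([1e3, 5e2, 1.0])
total = rates.sum()

t = 0.0
counts = np.zeros(len(rates))
for _ in range(10000):
    i = rng.choice(len(rates), p=rates / total)   # pick event by rate
    counts[i] += 1
    t += -np.log(rng.random()) / total            # exponential time increment

# Fast events dominate the event count; rare events are still sampled
assert counts[0] > counts[2]
```

This is why KMC reaches time scales far beyond molecular dynamics: each step jumps directly to the next event rather than integrating femtosecond vibrations, and the speedup grows as rare events dominate the dynamics.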
Delgado-González, M J; Sánchez-Guillén, M M; García-Moreno, M V; Rodríguez-Dodero, M C; García-Barroso, C; Guillén-Sánchez, D A
2017-05-01
During the ageing of brandies, many physicochemical processes take place involving the distilled spirit and the wood of the casks. As a result of these reactions, the polyphenolic content of brandies and their content of organic acids increase with ageing. These reactions are slow, and the ageing of high-quality brandies takes several years. In this paper, a system that circulates the wine distillate through encapsulated American oak chips while applying ultrasound energy, with the aim of producing aged wine spirits, has been developed, and the influence of the operating variables on the characteristics of the produced drink has been measured. First, the influence of different ultrasound powers, and of the movement of the liquor through the oak chips, was determined. The results show that higher ultrasound powers, of nearly 40 W/L, together with movement of the spirit, improve the extraction of phenolic compounds by 33.94% after seven days of ageing. Then, applying Youden and Steiner's experimental design, eight ageing experiments were performed, and the samples obtained by this new method were analysed for their physicochemical and oenological characteristics in order to determine the experimental conditions that produce the best ageing results. The best spirit produced by this new ageing method is obtained with a high alcoholic strength of the distilled wine, a high quantity of oak chips, room temperature and a high flow rate. In addition, the presence of oxygen in the sample and the absence of light increase the quality of the produced spirit. Finally, the application of ultrasound energy in long pulses is related to the improvement of two important ageing markers: the colour intensity and the TPI. As a last experiment, we applied this ageing method to five varietal spirits. The sensorial
NASA Technical Reports Server (NTRS)
Kutepov, A. A.; Kunze, D.; Hummer, D. G.; Rybicki, G. B.
1991-01-01
An iterative method based on the use of approximate transfer operators, which was designed initially to solve multilevel NLTE line formation problems in stellar atmospheres, is adapted and applied to the solution of the NLTE molecular band radiative transfer in planetary atmospheres. The matrices to be constructed and inverted are much smaller than those used in the traditional Curtis matrix technique, which makes possible the treatment of more realistic problems using relatively small computers. This technique converges much more rapidly than straightforward iteration between the transfer equation and the equations of statistical equilibrium. A test application of this new technique to the solution of NLTE radiative transfer problems for optically thick and thin bands (the 4.3 micron CO2 band in the Venusian atmosphere and the 4.7 and 2.3 micron CO bands in the earth's atmosphere) is described.
Gur, Sourav; Frantziskonis, George N.; Univ. of Arizona, Tucson, AZ; ...
2017-02-16
Here, we report results from a numerical study of multi-time-scale bistable dynamics for CO oxidation on a catalytic surface in a flowing, well-mixed gas stream. The problem is posed in terms of surface and gas-phase submodels that dynamically interact in the presence of stochastic perturbations, reflecting the impact of molecular-scale fluctuations on the surface and turbulence in the gas. Wavelet-based methods are used to encode and characterize the temporal dynamics produced by each submodel and detect the onset of sudden state shifts (bifurcations) caused by nonlinear kinetics. When impending state shifts are detected, a more accurate but computationally expensive integration scheme can be used. This appears to make it possible, at least in some cases, to decrease the net computational burden associated with simulating multi-time-scale, nonlinear reacting systems by limiting the amount of time in which the more expensive integration schemes are required. Critical to achieving this is being able to detect unstable temporal transitions such as the bistable shifts in the example problem considered here. Lastly, our results indicate that a unique wavelet-based algorithm based on the Lipschitz exponent is capable of making such detections, even under noisy conditions, and may find applications in critical transition detection problems beyond catalysis.
Xia, Yidong; Lou, Jialin; Luo, Hong; Edwards, Jack; Mueller, Frank
2015-02-09
Here, an OpenACC directive-based graphics processing unit (GPU) parallel scheme is presented for solving the compressible Navier–Stokes equations on 3D hybrid unstructured grids with a third-order reconstructed discontinuous Galerkin method. The developed scheme requires the minimum code intrusion and algorithm alteration for upgrading a legacy solver with the GPU computing capability at very little extra effort in programming, which leads to a unified and portable code development strategy. A face coloring algorithm is adopted to eliminate the memory contention because of the threading of internal and boundary face integrals. A number of flow problems are presented to verify the implementation of the developed scheme. Timing measurements were obtained by running the resulting GPU code on one Nvidia Tesla K20c GPU card (Nvidia Corporation, Santa Clara, CA, USA) and compared with those obtained by running the equivalent Message Passing Interface (MPI) parallel CPU code on a compute node (consisting of two AMD Opteron 6128 eight-core CPUs (Advanced Micro Devices, Inc., Sunnyvale, CA, USA)). Speedup factors of up to 24× and 1.6× for the GPU code were achieved with respect to one and 16 CPU cores, respectively. The numerical results indicate that this OpenACC-based parallel scheme is an effective and extensible approach to port unstructured high-order CFD solvers to GPU computing.
Klock, Heath E; Koesema, Eric J; Knuth, Mark W; Lesley, Scott A
2008-05-01
Successful protein expression, purification, and crystallization for challenging targets typically requires evaluation of a multitude of expression constructs. Often many iterations of truncations and point mutations are required to identify a suitable derivative for recombinant expression. Making and characterizing these variants is a significant barrier to success. We have developed a rapid and efficient cloning process and combined it with a protein microscreening approach to characterize protein suitability for structural studies. The Polymerase Incomplete Primer Extension (PIPE) cloning method was used to rapidly clone 448 protein targets and then to generate 2143 truncations from 96 targets with minimal effort. Proteins were expressed, purified, and characterized via a microscreening protocol, which incorporates protein quantification, liquid chromatography mass spectrometry and analytical size exclusion chromatography (AnSEC) to evaluate suitability of the protein products for X-ray crystallography. The results suggest that selecting expression constructs for crystal trials based primarily on expression solubility is insufficient. Instead, AnSEC scoring as a measure of protein polydispersity was found to be predictive of ultimate structure determination success and essential for identifying appropriate boundaries for truncation series. Overall structure determination success was increased by at least 38% by applying this combined PIPE cloning and microscreening approach to recalcitrant targets.
NASA Astrophysics Data System (ADS)
Mu, Dawei; Chen, Po; Wang, Liqiang
2013-12-01
We have successfully ported an arbitrary high-order discontinuous Galerkin method for solving the three-dimensional isotropic elastic wave equation on unstructured tetrahedral meshes to multiple Graphics Processing Units (GPUs) using the Compute Unified Device Architecture (CUDA) of NVIDIA and the Message Passing Interface (MPI), and obtained a speedup factor of about 28.3 for the single-precision version of our codes and a speedup factor of about 14.9 for the double-precision version. The GPU used in the comparisons is the NVIDIA Tesla C2070 Fermi, and the CPU used is the Intel Xeon W5660. To effectively overlap inter-process communication with computation, we separate the elements on each subdomain into inner and outer elements, and complete the computation on outer elements and fill the MPI buffer first. While the MPI messages travel across the network, the GPU performs computation on inner elements and all other calculations that do not use information from outer elements of neighboring subdomains. A significant portion of the speedup also comes from a customized matrix-matrix multiplication kernel, which is used extensively throughout our program. Preliminary performance analysis of our parallel GPU codes shows favorable strong and weak scalability.
NASA Astrophysics Data System (ADS)
Gu, Weijun; Sun, Zechang; Wei, Xuezhe; Dai, Haifeng
2014-12-01
The lack of data samples is the main difficulty in the lifetime study of a lithium-ion battery, especially for a model-based evaluation system. To determine the mapping between the battery's fading behaviour and different external factors, battery testing should be carried out as extensively as possible; as a result, a battery lifetime study becomes a notably time-consuming undertaking. Without reducing the number of testing items pre-specified in the test matrices of an accelerated life testing schedule, a grey model is established in this paper to predict the number of cycles at which a battery reaches a specified end-of-life index. The model requires no aging mechanism; it is a purely data-driven method obtained from a small quantity of actual testing data. For higher accuracy, a specific smoothing method is introduced, and the error between the predicted and actual values is modeled using the same method. Verification with a lithium iron phosphate battery and a lithium manganese oxide battery demonstrated the model's ability to reduce the required number of test cycles for the operational modes of various electric vehicles.
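The abstract does not give the exact grey-model formulation; below is a minimal sketch of the standard GM(1,1) model, the usual starting point for this kind of data-driven lifetime prediction (without the paper's additional smoothing and residual modeling). The sample series is illustrative.

```python
import math

def gm11(x0, steps=1):
    """Fit a GM(1,1) grey model to the series x0 and forecast `steps` further
    values. Least squares on the whitened equation x0(k) + a*z1(k) = b;
    assumes the fitted development coefficient a is nonzero."""
    n = len(x0)
    x1 = [sum(x0[:k + 1]) for k in range(n)]               # accumulated (1-AGO) series
    z1 = [0.5 * (x1[k] + x1[k - 1]) for k in range(1, n)]  # background values
    # Normal equations for [a, b] minimizing sum of (x0[k] + a*z1[k] - b)^2
    m = n - 1
    s_zz = sum(z * z for z in z1)
    s_z = sum(z1)
    s_zy = sum(z * y for z, y in zip(z1, x0[1:]))
    s_y = sum(x0[1:])
    det = s_zz * m - s_z * s_z
    a = (s_z * s_y - m * s_zy) / det
    b = (s_zz * s_y - s_z * s_zy) / det

    def x1_hat(k):  # continuous-time solution of the whitened equation
        return (x0[0] - b / a) * math.exp(-a * k) + b / a

    return [x1_hat(n + i) - x1_hat(n + i - 1) for i in range(steps)]

# Illustrative capacity-fade-like series (geometric decay); forecast the next value
pred = gm11([100.0, 90.0, 81.0, 72.9, 65.61], steps=1)[0]
```

For an exactly geometric input such as this one the regression is exact and the forecast lands very close to the true continuation (59.049).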
Ottonello, Giuliana; Ferrari, Angelo; Magi, Emanuele
2014-01-01
A simple and robust method for the determination of 18 polychlorinated biphenyls (PCBs) in fish was developed and validated. A mixture of acetone/n-hexane (1:1, v/v) was selected for accelerated solvent extraction (ASE). After the digestion of fat, the clean-up was carried out using solid phase extraction silica cartridges. Samples were analysed by GC-MS in selected ion monitoring (SIM) using three fragment ions for each congener (one quantifier and two qualifiers). PCB 155 and PCB 198 were employed as internal standards. The lowest limit of detection was observed for PCB 28 (0.4 ng/g lipid weight). The accuracy of the method was verified by means of the Certified Reference Material EDF-2525 and good results in terms of linearity (R² > 0.994) and recoveries (80-110%) were also achieved. Precision was evaluated by spiking blank samples at 4, 8 and 12 ng/g. Relative standard deviation values for repeatability and reproducibility were lower than 8% and 16%, respectively. The method was applied to the determination of PCBs in 80 samples belonging to four Mediterranean fish species. The proposed procedure is particularly effective because it provides good recoveries with lowered extraction time and solvent consumption; in fact, the total time of extraction is about 12 min per sample and, for the clean-up step, a total solvent volume of 13 ml is required.
Muto, Hideshi; Ohshiro, Yukimitsu; Kawasaki, Katsunori; Oyaizu, Michihiro; Hattori, Toshiyuki
2013-04-19
In the past decade, we have developed extremely long-lived carbon stripper foils of 1-50 μg/cm² thickness prepared by a heavy ion beam sputtering method. These foils were mainly used for low-energy heavy ion beams. Recently, high-energy negative-hydrogen and heavy ion accelerators have started to use carbon stripper foils of over 100 μg/cm² in thickness. However, the heavy ion beam sputtering method was unsuccessful in producing foils thicker than about 50 μg/cm² because of the collapse of the carbon particle build-up from the substrates during the sputtering process. The reproduction probability of the foils was less than 25%, and most of them had surface defects. However, these defects were successfully eliminated by introducing higher beam energies of the sputtering ions and a substrate heater during the sputtering process. In this report we describe a highly reproducible method for making thick carbon stripper foils by heavy ion beam sputtering with a krypton ion beam.
NASA Astrophysics Data System (ADS)
Hermus, James; Szczykutowicz, Timothy P.; Strother, Charles M.; Mistretta, Charles
2014-03-01
When performing Computed Tomographic (CT) image reconstruction on digital subtraction angiography (DSA) projections, loss of vessel contrast has been observed behind highly attenuating anatomy, such as dental implants and large contrast-filled aneurysms. Because this typically occurs only in a limited range of projection angles, the observed contrast time course can potentially be altered. In this work, we have developed a model for acquiring DSA projections that accounts for both the polychromatic nature of the x-ray spectrum and x-ray scattering interactions to investigate this problem. In our simulation framework, scatter and beam hardening contributions to vessel dropout can be analyzed separately. We constructed digital phantoms with large, clearly defined regions containing iodine contrast, bone, soft tissue, titanium (dental implants), or combinations of these materials. As the regions containing the materials were large and rectangular, when the phantoms were forward projected, the projections contained uniform regions of interest (ROI) and enabled accurate vessel dropout analysis. Two phantom models were used, one to model the case of a vessel behind a large contrast-filled aneurysm and the other to model a vessel behind a dental implant. Cases in which both beam hardening and scatter were turned off, only scatter was turned on, only beam hardening was turned on, and both scatter and beam hardening were turned on, were simulated for both phantom models. Analysis of these data showed that the contrast degradation is primarily due to scatter. When analyzing the aneurysm case, 90.25% of the vessel contrast was lost in the polychromatic scatter image, whereas only 50.5% of the vessel contrast was lost in the beam hardening only image. When analyzing the teeth case, 44.2% of the vessel contrast was lost in the polychromatic scatter image and only 26.2% of the vessel contrast was lost in the beam hardening only image.
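The separation of beam hardening from scatter can be illustrated with a toy polychromatic projection model. The two-bin spectrum, attenuation coefficients, and additive scatter fraction below are hypothetical and much simpler than the paper's simulation framework; they only show the two mechanisms in isolation.

```python
import math

def transmitted(t_cm, spectrum, mu, scatter=0.0):
    """Detected signal behind t_cm of material: polychromatic Beer-Lambert
    primary plus an additive scatter term (fraction of the open-field signal)."""
    primary = sum(w * math.exp(-mu[e] * t_cm) for e, w in spectrum.items())
    return primary + scatter

spectrum = {"40keV": 0.6, "70keV": 0.4}   # relative fluence weights (illustrative)
mu_bone = {"40keV": 1.2, "70keV": 0.5}    # 1/cm, illustrative values

def effective_mu(t_cm):
    """Thickness-dependent effective attenuation; its decrease with
    thickness is the beam-hardening signature."""
    return -math.log(transmitted(t_cm, spectrum, mu_bone)) / t_cm

def vessel_contrast(scatter):
    """Relative contrast of a vessel that halves the primary signal behind
    5 cm of bone, with an additive scatter floor included in both ROIs."""
    bg = transmitted(5.0, spectrum, mu_bone, scatter)
    vessel = 0.5 * transmitted(5.0, spectrum, mu_bone) + scatter
    return (bg - vessel) / bg
```

Raising the scatter floor degrades contrast directly, while hardening only shifts the effective attenuation, mirroring the paper's finding that scatter dominates the dropout.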
Basic concepts in plasma accelerators.
Bingham, Robert
2006-03-15
In this article, we present the underlying physics and the present status of high-gradient, high-energy plasma accelerators. With the development of compact short-pulse high-brightness lasers and electron and positron beams, new areas of study for laser/particle-beam-matter interactions are opening up. A number of methods are being pursued vigorously to achieve ultra-high acceleration gradients. These include the plasma beat wave accelerator (PBWA) mechanism, which uses conventional long-pulse (~100 ps), modest-intensity lasers (I ~ 10¹⁴-10¹⁶ W cm⁻²); the laser wakefield accelerator (LWFA), which uses the new breed of compact high-brightness lasers (<1 ps) at intensities >10¹⁸ W cm⁻²; the self-modulated laser wakefield accelerator (SMLWFA) concept, which combines elements of stimulated Raman forward scattering (SRFS); and electron acceleration by nonlinear plasma waves excited by relativistic electron and positron bunches (the plasma wakefield accelerator, PWFA). In the ultra-high-intensity regime, laser/particle-beam-plasma interactions are highly nonlinear and relativistic, leading to new phenomena such as plasma wakefield excitation for particle acceleration, relativistic self-focusing and guiding of laser beams, high-harmonic generation, and acceleration of electrons, positrons, protons and photons. Fields greater than 1 GV cm⁻¹ have been generated, and monoenergetic particle beams accelerated to about 100 MeV over millimetre distances have been recorded. Plasma wakefields driven by both electron and positron beams at the Stanford Linear Accelerator Center (SLAC) facility have accelerated the tails of the beams.
Amisaki, Takashi; Toyoda, Shinjiro; Miyagawa, Hiroh; Kitamura, Kunihiro
2003-04-15
Evaluation of long-range Coulombic interactions still represents a bottleneck in molecular dynamics (MD) simulations of biological macromolecules. Despite the advent of sophisticated fast algorithms, such as the fast multipole method (FMM), accurate simulations still demand a great amount of computation time due to the accuracy/speed trade-off inherent in these algorithms. Unless higher-order multipole expansions, which are extremely expensive to evaluate, are employed, a large amount of the execution time is still spent directly calculating particle-particle interactions within the nearby region of each particle. To reduce this execution time for pair interactions, we developed a computation unit (board), called MD-Engine II, that calculates nonbonded pairwise interactions using specially designed hardware. Four custom arithmetic processors and a processor for memory manipulation ("particle processor") are mounted on the computation board. The arithmetic processors are responsible for the calculation of the pair interactions. The particle processor plays a central role in realizing efficient cooperation with the FMM. The results of a series of 50-ps MD simulations of a protein-water system (50,764 atoms) indicated that a more stringent setting of accuracy in the FMM computation, compared with those previously reported, was required for accurate simulations over long time periods. Such a level of accuracy was efficiently achieved using the cooperative calculations of the FMM and MD-Engine II. On an Alpha 21264 PC, the FMM computation at a moderate but tolerable level of accuracy was accelerated by a factor of 16.0 using three boards. At a high level of accuracy, the cooperative calculation achieved a 22.7-fold acceleration over the corresponding conventional FMM calculation. In the cooperative calculations of the FMM and MD-Engine II, it was possible to achieve more accurate computation at a comparable execution time by incorporating larger nearby regions.
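The near-field work that dominates execution time, and that MD-Engine II accelerates in hardware, is a direct pairwise sum of the following form. This is a plain-Python sketch with illustrative units and particles; the FMM supplies the far-field part.

```python
# Minimal sketch of the near-field direct sum in an FMM-based MD code:
# pairwise Coulomb energy over all particle pairs closer than a cutoff.
# Gaussian-style units (energy = q_i*q_j/r) are used for brevity.

def direct_coulomb_energy(positions, charges, cutoff):
    """Sum q_i*q_j/r_ij over all pairs with r_ij < cutoff."""
    energy = 0.0
    n = len(positions)
    for i in range(n):
        for j in range(i + 1, n):
            dx, dy, dz = (positions[i][k] - positions[j][k] for k in range(3))
            r = (dx * dx + dy * dy + dz * dz) ** 0.5
            if r < cutoff:
                energy += charges[i] * charges[j] / r
    return energy

# Three illustrative particles; only the first pair lies within the cutoff
e = direct_coulomb_energy([(0, 0, 0), (1, 0, 0), (10, 0, 0)], [1.0, -1.0, 1.0], 5.0)
```

The O(n²) inner loop over nearby pairs is exactly the computation that is worth moving to dedicated arithmetic processors.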
Saikko, Vesa
2015-01-21
The temporal change of the direction of sliding relative to the ultrahigh molecular weight polyethylene (UHMWPE) component of prosthetic joints is known to be of crucial importance with respect to wear. One complete revolution of the resultant friction vector is commonly called a wear cycle. It was hypothesized that in order to accelerate the wear test, the cycle frequency may be substantially increased if the circumference of the slide track is reduced in proportion, and still the wear mechanisms remain realistic and no overheating takes place. This requires an additional slow motion mechanism with which the lubrication of the contact is maintained and wear particles are conveyed away from the contact. A three-station, dual motion high frequency circular translation pin-on-disk (HF-CTPOD) device with a relative cycle frequency of 25.3 Hz and an average sliding velocity of 27.4 mm/s was designed. The pins circularly translated at high frequency (1.0 mm per cycle, 24.8 Hz, clockwise), and the disks at low frequency (31.4 mm per cycle, 0.5 Hz, counter-clockwise). In a 22 million cycle (10-day) test, the wear rate of conventional gamma-sterilized UHMWPE pins against polished CoCr disks in diluted serum was 1.8 mg per 24 h, which was six times higher than that in the established 1 Hz CTPOD device. The wear mechanisms were similar. Burnishing of the pin was the predominant feature. No overheating took place. With the dual motion HF-CTPOD method, the wear testing of UHMWPE as a bearing material in total hip arthroplasty can be substantially accelerated without concerns of the validity of the wear simulation.
Nelli, Flavio Enrico
2016-03-01
A very simple method to measure the effect of the backscatter from secondary collimators into the beam monitor chambers in linear accelerators equipped with multi-leaf collimators (MLC) is presented here. The backscatter to the monitor chambers from the upper jaws of the secondary collimator was measured on three beam-matched linacs by means of three methods: the new methodology, the ecliptic method, and an assessment of the variation of the beam-on time per monitor unit with dose rate feedback disabled. The new methodology was used to assess the backscatter characteristics of asymmetric over-traveling jaws. Excellent agreement was established between the backscatter values measured using the new methodology introduced here and those obtained using the other two methods. The experimental values reported here differ by less than 1% from published data. The sensitivity of this novel technique allowed differences in backscatter due to the same opening of the jaws, when placed at different positions in the beam path, to be resolved. The introduction of the ecliptic method has made the determination of the backscatter to the monitor chambers an easy procedure. The method presented here for machines equipped with MLCs makes the determination of backscatter to the beam monitor chambers even easier, and is suitable for characterizing linacs equipped with over-traveling asymmetric secondary collimators. This experimental procedure could simply be implemented to fully characterize the backscatter output factor constituent when detailed dosimetric modeling of the machine's head is required. The methodology proved to be uncomplicated, accurate and suitable for clinical or experimental environments.
Suzuki, Yusuke; Hayashi, Naoki; Kato, Hideki; Fukuma, Hiroshi; Hirose, Yasujiro; Kawano, Makoto; Nishii, Yoshio; Nakamura, Masaru; Mukouyama, Takashi
2013-01-01
In small-field irradiation, the back-scattered radiation (BSR) affects the counts measured with a beam monitor chamber (BMC). In general, the effect of the BSR depends on the jaw opening size, and it is significantly large in small-field irradiation. Our purpose in this study was to predict the effect of BSR on LINAC output accurately with an improved target-current-pulse (TCP) technique. The pulse signals were measured with a system consisting of a personal computer and a digitizer, and were analyzed with in-house software. The measured parameters were the number of pulses, the change in the waveform, and the integrated signal values of the TCPs. The TCPs were measured for various field sizes with four linear accelerators. For comparison, Yu's method, in which a universal counter was used, was re-examined. The results showed that the variance of the measurements by the new method was reduced to approximately 1/10 of the variance by the previous method. There was no significant variation in the number of pulses due to a change in the field size in the Varian Clinac series; however, a change in the integrated signal value was observed. This tendency differs from the results of other investigations in the past. Our prediction method is able to define the cutoff voltage for the TCP acquired by the digitizer. This functionality provides the capability of clearly classifying TCPs into signals and noise. In conclusion, our TCP analysis method can predict the effect of BSR on the BMC even for small-field irradiations.
Acceleration modules in linear induction accelerators
NASA Astrophysics Data System (ADS)
Wang, Shao-Heng; Deng, Jian-Jun
2014-05-01
The Linear Induction Accelerator (LIA) is a unique type of accelerator that is capable of accelerating kilo-Ampere charged particle current to tens of MeV energy. The present development of LIA in MHz bursting mode and the successful application into a synchrotron have broadened LIA's usage scope. Although the transformer model is widely used to explain the acceleration mechanism of LIAs, it is not appropriate to consider the induction electric field as the field which accelerates charged particles for many modern LIAs. We have examined the transition of the magnetic cores' functions during the LIA acceleration modules' evolution, distinguished transformer type and transmission line type LIA acceleration modules, and re-considered several related issues based on transmission line type LIA acceleration module. This clarified understanding should help in the further development and design of LIA acceleration modules.
Accelerated Profile HMM Searches.
Eddy, Sean R
2011-10-01
Profile hidden Markov models (profile HMMs) and probabilistic inference methods have made important contributions to the theory of sequence database homology search. However, practical use of profile HMM methods has been hindered by the computational expense of existing software implementations. Here I describe an acceleration heuristic for profile HMMs, the "multiple segment Viterbi" (MSV) algorithm. The MSV algorithm computes an optimal sum of multiple ungapped local alignment segments using a striped vector-parallel approach previously described for fast Smith/Waterman alignment. MSV scores follow the same statistical distribution as gapped optimal local alignment scores, allowing rapid evaluation of significance of an MSV score and thus facilitating its use as a heuristic filter. I also describe a 20-fold acceleration of the standard profile HMM Forward/Backward algorithms using a method I call "sparse rescaling". These methods are assembled in a pipeline in which high-scoring MSV hits are passed on for reanalysis with the full HMM Forward/Backward algorithm. This accelerated pipeline is implemented in the freely available HMMER3 software package. Performance benchmarks show that the use of the heuristic MSV filter sacrifices negligible sensitivity compared to unaccelerated profile HMM searches. HMMER3 is substantially more sensitive and 100- to 1000-fold faster than HMMER2. HMMER3 is now about as fast as BLAST for protein searches.
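As a rough illustration of the kind of score MSV targets, the following computes the best single ungapped local alignment segment with a Kadane (maximum-subarray) scan along each diagonal. This is a deliberate simplification: the real MSV algorithm scores an optimal sum of multiple ungapped segments against a profile HMM and runs striped in SIMD registers, none of which is shown here.

```python
def best_ungapped_segment(query, target, score):
    """Best single ungapped local alignment score between query and target,
    found by a maximum-subarray scan along each diagonal (scores clamp at 0,
    as in local alignment)."""
    best = 0
    for d in range(-(len(query) - 1), len(target)):
        run = 0
        i = max(0, -d)        # first query position on diagonal d = j - i
        j = i + d             # paired target position
        while i < len(query) and j < len(target):
            run = max(0, run + score(query[i], target[j]))
            best = max(best, run)
            i, j = i + 1, j + 1
    return best

# Illustrative scoring: +2 for a match, -3 for a mismatch
s = best_ungapped_segment("ACGT", "TTACGTT", lambda a, b: 2 if a == b else -3)
```

Because no gap transitions are considered, each diagonal can be scanned independently, which is what makes this class of score so amenable to vector parallelization.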
Imaging using accelerated heavy ions
Chu, W.T.
1982-05-01
Several methods for imaging using accelerated heavy ion beams are being investigated at Lawrence Berkeley Laboratory. Using the HILAC (Heavy-Ion Linear Accelerator) as an injector, the Bevalac can accelerate fully stripped atomic nuclei from carbon (Z = 6) to krypton (Z = 36), and partly stripped ions up to uranium (Z = 92). Radiographic studies to date have been conducted with helium (from the 184-inch cyclotron), carbon, oxygen, and neon beams. Useful ranges in tissue of 40 cm or more are available. To investigate the potential of heavy-ion projection radiography and computed tomography (CT), several methods and instruments have been studied.
Shimozato, T; Tabushi, K; Kitoh, S; Shiota, Y; Hirayama, C; Suzuki, S
2007-01-21
To calculate photon spectra for a 10 MV x-ray beam emitted by a medical linear accelerator, we performed a numerical analysis using aluminium transmission data obtained along the central axis of the beam under the narrow-beam condition corresponding to a 3 × 3 cm² field at a 100 cm distance from the source. We used the BFGS quasi-Newton method, based on a general nonlinear optimization technique, for the numerical analysis. The attenuation coefficients, aluminium thicknesses and measured transmission data are the necessary inputs. The calculated x-ray spectrum shape was smooth from the lower to the higher energy regions, without any angular components. The x-ray spectrum acquired by this method was evaluated by comparison with percentage depth dose measurements along the central axis in a water phantom and with a Monte Carlo simulation using the electron gamma shower (EGS) code. The calculated percentage depth doses for a 10 × 10 cm² field at a 100 cm source-to-surface distance in a water phantom were obtained using the same geometry settings as the water phantom measurements. The differences between the measured and calculated values were less than ±1.0% over a broad region, from the shallow part near the surface to depths of up to 25 cm in the water phantom.
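The core idea, that measured transmission through known aluminium thicknesses constrains the spectral weights, can be seen in a two-bin toy version where the problem reduces to a small linear system. All numbers below are illustrative; the paper solves a many-bin nonlinear fit with the BFGS quasi-Newton method rather than this direct solve.

```python
import math

# Toy spectrum unfolding: with known attenuation coefficients mu_i for each
# energy bin, transmission at thickness t is linear in the unknown weights,
# T(t) = sum_i w_i * exp(-mu_i * t). Two bins + two thicknesses -> 2x2 system.

mu = (0.15, 0.06)        # 1/cm, illustrative Al attenuation for the two bins
w_true = (0.7, 0.3)      # "unknown" spectral weights used to make the data

def transmission(t, w):
    return sum(wi * math.exp(-m * t) for wi, m in zip(w, mu))

t1, t2 = 2.0, 10.0
T1, T2 = transmission(t1, w_true), transmission(t2, w_true)

# Solve [[e^-mu1*t1, e^-mu2*t1], [e^-mu1*t2, e^-mu2*t2]] @ w = [T1, T2]
a11, a12 = math.exp(-mu[0] * t1), math.exp(-mu[1] * t1)
a21, a22 = math.exp(-mu[0] * t2), math.exp(-mu[1] * t2)
det = a11 * a22 - a12 * a21
w1 = (T1 * a22 - a12 * T2) / det
w2 = (a11 * T2 - T1 * a21) / det
```

With many energy bins and noisy measurements the system becomes ill-conditioned, which is why an iterative optimizer such as BFGS is used in practice.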
NASA Astrophysics Data System (ADS)
Lee, Kyoung-Rok; Koo, Weoncheol; Kim, Moo-Hyun
2013-12-01
A floating Oscillating Water Column (OWC) wave energy converter, a Backward Bent Duct Buoy (BBDB), was simulated using a state-of-the-art, two-dimensional, fully-nonlinear Numerical Wave Tank (NWT) technique. The hydrodynamic performance of the floating OWC device was evaluated in the time domain. The acceleration potential method, with a fully updated kernel matrix calculation associated with a mode decomposition scheme, was implemented to obtain accurate estimates of the hydrodynamic force and displacement of a freely floating BBDB. The developed NWT was based on potential theory and the boundary element method with constant panels on the boundaries. The mixed Eulerian-Lagrangian (MEL) approach was employed to capture the nonlinear free surfaces inside the chamber that interacted with a pneumatic pressure, induced by the time-varying airflow velocity at the air duct. A special viscous damping was applied to the chamber free surface to represent the viscous energy loss due to the BBDB's shape and motions. The viscous damping coefficient was properly selected through comparison with the experimental data. The calculated surface elevation, inside and outside the chamber, correlated reasonably well with the experimental data for various incident wave conditions. The conservation of the total wave energy in the computational domain was confirmed over the entire range of wave frequencies.
Nyflot, Matthew J; Cao, Ning; Meyer, Juergen; Ford, Eric C
2014-03-06
Accurate alignment of the linear accelerator table rotational axis with the radiation isocenter is critical for noncoplanar radiotherapy applications. The purpose of the present study is to develop a method to align the table rotation axis with the MV isocenter to submillimeter accuracy. We developed a computerized method using an electronic portal imaging device (EPID) and measured alignment stability over time. Mechanical and radiation isocenter coincidence was measured by placing a steel ball bearing at the radiation isocenter using existing EPID techniques. EPID images were then acquired over the range of table rotation. A MATLAB script was developed to calculate the center of rotation, as well as the adjustment necessary to move the table rotational axis to the MV isocenter. The adjustment was applied via torque to screws at the base of the linac table. Stability of rotational alignment was measured with 49 measurements over 363 days on four linacs. Initial rotational misalignment from the radiation isocenter ranged from 0.91-2.11 mm on the four tested linacs. Linac-A had the greatest error (> 2 mm) and was adjusted with the described method. After adjustment, the error was significantly decreased, to 0.40 ± 0.12 mm, and the adjustment was stable over the course of 15 measurements over 231 days. Linac-B was not adjusted, but was tracked from the time of commissioning with 27 measurements over 363 days. No discernible shift in couch characteristics was observed (mean error 1.40 ± 0.22 mm). The greater variability for Linac-B may relate to its interchangeable two-piece couch, which allows more lateral movement than the one-piece Linac-A couch. Submillimeter isocenter alignment was achieved by applying a precision correction to the linac table base. Table rotational characteristics were shown to be stable over the course of twelve months. The accuracy and efficiency of this method may make it suitable for acceptance testing, annual quality assurance, or commissioning of highly conformal noncoplanar radiotherapy applications.
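The center-of-rotation calculation can be sketched as a least-squares circle fit to the ball-bearing positions detected at each table angle. This generic Kasa fit in plain Python is an assumption about what such a MATLAB script computes, not the authors' code; the coordinates in the example are illustrative.

```python
import math

def fit_circle(points):
    """Least-squares (Kasa) circle fit: x^2 + y^2 = 2*a*x + 2*b*y + c.
    Returns the center (a, b), i.e. the rotation axis position, and radius."""
    # Normal equations S @ p = rhs for p = (2a, 2b, c), rows (x, y, 1)
    S = [[0.0] * 3 for _ in range(3)]
    rhs = [0.0] * 3
    for x, y in points:
        row = (x, y, 1.0)
        z = x * x + y * y
        for i in range(3):
            rhs[i] += row[i] * z
            for j in range(3):
                S[i][j] += row[i] * row[j]
    # Gaussian elimination with partial pivoting on the 3x3 system
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(S[r][col]))
        S[col], S[piv] = S[piv], S[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(col + 1, 3):
            f = S[r][col] / S[col][col]
            for c in range(col, 3):
                S[r][c] -= f * S[col][c]
            rhs[r] -= f * rhs[col]
    p = [0.0] * 3
    for r in (2, 1, 0):
        p[r] = (rhs[r] - sum(S[r][c] * p[c] for c in range(r + 1, 3))) / S[r][r]
    cx, cy = p[0] / 2, p[1] / 2
    radius = math.sqrt(p[2] + cx * cx + cy * cy)
    return (cx, cy), radius

# Ball-bearing positions (mm) at four table angles on a 2 mm circle about (1.5, -0.8)
(center_x, center_y), rad = fit_circle(
    [(3.5, -0.8), (1.5, 1.2), (-0.5, -0.8), (1.5, -2.8)])
```

The fitted center minus the MV isocenter position gives the couch-base correction vector; the fitted radius indicates how far off-axis the ball bearing sat.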
Progress on plasma accelerators
Chen, P.
1986-05-01
Several plasma accelerator concepts are reviewed, with emphasis on the Plasma Beat Wave Accelerator (PBWA) and the Plasma Wake Field Accelerator (PWFA). Various accelerator physics issues regarding these schemes are discussed, and numerical examples for laboratory-scale experiments are given. The efficiency of plasma accelerators is then examined, with suggestions for improvement. Sources that cause emittance growth are discussed briefly.
Loonen, A J M; Jansz, A R; Kreeftenberg, H; Bruggeman, C A; Wolffs, P F G; van den Brule, A J C
2011-03-01
To accelerate differentiation between Staphylococcus aureus and coagulase-negative staphylococci (CNS), this study aimed to compare six different DNA extraction methods from two commonly used blood culture materials, i.e. BACTEC and BacT/ALERT. Furthermore, we analysed the effect of reduced blood culture incubation on the detection of staphylococci directly from blood culture material. A real-time polymerase chain reaction (PCR) duplex assay was used to compare the six different DNA isolation protocols on two different blood culture systems. Negative blood culture material was spiked with methicillin-resistant S. aureus (MRSA). Bacterial DNA was isolated with the automated extractor easyMAG (three protocols), the automated extractor MagNA Pure LC (LC Microbiology Kit M(Grade)), a manual kit MolYsis Plus, and a combination of MolYsis Plus and the easyMAG. The best-performing isolation method was then used to evaluate reduced bacterial incubation times. Bacterial DNA isolation with the MolYsis Plus kit in combination with the specific B protocol on the easyMAG resulted in the most sensitive detection of S. aureus, with a detection limit of 10 CFU/ml in BacT/ALERT material, whereas using BACTEC resulted in a detection limit of 100 CFU/ml. An initial S. aureus or CNS load of 1 CFU/ml blood can be detected after 5 h of incubation in BacT/ALERT 3D by combining the sensitive isolation method and the tuf LightCycler assay.
A Data-driven Analytic Model for Proton Acceleration by Large-scale Solar Coronal Shocks
NASA Astrophysics Data System (ADS)
Kozarev, Kamen A.; Schwadron, Nathan A.
2016-11-01
We have recently studied the development of an eruptive filament-driven, large-scale off-limb coronal bright front (OCBF) in the low solar corona, using remote observations from the Solar Dynamics Observatory’s Advanced Imaging Assembly EUV telescopes. In that study, we obtained high-temporal resolution estimates of the OCBF parameters regulating the efficiency of charged particle acceleration within the theoretical framework of diffusive shock acceleration (DSA). These parameters include the time-dependent front size, speed, and strength, as well as the upstream coronal magnetic field orientations with respect to the front’s surface normal direction. Here we present an analytical particle acceleration model, specifically developed to incorporate the coronal shock/compressive front properties described above, derived from remote observations. We verify the model’s performance through a grid of idealized case runs using input parameters typical for large-scale coronal shocks, and demonstrate that the results approach the expected DSA steady-state behavior. We then apply the model to the event of 2011 May 11 using the OCBF time-dependent parameters derived by Kozarev et al. We find that the compressive front likely produced energetic particles as low as 1.3 solar radii in the corona. Comparing the modeled and observed fluences near Earth, we also find that the bulk of the acceleration during this event must have occurred above 1.5 solar radii. With this study we have taken a first step in using direct observations of shocks and compressions in the innermost corona to predict the onsets and intensities of solar energetic particle events.
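For reference, the steady-state limit that the model runs are checked against is the standard DSA power-law result (a textbook relation, not specific to this paper's model):

```latex
f(p) \propto p^{-q}, \qquad q = \frac{3r}{r - 1}, \qquad r = \frac{u_1}{u_2},
```

where r is the shock compression ratio (upstream over downstream flow speed in the shock frame). A strong shock with r = 4 gives q = 4, corresponding to a differential intensity j(E) proportional to E^{-2} for relativistic particles.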
Radiotherapy using a laser proton accelerator
Murakami, Masao; Hishikawa, Yoshio; Miyajima, Satoshi; Okazaki, Yoshiko; Sutherland, Kenneth L.; Abe, Mitsuyuki; Bulanov, Sergei V.; Daido, Hiroyuki; Esirkepov, Timur Zh.; Koga, James; Yamagiwa, Mitsuru; Tajima, Toshiki
2008-06-24
Laser acceleration promises innovation in particle beam therapy of cancer where an ultra-compact accelerator system for cancer beam therapy can become affordable to a broad range of patients. This is not feasible without the introduction of a technology that is radically different from the conventional accelerator-based approach. Because of its compactness and other novel characteristics, the laser acceleration method provides many enhanced capabilities.
Khoshkholgh, Roghaie; Keshavarz, Tahereh; Moshfeghy, Zeinab; Akbarzadeh, Marzieh; Asadi, Nasrin; Zare, Najaf
2016-01-01
Objective: To compare the effects of two auditory methods, applied to the mother and to the fetus, on the results of NST in 2011-2012. Materials and methods: In this single-blind clinical trial, 213 pregnant women with a gestational age of 37-41 weeks and no pregnancy complications were randomly divided into 3 groups (auditory intervention for the mother, auditory intervention for the fetus, and control), each containing 71 subjects. In the intervention groups, music was played through the second 10 minutes of the NST. The three groups were compared regarding baseline fetal heart rate and the number of accelerations in the first and second 10 minutes of the NST. The data were analyzed using one-way ANOVA, Kruskal-Wallis, and paired t-tests. Results: The results showed no significant difference among the three groups regarding baseline fetal heart rate in the first (p = 0.945) and second (p = 0.763) 10 minutes. However, a significant difference was found among the three groups concerning the number of accelerations in the second 10 minutes. Also, a significant difference in the number of accelerations was observed in the maternal auditory intervention group (p = 0.013) and the fetal auditory intervention group (p < 0.001). The difference between the numbers of accelerations in the first and second 10 minutes was also statistically significant (p = 0.002). Conclusion: Music intervention increased the number of accelerations, which is an indicator of fetal health. Further studies on the issue are nevertheless required. PMID:27385971
Gumerov, Nail A; O'Donovan, Adam E; Duraiswami, Ramani; Zotkin, Dmitry N
2010-01-01
The head-related transfer function (HRTF) is computed using the fast multipole accelerated boundary element method. For efficiency, the HRTF is computed using the reciprocity principle by placing a source at the ear and computing its field. Analysis is presented to modify the boundary value problem accordingly. To compute the HRTF corresponding to different ranges via a single computation, a compact and accurate representation of the HRTF, termed the spherical spectrum, is developed. Computations are reduced to a two stage process, the computation of the spherical spectrum and a subsequent evaluation of the HRTF. This representation allows easy interpolation and range extrapolation of HRTFs. HRTF computations are performed for the range of audible frequencies up to 20 kHz for several models including a sphere, human head models [the Neumann KU-100 ("Fritz") and the Knowles KEMAR ("Kemar") manikins], and head-and-torso model (the Kemar manikin). Comparisons between the different cases are provided. Comparisons with the computational data of other authors and available experimental data are conducted and show satisfactory agreement for the frequencies for which reliable experimental data are available. Results show that, given a good mesh, it is feasible to compute the HRTF over the full audible range on a regular personal computer.
NASA Astrophysics Data System (ADS)
Wang, Hu; Zou, Yubin; Wen, Weiwei; Lu, Yuanrong; Guo, Zhiyu
2016-07-01
Peking University Neutron Imaging Facility (PKUNIFTY) operates on an accelerator-based neutron source with a repetition period of 10 ms and a pulse duration of 0.4 ms, which has a rather low Cd ratio. To improve the effective Cd ratio, and thus the detection capability of the facility, energy-filtered neutron imaging was realized with an intensified CCD camera and the time-of-flight (TOF) method. The time structure of the pulsed neutron source was first simulated with Geant4, and the simulation result was evaluated against experiment. Both simulation and experiment indicated that fast and epithermal neutrons were concentrated in the first 0.8 ms of each pulse period, while in the interval of 0.8-2.0 ms only thermal neutrons were present. Based on this result, neutron images with and without energy filtering were acquired, showing that the detection capability of PKUNIFTY was improved by setting the exposure interval to 0.8-2.0 ms, especially for materials with strong moderating capability.
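The TOF relation underlying the energy filtering is E = m(L/t)²/2: slower (lower-energy) neutrons arrive later after the pulse, so gating the camera selects an energy band. The flight-path value in the example below is illustrative, since the facility's actual geometry is not given in the abstract.

```python
import math

M_N = 1.674927e-27   # neutron mass, kg
EV = 1.602177e-19    # joules per electronvolt

def neutron_energy_ev(flight_path_m, time_s):
    """Neutron kinetic energy from time of flight: E = m*(L/t)^2 / 2."""
    v = flight_path_m / time_s
    return 0.5 * M_N * v * v / EV

def arrival_time_ms(flight_path_m, energy_ev):
    """Arrival time after the source pulse for a neutron of given energy."""
    v = math.sqrt(2.0 * energy_ev * EV / M_N)
    return 1e3 * flight_path_m / v

# A thermal neutron (~2200 m/s) covering an illustrative 2.2 m path in 1 ms
E_thermal = neutron_energy_ev(2.2, 1.0e-3)
```

Choosing the camera exposure window in time is therefore equivalent to choosing an energy passband for a given flight path.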
Heuberger, Adam L; Broeckling, Corey D; Sedin, Dana; Holbrook, Christian; Barr, Lindsay; Kirkpatrick, Kaylyn; Prenni, Jessica E
2016-06-01
Flavour stability is vital to the brewing industry, as beer is often stored for extended periods under variable conditions. Developing an accelerated model to evaluate brewing techniques that affect flavour stability is an important area of research. Here, we performed metabolomics on non-volatile compounds in beer stored at 37 °C for between 1 and 14 days for two beer types: an amber ale and an India pale ale. The experiment showed that high temperature influenced non-volatile metabolites, including the purine 5-methylthioadenosine (5-MTA). In a second experiment, three brewing techniques were evaluated for improved flavour stability: use of antioxidant crowns, chelation of pro-oxidants, and varying plant content in hops. Sensory analysis determined that the hop method was associated with improved flavour stability, consistent with reduced 5-MTA at both regular- and high-temperature storage. Future studies are warranted to understand the influence of 5-MTA on flavour and aging in different beer types.
Mass spectrometry with accelerators.
Litherland, A E; Zhao, X-L; Kieser, W E
2011-01-01
As one in a series of articles on Canadian contributions to mass spectrometry, this review begins with an outline of the history of accelerator mass spectrometry (AMS), noting roles played by researchers at three Canadian AMS laboratories. After a description of the unique features of AMS, three examples, 14C, 10Be, and 129I, are given to illustrate the methods. The capabilities of mass spectrometry have been extended by the addition of atomic isobar selection, molecular isobar attenuation, further ion acceleration, followed by ion detection and ion identification at essentially zero dark current or ion flux. This has been accomplished by exploiting the techniques and accelerators of atomic and nuclear physics. In 1939, the first principles of AMS were established using a cyclotron. In 1977 the selection of isobars in the ion source was established when it was shown that the 14N- ion is very unstable, or extremely difficult to create, making a tandem electrostatic accelerator highly suitable for assisting the mass spectrometric measurement of the rare long-lived radioactive isotope 14C in the environment. This observation, together with the large attenuation of the molecular isobars 13CH- and 12CH2- during tandem acceleration and the observed very low background contamination from the ion source, was found to facilitate the mass spectrometry of 14C to at least a level of 14C/C ~ 6 × 10^-16, the equivalent of a radiocarbon age of 60,000 years. Tandem accelerator mass spectrometry, or AMS, has now made possible the accurate radiocarbon dating of milligram-sized carbon samples by ion counting, as well as dating and tracing with many other long-lived radioactive isotopes such as 10Be, 26Al, 36Cl, and 129I. The difficulty of obtaining large anion currents with low electron affinities and the difficulties of isobar separation, especially for the heavier mass ions, have prompted the use of molecular anions and the search for alternative
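The quoted sensitivity limit can be checked with the standard radiocarbon decay law. The Libby mean life (8033 yr) is conventional; the modern 14C/C ratio is taken here as a round ~1.2e-12, which is an assumed value for illustration.

```python
# Back-of-envelope check: a measured 14C/C ratio of ~6e-16 corresponds to a
# conventional radiocarbon age near the quoted 60,000-year limit.
import math

LIBBY_MEAN_LIFE_YR = 8033.0      # conventional: 5568-yr half-life / ln 2
R_MODERN = 1.2e-12               # approximate modern 14C/C ratio (assumed)

def radiocarbon_age(ratio):
    """Conventional radiocarbon age (yr) for a measured 14C/C ratio."""
    return -LIBBY_MEAN_LIFE_YR * math.log(ratio / R_MODERN)

print(round(radiocarbon_age(6e-16)))  # ~61,000 yr, consistent with "60,000 years"
```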
Accelerator mass spectrometry.
Hellborg, Ragnar; Skog, Göran
2008-01-01
In this overview the technique of accelerator mass spectrometry (AMS) and its use are described. AMS is a highly sensitive method of counting atoms. It is used to detect very low concentrations of natural isotopic abundances (typically in the range between 10^-12 and 10^-16) of both radionuclides and stable nuclides. The main advantages of AMS compared to conventional radiometric methods are the use of smaller samples (mg and even sub-mg size) and shorter measuring times (less than 1 hr). The equipment used for AMS is almost exclusively based on the electrostatic tandem accelerator, although some of the newest systems are based on a slightly different principle. Dedicated accelerators as well as older "nuclear physics machines" can be found in the 80 or so AMS laboratories in existence today. The most widely used isotope studied with AMS is 14C. Besides radiocarbon dating this isotope is used in climate studies, biomedicine applications and many other fields. More than 100,000 14C samples are measured per year. Other isotopes studied include 10Be, 26Al, 36Cl, 41Ca, 59Ni, 129I, U, and Pu. Although these measurements are important, the number of samples of these other isotopes measured each year is estimated to be less than 10% of the number of 14C samples.
White, Adrienne Lynne; Min, Thaw Htwe; Gross, Mechthild M.; Kajeechiwa, Ladda; Thwin, May Myo; Hanboonkunupakarn, Borimas; Than, Hla Hla; Zin, Thet Wai; Rijken, Marcus J.; Hoogenboom, Gabie; McGready, Rose
2016-01-01
Background: To evaluate a skilled birth attendant (SBA) training program in a neglected population on the Thai-Myanmar border, we used multiple methods to show that refugee and migrant health workers can be given effective training in their own environment to become SBAs and teachers of SBAs. The loss of SBAs through resettlement to third countries necessitated urgent training of available workers to meet local needs. Methods and Findings: All results were obtained from student records of theory grades and clinical log books. Qualitative evaluation of both the SBA and teacher programs was obtained using semi-structured interviews with supervisors and teachers. We also reviewed perinatal indicators over an eight-year period, starting prior to the first training program until after the graduation of the fourth cohort of SBAs. Results: Four SBA training programs scheduled between 2009 and 2015 resulted in 79/88 (90%) of students successfully completing a training program of 250 theory hours and 625 supervised clinical hours. All 79 students were able to: achieve pass grades on theory examination (median 80%, range 70-89%); obtain the required clinical experience within twelve months; and achieve clinical competence to provide safe care during childbirth. In 2010-2011, five experienced SBAs completed a train-the-trainer (TOT) program and went on to facilitate further training programs. Perinatal indicators within Shoklo Malaria Research Unit (SMRU), such as place of birth and maternal and newborn outcomes, showed no significant differences before and after introduction of training or following graduate deployment in the local maternity units. Confidence, competence and teamwork emerged from qualitative evaluation by senior SBAs working with and supervising students in the clinics. Conclusions: We demonstrate that in resource-limited settings or in marginalized populations, it is possible to accelerate training of skilled birth attendants to provide safe maternity care
NASA Astrophysics Data System (ADS)
Lee, M. T.; Gottfried, M.; Berglund, E.; Rodriguez, G.; Ceckanowicz, D. J.; Cutter, N.; Badgeley, J.
2014-12-01
The boom and bust history of mineral extraction in the American southwest is visible today in tens of thousands of abandoned and slowly decaying mine installations that scar the landscape. Mine tailing piles, mounds of crushed mineral ore, often contain significant quantities of heavy metal elements which may leach into surrounding soils, surface water and ground water. Chemical analysis of contaminated soils is a tedious and time-consuming process. Regional assessment of heavy metal contamination for treatment prioritization would be greatly accelerated by the development of near-surface imaging indices of heavy-metal vegetative stress in western grasslands. Further, the method would assist in measuring the ongoing effectiveness of phytoremediation and phytostabilization efforts. To test feasibility we ground-truthed nine phytoremediated and two control sites along the mine-impacted Kerber Creek watershed in Saguache County, Colorado. Total metal concentration was determined by XRF for both plant and soil samples. Leachable metals were extracted from soil samples following US EPA method 1312. Plants were identified, sorted into roots, shoots and leaves, and digested via microwave acid extraction. Metal concentrations were determined with high accuracy by ICP-OES analysis. Plants were found to contain significantly higher concentrations of heavy metals than surrounding soils, particularly for manganese (Mn), iron (Fe), copper (Cu), zinc (Zn), barium (Ba), and lead (Pb). Plant species accumulated and distributed metals differently, yet most showed translocation of metals from roots to above-ground structures. Ground analysis was followed by near-surface imaging using an unmanned aerial vehicle equipped with visible/near and shortwave infrared (0.7 to 1.5 μm) cameras. Images were assessed for spectral shifts indicative of plant stress, and attempts were made to correlate results with measured soil and plant metal concentrations.
NASA Astrophysics Data System (ADS)
Vermeire, B. C.; Witherden, F. D.; Vincent, P. E.
2017-04-01
First- and second-order accurate numerical methods, implemented for CPUs, underpin the majority of industrial CFD solvers. Whilst this technology has proven very successful at solving steady-state problems via a Reynolds Averaged Navier-Stokes approach, its utility for undertaking scale-resolving simulations of unsteady flows is less clear. High-order methods for unstructured grids and GPU accelerators have been proposed as an enabling technology for unsteady scale-resolving simulations of flow over complex geometries. In this study we systematically compare accuracy and cost of the high-order Flux Reconstruction solver PyFR running on GPUs and the industry-standard solver STAR-CCM+ running on CPUs when applied to a range of unsteady flow problems. Specifically, we perform comparisons of accuracy and cost for isentropic vortex advection (EV), decay of the Taylor-Green vortex (TGV), turbulent flow over a circular cylinder, and turbulent flow over an SD7003 aerofoil. We consider two configurations of STAR-CCM+: a second-order configuration, and a third-order configuration, where the latter was recommended by CD-adapco for more effective computation of unsteady flow problems. Results from both PyFR and STAR-CCM+ demonstrate that third-order schemes can be more accurate than second-order schemes for a given cost; e.g., going from second- to third-order, the PyFR simulations of the EV and TGV achieve 75× and 3× error reduction, respectively, for the same or reduced cost, and STAR-CCM+ simulations of the cylinder recovered wake statistics significantly more accurately for only twice the cost. Moreover, advancing to higher-order schemes on GPUs with PyFR was found to offer even further accuracy vs. cost benefits relative to industry-standard tools.
NASA Astrophysics Data System (ADS)
Fernandes, Milton Virgílio
2014-06-01
In this thesis, high-energy (HE; E > 0.1 GeV) and very-high-energy (VHE; E > 0.1 TeV) γ-ray data were investigated to probe Galactic stellar clusters (SCs) and star-forming regions (SFRs) as sites of hadronic Galactic cosmic-ray (GCR) acceleration. In principle, massive SCs and SFRs could accelerate GCRs at the shock front of the collective SC wind fed by the individual high-mass stars. The subsequently produced VHE γ rays would be measured with imaging air-Cherenkov telescopes (IACTs). Some of the Galactic VHE γ-ray sources, including those potentially produced by SCs, fill a large fraction of the field of view (FoV) and require additional observations of source-free regions to determine the dominant background for a spectral reconstruction. A new method of reconstructing spectra for such extended sources without the need for further observations is developed: the Template Background Spectrum (TBS). This method is based on a background-estimation technique used to generate sky maps, which determines the background in parameter space. The idea is to create a look-up table of the background normalisation in energy, zenith angle, and angular separation, and to account for possible systematics. The results obtained with TBS and with state-of-the-art background-estimation methods on H.E.S.S. data are in good agreement. With TBS, even those sources could be reconstructed that would normally need further observations. TBS is therefore the third method for reconstructing VHE γ-ray spectra, but the first that does not need additional observations in the analysis of extended sources. The discovery of the largest VHE γ-ray source, HESS J1646-458 (2.2° in size), towards the SC Westerlund 1 (Wd 1) can be plausibly explained by the SC-wind scenario. But owing to its size, other alternative counterparts to the TeV emission (pulsar, binary system, magnetar) were found in the FoV. Therefore, an association of HESS J1646-458 with the SC is favoured, but cannot be confirmed. The SC Pismis 22 is located in the centre of
Acceleration using total internal reflection
Fernow, R.C.
1991-06-07
This report considers the use of a dielectric slab undergoing total internal reflection as an accelerating structure for charged particle beams. We examine the functional dependence of the electromagnetic fields above the surface of the dielectric for polarized incident waves. We present an experimental arrangement for testing the performance of the method, using apparatus under construction for the Grating Acceleration experiment at Brookhaven National Laboratory. 13 refs., 4 figs., 2 tabs.
Sessler, A.M.
1986-05-01
A general discussion is presented of the acceleration of particles. Upon this foundation is built a categorization scheme into which all accelerators can be placed. Special attention is devoted to accelerators which employ a wake-field mechanism and a restricting theorem is examined. It is shown how the theorem may be circumvented. Comments are made on various acceleration schemes.
NASA Astrophysics Data System (ADS)
Torres, J. A.; Jiang, Fan; Ma, Yuansheng; Mellman, Joerg; Lai, Kafai; Raghunathan, Ananthan; Xu, Yongan; Liu, Chi-Chun; Chi, Cheng
2015-03-01
We have performed a systematic study of the diblock composition required to keep the size of the cylinders relatively constant regardless of the shape of the guiding pattern. We have also explored how some guiding-pattern shapes provide acceptable cylindrical assembly using an EUV exposure system. This study assumes that LER is a random phenomenon which conformably follows the shape of the guiding pattern. While the edges of the guiding pattern have fluctuations related to the LER of the EUV resist, as long as the centroid of the guiding pattern remains constant, the rectification characteristics of DSA permit adequate hole formation. In this paper we report the level of LER a guiding pattern can exhibit given a pre-determined diblock copolymer/homopolymer mixture. As the amount of homopolymer increases, the size and placement of the assembled diblock become less sensitive to the guiding pattern's edge roughness. This study also explores how the addition of homopolymer is only effective up to a point, as a homopolymer-rich blend is not able to assemble properly. One concern about homopolymer-rich mixtures is their effect on the formation of defects. This effect has not been fully characterized, but this study serves as the basis for testing optimal combinations of materials and lithography settings for an EUV system, with the end goal of enabling contact/via printing at lower EUV source power requirements.
Bush, David A
2008-09-30
A research grant was approved to fund development of requirements and concepts for extracting a helium-ion beam at the LLUMC proton accelerator facility, thus enabling the facility to better simulate the deep-space environment via beams sufficient to study biological effects of accelerated helium ions in living tissues. A biologically meaningful helium-ion beam will be achieved by implementing enhancements to increase the accelerator's maximum proton beam energy output from 250 MeV to 300 MeV. Additional benefits anticipated from the increased energy include the capability to compare possible benefits of helium-beam radiation treatment with proton-beam treatment, and to provide a platform for developing a future proton computed tomography imaging system.
Parallel beam dynamics simulation of linear accelerators
Qiang, Ji; Ryne, Robert D.
2002-01-31
In this paper we describe parallel particle-in-cell methods for the large scale simulation of beam dynamics in linear accelerators. These techniques have been implemented in the IMPACT (Integrated Map and Particle Accelerator Tracking) code. IMPACT is being used to study the behavior of intense charged particle beams and as a tool for the design of next-generation linear accelerators. As examples, we present applications of the code to the study of emittance exchange in high intensity beams and to the study of beam transport in a proposed accelerator for the development of accelerator-driven waste transmutation technologies.
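The particle-in-cell technique at the heart of codes like IMPACT alternates between depositing particle charge onto a grid and pushing particles in the resulting fields. A serial, 1-D sketch of the linear-weight (cloud-in-cell) deposition step follows; IMPACT itself is a parallel 3-D map-based code, so the grid, units, and function names here are illustrative assumptions only.

```python
# Sketch of cloud-in-cell (CIC) charge deposition, one building block of a
# particle-in-cell step: each particle's charge is shared linearly between
# the two nearest grid nodes (periodic grid).
import numpy as np

def deposit_cic(x, q, n_cells, dx):
    """Linear-weight deposition of particle charges onto a periodic 1-D grid."""
    rho = np.zeros(n_cells)
    cell = np.floor(x / dx).astype(int) % n_cells   # left grid node index
    frac = x / dx - np.floor(x / dx)                # fractional position in cell
    np.add.at(rho, cell, q * (1.0 - frac))          # weight to left node
    np.add.at(rho, (cell + 1) % n_cells, q * frac)  # weight to right node
    return rho / dx                                 # charge density

x = np.array([0.25, 1.5])    # particle positions
q = np.array([1.0, 1.0])     # particle charges
rho = deposit_cic(x, q, n_cells=4, dx=1.0)
print(rho)  # -> [0.75 0.75 0.5  0.  ]; note rho.sum()*dx == q.sum()
```

`np.add.at` is used (rather than `rho[cell] += ...`) so that multiple particles in the same cell accumulate correctly; total charge is conserved exactly.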
NASA Technical Reports Server (NTRS)
Davis, Jeffrey
2012-01-01
Opportunities: I. Engage NASA team (examples): a) Research and technology calls: provide suggestions to AES, HRP, OCT. b) Use NASA@Work to solicit other ideas (possibly before R+D calls). II. Stimulate collaboration (examples): a) NHHPC. b) Wharton Mack Center for Technological Innovation (Feb 2013). c) International: DLR :envihab (July 2013). d) Accelerated research models: NSF, Myelin Repair Foundation. III. Engage public: prizes (open platforms: InnoCentive, yet2.com, NTL; Rice Business Plan, etc.). IV. Use same methods to engage STEM.
Miyaji, Yoshihiro; Ishizuka, Tomoko; Kawai, Kenji; Hamabe, Yoshimi; Miyaoka, Teiji; Oh-hara, Toshinari; Ikeda, Toshihiko; Kurihara, Atsushi
2009-01-01
A technique utilizing simultaneous intravenous microdosing of 14C-labeled drug with oral dosing of non-labeled drug for measurement of absolute bioavailability was evaluated using R-142086 in male dogs. Plasma concentrations of R-142086 were measured by liquid chromatography-tandem mass spectrometry (LC-MS/MS), and those of 14C-R-142086 were measured by accelerator mass spectrometry (AMS). The absence of metabolites in the plasma and urine was confirmed by a single radioactive peak of the parent compound in the chromatogram after intravenous microdosing of 14C-R-142086 (1.5 microg/kg). Although plasma concentrations of R-142086 determined by LC-MS/MS were approximately 20% higher than those of 14C-R-142086 determined by AMS, there was excellent correlation (r=0.994) between the two sets of concentrations after intravenous dosing of 14C-R-142086 (0.3 mg/kg). The oral bioavailability of R-142086 at 1 mg/kg obtained by simultaneous intravenous microdosing of 14C-R-142086 was 16.1%, slightly higher than the value (12.5%) obtained by separate intravenous dosing of R-142086 (0.3 mg/kg). In conclusion, using simultaneous intravenous microdosing of 14C-labeled drug in conjunction with AMS analysis, absolute bioavailability could be estimated in dogs, though not with complete accuracy. Bioavailability in humans may thus be estimated at an earlier stage and at lower cost.
Levin, A.R.; Goldberg, H.L.; Borer, J.S.; Rothenberg, L.N.; Nolan, F.A.; Engle, M.A.; Cohen, B.; Skelly, N.T.; Carter, J.
1983-08-01
Digital subtraction angiography (DSA) permits high-resolution cardiac imaging with relatively low doses of contrast medium and reduced radiation exposure. These are potential advantages in children with congenital heart disease. Computer-based DSA (30 frames/sec) and conventional cutfilm angiography (6 frames/sec) or cineangiography (60 frames/sec) were compared in 42 patients, ages 2 months to 18 years (mean 7.8 years) and weighing 3.4 to 78.5 kg (mean 28.2 kg). There were 29 diagnoses that included valvular regurgitant lesions, obstructive lesions, various shunt abnormalities, and a group of miscellaneous anomalies. For injections made at a site distant from the lesion and on the right side of the circulation, the mean dose of contrast medium was 60% to 100% of the conventional dose given during standard angiography. With injections made close to the lesion and on the left side of the circulation, the mean dose of contrast medium was 27.5% to 42% of the conventional dose. Radiation exposure for each technique was markedly reduced in all age groups. A total of 92 digital subtraction angiograms were performed. Five studies were suboptimal because too little contrast medium was injected; in the remaining 87 injections, DSA and conventional studies resulted in identical diagnoses in 81 instances (p < 0.001 vs chance). The remaining six injections made during DSA failed to confirm diagnoses made angiographically by standard cutfilm angiography or cineangiography. We conclude that DSA usually provides diagnostic information equivalent to that available from cutfilm angiography and cineangiography, but DSA requires considerably lower doses of contrast medium and less radiation exposure than standard conventional methods.
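The quoted significance of 81/87 matched diagnoses can be sanity-checked with a binomial tail probability. The 50/50 chance baseline used below is an assumption for illustration; the abstract does not state the paper's exact null model or test.

```python
# Binomial tail check: probability of 81 or more of 87 agreements by chance
# under an assumed 50/50 null, which is far below the quoted p < 0.001.
from math import comb

def binom_tail(n, k, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

p_value = binom_tail(87, 81)
print(p_value < 0.001)  # -> True
```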
A variable acceleration calibration system
NASA Astrophysics Data System (ADS)
Johnson, Thomas H.
2011-12-01
A variable acceleration calibration system that applies loads using gravitational and centripetal acceleration serves as an alternative, efficient and cost effective method for calibrating internal wind tunnel force balances. Two proof-of-concept variable acceleration calibration systems are designed, fabricated and tested. The NASA UT-36 force balance served as the test balance for the calibration experiments. The variable acceleration calibration systems are shown to be capable of performing three-component calibration experiments with an approximate applied load error on the order of 1% of the full-scale calibration loads. Sources of error are identified using experimental design methods and a propagation of uncertainty analysis. Three types of uncertainty are identified for the systems and are attributed to prediction error, calibration error and pure error. Angular velocity uncertainty is shown to be the largest identified source of prediction error. The calibration uncertainties using a production variable-acceleration-based system are shown to be potentially equivalent to those of current methods. A production-quality system could be realized using lighter materials and more precise instrumentation. Further research is needed to account for balance deflection, forcing effects due to vibration, and large tare loads. A gyroscope measurement technique is shown to be capable of resolving the balance deflection angle calculation. Long-term research objectives include a demonstration of a six-degree-of-freedom calibration and a large-capacity balance calibration.
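Why angular-velocity uncertainty dominates the prediction error can be seen from the centripetal load law F = m ω² r: a relative error in ω enters the load twice (dF/F ≈ 2 dω/ω). The sketch below illustrates this first-order propagation; the numerical values are illustrative assumptions, not the paper's data.

```python
# First-order uncertainty propagation for a centripetal calibration load
# F = m * w^2 * r, assuming independent errors in m, w, and r.
import math

def centripetal_load(m_kg, omega_rad_s, r_m):
    """Centripetal load in newtons."""
    return m_kg * omega_rad_s**2 * r_m

def load_rel_uncertainty(rel_dm, rel_domega, rel_dr):
    """Relative load uncertainty; omega's contribution is doubled by the square."""
    return math.sqrt(rel_dm**2 + (2.0 * rel_domega)**2 + rel_dr**2)

F = centripetal_load(2.0, 10.0, 0.5)            # = 100 N (illustrative inputs)
u = load_rel_uncertainty(0.001, 0.005, 0.001)   # ~1%, dominated by the omega term
print(F, u)
```

With a 0.5% angular-velocity error and 0.1% errors elsewhere, the load error is about 1%, nearly all of it from ω, matching the abstract's qualitative finding.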
Accelerating Particles with Plasma
Litos, Michael; Hogan, Mark
2016-07-12
Researchers at SLAC explain how they use plasma wakefields to accelerate bunches of electrons to very high energies over only a short distance. Their experiments offer a possible path for the future of particle accelerators.
NASA Technical Reports Server (NTRS)
Chapman, C. P.
1972-01-01
Device is described that limits accelerations by shutting off shaker table power very rapidly in acceleration tests. Absolute value of accelerometer signal is used to trigger electronic switch which terminates test and sounds alarm.
... equipment? How is safety ensured? What is this equipment used for? A linear accelerator (LINAC) is the ... Therapy (SBRT). How does the equipment work? The linear accelerator uses microwave technology (similar ...
NASA Technical Reports Server (NTRS)
Cheng, D. Y.
1971-01-01
Converging, coaxial accelerator electrode configuration operates in vacuum as plasma gun. Plasma forms by periodic injections of high pressure gas that is ionized by electrical discharges. Deflagration mode of discharge provides acceleration, and converging contours of plasma gun provide focusing.
Accelerator Technology Division
NASA Astrophysics Data System (ADS)
1992-04-01
In fiscal year (FY) 1991, the Accelerator Technology (AT) division continued fulfilling its mission to pursue accelerator science and technology and to develop new accelerator concepts for application to research, defense, energy, industry, and other areas of national interest. This report discusses the following programs: The Ground Test Accelerator Program; APLE Free-Electron Laser Program; Accelerator Transmutation of Waste; JAERI, OMEGA Project, and Intense Neutron Source for Materials Testing; Advanced Free-Electron Laser Initiative; Superconducting Super Collider; The High-Power Microwave Program; Φ Factory Collaboration; Neutral Particle Beam Power System Highlights; Accelerator Physics and Special Projects; Magnetic Optics and Beam Diagnostics; Accelerator Design and Engineering; Radio-Frequency Technology; Free-Electron Laser Technology; Accelerator Controls and Automation; Very High-Power Microwave Sources and Effects; and GTA Installation, Commissioning, and Operations.
Accelerator science in medical physics.
Peach, K; Wilson, P; Jones, B
2011-12-01
The use of cyclotrons and synchrotrons to accelerate charged particles in hospital settings for the purpose of cancer therapy is increasing. Consequently, there is a growing demand from medical physicists, radiographers, physicians and oncologists for articles that explain the basic physical concepts of these technologies. There are unique advantages and disadvantages to all methods of acceleration. Several promising alternative methods of accelerating particles also have to be considered since they will become increasingly available with time; however, there are still many technical problems with these that require solving. This article serves as an introduction to this complex area of physics, and will be of benefit to those engaged in cancer therapy, or who intend to acquire such technologies in the future.
Boundary-projection acceleration: A new approach to synthetic acceleration of transport calculations
Adams, M.L.; Martin, W.R.
1987-01-01
We present a new class of synthetic acceleration methods which can be applied to transport calculations regardless of geometry, discretization scheme, or mesh shape. Unlike other synthetic acceleration methods which base their acceleration on P1 equations, these methods use acceleration equations obtained by projecting the transport solution onto a coarse angular mesh only on cell boundaries. We demonstrate, via Fourier analysis of a simple model problem as well as numerical calculations of various problems, that the simplest of these methods are unconditionally stable with spectral radius less than or equal to c/3 (c being the scattering ratio), for several different discretization schemes in slab geometry. 28 refs., 4 figs., 3 tabs.
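The kind of Fourier analysis invoked above can be illustrated numerically with the classical slab-geometry model problem: source iteration has flat-mode eigenvalue ω_SI = c·arctan(λ)/λ, and a consistent diffusion correction gives ω_DSA = ω_SI + c(ω_SI − 1)/(1 − c + λ²/3). This is the standard textbook DSA analysis, not the paper's boundary-projection operators; it merely shows how a spectral radius bound like c/3 is checked by scanning Fourier frequencies.

```python
# Numerical scan of Fourier eigenvalues for the classical slab model problem:
# source iteration (SI) degrades toward spectral radius c, while consistent
# DSA stays near 0.2247*c, comfortably below c/3.
import math

def w_si(c, lam):
    """SI error eigenvalue for Fourier frequency lam (isotropic scattering)."""
    return c * math.atan(lam) / lam

def w_dsa(c, lam):
    """DSA-accelerated eigenvalue: sweep plus diffusion correction."""
    return w_si(c, lam) + c * (w_si(c, lam) - 1.0) / (1.0 - c + lam**2 / 3.0)

lams = [10 ** (k / 100) for k in range(-300, 301)]   # frequencies 1e-3 .. 1e3
c = 0.9999                                           # near-critical scattering ratio
rho_si = max(abs(w_si(c, l)) for l in lams)          # approaches c (slow)
rho_dsa = max(abs(w_dsa(c, l)) for l in lams)        # ~0.2247*c, well under c/3
print(rho_si, rho_dsa)
```

The supremum of |ω_DSA| occurs near λ ≈ 2.5 and is about 0.2247c, the well-known consistent-DSA bound; the c/3 bound quoted for the boundary-projection methods is looser but still guarantees rapid convergence for all c ≤ 1.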