A Diffusion Synthetic Acceleration Method for Block Adaptive Mesh Refinement.
Ward, R. C.; Baker, R. S.; Morel, J. E.
2005-01-01
A prototype two-dimensional Diffusion Synthetic Acceleration (DSA) method on a Block-based Adaptive Mesh Refinement (BAMR) transport mesh has been developed. The Block-Adaptive Mesh Refinement Diffusion Synthetic Acceleration (BAMR-DSA) method was tested in the PARallel TIme-Dependent SN (PARTISN) deterministic transport code. The BAMR-DSA equations are derived by differencing the DSA equation using a vertex-centered diffusion discretization that is diamond-like and may be characterized as 'partially' consistent. The derivation of a diffusion discretization that is fully consistent with diamond transport differencing on a BAMR mesh does not appear to be possible. However, despite being only partially consistent, the BAMR-DSA method is effective for many applications. The BAMR-DSA solver was implemented and tested in two dimensions for rectangular (XY) and cylindrical (RZ) geometries. Testing confirms that a partially consistent BAMR-DSA method will introduce instabilities for extreme cases, e.g., scattering ratios approaching 1.0 with optically thick cells, but for most realistic problems the BAMR-DSA method provides effective acceleration. The initial implementation, which stored the BAMR-DSA equations in a full matrix and solved them by LU decomposition, has been extended with Compressed Sparse Row (CSR) storage and a Conjugate Gradient (CG) solver, which together provide significantly more efficient storage and faster solution.
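The CSR-plus-CG upgrade described above can be illustrated with a minimal sketch (not the PARTISN implementation; the small tridiagonal test matrix and sizes are made up for the example), storing an SPD diffusion-like system in Compressed Sparse Row form and solving it with unpreconditioned conjugate gradients:

```python
import numpy as np

def csr_matvec(data, indices, indptr, x):
    """y = A @ x for a matrix stored in Compressed Sparse Row form."""
    y = np.zeros(len(indptr) - 1)
    for i in range(len(y)):
        for k in range(indptr[i], indptr[i + 1]):
            y[i] += data[k] * x[indices[k]]
    return y

def conjugate_gradient(matvec, b, tol=1e-10, max_iter=1000):
    """Unpreconditioned CG for a symmetric positive-definite system."""
    x = np.zeros_like(b)
    r = b - matvec(x)
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# 1D diffusion-like SPD test system, A = tridiag(-1, 2, -1), built in CSR form
n = 5
data, indices, indptr = [], [], [0]
for i in range(n):
    for j, v in ((i - 1, -1.0), (i, 2.0), (i + 1, -1.0)):
        if 0 <= j < n:
            data.append(v)
            indices.append(j)
    indptr.append(len(data))
data, indices, indptr = np.array(data), np.array(indices), np.array(indptr)

b = np.ones(n)
x = conjugate_gradient(lambda v: csr_matvec(data, indices, indptr, v), b)
```

For a real BAMR-DSA solve the matrix would come from the vertex-centered diffusion discretization, and a preconditioner would typically be added to the CG loop.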
Effect of Material Homogeneity on the Performance of DSA for Even-Parity S_{n} Methods
Azmy, Y.Y.; Morel, J.; Wareing, T.
1999-09-27
A spectral analysis is conducted for the Source Iteration (SI) and Diffusion Synthetic Acceleration (DSA) operators previously formulated for solving the Even-Parity Method (EPM) equations. In order to accommodate material heterogeneity, the analysis is performed for the Periodic Horizontal Interface (PHI) configuration. The dependence of the spectral radius on the optical thickness of the two PHI layers illustrates the deterioration in the rate of convergence with increasing material discontinuity, especially when one of the layers approaches a void. The rate at which this deterioration occurs is determined for a specific material discontinuity in order to demonstrate the conditional robustness of the EPM-DSA iterations. The results of the analysis are put in perspective via numerical tests with the DANTE code (McGhee et al., 1997), which exhibits a deterioration in the spectral radius consistent with the theory.
Final report on DSA methods for monitoring alumina in aluminum reduction cells with cermet anodes
Windisch, C.F. Jr.
1992-04-01
The Sensors Development Program was conducted at the Pacific Northwest Laboratory (PNL) for the US Department of Energy, Office of Industrial Processes. The work was performed in conjunction with the Inert Electrodes Program at PNL. The objective of the Sensors Development Program in FY 1990 through FY 1992 was to determine whether methods based on digital signal analysis (DSA) could be used to measure alumina concentration in aluminum reduction cells. Specifically, this work was performed to determine whether useful correlations exist between alumina concentration and various DSA-derived quantification parameters, calculated for current and voltage signals from laboratory and field aluminum reduction cells. If appropriate correlations could be found, then the quantification parameters might be used to monitor and, consequently, help control the alumina concentration in commercial reduction cells. The control of alumina concentration is especially important for cermet anodes, which have exhibited instability and excessive wear at alumina concentrations removed from saturation.
Liu, Bin; Zhang, Bingbing; Wan, Chao; Dong, Yihuan
2014-01-01
To reduce motion artifacts caused by patient movement in cerebral DSA images, a non-rigid registration method based on stretching transformation is presented in this paper. Unlike traditional methods, it does not require bilinear interpolation, which is time-consuming and can even produce gray values that do not exist in the original image. In this method, the mask image is rasterized to generate appropriate control points. The Energy of Histogram of Differences criterion is adopted as the similarity measure, and the Powell algorithm is used for acceleration. A forward stretching transformation completes the motion estimation, and an inverse stretching transformation generates the target image via a pixel-mapping strategy. The method effectively maintains the topological relationships of gray values before and after image deformation. The mask image retains clear and accurate contours, and the quality of the subtraction image after registration is favorable. This method can support the clinical treatment and diagnosis of cerebral disease. PMID:24212008
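The Energy of Histogram of Differences similarity measure can be sketched as follows; the abstract does not give the exact formula, so the normalization and bin count here are assumptions:

```python
import numpy as np

def ehd(mask, live, bins=64):
    """Energy of the Histogram of Differences: the sum of squared,
    normalized bin counts of the difference image's histogram. A histogram
    concentrated in few bins (well-aligned frames) scores high; a
    spread-out one (misalignment) scores low."""
    diff = (mask.astype(float) - live.astype(float)).ravel()
    hist, _ = np.histogram(diff, bins=bins)
    p = hist / diff.size
    return float((p ** 2).sum())

# Toy check: a registered frame should outscore a 3-pixel misregistration
rng = np.random.default_rng(0)
mask = rng.random((64, 64))
aligned = mask + rng.normal(0.0, 0.005, mask.shape)
shifted = np.roll(mask, 3, axis=1) + rng.normal(0.0, 0.005, mask.shape)
score_aligned, score_shifted = ehd(mask, aligned), ehd(mask, shifted)
```

In the paper's setting, an optimizer such as Powell's method would search the stretching-transformation parameters for the maximum of this criterion.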
NASA Astrophysics Data System (ADS)
Xu, Jing; Wu, Jian; Feng, Daming; Cui, Zhiming
Serious vascular diseases such as carotid stenosis, aneurysm, and vascular malformation may lead to stroke, the third leading cause of death and the leading cause of disability. In the clinical diagnosis and treatment of cerebral vascular diseases, effective detection and description of the vascular structure in two-dimensional angiography sequence images, i.e., blood vessel skeleton extraction, has long been a difficult problem. This paper discusses two-dimensional blood vessel skeleton extraction based on the level set method. First, the DSA image is preprocessed: an anti-concentration diffusion model is used for effective enhancement, and an improved Otsu local threshold segmentation technique based on regional division is used for binarization. Then, vascular skeleton extraction is carried out with the group marching method (GMM) combined with fast sweeping. Experiments show that our approach not only improves the time complexity but also produces good extraction results.
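As a reference point for the binarization step, plain global Otsu thresholding (the baseline that the paper's improved local variant builds on) can be sketched as follows; the toy bimodal "angiogram" data are illustrative:

```python
import numpy as np

def otsu_threshold(image):
    """Global Otsu threshold: maximize between-class variance over 256 bins."""
    hist, _ = np.histogram(image, bins=256, range=(0, 256))
    p = hist / hist.sum()
    levels = np.arange(256)
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()      # class probabilities
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (levels[:t] * p[:t]).sum() / w0  # class means
        mu1 = (levels[t:] * p[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Toy bimodal image: dark background around 40, bright vessels around 200
rng = np.random.default_rng(0)
img = np.clip(np.concatenate([rng.normal(40, 10, 900),
                              rng.normal(200, 10, 100)]), 0, 255)
t = otsu_threshold(img)
binary = img >= t
```

A local variant like the paper's would apply this per region rather than once globally.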
Rother, T; Duck, H J; Neugebauer, A; Löbe, M
1992-07-01
In the invasive diagnostics of coronary heart disease (CHD), three DSA examinations of the left coronary artery were performed at 2-minute intervals in each of ten patients, following conventional examination by left-side cardiac catheter and coronary angiography. With the patient in the left anterior oblique (60 degrees) position, 6 ml of ionic contrast medium was injected mechanically per run at a flow of 4 ml/sec and a pacemaker-induced heart rate of 100/min. Examinations were performed according to a standard mode and were evaluated with the image-analysing computer APU of the Philips DVI-DSA system. The purpose of this approach was to analyse the examination conditions and a new, improved evaluation algorithm with respect to stability, feasibility and sensitivity. 17 series were evaluated by two examiners independent of each other. The interobserver differences obtained were between 5% at the time of maximum density (Tmax) and 25% for the exponential downward slope of the curve (lambda), with reference to the median value in each case. Scatter of the individual examinations around the median value of all three DSA runs is 11 to 17%, with the exception of lambda. A significant rise can be proven in the RCX region for the curve-slope parameters "slope" and "RFL2". We interpret this as a genuine 1.2- to 1.3-fold regional flow increase due to the residual effect of the contrast medium. At the same time, this can be interpreted as an indicator of the good sensitivity of the method. (ABSTRACT TRUNCATED AT 250 WORDS)
Accelerator system and method of accelerating particles
NASA Technical Reports Server (NTRS)
Wirz, Richard E. (Inventor)
2010-01-01
An accelerator system and method that utilize dust as the primary mass flux for generating thrust are provided. The accelerator system can include an accelerator capable of operating in a self-neutralizing mode and having a discharge chamber and at least one ionizer capable of charging dust particles. The system can also include a dust particle feeder that is capable of introducing the dust particles into the accelerator. By applying a pulsed positive and negative charge voltage to the accelerator, the charged dust particles can be accelerated thereby generating thrust and neutralizing the accelerator system.
Accelerated molecular dynamics methods
Perez, Danny
2011-01-04
The molecular dynamics method, although extremely powerful for materials simulations, is limited to time scales of roughly one microsecond or less. On longer time scales, dynamical evolution typically consists of infrequent events, which are usually activated processes. This course is focused on understanding infrequent-event dynamics, on methods for characterizing infrequent-event mechanisms and rate constants, and on methods for simulating long time scales in infrequent-event systems, emphasizing the recently developed accelerated molecular dynamics methods (hyperdynamics, parallel replica dynamics, and temperature accelerated dynamics). Some familiarity with basic statistical mechanics and molecular dynamics methods will be assumed.
An integrated source/mask/DSA optimization approach
NASA Astrophysics Data System (ADS)
Fühner, Tim; Michalak, Przemysław; Welling, Ulrich; Orozco-Rey, Juan Carlos; Müller, Marcus; Erdmann, Andreas
2016-03-01
The introduction of DSA for lithography is still obstructed by a number of technical issues including the lack of a comprehensive computational platform. This work presents a direct source/mask/DSA optimization (SMDSAO) method, which incorporates standard lithographic metrics and figures of merit such as the maximization of process windows. The procedure is demonstrated for a contact doubling example, assuming grapho-epitaxy-DSA. To retain a feasible runtime, a geometry-based Interface Hamiltonian DSA model is employed. The feasibility of this approach is demonstrated through several results and their comparison with more rigorous DSA models.
Manufacturability considerations for DSA
NASA Astrophysics Data System (ADS)
Farrell, Richard A.; Hosler, Erik R.; Schmid, Gerard M.; Xu, Ji; Preil, Moshe E.; Rastogi, Vinayak; Mohanty, Nihar; Kumar, Kaushik; Cicoria, Michael J.; Hetzer, David R.; DeVilliers, Anton
2014-03-01
Implementation of Directed Self-Assembly (DSA) as a viable lithographic technology for high volume manufacturing will require significant efforts to co-optimize the DSA process options and constraints with existing work flows. These work flows include established etch stacks, integration schemes, and design layout principles. The two foremost patterning schemes for DSA, chemoepitaxy and graphoepitaxy, each have their own advantages and disadvantages. Chemoepitaxy is well suited for regular repeating patterns, but has challenges when non-periodic design elements are required. As the line-space polystyrene-block-polymethylmethacrylate chemoepitaxy DSA processes mature, considerable progress has been made on reducing the density of topological (dislocation and disclination) defects, but little is known about the existence of 3D buried defects and their subsequent pattern transfer to underlayers. In this paper, we highlight the emergence of a specific type of buried bridging defect within our two 28 nm pitch DSA flows and summarize our efforts to characterize and eliminate the buried defects using process, materials, and plasma-etch optimization. We also discuss how the optimization and removal of the buried defects impacts both the process window and pitch multiplication and facilitates measurement of pattern roughness rectification, and we demonstrate hard-mask open within a back-end-of-line integration flow. Finally, since graphoepitaxy has intrinsic benefits in terms of design flexibility when compared to chemoepitaxy, we highlight our initial investigations into implementing high-chi block copolymer patterning using multiple graphoepitaxy flows to realize sub-20 nm pitch line-space patterns and discuss the benefits of using high-chi block copolymers for roughness reduction.
4D-DSA and 4D fluoroscopy: preliminary implementation
NASA Astrophysics Data System (ADS)
Mistretta, C. A.; Oberstar, E.; Davis, B.; Brodsky, E.; Strother, C. M.
2010-04-01
We have described methods that allow highly accelerated MRI using under-sampled acquisitions and constrained reconstruction. One is a hybrid acquisition involving the constrained reconstruction of time-dependent information obtained from a separate scan of longer duration. We have developed reconstruction algorithms for DSA that allow use of a single injection to provide the temporal data required for flow visualization and the steady-state data required for construction of a 3D-DSA vascular volume. The result is time-resolved 3D volumes with typical resolution of 512^3 at frame rates of 20-30 fps. Full manipulation of these images is possible during each stage of vascular filling, thereby allowing simplified interpretation of vascular dynamics. For intravenous angiography this time-resolved 3D capability overcomes the vessel-overlap problem that greatly limited the use of conventional intravenous 2D-DSA. Following further hardware development, it will also be possible to rotate fluoroscopic volumes for use as roadmaps that can be viewed at arbitrary angles without a need for gantry rotation. The most precise implementation of this capability requires the availability of biplane fluoroscopy data. Since the reconstruction of 3D volumes presently suppresses the contrast in the soft tissue, the possibility of using these techniques to derive complete indications of perfusion deficits based on cerebral blood volume (CBV), mean transit time (MTT) and time to peak (TTP) parameters requires further investigation. Using MATLAB post-processing, successful studies in animals and humans, done in conjunction with both intravenous and intra-arterial injections, have been completed. Real-time implementation is in progress.
ECG-synchronized DSA exposure control: improved cervicothoracic image quality
Kelly, W.M.; Gould, R.; Norman, D.; Brant-Zawadzki, M.; Cox, L.
1984-10-01
An electrocardiogram (ECG)-synchronized x-ray exposure sequence was used to acquire digital subtraction angiographic (DSA) images during 13 arterial injection studies of the aortic arch or carotid bifurcations. These gated images were compared with matched ungated DSA images acquired using the same technical factors, contrast material volume, and patient positioning. Subjective assessments by five experienced observers of edge definition, vessel conspicuousness, and overall diagnostic quality showed overall preference for one of the two acquisition methods in 69% of cases studied. Of these, the ECG-synchronized exposure series were rated superior in 76%. These results, as well as the relatively simple and inexpensive modifications required, suggest that routine use of ECG exposure control can facilitate improved arterial DSA evaluations of suspected cervicothoracic vascular disease.
Lasers and new methods of particle acceleration
Parsa, Z.
1998-02-01
There has been great progress in the development of high-power laser technology. Harnessing its potential for particle accelerators is a challenge and of great interest for the development of future high-energy colliders. The author discusses some of the advances and new methods of acceleration, including plasma-based accelerators. The exponential increase in the sophistication and power of all aspects of accelerator development and operation has been remarkable. This success has been driven by the inherent interest in gaining a new and deeper understanding of the universe around us. With the limitations of conventional technology, it may not be possible to meet the requirements of future accelerators, with their demands for ever higher energies and luminosities. It is believed that using existing technology one can build a linear collider with about 1 TeV center-of-mass energy. However, it would be very difficult (or impossible) to build linear colliders with energies much above one or two TeV without a new method of acceleration. Laser-driven high-gradient accelerators are becoming more realistic and are expected to provide a more compact and more economical alternative to conventional accelerators in the future. The author discusses some of the new methods of particle acceleration, including laser- and particle-beam-driven plasma-based accelerators and near- and far-field accelerators. He also discusses the enhanced IFEL (Inverse Free Electron Laser) and NAIBEA (Nonlinear Amplification of Inverse-Beamstrahlung Electron Acceleration) schemes, laser-driven photo-injectors and the high-energy physics requirements.
Proactive DSA application and implementation
Draelos, T.; Hamilton, V.; Istrail, G.
1998-05-03
Data authentication as provided by digital signatures is a well known technique for verifying data sent via untrusted network links. Recent work has extended digital signatures to allow jointly generated signatures using threshold techniques. In addition, new proactive mechanisms have been developed to protect the joint private key over long periods of time and to allow each of the parties involved to verify the actions of the other parties. In this paper, the authors describe an application in which proactive digital signature techniques are a particularly valuable tool. They describe the proactive DSA protocol and discuss the underlying software tools that they found valuable in developing an implementation. Finally, the authors briefly describe the protocol and note difficulties they experienced and continue to experience in implementing this complex cryptographic protocol.
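For orientation, the core DSA sign/verify arithmetic that the threshold and proactive protocols distribute among parties looks as follows. This toy uses deliberately tiny, insecure parameters (p = 23, q = 11, g = 2) so the numbers can be followed by hand; real DSA uses the parameter sizes mandated by FIPS 186 and a cryptographic hash of the message:

```python
import secrets

# Toy DSA parameters, insecure by design and chosen only for readability:
# q divides p - 1, and g has multiplicative order q modulo p.
p, q, g = 23, 11, 2

def keygen():
    x = secrets.randbelow(q - 1) + 1   # private key in [1, q-1]
    return x, pow(g, x, p)             # (x, y = g^x mod p)

def sign(x, h, k):
    """Sign hash value h with private key x and per-message nonce k."""
    r = pow(g, k, p) % q
    s = (pow(k, -1, q) * (h + x * r)) % q
    return r, s

def verify(y, h, r, s):
    w = pow(s, -1, q)
    u1, u2 = (h * w) % q, (r * w) % q
    return (pow(g, u1, p) * pow(y, u2, p) % p) % q == r

x, y = 7, pow(g, 7, p)       # fixed keypair for a reproducible demo
r, s = sign(x, h=5, k=3)     # h = 5 stands in for a message hash reduced mod q
ok = verify(y, 5, r, s)      # signature over a different hash would fail
```

Roughly speaking, in the threshold/proactive setting no single party ever holds x: shares of x and of the nonce k are combined so that r and s are produced jointly, and the shares are periodically refreshed.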
Tracking of Acceleration with HNJ Method
Ruggiero,A.
2008-02-01
After reviewing the principle of operation of acceleration with the method of Harmonic Number Jump (HNJ) in a Fixed-Field Alternating Gradient (FFAG) accelerator for protons and heavy ions, we report in this talk the results of computer simulations performed to assess the capability and the limits of the method in a variety of practical situations. Though the study is not yet completed, and there still remain other cases to be investigated, nonetheless the tracking results so far obtained are very encouraging, and confirm the validity of the method.
Critical assessment of accelerating trypsination methods.
Hustoft, Hanne Kolsrud; Reubsaet, Leon; Greibrokk, Tyge; Lundanes, Elsa; Malerod, Helle
2011-12-15
In LC-MS based proteomics, several accelerating trypsination methods have been introduced to speed up protein digestion, which is often considered a bottleneck. Traditionally and most commonly, due to sample heterogeneity, overnight digestion at 37 °C is performed in order to digest both easily digested and more resistant proteins. High-efficiency protein identification is important in proteomics; hours of LC-MS/MS analysis are wasted if the majority of the proteins are not digested. Based on preliminary experiments utilizing some of the suggested accelerating methods, we asked whether accelerating digestion methods really provide the same protein identification efficiency as overnight digestion. In the present study we evaluated four different accelerating trypsination methods (infrared (IR)-assisted, microwave-assisted, solvent-aided and immobilized trypsination). The methods were compared with conventional digestion at 37 °C over the same time range using a four-protein mixture. Sequence coverage and the peak area of intact proteins were used for the comparison. The accelerating methods were able to digest the proteins, but none appeared to be more efficient than the conventional digestion method at 37 °C. The conventional method at 37 °C is easy to perform using commercially available instrumentation and appears to be the digestion method to use. The digestion time in targeted proteomics can be optimized for each protein, while in comprehensive proteomics the digestion time should be extended due to sample heterogeneity and the influence of other proteins present. Recommendations for optimizing and evaluating tryptic digestion in both targeted and comprehensive proteomics are given, and a digestion method suitable as a first method for newcomers to comprehensive proteomics is suggested.
DSA hole defectivity analysis using advanced optical inspection tool
NASA Astrophysics Data System (ADS)
Harukawa, Ryota; Aoki, Masami; Cross, Andrew; Nagaswami, Venkat; Tomita, Tadatoshi; Nagahara, Seiji; Muramatsu, Makoto; Kawakami, Shinichiro; Kosugi, Hitoshi; Rathsack, Benjamen; Kitano, Takahiro; Sweis, Jason; Mokhberi, Ali
2013-04-01
This paper discusses the defect density detection and analysis methodology using advanced optical wafer inspection capability to enable accelerated development of a DSA process and process tools, and the required inspection capability to monitor such a process. The defectivity inspection methodologies are optimized for grapho-epitaxy directed self-assembly (DSA) contact holes of 25 nm size. A defect test reticle with programmed defects on guide patterns is designed for improved optimization of defectivity monitoring. Using this reticle, resist guide holes with a variety of sizes and shapes are patterned using an ArF immersion scanner. A negative tone development (NTD) type thermally stable resist guide is used for DSA of a polystyrene-b-poly(methyl methacrylate) (PS-b-PMMA) block copolymer (BCP). Using a variety of defects intentionally made by changing guide pattern sizes, the detection rate of each specific defect type has been analyzed. It is found in this work that, to maximize sensitivity, a two-pass scan with bright field (BF) and dark field (DF) modes provides the best overall defect type coverage and sensitivity. The performance of the two-pass scan with BF and DF modes is also revealed by defect analysis of baseline defectivity on a wafer processed under nominal process conditions.
Ultra low radiation dose digital subtraction angiography (DSA) imaging using low rank constraint
NASA Astrophysics Data System (ADS)
Niu, Kai; Li, Yinsheng; Schafer, Sebastian; Royalty, Kevin; Wu, Yijing; Strother, Charles; Chen, Guang-Hong
2015-03-01
In this work we developed a novel denoising algorithm for DSA image series. This algorithm takes advantage of the low-rank nature of DSA image sequences to enable a dramatic reduction in radiation and/or contrast doses in DSA imaging. Both spatial and temporal regularizers were introduced in the optimization algorithm to further reduce noise. To validate the method, in vivo animal studies were conducted with a Siemens Artis Zee biplane system using different radiation dose levels and contrast concentrations. Both conventionally processed DSA images and the DSA images generated using the novel denoising method were compared using absolute noise standard deviation and the contrast-to-noise ratio (CNR). With the application of the novel denoising algorithm for DSA, image quality can be maintained with a radiation dose reduction by a factor of 20 and/or a factor of 2 reduction in contrast dose. Image processing is completed on a GPU within a second for a 10 s DSA data acquisition.
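The low-rank step at the heart of such an algorithm can be sketched with a truncated SVD of the frame stack; the rank, frame sizes, and noise level below are illustrative, and the paper's spatial and temporal regularizers are omitted:

```python
import numpy as np

def low_rank_denoise(frames, rank):
    """Project a (T, H, W) frame stack onto its best rank-`rank` approximation.

    The sequence is flattened to a (T, H*W) matrix whose SVD is truncated;
    this is the low-rank step only -- the full algorithm adds spatial and
    temporal regularization on top of it.
    """
    t, h, w = frames.shape
    u, s, vt = np.linalg.svd(frames.reshape(t, h * w), full_matrices=False)
    s[rank:] = 0.0
    return (u @ np.diag(s) @ vt).reshape(t, h, w)

# Toy sequence: a rank-1 "vessel" pattern whose brightness ramps over 8
# frames, plus additive noise (sizes and noise level are made up)
rng = np.random.default_rng(1)
clean = np.outer(np.linspace(1.0, 2.0, 8), rng.random(16 * 16)).reshape(8, 16, 16)
noisy = clean + rng.normal(0.0, 0.05, clean.shape)
denoised = low_rank_denoise(noisy, rank=1)

err_noisy = np.linalg.norm(noisy - clean)
err_denoised = np.linalg.norm(denoised - clean)
```

Because the clean sequence is exactly rank 1 here, the truncation discards most of the noise while keeping the signal, so the reconstruction error drops.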
Accelerated Learning: Madness with a Method.
ERIC Educational Resources Information Center
Zemke, Ron
1995-01-01
Accelerated learning methods have evolved into a variety of holistic techniques that involve participants in the learning process and overcome negative attitudes about learning. These components are part of the mix: the brain, learning environment, music, imaginative activities, suggestion, positive mental state, the arts, multiple intelligences,…
NASA Astrophysics Data System (ADS)
Wei, Liyang; Shen, Dinggang; Kumar, Dinesh; Turlapati, Ram; Suri, Jasjit S.
2008-02-01
DSA images suffer from challenges like system X-ray noise and artifacts due to patient movement. In this paper, we present a two-step strategy to improve DSA image quality. First, a hierarchical deformable registration algorithm is used to register the mask frame and the bolus frame before subtraction. Second, the resulting DSA image is further enhanced by background diffusion and nonlinear normalization for better visualization. Two major changes are made in the hierarchical deformable registration algorithm for DSA images: 1) B-splines are used to represent the deformation field in order to produce a smooth deformation field; 2) two features are defined as the attribute vector for each point in the image, i.e., original image intensity and gradient. Also, to speed up the 2D image registration, the hierarchical motion compensation algorithm is implemented in a multi-resolution framework. The proposed method has been evaluated on a database of 73 subjects by quantitatively measuring the signal-to-noise ratio (SNR). DSA embedded with the proposed strategies demonstrates an improvement of 74.1% over conventional DSA in terms of SNR. Our system runs on Eigen's DSA workstation using C++ in a Windows environment.
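The SNR evaluation can be sketched as follows; the abstract does not spell out the exact SNR definition used, so the ROI-based ratio below (mean vessel signal over background noise standard deviation) is an assumption:

```python
import numpy as np

def dsa_snr(subtraction, vessel_mask, background_mask):
    """Assumed SNR definition: mean signal inside a vessel region of
    interest divided by the noise standard deviation in a background ROI."""
    signal = subtraction[vessel_mask].mean()
    noise = subtraction[background_mask].std()
    return float(signal / noise)

# Toy subtraction image: a bright vessel strip on unit-variance noise
rng = np.random.default_rng(0)
img = rng.normal(0.0, 1.0, (32, 32))
vessel = np.zeros((32, 32), dtype=bool)
vessel[:, 14:18] = True
img[vessel] += 10.0
snr = dsa_snr(img, vessel, ~vessel)
```

With this construction the measured SNR lands near 10, the injected signal-to-noise level of the toy image.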
Improved cost-effectiveness of the block co-polymer anneal process for DSA
NASA Astrophysics Data System (ADS)
Pathangi, Hari; Stokhof, Maarten; Knaepen, Werner; Vaid, Varun; Mallik, Arindam; Chan, Boon Teik; Vandenbroeck, Nadia; Maes, Jan Willem; Gronheid, Roel
2016-04-01
This manuscript first presents a cost model to compare the cost of ownership of DSA and SAQP for a typical front end of line (FEoL) line patterning exercise. Then, we proceed to a feasibility study of using a vertical furnace to batch anneal the block co-polymer for DSA applications. We show that the defect performance of such a batch anneal process is comparable to the process of record anneal methods. This helps in increasing the cost benefit for DSA compared to the conventional multiple patterning approaches.
Comparing methods of quantifying tibial acceleration slope.
Duquette, Adriana M; Andrews, David M
2010-05-01
Considerable variability in tibial acceleration slope (AS) values, and different interpretations of injury risk based on these values, have been reported. Acceleration slope variability may be due in part to variations in the quantification methods used. Therefore, the purpose of this study was to quantify differences in tibial AS values determined using end points at various percentage ranges between impact and peak tibial acceleration, as a function of either amplitude or time. Tibial accelerations were recorded from 20 participants (21.8 +/- 2.9 years, 1.7 m +/- 0.1 m, 75.1 kg +/- 17.0 kg) during 24 unshod heel impacts using a human pendulum apparatus. Nine ranges were tested from 5-95% (widest range) to 45-55% (narrowest range) at 5% increments. AS(Amplitude) values increased consistently from the widest to narrowest ranges, whereas the AS(Time) values remained essentially the same. The magnitudes of AS(Amplitude) values were significantly higher and more sensitive to changes in percentage range than AS(Time) values derived from the same impact data. This study shows that tibial AS magnitudes are highly dependent on the method used to calculate them. Researchers are encouraged to carefully consider the method they use to calculate AS so that equivalent comparisons and assessments of injury risk across studies can be made.
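The amplitude-based versus time-based endpoint distinction can be sketched as follows; the synthetic impact transient and the 5-95% endpoint choice are illustrative assumptions, not the study's data or exact definitions:

```python
import numpy as np

def slope_by_amplitude(t, a, lo=0.05, hi=0.95):
    """Slope between the first samples reaching lo and hi fractions of the
    peak acceleration (amplitude-based endpoints)."""
    peak = a.max()
    i_lo = int(np.argmax(a >= lo * peak))
    i_hi = int(np.argmax(a >= hi * peak))
    return (a[i_hi] - a[i_lo]) / (t[i_hi] - t[i_lo])

def slope_by_time(t, a, lo=0.05, hi=0.95):
    """Slope between lo and hi fractions of the time from onset to the
    acceleration peak (time-based endpoints)."""
    t_peak = t[int(np.argmax(a))]
    a_lo, a_hi = np.interp([lo * t_peak, hi * t_peak], t, a)
    return (a_hi - a_lo) / ((hi - lo) * t_peak)

# Synthetic impact transient (hypothetical shape): rises to a peak of 8
# at t = 10 ms, then decays
t = np.linspace(0.0, 0.02, 201)               # 20 ms sampled at 10 kHz
a = 8.0 * (t / 0.01) * np.exp(1.0 - t / 0.01)
```

On this transient the amplitude-based slope comes out larger than the time-based one, mirroring the study's finding that AS(Amplitude) magnitudes exceed AS(Time) for the same impact data.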
What promotes directed self-assembly (DSA)?
NASA Astrophysics Data System (ADS)
Nakagawa, S. T.
2016-09-01
A low-energy electron beam (EB) can create self-interstitial atoms (SIA) in a solid and can cause directed self-assembly (DSA), e.g. {3 1 1} SIA platelets in c-Si. The crystalline structure of this planar defect is known from experiment to be made up of SIAs that form well-aligned <1 1 0> atomic rows on each (3 1 1) plane. To simulate the experiment we distributed Frenkel pairs (FP) randomly in bulk c-Si. Then, making use of a molecular dynamics (MD) simulation, we have reproduced the experimental result, where SIAs are trapped at metastable sites in the bulk. With increasing pre-doped FP concentration, the number of SIAs that participate in DSA at first tends to increase but is then slightly suppressed. On the other hand, when the FP concentration is less than 3%, a cooperative motion of target atoms was characterized from the long-range-order (LRO) parameter. Here we investigated the correlation between DSA and that cooperative motion by adding the case of intrinsic c-Si. We confirmed that the cooperative motion slightly promotes DSA by assisting the migration of SIAs toward metastable sites as long as the FP concentration is less than 3%; however, it is otherwise essentially independent of DSA.
Application of image fusion techniques in DSA
NASA Astrophysics Data System (ADS)
Ye, Feng; Wu, Jian; Cui, Zhiming; Xu, Jing
2007-12-01
Digital subtraction angiography (DSA) is an important technology in both medical diagnosis and interventional therapy, which can eliminate the interfering background and give prominence to blood vessels by computer processing. After contrast material is injected into an artery or vein, a physician produces fluoroscopic images. Using these digitized images, a computer subtracts a pre-injection mask image from the series of post-injection images, removing background information. By analyzing the characteristics of DSA medical images, this paper provides an image fusion solution tailored to the DSA subtraction application. We fuse the angiogram and subtraction images in order to obtain a new image that carries more information. The image fused by wavelet transform displays the blood vessels and background information clearly, and medical experts rated its quality highly.
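Wavelet-based fusion of the angiogram and subtraction images can be sketched with a hand-rolled one-level Haar transform; the paper does not specify its wavelet basis or fusion rule, so the choice below (average the approximation bands, keep the larger-magnitude details) is one common convention, not necessarily theirs:

```python
import numpy as np

def haar2d(x):
    """One-level 2D Haar transform of an even-sized array -> (LL, LH, HL, HH)."""
    a = (x[0::2] + x[1::2]) / 2            # row averages
    d = (x[0::2] - x[1::2]) / 2            # row differences
    ll, lh = (a[:, 0::2] + a[:, 1::2]) / 2, (a[:, 0::2] - a[:, 1::2]) / 2
    hl, hh = (d[:, 0::2] + d[:, 1::2]) / 2, (d[:, 0::2] - d[:, 1::2]) / 2
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Exact inverse of haar2d."""
    a = np.zeros((ll.shape[0], ll.shape[1] * 2))
    d = np.zeros_like(a)
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    x = np.zeros((a.shape[0] * 2, a.shape[1]))
    x[0::2], x[1::2] = a + d, a - d
    return x

def fuse(img1, img2):
    """Average the approximation bands, keep the larger-magnitude details."""
    c1, c2 = haar2d(img1), haar2d(img2)
    ll = (c1[0] + c2[0]) / 2
    details = [np.where(np.abs(b1) >= np.abs(b2), b1, b2)
               for b1, b2 in zip(c1[1:], c2[1:])]
    return ihaar2d(ll, *details)

# Toy stand-ins for the angiogram and subtraction frames
rng = np.random.default_rng(2)
angio = rng.random((8, 8))
sub = rng.random((8, 8))
fused = fuse(angio, sub)
```

The max-magnitude rule on the detail bands is what lets the fused image keep sharp vessel edges from the subtraction frame while retaining background context from the angiogram.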
Projected discrete ordinates methods for numerical transport problems
Larsen, E.W.
1985-01-01
A class of Projected Discrete-Ordinates (PDO) methods is described for obtaining iterative solutions of discrete-ordinates problems with convergence rates comparable to those observed using Diffusion Synthetic Acceleration (DSA). The spatially discretized PDO solutions are generally not equal to the DSA solutions, but unlike DSA, which requires great care in the use of spatial discretizations to preserve stability, the PDO solutions remain stable and rapidly convergent with essentially arbitrary spatial discretizations. Numerical results are presented which illustrate the rapid convergence and the accuracy of solutions obtained using PDO methods with commonplace differencing methods.
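The convergence rates at issue can be made concrete with the classical infinite-medium model, in which unaccelerated source iteration contracts the error by the scattering ratio c per sweep, so iteration counts blow up as c approaches 1 (the regime DSA and PDO target). This is a scalar fixed-point sketch, not a spatially discretized transport sweep:

```python
# Infinite-medium model of unaccelerated source iteration:
#   phi_{n+1} = c * phi_n + q
# Each sweep multiplies the error by the scattering ratio c, so the
# spectral radius is c and convergence stalls as c -> 1.
c, q = 0.99, 1.0
phi_exact = q / (1.0 - c)            # fixed point of the iteration (here 100.0)

phi, iters = 0.0, 0
while abs(phi - phi_exact) > 1e-6 * phi_exact:
    phi = c * phi + q
    iters += 1
# About ln(1e-6)/ln(c) ~ 1375 sweeps are needed at c = 0.99; a DSA-type
# correction solves a cheap low-order problem instead and, in this
# zero-dimensional model, would converge immediately.
```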
Influence of template fill in graphoepitaxy DSA
NASA Astrophysics Data System (ADS)
Doise, Jan; Bekaert, Joost; Chan, Boon Teik; Hong, SungEun; Lin, Guanyang; Gronheid, Roel
2016-03-01
Directed self-assembly (DSA) of block copolymers (BCP) is considered a promising patterning approach for the 7 nm node and beyond. Specifically, a grapho-epitaxy process using a cylindrical phase BCP may offer an efficient solution for patterning randomly distributed contact holes with sub-resolution pitches, such as found in via and cut mask levels. In any grapho-epitaxy process, the pattern density impacts the template fill (the local BCP thickness inside the template) and may cause defects due to over- or underfilling of the template. In order to tackle this issue thoroughly, the parameters that determine template fill and the influence of template fill on the resulting pattern should be investigated. In this work, using three process flow variations (with different template surface energy), template fill is experimentally characterized as a function of pattern density and film thickness. The impact of these parameters on template fill is highly dependent on the process flow, and thus on the pre-pattern surface energy. Template fill has a considerable effect on the pattern transfer of the DSA contact holes into the underlying layer. Higher fill levels give rise to smaller contact holes and worse critical dimension uniformity. These results are important towards DSA-aware design and show that fill is a crucial parameter in grapho-epitaxy DSA.
NASA Astrophysics Data System (ADS)
Azmy, Y. Y.
1999-06-01
We propose preconditioning as a viable acceleration scheme for the inner iterations of transport calculations in slab geometry. In particular we develop Adjacent-Cell Preconditioners (AP) that have the same coupling stencil as cell-centered diffusion schemes. For lowest order methods, e.g., Diamond Difference, Step, and 0-order Nodal Integral Method (0NIM), cast in a Weighted Diamond Difference (WDD) form, we derive AP for thick (KAP) and thin (NAP) cells that for model problems are unconditionally stable and efficient. For the First-Order Nodal Integral Method (1NIM) we derive a NAP that possesses similarly excellent spectral properties for model problems. [Note that the order of NIM refers to the truncated order of the local expansion of the cell and edge fluxes in Legendre series.] The two most attractive features of our new technique are: (1) its cell-centered coupling stencil, which makes it more adequate for extension to multidimensional, higher order situations than the standard edge-centered or point-centered Diffusion Synthetic Acceleration (DSA) methods; and (2) its decreasing spectral radius with increasing cell thickness to the extent that immediate pointwise convergence, i.e., in one iteration, can be achieved for problems with sufficiently thick cells. We implemented these methods, augmented with appropriate boundary conditions and mixing formulas for material heterogeneities, in the test code AP1D that we use to successfully verify the analytical spectral properties for homogeneous problems. Furthermore, we conduct numerical tests to demonstrate the robustness of the KAP and NAP in the presence of sharp mesh or material discontinuities. We show that the AP for WDD is highly resilient to such discontinuities, but for 1NIM a few cases occur in which the scheme does not converge; however, when it converges, AP greatly reduces the number of iterations required to achieve convergence.
Azmy, Y.Y.
1999-06-10
The author proposes preconditioning as a viable acceleration scheme for the inner iterations of transport calculations in slab geometry. In particular he develops Adjacent-Cell Preconditioners (AP) that have the same coupling stencil as cell-centered diffusion schemes. For lowest order methods, e.g., Diamond Difference, Step, and 0-order Nodal Integral Method (0NIM), cast in a Weighted Diamond Difference (WDD) form, he derives AP for thick (KAP) and thin (NAP) cells that for model problems are unconditionally stable and efficient. For the First-Order Nodal Integral Method (1NIM) he derives a NAP that possesses similarly excellent spectral properties for model problems. The two most attractive features of the new technique are: (1) its cell-centered coupling stencil, which makes it more adequate for extension to multidimensional, higher order situations than the standard edge-centered or point-centered Diffusion Synthetic Acceleration (DSA) methods; and (2) its decreasing spectral radius with increasing cell thickness to the extent that immediate pointwise convergence, i.e., in one iteration, can be achieved for problems with sufficiently thick cells. He implemented these methods, augmented with appropriate boundary conditions and mixing formulas for material heterogeneities, in the test code AP1D that he uses to successfully verify the analytical spectral properties for homogeneous problems. Furthermore, he conducts numerical tests to demonstrate the robustness of the KAP and NAP in the presence of sharp mesh or material discontinuities. He shows that the AP for WDD is highly resilient to such discontinuities, but for 1NIM a few cases occur in which the scheme does not converge; however, when it converges, AP greatly reduces the number of iterations required to achieve convergence.
An implementation of differential search algorithm (DSA) for inversion of surface wave data
NASA Astrophysics Data System (ADS)
Song, Xianhai; Li, Lei; Zhang, Xueqiang; Shi, Xinchun; Huang, Jianquan; Cai, Jianchao; Jin, Si; Ding, Jianping
2014-12-01
Surface wave dispersion analysis is widely used in geophysics to infer near-surface shear (S)-wave velocity profiles for a wide variety of applications. However, inversion of surface wave data is challenging for most local-search methods due to its high nonlinearity and multimodality. In this work, we propose and implement a new Rayleigh wave dispersion curve inversion scheme based on the differential search algorithm (DSA), one of the recently developed swarm-intelligence-based algorithms. DSA is inspired by the seasonal migration behavior of living species and is designed for highly nonlinear, multivariable, and multimodal optimization problems. The proposed inverse procedure is applied to nonlinear inversion of fundamental-mode Rayleigh wave dispersion curves for near-surface S-wave velocity profiles. To evaluate the calculation efficiency and stability of DSA, four noise-free and four noisy synthetic data sets are first inverted. Then, the performance of DSA is compared with that of genetic algorithms (GA) on two noise-free synthetic data sets. Finally, a real-world example from a waste disposal site in NE Italy is inverted to examine the applicability and robustness of the proposed approach on field data, and the performance of DSA is again compared against that of GA to further evaluate the inverse procedure described here. Simulation results from both synthetic and actual field data demonstrate that the differential search algorithm, applied to nonlinear inversion of surface wave data, performs well in terms of both accuracy and convergence speed. The great advantages of DSA are that the algorithm is simple, robust, and easy to implement, with few control parameters to tune.
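A hedged sketch of the population-based search family DSA belongs to (migration of members toward randomly chosen donors with greedy selection). This simplified update is an assumption, not the published DSA rule, and the convex test objective is a smoke test rather than a dispersion-curve misfit:

```python
import random

def differential_search(f, bounds, pop_size=30, iters=200, seed=1):
    """Simplified migration-style stochastic search in the spirit of DSA:
    each member moves toward a randomly chosen donor by a random scale,
    and the move is kept only if it improves fitness. A sketch of the
    algorithm family, not the published DSA update rule."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    start_best = min(fit)
    for _ in range(iters):
        for i in range(pop_size):
            donor = pop[rng.randrange(pop_size)]
            scale = rng.gauss(0.0, 1.0)
            # move along the line through the member and its donor, clipped
            trial = [min(max(x + scale * (d - x), lo), hi)
                     for x, d, (lo, hi) in zip(pop[i], donor, bounds)]
            f_trial = f(trial)
            if f_trial < fit[i]:               # greedy selection
                pop[i], fit[i] = trial, f_trial
    i_best = min(range(pop_size), key=fit.__getitem__)
    return pop[i_best], fit[i_best], start_best

# smoke test on a convex bowl; real dispersion-curve misfits are multimodal
bowl = lambda x: sum(v * v for v in x)
best, f_best, f_start = differential_search(bowl, [(-5.0, 5.0)] * 2)
```

For a real inversion, `f` would be the misfit between observed and theoretically computed dispersion curves over the layered S-wave velocity model parameters.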
PARTICLE ACCELERATOR AND METHOD OF CONTROLLING THE TEMPERATURE THEREOF
Neal, R.B.; Gallagher, W.J.
1960-10-11
A method and means are offered for controlling the temperature of a particle accelerator and, more particularly, for maintaining a constant and uniform temperature throughout the accelerator. The novel feature of the invention resides in the provision of two individual heating applications to the accelerator structure. The first heating application is substantially a duplication of the heat created by energization of the accelerator; it is employed only when the accelerator is de-energized, thereby keeping the accelerator temperature constant over time whether the accelerator is energized or not. The second heating application is designed to add to either the first application or the energization heat so as to create the same uniform temperature throughout all portions of the accelerator.
Grisham, Larry R
2013-12-17
The present invention provides systems and methods for the magnetic insulation of accelerator electrodes in electrostatic accelerators. Advantageously, the systems and methods of the present invention improve the practically obtainable performance of these electrostatic accelerators by addressing, among other things, voltage holding problems and conditioning issues. The problems and issues are addressed by flowing electric currents along these accelerator electrodes to produce magnetic fields that envelope the accelerator electrodes and their support structures, so as to prevent very low energy electrons from leaving the surfaces of the accelerator electrodes and subsequently picking up energy from the surrounding electric field. In various applications, this magnetic insulation must only produce modest gains in voltage holding capability to represent a significant achievement.
Influence of litho patterning on DSA placement errors
NASA Astrophysics Data System (ADS)
Wuister, Sander; Druzhinina, Tamara; Ambesi, Davide; Laenens, Bart; Yi, Linda He; Finders, Jo
2014-03-01
Directed self-assembly (DSA) of block copolymers is currently being investigated as a shrinking technique complementary to lithography. One of the critical issues with this technique is that DSA induces placement error. In this paper, we study the relation between confinement by lithography and the placement error induced by DSA. Both 193i and EUV pre-patterns are created using a simple algorithm to confine two contact holes formed by DSA at a pitch of 45 nm. Full physical numerical simulations are used to compare the impact of the confinement on DSA-related placement error, pitch variations due to pattern variations, and phase separation defects.
Iterative acceleration methods for Monte Carlo and deterministic criticality calculations
Urbatsch, T.J.
1995-11-01
If you have ever given up on a nuclear criticality calculation and terminated it because it took so long to converge, you might find this thesis of interest. The author develops three methods for improving the fission source convergence in nuclear criticality calculations for physical systems with high dominance ratios for which convergence is slow. The Fission Matrix Acceleration Method and the Fission Diffusion Synthetic Acceleration (FDSA) Method are acceleration methods that speed fission source convergence for both Monte Carlo and deterministic methods. The third method is a hybrid Monte Carlo method that also converges for difficult problems where the unaccelerated Monte Carlo method fails. The author tested the feasibility of all three methods in a test bed consisting of idealized problems. He has successfully accelerated fission source convergence in both deterministic and Monte Carlo criticality calculations. By filtering statistical noise, he has incorporated deterministic attributes into the Monte Carlo calculations in order to speed their source convergence. He has used both the fission matrix and a diffusion approximation to perform unbiased accelerations. The Fission Matrix Acceleration method has been implemented in the production code MCNP and successfully applied to a real problem. When the unaccelerated calculations are unable to converge to the correct solution, they cannot be accelerated in an unbiased fashion. A Hybrid Monte Carlo method weds Monte Carlo and a modified diffusion calculation to overcome these deficiencies. The Hybrid method additionally possesses reduced statistical errors.
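The premise that a high dominance ratio slows fission-source convergence can be illustrated with unaccelerated power iteration on a toy 2x2 "fission matrix" whose dominance ratio d is set directly; the construction is hypothetical, not taken from the thesis:

```python
def power_iterations(mat, tol=1e-6, max_it=100000):
    """Count power-iteration steps needed to converge the source shape.
    The convergence rate is governed by the dominance ratio lambda2/lambda1."""
    n = len(mat)
    src = [1.0] + [0.0] * (n - 1)              # deliberately unconverged start
    for it in range(1, max_it + 1):
        new = [sum(mat[i][j] * src[j] for j in range(n)) for i in range(n)]
        norm = sum(new)
        new = [x / norm for x in new]
        if max(abs(a - b) for a, b in zip(new, src)) < tol:
            return it
        src = new
    return max_it

def toy_matrix(d):
    """Symmetric 2x2 matrix with eigenvalues 1 and d (dominance ratio d),
    eigenvectors (1,1) and (1,-1)."""
    return [[(1 + d) / 2, (1 - d) / 2], [(1 - d) / 2, (1 + d) / 2]]

fast = power_iterations(toy_matrix(0.5))
slow = power_iterations(toy_matrix(0.99))
```

The iteration count grows roughly like log(tol)/log(d) as d approaches 1, which is why acceleration (fission matrix, FDSA) is needed for high-dominance-ratio systems.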
NASA Astrophysics Data System (ADS)
Kato, Takeshi; Konishi, Junko; Ikota, Masami; Yamaguchi, Satoru; Seino, Yuriko; Sato, Hironobu; Kasahara, Yusuke; Azuma, Tsukasa
2016-03-01
Directed self-assembly (DSA) applying chemical epitaxy is one of the promising lithographic solutions for next-generation semiconductor device manufacturing. In particular, DSA lithography using the coordinated line epitaxy (COOL) process is one of the candidates that could be the first generation of DSA applying PS-b-PMMA block copolymer (BCP) for sub-15nm dense line patterning. DSA can enhance the pitch resolution and can mitigate CD errors to values much smaller than those of the originally exposed guiding patterns. On the other hand, local line placement error often results in a worse value, with distinctive trends depending on the process conditions. To address this issue, we introduce an enhanced measurement technology for DSA line patterns that distinguishes their locations, in order to evaluate the nature of edge placement and roughness at individual pattern locations using CD-SEM images. Additionally, correlations among the edge roughness of each line and each space are evaluated and discussed. This method can easily visualize features of complicated roughness in order to control the COOL process. As a result, we found the following. (1) Line placement error and line placement roughness of DSA differed slightly from each other depending on their relative position to the chemical guide patterns. (2) In the middle-frequency region of the PSD (Power Spectral Density) analysis graphs, the shapes changed sensitively with the process conditions of chemical stripe guide size and anneal temperature. (3) Correlation coefficient analysis using the PSD was able to clarify characteristics of latent defects corresponding to the physical and chemical properties of the BCP materials.
34 CFR 367.11 - What assurances must a DSA include in its application?
Code of Federal Regulations, 2011 CFR
2011-07-01
...) and (b), and consistent with 34 CFR 364.28, the DSA will seek to incorporate into and describe in the State plan under section 704 of the Act any new methods and approaches relating to IL services for older... individuals; (6) A comparison, if appropriate, of prior year activities with the activities of the most...
Defect source analysis of directed self-assembly process (DSA of DSA)
NASA Astrophysics Data System (ADS)
Rincon Delgadillo, Paulina; Harukawa, Ryota; Suri, Mayur; Durant, Stephane; Cross, Andrew; Nagaswami, Venkat R.; Van Den Heuvel, Dieter; Gronheid, Roel; Nealey, Paul
2013-03-01
As design rules shrink, the capability to detect smaller and smaller defects must improve. There is considerable effort in the industry to enhance immersion lithography using DSA for the 14 nm design node and below. While process feasibility has been demonstrated with DSA, material issues as well as process control requirements are not fully characterized. The chemical epitaxy process is currently the most preferred process option for frequency multiplication, and it involves new materials at extremely small thickness. The image contrast of the lamellar line/space pattern at such small layer thickness is a new challenge for optical inspection tools. In this investigation, the focus is on the capability of optical inspection systems to capture DSA-unique defects, such as dislocations and disclination clusters, over the system and wafer noise. The study is also extended to investigate wafer-level data at multiple process steps, determining the contribution from each process step and material using the 'Defect Source Analysis' methodology. The added-defect Pareto and the spatial distributions of added defects at each process step are discussed.
Accelerated Test Method for Corrosion Protective Coatings Project
NASA Technical Reports Server (NTRS)
Falker, John; Zeitlin, Nancy; Calle, Luz
2015-01-01
This project seeks to develop a new accelerated corrosion test method that predicts the long-term corrosion protection performance of spaceport structure coatings as accurately and reliably as current long-term atmospheric exposure tests. This new accelerated test method will shorten the time needed to evaluate the corrosion protection performance of coatings for NASA's critical ground support structures. Lifetime prediction for spaceport structure coatings has a 5-year qualification cycle using atmospheric exposure. Current accelerated corrosion tests often provide false positives and negatives for coating performance, do not correlate to atmospheric corrosion exposure results, and do not correlate with atmospheric exposure timescales for lifetime prediction.
EUV patterned templates with grapho-epitaxy DSA at the N5/N7 logic nodes
NASA Astrophysics Data System (ADS)
Gronheid, Roel; Boeckx, Carolien; Doise, Jan; Bekaert, Joost; Karageorgos, Ioannis; Ruckaert, Julien; Chan, Boon Teik; Lin, Chenxi; Zou, Yi
2016-03-01
In this paper, approaches are explored for combining EUV with DSA for via layer patterning at the N7 and N5 logic nodes. Simulations indicate opportunity for significant LCDU improvement at the N7 node without impacting the required exposure dose. A templated DSA process based on NXE:3300 exposed EUV pre-patterns has been developed and supports the simulations. The main point of improvement concerns pattern placement accuracy with this process. It is described how metrology contributes to the measured placement error numbers; further optimization of metrology methods for determining local placement errors is required. Next, via layer patterning at the N5 logic node is also considered. On top of the LCDU improvement, the combination of EUV with DSA allows for maintaining a single-mask solution at this technology node, due to the ability of the DSA process to repair merging vias. It is experimentally shown how shaping of the templates for such via multiplication helps in placement accuracy control. Peanut-shaped pre-patterns, which can be printed using EUV lithography, give significantly better placement accuracy control than elliptical pre-patterns.
Method Accelerates Training Of Some Neural Networks
NASA Technical Reports Server (NTRS)
Shelton, Robert O.
1992-01-01
Three-layer networks train faster provided two conditions are satisfied: the numbers of neurons in the layers are such that the majority of the work is done in the synaptic connections between the input and hidden layers, and the number of neurons in the input layer is at least as great as the number of training pairs of input and output vectors. Based on a modified version of the back-propagation method.
Miniature plasma accelerating detonator and method of detonating insensitive materials
Bickes, Jr., Robert W.; Kopczewski, Michael R.; Schwarz, Alfred C.
1986-01-01
The invention is a detonator for use with high explosives. The detonator comprises a pair of parallel rail electrodes connected to a power supply. By shorting the electrodes at one end, a plasma is generated and accelerated toward the other end to impact against explosives. A projectile can be arranged between the rails to be accelerated by the plasma. An alternative arrangement is a coaxial electrode construction. The invention also relates to a method of detonating explosives.
Miniature plasma accelerating detonator and method of detonating insensitive materials
Bickes, R.W. Jr.; Kopczewski, M.R.; Schwarz, A.C.
1985-01-04
The invention is a detonator for use with high explosives. The detonator comprises a pair of parallel rail electrodes connected to a power supply. By shorting the electrodes at one end, a plasma is generated and accelerated toward the other end to impact against explosives. A projectile can be arranged between the rails to be accelerated by the plasma. An alternative arrangement is a coaxial electrode construction. The invention also relates to a method of detonating explosives. 3 figs.
Advanced CD-SEM metrology for pattern roughness and local placement of lamellar DSA
NASA Astrophysics Data System (ADS)
Kato, Takeshi; Sugiyama, Akiyuki; Ueda, Kazuhiro; Yoshida, Hiroshi; Miyazaki, Shinji; Tsutsumi, Tomohiko; Kim, JiHoon; Cao, Yi; Lin, Guanyang
2014-04-01
Directed self-assembly (DSA) applying chemical epitaxy is one of the promising lithographic solutions for next-generation semiconductor device manufacturing. We introduced Fingerprint Edge Roughness (FER) as an index to evaluate the edge roughness of non-guided lamellar fingerprint patterns, and found its correlation with the Line Edge Roughness (LER) of the lines assembled on the chemical guiding patterns. In this work, we have evaluated both FER and LER at each process step of the LiNe DSA flow utilizing PS-b-PMMA block copolymers (BCP) assembled on chemical template wafers fabricated with a Focus Exposure Matrix (FEM). As a result, we found the following. (1) Line widths and space distances of the DSA patterns differ slightly from each other depending on their relative position against the chemical guide patterns. A condition exists at which all lines have the same dimensions, but this condition is not always the same for the spaces. (2) LER and LWR (Line Width Roughness) of DSA patterns depend on neither the width nor the LER of the guide patterns. (3) LWR of DSA patterns is proportional to the width roughness of the fingerprint pattern. (4) FER is influenced not only by the BCP formulation, but also by its film thickness. We introduced new methods to optimize the BCP formulation and process conditions by using FER measurement and local CD evaluation. Publisher's Note: This paper, originally published on 2 April 2014, was replaced with a corrected/revised version on 14 May 2014. If you downloaded the original PDF but are unable to access the revision, please contact SPIE Digital Library Customer Service for assistance.
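As a sketch of the PSD analysis used in roughness work like the above, the power spectrum of an edge-position trace can be computed with a plain DFT; the Gaussian trace and the sigma-style summary below are assumptions for illustration only:

```python
import cmath, random

def psd(edge_positions):
    """Power spectral density of a mean-detrended edge-position trace; by
    Parseval's theorem the area under the PSD recovers the roughness
    variance, and the band shapes can be compared across process steps."""
    n = len(edge_positions)
    mean = sum(edge_positions) / n
    x = [e - mean for e in edge_positions]
    return [abs(sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n))) ** 2 / n
            for k in range(n)]

random.seed(7)
trace = [random.gauss(0.0, 1.5) for _ in range(64)]  # synthetic edge, sigma = 1.5
spectrum = psd(trace)
sigma_est = (sum(spectrum) / len(trace)) ** 0.5      # roughness sigma from the PSD
```

Real analyses would bin many such spectra and inspect the low-, middle-, and high-frequency bands separately, as in the abstract's middle-frequency observation.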
Method for phosphate-accelerated bioremediation
Looney, Brian B.; Lombard, Kenneth H.; Hazen, Terry C.; Pfiffner, Susan M.; Phelps, Tommy J.; Borthen, James W.
1996-01-01
An apparatus and method for supplying a vapor-phase nutrient to contaminated soil for in situ bioremediation. The apparatus includes a housing adapted for containing a quantity of the liquid nutrient, a conduit in fluid communication with the interior of the housing, means for causing a gas to flow through the conduit, and means for contacting the gas with the liquid so that a portion thereof evaporates and mixes with the gas. The mixture of gas and nutrient vapor is delivered to the contaminated site via a system of injection and extraction wells configured to the site. The mixture has a partial pressure of vaporized nutrient that is no greater than the vapor pressure of the liquid. If desired, the nutrient and/or the gas may be heated to increase the vapor pressure and the nutrient concentration of the mixture. Preferably, the nutrient is a volatile, substantially nontoxic and nonflammable organic phosphate that is a liquid at environmental temperatures, such as triethyl phosphate or tributyl phosphate.
Acceleration of Meshfree Radial Point Interpolation Method on Graphics Hardware
Nakata, Susumu
2008-09-01
This article describes a parallel computational technique for accelerating the radial point interpolation method (RPIM), a meshfree method, using graphics hardware. RPIM is one of the meshfree partial differential equation solvers that do not require a mesh structure for the analysis targets. In the presented method, the computation process is divided into small processes suitable for the parallel architecture of graphics hardware, operating in a single-instruction multiple-data manner.
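The kernel being parallelized can be sketched (serially, and in 1D) as radial basis interpolation: collocate a radial function at the nodes, solve the resulting dense system, then evaluate. The multiquadric basis and shape parameter below are assumptions; the paper's GPU mapping of these loops is not reproduced here.

```python
import math

def rbf_weights(nodes, values, c=1.0):
    """Radial point interpolation: solve A w = u with the multiquadric
    basis phi(r) = sqrt(r^2 + c^2) collocated at the nodes."""
    n = len(nodes)
    phi = lambda r: math.sqrt(r * r + c * c)
    A = [[phi(abs(nodes[i] - nodes[j])) for j in range(n)] for i in range(n)]
    w = list(values)
    # Gaussian elimination with partial pivoting (dense solve)
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(A[r][k]))
        A[k], A[p] = A[p], A[k]
        w[k], w[p] = w[p], w[k]
        for r in range(k + 1, n):
            f = A[r][k] / A[k][k]
            for col in range(k, n):
                A[r][col] -= f * A[k][col]
            w[r] -= f * w[k]
    for k in reversed(range(n)):
        w[k] = (w[k] - sum(A[k][j] * w[j] for j in range(k + 1, n))) / A[k][k]
    return w, phi

def interpolate(x, nodes, w, phi):
    """Evaluate the RBF interpolant at x."""
    return sum(wi * phi(abs(x - xi)) for wi, xi in zip(w, nodes))

nodes = [0.0, 0.5, 1.0, 1.5, 2.0]
values = [math.sin(x) for x in nodes]
w, phi = rbf_weights(nodes, values)
```

The dense matrix assembly and evaluation loops are exactly the data-parallel parts that map well onto SIMD graphics hardware.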
[Digital subtraction angiography (DSA) in the diagnosis of orbital diseases].
Song, G X
1990-09-01
DSA was used to confirm the diagnoses of 3 cases of arteriovenous aneurysm, 3 cases of arteriovenous communication in the orbit or the cavernous sinus, and 1 case each of internal carotid aneurysm and granular myoblastoma. The technique provided a basis for the selection of surgical approaches. One case each of orbital apex inflammation, ophthalmic Graves disease, and orbital varix displayed normal findings with DSA. PMID:2086134
"Keyhole" method for accelerating imaging of contrast agent uptake.
van Vaals, J J; Brummer, M E; Dixon, W T; Tuithof, H H; Engels, H; Nelson, R C; Gerety, B M; Chezmar, J L; den Boer, J A
1993-01-01
Magnetic resonance (MR) imaging methods with good spatial and contrast resolution are often too slow to follow the uptake of contrast agents with the desired temporal resolution. Imaging can be accelerated by skipping the acquisition of data normally taken with strong phase-encoding gradients, restricting acquisition to weak-gradient data only. If the usual procedure of substituting zeroes for the missing data is followed, blurring results. Substituting instead reference data taken before or well after contrast agent injection reduces this problem. Volunteer and patient images obtained by using such reference data show that imaging can be usefully accelerated severalfold. Cortical and medullary regions of interest and whole kidney regions were studied, and both gradient- and spin-echo images are shown. The method is believed to be compatible with other acceleration methods such as half-Fourier reconstruction and reading of more than one line of k space per excitation.
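A minimal 1D sketch of the keyhole idea: keep only the central (weak phase-encoding) k-space lines from the dynamic acquisition and substitute the outer lines from a reference scan rather than zeros. The profiles, sizes, and the O(n^2) DFT are illustrative assumptions.

```python
import cmath

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n)
                for k in range(n)).real / n for t in range(n)]

def keyhole(ref_k, dyn_k, keep):
    """Keep the `keep` central (low spatial frequency) lines from the
    dynamic scan; take the remaining lines from the reference scan
    instead of zero-filling them."""
    n = len(ref_k)
    out = list(ref_k)
    for k in list(range(keep // 2 + 1)) + list(range(n - keep // 2, n)):
        out[k] = dyn_k[k]
    return out

# 1D toy: a "body" profile plus a localized contrast-uptake bump
n, keep = 32, 9
ref = [1.0 if 8 <= t < 24 else 0.0 for t in range(n)]
dyn = [r + (1.0 if 14 <= t < 18 else 0.0) for r, t in zip(ref, range(n))]
ref_k, dyn_k = dft(ref), dft(dyn)
zero_filled = idft([dyn_k[k] if k <= keep // 2 or k >= n - keep // 2 else 0.0
                    for k in range(n)])
keyholed = idft(keyhole(ref_k, dyn_k, keep))
rmse = lambda a, b: (sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)) ** 0.5
```

Zero-filling blurs the static anatomy as well as the uptake; substituting reference data confines the error to the high-frequency part of the contrast change itself.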
Nonlinear Acceleration Methods for Even-Parity Neutron Transport
W. J. Martin; C. R. E. De Oliveira; H. Park
2010-05-01
Convergence acceleration methods for even-parity transport were developed that have the potential to speed up transport calculations and provide a natural avenue for an implicitly coupled multiphysics code. An investigation was performed into the acceleration properties of the introduction of a nonlinear quasi-diffusion-like tensor in linear and nonlinear solution schemes. Using the tensor reduced matrix as a preconditioner for the conjugate gradients method proves highly efficient and effective. The results for the linear and nonlinear case serve as the basis for further research into the application in a full three-dimensional spherical-harmonics even-parity transport code. Once moved into the nonlinear solution scheme, the implicit coupling of the convergence accelerated transport method into codes for other physics can be done seamlessly, providing an efficient, fully implicitly coupled multiphysics code with high order transport.
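The preconditioned conjugate gradients step described above can be sketched generically; here a Jacobi (diagonal) preconditioner stands in for the tensor-reduced matrix of the paper, and the tridiagonal test system is an assumption:

```python
def pcg(A, b, precond, tol=1e-10, max_it=500):
    """Preconditioned conjugate gradients for SPD systems A x = b;
    precond(r) applies M^{-1} to a residual vector."""
    n = len(b)
    matvec = lambda v: [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    x = [0.0] * n
    r = list(b)
    z = precond(r)
    p = list(z)
    rz = dot(r, z)
    for it in range(1, max_it + 1):
        Ap = matvec(p)
        alpha = rz / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        if dot(r, r) ** 0.5 < tol:
            return x, it
        z = precond(r)
        rz_new = dot(r, z)
        p = [zi + (rz_new / rz) * pi for zi, pi in zip(z, p)]
        rz = rz_new
    return x, max_it

# SPD tridiagonal test system with a widely varying diagonal
n = 40
A = [[0.0] * n for _ in range(n)]
for i in range(n):
    A[i][i] = 2.0 * (i + 1) ** 2
    if i + 1 < n:
        A[i][i + 1] = A[i + 1][i] = 1.0
b = [1.0] * n
identity = lambda r: list(r)
jacobi = lambda r: [ri / A[i][i] for i, ri in enumerate(r)]
x_plain, it_plain = pcg(A, b, identity)
x_jac, it_jac = pcg(A, b, jacobi)
```

Even this crude preconditioner clusters the spectrum and cuts the iteration count, which is the effect the tensor-reduced preconditioner exploits at much greater strength.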
Fluctuation Flooding Method (FFM) for accelerating conformational transitions of proteins
NASA Astrophysics Data System (ADS)
Harada, Ryuhei; Takano, Yu; Shigeta, Yasuteru
2014-03-01
A powerful conformational sampling method for accelerating structural transitions of proteins, "Fluctuation Flooding Method (FFM)," is proposed. In FFM, cycles of the following steps enhance the transitions: (i) extractions of largely fluctuating snapshots along anisotropic modes obtained from trajectories of multiple independent molecular dynamics (MD) simulations and (ii) conformational re-sampling of the snapshots via re-generations of initial velocities when re-starting MD simulations. In an application to bacteriophage T4 lysozyme, FFM successfully accelerated the open-closed transition with the 6 ns simulation starting solely from the open state, although the 1-μs canonical MD simulation failed to sample such a rare event.
5 CFR 1315.5 - Accelerated payment methods.
Code of Federal Regulations, 2011 CFR
2011-01-01
... the payment due date. (b) Small business (as defined in FAR 19.001 (48 CFR 19.001)). Agencies may pay... § 1315.5 Accelerated payment methods. (a) A single invoice under $2,500. Payments may be made as soon as the contract, proper invoice , receipt and acceptance documents are matched except where...
Just in Time DSA the Hanford Nuclear Safety Basis Strategy
JACKSON, M.W.
2002-06-01
The U.S. Department of Energy, Richland Operations Office (RL) is responsible for 30 hazard category 2 and 3 nuclear facilities that are operated by its prime contractors, Fluor Hanford, Incorporated (FHI), Bechtel Hanford, Incorporated (BHI), and Pacific Northwest National Laboratory (PNNL). The publication of Title 10, Code of Federal Regulations, Part 830, Subpart B, Safety Basis Requirements (the Rule) in January 2001 requires that the Documented Safety Analyses (DSA) for these facilities be reviewed against the requirements of the Rule. Those DSAs that do not meet the requirements must either be upgraded to satisfy the Rule, or an exemption must be obtained. RL and its prime contractors have developed a Nuclear Safety Strategy that provides a comprehensive approach for supporting RL's efforts to meet its long-term objectives for hazard category 2 and 3 facilities while also meeting the requirements of the Rule. This approach will result in a reduction of the total number of safety basis documents that must be developed and maintained to support the remaining mission and closure of the Hanford Site, and will ensure that the documentation that must be developed supports: compliance with the Rule; a "Just-In-Time" approach to development of Rule-compliant safety bases supported by temporary exemptions; and consolidation of safety basis documents that support multiple facilities with a common mission (e.g., decontamination, decommissioning and demolition [DD&D], waste management, surveillance and maintenance). This strategy provides a clear path to transition the safety bases for the various Hanford facilities from support of operation and stabilization missions through DD&D to accelerate closure. This "Just-In-Time" strategy can also be tailored for other DOE sites, creating the potential for large cost savings and schedule reductions throughout the DOE complex.
Measurement of acceleration: a new method of monitoring neuromuscular function.
Viby-Mogensen, J; Jensen, E; Werner, M; Nielsen, H K
1988-01-01
A new method for monitoring neuromuscular function based on measurement of acceleration is presented. The rationale behind the method is Newton's second law, stating that the acceleration is directly proportional to the force. For measurement of acceleration, a piezo-electric ceramic wafer was used. When this piezo electrode was fixed to the thumb, an electrical signal proportional to the acceleration was produced whenever the thumb moved in response to nerve stimulation. The electrical signal was registered and analysed in a Myograph 2000 neuromuscular transmission monitor. In 35 patients anaesthetized with halothane, train-of-four ratios measured with the accelerometer (ACT-TOF) were compared with simultaneous mechanical train-of-four ratios (FDT-TOF). Control ACT-TOF ratios were significantly higher than control FDT-TOF ratios: 116 +/- 12 and 98 +/- 4 (mean +/- s.d.), respectively. In five patients not given any relaxant during the anaesthetic procedure (20-60 min), both responses were remarkably constant. In 30 patients given vecuronium, a close linear relationship was found during recovery between ACT-TOF and FDT-TOF ratios. It is concluded that the method fulfils the basic requirements for a simple and reliable clinical monitoring tool.
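The computation behind the reported ratios is simple: by Newton's second law the peak thumb acceleration is proportional to twitch force for a fixed effective mass, so the train-of-four ratio is the fourth response over the first. The numeric peaks below are made up for illustration:

```python
def train_of_four_ratio(peaks):
    """Train-of-four (TOF) ratio from accelerographic twitch responses:
    the fourth peak acceleration divided by the first. With a = F/m and a
    fixed effective mass of the thumb, this tracks the mechanical
    (force-based) TOF ratio."""
    if len(peaks) != 4:
        raise ValueError("a train-of-four has exactly four responses")
    return peaks[3] / peaks[0]

# illustrative peak accelerations (arbitrary units), not patient data
tof = train_of_four_ratio([9.8, 8.9, 7.6, 6.4])
```

Fade of the fourth twitch relative to the first (a ratio well below 1.0) indicates residual non-depolarizing neuromuscular block.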
Acceleration of reverse analysis method using hyperbolic activation function
NASA Astrophysics Data System (ADS)
Pwasong, Augustine; Sathasivam, Saratha
2015-10-01
The hyperbolic activation function is examined for its ability to accelerate data mining via a technique known as the reverse analysis method. In this paper, we describe how a Hopfield network performs better with the hyperbolic activation function and is able to induce logical rules from a large database by using the reverse analysis method: given the values of the connections of a network, we can hope to discover what logical rules are entrenched in the database. We limit our analysis to Horn clauses.
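A minimal sketch of a Hopfield recall step with the hyperbolic activation tanh(beta*h). The Hebbian weights, single stored pattern, and update schedule are assumptions, and the reverse-analysis rule extraction itself is not shown:

```python
import math

def hebbian_weights(patterns):
    """Hebbian weight matrix for a Hopfield network (zero diagonal)."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / len(patterns)
    return w

def recall(w, state, beta=2.0, sweeps=10):
    """Asynchronous updates with the hyperbolic activation tanh(beta * h)."""
    s = list(state)
    n = len(s)
    for _ in range(sweeps):
        for i in range(n):
            h = sum(w[i][j] * s[j] for j in range(n))
            s[i] = math.tanh(beta * h)
    return s

pattern = [1, -1, 1, 1, -1, -1, 1, -1]
w = hebbian_weights([pattern])
noisy = list(pattern)
noisy[0] = -noisy[0]          # corrupt one unit
out = recall(w, noisy)
```

The smooth tanh nonlinearity (rather than a hard sign function) is what makes graded convergence toward stored attractors possible, which is the behavior the paper exploits.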
Kong, Gang; Dai, Dao-Qing; Zou, Lu-Min
2008-07-01
In order to remove the artifacts of peripheral digital subtraction angiography (DSA), an affine transformation-based automatic image registration algorithm is introduced. The process is as follows: first, rectangular feature templates are constructed, centered on Harris corners extracted from the mask, and the motion vectors of the central feature points are estimated using template matching with maximum histogram energy as the similarity measure. Then the optimal parameters of the affine transformation are calculated with the matrix singular value decomposition (SVD) method. Finally, bilinear intensity interpolation is applied to the mask according to the resulting affine transformation. More than 30 peripheral DSA registrations were performed with the presented algorithm; as a result, motion artifacts were removed with sub-pixel precision, and the time consumption is low enough to satisfy clinical requirements. Experimental results show the efficiency and robustness of the algorithm.
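The affine-estimation step can be sketched as a least-squares fit from matched feature points. The paper solves via SVD; the normal-equations solve below gives the same minimizer for well-conditioned point sets, and the point pairs are synthetic:

```python
def fit_affine(src, dst):
    """Least-squares affine transform mapping src -> dst point pairs:
    minimizes sum ||A x + t - y||^2 over the six affine parameters."""
    rows = [[x, y, 1.0] for x, y in src]

    def solve3(M, rhs):  # Gauss-Jordan elimination for a 3x3 system
        A = [r[:] + [v] for r, v in zip(M, rhs)]
        for k in range(3):
            p = max(range(k, 3), key=lambda r: abs(A[r][k]))
            A[k], A[p] = A[p], A[k]
            for r in range(3):
                if r != k:
                    f = A[r][k] / A[k][k]
                    A[r] = [a - f * b for a, b in zip(A[r], A[k])]
        return [A[i][3] / A[i][i] for i in range(3)]

    # normal equations (R^T R) c = R^T d, solved per output coordinate
    RtR = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    params = []
    for coord in range(2):
        Rtd = [sum(r[i] * d[coord] for r, d in zip(rows, dst)) for i in range(3)]
        params.append(solve3(RtR, Rtd))
    return params  # [[a11, a12, tx], [a21, a22, ty]]

def apply_affine(params, pt):
    x, y = pt
    return tuple(p[0] * x + p[1] * y + p[2] for p in params)

# synthetic matched points under a known affine motion
src = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (2.0, 3.0)]
true = lambda x, y: (0.9 * x - 0.2 * y + 5.0, 0.2 * x + 0.9 * y - 3.0)
dst = [true(x, y) for x, y in src]
params = fit_affine(src, dst)
```

In the registration pipeline the recovered transform is then used to resample (warp) the mask with bilinear interpolation before subtraction.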
Feasibility of reduced-dose 3D/4D-DSA using a weighted edge preserving filter
NASA Astrophysics Data System (ADS)
Oberstar, Erick L.; Speidel, Michael A.; Davis, Brian J.; Strother, Charles; Mistretta, Charles
2016-03-01
A conventional 3D/4D digital subtraction angiogram (DSA) requires two rotational acquisitions (mask and fill) to compute the log-subtracted projections that are used to reconstruct a 3D/4D volume. Since all of the vascular information is contained in the fill acquisition, it is hypothesized that it is possible to reduce the x-ray dose of the mask acquisition substantially and still obtain subtracted projections adequate to reconstruct a 3D/4D volume with noise level comparable to a full dose acquisition. A full dose mask and fill acquisition were acquired from a clinical study to provide a known full dose reference reconstruction. Gaussian noise was added to the mask acquisition to simulate a mask acquisition acquired at 10% relative dose. Noise in the low-dose mask projections was reduced with a weighted edge preserving (WEP) filter designed to preserve bony edges while suppressing noise. 2D log-subtracted projections were computed from the filtered low-dose mask and full-dose fill projections, and then 3D/4D-DSA reconstruction algorithms were applied. Additional bilateral filtering was applied to the 3D volumes. The signal-to-noise ratio measured in the filtered 3D/4D-DSA volumes was compared to the full dose case. The average ratio of filtered low-dose SNR to full-dose SNR was 1.07 for the 3D-DSA and 1.05 for the 4D-DSA, indicating the method is a feasible approach to restoring SNR in DSA scans acquired with a low-dose mask. The method was also tested in a phantom study with full dose fill and 22% dose mask.
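A toy version of the dose-reduction experiment described above: Gaussian noise simulates a low-dose mask, and a simple 3x3 mean filter (standing in for the WEP filter, which additionally preserves edges) restores SNR. The image, noise level, and filter are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
mask_full = np.full((64, 64), 100.0)                     # idealized full-dose mask
mask_low = mask_full + rng.normal(0.0, 10.0, (64, 64))   # simulated low-dose noise

def snr(img):
    return img.mean() / img.std()

def mean3(img):
    """3x3 mean filter over the interior (a crude stand-in for the WEP filter)."""
    out = img.copy()
    out[1:-1, 1:-1] = (img[:-2, :-2] + img[:-2, 1:-1] + img[:-2, 2:] +
                       img[1:-1, :-2] + img[1:-1, 1:-1] + img[1:-1, 2:] +
                       img[2:, :-2] + img[2:, 1:-1] + img[2:, 2:]) / 9.0
    return out

print(snr(mask_low), snr(mean3(mask_low)))  # filtering raises the SNR
```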
Myocardial ischemia during intravenous DSA in patients with cardiac disease
Hesselink, J.R.; Hayman, L.A.; Chung, K.J.; McGinnis, B.D.; Davis, K.R.; Taveras, J.M.
1984-12-01
A prospective study was performed of 48 patients who had histories of angina and were referred for digital subtraction angiography (DSA). Cardiac disease was graded according to the American Heart Association (AHA) functional classification system. Each patient received 2-5 injections of 40 ml of diatrizoate meglumine and diatrizoate sodium at 15 ml per second in the superior vena cava. Of the 28 patients in functional Classes I or II, 11% had angina and 32% had definite ischemic ECG changes after the DSA injections. Of the patients in functional Class III, 63% had angina and 58% had definite ischemic ECG changes after the injections. These observed cardiac effects following bolus injections of hypertonic ionic contrast media indicate that special precautions are necessary when performing intravenous DSA examinations on this group of high-risk patients.
Reproduction of natural corrosion by accelerated laboratory testing methods
Luo, J.S.; Wronkiewicz, D.J.; Mazer, J.J.; Bates, J.K.
1996-05-01
Various laboratory corrosion tests have been developed to study the behavior of glass waste forms under conditions similar to those expected in an engineered repository. The data generated by laboratory experiments are useful for understanding corrosion mechanisms and for developing chemical models to predict the long-term behavior of glass. However, it is challenging to demonstrate that these test methods produce results that can be directly related to projecting the behavior of glass waste forms over time periods of thousands of years. One method to build confidence in the applicability of the test methods is to study the natural processes that have been taking place over very long periods in environments similar to those of the repository. In this paper, we discuss whether accelerated testing methods alter the fundamental mechanisms of glass corrosion by comparing the alteration patterns that occur in naturally altered glasses with those that occur in accelerated laboratory environments. This comparison is done by (1) describing the alteration of glasses reacted in nature over long periods of time and in accelerated laboratory environments and (2) establishing the reaction kinetics of naturally altered glass and laboratory reacted glass waste forms.
Template affinity role in CH shrink by DSA planarization
NASA Astrophysics Data System (ADS)
Tiron, R.; Gharbi, A.; Pimenta Barros, P.; Bouanani, S.; Lapeyre, C.; Bos, S.; Fouquet, A.; Hazart, J.; Chevalier, X.; Argoud, M.; Chamiot-Maitral, G.; Barnola, S.; Monget, C.; Farys, V.; Berard-Bergery, S.; Perraud, L.; Navarro, C.; Nicolet, C.; Hadziioannou, G.; Fleury, G.
2015-03-01
Density multiplication and contact shrinkage of patterned templates by directed self-assembly (DSA) of block copolymers (BCP) stands out as a promising alternative to overcome the limitations of conventional lithography. The main goal of this paper is to investigate the potential of DSA to address contact and via level patterning with high resolution by performing either CD shrink or contact multiplication. Different DSA processes are benchmarked against several success criteria: CD control, defectivity (missing holes), and placement control. More specifically, the methodology employed to measure DSA contact overlay and the impact of process parameters on placement error control is detailed. Using the 300mm pilot line available at LETI and Arkema's materials, our approach is based on the graphoepitaxy of PS-b-PMMA block copolymers. Our integration scheme, depicted in figure 1, is based on BCP self-assembly inside organic hard-mask guiding patterns obtained using 193 nm immersion lithography. The process is monitored at different steps: the generation of guiding patterns, the directed self-assembly of block copolymers and PMMA removal, and finally the transfer of PS patterns into the metallic underlayer by plasma etching. Furthermore, several process flows are investigated by tuning material-related parameters such as the block copolymer intrinsic period or the interaction with the guiding pattern surface (sidewall and bottom-side affinity). The final lithographic performance is finely optimized as a function of self-assembly process parameters such as film thickness and bake (temperature and time). Finally, DSA performance as a function of guiding pattern density is investigated. Thus, for the best integration approach, defect-free isolated and dense patterns for both contact shrink and multiplication (doubling and more) have been achieved on the same processed wafer. These results show that contact hole shrink and
The need for DSA certification: the dental surgeons' perspective.
Poon, K C; Sim, C P
2001-06-01
This study investigated the perceptions of dental surgeons on training and performance of Dental Surgery Assistants (DSAs). A questionnaire survey was sent to all practising dental surgeons in Singapore. It was found that 8.2% of respondents felt that the current standard of DSAs was good while 80.5% felt that the current standard was either adequate or poor. 71.5% felt that there was a need for DSA certification and 76.5% reported that they would send their DSAs for certification training. The results suggest an underlying need for formal training towards DSA certification. PMID:11699349
The use of eDR-71xx for DSA defect review and automated classification
NASA Astrophysics Data System (ADS)
Pathangi, Hari; Van Den Heuvel, Dieter; Bayana, Hareen; Bouckou, Loemba; Brown, Jim; Parisi, Paolo; Gosain, Rohan
2015-03-01
The Liu-Nealey (LiNe) chemo-epitaxy Directed Self-Assembly flow has been screened thoroughly for defects in the past years. Various types of DSA-specific defects have been identified, and best known methods have been developed to obtain sufficient S/N for defect inspection, to understand the root causes of the various defect types, and to reduce defect levels to prepare the process for high-volume manufacturing. Within this process development, SEM review and defect classification play a key role. This paper provides an overview of the challenges that DSA brings to this metrology aspect and presents successful solutions for automated defect review. In addition, a new Real Time Automated Defect Classification (RT-ADC) will be introduced that can save up to 90% of the time required for manual defect classification. This will enable a much larger sampling for defect review, resulting in a better understanding of the signatures and behaviors of various DSA-specific defect types, such as dislocations, 1-period bridges and line wiggling.
Spectral methods and sum acceleration algorithms. Final report
Boyd, J.
1995-03-01
The principal investigator pursued his investigation of numerical algorithms during the period of the grant. The attached list of publications is so lengthy that it is impossible to describe them in detail. However, the author calls attention to the four articles on sequence acceleration and fourteen more on spectral methods, which fulfill the goals of the original proposal. He also continued his research on nonlinear waves and wrote a dozen papers on this, too.
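As an example of the sequence-acceleration topic mentioned in the report, here is Aitken's Δ² transform applied to the slowly convergent Leibniz series for π/4 (a standard textbook illustration, not taken from the publications themselves):

```python
import math

def aitken(s):
    """Apply Aitken's delta-squared transform to a list of partial sums."""
    return [s[i] - (s[i+1] - s[i])**2 / (s[i+2] - 2*s[i+1] + s[i])
            for i in range(len(s) - 2)]

# Partial sums of the Leibniz series 1 - 1/3 + 1/5 - ... = pi/4.
partial, total = [], 0.0
for n in range(10):
    total += (-1)**n / (2*n + 1)
    partial.append(total)

raw_err = abs(partial[-1] - math.pi/4)
acc_err = abs(aitken(partial)[-1] - math.pi/4)
print(raw_err, acc_err)  # the accelerated tail is far closer to pi/4
```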
Method for generating a plasma wave to accelerate electrons
Umstadter, D.; Esarey, E.; Kim, J.K.
1997-06-10
The invention provides a method and apparatus for generating large amplitude nonlinear plasma waves, driven by an optimized train of independently adjustable, intense laser pulses. In the method, optimal pulse widths, interpulse spacing, and intensity profiles of each pulse are determined for each pulse in a series of pulses. A resonant region of the plasma wave phase space is found where the plasma wave is driven most efficiently by the laser pulses. The accelerator system of the invention comprises several parts: the laser system, with its pulse-shaping subsystem; the electron gun system, also called beam source, which preferably comprises photo cathode electron source and RF-LINAC accelerator; electron photo-cathode triggering system; the electron diagnostics; and the feedback system between the electron diagnostics and the laser system. The system also includes plasma source including vacuum chamber, magnetic lens, and magnetic field means. The laser system produces a train of pulses that has been optimized to maximize the axial electric field amplitude of the plasma wave, and thus the electron acceleration, using the method of the invention. 21 figs.
GPU Accelerated Spectral Element Methods: 3D Euler equations
NASA Astrophysics Data System (ADS)
Abdi, D. S.; Wilcox, L.; Giraldo, F.; Warburton, T.
2015-12-01
A GPU accelerated nodal discontinuous Galerkin method for the solution of three dimensional Euler equations is presented. The Euler equations are nonlinear hyperbolic equations that are widely used in Numerical Weather Prediction (NWP). Therefore, acceleration of the method plays an important practical role in not only getting daily forecasts faster but also in obtaining more accurate (high resolution) results. The equation sets used in our atmospheric model NUMA (non-hydrostatic unified model of the atmosphere) take into consideration non-hydrostatic effects that become more important with high resolution. We use algorithms suitable for the single instruction multiple thread (SIMT) architecture of GPUs to accelerate the solution by an order of magnitude (20x) relative to the CPU implementation. For portability to heterogeneous computing environments, we use the new programming language OCCA, which can be cross-compiled to either OpenCL, CUDA or OpenMP at runtime. Finally, the accuracy and performance of our GPU implementations are verified using several benchmark problems representative of different scales of atmospheric dynamics.
Convergence acceleration of the Proteus computer code with multigrid methods
NASA Technical Reports Server (NTRS)
Demuren, A. O.; Ibraheem, S. O.
1992-01-01
Presented here is the first part of a study to implement convergence acceleration techniques based on the multigrid concept in the Proteus computer code. A review is given of previous studies on the implementation of multigrid methods in computer codes for compressible flow analysis. Also presented is a detailed stability analysis of upwind and central-difference based numerical schemes for solving the Euler and Navier-Stokes equations. Results are given of a convergence study of the Proteus code on computational grids of different sizes. The results presented here form the foundation for the implementation of multigrid methods in the Proteus code.
Half-range acceleration for one-dimensional transport problems
Zika, M.R.; Larsen, E.W.
1998-12-31
Researchers have devoted considerable effort to developing acceleration techniques for transport iterations in highly diffusive problems. The advantages and disadvantages of source iteration, rebalance, diffusion synthetic acceleration (DSA), transport synthetic acceleration (TSA), and projection acceleration methods are documented in the literature and will not be discussed here except to note that no single method has proven to be applicable to all situations. Here, the authors describe a new acceleration method that is based solely on transport sweeps, is algebraically linear (and is therefore amenable to a Fourier analysis), and yields a theoretical spectral radius bounded by one-third for all cases. This method does not introduce spatial differencing difficulties (as is the case for DSA) nor does its theoretical performance degrade as a function of mesh and material properties (as is the case for TSA). Practical simulations of the new method agree with the theoretical predictions, except for scattering ratios very close to unity. At this time, they believe that the discrepancy is due to the effect of boundary conditions. This is discussed further.
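Spectral radii like the one-third bound quoted above are typically measured numerically from the ratio of successive error norms of a fixed-point iteration. A generic sketch with an arbitrary 2x2 iteration matrix (not the transport operator):

```python
import numpy as np

def spectral_radius_estimate(M, iters=200):
    """Estimate the spectral radius of x -> M x from successive norm ratios."""
    x = np.ones(M.shape[0])
    prev = np.linalg.norm(x)
    ratio = 0.0
    for _ in range(iters):
        x = M @ x
        cur = np.linalg.norm(x)
        ratio, prev = cur / prev, cur
    return ratio

M = np.array([[0.30, 0.05],
              [0.02, 0.25]])       # illustrative contraction, rho < 1/3
est = spectral_radius_estimate(M)
print(est, max(abs(np.linalg.eigvals(M))))  # the two values agree
```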
Etch challenges for DSA implementation in CMOS via patterning
NASA Astrophysics Data System (ADS)
Pimenta Barros, P.; Barnola, S.; Gharbi, A.; Argoud, M.; Servin, I.; Tiron, R.; Chevalier, X.; Navarro, C.; Nicolet, C.; Lapeyre, C.; Monget, C.; Martinez, E.
2014-03-01
This paper reports on the etch challenges to overcome for the implementation of PS-b-PMMA block copolymer Directed Self-Assembly (DSA) at the CMOS via patterning level. Our process is based on a graphoepitaxy approach, employing an industrial PS-b-PMMA block copolymer (BCP) from Arkema with a cylindrical morphology. The process consists of the following steps: a) DSA of block copolymers inside guiding patterns, b) PMMA removal, c) brush layer opening, and finally d) PS pattern transfer into typical MEOL or BEOL stacks. All results presented here have been obtained on Leti's 300mm DSA pilot line. The first etch challenge to overcome for BCP transfer is removing all PMMA selectively to the PS block. In our process baseline, an acetic acid treatment is carried out to develop the PMMA domains. However, this wet development has shown some limitations in terms of resist compatibility and will not be appropriate for lamellar BCPs. That is why we also investigate the possibility of removing PMMA by dry etching only. In this work the potential of dry PMMA removal using CO-based chemistries is shown and compared to wet development. The advantages and limitations of each approach are reported. The second crucial step is the etching of the brush layer (PS-r-PMMA) through a PS mask. We have optimized this step in order to preserve the PS patterns in terms of CD, hole features and film thickness. Several integration flows with complex stacks are explored for contact shrinking by DSA. A study of CD uniformity has been conducted to evaluate the capabilities of the DSA approach after graphoepitaxy and after etching.
300mm pilot line DSA contact hole process stability
NASA Astrophysics Data System (ADS)
Argoud, M.; Servin, I.; Gharbi, A.; Pimenta Barros, P.; Jullian, K.; Sanche, M.; Chamiot-Maitral, G.; Barnola, S.; Tiron, R.; Navarro, C.; Chevalier, X.; Nicolet, C.; Fleury, G.; Hadziioannou, G.; Asai, M.; Pieczulewski, C.
2014-03-01
Directed Self-Assembly (DSA) is today a credible alternative lithographic technology for the semiconductor industry [1]. In the coming years, DSA integration could become a standard complementary step to other lithographic techniques (193nm immersion, e-beam, extreme ultraviolet). Its main advantages are high pattern resolution (down to 10nm), the capability to decrease initial pattern edge roughness [2], absorption of guide pattern size variation, no requirement for a high-resolution mask, and the use of standard fab equipment (tracks and etch tools). The potential of DSA must next be confirmed as viable for high-volume manufacturing. Developments are necessary to transfer this technology to 300mm wafers in order to demonstrate semiconductor fab compatibility [3-7]. The challenges especially concern the stability, both uniformity and defectivity, of the entire process, including tools and Block Co-Polymer (BCP) materials. To investigate DSA process stability, a 300mm pilot line with a DSA-dedicated track (SOKUDO DUO) is used at CEA-Leti. BCP morphologies with PMMA cylinders in a PS matrix are investigated (about 35nm natural period). BCP self-assembly in unpatterned-surface and patterned-surface (graphoepitaxy) configurations are considered in this study. The unpatterned configuration will initially be used for process optimization and to fix a process of record. Secondly, this process of record will be monitored with a follow-up in order to validate its stability. Step optimizations will be applied to patterned-surface configurations (graphoepitaxy) for contact hole patterning applications. A process window of the contact hole shrink process will be defined. Process stability (CD uniformity and defectivity related to BCP lithography) will be investigated.
Analytic Method to Estimate Particle Acceleration in Flux Ropes
NASA Technical Reports Server (NTRS)
Guidoni, S. E.; Karpen, J. T.; DeVore, C. R.
2015-01-01
The mechanism that accelerates particles to the energies required to produce the observed high-energy emission in solar flares is not well understood. Drake et al. (2006) proposed a kinetic mechanism for accelerating electrons in contracting magnetic islands formed by reconnection. In this model, particles that gyrate around magnetic field lines transit from island to island, increasing their energy by Fermi acceleration in those islands that are contracting. Based on these ideas, we present an analytic model to estimate the energy gain of particles orbiting around field lines inside a flux rope (2.5D magnetic island). We calculate the change in the velocity of the particles as the flux rope evolves in time. The method assumes a simple profile for the magnetic field of the evolving island; it can be applied to any case where flux ropes are formed. In our case, the flux-rope evolution is obtained from our recent high-resolution, compressible 2.5D MHD simulations of breakout eruptive flares. The simulations allow us to resolve in detail the generation and evolution of large-scale flux ropes as a result of sporadic and patchy reconnection in the flare current sheet. Our results show that the initial energy of particles can be increased by 2-5 times in a typical contracting island, before the island reconnects with the underlying arcade. Therefore, particles need to transit only from 3-7 islands to increase their energies by two orders of magnitude. These macroscopic regions, filled with a large number of particles, may explain the large observed rates of energetic electron production in flares. We conclude that this mechanism is a promising candidate for electron acceleration in flares, but further research is needed to extend our results to 3D flare conditions.
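The island-count estimate above follows from simple arithmetic: if each contracting island multiplies a particle's energy by a factor g, then n = log(100)/log(g) islands yield two orders of magnitude. A quick check over the quoted gain range of 2-5 per island:

```python
import math

def islands_needed(gain_per_island, total_gain=100.0):
    """Number of island transits needed to reach a given total energy gain."""
    return math.ceil(math.log(total_gain) / math.log(gain_per_island))

# Gains of 2-5 per island reproduce the 3-7 island range quoted above.
print([islands_needed(g) for g in (2, 3, 4, 5)])  # [7, 5, 4, 3]
```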
Convergence acceleration of the Proteus computer code with multigrid methods
NASA Technical Reports Server (NTRS)
Demuren, A. O.; Ibraheem, S. O.
1995-01-01
This report presents the results of a study to implement convergence acceleration techniques based on the multigrid concept in the two-dimensional and three-dimensional versions of the Proteus computer code. The first section presents a review of the relevant literature on the implementation of the multigrid methods in computer codes for compressible flow analysis. The next two sections present detailed stability analysis of numerical schemes for solving the Euler and Navier-Stokes equations, based on conventional von Neumann analysis and the bi-grid analysis, respectively. The next section presents details of the computational method used in the Proteus computer code. Finally, the multigrid implementation and applications to several two-dimensional and three-dimensional test problems are presented. The results of the present study show that the multigrid method always leads to a reduction in the number of iterations (or time steps) required for convergence. However, there is an overhead associated with the use of multigrid acceleration. The overhead is higher in 2-D problems than in 3-D problems, thus overall multigrid savings in CPU time are in general better in the latter. Savings of about 40-50 percent are typical in 3-D problems, but they are about 20-30 percent in large 2-D problems. The present multigrid method is applicable to steady-state problems and is therefore ineffective in problems with inherently unstable solutions.
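The multigrid concept can be illustrated by a two-grid correction scheme for the 1D Poisson equation, with weighted-Jacobi smoothing, full-weighting restriction, and an exact coarse solve; the grid size and sweep counts are illustrative choices, not Proteus settings:

```python
import numpy as np

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2*u[1:-1] - u[:-2] - u[2:]) / h**2
    return r

def smooth(u, f, h, sweeps=3, w=2/3):
    """Weighted-Jacobi sweeps for -u'' = f with homogeneous Dirichlet BCs."""
    for _ in range(sweeps):
        v = u.copy()
        v[1:-1] = (1 - w)*u[1:-1] + w*0.5*(u[:-2] + u[2:] + h**2*f[1:-1])
        u = v
    return u

def two_grid(u, f, h):
    u = smooth(u, f, h)                           # pre-smooth
    r = residual(u, f, h)
    n_c = (len(u) + 1) // 2
    r_c = np.zeros(n_c)                           # full-weighting restriction
    r_c[1:-1] = 0.25*r[1:-2:2] + 0.5*r[2:-1:2] + 0.25*r[3::2]
    H = 2*h                                       # coarse-grid spacing
    A = (2*np.eye(n_c-2) - np.eye(n_c-2, k=1) - np.eye(n_c-2, k=-1)) / H**2
    e_c = np.zeros(n_c)
    e_c[1:-1] = np.linalg.solve(A, r_c[1:-1])     # exact coarse solve
    e = np.interp(np.arange(len(u)), np.arange(0, len(u), 2), e_c)  # prolong
    return smooth(u + e, f, h)                    # correct, then post-smooth

n = 65
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
f = np.pi**2 * np.sin(np.pi*x)                    # exact solution: sin(pi x)
u = np.zeros(n)
for _ in range(20):
    u = two_grid(u, f, h)
print(np.max(np.abs(u - np.sin(np.pi*x))))        # ~2e-4: discretization-level
```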
Method and apparatus for varying accelerator beam output energy
Young, Lloyd M.
1998-01-01
A coupled cavity accelerator (CCA) accelerates a charged particle beam with rf energy from a rf source. An input accelerating cavity receives the charged particle beam and an output accelerating cavity outputs the charged particle beam at an increased energy. Intermediate accelerating cavities connect the input and the output accelerating cavities to accelerate the charged particle beam. A plurality of tunable coupling cavities are arranged so that each one of the tunable coupling cavities respectively connect an adjacent pair of the input, output, and intermediate accelerating cavities to transfer the rf energy along the accelerating cavities. An output tunable coupling cavity can be detuned to variably change the phase of the rf energy reflected from the output coupling cavity so that regions of the accelerator can be selectively turned off when one of the intermediate tunable coupling cavities is also detuned.
An accelerated training method for back propagation networks
NASA Technical Reports Server (NTRS)
Shelton, Robert O. (Inventor)
1993-01-01
The principal objective is to provide a training procedure for a feed-forward, back propagation neural network which greatly accelerates the training process. A set of orthogonal singular vectors is determined from the input matrix such that the standard deviations of the projections of the input vectors along these singular vectors, as a set, are substantially maximized, thus providing an optimal means of presenting the input data. Novelty exists in the method of extracting from the set of input data a set of features which can serve to represent the input data in a simplified manner, thus greatly reducing the time and expense of training the system.
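The singular-vector projection idea can be sketched with an SVD of a centered input matrix; the data here are synthetic, and the two-component truncation is an arbitrary illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10)) @ rng.normal(size=(10, 10))  # correlated inputs
Xc = X - X.mean(axis=0)                                     # center the data
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

# Project inputs onto the two leading right singular vectors: the projections
# along these directions have the largest standard deviations, giving a
# compact, well-spread representation of the data.
features = Xc @ Vt[:2].T
stds = features.std(axis=0)
print(stds)  # decreasing, by the SVD's singular-value ordering
```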
A surrogate accelerated multicanonical Monte Carlo method for uncertainty quantification
NASA Astrophysics Data System (ADS)
Wu, Keyi; Li, Jinglai
2016-09-01
In this work we consider a class of uncertainty quantification problems where the system performance or reliability is characterized by a scalar parameter y. The performance parameter y is random due to the presence of various sources of uncertainty in the system, and our goal is to estimate the probability density function (PDF) of y. We propose to use the multicanonical Monte Carlo (MMC) method, a special type of adaptive importance sampling algorithms, to compute the PDF of interest. Moreover, we develop an adaptive algorithm to construct local Gaussian process surrogates to further accelerate the MMC iterations. With numerical examples we demonstrate that the proposed method can achieve several orders of magnitudes of speedup over the standard Monte Carlo methods.
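A simplified illustration of why biased sampling (the family MMC belongs to) pays off for rare performance values: estimating a small tail probability by importance sampling with a shifted Gaussian proposal. This is a generic textbook example, not the MMC algorithm itself:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10_000

# Plain Monte Carlo: with p = P(x > 4) ~ 3.17e-5 for x ~ N(0,1),
# 10^4 samples usually see zero or one hits.
plain = (rng.normal(size=n) > 4).mean()

# Importance sampling: draw from N(4, 1), centered on the rare region,
# and reweight each sample by the density ratio N(0,1)/N(4,1).
shift = 4.0
z = rng.normal(loc=shift, size=n)
w = np.exp(-0.5 * z**2 + 0.5 * (z - shift)**2)
iw = ((z > 4) * w).mean()
print(plain, iw)  # iw lands close to the exact value 3.167e-5
```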
Electromagnetic metamaterial simulations using a GPU-accelerated FDTD method
NASA Astrophysics Data System (ADS)
Seok, Myung-Su; Lee, Min-Gon; Yoo, SeokJae; Park, Q.-Han
2015-12-01
Metamaterials composed of artificial subwavelength structures exhibit extraordinary properties that cannot be found in nature. Designing artificial structures having exceptional properties plays a pivotal role in current metamaterial research. We present a new numerical simulation scheme for metamaterial research. The scheme is based on a graphic processing unit (GPU)-accelerated finite-difference time-domain (FDTD) method. The FDTD computation can be significantly accelerated when GPUs are used instead of only central processing units (CPUs). We explain how the fast FDTD simulation of large-scale metamaterials can be achieved through communication optimization in a heterogeneous CPU/GPU-based computer cluster. Our method also includes various advanced FDTD techniques: the non-uniform grid technique, the total-field/scattered-field (TFSF) technique, the auxiliary field technique for dispersive materials, the running discrete Fourier transform, and the complex structure setting. We demonstrate the power of our new FDTD simulation scheme by simulating the negative refraction of light in a coaxial waveguide metamaterial.
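A minimal (CPU-only) 1D FDTD update loop, showing the leapfrog structure that maps so well onto GPUs; the grid size, soft Gaussian source, and normalized units (Courant number 1) are illustrative choices:

```python
import numpy as np

n_x, n_t = 200, 300
ez = np.zeros(n_x)       # electric field on the primary grid
hy = np.zeros(n_x - 1)   # magnetic field on the staggered (Yee) grid

for t in range(n_t):
    hy += np.diff(ez)                    # update H from the curl of E
    ez[1:-1] += np.diff(hy)              # update E from the curl of H
    ez[n_x // 2] += np.exp(-((t - 30) / 10.0) ** 2)  # soft Gaussian source

print(np.abs(ez).max())  # fields stay bounded: the scheme is stable here
```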
Design strategy for integrating DSA via patterning in sub-7 nm interconnects
NASA Astrophysics Data System (ADS)
Karageorgos, Ioannis; Ryckaert, Julien; Tung, Maryann C.; Wong, H.-S. P.; Gronheid, Roel; Bekaert, Joost; Karageorgos, Evangelos; Croes, Kris; Vandenberghe, Geert; Stucchi, Michele; Dehaene, Wim
2016-03-01
In recent years, major advancements have been made in the directed self-assembly (DSA) of block copolymers (BCPs). As a result, the insertion of DSA into IC fabrication is being actively considered for the sub-7nm nodes. At these nodes, DSA technology could alleviate the costs of multiple patterning and limit the number of litho masks required per metal layer. One of the most straightforward approaches for DSA implementation would be via patterning through templated DSA, where hole patterns are readily accessible through templated confinement of cylindrical-phase BCP materials. Our in-house studies show that decomposition of via layers in realistic circuits below the 7nm node would require many multi-patterning steps (or colors) using 193nm immersion lithography. Even the use of EUV might require double patterning at these dimensions, since the minimum via distance would be smaller than the EUV resolution. The grouping of vias through templated DSA can resolve local conflicts in high-density areas. This way, the number of required colors can be significantly reduced. The implementation of this approach requires a DSA-aware mask decomposition. In this paper, our design approach for DSA via patterning at sub-7nm nodes is discussed. We propose options to expand the list of DSA-compatible via patterns (DSA letters) and we define matching cost formulas for the optimal DSA-aware layout decomposition. A flowchart of our proposed tool is presented.
NASA Astrophysics Data System (ADS)
Kim, JiHoon; Yin, Jian; Cao, Yi; Her, YoungJun; Petermann, Claire; Wu, Hengpeng; Shan, Jianhui; Tsutsumi, Tomohiko; Lin, Guanyang
2015-03-01
Significant progress on 300 mm wafer-level DSA (Directed Self-Assembly) performance stability and pattern quality has been demonstrated in recent years. DSA technology is now widely regarded as a leading complementary patterning technique for future-node integrated circuit (IC) device manufacturing. We first published the SMART™ DSA flow in 2012. In 2013, we demonstrated that SMART™ DSA pattern quality is comparable, in terms of pattern uniformity on a 300 mm wafer, to that generated using traditional multiple patterning techniques. In addition, we also demonstrated that less than 1.5 nm/3σ LER (line edge roughness) is achievable for a 16 nm half-pitch DSA line/space pattern through the SMART™ DSA process. In this publication, we report the impact of key pre-pattern features and processing conditions on SMART™ DSA performance. The 300mm wafer process window, CD uniformity, and pattern LER/LWR after etch transfer into a carbon hard mask will be discussed as well.
Discontinuous diffusion synthetic acceleration for Sn transport on 2D arbitrary polygonal meshes
NASA Astrophysics Data System (ADS)
Turcksin, Bruno; Ragusa, Jean C.
2014-10-01
In this paper, a Diffusion Synthetic Acceleration (DSA) technique applied to the Sn radiation transport equation is developed using Piece-Wise Linear Discontinuous (PWLD) finite elements on arbitrary polygonal grids. The discretization of the DSA equations employs an Interior Penalty technique, as is classically done for the stabilization of the diffusion equation using discontinuous finite element approximations. The penalty method yields a system of linear equations that is Symmetric Positive Definite (SPD). Thus, solution techniques such as Preconditioned Conjugate Gradient (PCG) can be effectively employed. Algebraic MultiGrid (AMG) and Symmetric Gauss-Seidel (SGS) are employed as conjugate gradient preconditioners for the DSA system. AMG is shown to be significantly more efficient than SGS. Fourier analyses are carried out and we show that this discontinuous finite element DSA scheme is always stable and effective at reducing the spectral radius for iterative transport solves, even for grids with high-aspect ratio cells. Numerical results are presented for different grid types: quadrilateral, hexagonal, and polygonal grids as well as grids with local mesh adaptivity.
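Because the penalty discretization yields an SPD system, the conjugate gradient method applies directly; a minimal unpreconditioned sketch on a small SPD tridiagonal matrix standing in for the DSA system (illustrative only):

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=500):
    """Solve A x = b for SPD A by the (unpreconditioned) CG method."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

n = 50
A = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # SPD tridiagonal matrix
b = np.ones(n)
x = conjugate_gradient(A, b)
print(np.allclose(A @ x, b))
```

A preconditioner such as AMG or SGS, as benchmarked in the abstract above, would be supplied by applying it to the residual before forming the search direction.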
Accelerated weight histogram method for exploring free energy landscapes
NASA Astrophysics Data System (ADS)
Lindahl, V.; Lidmar, J.; Hess, B.
2014-07-01
Calculating free energies is an important and notoriously difficult task for molecular simulations. The rapid increase in computational power has made it possible to probe increasingly complex systems, yet extracting accurate free energies from these simulations remains a major challenge. Fully exploring the free energy landscape of, say, a biological macromolecule typically requires sampling large conformational changes and slow transitions. Often, the only feasible way to study such a system is to simulate it using an enhanced sampling method. The accelerated weight histogram (AWH) method is a new, efficient extended ensemble sampling technique which adaptively biases the simulation to promote exploration of the free energy landscape. The AWH method uses a probability weight histogram which allows for efficient free energy updates and results in an easy discretization procedure. A major advantage of the method is its general formulation, making it a powerful platform for developing further extensions and analyzing its relation to already existing methods. Here, we demonstrate its efficiency and general applicability by calculating the potential of mean force along a reaction coordinate for both a single dimension and multiple dimensions. We make use of a non-uniform, free energy dependent target distribution in reaction coordinate space so that computational efforts are not wasted on physically irrelevant regions. We present numerical results for molecular dynamics simulations of lithium acetate in solution and chignolin, a 10-residue long peptide that folds into a β-hairpin. We further present practical guidelines for setting up and running an AWH simulation.
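The adaptive-bias idea behind such extended-ensemble methods can be illustrated with a deliberately simplified flat-histogram toy: a Wang-Landau-style update on a double-well potential, not the actual AWH weight-histogram update, and with all parameters assumed for illustration:

```python
import numpy as np

# Toy flat-histogram sampler illustrating the adaptive-bias idea behind
# extended-ensemble methods (Wang-Landau-style update, NOT the actual AWH
# update rule; units of kT = 1 throughout).
rng = np.random.default_rng(0)
x = np.linspace(-2.0, 2.0, 41)
U = x**4 - 2.0 * x**2              # double-well "free energy landscape"
V = np.zeros_like(U)               # adaptive penalty bias
hist = np.zeros_like(U)
delta, i = 0.5, 20                 # bias increment; start mid-grid

for step in range(1, 200_001):
    j = int(np.clip(i + rng.integers(-1, 2), 0, len(x) - 1))
    dE = (U[j] + V[j]) - (U[i] + V[i])   # Metropolis step on biased landscape
    if dE <= 0.0 or rng.random() < np.exp(-dE):
        i = j
    V[i] += delta                   # penalize the state just visited
    hist[i] += 1
    if step % 20_000 == 0 and hist.min() > 0.6 * hist.mean():
        delta *= 0.5                # refine once the histogram is roughly flat
        hist[:] = 0

F_est = -(V - V.max())              # free-energy estimate, shifted to min 0
```

At convergence the biased landscape is flat, so the accumulated penalty recovers the free energy profile; AWH replaces this ad hoc update with a statistically controlled weight-histogram update.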
DSA via hole shrink for advanced node applications
NASA Astrophysics Data System (ADS)
Chi, Cheng; Liu, Chi-Chun; Meli, Luciana; Schmidt, Kristin; Xu, Yongan; DeSilva, Ekmini Anuja; Sanchez, Martha; Farrell, Richard; Cottle, Hongyun; Kawamura, Daiji; Singh, Lovejeet; Furukawa, Tsuyoshi; Lai, Kafai; Pitera, Jed W.; Sanders, Daniel; Hetzer, David R.; Metz, Andrew; Felix, Nelson; Arnold, John; Colburn, Matthew
2016-04-01
Directed self-assembly (DSA) of block copolymers (BCPs) has become a promising patterning technique for the 7 nm-node hole shrink process due to its material-controlled CD uniformity and process simplicity.[1] For this application, the cylinder-forming BCP system has been investigated far more extensively than its lamella-forming counterpart, mainly because cylindrical BCPs will form multiple vias in non-circular guiding patterns (GPs), which is considered closer to technological needs.[2-5] The need to generate multiple DSA domains in a bar-shaped GP originates from the resolution limit of lithography: vias placed too close to each other will merge and short the circuit. In practice, multiple patterning and self-aligned via (SAV) processes have been implemented in semiconductor manufacturing to address this resolution issue.[6] The former approach separates one pattern layer with unresolvable dense features into several layers with resolvable features, while the latter utilizes the superposition of via bars and the pre-defined metal trench patterns in a thin hard-mask layer to resolve individual vias, as illustrated in Fig. 1 (upper). With proper design, using DSA to generate via bars with the SAV process could provide another approach to addressing the resolution issue.
METHOD OF PRODUCING AND ACCELERATING AN ION BEAM
NASA Technical Reports Server (NTRS)
Foster, John E. (Inventor)
2005-01-01
A method of producing and accelerating an ion beam comprising the steps of: providing a magnetic field with a cusp that opens in an outward direction along a centerline that passes through a vertex of the cusp; providing an ionizing gas that sprays outward through at least one capillary-like orifice in a plenum that is positioned such that the orifice is on the centerline in the cusp, outward of the vertex of the cusp; providing a cathode electron source and positioning it outward of the orifice and off of the centerline; and positively charging the plenum relative to the cathode electron source such that the plenum functions as an anode. A hot filament may be used as the cathode electron source, and permanent magnets may be used to provide the magnetic field.
Accelerated molecular dynamics methods: introduction and recent developments
Uberuaga, Blas Pedro; Voter, Arthur F; Perez, Danny; Shim, Y; Amar, J G
2009-01-01
reaction pathways may be important, we return instead to a molecular dynamics treatment, in which the trajectory itself finds an appropriate way to escape from each state of the system. Since a direct integration of the trajectory would be limited to nanoseconds, while we are seeking to follow the system for much longer times, we modify the dynamics in some way to cause the first escape to happen much more quickly, thereby accelerating the dynamics. The key is to design the modified dynamics in a way that does as little damage as possible to the probability for escaping along a given pathway, i.e., we try to preserve the relative rate constants for the different possible escape paths out of the state. We can then use this modified dynamics to follow the system from state to state, reaching much longer times than we could reach with direct MD. The dynamics within any one state may no longer be meaningful, but the state-to-state dynamics, in the best case, as we discuss in the paper, can be exact. We have developed three methods in this accelerated molecular dynamics (AMD) class, in each case appealing to TST, either implicitly or explicitly, to design the modified dynamics. Each of these methods has its own advantages, and we and others have applied these methods to a wide range of problems. The purpose of this article is to give the reader a brief introduction to how these methods work, and discuss some of the recent developments that have been made to improve their power and applicability. Note that this brief review does not claim to be exhaustive: various other methods aiming at similar goals have been proposed in the literature. For the sake of brevity, our focus will exclusively be on the methods developed by the group.
Apparatus and method for the acceleration of projectiles to hypervelocities
Hertzberg, Abraham; Bruckner, Adam P.; Bogdanoff, David W.
1990-01-01
A projectile is initially accelerated to a supersonic velocity and then injected into a launch tube filled with a gaseous propellant. The projectile outer surface and launch tube inner surface form a ramjet having a diffuser, a combustion chamber and a nozzle. A catalytic coated flame holder projecting from the projectile ignites the gaseous propellant in the combustion chamber thereby accelerating the projectile in a subsonic combustion mode zone. The projectile then enters an overdriven detonation wave launch tube zone wherein further projectile acceleration is achieved by a formed, controlled overdriven detonation wave capable of igniting the gaseous propellant in the combustion chamber. Ultrahigh velocity projectile accelerations are achieved in a launch tube layered detonation zone having an inner sleeve filled with hydrogen gas. An explosive, which is disposed in the annular zone between the inner sleeve and the launch tube, explodes responsive to an impinging shock wave emanating from the diffuser of the accelerating projectile thereby forcing the inner sleeve inward and imparting an acceleration to the projectile. For applications wherein solid or liquid high explosives are employed, the explosion thereof forces the inner sleeve inward, forming a throat behind the projectile. This throat chokes flow behind, thereby imparting an acceleration to the projectile.
Accelerating ab initio molecular dynamics simulations by linear prediction methods
NASA Astrophysics Data System (ADS)
Herr, Jonathan D.; Steele, Ryan P.
2016-09-01
Acceleration of ab initio molecular dynamics (AIMD) simulations can be reliably achieved by extrapolation of electronic data from previous timesteps. Existing techniques utilize polynomial least-squares regression to fit previous steps' Fock or density matrix elements. In this work, the recursive Burg 'linear prediction' technique is shown to be a viable alternative to polynomial regression, and the extrapolation-predicted Fock matrix elements were three orders of magnitude closer to converged elements. Accelerations of 1.8-3.4× were observed in test systems, and in all cases, linear prediction outperformed polynomial extrapolation. Importantly, these accelerations were achieved without reducing the MD integration timestep.
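The core idea, predicting the next matrix element from a linear recurrence on past values rather than from a fitted polynomial, can be sketched as follows. Here a least-squares linear-prediction fit on a synthetic oscillatory series stands in (both are assumptions) for the recursive Burg algorithm and for real Fock-matrix histories:

```python
import numpy as np

# Smooth oscillatory history standing in for one Fock-matrix element.
t = np.arange(40)
x = np.cos(0.3 * t) + 0.5 * np.cos(0.11 * t)
history, true_next = x[:-1], x[-1]

# Polynomial least-squares extrapolation (order 4 over the last 8 points).
pts = history[-8:]
poly = np.polynomial.Polynomial.fit(np.arange(8.0), pts, 4)
poly_pred = poly(8.0)

# Linear prediction: fit AR coefficients a so that x[m] ~ sum_k a[k]*x[m-1-k].
# (Least-squares fit here; the paper uses the recursive Burg algorithm.)
p = 6  # prediction order
A = np.column_stack([history[p - 1 - k:len(history) - 1 - k] for k in range(p)])
a, *_ = np.linalg.lstsq(A, history[p:], rcond=None)
lp_pred = history[-1:-p - 1:-1] @ a
```

A sum of two cosines satisfies an exact low-order linear recurrence, so linear prediction recovers the next value essentially exactly, while polynomial extrapolation accumulates error outside its fitting window; this mirrors the accuracy gap reported in the abstract.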
Demonstration recommendations for accelerated testing of concrete decontamination methods
Dickerson, K.S.; Ally, M.R.; Brown, C.H.; Morris, M.I.; Wilson-Nichols, M.J.
1995-12-01
A large number of aging US Department of Energy (DOE) surplus facilities located throughout the US require deactivation, decontamination, and decommissioning. Although several technologies are available commercially for concrete decontamination, emerging technologies with potential to reduce secondary waste and minimize the impact and risk to workers and the environment are needed. In response to these needs, the Accelerated Testing of Concrete Decontamination Methods project team described the nature and extent of contaminated concrete within the DOE complex and identified applicable emerging technologies. Existing information used to describe the nature and extent of contaminated concrete indicates that the most frequently occurring radiological contaminants are ¹³⁷Cs, ²³⁸U (and its daughters), ⁶⁰Co, ⁹⁰Sr, and tritium. The total area of radionuclide-contaminated concrete within the DOE complex is estimated to be in the range of 7.9 × 10⁸ ft², or approximately 18,000 acres. Concrete decontamination problems were matched with emerging technologies to recommend demonstrations considered to provide the most benefit to decontamination of concrete within the DOE complex. Emerging technologies with the most potential benefit were biological decontamination, electro-hydraulic scabbling, electrokinetics, and microwave scabbling.
Method of accelerating photons by a relativistic plasma wave
Dawson, John M.; Wilks, Scott C.
1990-01-01
Photons of a laser pulse have their group velocity accelerated in a plasma as they are placed on a downward density gradient of a plasma wave of which the phase velocity nearly matches the group velocity of the photons. This acceleration results in a frequency upshift. If the unperturbed plasma has a slight density gradient in the direction of propagation, the photon frequencies can be continuously upshifted to significantly greater values.
Just in Time DSA-The Hanford Nuclear Safety Basis Strategy
Olinger, S. J.; Buhl, A. R.
2002-02-26
The U.S. Department of Energy, Richland Operations Office (RL) is responsible for 30 hazard category 2 and 3 nuclear facilities that are operated by its prime contractors, Fluor Hanford Incorporated (FHI), Bechtel Hanford, Incorporated (BHI) and Pacific Northwest National Laboratory (PNNL). The publication of Title 10, Code of Federal Regulations, Part 830, Subpart B, Safety Basis Requirements (the Rule) in January 2001 imposed the requirement that the Documented Safety Analyses (DSA) for these facilities be reviewed against the requirements of the Rule. Those DSA that do not meet the requirements must either be upgraded to satisfy the Rule, or an exemption must be obtained. RL and its prime contractors have developed a Nuclear Safety Strategy that provides a comprehensive approach for supporting RL's efforts to meet its long term objectives for hazard category 2 and 3 facilities while also meeting the requirements of the Rule. This approach will result in a reduction of the total number of safety basis documents that must be developed and maintained to support the remaining mission and closure of the Hanford Site and ensure that the documentation that must be developed will support: compliance with the Rule; a "Just-In-Time" approach to development of Rule-compliant safety bases supported by temporary exemptions; and consolidation of safety basis documents that support multiple facilities with a common mission (e.g., decontamination, decommissioning and demolition [DD&D], waste management, surveillance and maintenance). This strategy provides a clear path to transition the safety bases for the various Hanford facilities from support of operation and stabilization missions through DD&D to accelerate closure. This "Just-In-Time" Strategy can also be tailored for other DOE Sites, creating the potential for large cost savings and schedule reductions throughout the DOE complex.
Yaqi Wang; Jean C. Ragusa
2011-10-01
Diffusion synthetic acceleration (DSA) schemes compatible with adaptive mesh refinement (AMR) grids are derived for the SN transport equations discretized using high-order discontinuous finite elements. These schemes are directly obtained from the discretized transport equations by assuming a linear dependence in angle of the angular flux along with an exact Fick's law and, therefore, are categorized as partially consistent. These schemes are akin to the symmetric interior penalty technique applied to elliptic problems and are all based on a second-order discontinuous finite element discretization of a diffusion equation (as opposed to a mixed or P1 formulation). Therefore, they only have the scalar flux as unknowns. A Fourier analysis has been carried out to determine the convergence properties of the three proposed DSA schemes for various cell optical thicknesses and aspect ratios. Out of the three DSA schemes derived, the modified interior penalty (MIP) scheme is stable and effective for realistic problems, even with distorted elements, but loses effectiveness for some highly heterogeneous configurations. The MIP scheme is also symmetric positive definite and can be solved efficiently with a preconditioned conjugate gradient method. Its implementation in an AMR SN transport code has been performed for both source iteration and GMRes-based transport solves, with polynomial orders up to 4. Numerical results are provided and show good agreement with the Fourier analysis results. Results on AMR grids demonstrate that the cost of DSA can be kept low on locally refined meshes.
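The kind of Fourier analysis referred to here can be reproduced for the textbook continuous (non-discretized) slab-geometry case, where source iteration attenuates an error mode by c·arctan(λ)/λ and a consistent diffusion solve corrects it. This is the classic analysis, not the MIP-specific discrete one:

```python
import numpy as np

# Textbook continuous Fourier analysis for 1D isotropic-scattering
# transport with sigma_t = 1 and scattering ratio c (classic consistent
# DSA, not the discrete MIP analysis of the paper).
# Source-iteration error mode:  omega_SI(lam) = c * arctan(lam)/lam
# DSA-corrected mode:           omega_SI + c*(omega_SI - 1)/(lam^2/3 + 1 - c)
c = 0.999
lam = np.linspace(1e-6, 100.0, 200_000)
omega_si = c * np.arctan(lam) / lam
omega_dsa = omega_si + c * (omega_si - 1.0) / (lam**2 / 3.0 + 1.0 - c)

rho_si = omega_si.max()             # approaches c: SI stalls as c -> 1
rho_dsa = np.abs(omega_dsa).max()   # ~0.2247*c, the classic DSA bound
```

The unaccelerated spectral radius approaches the scattering ratio c, while the DSA-corrected radius stays near 0.2247c, which is why acceleration is essential for diffusive problems.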
Directed self-assembly (DSA) grapho-epitaxy template generation with immersion lithography
NASA Astrophysics Data System (ADS)
Ma, Yuansheng; Lei, Junjiang; Torres, J. A.; Hong, Le; Word, James; Fenger, Germain; Tritchkov, Alexander; Lippincott, George; Gupta, Rachit; Lafferty, Neal; He, Yuan; Bekaert, Joost; Vanderberghe, Geert
2015-03-01
In this paper, we present an optimization methodology for the template designs of sub-resolution contacts using directed self-assembly (DSA) with grapho-epitaxy and immersion lithography. We demonstrate the flow using a 60 nm-pitch contact design in doublet with Monte Carlo simulations for DSA. We introduce the notion of Template Error Enhancement Factor (TEEF) to gauge the sensitivity of DSA printing infidelity to template printing infidelity, and evaluate optimized template designs with TEEF metrics. Our data show that SMO is critical to achieving sub-80 nm non-L0 pitches for DSA patterns using 193i.
NASA Astrophysics Data System (ADS)
Davis, Brian; Oberstar, Erick; Royalty, Kevin; Schafer, Sebastian; Strother, Charles; Mistretta, Charles
2015-03-01
Static C-arm CT 3D FDK baseline reconstructions (3D-DSA) are unable to provide temporal information to radiologists. 4D-DSA provides a time series of 3D volumes by implementing a constrained-image (thresholded 3D-DSA) reconstruction that utilizes the temporal dynamics in the 2D projections. The volumetric limiting spatial resolution (VLSR) of 4D-DSA is quantified and compared to a 3D-DSA reconstruction using the same 3D-DSA parameters. The effects of varying, over significant ranges, two 4D-DSA parameters were investigated: the 2D blurring kernel size applied to the projections and the threshold applied to the 3D-DSA when generating the constraining image, for a scanned phantom (SPH) and an electronic phantom (EPH). The SPH consisted of a 76 micron tungsten wire encased in a 47 mm O.D. plastic, radially concentric, thin-walled support structure. An 8-second/248-frame/198° scan protocol acquired the raw projection data. VLSR was determined from averaged MTF curves generated from each 2D transverse slice of every (248) 4D temporal frame (3D). 4D results for the SPH and EPH were compared to the 3D-DSA. Analysis of the 3D-DSA resulted in a VLSR of 2.28 and 1.69 lp/mm for the EPH and SPH, respectively. Kernel (2D) sizes of either 10x10 or 20x20 pixels with a threshold of 10% of the 3D-DSA as a constraining image provided 4D-DSA VLSR nearest to the 3D-DSA. The 4D-DSA algorithms yielded 2.21 and 1.67 lp/mm, with percent errors of 3.1% and 1.2% for the EPH and SPH, respectively, as compared to the 3D-DSA. This research indicates 4D-DSA is capable of retaining the resolution of the 3D-DSA.
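The MTF-based limiting-resolution measurement can be sketched on a synthetic line-spread function; the Gaussian blur width and pixel pitch below are assumptions for illustration, not the paper's data:

```python
import numpy as np

# Toy MTF estimate: a line-spread function (LSF) from a thin-wire phantom
# slice, its FFT magnitude, and the 10%-MTF limiting spatial resolution.
# (Illustrative; the paper averages MTFs over all slices and frames.)
dx = 0.05                 # pixel pitch, mm (assumed)
x = np.arange(-10, 10, dx)
sigma = 0.15              # blur of the imaging chain, mm (assumed)
lsf = np.exp(-x**2 / (2 * sigma**2))
lsf /= lsf.sum()          # normalize so MTF(0) = 1

mtf = np.abs(np.fft.rfft(lsf))
mtf /= mtf[0]
freqs = np.fft.rfftfreq(len(x), d=dx)       # spatial frequency, lp/mm

limiting = freqs[np.argmax(mtf < 0.10)]     # first frequency below 10% MTF
```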
Diffusive Shock Acceleration and Reconnection Acceleration Processes
NASA Astrophysics Data System (ADS)
Zank, G. P.; Hunana, P.; Mostafavi, P.; Le Roux, J. A.; Li, Gang; Webb, G. M.; Khabarova, O.; Cummings, A.; Stone, E.; Decker, R.
2015-12-01
Shock waves, as shown by simulations and observations, can generate high levels of downstream vortical turbulence, including magnetic islands. We consider a combination of diffusive shock acceleration (DSA) and downstream magnetic-island-reconnection-related processes as an energization mechanism for charged particles. Observations of electron and ion distributions downstream of interplanetary shocks and the heliospheric termination shock (HTS) are frequently inconsistent with the predictions of classical DSA. We utilize a recently developed transport theory for charged particles propagating diffusively in a turbulent region filled with contracting and reconnecting plasmoids and small-scale current sheets. Particle energization associated with the anti-reconnection electric field, a consequence of magnetic island merging, and magnetic island contraction are considered. For the former only, we find that (i) the spectrum is a hard power law in particle speed, and (ii) the downstream solution is constant. For downstream plasmoid contraction only, (i) the accelerated spectrum is a hard power law in particle speed; (ii) the particle intensity for a given energy peaks downstream of the shock, and the distance to the peak location increases with increasing particle energy; and (iii) the particle intensity amplification for a particular particle energy, f(x, c/c₀)/f(0, c/c₀), is not 1, as predicted by DSA, but increases with increasing particle energy. The general solution combines both the reconnection-induced electric field and plasmoid contraction. The energetic particle intensity profile observed by Voyager 2 downstream of the HTS appears to support a particle acceleration mechanism that combines both DSA and magnetic-island-reconnection-related processes.
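For reference, classical test-particle DSA predicts a power-law spectrum whose index depends only on the shock compression ratio; a one-line implementation of that baseline (the island-reconnection-modified spectra of the paper deviate from it):

```python
def dsa_spectral_index(r):
    """Classical test-particle DSA power-law index q for f(p) ~ p**(-q),
    given shock compression ratio r. This is the textbook baseline, not
    the reconnection-modified result discussed in the paper."""
    if r <= 1.0:
        raise ValueError("compression ratio must exceed 1")
    return 3.0 * r / (r - 1.0)

# Strong gas-dynamic shock: r = 4 gives q = 4 (an E^-2 energy spectrum).
q_strong = dsa_spectral_index(4.0)
```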
Development of wide area environment accelerator operation and diagnostics method
NASA Astrophysics Data System (ADS)
Uchiyama, Akito; Furukawa, Kazuro
2015-08-01
Remote operation and diagnostic systems for particle accelerators have been developed for beam operation and maintenance in various situations. Even when fully remote experiments are not necessary, remote diagnosis and maintenance of the accelerator are required. For remote-operation operator interfaces (OPIs), the use of standard protocols such as the hypertext transfer protocol (HTTP) is advantageous, because system-dependent protocols are unnecessary between the remote client and the on-site server. Here, we have developed a client system based on WebSocket, a protocol standardized by the Internet Engineering Task Force for Web-based systems, as a next-generation Web-based OPI using the Experimental Physics and Industrial Control System (EPICS) Channel Access protocol. As a result of this implementation, WebSocket-based client systems have become available for remote operation. In practical application, the remote operation of an accelerator via a wide area network (WAN) faces a number of challenges; e.g., the accelerator is both an experimental device and a radiation generator, and any error in remote control system operation could result in an immediate breakdown. Therefore, we propose the implementation of an operator intervention system for remote accelerator diagnostics and support that can obviate any differences between the local control room and remote locations. Here, remote-operation Web-based OPIs, which resolve security issues, are developed.
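One concrete, well-specified piece of the WebSocket protocol (RFC 6455) is the opening handshake, in which the server proves protocol awareness by hashing the client's Sec-WebSocket-Key together with a fixed GUID; a minimal stdlib-only sketch:

```python
import base64
import hashlib

# RFC 6455 opening handshake: Sec-WebSocket-Accept is the base64-encoded
# SHA-1 of the client's Sec-WebSocket-Key concatenated with a fixed GUID.
WS_GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"

def websocket_accept(sec_websocket_key: str) -> str:
    digest = hashlib.sha1((sec_websocket_key + WS_GUID).encode("ascii")).digest()
    return base64.b64encode(digest).decode("ascii")

# The key/accept example pair given in RFC 6455 itself:
accept = websocket_accept("dGhlIHNhbXBsZSBub25jZQ==")
```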
Diamant, Kevin David; Raitses, Yevgeny; Fisch, Nathaniel Joseph
2014-05-13
Systems and methods may be provided for cylindrical Hall thrusters with independently controllable ionization and acceleration stages. The systems and methods may include a cylindrical channel having a center axial direction, a gas inlet for directing ionizable gas to an ionization section of the cylindrical channel, an ionization device that ionizes at least a portion of the ionizable gas within the ionization section to generate ionized gas, and an acceleration device distinct from the ionization device. The acceleration device may provide an axial electric field for an acceleration section of the cylindrical channel to accelerate the ionized gas through the acceleration section, where the axial electric field has an axial direction in relation to the center axial direction. The ionization section and the acceleration section of the cylindrical channel may be substantially non-overlapping.
Development of a fast voltage control method for electrostatic accelerators
NASA Astrophysics Data System (ADS)
Lobanov, Nikolai R.; Linardakis, Peter; Tsifakis, Dimitrios
2014-12-01
The concept of a novel fast voltage control loop for tandem electrostatic accelerators is described. This control loop utilises high-frequency components of the ion beam current intercepted by the image slits to generate a correction voltage that is applied to the first few gaps of the low- and high-energy acceleration tubes adjoining the high voltage terminal. New techniques for the direct measurement of the transfer function of an ultra-high impedance structure, such as an electrostatic accelerator, have been developed. For the first time, the transfer function for the fast feedback loop has been measured directly. Slow voltage variations are stabilised with a common corona control loop, and the relationship between the transfer functions for the slow and new fast control loops required for optimum operation is discussed. The main source of terminal voltage instabilities, variation of the charging current caused by mechanical oscillations of the charging chains, has been analysed.
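Direct transfer-function measurement can be illustrated on a simulated plant: inject a known test signal and take the FFT ratio of response to drive. The first-order low-pass below is an assumed stand-in, not the accelerator's actual regulation path:

```python
import numpy as np

# Toy direct transfer-function measurement: drive a simulated plant with
# an impulse and take the FFT ratio of response to drive. The first-order
# low-pass is an assumed stand-in for the real regulation path.
n = 4096
x = np.zeros(n)
x[0] = 1.0                             # injected test pulse
a = 0.9                                # assumed plant pole
y = np.empty(n)
acc = 0.0
for i in range(n):
    acc = a * acc + (1 - a) * x[i]     # y[i] = a*y[i-1] + (1-a)*x[i]
    y[i] = acc

H_est = np.fft.rfft(y) / np.fft.rfft(x)        # measured transfer function
w = 2 * np.pi * np.fft.rfftfreq(n)
H_true = (1 - a) / (1 - a * np.exp(-1j * w))   # analytic reference
```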
Method of and apparatus for accelerating a projectile
Goldstein, Yeshayahu S. A.; Tidman, Derek A.
1986-01-01
A projectile is accelerated along a confined path by supplying a pulsed high pressure, high velocity plasma jet to the rear of the projectile as the projectile traverses the path. The jet enters the confined path at a non-zero angle relative to the projectile path. The pulse is derived from a dielectric capillary tube having an interior wall from which plasma forming material is ablated in response to a discharge voltage. The projectile can be accelerated in response to the kinetic energy in the plasma jet or in response to a pressure increase of gases in the confined path resulting from the heat added to the gases by the plasma.
Ultrahigh impedance method to assess electrostatic accelerator performance
NASA Astrophysics Data System (ADS)
Lobanov, Nikolai R.; Linardakis, Peter; Tsifakis, Dimitrios
2015-06-01
This paper describes an investigation of problem-solving procedures to troubleshoot electrostatic accelerators. A novel technique to diagnose issues with high-voltage components is described. The main application of this technique is noninvasive testing of electrostatic accelerator high-voltage grading systems, measuring insulation resistance, or determining the volume and surface resistivity of insulation materials used in column posts and acceleration tubes. In addition, this technique allows verification of the continuity of the resistive divider assembly as a complete circuit, revealing whether an electrical path exists between equipotential rings, resistors, tube electrodes, and column post-to-tube conductors. It is capable of identifying and locating a "microbreak" in a resistor and of experimentally validating the transfer function of the high-impedance energy-control element. A simple and practical fault-finding procedure has been developed based on fundamental principles. The experimental distributions of relative resistance deviations (ΔR/R) for both accelerating tubes and posts were collected during five scheduled accelerator maintenance tank openings during 2013 and 2014. Components with measured ΔR/R > ±2.5% were considered faulty and put through a detailed examination, with faults categorized. In total, thirty-four unique fault categories were identified, and most would not be identifiable without the new technique described. The most common failure mode was permanent and irreversible insulator current leakage that developed after exposure to the ambient environment. As a result of efficient in situ troubleshooting and fault-elimination techniques, the maximum values of |ΔR/R| are kept below 2.5% at the conclusion of maintenance procedures. The acceptance margin could be narrowed even further by a factor of 2.5 by increasing the test voltage from 40 V up to 100 V. Based on experience over the last two years, resistor and insulator
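The ΔR/R acceptance test lends itself to a short sketch; the chain values and the use of the chain mean as the reference resistance are assumptions for illustration:

```python
def flag_faulty(resistances, threshold=0.025):
    """Return indices of grading resistors whose relative deviation from
    the chain mean exceeds the acceptance margin (|dR/R| > 2.5% default).
    Illustrative sketch; the reference value is an assumption."""
    mean = sum(resistances) / len(resistances)
    return [i for i, r in enumerate(resistances)
            if abs((r - mean) / mean) > threshold]

chain = [300.0, 301.2, 299.5, 285.0, 300.8]   # MOhm; one leaky resistor
faulty = flag_faulty(chain)                   # index 3 exceeds the margin
```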
34 CFR 367.11 - What assurances must a DSA include in its application?
Code of Federal Regulations, 2014 CFR
2014-07-01
...) and (b), and consistent with 34 CFR 364.28, the DSA will seek to incorporate into and describe in the... section 704 of the Act and subpart C of 34 CFR part 364; and (g) The applicant has been designated by the... 34 Education 2 2014-07-01 2013-07-01 true What assurances must a DSA include in its...
Characterization of a Direct Sample Analysis (DSA) Ambient Ionization Source
NASA Astrophysics Data System (ADS)
Winter, Gregory T.; Wilhide, Joshua A.; LaCourse, William R.
2015-09-01
Water cluster ion intensity and distribution are affected by source conditions in direct sample analysis (DSA) ionization. Parameters investigated in this paper include source nozzle diameter, gas flow rate, and source positions relative to the mass spectrometer inlet. Schlieren photography was used to image the gas flow profile exiting the nozzle. Smaller nozzle diameters and higher flow rates produced clusters of the type [H+(H2O)n]+ with greater n and higher intensity than larger nozzles and lower gas flow rates. At high gas flow rates, the gas flow profile widened compared with the original nozzle diameter. At lower flow rates, the amount of expansion was reduced, which suggests that lowering the flow rate may allow for improvements in sampling spatial resolution.
DSA template optimization for contact layer in 1D standard cell design
NASA Astrophysics Data System (ADS)
Xiao, Zigang; Du, Yuelin; Tian, Haitong; Wong, Martin D. F.; Yi, He; Wong, H.-S. Philip
2014-03-01
At the 7 nm technology node, the contact layers of integrated circuits (ICs) are too dense to be printed by single-exposure lithography. Block copolymer directed self-assembly (DSA) has shown its advantage in contact/via patterning with high throughput and low cost. To pattern contacts with DSA, guiding templates are usually printed first with conventional lithography, e.g., 193 nm immersion lithography (193i), which has a coarser pitch resolution. Contact holes are then patterned with the DSA process. The guiding templates control the DSA patterns inside them, which have a finer resolution than the templates. The DSA contact pitch depends on the chemical properties of the block copolymer, and it can be adjusted within a certain range under strong lateral confinement to deviate from the natural pitch. As a result, different patterns can be obtained through different parameters. Although the guiding template shapes can be arbitrary, the overlay accuracies of the patterned contact holes differ and largely depend on the templates. Thus, guiding templates with tolerable variations are considered feasible, and those with large overlays are considered infeasible. To pattern the contact layer in a layout with DSA technology, we must ensure that all the DSA templates in the layout are feasible. However, the original layout may not be designed in a DSA-friendly way. Moreover, the routing process may introduce contacts that can only be patterned by infeasible templates. In this paper, we propose an optimization algorithm that optimizes the contact layer for DSA patterning in 1D standard cell design. In particular, the algorithm modifies the layout via a wire-permutation technique to redistribute the contacts such that the use of infeasible templates is avoided and feasible patterns with better overlay control are favored. The experimental results demonstrate the ability of the proposed algorithm in helping to reduce the design and manufacturing
Sidorin, Anatoly
2010-01-05
In linear accelerators the particles are accelerated by either electrostatic fields or oscillating radio frequency (RF) fields. Accordingly, linear accelerators are divided into three large groups: electrostatic, induction, and RF accelerators. An overview of the different types of accelerators is given, the stability of longitudinal and transverse motion in RF linear accelerators is briefly discussed, and the methods of beam focusing in linacs are described.
GPU-accelerated discontinuous Galerkin methods on hybrid meshes
NASA Astrophysics Data System (ADS)
Chan, Jesse; Wang, Zheng; Modave, Axel; Remacle, Jean-Francois; Warburton, T.
2016-08-01
We present a time-explicit discontinuous Galerkin (DG) solver for the time-domain acoustic wave equation on hybrid meshes containing vertex-mapped hexahedral, wedge, pyramidal and tetrahedral elements. Discretely energy-stable formulations are presented for both Gauss-Legendre and Gauss-Legendre-Lobatto (Spectral Element) nodal bases for the hexahedron. Stable timestep restrictions for hybrid meshes are derived by bounding the spectral radius of the DG operator using order-dependent constants in trace and Markov inequalities. Computational efficiency is achieved under a combination of element-specific kernels (including new quadrature-free operators for the pyramid), multi-rate timestepping, and acceleration using Graphics Processing Units.
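The timestep restriction described above amounts to bounding the spectral radius of the discrete DG operator. A generic sketch (the paper's order-dependent trace and Markov constants are replaced here by a plain power-iteration estimate on a stand-in matrix; `c_stab` is an assumed stability constant of the time integrator):

```python
import numpy as np

def spectral_radius(A, iters=200, seed=0):
    """Power-iteration estimate of the spectral radius of a
    (stand-in) discrete operator A."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(A.shape[0])
    for _ in range(iters):
        x = A @ x
        x /= np.linalg.norm(x)
    return abs(x @ (A @ x))     # Rayleigh quotient of normalized x

def stable_dt(A, c_stab=1.0):
    """Timestep bound dt <= c_stab / rho(A); c_stab depends on the
    stability region of the explicit scheme (assumed here)."""
    return c_stab / spectral_radius(A)
```

For hybrid meshes, one would take the minimum of such bounds over element types, which is what motivates the multi-rate timestepping mentioned in the abstract.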
Enablement of DSA for VIA layer with a metal SIT process flow
NASA Astrophysics Data System (ADS)
Schneider, L.; Farys, V.; Serret, E.; Fenouillet-Beranger, C.
2016-03-01
For technologies beyond 10 nm, 1D gridded designs are commonly used. This practice is particularly common in the case of Self-Aligned Double Patterning (SADP) metal processes, where the Vertical Interconnect Access (VIA) holes connecting metal line layers are placed along a discrete grid, thus limiting the number of VIA pitches. Graphoepitaxy Directed Self-Assembly (DSA) is the prevailing candidate for creating the VIA layer. The technique relies on the creation of a confinement guide, printed using optical microlithography, in which the block copolymer (BCP) is allowed to separate into distinct regions. The resulting patterns are etched to obtain an ordered VIA layer. Guiding pattern variations directly impact the placement of the target, and one must ensure that they do not interfere with circuit performance. To prevent flaws, design rules are set. In this study, for the first time, an original framework is presented to find a consistent set of design rules enabling the use of DSA in a production flow that uses SADP for metal line layer printing. In order to meet electrical requirements, the intersecting area between a VIA and its metal lines must be sufficient to ensure correct electrical connection. The intersecting area is driven by both VIA placement variability and metal line printing variability. Based on multiple process assumptions for a 10 nm node, the Monte Carlo method is used to set a maximum threshold for VIA placement error. In addition, to determine a consistent set of design rules, representative test structures have been created and tested with our in-house placement estimator: the topological skeleton of the guiding pattern [1]. Using this technique, structures with deviation above the maximum tolerated threshold are considered infeasible, and the appropriate set of design rules is extracted. In a final step, the design rules are verified with further test structures that are randomly generated using
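The Monte Carlo step for setting a placement-error threshold can be sketched in one dimension. All dimensions, distributions, and the yield target below are hypothetical stand-ins for the paper's process assumptions: a via of fixed width lands with a Gaussian placement error on a metal line whose printed width also varies, and we keep the largest error sigma whose overlap yield still meets the target:

```python
import random

def overlap_yield(sigma_via, n=20000, w_via=20.0, w_line=24.0,
                  min_overlap=14.0, sigma_line=1.0, seed=1):
    """Fraction of Monte Carlo trials whose via/metal-line overlap
    meets the electrical minimum (all dimensions hypothetical, nm)."""
    rng = random.Random(seed)
    ok = 0
    for _ in range(n):
        dx = rng.gauss(0.0, sigma_via)           # via placement error
        w = w_line + rng.gauss(0.0, sigma_line)  # printed line width
        left = max(dx - w_via / 2, -w / 2)
        right = min(dx + w_via / 2, w / 2)
        ok += (right - left) >= min_overlap
    return ok / n

def max_sigma(target=0.997, sigmas=(0.5, 1.0, 1.5, 2.0, 2.5, 3.0)):
    """Largest placement-error sigma still meeting the yield target."""
    feasible = [s for s in sigmas if overlap_yield(s) >= target]
    return max(feasible) if feasible else None
```

Structures whose estimated placement deviation exceeds the sigma returned here would then be classified infeasible, mirroring the thresholding described in the abstract.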
NASA Astrophysics Data System (ADS)
Zank, G. P.; Hunana, P.; Mostafavi, P.; le Roux, J. A.; Li, Gang; Webb, G. M.; Khabarova, O.
2015-09-01
As a consequence of the evolutionary conditions [28; 29], shock waves can generate high levels of downstream vortical turbulence. Simulations [32-34] and observations [30; 31] support the idea that downstream magnetic islands (also called plasmoids or flux ropes) result from the interaction of shocks with upstream turbulence. Zank et al. [18] speculated that a combination of diffusive shock acceleration (DSA) and downstream reconnection-related effects associated with the dynamical evolution of a “sea of magnetic islands” would result in the energization of charged particles. Here, we utilize the transport theory [18; 19] for charged particles propagating diffusively in a turbulent region filled with contracting and reconnecting plasmoids and small-scale current sheets to investigate a combined DSA and downstream multiple magnetic island charged particle acceleration mechanism. We consider separately the effects of the anti-reconnection electric field that is a consequence of magnetic island merging [17], and magnetic island contraction [14]. For the merging plasmoid reconnection-induced electric field only, we find i) that the particle spectrum is a power law in particle speed, flatter than that derived from conventional DSA theory, and ii) that the solution is constant downstream of the shock. For downstream plasmoid contraction only, we find that i) the accelerated particle spectrum is a power law in particle speed, flatter than that derived from conventional DSA theory; ii) for a given energy, the particle intensity peaks downstream of the shock, and the peak location occurs further downstream of the shock with increasing particle energy, and iii) the particle intensity amplification for a particular particle energy, f(x, c/c0)/f(0, c/c0), is not 1, as predicted by DSA theory, but increases with increasing particle energy. These predictions can be tested against observations of electrons and ions accelerated at interplanetary shocks and the heliospheric
Comparative imaging study in ultrasound, MRI, CT, and DSA using a multimodality renal artery phantom
King, Deirdre M.; Fagan, Andrew J.; Moran, Carmel M.; Browne, Jacinta E.
2011-02-15
Purpose: A range of anatomically realistic multimodality renal artery phantoms consisting of vessels with varying degrees of stenosis was developed and evaluated using four imaging techniques currently used to detect renal artery stenosis (RAS). The spatial resolution required to visualize vascular geometry and the velocity detection performance required to adequately characterize blood flow in patients suffering from RAS are currently ill-defined, with the result that no one imaging modality has emerged as a gold standard technique for screening for this disease. Methods: The phantoms, which contained a range of stenosis values (0%, 30%, 50%, 70%, and 85%), were designed for use with ultrasound, magnetic resonance imaging, x-ray computed tomography, and x-ray digital subtraction angiography. The construction materials used were optimized with respect to their ultrasonic speed of sound and attenuation coefficient, MR relaxometry (T1, T2) properties, and Hounsfield number/x-ray attenuation coefficient, with a design capable of tolerating high-pressure pulsatile flow. Fiducial targets, incorporated into the phantoms to allow for registration of images among modalities, were chosen to minimize geometric distortions. Results: High quality distortion-free images of the phantoms with good contrast between vessel lumen, fiducial markers, and background tissue to visualize all stenoses were obtained with each modality. Quantitative assessments of the grade of stenosis revealed significant discrepancies between modalities, with each underestimating the stenosis severity for the higher-stenosed phantoms (70% and 85%) by up to 14%, with the greatest discrepancy attributable to DSA. Conclusions: The design and construction of a range of anatomically realistic renal artery phantoms containing varying degrees of stenosis is described. Images obtained using the main four diagnostic techniques used to detect RAS were free from artifacts and exhibited adequate contrast
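The quantitative grading underlying the reported discrepancies reduces to a diameter ratio. A minimal sketch (the abstract does not give the measurement protocol; the diameters below, in mm, are hypothetical):

```python
def percent_stenosis(d_min, d_ref):
    """Percent diameter stenosis from the minimal lumen diameter and
    a reference (non-stenosed) diameter."""
    if not 0 < d_min <= d_ref:
        raise ValueError("diameters must satisfy 0 < d_min <= d_ref")
    return 100.0 * (1.0 - d_min / d_ref)

# an 85% phantom read as 71% illustrates the up-to-14-point
# underestimation reported for the higher grades
nominal = percent_stenosis(0.9, 6.0)     # 85% by construction
measured = percent_stenosis(1.74, 6.0)   # hypothetical modality reading
```

Because the grade is a ratio of two measured diameters, a small blur-induced overestimate of the minimal lumen diameter translates into several percentage points of stenosis underestimation, which is consistent with the larger errors seen at the tighter stenoses.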
DSA patterning options for FinFET formation at 7nm node
NASA Astrophysics Data System (ADS)
Liu, Chi-Chun C.; Franke, Elliott; Lie, Fee Li; Sieg, Stuart; Tsai, Hsinyu; Lai, Kafai; Truong, Hoa; Farrell, Richard; Somervell, Mark; Sanders, Daniel; Felix, Nelson; Guillorn, Michael; Burns, Sean; Hetzer, David; Ko, Akiteru; Arnold, John; Colburn, Matthew
2016-03-01
Several 27nm-pitch directed self-assembly (DSA) processes targeting fin formation for FinFET device fabrication are studied in a 300mm pilot line environment, including chemoepitaxy for conventional fin arrays, graphoepitaxy for a customization approach, and a hybrid approach for a self-aligned fin cut. The trade-offs between the DSA flows are discussed in terms of placement error, fin CD/profile uniformity, and design restrictions. Challenges in pattern transfer are observed and process optimizations are discussed. Finally, silicon fins with 100nm depth and on-target CD, obtained using different DSA options with either a lithographic or a self-aligned customization approach, are demonstrated.
NASA Astrophysics Data System (ADS)
le Roux, J. A.; Zank, G. P.; Webb, G. M.; Khabarova, O. V.
2016-08-01
Computational and observational evidence is accruing that heliospheric shocks, as emitters of vorticity, can produce downstream magnetic flux ropes and filaments. This led Zank et al. to investigate a new paradigm whereby energetic particle acceleration near shocks is a combination of diffusive shock acceleration (DSA) with downstream acceleration by many small-scale contracting and reconnecting (merging) flux ropes. Using a model where flux-rope acceleration involves a first-order Fermi mechanism due to the mean compression of numerous contracting flux ropes, Zank et al. provide theoretical support for observations that power-law spectra of energetic particles downstream of heliospheric shocks can be harder than predicted by DSA theory and that energetic particle intensities should peak behind shocks instead of at shocks as predicted by DSA theory. In this paper, a more extended formalism of kinetic transport theory developed by le Roux et al. is used to further explore this paradigm. We describe how second-order Fermi acceleration, related to the variance in the electromagnetic fields produced by downstream small-scale flux-rope dynamics, modifies the standard DSA model. The results show that (i) this approach can qualitatively reproduce observations of particle intensities peaking behind the shock, thus providing further support for the new paradigm, and (ii) stochastic acceleration by compressible flux ropes tends to be more efficient than acceleration by incompressible flux ropes behind shocks in modifying the DSA spectrum of energetic particles.
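The "harder than DSA" comparison in the two abstracts above is against the standard test-particle result, in which the power-law index follows directly from the shock compression ratio:

```python
def dsa_spectral_index(r):
    """Test-particle DSA power-law index q for f(p) ~ p^(-q),
    with q = 3r/(r - 1) and r the shock compression ratio."""
    if r <= 1.0:
        raise ValueError("compression ratio must exceed 1")
    return 3.0 * r / (r - 1.0)

# strong shock: r = 4 gives the canonical q = 4; the flux-rope
# mechanisms discussed above harden the spectrum, i.e. reduce q
q_strong = dsa_spectral_index(4.0)
```

Any downstream spectrum with an index below this baseline for the observed compression ratio is "flatter than conventional DSA" in the sense used by these papers.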
The Lozanov Method for Accelerating the Learning of Foreign Languages.
ERIC Educational Resources Information Center
Stanton, H. E.
1978-01-01
Discusses the Lozanov Method of teaching foreign languages developed by Lozanov in Bulgaria. This method (also known as Suggestopedia) uses various techniques such as physical relaxation exercises, mental concentration, classical music, and ego-enhancing suggestions. (CFM)
Computer control of large accelerators design concepts and methods
Beck, F.; Gormley, M.
1984-05-01
Unlike most of the specialities treated in this volume, control system design is still an art, not a science. These lectures are an attempt to produce a primer for prospective practitioners of this art. A large modern accelerator requires a comprehensive control system for commissioning, machine studies and day-to-day operation. Faced with the requirement to design a control system for such a machine, the control system architect has a bewildering array of technical devices and techniques at his disposal, and it is our aim in the following chapters to lead him through the characteristics of the problems he will have to face and the practical alternatives available for solving them. We emphasize good system architecture using commercially available hardware and software components, but in addition we discuss the actual control strategies which are to be implemented since it is at the point of deciding what facilities shall be available that the complexity of the control system and its cost are implicitly decided. 19 references.
Electrochemical cell design for the impedance studies of chlorine evolution at DSA® anodes.
Silva, J F; Dias, A C; Araújo, P; Brett, C M A; Mendes, A
2016-08-01
A new electrochemical cell design suitable for electrochemical impedance spectroscopy (EIS) studies of chlorine evolution on Dimensionally Stable Anodes (DSA®) has been developed. Despite being considered a powerful tool, EIS has rarely been used to study the kinetics of chlorine evolution at DSA anodes. Cell designs in the open literature are unsuitable for EIS analysis at the high DSA anode current densities used for chlorine evolution because they allow gas accumulation at the electrode surface. Using the new cell, impedance spectra of the DSA anode during chlorine evolution at high sodium chloride concentration (5 mol dm⁻³ NaCl) and high current densities (up to 140 mA cm⁻²) were recorded. Additionally, polarization curves and voltammograms were obtained showing little or no noise. The EIS and polarization curves evidence the role of the adsorption step in the chlorine evolution reaction, compatible with the Volmer-Heyrovsky and Volmer-Tafel mechanisms.
High chi block copolymer DSA to improve pattern quality for FinFET device fabrication
NASA Astrophysics Data System (ADS)
Tsai, HsinYu; Miyazoe, Hiroyuki; Vora, Ankit; Magbitang, Teddie; Arellano, Noel; Liu, Chi-Chun; Maher, Michael J.; Durand, William J.; Dawes, Simon J.; Bucchignano, James J.; Gignac, Lynne; Sanders, Daniel P.; Joseph, Eric A.; Colburn, Matthew E.; Willson, C. Grant; Ellison, Christopher J.; Guillorn, Michael A.
2016-03-01
Directed self-assembly (DSA) with block-copolymers (BCP) is a promising lithography extension technique to scale below 30nm pitch with 193i lithography. Continued scaling toward 20nm pitch or below will require material system improvements from PS-b-PMMA. Pattern quality for DSA features, such as line edge roughness (LER), line width roughness (LWR), size uniformity, and placement, is key to DSA manufacturability. In this work, we demonstrate finFET devices fabricated with DSA-patterned fins and compare several BCP systems for continued pitch scaling. Organic-organic high chi BCPs at 24nm and 21nm pitches show improved low to mid-frequency LER/LWR after pattern transfer.
Zheng, C; Feng, G; Yang, J; Liang, H; Tian, Z
1996-01-01
Since 1989, 15 cases of renal angiomyolipoma (AML) have been diagnosed by ultrasonography, CT scanning, and digital subtraction angiography (DSA) at our hospital. In the 8 patients with uneven hyperechoes on B-mode ultrasonography (B-US) (8/15) and the 7 with low fat density on CT scanning (7/12), an accurate diagnosis was established preoperatively. DSA revealed "berry-like" pseudoaneurysms in the arterial phase (14 cases), a defined lucent area in the nephrogram phase (10 cases), and "onion-peel" appearances during the venous phase (8 cases); a correct diagnosis was achieved in all patients. Eight cases were treated surgically and 7 by subselective embolization of the renal artery, with good results in all cases. The diagnostic value of B-US, CT scanning, and DSA, and the interventional treatment of AML, are discussed. DSA is believed to be a diagnostic technique with high specificity, and embolization therapy is simple and effective for AML. PMID:9389091
[2011 Shanghai customer satisfaction report of DSA/X-ray equipment's after-service].
Li, Bin; Qian, Jianguo; Cao, Shaoping; Zheng, Yunxin; Xu, Zitian; Wang, Lijun
2012-11-01
To improve manufacturers' after-sale service for medical equipment, the fifth Shanghai-zone customer satisfaction survey was launched at the end of 2011. DSA/X-ray equipment was set up as an independent category for the first time. The survey shows that the DSA/X-ray equipment CSI is higher than last year's, that the customer satisfaction scores for preventive maintenance and service contracts are lower than those of other service items, and that the CSI of local brands is lower than that of imported brands.
Accelerating molecular property calculations with nonorthonormal Krylov space methods
NASA Astrophysics Data System (ADS)
Furche, Filipp; Krull, Brandon T.; Nguyen, Brian D.; Kwon, Jake
2016-05-01
We formulate Krylov space methods for large eigenvalue problems and linear equation systems that take advantage of decreasing residual norms to reduce the cost of matrix-vector multiplication. The residuals are used as subspace basis without prior orthonormalization, which leads to generalized eigenvalue problems or linear equation systems on the Krylov space. These nonorthonormal Krylov space (nKs) algorithms are favorable for large matrices with irregular sparsity patterns whose elements are computed on the fly, because fewer operations are necessary as the residual norm decreases as compared to the conventional method, while errors in the desired eigenpairs and solution vectors remain small. We consider real symmetric and symplectic eigenvalue problems as well as linear equation systems and Sylvester equations as they appear in configuration interaction and response theory. The nKs method can be implemented in existing electronic structure codes with minor modifications and yields speed-ups of 1.2-1.8 in typical time-dependent Hartree-Fock and density functional applications without accuracy loss. The algorithm can compute entire linear subspaces simultaneously which benefits electronic spectra and force constant calculations requiring many eigenpairs or solution vectors. The nKs approach is related to difference density methods in electronic ground state calculations and particularly efficient for integral direct computations of exchange-type contractions. By combination with resolution-of-the-identity methods for Coulomb contractions, three- to fivefold speed-ups of hybrid time-dependent density functional excited state and response calculations are achieved.
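The core idea — use the raw residuals as subspace basis vectors without orthonormalization and solve the resulting non-orthonormal projected system — can be sketched for a plain linear system. This is a simplified stand-in, not the authors' electronic-structure implementation:

```python
import numpy as np

def residual_subspace_solve(A, b, tol=1e-10, max_dim=50):
    """Solve A x = b by Galerkin projection onto the span of
    successive residuals, kept un-orthonormalized (nKs-flavored
    sketch). The cached A @ v products are the expensive step that
    the decreasing residual norms make cheaper in the real method."""
    V, AV = [], []
    x = np.zeros_like(b)
    r = b.copy()
    for _ in range(max_dim):
        V.append(r)
        AV.append(A @ r)
        Vm = np.column_stack(V)
        AVm = np.column_stack(AV)
        # projected system (V^T A V) y = V^T b may be ill-conditioned,
        # so use least squares instead of a direct solve
        y = np.linalg.lstsq(Vm.T @ AVm, Vm.T @ b, rcond=None)[0]
        x = Vm @ y
        r = b - AVm @ y
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
    return x
```

Skipping the orthonormalization trades a better-conditioned projected problem for fewer operations per iteration, which is the trade-off the abstract describes for matrices whose elements are generated on the fly.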
A comparison of acceleration methods for solving the neutron transport k-eigenvalue problem
Willert, Jeffrey; Park, H.; Knoll, D.A.
2014-10-01
Over the past several years a number of papers have been written describing modern techniques for numerically computing the dominant eigenvalue of the neutron transport criticality problem. These methods fall into two distinct categories. The first category of methods rewrite the multi-group k-eigenvalue problem as a nonlinear system of equations and solve the resulting system using either a Jacobian-Free Newton–Krylov (JFNK) method or Nonlinear Krylov Acceleration (NKA), a variant of Anderson Acceleration. These methods are generally successful in significantly reducing the number of transport sweeps required to compute the dominant eigenvalue. The second category of methods utilize Moment-Based Acceleration (or High-Order/Low-Order (HOLO) Acceleration). These methods solve a sequence of modified diffusion eigenvalue problems whose solutions converge to the solution of the original transport eigenvalue problem. This second class of methods is, in our experience, always superior to the first, as most of the computational work is eliminated by the acceleration from the LO diffusion system. In this paper, we review each of these methods. Our computational results support our claim that the choice of which nonlinear solver to use, JFNK or NKA, should be secondary. The primary computational savings result from the implementation of a HOLO algorithm. We display computational results for a series of challenging multi-dimensional test problems.
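For orientation, the baseline that both method classes accelerate is plain power iteration on the eigenvalue problem, where each step costs one transport sweep. In the sketch below a matrix-vector product stands in for the sweep:

```python
import numpy as np

def power_iteration_k(A, tol=1e-10, max_it=1000):
    """Unaccelerated power iteration for the dominant eigenvalue
    (the 'k' of the criticality problem). Each A @ phi stands in for
    a transport sweep; the JFNK/NKA and HOLO methods in the paper
    exist precisely to reduce this sweep count."""
    phi = np.ones(A.shape[0])
    k = 1.0
    for it in range(1, max_it + 1):
        psi = A @ phi                      # one "transport sweep"
        k_new = np.linalg.norm(psi) / np.linalg.norm(phi)
        phi = psi / np.linalg.norm(psi)
        if abs(k_new - k) < tol * abs(k_new):
            return k_new, it
        k = k_new
    return k, max_it
```

The iteration count grows as the dominance ratio (second over first eigenvalue) approaches one, which is the regime where the moment-based low-order diffusion solve pays off most.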
Applying ILT mask synthesis for co-optimizing design rules and DSA process characteristics
NASA Astrophysics Data System (ADS)
Dam, Thuc; Stanton, William
2014-03-01
During early stage development of a DSA process, there are many unknown interactions between design, DSA process, RET, and mask synthesis. The computational resolution of these unknowns can guide development towards a common process space whereby manufacturing success can be evaluated. This paper will demonstrate the use of existing Inverse Lithography Technology (ILT) to co-optimize the multitude of parameters. ILT mask synthesis will be applied to a varied hole design space in combination with a range of DSA model parameters under different illumination and RET conditions. The design will range from 40 nm pitch doublet to random DSA designs with larger pitches, while various effective DSA characteristics of shrink bias and corner smoothing will be assumed for the DSA model during optimization. The co-optimization of these design parameters and process characteristics under different SMO solutions and RET conditions (dark/bright field tones and binary/PSM mask types) will also help to provide a complete process mapping of possible manufacturing options. The lithographic performances for masks within the optimized parameter space will be generated to show a common process space with the highest possibility for success.
Constraint methods that accelerate free-energy simulations of biomolecules.
Perez, Alberto; MacCallum, Justin L; Coutsias, Evangelos A; Dill, Ken A
2015-12-28
Atomistic molecular dynamics simulations of biomolecules are critical for generating narratives about biological mechanisms. The power of atomistic simulations is that these are physics-based methods that satisfy Boltzmann's law, so they can be used to compute populations, dynamics, and mechanisms. But physical simulations are computationally intensive and do not scale well to the sizes of many important biomolecules. One way to speed up physical simulations is by coarse-graining the potential function. Another way is to harness structural knowledge, often by imposing spring-like restraints. But harnessing external knowledge in physical simulations is problematic because knowledge, data, or hunches have errors, noise, and combinatoric uncertainties. Here, we review recent principled methods for imposing restraints to speed up physics-based molecular simulations that promise to scale to larger biomolecules and motions. PMID:26723628
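The "spring-like restraints" mentioned above are typically implemented as extra potential-energy terms. A minimal flat-bottom harmonic restraint is sketched below; the functional form is a common convention assumed here, not the specific restraint functional of this paper:

```python
def flat_bottom_restraint(r, r0, width, k):
    """Flat-bottom harmonic restraint energy: zero while the
    coordinate r stays within `width` of the target r0, quadratic
    with force constant k outside. The flat bottom is one way to
    impose noisy structural knowledge without over-constraining."""
    excess = abs(r - r0) - width
    return k * excess * excess if excess > 0.0 else 0.0
```

Because the penalty vanishes inside the tolerance window, data with errors or combinatoric uncertainty biases the simulation only when the structure strays well outside what the knowledge supports.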
Time Acceleration Methods for Advection on the Cubed Sphere
Archibald, Richard K; Evans, Katherine J; White III, James B; Drake, John B
2009-01-01
Climate simulation will not grow to the ultrascale without new algorithms to overcome the scalability barriers blocking existing implementations. Until recently, climate simulations concentrated on the question of whether the climate is changing. The emphasis is now shifting to impact assessments, mitigation and adaptation strategies, and regional details. Such studies will require significant increases in spatial resolution and model complexity while maintaining adequate throughput. The barrier to progress is the resulting decrease in time step without increasing single-thread performance. In this paper we demonstrate how to overcome this time barrier for the first standard test defined for the shallow-water equations on a sphere. This paper explains how combining a multiwavelet discontinuous Galerkin method with exact linear part time-evolution schemes can overcome the time barrier for advection equations on a sphere. The discontinuous Galerkin method is a high-order method that is conservative, flexible, and scalable. The addition of multiwavelets to discontinuous Galerkin provides a hierarchical scale structure that can be exploited to improve computational efficiency in both the spatial and temporal dimensions. Exact linear part time-evolution schemes are explicit schemes that remain stable for implicit-size time steps.
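An "exact linear part" scheme sidesteps the advective CFL limit by applying the matrix exponential of the linear operator directly, so the step is stable for any dt. The minimal periodic-advection sketch below uses a dense eigendecomposition on a small centered-difference operator as a stand-in for the paper's multiwavelet DG operator:

```python
import numpy as np

def expm(M):
    """Dense matrix exponential via eigendecomposition -- fine for
    the small demo operator below; production codes use better
    algorithms."""
    w, V = np.linalg.eig(M)
    return (V * np.exp(w)) @ np.linalg.inv(V)

def exact_linear_step(L, dt):
    """Exact-linear-part propagator u_{n+1} = e^{dt L} u_n, stable
    for any dt when L has a purely imaginary spectrum (advection)."""
    return expm(dt * L).real

# periodic centered-difference operator for u_t = -c u_x
n, c, h = 32, 1.0, 1.0 / 32
L = np.zeros((n, n))
for i in range(n):
    L[i, (i + 1) % n] -= c / (2 * h)
    L[i, (i - 1) % n] += c / (2 * h)

P = exact_linear_step(L, dt=0.5)   # far beyond any explicit CFL limit
u0 = np.sin(2 * np.pi * np.arange(n) / n)
u1 = P @ u0
```

Since the centered-difference operator is skew-symmetric, the propagator is orthogonal and the discrete energy is conserved exactly even at this oversized step, which is the property that lets implicit-size steps remain stable in an explicit scheme.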
GPU acceleration of particle-in-cell methods
NASA Astrophysics Data System (ADS)
Cowan, Benjamin; Cary, John; Meiser, Dominic
2015-11-01
Graphics processing units (GPUs) have become key components in many supercomputing systems, as they can provide more computations relative to their cost and power consumption than conventional processors. However, to take full advantage of this capability, they require a strict programming model which involves single-instruction multiple-data execution as well as significant constraints on memory accesses. To bring the full power of GPUs to bear on plasma physics problems, we must adapt the computational methods to this new programming model. We have developed a GPU implementation of the particle-in-cell (PIC) method, one of the mainstays of plasma physics simulation. This framework is highly general and enables advanced PIC features such as high order particles and absorbing boundary conditions. The main elements of the PIC loop, including field interpolation and particle deposition, are designed to optimize memory access. We describe the performance of these algorithms and discuss some of the methods used. Work supported by DARPA contract W31P4Q-15-C-0061 (SBIR).
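Particle deposition, one of the PIC steps named above, is where the GPU memory-access constraints bite: each particle scatters into neighboring grid cells, so concurrent threads contend for the same memory. The serial 1D cloud-in-cell sketch below shows only the arithmetic, with the GPU concern noted in comments (this is a generic PIC illustration, not the cited framework's kernel):

```python
import numpy as np

def deposit_charge(xs, q, n_cells, h):
    """1D cloud-in-cell charge deposition on a periodic grid: each
    particle of charge q writes to its two neighboring grid points
    with linear weights. On a GPU these scattered writes are the
    contended step (handled with atomics or per-thread copies);
    this serial sketch shows only the weighting math."""
    rho = np.zeros(n_cells)
    for x in xs:
        s = x / h
        i = int(np.floor(s))
        w = s - i                      # fractional position in cell
        rho[i % n_cells] += q * (1.0 - w) / h
        rho[(i + 1) % n_cells] += q * w / h
    return rho
```

Linear weighting conserves total charge exactly, so summing the deposited density times the cell width recovers the particle charge, a useful invariant when verifying a reordered GPU kernel against a serial reference.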
Clean Slate Environmental Remediation DSA for 10 CFR 830 Compliance
James L. Traynor, Stephen L. Nicolosi, Michael L. Space, Louis F. Restrepo
2006-08-01
Clean Slate Sites II and III are scheduled for environmental remediation (ER) to remove elevated levels of radionuclides in soil. These sites are contaminated with the legacy remains of non-nuclear-yield nuclear weapons experiments at the Nevada Test Site that involved high-explosive, fissile, and related materials. The sites may also hold unexploded ordnance (UXO) from military training activities in the area over the intervening years. Regulation 10 CFR 830 (Ref. 1) identifies DOE-STD-1120-98 (Ref. 2) and 29 CFR 1910.120 (Ref. 3) as the safe harbor methodologies for performing these remediation operations. Of these methodologies, DOE-STD-1120-98 has been superseded by DOE-STD-1120-2005 (Ref. 4). The project adopted DOE-STD-1120-2005, which includes an approach for ER projects, in combination with 29 CFR 1910.120, as the basis documents for preparing the documented safety analysis (DSA). To implement the safe harbor methodologies rigorously, we applied DOE-STD-1027-92 (Ref. 5) and DOE-STD-3009-94 (Ref. 6), as needed, to develop a robust hazard classification and hazards analysis that addresses non-standard hazards such as radionuclides and UXO. The hazard analyses provided the basis for identifying Technical Safety Requirements (TSR) level controls. The DOE-STD-1186-2004 (Ref. 7) methodology showed that some controls warranted elevation to Specific Administrative Control (SAC) status. In addition to the Evaluation Guideline (EG) of DOE-STD-3009-94, we also applied the DOE G 420.1 (Ref. 8) annual radiological dose siting criterion to define a controlled area around the operation to protect the maximally exposed offsite individual (MOI).
Accelerated signal encoding and reconstruction using pixon method
Puetter, Richard; Yahil, Amos; Pina, Robert
2005-05-17
The method identifies a Pixon element, which is a fundamental and indivisible unit of information, and a Pixon basis, which is the set of possible functions from which the Pixon elements are selected. The actual Pixon elements selected from this basis during the reconstruction process represent the smallest number of such units required to fit the data and the minimum number of parameters necessary to specify the image. The Pixon kernels can have arbitrary properties (e.g., shape, size, and/or position) as needed to best fit the data.
Apparatus and method for phosphate-accelerated bioremediation
Looney, B.B.; Pfiffner, S.M.; Phelps, T.J.; Lombard, K.H.; Hazen, T.C.; Borthen, J.W.
1998-05-19
An apparatus and method are provided for supplying a vapor-phase nutrient to contaminated soil for in situ bioremediation. The apparatus includes a housing adapted for containing a quantity of the liquid nutrient, a conduit in communication with the interior of the housing, means for causing a gas to flow through the conduit, and means for contacting the gas with the liquid so that a portion evaporates and mixes with the gas. The mixture of gas and nutrient vapor is delivered to the contaminated site via a system of injection and extraction wells configured to the site and provides for the use of a passive delivery system. The mixture has a partial pressure of vaporized nutrient that is no greater than the vapor pressure of the liquid. If desired, the nutrient and/or the gas may be heated to increase the vapor pressure and the nutrient concentration of the mixture. Preferably, the nutrient is a volatile, substantially nontoxic and nonflammable organic phosphate that is a liquid at environmental temperatures, such as triethyl phosphate or tributyl phosphate. 8 figs.
Apparatus and method for phosphate-accelerated bioremediation
Looney, B.B.; Phelps, T.J.; Hazen, T.C.; Pfiffner, S.M.; Lombard, K.H.; Borthen, J.W.
1994-01-01
An apparatus and method for supplying a vapor-phase nutrient to contaminated soil for in situ bioremediation. The apparatus includes a housing adapted for containing a quantity of the liquid nutrient, a conduit in fluid communication with the interior of the housing, means for causing a gas to flow through the conduit, and means for contacting the gas with the liquid so that a portion thereof evaporates and mixes with the gas. The mixture of gas and nutrient vapor is delivered to the contaminated site via a system of injection and extraction wells configured to the site. The mixture has a partial pressure of vaporized nutrient that is no greater than the vapor pressure of the liquid. If desired, the nutrient and/or the gas may be heated to increase the vapor pressure and the nutrient concentration of the mixture. Preferably, the nutrient is a volatile, substantially nontoxic and nonflammable organic phosphate that is a liquid at environmental temperatures, such as triethyl phosphate or tributyl phosphate.
Apparatus and method for phosphate-accelerated bioremediation
Looney, Brian B.; Pfiffner, Susan M.; Phelps, Tommy J.; Lombard, Kenneth H.; Hazen, Terry C.; Borthen, James W.
1998-01-01
An apparatus and method for supplying a vapor-phase nutrient to contaminated soil for in situ bioremediation. The apparatus includes a housing adapted for containing a quantity of the liquid nutrient, a conduit in communication with the interior of the housing, means for causing a gas to flow through the conduit, and means for contacting the gas with the liquid so that a portion thereof evaporates and mixes with the gas. The mixture of gas and nutrient vapor is delivered to the contaminated site via a system of injection and extraction wells configured to the site and provides for the use of a passive delivery system. The mixture has a partial pressure of vaporized nutrient that is no greater than the vapor pressure of the liquid. If desired, the nutrient and/or the gas may be heated to increase the vapor pressure and the nutrient concentration of the mixture. Preferably, the nutrient is a volatile, substantially nontoxic and nonflammable organic phosphate that is a liquid at environmental temperatures, such as triethyl phosphate or tributyl phosphate.
Multigrid lattice Boltzmann method for accelerated solution of elliptic equations
NASA Astrophysics Data System (ADS)
Patil, Dhiraj V.; Premnath, Kannan N.; Banerjee, Sanjoy
2014-05-01
A new solver for second-order elliptic partial differential equations (PDEs) based on the lattice Boltzmann method (LBM) and the multigrid (MG) technique is presented. Several benchmark elliptic equations are solved numerically with the inclusion of multiple grid-levels in two-dimensional domains at an optimal computational cost within the LB framework. The results are compared with the corresponding analytical solutions and numerical solutions obtained using Stone's strongly implicit procedure. The classical PDEs considered in this article include the Laplace and Poisson equations with Dirichlet boundary conditions, with the latter involving both constant and variable coefficients. A detailed analysis of solution accuracy, convergence and computational efficiency of the proposed solver is given. It is observed that the use of a high-order stencil (for smoothing) improves convergence and accuracy for an equivalent number of smoothing sweeps. The effect of the type of scheduling cycle (V- or W-cycle) on the performance of the MG-LBM is analyzed. Next, a parallel algorithm for the MG-LBM solver is presented and then its parallel performance on a multi-core cluster is analyzed. Lastly, a practical example is provided wherein the proposed elliptic PDE solver is used to compute the electro-static potential encountered in an electro-chemical cell, which demonstrates the effectiveness of this new solver in complex coupled systems. Several orders of magnitude gains in convergence and parallel scaling for the canonical problems, and a factor of 5 reduction for the multiphysics problem, are achieved using the MG-LBM.
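The multigrid idea the abstract builds on — smooth on the fine grid, solve the smoothed-out error on a coarser grid, correct, and recurse — can be shown with a minimal geometric V-cycle for the 1-D Poisson equation. This is a generic textbook sketch, not the lattice Boltzmann formulation of the paper:

```python
import numpy as np

def residual(u, f, h):
    # r = f - A u for the 1-D Laplacian A u = (2u_i - u_{i-1} - u_{i+1}) / h^2
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2.0*u[1:-1] - u[:-2] - u[2:]) / h**2
    return r

def jacobi(u, f, h, sweeps, omega=2.0/3.0):
    # damped Jacobi smoother; Dirichlet values u[0], u[-1] are held fixed
    for _ in range(sweeps):
        v = u.copy()
        v[1:-1] = (1 - omega)*u[1:-1] + omega*0.5*(u[:-2] + u[2:] + h**2*f[1:-1])
        u = v
    return u

def v_cycle(u, f, h, sweeps=3):
    n = len(u) - 1                      # number of intervals, a power of 2
    if n <= 2:                          # coarsest grid: solve the one unknown
        u = u.copy()
        u[1] = 0.5*(u[0] + u[2] + h**2*f[1])
        return u
    u = jacobi(u, f, h, sweeps)         # pre-smoothing
    r = residual(u, f, h)
    rc = np.zeros(n//2 + 1)             # full-weighting restriction
    rc[1:-1] = 0.25*r[1:-2:2] + 0.5*r[2:-1:2] + 0.25*r[3::2]
    ec = v_cycle(np.zeros(n//2 + 1), rc, 2*h, sweeps)
    e = np.zeros(n + 1)                 # linear-interpolation prolongation
    e[::2] = ec
    e[1::2] = 0.5*(ec[:-1] + ec[1:])
    return jacobi(u + e, f, h, sweeps)  # correction + post-smoothing
```

Each V-cycle typically reduces the residual by roughly an order of magnitude independent of grid size, which is the source of the near-optimal cost the abstract refers to.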
Method of correcting eddy current magnetic fields in particle accelerator vacuum chambers
Danby, G.T.; Jackson, J.W.
1990-03-19
A method for correcting magnetic field aberrations produced by eddy currents induced in a particle accelerator vacuum chamber housing is provided wherein correction windings are attached to selected positions on the housing and the windings are energized by transformer action from secondary coils, which coils are inductively coupled to the poles of electro-magnets that are powered to confine the charged particle beam within a desired orbit as the charged particles are accelerated through the vacuum chamber by a particle-driving rf field. The power inductively coupled to the secondary coils varies as a function of variations in the power supplied by the particle-accelerating rf field to a beam of particles accelerated through the vacuum chamber, so the current in the energized correction coils is effective to cancel eddy current flux fields that would otherwise be induced in the vacuum chamber by power variations (dB/dt) in the particle beam.
Method of correcting eddy current magnetic fields in particle accelerator vacuum chambers
Danby, Gordon T.; Jackson, John W.
1991-01-01
A method for correcting magnetic field aberrations produced by eddy currents induced in a particle accelerator vacuum chamber housing is provided wherein correction windings are attached to selected positions on the housing and the windings are energized by transformer action from secondary coils, which coils are inductively coupled to the poles of electro-magnets that are powered to confine the charged particle beam within a desired orbit as the charged particles are accelerated through the vacuum chamber by a particle-driving rf field. The power inductively coupled to the secondary coils varies as a function of variations in the power supplied by the particle-accelerating rf field to a beam of particles accelerated through the vacuum chamber, so the current in the energized correction coils is effective to cancel eddy current flux fields that would otherwise be induced in the vacuum chamber by power variations in the particle beam.
NASA Astrophysics Data System (ADS)
Olivan Bescos, Javier; Slob, Marian; Sluzewski, Menno; van Rooij, Willem J.; Slump, Cornelis H.
2003-05-01
A cerebral aneurysm is a persistent localized dilatation of the wall of a cerebral vessel. One of the techniques applied to treat cerebral aneurysms is the Guglielmi detachable coil (GDC) embolization. The goal of this technique is to embolize the aneurysm with a mesh of platinum coils to reduce the risk of aneurysm rupture. However, due to the blood pressure it is possible that the platinum wire is deformed. In this case, re-embolization of the aneurysm is necessary. The aim of this project is to develop a computer program to estimate the volume of cerebral aneurysms from archived laser hard copies of biplane digital subtraction angiography (DSA) images. Our goal is to determine the influence of the packing percentage, i.e., the ratio between the volume of the aneurysm and the volume of the coil mesh, on the stability of the coil mesh in time. The method we apply to estimate the volume of the cerebral aneurysms is based on the generation of a 3-D geometrical model of the aneurysm from two biplane DSA images. This 3-D model can be seen as a stack of 2-D ellipses. The volume of the aneurysm results from numerical integration of this stack. The program was validated using balloons filled with contrast agent. The availability of 3-D data for some of the aneurysms enabled a comparison of the results of this method with techniques based on 3-D data.
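The volume estimate described above — a stack of elliptical slices whose two axes come from the two biplane projections — reduces to a one-line quadrature once per-slice diameters have been measured. A minimal sketch, assuming the diameters are already extracted from the two images (function and parameter names are illustrative):

```python
import numpy as np

def aneurysm_volume(d_ap, d_lat, dz):
    """Volume from per-slice diameters measured on two orthogonal projections.

    Each slice is modelled as an ellipse with semi-axes d_ap/2 and d_lat/2,
    so its area is pi * (d_ap/2) * (d_lat/2); the slice areas are summed
    over the stack (midpoint rule) with slice spacing dz.
    """
    d_ap = np.asarray(d_ap, dtype=float)
    d_lat = np.asarray(d_lat, dtype=float)
    areas = np.pi * (d_ap / 2.0) * (d_lat / 2.0)
    return float(areas.sum() * dz)
```

For a sphere (equal diameters in both views) this recovers 4πr³/3 as the slice spacing shrinks, which is a convenient sanity check analogous to the balloon validation mentioned in the abstract.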
Detecting chaos in particle accelerators through the frequency map analysis method.
Papaphilippou, Yannis
2014-06-01
The motion of beams in particle accelerators is dominated by a plethora of non-linear effects, which can enhance chaotic motion and limit their performance. The application of advanced non-linear dynamics methods for detecting and correcting these effects, and thereby increasing the region of beam stability, plays an essential role both during the accelerator design phase and during operation. After describing the nature of non-linear effects and their impact on performance parameters of different particle accelerator categories, the theory of non-linear particle motion is outlined. The recent developments in the methods employed for the analysis of chaotic beam motion are detailed. In particular, the ability of the frequency map analysis method to detect chaotic motion and guide the correction of non-linear effects is demonstrated both in particle tracking simulations and in experimental data.
Predictive Simulation and Design of Materials by Quasicontinuum and Accelerated Dynamics Methods
Luskin, Mitchell; James, Richard; Tadmor, Ellad
2014-03-30
This project developed the hyper-QC multiscale method to make possible the computation of previously inaccessible space and time scales for materials with thermally activated defects. The hyper-QC method combines the spatial coarse-graining feature of a finite temperature extension of the quasicontinuum (QC) method (aka “hot-QC”) with the accelerated dynamics feature of hyperdynamics. The hyper-QC method was developed, optimized, and tested from a rigorous mathematical foundation.
Kinetic Simulations of Particle Acceleration at Shocks
Caprioli, Damiano; Guo, Fan
2015-07-16
Collisionless shocks are mediated by collective electromagnetic interactions and are sources of non-thermal particles and emission. The full particle-in-cell approach and a hybrid approach are sketched, and simulations of collisionless shocks are shown using a multicolor presentation. Results for SN 1006, a case involving ion acceleration and B field amplification where the shock is parallel, are shown. Electron acceleration takes place in planetary bow shocks and galaxy clusters. It is concluded that acceleration at shocks can be efficient: >15%; CRs amplify the B field via streaming instability; ion DSA is efficient at parallel, strong shocks; ions are injected via reflection and shock drift acceleration; and electron DSA is efficient at oblique shocks.
Sequential electrochemical treatment of dairy wastewater using aluminum and DSA-type anodes.
Borbón, Brenda; Oropeza-Guzman, Mercedes Teresita; Brillas, Enric; Sirés, Ignasi
2014-01-01
Dairy wastewater is characterized by a high content of hardly biodegradable dissolved, colloidal, and suspended organic matter. This work first investigates the performance of two individual electrochemical treatments, namely electrocoagulation (EC) and electro-oxidation (EO), in order to finally assess the mineralization ability of a sequential EC/EO process. EC with an Al anode was employed as a primary pretreatment for the conditioning of 800 mL of wastewater. A complete reduction of turbidity, as well as 90 and 81% removal of chemical oxygen demand (COD) and total organic carbon (TOC), respectively, was achieved after 120 min of EC at 9.09 mA cm−2. For EO, two kinds of dimensionally stable anode (DSA) electrodes (Ti/IrO₂-Ta₂O₅ and Ti/IrO₂-SnO₂-Sb₂O₅) were prepared by the Pechini method, obtaining homogeneous coatings with uniform composition and high roughness. The •OH radicals formed at the DSA surface from H₂O oxidation were not detected by electron spin resonance. However, their indirect determination by means of H₂O₂ measurements revealed that Ti/IrO₂-SnO₂-Sb₂O₅ is able to produce partially physisorbed radicals. Since the characterization of the wastewater revealed the presence of indole derivatives, preliminary bulk electrolyses were done in ultrapure water containing 1 mM indole in sulfate and/or chloride media. The performance of EO with the Ti/IrO₂-Ta₂O₅ anode was evaluated from the TOC removal and the UV/Vis absorbance decay. The mineralization was very poor in 0.05 M Na₂SO₄, whereas it increased considerably at a greater Cl− content, meaning that the oxidation mediated by electrogenerated species such as Cl₂, HClO, and/or ClO− competes with and even predominates over the •OH-mediated oxidation. The EO treatment of EC-pretreated dairy wastewater achieved a global TOC removal of 98%, decreasing from 1,062 to <30 mg L−1. PMID:24671400
Multifunctional hardmask neutral layer for directed self-assembly (DSA) patterning
NASA Astrophysics Data System (ADS)
Guerrero, Douglas J.; Hockey, Mary Ann; Wang, Yubao; Calderas, Eric
2013-03-01
Micro-phase separation for directed self-assembly (DSA) can be executed successfully only when the substrate surface on which the block co-polymer (BCP) is coated has properties that are ideal for attraction to each polymer type. The neutral underlayer (NUL) is an essential and critical component in DSA feasibility. Properties conducive for BCP patterning are primarily dependent on "brush" or "crosslinked" random co-polymer underlayers. Most DSA flows also require a lithography step (reflection control) and pattern transfer schemes at the end of the patterning process. A novel multifunctional hardmask neutral layer (HM NL) was developed to provide reflection control, surface energy matching, and pattern transfer capabilities in a grapho-epitaxy DSA process flow. It was found that the ideal surface energy for the HM NL is in the range of 38-45 dyn/cm. The robustness of the HM NL against exposure to process solvents and developers was identified. Process characteristics of the BCP (thickness, bake time and temperature) on the HM NL were defined. Using the HM NL instead of three distinct layers - bottom anti-reflective coating (BARC) and neutral and hardmask layers - in DSA line-space pitch tripling and contact hole shrinking processes was demonstrated. Finally, the capability of the HM NL to transfer a pattern into a 100-nm spin-on carbon (SOC) layer was shown.
Three dimensional finite element methods: Their role in the design of DC accelerator systems
Podaru, Nicolae C.; Gottdang, A.; Mous, D. J. W.
2013-04-19
High Voltage Engineering has designed, built and tested a 2 MV dual irradiation system that will be applied for radiation damage studies and ion beam material modification. The system consists of two independent accelerators which support simultaneous proton and electron irradiation (energy range 100 keV - 2 MeV) of target sizes of up to 300 × 300 mm². Three-dimensional finite element methods were used in the design of various parts of the system. The electrostatic solver was used to quantify essential parameters of the solid-state power supply generating the DC high voltage. The magnetostatic solver and ray tracing were used to optimize the electron/ion beam transport. Close agreement between design and measurements of the accelerator characteristics as well as beam performance indicates the usefulness of three-dimensional finite element methods during accelerator system design.
An accelerated iterative method for the dynamics of constrained multibody systems
NASA Astrophysics Data System (ADS)
Lee, Kisu
1993-01-01
An accelerated iterative method is suggested for the dynamic analysis of multibody systems consisting of interconnected rigid bodies. The Lagrange multipliers associated with the kinematic constraints are iteratively computed by the monotone reduction of the constraint error vector, and the resulting equations of motion are easily time-integrated by a well-established ODE technique. The velocity and acceleration constraints, as well as the position constraints, are made to be satisfied at the joints at each time step. The exact solution is obtained without time-demanding procedures such as selection of the independent coordinates, decomposition of the constraint Jacobian matrix, and Newton-Raphson iterations. An acceleration technique is employed for faster convergence of the iterative scheme, and a convergence analysis of the proposed iterative method is presented. Numerical solutions for the verification problems are presented to demonstrate the efficiency and accuracy of the suggested technique.
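The core step — computing the Lagrange multipliers iteratively, with monotone reduction of the constraint error and no factorization of the constraint Jacobian — can be illustrated with a steepest-descent iteration on the multiplier system (J M⁻¹ Jᵀ) λ = −(J M⁻¹ Q + γ). This is a generic sketch under that formulation, not the author's exact acceleration scheme:

```python
import numpy as np

def constraint_forces(M_inv, J, Q, gamma, tol=1e-10, max_iter=500):
    """Lagrange multipliers for the acceleration-level constraint equation

        (J M^-1 J^T) lam = -(J M^-1 Q + gamma),

    computed by a steepest-descent iteration: the residual of this system
    (the constraint error) is reduced monotonically, with no matrix
    factorization. M_inv is the diagonal of the inverse mass matrix.
    """
    A = J @ (M_inv[:, None] * J.T)
    b = -(J @ (M_inv * Q) + gamma)
    lam = np.zeros_like(b)
    r = b - A @ lam
    for _ in range(max_iter):
        if np.linalg.norm(r) < tol:
            break
        Ar = A @ r
        alpha = (r @ r) / (r @ Ar)   # optimal step for the current residual
        lam = lam + alpha * r
        r = r - alpha * Ar
    return lam
```

The constrained accelerations then follow as q̈ = M⁻¹(Q + Jᵀλ); for a hanging pendulum with the constraint c(q) = q·q − L², the computed multiplier exactly cancels gravity.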
Electrochemical cell design for the impedance studies of chlorine evolution at DSA anodes
NASA Astrophysics Data System (ADS)
Silva, J. F.; Dias, A. C.; Araújo, P.; Brett, C. M. A.; Mendes, A.
2016-08-01
A new electrochemical cell design suitable for the electrochemical impedance spectroscopy (EIS) studies of chlorine evolution on Dimensionally Stable Anodes (DSA®) has been developed. Despite being considered a powerful tool, EIS has rarely been used to study the kinetics of chlorine evolution at DSA anodes. Cell designs in the open literature are unsuitable for the EIS analysis at high DSA anode current densities for chlorine evolution because they allow gas accumulation at the electrode surface. Using the new cell, the impedance spectra of the DSA anode during chlorine evolution at high sodium chloride concentration (5 mol dm−3 NaCl) and high current densities (up to 140 mA cm−2) were recorded. Additionally, polarization curves and voltammograms were obtained showing little or no noise. EIS and polarization curves evidence the role of the adsorption step in the chlorine evolution reaction, compatible with the Volmer-Heyrovsky and Volmer-Tafel mechanisms.
DSA volumetric 3D reconstructions of intracranial aneurysms: A pictorial essay
Cieściński, Jakub; Serafin, Zbigniew; Strześniewski, Piotr; Lasek, Władysław; Beuth, Wojciech
2012-01-01
The gold standard of cerebral vessel imaging remains digital subtraction angiography (DSA) performed in three projections. However, in specific clinical cases, many additional projections are required, or a complete visualization of a lesion may even be impossible with 2D angiography. Three-dimensional (3D) reconstructions of rotational angiography were reported to improve the performance of DSA significantly. In this pictorial essay, specific applications of this technique are presented in the management of intracranial aneurysms, including preoperative aneurysm evaluation, intraoperative imaging, and follow-up. Volumetric reconstructions of 3D DSA are a valuable tool for cerebral vessel imaging. They play a vital role in the assessment of intracranial aneurysms, especially in the evaluation of the aneurysm neck and of aneurysm recanalization. PMID:22844309
Pattern fidelity improvement of chemo-epitaxy DSA process for high-volume manufacturing
NASA Astrophysics Data System (ADS)
Muramatsu, Makoto; Nishi, Takanori; You, Gen; Saito, Yusuke; Ido, Yasuyuki; Ito, Kiyohito; Tobana, Toshikatsu; Hosoya, Masanori; Chen, Weichien; Nakamura, Satoru; Somervell, Mark; Kitano, Takahiro
2016-03-01
Directed self-assembly (DSA) is one of the candidates for next-generation lithography. Over the past few years, cylindrical and lamellar structures dictated by the block co-polymer (BCP) composition have been investigated for use in patterning contact holes or lines, and Tokyo Electron Limited (TEL) has presented the evaluation results and the advantages of each [1-5]. In this report, we present the latest results regarding the defect reduction work on a model line/space system. In particular, it is suggested that the defectivity of the neutral layer has a large impact on the defectivity of the DSA patterns. LER/LWR reduction results are also presented, with a focus on the improvements made during the etch transfer of the DSA patterns into the underlayer.
New estimation method of neutron skyshine for a high-energy particle accelerator
NASA Astrophysics Data System (ADS)
Oh, Joo-Hee; Jung, Nam-Suk; Lee, Hee-Seock; Ko, Seung-Kook
2016-09-01
Skyshine is the dominant component of the prompt radiation at off-site locations. Several experimental studies have been done to estimate the neutron skyshine at a few accelerator facilities. In this work, neutron transport from the source to off-site locations was simulated using the Monte Carlo codes FLUKA and PHITS. The transport paths were classified as skyshine, direct (transport), groundshine and multiple-shine to understand the contribution of each path and to develop a general evaluation method. The effect of each path was estimated in terms of the dose at far locations. The neutron dose was calculated using the neutron energy spectra obtained from each detector placed up to a maximum of 1 km from the accelerator. The highest altitude of the sky region in this simulation was set as 2 km from the floor of the accelerator facility. The initial model of this study was the 10 GeV electron accelerator, PAL-XFEL. Different compositions and densities of air, soil and ordinary concrete were applied in this calculation, and their dependences were reviewed. The estimation method used in this study was compared with the well-known methods suggested by Rindi, Stevenson and Stapleton, and also with the simple code, SHINE3. The results obtained using this method agreed well with those using Rindi's formula.
NASA Astrophysics Data System (ADS)
Velikina, J. V.; Samsonov, A. A.
2016-02-01
Advanced MRI techniques often require sampling in additional (non-spatial) dimensions, such as time or parametric dimensions, which significantly lengthens scan time. Our purpose was to develop novel iterative image reconstruction methods to reduce the amount of acquired data in such applications using prior knowledge about the signal in the extra dimensions. Efforts were made to accelerate two applications, namely, time-resolved contrast-enhanced MR angiography and T1 mapping. Our results demonstrate that significant acceleration (up to 27×) may be achieved using the proposed iterative reconstruction techniques.
Means and method for the focusing and acceleration of parallel beams of charged particles
Maschke, Alfred W.
1983-07-05
A novel apparatus and method for focussing beams of charged particles comprising planar arrays of electrostatic quadrupoles. The quadrupole arrays may comprise electrodes which are shared by two or more quadrupoles. Such quadrupole arrays are particularly adapted to providing strong focussing forces for high current, high brightness, beams of charged particles, said beams further comprising a plurality of parallel beams, or beamlets, each such beamlet being focussed by one quadrupole of the array. Such arrays may be incorporated in various devices wherein beams of charged particles are accelerated or transported, such as linear accelerators, klystron tubes, beam transport lines, etc.
Zuo Pingbing; Zhang Ming; Gamayunov, Konstantin; Rassoul, Hamid; Luo Xi
2011-09-10
The focused transport equation (FTE) includes all the necessary physics for modeling the shock acceleration of energetic particles with a unified description of first-order Fermi acceleration, shock drift acceleration, and shock surfing acceleration. It can treat the acceleration and transport of particles with an anisotropic distribution. In this study, the energy spectrum of pickup ions accelerated at shocks of various obliquities is investigated based on the FTE. We solve the FTE by using a stochastic approach. The shock acceleration leads to a two-component energy spectrum. The low-energy component of the spectrum is made up of particles that interact with the shock one to a few times. For these particles, the pitch angle distribution is highly anisotropic, and the energy spectrum is variable depending on the momentum and pitch angle of injected particles. At high energies, the spectrum approaches a power law consistent with the standard diffusive shock acceleration (DSA) theory. For a parallel shock, the high-energy component of the power-law spectrum, with the spectral index being the same as the prediction of DSA theory, starts just a few times the injection speed. For an oblique or quasi-perpendicular shock, the high-energy component of the spectrum exhibits a double power-law distribution: a harder power-law spectrum followed by another power-law spectrum with a slope the same as the spectral index of DSA. The shock acceleration will eventually go into the DSA regime at higher energies even if the anisotropy is not small. The intensity of the energy spectrum given by the FTE, in the high-energy range where particles get efficient acceleration in the DSA regime, is different from that given by the standard DSA theory for the same injection source. We define the injection efficiency {eta} as the ratio between them. For a parallel shock, the injection efficiency is less than 1, but for an oblique shock or a quasi-perpendicular shock it could be greater than 1.
Accelerating mesh-based Monte Carlo method on modern CPU architectures.
Fang, Qianqian; Kaeli, David R
2012-12-01
In this report, we discuss the use of contemporary ray-tracing techniques to accelerate 3D mesh-based Monte Carlo photon transport simulations. Single Instruction Multiple Data (SIMD) based computation and branch-less design are exploited to accelerate ray-tetrahedron intersection tests and yield a 2-fold speed-up for ray-tracing calculations on a multi-core CPU. As part of this work, we have also studied SIMD-accelerated random number generators and math functions. The combination of these techniques achieved an overall improvement of 22% in simulation speed as compared to using a non-SIMD implementation. We applied this new method to analyze a complex numerical phantom and both the phantom data and the improved code are available as open-source software at http://mcx.sourceforge.net/mmc/.
Kingsley, D P; Butler, P; Rowe, G M; Travis, R C; Wylie, I G
1989-01-01
A four-year study has been undertaken into the workload and cost implications of the introduction of digital subtraction angiography (DSA) in a large United Kingdom teaching hospital. The increase in workload has been entirely due to the ability to perform intravenous angiography. DSA is cheaper than conventional angiography if more than 210 cases are undertaken each year. This difference is accounted for by the reduced use of X-ray film. However, intravenous angiography is more expensive because of the use of large volumes of nonionic medium. PMID:2674769
[2011 Shanghai customer satisfaction report of DSA/X-ray equipment's after-service].
Li, Bin; Qian, Jianguo; Cao, Shaoping; Zheng, Yunxin; Xu, Zitian; Wang, Lijun
2012-11-01
To improve manufacturers' after-sale service for medical equipment, the fifth Shanghai-zone customer satisfaction survey was launched at the end of 2011. DSA/X-ray equipment was set up as an independent category for the first time. The survey shows that the DSA/X-ray equipment CSI is higher than last year's, that the customer satisfaction scores for preventive maintenance and service contracts are lower than the others, and that the CSI of local brands is lower than that of imported brands. PMID:23461127
A review of vector convergence acceleration methods, with applications to linear algebra problems
NASA Astrophysics Data System (ADS)
Brezinski, C.; Redivo-Zaglia, M.
In this article, in a few pages, we try to give an idea of convergence acceleration methods and extrapolation procedures for vector sequences, and to present some applications to linear algebra problems and to the treatment of the Gibbs phenomenon for Fourier series in order to show their effectiveness. The interested reader is referred to the literature for more details. In the bibliography, due to space limitations, we give only the more recent items; for older ones, we refer to Brezinski and Redivo-Zaglia (Extrapolation Methods: Theory and Practice, North-Holland, 1991). This book also contains, on a magnetic support, a library (in Fortran 77) of convergence acceleration algorithms and extrapolation methods.
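As a concrete instance of the vector extrapolation methods surveyed, minimal polynomial extrapolation (MPE) estimates the limit of a vector sequence from its first differences. The sketch below is a textbook formulation of MPE, not code from the Fortran 77 library mentioned above:

```python
import numpy as np

def mpe(X):
    """Minimal Polynomial Extrapolation of a vector sequence.

    X : array of shape (k+2, n) holding iterates x_0, ..., x_{k+1}.
    Solves the least-squares problem sum_j c_j (x_{j+1}-x_j) ~ -(x_{k+1}-x_k),
    sets c_k = 1, normalizes the coefficients, and returns the corresponding
    weighted combination of x_0, ..., x_k as the limit estimate.
    """
    U = np.diff(X, axis=0).T                 # n x (k+1) first differences
    c, *_ = np.linalg.lstsq(U[:, :-1], -U[:, -1], rcond=None)
    gamma = np.append(c, 1.0)
    gamma /= gamma.sum()
    return X[:-1].T @ gamma
```

Applied to n+2 iterates of an n-dimensional linear fixed-point iteration x_{j+1} = A x_j + b, MPE recovers the fixed point up to rounding, even when the iteration itself converges slowly.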
Balandin, Vladimir; Golubeva, Nina
1997-02-01
The equations of classical spin-orbit motion can be extended to a Hamiltonian system in 9-dimensional phase space by introducing a coupled spin-orbit Poisson bracket and Hamiltonian function. After this extension, it becomes possible to apply the methods of the theory of Hamiltonian systems to the study of polarized particle beam dynamics in circular accelerators and storage rings. Some of these methods have been implemented in the computer code FORGET-ME-NOT.
Centrifugal accelerator, system and method for removing unwanted layers from a surface
Foster, Christopher A.; Fisher, Paul W.
1995-01-01
A cryoblasting process having a centrifugal accelerator for accelerating frozen pellets of argon or carbon dioxide toward a target area utilizes an accelerator throw wheel designed to induce, during operation, the creation of a low-friction gas bearing within internal passages of the wheel which would otherwise retard acceleration of the pellets as they move through the passages. An associated system and method for removing paint from a surface with cryoblasting techniques involves the treating, such as a preheating, of the painted surface to soften the paint prior to the impacting of frozen pellets thereagainst to increase the rate of paint removal. A system and method for producing large quantities of frozen pellets from a liquid material, such as liquid argon or carbon dioxide, for use in a cryoblasting process utilizes a chamber into which the liquid material is introduced in the form of a jet which disintegrates into droplets. A non-condensible gas, such as inert helium or air, is injected into the chamber at a controlled rate so that the droplets freeze into bodies of relatively high density.
NASA Astrophysics Data System (ADS)
García-Pareja, S.; Vilches, M.; Lallena, A. M.
2007-09-01
The ant colony method is used to control the application of variance reduction techniques to the simulation of clinical electron linear accelerators used in cancer therapy. In particular, splitting and Russian roulette, two standard variance reduction methods, are considered. The approach can be applied to any accelerator in a straightforward way and, in addition, permits investigation of the "hot" regions of the accelerator, information that is essential for developing a source model for this therapy tool.
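The two standard variance reduction moves named above can be sketched in a few lines. This is a generic illustration of splitting and Russian roulette, not the ant colony controller of the abstract; the survival probability and split count below are hypothetical:

```python
import random

def split(weight, n):
    """Particle splitting: replace one particle of weight w by n copies of
    weight w/n. Total statistical weight is preserved exactly."""
    return [weight / n] * n

def russian_roulette(weight, survival_prob, rng):
    """Kill the particle with probability 1 - p; survivors get weight w/p,
    so the expected weight is preserved."""
    if rng.random() < survival_prob:
        return weight / survival_prob
    return 0.0

rng = random.Random(42)
# Splitting conserves weight exactly:
assert abs(sum(split(1.0, 4)) - 1.0) < 1e-12
# Roulette conserves weight only in expectation:
trials = [russian_roulette(1.0, 0.25, rng) for _ in range(200000)]
mean_weight = sum(trials) / len(trials)  # close to 1.0
```

Splitting is applied when a particle enters an important region (more samples, lower weight each); roulette prunes unimportant particles while keeping the estimator unbiased.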
A Monte Carlo Synthetic-Acceleration Method for Solving the Thermal Radiation Diffusion Equation
Evans, Thomas M; Mosher, Scott W; Slattery, Stuart
2014-01-01
We present a novel synthetic-acceleration-based Monte Carlo method for solving the equilibrium thermal radiation diffusion equation in three dimensions. The algorithm's performance is compared against traditional solution techniques using a Marshak benchmark problem and a more complex multiple-material problem. Our results show not only that our Monte Carlo method can be an effective solver for sparse matrix systems, but also that it performs competitively with deterministic methods, including preconditioned Conjugate Gradient, while producing numerically identical results. We also discuss various aspects of preconditioning the method and its general applicability to broader classes of problems.
A Monte Carlo synthetic-acceleration method for solving the thermal radiation diffusion equation
NASA Astrophysics Data System (ADS)
Evans, Thomas M.; Mosher, Scott W.; Slattery, Stuart R.; Hamilton, Steven P.
2014-02-01
We present a novel synthetic-acceleration-based Monte Carlo method for solving the equilibrium thermal radiation diffusion equation in three spatial dimensions. The algorithm performance is compared against traditional solution techniques using a Marshak benchmark problem and a more complex multiple material problem. Our results show that our Monte Carlo method is an effective solver for sparse matrix systems. For solutions converged to the same tolerance, it performs competitively with deterministic methods including preconditioned conjugate gradient and GMRES. We also discuss various aspects of preconditioning the method and its general applicability to broader classes of problems.
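The two abstracts above use Monte Carlo as a solver for sparse linear systems. The underlying idea, independent of the authors' synthetic-acceleration scheme, is the classic random-walk estimator for the Neumann series of x = Hx + b (valid when the spectral radius of H is below 1). The matrix and walk parameters below are hypothetical, chosen only to make the sketch verifiable:

```python
import numpy as np

rng = np.random.default_rng(0)

# Solve x = H x + b by sampling the Neumann series x = sum_k H^k b.
H = np.array([[0.1, 0.2, 0.0],
              [0.0, 0.1, 0.3],
              [0.2, 0.0, 0.1]])
b = np.array([1.0, 2.0, 0.5])
n = len(b)

def mc_solve(H, b, walks=20000, length=30):
    x = np.zeros(n)
    for i in range(n):
        acc = 0.0
        for _ in range(walks):
            state, w, total = i, 1.0, b[i]
            for _ in range(length):
                nxt = int(rng.integers(n))   # uniform transition, probability 1/n
                w *= H[state, nxt] * n       # importance-weight correction
                if w == 0.0:
                    break                    # walk died on a zero matrix entry
                state = nxt
                total += w * b[state]
            acc += total
        x[i] = acc / walks
    return x

x_mc = mc_solve(H, b)
x_ref = np.linalg.solve(np.eye(n) - H, b)
```

Each walk contributes an unbiased sample of one component of the Neumann series; the papers' contribution is accelerating and preconditioning this basic estimator.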
Skeleton-based OPC application for DSA full chip mask correction
NASA Astrophysics Data System (ADS)
Schneider, L.; Farys, V.; Serret, E.; Fenouillet-Beranger, C.
2015-09-01
Recent industrial results around directed self-assembly (DSA) of block copolymers (BCPs) have demonstrated the high potential of this technique [1-2]. The main advantage is cost reduction thanks to a reduced number of lithographic steps. Meanwhile, the associated correction for mask creation must account for the introduction of this new technique while maintaining a high level of accuracy and reliability. To create the VIA (vertical interconnect) layer, graphoepitaxy DSA can be used. The technique relies on the creation of confinement guides in which the BCP can separate into distinct regions; the resulting patterns are etched to obtain an ordered series of via contacts. The printing of the guiding pattern requires classical lithography. Optical proximity correction (OPC) is applied to obtain the best-suited guiding pattern matching a specific design target. In this study, an original approach for DSA full-chip mask optical proximity correction based on a skeleton representation of a guiding pattern is proposed. The cost function for the OPC process is based on minimizing the Central Placement Error (CPE), defined as the difference between an ideal skeleton target and a skeleton generated from a guiding contour. The high performance of this approach for DSA OPC full-chip correction and its ability to minimize variability error on via placement are demonstrated and reinforced by comparison with a rigorous model. Finally, this skeleton approach is highlighted as an appropriate tool for design-rule definition.
34 CFR 367.11 - What assurances must a DSA include in its application?
Code of Federal Regulations, 2012 CFR
2012-07-01
... 34 Education 2 2012-07-01 2012-07-01 false What assurances must a DSA include in its application? 367.11 Section 367.11 Education Regulations of the Offices of the Department of Education (Continued) OFFICE OF SPECIAL EDUCATION AND REHABILITATIVE SERVICES, DEPARTMENT OF EDUCATION INDEPENDENT LIVING SERVICES FOR OLDER INDIVIDUALS WHO ARE...
Amans, Matthew R; Cooke, Daniel L; Vella, Maya; Dowd, Christopher F; Halbach, Van V; Higashida, Randall T; Hetts, Steven W
2014-01-01
Contrast staining of brain parenchyma identified on non-contrast CT performed after DSA in patients with acute ischemic stroke (AIS) is an incompletely understood imaging finding. We hypothesize contrast staining to be an indicator of brain injury and suspect the fate of involved parenchyma to be cerebral infarction. Seventeen years of AIS data were retrospectively analyzed for contrast staining. Charts were reviewed and outcomes of the stained parenchyma were identified on subsequent CT and MRI. Thirty-six of 67 patients meeting inclusion criteria (53.7%) had contrast staining on CT obtained within 72 hours after DSA. Brain parenchyma with contrast staining in patients with AIS most often evolved into cerebral infarction (81%). Hemorrhagic transformation was less likely in cases with staining compared with hemorrhagic transformation in the cohort that did not have contrast staining of the parenchyma on post DSA CT (6% versus 25%, respectively, OR 0.17, 95% CI 0.017 - 0.98, p = 0.02). Brain parenchyma with contrast staining on CT after DSA in AIS patients was likely to infarct and unlikely to hemorrhage. PMID:24556308
Accurate analysis of blood vessel sizes and stenotic lesions using stereoscopic DSA system.
Fencil, L E; Doi, K; Hoffman, K R
1988-01-01
We have developed a technique to determine accurately the magnification factor and three-dimensional orientation of a vessel segment from a stereoscopic pair of digital subtraction angiograms (DSA). Our DSA system includes a stereoscopic x-ray tube with a 25-mm focal spot shift. The magnification and orientation of a selected vessel segment are determined from the distance and direction of the focal spot shift and the stereoscopic discrepancy in image positions for that segment. Our results indicate that the accuracies of determining the magnification and orientation are less than 1% and approximately 5 degrees, respectively. After the magnification and orientation are determined accurately, an iterative deconvolution technique for the measurement of vessel image size is applied to the selected vessel segment. This iterative deconvolution technique provides the best estimate of vessel image size by taking into account the unsharpness of the digital system. With this technique, the vessel image size can be determined to an accuracy of approximately 1.0 mm, which corresponds to one third the pixel size of our DSA system. Information derived from stereoscopic analysis and iterative deconvolution thus allows accurate calculation of actual vascular dimensions from DSA images.
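The magnification recovery described above follows from simple parallax geometry: a point's image disparity between the two focal-spot positions determines its depth, and hence its magnification. A hedged pinhole-camera sketch (the 25 mm shift is from the abstract; the source-to-image distance and vessel position are hypothetical):

```python
def project(x_obj, z_obj, x_focal, sid):
    """Pinhole projection onto a detector at distance `sid` from the source plane."""
    return x_focal + (x_obj - x_focal) * sid / z_obj

SID = 1000.0                 # source-to-image distance, mm (hypothetical)
SHIFT = 25.0                 # focal spot shift, mm (as in the abstract)
x_obj, z_obj = 10.0, 700.0   # hypothetical vessel position (depth from source)

x1 = project(x_obj, z_obj, 0.0, SID)
x2 = project(x_obj, z_obj, SHIFT, SID)
disparity = x2 - x1          # stereoscopic discrepancy on the detector

# For a focal shift s, disparity p = s*(z - SID)/z, so M = SID/z = 1 - p/s.
mag = 1.0 - disparity / SHIFT
true_mag = SID / z_obj
```

The recovered magnification matches the true geometric magnification exactly in this idealized model; the paper's contribution is doing this robustly on real DSA image pairs and then deconvolving system blur.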
34 CFR 367.10 - How does a designated State agency (DSA) apply for an award?
Code of Federal Regulations, 2011 CFR
2011-07-01
... 34 Education 2 2011-07-01 2010-07-01 true How does a designated State agency (DSA) apply for an award? 367.10 Section 367.10 Education Regulations of the Offices of the Department of Education... LIVING SERVICES FOR OLDER INDIVIDUALS WHO ARE BLIND What Are the Application Requirements? § 367.10...
Interlaboratory reproducibility of standard accelerated aging methods for oxidation of UHMWPE.
Kurtz, S M; Muratoglu, O K; Buchanan, F; Currier, B; Gsell, R; Greer, K; Gualtieri, G; Johnson, R; Schaffner, S; Sevo, K; Spiegelberg, S; Shen, F W; Yau, S S
2001-07-01
During accelerated aging, experimental uncertainty may arise due to variability in the oxidation process, or due to limitations in the technique that is ultimately used to measure oxidation. The purpose of the present interlaboratory study was to quantify the repeatability and reproducibility of standard accelerated aging methods for ultra-high molecular weight polyethylene (UHMWPE). Sections (200 microm thick) were microtomed from the center of an extruded rod of GUR 4150 HP, gamma irradiated in air or nitrogen, and circulated to 12 institutions in the United States and Europe for characterization of oxidation before and after accelerated aging. Specimens were aged for 3 weeks at 80 degrees C in an air circulating oven or for 2 weeks at 70 degrees C in an oxygen bomb (maintained at 503 kPa (5 atm.) of O2) in accordance with the two standard protocols described in ASTM F 2003-00. FTIR spectra were collected from each specimen within 24 h of the start and finish of accelerated aging, and oxidation indices were calculated by normalizing the peak area of the carbonyl region by the reference peak areas at 1370 or 2022 cm(-1). The mean relative interlaboratory uncertainty of the oxidation data was 78.5% after oven aging and 129.1% after bomb aging. The oxidation index measurement technique was not found to be a significant factor in the reproducibility. Comparable relative intrainstitutional uncertainty was observed after oven aging and bomb aging. For both aging methods, institutions successfully discriminated between air-irradiated and control specimens. However, the large interinstitutional variation suggests that absolute performance standards for the oxidation index of UHMWPE after accelerated aging may not be practical at the present time.
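The oxidation index computed above is simply the carbonyl band area normalized by a reference band area. A hedged numeric sketch with a synthetic spectrum (the Gaussian bands and integration windows below are illustrative, not the study's measured spectra or exact band limits):

```python
import numpy as np

def band_area(wavenumber, absorbance, lo, hi):
    """Trapezoid-integrate an absorbance band between two wavenumbers (cm^-1)."""
    sel = (wavenumber >= lo) & (wavenumber <= hi)
    w, a = wavenumber[sel], absorbance[sel]
    return float(np.sum(0.5 * (a[1:] + a[:-1]) * (w[1:] - w[:-1])))

# Synthetic FTIR spectrum: a carbonyl band near 1720 cm^-1 and a reference
# band near 1370 cm^-1 (hypothetical shapes and heights).
wn = np.linspace(1200, 1900, 1401)
spectrum = (0.6 * np.exp(-((wn - 1720) / 15.0) ** 2)
            + 0.3 * np.exp(-((wn - 1370) / 10.0) ** 2))

carbonyl = band_area(wn, spectrum, 1650, 1790)
reference = band_area(wn, spectrum, 1330, 1410)
oxidation_index = carbonyl / reference   # here (0.6*15)/(0.3*10) = 3.0
```

Normalizing by a reference band cancels path-length and sampling differences, which is why the index, rather than the raw carbonyl area, is compared across laboratories.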
NASA Technical Reports Server (NTRS)
Lathrop, J. W.
1985-01-01
If thin film cells are to be considered a viable option for terrestrial power generation, their reliability attributes will need to be explored and confidence in their stability obtained through accelerated testing. Development of a thin film accelerated test program will be more difficult than was the case for crystalline cells because of the monolithic construction of the cells. Specially constructed test samples will need to be fabricated, requiring commitment to the concept of accelerated testing by the manufacturers. A new test schedule appropriate to thin film cells will need to be developed which will be different from that used in connection with crystalline cells. Preliminary work has been started to seek thin film schedule variations for two of the simplest tests: unbiased temperature and unbiased temperature-humidity. Still to be examined are tests which involve the passage of current during temperature and/or humidity stress, either by biasing in the forward (or reverse) direction or by the application of light during stress. Investigation of these current (voltage) accelerated tests will involve development of methods of reliably contacting the thin conductive films during stress.
Donor-specific antibodies accelerate arteriosclerosis after kidney transplantation.
Hill, Gary S; Nochy, Dominique; Bruneval, Patrick; Duong van Huyen, J P; Glotz, Denis; Suberbielle, Caroline; Zuber, Julien; Anglicheau, Dany; Empana, Jean-Philippe; Legendre, Christophe; Loupy, Alexandre
2011-05-01
In biopsies of renal allografts, arteriosclerosis is often more severe than expected based on the age of the donor, even without a history of rejection vasculitis. To determine whether preformed donor-specific antibodies (DSAs) may contribute to the severity of arteriosclerosis, we examined protocol biopsies from patients with (n=40) or without (n=59) DSA after excluding those with any evidence of vasculitis. Among DSA-positive patients, arteriosclerosis significantly progressed between month 3 and month 12 after transplant (mean Banff cv score 0.65 ± 0.11 to 1.12 ± 0.10, P=0.014); in contrast, among DSA-negative patients, we did not detect a statistically significant progression during the same timeframe (mean Banff cv score 0.65 ± 0.11 to 0.81 ± 0.10, P=not significant). Available biopsies at later time points supported a rate of progression of arteriosclerosis in DSA-negative patients that was approximately one third that in DSA-positive patients. Accelerated arteriosclerosis was significantly associated with peritubular capillary leukocytic infiltration, glomerulitis, subclinical antibody-mediated rejection, and interstitial inflammation. In conclusion, these data support the hypothesis that donor-specific antibodies dramatically accelerate post-transplant progression of arteriosclerosis.
Monte Carlo method for calculating the radiation skyshine produced by electron accelerators
NASA Astrophysics Data System (ADS)
Kong, Chaocheng; Li, Quanfeng; Chen, Huaibi; Du, Taibin; Cheng, Cheng; Tang, Chuanxiang; Zhu, Li; Zhang, Hui; Pei, Zhigang; Ming, Shenjin
2005-06-01
Using the MCNP4C Monte Carlo code, the X-ray skyshine produced by 9 MeV, 15 MeV and 21 MeV electron linear accelerators was calculated with a new two-step method combined with the splitting and roulette variance reduction techniques. Results of the Monte Carlo simulation, the empirical formulas used for skyshine calculation and the dose measurements were analyzed and compared. In conclusion, the skyshine dose measurements agreed reasonably with the results computed by the Monte Carlo method, but deviated from computational results given by empirical formulas. The effect on skyshine dose caused by different structures of the accelerator head is also discussed in this paper.
A simplified spherical harmonic method for coupled electron-photon transport calculations
Josef, J.A.
1997-12-01
In this thesis the author has developed a simplified spherical harmonic method (SP_N method) and associated efficient solution techniques for 2-D multigroup electron-photon transport calculations. The SP_N method has never before been applied to charged-particle transport. He has performed a first-time Fourier analysis of the source iteration scheme and the P_1 diffusion synthetic acceleration (DSA) scheme applied to the 2-D SP_N equations. The theoretical analyses indicate that the source iteration and P_1 DSA schemes are as effective for the 2-D SP_N equations as for the 1-D S_N equations. In addition, he has applied an angular multigrid acceleration scheme and computationally demonstrated that it performs as well for the 2-D SP_N equations as for the 1-D S_N equations. It has previously been shown for 1-D S_N calculations that this scheme is much more effective than the DSA scheme when scattering is highly forward-peaked. The author has investigated the applicability of the SP_N approximation to two different physical classes of problems: satellite electronics shielding from geomagnetically trapped electrons, and electron beam problems.
Kauffman, R.
1993-04-01
This report presents results of a literature search performed to identify analytical techniques suitable for accelerated screening of the chemical and thermal stabilities of different refrigerant/lubricant combinations. The search focused on three areas: chemical stability data of HFC-134a and other non-chlorine-containing refrigerant candidates; chemical stability data of CFC-12, HCFC-22, and other chlorine-containing refrigerants; and accelerated thermal analytical techniques. The literature was catalogued and an abstract was written for each journal article or technical report. Several thermal analytical techniques were identified as candidates for development into accelerated screening tests. They are easy to operate, are common to most laboratories, and are expected to produce refrigerant/lubricant stability evaluations that agree with the current stability test, ANSI/ASHRAE (American National Standards Institute/American Society of Heating, Refrigerating, and Air-Conditioning Engineers) Standard 97-1989, "Sealed Glass Tube Method to Test the Chemical Stability of Material for Use Within Refrigerant Systems." Initial results of one accelerated thermal analytical candidate, DTA, are presented for CFC-12/mineral oil and HCFC-22/mineral oil combinations. Also described is research which will be performed in Part II to optimize the selected candidate.
On the Use of Accelerated Aging Methods for Screening High Temperature Polymeric Composite Materials
NASA Technical Reports Server (NTRS)
Gates, Thomas S.; Grayson, Michael A.
1999-01-01
A rational approach to the problem of accelerated testing of high temperature polymeric composites is discussed. The methods provided are considered tools useful in the screening of new material systems for long-term application to extreme environments that include elevated temperature, moisture, oxygen, and mechanical load. The need for reproducible mechanisms, indicator properties, and real-time data is outlined, as well as the methodologies for specific aging mechanisms.
NASA Technical Reports Server (NTRS)
Fay, John F.
1990-01-01
A calculation is made of the stability of various relaxation schemes for the numerical solution of partial differential equations. A multigrid acceleration method is introduced, and its effects on stability are explored. A detailed stability analysis of a simple case is carried out and verified by numerical experiment. It is shown that the use of multigrids can speed convergence by several orders of magnitude without adversely affecting stability.
Fattebert, J
2008-07-29
We describe an iterative algorithm to solve electronic structure problems in Density Functional Theory. The approach is presented as a Subspace Accelerated Inexact Newton (SAIN) solver for the non-linear Kohn-Sham equations. It is related to a class of iterative algorithms known as RMM-DIIS in the electronic structure community. The method is illustrated with examples of real applications using a finite difference discretization and multigrid preconditioning.
Toward sub-20nm pitch Fin patterning and integration with DSA
NASA Astrophysics Data System (ADS)
Sayan, Safak; Marzook, Taisir; Chan, BT; Vandenbroeck, Nadia; Singh, Arjun; Laidler, David; Sanchez, Efrain A.; Leray, Philippe; R. Delgadillo, Paulina; Gronheid, Roel; Vandenberghe, Geert; Clark, William; Juncker, Aurelie
2016-03-01
Directed Self Assembly (DSA) has gained increased momentum in recent years as a cost-effective means for extending lithography to sub-30nm pitch, primarily presenting itself as an alternative to mainstream 193i pitch division approaches such as SADP and SAQP. Towards these goals, IMEC has excelled at understanding and implementing directed self-assembly based on PS-b-PMMA block co-polymers (BCPs) using LiNe flow [1]. These efforts increase the understanding of how block copolymers might be implemented as part of HVM compatible DSA integration schemes. In recent contributions, we have proposed and successfully demonstrated two state-of-the-art CMOS process flows which employed DSA based on the PS-b-PMMA, LiNe flow at IMEC (pitch = 28 nm) to form FinFET arrays via both a 'cut-last' and 'cut-first' approach [2-4]. Therein, we described the relevant film stacks (hard mask and STI stacks) to achieve robust patterning and pattern transfer into IMEC's FEOL device film stacks. We also described some of the pattern placement and overlay challenges associated with these two strategies. In this contribution, we will present materials and processes for FinFET patterning and integration towards sub-20 nm pitch technology nodes. This presents a noteworthy challenge for DSA using BCPs as the ultimate resolution for PS-b-PMMA may not achieve such dimensions. The emphasis will continue to be towards patterning approaches, wafer alignment strategies, the effects of DSA processing on wafer alignment and overlay.
Scatter correction of vessel dropout behind highly attenuating structures in 4D-DSA
NASA Astrophysics Data System (ADS)
Hermus, James; Mistretta, Charles; Szczykutowicz, Timothy P.
2015-03-01
In computed tomographic (CT) image reconstruction for four-dimensional digital subtraction angiography (4D-DSA), loss of vessel contrast has been observed behind highly attenuating anatomy, such as large contrast-filled aneurysms. Although this typically occurs only in a limited range of projection angles, the observed contrast time course can be altered. In this work we propose an algorithm to correct for highly attenuating anatomy within the fill projection data, i.e., aneurysms. The algorithm uses a 3D-SA volume to create a correction volume that is multiplied by the 4D-DSA volume in order to correct for signal dropout within the 4D-DSA volume. The algorithm was designed to correct for highly attenuating material in the fill volume only; however, with alterations to a single step of the algorithm, artifacts due to highly attenuating materials in the mask volume (i.e., dental implants) can be mitigated as well. We successfully applied our algorithm to a case of vessel dropout due to the presence of a large attenuating aneurysm. The performance was assessed visually: the affected vessel no longer dropped out in corrected 4D-DSA time frames. The correction was quantified by plotting the signal intensity along the vessel. Our analysis demonstrated that the correction does not alter vessel signal values outside of the vessel dropout region but does increase the vessel values within the dropout region, as expected. We have demonstrated that this correction algorithm corrects vessel dropout in areas with highly attenuating materials.
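The multiplicative correction described above can be sketched schematically: build a correction factor from the static volume wherever it shows highly attenuating material, and multiply it into each 4D time frame. This is a toy stand-in for the paper's 3D-SA-derived factor; the array sizes, threshold, and boost value are all hypothetical:

```python
import numpy as np

def correction_volume(static_vol, threshold, boost):
    """Multiplicative correction: 1 everywhere except where the static volume
    shows highly attenuating material (schematic, not the paper's exact factor)."""
    return np.where(static_vol > threshold, boost, 1.0)

static = np.zeros((8, 8, 8))
static[2:5, 2:5, 2:5] = 10.0       # hypothetical aneurysm region
frame = np.full((8, 8, 8), 4.0)
frame[2:5, 2:5, 2:5] = 1.0         # vessel signal dropped out behind it

corrected = frame * correction_volume(static, threshold=5.0, boost=4.0)
# Outside the dropout region the frame is unchanged; inside it is boosted.
```

This mirrors the reported behaviour: signal values outside the dropout region are untouched, while values inside it are raised.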
Acceleration of curing of resin composite at the bottom surface using slow-start curing methods.
Yoshikawa, Takako; Morigami, Makoto; Sadr, Alireza; Tagami, Junji
2013-01-01
The aim of this study was to evaluate the effect of two slow-start curing methods on acceleration of the curing of resin composite specimens at the bottom surface. The light-cured resin composite was polymerized using one of three curing techniques: (1) 600 mW/cm² for 60 s, (2) 270 mW/cm² for 10 s + 0-s interval + 600 mW/cm² for 50 s, and (3) 270 mW/cm² for 10 s + 5-s interval + 600 mW/cm² for 50 s. After light curing, Knoop hardness number was measured at the top and bottom surfaces of the resin specimens. The slow-start curing method with the 5-s interval caused greater acceleration of curing of the resin composite at the bottom surface of the specimens than the slow-start curing method with the 0-s interval. The light-cured resin composite, which had increased contrast ratios during polymerization, showed acceleration of curing at the bottom surface.
Novel methods in the Particle-In-Cell accelerator Code-Framework Warp
Vay, J-L; Grote, D. P.; Cohen, R. H.; Friedman, A.
2012-12-26
The Particle-In-Cell (PIC) Code-Framework Warp is being developed by the Heavy Ion Fusion Science Virtual National Laboratory (HIFS-VNL) to guide the development of accelerators that can deliver beams suitable for high-energy density experiments and implosion of inertial fusion capsules. It is also applied in various areas outside the Heavy Ion Fusion program to the study and design of existing and next-generation high-energy accelerators, including the study of electron cloud effects and laser wakefield acceleration for example. This study presents an overview of Warp's capabilities, summarizing recent original numerical methods that were developed by the HIFS-VNL (including PIC with adaptive mesh refinement, a large-timestep 'drift-Lorentz' mover for arbitrarily magnetized species, a relativistic Lorentz invariant leapfrog particle pusher, simulations in Lorentz-boosted frames, an electromagnetic solver with tunable numerical dispersion and efficient stride-based digital filtering), with special emphasis on the description of the mesh refinement capability. In addition, selected examples of the applications of the methods to the abovementioned fields are given.
GPU-Accelerated Finite Element Method for Modelling Light Transport in Diffuse Optical Tomography
Schweiger, Martin
2011-01-01
We introduce a GPU-accelerated finite element forward solver for the computation of light transport in scattering media. The forward model is the computationally most expensive component of iterative methods for image reconstruction in diffuse optical tomography, and performance optimisation of the forward solver is therefore crucial for improving the efficiency of the solution of the inverse problem. The GPU forward solver uses a CUDA implementation that evaluates on the graphics hardware the sparse linear system arising in the finite element formulation of the diffusion equation. We present solutions for both time-domain and frequency-domain problems. A comparison with a CPU-based implementation shows significant performance gains of the graphics accelerated solution, with improvements of approximately a factor of 10 for double-precision computations, and factors beyond 20 for single-precision computations. The gains are also shown to be dependent on the mesh complexity, where the largest gains are achieved for high mesh resolutions. PMID:22013431
Influence of tungsten fiber's slow drift on the measurement of G with angular acceleration method.
Luo, Jie; Wu, Wei-Huang; Xue, Chao; Shao, Cheng-Gang; Zhan, Wen-Ze; Wu, Jun-Fei; Milyukov, Vadim
2016-08-01
In the measurement of the gravitational constant G with angular acceleration method, the equilibrium position of torsion pendulum with tungsten fiber undergoes a linear slow drift, which results in a quadratic slow drift on the angular velocity of the torsion balance turntable under feedback control unit. The accurate amplitude determination of the useful angular acceleration signal with known frequency is biased by the linear slow drift and the coupling effect of the drifting equilibrium position and the room fixed gravitational background signal. We calculate the influences of the linear slow drift and the complex coupling effect on the value of G, respectively. The result shows that the bias of the linear slow drift on G is 7 ppm, and the influence of the coupling effect is less than 1 ppm. PMID:27587137
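The bias mechanism described in the abstract above can be illustrated numerically: a slow polynomial drift superposed on a sinusoidal signal of known frequency biases a naive least-squares amplitude fit unless drift terms are included in the fit model. All amplitudes, frequencies, and drift coefficients below are hypothetical, not the experiment's values:

```python
import numpy as np

t = np.linspace(0.0, 200.0, 4001)
omega = 2.0 * np.pi * 0.05             # known signal frequency (hypothetical)
a_true = 1.0e-7                         # true signal amplitude (hypothetical)
drift = 1.0e-9 * t + 2.0e-11 * t ** 2   # slow polynomial drift (hypothetical)
signal = a_true * np.sin(omega * t) + drift

def fit_amplitude(y, basis):
    """Least-squares fit; returns the coefficient of the first basis column."""
    coef, *_ = np.linalg.lstsq(np.column_stack(basis), y, rcond=None)
    return coef[0]

sin_b = np.sin(omega * t)
# Naive fit (sine only): the unmodelled drift leaks into the amplitude.
a_naive = fit_amplitude(signal, [sin_b])
# Fit with drift terms included: the bias is removed.
a_full = fit_amplitude(signal, [sin_b, np.ones_like(t), t, t ** 2])
```

Including the drift in the fit model recovers the amplitude essentially exactly, while the naive fit is biased at the tens-of-percent level in this toy setup; the abstract's 7 ppm figure reflects how small the residual effect is after careful treatment.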
Adams, M.L.; Wareing, T.A.
1993-01-01
We study diffusion-synthetic acceleration (DSA) for within-group scattering iterations in discrete ordinates calculations. We consider analytic (not spatially discretized) equations in Cartesian coordinates with linearly anisotropic scattering. We place no restrictions on the discrete ordinates quadrature set. We assume an infinite homogeneous medium. Our main results are as follows: 1. DSA is unstable in two dimensions (2D) and three dimensions (3D), given forward-peaked scattering. It can be stabilized by taking extra transport sweeps each iteration. 2. Standard DSA is unstable, given any quadrature set that does not correctly integrate linear functions of angle. 3. Relative to one dimension (1D), DSA's performance is degraded in 2D and 3D.
A hybrid data acquisition system for magnetic measurements of accelerator magnets
Wang, X.; Hafalia, R.; Joseph, J.; Lizarazo, J.; Martchevsky, M.; Sabbi, G. L.
2011-06-03
A hybrid data acquisition system was developed for magnetic measurement of superconducting accelerator magnets at LBNL. It consists of a National Instruments dynamic signal acquisition (DSA) card and two Metrolab fast digital integrator (FDI) cards. The DSA card records the induced voltage signals from the rotating probe, while the FDI cards record the flux increment integrated over a certain angular step. This allows comparison of the measurements performed with the two cards. In this note, the setup and test of the system are summarized. With a probe rotating at a speed of 0.5 Hz, the multipole coefficients of two magnets were measured with the hybrid system. The coefficients from the DSA and FDI cards agree with each other, indicating that the numerical integration of the raw voltage acquired by the DSA card is comparable to the performance of the FDI card in the current measurement setup.
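The comparison above rests on numerically integrating the DSA card's sampled voltage into flux increments, which is what the FDI card does in hardware. A hedged sketch for an idealized rotating probe in a dipole field (the 0.5 Hz rotation rate is from the abstract; the peak flux and step size are hypothetical):

```python
import math

PHI0 = 1.0e-3          # peak flux linkage, Wb (hypothetical probe/field)
FREQ = 0.5             # probe rotation frequency, Hz (as in the abstract)
OMEGA = 2.0 * math.pi * FREQ

def voltage(t):
    """Induced coil voltage V = -dPhi/dt for Phi(t) = PHI0*cos(omega*t)."""
    return PHI0 * OMEGA * math.sin(OMEGA * t)

def flux_increment(t1, t2, steps=1000):
    """Trapezoid integration of the sampled voltage (DSA-card style)."""
    h = (t2 - t1) / steps
    total = 0.5 * (voltage(t1) + voltage(t2))
    total += sum(voltage(t1 + i * h) for i in range(1, steps))
    return total * h

# One angular step of the rotating probe:
t1, t2 = 0.0, 0.1
numeric = flux_increment(t1, t2)
analytic = PHI0 * (math.cos(OMEGA * t1) - math.cos(OMEGA * t2))
```

With adequate sampling, the software integral tracks the analytic flux change closely, which is the basis of the note's DSA-versus-FDI comparison.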
MULTILEVEL ACCELERATION OF STOCHASTIC COLLOCATION METHODS FOR PDE WITH RANDOM INPUT DATA
Webster, Clayton G; Jantsch, Peter A; Teckentrup, Aretha L; Gunzburger, Max D
2013-01-01
Stochastic Collocation (SC) methods for stochastic partial differential equations (SPDEs) suffer from the curse of dimensionality, whereby increases in the stochastic dimension cause an explosion of computational effort. To combat these challenges, multilevel approximation methods seek to decrease computational complexity by balancing spatial and stochastic discretization errors. As a form of variance reduction, multilevel techniques have been successfully applied to Monte Carlo (MC) methods, but may be extended to accelerate other methods for SPDEs in which the stochastic and spatial degrees of freedom are decoupled. This article presents general convergence and computational complexity analysis of a multilevel method for SPDEs, demonstrating its advantages with regard to standard, single level approximation. The numerical results will highlight conditions under which multilevel sparse grid SC is preferable to the more traditional MC and SC approaches.
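The multilevel idea above, in its simplest two-level Monte Carlo form, writes E[fine] = E[coarse] + E[fine - coarse]: many cheap coarse samples plus a few samples of the low-variance correction. This toy sketch (with a hypothetical 5% "discretization error" in the coarse model) is only an illustration of the mechanism, not the article's sparse-grid SC method:

```python
import random

rng = random.Random(7)

# Toy quantity of interest at two discretization levels of the same random input.
def fine(x):
    return x * x
def coarse(x):
    return x * x * 0.95        # hypothetical 5% discretization error

def corr_sample():
    x = rng.random()            # same input evaluated at both levels
    return fine(x) - coarse(x)

def two_level_estimate(n_coarse, n_fine):
    """Many cheap coarse samples plus few expensive correction samples."""
    base = sum(coarse(rng.random()) for _ in range(n_coarse)) / n_coarse
    corr = sum(corr_sample() for _ in range(n_fine)) / n_fine
    return base + corr

est = two_level_estimate(n_coarse=200000, n_fine=2000)
# True value: E[X^2] for X ~ U(0,1) is 1/3.
```

Because the correction fine - coarse has small variance, few fine-level samples suffice; balancing the sample counts across levels against the per-level cost is exactly the complexity analysis the article carries out for SC.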
A chain-of-states acceleration method for the efficient location of minimum energy paths
Hernández, E. R.; Herrero, C. P.; Soler, J. M.
2015-11-14
We describe a robust and efficient chain-of-states method for computing Minimum Energy Paths (MEPs) associated with barrier-crossing events in polyatomic systems, which we call the acceleration method. The path is parametrized in terms of a continuous variable t ∈ [0, 1] that plays the role of time. In contrast to previous chain-of-states algorithms such as the nudged elastic band or string methods, where the positions of the states in the chain are taken as variational parameters in the search for the MEP, our strategy is to formulate the problem in terms of the second derivatives of the coordinates with respect to t, i.e., the state accelerations. We show this to result in a very simple and efficient method for determining the MEP. We describe the application of the method to a series of test cases, including two low-dimensional problems and the Stone-Wales transformation in C60.
ERIC Educational Resources Information Center
Parks, Paula L.
2014-01-01
Most developmental community college students are not completing the composition sequence successfully. This mixed-methods study examined acceleration as a way to help developmental community college students complete the composition sequence more quickly and more successfully. Acceleration is a curricular redesign that includes challenging…
NASA Astrophysics Data System (ADS)
Fridrichová, Marcela; Dvořák, Karel; Gazdič, Dominik
2016-03-01
The single most reliable indicator of a material's durability is its performance in long-term tests, which cannot always be carried out due to a limited time budget. The second option is to perform some kind of accelerated durability test. The aim of the work described in this article was to develop a method for the accelerated durability testing of binders. It was decided that the Arrhenius equation approach and the theory of chemical reaction kinetics would be applied in this case. The degradation process was simplified to a single quantifiable parameter: compressive strength. A model hydraulic binder based on fluidised bed combustion ash (FBC ash) was chosen as the test subject for the development of the method. The model binder and its hydration products were tested by high-temperature X-ray diffraction analysis. The main hydration product of this binder was ettringite. Due to the thermodynamic instability of this mineral, it was possible to verify the proposed method via long-term testing. In order to accelerate the chemical reactions in the binder, four combinations of two temperatures (65 and 85°C) and two relative humidities (14 and 100%) were used. The upper temperature limit was chosen on the basis of the high-temperature X-ray results on the decomposition of ettringite. The calculation formulae for the accelerated durability tests were derived from the decrease in compressive strength under the four combinations of conditions mentioned above. The mineralogical composition of the binder after degradation was also described. The final degradation product was gypsum under dry conditions and monosulphate under wet conditions. The validity of the method and formula was subsequently verified by means of long-term testing. A very good correspondence between the calculated and real values was achieved; the deviation of these values did not exceed 5%. The designed and verified method
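The Arrhenius-based extrapolation underlying such accelerated tests can be sketched in a few lines. This is our own illustration of the approach, not the article's formulae; the activation energy value and function names are assumptions:

```python
import math

def acceleration_factor(t_test_c, t_ref_c, ea_j_per_mol):
    """Arrhenius acceleration factor between a test temperature and a
    reference (service) temperature, both given in deg C.
    ea_j_per_mol is the assumed activation energy of the degradation
    reaction; R is the universal gas constant."""
    R = 8.314  # J/(mol*K)
    t_test = t_test_c + 273.15  # convert to kelvin
    t_ref = t_ref_c + 273.15
    return math.exp(ea_j_per_mol / R * (1.0 / t_ref - 1.0 / t_test))

# Hypothetical activation energy of 60 kJ/mol: how much faster does
# degradation proceed at 85 degC than at a 20 degC service temperature?
af = acceleration_factor(85.0, 20.0, 60e3)
```

A test held for one week at the elevated temperature would then stand in for roughly `af` weeks of service exposure, under the (strong) assumption that a single thermally activated reaction controls the strength loss.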
Melendez, Johan H; Santaus, Tonya M; Brinsley, Gregory; Kiang, Daniel; Mali, Buddha; Hardick, Justin; Gaydos, Charlotte A; Geddes, Chris D
2016-10-01
Nucleic acid-based detection of gonorrhea infections typically requires a two-step process involving isolation of the nucleic acid, followed by detection of the genomic target, often with polymerase chain reaction (PCR)-based approaches. In an effort to improve on current detection approaches, we have developed a unique two-step microwave-accelerated approach for rapid extraction and detection of Neisseria gonorrhoeae (gonorrhea, GC) DNA. Our approach is based on the use of highly focused microwave radiation to rapidly lyse bacterial cells, release, and subsequently fragment microbial DNA. The DNA target is then detected by a process known as microwave-accelerated metal-enhanced fluorescence (MAMEF), an ultra-sensitive direct DNA detection analytical technique. In the current study, we show that highly focused microwaves at 2.45 GHz, using 12.3-mm gold film equilateral triangles, are able to rapidly lyse bacterial cells and fragment DNA in a time- and microwave power-dependent manner. Detection of the extracted DNA can be performed by MAMEF, without the need for DNA amplification, in less than 10 min total time, or by other PCR-based approaches. Collectively, the use of a microwave-accelerated method for the release and detection of DNA represents a significant step toward the development of a point-of-care (POC) platform for detection of gonorrhea infections. PMID:27325503
Xue, Chao; Quan, Li-Di; Yang, Shan-Qing; Wang, Bing-Peng; Wu, Jun-Fei; Shao, Cheng-Gang; Tu, Liang-Cheng; Milyukov, Vadim; Luo, Jun
2014-01-01
This paper describes the preliminary measurement of the Newtonian gravitational constant G with the angular acceleration feedback method at HUST. The apparatus has been built, and preliminary measurement performed, to test all aspects of the experimental design, particularly the feedback function, which was recently discussed in detail by Quan et al. The experimental results show that the residual twist angle of the torsion pendulum at the signal frequency introduces 0.4 ppm to the value of G. The relative uncertainty of the angular acceleration of the turntable is approximately 100 ppm, which is mainly limited by the stability of the apparatus. Therefore, the experiment has been modified with three features: (i) the height of the apparatus is reduced almost by half, (ii) the aluminium shelves were replaced with shelves made from ultra-low expansion material and (iii) a perfect compensation of the laboratory-fixed gravitational background will be carried out. With these improvements, the angular acceleration is expected to be determined with an uncertainty of better than 10 ppm, and a reliable value of G with 20 ppm or below will be obtained in the near future. PMID:25201996
Vibration-Based Method Developed to Detect Cracks in Rotors During Acceleration Through Resonance
NASA Technical Reports Server (NTRS)
Sawicki, Jerzy T.; Baaklini, George Y.; Gyekenyesi, Andrew L.
2004-01-01
In recent years, there has been an increasing interest in developing rotating machinery shaft crack-detection methodologies and online techniques. Shaft crack problems present a significant safety and loss hazard in nearly every application of modern turbomachinery. In many cases, the rotors of modern machines are rapidly accelerated from rest to operating speed, to reduce the excessive vibrations at the critical speeds. The vibration monitoring during startup or shutdown has been receiving growing attention (ref. 1), especially for machines such as aircraft engines, which are subjected to frequent starts and stops, as well as high speeds and acceleration rates. It has been recognized that the presence of angular acceleration strongly affects the rotor's maximum response to unbalance and the speed at which it occurs. Unfortunately, conventional nondestructive evaluation (NDE) methods have unacceptable limits in terms of their application for online crack detection. Some of these techniques are time consuming and inconvenient for turbomachinery service testing. Almost all of these techniques require that the vicinity of the damage be known in advance, and they can provide only local information, with no indication of the structural strength at a component or system level. In addition, the effectiveness of these experimental techniques is affected by the high measurement noise levels existing in complex turbomachine structures. Therefore, the use of vibration monitoring along with vibration analysis has been receiving increasing attention.
Krylov iterative methods and synthetic acceleration for transport in binary statistical media
Fichtl, Erin D; Warsa, James S; Prinja, Anil K
2008-01-01
In particle transport applications there are numerous physical constructs in which heterogeneities are randomly distributed. The quantity of interest in these problems is the ensemble average of the flux, or the average of the flux over all possible material 'realizations.' The Levermore-Pomraning closure assumes Markovian mixing statistics and allows a closed, coupled system of equations to be written for the ensemble averages of the flux in each material. Generally, binary statistical mixtures are considered in which there are two (homogeneous) materials and corresponding coupled equations. The solution process is iterative, but convergence may be slow as either or both materials approach the diffusion and/or atomic mix limits. A three-part acceleration scheme is devised to expedite convergence, particularly in the atomic mix-diffusion limit where computation is extremely slow. The iteration is first divided into a series of 'inner' material and source iterations to attenuate the diffusion and atomic mix error modes separately. Secondly, atomic mix synthetic acceleration is applied to the inner material iteration and S2 synthetic acceleration to the inner source iterations to offset the cost of doing several inner iterations per outer iteration. Finally, a Krylov iterative solver is wrapped around each iteration, inner and outer, to further expedite convergence. A spectral analysis is conducted and iteration counts and computing cost for the new two-step scheme are compared against those for a simple one-step iteration, to which a Krylov iterative method can also be applied.
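The core difficulty that synthetic acceleration targets can be seen in a toy fixed-point model: an unaccelerated source iteration converges with spectral radius equal to the scattering ratio c, so iteration counts blow up as c approaches 1. A minimal infinite-medium caricature of our own devising (not the authors' solver) makes this concrete:

```python
def source_iteration(c, q, tol=1e-10, max_iter=100000):
    """Toy infinite-medium source iteration: phi <- c*phi + q.
    The fixed point is q/(1-c), and the error contracts by the
    scattering ratio c each sweep, so the iteration count grows
    without bound as c -> 1 (the regime synthetic acceleration
    and Krylov wrapping are designed to rescue)."""
    phi, n = 0.0, 0
    exact = q / (1.0 - c)
    while abs(exact - phi) > tol and n < max_iter:
        phi = c * phi + q
        n += 1
    return phi, n

phi_lo, n_lo = source_iteration(0.5, 1.0)   # mildly scattering medium
phi_hi, n_hi = source_iteration(0.99, 1.0)  # highly scattering: far slower
```

With c = 0.5 convergence takes a few dozen sweeps; with c = 0.99 it takes thousands, which is the behavior the three-part scheme above is built to attenuate.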
GPU-accelerated Monte Carlo simulation of particle coagulation based on the inverse method
NASA Astrophysics Data System (ADS)
Wei, J.; Kruis, F. E.
2013-09-01
Simulating particle coagulation using Monte Carlo (MC) methods is in general a challenging computational task due to its numerical complexity and computing cost. Currently, the lowest computing costs are obtained by employing a graphics processing unit (GPU), originally developed for speeding up graphics processing in the consumer market. In this article we present an implementation on the GPU of a Monte Carlo method, based on the inverse scheme, for simulating particle coagulation. The abundant data parallelism embedded within the Monte Carlo method is explained, as it allows an efficient parallelization of the MC code on the GPU. Furthermore, the computational accuracy of the MC method on the GPU was validated against a benchmark, a CPU-based discrete-sectional method. To evaluate the performance gains from the GPU, the computing time on the GPU was compared against that of its sequential counterpart on the CPU. The measured speedups show that the GPU can accelerate the execution of the MC code by a factor of 10-100, depending on the chosen number of simulation particles. The algorithm shows a linear dependence of computing time on the number of simulation particles, which is a remarkable result in view of the n² dependence of coagulation.
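The inverse scheme referred to above amounts to inverse-transform sampling over the cumulative coagulation-kernel weights of all particle pairs. A serial CPU sketch of that selection step is given below; this is our own illustration of the idea (with a constant kernel, a standard analytic test case), not the article's GPU code, which parallelizes the cumulative sums:

```python
import bisect
import random

def select_coagulation_pair(volumes, kernel, rng=random.random):
    """Pick one coagulating pair (i, j), i < j, with probability
    proportional to the kernel K(v_i, v_j), via inverse-transform
    sampling on the cumulative pair weights."""
    pairs, cum, total = [], [], 0.0
    n = len(volumes)
    for i in range(n):
        for j in range(i + 1, n):
            total += kernel(volumes[i], volumes[j])
            pairs.append((i, j))
            cum.append(total)
    u = rng() * total                     # uniform draw on [0, total)
    return pairs[bisect.bisect_left(cum, u)]

# Constant kernel: every pair is equally likely to coagulate.
vols = [1.0, 2.0, 4.0, 8.0]
i, j = select_coagulation_pair(vols, lambda a, b: 1.0)
```

The O(n²) pair enumeration here is exactly the cost the article's GPU parallelization attacks; the inverse-CDF lookup itself maps naturally onto parallel prefix sums.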
NASA Astrophysics Data System (ADS)
Schindler, Matthias; Kretschmer, Wolfgang; Scharf, Andreas; Tschekalinskij, Alexander
2016-05-01
Three new methods to sample and prepare various carbonyl compounds for radiocarbon measurements were developed and tested. Two of these procedures utilized the Strecker synthetic method to form amino acids from carbonyl compounds with either sodium cyanide or trimethylsilyl cyanide. The third procedure used semicarbazide to form crystalline carbazones with the carbonyl compounds. The resulting amino acids and semicarbazones were then separated and purified using thin layer chromatography. The separated compounds were then combusted to CO2 and reduced to graphite to determine 14C content by accelerator mass spectrometry (AMS). All of these methods were also compared with the standard carbonyl compound sampling method wherein a compound is derivatized with 2,4-dinitrophenylhydrazine and then separated by high-performance liquid chromatography (HPLC).
Accelerated molecular dynamics and equation-free methods for simulating diffusion in solids.
Deng, Jie; Zimmerman, Jonathan A.; Thompson, Aidan Patrick; Brown, William Michael; Plimpton, Steven James; Zhou, Xiao Wang; Wagner, Gregory John; Erickson, Lindsay Crowl
2011-09-01
Many of the most important and hardest-to-solve problems related to the synthesis, performance, and aging of materials involve diffusion through the material or along surfaces and interfaces. These diffusion processes are driven by motions at the atomic scale, but traditional atomistic simulation methods such as molecular dynamics are limited to very short timescales on the order of the atomic vibration period (less than a picosecond), while macroscale diffusion takes place over timescales many orders of magnitude larger. We have completed an LDRD project with the goal of developing and implementing new simulation tools to overcome this timescale problem. In particular, we have focused on two main classes of methods: accelerated molecular dynamics methods that seek to extend the timescale attainable in atomistic simulations, and so-called 'equation-free' methods that combine a fine scale atomistic description of a system with a slower, coarse scale description in order to project the system forward over long times.
Pratapa, Phanisri P.; Suryanarayana, Phanish; Pask, John E.
2015-12-01
We employ Anderson extrapolation to accelerate the classical Jacobi iterative method for large, sparse linear systems. Specifically, we utilize extrapolation at periodic intervals within the Jacobi iteration to develop the Alternating Anderson–Jacobi (AAJ) method. We verify the accuracy and efficacy of AAJ in a range of test cases, including nonsymmetric systems of equations. We demonstrate that AAJ possesses a favorable scaling with system size that is accompanied by a small prefactor, even in the absence of a preconditioner. In particular, we show that AAJ is able to accelerate the classical Jacobi iteration by over four orders of magnitude, with speed-ups that increase as the system gets larger. Moreover, we find that AAJ significantly outperforms the Generalized Minimal Residual (GMRES) method in the range of problems considered here, with the relative performance again improving with size of the system. As a result, the proposed method represents a simple yet efficient technique that is particularly attractive for large-scale parallel solutions of linear systems of equations.
Dynamic inversion method based on the time-staggered stereo-modeling scheme and its acceleration
NASA Astrophysics Data System (ADS)
Jing, Hao; Yang, Dinghui; Wu, Hao
2016-09-01
A set of second-order differential equations describing the space-time behavior of the derivatives of displacement with respect to model parameters (i.e., waveform sensitivities) is obtained by differentiating the original wave equations. The dynamic inversion method obtains sensitivities of the seismic displacement field with respect to earth properties directly, by solving differential equations for them instead of constructing sensitivities from the displacement field itself. In this study, we take a new perspective on the dynamic inversion method and use acceleration approaches to reduce its computational time and memory usage, improving its ability to perform high-resolution imaging. The dynamic inversion method, which can simultaneously use different waves and multi-component observation data, is appropriate for directly inverting elastic parameters, medium density, or wave velocities. Full wave-field information is utilized as much as possible, at the expense of a larger amount of calculation. To mitigate the computational burden, two ways are proposed to accelerate the method from a computer-implementation point of view. One is source encoding, which uses a linear combination of all shots; the other is to reduce the amount of calculation in forward modeling. We applied a new finite difference method to the dynamic inversion to improve the computational accuracy and speed. Numerical experiments indicated that the new finite difference method can effectively suppress the numerical dispersion caused by the discretization of the wave equations, resulting in enhanced computational efficiency with less memory cost for seismic modeling and inversion based on the full wave equations. We present some inversion results to demonstrate the validity of this method through both checkerboard and Marmousi models. The method is convergent even with large deviations in the initial model. Besides, parallel calculations can be
An improved method for calibrating the gantry angles of linear accelerators.
Higgins, Kyle; Treas, Jared; Jones, Andrew; Fallahian, Naz Afarin; Simpson, David
2013-11-01
Linear particle accelerators (linacs) are widely used in radiotherapy procedures; therefore, accurate calibrations of gantry angles must be performed to prevent the exposure of healthy tissue to excessive radiation. One of the common methods for calibrating these angles is the spirit level method. In this study, a new technique for calibrating the gantry angle of a linear accelerator was examined. A cubic phantom was constructed of Styrofoam with small lead balls embedded at specific locations in the foam block. Several x-ray images were taken of this phantom at various gantry angles using an electronic portal imaging device on the linac. The deviations of the gantry angles were determined by analyzing the images using a customized computer program written in ImageJ (National Institutes of Health). Gantry angles of 0, 90, 180, and 270 degrees were chosen, and the results of both calibration methods were compared for each of these angles. The results revealed that the image method was more precise than the spirit level method. For the image method, the averages of the measured values for the selected angles of 0, 90, 180, and 270 degrees were found to be -0.086 ± 0.011, 90.018 ± 0.011, 180.178 ± 0.015, and 269.972 ± 0.006 degrees, respectively. The corresponding average values using the spirit level method were 0.2 ± 0.03, 90.2 ± 0.04, 180.1 ± 0.01, and 269.9 ± 0.05 degrees, respectively. Based on these findings, the new method was shown to be a reliable technique for calibrating the gantry angle.
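The image-based readout reduces to simple marker geometry: from the centroids of two lead balls in the portal image, the apparent gantry angle follows from an arctangent. The sketch below is a hypothetical illustration of that step under an assumed coordinate convention; the study's actual analysis was a customized ImageJ program:

```python
import math

def gantry_angle_deg(p_ref, p_marker):
    """Apparent angle (degrees) of the line from a reference lead ball
    to a second ball in the portal image, measured from the +y image
    axis, clockwise positive, wrapped to [0, 360). Coordinate convention
    is an illustrative assumption."""
    dx = p_marker[0] - p_ref[0]
    dy = p_marker[1] - p_ref[1]
    return math.degrees(math.atan2(dx, dy)) % 360.0

# A marker directly "above" the reference ball reads 0 degrees;
# one directly to its right reads 90 degrees.
a0 = gantry_angle_deg((0.0, 0.0), (0.0, 10.0))
a90 = gantry_angle_deg((0.0, 0.0), (10.0, 0.0))
```

Comparing this measured angle against the nominal gantry setting for each image yields the deviations reported in the study.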
Doran, Kara S.; Howd, Peter A.; Sallenger, Asbury H., Jr.
2015-01-01
Recent studies, and most of their predecessors, use tide gage data to quantify sea level (SL) acceleration, ASL(t). In the current study, three techniques were used to calculate acceleration from tide gage data, and of those examined, it was determined that the two techniques based on sliding a regression window through the time series are more robust than the technique that fits a single quadratic form to the entire time series, particularly if there is temporal variation in the magnitude of the acceleration. The single-fit quadratic regression method has been the most commonly used technique for determining acceleration in tide gage data. The inability of the single-fit method to account for time-varying acceleration may explain some of the inconsistent findings between investigators. Properly quantifying ASL(t) from field measurements is of particular importance in evaluating numerical models of past, present, and future sea-level rise (SLR) resulting from anticipated climate change.
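The sliding-window regression idea can be illustrated in a few lines: fit a quadratic within each window and read the acceleration off twice the leading coefficient. The sketch below uses synthetic data; the window length and variable names are our choices, not the authors':

```python
import numpy as np

def sliding_acceleration(t, h, window):
    """Estimate time-varying acceleration from a record h(t) by fitting
    a*t^2 + b*t + c within a sliding window of `window` samples; the
    acceleration at each window center is 2*a."""
    half = window // 2
    out_t, out_a = [], []
    for i in range(half, len(t) - half):
        sl = slice(i - half, i + half + 1)
        coeffs = np.polyfit(t[sl], h[sl], 2)   # [a, b, c]
        out_t.append(t[i])
        out_a.append(2.0 * coeffs[0])
    return np.array(out_t), np.array(out_a)

# Synthetic record with constant acceleration 0.02 (arbitrary units).
t = np.linspace(0.0, 100.0, 201)
h = 0.01 * t**2 + 0.5 * t + 3.0
tc, acc = sliding_acceleration(t, h, window=41)
```

Unlike a single quadratic fit over the whole record, this estimator returns a time series of acceleration and so can reveal the temporal variation the study emphasizes.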
Quan, Li-Di; Xue, Chao; Shao, Cheng-Gang; Yang, Shan-Qing; Tu, Liang-Cheng; Luo, Jun; Wang, Yong-Ji
2014-01-15
The performance of the feedback control system is of central importance in the measurement of the Newtonian gravitational constant G with the angular acceleration method. In this paper, a PID (Proportion-Integration-Differentiation) feedback loop is discussed in detail. Experimental results show that, with the feedback control activated, the twist angle of the torsion balance is limited to 7.3×10⁻⁷ rad/√Hz at the signal frequency of 2 mHz, which contributes a 0.4 ppm uncertainty to the G value.
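A discrete PID update of the kind described can be sketched as follows. The gains, plant model, and function shape are purely illustrative assumptions, not the HUST apparatus loop:

```python
def pid_step(state, error, dt, kp, ki, kd):
    """One update of a discrete PID controller.
    state = (integral, previous_error); returns (control, new_state).
    Illustrative of a loop that drives a measured error (e.g. a
    torsion-balance twist angle) toward zero; gains are made up."""
    integral, prev_err = state
    integral += error * dt
    derivative = (error - prev_err) / dt
    u = kp * error + ki * integral + kd * derivative
    return u, (integral, error)

# Drive a toy first-order plant x' = u toward setpoint 0 from x = 1.
x, state, dt = 1.0, (0.0, 0.0), 0.01
for _ in range(5000):
    u, state = pid_step(state, -x, dt, kp=5.0, ki=1.0, kd=0.1)
    x += u * dt
```

After the simulated interval the plant state has been regulated close to the setpoint; in the experiment the analogous quantity is the residual twist angle whose suppression sets the 0.4 ppm contribution quoted above.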
Practical method and device for enhancing pulse contrast ratio for lasers and electron accelerators
Zhang, Shukui; Wilson, Guy
2014-09-23
An apparatus and method for enhancing pulse contrast ratios for drive lasers and electron accelerators. The invention comprises a mechanical dual-shutter system wherein the shutters are placed sequentially in series in a laser beam path. Each shutter of the dual-shutter system has an individually operated trigger for opening and closing the shutter. As the triggers are operated individually, the delay between opening and closing the first shutter and opening and closing the second shutter is variable, providing variable differential time windows and enhancement of the pulse contrast ratio.
Acceleration of ensemble machine learning methods using many-core devices
NASA Astrophysics Data System (ADS)
Tamerus, A.; Washbrook, A.; Wyeth, D.
2015-12-01
We present a case study into the acceleration of ensemble machine learning methods using many-core devices, in collaboration with Toshiba Medical Visualisation Systems Europe (TMVSE). The adoption of GPUs to execute a key algorithm in the classification of medical image data was shown to significantly reduce overall processing time. Using a representative dataset and pre-trained decision trees as input, we demonstrate how the decision forest classification method can be mapped onto the GPU data processing model. It was found that a GPU-based version of the decision forest method resulted in a speed-up of over 138 times relative to a single-threaded CPU implementation, with further improvements possible. The same GPU-based software was then directly applied to a suitably formed dataset to benefit supervised learning techniques applied in High Energy Physics (HEP), with similar improvements in performance.
Sirin Karaarslan, Emine; Bulbul, Mehmet; Yildiz, Esma; Secilmis, Asli; Sari, Fatih; Usumez, Aslihan
2013-01-01
The purpose of this study was to evaluate the effect of polishing procedures on the color stability of different types of composites after aging. Forty disk-shaped specimens (Ø10×2 mm) were prepared for each composite resin type (an ormocer, a packable, a nanohybrid, and a microhybrid) for a total of 160 specimens. Each composite group was divided into four subgroups according to polishing method (n=10): control (no finishing and polishing), polishing disk, polishing wheel, and glaze material. Color parameters (L*, a*, and b*) and surface roughness were measured before and after accelerated aging. Of the polishing methods, glazed specimens showed the lowest color change (∆E*), ∆L*, and ∆b* values (p<0.05). Of the composite resins, the microhybrid composite showed the lowest ∆E* value, whereas the ormocer showed the highest (p<0.05). For all composite types, the surface roughness of their control groups decreased after aging (p<0.05). In conclusion, all composite resins showed color changes after accelerated aging, with the use of glaze material resulting in the lowest color change.
Dental movement acceleration: Literature review by an alternative scientific evidence method
Camacho, Angela Domínguez; Cujar, Sergio Andres Velásquez
2014-01-01
The aim of this study was to analyze the majority of publications using effective methods to speed up orthodontic treatment and determine which publications carry high evidence-based value. The literature published in PubMed from 1984 to 2013 was reviewed, in addition to well-known reports that were not classified under this database. To facilitate evidence-based decision making, guidelines such as the Consolidated Standards of Reporting Trials, the Preferred Reporting Items for Systematic Reviews and Meta-Analyses, and the Transparent Reporting of Evaluations with Non-randomized Designs checklist were used. The studies were initially divided into three groups: local application of cell mediators, physical stimuli, and techniques that take advantage of the regional acceleration phenomenon. The articles were classified according to their level of evidence using an alternative method for orthodontic scientific article classification: 1a: systematic reviews (SR) of randomized clinical trials (RCTs); 1b: individual RCTs; 2a: SR of cohort studies; 2b: individual cohort studies, controlled clinical trials, and low-quality RCTs; 3a: SR of case-control studies; 3b: individual case-control studies, low-quality cohort studies, and split-mouth designs with short follow-up; 4: case series, low-quality case-control studies, and non-systematic reviews; and 5: expert opinion. The highest level of evidence for each group was: (1) local application of cell mediators: the highest level of evidence corresponds to level 3b for prostaglandins and vitamin D; (2) physical stimuli: vibratory forces and low-level laser irradiation have evidence level 2b, electrical current is classified at evidence level 3b, and pulsed electromagnetic fields are placed at level 4 on the evidence scale; and (3) techniques related to the regional acceleration phenomenon: for corticotomy, the majority of the reports belong to level 4. Piezocision, dentoalveolar distraction, alveocentesis, monocortical tooth dislocation and ligament
ACCELERATION OF LOW-ENERGY IONS AT PARALLEL SHOCKS WITH A FOCUSED TRANSPORT MODEL
Zuo, Pingbing; Zhang, Ming; Rassoul, Hamid K.
2013-04-10
We present a test particle simulation on the injection and acceleration of low-energy suprathermal particles by parallel shocks with a focused transport model. The focused transport equation contains all necessary physics of shock acceleration, but avoids the limitation of diffusive shock acceleration (DSA) theory, which requires a small pitch-angle anisotropy. This simulation verifies that particles with speeds ranging from a fraction of to a few times the shock speed can indeed be directly injected and accelerated into the DSA regime by parallel shocks. At higher energies, starting from a few times the shock speed, the energy spectrum of accelerated particles is a power law with the same spectral index as the solution of standard DSA theory, although the particles are highly anisotropic in the upstream region. The intensity, however, is different from that predicted by DSA theory, indicating a different level of injection efficiency. It is found that the shock strength, the injection speed, and the intensity of an electric cross-shock potential (CSP) jump can affect the injection efficiency of the low-energy particles. A stronger shock has a higher injection efficiency. In addition, if the speed of injected particles is above a few times the shock speed, the produced power-law spectrum is consistent with the prediction of standard DSA theory in both its intensity and spectral index, with an injection efficiency of 1. The CSP can increase the injection efficiency through direct particle reflection back upstream, but it has little effect on energetic particle acceleration once the speed of injected particles is beyond a few times the shock speed. This test particle simulation proves that focused transport theory is an extension of DSA theory, with the capability of predicting the efficiency of particle injection.
Gallacher, J. G.; Anania, M. P.; Brunetti, E.; Ersfeld, B.; Islam, M. R.; Reitsma, A. J. W.; Shanks, R. P.; Wiggins, S. M.; Jaroszynski, D. A.; Budde, F.; Debus, A.; Haupt, K.; Schwoerer, H.; Jaeckel, O.; Pfotenhauer, S.; Rohwer, E.; Schlenvoigt, H.-P.
2009-09-15
In this paper a new method of determining the energy spread of a relativistic electron beam from a laser-driven plasma wakefield accelerator by measuring radiation from an undulator is presented. This could be used to determine the beam characteristics of multi-GeV accelerators where conventional spectrometers are very large and cumbersome. Simultaneous measurement of the energy spectra of electrons from the wakefield accelerator in the 55-70 MeV range and the radiation spectra in the wavelength range of 700-900 nm of synchrotron radiation emitted from a 50 period undulator confirm a narrow energy spread for electrons accelerated over the dephasing distance, where beam loading leads to energy compression. Measured energy spreads of less than 1% indicate the potential of using a wakefield accelerator as a driver of future compact and brilliant ultrashort pulse synchrotron sources and free-electron lasers that require high peak brightness beams.
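The inference of electron energy from undulator radiation rests on the on-axis resonance condition λ = λ_u (1 + K²/2) / (2γ²). The sketch below evaluates it; the undulator period and deflection parameter K used here are illustrative values, not the parameters of this experiment:

```python
M_E_MEV = 0.510998  # electron rest energy in MeV

def undulator_wavelength(energy_mev, lambda_u_m, K):
    """On-axis fundamental wavelength of a planar undulator
    (standard resonance condition; the period length lambda_u_m and
    deflection parameter K below are illustrative assumptions)."""
    gamma = energy_mev / M_E_MEV
    return lambda_u_m / (2.0 * gamma ** 2) * (1.0 + K ** 2 / 2.0)
```

Because λ scales as 1/γ², a measured radiation bandwidth translates directly into an electron energy spread, which is the principle the paper exploits.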
A GPU-accelerated adaptive discontinuous Galerkin method for level set equation
NASA Astrophysics Data System (ADS)
Karakus, A.; Warburton, T.; Aksel, M. H.; Sert, C.
2016-01-01
This paper presents a GPU-accelerated nodal discontinuous Galerkin method for the solution of the two- and three-dimensional level set (LS) equation on unstructured adaptive meshes. Using adaptive mesh refinement, computations are localised mostly near the interface location to reduce the computational cost. The small global time-step size that would result from the local adaptivity is avoided by local time-stepping based on a multi-rate Adams-Bashforth scheme. Platform independence of the solver is achieved with an extensible multi-threading programming API that allows runtime selection of different computing devices (GPU and CPU) and different threading interfaces (CUDA, OpenCL and OpenMP). Overall, a highly scalable, accurate and mass conservative numerical scheme that preserves the simplicity of LS formulation is obtained. Efficiency, performance and local high-order accuracy of the method are demonstrated through distinct numerical test cases.
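The multi-rate scheme builds on the classical Adams-Bashforth family. A single-rate third-order step, shown below as a simplified sketch (not the paper's multi-rate implementation), illustrates the explicit update that local time-stepping applies at different rates on different cells:

```python
def ab3_step(u, f_hist, dt):
    """One third-order Adams-Bashforth step.
    f_hist = [f(t_n), f(t_n-1), f(t_n-2)], most recent first."""
    fn, fnm1, fnm2 = f_hist
    return u + dt * (23.0 * fn - 16.0 * fnm1 + 5.0 * fnm2) / 12.0

def integrate(f, u0, t0, dt, nsteps):
    """Drive AB3 on du/dt = f(t, u), bootstrapping the first two
    steps with the second-order midpoint rule (illustrative driver)."""
    ts, us = [t0], [u0]
    for _ in range(2):  # bootstrap: midpoint rule
        t, u = ts[-1], us[-1]
        us.append(u + dt * f(t + dt / 2, u + dt / 2 * f(t, u)))
        ts.append(t + dt)
    for n in range(2, nsteps):
        hist = [f(ts[n], us[n]), f(ts[n - 1], us[n - 1]), f(ts[n - 2], us[n - 2])]
        us.append(ab3_step(us[n], hist, dt))
        ts.append(ts[n] + dt)
    return ts, us
```

In the multi-rate variant, refined cells take several such small steps per coarse-cell step, which is how the scheme escapes the global CFL restriction mentioned above.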
Proposition of an Accelerated Ageing Method for Natural Fibre/Polylactic Acid Composite
NASA Astrophysics Data System (ADS)
Zandvliet, Clio; Bandyopadhyay, N. R.; Ray, Dipa
2015-10-01
Natural fibre composites based on polylactic acid (PLA) are of special interest because they are made entirely from renewable resources and are biodegradable. Samples of jute/PLA composite and of PLA alone, made 6 years ago and kept on a shelf in a tropical climate, show rapid ageing degradation. In this work, an accelerated ageing method for natural fibre/PLA composites is proposed and tested. Experiments were carried out with jute and flax fibre/PLA composites. The method was compared with the standard ISO 1037-06a. The residual flexural strength after the ageing test was compared with that of common wood-based panels and of the naturally aged samples prepared 6 years ago.
GPU-accelerated 3D neutron diffusion code based on finite difference method
Xu, Q.; Yu, G.; Wang, K.
2012-07-01
The finite difference method, a traditional numerical solution to the neutron diffusion equation, although considered simpler and more precise than coarse-mesh nodal methods, faces a bottleneck to wide application caused by the huge memory and prohibitive computation time it requires. In recent years, the concept of general-purpose computation on GPUs has provided a powerful computational engine for scientific research. In this study, a GPU-accelerated multi-group 3D neutron diffusion code based on the finite difference method was developed. First, a clean-sheet neutron diffusion code (3DFD-CPU) was written in C++ on the CPU architecture, and later ported to GPUs under NVIDIA's CUDA platform (3DFD-GPU). The IAEA 3D PWR benchmark problem was calculated in the numerical test, where three different codes, including the original CPU-based sequential code, the HYPRE (High Performance Preconditioners)-based diffusion code, and CITATION, were used as counterpoints to test the efficiency and accuracy of the GPU-based program. The results demonstrate both high efficiency and adequate accuracy of the GPU implementation for the neutron diffusion equation. A speedup factor of about 46 was obtained using NVIDIA's GeForce GTX 470 GPU card against a 2.50 GHz Intel Quad Q9300 CPU processor. Compared with the HYPRE-based code running in parallel on an 8-core tower server, a speedup of about 2 could still be observed. More encouragingly, without any mathematical acceleration technology, the GPU implementation ran about 5 times faster than CITATION, which was itself sped up by the SOR method and Chebyshev extrapolation. (authors)
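A minimal pure-Python sketch of the kind of finite-difference kernel such a code accelerates is shown below: a one-group toy problem on a uniform mesh with zero-flux boundaries and illustrative constants, not the paper's multigroup solver. The per-cell stencil update is exactly the operation that maps well onto GPU threads:

```python
def jacobi_sweep(phi, source, D, sigma_a, h):
    """One Jacobi iteration for -D*laplacian(phi) + sigma_a*phi = source
    on a uniform 2D mesh with spacing h; boundary cells held at zero
    (one-group toy problem, illustrative only)."""
    ny, nx = len(phi), len(phi[0])
    new = [row[:] for row in phi]
    for j in range(1, ny - 1):
        for i in range(1, nx - 1):
            nb = phi[j - 1][i] + phi[j + 1][i] + phi[j][i - 1] + phi[j][i + 1]
            new[j][i] = (source[j][i] + D * nb / h ** 2) / (4.0 * D / h ** 2 + sigma_a)
    return new

def solve(nx, ny, D=1.0, sigma_a=0.1, h=1.0, q=1.0, iters=3000):
    """Iterate to (near) convergence on an nx-by-ny mesh with a flat source."""
    phi = [[0.0] * nx for _ in range(ny)]
    src = [[q] * nx for _ in range(ny)]
    for _ in range(iters):
        phi = jacobi_sweep(phi, src, D, sigma_a, h)
    return phi
```

Every cell update reads only its four neighbours, which is why the method parallelises so naturally on GPUs despite its slow per-iteration convergence.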
Turcksin, B.; Ragusa, J. C.
2013-07-01
A DSA technique to accelerate the iterative convergence of S_n transport solves is derived for bilinear discontinuous (BLD) finite elements on rectangular grids. The diffusion synthetic acceleration equations are discretized using BLD elements by adapting the Modified Interior Penalty (MIP) technique, introduced in [4] for triangular grids. The MIP-DSA equations are symmetric positive definite (SPD) and thus are solved using a preconditioned CG technique. Fourier analyses and implementation of the technique in a BLD S_n transport code show that the technique is stable and effective. (authors)
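Because the MIP-DSA system is SPD, conjugate gradients applies directly. A minimal unpreconditioned CG in pure Python is sketched below (the preconditioner the authors use is omitted for brevity, and the dense matrix-vector product stands in for their sparse one):

```python
def cg(A, b, tol=1e-10, max_iter=1000):
    """Conjugate gradient for a symmetric positive-definite matrix A
    (list of lists) and right-hand side b. Minimal sketch: dense
    products, no preconditioner."""
    n = len(b)
    x = [0.0] * n
    r = b[:]            # residual r = b - A*x with x = 0
    p = r[:]
    rs = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol * tol:
            break
        p = [r[i] + (rs_new / rs) * p[i] for i in range(n)]
        rs = rs_new
    return x
```

CG requires symmetry and positive definiteness for its short recurrences to minimise the energy norm, which is exactly the property the SPD discretization guarantees.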
Lattice Boltzmann methods on the ClearSpeed Advance™ accelerator board
NASA Astrophysics Data System (ADS)
Heuveline, V.; Weiß, J.-P.
2009-04-01
Numerical analysts and programmers are currently facing a conceptual change in processor technology. Multicore concepts, coprocessors, and accelerators are becoming a vital part of scientific computing. The new hardware technologies lead to new paradigms and require adapted methodologies and techniques in numerical simulation. These developments play an important role in computational fluid dynamics (CFD), where many highly CPU-time-demanding problems arise. In this paper, we propose a parallel lattice Boltzmann method (LBM) in the context of a coprocessor technology, the ClearSpeed Advance™ accelerator board. Implementations of LBMs on parallel architectures benefit from the locality of the necessary interactions and the regular structure of the underlying meshes. The considered board supports high-level parallelism and double precision conforming to the IEEE 754 standard. However, the solution process relies on a huge amount of data which needs to propagate along the mesh. This prototypical fact exposes the bottleneck of internal communication bandwidth and indicates the limits of this type of small-scale parallel system.
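The locality the authors exploit is visible even in a toy lattice Boltzmann model. Below is a D1Q3 BGK stream-and-collide step for pure diffusion (an illustrative reduction; production LBM flow solvers use multi-dimensional stencils such as D2Q9, but follow the same collide-then-stream pattern):

```python
def lbm_diffusion_step(f, tau):
    """One BGK collision + streaming step of a D1Q3 lattice Boltzmann
    model for pure diffusion on a periodic 1D lattice.
    f[k][x] holds populations for velocities k = 0 (rest), 1 (+1), 2 (-1).
    Toy sketch for illustration, not the board implementation."""
    n = len(f[0])
    w = [4.0 / 6.0, 1.0 / 6.0, 1.0 / 6.0]  # D1Q3 weights
    rho = [f[0][x] + f[1][x] + f[2][x] for x in range(n)]
    # collision: relax toward the zero-velocity equilibrium w_k * rho
    post = [[fk[x] + (w[k] * rho[x] - fk[x]) / tau for x in range(n)]
            for k, fk in enumerate(f)]
    # streaming: shift moving populations along their lattice velocities
    return [post[0],
            [post[1][(x - 1) % n] for x in range(n)],
            [post[2][(x + 1) % n] for x in range(n)]]
```

Collision is purely local and streaming touches only nearest neighbours; the data that "needs to propagate along the mesh" in the abstract is exactly this streaming step, which is what stresses the board's internal bandwidth.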
Accelerated Discovery in Photocatalysis using a Mechanism-Based Screening Method.
Hopkinson, Matthew N; Gómez-Suárez, Adrián; Teders, Michael; Sahoo, Basudev; Glorius, Frank
2016-03-18
Herein, we report a conceptually novel mechanism-based screening approach to accelerate discovery in photocatalysis. In contrast to most screening methods, which consider reactions as discrete entities, this approach instead focuses on a single constituent mechanistic step of a catalytic reaction. Using luminescence spectroscopy to investigate the key quenching step in photocatalytic reactions, an initial screen of 100 compounds led to the discovery of two promising substrate classes. Moreover, a second, more focused screen provided mechanistic insights useful in developing proof-of-concept reactions. Overall, this fast and straightforward approach both facilitated the discovery and aided the development of new light-promoted reactions and suggests that mechanism-based screening strategies could become useful tools in the hunt for new reactivity.
On the Use of Accelerated Test Methods for Characterization of Advanced Composite Materials
NASA Technical Reports Server (NTRS)
Gates, Thomas S.
2003-01-01
A rational approach to the problem of accelerated testing for material characterization of advanced polymer matrix composites is discussed. The experimental and analytical methods provided should be viewed as a set of tools useful in the screening of material systems for long-term engineering properties in aerospace applications. Consideration is given to long-term exposure in extreme environments that include elevated temperature, reduced temperature, moisture, oxygen, and mechanical load. Analytical formulations useful for predictive models that are based on the principles of time-based superposition are presented. The need for reproducible mechanisms, indicator properties, and real-time data are outlined as well as the methodologies for determining specific aging mechanisms.
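Time-based superposition in such models is commonly expressed through a temperature shift factor that maps short-term tests at elevated temperature onto long-term behaviour at service temperature. A sketch using the WLF form is given below; the "universal" constants are illustrative defaults, and real composite systems require fitted values:

```python
def wlf_shift_factor(T, T_ref, C1=17.44, C2=51.6):
    """WLF horizontal shift factor a_T for time-temperature
    superposition: log10(a_T) = -C1*(T - T_ref) / (C2 + T - T_ref).
    C1 and C2 are the commonly quoted 'universal' values, used here
    only as illustrative defaults."""
    dT = T - T_ref
    return 10.0 ** (-C1 * dT / (C2 + dT))

def equivalent_time_at_ref(t, T, T_ref):
    """Equivalent time at T_ref for a test of duration t at T
    (longer than t when T > T_ref, which is the acceleration)."""
    return t / wlf_shift_factor(T, T_ref)
```

This is the mechanism by which a short hot test stands in for years of service exposure, and it is only valid when the degradation mechanism is reproducible across temperatures, as the abstract stresses.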
Research on acceleration method of reactor physics based on FPGA platforms
Li, C.; Yu, G.; Wang, K.
2013-07-01
The physical designs of the new concept reactors which have complex structure, various materials and neutronic energy spectrum, have greatly improved the requirements to the calculation methods and the corresponding computing hardware. Along with the widely used parallel algorithm, heterogeneous platforms architecture has been introduced into numerical computations in reactor physics. Because of the natural parallel characteristics, the CPU-FPGA architecture is often used to accelerate numerical computation. This paper studies the application and features of this kind of heterogeneous platforms used in numerical calculation of reactor physics through practical examples. After the designed neutron diffusion module based on CPU-FPGA architecture achieves a 11.2 speed up factor, it is proved to be feasible to apply this kind of heterogeneous platform into reactor physics. (authors)
NASA Astrophysics Data System (ADS)
Feonychev, A. I.; Kalachinskaya, I. S.
2001-07-01
The numerical investigation of the impact of time-dependent accelerations (vibrations) on the flow and heat and mass transfer in the melt is carried out for the case of modeling the crystal growth by the floating zone method under conditions of microgravity that exist onboard spacecraft. The effects of the Archimedean buoyancy force and of vibrations of the free surface of fluid are considered separately. When solving the problem of the effect of the vibrations of the free surface of fluid, the previously obtained data were used. It is shown that vibrations of the free surface have a much stronger effect on the processes under consideration than the buoyancy. Some problems that are related to the newly discovered effects are discussed. The use of vibroprotected systems and a rotating magnetic field can help solve these problems. We plan to continue our investigations in future spacecraft experiments, in particular, at the International Space Station, which is under construction at the moment.
NASA Astrophysics Data System (ADS)
Safouhi, Hassan; Hoggan, Philip
2003-01-01
This review on molecular integrals for large electronic systems (MILES) places the problem of analytical integration over exponential-type orbitals (ETOs) in a historical context. After reference to the pioneering work, particularly by Barnett, Shavitt and Yoshimine, it focuses on recent progress towards rapid and accurate analytic solutions of MILES over ETOs. Software such as the hydrogenlike wavefunction package Alchemy by Yoshimine and collaborators is described. The review focuses on convergence acceleration of these highly oscillatory integrals and in particular it highlights suitable nonlinear transformations. Work by Levin and Sidi is described and applied to MILES. A step by step description of progress in the use of nonlinear transformation methods to obtain efficient codes is provided. The recent approach developed by Safouhi is also presented. The current state of the art in this field is summarized to show that ab initio analytical work over ETOs is now a viable option.
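As a concrete illustration of sequence acceleration, here is Wynn's epsilon algorithm, a Shanks-type accelerator that is a simpler relative of the Levin and Sidi transformations discussed in the review (illustrative code, not from the review or the cited packages):

```python
def wynn_epsilon(partial_sums):
    """Accelerate a sequence of partial sums with Wynn's epsilon
    algorithm. Even-numbered columns of the epsilon table hold the
    Shanks-transform estimates; the deepest entry is returned."""
    prev = [0.0] * (len(partial_sums) + 1)  # epsilon_{-1} column: zeros
    curr = list(partial_sums)               # epsilon_0 column: the sums
    for _ in range(len(partial_sums) - 1):
        nxt = [prev[i + 1] + 1.0 / (curr[i + 1] - curr[i])
               for i in range(len(curr) - 1)]
        prev, curr = curr, nxt
    best = curr if (len(partial_sums) - 1) % 2 == 0 else prev
    return best[-1]

# demo: partial sums of the alternating harmonic series (limit: ln 2)
sums, total = [], 0.0
for j in range(1, 12):
    total += (-1.0) ** (j + 1) / j
    sums.append(total)
estimate = wynn_epsilon(sums)
```

From only eleven terms of a slowly, oscillatorily converging series the accelerated estimate is accurate to many digits, which is the same payoff the review seeks for highly oscillatory molecular integrals.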
Kauffman, R.
1995-09-01
The research reported herein was performed to develop an accelerated screening method for determining the chemical and thermal stabilities of refrigerant/lubricant mixtures. The developed screening method was designed to be safe and to produce accelerated stability rankings that are in agreement with the rankings determined by the current test, Sealed Glass Tube Method to Test the Chemical Stability of Material for Use Within Refrigerant Systems, ANSI/ASHRAE Method 97-1989. The accelerated screening test developed was designed to be independent of refrigerant and lubricant compositions and to be used with a wide variety of construction materials. The studied refrigerants included CFC-11, CFC-12, HCFC-22, HFC-134a, and HFC-32/HFC-134a (zeotrope 30:70 by weight). The studied lubricants were selected from the chemical classes of mineral oil, alkylbenzene oil, polyglycols, and polyolesters. The work reported herein was performed in three phases. In the first phase, previously identified thermal analytical techniques were evaluated for development into an accelerated screening method for refrigerant/lubricant mixtures. The identified thermal analytical techniques used in situ measurements of color, temperature, or conductivity to monitor the degradation of the heated refrigerant/lubricant mixtures. The identified thermal analytical techniques also used catalysts such as ferric fluoride to accelerate the degradation of the heated refrigerant/lubricant mixtures. The thermal analytical technique employing in situ conductivity measurements was determined to be the most suitable for development into an accelerated screening method.
Abdel-Aal, El-Sayed M; Akhtar, Humayoun; Rabalski, Iwona; Bryan, Michael
2014-02-01
Anthocyanins are important dietary components with diverse positive functions in human health. This study investigates effects of accelerated solvent extraction (ASE) and microwave-assisted extraction (MAE) on anthocyanin composition and extraction efficiency from blue wheat, purple corn, and black rice in comparison with the commonly used solvent extraction (CSE). A factorial experimental design was employed to study the effects of the ASE and MAE variables, and anthocyanin extracts were analyzed by spectrophotometry, high-performance liquid chromatography with diode array detection (DAD), and liquid chromatography-mass spectrometry. The extraction efficiency of ASE and MAE was comparable with CSE at the optimal conditions. The greatest extraction by ASE was achieved at 50 °C, 2500 psi, 10 min using 5 cycles, and 100% flush. For MAE, a combination of 70 °C, 300 W, and 10 min was the most effective in extracting anthocyanins from blue wheat and purple corn, compared with 50 °C, 1200 W, and 20 min for black rice. The anthocyanin composition of grain extracts was influenced by the extraction method. The ASE method seems more appropriate for extracting anthocyanins from the colored grains, being comparable with the CSE method in terms of changes in anthocyanin composition. The method caused lower structural changes in anthocyanins compared with the MAE method. Changes in blue wheat anthocyanins were lower in comparison with purple corn or black rice, perhaps due to the absence of acylated anthocyanin compounds in blue wheat. The results show significant differences in anthocyanins among the 3 extraction methods, which indicate a need to standardize a method for valid comparisons among studies and for quality assurance purposes. PMID:24547694
NASA Astrophysics Data System (ADS)
Kawakami, Taiki; Okubo, Kan; Uchida, Naoki; Takeuchi, Nobunao; Matsuzawa, Toru
2013-04-01
Repeating earthquakes occur on the same asperity at the plate boundary. These earthquakes have an important property: the seismic waveforms observed at an identical observation site are very similar regardless of their occurrence time. The slip histories of repeating earthquakes can reveal the existence of asperities: analysis of repeating earthquakes can detect the characteristics of the asperities and enable temporal and spatial monitoring of slip at the plate boundary. Moreover, we expect medium-term prediction of earthquakes at the plate boundary by means of the analysis of repeating earthquakes. Although previous work mostly clarified the existence of asperities and repeating earthquakes, and the relationship between asperities and quasi-static slip areas, a stable and robust method for automatic detection of repeating earthquakes has not been established yet. Furthermore, in order to process the enormous data volumes involved (so-called big data), speedup of the signal processing is an important issue. Recently, the GPU (Graphics Processing Unit) has been used as an acceleration tool for signal processing in various fields of study. This movement is called GPGPU (General-Purpose computing on GPUs). In the last few years the performance of GPUs has kept improving rapidly; that is, a PC (personal computer) with GPUs can be a personal supercomputer. GPU computing gives us a high-performance computing environment at a lower cost than before. Therefore, the use of GPUs contributes to a significant reduction of the execution time in signal processing of huge seismic data sets. In this study, first, we applied band-limited Fourier phase correlation as a fast method of detecting repeating earthquakes. This method utilizes only band-limited phase information and yields the correlation values between two seismic signals. Secondly, we employ a coherence function using three orthogonal components (East-West, North-South, and Up-Down) of seismic data as a
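The band-limited phase correlation described above can be sketched as phase-only correlation: the cross spectrum is normalised to unit magnitude so that only the Fourier phase contributes, optionally restricted to a frequency band. The naive O(n²) DFT below is for clarity only, and the band handling is an illustrative assumption, not the authors' implementation:

```python
import cmath

def dft(x):
    """Naive O(n^2) discrete Fourier transform (clarity over speed)."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)) / n
            for t in range(n)]

def phase_correlation(x, y, band=None):
    """Phase-only correlation of two equal-length signals.  Each cross-
    spectrum bin is normalised to unit magnitude, so only phase matters;
    band=(lo, hi) keeps only bins lo..hi (illustrative band limiting).
    The peak index of the result gives the delay of y relative to x."""
    X, Y = dft(x), dft(y)
    n = len(x)
    R = []
    for k in range(n):
        c = Y[k] * X[k].conjugate()
        if band is not None and not (band[0] <= min(k, n - k) <= band[1]):
            c = 0.0
        R.append(c / abs(c) if abs(c) > 1e-12 else 0.0)
    return [v.real for v in idft(R)]
```

Because amplitude is discarded, similar waveforms correlate sharply even when their amplitudes differ, which suits the detection of repeating events with near-identical waveforms.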
NASA Astrophysics Data System (ADS)
Balasubramoniam, A.; Bednarek, D. R.; Rudin, S.; Ionita, C. N.
2016-03-01
An evaluation of the relation between parametric imaging results obtained from Digital Subtraction Angiography (DSA) images and blood-flow velocity measured using Doppler ultrasound in patient-specific neurovascular phantoms is provided. A silicone neurovascular phantom containing the internal carotid artery, middle cerebral artery, and anterior communicating artery was embedded in a tissue-equivalent gel. The gel prevented movement of the vessels when blood-mimicking fluid was pumped through the phantom to obtain colour Doppler images. The phantom was connected to a peristaltic pump, simulating physiological flow conditions. To obtain the parametric images, water was pumped through the phantom at various flow rates (100, 120, and 160 ml/min) and 10 ml contrast boluses were injected. DSA images were obtained at 10 frames/s from the Toshiba C-arm, and the DSA image sequences were input into LabVIEW software to compute parametric maps from time-density curves. The parametric maps were compared with velocities determined by Doppler ultrasound at the internal carotid artery. The velocities measured by Doppler ultrasound were 38, 48, and 65 cm/s for flow rates of 100, 120, and 160 ml/min, respectively. For the 20% increase in flow rate, the percentage change of blood velocity measured by Doppler ultrasound was 26.3%. Correspondingly, there was a 20% decrease in Bolus Arrival Time (BAT) and a 14.3% decrease in Mean Transit Time (MTT), showing a strong inverse correlation with the Doppler-measured velocity. The parametric imaging parameters are quite sensitive to velocity changes and are well correlated with the velocities measured by Doppler ultrasound.
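The time-density-curve parameters compared above can be computed with simple, common definitions: a threshold-based bolus arrival time and a first-moment mean transit time. The exact definitions used by the clinical software may differ; this is an illustrative sketch:

```python
def bolus_arrival_time(times, density, frac=0.1):
    """First time the time-density curve exceeds frac * peak
    (a common threshold definition; frac is an illustrative choice)."""
    peak = max(density)
    for t, d in zip(times, density):
        if d >= frac * peak:
            return t
    return None

def mean_transit_time(times, density):
    """Density-weighted mean time of the curve (first moment / area)."""
    area = sum(density)
    return sum(t * d for t, d in zip(times, density)) / area
```

Faster flow sweeps the bolus through earlier and more quickly, so both quantities fall as velocity rises, consistent with the inverse correlation reported above.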
Qiao, Jixin; Hou, Xiaolin; Steier, Peter; Nielsen, Sven; Golser, Robin
2015-07-21
An automated analytical method implemented in a flow injection (FI) system was developed for rapid determination of (236)U in 10 L seawater samples. (238)U was used as a chemical yield tracer for the whole procedure, in which extraction chromatography (UTEVA) was exploited to purify uranium after an effective iron hydroxide coprecipitation. Accelerator mass spectrometry (AMS) was applied to quantify the (236)U/(238)U ratio, and inductively coupled plasma mass spectrometry (ICPMS) was used to determine the absolute concentration of (238)U; thus, the concentration of (236)U can be calculated. The key experimental parameters affecting the analytical effectiveness were investigated and optimized in order to achieve high chemical yields and simple and rapid analysis as well as low procedure background. In addition, the operational conditions for the target preparation prior to the AMS measurement were optimized, on the basis of studying the coprecipitation behavior of uranium with iron hydroxide. The analytical results indicate that the developed method is simple and robust, providing satisfactory chemical yields (80-100%) and high analysis speed (4 h/sample), which could be an appealing alternative to conventional manual methods for (236)U determination in its tracer application. PMID:26105019
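The final step, combining the AMS atom ratio with the ICPMS-measured (238)U concentration, is simple arithmetic. The helper below is a hypothetical illustration of that calculation, not code from the paper:

```python
AVOGADRO = 6.02214076e23
M_U238 = 238.05079  # molar mass of 238U in g/mol

def u236_atoms_per_liter(ratio_236_238, c238_ng_per_l):
    """Number of 236U atoms per litre from the AMS 236U/238U atom
    ratio and the ICPMS 238U mass concentration (ng/L).
    Illustrative helper; function name and units are assumptions."""
    n238 = c238_ng_per_l * 1e-9 / M_U238 * AVOGADRO  # 238U atoms per litre
    return ratio_236_238 * n238
```

For a typical open-ocean (238)U level of a few µg/L, even a (236)U/(238)U ratio of 1e-9 corresponds to millions of (236)U atoms per litre, which is what makes the AMS measurement feasible.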
The cell-in-series method: A technique for accelerated electrode degradation in redox flow batteries
Pezeshki, Alan M.; Sacci, Robert L.; Veith, Gabriel M.; Zawodzinski, Thomas A.; Mench, Matthew M.
2015-11-21
Here, we demonstrate a novel method to accelerate electrode degradation in redox flow batteries and apply this method to the all-vanadium chemistry. Electrode performance degradation occurred seven times faster than in a typical cycling experiment, enabling rapid evaluation of materials. This method also enables the steady-state study of electrodes. In this manner, it is possible to delineate whether specific operating conditions induce performance degradation; we found that both aggressively charging and discharging result in performance loss. Post-mortem x-ray photoelectron spectroscopy of the degraded electrodes was used to resolve the effects of state of charge (SoC) and current on the electrode surface chemistry. For the electrode material tested in this work, we found evidence that a loss of oxygen content on the negative electrode cannot explain decreased cell performance. Furthermore, the effects of decreased electrode and membrane performance on capacity fade in a typical cycling battery were decoupled from crossover; electrode and membrane performance decay were responsible for a 22% fade in capacity, while crossover caused a 12% fade.
Accelerated determination of selenomethionine in selenized yeast: validation of analytical method.
Ward, Patrick; Connolly, Cathal; Murphy, Richard
2013-03-01
The purpose of this study was to reduce the extraction time, to hours instead of days, for quantification of the selenomethionine (SeMet) content of selenized yeast. An accelerated method using microwave-assisted enzymatic extraction and ultrasonication was optimized and applied to certified reference material (selenized yeast reference material (SELM)-1). Quantitation of SeMet in the extracts was performed by liquid chromatography with inductively coupled plasma mass spectrometry. The limits of detection and quantitation were 5 ppb SeMet and 15 ppb SeMet respectively and the signal response was linear up to 1,500 ppb SeMet. The average recovery of spiked SeMet from the selenized yeast matrix was 97.7 %. Analysis of an SELM-1 using this method resulted in 100.9 % recovery of the certified value (3448 ± 146 ppm SeMet). This method is suitable for fast reliable determination of SeMet in selenized yeast. PMID:23242921
An ultrasonic-accelerated oxidation method for determining the oxidative stability of biodiesel.
Avila Orozco, Francisco D; Sousa, Antonio C; Domini, Claudia E; Ugulino Araujo, Mario Cesar; Fernández Band, Beatriz S
2013-05-01
Biodiesel is considered an alternative energy source because it is produced from fats and vegetable oils by means of transesterification. It consists of fatty acid alkyl esters (FAAS), which have a great influence on biodiesel fuel properties and on the storage lifetime of biodiesel itself. Biodiesel storage stability is directly related to the oxidative stability parameter (induction time, IT), which is determined by means of the Rancimat® method. This method uses conductimetric monitoring and induces the degradation of FAAS by heating the sample at a constant temperature. The European Committee for Standardization established a standard (EN 14214) for determining the oxidative stability of biodiesel, which requires a minimum induction period of 6 h as tested by the Rancimat® method at 110 °C. In this research, we aimed at developing a fast and simple alternative method to determine the induction time (IT) based on ultrasonic-accelerated oxidation of the FAAS. The sonodegradation of biodiesel samples was induced by means of an ultrasonic homogenizer fitted with an immersible horn at 480 W of power and 20 duty cycles. UV-Vis spectrometry was used to monitor the FAAS sonodegradation by measuring the absorbance at 270 nm every 2. Biodiesel samples from different feedstocks were studied in this work. In all cases, IT was established as the inflection point of the absorbance-versus-time curve. The induction time values of all biodiesel samples determined using the proposed method were in accordance with those measured with the Rancimat® reference method, with R² = 0.998.
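Locating the induction time as the inflection point of the absorbance-time curve can be sketched with finite differences: the inflection sits where the discrete first derivative peaks. The authors' exact curve-analysis procedure may differ; this is an illustrative sketch:

```python
def induction_time(times, absorbance):
    """Estimate the induction time as the inflection point of a rising
    absorbance-time curve, taken as the interval where the discrete
    first derivative is largest (simple finite-difference sketch)."""
    best_i, best_slope = None, float("-inf")
    for i in range(1, len(times)):
        slope = (absorbance[i] - absorbance[i - 1]) / (times[i] - times[i - 1])
        if slope > best_slope:
            best_i, best_slope = i, slope
    # report the midpoint of the steepest interval
    return 0.5 * (times[best_i] + times[best_i - 1])
```

In practice the raw absorbance trace would be smoothed before differencing, since noise dominates a naive finite-difference derivative.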
NASA Astrophysics Data System (ADS)
Zhan, W.; Sun, Y.
2015-12-01
High-frequency strong-motion data, especially near-field acceleration data, have been recorded widely by different observation station systems around the world. Due to tilting and many other reasons, recordings from these seismometers usually have baseline drift problems when a big earthquake happens. It is hard to obtain a reasonable and precise co-seismic displacement through simple double integration. Here we present a combined method using the wavelet transform and several simple linear procedures. Owing to the lack of dense high-rate GNSS data in most regions of the world, we did not include GNSS data in this method at first, but consider it an evaluation benchmark for our results. This semi-automatic method decomposes a raw signal into two portions, a summation of high ranks and a summation of low ranks, using a cubic B-spline wavelet decomposition procedure. Independent linear treatments are applied to these two summations, which are then composed together to recover a usable and reasonable result. We use data from the 2008 Wenchuan earthquake and choose stations with a nearby GPS recording to validate this method. Nearly all of them have compatible co-seismic displacements when compared with GPS stations or field surveys. Since seismometer stations and GNSS stations of the observation systems in China are sometimes quite far from each other, we also test this method with some other earthquakes (the 1999 Chi-Chi earthquake and the 2011 Tohoku earthquake). For the 2011 Tohoku earthquake, we introduce GPS recordings to this combined method, given the existence of a dense GNSS system in Japan.
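For contrast with the wavelet scheme, the naive processing it improves upon, removing a constant pre-event offset and double-integrating the acceleration, can be sketched as follows (a simplified linear baseline correction, explicitly not the authors' method):

```python
def double_integrate(acc, dt, pre_event=None):
    """Trapezoidal double integration of an acceleration record after
    subtracting the mean of the first pre_event samples (or of the
    whole record if pre_event is None).  A simplified linear baseline
    correction illustrating the drift problem, not the wavelet method."""
    n = len(acc)
    k = pre_event if pre_event is not None else n
    offset = sum(acc[:k]) / k
    a = [v - offset for v in acc]
    vel, disp = [0.0], [0.0]
    for i in range(1, n):
        vel.append(vel[-1] + 0.5 * (a[i - 1] + a[i]) * dt)
        disp.append(disp[-1] + 0.5 * (vel[-2] + vel[-1]) * dt)
    return disp
```

A constant offset handled this way is harmless, but a time-varying tilt-induced drift survives the subtraction and grows quadratically under double integration, which is precisely why the frequency-separating wavelet treatment above is needed.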
A GPU Accelerated Discontinuous Galerkin Conservative Level Set Method for Simulating Atomization
NASA Astrophysics Data System (ADS)
Jibben, Zechariah J.
This dissertation describes a process for interface capturing via an arbitrary-order, nearly quadrature-free, discontinuous Galerkin (DG) scheme for the conservative level set method (Olsson et al., 2005, 2008). The DG numerical method is utilized to solve both advection and reinitialization, and executed on a refined level set grid (Herrmann, 2008) for effective use of processing power. Computation is executed in parallel utilizing both CPU and GPU architectures to make the method feasible at high order. Finally, a sparse data structure is implemented to take full advantage of parallelism on the GPU, where performance relies on well-managed memory operations. With solution variables projected into a kth order polynomial basis, a k + 1 order convergence rate is found for both advection and reinitialization tests using the method of manufactured solutions. Other standard test cases, such as Zalesak's disk and deformation of columns and spheres in periodic vortices, are also performed, showing several orders of magnitude improvement over traditional WENO level set methods. These tests also show the impact of reinitialization, which often increases shape and volume errors as a result of level set scalar trapping by normal vectors calculated from the local level set field. Accelerating advection via GPU hardware is found to provide a 30x speedup factor comparing serial execution on a 2.0 GHz Intel Xeon E5-2620 CPU with an Nvidia Tesla K20 GPU, with speedup factors increasing with polynomial degree until shared memory is filled. A similar algorithm is implemented for reinitialization, which relies on heavier use of shared and global memory and as a result fills them more quickly and produces smaller speedups of 18x.
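Reported k + 1 convergence rates are typically measured from errors at two mesh resolutions. A one-line helper for the standard calculation (common practice, not code from the dissertation):

```python
import math

def observed_order(e_coarse, e_fine, h_coarse, h_fine):
    """Observed convergence rate from errors at two mesh sizes:
    p = log(e_c / e_f) / log(h_c / h_f).  For a k-th order polynomial
    basis, a DG scheme like the one above is expected to yield
    p close to k + 1."""
    return math.log(e_coarse / e_fine) / math.log(h_coarse / h_fine)
```

For example, an error that drops from 8e-3 to 1e-3 when the mesh size halves indicates third-order convergence.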
A semiempirical method for the description of off-center ratios at depth from linear accelerators
Tsalafoutas, I.A.; Xenofos, S.; Yakoumakis, E.; Nikoletopoulos, S.
2003-06-30
A semiempirical method for the description of the off-center ratios (OCR) at depth from linear accelerators is presented, which is based on a method originally developed for cobalt-60 (60Co) units. The OCR profile is obtained as the sum of two components: the first describes an OCR similar to that from a 60Co unit, which approximates that resulting from the modification of the original x-ray intensity distribution by the flattening filter; the second takes into account the variable effect of the flattening filter on the dose profile for different depths and field sizes, by considering the existence of a block and employing the negative field concept. The above method is formulated in a mathematical expression, where the parameters involved are obtained by fitting to the measured OCRs. Using this method, OCRs for various depths and field sizes, from a Philips SL-20 for the 6 MV x-ray beam and from a Siemens Primus 23 for both the 6 MV and 23 MV x-ray beams, were reproduced with good accuracy. Furthermore, OCRs for other fields and depths that were not included in the fitting procedure were calculated using linear interpolation to estimate the values of the parameters. The results indicate that this method can be used to calculate OCR profiles for a wide range of depths and field sizes from a measured set of data and may be used for monitor unit calculations for off-axis points using a standard geometry. It may also be useful as a quality control tool to verify the accuracy of missing profiles calculated by a treatment planning system.
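The fit-then-interpolate workflow above can be illustrated with a toy two-component model. The basis profiles `f_co` and `f_block` below are hypothetical stand-ins (the paper's actual functional forms are not reproduced here); the point is that once the model is linear in its weights, the fit is a least-squares problem and fitted parameters can later be interpolated across depths and field sizes.

```python
import numpy as np

def f_co(x, w):
    """Smooth 60Co-like profile with half-width w (hypothetical shape)."""
    return 1.0 / (1.0 + np.exp((np.abs(x) - w) / 0.5))

def f_block(x, w):
    """Flattening-filter correction term, nonzero inside the field only
    (hypothetical stand-in for the negative-field block component)."""
    return np.where(np.abs(x) < w, (np.abs(x) / w) ** 2, 0.0)

def fit_ocr(x, ocr, w):
    """Fit OCR(x) ~ a*f_co + b*f_block by linear least squares."""
    A = np.column_stack([f_co(x, w), f_block(x, w)])
    coef, *_ = np.linalg.lstsq(A, ocr, rcond=None)
    return coef

# Recover the weights of a synthetic noiseless profile.
x = np.linspace(-10.0, 10.0, 101)
coef = fit_ocr(x, 1.0 * f_co(x, 5.0) + 0.05 * f_block(x, 5.0), 5.0)
```

Parameters fitted at measured depths and field sizes could then be interpolated (e.g. with `np.interp`) for unmeasured geometries, as the abstract describes.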
Rodgers, J.E.; Celebi, M.
2011-01-01
The 1994 Northridge earthquake caused brittle fractures in steel moment frame building connections, despite causing little visible building damage in most cases. Future strong earthquakes are likely to cause similar damage to the many un-retrofitted pre-Northridge buildings in the western US and elsewhere. Without obvious permanent building deformation, costly intrusive inspections are currently the only way to determine if major fracture damage that compromises building safety has occurred. Building instrumentation has the potential to provide engineers and owners with timely information on fracture occurrence. Structural dynamics theory predicts and scale model experiments have demonstrated that sudden, large changes in structure properties caused by moment connection fractures will cause transient dynamic response. A method is proposed for detecting the building-wide level of connection fracture damage, based on observing high-frequency, fracture-induced transient dynamic responses in strong motion accelerograms. High-frequency transients are short (<1 s), sudden-onset waveforms with frequency content above 25 Hz that are visually apparent in recorded accelerations. Strong motion data and damage information from intrusive inspections collected from 24 sparsely instrumented buildings following the 1994 Northridge earthquake are used to evaluate the proposed method. The method's overall success rate for this data set is 67%, but this rate varies significantly with damage level. The method performs reasonably well in detecting significant fracture damage and in identifying cases with no damage, but fails in cases with few fractures. Combining the method with other damage indicators and removing records with excessive noise improves the ability to detect the level of damage. © 2010 Elsevier B.V. All rights reserved.
A coupled ordinates method for solution acceleration of rarefied gas dynamics simulations
Das, Shankhadeep; Mathur, Sanjay R.; Alexeenko, Alina; Murthy, Jayathi Y.
2015-05-15
Non-equilibrium rarefied flows are frequently encountered in a wide range of applications, including atmospheric re-entry vehicles, vacuum technology, and microscale devices. Rarefied flows at the microscale can be effectively modeled using the ellipsoidal statistical Bhatnagar–Gross–Krook (ESBGK) form of the Boltzmann kinetic equation. Numerical solutions of these equations are often based on the finite volume method (FVM) in physical space and the discrete ordinates method in velocity space. However, existing solvers use a sequential solution procedure wherein the velocity distribution functions are implicitly coupled in physical space, but are solved sequentially in velocity space. This leads to explicit coupling of the distribution function values in velocity space and slows down convergence in systems with low Knudsen numbers. Furthermore, this also makes it difficult to solve multiscale problems or problems in which there is a large range of Knudsen numbers. In this paper, we extend the coupled ordinates method (COMET), previously developed to study participating radiative heat transfer, to solve the ESBGK equations. In this method, at each cell in the physical domain, distribution function values for all velocity ordinates are solved simultaneously. This coupled solution is used as a relaxation sweep in a geometric multigrid method in the spatial domain. Enhancements to COMET to account for the non-linearity of the ESBGK equations, as well as the coupled implementation of boundary conditions, are presented. The methodology works well with arbitrary convex polyhedral meshes, and is shown to give significantly faster solutions than the conventional sequential solution procedure. Acceleration factors of 5–9 are obtained for low to moderate Knudsen numbers on single processor platforms.
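The convergence problem COMET addresses can be seen in a toy model: N "velocity ordinates" at a single cell, coupled through a BGK-like mean-field relaxation term, f_i = s_i + c*mean(f). The sequential scheme lags mean(f) across sweeps (analogous to the explicit velocity-space coupling described above), so it converges at rate c per sweep and stalls as c approaches 1 (low Knudsen number); the coupled scheme solves all ordinates at the cell simultaneously. The coupling model and constants are illustrative, not the ESBGK discretization itself.

```python
import numpy as np

def sequential(s, c, tol=1e-10, max_it=10000):
    """Sweep with a lagged mean-field term; converges at rate c per sweep."""
    f = np.zeros_like(s)
    for it in range(1, max_it + 1):
        f_new = s + c * f.mean()
        if np.max(np.abs(f_new - f)) < tol:
            return f_new, it
        f = f_new
    return f, max_it

def coupled(s, c):
    """Solve all ordinates at the cell simultaneously (COMET-style)."""
    n = len(s)
    A = np.eye(n) - (c / n) * np.ones((n, n))
    return np.linalg.solve(A, s)

s = np.ones(8)
f_seq, iterations = sequential(s, 0.99)   # thousands of sweeps at c = 0.99
f_cpl = coupled(s, 0.99)                  # one direct solve
```

In COMET this per-cell coupled solve is then used as the relaxation sweep inside a geometric multigrid cycle over the spatial mesh.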
Park, Jaehong; Caprioli, Damiano; Spitkovsky, Anatoly
2015-02-27
We study diffusive shock acceleration (DSA) of protons and electrons at nonrelativistic, high Mach number, quasiparallel, collisionless shocks by means of self-consistent 1D particle-in-cell simulations. For the first time, both species are found to develop power-law distributions with the universal spectral index -4 in momentum space, in agreement with the prediction of DSA. We find that scattering of both protons and electrons is mediated by right-handed circularly polarized waves excited by the current of energetic protons via nonresonant hybrid (Bell) instability. Protons are injected into DSA after a few gyrocycles of shock drift acceleration (SDA), while electrons are first preheated via SDA, then energized via a hybrid acceleration process that involves both SDA and Fermi-like acceleration mediated by Bell waves, before eventual injection into DSA. Using the simulations we can measure the electron-proton ratio in accelerated particles, which is of paramount importance for explaining the cosmic ray fluxes measured on Earth and the multiwavelength emission of astrophysical objects such as supernova remnants, radio supernovae, and galaxy clusters. We find the normalization of the electron power law is ≲10^{-2} of the protons for strong nonrelativistic shocks.
The accelerated polyvinyl-alcohol method for GSR collection--PVAL 2.0.
Schyma, C; Placidi, P
2000-11-01
The polyvinyl-alcohol collection method (PVAL) is used in forensic practice to gather topographical information about gunshot residues (GSR) from the hands to decide if the subject has made use of firearms. The results allow a distinction between suicide and homicide. The only inconvenience of PVAL was that the procedure took about 60 min because three layers of liquid PVAL had to be applied and dried. Therefore, the collection method was only applied to corpses. The improved and accelerated PVAL 2.0 uses a sandwich technique. Cotton gauze for stabilization is moistened with a 10% PVAL solution. A solid film of PVAL (Solublon) is spread on the cotton mesh. The gauze is then modeled to the hand and dried with a hair dryer. After removing the cotton gauze, the traces are embedded in the water-soluble PVAL. The procedure does not take more than 15 min. The results demonstrate the qualities and advantages of PVAL: topographical distribution of GSR, highest gain of GSR, sampling of all other traces like blood, backspatter etc., and humidity does not reduce the gain. In addition, with the new PVAL 2.0 dislocation of GSR or contamination are excluded. PVAL 2.0 can also be applied on live suspects.
Method for Direct Measurement of Cosmic Acceleration by 21-cm Absorption Systems
NASA Astrophysics Data System (ADS)
Yu, Hao-Ran; Zhang, Tong-Jie; Pen, Ue-Li
2014-07-01
So far there is only indirect evidence that the Universe is undergoing an accelerated expansion. The evidence for cosmic acceleration is based on the observation of different objects at different distances and requires invoking the Copernican cosmological principle and Einstein's equations of motion. We examine the direct observability using recession velocity drifts (Sandage-Loeb effect) of 21-cm hydrogen absorption systems in upcoming radio surveys. This measures the change in velocity of the same objects separated by a time interval and is a model-independent measure of acceleration. We forecast that for a CHIME-like survey with a decade time span, we can detect the acceleration of a ΛCDM universe with 5σ confidence. This acceleration test requires modest data analysis and storage changes from the normal processing and cannot be recovered retroactively.
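The drift being measured can be made concrete. For a flat ΛCDM universe the redshift drift rate of a comoving source is dz/dt = (1+z)H0 - H(z), and the corresponding apparent velocity change is Δv = c Δz/(1+z). A short sketch, assuming fiducial values H0 = 70 km/s/Mpc and Ωm = 0.3 (not taken from the paper):

```python
import numpy as np

C = 2.998e8                      # speed of light, m/s
H0_SI = 70 * 1e3 / 3.086e22      # 70 km/s/Mpc expressed in 1/s
YR = 3.156e7                     # seconds per year

def velocity_drift(z, years, om=0.3):
    """Sandage-Loeb velocity drift (m/s) of a comoving source at redshift z
    accumulated over `years` of observer proper time, flat LCDM."""
    Ez = np.sqrt(om * (1 + z) ** 3 + (1 - om))   # H(z)/H0
    dz_dt = H0_SI * ((1 + z) - Ez)               # redshift drift rate, 1/s
    dz = dz_dt * years * YR
    return C * dz / (1 + z)

drift_z1 = velocity_drift(1.0, 10.0)   # of order centimeters per second
```

The decade-scale signal is a few cm/s at z ~ 1, and it changes sign at higher redshift where the expansion was decelerating, which is why the measurement is such a direct probe of acceleration.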
Hybrid Monte Carlo/Deterministic Methods for Accelerating Active Interrogation Modeling
Peplow, Douglas E.; Miller, Thomas Martin; Patton, Bruce W; Wagner, John C
2013-01-01
The potential for smuggling special nuclear material (SNM) into the United States is a major concern to homeland security, so federal agencies are investigating a variety of preventive measures, including detection and interdiction of SNM during transport. One approach for SNM detection, called active interrogation, uses a radiation source, such as a beam of neutrons or photons, to scan cargo containers and detect the products of induced fissions. In realistic cargo transport scenarios, the process of inducing and detecting fissions in SNM is difficult due to the presence of various and potentially thick materials between the radiation source and the SNM, and the practical limitations on radiation source strength and detection capabilities. Therefore, computer simulations are being used, along with experimental measurements, in efforts to design effective active interrogation detection systems. The computer simulations mostly consist of simulating radiation transport from the source to the detector region(s). Although the Monte Carlo method is predominantly used for these simulations, difficulties persist related to calculating statistically meaningful detector responses in practical computing times, thereby limiting their usefulness for design and evaluation of practical active interrogation systems. In previous work, the benefits of hybrid methods that use the results of approximate deterministic transport calculations to accelerate high-fidelity Monte Carlo simulations have been demonstrated for source-detector type problems. In this work, the hybrid methods are applied and evaluated for three example active interrogation problems. Additionally, a new approach is presented that uses multiple goal-based importance functions depending on a particle's relevance to the ultimate goal of the simulation. Results from the examples demonstrate that the application of hybrid methods to active interrogation problems dramatically increases their calculational efficiency.
Challenges in LER/CDU metrology in DSA: placement error and cross-line correlations
NASA Astrophysics Data System (ADS)
Constantoudis, Vassilios; Kuppuswamy, Vijaya-Kumar M.; Gogolides, Evangelos; Pret, Alessandro V.; Pathangi, Hari; Gronheid, Roel
2016-03-01
DSA lithography poses new challenges in LER/LWR metrology due to its self-organized and pitch-based nature. To cope with these challenges, a novel characterization approach is required, with new metrics and updates to the older ones. To this end, we focus on two specific challenges of DSA line patterns: a) the large correlations between the left and right edges of a line (line wiggling, rms(LWR)
Mask free intravenous 3D digital subtraction angiography (IV 3D-DSA) from a single C-arm acquisition
NASA Astrophysics Data System (ADS)
Li, Yinsheng; Niu, Kai; Yang, Pengfei; Aagaard-Kienitz, Beveley; Niemann, David B.; Ahmed, Azam S.; Strother, Charles; Chen, Guang-Hong
2016-03-01
Currently, clinical acquisition of IV 3D-DSA requires two separate scans: a mask scan without contrast medium and a filled scan with contrast injection. Having two separate scans adds radiation dose to the patient and increases the likelihood of inadvertent patient-motion-induced mis-registration and the associated mis-registration artifacts in IV 3D-DSA images. In this paper, a new technique, SMART-RECON, is introduced to generate IV 3D-DSA images from a single Cone Beam CT (CBCT) acquisition, eliminating the mask scan. The potential benefits of eliminating the mask scan are: (1) both radiation dose and scan time can be reduced by a factor of 2; (2) intra-sweep motion can be eliminated; (3) inter-sweep motion can be mitigated. Numerical simulations were used to validate the algorithm in terms of contrast recoverability and the ability to mitigate limited-view artifacts.
WDS/DSA Certification - International collaboration for a trustworthy research data infrastructure
NASA Astrophysics Data System (ADS)
Mokrane, Mustapha; Hugo, Wim; Harrison, Sandy
2016-04-01
, German Institute for Standardization (DIN) standard 31644, Trustworthy Repositories Audit and Certification (TRAC) criteria and the International Organization for Standardization (ISO) standard 16363. In addition, the Data Seal of Approval (DSA) and WDS set up core certification mechanisms for trusted digital repositories in 2009, which are increasingly recognized as de facto standards. While DSA emerged in Europe in the Humanities and Social Sciences, WDS started as an international initiative with historical roots in the Earth and Space Sciences. Their catalogues of requirements and review procedures are based on the same principles of openness and transparency. A unique feature of the DSA and WDS certification is that it strikes a balance between simplicity, robustness and the effort required to complete them. A successful international cross-project collaboration was initiated between WDS and DSA under the umbrella of the Research Data Alliance (RDA), an international initiative started in 2013 to promote data interoperability, which provided a useful and neutral forum. A joint working group was established in early 2014 to reconcile and simplify the array of certification options and to improve and stimulate core certification for scientific data services. The outputs of this collaboration are a Catalogue of Common Requirements (https://goo.gl/LJZqDo) and a Catalogue of Common Procedures (https://goo.gl/vNR0q1), which will be implemented jointly by WDS and DSA.
A method for accelerating the molecular dynamics simulation of infrequent events
Voter, A.F.
1997-03-01
For infrequent-event systems, transition state theory (TST) is a powerful approach for overcoming the time scale limitations of the molecular dynamics (MD) simulation method, provided one knows the locations of the potential-energy basins (states) and the TST dividing surfaces (or the saddle points) between them. Often, however, the states to which the system will evolve are not known in advance. We present a new, TST-based method for extending the MD time scale that does not require advance knowledge of the states of the system or the transition states that separate them. The potential is augmented by a bias potential, designed to raise the energy in regions other than at the dividing surfaces. State to state evolution on the biased potential occurs in the proper sequence, but at an accelerated rate with a nonlinear time scale. Time is no longer an independent variable, but becomes a statistically estimated property that converges to the exact result at long times. The long-time dynamical behavior is exact if there are no TST-violating correlated dynamical events, and appears to be a good approximation even when this condition is not met. We show that for strongly coupled (i.e., solid state) systems, appropriate bias potentials can be constructed from properties of the Hessian matrix. This new "hyper-MD" method is demonstrated on two model potentials and for the diffusion of a Ni atom on a Ni(100) terrace for a duration of 20 μs. © 1997 American Institute of Physics.
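The boosted-clock idea can be sketched in one dimension. Here the bias is a simple well-flooding potential dV(x) = max(0, Eb - V(x)), which vanishes wherever V >= Eb and hence near dividing surfaces; this is a crude stand-in for the Hessian-based construction in the paper, and all constants (barrier height, temperature, flooding level, step size) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def V(x):   # 1D double well with minima at x = +/-1 and a barrier at x = 0
    return (x * x - 1.0) ** 2

def dV(x):  # gradient of V
    return 4.0 * x * (x * x - 1.0)

def hyper_md(x0=-1.0, beta=5.0, Eb=0.5, dt=1e-3, steps=20000):
    """Overdamped Langevin dynamics on the biased potential max(V, Eb).
    The physical clock advances by dt * exp(beta * bias) each step."""
    x, t_boost = x0, 0.0
    for _ in range(steps):
        bias = max(0.0, Eb - V(x))
        # The biased potential is flat where V < Eb, so the force is zero
        # there and equals -V'(x) elsewhere.
        f = -dV(x) if V(x) >= Eb else 0.0
        x += f * dt + np.sqrt(2.0 * dt / beta) * rng.standard_normal()
        t_boost += dt * np.exp(beta * bias)   # statistically boosted time
    return x, t_boost, steps * dt

x_final, t_boost, t_md = hyper_md()
```

Because the bias is nonnegative, the boosted clock always runs at least as fast as the raw MD clock, and much faster while the system sits inside a flooded basin; that is the source of the microsecond reach quoted above.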
Recent advances in high-performance modeling of plasma-based acceleration using the full PIC method
NASA Astrophysics Data System (ADS)
Vay, J.-L.; Lehe, R.; Vincenti, H.; Godfrey, B. B.; Haber, I.; Lee, P.
2016-09-01
Numerical simulations have been critical in the recent rapid developments of plasma-based acceleration concepts. Among the various available numerical techniques, the particle-in-cell (PIC) approach is the method of choice for self-consistent simulations from first principles. The fundamentals of the PIC method were established decades ago, but improvements or variations are continuously being proposed. We report on several recent advances in PIC-related algorithms that are of interest for application to plasma-based accelerators, including (a) detailed analysis of the numerical Cherenkov instability and its remediation for the modeling of plasma accelerators in laboratory and Lorentz boosted frames, (b) analytic pseudo-spectral electromagnetic solvers in Cartesian and cylindrical (with azimuthal modes decomposition) geometries, and (c) novel analysis of Maxwell's solvers' stencil variation and truncation, in application to domain decomposition strategies and implementation of perfectly matched layers in high-order and pseudo-spectral solvers.
k-t Group sparse: a method for accelerating dynamic MRI.
Usman, M; Prieto, C; Schaeffter, T; Batchelor, P G
2011-10-01
Compressed sensing (CS) is a data-reduction technique that has been applied to speed up the acquisition in MRI. However, the use of this technique in dynamic MR applications has been limited in terms of the maximum achievable reduction factor. In general, noise-like artefacts and bad temporal fidelity are visible in standard CS MRI reconstructions when high reduction factors are used. To increase the maximum achievable reduction factor, additional or prior information can be incorporated in the CS reconstruction. Here, a novel CS reconstruction method is proposed that exploits the structure within the sparse representation of a signal by enforcing the support components to be in the form of groups. These groups act like a constraint in the reconstruction. The information about the support region can be easily obtained from training data in dynamic MRI acquisitions. The proposed approach was tested in two-dimensional cardiac cine MRI with both downsampled and undersampled data. Results show that higher acceleration factors (up to 9-fold), with improved spatial and temporal quality, can be obtained with the proposed approach in comparison to the standard CS reconstructions. PMID:21394781
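The key operation behind enforcing group-structured support is group soft-thresholding, the proximal step of the group-sparsity penalty. A minimal sketch follows; it shows only this one step, not the full k-t reconstruction pipeline, and the group layout (here, explicit index arrays, as might be derived from training data) is a hypothetical example.

```python
import numpy as np

def group_soft_threshold(x, groups, lam):
    """Shrink each group of coefficients toward zero in its l2 norm:
    groups with norm <= lam are zeroed, others are scaled by 1 - lam/norm."""
    out = np.zeros_like(x, dtype=float)
    for g in groups:                    # g: index array for one group
        norm = np.linalg.norm(x[g])
        if norm > lam:
            out[g] = (1.0 - lam / norm) * x[g]
    return out
```

Within an iterative CS reconstruction, this step replaces elementwise soft-thresholding, so that whole groups of x-f support components survive or vanish together.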
Accelerated path integral methods for atomistic simulations at ultra-low temperatures
NASA Astrophysics Data System (ADS)
Uhl, Felix; Marx, Dominik; Ceriotti, Michele
2016-08-01
Path integral methods provide a rigorous and systematically convergent framework to include the quantum mechanical nature of atomic nuclei in the evaluation of the equilibrium properties of molecules, liquids, or solids at finite temperature. Such nuclear quantum effects are often significant for light nuclei already at room temperature, but become crucial at cryogenic temperatures such as those provided by superfluid helium as a solvent. Unfortunately, the cost of converged path integral simulations increases significantly upon lowering the temperature so that the computational burden of simulating matter at the typical superfluid helium temperatures becomes prohibitive. Here we investigate how accelerated path integral techniques based on colored noise generalized Langevin equations, in particular the so-called path integral generalized Langevin equation thermostat (PIGLET) variant, perform in this extreme quantum regime using as an example the quasi-rigid methane molecule and its highly fluxional protonated cousin, CH5+. We show that the PIGLET technique gives a speedup of two orders of magnitude in the evaluation of structural observables and quantum kinetic energy at ultralow temperatures. Moreover, we computed the spatial spread of the quantum nuclei in CH4 to illustrate the limits of using such colored noise thermostats close to the many body quantum ground state.
Deken, Jean Marie; /SLAC
2009-06-19
Advocating for the good of the SLAC Archives and History Office (AHO) has not been a one-time affair, nor has it been a one-method procedure. It has required taking time to ascertain the current and perhaps predict the future climate of the Laboratory, and it has required developing and implementing a portfolio of approaches to the goal of building a stronger archive program by strengthening and appropriately expanding its resources. Among the successful tools in the AHO advocacy portfolio, the Archives Program Review Committee has been the most visible. The Committee and the role it serves, as well as other formal and informal advocacy efforts, are the focus of this case study. My remarks today will begin with a brief introduction to advocacy and outreach as I understand them, and with a description of the Archives and History Office's efforts to understand and work within the corporate culture of the SLAC National Accelerator Laboratory. I will then share with you some of the tools we have employed to advocate for the Archives and History Office programs and activities; and finally, I will talk about how well - or badly - those tools have served us over the past decade.
GPU accelerated simulations of 3D deterministic particle transport using discrete ordinates method
Gong Chunye; Liu Jie; Chi Lihua; Huang Haowei; Fang Jingyue; Gong Zhenghu
2011-07-01
Graphics Processing Unit (GPU), originally developed for real-time, high-definition 3D graphics in computer games, now provides great capability for solving scientific applications. The basis of particle transport simulation is the time-dependent, multi-group, inhomogeneous Boltzmann transport equation. The numerical solution to the Boltzmann equation involves the discrete ordinates (S_n) method and the procedure of source iteration. In this paper, we present a GPU accelerated simulation of one energy group time-independent deterministic discrete ordinates particle transport in 3D Cartesian geometry (Sweep3D). The performance of the GPU simulations is reported for simulations with a vacuum boundary condition. The relative advantages and disadvantages of the GPU implementation, the simulation on multiple GPUs, the programming effort and code portability are also discussed. The results show that the overall performance speedup of one NVIDIA Tesla M2050 GPU ranges from 2.56 compared with one Intel Xeon X5670 chip to 8.14 compared with one Intel Core Q6600 chip for no flux fixup. The simulation with flux fixup on one M2050 is 1.23 times faster than on one X5670.
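The kernel being accelerated is the transport sweep. A scalar sketch in 1D slab geometry with diamond differencing shows the cell-by-cell recurrence; Sweep3D performs the analogous march over a 3D Cartesian grid for many ordinates, and the cross sections and source below are illustrative.

```python
import numpy as np

def sweep_1d(mu, sigma_t, q, dx, psi_in=0.0):
    """One sweep along an ordinate mu > 0 in 1D slab geometry.
    Diamond difference closure: psi_avg = (psi_left + psi_right) / 2,
    so the cell balance mu*(psi_R - psi_L) + sigma_t*dx*psi_avg = q*dx
    gives the explicit march below. Returns cell-average angular flux."""
    n = len(q)
    psi_avg = np.zeros(n)
    psi_left = psi_in                       # vacuum boundary: psi_in = 0
    for i in range(n):
        psi_avg[i] = (q[i] * dx + 2.0 * mu * psi_left) / (2.0 * mu + sigma_t * dx)
        psi_left = 2.0 * psi_avg[i] - psi_left   # outgoing face flux
    return psi_avg
```

Deep inside a uniform source region the flux approaches the infinite-medium value q/sigma_t, which makes a convenient sanity check. The data dependence of `psi_left` on the previous cell is exactly what makes parallelizing the sweep (on GPUs, via wavefront-style scheduling) nontrivial.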
ERIC Educational Resources Information Center
Manche, Emanuel P.
1979-01-01
Describes a compact and portable apparatus for measuring, with a high degree of precision, the value of the gravitational acceleration g. The apparatus consists of a falling mercury drop and an electronic timing circuit. (GA)
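The kinematics behind the apparatus is d = g t^2 / 2 for a drop falling from rest, so a single timed fall over a known distance yields g. The drop height and fall time below are illustrative numbers, not values from the article.

```python
def g_from_fall(d_m, t_s):
    """Gravitational acceleration from a timed free fall: d = g*t^2/2."""
    return 2.0 * d_m / t_s ** 2

# A 1.00 m fall timed at 0.4515 s gives g close to 9.81 m/s^2.
g = g_from_fall(1.0, 0.4515)
```

Because g depends on t^-2, timing precision dominates the error budget, which is why the article pairs the falling drop with an electronic timing circuit.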
ERIC Educational Resources Information Center
Huang, SuHua
2012-01-01
The mixed-method explanatory research design was employed to investigate the effectiveness of the Accelerated Reader (AR) program on middle school students' reading achievement and motivation. A total of 211 sixth to eighth-grade students provided quantitative data by completing an AR Survey. Thirty of the 211 students were randomly selected to…
Optimization of accelerator parameters using normal form methods on high-order transfer maps
Snopok, Pavel
2007-05-01
Methods of analysis of the dynamics of ensembles of charged particles in collider rings are developed. The following problems are posed and solved using normal form transformations and other methods of perturbative nonlinear dynamics: (1) Optimization of the Tevatron dynamics: (a) Skew quadrupole correction of the dynamics of particles in the Tevatron in the presence of the systematic skew quadrupole errors in dipoles; (b) Calculation of the nonlinear tune shift with amplitude based on the results of measurements and the linear lattice information; (2) Optimization of the Muon Collider storage ring: (a) Computation and optimization of the dynamic aperture of the Muon Collider 50 x 50 GeV storage ring using higher order correctors; (b) 750 x 750 GeV Muon Collider storage ring lattice design matching the Tevatron footprint. The normal form coordinates have a very important advantage over the particle optical coordinates: if the transformation can be carried out successfully (general restrictions for that are not much stronger than the typical restrictions imposed on the behavior of the particles in the accelerator) then the motion in the new coordinates has a very clean representation allowing one to extract more information about the dynamics of particles, and they are very convenient for the purposes of visualization. All the problem formulations include the derivation of the objective functions, which are later used in the optimization process using various optimization algorithms. Algorithms used to solve the problems are specific to collider rings and applicable to similar problems arising on other machines of the same type. The details of the long-term behavior of the systems are studied to ensure their stability for the desired number of turns. The algorithm of the normal form transformation is of great value for such problems as it gives much extra information about the disturbing factors. In addition to the fact that the dynamics of particles is represented
NASA Astrophysics Data System (ADS)
Nishiuchi, M.; Sakaki, H.; Esirkepov, T. Zh.; Nishio, K.; Pikuz, T. A.; Faenov, A. Ya.; Skobelev, I. Yu.; Orlandi, R.; Pirozhkov, A. S.; Sagisaka, A.; Ogura, K.; Kanasaki, M.; Kiriyama, H.; Fukuda, Y.; Koura, H.; Kando, M.; Yamauchi, T.; Watanabe, Y.; Bulanov, S. V.; Kondo, K.; Imai, K.; Nagamiya, S.
2016-04-01
A combination of a petawatt laser and nuclear physics techniques can crucially facilitate the measurement of exotic nuclei properties. With numerical simulations and laser-driven experiments we show prospects for the Laser-driven Exotic Nuclei extraction-acceleration method proposed in [M. Nishiuchi et al., Phys. Plasmas 22, 033107 (2015)]: a femtosecond petawatt laser, irradiating a target bombarded by an external ion beam, extracts from the target and accelerates to a few GeV highly charged, short-lived heavy exotic nuclei created in the target via nuclear reactions.
Hassanein, Ahmed; Konkashbaev, Isak
2006-10-03
A device and method for generating extremely short-wave ultraviolet electromagnetic wave uses two intersecting plasma beams generated by two plasma accelerators. The intersection of the two plasma beams emits electromagnetic radiation and in particular radiation in the extreme ultraviolet wavelength. In the preferred orientation two axially aligned counter streaming plasmas collide to produce an intense source of electromagnetic radiation at the 13.5 nm wavelength. The Mather type plasma accelerators can utilize tin, or lithium covered electrodes. Tin, lithium or xenon can be used as the photon emitting gas source.
Chapinal, N; de Passillé, A M; Pastell, M; Hänninen, L; Munksgaard, L; Rushen, J
2011-06-01
The aims were to determine whether measures of acceleration of the legs and back of dairy cows while they walk could help detect changes in gait or locomotion associated with lameness and differences in the walking surface. In 2 experiments, 12 or 24 multiparous dairy cows were fitted with five 3-dimensional accelerometers, 1 attached to each leg and 1 to the back, and acceleration data were collected while cows walked in a straight line on concrete (experiment 1) or on both concrete and rubber (experiment 2). Cows were video-recorded while walking to assess overall gait, asymmetry of the steps, and walking speed. In experiment 1, cows were selected to maximize the range of gait scores, whereas no clinically lame cows were enrolled in experiment 2. For each accelerometer location, overall acceleration was calculated as the magnitude of the 3-dimensional acceleration vector and the variance of overall acceleration, as well as the asymmetry of variance of acceleration within the front and rear pair of legs. In experiment 1, the asymmetry of variance of acceleration in the front and rear legs was positively correlated with overall gait and the visually assessed asymmetry of the steps (r ≥ 0.6). Walking speed was negatively correlated with the asymmetry of variance of the rear legs (r=-0.8) and positively correlated with the acceleration and the variance of acceleration of each leg and back (r ≥ 0.7). In experiment 2, cows had lower gait scores [2.3 vs. 2.6; standard error of the difference (SED)=0.1, measured on a 5-point scale] and lower scores for asymmetry of the steps (18.0 vs. 23.1; SED=2.2, measured on a continuous 100-unit scale) when they walked on rubber compared with concrete, and their walking speed increased (1.28 vs. 1.22 m/s; SED=0.02). The acceleration of the front (1.67 vs. 1.72 g; SED=0.02) and rear (1.62 vs. 1.67 g; SED=0.02) legs and the variance of acceleration of the rear legs (0.88 vs. 0.94 g; SED=0.03) were lower when cows walked on rubber
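The per-leg features described above can be sketched directly. The overall acceleration is the magnitude of the 3-axis accelerometer vector; for the asymmetry of variance within a pair of legs we use a normalized absolute difference, which is our reading of "asymmetry of variance", not a formula stated in the abstract.

```python
import numpy as np

def overall_acc(ax, ay, az):
    """Magnitude of the 3-dimensional acceleration vector, sample by sample."""
    return np.sqrt(ax ** 2 + ay ** 2 + az ** 2)

def variance_asymmetry(acc_left, acc_right):
    """Normalized asymmetry of the variance of overall acceleration
    between the two legs of a pair: 0 for symmetric gait, up to 1."""
    vl, vr = np.var(acc_left), np.var(acc_right)
    return abs(vl - vr) / (vl + vr)
```

A perfectly symmetric pair of signals gives 0, while a leg that moves with twice the amplitude of its partner (variance ratio 4:1) gives 3/5, consistent with the idea that lame cows load their leg pairs unevenly.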
Grassi, G.
2006-07-01
We present a non-linear space-angle two-level acceleration scheme for the method of characteristics (MOC). To the fine level on which the MOC transport calculation is performed, we associate a more coarsely discretized phase space in which a low-order problem is solved as an acceleration step. Cross sections on the coarse level are obtained by a flux-volume homogenisation technique, which entails the non-linearity of the acceleration. Discontinuity factors per surface are introduced as additional degrees of freedom on the coarse level in order to ensure the equivalence of the heterogeneous and the homogenised problem. After each fine transport iteration, a low-order transport problem is iteratively solved on the homogenised grid. The solution of this problem is then used to correct the angular moments of the flux resulting from the previous free transport sweep. Numerical tests for a given benchmark have been performed. Results are discussed. (authors)
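The flux-volume homogenisation step described above can be sketched as below; the function name and the flat per-cell data layout are illustrative assumptions, not the paper's implementation:

```python
def homogenized_cross_section(sigma, phi, vol):
    """Collapse fine-level cross sections onto one coarse cell by
    flux-volume weighting: sum(sigma_i * phi_i * V_i) / sum(phi_i * V_i).
    Because the weights phi_i come from the latest transport sweep, the
    resulting coarse-level problem is non-linear in the flux."""
    num = sum(s * p * v for s, p, v in zip(sigma, phi, vol))
    den = sum(p * v for p, v in zip(phi, vol))
    return num / den
```

A uniform fine-level cross section is reproduced exactly regardless of the flux shape, which is the basic consistency property of the weighting.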
Advanced 3D Poisson solvers and particle-in-cell methods for accelerator modeling
NASA Astrophysics Data System (ADS)
Serafini, David B.; McCorquodale, Peter; Colella, Phillip
2005-01-01
We seek to improve on the conventional FFT-based algorithms for solving the Poisson equation with infinite-domain (open) boundary conditions for large problems in accelerator modeling and related areas. In particular, improvements in both accuracy and performance are possible by combining several technologies: the method of local corrections (MLC); the James algorithm; and adaptive mesh refinement (AMR). The MLC enables the parallelization (by domain decomposition) of problems with large domains and many grid points. This improves on the FFT-based Poisson solvers typically used because it does not require the all-to-all communication pattern that parallel 3D FFT algorithms require, which tends to be a performance bottleneck on current (and foreseeable) parallel computers. In initial tests, good scalability up to 1000 processors has been demonstrated for our new MLC solver. An essential component of our approach is a new version of the James algorithm for infinite-domain boundary conditions for the case of three dimensions. By using a simplified version of the fast multipole method in the boundary-to-boundary potential calculation, we improve on the performance of the Hockney algorithm typically used by reducing the number of grid points by a factor of 8 and the CPU costs by a factor of 3. This is particularly important for large problems where computer memory limits are a consideration. The MLC allows for the use of adaptive mesh refinement, which reduces the number of grid points and increases the accuracy of the Poisson solution. This improves on the uniform-grid methods typically used in PIC codes, particularly in beam problems where the halo is large. Also, the number of particles per cell can be controlled more closely with adaptivity than with a uniform grid. Using AMR with particles is more complicated than using uniform grids: it affects depositing particles on the non-uniform grid, reassigning particles when the adaptive grid changes, and maintaining the load
Statistical Method for Nonequilibrium Systems with Application to Accelerator Beam Dynamics
NASA Astrophysics Data System (ADS)
Meller, Robert Edwin
In this thesis, a method is developed for calculating the limit cycle distribution of a many-particle system in weak contact with a heat bath. Both externally driven systems and unstable systems with mean-field collective interaction are considered. The system is described by a Fokker-Planck equation, and then the single particle motion is transformed to action -angle coordinates to separate the thermal and mechanical time dependencies. The equation is then averaged over angle variables to remove the mechanical motion and produce an equation with only thermal motion in action space. The limit cycle is the time-independent solution of the averaged equation. As an example of a driven system, the distribution of driven oscillators is calculated in the region of action space near a nonlinear resonance, and the perpetual currents known as resonance streaming are shown. As an example of collective instability, the thermodynamic stability of a system of oscillators with a long range cosine potential is considered. For the case of an attractive potential, time dependent limit cycles are found with lower free energy than equilibrium. Hence, this is a conservative many-body system which oscillates spontaneously when placed in contact with a heat bath. This prediction is verified with numerical simulations. The phenomenon of accelerator bunch lengthening is then explained as an example of thermal instability which has been enhanced by the nonconservative nature of the wake field coupling force. The threshold of thermal instability is shown to be related to the total energy loss of the charge bunch, rather than to the collective frequency shift as predicted for the threshold of mechanical instability by the linearized Vlasov equation. Numerical calculations of bunch lengthening in the electron storage ring SPEAR are presented, and compared with simulations.
Wells, Brian J
2016-01-01
Background Most patients presenting to US Emergency Departments (ED) with chest pain are hospitalized for comprehensive testing. These evaluations cost the US health system >$10 billion annually, but have a diagnostic yield for acute coronary syndrome (ACS) of <10%. The history/ECG/age/risk factors/troponin (HEART) Pathway is an accelerated diagnostic protocol (ADP), designed to improve care for patients with acute chest pain by identifying patients for early ED discharge. Prior efficacy studies demonstrate that the HEART Pathway safely reduces cardiac testing, while maintaining an acceptably low adverse event rate. Objective The purpose of this study is to determine the effectiveness of HEART Pathway ADP implementation within a health system. Methods This controlled before-after study will accrue adult patients with acute chest pain, but without ST-segment elevation myocardial infarction on electrocardiogram, for two years and is expected to include approximately 10,000 patients. Outcome measures include hospitalization rate, objective cardiac testing rates (stress testing and angiography), length of stay, and rates of recurrent cardiac care for participants. Results In pilot data, the HEART Pathway decreased hospitalizations by 21% and decreased hospital length of stay (median reduction of 12 hours), without increasing adverse events or recurrent care. At the writing of this paper, data have been collected on >5000 patient encounters. The HEART Pathway has been fully integrated into health system electronic medical records, providing real-time decision support to our providers. Conclusions We hypothesize that the HEART Pathway will safely reduce healthcare utilization. This study could provide a model for delivering high-value care to the 8-10 million US ED patients with acute chest pain each year. ClinicalTrial Clinicaltrials.gov NCT02056964; https://clinicaltrials.gov/ct2/show/NCT02056964 (Archived by WebCite at http://www.webcitation.org/6ccajsgyu) PMID:26800789
Simpson, J.D.
1990-01-01
The search for new methods to accelerate particle beams to high energy using high gradients has resulted in a number of candidate schemes. One of these, wakefield acceleration, has been the subject of considerable R&D in recent years. This effort has resulted in successful proof-of-principle experiments and in increased understanding of many of the practical aspects of the technique. Some wakefield basics plus the status of existing and proposed experimental work are discussed, along with speculations on the future of wakefield acceleration. 10 refs., 6 figs.
NASA Astrophysics Data System (ADS)
Aldana, M.; Costanzo-Alvarez, V.; Gonzalez, C.; Gomez, L.
2009-05-01
During the last few years we have performed surface reservoir characterization at some Venezuelan oil fields using rock magnetic properties. We have tried to identify, at shallow levels, the "oil magnetic signature" of subjacent reservoirs. Recent data obtained from eastern Venezuela (San Juan field) emphasizes the differences between rock magnetic data from eastern and western oil fields. These results support the hypothesis of different authigenic processes. To better characterize hydrocarbon microseepage in both cases, we apply a new method to analyze IRM curves in order to find out the main magnetic phases responsible for the observed magnetic susceptibility (MS) anomalies. This alternative method is based on a Direct Signal Analysis (DSA) of the IRM in order to identify the number and type of magnetic components. According to this method, the IRM curve is decomposed as the sum of N elementary curves (modeled using the expression proposed by Robertson and France, 1994) whose mean coercivities vary in the interval of the measured magnetic field. The result is an adjusted spectral histogram from which the number of main contributions, their widths and mean coercivities, associated with the number and type of magnetic minerals, can be obtained. This analysis indicates that in western fields the main magnetic mineralogy is magnetite. Conversely in eastern fields, the MS anomalies are mainly caused by the presence of Fe sulphides (i.e. greigite). These results support the hypothesis of two different processes. In western fields a net electron transfer from the organic matter, degraded by hydrocarbon gas leakage, should occur precipitating Fe(II) magnetic minerals (e.g. magnetite). On the other hand, high concentrations of H2S at shallow depth levels, might allow the formation of secondary Fe-sulphides in eastern fields.
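A minimal sketch of the decomposition idea above: each elementary curve is modeled as a cumulative log-Gaussian in the applied field, following the general form proposed by Robertson and France (1994). The parameter names are illustrative, and a real analysis would fit the component parameters to the measured IRM acquisition curve:

```python
import math

def clg_component(field, saturation, b_half, dispersion):
    """One elementary IRM acquisition curve: a cumulative log-Gaussian with
    a saturation magnetization, a mean coercivity b_half (field at which
    half the saturation is acquired), and a dispersion parameter (width in
    log10-field units)."""
    z = (math.log10(field) - math.log10(b_half)) / (dispersion * math.sqrt(2.0))
    return 0.5 * saturation * (1.0 + math.erf(z))

def irm_model(field, components):
    """IRM curve modeled as the sum of N elementary curves; components is a
    list of (saturation, b_half, dispersion) tuples, one per magnetic phase."""
    return sum(clg_component(field, *c) for c in components)
```

Fitting such a model yields the number of components and their mean coercivities and widths, which is the adjusted spectral histogram the method interprets mineralogically.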
Application of the Euler-Lagrange method in determination of the coordinate acceleration
NASA Astrophysics Data System (ADS)
Sfarti, A.
2016-05-01
In a recent comment published in this journal (2015 Eur. J. Phys. 36 038001), Khrapko derived the relationship between coordinate acceleration and coordinate speed for the case of radial motion in Schwarzschild coordinates. We show an alternative derivation based on the Euler-Lagrange formalism, which has the advantage of circumventing the tedious calculation of the Christoffel symbols and is more intuitive. Another point of our comment is that one should not attach much physical meaning to coordinate-dependent entities: GR is a coordinate-free theory, so a relationship between two coordinate-dependent entities, such as the dependence of acceleration on speed, should not be given much importance. By contrast, the proper acceleration and proper speed are meaningful entities, and their relationship is relevant. The comment is intended for graduate students and for instructors who teach GR.
NASA Astrophysics Data System (ADS)
Ye, Junye; le Roux, Jakobus A.; Arthur, Aaron D.
2016-08-01
We study the physics of locally born interstellar pickup proton acceleration at the nearly perpendicular solar wind termination shock (SWTS) in the presence of a random magnetic field spiral angle using a focused transport model. Guided by Voyager 2 observations, the spiral angle is modeled with a q-Gaussian distribution. The spiral angle fluctuations, which are used to generate the perpendicular diffusion of pickup protons across the SWTS, play a key role in enabling efficient injection and rapid diffusive shock acceleration (DSA) when these particles follow field lines. Our simulations suggest that variation of both the shape (q-value) and the standard deviation (σ-value) of the q-Gaussian distribution significantly affect the injection speed, pitch-angle anisotropy, radial distribution, and the efficiency of the DSA of pickup protons at the SWTS. For example, increasing q and especially reducing σ enhances the DSA rate.
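The q-Gaussian model for the random spiral angle can be sketched as follows; this is the standard Tsallis form, and the exact normalization and parameterization used in the paper may differ:

```python
import math

def q_exponential(x, q):
    """Tsallis q-exponential: [1 + (1 - q) x]^(1/(1-q)), reducing to exp(x)
    in the limit q -> 1, and cut off at zero where the base turns negative."""
    if abs(q - 1.0) < 1e-12:
        return math.exp(x)
    base = 1.0 + (1.0 - q) * x
    return base ** (1.0 / (1.0 - q)) if base > 0.0 else 0.0

def q_gaussian(x, q, sigma):
    """Unnormalized q-Gaussian: larger q fattens the tails relative to an
    ordinary Gaussian, while sigma sets the width, matching the two shape
    controls (q-value and sigma-value) varied in the simulations."""
    return q_exponential(-x * x / (2.0 * sigma * sigma), q)
```

For q = 1 the ordinary Gaussian is recovered exactly, which makes the q-value a clean handle on how heavy-tailed the spiral-angle fluctuations are.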
Zuo, Pingbing; Zhang, Ming; Rassoul, Hamid K.
2013-10-20
The focused transport theory is appropriate for describing the injection and acceleration of low-energy particles at shocks as an extension of diffusive shock acceleration (DSA). In this investigation, we aim to characterize the role of the cross-shock potential (CSP), which originates from the charge separation across the shock ramp, in pickup ion (PUI) acceleration at various types of shocks with a focused transport model. The simulation results of energy spectrum and spatial density distribution for the cases with and without CSP added to the model are compared. With sufficient acceleration time, the focused transport acceleration finally falls into the DSA regime, with the power-law spectral index equal to the solution of the DSA theory. The CSP can affect the shape of the spectrum segment at lower energies, but it does not change the spectral index of the final power-law spectrum at high energies. It is found that the CSP controls the injection efficiency, which is the fraction of PUIs reaching the DSA regime. A stronger CSP jump results in a dramatically improved injection efficiency. Our simulation results also show that the injection efficiency of PUIs is mass-dependent, being lower for species with higher mass. In addition, the CSP is able to enhance particle reflection upstream to produce a stronger intensity spike at the shock front. We conclude that the CSP is a non-negligible factor that affects the dynamics of PUIs at shocks.
NASA Technical Reports Server (NTRS)
Kolyer, J. M.; Mann, N. R.
1977-01-01
Methods of accelerated and abbreviated testing were developed and applied to solar cell encapsulants. These encapsulants must provide protection for as long as 20 years outdoors at different locations within the United States. Consequently, encapsulants were exposed for increasing periods of time to the inherent climatic variables of temperature, humidity, and solar flux. Property changes in the encapsulants were observed. The goal was to predict long term behavior of encapsulants based upon experimental data obtained over relatively short test periods.
Hull, J.R.
2000-06-27
Gravitational acceleration is measured in all spatial dimensions with improved sensitivity by utilizing a high temperature superconducting (HTS) gravimeter. The HTS gravimeter is comprised of a permanent magnet suspended in a spaced relationship from a high temperature superconductor, and a cantilever having a mass at its free end is connected to the permanent magnet at its fixed end. The permanent magnet and superconductor combine to form a bearing platform with extremely low frictional losses, and the rotational displacement of the mass is measured to determine gravitational acceleration. Employing a high temperature superconductor component has the significant advantage of having an operating temperature at or below 77 K, whereby cooling may be accomplished with liquid nitrogen.
Hull, John R.
1998-11-06
Gravitational acceleration is measured in all spatial dimensions with improved sensitivity by utilizing a high temperature superconducting (HTS) gravimeter. The HTS gravimeter is comprised of a permanent magnet suspended in a spaced relationship from a high temperature superconductor, and a cantilever having a mass at its free end is connected to the permanent magnet at its fixed end. The permanent magnet and superconductor combine to form a bearing platform with extremely low frictional losses, and the rotational displacement of the mass is measured to determine gravitational acceleration. Employing a high temperature superconductor component has the significant advantage of having an operating temperature at or below 77 K, whereby cooling may be accomplished with liquid nitrogen.
Pierpont, D. M.; Hicks, M. T.; Turner, P. L.; Watschke, T. M.
2005-11-01
For the successful commercialization of fuel cell technology, it is imperative that membrane electrode assembly (MEA) durability be understood and quantified. MEA lifetimes of 40,000 hours remain a key target for stationary power applications. Since it is impractical to wait 40,000 hours for durability results, it is critical to learn as much as possible in as short a time as possible to determine whether an MEA sample will survive past its lifetime target. Consequently, 3M has utilized accelerated testing and statistical lifetime modeling tools to develop a methodology for evaluating MEA lifetime. Construction and implementation of a multi-cell test stand have allowed for multiple accelerated tests and stronger statistical data for learning about durability.
New methods for high current fast ion beam production by laser-driven acceleration
Margarone, D.; Krasa, J.; Prokupek, J.; Velyhan, A.; Laska, L.; Jungwirth, K.; Mocek, T.; Korn, G.; Rus, B.; Torrisi, L.; Gammino, S.; Cirrone, P.; Cutroneo, M.; Romano, F.; Picciotto, A.; Serra, E.; Giuffrida, L.; Mangione, A.; Rosinski, M.; Parys, P.; and others
2012-02-15
An overview of the latest experimental campaigns on laser-driven ion acceleration performed at the PALS facility in Prague is given. Both the 2 TW, sub-nanosecond iodine laser system and the 20 TW, femtosecond Ti:sapphire laser, recently installed at PALS, were used in our experiments, performed in the intensity range 10^16-10^19 W/cm^2. The main goal of our studies was to generate high-energy, high-current ion streams at relatively low laser intensities. The discussed experimental investigations show promising results in terms of maximum ion energy and current density, which make laser-accelerated ion beams a candidate for new-generation ion sources to be employed in medicine, nuclear physics, matter physics, and industry.
Hull, John R.
2000-01-01
Gravitational acceleration is measured in all spatial dimensions with improved sensitivity by utilizing a high temperature superconducting (HTS) gravimeter. The HTS gravimeter is comprised of a permanent magnet suspended in a spaced relationship from a high temperature superconductor, and a cantilever having a mass at its free end is connected to the permanent magnet at its fixed end. The permanent magnet and superconductor combine to form a bearing platform with extremely low frictional losses, and the rotational displacement of the mass is measured to determine gravitational acceleration. Employing a high temperature superconductor component has the significant advantage of having an operating temperature at or below 77 K, whereby cooling may be accomplished with liquid nitrogen.
Rubel, Oliver; Prabhat, Mr.; Wu, Kesheng; Childs, Hank; Meredith, Jeremy; Geddes, Cameron G.R.; Cormier-Michel, Estelle; Ahern, Sean; Weber, Gunther H.; Messmer, Peter; Hagen, Hans; Hamann, Bernd; Bethel, E. Wes
2008-08-28
Our work combines and extends techniques from high-performance scientific data management and visualization to enable scientific researchers to gain insight from extremely large, complex, time-varying laser wakefield particle accelerator simulation data. We extend histogram-based parallel coordinates for use in visual information display as well as an interface for guiding and performing data mining operations, which are based upon multi-dimensional and temporal thresholding and data subsetting operations. To achieve very high performance on parallel computing platforms, we leverage FastBit, a state-of-the-art index/query technology, to accelerate data mining and multi-dimensional histogram computation. We show how these techniques are used in practice by scientific researchers to identify, visualize and analyze a particle beam in a large, time-varying dataset.
Dana A. Knoll; H. Park; Kord Smith
2011-02-01
The use of the Jacobian-free Newton-Krylov (JFNK) method within the context of nonlinear diffusion acceleration (NDA) of source iteration is explored. The JFNK method is a synergistic combination of Newton's method as the nonlinear solver and Krylov methods as the linear solver. JFNK methods do not form or store the Jacobian matrix, and Newton's method is executed via probing the nonlinear discrete function to approximate the required matrix-vector products. Current application of NDA relies upon a fixed-point, or Picard, iteration to resolve the nonlinearity. We show that the JFNK method can be used to replace this Picard iteration with a Newton iteration. The Picard linearization is retained as a preconditioner. We show that the resulting JFNK-NDA capability provides benefit in some regimes. Furthermore, we study the effects of a two-grid approach, and the required intergrid transfers when the higher-order transport method is solved on a fine mesh compared to the low-order acceleration problem.
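The core of any JFNK implementation is the matrix-free Jacobian-vector product obtained by probing the nonlinear residual, as the abstract describes; a minimal sketch (the perturbation-size heuristic is one common choice among several):

```python
import math

def jacobian_vector_product(residual, u, v):
    """Approximate J(u)·v ≈ (F(u + eps*v) - F(u)) / eps without ever forming
    or storing the Jacobian matrix, as in JFNK.  residual is the discrete
    nonlinear function F; u and v are plain lists of floats."""
    norm_v = math.sqrt(sum(x * x for x in v))
    if norm_v == 0.0:
        return [0.0] * len(u)
    # A common heuristic perturbation size based on machine precision.
    norm_u = math.sqrt(sum(x * x for x in u))
    eps = math.sqrt(2.22e-16) * (1.0 + norm_u) / norm_v
    f_u = residual(u)
    f_pert = residual([ui + eps * vi for ui, vi in zip(u, v)])
    return [(a - b) / eps for a, b in zip(f_pert, f_u)]
```

A Krylov solver such as GMRES needs only these products, which is what makes the Newton step practical when the Jacobian of the coupled transport/low-order system is too expensive to form.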
Tajima, Toshiki
2006-04-18
A system and method of accelerating ions in an accelerator to optimize the energy produced by a light source. Several parameters may be controlled in constructing a target used in the accelerator system to adjust performance of the accelerator system. These parameters include the material, thickness, geometry and surface of the target.
NASA Technical Reports Server (NTRS)
Johnson, G. M.
1976-01-01
The application of high temperature accelerated test techniques was shown to be an effective method of microcircuit defect screening. Comprehensive microcircuit evaluations and a series of high temperature (473 K to 573 K) life tests demonstrated that a freak or early failure population of surface-contaminated devices could be completely screened in thirty-two hours of test at an ambient temperature of 523 K. Equivalent screening at 398 K, as prescribed by current Military and NASA specifications, would have required in excess of 1,500 hours of test. All testing was accomplished with a Texas Instruments 54L10, a low-power triple 3-input NAND gate manufactured with a titanium-tungsten (Ti-W), gold (Au) metallization system. A number of design and/or manufacturing anomalies were also noted with the Ti-W, Au metallization system. Further study of the exact nature and cause(s) of these anomalies is recommended prior to the use of microcircuits with Ti-W, Au metallization in long-life/high-reliability applications. Photomicrographs of tested circuits are included.
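The time compression reported above (32 hours at 523 K versus more than 1,500 hours at 398 K) is the kind of ratio an Arrhenius acceleration-factor model produces; a sketch, where the activation energy Ea is an assumed input rather than a value taken from the report:

```python
import math

BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def acceleration_factor(t_stress_k, t_use_k, ea_ev):
    """Arrhenius acceleration factor between a stress temperature and a use
    temperature (both in kelvin): how many hours at t_use_k one hour at
    t_stress_k is worth, for a failure mechanism with activation energy ea_ev."""
    return math.exp(ea_ev / BOLTZMANN_EV * (1.0 / t_use_k - 1.0 / t_stress_k))
```

Under this model, an assumed activation energy near 0.55 eV would reproduce the roughly 47x compression implied by the two quoted screen durations.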
Ravi Samtaney
2009-02-10
We present a numerical method to solve the linear stability of impulsively accelerated density interfaces in two dimensions, such as those arising in the Richtmyer-Meshkov instability. The method uses an Eulerian approach and is based on an upwind method to compute the temporally evolving base state and a flux vector splitting method for the perturbations. The method is applicable to either gas dynamics or magnetohydrodynamics. Numerical examples are presented for cases in which a hydrodynamic shock interacts with a single or double density interface, and for a doubly shocked single density interface. Convergence tests show that the method is spatially second-order accurate for smooth flows, and between first- and second-order accurate for flows with shocks.
Injection to Rapid Diffusive Shock Acceleration at Perpendicular Shocks in Partially Ionized Plasmas
NASA Astrophysics Data System (ADS)
Ohira, Yutaka
2016-08-01
We present a three-dimensional hybrid simulation of a collisionless perpendicular shock in a partially ionized plasma for the first time. In this simulation, the shock velocity and upstream ionization fraction are v_sh ≈ 1333 km s^-1 and f_i ∼ 0.5, which are typical values for isolated young supernova remnants (SNRs) in the interstellar medium. We confirm previous two-dimensional simulation results showing that downstream hydrogen atoms leak into the upstream region and are accelerated by the pickup process in the upstream region, and large magnetic field fluctuations are generated both in the upstream and downstream regions. In addition, we find that the magnetic field fluctuations have three-dimensional structures and the leaking hydrogen atoms are injected into the diffusive shock acceleration (DSA) at the perpendicular shock after the pickup process. The observed DSA can be interpreted as shock drift acceleration with scattering. In this simulation, particles are accelerated to v ∼ 100 v_sh ∼ 0.3c within ∼100 gyroperiods. The acceleration timescale is faster than that of DSA in parallel shocks. Our simulation results suggest that SNRs can accelerate cosmic rays to 10^15.5 eV (the knee) during the Sedov phase.
Pollock, B B; Ross, J S; Tynan, G R; Divol, L; Glenzer, S H; Leurent, V; Palastro, J P; Ralph, J E; Froula, D H; Clayton, C E; Marsh, K A; Pak, A E; Wang, T L; Joshi, C
2009-04-24
Laser Wakefield Acceleration (LWFA) experiments have been performed at the Jupiter Laser Facility, Lawrence Livermore National Laboratory. In order to unambiguously determine the output electron beam energy and deflection angle at the plasma exit, we have implemented a two-screen electron spectrometer. This system comprises a dipole magnet followed by two image plates. By measuring the electron beam deviation from the laser axis on each plate, both the energy and deflection angle at the plasma exit are determined through the relativistic equation of motion.
Accelerator-based neutron source for boron neutron capture therapy (BNCT) and method
Yoon, W.Y.; Jones, J.L.; Nigg, D.W.; Harker, Y.D.
1999-05-11
A source for boron neutron capture therapy (BNCT) comprises a body of photoneutron emitter that includes heavy water and is closely surrounded in heat-imparting relationship by target material; one or more electron linear accelerators for supplying electron radiation having energy of substantially 2 to 10 MeV and for impinging such radiation on the target material, whereby photoneutrons are produced and heat is absorbed from the target material by the body of photoneutron emitter. The heavy water is circulated through a cooling arrangement to remove heat. A tank, desirably cylindrical or spherical, contains the heavy water, and a desired number of the electron accelerators circumferentially surround the tank and the target material as preferably made up of thin plates of metallic tungsten. Neutrons generated within the tank are passed through a surrounding region containing neutron filtering and moderating materials and through neutron delimiting structure to produce a beam or beams of epithermal neutrons normally having a minimum flux intensity level of 1.0×10^9 neutrons per square centimeter per second. Such beam or beams of epithermal neutrons are passed through gamma ray attenuating material to provide the required epithermal neutrons for BNCT use. 3 figs.
Accelerator-based neutron source for boron neutron capture therapy (BNCT) and method
Yoon, Woo Y.; Jones, James L.; Nigg, David W.; Harker, Yale D.
1999-01-01
A source for boron neutron capture therapy (BNCT) comprises a body of photoneutron emitter that includes heavy water and is closely surrounded in heat-imparting relationship by target material; one or more electron linear accelerators for supplying electron radiation having energy of substantially 2 to 10 MeV and for impinging such radiation on the target material, whereby photoneutrons are produced and heat is absorbed from the target material by the body of photoneutron emitter. The heavy water is circulated through a cooling arrangement to remove heat. A tank, desirably cylindrical or spherical, contains the heavy water, and a desired number of the electron accelerators circumferentially surround the tank and the target material as preferably made up of thin plates of metallic tungsten. Neutrons generated within the tank are passed through a surrounding region containing neutron filtering and moderating materials and through neutron delimiting structure to produce a beam or beams of epithermal neutrons normally having a minimum flux intensity level of 1.0×10^9 neutrons per square centimeter per second. Such beam or beams of epithermal neutrons are passed through gamma ray attenuating material to provide the required epithermal neutrons for BNCT use.
Toward automatic detection of vessel stenoses in cerebral 3D DSA volumes
NASA Astrophysics Data System (ADS)
Mualla, F.; Pruemmer, M.; Hahn, D.; Hornegger, J.
2012-05-01
Vessel diseases are a very common cause of permanent organ damage, disability, and death. This fact necessitates further research into extracting meaningful and reliable medical information from 3D DSA volumes. Murray's law states that at each branch point of a lumen-based system, the sum of the minor branch diameters, each raised to the power x, equals the main branch diameter raised to the power x. The principle of minimum work and other factors, like the vessel type, impose typical values for the junction exponent x. Therefore, deviations from these typical values may signal pathological cases. In this paper, we state the necessary and sufficient conditions for the existence and uniqueness of the solution for x. The second contribution is a scale- and orientation-independent set of features for stenosis classification. A support vector machine classifier was trained in the space of these features. Only one branch was misclassified in a cross-validation on 23 branches. The two contributions fit into a pipeline for the automatic detection of cerebral vessel stenoses.
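Given measured branch diameters, the junction exponent x in Murray's relation can be found numerically; a bisection sketch, where the bracket endpoints and tolerance are arbitrary choices and the existence/uniqueness conditions the paper derives must hold for the bracket to contain a root:

```python
def junction_exponent(d_main, d_minors, lo=0.5, hi=10.0, tol=1e-10):
    """Solve sum(d_i^x) = d_main^x for the junction exponent x by bisection
    on g(x) = sum(d_i^x) - d_main^x, assuming g changes sign on [lo, hi]."""
    def g(x):
        return sum(d ** x for d in d_minors) - d_main ** x
    if g(lo) * g(hi) > 0.0:
        raise ValueError("no sign change on the bracket; conditions not met")
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0.0:
            hi = mid  # root lies in the lower half
        else:
            lo = mid  # root lies in the upper half
    return 0.5 * (lo + hi)
```

For two equal minor branches with d_main = 2^(1/3) * d_minor the solver recovers x = 3, the classical Murray exponent for minimum-work flow.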
NASA Astrophysics Data System (ADS)
Rosa, Massimiliano
We have derived expressions for the elements of the matrix representing a certain angular (SN) and spatial discretized form of the neutron integral transport operator. This is the transport operator that if directly inverted on the once-collided fixed particle source produces, without the need for an iterative procedure, the converged limit of the scalar fluxes for the iterative procedure. The asymptotic properties of this operator's elements have then been investigated in homogeneous and periodically heterogeneous limits in one-dimensional and two-dimensional geometries. The thesis covers the results obtained from this asymptotic study of the matrix structure of the discrete integral transport operator and illustrates how they relate to the iterative acceleration of neutral particle transport methods. Specifically, it will be shown that in one-dimensional problems (both homogeneous and periodically heterogeneous) and homogeneous two-dimensional problems, containing optically thick cells, the discrete integral transport operator acquires a sparse matrix structure, implying a strong local coupling of a cell-averaged scalar flux only with its nearest Cartesian neighbors. These results provide further insight into the excellent convergence properties of diffusion-based acceleration schemes for this broad class of transport problems. In contrast, the results of the asymptotic analysis for two-dimensional periodically heterogeneous problems point to a sparse but non-local matrix structure due to long-range coupling of a cell's average flux with its neighboring cells, independent of the distance between the cells in the spatial mesh. The latter results indicate that cross-derivative coupling, namely coupling of a cell's average flux to its diagonal neighbors, is of the same order as self-coupling and coupling with its first Cartesian neighbors. Hence they substantiate the conjecture that the loss of robustness of diffusion-based acceleration schemes, in particular of the
2D models of gas flow and ice grain acceleration in Enceladus' vents using DSMC methods
NASA Astrophysics Data System (ADS)
Tucker, Orenthal J.; Combi, Michael R.; Tenishev, Valeriy M.
2015-09-01
The gas distribution of the Enceladus water vapor plume and the terminal speeds of ejected ice grains are physically linked to its subsurface fissures and vents. It is estimated that the gas exits the fissures with speeds of ∼300-1000 m/s, while the micron-sized grains are ejected with speeds comparable to the escape speed (Schmidt, J. et al. [2008]. Nature 451, 685-688). We investigated the effects of isolated axisymmetric vent geometries on subsurface gas distributions, and in turn, the effects of gas drag on grain acceleration. Subsurface gas flows were modeled using a collision-limiter Direct Simulation Monte Carlo (DSMC) technique in order to consider a broad range of flow regimes (Bird, G. [1994]. Molecular Gas Dynamics and the Direct Simulation of Gas Flows. Oxford University Press, Oxford; Titov, E.V. et al. [2008]. J. Propul. Power 24(2), 311-321). The resulting DSMC gas distributions were used to determine the drag force for the integration of ice grain trajectories in a test particle model. Simulations were performed for diffuse flows in wide channels (Reynolds number ∼10-250) and dense flows in narrow tubular channels (Reynolds number ∼10^6). We compared gas properties like bulk speed and temperature, and the terminal grain speeds obtained at the vent exit, with values inferred for the plume from Cassini data. In the simulations of wide fissures with dimensions similar to those of the Tiger Stripes, the resulting subsurface gas densities of ∼10^14-10^20 m^-3 were not sufficient to accelerate even micron-sized ice grains to the Enceladus escape speed. In the simulations of narrow tubular vents with radii of ∼10 m, the much denser flows with number densities of 10^21-10^23 m^-3 accelerated micron-sized grains to bulk gas speeds of ∼600 m/s. Further investigations are required to understand the complex relationship between the vent geometry, gas source rate, and the sizes and speeds of ejected grains.
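The grain-acceleration stage can be sketched as a test-particle integration under free-molecular gas drag. This is an illustrative toy, not the paper's DSMC coupling: the drag law, grain density, drag coefficient, and gas mass densities below are assumed values.

```python
import math

def grain_speed(rho_gas, u_gas, r_grain, rho_grain=920.0, C_D=2.0,
                dt=1e-4, t_end=1.0):
    """Integrate a grain's speed under a simple free-molecular drag law,
    F_drag = 0.5 * C_D * rho_gas * (u_gas - v)**2 * A,
    for a spherical ice grain starting at rest (toy model)."""
    A = math.pi * r_grain ** 2                           # cross-section [m^2]
    m = (4.0 / 3.0) * math.pi * r_grain ** 3 * rho_grain  # grain mass [kg]
    v, t = 0.0, 0.0
    while t < t_end:
        a = 0.5 * C_D * rho_gas * (u_gas - v) ** 2 * A / m
        v += a * dt
        t += dt
    return v
```

With a gas mass density corresponding to the dense tubular-vent case, a micron-sized grain approaches the bulk gas speed within a second of flow time, while the diffuse wide-fissure case barely moves it, in qualitative agreement with the abstract.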
Estill, C F; MacDonald, L A; Wenzl, T B; Petersen, M R
2000-09-01
Ergonomists need easy-to-use, quantitative job evaluation methods to assess risk factors for upper extremity work-related musculoskeletal disorders in field-based epidemiology studies. One device that may provide an objective measure of exposure to arm acceleration is a wrist-worn accelerometer or activity monitor. A field trial was conducted to evaluate the performance of a single-axis accelerometer using an industrial population (n=158) known to have diverse upper limb motion characteristics. The second phase of the field trial involved an examination of the relationship between more traditional observation-based ergonomic exposure measures and the monitor output among a group of assembly-line production employees (n=48) performing work tasks with highly stereotypic upper limb motion patterns. As expected, the linear acceleration data obtained from the activity monitor showed statistically significant differences between three occupational groups known observationally to have different upper limb motion requirements. Among the assembly-line production employees who performed different short-cycle assembly work tasks, statistically significant differences were also observed. Several observation-based ergonomic exposure measures were found to explain differences in the acceleration measure among the production employees who performed different jobs: hand and arm motion speed, use of the hand as a hammer, and, negatively, resisting forearm rotation from the torque of a power tool. The activity monitors were found to be easy to use and non-intrusive, and to be able to distinguish arm acceleration among groups with diverse upper limb motion characteristics as well as between different assembly job tasks where arm motions were performed repeatedly at a fixed rate.
NASA Technical Reports Server (NTRS)
Alexander, J. Iwan; Ouazzani, Jalil
1989-01-01
The problem of determining the sensitivity of Bridgman-Stockbarger directional solidification experiments to residual accelerations of the type associated with spacecraft in low earth orbit is analyzed numerically using a pseudo-spectral collocation method. The approach employs a novel iterative scheme combining the method of artificial compressibility and a generalized ADI method. The results emphasize the importance of the consideration of residual accelerations and careful selection of the operating conditions in order to take full advantage of the low gravity conditions.
Evaluation of Dynamic Mechanical Loading as an Accelerated Test Method for Ribbon Fatigue: Preprint
Bosco, N.; Silverman, T. J.; Wohlgemuth, J.; Kurtz, S.; Inoue, M.; Sakurai, K.; Shinoda, T.; Zenkoh, H.; Hirota, K.; Miyashita, M.; Tadanori, T.; Suzuki, S.
2015-04-07
Dynamic Mechanical Loading (DML) of photovoltaic modules is explored as a route to quickly fatigue copper interconnect ribbons. Results indicate that most of the interconnect ribbons may be strained through module mechanical loading to a level that will result in failure in a few hundred to thousands of cycles. Considering the speed at which DML may be applied, this translates into a few hours of testing. To evaluate the equivalence of DML to thermal cycling, parallel tests were conducted with thermal cycling. Preliminary analysis suggests that one +/-1 kPa DML cycle is roughly equivalent to one standard accelerated thermal cycle and approximately 175 of these cycles are equivalent to a 25-year exposure in Golden, Colorado, for the mechanism of module ribbon fatigue.
Neutron source, linear-accelerator fuel enricher and regenerator and associated methods
Steinberg, Meyer; Powell, James R.; Takahashi, Hiroshi; Grand, Pierre; Kouts, Herbert
1982-01-01
A device for producing fissile material inside of fabricated nuclear elements so that they can be used to produce power in nuclear power reactors. Fuel elements, for example of a LWR, are placed in pressure tubes in a vessel surrounding a liquid lead-bismuth flowing columnar target. A linear-accelerator proton beam enters the side of the vessel and impinges on the dispersed liquid lead-bismuth columns and produces neutrons which radiate through the surrounding pressure tube assembly or blanket containing the nuclear fuel elements. These neutrons are absorbed by the natural fertile uranium-238 elements, which are transformed into fissile plutonium-239. The fertile fuel is thus enriched in fissile material to a concentration at which it can be used in power reactors. After use in the power reactors, the depleted fuel elements can be reinserted into the pressure tubes surrounding the target and the nuclear fuel regenerated for further burning in the power reactor.
Evaluation of Dynamic Mechanical Loading as an Accelerated Test Method for Ribbon Fatigue
Bosco, Nick; Silverman, Timothy J.; Wohlgemuth, John; Kurtz, Sarah; Inoue, Masanao; Sakurai, Keiichiro; Shioda, Tsuyoshi; Zenkoh, Hirofumi; Hirota, Kusato; Miyashita, Masanori; Tadanori, Tanahashi; Suzuki, Soh; Chen, Yifeng; Verlinden, Pierre J.
2014-12-31
Dynamic Mechanical Loading (DML) of photovoltaic modules is explored as a route to quickly fatigue copper interconnect ribbons. Results indicate that most of the interconnect ribbons may be strained through module mechanical loading to a level that will result in failure in a few hundred to thousands of cycles. Considering the speed at which DML may be applied, this translates into a few hours of testing. To evaluate the equivalence of DML to thermal cycling, parallel tests were conducted with thermal cycling. Preliminary analysis suggests that one +/-1 kPa DML cycle is roughly equivalent to one standard accelerated thermal cycle and approximately 175 of these cycles are equivalent to a 25-year exposure in Golden, Colorado, for the mechanism of module ribbon fatigue.
A simplified spherical harmonic method for coupled electron-photon transport calculations
Josef, J.A.
1996-12-01
In this thesis we have developed a simplified spherical harmonic method (SP{sub N} method) and associated efficient solution techniques for 2-D multigroup electron-photon transport calculations. The SP{sub N} method has never before been applied to charged-particle transport. We have performed, for the first time, a Fourier analysis of the source iteration scheme and the P{sub 1} diffusion synthetic acceleration (DSA) scheme applied to the 2-D SP{sub N} equations. Our theoretical analyses indicate that the source iteration and P{sub 1} DSA schemes are as effective for the 2-D SP{sub N} equations as for the 1-D S{sub N} equations. Previous analyses have indicated that the P{sub 1} DSA scheme is unstable (with sufficiently forward-peaked scattering and sufficiently small absorption) for the 2-D S{sub N} equations, yet is very effective for the 1-D S{sub N} equations. In addition, we have applied an angular multigrid acceleration scheme, and computationally demonstrated that it performs as well for the 2-D SP{sub N} equations as for the 1-D S{sub N} equations. It has previously been shown for 1-D S{sub N} calculations that this scheme is much more effective than the DSA scheme when scattering is highly forward-peaked. We have investigated the applicability of the SP{sub N} approximation to two different physical classes of problems: satellite electronics shielding from geomagnetically trapped electrons, and electron beam problems. In the space shielding study, the SP{sub N} method produced solutions that are accurate within 10% of the benchmark Monte Carlo solutions, and often orders of magnitude faster than Monte Carlo. We have successfully modeled quasi-void problems and have obtained excellent agreement with Monte Carlo. We have observed that the SP{sub N} method appears to be too diffusive an approximation for beam problems. This result, however, is in agreement with theoretical expectations.
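The motivation for DSA schemes like the one analyzed here can be seen in a scalar toy model of unaccelerated source iteration: in an infinite homogeneous medium the iteration error contracts by exactly the scattering ratio c per sweep, so convergence stalls as c approaches 1. The scalar fixed point below is a deliberate simplification, not the thesis's discretization:

```python
def source_iteration(c, q=1.0, tol=1e-12, max_iter=100000):
    """Infinite-medium analog of unaccelerated source iteration:
    phi_{k+1} = c * phi_k + q, with fixed point phi* = q / (1 - c).
    The error shrinks by exactly c per sweep (spectral radius = c),
    which is why acceleration is needed when c is close to 1."""
    phi, errs = 0.0, []
    phi_star = q / (1.0 - c)
    for _ in range(max_iter):
        phi = c * phi + q
        errs.append(abs(phi - phi_star))
        if errs[-1] < tol:
            break
    return phi, errs
```

For c = 0.99 the iteration needs thousands of sweeps to converge; successive errors confirm the contraction factor is c itself.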
NASA Astrophysics Data System (ADS)
Czarski, Tomasz; Romaniuk, Ryszard S.; Pozniak, Krzysztof T.; Simrock, Stefan
2004-07-01
The cavity control system for the TESLA (TeV-Energy Superconducting Linear Accelerator) project is introduced in this paper. FPGA (Field Programmable Gate Array) technology has been implemented in the digital controller stabilizing the cavity field gradient. The cavity SIMULINK model has been applied to test the hardware controller. A step-operation method has been developed for testing the FPGA device coupled to the SIMULINK model of the analog real plant. The FPGA signal processing has been verified against the required algorithm of the reference MATLAB controller. Some experimental results are presented for different cavity operational conditions.
Quan, Guotao; Gong, Hui; Deng, Yong; Fu, Jianwei; Luo, Qingming
2011-02-01
High-speed fluorescence molecular tomography (FMT) reconstruction for 3-D heterogeneous media is still one of the most challenging problems in diffusive optical fluorescence imaging. In this paper, we propose a fast FMT reconstruction method that is based on Monte Carlo (MC) simulation and accelerated by a cluster of graphics processing units (GPUs). Based on the Message Passing Interface standard, we modified the MC code for fast FMT reconstruction, and different Green's functions representing the flux distribution in media are calculated simultaneously by different GPUs in the cluster. A load-balancing method was also developed to increase the computational efficiency. By applying the Fréchet derivative, a Jacobian matrix is formed to reconstruct the distribution of the fluorochromes using the calculated Green's functions. Phantom experiments have shown that only 10 min are required to get reconstruction results with a cluster of 6 GPUs, rather than 6 h with a cluster of multiple dual-Opteron CPU nodes. Because of the advantages of high accuracy and suitability for 3-D heterogeneous media with refractive-index-mismatched boundaries from the MC simulation, the GPU cluster-accelerated method provides a reliable approach to high-speed reconstruction for FMT imaging.
NASA Astrophysics Data System (ADS)
Nouizi, F.; Erkol, H.; Luk, A.; Marks, M.; Unlu, M. B.; Gulsen, G.
2016-10-01
We previously introduced photo-magnetic imaging (PMI), an imaging technique that illuminates the medium under investigation with near-infrared light and measures the induced temperature increase using magnetic resonance thermometry (MRT). Using a multiphysics solver combining photon migration and heat diffusion, PMI models the spatiotemporal distribution of temperature variation and recovers high resolution optical absorption images using these temperature maps. In this paper, we present a new fast non-iterative reconstruction algorithm for PMI. This new algorithm uses analytic methods during the resolution of the forward problem and the assembly of the sensitivity matrix. We validate our new analytic-based algorithm with the first generation finite element method (FEM) based reconstruction algorithm previously developed by our team. The validation is performed using, first synthetic data and afterwards, real MRT measured temperature maps. Our new method accelerates the reconstruction process 30-fold when compared to a single iteration of the FEM-based algorithm.
Otsuka, Takao; Okimoto, Noriaki; Taiji, Makoto
2015-11-15
In the field of drug discovery, it is important to accurately predict the binding affinities between target proteins and drug applicant molecules. Many of the computational methods available for evaluating binding affinities have adopted molecular mechanics-based force fields, although they cannot fully describe protein-ligand interactions. A noteworthy computational method in development involves large-scale electronic structure calculations. The fragment molecular orbital (FMO) method, one such large-scale calculation technique, is applied in this study to calculate the binding energies between proteins and ligands. By testing the effects of specific FMO calculation conditions (including fragmentation size, basis sets, electron correlation, exchange-correlation functionals, and solvation effects) on the binding energies of complexes of the FK506-binding protein with 10 ligands, we have found that the standard FMO calculation condition, FMO2-MP2/6-31G(d), is suitable for evaluating protein-ligand interactions. The correlation coefficient between the binding energies calculated with this FMO calculation condition and experimental values is determined to be R = 0.77. Based on these results, we also propose a practical scheme for predicting binding affinities by combining the FMO method with the quantitative structure-activity relationship (QSAR) model. The results of this combined method can be directly compared with experimental binding affinities. The FMO and QSAR combined scheme shows a higher correlation with experimental data (R = 0.91). Furthermore, we propose an acceleration scheme for the binding energy calculations using a multilayer FMO method focusing on the protein-ligand interaction distance. Our acceleration scheme, which uses FMO2-HF/STO-3G:MP2/6-31G(d) at R(int) = 7.0 Å, reduces computational costs while maintaining accuracy in the evaluation of binding energy.
Sawall, Mathias; Kubis, Christoph; Börner, Armin; Selent, Detlef; Neymeyr, Klaus
2015-09-01
Modern computerized spectroscopic instrumentation can result in high volumes of spectroscopic data. Such accurate measurements raise special computational challenges for multivariate curve resolution techniques, since pure component factorizations are often solved via constrained minimization problems. The computational costs for these calculations rapidly grow with an increased time or frequency resolution of the spectral measurements. The key idea of this paper is to define for the given high-dimensional spectroscopic data a sequence of coarsened subproblems with reduced resolutions. The multiresolution algorithm first computes a pure component factorization for the coarsest problem with the lowest resolution. Then the factorization results are used as initial values for the next problem with a higher resolution. Good initial values result in a fast solution on the next refined level. This procedure is repeated and finally a factorization is determined for the highest level of resolution. The described multiresolution approach allows considerable convergence acceleration. The computational procedure is analyzed and tested for experimental spectroscopic data from the rhodium-catalyzed hydroformylation together with various soft and hard models. PMID:26388368
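The coarse-to-fine strategy described above can be sketched with a stand-in factorization: plain nonnegative matrix factorization via multiplicative updates plays the role of the pure-component factorization, and each coarser level halves the spectral resolution by averaging neighboring channels. All names, sizes, and update rules are illustrative, not the authors' curve resolution algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

def nmf(V, k, H0=None, iters=200):
    """Pure-component-style factorization V ~= W @ H by the classic
    Lee-Seung multiplicative updates (nonnegativity preserved)."""
    m, n = V.shape
    W = rng.random((m, k)) + 0.1
    H = H0 if H0 is not None else rng.random((k, n)) + 0.1
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-12)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-12)
    return W, H

def multires_nmf(V, k, levels=2, iters=200):
    """Solve a coarsened factorization first, then use its result as the
    initial value for the next, finer resolution level."""
    if levels == 0 or V.shape[1] < 2 * k:
        return nmf(V, k, iters=iters)
    Vc = 0.5 * (V[:, ::2] + V[:, 1::2])   # halve the spectral resolution
    _, Hc = multires_nmf(Vc, k, levels - 1, iters)
    H0 = np.repeat(Hc, 2, axis=1)         # upsample the coarse solution
    return nmf(V, k, H0=H0, iters=iters)
```

Each refined level starts close to a good factorization, which is the source of the convergence acceleration the abstract describes.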
Ultrasensitive detection method for primordial nuclides in copper with Accelerator Mass Spectrometry
NASA Astrophysics Data System (ADS)
Famulok, N.; Faestermann, T.; Fimiani, L.; Gómez-Guzmán, J. M.; Hain, K.; Korschinek, G.; Ludwig, P.; Schönert, S.
2015-10-01
The sensitivity of rare event physics experiments like neutrino or direct dark matter detection crucially depends on the background level. A significant background contribution originates from the primordial actinides thorium (Th) and uranium (U) and the progenies of their decay chains. The applicability of ultra-sensitive Accelerator Mass Spectrometry (AMS) for the direct detection of Th and U impurities in three copper samples is evaluated. Although AMS has been proven to reach outstanding sensitivities for long-lived isotopes, this technique has only very rarely been used to detect ultra-low concentrations of primordial actinides. Here it is utilized for the first time to detect primordial Th and U in ultra-pure copper serving as shielding material in low level detectors. The lowest concentrations achieved were (1.5 ± 0.6)·10^-11 g/g for Th and (8 ± 4)·10^-14 g/g for U, which corresponds to (59 ± 24) and (1.0 ± 0.5) μBq/kg, respectively.
Optimal control with accelerated convergence: Combining the Krotov and quasi-Newton methods
Eitan, Reuven; Mundt, Michael; Tannor, David J.
2011-05-15
One of the most popular methods for solving numerical optimal control problems is the Krotov method, adapted for quantum control by Tannor and coworkers. The Krotov method has the following three appealing properties: (1) monotonic increase of the objective with iteration number, (2) no requirement for a line search, leading to a significant savings over gradient (first-order) methods, and (3) macrosteps at each iteration, resulting in significantly faster growth of the objective at early iterations than in gradient methods where small steps are required. The principal drawback of the Krotov method is slow convergence at later iterations, which is particularly problematic when high fidelity is desired. We show here that, near convergence, the Krotov method degenerates to a first-order gradient method. We then present a variation on the Krotov method that has all the advantages of the original Krotov method but with significantly enhanced convergence (second-order or quasi-Newton) as the optimal solution is approached. We illustrate the method by controlling the three-dimensional dynamics of the valence electron in the Na atom.
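The idea of switching methods near convergence can be sketched on a generic smooth objective: take cheap first-order steps while far from the optimum, then hand off to a quasi-Newton method (BFGS) once the gradient is small. This is only an illustration of the switching strategy, not the Krotov update itself, which is specific to optimal control; the step size and switching threshold are arbitrary choices.

```python
import numpy as np
from scipy.optimize import minimize

def hybrid_minimize(f, grad, x0, switch_tol=1e-2):
    """Gradient descent far from the optimum, then switch to BFGS
    (quasi-Newton) once ||grad|| < switch_tol, mirroring the strategy of
    accelerating final convergence with second-order information."""
    x = np.asarray(x0, dtype=float)
    for _ in range(1000):
        g = grad(x)
        if np.linalg.norm(g) < switch_tol:
            break
        x = x - 0.1 * g            # cheap fixed-step first-order phase
    return minimize(f, x, jac=grad, method="BFGS").x
```

On a simple quadratic, the first-order phase does the coarse work and BFGS finishes to high precision in a few iterations.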
Vay, J.-L.; Geddes, C.G.R.; Cormier-Michel, E.; Grote, D.P.
2011-07-01
Modeling of laser-plasma wakefield accelerators in an optimal frame of reference has been shown to produce orders of magnitude speed-up of calculations from first principles. Obtaining these speedups required mitigation of a high-frequency instability that otherwise limits effectiveness. In this paper, methods are presented that mitigate the observed instability, including an electromagnetic solver with tunable coefficients, its extension to accommodate Perfectly Matched Layers and Friedman's damping algorithms, as well as an efficient large-bandwidth digital filter. It is observed that choosing the frame of the wake as the frame of reference allows for higher levels of filtering or damping than is possible in other frames for the same accuracy. Detailed testing also revealed the existence of a singular time step at which the instability level is minimized, independently of numerical dispersion. A combination of the techniques presented in this paper proves to be very efficient at controlling the instability, allowing for efficient direct modeling of 10 GeV class laser plasma accelerator stages. The methods developed in this paper may have broader application, to other Lorentz-boosted simulations and Particle-In-Cell simulations in general.
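Of the mitigation techniques listed, digital filtering is the simplest to illustrate. The standard three-point binomial ("1-2-1") smoother below exactly removes the grid-Nyquist mode, which is where such high-frequency numerical instabilities live, while leaving smooth fields untouched. The paper's tunable-coefficient solver and large-bandwidth filter are more elaborate; this shows only the basic mechanism:

```python
import numpy as np

def binomial_filter(f, passes=1):
    """Three-point binomial (1-2-1)/4 smoother with periodic boundaries.
    One pass annihilates the Nyquist mode f_i = (-1)^i exactly and
    preserves constant fields; repeated passes narrow the passband."""
    for _ in range(passes):
        f = 0.25 * np.roll(f, 1) + 0.5 * f + 0.25 * np.roll(f, -1)
    return f
```

A constant array passes through unchanged, while the alternating-sign (Nyquist) array is zeroed in a single pass.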
Ellis, P.F. II; Ferguson, A.F.
1995-04-19
In 1992, the Air-Conditioning and Refrigeration Technology Institute, Inc. (ARTI) contracted Radian Corporation to ascertain whether an improved accelerated test method or procedure could be developed that would allow prediction of the life of motor insulation materials used in hermetic motors for air-conditioning and refrigeration equipment operated with alternative refrigerant/lubricant mixtures. Phase 1 of the project, Conceptual Design of an accelerated test method and apparatus, was successfully completed in June 1993. The culmination of that effort was the concept of the Simulated Stator Unit (SSU) test. The objective of the Phase 2 limited proof-of-concept demonstration was to: answer specific engineering/design questions; design and construct an analog control sequencer and supporting apparatus; and conduct limited tests to determine the viability of the SSU test concept. This report reviews the SSU test concept, and describes the results through the conclusion of the proof-of-concept prototype tests in March 1995. The technical design issues inherent in transforming any conceptual design to working equipment have been resolved, and two test systems and controllers have been constructed. Pilot tests and three prototype tests have been completed, concluding the current phase of work. One prototype unit was tested without thermal stress loads. Twice daily insulation property measurements (IPMs) on this unit demonstrated that the insulation property measurements themselves did not degrade the SSU.
NASA Astrophysics Data System (ADS)
Ellis, P. F., II; Ferguson, A. F.
1995-04-01
In 1992, the Air-Conditioning and Refrigeration Technology Institute, Inc. (ARTI) contracted Radian Corporation to ascertain whether an improved accelerated test method or procedure could be developed that would allow prediction of the life of motor insulation materials used in hermetic motors for air-conditioning and refrigeration equipment operated with alternative refrigerant/lubricant mixtures. Phase 1 of the project, Conceptual Design of an accelerated test method and apparatus, was successfully completed in June 1993. The culmination of that effort was the concept of the Simulated Stator Unit (SSU) test. The objective of the Phase 2 limited proof-of-concept demonstration was to: answer specific engineering/design questions; design and construct an analog control sequencer and supporting apparatus; and conduct limited tests to determine the viability of the SSU test concept. This report reviews the SSU test concept, and describes the results through the conclusion of the proof-of-concept prototype tests in March 1995. The technical design issues inherent in transforming any conceptual design to working equipment have been resolved, and two test systems and controllers have been constructed. Pilot tests and three prototype tests have been completed, concluding the current phase of work. One prototype unit was tested without thermal stress loads. Twice daily insulation property measurements (IPMs) on this unit demonstrated that the insulation property measurements themselves did not degrade the SSU.
NASA Astrophysics Data System (ADS)
Chen, Qi; Chen, Quan; Luo, Xiaobing
2014-09-01
In recent years, due to the fast development of high-power light-emitting diodes (LEDs), lifetime prediction and assessment have become a crucial issue. Although in situ measurement has been widely used for reliability testing in the laser diode community, it has not been applied commonly in the LED community. In this paper, an online testing method for LED life projection under accelerated reliability test was proposed and a prototype was built. The optical parametric data were collected. The systematic error and the measuring uncertainty were calculated to be within 0.2% and within 2%, respectively. With this online testing method, experimental data can be acquired continuously and a sufficient amount of data can be gathered. Thus, the projection fitting accuracy can be improved (r^2 = 0.954) and the testing duration can be shortened.
Hu, Yu-Jen; Chow, Kuan-Chih; Liu, Ching-Chuan; Lin, Li-Jen; Wang, Sheng-Cheng; Wang, Shulhn-Der
2015-08-01
The standard World Health Organization procedure for vaccine development has provided a guideline for influenza viruses, but no systematic operational model. We recently designed a systemic analysis method to evaluate annual perspective sequence changes of influenza virus strains. We applied dnaml of PHYLIP 3.69, developed by Joseph Felsenstein of Washington University, and ClustalX2, developed by Larkin et al., for calculating, comparing, and localizing the most plausible vaccine epitopes. This study identified the changes in biological sequences and associated alignment alterations, which would ultimately affect epitope structures, as well as the plausible hidden features to search for the most conserved and effective epitopes for vaccine development. Adding our newly designed systemic analysis method to supplement the WHO guidelines could accelerate the development of urgently needed vaccines that might concurrently combat several strains of viruses within a shorter period. PMID:26044364
On the equivalence of LIST and DIIS methods for convergence acceleration
Garza, Alejandro J.; Scuseria, Gustavo E.
2015-04-28
Self-consistent field extrapolation methods play a pivotal role in quantum chemistry and electronic structure theory. We, here, demonstrate the mathematical equivalence between the recently proposed family of LIST methods [Wang et al., J. Chem. Phys. 134, 241103 (2011); Y. K. Chen and Y. A. Wang, J. Chem. Theory Comput. 7, 3045 (2011)] and the general form of Pulay’s DIIS [Chem. Phys. Lett. 73, 393 (1980); J. Comput. Chem. 3, 556 (1982)] with specific error vectors. Our results also explain the differences in performance among the various LIST methods.
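The general form of Pulay's DIIS referred to above can be sketched for an arbitrary fixed-point map g: store recent iterates and error vectors e_i = g(x_i) - x_i, find the affine combination of errors with minimal norm, and extrapolate. The history size and the linear test map in the usage note are illustrative choices, not the paper's notation:

```python
import numpy as np

def diis_solve(g, x0, max_hist=5, tol=1e-10, max_iter=50):
    """Pulay's DIIS: minimize ||sum_i c_i e_i|| subject to sum_i c_i = 1
    (bordered Gram system), then extrapolate x <- sum_i c_i (x_i + e_i)."""
    x = np.asarray(x0, dtype=float)
    xs, es = [], []
    for _ in range(max_iter):
        e = g(x) - x
        if np.linalg.norm(e) < tol:
            return x
        xs.append(x); es.append(e)
        xs, es = xs[-max_hist:], es[-max_hist:]
        n = len(es)
        B = np.zeros((n + 1, n + 1))        # bordered Gram matrix
        for i in range(n):
            for j in range(n):
                B[i, j] = es[i] @ es[j]
        B[:n, n] = B[n, :n] = -1.0
        rhs = np.zeros(n + 1); rhs[n] = -1.0
        c = np.linalg.lstsq(B, rhs, rcond=None)[0][:n]
        x = sum(ci * (xi + ei) for ci, xi, ei in zip(c, xs, es))
    return x
```

For a linear contraction g(x) = Ax + b this recovers the fixed point (I - A)^{-1} b in a handful of iterations, far faster than plain iteration.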
Acceleration of k-Eigenvalue / Criticality Calculations using the Jacobian-Free Newton-Krylov Method
Dana Knoll; HyeongKae Park; Chris Newman
2011-02-01
We present a new approach for the $k$--eigenvalue problem using a combination of classical power iteration and the Jacobian--free Newton--Krylov method (JFNK). The method poses the $k$--eigenvalue problem as a fully coupled nonlinear system, which is solved by JFNK with an effective block preconditioning consisting of the power iteration and algebraic multigrid. We demonstrate effectiveness and algorithmic scalability of the method on a 1-D, one-group problem and two 2-D, two-group problems and provide comparison to other efforts using similar algorithmic approaches.
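A miniature dense analog of this coupled formulation can be written with SciPy's Jacobian-free Newton-Krylov solver: the eigenproblem A phi = (1/k) F phi plus a flux-normalization constraint is posed as one nonlinear system in (phi, k). The 2x2 operators and the constraint are illustrative assumptions; the actual method wraps transport operators and uses power-iteration/multigrid preconditioning instead:

```python
import numpy as np
from scipy.optimize import newton_krylov

# Toy two-group operators: A = loss/downscatter, F = fission production.
A = np.array([[1.0, 0.0], [-0.5, 2.0]])
F = np.array([[0.6, 0.9], [0.0, 0.0]])

def residual(u):
    """k-eigenvalue problem as a fully coupled nonlinear system in (phi, k):
    A phi - (1/k) F phi = 0, plus the constraint 0.5 (phi.phi - 1) = 0."""
    phi, k = u[:-1], u[-1]
    r = A @ phi - (F @ phi) / k
    return np.append(r, 0.5 * (phi @ phi - 1.0))

u = newton_krylov(residual, np.array([1.0, 1.0, 1.0]),
                  f_tol=1e-10, maxiter=100)
phi, k = u[:-1], u[-1]
```

For these matrices the dominant eigenvalue of A^{-1}F is 0.825 with eigenvector proportional to (4, 1), which the solver recovers together with the normalization.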
Linear-scaling multipole-accelerated Gaussian and finite-element Coulomb method
NASA Astrophysics Data System (ADS)
Watson, Mark A.; Kurashige, Yuki; Nakajima, Takahito; Hirao, Kimihiko
2008-02-01
A linear-scaling implementation of the Gaussian and finite-element Coulomb (GFC) method is presented for the rapid computation of the electronic Coulomb potential. The current work utilizes the fast multipole method (FMM) for the evaluation of the Poisson equation boundary condition. The FMM affords significant savings for small- and medium-sized systems and overcomes the bottleneck in the GFC method for very large systems. Compared to an exact analytical treatment of the boundary, more than 100-fold speedups are observed for systems with more than 1000 basis functions without any significant loss of accuracy. We present CPU times to demonstrate the effectiveness of the linear-scaling GFC method for both one-dimensional polyalanine chains and the challenging case of three-dimensional diamond fragments.
Can Accelerators Accelerate Learning?
NASA Astrophysics Data System (ADS)
Santos, A. C. F.; Fonseca, P.; Coelho, L. F. S.
2009-03-01
The 'Young Talented' education program developed by the Brazilian State Funding Agency (FAPERJ) [1] makes it possible for high-school students from public high schools to perform activities in scientific laboratories. In the Atomic and Molecular Physics Laboratory at the Federal University of Rio de Janeiro (UFRJ), the students are confronted with modern research tools like the 1.7 MV ion accelerator. Being a user-friendly machine, the accelerator is easily manageable by the students, who can perform simple hands-on activities, stimulating interest in physics and getting the students close to modern laboratory techniques.
Can Accelerators Accelerate Learning?
Santos, A. C. F.; Fonseca, P.; Coelho, L. F. S.
2009-03-10
The 'Young Talented' education program developed by the Brazilian State Funding Agency (FAPERJ) [1] makes it possible for high-school students from public high schools to perform activities in scientific laboratories. In the Atomic and Molecular Physics Laboratory at the Federal University of Rio de Janeiro (UFRJ), the students are confronted with modern research tools like the 1.7 MV ion accelerator. Being a user-friendly machine, the accelerator is easily manageable by the students, who can perform simple hands-on activities, stimulating interest in physics and getting the students close to modern laboratory techniques.
Accelerating the weighted histogram analysis method by direct inversion in the iterative subspace
Zhang, Cheng; Lai, Chun-Liang; Pettitt, B. Montgomery
2016-01-01
The weighted histogram analysis method (WHAM) for free energy calculations is a valuable tool to produce free energy differences with minimal errors. Given multiple simulations, WHAM obtains from the distribution overlaps the optimal statistical estimator of the density of states, from which the free energy differences can be computed. The WHAM equations are often solved by an iterative procedure. In this work, we use a well-known linear algebra algorithm which allows for more rapid convergence to the solution. We find that the computational complexity of the iterative solution to WHAM and the closely related multiple Bennett acceptance ratio (MBAR) method can be improved by using the method of direct inversion in the iterative subspace. We give examples from a lattice model, a simple liquid and an aqueous protein solution. PMID:27453632
Multi-GPU accelerated three-dimensional FDTD method for electromagnetic simulation.
Nagaoka, Tomoaki; Watanabe, Soichi
2011-01-01
Numerical simulation with a numerical human model using the finite-difference time-domain (FDTD) method has recently been performed in a number of fields in biomedical engineering. To improve the method's calculation speed and realize large-scale computing with the numerical human model, we adapt three-dimensional FDTD code to a multi-GPU environment using Compute Unified Device Architecture (CUDA). In this study, we used the NVIDIA Tesla C2070 as the GPGPU board. The performance of multi-GPU is evaluated in comparison with that of a single GPU and a vector supercomputer. The calculation speed with four GPUs was approximately 3.5 times faster than with a single GPU, and was slightly (approx. 1.3 times) slower than with the supercomputer. The calculation speed of the three-dimensional FDTD method using GPUs improves significantly as the number of GPUs increases.
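The kernel such GPU ports parallelize is the Yee leapfrog update. A serial 1-D NumPy sketch in normalized units is below; the Courant number, source shape, and grid size are arbitrary choices, and the real 3-D code updates six field components over millions of voxels, which is what makes GPU parallelization pay off:

```python
import numpy as np

def fdtd_1d(steps=200, n=200):
    """1-D Yee-scheme FDTD loop (normalized units, Courant number 0.5).
    E and H live on staggered grids and are updated leapfrog-style;
    a soft Gaussian source injects a pulse at the domain center."""
    ez = np.zeros(n)          # E-field at integer grid points
    hy = np.zeros(n - 1)      # H-field at half grid points
    c = 0.5                   # Courant number dt*c0/dx (<= 1 for stability)
    for t in range(steps):
        hy += c * (ez[1:] - ez[:-1])                   # update H from curl E
        ez[1:-1] += c * (hy[1:] - hy[:-1])             # update E from curl H
        ez[n // 2] += np.exp(-((t - 30) / 10.0) ** 2)  # soft Gaussian source
    return ez
```

Each cell's update depends only on its immediate neighbors, so the two update lines map directly onto one-thread-per-cell CUDA kernels.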
White III, James B; Archibald, Richard K; Evans, Katherine J; Drake, John
2011-01-01
In this paper we present a new approach to increase the time-step size for an explicit discontinuous Galerkin numerical method. The attributes of this approach are demonstrated on standard tests for the shallow-water equations on the sphere. The addition of multiwavelets to the discontinuous Galerkin method, which has the benefit of being scalable, flexible, and conservative, provides a hierarchical scale structure that can be exploited to improve computational efficiency in both the spatial and temporal dimensions. This paper explains how combining a multiwavelet discontinuous Galerkin method with exact linear part time-evolution schemes, which can remain stable for implicit-sized time steps, can help increase the time-step size for the shallow-water equations on the sphere.
Properties of the Feynman-alpha method applied to accelerator-driven subcritical systems.
Taczanowski, S; Domanska, G; Kopec, M; Janczyszyn, J
2005-01-01
A Monte Carlo study of the Feynman-alpha method, using a simple code that simulates the multiplication chain and is confined to the pertinent time-dependent phenomena, has been carried out. The significance of its key parameters (detector efficiency and dead time, k-source and spallation-neutron multiplicities, required number of fissions, etc.) has been discussed. It has been demonstrated that this method can be insensitive to the properties of the zones surrounding the core, whereas it is strongly affected by the detector dead time. In turn, the influence of harmonics in the neutron field and of the dispersion of spallation neutrons has proven much less pronounced.
NASA Astrophysics Data System (ADS)
Zhao, Qiang; He, Zhi-Yong; Yang, Lei; Zhang, Xue-Ying; Cui, Wen-Juan; Chen, Zhi-Qiang; Xu, Hu-Shan
2016-07-01
In this paper, we study a monitoring method for neutron flux for the spallation target used in an accelerator-driven sub-critical (ADS) system, where a spallation target located vertically at the centre of a sub-critical core is bombarded vertically by high-energy protons from an accelerator. First, by considering the characteristics of the spatial variation of neutron flux from the spallation target, we propose a multi-point measurement technique, i.e. the spallation neutron flux should be measured at multiple vertical locations. To explain why the flux should be measured at multiple locations, we have studied neutron production from a tungsten target bombarded by a 250 MeV proton beam with Geant4-based Monte Carlo simulations. The simulation results indicate that the neutron flux at the central location is up to three orders of magnitude higher than the flux at lower locations. Secondly, we have developed an effective technique to measure the spallation neutron flux with a fission chamber (FC), by establishing the relation between the fission rate measured by the FC and the spallation neutron flux. Since this relation is linear for a FC, a constant calibration factor is used to derive the neutron flux from the measured fission rate. This calibration factor can be extracted from the energy spectra of spallation neutrons. Finally, we have evaluated the proposed calibration method for a FC in the environment of an ADS system. The results indicate that the proposed method functions very well. Supported by the Strategic Priority Research Program of the Chinese Academy of Sciences (XDA03010000 and XDA03030000) and the National Natural Science Foundation of China (91426301).
Accelerating the Use of Weblogs as an Alternative Method to Deliver Case-Based Learning
ERIC Educational Resources Information Center
Chen, Charlie; Wu, Jiinpo; Yang, Samuel C.
2008-01-01
Weblog technology is an alternative medium to deliver the case-based method of learning business concepts. The social nature of this technology can potentially promote active learning and enhance analytical ability of students. The present research investigates the primary factors contributing to the adoption of Weblog technology by students to…
Two common laboratory extraction techniques were evaluated for routine use with the micro-colorimetric lipid determination method developed by Van Handel (1985) [E. Van Handel, J. Am. Mosq. Control Assoc. 1(1985) 302] and recently validated for small samples by Inouye and Lotufo ...
Acceleration of Ions and Electrons by Coronal Shocks
NASA Astrophysics Data System (ADS)
Sandroos, A.
2013-12-01
Diffusive shock acceleration (DSA) of particles at collisionless shock waves driven by coronal mass ejections (CMEs) is the best developed theory for the genesis of gradual solar energetic particle (SEP) events. According to DSA, particles scatter from fluctuations present in the ambient magnetic field, which causes some particles to encounter the shock front repeatedly and to gain energy during each crossing. DSA operating in the solar corona is a complex process whose outcome depends on multiple parameters such as shock speed and strength, magnetic geometry, and composition of seed particles. Currently, STEREO and other near-Earth spacecraft are providing valuable multi-point information on how SEP properties, such as composition and energy spectra, vary in longitude. Initial results have shown that longitude distributions of large CME-associated SEP events are much wider than previously thought. These findings have many important consequences for SEP modeling. For example, it is important to extend the present models into two or three spatial coordinates to properly account for the effects of coronal and interplanetary magnetic geometry and the evolution of the CME-driven shock wave on the acceleration and transport of SEPs. We present a new model for the shock acceleration of ions and electrons in the solar corona and discuss implications for particle properties (energy spectra, longitudinal distribution, composition) in the resulting gradual SEP events. We also discuss the possible emission of type II radio waves by the accelerated coronal electrons. In the new model, the ion pitch-angle scattering rate is calculated from modeled Alfvén wave power spectra using quasilinear theory. The energy gained by ions in scatterings is self-consistently removed from the waves so that the total energy (ions + waves) is conserved. The new model has been implemented on the massively parallel simulation platform Corsair.
NASA Astrophysics Data System (ADS)
Guda, A. A.; Guda, S. A.; Soldatov, M. A.; Lomachenko, K. A.; Bugaev, A. L.; Lamberti, C.; Gawelda, W.; Bressler, C.; Smolentsev, G.; Soldatov, A. V.; Joly, Y.
2016-05-01
The finite difference method (FDM) implemented in the FDMNES software [Phys. Rev. B, 2001, 63, 125120] was revised. Thorough analysis shows that the calculated FDM matrix consists of about 96% zero elements, so a sparse solver is more suitable for the problem than the traditional Gaussian elimination applied to the diagonal neighbourhood. We have tried several iterative sparse solvers, and the direct MUMPS solver with METIS ordering turned out to be the best. Compared to the Gaussian solver, the present method is up to 40 times faster and allows XANES simulations for complex systems on personal computers. We show the applicability of the software to the metal-organic [Fe(bpy)3]2+ complex in both the low-spin and high-spin states populated after laser excitation.
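The gain from sparse storage can be reproduced with any generic sparse package. The snippet below builds a stand-in matrix with roughly the quoted 96% of zero elements and solves it with a sparse direct solver; SciPy is used here purely as an illustrative backend, distinct from the MUMPS/METIS setup named in the abstract:

```python
import numpy as np
from scipy.sparse import identity, random as sparse_random
from scipy.sparse.linalg import spsolve

# Assemble a ~96%-zero test matrix (a generic stand-in, not FDMNES data).
n = 500
a = sparse_random(n, n, density=0.04, random_state=0)
a = (a + identity(n) * 10.0).tocsc()  # diagonal dominance keeps it well-conditioned

b = np.ones(n)

# A sparse direct solve factorises only the stored nonzeros (plus fill-in),
# which is why it beats dense Gaussian elimination on matrices like this.
x = spsolve(a, b)
```

For the 500 x 500 example above, dense elimination would touch all 250,000 entries, while the sparse factorisation works from roughly 10,000 stored nonzeros.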
Rider, William; Kamm, J. R.; Tomkins, C. D.; Zoldi, C. A.; Prestridge, K. P.; Marr-Lyon, M.; Rightley, P. M.; Benjamin, R. F.
2002-01-01
We consider the detailed structures of mixing flows for Richtmyer-Meshkov experiments of Prestridge et al. [PRE 00] and Tomkins et al. [TOM 01] and examine the most recent measurements from the experimental apparatus. Numerical simulations of these experiments are performed with three different versions of high resolution finite volume Godunov methods. We compare experimental data with simulations for configurations of one and two diffuse cylinders of SF₆ in air using integral measures as well as fractal analysis and continuous wavelet transforms. The details of the initial conditions have a significant effect on the computed results, especially in the case of the double cylinder. Additionally, these comparisons reveal sensitive dependence of the computed solution on the numerical method.
Crespo, Alejandro C.; Dominguez, Jose M.; Barreiro, Anxo; Gómez-Gesteira, Moncho; Rogers, Benedict D.
2011-01-01
Smoothed Particle Hydrodynamics (SPH) is a numerical method commonly used in Computational Fluid Dynamics (CFD) to simulate complex free-surface flows. Simulations with this mesh-free particle method far exceed the capacity of a single processor. In this paper, as part of a dual-functioning code for either central processing units (CPUs) or Graphics Processor Units (GPUs), a parallelisation using GPUs is presented. The GPU parallelisation technique uses the Compute Unified Device Architecture (CUDA) of nVidia devices. Simulations with more than one million particles on a single GPU card exhibit speedups of up to two orders of magnitude over using a single-core CPU. It is demonstrated that the code achieves different speedups with different CUDA-enabled GPUs. The numerical behaviour of the SPH code is validated with a standard benchmark test case of dam break flow impacting on an obstacle where good agreement with the experimental results is observed. Both the achieved speed-ups and the quantitative agreement with experiments suggest that CUDA-based GPU programming can be used in SPH methods with efficiency and reliability. PMID:21695185
Application of accelerated acquisition and highly constrained reconstruction methods to MR
NASA Astrophysics Data System (ADS)
Wang, Kang
2011-12-01
There are many Magnetic Resonance Imaging (MRI) applications that require rapid data acquisition. In conventional proton MRI, representative applications include real-time dynamic imaging, whole-chest pulmonary perfusion imaging, high resolution coronary imaging, MR T1 or T2 mapping, etc. The requirement for fast acquisition and novel reconstruction methods is due to the clinical demand for high temporal resolution, high spatial resolution, or both. Another important category in which fast MRI methods are highly desirable is imaging with hyperpolarized (HP) contrast media, such as HP 3He imaging for evaluation of pulmonary function, and imaging of HP 13C-labeled substrates for the study of in vivo metabolic processes. To address these needs, numerous MR undersampling methods have been developed and combined with novel image reconstruction techniques. This thesis aims to develop novel data acquisition and image reconstruction techniques for the following applications. (1) Ultrashort echo time spectroscopic imaging (UTESI). The need to acquire many echo images in spectroscopic imaging with high spatial resolution usually results in extended scan times, and thus requires k-space undersampling and novel image reconstruction methods to overcome the artifacts related to the undersampling. (2) Dynamic hyperpolarized 13C spectroscopic imaging. HP 13C compounds exhibit non-equilibrium T1 decay and rapidly evolving spectral dynamics, and therefore it is vital to utilize the polarized signal wisely and efficiently to observe the entire temporal dynamics of the injected 13C compounds as well as the corresponding downstream metabolites. (3) Time-resolved contrast-enhanced MR angiography. The diagnosis of vascular diseases often requires large coverage of human body anatomies with high spatial resolution and sufficient temporal resolution for the separation of arterial phases from venous phases. The goal of simultaneously achieving high spatial and temporal resolution has
Jones, Roger M
2003-05-23
In order to measure the wakefield left behind by multiple bunches of energetic electrons, we have previously used the ASSET facility in the SLC [1]. However, to produce a more rapid and cost-effective determination of the wakefields, we have designed a wire experimental method to measure the beam impedance and, from its Fourier transform, the wakefields. In this paper we present studies of the wire's effect on the properties of X-band structures under study for the JLC/NLC (Japanese Linear Collider/Next Linear Collider) project. Simulations are made on infinite and finite periodic structures. The results are discussed.
NASA Astrophysics Data System (ADS)
Różewski, Przemysław
Nowadays, e-learning systems take the form of the Distance Learning Network (DLN) due to the widespread use and accessibility of the Internet and networked e-learning services. The focal point of DLN performance is the efficiency of knowledge processing in asynchronous learning mode and the facilitation of cooperation between students. In addition, the DLN draws attention to social aspects of the learning process as well. In this paper, a method for DLN development is proposed. The main research objectives of the proposed method are the acceleration of social collaboration and knowledge sharing in the DLN. The method introduces knowledge-disposed agents (who represent students in educational scenarios) that form a network of individuals aiming to increase their competence. For every agent the competence expansion process is formulated. Based on that outcome, the process of dynamic network formation is performed on the social and knowledge levels. The method utilizes the formal apparatuses of competence set and network game theories combined with an agent-system-based approach.
NASA Astrophysics Data System (ADS)
Zhu, Dianwen; Li, Changqing
2016-01-01
Fluorescence molecular tomography (FMT) is a significant preclinical imaging modality that has been actively studied in the past two decades. It remains a challenging task to obtain fast and accurate reconstruction of fluorescent probe distribution in small animals due to the large computational burden and the ill-posed nature of the inverse problem. We have recently studied a nonuniform multiplicative updating algorithm combined with the ordered subsets (OS) method for fast convergence. However, increasing the number of OS leads to greater approximation errors, and the speed gain from a larger number of OS is limited. We propose to further enhance the convergence speed by incorporating a first-order momentum method that uses previous iterations to achieve an optimal convergence rate. Using numerical simulations and a cubic phantom experiment, we have systematically compared the effects of the momentum technique, the OS method, and the nonuniform updating scheme in accelerating the FMT reconstruction. We found that the proposed combined method can produce a high-quality image using an order of magnitude less time.
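A first-order momentum scheme of the kind invoked above can be sketched generically. This is textbook Nesterov-style acceleration applied to a plain least-squares objective, not the paper's nonuniform multiplicative update; the name `nesterov_lsq` and all parameters are hypothetical:

```python
import numpy as np

def nesterov_lsq(A, b, steps=500):
    """Gradient descent with a Nesterov-style momentum term for
    min_x ||Ax - b||^2: each gradient step is taken from an
    extrapolated point built out of the two previous iterates."""
    L = 2.0 * np.linalg.norm(A, 2) ** 2   # Lipschitz constant of the gradient
    x = y = np.zeros(A.shape[1])
    t = 1.0
    for _ in range(steps):
        grad = 2.0 * A.T @ (A @ y - b)
        x_new = y - grad / L              # gradient step from the lookahead point
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)   # momentum extrapolation
        x, t = x_new, t_new
    return x
```

For convex problems the momentum term improves the worst-case objective decay of plain gradient descent from O(1/k) to O(1/k^2), which is the "optimal convergence rate" of first-order methods that the abstract refers to.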
Hybrid parallel code acceleration methods in full-core reactor physics calculations
Courau, T.; Plagne, L.; Ponicot, A.; Sjoden, G.
2012-07-01
When dealing with nuclear reactor calculation schemes, the need for three-dimensional (3D) transport-based reference solutions is essential for both validation and optimization purposes. Considering a benchmark problem, this work investigates the potential of discrete ordinates (Sn) transport methods applied to 3D pressurized water reactor (PWR) full-core calculations. First, the benchmark problem is described. It involves a pin-by-pin description of a 3D PWR first core, and uses an 8-group cross-section library prepared with the DRAGON cell code. Then, a convergence analysis is performed using the PENTRAN parallel Sn Cartesian code. It discusses the spatial refinement and the associated angular quadrature required to properly describe the problem physics. It also shows that initializing the Sn solution with the EDF SPN solver COCAGNE reduces the number of iterations required to converge by nearly a factor of 6. Using a best-estimate model, PENTRAN results are then compared to multigroup Monte Carlo results obtained with the MCNP5 code. Good consistency is observed between the two methods (Sn and Monte Carlo), with discrepancies of less than 25 pcm for k_eff, and less than 2.1% and 1.6% for the flux at the pin-cell level and for the pin-power distribution, respectively. (authors)
Nicholson, Kelly M; Chandrasekhar, Nita; Sholl, David S
2014-11-18
CONSPECTUS: Not only is hydrogen critical for current chemical and refining processes, it is also projected to be an important energy carrier for future green energy systems such as fuel cell vehicles. Scientists have examined light metal hydrides for this purpose, which need to have both good thermodynamic properties and fast charging/discharging kinetics. The properties of hydrogen in metals are also important in the development of membranes for hydrogen purification. In this Account, we highlight our recent work aimed at the large scale screening of metal-based systems with either favorable hydrogen capacities and thermodynamics for hydrogen storage in metal hydrides for use in onboard fuel cell vehicles or promising hydrogen permeabilities relative to pure Pd for hydrogen separation from high temperature mixed gas streams using dense metal membranes. Previously, chemists have found that the metal hydrides need to hit a stability sweet spot: if the compound is too stable, it will not release enough hydrogen under low temperatures; if the compound is too unstable, the reaction may not be reversible under practical conditions. Fortunately, we can use DFT-based methods to assess this stability via prediction of thermodynamic properties, equilibrium reaction pathways, and phase diagrams for candidate metal hydride systems with reasonable accuracy using only proposed crystal structures and compositions as inputs. We have efficiently screened millions of mixtures of pure metals, metal hydrides, and alloys to identify promising reaction schemes via the grand canonical linear programming method. Pure Pd and Pd-based membranes have ideal hydrogen selectivities over other gases but suffer shortcomings such as sensitivity to sulfur poisoning and hydrogen embrittlement. Using a combination of detailed DFT, Monte Carlo techniques, and simplified models, we are able to accurately predict hydrogen permeabilities of metal membranes and screen large libraries of candidate alloys
CUDA Fortran acceleration for the finite-difference time-domain method
NASA Astrophysics Data System (ADS)
Hadi, Mohammed F.; Esmaeili, Seyed A.
2013-05-01
A detailed description of programming the three-dimensional finite-difference time-domain (FDTD) method to run on graphical processing units (GPUs) using CUDA Fortran is presented. Two FDTD-to-CUDA thread-block mapping designs are investigated and their performances compared. Comparative assessment of trade-offs between GPU's shared memory and L1 cache is also discussed. This presentation is for the benefit of FDTD programmers who work exclusively with Fortran and are reluctant to port their codes to C in order to utilize GPU computing. The derived CUDA Fortran code is compared with an optimized CPU version that runs on a workstation-class CPU to present a realistic GPU to CPU run time comparison and thus help in making better informed investment decisions on FDTD code redesigns and equipment upgrades. All analyses are mirrored with CUDA C simulations to put in perspective the present state of CUDA Fortran development.
NASA Astrophysics Data System (ADS)
Ghasemi, F.; Abbasi Davani, F.
2015-06-01
Due to Iran's growing need for accelerators in various applications, IPM's electron linac project has been defined. This accelerator is a 15 MeV S-band traveling-wave accelerator that is being designed and constructed based on the klystron built in Iran. Based on the design, the operating mode is π/2 and the accelerating chamber consists of two 60 cm long constant-impedance tubes and a 30 cm long buncher. Among the available construction methods, the shrinking method was selected for the IPM electron linac tube because its procedure is simple and no large vacuum or hydrogen furnaces are needed. In this paper, different aspects of this method are investigated. According to the calculations, the linear ratio of frequency alteration to radius change is 787.8 MHz/cm, and the maximum deformation at the tube wall where the disks and the tube make contact is 2.7 μm. Applying the shrinking method to the construction of 8- and 24-cavity tubes results in satisfactory frequency and quality factor. The average deviations of the cavity frequencies of the 8- and 24-cavity tubes from the design values are 0.68 MHz and 1.8 MHz, respectively, before tuning, and 0.2 MHz and 0.4 MHz after tuning. The accelerating tubes, buncher, and high-power couplers of IPM's electron linac were constructed using the shrinking method.
Mixed-norm estimates for the M/EEG inverse problem using accelerated gradient methods
Gramfort, Alexandre; Kowalski, Matthieu; Hämäläinen, Matti
2012-01-01
Magneto- and electroencephalography (M/EEG) measure the electromagnetic fields produced by the neural electrical currents. Given a conductor model for the head, and the distribution of source currents in the brain, Maxwell's equations allow one to compute the ensuing M/EEG signals. Given the actual M/EEG measurements and the solution of this forward problem, one can localize, in space and in time, the brain regions that have produced the recorded data. However, due to the physics of the problem, the limited number of sensors compared to the number of possible source locations, and measurement noise, this inverse problem is ill-posed. Consequently, additional constraints are needed. Classical inverse solvers, often called Minimum Norm Estimates (MNE), promote source estimates with a small ℓ2 norm. Here, we consider a more general class of priors based on mixed norms. Such norms have the ability to structure the prior in order to incorporate some additional assumptions about the sources. We refer to such solvers as Mixed-Norm Estimates (MxNE). In the context of M/EEG, MxNE can promote spatially focal sources with smooth temporal estimates with a two-level ℓ1/ℓ2 mixed norm, while a three-level mixed norm can be used to promote spatially non-overlapping sources between different experimental conditions. In order to efficiently solve the optimization problems of MxNE, we introduce fast first-order iterative schemes that for the ℓ1/ℓ2 norm give solutions in a few seconds, making such a prior as convenient as the simple MNE. Furthermore, thanks to the convexity of the optimization problem, we can provide optimality conditions that guarantee global convergence. The utility of the methods is demonstrated both with simulations and experimental MEG data. PMID:22421459
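The two-level ℓ1/ℓ2 prior described above is what makes fast first-order schemes practical: its proximal operator has a closed form, a row-wise group soft-thresholding. A minimal sketch of that generic formula (not the authors' MxNE implementation; `alpha` stands in for the regularisation strength):

```python
import numpy as np

def prox_l21(X, alpha):
    """Proximal operator of alpha * sum over rows of ||row||_2
    (the l1/l2 mixed norm): shrink each row's l2 norm by alpha,
    setting rows whose norm is below alpha exactly to zero."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    scale = np.maximum(1.0 - alpha / np.maximum(norms, 1e-30), 0.0)
    return X * scale
```

Zeroing whole rows at once is precisely how the prior yields spatially focal source estimates: a row corresponds to one candidate source's full time course, so a source is either kept with a smooth time course or removed entirely.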
Contaldi, Carlo R.
2014-10-01
The recent Bicep2 [1] detection of what is claimed to be primordial B-modes opens up the possibility of constraining not only the energy scale of inflation but also the detailed acceleration history that occurred during inflation. In turn this can be used to determine the shape of the inflaton potential V(φ) for the first time, if a single scalar inflaton is assumed to be driving the acceleration. We carry out a Monte Carlo exploration of inflationary trajectories given the current data. Using this method we obtain a posterior distribution of possible acceleration profiles ε(N) as a function of e-fold N, and derived posterior distributions of the primordial power spectrum P(k) and potential V(φ). We find that the Bicep2 result, in combination with Planck measurements of total intensity Cosmic Microwave Background (CMB) anisotropies, induces a significant feature in the scalar primordial spectrum at scales k ∼ 10⁻³ Mpc⁻¹. This is in agreement with a previous detection of a suppression in the scalar power [2].
Fischbach, R; Landwehr, P; Lackner, K; Nossen, J O; Heindel, W; Berg, K J; Eichhorn, G; Jacobsen, T F
1996-01-01
Iodixanol (Visipaque, 320 mgI/ml) was compared with iopamidol (Solutrast, 370 mgI/ml) in a double-blind, randomized, parallel-group, intravenous DSA phase-III trial for evaluation of safety and efficacy. A total of 117 patients received iodixanol (n = 60) or iopamidol (n = 57). Diagnostic efficacy was evaluated using categoric and visual analogue scales. Discomfort and adverse events were recorded. A total of 39 patients collected urine for up to 72 h after the examination for analysis. Diagnostic efficacy and radiographic density were similar in both groups. Discomfort was milder with iodixanol. The difference in the frequency of adverse events between the two groups (iodixanol = 7, iopamidol = 2) was not statistically significant. Creatinine clearance was slightly more affected by iodixanol, whereas the increase in renal excretion of N-acetyl-beta-glucosaminidase (NAG) in the first 24-h collection period after the examination was significantly higher (p < 0.01) with iopamidol. Iodixanol was of equal diagnostic efficacy compared with iopamidol despite its reduced iodine content. Both contrast media are well suited for IV DSA.
NASA Astrophysics Data System (ADS)
Smith, Cindy D.
Common methods for commissioning linear accelerators often neglect beam data for small fields. Examining the methods of beam data collection and modeling for commissioning linear accelerators revealed little to no discussion of protocols for fields smaller than 4 cm x 4 cm. This leads to decreased confidence in the dose calculations and associated monitor units (MUs) for Intensity Modulated Radiation Therapy (IMRT). The parameters of commissioning the Novalis linear accelerator (linac) on the Eclipse Treatment Planning System (TPS) led to a study of the challenges of collecting data for very small fields. The focus of this thesis is the examination of protocols for output factor collection and their impact on dose calculations by the TPS for IMRT treatment plans. Improving output factor collection methods led to significant improvement in absolute dose calculations, which correlated with the complexity of the plans.
NASA Astrophysics Data System (ADS)
Jangi, Mehdi; Lucchini, Tommaso; Gong, Cheng; Bai, Xue-Song
2015-09-01
An Eulerian stochastic fields (ESF) method accelerated with the chemistry coordinate mapping (CCM) approach for modelling spray combustion is formulated, and applied to model diesel combustion in a constant-volume vessel. In ESF-CCM, the thermodynamic states of the discretised stochastic fields are mapped into a low-dimensional phase space. Integration of the stiff chemical ODEs is performed in the phase space and the results are mapped back to the physical domain. After validating the ESF-CCM, the method is used to investigate the effects of fuel cetane number on the structure of diesel spray combustion. It is shown that, depending on the fuel cetane number, the liftoff length varies, which can lead to a change in combustion mode from classical diesel spray combustion to fuel-lean premixed combustion. Spray combustion with a shorter liftoff length exhibits the characteristics of the classical conceptual diesel combustion model proposed by Dec in 1997 (http://dx.doi.org/10.4271/970873), whereas in a case with a lower cetane number the liftoff length is much larger and the spray combustion probably occurs in a fuel-lean premixed mode of combustion. Nevertheless, the transport budget at the liftoff location shows that stabilisation at all cetane numbers is governed primarily by the auto-ignition process.
Kaminski, Artur; Grazka, Ewelina; Jastrzebska, Anna; Marowska, Joanna; Gut, Grzegorz; Wojciechowski, Artur; Uhrynowska-Tyszkiewicz, Izabela
2012-08-01
Accelerated electron beam (EB) irradiation has for many years been an effective method of sterilising human tissue grafts in a number of tissue banks. Accelerated EB, in contrast to the more often used gamma photons, is a form of ionizing radiation characterized by lower penetration; however, it is more effective at producing ionisation, so the exposure time needed to reach the same level of sterility is shorter. Several factors, including the dose and temperature of irradiation, the processing conditions, and the source of irradiation, may influence the mechanical properties of a bone graft. The purpose of this study was to evaluate the effect of e-beam irradiation with doses of 25 or 35 kGy, performed on dry ice or at ambient temperature, on the mechanical properties of non-defatted or defatted compact bone grafts. Left and right femurs from six male cadaveric donors, aged from 46 to 54 years, were transversely cut into slices of 10 mm height, parallel to the longitudinal axis of the bone. Compact bone rings were assigned to eight experimental groups according to the processing method (defatted or non-defatted), the e-beam irradiation dose (25 or 35 kGy), and the temperature conditions of irradiation (ambient temperature or dry ice). Axial compression testing was performed with a material testing machine. Results obtained for the elastic and plastic regions of the stress-strain curves, examined by univariate analysis, are described. Based on multivariate analysis including all groups, it was found that the temperature of e-beam irradiation and defatting had no consistent significant effect on the evaluated mechanical parameters of compact bone rings. In contrast, irradiation with both doses significantly decreased the ultimate strain and its derivative, toughness, while not affecting the ultimate stress (bone strength). As no deterioration of mechanical properties was observed in the elastic region, the reduction of the energy
NASA Astrophysics Data System (ADS)
Nation, John A.
1993-01-01
This report describes work carried out on AFOSR grant number F49620-92-J-0153DEF during the period February 1, 1992 to January 31, 1993. The report provides a brief description of the program objectives, summarizes the main accomplishments of the last year, and concludes with listings of conferences and refereed publications that have either been submitted for publication or published during the program year.
NASA Technical Reports Server (NTRS)
Jansen, Ralph
1995-01-01
Neural network systems were evaluated for use in predicting wear of mechanical systems. Three different neural network software simulation packages were utilized to create models of tribological wear tests. Representative simple, medium, and high complexity simulation packages were selected. Pin-on-disk, rub-shoe, and four-ball tribological test data were used for training, testing, and verification of the neural network models. Results showed mixed success. The neural networks were able to predict results with some accuracy if the number of input variables was low or the amount of training data was high. Increased neural network complexity resulted in more accurate results; however, there was a point of diminishing returns. Medium complexity models were the best trade-off between accuracy and computing-time requirements. A NASA Technical Memorandum and a Society of Tribologists and Lubrication Engineers paper are being published which detail the work.
Gwin, Joseph T; Chu, Jeffery J; Diamond, Solomon G; Halstead, P David; Crisco, Joseph J; Greenwald, Richard M
2010-01-01
The performance characteristics of football helmets are currently evaluated by simulating head impacts in the laboratory using a linear drop test method. To encourage development of helmets designed to protect against concussion, the National Operating Committee for Standards in Athletic Equipment recently proposed a new headgear testing methodology with the goal of more closely simulating in vivo head impacts. This proposed test methodology involves an impactor striking a helmeted headform attached to a nonrigid neck. The purpose of the present study was to compare headform accelerations recorded according to the current (n=30) and proposed (n=54) laboratory test methodologies with head accelerations recorded in the field during play. In-helmet systems of six single-axis accelerometers were worn by the Dartmouth College men's football team during the 2005 and 2006 seasons (n=20,733 impacts; 40 players). The impulse response characteristics of a subset of laboratory test impacts (n=27) were compared with the impulse response characteristics of a matched sample of in vivo head accelerations (n=24). Second- and third-order underdamped, conventional, continuous-time process models were developed for each impact. These models were used to characterize the linear head/headform accelerations for each impact based on frequency domain parameters. Headform linear accelerations generated according to the proposed test method were less similar to in vivo head accelerations than headform accelerations generated by the current linear drop test method. The nonrigid neck currently utilized was not developed to simulate sport-related direct head impacts and appears to be a source of the discrepancy between frequency characteristics of in vivo and laboratory head/headform accelerations. In vivo impacts occurred 37% more frequently on helmet regions that are tested in the proposed standard than on helmet regions tested currently. This increase was largely due to the
The Advanced Composition Explorer Shock Database and Application to Particle Acceleration Theory
NASA Technical Reports Server (NTRS)
Parker, L. Neergaard; Zank, G. P.
2015-01-01
The theory of particle acceleration via diffusive shock acceleration (DSA) has been studied in depth by Gosling et al. (1981), van Nes et al. (1984), Mason (2000), Desai et al. (2003), and Zank et al. (2006), among many others. Recently, Parker and Zank (2012, 2014) and Parker et al. (2014), using the Advanced Composition Explorer (ACE) shock database at 1 AU, explored two questions: does the upstream distribution alone have enough particles to account for the accelerated downstream distribution, and can the slope of the downstream accelerated spectrum be explained using DSA? As was shown in this research, diffusive shock acceleration can account for a large fraction of the shocks. However, Parker and Zank (2012, 2014) and Parker et al. (2014) used a subset of the larger ACE database. Recently, work has been completed that allows the entire ACE database to be considered in a larger statistical analysis. We explain DSA as it applies to single and multiple shocks and the shock criteria used in this statistical analysis. We calculate the expected injection energy via diffusive shock acceleration given upstream parameters defined from the ACE Solar Wind Electron, Proton, and Alpha Monitor (SWEPAM) data to construct the theoretical upstream distribution. We show the comparison of shock strength derived from diffusive shock acceleration theory to observations in the 50 keV to 5 MeV range from an instrument on ACE. Parameters such as shock velocity, shock obliquity, particle number, and time between shocks are considered. This study is further divided into single and multiple shock categories, with an additional emphasis on forward-forward multiple shock pairs. Finally, with regard to forward-forward shock pairs, results comparing injection energies for the first shock, the second shock, and the second shock with a pre-existing energetic population will be given.
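As background for the slope comparison described above, the test-particle DSA prediction ties the downstream power-law index directly to the shock compression ratio. The sketch below is an illustration of that textbook relation, not code from the study itself; function names are illustrative:

```python
def dsa_spectral_index(r: float) -> float:
    """Test-particle DSA: the downstream momentum distribution is a power
    law f(p) ~ p**-q with q = 3r / (r - 1) for compression ratio r."""
    if r <= 1.0:
        raise ValueError("compression ratio must exceed 1")
    return 3.0 * r / (r - 1.0)

def energy_spectral_index(r: float) -> float:
    """Index s of the non-relativistic kinetic-energy distribution
    N(E) ~ E**-s, related to q by s = (q - 1) / 2."""
    return (dsa_spectral_index(r) - 1.0) / 2.0

# A strong non-relativistic shock (r = 4) gives q = 4 and s = 1.5.
```

Comparing indices predicted this way with the measured downstream spectral slopes is the kind of shock-strength comparison the abstract describes.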
NASA Astrophysics Data System (ADS)
Bailey, I. R.; Barber, D. P.; Chattopadhyay, S.; Hartin, A.; Heinzl, T.; Hesselbach, S.; Moortgat-Pick, G. A.
2009-11-01
The joint IPPP Durham/Cockcroft Institute/ICFA workshop on advanced QED methods for future accelerators took place at the Cockcroft Institute in early March 2009. The motivation for the workshop was the need for a detailed consideration of the physics processes associated with beam-beam effects at the interaction points of future high-energy electron-positron colliders. There is a broad consensus within the particle physics community that the next international facility for experimental high-energy physics research beyond the Large Hadron Collider at CERN should be a high-luminosity electron-positron collider working at the TeV energy scale. One important feature of such a collider will be its ability to deliver polarised beams to the interaction point and to provide accurate measurements of the polarisation state during physics collisions. The physics collisions take place in very dense charge bunches in the presence of extremely strong electromagnetic fields, of field strength of order the Schwinger critical field strength of 4.4×10^13 Gauss. These intense fields lead to depolarisation processes which need to be thoroughly understood in order to reduce uncertainty in the polarisation state at collision. To that end, this workshop reviewed the formalisms for describing radiative processes and the methods of calculation in the future strong-field environments. These calculations are based on the Furry picture of organising the interaction term of the Lagrangian. The means of deriving the transition probability of the most important of the beam-beam processes - Beamstrahlung - was reviewed. The workshop was honoured by presentations from V. N. Baier, one of the founders of the 'Operator method', one means of performing these calculations. Other theoretical methods of performing calculations in the Furry picture, namely those due to A. I. Nikishov, V. I. Ritus et al., were reviewed, and intense field quantum processes in fields of different form - namely those
Razinkov, Vladimir I; Treuheit, Michael J; Becker, Gerald W
2015-04-01
More therapeutic monoclonal antibodies and antibody-based modalities are in development today than ever before, and a faster and more accurate drug discovery process will ensure that the number of candidates coming to the biopharmaceutical pipeline will increase in the future. The process of drug product development and, specifically, formulation development is a critical bottleneck on the way from candidate selection to fully commercialized medicines. This article reviews the latest advances in methods of formulation screening, which allow not only the high-throughput selection of the most suitable formulation but also the prediction of stability properties under manufacturing and long-term storage conditions. We describe how the combination of automation technologies and high-throughput assays creates the opportunity to streamline the formulation development process starting from early preformulation screening through to commercial formulation development. The application of quality by design (QbD) concepts and modern statistical tools are also shown here to be very effective in accelerated formulation development of both typical antibodies and complex modalities derived from them.
Cosmic ray acceleration at perpendicular shocks in supernova remnants
Ferrand, Gilles; Danos, Rebecca J.; Shalchi, Andreas; Safi-Harb, Samar; Edmon, Paul; Mendygral, Peter
2014-09-10
Supernova remnants (SNRs) are believed to accelerate particles up to high energies through the mechanism of diffusive shock acceleration (DSA). Except for direct plasma simulations, all modeling efforts must rely on a given form of the diffusion coefficient, a key parameter that embodies the interactions of energetic charged particles with magnetic turbulence. The so-called Bohm limit is commonly employed. In this paper, we revisit the question of acceleration at perpendicular shocks, by employing a realistic model of perpendicular diffusion. Our coefficient reduces to a power law in momentum for low momenta (of index α), but becomes independent of the particle momentum at high momenta (reaching a constant value κ_∞ above some characteristic momentum p_c). We first provide simple analytical expressions of the maximum momentum that can be reached at a given time with this coefficient. Then we perform time-dependent numerical simulations to investigate the shape of the particle distribution that can be obtained when the particle pressure back-reacts on the flow. We observe that for a given index α and injection level, the shock modifications are similar for different possible values of p_c, whereas the particle spectra differ markedly. Of particular interest, low values of p_c tend to remove the concavity once thought to be typical of non-linear DSA, and result in steep spectra, as required by recent high-energy observations of Galactic SNRs.
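The diffusion coefficient described in this abstract, a power law of index α in momentum below a characteristic momentum p_c and constant at κ_∞ above it, is simple enough to write down. The sketch below is an illustrative reading of that description, not the authors' code:

```python
def kappa_perp(p: float, p_c: float, kappa_inf: float, alpha: float) -> float:
    """Model perpendicular diffusion coefficient: grows as p**alpha below
    the characteristic momentum p_c, then saturates at the constant
    kappa_inf for p >= p_c (continuous at p = p_c)."""
    if p < p_c:
        return kappa_inf * (p / p_c) ** alpha
    return kappa_inf
```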
John Womersley
2003-08-21
I describe the future accelerator facilities that are currently foreseen for electroweak scale physics, neutrino physics, and nuclear structure. I will explore the physics justification for these machines, and suggest how the case for future accelerators can be made.
Hertzberg, A.; Bruckner, A.P.; Mattick, A.T.; Bogdanoff, D.W.; Brackett, D.C.; McFall, K.A.
1987-01-01
This report describes work performed for the Department of Energy over the time period 1 June 1985 to 30 April 1987. The main areas of investigation are computational studies of gas and high explosive driven ramjet-in-tube concepts over the velocity range 3 - 20 km/sec, linear velocity multiplication over the velocity range 7 - 100+ km/sec, and radiation emitted from impacts at closing velocities of 80 - 400 km/sec. This report presents the computational methods used, including benchmark proof tests of these methods, as well as results of the investigations. 41 refs., 62 figs., 11 tabs.
NASA Astrophysics Data System (ADS)
García-Pareja, S.; Vilches, M.; Lallena, A. M.
2010-01-01
The Monte Carlo simulation of clinical electron linear accelerators requires large computation times to achieve the level of uncertainty required for radiotherapy. In this context, variance reduction techniques play a fundamental role in the reduction of this computational time. Here we describe the use of the ant colony method to control the application of two variance reduction techniques: splitting and Russian roulette. The approach can be applied to any accelerator in a straightforward way and permits increasing the efficiency of the simulation by a factor larger than 50.
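For context, the two variance reduction techniques that the ant colony method controls are standard Monte Carlo devices. The sketch below shows plain weight-window splitting and Russian roulette only; the ant-colony control layer that adapts the windows is not shown, and the function name and thresholds are illustrative:

```python
import random

def apply_weight_window(weight, w_low, w_high, rng=random.random):
    """Split or roulette a particle of statistical weight `weight`.
    Returns the list of surviving particle weights (possibly empty);
    the expected total weight is preserved in both branches."""
    if weight > w_high:
        n = int(weight / w_high) + 1      # splitting: n lighter copies
        return [weight / n] * n
    if weight < w_low:
        if rng() < weight / w_low:        # Russian roulette: survive
            return [w_low]                # with probability weight/w_low
        return []                         # ... or be killed
    return [weight]
```

A particle inside the window passes through unchanged; heavy particles are split into equal-weight copies and light ones are probabilistically killed, which is what keeps the estimator unbiased.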
NASA Astrophysics Data System (ADS)
Colazo, M.
2016-08-01
Argentina has 10 percent of the operating time available on the DSA 3 antenna of the European Space Agency, installed in Malargüe, Mendoza. Here we present the history of the project and the current activities toward the scientific use of the antenna.
NASA Astrophysics Data System (ADS)
Su, Lin; Du, Xining; Liu, Tianyu; Xu, X. George
2014-06-01
An electron-photon coupled Monte Carlo code ARCHER -
Hoven, Andor F. van den; Leeuwen, Maarten S. van; Lam, Marnix G. E. H.; Bosch, Maurice A. A. J. van den
2015-02-15
Purpose: Current anatomical classifications do not include all variants relevant for radioembolization (RE). The purpose of this study was to assess the individual hepatic arterial configuration and segmental vascularization pattern and to develop an individualized RE treatment strategy based on an extended classification. Methods: The hepatic vascular anatomy was assessed on MDCT and DSA in patients who received a workup for RE between February 2009 and November 2012. Reconstructed MDCT studies were assessed to determine the hepatic arterial configuration (origin of every hepatic arterial branch, branching pattern and anatomical course) and the hepatic segmental vascularization territory of all branches. Aberrant hepatic arteries were defined as hepatic arterial branches that did not originate from the celiac axis/CHA/PHA. Early branching patterns were defined as hepatic arterial branches originating from the celiac axis/CHA. Results: The hepatic arterial configuration and segmental vascularization pattern could be assessed in 110 of 133 patients. In 59 patients (54 %), no aberrant hepatic arteries or early branching was observed. Fourteen patients without aberrant hepatic arteries (13 %) had an early branching pattern. In the 37 patients (34 %) with aberrant hepatic arteries, five also had an early branching pattern. Sixteen different hepatic arterial segmental vascularization patterns were identified and described, differing by the presence of aberrant hepatic arteries, their respective vascular territory, and origin of the artery vascularizing segment four. Conclusions: The hepatic arterial configuration and segmental vascularization pattern show marked individual variability beyond well-known classifications of anatomical variants. We developed an individualized RE treatment strategy based on an extended anatomical classification.
Medina, L Carolina; Sartain, Jerry; Obreza, Thomas; Hall, William L; Thiex, Nancy J
2014-01-01
Several technologies have been proposed to characterize the nutrient release patterns of enhanced-efficiency fertilizers (EEFs) during the last few decades. These technologies have been developed mainly by manufacturers and are product-specific based on the regulation and analysis of each EEF product. Despite previous efforts to characterize nutrient release of slow-release fertilizer (SRF) and controlled-release fertilizer (CRF) materials, no official method exists to assess their nutrient release patterns. However, the increased production and distribution of EEFs in specialty and nonspecialty markets requires an appropriate method to verify nutrient claims and material performance. Nonlinear regression was used to establish a correlation between the data generated from a 180-day soil incubation-column leaching procedure and 74 h accelerated lab extraction method, and to develop a model that can predict the 180-day nitrogen (N) release curve for a specific SRF and CRF product based on the data from the accelerated laboratory extraction method. Based on the R2 > 0.90 obtained for most materials, results indicated that the data generated from the 74 h accelerated lab extraction method could be used to predict N release from the selected materials during 180 days, including those fertilizers that require biological activity for N release. PMID:25051612
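The idea of predicting the 180-day release curve from short accelerated-extraction data can be illustrated with a toy fit. The first-order release model, grid-search fit, and parameter values below are illustrative assumptions, not the regression actually used in the study:

```python
import math

def n_release(t_days, n_max, k):
    """Toy first-order release model: cumulative % N released by day t."""
    return n_max * (1.0 - math.exp(-k * t_days))

def fit_k(times, observed, n_max=100.0):
    """Least-squares grid search for the rate constant k (per day),
    standing in for a proper nonlinear regression routine."""
    best_k, best_sse = None, float("inf")
    for i in range(1, 2001):
        k = i * 1e-4                      # scan k over (0, 0.2] per day
        sse = sum((n_release(t, n_max, k) - y) ** 2
                  for t, y in zip(times, observed))
        if sse < best_sse:
            best_k, best_sse = k, sse
    return best_k
```

Fitting k on short-term extraction samples and then evaluating the model out to 180 days mirrors the prediction step the abstract describes.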
NASA Astrophysics Data System (ADS)
Yu, S. D.; Zhang, X.
2010-05-01
This paper presents a method for determining the instantaneous angular speed and instantaneous angular acceleration of the crankshaft in a reciprocating engine and propeller dynamical system from electrical pulse signals generated by a magnetic encoder. The method is based on accurate determination of the measured global mean angular speed and precise values of times when leading edges of individual magnetic teeth pass through the magnetic sensor. Under a steady-state operating condition, a discrete deviation time vs. shaft rotational angle series of uniform interval is obtained and used for accurate determination of the crankshaft speed and acceleration. The proposed method for identifying sub- and super-harmonic oscillations in the instantaneous angular speeds and accelerations is new and efficient. Experiments were carried out on a three-cylinder four-stroke Saito 450R model aircraft engine and a Solo propeller in connection with a 64-teeth Admotec KL2202 magnetic encoder and an HS-4 data acquisition system. Comparisons with an independent data processing scheme indicate that the proposed method yields noise-free instantaneous angular speeds and is superior to the finite difference based methods commonly used in the literature.
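The core conversion in this measurement, from tooth-passage times to instantaneous angular speed, is straightforward to sketch. The function below is the baseline finite-difference version that the paper's deviation-time method is designed to improve on (the paper finds such finite differences noisier); the 64-tooth default matches the encoder used:

```python
import math

def instantaneous_speed(tooth_times, n_teeth=64):
    """Instantaneous angular speed (rad/s) between successive leading
    edges of encoder teeth: omega_i = delta_theta / delta_t, with
    delta_theta = 2*pi / n_teeth for a uniformly toothed encoder."""
    dtheta = 2.0 * math.pi / n_teeth
    return [dtheta / (t1 - t0)
            for t0, t1 in zip(tooth_times, tooth_times[1:])]
```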
Combined generating-accelerating buncher for compact linear accelerators
NASA Astrophysics Data System (ADS)
Savin, E. A.; Matsievskiy, S. V.; Sobenin, N. P.; Sokolov, I. D.; Zavadtsev, A. A.
2016-09-01
The method of power extraction from a modulated electron beam described in the previous article [1] has been applied to the feeding system of a compact standing-wave electron linear accelerator, which does not require any connecting waveguides between the power source and the accelerator itself [2]. Generating and accelerating bunches meet in a hybrid accelerating cell operating in the TM020 mode; thus the accelerating module is placed on the axis of the generating module, which consists of pulsed high-voltage electron sources and electron dumps. This combination makes the accelerator very compact, which is valuable for modern applications such as portable inspection sources. Simulations and cold tests of the geometry are presented.
Ellis, P.F. II; Ferguson, A.F.; Fuentes, K.T.
1996-05-06
In 1992, the Air-Conditioning and Refrigeration Technology Institute, Inc. (ARTI) contracted Radian Corporation to ascertain whether an improved accelerated test method or procedure could be developed that would allow prediction of the life of motor insulation materials used in hermetic motors for air-conditioning and refrigeration equipment operated with alternative refrigerant/lubricant mixtures. This report presents the results of phase three concerning the reproducibility and discrimination testing.
NASA Astrophysics Data System (ADS)
Hwang, James Ho-Jin; Duran, Adam
2016-08-01
Most of the time, pyrotechnic shock design and test requirements for space systems are provided as a Shock Response Spectrum (SRS) without the input time history. Since the SRS does not describe the input or the environment, a decomposition method is used to obtain the source time history. The main objective of this paper is to develop a decomposition method producing input time histories that can satisfy the SRS requirement based on the pyrotechnic shock test data measured from a mechanical impact test apparatus. At the heart of this decomposition method is the statistical representation of the pyrotechnic shock test data measured from the MIT Lincoln Laboratory (LL) designed Universal Pyrotechnic Shock Simulator (UPSS). Each pyrotechnic shock test record measured at the interface of a test unit has been analyzed to produce the temporal peak acceleration, Root Mean Square (RMS) acceleration, and the phase lag at each band center frequency. The maximum SRS of each filtered time history has been calculated to produce a relationship between the input and the response. Two new definitions are proposed as a result. The Peak Ratio (PR) is defined as the ratio between the maximum SRS and the temporal peak acceleration at each band center frequency. The ratio between the maximum SRS and the RMS acceleration is defined as the Energy Ratio (ER) at each band center frequency. The phase lag is estimated based on the time delay between the temporal peak acceleration at each band center frequency and the peak acceleration at the lowest band center frequency. This stochastic process has been applied to more than one hundred pyrotechnic shock test records to produce probabilistic definitions of the PR, ER, and the phase lag. The SRS is decomposed at each band center frequency using damped sinusoids, with the PR and the decays obtained by matching the ER of the damped sinusoids to the ER of the test data. The final step in this stochastic SRS decomposition process is the Monte Carlo (MC
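The two ratios defined above, and the damped-sinusoid basis used in the resynthesis, can be written down directly from their definitions. In this sketch `srs_max` is assumed to come from a separate SRS computation that is not shown, and the function names are illustrative:

```python
import math

def peak_and_energy_ratio(accel, srs_max):
    """Peak Ratio PR = max SRS / temporal peak acceleration, and
    Energy Ratio ER = max SRS / RMS acceleration, for one band-filtered
    time history `accel` (a sequence of acceleration samples)."""
    peak = max(abs(a) for a in accel)
    rms = math.sqrt(sum(a * a for a in accel) / len(accel))
    return srs_max / peak, srs_max / rms

def damped_sinusoid(amp, freq_hz, zeta, t):
    """One decaying-sinusoid basis function used to rebuild the input
    time history at a band center frequency."""
    w = 2.0 * math.pi * freq_hz
    return amp * math.exp(-zeta * w * t) * math.sin(w * t)
```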
Cascaded radiation pressure acceleration
Pei, Zhikun; Shen, Baifei; Zhang, Xiaomei; Wang, Wenpeng; Zhang, Lingang; Yi, Longqing; Shi, Yin; Xu, Zhizhan
2015-07-15
A cascaded radiation-pressure acceleration scheme is proposed. When an energetic proton beam is injected into an electrostatic field moving at light speed in a foil accelerated by light pressure, protons can be re-accelerated to much higher energy. An initial 3-GeV proton beam can be re-accelerated to 7 GeV while its energy spread is narrowed significantly, indicating a 4-GeV energy gain for one acceleration stage, as shown in one-dimensional simulations and analytical results. The validity of the method is further confirmed by two-dimensional simulations. This scheme provides a way to scale proton energy at the GeV level linearly with laser energy and is promising to obtain proton bunches at tens of gigaelectron-volts.
NASA Astrophysics Data System (ADS)
Pern, F. J.; Noufi, R.
2012-10-01
A step-stress accelerated degradation testing (SSADT) method was employed for the first time to evaluate the stability of CuInGaSe2 (CIGS) solar cells and device component materials in four Al-framed test structures encapsulated with an edge sealant and three kinds of backsheet or moisture barrier film for moisture ingress control. The SSADT exposure used a 15°C and then a 15% relative humidity (RH) increment step, beginning from 40°C/40%RH (T/RH = 40/40) to 85°C/70%RH (85/70) as of the moment. The voluminous data acquired and processed as of total DH = 3956 h with 85/70 = 704 h produced the following results. The best CIGS solar cells in sample Set-1 with a moisture-permeable TPT backsheet showed essentially identical I-V degradation trend regardless of the Al-doped ZnO (AZO) layer thickness ranging from standard 0.12 μm to 0.50 μm on the cells. No clear "stepwise" feature in the I-V parameter degradation curves corresponding to the SSADT T/RH/time profile was observed. Irregularity in I-V performance degradation pattern was observed with some cells showing early degradation at low T/RH < 55/55 and some showing large Voc, FF, and efficiency degradation due to increased series Rs (ohm-cm2) at T/RH >= 70/70. Results of (electrochemical) impedance spectroscopy (ECIS) analysis indicate degradation of the CIGS solar cells corresponded to increased series resistance Rs (ohm) and degraded parallel (minority carrier diffusion/recombination) resistance Rp, capacitance C, overall time constant Rp*C, and "capacitor quality" factor (CPE-P), which were related to the cells' p-n junction properties. Heating at 85/70 appeared to benefit the CIGS solar cells as indicated by the largely recovered CPE-P factor. Device component materials, Mo on soda lime glass (Mo/SLG), bilayer ZnO (BZO), AlNi grid contact, and CdS/CIGS/Mo/SLG in test structures with TPT showed notable to significant degradation at T/RH >= 70/70. At T/RH = 85/70, substantial blistering of BZO layers on CIGS
Particle acceleration and reconnection in the solar wind
NASA Astrophysics Data System (ADS)
Zank, G. P.; Hunana, P.; Mostafavi, P.; le Roux, J. A.; Webb, G. M.; Khabarova, O.; Cummings, A. C.; Stone, E. C.; Decker, R. B.
2016-03-01
An emerging paradigm for the dissipation of magnetic turbulence in the supersonic solar wind is via localized quasi-2D small-scale magnetic island reconnection processes. An advection-diffusion transport equation for a nearly isotropic particle distribution describes particle transport and energization in a region of interacting magnetic islands [1; 2]. The dominant charged particle energization processes are 1) the electric field induced by quasi-2D magnetic island merging, and 2) magnetic island contraction. The acceleration of charged particles in a "sea of magnetic islands" in a super-Alfvénic flow, and the energization of particles by combined diffusive shock acceleration (DSA) and downstream magnetic island reconnection processes are discussed.
Ball, Lisa Sherry
2013-11-30
Accelerated nursing students are ideal informants regarding abstract nursing concepts. How emotional intelligence (EI) is used in nursing remains a relatively elusive process that has yet to be empirically modeled. The purpose of this study was to generate a theoretical model that explains how EI is used in nursing by accelerated baccalaureate nursing students. Using a mixed methods grounded theory study design, theoretical sampling of EI scores directed sampling for individual interviews and focus groups. Caring for a human being emerged as the basic social process at the heart of which all other processes--Getting it; Being caring; The essence of professional nurse caring; Doing something to make someone feel better; and Dealing with difficulty--are interconnected. In addition to a theoretical explanation of the use of EI in nursing, this study corroborates findings from other qualitative studies in nursing and contributes a rich description of accelerated baccalaureate nursing students and an example of a mixed methods study design to the small but growing literature in these areas.
NASA Astrophysics Data System (ADS)
Iwashita, T.; Adachi, T.; Takayama, K.; Leo, K. W.; Arai, T.; Arakida, Y.; Hashimoto, M.; Kadokura, E.; Kawai, M.; Kawakubo, T.; Kubo, Tomio; Koyama, K.; Nakanishi, H.; Okazaki, K.; Okamura, K.; Someya, H.; Takagi, A.; Tokuchi, A.; Wake, M.
2011-07-01
The High Energy Accelerator Research Organization KEK digital accelerator (KEK-DA) is a renovation of the KEK 500 MeV booster proton synchrotron, which was shut down in 2006. The existing 40 MeV drift tube linac and rf cavities have been replaced by an electron cyclotron resonance (ECR) ion source embedded in a 200 kV high-voltage terminal and induction acceleration cells, respectively. A DA is, in principle, capable of accelerating any species of ion in all possible charge states. The KEK-DA is characterized by specific accelerator components such as a permanent magnet X-band ECR ion source, a low-energy transport line, an electrostatic injection kicker, an extraction septum magnet operated in air, combined-function main magnets, and an induction acceleration system. The induction acceleration method, integrating modern pulse power technology and state-of-the-art digital control, is crucial for the rapid-cycle KEK-DA. The key issues of beam dynamics associated with low-energy injection of heavy ions are beam loss caused by electron capture and stripping as a result of interactions with residual gas molecules, and the closed orbit distortion resulting from relatively high remanent fields in the bending magnets. Attractive applications of this accelerator in materials and biological sciences are discussed.
Colgate, S.A.
1958-05-27
An improvement is presented in linear accelerators for charged particles with respect to the stable focusing of the particle beam. The improvement consists of providing a radial electric field transverse to the accelerating electric fields and angularly introducing the beam of particles into the field. The result of the foregoing is a beam that spirals about the axis of the acceleration path. The combination of the electric fields and the angular motion of the particles cooperates to provide a stable and focused particle beam.
NASA Technical Reports Server (NTRS)
Kolyer, J. M.; Mann, N. R.
1978-01-01
Inherent weatherability is controlled by the three weather factors common to all exposure sites: insolation, temperature, and humidity. Emphasis was placed on the transparent encapsulant portion of miniature solar cell arrays by eliminating weathering effects on the substrate and circuitry (which are also parts of the encapsulant system). The most extensive data were for yellowing, which was measured conveniently and precisely. Considerable data also were obtained on tensile strength. Changes in these two properties after outdoor exposure were predicted very well from accelerated exposure data.
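Predictions of this sort typically rest on an Arrhenius-type temperature acceleration factor relating chamber-test rates to outdoor rates. The sketch below is illustrative only, with an assumed activation energy and assumed temperatures, not the authors' actual prediction model.

```python
import math

def arrhenius_acceleration_factor(ea_ev, t_use_k, t_test_k):
    """Ratio of degradation rates at the test vs. use temperature."""
    k_b = 8.617e-5  # Boltzmann constant in eV/K
    return math.exp((ea_ev / k_b) * (1.0 / t_use_k - 1.0 / t_test_k))

# Assumed values: 0.7 eV activation energy, 25 C outdoor use, 85 C chamber test.
af = arrhenius_acceleration_factor(0.7, 298.15, 358.15)  # roughly a 100x speed-up
```

With these assumed numbers a few days of chamber exposure stand in for many months outdoors, which is the premise behind extrapolating from accelerated exposure data.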
Probing Acceleration and Turbulence at Relativistic Shocks in Blazar Jets
NASA Astrophysics Data System (ADS)
Baring, Matthew G.; Böttcher, Markus; Summerlin, Errol J.
2016-09-01
Diffusive shock acceleration (DSA) at relativistic shocks is widely thought to be an important acceleration mechanism in various astrophysical jet sources, including radio-loud active galactic nuclei such as blazars. Such acceleration can produce the non-thermal particles that emit the broadband continuum radiation that is detected from extragalactic jets. An important recent development for blazar science is the ability of Fermi-LAT spectroscopy to pin down the shape of the distribution of the underlying non-thermal particle population. This paper highlights how multi-wavelength spectra spanning optical to X-ray to gamma-ray bands can be used to probe diffusive acceleration in relativistic, oblique, magnetohydrodynamic (MHD) shocks in blazar jets. Diagnostics on the MHD turbulence near such shocks are obtained using thermal and non-thermal particle distributions resulting from detailed Monte Carlo simulations of DSA. These probes are afforded by the characteristic property that the synchrotron νFν peak energy does not appear in the gamma-ray band above 100 MeV. We investigate self-consistently the radiative synchrotron and inverse Compton signatures of the simulated particle distributions. Important constraints on the diffusive mean free paths of electrons, and the level of electromagnetic field turbulence are identified for three different case study blazars, Mrk 501, BL Lacertae and AO 0235+164. The X-ray excess of AO 0235+164 in a flare state can be modelled as the signature of bulk Compton scattering of external radiation fields, thereby tightly constraining the energy-dependence of the diffusion coefficient for electrons. The concomitant interpretations that turbulence levels decline with remoteness from jet shocks, and the probable significant role for non-gyroresonant diffusion, are posited.
DIFFUSIVE ACCELERATION OF PARTICLES AT OBLIQUE, RELATIVISTIC, MAGNETOHYDRODYNAMIC SHOCKS
Summerlin, Errol J.; Baring, Matthew G. E-mail: baring@rice.edu
2012-01-20
Diffusive shock acceleration (DSA) at relativistic shocks is expected to be an important acceleration mechanism in a variety of astrophysical objects including extragalactic jets in active galactic nuclei and gamma-ray bursts. These sources remain good candidate sites for the generation of ultrahigh energy cosmic rays. In this paper, key predictions of DSA at relativistic shocks that are germane to the production of relativistic electrons and ions are outlined. The technique employed to identify these characteristics is a Monte Carlo simulation of such diffusive acceleration in test-particle, relativistic, oblique, magnetohydrodynamic (MHD) shocks. Using a compact prescription for diffusion of charges in MHD turbulence, this approach generates particle angular and momentum distributions at any position upstream or downstream of the shock. Simulation output is presented for both small angle and large angle scattering scenarios, and a variety of shock obliquities including superluminal regimes when the de Hoffmann-Teller frame does not exist. The distribution function power-law indices compare favorably with results from other techniques. They are found to depend sensitively on the mean magnetic field orientation in the shock, and the nature of MHD turbulence that propagates along fields in shock environs. An interesting regime of flat-spectrum generation is addressed; we provide evidence for it being due to shock drift acceleration, a phenomenon well known in heliospheric shock studies. The impact of these theoretical results on blazar science is outlined. Specifically, Fermi Large Area Telescope gamma-ray observations of these relativistic jet sources are providing significant constraints on important environmental quantities for relativistic shocks, namely, the field obliquity, the frequency of scattering, and the level of field turbulence.
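For reference, the nonrelativistic test-particle baseline against which such Monte Carlo power-law indices are usually compared follows directly from the Rankine-Hugoniot compression ratio. A minimal sketch of that textbook result (not the paper's relativistic calculation):

```python
def compression_ratio(mach, gamma=5.0 / 3.0):
    """Rankine-Hugoniot density compression of a nonrelativistic shock."""
    m2 = mach * mach
    return (gamma + 1.0) * m2 / ((gamma - 1.0) * m2 + 2.0)

def dsa_momentum_index(mach, gamma=5.0 / 3.0):
    """Test-particle DSA index q for f(p) ~ p^-q; q -> 4 for strong shocks."""
    r = compression_ratio(mach, gamma)
    return 3.0 * r / (r - 1.0)
```

A strong shock (Mach >> 1) gives r = 4 and q = 4, i.e. N(E) proportional to E^-2; relativistic and oblique shocks depart from this baseline, which is precisely what the Monte Carlo simulations quantify.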
Bell, J.S.
1959-09-15
An arrangement for the drift tubes in a linear accelerator is described whereby each drift tube acts to shield the particles from the influence of the accelerating field and focuses the particles passing through the tube. In one embodiment the drift tube is split longitudinally into quadrants supported along the axis of the accelerator by webs from a yoke, the quadrants, webs, and yoke being of magnetic material. A magnetic focusing action is produced by energizing a winding on each web to set up a magnetic field between adjacent quadrants. In the other embodiment the quadrants are electrically insulated from each other and have opposite polarity voltages on adjacent quadrants to provide an electric focusing field for the particles, with the quadrants spaced sufficiently close to shield the particles within the tube from the accelerating electric field.
Abbin, J.P. Jr.; Devaney, H.F.; Hake, L.W.
1979-08-29
The disclosure relates to an improved integrating acceleration switch of the type having a mass suspended within a fluid filled chamber, with the motion of the mass initially opposed by a spring and subsequently not so opposed.
Abbin, Jr., Joseph P.; Devaney, Howard F.; Hake, Lewis W.
1982-08-17
The disclosure relates to an improved integrating acceleration switch of the type having a mass suspended within a fluid filled chamber, with the motion of the mass initially opposed by a spring and subsequently not so opposed.
Christofilos, N.C.; Polk, I.J.
1959-02-17
Improvements in linear particle accelerators are described. A drift tube system for a linear ion accelerator reduces gap capacity between adjacent drift tube ends. This is accomplished by reducing the ratio of the diameter of the drift tube to the diameter of the resonant cavity. Concentration of magnetic field intensity at the longitudinal midpoint of the external surface of each drift tube is reduced by increasing the external drift tube diameter at the longitudinal center region.
TIME-DEPENDENT DIFFUSIVE SHOCK ACCELERATION IN SLOW SUPERNOVA REMNANT SHOCKS
Tang, Xiaping; Chevalier, Roger A. E-mail: rac5x@virginia.edu
2015-02-20
Recent gamma-ray observations show that middle-aged supernova remnants (SNRs) interacting with molecular clouds can be sources of both GeV and TeV emission. Models involving reacceleration of preexisting cosmic rays (CRs) in the ambient medium and direct interaction between SNR and molecular clouds have been proposed to explain the observed gamma-ray emission. For the reacceleration process, standard diffusive shock acceleration (DSA) theory in the test particle limit produces a steady-state particle spectrum that is too flat compared to observations, which suggests that the high-energy part of the observed spectrum has not yet reached a steady state. We derive a time-dependent DSA solution in the test particle limit for situations involving reacceleration of preexisting CRs in the preshock medium. Simple estimates with our time-dependent DSA solution plus a molecular cloud interaction model can reproduce the overall shape of the spectra of IC 443 and W44 from GeV to TeV energies through pure pi^0-decay emission. We allow for a power-law momentum dependence of the diffusion coefficient, finding that a power-law index of 0.5 is favored.
Van Atta, C.M.; Beringer, R.; Smith, L.
1959-01-01
A linear accelerator of heavy ions is described. The basic contributions of the invention consist of a method and apparatus for obtaining high energy particles of an element with an increased charge-to-mass ratio. The method comprises the steps of ionizing the atoms of an element, accelerating the resultant ions to an energy substantially equal to one MeV per nucleon, stripping orbital electrons from the accelerated ions by passing the ions through a curtain of elemental vapor disposed transversely of the path of the ions to provide a second charge-to-mass ratio, and finally accelerating the resultant stripped ions to a final energy of at least ten MeV per nucleon.
Sleiman, Mohamad; Kirchstetter, Thomas W.; Berdahl, Paul; Gilbert, Haley E.; Quelen, Sarah; Marlot, Lea; Preble, Chelsea V.; Chen, Sharon; Montalbano, Amandine; Rosseler, Olivier; Akbari, Hashem; Levinson, Ronnen; Destaillats, Hugo
2014-01-09
Highly reflective roofs can decrease the energy required for building air conditioning, help mitigate the urban heat island effect, and slow global warming. However, these benefits are diminished by soiling and weathering processes that reduce the solar reflectance of most roofing materials. Soiling results from the deposition of atmospheric particulate matter and the growth of microorganisms, each of which absorb sunlight. Weathering of materials occurs with exposure to water, sunlight, and high temperatures. This study developed an accelerated aging method that incorporates features of soiling and weathering. The method sprays a calibrated aqueous soiling mixture of dust minerals, black carbon, humic acid, and salts onto preconditioned coupons of roofing materials, then subjects the soiled coupons to cycles of ultraviolet radiation, heat and water in a commercial weatherometer. Three soiling mixtures were optimized to reproduce the site-specific solar spectral reflectance features of roofing products exposed for 3 years in a hot and humid climate (Miami, Florida); a hot and dry climate (Phoenix, Arizona); and a polluted atmosphere in a temperate climate (Cleveland, Ohio). A fourth mixture was designed to reproduce the three-site average values of solar reflectance and thermal emittance attained after 3 years of natural exposure, which the Cool Roof Rating Council (CRRC) uses to rate roofing products sold in the US. This accelerated aging method was applied to 25 products (single ply membranes, factory and field applied coatings, tiles, modified bitumen cap sheets, and asphalt shingles) and reproduced in 3 days the CRRC's 3-year aged values of solar reflectance. In conclusion, this accelerated aging method can be used to speed the evaluation and rating of new cool roofing materials.
Eriksen, Kristoffer A.; Hughes, John P.; Badenes, Carles; Fesen, Robert; Ghavamian, Parviz; Moffett, David; Plucinksy, Paul P.; Slane, Patrick; Rakowski, Cara E.; Reynoso, Estela M.
2011-02-20
Supernova remnants (SNRs) have long been assumed to be the source of cosmic rays (CRs) up to the 'knee' of the CR spectrum at 10^15 eV, accelerating particles to relativistic energies in their blast waves by the process of diffusive shock acceleration (DSA). Since CR nuclei do not radiate efficiently, their presence must be inferred indirectly. Previous theoretical calculations and X-ray observations show that CR acceleration significantly modifies the structure of the SNR and greatly amplifies the interstellar magnetic field. We present new, deep X-ray observations of the remnant of Tycho's supernova (SN 1572, henceforth Tycho), which reveal a previously unknown, strikingly ordered pattern of non-thermal high-emissivity stripes in the projected interior of the remnant, with spacing that corresponds to the gyroradii of 10^14-10^15 eV protons. Spectroscopy of the stripes shows the plasma to be highly turbulent on the (smaller) scale of the Larmor radii of TeV energy electrons. Models of the shock amplification of magnetic fields produce structure on the scale of the gyroradius of the highest energy CRs present, but they do not predict the highly ordered pattern we observe. We interpret the stripes as evidence for acceleration of particles to near the knee of the CR spectrum in regions of enhanced magnetic turbulence, while the observed highly ordered pattern of these features provides a new challenge to models of DSA.
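The gyroradius scale invoked here is easy to check. A sketch, assuming an amplified post-shock field of ~100 microgauss (a representative value for young SNRs, not one quoted in the abstract):

```python
def proton_gyroradius_pc(energy_ev, b_gauss):
    """Gyroradius of an ultra-relativistic proton, r_g ~ E / (e c B)."""
    e = 1.602e-19        # elementary charge, C
    c = 2.998e8          # speed of light, m/s
    m_per_pc = 3.086e16  # metres per parsec
    energy_j = energy_ev * 1.602e-19
    b_tesla = b_gauss * 1.0e-4
    return energy_j / (e * c * b_tesla) / m_per_pc

# A 10^15 eV (knee-energy) proton in an assumed 100 microgauss field:
r_g = proton_gyroradius_pc(1.0e15, 1.0e-4)  # ~0.01 pc
```

This reproduces the familiar rule of thumb r_g ~ 1 pc x (E/PeV)/(B/microgauss), so stripe spacings of order 0.01 pc indeed correspond to knee-energy protons in amplified fields.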
NASA Astrophysics Data System (ADS)
Eriksen, Kristoffer A.; Hughes, John P.; Badenes, Carles; Fesen, Robert; Ghavamian, Parviz; Moffett, David; Plucinksy, Paul P.; Rakowski, Cara E.; Reynoso, Estela M.; Slane, Patrick
2011-02-01
Supernova remnants (SNRs) have long been assumed to be the source of cosmic rays (CRs) up to the "knee" of the CR spectrum at 10^15 eV, accelerating particles to relativistic energies in their blast waves by the process of diffusive shock acceleration (DSA). Since CR nuclei do not radiate efficiently, their presence must be inferred indirectly. Previous theoretical calculations and X-ray observations show that CR acceleration significantly modifies the structure of the SNR and greatly amplifies the interstellar magnetic field. We present new, deep X-ray observations of the remnant of Tycho's supernova (SN 1572, henceforth Tycho), which reveal a previously unknown, strikingly ordered pattern of non-thermal high-emissivity stripes in the projected interior of the remnant, with spacing that corresponds to the gyroradii of 10^14-10^15 eV protons. Spectroscopy of the stripes shows the plasma to be highly turbulent on the (smaller) scale of the Larmor radii of TeV energy electrons. Models of the shock amplification of magnetic fields produce structure on the scale of the gyroradius of the highest energy CRs present, but they do not predict the highly ordered pattern we observe. We interpret the stripes as evidence for acceleration of particles to near the knee of the CR spectrum in regions of enhanced magnetic turbulence, while the observed highly ordered pattern of these features provides a new challenge to models of DSA.
NASA Astrophysics Data System (ADS)
Li, Xianfeng; Snyder, James A.; Stuart, Steven J.; Latour, Robert A.
2015-10-01
The recently developed "temperature intervals with global exchange of replicas" (TIGER2) accelerated sampling method is found to have inaccuracies when applied to systems with explicit solvation. This inaccuracy is due to the energy fluctuations of the solvent, which cause the sampling method to be less sensitive to the energy fluctuations of the solute. In the present work, the problem of the TIGER2 method is addressed in detail and a modification to the sampling method is introduced to correct this problem. The modified method is called "TIGER2 with solvent energy averaging," or TIGER2A. This new method overcomes the sampling problem with the TIGER2 algorithm and is able to closely approximate Boltzmann-weighted sampling of molecular systems with explicit solvation. The difference in performance between the TIGER2 and TIGER2A methods is demonstrated by comparing them against analytical results for simple one-dimensional models, against replica exchange molecular dynamics (REMD) simulations for sampling the conformation of alanine dipeptide and the folding behavior of (AAQAA)3 peptide in aqueous solution, and by comparing their performance in sampling the behavior of hen egg-white lysozyme in aqueous solution. The new TIGER2A method solves the problem caused by solvent energy fluctuations in TIGER2 while maintaining the two important characteristics of TIGER2, i.e., (1) using multiple replicas sampled at different temperature levels to help systems efficiently escape from local potential energy minima and (2) enabling the number of replicas used for a simulation to be independent of the size of the molecular system, thus providing an accelerated sampling method that can be used to efficiently sample systems considered too large for the application of conventional temperature REMD.
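For context, conventional temperature REMD (the method that TIGER2/TIGER2A is designed to approximate with far fewer replicas) swaps configurations between temperature levels using a Metropolis criterion. A generic sketch of that exchange test, not the TIGER2A algorithm itself:

```python
import math
import random

def remd_exchange_accept(e_i, e_j, t_i, t_j, u=None):
    """Metropolis test for swapping replicas i and j held at temperatures
    t_i, t_j (Kelvin) with potential energies e_i, e_j (kcal/mol)."""
    k_b = 0.0019872  # Boltzmann constant, kcal/(mol K)
    delta = (1.0 / (k_b * t_i) - 1.0 / (k_b * t_j)) * (e_i - e_j)
    if delta >= 0.0:
        return True          # energetically favorable swap: always accept
    if u is None:
        u = random.random()  # uniform random number in [0, 1)
    return u < math.exp(delta)
```

In explicit solvent the energies e_i, e_j are dominated by solvent fluctuations; TIGER2A's solvent energy averaging exists precisely to keep such exchange decisions sensitive to the solute.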
Contrast enhanced diffusion NMR: quantifying impurities in block copolymers for DSA
NASA Astrophysics Data System (ADS)
Wojtecki, Rudy; Porath, Ellie; Vora, Ankit; Nelson, Alshakim; Sanders, Daniel
2016-03-01
Block-copolymers (BCPs) offer the potential to meet the demands of next generation lithographic materials as they can self-assemble into scalable and tailorable nanometer scale patterns. In order for these materials to find widespread adoption, many challenges remain, including reproducible thin film morphology, for which the purity of block copolymers is critical. One source of impurities is the reaction conditions used to synthesize block copolymers, which may result in the formation of homopolymer as a side product that can impact the quality and the morphology of self-assembled features. Detection and characterization of these homopolymer impurities can be challenging by traditional methods of polymer characterization. We will discuss an alternate NMR-based method for the detection of homopolymer impurities in block copolymers: contrast enhanced diffusion ordered spectroscopy (CEDOSY). This experimental technique measures the diffusion coefficient of polymeric materials in solution, allowing the 'virtual' or spectroscopic separation of BCPs that contain homopolymer impurities. Furthermore, the contrast between the diffusion coefficients of mixtures containing BCPs and homopolymer impurities can be enhanced by taking advantage of the chemical mismatch of the two blocks to effectively increase the size of the BCP (and thus decrease its diffusion coefficient) through the formation of micelles using a cosolvent, while the size and diffusion coefficient of homopolymer impurities remain unchanged. This enables the spectroscopic separation of even small amounts of homopolymer impurities that are similar in size to BCPs. Herein, we present results using the CEDOSY technique with both a first-generation BCP system, poly(styrene)-b-poly(methyl methacrylate), and a second-generation high-χ system.
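The contrast mechanism rests on the Stokes-Einstein relation: the diffusion coefficient scales inversely with hydrodynamic radius, so micellization slows the BCP while leaving the homopolymer unchanged. A sketch with assumed, purely illustrative sizes and solvent viscosity (none of these numbers are from the study):

```python
import math

def stokes_einstein_d(temp_k, viscosity_pa_s, r_h_m):
    """Translational diffusion coefficient of a sphere, D = kT / (6 pi eta R)."""
    k_b = 1.381e-23  # Boltzmann constant, J/K
    return k_b * temp_k / (6.0 * math.pi * viscosity_pa_s * r_h_m)

# Assumed: a ~5 nm free chain vs. a ~25 nm micelle in a 0.6 mPa s solvent at 298 K.
d_chain = stokes_einstein_d(298.0, 6.0e-4, 5.0e-9)
d_micelle = stokes_einstein_d(298.0, 6.0e-4, 25.0e-9)
contrast = d_chain / d_micelle  # D scales as 1/R, so ~5x contrast here
```

With these assumptions the micelle diffuses five times more slowly than the free chain, which is the kind of enhanced contrast the cosolvent strategy exploits.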
Time dependent diffusive shock acceleration and its application to middle aged supernova remnants
NASA Astrophysics Data System (ADS)
Tang, Xiaping; Chevalier, Roger A.
2016-06-01
Recent gamma-ray observations show that middle aged supernova remnants (SNRs) interacting with molecular clouds (MCs) can be sources of both GeV and TeV emission. Based on the MC association, two scenarios have been proposed to explain the observed gamma-ray emission. In one, energetic cosmic ray (CR) particles escape from the SNR and then illuminate nearby MCs, producing gamma-ray emission, while the other involves direct interaction between the SNR and MC. In the direct interaction scenario, re-acceleration of pre-existing CRs in the ambient medium is investigated while particles injected from the thermal pool are neglected in view of the slow shock speeds in middle aged SNRs. However, standard diffusive shock acceleration (DSA) theory produces a steady state particle spectrum that is too flat compared to observations, which suggests that the high energy part of the observed spectrum has not yet reached a steady state. We derive a time dependent DSA solution in the test particle limit for the re-acceleration of pre-existing CRs and show that it is capable of reproducing the observed gamma-ray emission in SNRs like IC 443 and W44, in the context of an MC interaction model. We also provide a simple physical picture to understand the time dependent DSA spectrum. A spatially averaged diffusion coefficient around the SNR can be estimated through fitting the gamma-ray spectrum. The spatially averaged diffusion coefficient in middle aged SNRs like IC 443 and W44 is estimated to be ~10^25 cm^2/s at ~1 GeV, which is between the Bohm limit and the interstellar value.
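The Bohm limit referred to here can be checked with an order-of-magnitude sketch (assumed ~3 microgauss ambient field; the proton is treated as ultra-relativistic, so this is rough at 1 GeV):

```python
def bohm_diffusion_cm2_s(energy_ev, b_gauss):
    """Bohm-limit diffusion coefficient D = r_g c / 3 for a relativistic
    proton with gyroradius r_g ~ E / (e c B); result in cm^2/s."""
    e = 1.602e-19  # elementary charge, C
    c = 2.998e8    # speed of light, m/s
    r_g_m = energy_ev * 1.602e-19 / (e * c * b_gauss * 1.0e-4)
    return r_g_m * c / 3.0 * 1.0e4  # convert m^2/s to cm^2/s

# ~1 GeV proton in an assumed ~3 microgauss field:
d_bohm = bohm_diffusion_cm2_s(1.0e9, 3.0e-6)  # of order 10^22 cm^2/s
```

The fitted ~10^25 cm^2/s indeed sits between this Bohm estimate (~10^22) and the commonly quoted interstellar value (~10^28) at GeV energies.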
Re-acceleration Model for Radio Relics with Spectral Curvature
NASA Astrophysics Data System (ADS)
Kang, Hyesung; Ryu, Dongsu
2016-05-01
Most of the observed features of radio gischt relics, such as spectral steepening across the relic width and a power-law-like integrated spectrum, can be adequately explained by a diffusive shock acceleration (DSA) model in which relativistic electrons are (re-)accelerated at shock waves induced in the intracluster medium. However, the steep spectral curvature in the integrated spectrum above ~2 GHz detected in some radio relics, such as the Sausage relic in cluster CIZA J2242.8+5301, may not be interpreted by the simple radiative cooling of postshock electrons. In order to understand such steepening, we consider here a model in which a spherical shock sweeps through and then exits out of a finite-size cloud with fossil relativistic electrons. The ensuing integrated radio spectrum is expected to steepen much more than predicted for aging postshock electrons, since the re-acceleration stops after the cloud-crossing time. Using DSA simulations that are intended to reproduce radio observations of the Sausage relic, we show that both the integrated radio spectrum and the surface brightness profile can be fitted reasonably well, if a shock of speed u_s ~ 2.5-2.8 x 10^3 km/s and a sonic Mach number M_s ~ 2.7-3.0 traverses a fossil cloud for ~45 Myr, and the postshock electrons cool further for another ~10 Myr. This attempt illustrates that steep curved spectra of some radio gischt relics could be modeled by adjusting the shape of the fossil electron spectrum and adopting the specific configuration of the fossil cloud.
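The quoted shock speed and Mach number jointly imply a preshock sound speed and temperature, which serves as a quick consistency check against typical intracluster medium conditions. A sketch with an assumed mean molecular weight:

```python
def preshock_temperature_k(shock_speed_km_s, mach):
    """Temperature implied by c_s = u_s / M_s, using c_s^2 = gamma k T / (mu m_p)."""
    gamma, mu = 5.0 / 3.0, 0.6       # monatomic gas; assumed mean molecular weight
    k_b, m_p = 1.381e-23, 1.673e-27  # Boltzmann constant and proton mass, SI
    c_s = shock_speed_km_s * 1.0e3 / mach  # preshock sound speed, m/s
    return c_s * c_s * mu * m_p / (gamma * k_b)

# Mid-range of the values quoted above: u_s ~ 2.65e3 km/s, M_s ~ 2.85.
t_pre = preshock_temperature_k(2.65e3, 2.85)  # a few x 10^7 K, typical ICM
```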
Radioisotope Dating with Accelerators.
ERIC Educational Resources Information Center
Muller, Richard A.
1979-01-01
Explains a new method of detecting radioactive isotopes by counting their accelerated ions rather than the atoms that decay during the counting period. This method increases the sensitivity by several orders of magnitude, and allows one to find the ages of much older and smaller samples. (GA)
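The age computation behind accelerator radiocarbon dating is unchanged by the detection method; only the sensitivity improves. A sketch using the conventional Libby mean life:

```python
import math

def radiocarbon_age_yr(fraction_modern):
    """Conventional radiocarbon age from the 14C/12C ratio expressed as a
    fraction of the modern standard; 8033 yr is the Libby mean life."""
    return -8033.0 * math.log(fraction_modern)

# A sample retaining half its modern 14C gives one Libby half-life:
age_half = radiocarbon_age_yr(0.5)  # ~5568 yr
```

Counting ions directly rather than waiting for decays is what pushes the measurable fraction_modern far lower, and hence the datable age range far higher, than decay counting allows.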
Quinto, Francesca; Golser, Robin; Lagos, Markus; Plaschke, Markus; Schäfer, Thorsten; Steier, Peter; Geckeis, Horst
2015-06-01
(236)U, (237)Np, and Pu isotopes and (243)Am were determined in ground- and seawater samples at levels below ppq (fg/g) with a maximum sample size of 250 g. Such high sensitivity was possible by using accelerator mass spectrometry (AMS) at the Vienna Environmental Research Accelerator (VERA) with extreme selectivity and recently improved efficiency and a significantly simplified separation chemistry. The use of nonisotopic tracers was investigated in order to allow for the determination of (237)Np and (243)Am, for which isotopic tracers either are rarely available or suffer from various isobaric mass interferences. In the present study, actinides were concentrated from the sample matrix via iron hydroxide coprecipitation and measured sequentially without previous chemical separation from each other. The analytical method was validated by the analysis of the Reference Material IAEA 443 and was applied to groundwater samples from the Colloid Formation and Migration (CFM) project at the deep underground rock laboratory of the Grimsel Test Site (GTS) and to natural water samples affected solely by global fallout. While the precision of the presented analytical method is somewhat limited by the use of nonisotopic spikes, the sensitivity allows for the determination of ~10^5 atoms in a sample. This provides, e.g., the capability to study the long-term release and retention of actinide tracers in field experiments as well as the transport of actinides in a variety of environmental systems by tracing contamination from global fallout.
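To put the ~10^5-atom sensitivity in perspective, converting atom counts to mass shows how far below the ppq (fg/g) level this reaches. A back-of-the-envelope sketch:

```python
def atoms_to_grams(n_atoms, molar_mass_g_mol):
    """Mass of a given number of atoms via Avogadro's number."""
    n_avogadro = 6.022e23
    return n_atoms / n_avogadro * molar_mass_g_mol

# 1e5 atoms of 236U detected in a 250 g water sample:
mass_g = atoms_to_grams(1.0e5, 236.0)  # ~4e-17 g, i.e. tens of attograms
conc = mass_g / 250.0                  # g/g, orders of magnitude below 1e-15 (ppq)
```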
NASA Technical Reports Server (NTRS)
Vlahos, L.; Machado, M. E.; Ramaty, R.; Murphy, R. J.; Alissandrakis, C.; Bai, T.; Batchelor, D.; Benz, A. O.; Chupp, E.; Ellison, D.
1986-01-01
Data is compiled from Solar Maximum Mission and Hinothori satellites, particle detectors in several satellites, ground based instruments, and balloon flights in order to answer fundamental questions relating to: (1) the requirements for the coronal magnetic field structure in the vicinity of the energization source; (2) the height (above the photosphere) of the energization source; (3) the time of energization; (4) the transition between coronal heating and flares; (5) evidence for purely thermal, purely nonthermal and hybrid type flares; (6) the time characteristics of the energization source; (7) whether every flare accelerates protons; (8) the location of the interaction site of the ions and relativistic electrons; (9) the energy spectra for ions and relativistic electrons; (10) the relationship between particles at the Sun and interplanetary space; (11) evidence for more than one acceleration mechanism; (12) whether there is a single mechanism that will accelerate particles to all energies and also heat the plasma; and (13) how fast the existing mechanisms accelerate electrons up to several MeV and ions to 1 GeV.
ERIC Educational Resources Information Center
Ford, William J.
2010-01-01
This article focuses on the accelerated associate degree program at Ivy Tech Community College (Indiana) in which low-income students will receive an associate degree in one year. The three-year pilot program is funded by a $2.3 million grant from the Lumina Foundation for Education in Indianapolis and a $270,000 grant from the Indiana Commission…
Pope, K.E.
1958-01-01
This patent relates to an improved acceleration integrator and more particularly to apparatus of this nature which is gyrostabilized. The device may be used to sense the attainment by an airborne vehicle of a predetermined velocity or distance along a given vector path. In its broad aspects, the acceleration integrator utilizes a magnetized element rotatably driven by a synchronous motor and having a cylindrical flux gap, and a restrained eddy-current drag cap disposed to move into the gap. The angular velocity imparted to the rotatable cap shaft is transmitted in a positive manner to the magnetized element through a servo feedback loop. In this manner the resultant angular velocity of the cap is proportional to the acceleration of the housing, and means may be used to measure the velocity and operate switches at a pre-set magnitude. To make the above-described device sensitive to acceleration in only one direction, the magnetized element forms the spinning inertia element of a free gyroscope, and the outer housing functions as a gimbal of the gyroscope.
Wang, Zhehui; Barnes, Cris W.
2002-01-01
An apparatus has been invented for the acceleration of a plasma. It has coaxially positioned, constant-diameter cylindrical electrodes (a positive-polarity inner electrode and a negatively charged outer electrode) which are modified to converge at the plasma output end of the annulus between the electrodes, achieving improved particle flux per unit of power.
NASA Astrophysics Data System (ADS)
Ouaknin, Gaddiel; Laachi, Nabil; Delaney, Kris; Fredrickson, Glenn; Gibou, Frederic
2016-03-01
Directed self-assembly using block copolymers for positioning vertical interconnect access in integrated circuits relies on the proper shape of a confined domain in which polymers will self-assemble into the targeted design. Finding that shape, i.e., solving the inverse problem, is currently mainly based on trial and error approaches. We introduce a level-set based algorithm that makes use of a shape optimization strategy coupled with self-consistent field theory to solve the inverse problem in an automated way. It is shown that optimal shapes are found for different targeted topologies with accurate placement and distances between the different components.
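The core numerical primitive of such a level-set strategy is advection of the implicit shape under a normal velocity field. A generic first-order sketch of that update (not the authors' SCFT-coupled optimization scheme):

```python
import numpy as np

def level_set_step(phi, v_n, dx, dt):
    """One explicit step of phi_t + v_n |grad phi| = 0 using central differences."""
    g0, g1 = np.gradient(phi, dx)
    return phi - dt * v_n * np.sqrt(g0 * g0 + g1 * g1)

# Example: grow a circle of radius 0.5 outward with unit normal speed.
x = np.linspace(-1.0, 1.0, 201)
xx, yy = np.meshgrid(x, x)
phi0 = np.sqrt(xx ** 2 + yy ** 2) - 0.5  # signed distance to the circle
phi1 = level_set_step(phi0, v_n=1.0, dx=x[1] - x[0], dt=0.1)
# the zero contour of phi1 now sits near radius 0.6
```

In a shape-optimization loop, v_n would be replaced by the shape derivative of the objective (here, supplied by self-consistent field theory), so the confining template deforms toward the shape that yields the targeted self-assembled pattern.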
Biomedical accelerator mass spectrometry
NASA Astrophysics Data System (ADS)
Freeman, Stewart P. H. T.; Vogel, John S.
1995-05-01
Ultrasensitive SIMS with accelerator based spectrometers has recently begun to be applied to biomedical problems. Certain very long-lived radioisotopes of very low natural abundances can be used to trace metabolism at environmental dose levels (≥ zmol in mg samples). 14C in particular can be employed to label a myriad of compounds. Competing technologies typically require super environmental doses that can perturb the system under investigation, followed by uncertain extrapolation to the low dose regime. 41Ca and 26Al are also used as elemental tracers. Given the sensitivity of the accelerator method, care must be taken to avoid contamination of the mass spectrometer and the apparatus employed in prior sample handling including chemical separation. This infant field comprises the efforts of a dozen accelerator laboratories. The Center for Accelerator Mass Spectrometry has been particularly active. In addition to collaborating with groups further afield, we are researching the kinetics and binding of genotoxins in-house, and we support innovative uses of our capability in the disciplines of chemistry, pharmacology, nutrition and physiology within the University of California. The field can be expected to grow further given the numerous potential applications and the efforts of several groups and companies to integrate the accelerator technology more fully into biomedical research programs; the development of miniaturized accelerator systems and ion sources capable of interfacing to conventional HPLC and GC, etc. apparatus for complementary chemical analysis is anticipated for biomedical laboratories.
NASA Astrophysics Data System (ADS)
Inoue, Yoshiyuki; Tanaka, Yasuyuki T.
2016-09-01
Relativistic jets launched by supermassive black holes, so-called active galactic nuclei (AGNs), are known as the most energetic particle accelerators in the universe. However, the baryon loading efficiency onto the jets from the accretion flows and their particle acceleration efficiencies have been veiled in mystery. With the latest data sets, we perform multi-wavelength spectral analysis of quiescent spectra of 13 TeV gamma-ray detected high-frequency-peaked BL Lacs (HBLs) following a one-zone static synchrotron self-Compton (SSC) model. We determine the minimum, cooling break, and maximum electron Lorentz factors following the diffusive shock acceleration (DSA) theory. We find that HBLs have P_B/P_e ~ 6.3 x 10^-3 and a radiative efficiency ε_rad,jet ~ 6.7 x 10^-4, where P_B and P_e are the Poynting and electron power, respectively. By assuming 10 leptons per proton, the jet power relates to the black hole mass as P_jet/L_Edd ~ 0.18, where P_jet and L_Edd are the jet power and the Eddington luminosity, respectively. Under our model assumptions, we further find that HBLs have a jet production efficiency of η_jet ~ 1.5 and a mass loading efficiency of ξ_jet ≳ 5 x 10^-2. We also investigate the particle acceleration efficiency in the blazar zone by including the most recent Swift/BAT data. Our samples ubiquitously have particle acceleration efficiencies of η_g ~ 10^4.5, which is too inefficient to accelerate particles up to the ultra-high-energy cosmic-ray (UHECR) regime. This implies that the UHECR acceleration sites should not be the blazar zones of quiescent low-power AGN jets, if one assumes the one-zone SSC model based on the DSA theory.
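The quoted Eddington ratio translates directly into an absolute jet power once a black hole mass is assumed. An illustrative sketch with an assumed 10^9 solar-mass black hole (the mass value is not from the paper):

```python
def eddington_luminosity_erg_s(m_bh_solar):
    """Eddington luminosity, L_Edd ~ 1.26e38 (M / M_sun) erg/s."""
    return 1.26e38 * m_bh_solar

l_edd = eddington_luminosity_erg_s(1.0e9)  # 1.26e47 erg/s
p_jet = 0.18 * l_edd                       # ~2.3e46 erg/s at the quoted ratio
```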
GAMMA-RAY EMISSION OF ACCELERATED PARTICLES ESCAPING A SUPERNOVA REMNANT IN A MOLECULAR CLOUD
Ellison, Donald C.; Bykov, Andrei M. E-mail: byk@astro.ioffe.ru
2011-04-20
We present a model of gamma-ray emission from core-collapse supernovae (SNe) originating from the explosions of massive young stars. The fast forward shock of the supernova remnant (SNR) can accelerate particles by diffusive shock acceleration (DSA) in a cavern blown by a strong, pre-SN stellar wind. As a fundamental part of nonlinear DSA, some fraction of the accelerated particles escape the shock and interact with a surrounding massive dense shell producing hard photon emission. To calculate this emission, we have developed a new Monte Carlo technique for propagating the cosmic rays (CRs) produced by the forward shock of the SNR, into the dense, external material. This technique is incorporated in a hydrodynamic model of an evolving SNR which includes the nonlinear feedback of CRs on the SNR evolution, the production of escaping CRs along with those that remain trapped within the remnant, and the broadband emission of radiation from trapped and escaping CRs. While our combined CR-hydro-escape model is quite general and applies to both core collapse and thermonuclear SNe, the parameters we choose for our discussion here are more typical of SNRs from very massive stars whose emission spectra differ somewhat from those produced by lower mass progenitors directly interacting with a molecular cloud.
Medina, L Carolina; Sartain, Jerry B; Obreza, Thomas A; Hall, William L; Thiex, Nancy J
2014-01-01
Several technologies have been proposed to characterize the nutrient release and availability patterns of enhanced-efficiency fertilizers (EEFs), especially slow-release fertilizers (SRFs) and controlled-release fertilizers (CRFs), during the last few decades. These technologies have been developed mainly by manufacturers and are product-specific, based on the regulation and analysis of each EEF product. Despite previous efforts to characterize EEF materials, no validated method exists to assess their nutrient release patterns. However, the increased use of EEFs in specialty and nonspecialty markets requires an appropriate method to verify nutrient claims and material performance. A series of experiments was conducted to evaluate the effect of temperature, fertilizer test portion size, and extraction time on the performance of a 74 h accelerated laboratory extraction method to measure SRF and CRF nutrient release profiles. Temperature was the only factor that influenced nutrient release rate, with a highly marked effect for phosphorus and, to a lesser extent, for nitrogen (N) and potassium. Based on the results, the optimal extraction temperature schedule was: Extraction 1, 2 h at 25 degrees C; Extraction 2, 2 h at 50 degrees C; Extraction 3, 20 h at 55 degrees C; and Extraction 4, 50 h at 60 degrees C. Ruggedness of the method was tested by evaluating the effect of small changes in seven selected factors on method behavior using a fractional multifactorial design. Overall, the method showed ruggedness for measuring N release rates of coated CRFs.
Caporaso, George J.; Sampayan, Stephen E.; Kirbie, Hugh C.
2007-02-06
A compact linear accelerator having at least one strip-shaped Blumlein module which guides a propagating wavefront between first and second ends and controls the output pulse at the second end. Each Blumlein module has first, second, and third planar conductor strips, with a first dielectric strip between the first and second conductor strips, and a second dielectric strip between the second and third conductor strips. Additionally, the compact linear accelerator includes a high voltage power supply connected to charge the second conductor strip to a high potential, and a switch for switching the high potential in the second conductor strip to at least one of the first and third conductor strips so as to initiate a propagating reverse polarity wavefront(s) in the corresponding dielectric strip(s).
NASA Astrophysics Data System (ADS)
Öz, E.; Batsch, F.; Muggli, P.
2016-09-01
A method to accurately measure the density of Rb vapor is described. We plan on using this method for the Advanced Wakefield (AWAKE) project (Assmann et al., 2014 [1]) at CERN, which will be the world's first proton-driven plasma wakefield experiment. The method is similar to the hook method (Marlow, 1967 [2]) and has been described in great detail in the work by Hill et al. (1986) [3]. In this method a cosine fit is applied to the interferogram to obtain a relative accuracy on the order of 1% for the vapor density-length product. A single-mode, fiber-based, Mach-Zehnder interferometer will be built and used near the ends of the 10 meter-long AWAKE plasma source to be able to make accurate relative density measurements between these two locations. These can then be used to infer the vapor density gradient along the AWAKE plasma source and also to change it to the value desired for the plasma wakefield experiment. Here we describe the plan in detail and show preliminary results obtained using a prototype 8 cm long novel Rb vapor cell.
ERIC Educational Resources Information Center
Schneider, Jenifer Jasinski; King, James R.; Kozdras, Deborah; Minick, Vanessa; Welsh, James L.
2012-01-01
During a teaching methods field experience, we initiated several processes to facilitate pre-service teachers' reflection, empowerment, and performance as they learned to teach students. Through an ethno-theater presentation and subsequent revisions to an ethno-theater script, we turned the reflective lens on ourselves as we discovered instances…
Ichikawa, Kazuki; Morishita, Shinichi
2014-01-01
K-means clustering has been widely used to gain insight into biological systems from large-scale life science data. To quantify the similarities among biological data sets, Pearson correlation distance and standardized Euclidean distance are used most frequently; however, optimization methods have been largely unexplored. These two distance measurements are equivalent in the sense that they yield the same k-means clustering result for identical sets of k initial centroids. Thus, an efficient algorithm used for one is applicable to the other. Several optimization methods are available for the Euclidean distance and can be used for processing the standardized Euclidean distance; however, they are not customized for this context. We instead approached the problem by studying the properties of the Pearson correlation distance, and we invented a simple but powerful heuristic method for markedly pruning unnecessary computation while retaining the final solution. Tests using real biological data sets with 50-60K vectors of dimensions 10-2001 (~400 MB in size) demonstrated marked reduction in computation time for k = 10-500 in comparison with other state-of-the-art pruning methods such as Elkan's and Hamerly's algorithms. The BoostKCP software is available at http://mlab.cb.k.u-tokyo.ac.jp/~ichikawa/boostKCP/.
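The equivalence claimed above, that Pearson correlation distance and standardized Euclidean distance yield the same k-means result for identical initial centroids, rests on the identity ||z(x) - z(y)||² = 2n(1 - r) for z-scored vectors, which makes the two distances monotonically related. A quick numerical check of that identity (a sketch, not part of the BoostKCP code):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = rng.normal(size=100)

def zscore(v):
    """Standardize to zero mean and unit (population) variance."""
    return (v - v.mean()) / v.std()

r = np.corrcoef(x, y)[0, 1]                 # Pearson correlation
d2 = np.sum((zscore(x) - zscore(y)) ** 2)   # squared Euclidean on z-scored vectors

n = len(x)
# ||z(x) - z(y)||^2 = 2 n (1 - r): Euclidean distance after standardization
# is a monotone function of the correlation distance (1 - r).
assert np.isclose(d2, 2 * n * (1 - r))
```

Because the mapping is monotone, any pruning bound valid for one distance transfers to the other, which is why an algorithm optimized for one applies to both.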
Chuang, Ya-Hui; Zhang, Yingjie; Zhang, Wei; Boyd, Stephen A; Li, Hui
2015-07-24
Land application of biosolids and irrigation with reclaimed water in agricultural production could result in accumulation of pharmaceuticals in vegetable produce. To better assess the potential human health impact from long-term consumption of pharmaceutical-contaminated vegetables, it is important to accurately quantify the amount of pharmaceuticals accumulated in vegetables. In this study, a quick, easy, cheap, effective, rugged and safe (QuEChERS) method was developed and optimized to extract multiple classes of pharmaceuticals from vegetables, which were subsequently quantified by liquid chromatography coupled to tandem mass spectrometry. For the eleven target pharmaceuticals in celery and lettuce, the extraction recovery of the QuEChERS method ranged from 70.1 to 118.6% with relative standard deviation <20%, and the method detection limit was achieved at the levels of nanograms of pharmaceuticals per gram of vegetables. The results revealed that the performance of the QuEChERS method was comparable to, or better than that of accelerated solvent extraction (ASE) method for extraction of pharmaceuticals from plants. The two optimized extraction methods were applied to quantify the uptake of pharmaceuticals by celery and lettuce growing hydroponically. The results showed that all the eleven target pharmaceuticals could be absorbed by the vegetables from water. Compared to the ASE method, the QuEChERS method offers the advantages of short time and reduced costs of sample preparation, and less amount of organic solvents used. The established QuEChERS method could be used to determine the accumulation of multiple classes of pharmaceutical residues in vegetables and other plants, which is needed to evaluate the quality and safety of agricultural produce consumed by humans. PMID:26065569
ION ACCELERATION IN NON-RELATIVISTIC ASTROPHYSICAL SHOCKS
Gargate, L.; Spitkovsky, A.
2012-01-01
We explore the physics of shock evolution and particle acceleration in non-relativistic collisionless shocks using hybrid simulations. We analyze a wide range of physical parameters relevant to the acceleration of cosmic rays (CRs) in astrophysical shock scenarios. We show that there are fundamental differences between high and low Mach number shocks in terms of the electromagnetic turbulence generated in the pre-shock zone; dominant modes are resonant with the streaming CRs in the low Mach number regime, while both resonant and non-resonant modes are present for high Mach numbers. Energetic power-law tails for ions in the downstream plasma account for up to 15% of the incoming upstream flow energy, distributed over ~5% of the particles in a power law with slope -2 ± 0.2 in energy. Quasi-parallel shocks with θ ≤ 45° are good ion accelerators, while power laws are greatly suppressed for quasi-perpendicular shocks, θ > 45°. The efficiency of conversion of flow energy into the energy of accelerated particles peaks at θ = 15°-30° and M_A = 6, and decreases for higher Mach numbers, down to ~2% for M_A = 31. Accelerated particles are produced by diffusive shock acceleration (DSA) and by shock drift acceleration (SDA) mechanisms, with the SDA contribution to the overall energy gain increasing with magnetic inclination. We also present a direct comparison between hybrid and fully kinetic particle-in-cell results at early times. In supernova remnant (SNR) shocks, particle acceleration will be significant for low Mach number quasi-parallel flows (M_A < 30, θ < 45°). This finding underscores the need for an effective magnetic amplification mechanism in SNR shocks.
The Brookhaven National Laboratory Accelerator Test Facility
Batchelor, K.
1992-09-01
The Brookhaven National Laboratory Accelerator Test Facility comprises a 50 MeV traveling wave electron linear accelerator utilizing a high gradient, photo-excited, radiofrequency electron gun as an injector, and an experimental area for the study of new acceleration methods or advanced radiation sources using free electron lasers. Early operation of the linear accelerator system, including calculated and measured beam parameters, is presented together with the experimental program for accelerator physics and free electron laser studies.
NASA Astrophysics Data System (ADS)
Choi, Kihwan; Li, Ruijiang; Nam, Haewon; Xing, Lei
2014-06-01
As a solution to iterative CT image reconstruction, first-order methods are prominent for their large-scale capability and fast convergence rate O(1/k^2). In practice, the CT system matrix with a large condition number may lead to slow convergence speed despite the theoretically promising upper bound. The aim of this study is to develop a Fourier-based scaling technique to enhance the convergence speed of first-order methods applied to CT image reconstruction. Instead of working in the projection domain, we transform the projection data and construct a data fidelity model in Fourier space. Inspired by the filtered backprojection formalism, the data are appropriately weighted in Fourier space. We formulate an optimization problem based on weighted least-squares in Fourier space and total-variation (TV) regularization in image space for parallel-beam, fan-beam and cone-beam CT geometry. To achieve the maximum computational speed, the optimization problem is solved using a fast iterative shrinkage-thresholding algorithm with backtracking line search and GPU implementation of projection/backprojection. The performance of the proposed algorithm is demonstrated through a series of digital simulation and experimental phantom studies. The results are compared with existing TV-regularized techniques based on statistics-based weighted least-squares as well as the basic algebraic reconstruction technique. The proposed Fourier-based compressed sensing (CS) method significantly improves both the image quality and the convergence rate compared to the existing CS techniques.
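The fast iterative shrinkage-thresholding algorithm mentioned above has the following generic form (the Beck-Teboulle scheme whose momentum rule gives the O(1/k^2) rate). This is a toy LASSO instance with a fixed step size, not the Fourier-weighted, TV-regularized CT solver of the paper:

```python
import numpy as np

def fista(A, b, lam, n_iter=200):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 with the O(1/k^2) FISTA scheme."""
    L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)
        z = y - grad / L
        x_new = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)              # momentum step
        x, t = x_new, t_new
    return x

rng = np.random.default_rng(1)
A = rng.normal(size=(40, 80))
x_true = np.zeros(80)
x_true[:5] = 1.0                 # sparse ground truth
b = A @ x_true                   # noiseless measurements
x_hat = fista(A, b, lam=0.1)
```

A backtracking line search, as used in the paper, would replace the fixed 1/L step when the Lipschitz constant is unknown or pessimistic.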
Maeda, Keiichi
2012-10-20
In this paper, we develop a model for the radio and X-ray emissions from the Type IIb supernova (SN IIb) 2011dh in the first 100 days after the explosion, and investigate a spectrum of relativistic electrons accelerated at a strong shock wave. The widely accepted theory of particle acceleration, the so-called diffusive shock acceleration (DSA) or Fermi mechanism, requires seed electrons with modest energy, γ ~ 1-100, and little is known about this pre-acceleration mechanism. We derive the energy distribution of relativistic electrons in this pre-accelerated energy regime. We find that the efficiency of the electron acceleration must be low, i.e., ε_e ≲ 10^-2 as compared to the conventionally assumed value of ε_e ~ 0.1. Furthermore, independent of the low value of ε_e, we find that the X-ray luminosity cannot be attributed to any of the suggested emission mechanisms as long as these electrons follow the conventionally assumed single power-law distribution. A consistent view between the radio and X-ray can only be obtained if the pre-acceleration injection spectrum peaks at γ ~ 20-30 and only a fraction of these electrons eventually experience DSA-like acceleration toward higher energies; the radio and X-ray properties are then explained through the synchrotron and inverse Compton mechanisms, respectively. Our findings support the idea that the pre-acceleration of the electrons is coupled with the generation/amplification of the magnetic field.
Advanced concepts for acceleration
Keefe, D.
1986-07-01
Selected examples of advanced accelerator concepts are reviewed. Plasma accelerators such as the plasma beat-wave accelerator, the plasma wake-field accelerator, and the plasma grating accelerator are discussed, particularly as concepts for accelerating relativistic electrons or positrons. Also covered are the pulsed electron beam, the pulsed laser accelerator, the inverse Cherenkov accelerator, the inverse free-electron laser, switched radial-line accelerators, and the two-beam accelerator. Advanced concepts for ion acceleration discussed include the electron ring accelerator, excitation of waves on intense electron beams, and two-wave combinations.
Accelerators and the Accelerator Community
Malamud, Ernest; Sessler, Andrew
2008-06-01
In this paper, standing back--looking from afar--and adopting a historical perspective, the field of accelerator science is examined. How it grew, what are the forces that made it what it is, where it is now, and what it is likely to be in the future are the subjects explored. Clearly, a great deal of personal opinion is invoked in this process.
NASA Astrophysics Data System (ADS)
Kagadis, G. C.; Diamantopoulos, A.; Samaras, N.; Daskalakis, A.; Spyridonos, P.; Katsanos, K.; Karnabatidis, D.; Sourgiadaki, E.; Cavouras, D.; Siablis, D.; Nikiforidis, G. C.
2009-05-01
In-vivo dynamic visualization and accurate quantification of vascular networks is a prerequisite of crucial importance in both therapeutic angiogenesis and tumor anti-angiogenesis studies. A user-independent computerized tool was developed for the automated segmentation and quantitative assessment of in-vivo acquired DSA images. Automatic vessel assessment was performed employing the concept of the image structural tensor. Initially, the vasculature was estimated according to the largest eigenvalue of the structural tensor. The resulting eigenvalue matrix was treated as a gray-level matrix from which the vessels were gradually segmented and then categorized into three main sub-groups: large, medium, and small-size vessels. The histogram percentiles corresponding to 85%, 65% and 47% of the prime eigenvalue gray-matrix were optimally found to give the thresholds T1, T2 and T3, respectively, for extracting vessels of different sizes. The proposed methodology was tested on a series of DSA images in both normal rabbits (group A) and rabbits with experimentally induced chronic hindlimb ischemia (group B). As a result, an automated computerized tool was developed to process images without any user intervention in either experimental or clinical studies. Specifically, a higher total vascular area and length were calculated in group B compared to group A (p=0.0242 and p=0.0322, respectively), which is in accordance with the fact that significantly more collateral arteries are developed during the physiological response to the stimulus of ischemia.
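The eigenvalue-and-percentile pipeline described above can be sketched as follows: build the 2D structure tensor from image gradients, take its largest eigenvalue per pixel, and threshold at the 85th/65th/47th percentiles. The box smoothing and the synthetic test image are simplifying assumptions, not details from the paper:

```python
import numpy as np

def box_smooth(a, w=1):
    """(2w+1)x(2w+1) box filter via a summed-area table, edge-padded."""
    k = 2 * w + 1
    p = np.pad(a, w, mode="edge")
    c = np.pad(p.cumsum(0).cumsum(1), ((1, 0), (1, 0)))
    return (c[k:, k:] - c[:-k, k:] - c[k:, :-k] + c[:-k, :-k]) / k**2

def vessel_masks(img, w=1, percentiles=(85, 65, 47)):
    """Largest structure-tensor eigenvalue per pixel, then percentile thresholds."""
    gy, gx = np.gradient(img.astype(float))
    Jxx = box_smooth(gx * gx, w)
    Jxy = box_smooth(gx * gy, w)
    Jyy = box_smooth(gy * gy, w)
    tr = Jxx + Jyy
    disc = np.sqrt(((Jxx - Jyy) / 2) ** 2 + Jxy**2)
    lam_max = tr / 2 + disc          # largest eigenvalue (>= 0, tensor is PSD)
    thresholds = [np.percentile(lam_max, p) for p in percentiles]
    return lam_max, [lam_max >= t for t in thresholds]

img = np.zeros((64, 64))
img[30:33, :] = 1.0                  # synthetic horizontal "vessel"
lam, masks = vessel_masks(img)
```

Because the percentile thresholds decrease (85 > 65 > 47), the three masks are nested, corresponding to progressively including smaller vessels.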
Ming, Yingzi; Hu, Juan; Luo, Qizhi; Ding, Xiang; Luo, Weiguang; Zhuang, Quan; Zou, Yizhou
2015-01-01
The presence of donor-specific alloantibodies (DSAs) against the MICA antigen results in a high risk of antibody-mediated rejection (AMR) of a transplanted kidney, especially in patients receiving a re-transplant. We describe the incidence of acute C4d+ AMR in a patient who had received a first kidney transplant with a zero HLA antigen mismatch. Retrospective analysis of post-transplant T and B cell crossmatches was negative, but a high level of MICA alloantibody was detected in sera collected both before and after transplant. The recipient carried a DSA against MICA*018, the mismatched MICA antigen of the first allograft. Flow cytometry and cytotoxicity tests with five samples of freshly isolated human umbilical vein endothelial cells demonstrated the alloantibody nature of the patient's MICA-DSA. Prior to the second transplant, a MICA virtual crossmatch and T and B cell crossmatches were used to identify a suitable donor. The patient received a second kidney transplant, and the allograft was functioning well at one-year follow-up. Our study indicates that a MICA virtual crossmatch is important in the selection of a kidney donor if the recipient has been sensitized with MICA antigens.
Ravichandran, R; Binukumar, J P; Sivakumar, S S; Krishnamurthy, K; Davis, C A
2008-07-01
Intensity-modulated radiotherapy (IMRT) clinical dose delivery is based on computer-controlled multileaf movements at different velocities. Quality assurance (QA) methods are necessary to periodically test the accuracy of the beam modulation. Using a cylindrical phantom, dose delivery was checked at a constant geometry for sweeping fields. Repeated measurements with an in-house designed methodology over a period of 1 year indicate that the method is very sensitive for checking the proper functioning of such dose delivery in medical linacs. A cylindrical perspex phantom with a facility to accurately position a 0.6-cc (FC 65) ion chamber at a constant depth at the isocenter (SA 24 constancy check tool phantom for MU check, Scanditronix Wellhofer) was used. Dosimeter readings were integrated for 4-mm, 10-mm, and 20-mm sweeping fields and for 3 angular positions of the gantry periodically. Consistency of the standard sweeping field output (10-mm slit width) and the ratios of outputs against other slit widths over a long period are reported. The 10-mm sweeping field output was found reproducible within an accuracy of 0.03% (n = 25) over 1 year. The 4-mm and 20-mm outputs, expressed as ratios with respect to the 10-mm sweep output, remained within mean deviations of 0.2% and 0.03%, respectively. Outputs at 3 gantry angles remained within 0.5%, showing that the effect of dynamic movements of the multileaf collimator (MLC) on the output is minimal for angular positions of the gantry. This method of QA is very simple and is recommended in addition to individual patient QA measurements, which reflect the accuracy of the dose planning system. In addition to standard output and energy checks of linacs, the above measurements can be complemented so as to check proper functioning of the multileaf collimator for dynamic field dose delivery.
NASA Astrophysics Data System (ADS)
Li, Zhong-xiao; Li, Zhen-chun
2016-09-01
The multichannel predictive deconvolution can be conducted in overlapping temporal and spatial data windows to solve the 2D predictive filter for multiple removal. Generally, the 2D predictive filter can better remove multiples at the cost of more computation time compared with the 1D predictive filter. In this paper we first use the cross-correlation strategy to determine the limited supporting region of filters, where the coefficients play a major role for multiple removal in the filter coefficient space. To solve the 2D predictive filter, the traditional multichannel predictive deconvolution uses the least squares (LS) algorithm, which requires that primaries and multiples be orthogonal. To relax the orthogonality assumption, the iterative reweighted least squares (IRLS) algorithm and the fast iterative shrinkage thresholding (FIST) algorithm have been used to solve the 2D predictive filter in the multichannel predictive deconvolution with the non-Gaussian maximization (L1 norm minimization) constraint of primaries. The FIST algorithm has been demonstrated to be a faster alternative to the IRLS algorithm. In this paper we introduce the FIST algorithm to solve for the filter coefficients in the limited supporting region of filters. Compared with the FIST-based multichannel predictive deconvolution without the limited supporting region of filters, the proposed method can reduce the computation burden effectively while achieving a similar accuracy. Additionally, the proposed method can better balance multiple removal and primary preservation than the traditional LS-based multichannel predictive deconvolution and the FIST-based single channel predictive deconvolution. Synthetic and field data sets demonstrate the effectiveness of the proposed method.
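For readers unfamiliar with predictive deconvolution, the single-channel, least-squares baseline that the paper generalizes (to 2D multichannel filters with an L1 constraint) can be sketched as follows. The gap/filter-length values and the synthetic spike train are illustrative assumptions:

```python
import numpy as np

def predictive_filter(trace, filt_len, gap):
    """Least-squares prediction filter: predict trace[t] from the filt_len
    samples at lags gap .. gap+filt_len-1. Subtracting the prediction
    suppresses periodic multiples whose period falls inside those lags."""
    n = len(trace)
    rows, targets = [], []
    for t in range(gap + filt_len - 1, n):
        rows.append(trace[t - gap - filt_len + 1 : t - gap + 1][::-1])
        targets.append(trace[t])
    f, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
    return f

def apply_deconvolution(trace, f, gap):
    """Prediction-error output: original trace minus the filter's prediction."""
    filt_len = len(f)
    out = trace.copy()
    for t in range(gap + filt_len - 1, len(trace)):
        out[t] -= f @ trace[t - gap - filt_len + 1 : t - gap + 1][::-1]
    return out

# Primary at t=20, decaying multiples every 50 samples
trace = np.zeros(220)
for k, amp in enumerate([1.0, 0.5, 0.25, 0.125]):
    trace[20 + 50 * k] = amp

f = predictive_filter(trace, filt_len=20, gap=40)
decon = apply_deconvolution(trace, f, gap=40)
```

With the prediction gap shorter than the multiple period, the filter learns the periodicity: the deconvolved trace keeps the primary (t=20) while suppressing the multiples (t=70, 120, 170).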
Yang, Sheng-Chun; Wang, Yong-Lei; Jiao, Gui-Sheng; Qian, Hu-Jun; Lu, Zhong-Yuan
2016-01-30
We present new algorithms to improve the performance of the ENUF method (F. Hedman, A. Laaksonen, Chem. Phys. Lett. 425, 2006, 142), which is essentially Ewald summation using the Non-Uniform FFT (NFFT) technique. A NearDistance algorithm is developed to extensively reduce the neighbor list size in the real-space computation. In the reciprocal-space computation, a new algorithm is developed for NFFT for the evaluation of electrostatic interaction energies and forces. Both real-space and reciprocal-space computations are further accelerated by using graphics processing units (GPUs) with CUDA technology. In particular, the use of CUNFFT (NFFT based on CUDA) greatly reduces the reciprocal-space computation. In order to reach the best performance of this method, we propose a procedure for the selection of optimal parameters with controlled accuracies. With the choice of suitable parameters, we show that our method is a good alternative to the standard Ewald method, with the same computational precision but a dramatically higher computational efficiency. PMID:26584145
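The real-space part of an Ewald-type method spends much of its time finding particle pairs within the cutoff, and a linked-cell neighbor search is the standard way to make that O(N). A minimal sketch of such a cell list (plain Python/NumPy; this is generic, not the paper's NearDistance algorithm, whose details the abstract does not give):

```python
import numpy as np

def cell_list_neighbors(pos, box, rcut):
    """All pairs (i, j), i < j, within rcut in a periodic cubic box,
    found via a linked-cell grid instead of an O(N^2) scan."""
    ncell = max(int(box // rcut), 1)          # cell size >= rcut
    size = box / ncell
    cells = {}
    for i, p in enumerate(pos):
        key = tuple((p // size).astype(int) % ncell)
        cells.setdefault(key, []).append(i)
    pairs = set()
    offsets = [(dx, dy, dz) for dx in (-1, 0, 1)
                            for dy in (-1, 0, 1)
                            for dz in (-1, 0, 1)]
    for key, members in cells.items():
        for off in offsets:                   # scan the 27 neighboring cells
            nkey = tuple((k + o) % ncell for k, o in zip(key, off))
            for i in members:
                for j in cells.get(nkey, []):
                    if j <= i:
                        continue
                    d = pos[i] - pos[j]
                    d -= box * np.round(d / box)   # minimum-image convention
                    if d @ d < rcut * rcut:
                        pairs.add((i, j))
    return pairs

rng = np.random.default_rng(3)
pos = rng.uniform(0.0, 10.0, size=(50, 3))
pairs = cell_list_neighbors(pos, box=10.0, rcut=2.5)
```

Shrinking the effective neighbor list, as the NearDistance algorithm reportedly does, directly cuts the dominant cost of this real-space loop.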
NASA Astrophysics Data System (ADS)
Owusu-Banson, Derek
In recent times, a variety of industries, applications and numerical methods, including the meshless method, have enjoyed a great deal of success by utilizing the graphical processing unit (GPU) as a parallel coprocessor. These benefits often include performance improvement over previous implementations. Furthermore, applications running on graphics processors enjoy superior performance per dollar and performance per watt compared with implementations built exclusively on traditional central processing technologies. The GPU was originally designed for graphics acceleration, but the modern GPU, known as the General Purpose Graphical Processing Unit (GPGPU), can be used for scientific and engineering calculations. The GPGPU consists of a massively parallel array of integer and floating point processors. There are typically hundreds of processors per graphics card, with dedicated high-speed memory. This work describes an application written by the author, titled GaussianRBF, to show the implementation and results of a novel meshless method that incorporates collocation of the Gaussian radial basis function by utilizing the GPU as a parallel co-processor. Key phases of the proposed meshless method have been executed on the GPU using the NVIDIA CUDA software development kit. In particular, the matrix-fill and solution phases have been carried out on the GPU, along with some post-processing. This approach resulted in a decreased processing time compared to a similar algorithm implemented on the CPU while maintaining the same accuracy.
Wu, Jia Jun; Mak, Yim Ling; Murphy, Margaret B; Lam, James C W; Chan, Wing Hei; Wang, Mingfu; Chan, Leo L; Lam, Paul K S
2011-07-01
Ciguatera fish poisoning (CFP) is a global foodborne illness caused by consumption of seafood containing ciguatoxins (CTXs) originating from dinoflagellates such as Gambierdiscus toxicus. P-CTX-1 has been suggested to be the most toxic CTX, causing ciguatera at 0.1 μg/kg in the flesh of carnivorous fish. CTXs are structurally complex and difficult to quantify, but there is a need for analytical methods for CFP toxins in coral reef fishes to protect human health. In this paper, we describe a sensitive and rapid extraction method using accelerated solvent extraction combined with high-performance liquid chromatography-tandem mass spectrometry (HPLC-MS/MS) for the detection and quantification of P-CTX-1 in fish flesh. By the use of a more sensitive MS system (5500 QTRAP), the validated method has a limit of quantification (LOQ) of 0.01 μg/kg, linearity correlation coefficients above 0.99 for both solvent- and matrix-based standard solutions as well as matrix spike recoveries ranging from 49% to 85% in 17 coral reef fish species. Compared with previous methods, this method has better overall recovery, extraction efficiency and LOQ. Fish flesh from 12 blue-spotted groupers (Cephalopholis argus) was assessed for the presence of CTXs using HPLC-MS/MS analysis and the commonly used mouse neuroblastoma assay, and the results of the two methods were strongly correlated. This method is capable of detecting low concentrations of P-CTX-1 in fish at levels that are relevant to human health, making it suitable for monitoring of suspected ciguateric fish both in the environment and in the marketplace. PMID:21505950
Tanguay, Jesse; Kim, Ho Kyung; Cunningham, Ian A.
2012-01-15
Purpose: X-ray digital subtraction angiography (DSA) is widely used for vascular imaging. However, the need to subtract a mask image can result in motion artifacts and compromised image quality. The current interest in energy-resolving photon-counting (EPC) detectors offers the promise of eliminating motion artifacts and other advanced applications using a single exposure. The authors describe a method of assessing the iodine signal-to-noise ratio (SNR) that may be achieved with energy-resolved angiography (ERA) to enable a direct comparison with other approaches including DSA and dual-energy angiography for the same patient exposure. Methods: A linearized noise-propagation approach, combined with linear expressions of dual-energy and energy-resolved imaging, is used to describe the iodine SNR. The results were validated by a Monte Carlo calculation for all three approaches and compared visually for dual-energy and DSA imaging using a simple angiographic phantom with a CsI-based flat-panel detector. Results: The linearized SNR calculations show excellent agreement with Monte Carlo results. While dual-energy methods require an increased tube heat load of 2x to 4x compared to DSA, and photon-counting detectors are not yet ready for angiographic imaging, the available iodine SNR for both methods as tested is within 10% of that of conventional DSA for the same patient exposure over a wide range of patient thicknesses and iodine concentrations. Conclusions: While the energy-based methods are not necessarily optimized and further improvements are likely, the linearized noise-propagation analysis provides the theoretical framework of a level playing field for optimization studies and comparison with conventional DSA. It is concluded that both dual-energy and photon-counting approaches have the potential to provide similar angiographic image quality to DSA.
NASA Astrophysics Data System (ADS)
Torrungrueng, Danai; Johnson, Joel T.; Chou, Hsi-Tseng
2002-03-01
The novel spectral acceleration (NSA) algorithm has been shown to produce an O(N_tot) efficient iterative method of moments for the computation of radiation/scattering from both one-dimensional (1-D) and two-dimensional large-scale quasi-planar structures, where N_tot is the total number of unknowns to be solved. This method accelerates the matrix-vector multiplication in an iterative method of moments solution and divides contributions between points into "strong" (exact matrix elements) and "weak" (NSA algorithm) regions. The NSA method is based on a spectral representation of the electromagnetic Green's function and appropriate contour deformation, resulting in a fast multipole-like formulation in which contributions from large numbers of points to a single point are evaluated simultaneously. In the standard NSA algorithm the NSA parameters are derived on the basis of the assumption that the outermost possible saddle point, φ_s,max, along the real axis in the complex angular domain is small. For given height variations of quasi-planar structures, this assumption can be satisfied by adjusting the size of the strong region L_s. However, for quasi-planar structures with large height variations, the adjusted size of the strong region is typically large, resulting in significant increases in computational time for the computation of the strong-region contribution and degrading the overall efficiency of the NSA algorithm. In addition, for the case of extremely large scale structures, studies based on the physical optics approximation and a flat surface assumption show that the given NSA parameters in the standard NSA algorithm may yield inaccurate results. In this paper, analytical formulas associated with the NSA parameters for an arbitrary value of φ_s,max are presented, resulting in more flexibility in selecting L_s to compromise between the computation of the contributions of the strong and weak regions. In addition, a "multilevel" algorithm
Laser acceleration and its future
Tajima, Toshiki
2010-01-01
Laser acceleration is based on the concept of marshalling collective fields induced by a laser. In order to exceed the material breakdown field by a large factor, we employ the broken-down matter of plasma. While the generated wakefields resemble the fields of conventional accelerators in their structure (at least qualitatively), it is their extreme accelerating fields that distinguish the laser wakefield from other approaches, permitting tiny emittance and a compact accelerator. Current research largely concerns how to master control of the acceleration process on spatial and temporal scales several orders of magnitude smaller than in the conventional method. Efforts over the last several years have come to fruition in the generation of good beam properties at GeV energies on a tabletop, leading to many applications, such as ultrafast radiolysis, intraoperative radiation therapy, injection into X-ray free-electron lasers, and candidacy for future high-energy accelerators. PMID:20228616
General purpose programmable accelerator board
Robertson, Perry J.; Witzke, Edward L.
2001-01-01
A general purpose accelerator board and acceleration method comprising use of: one or more programmable logic devices; a plurality of memory blocks; bus interface for communicating data between the memory blocks and devices external to the board; and dynamic programming capabilities for providing logic to the programmable logic device to be executed on data in the memory blocks.
Xia, Yidong; Lou, Jialin; Luo, Hong; Edwards, Jack; Mueller, Frank
2015-02-09
Here, an OpenACC directive-based graphics processing unit (GPU) parallel scheme is presented for solving the compressible Navier–Stokes equations on 3D hybrid unstructured grids with a third-order reconstructed discontinuous Galerkin method. The developed scheme requires minimal code intrusion and algorithm alteration to upgrade a legacy solver with GPU computing capability at very little extra programming effort, leading to a unified and portable code development strategy. A face coloring algorithm is adopted to eliminate the memory contention caused by the threading of internal and boundary face integrals. A number of flow problems are presented to verify the implementation of the developed scheme. Timing measurements were obtained by running the resulting GPU code on one Nvidia Tesla K20c GPU card (Nvidia Corporation, Santa Clara, CA, USA) and compared with those obtained by running the equivalent Message Passing Interface (MPI) parallel CPU code on a compute node consisting of two AMD Opteron 6128 eight-core CPUs (Advanced Micro Devices, Inc., Sunnyvale, CA, USA). Speedup factors of up to 24× and 1.6× for the GPU code were achieved with respect to one and 16 CPU cores, respectively. The numerical results indicate that this OpenACC-based parallel scheme is an effective and extensible approach to port unstructured high-order CFD solvers to GPU computing.
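The face coloring idea can be illustrated with a small greedy sketch (hypothetical data layout, not the actual solver's code): faces that share a cell scatter-add to the same cell residual, so they are given different colors, and all faces of one color can then be integrated concurrently without atomic operations.

```python
def color_faces(faces):
    """Greedy face coloring. faces is a list of (left_cell, right_cell) pairs.
    Two faces touching a common cell would write to the same memory location
    during the face-integral accumulation, so they must receive different
    colors; faces within one color can be threaded without contention."""
    colors = []
    cell_colors = {}  # cell id -> set of colors already used at that cell
    for left, right in faces:
        used = cell_colors.get(left, set()) | cell_colors.get(right, set())
        c = 0
        while c in used:  # smallest color not yet used at either cell
            c += 1
        colors.append(c)
        cell_colors.setdefault(left, set()).add(c)
        cell_colors.setdefault(right, set()).add(c)
    return colors
```

A GPU kernel then loops over colors, launching the faces of each color as one fully parallel batch.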
Ottonello, Giuliana; Ferrari, Angelo; Magi, Emanuele
2014-01-01
A simple and robust method for the determination of 18 polychlorinated biphenyls (PCBs) in fish was developed and validated. A mixture of acetone/n-hexane (1:1, v/v) was selected for accelerated solvent extraction (ASE). After the digestion of fat, the clean-up was carried out using solid phase extraction silica cartridges. Samples were analysed by GC-MS in selected ion monitoring (SIM) using three fragment ions for each congener (one quantifier and two qualifiers). PCB 155 and PCB 198 were employed as internal standards. The lowest limit of detection was observed for PCB 28 (0.4 ng/g lipid weight). The accuracy of the method was verified by means of the Certified Reference Material EDF-2525, and good results in terms of linearity (R² > 0.994) and recoveries (80-110%) were also achieved. Precision was evaluated by spiking blank samples at 4, 8 and 12 ng/g. Relative standard deviation values for repeatability and reproducibility were lower than 8% and 16%, respectively. The method was applied to the determination of PCBs in 80 samples belonging to four Mediterranean fish species. The proposed procedure is particularly effective because it provides good recoveries with lowered extraction time and solvent consumption; in fact, the total time of extraction is about 12 min per sample and, for the clean-up step, a total solvent volume of 13 ml is required.
Muto, Hideshi; Ohshiro, Yukimitsu; Kawasaki, Katsunori; Oyaizu, Michihiro; Hattori, Toshiyuki
2013-04-19
In the past decade, we have developed extremely long-lived carbon stripper foils of 1-50 μg/cm² thickness prepared by a heavy ion beam sputtering method. These foils were mainly used for low energy heavy ion beams. Recently, high energy negative hydrogen and heavy ion accelerators have started to use carbon stripper foils of over 100 μg/cm² in thickness. However, the heavy ion beam sputtering method was unsuccessful in producing foils thicker than about 50 μg/cm² because of the collapse of carbon particle build-up from the substrates during the sputtering process. The reproduction probability of the foils was less than 25%, and most of them had surface defects. However, these defects were successfully eliminated by introducing higher beam energies of the sputtering ions and a substrate heater during the sputtering process. In this report we describe a highly reproducible method for making thick carbon stripper foils by heavy ion beam sputtering with a krypton ion beam.
Nelli, Flavio Enrico
2016-03-01
A very simple method to measure the effect of the backscatter from secondary collimators into the beam monitor chambers in linear accelerators equipped with multi-leaf collimators (MLC) is presented here. The backscatter to the monitor chambers from the upper jaws of the secondary collimator was measured on three beam-matched linacs by means of three methods: this new methodology, the ecliptic method, and assessing the variation of the beam-on time per monitor unit with dose rate feedback disabled. This new methodology was used to assess the backscatter characteristics of asymmetric over-traveling jaws. Excellent agreement between the backscatter values measured using the new methodology introduced here and the ones obtained using the other two methods was established. The experimental values reported here differ by less than 1% from published data. The sensitivity of this novel technique allowed differences in backscatter due to the same opening of the jaws, when placed at different positions on the beam path, to be resolved. The introduction of the ecliptic method has made the determination of the backscatter to the monitor chambers an easy procedure. The method presented here for machines equipped with MLCs makes the determination of backscatter to the beam monitor chambers even easier, and suitable to characterize linacs equipped with over-traveling asymmetric secondary collimators. This experimental procedure could be simply implemented to fully characterize the backscatter output factor constituent when detailed dosimetric modeling of the machine's head is required. The methodology proved to be uncomplicated, accurate and suitable for clinical or experimental environments. PMID:26671445
Medina, L Carolina; Sartain, Jerry B; Obreza, Thomas A; Hall, William L; Thiex, Nancy J
2014-01-01
Several technologies have been proposed to characterize the nutrient release and availability patterns of enhanced-efficiency fertilizers (EEFs), especially slow-release fertilizers (SRFs) and controlled-release fertilizers (CRFs), during the last few decades. These technologies have been developed mainly by manufacturers and are product-specific, based on the regulation and analysis of each EEF product. Despite previous efforts to characterize EEF materials, no validated method exists to assess their nutrient release patterns. However, the increased use of EEFs in specialty and nonspecialty markets requires an appropriate method to verify nutrient claims and material performance. A series of experiments were conducted to evaluate the effect of temperature, fertilizer test portion size, and extraction time on the performance of a 74 h accelerated laboratory extraction method to measure SRF and CRF nutrient release profiles. Temperature was the only factor that influenced nutrient release rate, with a highly marked effect for phosphorus and to a lesser extent for nitrogen (N) and potassium. Based on the results, the optimal extraction temperature set was: Extraction No. 1, 2 h at 25 °C; Extraction No. 2, 2 h at 50 °C; Extraction No. 3, 20 h at 55 °C; and Extraction No. 4, 50 h at 60 °C. Ruggedness of the method was tested by evaluating the effect of small changes in seven selected factors on method behavior using a fractional multifactorial design. Overall, the method showed ruggedness for measuring N release rates of coated CRFs. PMID:25051611
Saikko, Vesa
2015-01-21
The temporal change of the direction of sliding relative to the ultrahigh molecular weight polyethylene (UHMWPE) component of prosthetic joints is known to be of crucial importance with respect to wear. One complete revolution of the resultant friction vector is commonly called a wear cycle. It was hypothesized that in order to accelerate the wear test, the cycle frequency may be substantially increased if the circumference of the slide track is reduced in proportion, and still the wear mechanisms remain realistic and no overheating takes place. This requires an additional slow motion mechanism with which the lubrication of the contact is maintained and wear particles are conveyed away from the contact. A three-station, dual motion high frequency circular translation pin-on-disk (HF-CTPOD) device with a relative cycle frequency of 25.3 Hz and an average sliding velocity of 27.4 mm/s was designed. The pins circularly translated at high frequency (1.0 mm per cycle, 24.8 Hz, clockwise), and the disks at low frequency (31.4 mm per cycle, 0.5 Hz, counter-clockwise). In a 22 million cycle (10 day) test, the wear rate of conventional gamma-sterilized UHMWPE pins against polished CoCr disks in diluted serum was 1.8 mg per 24 h, which was six times higher than that in the established 1 Hz CTPOD device. The wear mechanisms were similar. Burnishing of the pin was the predominant feature. No overheating took place. With the dual motion HF-CTPOD method, the wear testing of UHMWPE as a bearing material in total hip arthroplasty can be substantially accelerated without concerns about the validity of the wear simulation.
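As a plausibility check of the quoted kinematics (a modelling assumption on our part, not a calculation from the paper), one can treat the pin and disk circular translations as two velocity vectors of constant magnitude rotating at their respective frequencies, and average the magnitude of their resultant over the slowly drifting relative phase:

```python
import math

# Parameters quoted for the dual-motion HF-CTPOD device:
v_pin = 1.0 * 24.8    # pin: 1.0 mm slide track per cycle at 24.8 Hz -> 24.8 mm/s
v_disk = 31.4 * 0.5   # disk: 31.4 mm slide track per cycle at 0.5 Hz -> 15.7 mm/s

# Relative sliding speed = |v_pin + v_disk * e^{i*theta}| averaged over the
# relative phase theta, which drifts through all values because the two
# frequencies are incommensurate in practice.
n = 200_000
avg = sum(
    math.sqrt(v_pin ** 2 + v_disk ** 2
              + 2.0 * v_pin * v_disk * math.cos(2.0 * math.pi * k / n))
    for k in range(n)
) / n
```

Under this assumption the average comes out within a few tenths of the 27.4 mm/s sliding velocity quoted for the device.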
Amisaki, Takashi; Toyoda, Shinjiro; Miyagawa, Hiroh; Kitamura, Kunihiro
2003-04-15
Evaluation of long-range Coulombic interactions still represents a bottleneck in the molecular dynamics (MD) simulations of biological macromolecules. Despite the advent of sophisticated fast algorithms, such as the fast multipole method (FMM), accurate simulations still demand a great amount of computation time due to the accuracy/speed trade-off inherently involved in these algorithms. Unless higher order multipole expansions, which are extremely expensive to evaluate, are employed, a large amount of the execution time is still spent directly calculating particle-particle interactions within the nearby region of each particle. To reduce this execution time for pair interactions, we developed a computation unit (board), called MD-Engine II, that calculates nonbonded pairwise interactions using specially designed hardware. Four custom arithmetic processors and a processor for memory manipulation ("particle processor") are mounted on the computation board. The arithmetic processors are responsible for calculation of the pair interactions. The particle processor plays a central role in realizing efficient cooperation with the FMM. The results of a series of 50-ps MD simulations of a protein-water system (50,764 atoms) indicated that a more stringent setting of accuracy in the FMM computation, compared with those previously reported, was required for accurate simulations over long time periods. Such a level of accuracy was efficiently achieved using the cooperative calculations of the FMM and MD-Engine II. On an Alpha 21264 PC, the FMM computation at a moderate but tolerable level of accuracy was accelerated by a factor of 16.0 using three boards. At a high level of accuracy, the cooperative calculation achieved a 22.7-fold acceleration over the corresponding conventional FMM calculation. In the cooperative calculations of the FMM and MD-Engine II, it was possible to achieve more accurate computation at a comparable execution time by incorporating larger nearby
NASA Astrophysics Data System (ADS)
Khabarova, Olga V.; Zank, Gary P.; Li, Gang; Malandraki, Olga E.; le Roux, Jakobus A.; Webb, Gary M.
2016-04-01
We have recently shown both theoretically (Zank et al. 2014, 2015; le Roux et al. 2015) and observationally (Khabarova et al. 2015) that dynamical small-scale magnetic islands play a significant role in local particle acceleration in the supersonic solar wind. We discuss here observational evidence for particle acceleration at shock waves that is enhanced by the recently proposed mechanism of particle energization by both island contraction and the reconnection electric field generated in merging or contracting magnetic islands downstream of the shocks (Zank et al. 2014, 2015; le Roux et al. 2015). Both observations and simulations support the formation of magnetic islands in the turbulent wake of heliospheric or interplanetary shocks (ISs) (Turner et al. 2013; Karimabadi et al. 2014; Chasapis et al. 2015). A combination of the DSA mechanism with acceleration by magnetic island dynamics explains why the spectra of energetic particles thought to be accelerated at heliospheric shocks are sometimes harder than predicted by DSA theory (Zank et al. 2015). Moreover, such an approach allows us to explain and describe other unusual behaviour of accelerated particles, such as energetic particle flux intensity peaks observed downstream of heliospheric shocks rather than directly at the shock as DSA theory predicts. Zank et al. (2015) predicted the peak location to be behind the heliospheric termination shock (HTS) and showed that the distance from the shock to the peak depends on particle energy, which is in agreement with Voyager 2 observations. Similar particle behaviour is observed near strong ISs in the outer heliosphere as observed by Voyager 2. Observations show that heliospheric shocks are accompanied by current sheets, and that IS crossings always coincide with sharp changes in the IMF azimuthal angle and the IMF strength, which is typical for strong current sheets. The presence of current sheets in the vicinity of ISs acts to magnetically
Suzuki, Yusuke; Hayashi, Naoki; Kato, Hideki; Fukuma, Hiroshi; Hirose, Yasujiro; Kawano, Makoto; Nishii, Yoshio; Nakamura, Masaru; Mukouyama, Takashi
2013-01-01
In small-field irradiation, the back-scattered radiation (BSR) affects the counts measured with a beam monitor chamber (BMC). In general, the effect of the BSR depends on the opened-jaw size, and the effect is significantly large in small-field irradiation. Our purpose in this study was to predict the effect of BSR on LINAC output accurately with an improved target-current-pulse (TCP) technique. The pulse signals were measured with a system consisting of a personal computer and a digitizer, and analyzed with in-house software. The measured parameters were the number of pulses, the change in the waveform, and the integrated signal values of the TCPs. The TCPs were measured for various field sizes with four linear accelerators. For comparison, Yu's method, in which a universal counter was used, was re-examined. The results showed that the variance of the measurements by the new method was reduced to approximately 1/10 of the variance by the previous method. There was no significant variation in the number of pulses due to a change in the field size in the Varian Clinac series; however, a change in the integrated signal value was observed. This tendency differed from the results of other investigations in the past. Our prediction method is able to define the cutoff voltage for the TCPs acquired by the digitizer. This functionality makes it possible to clearly classify TCPs into signal and noise. In conclusion, our TCP analysis method can predict the effect of BSR on the BMC even for small-field irradiations.
Prian, L.; Pollard, R.; Shan, R.; Mastropietro, C.W.; Barkatt, A.; Gentry, T.R.; Bank, L.C.
1997-12-31
The development of accelerated test methods to characterize long-term environmental effects on fiber-reinforced plastics (FRPs) requires the use of physicochemical methods, as well as macromechanical measurements, in order to investigate the degradation processes and predict their course over long periods of time. Thermochemical and mechanical measurements were performed on a large number of FRPs exposed to neutral, basic, and acidic media between 23 and 80 °C over periods of 7 to 224 days. The resin matrices used in the present study included vinylester, polyester, and epoxy, and the fiber materials were silicate glass, aramid, and carbon. TGA was used to study the effects of aqueous media on FRPs. In particular, the relative weight loss upon heating the previously exposed material from 150 to 300 °C was found to be indicative of the extent of matrix depolymerization. Indications were obtained for correlation between this weight loss and the extent of degradation of various measures of mechanical strength. The measured weight change of the tested materials during exposure was found to reflect the extent of water absorption and could be related to the extent of the weight loss between 150 and 300 °C. In basic environments, weight loss, rather than gain, took place as a result of fiber dissolution.
Imaging using accelerated heavy ions
Chu, W.T.
1982-05-01
Several methods for imaging using accelerated heavy ion beams are being investigated at Lawrence Berkeley Laboratory. Using the HILAC (Heavy-Ion Linear Accelerator) as an injector, the Bevalac can accelerate fully stripped atomic nuclei from carbon (Z = 6) to krypton (Z = 36), and partly stripped ions up to uranium (Z = 92). Radiographic studies to date have been conducted with helium (from the 184-inch cyclotron), carbon, oxygen, and neon beams. Useful ranges in tissue of 40 cm or more are available. To investigate the potential of heavy-ion projection radiography and computed tomography (CT), several methods and instruments have been studied.
NASA Astrophysics Data System (ADS)
Hermus, James; Szczykutowicz, Timothy P.; Strother, Charles M.; Mistretta, Charles
2014-03-01
When performing Computed Tomographic (CT) image reconstruction on digital subtraction angiography (DSA) projections, loss of vessel contrast has been observed behind highly attenuating anatomy, such as dental implants and large contrast filled aneurysms. Because this typically occurs only in a limited range of projection angles, the observed contrast time course can potentially be altered. In this work, we have developed a model for acquiring DSA projections that captures both the polychromatic nature of the x-ray spectrum and the x-ray scattering interactions to investigate this problem. In our simulation framework, scatter and beam hardening contributions to vessel dropout can be analyzed separately. We constructed digital phantoms with large, clearly defined regions containing iodine contrast, bone, soft tissue, titanium (dental implants), or combinations of these materials. As the regions containing the materials were large and rectangular, the forward projections contained uniform regions of interest (ROIs), enabling accurate vessel dropout analysis. Two phantom models were used: one to model the case of a vessel behind a large contrast filled aneurysm and the other to model a vessel behind a dental implant. Cases in which both beam hardening and scatter were turned off, only scatter was turned on, only beam hardening was turned on, and both scatter and beam hardening were turned on were simulated for both phantom models. The analysis of this data showed that the contrast degradation is primarily due to scatter. When analyzing the aneurysm case, 90.25% of the vessel contrast was lost in the polychromatic scatter image, whereas only 50.5% was lost in the beam hardening only image. When analyzing the teeth case, 44.2% of the vessel contrast was lost in the polychromatic scatter image and only 26.2% in the beam hardening only image.
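The dropout percentages reported above reduce to a simple contrast-ratio computation. The helper below (hypothetical names and values, not code from the study) shows the bookkeeping, where the "ideal" contrast comes from a monochromatic, scatter-free projection and the "degraded" contrast from the simulation case under test.

```python
def vessel_contrast_loss(ideal_contrast, degraded_contrast):
    """Percent of vessel contrast lost relative to an ideal (monochromatic,
    scatter-free) projection. The contrast values are ROI differences
    (background minus vessel intensity) measured in the two projections."""
    return 100.0 * (1.0 - degraded_contrast / ideal_contrast)
```

For example, a degraded ROI contrast of 9.75 against an ideal contrast of 100 corresponds to the 90.25% loss figure quoted for the aneurysm case.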
Heuberger, Adam L; Broeckling, Corey D; Sedin, Dana; Holbrook, Christian; Barr, Lindsay; Kirkpatrick, Kaylyn; Prenni, Jessica E
2016-06-01
Flavour stability is vital to the brewing industry as beer is often stored for an extended time under variable conditions. Developing an accelerated model to evaluate brewing techniques that affect flavour stability is an important area of research. Here, we performed metabolomics on non-volatile compounds in beer stored at 37 °C between 1 and 14 days for two beer types: an amber ale and an India pale ale. The experiment determined high temperature to influence non-volatile metabolites, including the purine 5-methylthioadenosine (5-MTA). In a second experiment, three brewing techniques were evaluated for improved flavour stability: use of antioxidant crowns, chelation of pro-oxidants, and varying plant content in hops. Sensory analysis determined the hop method was associated with improved flavour stability, and this was consistent with reduced 5-MTA at both regular and high temperature storage. Future studies are warranted to understand the influence of 5-MTA on flavour and aging within different beer types. PMID:26830592
NASA Astrophysics Data System (ADS)
Wang, Hu; Zou, Yubin; Wen, Weiwei; Lu, Yuanrong; Guo, Zhiyu
2016-07-01
Peking University Neutron Imaging Facility (PKUNIFTY) operates on an accelerator-based neutron source with a repetition period of 10 ms and a pulse duration of 0.4 ms, which has a rather low Cd ratio. To improve the effective Cd ratio, and thus the detection capability of the facility, energy-filtering neutron imaging was realized with an intensified CCD camera and the time-of-flight (TOF) method. The time structure of the pulsed neutron source was first simulated with Geant4, and the simulation result was evaluated against experiment. Both simulation and experiment indicated that fast and epithermal neutrons are concentrated in the first 0.8 ms of each pulse period, while in the interval of 0.8-2.0 ms only thermal neutrons remain. Based on this result, neutron images with and without energy filtering were acquired, showing that the detection capability of PKUNIFTY was improved by setting the exposure interval to 0.8-2.0 ms, especially for materials with strong moderating capability.
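In software terms, the energy filter is a time gate applied to event arrival times folded into the 10 ms pulse period. The sketch below is illustrative (the function name and sample times are invented); it keeps only events in the thermal-only 0.8-2.0 ms window.

```python
def gate_events(arrival_times_ms, period_ms=10.0, gate=(0.8, 2.0)):
    """Keep only events whose arrival time, folded into the pulse period,
    falls inside the thermal-only window (0.8-2.0 ms after pulse start).
    Fast and epithermal neutrons arrive in the first 0.8 ms, so gating
    them out raises the effective Cd ratio of the image."""
    lo, hi = gate
    return [t for t in arrival_times_ms if lo <= (t % period_ms) <= hi]
```

The same folding logic applies whether the gate is implemented in the camera intensifier trigger or offline on time-stamped event lists.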
Khoshkholgh, Roghaie; Keshavarz, Tahereh; Moshfeghy, Zeinab; Akbarzadeh, Marzieh; Asadi, Nasrin; Zare, Najaf
2016-01-01
Objective: To compare the effects of two auditory methods, applied to mother and fetus, on the results of the NST in 2011-2012. Materials and methods: In this single-blind clinical trial, 213 pregnant women with gestational age of 37-41 weeks who had no pregnancy complications were randomly divided into 3 groups (auditory intervention for mother, auditory intervention for fetus, and control), each containing 71 subjects. In the intervention groups, music was played through the second 10 minutes of the NST. The three groups were compared regarding baseline fetal heart rate and number of accelerations in the first and second 10 minutes of the NST. The data were analyzed using one-way ANOVA, Kruskal-Wallis, and paired t-tests. Results: The results showed no significant difference among the three groups regarding baseline fetal heart rate in the first (p = 0.945) and second (p = 0.763) 10 minutes. However, a significant difference was found among the three groups concerning the number of accelerations in the second 10 minutes. Also, a significant increase was observed in the number of accelerations in the auditory intervention for mother (p = 0.013) and auditory intervention for fetus (p < 0.001) groups. The difference between the number of accelerations in the first and second 10 minutes was also statistically significant (p = 0.002). Conclusion: Music intervention was effective in increasing the number of accelerations, which is an indicator of fetal health. Yet, further studies are required on the issue. PMID:27385971
Accelerated life testing of spacecraft subsystems
NASA Technical Reports Server (NTRS)
Wiksten, D.; Swanson, J.
1972-01-01
The rationale and requirements for conducting accelerated life tests on electronic subsystems of spacecraft are presented. A method for applying data on the reliability and temperature sensitivity of the parts contained in a subsystem to the selection of accelerated life test parameters is described. Additional considerations affecting the formulation of test requirements are identified, and practical limitations of accelerated aging are described.
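A standard way to turn part temperature sensitivity into accelerated life test parameters is the Arrhenius acceleration factor. The sketch below uses an illustrative 0.7 eV activation energy and 25 °C/85 °C temperature pair; none of these numbers come from the report.

```python
import math

K_B_EV = 8.617e-5  # Boltzmann constant in eV/K

def acceleration_factor(ea_ev, t_use_c, t_stress_c):
    """Arrhenius acceleration factor: how much faster a thermally activated
    failure mechanism with activation energy ea_ev (eV) proceeds at the
    stress temperature than at the use temperature (both in deg C)."""
    t_use_k = t_use_c + 273.15
    t_stress_k = t_stress_c + 273.15
    return math.exp(ea_ev / K_B_EV * (1.0 / t_use_k - 1.0 / t_stress_k))

# Illustrative only: a 0.7 eV mechanism stressed at 85 C versus 25 C use.
af = acceleration_factor(0.7, 25.0, 85.0)
```

The factor scales exponentially with activation energy, which is why per-part temperature-sensitivity data matter when selecting test temperatures and durations.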
White, Adrienne Lynne; Min, Thaw Htwe; Gross, Mechthild M.; Kajeechiwa, Ladda; Thwin, May Myo; Hanboonkunupakarn, Borimas; Than, Hla Hla; Zin, Thet Wai; Rijken, Marcus J.; Hoogenboom, Gabie; McGready, Rose
2016-01-01
Background To evaluate a skilled birth attendant (SBA) training program in a neglected population on the Thai-Myanmar border, we used multiple methods to show that refugee and migrant health workers can be given effective training in their own environment to become SBAs and teachers of SBAs. The loss of SBAs through resettlement to third countries necessitated urgent training of available workers to meet local needs. Methods and Findings All results were obtained from student records of theory grades and clinical log books. Qualitative evaluation of both the SBA and teacher programs was obtained using semi-structured interviews with supervisors and teachers. We also reviewed perinatal indicators over an eight-year period, starting prior to the first training program until after the graduation of the fourth cohort of SBAs. Results Four SBA training programs scheduled between 2009 and 2015 resulted in 79/88 (90%) of students successfully completing a training program of 250 theory hours and 625 supervised clinical hours. All 79 students were able to: achieve pass grades on theory examination (median 80%, range [70–89]); obtain the required clinical experience within twelve months; achieve clinical competence to provide safe care during childbirth. In 2010–2011, five experienced SBAs completed a train-the-trainer (TOT) program and went on to facilitate further training programs. Perinatal indicators within Shoklo Malaria Research Unit (SMRU), such as place of birth, maternal and newborn outcomes, showed no significant differences before and after introduction of training or following graduate deployment in the local maternity units. Confidence, competence and teamwork emerged from qualitative evaluation by senior SBAs working with and supervising students in the clinics. Conclusions We demonstrate that in resource-limited settings or in marginalized populations, it is possible to accelerate training of skilled birth attendants to provide safe maternity care
NASA Astrophysics Data System (ADS)
Lee, M. T.; Gottfried, M.; Berglund, E.; Rodriguez, G.; Ceckanowicz, D. J.; Cutter, N.; Badgeley, J.
2014-12-01
The boom and bust history of mineral extraction in the American southwest is visible today in tens of thousands of abandoned and slowly decaying mine installations that scar the landscape. Mine tailing piles, mounds of crushed mineral ore, often contain significant quantities of heavy metal elements which may leach into surrounding soils, surface water and ground water. Chemical analysis of contaminated soils is a tedious and time-consuming process. Regional assessment of heavy metal contamination for treatment prioritization would be greatly accelerated by the development of near-surface imaging indices of heavy-metal vegetative stress in western grasslands. Further, the method would assist in measuring the ongoing effectiveness of phytoremediation and phytostabilization efforts. To test feasibility we ground-truthed nine phytoremediated sites and two control sites along the mine-impacted Kerber Creek watershed in Saguache County, Colorado. Total metal concentration was determined by XRF for both plant and soil samples. Leachable metals were extracted from soil samples following US EPA method 1312. Plants were identified, sorted into roots, shoots and leaves, and digested via microwave acid extraction. Metal concentrations were determined with high accuracy by ICP-OES analysis. Plants were found to contain significantly higher concentrations of heavy metals than surrounding soils, particularly for manganese (Mn), iron (Fe), copper (Cu), zinc (Zn), barium (Ba), and lead (Pb). Plant species accumulated and distributed metals differently, yet most showed translocation of metals from roots to above-ground structures. Ground analysis was followed by near-surface imaging using an unmanned aerial vehicle equipped with visible/near and shortwave infrared (0.7 to 1.5 μm) cameras. Images were assessed for spectral shifts indicative of plant stress, and attempts were made to correlate results with measured soil and plant metal concentrations.
NASA Astrophysics Data System (ADS)
Fernandes, Milton Virgílio
2014-06-01
In this thesis, high-energy (HE; E > 0.1 GeV) and very-high-energy (VHE; E > 0.1 TeV) γ-ray data were investigated to probe Galactic stellar clusters (SCs) and star-forming regions (SFRs) as sites of hadronic Galactic cosmic-ray (GCR) acceleration. In principle, massive SCs and SFRs could accelerate GCRs at the shock front of the collective SC wind fed by the individual high-mass stars. The subsequently produced VHE γ rays would be measured with imaging air-Cherenkov telescopes (IACTs). Several Galactic VHE γ-ray sources, including those potentially produced by SCs, fill a large fraction of the field-of-view (FoV) and require additional observations of source-free regions to determine the dominant background for a spectral reconstruction. A new method of reconstructing spectra for such extended sources without the need for further observations is developed: the Template Background Spectrum (TBS). TBS builds on a background-estimation method used to generate sky maps, which determines the background in parameter space. The idea is to create a look-up table of the background normalisation in energy, zenith angle, and angular separation, and to account for possible systematic effects. The results obtained with TBS and state-of-the-art background-estimation methods on H.E.S.S. data are in good agreement. With TBS, even sources that would normally require further observations could be reconstructed. TBS is thus the third method available for reconstructing VHE γ-ray spectra, but the first that needs no additional observations in the analysis of extended sources. The discovery of the largest VHE γ-ray source, HESS J1646-458 (2.2° in size), towards the SC Westerlund 1 (Wd 1) can be plausibly explained by the SC-wind scenario. However, owing to its size, alternative counterparts to the TeV emission (a pulsar, a binary system, a magnetar) were also found in the FoV. An association of HESS J1646-458 with the SC is therefore favoured, but cannot be confirmed. The SC Pismis 22 is located in the centre of
Mass spectrometry with accelerators.
Litherland, A E; Zhao, X-L; Kieser, W E
2011-01-01
As one in a series of articles on Canadian contributions to mass spectrometry, this review begins with an outline of the history of accelerator mass spectrometry (AMS), noting roles played by researchers at three Canadian AMS laboratories. After a description of the unique features of AMS, three examples, (14)C, (10)Be, and (129)I, are given to illustrate the methods. The capabilities of mass spectrometry have been extended by the addition of atomic isobar selection, molecular isobar attenuation, and further ion acceleration, followed by ion detection and ion identification at essentially zero dark current or ion flux. This has been accomplished by exploiting the techniques and accelerators of atomic and nuclear physics. In 1939, the first principles of AMS were established using a cyclotron. In 1977, the selection of isobars in the ion source was established when it was shown that the (14)N(-) ion was very unstable, or extremely difficult to create, making a tandem electrostatic accelerator highly suitable for assisting the mass spectrometric measurement of the rare long-lived radioactive isotope (14)C in the environment. This observation, together with the large attenuation of the molecular isobars (13)CH(-) and (12)CH2(-) during tandem acceleration and the observed very low background contamination from the ion source, was found to facilitate the mass spectrometry of (14)C to at least a level of (14)C/C ~ 6 × 10(-16), the equivalent of a radiocarbon age of 60,000 years. Tandem accelerator mass spectrometry, or AMS, has now made possible the accurate radiocarbon dating of milligram-sized carbon samples by ion counting, as well as dating and tracing with many other long-lived radioactive isotopes such as (10)Be, (26)Al, (36)Cl, and (129)I. The difficulty of obtaining large anion currents for elements with low electron affinities and the difficulties of isobar separation, especially for the heavier mass ions, have prompted the use of molecular anions and the search for alternative
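The quoted limit of (14)C/C ~ 6 × 10(-16) being "the equivalent of a radiocarbon age of 60,000 years" can be checked against the conventional radiocarbon age relation. A minimal sketch, assuming the Libby mean life of 8033 years and a modern (14)C/C ratio of about 1.2 × 10(-12) (both values are assumptions supplied here for illustration, not taken from the review):

```python
import math

LIBBY_MEAN_LIFE = 8033.0   # years, conventional radiocarbon mean life (assumed)
MODERN_RATIO = 1.2e-12     # approximate modern 14C/C ratio (assumed)

def radiocarbon_age(ratio):
    """Conventional radiocarbon age (years) from a measured 14C/C ratio."""
    return -LIBBY_MEAN_LIFE * math.log(ratio / MODERN_RATIO)

# The review's detection limit of 14C/C ~ 6e-16:
age = radiocarbon_age(6e-16)
```

Under these assumptions the limiting ratio works out to roughly 61,000 years, consistent with the quoted ~60,000-year figure.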
Accelerator mass spectrometry.
Hellborg, Ragnar; Skog, Göran
2008-01-01
In this overview the technique of accelerator mass spectrometry (AMS) and its use are described. AMS is a highly sensitive method of counting atoms. It is used to detect very low concentrations of natural isotopic abundances (typically in the range between 10(-12) and 10(-16)) of both radionuclides and stable nuclides. The main advantages of AMS compared to conventional radiometric methods are the use of smaller samples (mg and even sub-mg size) and shorter measuring times (less than 1 hr). The equipment used for AMS is almost exclusively based on the electrostatic tandem accelerator, although some of the newest systems are based on a slightly different principle. Dedicated accelerators as well as older "nuclear physics machines" can be found in the 80 or so AMS laboratories in existence today. The most widely used isotope studied with AMS is 14C. Besides radiocarbon dating this isotope is used in climate studies, biomedicine applications and many other fields. More than 100,000 14C samples are measured per year. Other isotopes studied include 10Be, 26Al, 36Cl, 41Ca, 59Ni, 129I, U, and Pu. Although these measurements are important, the number of samples of these other isotopes measured each year is estimated to be less than 10% of the number of 14C samples.
Bush, David A
2008-09-30
A research grant was approved to fund development of requirements and concepts for extracting a helium-ion beam at the LLUMC proton accelerator facility, thus enabling the facility to better simulate the deep space environment via beams sufficient to study biological effects of accelerated helium ions in living tissues. A biologically meaningful helium-ion beam will be accomplished by implementing enhancements to increase the accelerator's maximum proton beam energy output from 250MeV to 300MeV. Additional benefits anticipated from the increased energy include the capability to compare possible benefits from helium-beam radiation treatment with proton-beam treatment, and to provide a platform for developing a future proton computed tomography imaging system.
Kauffman, R.
1994-07-01
The research reported herein continued to concentrate on in situ conductivity measurements for development into an accelerated screening method for determining the chemical and thermal stabilities of refrigerant/lubricant mixtures. The work was performed in two phases. In the first phase, sealed tubes were prepared with steel catalysts and mixtures of CFC-12, HCFC-22, HFC-134a, and HFC-32/HFC-134a (zeotrope 30:70) refrigerants with oils as described in ANSI/ASHRAE Method 97-1989. In the second phase of work, modified sealed tubes, with and without steel catalysts present, were used to perform in situ conductivity measurements on mixtures of CFC-12 refrigerant with oils. The isothermal in situ conductivity measurements were compared with conventional tests, e.g., color measurements, gas chromatography, and trace metals analysis, to evaluate the capabilities of in situ conductivity for determining the chemical and thermal stabilities of refrigerant/lubricant mixtures. Other sets of tests were performed using ramped temperature conditions from 175°C (347°F) to 205°C (401°F) to evaluate the capabilities of in situ conductivity for detecting the onset of rapid degradation in CFC-12, HCFC-22 and HFC-134a refrigerant mixtures with naphthenic oil aged with and without steel catalysts present.
NASA Technical Reports Server (NTRS)
Davis, Jeffrey
2012-01-01
Opportunities: I. Engage NASA team (examples): a) Research and technology calls: provide suggestions to AES, HRP, OCT; b) Use NASA@Work to solicit other ideas (possibly before R&D calls). II. Stimulate collaboration (examples): a) NHHPC; b) Wharton Mack Center for Technological Innovation (Feb 2013); c) International: DLR :envihab (July 2013); d) Accelerated research models: NSF, Myelin Repair Foundation. III. Engage public: prizes (open platforms: InnoCentive, yet2.com, NTL; Rice Business Plan, etc.). IV. Use the same methods to engage STEM.
Miyaji, Yoshihiro; Ishizuka, Tomoko; Kawai, Kenji; Hamabe, Yoshimi; Miyaoka, Teiji; Oh-hara, Toshinari; Ikeda, Toshihiko; Kurihara, Atsushi
2009-01-01
A technique utilizing simultaneous intravenous microdosing of (14)C-labeled drug with oral dosing of non-labeled drug for measurement of absolute bioavailability was evaluated using R-142086 in male dogs. Plasma concentrations of R-142086 were measured by liquid chromatography-tandem mass spectrometry (LC-MS/MS), and those of (14)C-R-142086 were measured by accelerator mass spectrometry (AMS). The absence of metabolites in the plasma and urine was confirmed by a single radioactive peak of the parent compound in the chromatogram after intravenous microdosing of (14)C-R-142086 (1.5 microg/kg). Although plasma concentrations of R-142086 determined by LC-MS/MS were approximately 20% higher than those of (14)C-R-142086 as determined by AMS, there was excellent correlation (r=0.994) between the two sets of concentrations after intravenous dosing of (14)C-R-142086 (0.3 mg/kg). The oral bioavailability of R-142086 at 1 mg/kg obtained by simultaneous intravenous microdosing of (14)C-R-142086 was 16.1%, slightly higher than the value (12.5%) obtained by separate intravenous dosing of R-142086 (0.3 mg/kg). In conclusion, by utilizing simultaneous intravenous microdosing of (14)C-labeled drug in conjunction with AMS analysis, absolute bioavailability could be measured approximately, though not with total accuracy, in dogs. Bioavailability in humans may thus be measured approximately at an earlier stage and at a lower cost. PMID:19430168
Levin, A.R.; Goldberg, H.L.; Borer, J.S.; Rothenberg, L.N.; Nolan, F.A.; Engle, M.A.; Cohen, B.; Skelly, N.T.; Carter, J.
1983-08-01
Digital subtraction angiography (DSA) permits high-resolution cardiac imaging with relatively low doses of contrast medium and reduced radiation exposure. These are potential advantages in children with congenital heart disease. Computer-based DSA (30 frames/sec) and conventional cutfilm angiography (6 frames/sec) or cineangiography (60 frames/sec) were compared in 42 patients, ages 2 months to 18 years (mean 7.8 years) and weighing 3.4 to 78.5 kg (mean 28.2 kg). There were 29 diagnoses that included valvular regurgitant lesions, obstructive lesions, various shunt abnormalities, and a group of miscellaneous anomalies. For injections made at a site distant from the lesion and on the right side of the circulation, the mean dose of contrast medium was 60% to 100% of the conventional dose given during standard angiography. With injections made close to the lesion and on the left side of the circulation, the mean dose of contrast medium was 27.5% to 42% of the conventional dose. Radiation exposure for each technique was markedly reduced in all age groups. A total of 92 digital subtraction angiograms were performed. Five studies were suboptimal because too little contrast medium was injected; in the remaining 87 injections, DSA and conventional studies resulted in identical diagnoses in 81 instances (p < .001 vs chance). The remaining six injections made during DSA failed to confirm diagnoses made angiographically by standard cutfilm angiography or cineangiography. We conclude that DSA usually provides diagnostic information equivalent to that available from cutfilm angiography and cineangiography, but DSA requires considerably lower doses of contrast medium and less radiation exposure than standard conventional methods.
A variable acceleration calibration system
NASA Astrophysics Data System (ADS)
Johnson, Thomas H.
2011-12-01
A variable acceleration calibration system that applies loads using gravitational and centripetal acceleration serves as an alternative, efficient and cost-effective method for calibrating internal wind tunnel force balances. Two proof-of-concept variable acceleration calibration systems are designed, fabricated and tested. The NASA UT-36 force balance served as the test balance for the calibration experiments. The variable acceleration calibration systems are shown to be capable of performing three-component calibration experiments with an approximate applied load error on the order of 1% of the full-scale calibration loads. Sources of error are identified using experimental design methods and a propagation of uncertainty analysis. Three types of uncertainty are identified for the systems and are attributed to prediction error, calibration error and pure error. Angular velocity uncertainty is shown to be the largest identified source of prediction error. The calibration uncertainties using a production variable acceleration based system are shown to be potentially equivalent to those of current methods. The production-quality system can be realized using lighter materials and more precise instrumentation. Further research is needed to account for balance deflection, forcing effects due to vibration, and large tare loads. A gyroscope measurement technique is shown to be capable of resolving the balance deflection angle calculation. Long-term research objectives include a demonstration of a six-degree-of-freedom calibration, and a large-capacity balance calibration.
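The finding that angular velocity dominates the prediction error is consistent with first-order propagation of uncertainty through the centripetal load relation F = m ω² r, where the ω term enters with a factor of two. A sketch under those assumptions (the load model and the illustrative uncertainty values are assumptions for illustration, not taken from the thesis):

```python
import math

def centripetal_load(m, omega, r):
    """Applied calibration load from centripetal acceleration: F = m * omega^2 * r."""
    return m * omega**2 * r

def relative_load_uncertainty(dm_m, domega_omega, dr_r):
    """First-order propagation of uncertainty for F = m * omega^2 * r.

    The factor of 2 on the angular-velocity term is why omega
    uncertainty tends to dominate the prediction error."""
    return math.sqrt(dm_m**2 + (2.0 * domega_omega)**2 + dr_r**2)

# Illustrative numbers only (not from the thesis):
u = relative_load_uncertainty(dm_m=0.001, domega_omega=0.005, dr_r=0.001)
```

With equal relative uncertainties on all three inputs, the ω term alone contributes twice the error of either of the other terms.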
Ohira, Yutaka
2013-04-10
We consider particle acceleration by large-scale incompressible turbulence with a length scale larger than the particle mean free path. We derive an ensemble-averaged transport equation of energetic charged particles from an extended transport equation that contains the shear acceleration. The ensemble-averaged transport equation describes particle acceleration by incompressible turbulence (turbulent shear acceleration). We find that for Kolmogorov turbulence, the turbulent shear acceleration becomes important on small scales. Moreover, using Monte Carlo simulations, we confirm that the ensemble-averaged transport equation describes the turbulent shear acceleration.
NASA Astrophysics Data System (ADS)
Wilhelm, Thomas; Burde, Jan-Philipp; Lück, Stephan
2015-11-01
Acceleration is a physical quantity that is difficult to understand, and hence its complexity is often erroneously simplified. Many students think of acceleration as equivalent to velocity, a ~ v. For others, acceleration is a scalar quantity, which describes the change in speed Δ|v| or Δ|v|/Δt (as opposed to the change in velocity). The main difficulty with the concept of acceleration therefore lies in developing a correct understanding of its direction. The free iOS app AccelVisu supports students in acquiring a correct conception of acceleration by showing acceleration arrows directly at moving objects.
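The distinction between the change in speed, Δ|v|, and the magnitude of the change in the velocity vector, |Δv|, can be made concrete with uniform circular motion, where the former vanishes while the latter gives the centripetal acceleration v²/r. A minimal sketch (the unit-speed, unit-radius parametrization is an illustrative assumption):

```python
import math

def velocity(t, speed=1.0, radius=1.0):
    """Velocity vector for uniform circular motion (constant speed)."""
    w = speed / radius
    return (-speed * math.sin(w * t), speed * math.cos(w * t))

dt = 1e-3
v1, v2 = velocity(0.0), velocity(dt)

# Rate of change of *speed*: zero for uniform circular motion
d_speed = (math.hypot(*v2) - math.hypot(*v1)) / dt

# Magnitude of the rate of change of the velocity *vector*: v^2/r = 1 here
d_velocity = math.hypot(v2[0] - v1[0], v2[1] - v1[1]) / dt
```

The speed never changes, yet the acceleration (directed toward the center) is nonzero, which is exactly the directional point the abstract makes.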
Accelerator science in medical physics.
Peach, K; Wilson, P; Jones, B
2011-12-01
The use of cyclotrons and synchrotrons to accelerate charged particles in hospital settings for the purpose of cancer therapy is increasing. Consequently, there is a growing demand from medical physicists, radiographers, physicians and oncologists for articles that explain the basic physical concepts of these technologies. There are unique advantages and disadvantages to all methods of acceleration. Several promising alternative methods of accelerating particles also have to be considered since they will become increasingly available with time; however, there are still many technical problems with these that require solving. This article serves as an introduction to this complex area of physics, and will be of benefit to those engaged in cancer therapy, or who intend to acquire such technologies in the future.
Dynamics of pyroelectric accelerators
Ghaderi, R.; Davani, F. Abbasi
2015-01-26
Pyroelectric crystals are used to produce high-energy electron beams. We have derived a method to model electric potential generation on a LiTaO3 crystal during the heating cycle. In this method, the effect of heat transfer on the potential generation is investigated through experiments. In addition, electron emission from the crystal surface is modeled by measurements and analysis. These spectral data are used to present a dynamic equation for the electric potential with respect to the thickness of the crystal and the variation of its temperature. The dynamic equation's results for different thicknesses are compared with measured data. As a result, to attain more energetic electrons, the best thickness of the crystal can be extracted from the equation. This allows for a better understanding of pyroelectric crystals and helps in studying the current and energy of the accelerated electrons.
Accelerating Particles with Plasma
Litos, Michael; Hogan, Mark
2014-11-05
Researchers at SLAC explain how they use plasma wakefields to accelerate bunches of electrons to very high energies over only a short distance. Their experiments offer a possible path for the future of particle accelerators.
Accelerator Technology Division
NASA Astrophysics Data System (ADS)
1992-04-01
In fiscal year (FY) 1991, the Accelerator Technology (AT) division continued fulfilling its mission to pursue accelerator science and technology and to develop new accelerator concepts for application to research, defense, energy, industry, and other areas of national interest. This report discusses the following programs: The Ground Test Accelerator Program; APLE Free-Electron Laser Program; Accelerator Transmutation of Waste; JAERI, OMEGA Project, and Intense Neutron Source for Materials Testing; Advanced Free-Electron Laser Initiative; Superconducting Super Collider; The High-Power Microwave Program; (Phi) Factory Collaboration; Neutral Particle Beam Power System Highlights; Accelerator Physics and Special Projects; Magnetic Optics and Beam Diagnostics; Accelerator Design and Engineering; Radio-Frequency Technology; Free-Electron Laser Technology; Accelerator Controls and Automation; Very High-Power Microwave Sources and Effects; and GTA Installation, Commissioning, and Operations.
NASA Technical Reports Server (NTRS)
Mutzberg, J.
1972-01-01
Design is proposed for inexpensive accelerometer which would work by applying pressure to fluid during acceleration. Pressure is used to move shuttle, and shuttle movement is sensed and calibrated to give acceleration readings.
NASA Technical Reports Server (NTRS)
Cheng, D. Y.
1971-01-01
Converging, coaxial accelerator electrode configuration operates in vacuum as plasma gun. Plasma forms by periodic injections of high pressure gas that is ionized by electrical discharges. Deflagration mode of discharge provides acceleration, and converging contours of plasma gun provide focusing.
MEQALAC rf accelerating structure
Keane, J.; Brodowski, J.
1981-01-01
A prototype MEQALAC capable of replacing the Cockcroft-Walton pre-injector at BNL is being fabricated. Ten milliamperes of H- beam supplied from a source sitting at a potential of -40 kilovolts is to be accelerated to 750 keV. This energy gain is provided by a 200-Megahertz accelerating system rather than the normal dc acceleration. Substantial size and cost reductions would be realized by such a system over conventional pre-accelerator systems.
Acceleration gradient of a plasma wakefield accelerator
Uhm, Han S.
2008-02-25
The phase velocity of the wakefield waves is identical to the electron beam velocity. A theoretical analysis indicates that the acceleration gradient of the wakefield accelerator normalized by the wave-breaking amplitude is K0(ξ)/K1(ξ), where K0(ξ) and K1(ξ) are the modified Bessel functions of the second kind of order zero and one, respectively, and ξ is the beam parameter representing the beam intensity. It is also shown that the beam density must be considerably higher than the diffuse plasma density to achieve the large radial velocity of plasma electrons that is required for a high acceleration gradient.
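The normalized gradient K0(ξ)/K1(ξ) is easy to evaluate numerically. A sketch using the integral representation K_n(x) = ∫₀^∞ exp(-x cosh t) cosh(n t) dt (the quadrature settings are assumptions; a library routine such as scipy.special.kn would serve equally well):

```python
import math

def bessel_k(n, x, steps=2000, t_max=20.0):
    """Modified Bessel function of the second kind via its integral
    representation K_n(x) = int_0^inf exp(-x cosh t) cosh(n t) dt,
    using simple trapezoidal quadrature (adequate for moderate x)."""
    h = t_max / steps
    total = 0.0
    for i in range(steps + 1):
        t = i * h
        w = 0.5 if i in (0, steps) else 1.0
        total += w * math.exp(-x * math.cosh(t)) * math.cosh(n * t)
    return total * h

def normalized_gradient(xi):
    """Acceleration gradient / wave-breaking amplitude = K0(xi)/K1(xi)."""
    return bessel_k(0, xi) / bessel_k(1, xi)
```

The ratio rises from 0 toward 1 with increasing ξ, so the normalized gradient approaches the wave-breaking amplitude for intense beams.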
Solid oxide materials research accelerated electrochemical testing
Windisch, C.; Arey, B.
1995-08-01
The objectives of this work were to develop methods for accelerated testing of cathode materials for solid oxide fuel cells under selected operating conditions. The methods would be used to evaluate the performance of LSM cathode material.
ERIC Educational Resources Information Center
Willis, Mariam
2012-01-01
Acceleration is one tool for providing high-ability students the opportunity to learn something new every day. Some people talk about acceleration as taking a student out of step. In actuality, what one is doing is putting a student in step with the right curriculum. Whole-grade acceleration, also called grade-skipping, usually happens between…
Fernow, R.C.
1995-07-01
Far fields are propagating electromagnetic waves far from their source, boundary surfaces, and free charges. The general principles governing the acceleration of charged particles by far fields are reviewed. A survey of proposed field configurations is given. The two most important schemes, Inverse Cerenkov acceleration and Inverse free electron laser acceleration, are discussed in detail.
Angular Acceleration without Torque?
ERIC Educational Resources Information Center
Kaufman, Richard D.
2012-01-01
Hardly. Just as Robert Johns qualitatively describes angular acceleration by an internal force in his article "Acceleration Without Force?" here we will extend the discussion to consider angular acceleration by an internal torque. As we will see, this internal torque is due to an internal force acting at a distance from an instantaneous center.
Turbulence Dissipation in Non-Linear Diffusive Shock Acceleration with Magnetic Field Amplification
NASA Astrophysics Data System (ADS)
Ellison, Donald C.; Vladimirov, A.
2008-03-01
High Mach number shocks in young supernova remnants (SNRs) are believed to simultaneously place a large fraction of the supernova explosion energy in relativistic particles and amplify the ambient magnetic field by large factors. Continuing our efforts to model this strongly nonlinear process with a Monte Carlo simulation, we have incorporated the effects of the dissipation of the self-generated turbulence on the shock structure and thermal particle injection rate. We find that the heating of the thermal gas in the upstream shock precursor by the turbulence damping significantly impacts the acceleration process in our thermal pool injection model. This precursor heating may also have observational consequences. In this preliminary work, we parameterize the turbulence damping rate and lay the groundwork for incorporating more realistic physical models of turbulence generation and dissipation in nonlinear DSA. This work was supported in part by NASA ATP grant NNX07AG79G.
Accelerating the loop expansion
Ingermanson, R.
1986-07-29
This thesis introduces a new non-perturbative technique into quantum field theory. To illustrate the method, I analyze the much-studied phi^4 theory in two dimensions. As a prelude, I first show that the Hartree approximation is easy to obtain from the calculation of the one-loop effective potential by a simple modification of the propagator that does not affect the perturbative renormalization procedure. A further modification then suggests itself, which has the same nice property, and which automatically yields a convex effective potential. I then show that both of these modifications extend naturally to higher orders in the derivative expansion of the effective action and to higher orders in the loop expansion. The net effect is to re-sum the perturbation series for the effective action as a systematic 'accelerated' non-perturbative expansion. Each term in the accelerated expansion corresponds to an infinite number of terms in the original series. Each term can be computed explicitly, albeit numerically. Many numerical graphs of the various approximations to the first two terms in the derivative expansion are given. I discuss the reliability of the results and the problem of spontaneous symmetry-breaking, as well as some potential applications to more interesting field theories. 40 refs.
Probing electron acceleration and x-ray emission in laser-plasma accelerators
Thaury, C.; Ta Phuoc, K.; Corde, S.; Brijesh, P.; Lambert, G.; Malka, V.; Mangles, S. P. D.; Bloom, M. S.; Kneip, S.
2013-06-15
While laser-plasma accelerators have demonstrated a strong potential in the acceleration of electrons up to giga-electronvolt energies, few experimental tools for studying the acceleration physics have been developed. In this paper, we demonstrate a method for probing the acceleration process. A second laser beam, propagating perpendicular to the main beam, is focused on the gas jet a few nanoseconds before the main beam creates the accelerating plasma wave. This second beam is intense enough to ionize the gas and form a density depletion, which locally inhibits the acceleration. The position of the density depletion is scanned along the interaction length to probe the electron injection and acceleration, and the betatron X-ray emission. To illustrate the potential of the method, the variation of the injection position with the plasma density is studied.
Picciotto, Sally; Ljungman, Petter L; Eisen, Ellen A
2016-04-01
Straight metalworking fluids have been linked to cardiovascular mortality in analyses using binary exposure metrics, accounting for healthy worker survivor bias by using g-estimation of accelerated failure time models. A cohort of 38,666 Michigan autoworkers was followed (1941-1994) for mortality from all causes and ischemic heart disease. The structural model chosen here, using continuous exposure, assumes that increasing exposure from 0 to 1 mg/m(3) in any single year would decrease survival time by a fixed amount. Under that assumption, banning the fluids would have saved an estimated total of 8,468 (slope-based 95% confidence interval: 2,262, 28,563) person-years of life in this cohort. On average, 3.04 (slope-based 95% confidence interval: 0.02, 25.98) years of life could have been saved for each exposed worker who died from ischemic heart disease. Estimates were sensitive to both model specification for predicting exposure (multinomial or logistic regression) and characterization of exposure as binary or continuous in the structural model. Our results provide evidence supporting the hypothesis of a detrimental relationship between straight metalworking fluids and mortality, particularly from ischemic heart disease, as well as an instructive example of the challenges in obtaining and interpreting results from accelerated failure time models using a continuous exposure in the presence of competing risks. PMID:26968943
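The structural model described, in which each unit of annual exposure removes a fixed amount of survival time, can be sketched in its simplest form. This is an illustration of the model's shape only, with a hypothetical effect parameter psi; the paper's actual analysis uses g-estimation to fit the model while correcting for healthy worker survivor bias:

```python
def years_of_life_lost(annual_exposures, psi):
    """Survival time lost under a simple additive structural accelerated
    failure time model: each 1 mg/m3-year of straight metalworking fluid
    exposure shortens survival by a fixed psi years.

    Hypothetical illustration of the model's form, not the paper's
    g-estimation procedure."""
    return psi * sum(annual_exposures)

# Illustrative only: ten years at 0.3 mg/m3 with psi = 1.0 year per mg/m3-year
yll = years_of_life_lost([0.3] * 10, psi=1.0)  # about 3 years of life lost
```

Summing such per-worker quantities over a cohort is what yields aggregate person-years-of-life figures like those reported in the abstract.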
NASA Technical Reports Server (NTRS)
Foster, John E.
2004-01-01
A plasma accelerator has been conceived for both material-processing and spacecraft-propulsion applications. This accelerator generates and accelerates ions within a very small volume. Because of its compactness, this accelerator could be nearly ideal for primary or station-keeping propulsion for spacecraft having masses between 1 and 20 kg. Because this accelerator is designed to generate beams of ions having energies between 50 and 200 eV, it could also be used for surface modification or activation of thin films.
Accelerated dynamics simulations of nanotubes.
Uberuaga, B. P.; Stuart, S. J.; Voter, A. F.
2002-01-01
We report on the application of accelerated dynamics techniques to the study of carbon nanotubes. We have used the parallel replica method, and temperature-accelerated dynamics simulations are currently in progress. In the parallel replica study, we have stretched tubes at a rate significantly lower than that used in previous studies. In these preliminary results, we find that there are qualitative differences in the rupture of the nanotubes at different temperatures. We plan to extend this investigation to include nanotubes of various chiralities. We also plan to explore unique geometries of nanotubes.
High brightness electron accelerator
Sheffield, Richard L.; Carlsten, Bruce E.; Young, Lloyd M.
1994-01-01
A compact high-brightness linear accelerator is provided for use, e.g., in a free electron laser. The accelerator has a first plurality of accelerating cavities having end walls with four coupling slots for accelerating electrons to high velocities in the absence of quadrupole fields. A second plurality of cavities receives the high-velocity electrons for further acceleration, where each of the second cavities has end walls with two coupling slots for acceleration in the absence of dipole fields. The accelerator also includes a first cavity with an extended length to provide for phase matching the electron beam along the accelerating cavities. A solenoid is provided about the photocathode that emits the electrons, where the solenoid is configured to provide a substantially uniform magnetic field over the photocathode surface to minimize emittance of the electrons as the electrons enter the first cavity.
Hammond, Andrew P.; /Reed Coll. /SLAC
2010-08-25
One of the options for future particle accelerators is the photonic band gap (PBG) fiber accelerator. PBG fibers are specially designed optical fibers that use lasers to excite an electric field that is used to accelerate electrons. To improve PBG accelerators, the basic parameters of the fiber were tested to maximize defect size and acceleration. Using the program CUDOS, several accelerating modes were found that maximized these parameters for several wavelengths. The design of multiple defects, similar to having closely bound fibers, was studied to find possible coupling or the change of modes. The amount of coupling was found to depend on the separation distance. For certain distances, accelerating coupled modes were found and examined. In addition, several non-periodic fiber structures were examined using CUDOS. The non-periodic fibers produced several interesting results and promise more modes, given time to study them in more detail.
Yang, Guang; Sun, Qiushi; Hu, Zhiyan; Liu, Hua; Zhou, Tingting; Fan, Guorong
2015-10-01
In this study, an accelerated solvent extraction dispersive liquid-liquid microextraction coupled with gas chromatography and mass spectrometry was established and employed for the extraction, concentration and analysis of essential oil constituents from Ligusticum chuanxiong Hort. Response surface methodology was performed to optimize the key parameters in accelerated solvent extraction on the extraction efficiency, and key parameters in dispersive liquid-liquid microextraction were discussed as well. Two representative constituents in Ligusticum chuanxiong Hort, (Z)-ligustilide and n-butylphthalide, were quantitatively analyzed. It was shown that the qualitative result of the accelerated solvent extraction dispersive liquid-liquid microextraction approach was in good agreement with that of hydro-distillation, whereas the proposed approach took far less extraction time (30 min), consumed less plant material (usually <1 g, 0.01 g for this study) and solvent (<20 mL) than the conventional system. To sum up, the proposed method could be recommended as a new approach in the extraction and analysis of essential oil. PMID:26304788
Colgate, S.A.
1993-12-31
The origin of cosmic rays and applicable laboratory experiments are discussed. Some of the problems of shock acceleration for the production of cosmic rays are discussed in the context of astrophysical conditions. These are: the presumed unique explanation of the power-law spectrum is shown instead to be a universal property of all lossy accelerators; the extraordinary isotropy of cosmic rays and the limited diffusion distances implied by supernova-induced shock acceleration require a more frequent and space-filling source than supernovae; the near-perfect adiabaticity of strong hydromagnetic turbulence necessary for reflecting the accelerated particles, roughly 10{sup 5} to 10{sup 6} scatterings with negligible energy loss for each doubling in energy, seems most unlikely; and the evidence for acceleration by quasi-parallel heliospheric shocks is weak. There is little evidence for the expected strong hydromagnetic turbulence; instead, only a small number of particles accelerate, after only a few shock traversals. The acceleration of electrons in the same collisionless shock that accelerates ions is difficult to reconcile with the theoretical picture of strong hydromagnetic turbulence that reflects the ions: the hydromagnetic turbulence will appear adiabatic to the electrons at their much higher Larmor frequency, so the electrons should not be scattered incoherently, as they must be for acceleration. Therefore the electrons must be accelerated by a different mechanism. This is unsatisfactory, because the sites where electrons are accelerated, observed in radio emission, may accelerate ions even more readily. The acceleration is coherent provided the reconnection is coherent, in which case the total flux, as for example of collimated radio sources, predicts single-charge accelerated energies much greater than observed.
Accelerator Production Options for {sup 99}Mo
Bertsche, Kirk; /SLAC
2010-08-25
Shortages of {sup 99}Mo, the most commonly used diagnostic medical isotope, have caused great concern and have prompted numerous suggestions for alternate production methods. A wide variety of accelerator-based approaches have been suggested. In this paper we survey and compare the various accelerator-based approaches.
A Microcomputer-Controlled Measurement of Acceleration.
ERIC Educational Resources Information Center
Crandall, A. Jared; Stoner, Ronald
1982-01-01
Describes apparatus and method used to allow rapid and repeated measurement of acceleration of a ball rolling down an inclined plane. Acceleration measurements can be performed in an hour with the apparatus interfaced to a Commodore PET microcomputer. A copy of the BASIC program is available from the authors. (Author/JN)
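The BASIC program mentioned in the record is not reproduced here, but the underlying computation is simple: for a ball released from rest, d = (1/2)at², so acceleration can be estimated from timed distances by a one-parameter least-squares fit. The sketch below (function name and interface are mine, not from the original apparatus) shows that calculation in Python:

```python
import numpy as np

def fit_acceleration(times, distances):
    """Least-squares estimate of a from d = 0.5 * a * t^2 (release from rest)."""
    t2 = 0.5 * np.asarray(times, dtype=float) ** 2
    d = np.asarray(distances, dtype=float)
    # one-parameter fit: a = <t2, d> / <t2, t2>
    return float((t2 @ d) / (t2 @ t2))
```

With noisy timing data the same fit averages out measurement error, which is the point of the "rapid and repeated" measurements the abstract describes.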
Dubaniewicz, Thomas H.; DuCarme, Joseph P.
2016-01-01
Researchers with the National Institute for Occupational Safety and Health (NIOSH) studied the potential for lithium-ion cell thermal runaway from an internal short circuit in equipment for use in underground coal mines. In this third phase of the study, researchers compared plastic wedge crush-induced internal short circuit tests of selected lithium-ion cells within methane (CH4)-air mixtures with accelerating rate calorimetry tests of similar cells. Plastic wedge crush test results with metal oxide lithium-ion cells extracted from intrinsically safe evaluated equipment were mixed, with one cell model igniting the chamber atmosphere while another cell model did not. The two cell models exhibited different internal short circuit behaviors. A lithium iron phosphate (LiFePO4) cell model was tolerant to crush-induced internal short circuits within CH4-air, tested under manufacturer-recommended charging conditions. Accelerating rate calorimetry tests with similar cells within a nitrogen-purged 353-mL chamber produced ignitions that exceeded explosion proof and flameproof enclosure minimum internal pressure design criteria. Ignition pressures within a 20-L chamber with 6.5% CH4-air were relatively low, with much larger head space volume and less adiabatic test conditions. The literature indicates that sizeable lithium thionyl chloride (LiSOCl2) primary (non-rechargeable) cell ignitions can be especially violent and toxic. Because ignition of an explosive atmosphere is expected within explosion proof or flameproof enclosures, there is a need to consider the potential for an internal explosive atmosphere ignition in combination with a lithium or lithium-ion battery thermal runaway process, and the resulting effects on the enclosure. PMID:27695201
Kinematics of transition during human accelerated sprinting
Nagahara, Ryu; Matsubayashi, Takeo; Matsuo, Akifumi; Zushi, Koji
2014-01-01
This study investigated the kinematics of human accelerated sprinting through 50 m and examined whether there are transitions and changes in acceleration strategies during the entire acceleration phase. Twelve male sprinters performed a 60-m sprint, during which step-to-step kinematics were captured using 60 infrared cameras. To detect transitions during the acceleration phase, the mean height of the whole-body centre of gravity (CG) during the support phase was adopted as a measure. The detection methods found two transitions during the entire acceleration phase of maximal sprinting, and the acceleration phase could thus be divided into initial, middle, and final sections. Discriminable kinematic changes were found when the sprinters crossed the first transition (the foot contacting the ground in front of the CG, the knee joint starting to flex during the support phase, and the termination of the increase in step frequency) and the second transition (the termination of changes in body posture and the start of a slight decrease in the intensity of hip-joint movements), validating the employed methods. In each acceleration section, different contributions of the lower-extremity segments to the increase in CG forward velocity were verified (thigh and shank for the initial section; thigh, shank, and foot for the middle section; shank and foot for the final section), establishing different acceleration strategies during the entire acceleration phase. In conclusion, there are presumably two transitions during human maximal accelerated sprinting that divide the entire acceleration phase into three sections, and different acceleration strategies, represented by the contributions of the segments to running speed, are employed. PMID:24996923
NASA Technical Reports Server (NTRS)
Kolyer, J. M.
1978-01-01
An important principle is that encapsulants should be tested in a total array system allowing realistic interaction of components. Therefore, micromodule test specimens were fabricated with a variety of encapsulants, substrates, and types of circuitry. One common failure mode was corrosion of circuitry and solar cell metallization due to moisture penetration; another was darkening and/or opacification of the encapsulant. A test program plan was proposed that includes multicondition accelerated exposure. Another method was hyperaccelerated photochemical exposure using a solar concentrator, which simulates 20 years of sunlight exposure in one to two weeks. The study was beneficial in identifying some cost-effective encapsulants and array designs.
The Dielectric Wall Accelerator
Caporaso, George J.; Chen, Yu-Jiuan; Sampayan, Stephen E.
2009-01-01
The Dielectric Wall Accelerator (DWA), a class of induction accelerators, employs a novel insulating beam tube to impress a longitudinal electric field on a bunch of charged particles. The surface flashover characteristics of this tube may permit the attainment of accelerating gradients on the order of 100 MV/m for accelerating pulses on the order of a nanosecond in duration. A virtual traveling wave of excitation along the tube is produced at any desired speed by controlling the timing of pulse generating modules that supply a tangential electric field to the tube wall. Because of the ability to control the speed of this virtual wave, the accelerator is capable of handling any charge to mass ratio particle; hence it can be used for electrons, protons and any ion. The accelerator architectures, key technologies and development challenges will be described.
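The appeal of the quoted gradient is easy to quantify: energy gain scales linearly with gradient and structure length. The helper below is a back-of-the-envelope sketch (the function name and the assumption of a uniform, fully sustained gradient are mine, not from the abstract):

```python
def energy_gain_mev(gradient_mv_per_m, length_m, charge_state=1):
    """Kinetic energy gained (MeV) by a particle of the given charge state
    traversing a uniform accelerating gradient over length_m metres."""
    return charge_state * gradient_mv_per_m * length_m
```

At the 100 MV/m figure cited for the DWA, a singly charged particle would gain 100 MeV per metre of structure, which is why such gradients are attractive for compact machines.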
Whittum, David H.; Tantawi, Sami G.
2001-01-01
We describe a new concept for a microwave circuit functioning as a charged-particle accelerator at mm wavelengths, permitting an accelerating gradient higher than conventional passive circuits can withstand, consistent with cyclic fatigue. The device provides acceleration for multiple bunches in parallel channels, and permits a short exposure time for the conducting surface of the accelerating cavities. Our analysis includes scalings based on a smooth transmission line model and a complementary treatment with a coupled-cavity simulation. We also provide an electromagnetic design for the accelerating structure, arriving at rough dimensions for a seven-cell accelerator matched to standard waveguide and suitable for bench tests at low power in air at 91.392 GHz. A critical element in the concept is a fast mm-wave switch suitable for operation at high power, and we present the considerations for implementation in an H-plane tee. We discuss the use of diamond as the photoconductor switch medium.
Whittum, David H
2000-10-04
We describe a new concept for a microwave circuit functioning as a charged-particle accelerator at mm wavelengths, permitting an accelerating gradient higher than conventional passive circuits can withstand, consistent with cyclic fatigue. The device provides acceleration for multiple bunches in parallel channels, and permits a short exposure time for the conducting surface of the accelerating cavities. Our analysis includes scalings based on a smooth transmission line model and a complementary treatment with a coupled-cavity simulation. We also provide an electromagnetic design for the accelerating structure, arriving at rough dimensions for a seven-cell accelerator matched to standard waveguide and suitable for bench tests at low power in air at 91.392 GHz. A critical element in the concept is a fast mm-wave switch suitable for operation at high power, and we present the considerations for implementation in an H-plane tee. We discuss the use of diamond as the photoconductor switch medium.
Wilson, P.B.
1986-02-01
In a wake field accelerator a high current driving bunch injected into a structure or plasma produces intense induced fields, which are in turn used to accelerate a trailing charge or bunch. The basic concepts of wake field acceleration are described. Wake potentials for closed cavities and periodic structures are derived, as are wake potentials on a collinear path with a charge distribution. Cylindrically symmetric structures excited by a beam in the form of a ring are considered. (LEW)
ACCELERATION RESPONSIVE SWITCH
Chabrek, A.F.; Maxwell, R.L.
1963-07-01
An acceleration-responsive device with dual-channel capabilities is described: a first circuit is actuated upon attainment of a predetermined maximum acceleration level, and another circuit is actuated when the acceleration drops to a predetermined minimum acceleration level. A fluid-damped sensing mass, slidably mounted in a relatively frictionless manner on a shaft through the intermediation of a ball bushing and biased by an adjustable compression spring, provides the inertially operated means for actuating the circuits. (AEC)
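The dual-channel behaviour described above is a simple sequenced threshold with hysteresis. The sketch below models only the logic, not the mechanical device; the class and threshold names are illustrative choices of mine:

```python
class DualChannelAccelSwitch:
    """Sketch of the dual-channel logic: channel 1 fires when acceleration
    first reaches a_max; channel 2 fires when, after that, it falls to a_min."""

    def __init__(self, a_max, a_min):
        self.a_max, self.a_min = a_max, a_min
        self.ch1_fired = self.ch2_fired = False

    def update(self, a):
        if not self.ch1_fired and a >= self.a_max:
            self.ch1_fired = True           # maximum level reached
        elif self.ch1_fired and not self.ch2_fired and a <= self.a_min:
            self.ch2_fired = True           # dropped back to minimum level
        return self.ch1_fired, self.ch2_fired
```

Feeding the switch a rising-then-falling acceleration profile actuates channel 1 at the peak threshold and channel 2 only on the way back down, mirroring the described sequencing.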
Optically pulsed electron accelerator
Fraser, John S.; Sheffield, Richard L.
1987-01-01
An optically pulsed electron accelerator can be used as an injector for a free electron laser and comprises a pulsed light source, such as a laser, for providing discrete incident light pulses. A photoemissive electron source emits electron bursts having the same duration as the incident light pulses when impinged upon by same. The photoemissive electron source is located on an inside wall of a radio frequency powered accelerator cell which accelerates the electron burst emitted by the photoemissive electron source.
Optically pulsed electron accelerator
Fraser, J.S.; Sheffield, R.L.
1985-05-20
An optically pulsed electron accelerator can be used as an injector for a free electron laser and comprises a pulsed light source, such as a laser, for providing discrete incident light pulses. A photoemissive electron source emits electron bursts having the same duration as the incident light pulses when impinged upon by same. The photoemissive electron source is located on an inside wall of a radiofrequency-powered accelerator cell which accelerates the electron burst emitted by the photoemissive electron source.
Convex accelerated maximum entropy reconstruction
NASA Astrophysics Data System (ADS)
Worley, Bradley
2016-04-01
Maximum entropy (MaxEnt) spectral reconstruction methods provide a powerful framework for spectral estimation of nonuniformly sampled datasets. Many methods exist within this framework, usually defined based on the magnitude of a Lagrange multiplier in the MaxEnt objective function. An algorithm is presented here that utilizes accelerated first-order convex optimization techniques to rapidly and reliably reconstruct nonuniformly sampled NMR datasets using the principle of maximum entropy. This algorithm - called CAMERA for Convex Accelerated Maximum Entropy Reconstruction Algorithm - is a new approach to spectral reconstruction that exhibits fast, tunable convergence in both constant-aim and constant-lambda modes. A high-performance, open source NMR data processing tool is described that implements CAMERA, and brief comparisons to existing reconstruction methods are made on several example spectra.
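CAMERA itself is not reproduced here, but the "accelerated first-order convex optimization techniques" the abstract invokes are canonically Nesterov-style momentum methods. The following is a minimal sketch of one such scheme on a toy strongly convex quadratic (the function name, the constant-momentum variant, and the test problem are my assumptions, not CAMERA's actual objective):

```python
import numpy as np

def accelerated_gd(grad, x0, L, mu, iters=300):
    """Nesterov's accelerated gradient method for an L-smooth,
    mu-strongly-convex objective, with constant momentum
    beta = (sqrt(kappa) - 1) / (sqrt(kappa) + 1), kappa = L / mu."""
    kappa = L / mu
    beta = (np.sqrt(kappa) - 1.0) / (np.sqrt(kappa) + 1.0)
    x = np.asarray(x0, dtype=float)
    y = x.copy()
    for _ in range(iters):
        x_next = y - grad(y) / L            # gradient step from lookahead point
        y = x_next + beta * (x_next - x)    # momentum extrapolation
        x = x_next
    return x
```

On ill-conditioned problems this momentum scheme converges at roughly the square root of the condition number's rate, which is the kind of speedup that makes large MaxEnt reconstructions tractable.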
Accelerating optimization by tracing valley
NASA Astrophysics Data System (ADS)
Li, Qing-Xiao; He, Rong-Qiang; Lu, Zhong-Yi
2016-06-01
We propose an algorithm to accelerate optimization when an objective function locally resembles a long narrow valley. In such a case, a conventional optimization algorithm usually wanders with too many tiny steps in the valley. The new algorithm approximates the valley bottom locally by a parabola that is obtained by fitting a set of successive points generated recently by a conventional optimization method. Then large steps are taken along the parabola, accompanied by fine adjustment to trace the valley bottom. The effectiveness of the new algorithm has been demonstrated by accelerating the Newton trust-region minimization method and the Levenberg-Marquardt method on the nonlinear fitting problem in exact-diagonalization dynamical mean-field theory and on the classic minimization problem of the Rosenbrock function. Speedups of many times have been achieved for both problems, showing the high efficiency of the new algorithm.
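The valley-tracing idea can be sketched in a few lines: run a conventional optimizer, periodically fit a parabola to the recent iterates, and attempt a larger step along the fitted curve, keeping it only if the objective improves. This toy on the Rosenbrock function (whose valley bottom is itself nearly parabolic) is my illustrative reconstruction, not the authors' algorithm; step sizes, fit window, and extrapolation factor are arbitrary choices:

```python
import numpy as np

def rosenbrock(p):
    x, y = p
    return (1 - x) ** 2 + 100 * (y - x * x) ** 2

def rosen_grad(p):
    x, y = p
    return np.array([-2 * (1 - x) - 400 * x * (y - x * x),
                     200 * (y - x * x)])

def gd_with_valley_jumps(p0, lr=1e-3, iters=2000, fit_every=20):
    """Plain gradient descent plus periodic 'valley jumps': fit a parabola
    y = a*x^2 + b*x + c to recent iterates, try a larger step along it,
    and keep the candidate only if the objective decreases."""
    p = np.asarray(p0, dtype=float)
    history = [p.copy()]
    for k in range(1, iters + 1):
        p = p - lr * rosen_grad(p)
        history.append(p.copy())
        if k % fit_every == 0 and len(history) >= 5:
            pts = np.array(history[-5:])
            a, b, c = np.polyfit(pts[:, 0], pts[:, 1], 2)
            x_new = p[0] + 5 * (pts[-1, 0] - pts[0, 0])  # extrapolate along fit
            cand = np.array([x_new, a * x_new ** 2 + b * x_new + c])
            if rosenbrock(cand) < rosenbrock(p):
                p = cand
                history.append(p.copy())
    return p
```

Because candidates are accepted only when they improve the objective, the jumps can never make the run worse than plain gradient descent at the moment they are taken; when the parabola matches the valley, each jump replaces many tiny steps.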
Acceleration of polarized protons in circular accelerators
Courant, E.D.; Ruth, R.D.
1980-09-12
The theory of depolarization in circular accelerators is presented. The spin equation is first expressed in terms of the particle orbit and then converted to the equivalent spinor equation. The spinor equation is then solved for three different situations: (1) a beam on a flat top near a resonance, (2) uniform acceleration through an isolated resonance, and (3) a model of a fast resonance jump. Finally, the depolarization coefficient, epsilon, is calculated in terms of properties of the particle orbit and the results are applied to a calculation of depolarization in the AGS.
Charged particle accelerator grating
Palmer, Robert B.
1986-01-01
A readily disposable and replaceable accelerator grating for a relativistic particle accelerator. The grating is formed from a plurality of liquid droplets that are directed in precisely positioned jet streams to periodically dispose rows of droplets along the borders of a predetermined particle beam path. A plurality of lasers are used to direct laser beams into the droplets, at predetermined angles, thereby to excite the droplets to support electromagnetic accelerating resonances on their surfaces. Those resonances operate to accelerate and focus particles moving along the beam path. As the droplets are distorted or destroyed by the incoming radiation, they are replaced at a predetermined frequency by other droplets supplied through the jet streams.
Particle acceleration in flares
NASA Technical Reports Server (NTRS)
Benz, Arnold O.; Kosugi, Takeo; Aschwanden, Markus J.; Benka, Steve G.; Chupp, Edward L.; Enome, Shinzo; Garcia, Howard; Holman, Gordon D.; Kurt, Victoria G.; Sakao, Taro
1994-01-01
Particle acceleration is intrinsic to the primary energy release in the impulsive phase of solar flares, and we cannot understand flares without understanding acceleration. New observations in soft and hard X-rays, gamma-rays and coherent radio emissions are presented, suggesting flare fragmentation in time and space. X-ray and radio measurements exhibit at least five different time scales in flares. In addition, some new observations of delayed acceleration signatures are also presented. The theory of acceleration by parallel electric fields is used to model the spectral shape and evolution of hard X-rays. The possibility of the appearance of double layers is further investigated.
Kreiner, A J; Baldo, M; Bergueiro, J R; Cartelli, D; Castell, W; Thatar Vento, V; Gomez Asoia, J; Mercuri, D; Padulo, J; Suarez Sandin, J C; Erhardt, J; Kesque, J M; Valda, A A; Debray, M E; Somacal, H R; Igarzabal, M; Minsky, D M; Herrera, M S; Capoulat, M E; Gonzalez, S J; del Grosso, M F; Gagetti, L; Suarez Anzorena, M; Gun, M; Carranza, O
2014-06-01
The activity in accelerator development for accelerator-based BNCT (AB-BNCT) both worldwide and in Argentina is described. Projects in Russia, UK, Italy, Japan, Israel, and Argentina to develop AB-BNCT around different types of accelerators are briefly presented. In particular, the present status and recent progress of the Argentine project will be reviewed. The topics will cover: intense ion sources, accelerator tubes, transport of intense beams, beam diagnostics, the (9)Be(d,n) reaction as a possible neutron source, Beam Shaping Assemblies (BSA), a treatment room, and treatment planning in realistic cases. PMID:24365468
Charged particle accelerator grating
Palmer, Robert B.
1986-09-02
A readily disposable and replaceable accelerator grating for a relativistic particle accelerator. The grating is formed from a plurality of liquid droplets that are directed in precisely positioned jet streams to periodically dispose rows of droplets along the borders of a predetermined particle beam path. A plurality of lasers are used to direct laser beams into the droplets, at predetermined angles, thereby to excite the droplets to support electromagnetic accelerating resonances on their surfaces. Those resonances operate to accelerate and focus particles moving along the beam path. As the droplets are distorted or destroyed by the incoming radiation, they are replaced at a predetermined frequency by other droplets supplied through the jet streams.
Angular velocities, angular accelerations, and coriolis accelerations
NASA Technical Reports Server (NTRS)
Graybiel, A.
1975-01-01
Weightlessness, rotating environments, and the mathematical analysis of Coriolis acceleration are described as part of man's biologically effective force environments. Effects on the vestibular system are summarized, including the end organs, functional neurology, and input-output relations. Ground-based studies in preparation for space missions are examined, including functional tests, provocative tests, adaptive capacity tests, simulation studies, and antimotion sickness studies.
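The Coriolis acceleration analyzed in such studies follows the standard rotating-frame formula a_cor = -2 (omega x v). A minimal computational sketch (function name mine):

```python
import numpy as np

def coriolis_acceleration(omega, v):
    """Coriolis acceleration in a frame rotating at angular velocity omega:
    a_cor = -2 * (omega x v), with omega in rad/s and v in m/s."""
    return -2.0 * np.cross(np.asarray(omega, float), np.asarray(v, float))
```

For example, a subject moving at 1 m/s radially in a habitat spinning at 1 rad/s about the z-axis experiences a 2 m/s^2 sideways Coriolis acceleration, which is the vestibular stimulus the rotating-environment tests probe.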
Designing and Running for High Accelerator Availability
Willeke,F.
2009-05-04
The report provides an overview and examples of high-availability design considerations and operational aspects, with reference to some of the available methods for assessing and improving accelerator reliability.
Theoretical problems in accelerator physics. Progress report
Kroll, N.M.
1993-08-01
This report discusses the following topics in accelerator physics: radio frequency pulse compression and power transport; computational methods for the computer analysis of microwave components; persistent wakefields associated with waveguide damping of higher order modes; and photonic band gap cavities.
Anderson Acceleration for Fixed-Point Iterations
Walker, Homer F.
2015-08-31
The purpose of this grant was to support research on acceleration methods for fixed-point iterations, with applications to computational frameworks and simulation problems that are of interest to DOE.
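Anderson acceleration itself is standard: keep a short history of residuals f_k = g(x_k) - x_k and extrapolate using the least-squares combination that minimizes the residual. The sketch below is a minimal type-II (difference-matrix) implementation in the spirit of Walker's work; the function signature and defaults are my choices:

```python
import numpy as np

def anderson(g, x0, m=3, tol=1e-10, maxit=50):
    """Anderson acceleration (type II) for the fixed-point iteration x <- g(x).

    Keeps a short history of g-values and residuals f_k = g(x_k) - x_k and
    extrapolates with the least-squares coefficients on residual differences."""
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    G, F = [], []                      # histories of g(x_k) and residuals
    for k in range(maxit):
        gx = np.atleast_1d(g(x))
        f = gx - x
        if np.linalg.norm(f) < tol:
            return x, k
        G.append(gx)
        F.append(f)
        G, F = G[-(m + 1):], F[-(m + 1):]
        if len(F) > 1:
            dF = np.column_stack([F[i + 1] - F[i] for i in range(len(F) - 1)])
            dG = np.column_stack([G[i + 1] - G[i] for i in range(len(G) - 1)])
            gamma, *_ = np.linalg.lstsq(dF, f, rcond=None)
            x = gx - dG @ gamma        # accelerated update
        else:
            x = gx                     # plain Picard step on the first pass
    return x, maxit
```

With history depth m = 1 this reduces to the secant method; on the classic test iteration x <- cos(x) it converges in a handful of steps, where plain Picard iteration needs dozens.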
Accelerated Thermal Cycling and Failure Mechanisms
NASA Technical Reports Server (NTRS)
Ghaffarian, R.
1999-01-01
This paper reviews the accelerated thermal cycling test methods that are currently used by industry to characterize the interconnect reliability of commercial-off-the-shelf (COTS) ball grid array (BGA) and chip scale package (CSP) assemblies.
Accelerators Beyond The Tevatron?
Lach, Joseph; /Fermilab
2010-07-01
Following the successful operation of the Fermilab superconducting accelerator three new higher energy accelerators were planned. They were the UNK in the Soviet Union, the LHC in Europe, and the SSC in the United States. All were expected to start producing physics about 1995. They did not. Why?
None
2016-07-12
1a) Introduction and motivation
1b) History and accelerator types
2) Transverse beam dynamics
3a) Longitudinal beam dynamics
3b) Figure of merit of a synchrotron/collider
3c) Beam control
4) Main limiting factors
5) Technical challenges
Prerequisite knowledge: previous knowledge of accelerators is not required.
NASA Astrophysics Data System (ADS)
Birx, Daniel
1992-03-01
Among the family of particle accelerators, the induction linear accelerator is the best suited for the acceleration of high-current electron beams. Because the electromagnetic radiation used to accelerate the electron beam is not stored in the cavities but is supplied by transmission lines during the beam pulse, it is possible to utilize very low-Q (typically < 10) structures and very large beam pipes. This combination increases the beam-breakup-limited maximum currents to of order kiloamperes. The micropulse lengths of these machines are measured in tens of nanoseconds, and duty factors as high as 10{sup -4} have been achieved. Until recently the major problem with these machines has been associated with the pulse-power drive. Beam currents of kiloamperes and accelerating potentials of megavolts require peak power drives of gigawatts, since no energy is stored in the structure. The marriage of linear accelerator technology and nonlinear magnetic compressors has produced some unique capabilities. It now appears possible to produce electron beams with average currents measured in amperes, peak currents in kiloamperes, and gradients exceeding 1 MeV/meter, with power efficiencies approaching 50%. The nonlinear magnetic compression technology has replaced the spark-gap drivers used on earlier accelerators with state-of-the-art all-solid-state SCR-commutated compression chains. The reliability of these machines is now approaching a 10{sup 10}-shot MTBF. In the following paper we briefly review the historical development of induction linear accelerators and then discuss the design considerations.
2009-07-09
1a) Introduction and motivation 1b) History and accelerator types 2) Transverse beam dynamics 3a) Longitudinal beam dynamics 3b) Figure of merit of a synchrotron/collider 3c) Beam control 4) Main limiting factors 5) Technical challenges Prerequisite knowledge: Previous knowledge of accelerators is not required.