Bioimaging of cells and tissues using accelerator-based sources.
Petibois, Cyril; Cestelli Guidi, Mariangela
2008-07-01
A variety of techniques exist that provide chemical information in the form of a spatially resolved image: electron microprobe analysis, nuclear microprobe analysis, synchrotron radiation microprobe analysis, secondary ion mass spectrometry, and confocal fluorescence microscopy. Linear (LINAC) and circular (synchrotron) particle accelerators have been constructed worldwide to provide the scientific community with unprecedented analytical performance. These facilities now match at least one of the three analytical features required in the biological field: (1) a spatial resolution sufficient for single-cell (< 1 μm) or tissue (< 1 mm) analyses, (2) a temporal resolution able to follow molecular dynamics, and (3) a sensitivity in the micromolar to nanomolar range, thus allowing true investigations of biological dynamics. Third-generation synchrotrons now offer the opportunity of bioanalytical measurements at nanometer resolution with remarkable sensitivity. Linear accelerators are more specialized in their physical features but may exceed synchrotron performance. All these techniques have become irreplaceable tools for developing knowledge in biology. This review highlights the pros and cons of the most popular techniques that have been implemented on accelerator-based sources to address analytical issues in biological specimens.
Analysis of Cultural Heritage by Accelerator Techniques and Analytical Imaging
NASA Astrophysics Data System (ADS)
Ide-Ektessabi, Ari; Toque, Jay Arre; Murayama, Yusuke
2011-12-01
In this paper we present the results of experimental investigations using two important classes of accelerator techniques: (1) synchrotron radiation XRF and XAFS; and (2) accelerator mass spectrometry and multispectral analytical imaging for the investigation of cultural heritage. We also introduce a complementary, noninvasive and nondestructive approach to the investigation of artworks that can be applied in situ. Four major projects will be discussed to illustrate the potential applications of these accelerator and analytical imaging techniques: (1) investigation of Mongolian textiles (Genghis Khan and Kublai Khan period) using XRF, AMS and electron microscopy; (2) XRF studies of pigments collected from Korean Buddhist paintings; (3) creation of a database of elemental composition and spectral reflectance of more than 1000 Japanese pigments which have been used for traditional Japanese paintings; and (4) visible light-near infrared spectroscopy and multispectral imaging of degraded malachite and azurite. The XRF measurements of the Japanese and Korean pigments could be used to complement the results of pigment identification by analytical imaging through spectral reflectance reconstruction. On the other hand, analysis of the Mongolian textiles revealed that they were produced between the 12th and 13th centuries. Elemental analysis of the samples showed that they contained traces of gold, copper, iron and titanium. Based on the age and trace elements in the samples, it was concluded that the textiles were produced during the height of power of the Mongol empire, which makes them a valuable cultural heritage. Finally, the analysis of the degraded and discolored malachite and azurite demonstrates how multispectral analytical imaging could be used to complement the results of high-energy-based techniques.
NASA Astrophysics Data System (ADS)
Olabanji, S. O.; Ige, O. A.; Mazzoli, C.; Ceccato, D.; Akintunde, J. A.; De Poli, M.; Moschini, G.
2005-10-01
For the first time, the complementary techniques of accelerator-based PIXE and electron microprobe analysis (EMPA) were employed for the characterization of some of Nigeria's natural minerals, namely fluorite, tourmaline and topaz. These minerals occur in different areas of Nigeria. They are mainly used as gemstones and for other scientific and technological applications and are therefore very important. There is a need to characterize them to establish the quality of these gemstones and to update the geochemical data on them, geared towards useful applications. PIXE analysis was carried out using the 1.8 MeV collimated proton beam from the 2.5 MV AN 2000 Van de Graaff accelerator at INFN, LNL, Legnaro, Padova, Italy. The novel results, which show many elements at different concentrations in these minerals, are presented and discussed.
Studies of industrial emissions by accelerator-based techniques: A review of applications at CEDAD
NASA Astrophysics Data System (ADS)
Calcagnile, L.; Quarta, G.
2012-04-01
Different research activities are in progress at the Centre for Dating and Diagnostics (CEDAD), University of Salento, in the field of environmental monitoring, exploiting the potential of the different experimental beam lines implemented on the 3 MV Tandetron accelerator and dedicated to AMS (Accelerator Mass Spectrometry) radiocarbon dating and IBA (Ion Beam Analysis). An overview of these activities is presented, showing how accelerator-based analytical techniques can be a powerful tool for monitoring anthropogenic carbon dioxide emissions from industrial sources and for the assessment of the biogenic content in SRF (Solid Recovered Fuel) burned in WTE (Waste to Energy) plants.
Murnick, Daniel E; Dogru, Ozgur; Ilkmen, Erhan
2008-07-01
We show a new ultrasensitive laser-based analytical technique, intracavity optogalvanic spectroscopy, allowing extremely high sensitivity for detection of ¹⁴C-labeled carbon dioxide. Capable of replacing large accelerator mass spectrometers, the technique quantifies attomoles of ¹⁴C in submicrogram samples. Based on the specificity of narrow laser resonances coupled with the sensitivity provided by standing waves in an optical cavity and detection via impedance variations, limits of detection near 10⁻¹⁵ ¹⁴C/¹²C ratios are obtained. Using a 15-W ¹⁴CO₂ laser, a linear calibration with samples from 10⁻¹⁵ to >1.5 × 10⁻¹² in ¹⁴C/¹²C ratios, as determined by accelerator mass spectrometry, is demonstrated. Possible applications include microdosing studies in drug development, individualized subtherapeutic tests of drug metabolism, carbon dating and real-time monitoring of atmospheric radiocarbon. The method can also be applied to detection of other trace entities.
NASA Astrophysics Data System (ADS)
Nouizi, F.; Erkol, H.; Luk, A.; Marks, M.; Unlu, M. B.; Gulsen, G.
2016-10-01
We previously introduced photo-magnetic imaging (PMI), an imaging technique that illuminates the medium under investigation with near-infrared light and measures the induced temperature increase using magnetic resonance thermometry (MRT). Using a multiphysics solver combining photon migration and heat diffusion, PMI models the spatiotemporal distribution of temperature variation and recovers high-resolution optical absorption images from these temperature maps. In this paper, we present a new fast non-iterative reconstruction algorithm for PMI. This new algorithm uses analytic methods during the resolution of the forward problem and the assembly of the sensitivity matrix. We validate our new analytic-based algorithm against the first-generation finite element method (FEM) based reconstruction algorithm previously developed by our team. The validation is performed first using synthetic data and afterwards using real MRT-measured temperature maps. Our new method accelerates the reconstruction process 30-fold when compared to a single iteration of the FEM-based algorithm.
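The non-iterative character of such a reconstruction can be illustrated with a minimal sketch: once the forward problem is linearized as dT = J·μa with an analytically assembled sensitivity matrix J, the absorption map follows from a single regularized solve. The sizes, the random placeholder J, and the Tikhonov weight below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Minimal sketch of a one-shot (non-iterative) linear reconstruction.
# Assumes the forward model is linearized as dT = J @ mu_a, with J a
# sensitivity matrix assembled analytically (random placeholder here).
rng = np.random.default_rng(0)
n_meas, n_vox = 1200, 400                 # hypothetical problem sizes
J = rng.normal(size=(n_meas, n_vox))      # placeholder analytic Jacobian
mu_true = np.zeros(n_vox)
mu_true[180:220] = 0.02                   # synthetic absorber
dT = J @ mu_true + 1e-3 * rng.normal(size=n_meas)  # noisy temperature data

lam = 1e-2                                # Tikhonov regularization weight
mu_rec = np.linalg.solve(J.T @ J + lam * np.eye(n_vox), J.T @ dT)
print(float(np.abs(mu_rec - mu_true).max()))
```

An FEM-based iterative scheme, by contrast, would re-solve the coupled photon-heat forward problem at every iteration, which is where the reported 30-fold speedup per iteration comes from.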
Gravett, M R; Hopkins, F B; Self, A J; Webb, A J; Timperley, C M; Riches, J R
2014-08-01
In the event of alleged use of organophosphorus nerve agents, all kinds of environmental samples can be received for analysis. These might include decontaminated and charred matter collected from the site of a suspected chemical attack. In other scenarios, such matter might be sampled to confirm the site of a chemical weapon test or clandestine laboratory decontaminated and burned to prevent discovery. To provide an analytical capability for these contingencies, we present a preliminary investigation of the effect of accelerant-based fire and liquid decontamination on soil contaminated with the nerve agent O-ethyl S-2-diisopropylaminoethyl methylphosphonothiolate (VX). The objectives were (a) to determine if VX or its degradation products were detectable in soil after an accelerant-based fire promoted by aviation fuel, including following decontamination with Decontamination Solution 2 (DS2) or aqueous sodium hypochlorite, (b) to develop analytical methods to support forensic analysis of accelerant-soaked, decontaminated and charred soil and (c) to inform the design of future experiments of this type to improve analytical fidelity. Our results show for the first time that modern analytical techniques can be used to identify residual VX and its degradation products in contaminated soil after an accelerant-based fire and after chemical decontamination and then fire. Comparison of the gas chromatography-mass spectrometry (GC-MS) profiles of VX and its impurities/degradation products from contaminated burnt soil, and burnt soil spiked with VX, indicated that the fire resulted in the production of diethyl methylphosphonate and O,S-diethyl methylphosphonothiolate (by an unknown mechanism). Other products identified were indicative of chemical decontamination, and some of these provided evidence of the decontaminant used, for example, ethyl 2-methoxyethyl methylphosphonate and bis(2-methoxyethyl) methylphosphonate following decontamination with DS2. Sample preparation procedures and analytical methods suitable for investigating accelerant and decontaminant-soaked soil samples are presented. VX and its degradation products and/or impurities were detected under all the conditions studied, demonstrating that accelerant-based fire and liquid-based decontamination and then fire are unlikely to prevent the retrieval of evidence of chemical warfare agent (CWA) testing. This is the first published study of the effects of an accelerant-based fire on a CWA in environmental samples. The results will inform defence and security-based organisations worldwide and support the verification activities of the Organisation for the Prohibition of Chemical Weapons (OPCW), winner of the 2013 Nobel Peace Prize for its extensive efforts to eliminate chemical weapons.
Accelerating MR Parameter Mapping Using Sparsity-Promoting Regularization in Parametric Dimension
Velikina, Julia V.; Alexander, Andrew L.; Samsonov, Alexey
2013-01-01
MR parameter mapping requires sampling along an additional (parametric) dimension, which often limits its clinical appeal due to a several-fold increase in scan times compared to conventional anatomic imaging. Data undersampling combined with parallel imaging is an attractive way to reduce scan time in such applications. However, the inherent SNR penalties of parallel MRI due to noise amplification often limit its utility even at moderate acceleration factors, requiring regularization by prior knowledge. In this work, we propose a novel regularization strategy that utilizes the smoothness of signal evolution in the parametric dimension within a compressed sensing framework (p-CS) to provide accurate and precise estimation of parametric maps from undersampled data. The performance of the method was demonstrated with variable flip angle T1 mapping and compared favorably to two representative reconstruction approaches, image-space-based total variation regularization and an analytical model-based reconstruction. The proposed p-CS regularization was found to provide efficient suppression of noise amplification and preservation of parameter mapping accuracy without explicit utilization of analytical signal models. The developed method may facilitate acceleration of quantitative MRI techniques that are not amenable to model-based reconstruction because of complex signal models or when signal deviations from the expected analytical model exist. PMID:23213053
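A minimal sketch of the underlying idea, penalizing non-smooth signal evolution along the parametric dimension, is given below. It uses a smoothed (Huber-type) surrogate for the l1 penalty and plain gradient descent for brevity; the paper's actual p-CS solver and operators are not reproduced, and all sizes are toy values.

```python
import numpy as np

def huber_grad(z, eps=1e-2):
    # Gradient of a smooth surrogate for |z| (clipped linear ramp).
    return np.clip(z / eps, -1.0, 1.0)

def reconstruct_pcs(A, y, D, lam=0.05, step=1e-2, n_iter=1000):
    """Minimize ||Ax - y||^2 + lam * sum huber(Dx): a smoothed stand-in
    for l1-regularized recovery that promotes sparse finite differences
    along the parametric dimension."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = 2 * A.T @ (A @ x - y) + lam * D.T @ huber_grad(D @ x)
        x -= step * grad
    return x

# Toy use: an 8-point smooth parametric signal, undersampled linearly.
rng = np.random.default_rng(1)
n = 8
x_true = np.linspace(1.0, 0.3, n)   # smooth signal evolution (e.g. decay)
A = rng.normal(size=(5, n))         # undersampled "acquisition" operator
y = A @ x_true
D = np.diff(np.eye(n), axis=0)      # first differences in parametric dim
x_hat = reconstruct_pcs(A, y, D)
```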
Berton, Paula; Lana, Nerina B; Ríos, Juan M; García-Reyes, Juan F; Altamirano, Jorgelina C
2016-01-28
Green chemistry principles for developing methodologies have gained attention in analytical chemistry in recent decades. A growing number of analytical techniques have been proposed for the determination of persistent organic pollutants in environmental and biological samples. In this light, the current review presents state-of-the-art sample preparation approaches based on green analytical principles proposed for the determination of polybrominated diphenyl ethers (PBDEs) and their metabolites (OH-PBDEs and MeO-PBDEs) in environmental and biological samples. Approaches to lower solvent consumption and accelerate extraction, such as pressurized liquid extraction, microwave-assisted extraction, and ultrasound-assisted extraction, are discussed in this review. Special attention is paid to miniaturized sample preparation methodologies and strategies proposed to reduce organic solvent consumption. Additionally, extraction techniques based on alternative solvents (surfactants, supercritical fluids, or ionic liquids) are also discussed in this work, even though these are scarcely used for the determination of PBDEs. In addition to liquid-based extraction techniques, solid-based analytical techniques are also addressed. The development of greener, faster and simpler sample preparation approaches has increased in recent years (2003-2013). Among green extraction techniques, those based on the liquid phase predominate over those based on the solid phase (71% vs. 29%, respectively). For solid samples, solvent-assisted extraction techniques are preferred for leaching of PBDEs, and liquid-phase microextraction techniques are mostly used for liquid samples. Likewise, the green characteristics of the instrumental analysis used after the extraction and clean-up steps are briefly discussed. Copyright © 2015 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Olabanji, S. O.; Ige, A. O.; Mazzoli, C.; Ceccato, D.; Ajayi, E. O. B.; De Poli, M.; Moschini, G.
2005-10-01
The accelerator-based technique of PIXE was employed for the determination of the elemental concentrations of an industrial mineral, talc. Talc is a very versatile mineral in industry, with several applications. Due to this, there is a need to know its constituents to ensure that workers are not exposed to health risks. Besides, microscopic tests on some talc samples in Nigeria confirm that they fall within the British Pharmacopoeia (BP) standard for tablet formation. However, for these samples to become a local source of raw material for pharmaceutical-grade talc, the precise elemental compositions should be established, which is the focus of this work. A proton beam produced by the 2.5 MV AN 2000 Van de Graaff accelerator at INFN, LNL, Legnaro, Padova, Italy was used for the PIXE measurements. The results, which show the concentrations of different elements in the talc samples, their health implications and metabolic roles, are presented and discussed.
Big data analytics as a service infrastructure: challenges, desired properties and solutions
NASA Astrophysics Data System (ADS)
Martín-Márquez, Manuel
2015-12-01
CERN's accelerator complex generates a very large amount of data. A large volume of heterogeneous data is constantly generated from control equipment and monitoring agents. These data must be stored and analysed. Over the decades, CERN's research and engineering teams have applied different approaches, techniques and technologies for this purpose. This situation has limited the necessary collaboration and, more relevantly, cross-domain data analytics. These two factors are essential to unlock hidden insights and correlations between the underlying processes, which enable better and more efficient day-to-day accelerator operations and more informed decisions. The proposed Big Data Analytics as a Service Infrastructure aims to: (1) integrate the existing developments; (2) centralise and standardise the complex data analytics needs of CERN's research and engineering community; (3) deliver real-time and batch data analytics and information discovery capabilities; and (4) provide transparent access and Extract, Transform and Load (ETL) mechanisms to the various mission-critical existing data repositories. This paper presents the desired objectives and properties resulting from the analysis of CERN's data analytics requirements; the main challenges, which are technological, collaborative and educational; and potential solutions.
Cicchetti, Esmeralda; Chaintreau, Alain
2009-06-01
Accelerated solvent extraction (ASE) of vanilla beans has been optimized using ethanol as a solvent. A theoretical model is proposed to account for this multistep extraction. This allows the determination, for the first time, of the total amount of analytes initially present in the beans and thus the calculation of recoveries using ASE or any other extraction technique. As a result, ASE and Soxhlet extractions have been determined to be efficient methods, whereas recoveries are modest for maceration techniques and depend on the solvent used. Because industrial extracts are obtained by many different procedures, including maceration in various solvents, authenticating vanilla extracts using quantitative ratios between the amounts of vanilla flavor constituents appears to be unreliable. When authentication techniques based on isotopic ratios are used, ASE is a valid sample preparation technique because it does not induce isotopic fractionation.
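One simple way to formalize such a multistep model (an illustrative assumption, not necessarily the authors' exact formulation) is to posit that each ASE cycle extracts a constant fraction \(p\) of the analyte remaining in the matrix, so that the cumulative amount after \(n\) cycles is

\[
m_n = M_0\left[1-(1-p)^n\right],
\]

where \(M_0\) is the total amount initially present in the beans. Fitting the measured series \(m_1, m_2, \dots\) then yields both \(p\) and \(M_0\), and the recovery of any other extraction technique follows as the ratio of its yield to \(M_0\).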
An efficient and accurate molecular alignment and docking technique using ab initio quality scoring
Füsti-Molnár, László; Merz, Kenneth M.
2008-01-01
An accurate and efficient molecular alignment technique is presented based on first-principles electronic structure calculations. This new scheme maximizes quantum similarity matrices in the relative orientation of the molecules and uses Fourier transform techniques for two purposes. First, building up the numerical representation of true ab initio electronic densities and their Coulomb potentials is accelerated by the previously described Fourier transform Coulomb method. Second, the Fourier convolution technique is applied to accelerate optimization in the translational coordinates. In order to avoid any interpolation error, the necessary analytical formulas are derived for the transformation of the ab initio wavefunctions in rotational coordinates. The results of our first implementation for a small test set are analyzed in detail and compared with published results from the literature. A new way of refining existing shape-based alignments is also proposed, using Fourier convolutions of ab initio or other approximate electron densities. This new alignment technique is generally applicable to overlap, Coulomb, kinetic energy, and other quantum similarity measures and can be extended to a genuine docking solution with ab initio scoring. PMID:18624561
Modeling of ion acceleration through drift and diffusion at interplanetary shocks
NASA Technical Reports Server (NTRS)
Decker, R. B.; Vlahos, L.
1986-01-01
A test particle simulation designed to model ion acceleration through drift and diffusion at interplanetary shocks is described. The technique consists of integrating along exact particle orbits in a system where the angle between the shock normal and mean upstream magnetic field, the level of magnetic fluctuations, and the energy of injected particles can assume a range of values. The technique makes it possible to study time-dependent shock acceleration under conditions not amenable to analytical techniques. To illustrate the capability of the numerical model, proton acceleration was considered under conditions appropriate for interplanetary shocks at 1 AU, including large-amplitude transverse magnetic fluctuations derived from power spectra of both ambient and shock-associated MHD waves.
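A standard building block for this kind of test-particle integration is the Boris pusher, which advances a charged particle through prescribed electric and magnetic fields. The sketch below, with uniform placeholder fields, shows the scheme; the actual simulation superimposes shock geometry and magnetic fluctuations on the mean field, which this toy omits.

```python
import numpy as np

Q_OVER_M = 9.58e7   # proton charge-to-mass ratio [C/kg]

def boris_push(x, v, E, B, dt):
    """One Boris step: the standard explicit integrator for the Lorentz
    force, widely used for test-particle orbit integration."""
    v_minus = v + 0.5 * dt * Q_OVER_M * E
    t = 0.5 * dt * Q_OVER_M * B
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)
    v_new = v_plus + 0.5 * dt * Q_OVER_M * E
    return x + dt * v_new, v_new

# Placeholder uniform fields (a shock study would add field jumps at the
# shock front and transverse fluctuations drawn from a power spectrum).
x, v = np.zeros(3), np.array([1e5, 0.0, 0.0])    # m, m/s
E, B = np.zeros(3), np.array([0.0, 0.0, 5e-9])   # V/m, T (~5 nT field)
for _ in range(1000):
    x, v = boris_push(x, v, E, B, dt=1e-3)
```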
On the Applications of IBA Techniques to Biological Samples Analysis: PIXE and RBS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Falcon-Gonzalez, J. M.; Bernal-Alvarado, J.; Sosa, M.
2008-08-11
Analytical techniques based on ion beams, or IBA techniques, give quantitative information on elemental concentrations in samples of a wide variety of natures. In this work, we focus on the PIXE technique, analyzing thick-target biological specimens (TTPIXE) using 3 MeV protons produced by an electrostatic accelerator. A nuclear microprobe was used, performing PIXE and RBS simultaneously in order to resolve the uncertainties produced in absolute PIXE quantification. The advantages of using both techniques and a nuclear microprobe are discussed. Quantitative results are shown to illustrate the multielemental resolution of the PIXE technique; for this, a blood standard was used.
Lin, Shan-Yang; Wang, Shun-Li
2012-04-01
The solid-state chemistry of drugs has seen growing importance in the pharmaceutical industry for the development of useful APIs (active pharmaceutical ingredients) and stable dosage forms. The stability of drugs in various solid dosage forms is an important issue because solid dosage forms are the most common pharmaceutical formulation in clinical use. In solid-state stability studies of drugs, an ideal accelerated method must not only be selected from among various complicated methods, but must also detect the formation of degradation products. In this review article, an analytical technique combining differential scanning calorimetry and Fourier-transform infrared (DSC-FTIR) microspectroscopy simulates the accelerated stability test and simultaneously detects the decomposition products in real time. The pharmaceutical dipeptides aspartame hemihydrate, lisinopril dihydrate, and enalapril maleate, either with or without Eudragit E, were used as test examples. This one-step simultaneous DSC-FTIR technique for real-time detection of diketopiperazine (DKP) directly evidenced the dehydration process and DKP formation, an impurity common in pharmaceutical dipeptides. Reports of DKP formation in various dipeptides determined by different analytical methods were collected and compiled. Although many analytical methods have been applied, the combined DSC-FTIR technique is an easy and fast analytical method which not only simulates accelerated drug stability testing but also simultaneously enables exploration of phase transformation and degradation due to thermally induced reactions. This technique offers quick and reliable interpretation. Copyright © 2012 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Olabanji, S. O.; Omobuwajo, O. R.; Ceccato, D.; Adebajo, A. C.; Buoso, M. C.; Moschini, G.
2008-05-01
Diabetes mellitus, a clinical syndrome characterized by hyperglycemia due to a deficiency of insulin, is a disease involving the endocrine pancreas that causes considerable morbidity and mortality in the world. In Nigeria, many plants, especially those implicated in herbal recipes for the treatment of diabetes, have not been screened for their elemental constituents, while information on the phytochemistry of some of them is not available. There is therefore a need to document these constituents, as some of these plants are becoming increasingly important as herbal drugs or food additives. The accelerator-based technique PIXE, using the 1.8 MeV collimated proton beam from the 2.5 MV AN 2000 Van de Graaff accelerator at INFN, LNL, Legnaro (Padova), Italy, was employed in the determination of the elemental constituents of these anti-diabetic medicinal plants. Leaves of Gardenia ternifolia, Caesalpina pulcherrima and Solemostenon monostachys, the whole plant of Momordica charantia, and the leaf and stem bark of Hunteria umbellata could be taken as vegetables, nutraceuticals, food additives and supplements in the management of diabetes. However, Hexabolus monopetalus root should be used under prescription.
Determination of Ignitable Liquids in Fire Debris: Direct Analysis by Electronic Nose
Ferreiro-González, Marta; Barbero, Gerardo F.; Palma, Miguel; Ayuso, Jesús; Álvarez, José A.; Barroso, Carmelo G.
2016-01-01
Arsonists usually use an accelerant in order to start or accelerate a fire. The most widely used analytical method to determine the presence of such accelerants consists of a pre-concentration step for the ignitable liquid residues followed by chromatographic analysis. A rapid analytical method based on headspace-mass spectrometry electronic nose (E-Nose) has been developed for the analysis of ignitable liquid residues (ILRs). The working conditions for the E-Nose analytical procedure were optimized by studying different fire debris samples. The optimized experimental variables were related to headspace generation, specifically incubation temperature and incubation time. The optimal conditions were 115 °C and 10 min for these two parameters. Chemometric tools such as hierarchical cluster analysis (HCA) and linear discriminant analysis (LDA) were applied to the MS data (45-200 m/z) to establish the most suitable spectroscopic signals for the discrimination of several ignitable liquids. The optimized method was applied to a set of fire debris samples. In order to simulate post-burn samples, several ignitable liquids (gasoline, diesel, citronella, kerosene, paraffin) were used to ignite different substrates (wood, cotton, cork, paper and paperboard). Full discrimination was obtained using discriminant analysis. The method reported here can be considered a green technique for fire debris analyses. PMID:27187407
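As a sketch of the chemometric step, the snippet below applies hierarchical clustering and LDA to a sample-by-m/z intensity matrix, mirroring the workflow described above; the data here are synthetic stand-ins, not E-Nose measurements.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Rows = fire-debris samples, columns = ion intensities for m/z 45-200.
rng = np.random.default_rng(2)
n_per_class, n_mz = 10, 156
gasoline = rng.normal(1.0, 1.0, (n_per_class, n_mz))   # synthetic class 1
diesel = rng.normal(-1.0, 1.0, (n_per_class, n_mz))    # synthetic class 2
X = np.vstack([gasoline, diesel])
y = np.array([0] * n_per_class + [1] * n_per_class)

Z = linkage(X, method="ward")                 # HCA on the MS fingerprints
clusters = fcluster(Z, t=2, criterion="maxclust")

lda = LinearDiscriminantAnalysis().fit(X, y)  # supervised discrimination
print(clusters, lda.score(X, y))
```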
Kellogg, Joshua J.; Wallace, Emily D.; Graf, Tyler N.; Oberlies, Nicholas H.; Cech, Nadja B.
2018-01-01
Metabolomics has emerged as an important analytical technique for multiple applications. The value of information obtained from metabolomics analysis depends on the degree to which the entire metabolome is present and the reliability of sample treatment to ensure reproducibility across the study. The purpose of this study was to compare methods of preparing complex botanical extract samples prior to metabolomics profiling. Two extraction methodologies, accelerated solvent extraction and a conventional solvent maceration, were compared using commercial green tea [Camellia sinensis (L.) Kuntze (Theaceae)] products as a test case. The accelerated solvent protocol was first evaluated to ascertain critical factors influencing extraction using a D-optimal experimental design study. The accelerated solvent and conventional extraction methods yielded similar metabolite profiles for the green tea samples studied. The accelerated solvent extraction yielded higher total amounts of extracted catechins, was more reproducible, and required less active bench time to prepare the samples. This study demonstrates the effectiveness of accelerated solvent as an efficient methodology for metabolomics studies. PMID:28787673
Time-dependent inertia analysis of vehicle mechanisms
NASA Astrophysics Data System (ADS)
Salmon, James Lee
Two methods for performing transient inertia analysis of vehicle hardware systems are developed in this dissertation. The analysis techniques can be used to predict the response of vehicle mechanism systems to the accelerations associated with vehicle impacts. General analytical methods for evaluating translational or rotational system dynamics are generated and evaluated for various system characteristics. The utility of the derived techniques is demonstrated by applying the generalized methods to two vehicle systems. Time-dependent accelerations measured during a vehicle-to-vehicle impact are used as input to perform a dynamic analysis of an automobile liftgate latch and outside door handle. Generalized Lagrange equations for a non-conservative system are used to formulate a second-order nonlinear differential equation defining the response of the components to the transient input. The differential equation is solved by employing the fourth-order Runge-Kutta method. The events are then analyzed using commercially available two-dimensional rigid-body dynamic analysis software. The results of the two analytical techniques are compared to experimental data generated by high-speed film analysis of tests of the two components performed on a high-G acceleration sled at Ford Motor Company.
NASA Astrophysics Data System (ADS)
Zuiani, Federico; Vasile, Massimiliano
2015-03-01
This paper presents a set of analytical formulae for the perturbed Keplerian motion of a spacecraft under the effect of a constant control acceleration. The proposed set of formulae can treat control accelerations that are fixed in either a rotating or inertial reference frame. Moreover, the contribution of the zonal harmonic is included in the analytical formulae. It will be shown that the proposed analytical theory allows for the fast computation of long, multi-revolution spirals while maintaining good accuracy. The combined effect of different perturbations and of the shadow regions due to solar eclipse is also included. Furthermore, a simplified control parameterisation is introduced to optimise thrusting patterns with two thrust arcs and two coast arcs per revolution. This simple parameterisation is shown to ensure enough flexibility to describe complex low-thrust spirals. The accuracy and speed of the proposed analytical formulae are compared against a full numerical integration with different integration schemes. An averaging technique is then proposed as an application of the analytical formulae. Finally, the paper presents an example of the design of an optimal low-thrust spiral to transfer a spacecraft from an elliptical to a circular orbit around the Earth.
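As a flavor of the kind of closed-form result involved (a textbook special case, not the paper's formulae, which also handle the zonal harmonic, eclipse arcs, and general acceleration directions), consider a constant tangential acceleration \(a_t\). The Gauss variational equation for the semi-major axis reads

\[
\frac{da}{dt}=\frac{2a^{2}v}{\mu}\,a_{t},
\]

and for a near-circular orbit (\(v\simeq\sqrt{\mu/a}\)) the orbit average becomes \(\langle\dot a\rangle \simeq 2a_t\sqrt{a^{3}/\mu}\). Integrating gives the familiar low-thrust spiral cost \(\Delta v = \left|\sqrt{\mu/a_0}-\sqrt{\mu/a_1}\right|\), i.e., the difference of the initial and final circular velocities.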
Robson, Philip M; Grant, Aaron K; Madhuranthakam, Ananth J; Lattanzi, Riccardo; Sodickson, Daniel K; McKenzie, Charles A
2008-10-01
Parallel imaging reconstructions result in spatially varying noise amplification characterized by the g-factor, precluding conventional measurements of noise from the final image. A simple Monte Carlo based method is proposed for all linear image reconstruction algorithms, which allows measurement of signal-to-noise ratio and g-factor and is demonstrated for SENSE and GRAPPA reconstructions for accelerated acquisitions that have not previously been amenable to such assessment. Only a simple "prescan" measurement of noise amplitude and correlation in the phased-array receiver, and a single accelerated image acquisition are required, allowing robust assessment of signal-to-noise ratio and g-factor. The "pseudo multiple replica" method has been rigorously validated in phantoms and in vivo, showing excellent agreement with true multiple replica and analytical methods. This method is universally applicable to the parallel imaging reconstruction techniques used in clinical applications and will allow pixel-by-pixel image noise measurements for all parallel imaging strategies, allowing quantitative comparison between arbitrary k-space trajectories, image reconstruction, or noise conditioning techniques. (c) 2008 Wiley-Liss, Inc.
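A minimal sketch of the pseudo multiple replica procedure follows: synthetic noise with the measured coil covariance is added to the acquired k-space data, each replica is passed through the same linear reconstruction, and the pixelwise standard deviation yields the noise map (SNR and g-factor maps then follow by ratio). The trivial reconstruction and sizes are placeholders.

```python
import numpy as np

def pseudo_multiple_replica(recon, kdata, noise_cov, n_rep=128, seed=0):
    """Estimate a pixelwise noise map for any linear reconstruction by
    Monte Carlo: re-reconstruct the data with injected synthetic noise
    whose covariance matches the measured coil noise ("prescan")."""
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(noise_cov)       # coil noise correlation
    n_coils, n_samp = kdata.shape
    replicas = []
    for _ in range(n_rep):
        white = (rng.normal(size=(n_coils, n_samp))
                 + 1j * rng.normal(size=(n_coils, n_samp)))
        replicas.append(recon(kdata + (L @ white) / np.sqrt(2)))
    return np.stack(replicas).std(axis=0)   # pixelwise noise estimate

# Toy usage: "reconstruction" = inverse FFT + coil magnitude sum.
recon = lambda d: np.abs(np.fft.ifft(d, axis=1)).sum(axis=0)
kdata = np.zeros((4, 64), dtype=complex)    # placeholder acquisition
noise_map = pseudo_multiple_replica(recon, kdata, np.eye(4))
```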
Melendez, Johan H.; Santaus, Tonya M.; Brinsley, Gregory; Kiang, Daniel; Mali, Buddha; Hardick, Justin; Gaydos, Charlotte A.; Geddes, Chris D.
2016-01-01
Nucleic acid-based detection of gonorrhea infections typically require a two-step process involving isolation of the nucleic acid, followed by the detection of the genomic target often involving PCR-based approaches. In an effort to improve on current detection approaches, we have developed a unique two-step microwave-accelerated approach for rapid extraction and detection of Neisseria gonorrhoeae (GC) DNA. Our approach is based on the use of highly-focused microwave radiation to rapidly lyse bacterial cells, release, and subsequently fragment microbial DNA. The DNA target is then detected by a process known as microwave-accelerated metal-enhanced fluorescence (MAMEF), an ultra-sensitive direct DNA detection analytical technique. In the present study, we show that highly focused microwaves at 2.45 GHz, using 12.3 mm gold film equilateral triangles, are able to rapidly lyse both bacteria cells and fragment DNA in a time- and microwave power-dependent manner. Detection of the extracted DNA can be performed by MAMEF, without the need for DNA amplification in less than 10 minutes total time or by other PCR-based approaches. Collectively, the use of a microwave-accelerated method for the release and detection of DNA represents a significant step forward towards the development of a point-of-care (POC) platform for detection of gonorrhea infections. PMID:27325503
Environmental exposure effects on composite materials for commercial aircraft
NASA Technical Reports Server (NTRS)
Gibbons, M. N.
1982-01-01
The database of composite material properties as affected by the environments encountered in operating conditions, both in flight and at ground terminals, is expanded. Absorbed moisture degrades the mechanical properties of graphite/epoxy laminates at elevated temperatures. Since airplane components are frequently exposed to atmospheric moisture, rain, and accumulated water, quantitative data are required to evaluate the amount of fluid absorbed under various environmental conditions and the subsequent effects on material properties. In addition, accelerated laboratory test techniques are developed that are reliably capable of predicting long-term behavior. An accelerated environmental exposure testing procedure is developed, and experimental results are correlated and compared with analytical results to establish the level of confidence for predicting composite material properties.
One-calibrant kinetic calibration for on-site water sampling with solid-phase microextraction.
Ouyang, Gangfeng; Cui, Shufen; Qin, Zhipei; Pawliszyn, Janusz
2009-07-15
The existing solid-phase microextraction (SPME) kinetic calibration technique, which uses the desorption of preloaded standards to calibrate the extraction of the analytes, requires that the physicochemical properties of the standard be similar to those of the analyte, which has limited the application of the technique. In this study, a new method, termed the one-calibrant kinetic calibration technique, which can use the desorption of a single standard to calibrate all extracted analytes, was proposed. The theoretical considerations were validated by passive water sampling in the laboratory and rapid water sampling in the field. To mimic environmental variability, such as temperature, turbulence, and analyte concentration, the flow-through system for the generation of standard aqueous polycyclic aromatic hydrocarbon (PAH) solutions was modified. The experimental results of the passive samplings in the flow-through system illustrated that the effect of the environmental variables was successfully compensated by the kinetic calibration technique, and all extracted analytes could be calibrated through the desorption of a single calibrant. On-site water sampling with rotated SPME fibers also illustrated the feasibility of the new technique for rapid on-site sampling of hydrophobic organic pollutants in water. This technique will accelerate the application of the kinetic calibration method and will also be useful for other microextraction techniques.
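The relations underlying standard SPME kinetic calibration can be summarized as follows (the one-calibrant mapping between the calibrant's and each analyte's rate constants follows the authors' derivation and is not reproduced here). Under first-order kinetics with the same rate constant \(a\) governing absorption and desorption,

\[
\frac{Q}{q_0}=e^{-at},\qquad \frac{n}{n_e}=1-e^{-at}
\;\;\Longrightarrow\;\;
n_e=\frac{n}{1-Q/q_0},
\]

where \(q_0\) is the preloaded standard amount, \(Q\) the standard remaining after sampling time \(t\), \(n\) the analyte amount extracted, and \(n_e\) the equilibrium amount, from which the sample concentration follows via the fiber-sample distribution constant.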
Torque-based optimal acceleration control for electric vehicle
NASA Astrophysics Data System (ADS)
Lu, Dongbin; Ouyang, Minggao
2014-03-01
Existing research on acceleration control mainly focuses on optimization of the velocity trajectory with respect to a criterion formulation that weights acceleration time and fuel consumption. The minimum-fuel acceleration problem in conventional vehicles has been solved by Pontryagin's maximum principle and by dynamic programming, respectively. Acceleration control with minimum energy consumption for a battery electric vehicle (EV) has not been reported. In this paper, the permanent magnet synchronous motor (PMSM) is controlled by the field-oriented control (FOC) method, and the electric drive system for the EV (including the PMSM, the inverter and the battery) is modeled, an approach favored over a detailed consumption map. An analytical algorithm is proposed to analyze the optimal acceleration control, and the optimal torque-versus-speed curve in the acceleration process is obtained. Considering the acceleration time, a penalty function is introduced to realize fast vehicle speed tracking. The optimal acceleration control is also addressed with dynamic programming (DP). This method can solve the optimal acceleration problem with a precise time constraint, but it consumes a large amount of computation time. The EV used in simulation and experiment is a four-wheel hub-motor-drive electric vehicle. The simulation and experimental results show that the required battery energy differs little between the acceleration control obtained by the analytical algorithm and that obtained by DP, and is greatly reduced compared with constant-pedal-opening acceleration. The proposed analytical and DP algorithms can minimize the energy consumption in the EV's acceleration process, and the analytical algorithm is easy to implement in real-time control.
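A minimal sketch of the DP formulation is shown below: time and speed are discretized, each stage chooses a bounded speed increment, and the stage cost is the electrical energy of that step. The vehicle parameters and the quadratic loss model are placeholders, not the paper's identified drive-system model.

```python
import numpy as np

M, R_W, DT = 1200.0, 0.3, 0.1       # mass [kg], wheel radius [m], step [s]
V_MAX, N_V, N_T = 20.0, 201, 100    # target speed [m/s], grid sizes
speeds = np.linspace(0.0, V_MAX, N_V)

def step_energy(v0, v1):
    torque = M * (v1 - v0) / DT * R_W         # wheel torque demand
    p_mech = torque * 0.5 * (v0 + v1) / R_W   # mechanical power
    p_loss = 1e-4 * torque ** 2               # placeholder copper losses
    return (p_mech + p_loss) * DT

# Forward DP over (time step, speed) states with bounded acceleration.
cost = np.full((N_T + 1, N_V), np.inf)
cost[0, 0] = 0.0
for t in range(N_T):
    for i in range(N_V):
        if not np.isfinite(cost[t, i]):
            continue
        for j in range(i, min(i + 5, N_V)):   # at most ~4 m/s^2 here
            c = cost[t, i] + step_energy(speeds[i], speeds[j])
            cost[t + 1, j] = min(cost[t + 1, j], c)

print(cost[N_T, -1])  # minimum energy [J] to reach V_MAX in N_T * DT s
```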
Wu, Qi; Yuan, Huiming; Zhang, Lihua; Zhang, Yukui
2012-06-20
With the acceleration of proteome research, increasing attention has been paid to multidimensional liquid chromatography-mass spectrometry (MDLC-MS) due to its high peak capacity and separation efficiency. Recently, much effort has been devoted to improving MDLC-based strategies, both "top-down" and "bottom-up", to enable highly sensitive qualitative and quantitative analysis of proteins and to accelerate the whole analytical procedure. Integrated platforms combining sample pretreatment, multidimensional separations and identification were also developed to achieve high-throughput and sensitive detection of proteomes, facilitating highly accurate and reproducible quantification. This review summarizes the recent advances of such techniques and their applications in qualitative and quantitative analysis of proteomes. Copyright © 2012 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Greene, William H.
1990-01-01
A study was performed focusing on the calculation of sensitivities of displacements, velocities, accelerations, and stresses in linear, structural, transient response problems. One significant goal of the study was to develop and evaluate sensitivity calculation techniques suitable for large-order finite element analyses. Accordingly, approximation vectors such as vibration mode shapes are used to reduce the dimensionality of the finite element model. Much of the research focused on the accuracy of both response quantities and sensitivities as a function of number of vectors used. Two types of sensitivity calculation techniques were developed and evaluated. The first type of technique is an overall finite difference method where the analysis is repeated for perturbed designs. The second type of technique is termed semi-analytical because it involves direct, analytical differentiation of the equations of motion with finite difference approximation of the coefficient matrices. To be computationally practical in large-order problems, the overall finite difference methods must use the approximation vectors from the original design in the analyses of the perturbed models. In several cases this fixed mode approach resulted in very poor approximations of the stress sensitivities. Almost all of the original modes were required for an accurate sensitivity and for small numbers of modes, the accuracy was extremely poor. To overcome this poor accuracy, two semi-analytical techniques were developed. The first technique accounts for the change in eigenvectors through approximate eigenvector derivatives. The second technique applies the mode acceleration method of transient analysis to the sensitivity calculations. Both result in accurate values of the stress sensitivities with a small number of modes and much lower computational costs than if the vibration modes were recalculated and then used in an overall finite difference method.
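In the notation used here, the semi-analytical approach differentiates the full equations of motion with respect to a design variable \(p\),

\[
\mathbf M\,\frac{\partial\ddot{\mathbf u}}{\partial p}
+\mathbf C\,\frac{\partial\dot{\mathbf u}}{\partial p}
+\mathbf K\,\frac{\partial\mathbf u}{\partial p}
=\frac{\partial\mathbf f}{\partial p}
-\frac{\partial\mathbf M}{\partial p}\ddot{\mathbf u}
-\frac{\partial\mathbf C}{\partial p}\dot{\mathbf u}
-\frac{\partial\mathbf K}{\partial p}\mathbf u,
\]

with the coefficient-matrix derivatives approximated by finite differences, e.g. \(\partial\mathbf K/\partial p\approx[\mathbf K(p+\Delta p)-\mathbf K(p)]/\Delta p\). Because the sensitivity system shares the operator of the original response problem, it can be solved in the same reduced basis \(\mathbf u\approx\boldsymbol\Phi\,\mathbf q\), which is where the accuracy questions about fixed versus updated modes arise.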
The evolution of cosmic-ray-mediated magnetohydrodynamic shocks: A two-fluid approach
NASA Astrophysics Data System (ADS)
Jun, Byung-Il; Clarke, David A.; Norman, Michael L.
1994-07-01
We study the shock structure and acceleration efficiency of cosmic-ray mediated Magnetohydrodynamic (MHD) shocks both analytically and numerically by using a two-fluid model. Our model includes the dynamical effect of magnetic fields and cosmic rays on a background thermal fluid. The steady state solution is derived by following the technique of Drury & Voelk (1981) and compared to numerical results. We explore the time evolution of plane-perpendicular, piston-driven shocks. From the results of analytical and numerical studies, we conclude that the mean magnetic field plays an important role in the structure and acceleration efficiency of cosmic-ray mediated MHD shocks. The acceleration of cosmic-ray particles becomes less efficient in the presence of strong magnetic pressure since the field makes the shock less compressive. This feature is more prominent at low Mach numbers than at high Mach numbers.
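For reference, in the standard two-fluid formulation (following Drury & Völk 1981) the cosmic rays enter as a massless, diffusive fluid whose pressure \(P_c\) obeys

\[
\frac{\partial P_c}{\partial t}+\mathbf u\cdot\nabla P_c
=-\gamma_c P_c\,\nabla\cdot\mathbf u
+\nabla\cdot\left(\bar\kappa\,\nabla P_c\right),
\]

while \(P_c\) adds to the total pressure gradient \(-\nabla\!\left(P_g+P_c+B^{2}/8\pi\right)\) in the momentum equation of the thermal fluid. The acceleration efficiency is then the fraction of the shock ram pressure converted into downstream \(P_c\), which the magnetic pressure reduces by lowering the shock compression.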
Enabling the High Level Synthesis of Data Analytics Accelerators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Minutoli, Marco; Castellana, Vito G.; Tumeo, Antonino
Conventional High Level Synthesis (HLS) tools mainly target compute intensive kernels typical of digital signal processing applications. We are developing techniques and architectural templates to enable HLS of data analytics applications. These applications are memory intensive, present fine-grained, unpredictable data accesses, and irregular, dynamic task parallelism. We discuss an architectural template based around a distributed controller to efficiently exploit thread level parallelism. We present a memory interface that supports parallel memory subsystems and enables implementing atomic memory operations. We introduce a dynamic task scheduling approach to efficiently execute heavily unbalanced workloads. The templates are validated by synthesizing queries from the Lehigh University Benchmark (LUBM), a well known SPARQL benchmark.
Review of online coupling of sample preparation techniques with liquid chromatography.
Pan, Jialiang; Zhang, Chengjiang; Zhang, Zhuomin; Li, Gongke
2014-03-07
Sample preparation is still considered the bottleneck of the whole analytical procedure, and efforts have been directed towards automation, improved sensitivity and accuracy, and reduced consumption of organic solvents. Development of online sample preparation (SP) techniques coupled with liquid chromatography (LC) is a promising way to achieve these goals, and it has attracted great attention. This article reviews recent advances in online SP-LC techniques. Various online SP techniques are described and summarized, including solid-phase-based extraction, membrane-assisted liquid-phase extraction, microwave-assisted extraction, ultrasonic-assisted extraction, accelerated solvent extraction and supercritical fluid extraction. In particular, the coupling approaches of online SP-LC systems and the corresponding interfaces, such as online injectors, autosamplers combined with transport units, desorption chambers and column switching, are discussed and reviewed in detail. Typical applications of the online SP-LC techniques are summarized. Finally, the problems and expected trends in this field are discussed in order to encourage the further development of online SP-LC techniques. Copyright © 2014 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Trofimov, Vyacheslav A.; Lysak, Tatiana M.
2018-04-01
We investigate both numerically and analytically the spectrum evolution of a novel type of soliton, the nonlinear chirped accelerating or decelerating soliton, during femtosecond pulse propagation in a medium containing noble-metal nanoparticles. In our consideration, we take into account one- or two-photon absorption of laser radiation by nanorods, and the time-dependent change of the nanorod aspect ratio due to melting or reshaping caused by the absorbed laser energy. The chirped solitons are formed due to the trapping of laser radiation by the nanorod reshaping fronts if a positive or negative phase-amplitude grating is induced by the laser radiation. Accelerating or decelerating chirped soliton formation is accompanied by a blue or red shift of the soliton spectrum. To support our numerical results, we derive an approximate analytical law for the evolution of the spectrum maximum intensity along the propagation coordinate, based on earlier developed approximate analytical solutions for accelerating and decelerating solitons.
Analytical tools in accelerator physics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Litvinenko, V.N.
2010-09-01
This paper is a sub-set of my lectures presented in the Accelerator Physics course (USPAS, Santa Rosa, California, January 14-25, 2008). It is based on notes I wrote during the period from 1976 to 1979 in Novosibirsk. Only a few copies (in Russian) were distributed to my colleagues at the Novosibirsk Institute of Nuclear Physics. The goal of these notes is a complete description, starting from an arbitrary reference orbit, through explicit expressions for the 4-potential and the accelerator Hamiltonian, and finishing with parameterization in action and angle variables. To a large degree they follow the logic developed in Theory of Cyclic Particle Accelerators by A. A. Kolomensky and A. N. Lebedev [Kolomensky], but go beyond the book in a number of directions. One unusual feature of these notes is the use of matrix functions and the Sylvester formula for calculating the matrices of arbitrary elements. Teaching the USPAS course motivated me to translate a significant part of my notes into English. I also included some introductory material following Classical Theory of Fields by L. D. Landau and E. M. Lifshitz [Landau]. A large number of short notes covering various techniques are placed in the Appendices.
Accelerated testing of space mechanisms
NASA Technical Reports Server (NTRS)
Murray, S. Frank; Heshmat, Hooshang
1995-01-01
This report contains a review of various existing life prediction techniques used for a wide range of space mechanisms. Life prediction techniques utilized in other, non-space fields, such as turbine engine design, are also reviewed for applicability to many space mechanism issues. The development of new concepts on how various tribological processes are involved in the life of the complex mechanisms used for space applications is examined. A 'roadmap' for the complete implementation of a tribological prediction approach for complex mechanical systems, including standard procedures for test planning, analytical models for life prediction, and experimental verification of the life prediction and accelerated testing techniques, is discussed. A plan is presented to demonstrate a method for predicting the life and/or performance of a selected space mechanism mechanical component.
Medical Applications at CERN and the ENLIGHT Network
Dosanjh, Manjit; Cirilli, Manuela; Myers, Steve; Navin, Sparsh
2016-01-01
State-of-the-art techniques derived from particle accelerators, detectors, and physics computing are routinely used in clinical practice and medical research centers: from imaging technologies to dedicated accelerators for cancer therapy and nuclear medicine, simulations, and data analytics. Principles of particle physics themselves are the foundation of a cutting edge radiotherapy technique for cancer treatment: hadron therapy. This article is an overview of the involvement of CERN, the European Organization for Nuclear Research, in medical applications, with specific focus on hadron therapy. It also presents the history, achievements, and future scientific goals of the European Network for Light Ion Hadron Therapy, whose co-ordination office is at CERN. PMID:26835422
Validation of Force Limited Vibration Testing at NASA Langley Research Center
NASA Technical Reports Server (NTRS)
Rice, Chad; Buehrle, Ralph D.
2003-01-01
Vibration tests were performed to develop and validate the force limited vibration testing capability at the NASA Langley Research Center. The force limited vibration test technique has been utilized at the Jet Propulsion Laboratory and other NASA centers to provide more realistic vibration test environments for aerospace flight hardware. In standard random vibration tests, the payload is mounted to a rigid fixture and the interface acceleration is controlled to a specified level based on a conservative estimate of the expected flight environment. In force limited vibration tests, both the acceleration and force are controlled at the mounting interface to compensate for differences between the flexible flight mounting and the rigid test fixture. This minimizes the overtest at the payload natural frequencies and results in more realistic forces being transmitted at the mounting interface. Force and acceleration response data were provided by NASA Goddard Space Flight Center for a test article that was flown in 1998 on a Black Brant sounding rocket. The measured flight interface acceleration data were used as the reference acceleration spectrum. Using this acceleration spectrum, three analytical methods were used to estimate the force limits. Standard random and force limited vibration tests were performed and the results are compared with the flight data.
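The report does not spell out which three force-limit estimation methods were used, but a widely used example is the semi-empirical method, which ties the interface force spectral density to the acceleration specification:

\[
S_{FF}(f)=C^{2}M_{0}^{2}\,S_{AA}(f),\quad f\le f_{0};
\qquad
S_{FF}(f)=C^{2}M_{0}^{2}\,S_{AA}(f)\left(\frac{f_{0}}{f}\right)^{n},\quad f>f_{0},
\]

where \(S_{AA}\) is the acceleration spectrum, \(M_0\) the payload mass, \(f_0\) the payload's fundamental resonance, \(C^2\) a dimensionless constant (commonly of order 2-5), and \(n\) a roll-off exponent (typically around 2). Whether this was among the three methods used here is an assumption; it is shown only as a representative formula.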
NASA Astrophysics Data System (ADS)
Tomarov, G. V.; Povarov, V. P.; Shipkov, A. A.; Gromov, A. F.; Kiselev, A. N.; Shepelev, S. V.; Galanin, A. V.
2015-02-01
Specific features of the development of an information-analytical system addressing flow-accelerated corrosion of pipeline elements in the secondary coolant circuit of the VVER-440-based power units at the Novovoronezh nuclear power plant are considered. The results of a statistical analysis of data on the quantity, location, and operating conditions of the elements and preinserted segments of pipelines used in the condensate-feedwater and wet-steam paths are presented. The principles of preparing and using the information-analytical system for determining the time to reach inadmissible wall thinning in elements of pipelines used in the secondary coolant circuit of the VVER-440-based power units at the Novovoronezh NPP are considered.
Nanoscale optical interferometry with incoherent light
Li, Dongfang; Feng, Jing; Pacifici, Domenico
2016-01-01
Optical interferometry has empowered an impressive variety of biosensing and medical imaging techniques. A widely held assumption is that devices based on optical interferometry require coherent light to generate a precise optical signature in response to an analyte. Here we disprove that assumption. By directly embedding light emitters into subwavelength cavities of plasmonic interferometers, we demonstrate coherent generation of surface plasmons even when light with extremely low degrees of spatial and temporal coherence is employed. This surprising finding enables novel sensor designs with cheaper and smaller light sources, and consequently increases accessibility to a variety of analytes, such as biomarkers in physiological fluids, or even airborne nanoparticles. Furthermore, these nanosensors can now be arranged along open detection surfaces, and in dense arrays, accelerating the rate of parallel target screening used in drug discovery, among other high volume and high sensitivity applications. PMID:26880171
NASA Astrophysics Data System (ADS)
Sakai, Hirotaka; Urakawa, Fumihiro; Aikawa, Akira; Namura, Akira
The vibration of concrete sleepers is an important factor in track deterioration. In this paper, we created a detailed three-dimensional finite element model of a prestressed concrete (PC) sleeper, representing the influence of the ballast layers with a series of 3D springs and dampers to reproduce their vibration and dynamic characteristics. These parameters were determined from experimental modal analysis of PC sleepers using an impact excitation technique, by matching the accelerance of the analytical results to the experimental results. Furthermore, we compared these characteristics between normal sleepers and sleepers with structural modifications. The analytical results clarified that measures such as widening or thickening the sleeper reduce ballasted track vibration and thus improve PC sleeper performance.
Neylon, J; Min, Y; Kupelian, P; Low, D A; Santhanam, A
2017-04-01
In this paper, a multi-GPU cloud-based server (MGCS) framework is presented for dose calculations, exploring the feasibility of remote computing power for parallelization and acceleration of computationally intensive and time-consuming radiotherapy tasks in moving toward online adaptive therapies. An analytical model was developed to estimate theoretical MGCS performance acceleration and intelligently determine workload distribution. Numerical studies were performed with a computing setup of 14 GPUs distributed over 4 servers interconnected by a 1 gigabit per second (Gbps) network. Inter-process communication methods were optimized to facilitate resource distribution and minimize data transfers over the server interconnect. The analytically predicted computation times matched experimental observations within 1-5%. MGCS performance approached a theoretical limit of acceleration proportional to the number of GPUs utilized when computational tasks far outweighed memory operations. The MGCS implementation reproduced ground-truth dose computations with negligible differences by distributing the work among several processes and implementing optimization strategies. The results showed that a cloud-based computation engine is a feasible solution for enabling clinics to make use of fast dose calculations for advanced treatment planning and adaptive radiotherapy. The cloud-based system was able to exceed the performance of a local machine even for optimized calculations, and provided significant acceleration for computationally intensive tasks. Such a framework can provide access to advanced technology and computational methods to many clinics, providing an avenue for standardization across institutions without the requirements of purchasing, maintaining, and continually updating hardware.
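The paper's analytical performance model is not reproduced in the abstract, so the sketch below uses a generic Amdahl-style estimate with a per-GPU communication cost; the parallel fraction, single-GPU runtime, and transfer cost are assumed placeholder values, not the authors' parameters.

```python
# A generic sketch (not the paper's model) of an analytical speedup estimate
# for distributing work over N GPUs: Amdahl-style scaling plus a linear
# communication cost over the server interconnect.
def speedup(n_gpus, parallel_frac=0.98, comm_s=0.2, serial_s=30.0):
    # assumed values: 98% parallelizable work, 30 s single-GPU runtime,
    # 0.2 s of interconnect transfer per additional GPU
    compute = serial_s * (1.0 - parallel_frac) + serial_s * parallel_frac / n_gpus
    return serial_s / (compute + comm_s * (n_gpus - 1))

for n in (1, 4, 8, 14):
    print(n, round(speedup(n), 2))   # speedup saturates as communication grows
```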
NASA Astrophysics Data System (ADS)
Bosch, Carl; Degirmenci, Soysal; Barlow, Jason; Mesika, Assaf; Politte, David G.; O'Sullivan, Joseph A.
2016-05-01
X-ray computed tomography reconstruction for medical, security and industrial applications has evolved through 40 years of experience with rotating gantry scanners using analytic reconstruction techniques such as filtered back projection (FBP). In parallel, research into statistical iterative reconstruction algorithms has evolved to apply to sparse-view scanners in nuclear medicine, low data rate scanners in Positron Emission Tomography (PET) [5, 7, 10] and, more recently, to reduce exposure to ionizing radiation in conventional X-ray CT scanners. Multiple approaches to statistical iterative reconstruction have been developed, based primarily on variations of expectation maximization (EM) algorithms. The primary benefit of EM algorithms is the guarantee of convergence that is maintained when iterative corrections are made within the limits of convergent algorithms. The primary disadvantage, however, is that strict adherence to the correction limits of convergent algorithms extends the number of iterations and the ultimate timeline to complete a 3D volumetric reconstruction. Researchers have studied methods to accelerate convergence through more aggressive corrections [1], ordered subsets [1, 3, 4, 9] and spatially variant image updates. In this paper we describe the development of an AM reconstruction algorithm with accelerated convergence for use in a real-time explosive detection application for aviation security. By judiciously applying multiple acceleration techniques and advanced GPU processing architectures, we are able to perform 3D reconstruction of scanned passenger baggage at a rate of 75 slices per second. Analysis of the results on stream-of-commerce passenger bags demonstrates accelerated convergence by factors of 8 to 15 when comparing images from accelerated and strictly convergent algorithms.
Historical review of missile aerodynamic developments
NASA Technical Reports Server (NTRS)
Spearman, M. Leroy
1989-01-01
The development of missiles from early history up to about 1970 is discussed. Early unpowered missiles beyond the rock include the spear, the bow and arrow, the gun and bullet, and the cannon and projectile. Combining gunpowder with projectiles resulted in the first powered missiles. In the early 1900's, the development of guided missiles was begun. Significant advances in missile technology were made by German scientists during World War II. The dispersion of these advances to other countries following the war resulted in accelerating the development of guided missiles. In the late 1940's and early 1950's there was a proliferation in the development of missile systems in many countries. These developments were based primarily on experimental work and on relatively crude analytical techniques. Discussed here are some of the missile systems that were developed up to about 1970; some of the problems encountered; the development of an experimental data base for use with missiles; and early efforts to develop analytical methods applicable to missiles.
The analysis of cable forces based on natural frequency
NASA Astrophysics Data System (ADS)
Suangga, Made; Hidayat, Irpan; Juliastuti; Bontan, Darwin Julius
2017-12-01
A cable is a flexible structural member that is effective at resisting tensile forces. Cables are used in a variety of structures that exploit their unique characteristics to create efficient tension members. The state of the cable forces in a cable-supported structure is an important indication of whether the structure is in good condition. Several methods have been developed to measure cable forces on site. The vibration technique, which uses the correlation between natural frequency and cable force, is a simple method to determine in situ cable forces; however, it needs accurate information on the boundary conditions, cable mass, and cable length. The natural frequency of the cable is determined by applying the FFT (Fast Fourier Transform) to the acceleration record of the cable. Based on the natural frequency obtained, the cable forces can then be determined analytically or with a finite element program. This research focuses on the vibration technique for determining cable forces, on understanding the effect of the cable's physical parameters, and on modelling techniques relating the natural frequency to the cable force.
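For a taut cable pinned at both ends, the n-th natural frequency obeys f_n = (n/2L)sqrt(T/m), so the tension follows from the fundamental as T = 4 m L^2 f_1^2. Below is a minimal sketch of the workflow just described, under that taut-string assumption; the synthetic record, the simple peak picking, and all numbers are placeholders for illustration.

```python
# A minimal sketch (assumed, not from the paper): estimate cable tension from
# the fundamental natural frequency, picked from the FFT of an acceleration
# record, via the taut-string formula T = 4*m*L^2*f1^2.
import numpy as np

def cable_tension(accel, fs, length_m, mass_per_m):
    """Estimate tension (N) of a taut cable from an acceleration time series."""
    n = len(accel)
    spectrum = np.abs(np.fft.rfft(accel - np.mean(accel)))
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    f1 = freqs[np.argmax(spectrum[1:]) + 1]   # dominant peak, skipping DC;
    # in practice the dominant peak may be a higher mode, so mode
    # identification is needed before applying the fundamental formula
    return 4.0 * mass_per_m * length_m**2 * f1**2

# Example with a synthetic 2.0 Hz response sampled at 100 Hz:
fs = 100.0
t = np.arange(0, 60, 1.0 / fs)
accel = np.sin(2 * np.pi * 2.0 * t) + 0.1 * np.random.randn(t.size)
print(cable_tension(accel, fs, length_m=50.0, mass_per_m=40.0))  # tension in N
```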
Theory of unfolded cyclotron accelerator
NASA Astrophysics Data System (ADS)
Rax, J.-M.; Robiche, J.
2010-10-01
An acceleration process based on the interaction between an ion, a tapered periodic magnetic structure, and a circularly polarized oscillating electric field is identified and analyzed, and its potential is evaluated. A Hamiltonian analysis is developed in order to describe the interplay between the cyclotron motion, the electric acceleration, and the magnetic modulation. The parameters of this universal class of magnetic modulation leading to continuous acceleration without Larmor radius increase are expressed analytically. Thus, this study provides the basic scaling of what appears as a compact unfolded cyclotron accelerator.
NASA Astrophysics Data System (ADS)
Huang, Bo; Hsieh, Chen-Yu; Golnaraghi, Farid; Moallem, Mehrdad
2015-11-01
In this paper a vehicle suspension system with energy harvesting capability is developed, and an analytical methodology for the optimal design of the system is proposed. The optimization technique provides design guidelines for determining the stiffness and damping coefficients aimed at optimal performance in terms of ride comfort and energy regeneration. The corresponding performance metrics are the root-mean-square (RMS) of the sprung mass acceleration and the expectation of generated power. The actual road roughness is considered as the stochastic excitation defined by the ISO 8608:1995 standard road profiles and used in deriving the optimization method. An electronic circuit is proposed to provide variable damping in real time based on the optimization rule. A test-bed is utilized and experiments under different driving conditions are conducted to verify the effectiveness of the proposed method. The test results suggest that the analytical approach is credible in determining the optimality of system performance.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lumpkin, A. H.; Rule, D. W.; Downer, M. C.
We report initial considerations of using linearly polarized optical transition radiation (OTR) to characterize the electron beams of laser plasma accelerators (LPAs) such as those at the Univ. of Texas at Austin. The two LPAs operate at 100 MeV and 2 GeV, and they currently have estimated normalized emittances in the ~1 mm mrad regime with beam divergences less than 1/γ and beam sizes to be determined at the micron level. Analytical modeling results indicate the feasibility of using these OTR techniques for the LPA applications.
Doran, Kara S.; Howd, Peter A.; Sallenger, Asbury H.
2016-01-04
Recent studies, and most of their predecessors, use tide gage data to quantify sea level (SL) acceleration, ASL(t). In the current study, three techniques were used to calculate acceleration from tide gage data, and of those examined, it was determined that the two techniques based on sliding a regression window through the time series are more robust than the technique that fits a single quadratic form to the entire time series, particularly if there is temporal variation in the magnitude of the acceleration. The single-fit quadratic regression method has been the most commonly used technique for determining acceleration in tide gage data. The inability of the single-fit method to account for time-varying acceleration may explain some of the inconsistent findings between investigators. Properly quantifying ASL(t) from field measurements is of particular importance in evaluating numerical models of past, present, and future sea level rise (SLR) resulting from anticipated climate change.
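A hedged sketch of the sliding-window idea follows: a quadratic is fit inside each window and twice its leading coefficient is reported as the local acceleration, so the estimate can vary in time, unlike a single quadratic fit to the whole record. The window length and minimum sample count are assumed values.

```python
# Sliding-window quadratic regression for time-varying acceleration (sketch).
import numpy as np

def sliding_acceleration(t_years, sl_mm, window_years=30.0):
    """Return local acceleration (mm/yr^2) at each time, NaN where data are sparse."""
    accel = np.full(t_years.size, np.nan)
    half = window_years / 2.0
    for i, tc in enumerate(t_years):
        mask = np.abs(t_years - tc) <= half
        if mask.sum() > 10:                       # assumed minimum sample count
            c2, c1, c0 = np.polyfit(t_years[mask] - tc, sl_mm[mask], 2)
            accel[i] = 2.0 * c2                   # second derivative of the fit
    return accel

# Contrast: the single-fit method reports one number for the whole record,
#   c2, c1, c0 = np.polyfit(t_years, sl_mm, 2); accel_const = 2.0 * c2
```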
Synergia: an accelerator modeling tool with 3-D space charge
DOE Office of Scientific and Technical Information (OSTI.GOV)
Amundson, James F.; Spentzouris, P.; /Fermilab
2004-07-01
High precision modeling of space-charge effects, together with accurate treatment of single-particle dynamics, is essential for designing future accelerators as well as optimizing the performance of existing machines. We describe Synergia, a high-fidelity parallel beam dynamics simulation package with fully three dimensional space-charge capabilities and a higher order optics implementation. We describe the computational techniques, the advanced human interface, and the parallel performance obtained using large numbers of macroparticles. We also perform code benchmarks comparing to semi-analytic results and other codes. Finally, we present initial results on particle tune spread, beam halo creation, and emittance growth in the Fermilab booster accelerator.
Emittance preservation in plasma-based accelerators with ion motion
Benedetti, C.; Schroeder, C. B.; Esarey, E.; ...
2017-11-01
In a plasma-accelerator-based linear collider, the density of matched, low-emittance, high-energy particle bunches required for collider applications can be orders of magnitude above the background ion density, leading to ion motion, perturbation of the focusing fields, and, hence, to beam emittance growth. By analyzing the response of the background ions to an ultrahigh density beam, analytical expressions, valid for nonrelativistic ion motion, are derived for the transverse wakefield and for the final (i.e., after saturation) bunch emittance. Analytical results are validated against numerical modeling. Initial beam distributions are derived that are equilibrium solutions, which require head-to-tail bunch shaping, enabling emittance preservation with ion motion.
On the Use of Accelerated Test Methods for Characterization of Advanced Composite Materials
NASA Technical Reports Server (NTRS)
Gates, Thomas S.
2003-01-01
A rational approach to the problem of accelerated testing for material characterization of advanced polymer matrix composites is discussed. The experimental and analytical methods provided should be viewed as a set of tools useful in the screening of material systems for long-term engineering properties in aerospace applications. Consideration is given to long-term exposure in extreme environments that include elevated temperature, reduced temperature, moisture, oxygen, and mechanical load. Analytical formulations useful for predictive models that are based on the principles of time-based superposition are presented. The need for reproducible mechanisms, indicator properties, and real-time data is outlined, as well as the methodologies for determining specific aging mechanisms.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keck, B D; Ognibene, T; Vogel, J S
2010-02-05
Accelerator mass spectrometry (AMS) is an isotope-based measurement technology that utilizes carbon-14 labeled compounds in the pharmaceutical development process to measure compounds at very low concentrations, empowers microdosing as an investigational tool, and extends the utility of ¹⁴C labeled compounds to dramatically lower levels. It is a form of isotope ratio mass spectrometry that can provide either measurements of total compound equivalents or, when coupled to separation technology such as chromatography, quantitation of specific compounds. The properties of AMS as a measurement technique are investigated here, and the parameters of method validation are shown. AMS, independent of any separation technique to which it may be coupled, is shown to be accurate, linear, precise, and robust. As the sensitivity and universality of AMS is constantly being explored and expanded, this work underpins many areas of pharmaceutical development, including drug metabolism as well as absorption, distribution and excretion of pharmaceutical compounds, as a fundamental step in drug development. The validation parameters for pharmaceutical analyses were examined for the accelerator mass spectrometry measurement of the ¹⁴C/C ratio, independent of chemical separation procedures. The isotope ratio measurement was specific (owing to the ¹⁴C label), stable across sample storage conditions for at least one year, and linear over 4 orders of magnitude, with an analytical range from one tenth Modern to at least 2000 Modern (instrument specific). Further, accuracy was excellent, between 1 and 3 percent, while precision expressed as coefficient of variation was between 1 and 6%, determined primarily by radiocarbon content and the time spent analyzing a sample. Sensitivity, expressed as LOD and LLOQ, was 1 and 10 attomoles of carbon-14 (which can be expressed as compound equivalents); for a typical small molecule labeled at 10% incorporation with ¹⁴C this corresponds to 30 fg equivalents. AMS provides a sensitive, accurate and precise method of measuring drug compounds in biological matrices.
A New Paradigm for Flare Particle Acceleration
NASA Astrophysics Data System (ADS)
Guidoni, Silvina E.; Karpen, Judith T.; DeVore, C. Richard
2017-08-01
The mechanism that accelerates particles to the energies required to produce the observed high-energy impulsive emission and its spectra in solar flares is not well understood. Here, we propose a first-principle-based model of particle acceleration that produces energy spectra that closely resemble those derived from hard X-ray observations. Our mechanism uses contracting magnetic islands formed during fast reconnection in solar flares to accelerate electrons, as first proposed by Drake et al. (2006) for kinetic-scale plasmoids. We apply these ideas to MHD-scale islands formed during fast reconnection in a simulated eruptive flare. A simple analytic model based on the particles’ adiabatic invariants is used to calculate the energy gain of particles orbiting field lines in our ultrahigh-resolution, 2.5D, MHD numerical simulation of a solar eruption (flare + coronal mass ejection). Then, we analytically model electrons visiting multiple contracting islands to account for the observed high-energy flare emission. Our acceleration mechanism inherently produces sporadic emission because island formation is intermittent. Moreover, a large number of particles could be accelerated in each macroscopic island, which may explain the inferred rates of energetic-electron production in flares. We conclude that island contraction in the flare current sheet is a promising candidate for electron acceleration in solar eruptions. This work was supported in part by the NASA LWS and H-SR programs.
Analytical impact time and angle guidance via time-varying sliding mode technique.
Zhao, Yao; Sheng, Yongzhi; Liu, Xiangdong
2016-05-01
To provide a feasible solution for homing missiles with precise impact time and angle requirements, this paper develops a novel guidance law based on the nonlinear engagement dynamics. The guidance law is first designed under the assumption of a stationary target, and then extended to the moving target scenario. The time-varying sliding mode (TVSM) technique is applied to fulfill the terminal constraints, in which a specific TVSM surface is constructed with two unknown coefficients. One is tuned to meet the impact time requirement and the other is targeted with a global sliding mode, so that the impact angle constraint as well as zero miss distance can be satisfied. Because the proposed law possesses three guidance gains as design parameters, the intercept trajectory can be shaped according to the operational conditions and the missile's capability. To improve the tolerance of initial heading errors and broaden the applicability, a new frame of reference is also introduced. Furthermore, analytical solutions of the flight trajectory, heading angle and acceleration command can be expressed in closed form for prediction and offline parameter selection by solving a first-order linear differential equation. Numerical simulation results for various scenarios validate the effectiveness of the proposed guidance law and demonstrate the accuracy of the analytic solutions.
Green's function methods in heavy ion shielding
NASA Technical Reports Server (NTRS)
Wilson, John W.; Costen, Robert C.; Shinn, Judy L.; Badavi, Francis F.
1993-01-01
An analytic solution to the heavy ion transport in terms of Green's function is used to generate a highly efficient computer code for space applications. The efficiency of the computer code is accomplished by a nonperturbative technique extending Green's function over the solution domain. The computer code can also be applied to accelerator boundary conditions to allow code validation in laboratory experiments.
NASA Astrophysics Data System (ADS)
Sarni, W.
2017-12-01
Water scarcity and poor water quality impact economic development, business growth, and social well-being. Water has become, in our generation, the most critical local, regional, and global issue of our time. Despite these needs, there is no water hub or water technology accelerator solely dedicated to water data and tools. The public and private sectors need vastly improved data management and visualization tools. This is the WetDATA opportunity: to develop a water data tech hub dedicated to water data acquisition, analytics, and visualization tools for informed policy and business decisions. WetDATA's tools will help incubate disruptive water data technologies and accelerate adoption of current water data solutions. WetDATA is a Colorado-based (501c3), global hub for water data analytics and technology innovation. WetDATA's vision is to be a global leader in water information and data technology innovation and to collaborate with other US and global water technology hubs. ROADMAP: * Portal (www.wetdata.org) to provide stakeholders with tools/resources to understand related water risks. * The initial activities will provide education, awareness and tools to stakeholders to support the implementation of the Colorado State Water Plan. * Leverage the Western States Water Council Water Data Exchange database. * Development of visualization, predictive analytics and AI tools to engage with stakeholders and provide actionable data and information. TOOLS: Education: Provide information on water issues and risks at the local, state, national and global scale. Visualizations: Development of data analytics and visualization tools based upon the 2030 Water Resources Group methodology to support the implementation of the Colorado State Water Plan. Predictive Analytics: Accessing publicly available water databases and using machine learning to develop water availability forecasting tools, and time-lapse images to support city/urban planning.
Model-independent particle accelerator tuning
Scheinker, Alexander; Pang, Xiaoying; Rybarcyk, Larry
2013-10-21
We present a new model-independent dynamic feedback technique, rotation rate tuning, for automatically and simultaneously tuning coupled components of uncertain, complex systems. The main advantages of the method are: 1) It has the ability to handle unknown, time-varying systems, 2) It gives known bounds on parameter update rates, 3) We give an analytic proof of its convergence and its stability, and 4) It has a simple digital implementation through a control system such as the Experimental Physics and Industrial Control System (EPICS). Because this technique is model independent it may be useful as a real-time, in-hardware, feedback-based optimization scheme for uncertain and time-varying systems. In particular, it is robust enough to handle uncertainty due to coupling, thermal cycling, misalignments, and manufacturing imperfections. As a result, it may be used as a fine-tuning supplement for existing accelerator tuning/control schemes. We present multi-particle simulation results demonstrating the scheme’s ability to simultaneously adaptively adjust the set points of twenty two quadrupole magnets and two RF buncher cavities in the Los Alamos Neutron Science Center Linear Accelerator’s transport region, while the beam properties and RF phase shift are continuously varying. The tuning is based only on beam current readings, without knowledge of particle dynamics. We also present an outline of how to implement this general scheme in software for optimization, and in hardware for feedback-based control/tuning, for a wide range of systems.
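The update law below is a generic sinusoidal-dither sketch in the spirit of model-independent tuning, not necessarily the published rotation rate scheme: each parameter is perturbed at its own frequency and drifts toward settings that reduce a scalar cost read from the machine. The gains, frequencies, and the synthetic cost are assumed values.

```python
# Generic dither-based, model-independent tuning (sketch). Only a scalar
# cost reading is needed; no model of the system is used.
import numpy as np

def tune(cost, x0, n_steps=20000, dt=1e-3, k=5.0, alpha=1.0):
    x = np.array(x0, float)
    omegas = 100.0 * (1.0 + np.arange(x.size))     # distinct dither frequencies
    for step in range(n_steps):
        t = step * dt
        c = cost(x)                                 # measured scalar cost
        # parameters oscillate; the phase coupling to the cost produces,
        # on average, a drift down the cost gradient
        x += dt * np.sqrt(alpha * omegas) * np.cos(omegas * t + k * c)
    return x

# Example: recover an unknown set point from cost readings alone; the result
# matches the target up to the residual dither amplitude (~0.1 here).
target = np.array([0.3, -0.7, 1.2])
print(tune(lambda x: np.sum((x - target) ** 2), x0=np.zeros(3)))
```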
NASA Technical Reports Server (NTRS)
Barnett, Alan R.; Widrick, Timothy W.; Ludwiczak, Damian R.
1996-01-01
Solving for dynamic responses of free-free launch vehicle/spacecraft systems acted upon by buffeting winds is commonly performed throughout the aerospace industry. Due to the unpredictable nature of this wind loading event, these problems are typically solved using frequency response random analysis techniques. To generate dynamic responses for spacecraft with statically-indeterminate interfaces, spacecraft contractors prefer to develop models which have response transformation matrices developed for mode acceleration data recovery. This method transforms spacecraft boundary accelerations and displacements into internal responses. Unfortunately, standard MSC/NASTRAN modal frequency response solution sequences cannot be used to combine acceleration- and displacement-dependent responses required for spacecraft mode acceleration data recovery. External user-written computer codes can be used with MSC/NASTRAN output to perform such combinations, but these methods can be labor and computer resource intensive. Taking advantage of the analytical and computer resource efficiencies inherent within MSC/NASTRAN, a DMAP Alter has been developed to combine acceleration- and displacement-dependent modal frequency responses for performing spacecraft mode acceleration data recovery. The Alter has been used successfully to efficiently solve a common aerospace buffeting wind analysis.
Exact method for numerically analyzing a model of local denaturation in superhelically stressed DNA
NASA Astrophysics Data System (ADS)
Fye, Richard M.; Benham, Craig J.
1999-03-01
Local denaturation, the separation at specific sites of the two strands comprising the DNA double helix, is one of the most fundamental processes in biology, required to allow the base sequence to be read both in DNA transcription and in replication. In living organisms this process can be mediated by enzymes which regulate the amount of superhelical stress imposed on the DNA. We present a numerically exact technique for analyzing a model of denaturation in superhelically stressed DNA. This approach is capable of predicting the locations and extents of transition in circular superhelical DNA molecules of kilobase lengths and specified base pair sequences. It can also be used for closed loops of DNA which are typically found in vivo to be kilobases long. The analytic method consists of an integration over the DNA twist degrees of freedom followed by the introduction of auxiliary variables to decouple the remaining degrees of freedom, which allows the use of the transfer matrix method. The algorithm implementing our technique requires O(N²) operations and O(N) memory to analyze a DNA domain containing N base pairs. However, to analyze kilobase length DNA molecules it must be implemented in high precision floating point arithmetic. An accelerated algorithm is constructed by imposing an upper bound M on the number of base pairs that can simultaneously denature in a state. This accelerated algorithm requires O(MN) operations, and has an analytically bounded error. Sample calculations show that it achieves high accuracy (greater than 15 decimal digits) with relatively small values of M (M < 0.05N) for kilobase length molecules under physiologically relevant conditions. Calculations are performed on the superhelical pBR322 DNA sequence to test the accuracy of the method. With no free parameters in the model, the locations and extents of local denaturation predicted by this analysis are in quantitatively precise agreement with in vitro experimental measurements. Calculations performed on the fructose-1,6-bisphosphatase gene sequence from yeast show that this approach can also accurately treat in vivo denaturation.
Tannamala, Pavan Kumar; Azhagarasan, Nagarasampatti Sivaprakasam; Shankar, K Chitra
2013-01-01
Conventional casting techniques following the manufacturers' recommendations are time consuming. Accelerated casting techniques have been reported, but their accuracy with base metal alloys has not been adequately studied. We measured the vertical marginal gap of nickel-chromium copings made by conventional and accelerated casting techniques and determined the clinical acceptability of the cast copings in this study. Experimental design, in vitro study, lab settings. Ten copings each were cast by conventional and accelerated casting techniques. All copings were identical; only their mold preparation schedules differed. Microscopic measurements were recorded at ×80 magnification on the perpendicular to the axial wall at four predetermined sites. The marginal gap values were evaluated by a paired t test. The mean marginal gap by the conventional technique (34.02 μm) is approximately 10 μm less than that of the accelerated casting technique (44.62 μm). As the P value is less than 0.0001, there is a highly significant difference between the two techniques with regard to vertical marginal gap. The accelerated casting technique is time saving, and the marginal gap measured was within the clinically acceptable limits; it could be an alternative to time-consuming conventional techniques.
The accurate quantitation of proteins or peptides using Mass Spectrometry (MS) is gaining prominence in the biomedical research community as an alternative method for analyte measurement. The Clinical Proteomic Tumor Analysis Consortium (CPTAC) investigators have been at the forefront in the promotion of reproducible MS techniques, through the development and application of standardized proteomic methods for protein quantitation on biologically relevant samples.
Analytical and Numerical Solutions of Generalized Fokker-Planck Equations - Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prinja, Anil K.
The overall goal of this project was to develop advanced theoretical and numerical techniques to quantitatively describe the spreading of a collimated beam of charged particles in space, in angle, and in energy, as a result of small deflection, small energy transfer Coulomb collisions with the target nuclei and electrons. Such beams arise in several applications of great interest in nuclear engineering, and include electron and ion radiotherapy, ion beam modification of materials, accelerator transmutation of waste, and accelerator production of tritium, to name some important candidates. These applications present unique and difficult modeling challenges, but from the outset are amenable to the language of "transport theory", which is very familiar to nuclear engineers and considerably less-so to physicists and material scientists. Thus, our approach has been to adopt a fundamental description based on transport equations, but the forward peakedness associated with charged particle interactions precludes a direct application of solution methods developed for neutral particle transport. Unique problem formulations and solution techniques are necessary to describe the transport and interaction of charged particles. In particular, we have developed the Generalized Fokker-Planck (GFP) approach to describe the angular and radial spreading of a collimated beam and a renormalized transport model to describe the energy-loss straggling of an initially monoenergetic distribution. Both analytic and numerical solutions have been investigated and in particular novel finite element numerical methods have been developed. In the first phase of the project, asymptotic methods were used to develop closed form solutions to the GFP equation for different orders of expansion, and was described in a previous progress report. In this final report we present a detailed description of (i) a novel energy straggling model based on a Fokker-Planck approximation but which is adapted for a multigroup transport setting, and (ii) two unique families of discontinuous finite element schemes, one linear and the other nonlinear.
A gas-dynamical approach to radiation pressure acceleration
NASA Astrophysics Data System (ADS)
Schmidt, Peter; Boine-Frankenheim, Oliver
2016-06-01
The study of high intensity ion beams driven by high power pulsed lasers is an active field of research. Of particular interest is radiation pressure acceleration, for which simulations predict narrow-band ion energies up to GeV. We derive a laser-piston model by applying techniques from non-relativistic gas dynamics. The model reveals a laser intensity limit, below which sufficient laser-piston acceleration is impossible. The relation between target thickness and piston velocity as a function of the laser pulse length yields an approximation for the permissible target thickness. We performed one-dimensional Particle-In-Cell simulations to confirm the predictions of the analytical model. These simulations also reveal the importance of electromagnetic energy transport. We find that this energy transport limits the achievable compression and rarefies the plasma.
Big data analytics for the Future Circular Collider reliability and availability studies
NASA Astrophysics Data System (ADS)
Begy, Volodimir; Apollonio, Andrea; Gutleber, Johannes; Martin-Marquez, Manuel; Niemi, Arto; Penttinen, Jussi-Pekka; Rogova, Elena; Romero-Marin, Antonio; Sollander, Peter
2017-10-01
Responding to the European Strategy for Particle Physics update 2013, the Future Circular Collider study explores scenarios of circular frontier colliders for the post-LHC era. One branch of the study assesses industrial approaches to model and simulate the reliability and availability of the entire particle collider complex based on the continuous monitoring of CERN’s accelerator complex operation. The modelling is based on an in-depth study of the CERN injector chain and LHC, and is carried out as a cooperative effort with the HL-LHC project. The work so far has revealed that a major challenge is obtaining accelerator monitoring and operational data with sufficient quality, to automate the data quality annotation and calculation of reliability distribution functions for systems, subsystems and components where needed. A flexible data management and analytics environment that permits integrating the heterogeneous data sources, the domain-specific data quality management algorithms and the reliability modelling and simulation suite is a key enabler to complete this accelerator operation study. This paper describes the Big Data infrastructure and analytics ecosystem that has been put in operation at CERN, serving as the foundation on which reliability and availability analysis and simulations can be built. This contribution focuses on data infrastructure and data management aspects and presents case studies chosen for its validation.
NASA Astrophysics Data System (ADS)
Mahadevan, Sankaran; Neal, Kyle; Nath, Paromita; Bao, Yanqing; Cai, Guowei; Orme, Peter; Adams, Douglas; Agarwal, Vivek
2017-02-01
This research is seeking to develop a probabilistic framework for health diagnosis and prognosis of aging concrete structures in nuclear power plants that are subjected to physical, chemical, environment, and mechanical degradation. The proposed framework consists of four elements: monitoring, data analytics, uncertainty quantification, and prognosis. The current work focuses on degradation caused by ASR (alkali-silica reaction). Controlled concrete specimens with reactive aggregate are prepared to develop accelerated ASR degradation. Different monitoring techniques — infrared thermography, digital image correlation (DIC), mechanical deformation measurements, nonlinear impact resonance acoustic spectroscopy (NIRAS), and vibro-acoustic modulation (VAM) — are studied for ASR diagnosis of the specimens. Both DIC and mechanical measurements record the specimen deformation caused by ASR gel expansion. Thermography is used to compare the thermal response of pristine and damaged concrete specimens and generate a 2-D map of the damage (i.e., ASR gel and cracked area), thus facilitating localization and quantification of damage. NIRAS and VAM are two separate vibration-based techniques that detect nonlinear changes in dynamic properties caused by the damage. The diagnosis results from multiple techniques are then fused using a Bayesian network, which also helps to quantify the uncertainty in the diagnosis. Prognosis of ASR degradation is then performed based on the current state of degradation obtained from diagnosis, by using a coupled thermo-hydro-mechanical-chemical (THMC) model for ASR degradation. This comprehensive approach of monitoring, data analytics, and uncertainty-quantified diagnosis and prognosis will facilitate the development of a quantitative, risk informed framework that will support continuous assessment and risk management of structural health and performance.
Recent advances in computational-analytical integral transforms for convection-diffusion problems
NASA Astrophysics Data System (ADS)
Cotta, R. M.; Naveira-Cotta, C. P.; Knupp, D. C.; Zotin, J. L. Z.; Pontes, P. C.; Almeida, A. P.
2017-10-01
A unifying overview of the Generalized Integral Transform Technique (GITT) as a computational-analytical approach for solving convection-diffusion problems is presented. This work is aimed at bringing together some of the most recent developments on both accuracy and convergence improvements on this well-established hybrid numerical-analytical methodology for partial differential equations. Special emphasis is given to novel algorithm implementations, all directly connected to enhancing the eigenfunction expansion basis, such as a single domain reformulation strategy for handling complex geometries, an integral balance scheme in dealing with multiscale problems, the adoption of convective eigenvalue problems in formulations with significant convection effects, and the direct integral transformation of nonlinear convection-diffusion problems based on nonlinear eigenvalue problems. Then, selected examples are presented that illustrate the improvement achieved in each class of extension, in terms of convergence acceleration and accuracy gain, which are related to conjugated heat transfer in complex or multiscale microchannel-substrate geometries, multidimensional Burgers equation model, and diffusive metal extraction through polymeric hollow fiber membranes. Numerical results are reported for each application and, where appropriate, critically compared against the traditional GITT scheme without convergence enhancement schemes and commercial or dedicated purely numerical approaches.
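As a point of reference for the expansions discussed above, the elementary prototype that GITT generalizes is the classical eigenfunction (integral-transform) solution of one-dimensional transient diffusion. The sketch below is illustrative only and is not taken from the paper; the boundary and initial conditions are the standard textbook case.

```python
# Classical eigenfunction-expansion solution of u_t = alpha*u_xx on (0,1)
# with u(0,t) = u(1,t) = 0 and u(x,0) = 1: the prototype of the hybrid
# numerical-analytical expansions that GITT generalizes.
import numpy as np

def T(x, t, alpha=1.0, n_terms=50):
    total = np.zeros_like(x, dtype=float)
    for n in range(1, 2 * n_terms, 2):        # only odd modes have nonzero
        lam = n * np.pi                       # coefficients b_n = 4/(n*pi)
        total += (4.0 / lam) * np.sin(lam * x) * np.exp(-alpha * lam**2 * t)
    return total

x = np.linspace(0.0, 1.0, 101)
print(T(x, 0.01)[50])    # centerline temperature at t = 0.01
```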
Evolution of accelerometer methods for physical activity research.
Troiano, Richard P; McClain, James J; Brychta, Robert J; Chen, Kong Y
2014-07-01
The technology and application of current accelerometer-based devices in physical activity (PA) research allow the capture and storage or transmission of large volumes of raw acceleration signal data. These rich data not only provide opportunities to improve PA characterisation, but also bring logistical and analytic challenges. We discuss how researchers and developers from multiple disciplines are responding to the analytic challenges and how advances in data storage, transmission and big data computing will minimise logistical challenges. These new approaches also bring the need for several paradigm shifts for PA researchers, including a shift from count-based approaches and regression calibrations for PA energy expenditure (PAEE) estimation to activity characterisation and EE estimation based on features extracted from raw acceleration signals. Furthermore, a collaborative approach towards analytic methods is proposed to facilitate PA research, which requires a shift away from multiple independent calibration studies. Finally, we make the case for a distinction between PA represented by accelerometer-based devices and PA assessed by self-report.
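A hedged sketch of the kind of raw-signal feature extraction this shift implies, replacing device counts: ENMO (Euclidean norm minus one g, clipped at zero) summarized per epoch from triaxial acceleration. The epoch length and array layout are assumptions, not a standard from the review.

```python
# Feature extraction from raw triaxial acceleration (in g units), sketch.
import numpy as np

def enmo(xyz):
    """xyz: (n, 3) array of raw accelerations in g."""
    vm = np.linalg.norm(xyz, axis=1)
    return np.maximum(vm - 1.0, 0.0)      # remove the 1 g gravity offset

def epoch_features(xyz, fs, epoch_s=5):
    """Mean and SD of ENMO per fixed-length epoch."""
    e = enmo(xyz)
    n = int(fs * epoch_s)
    epochs = e[: len(e) // n * n].reshape(-1, n)
    return {"mean_enmo": epochs.mean(axis=1), "sd_enmo": epochs.std(axis=1)}

# Example: 10 s of synthetic 30 Hz data
xyz = np.random.normal([0, 0, 1], 0.05, size=(300, 3))
print(epoch_features(xyz, fs=30)["mean_enmo"])
```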
Buvé, Carolien; Van Bedts, Tine; Haenen, Annelien; Kebede, Biniam; Braekers, Roel; Hendrickx, Marc; Van Loey, Ann; Grauwet, Tara
2018-07-01
Accurate shelf-life dating of food products is crucial for consumers and industries. Therefore, in this study we applied a science-based approach for shelf-life assessment, including accelerated shelf-life testing (ASLT), acceptability testing and the screening of analytical attributes for fast shelf-life predictions. Shelf-stable strawberry juice was selected as a case study. Ambient storage (20 °C) had no effect on the aroma-based acceptance of strawberry juice. The colour-based acceptability decreased during storage under ambient and accelerated (28-42 °C) conditions. The application of survival analysis showed that the colour-based shelf-life was reached in the early stages of storage (≤11 weeks) and that the shelf-life was shortened at higher temperatures. None of the selected attributes (a* and ΔE* values, anthocyanin and ascorbic acid content) is an ideal analytical marker for shelf-life predictions in the investigated temperature range (20-42 °C). Nevertheless, an overall analytical cut-off value over the whole temperature range can be selected. Colour changes of strawberry juice during storage are shelf-life limiting. Combining ASLT with acceptability testing allowed faster insight into the change in colour-based acceptability and shelf-life predictions relying on scientific data. An analytical marker is a convenient tool for shelf-life predictions in the context of ASLT.
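As an illustration of how ASLT data feed a shelf-life prediction, the sketch below applies an assumed Arrhenius temperature dependence; the activation energy and the accelerated shelf life are hypothetical placeholders, not values from the study.

```python
# Arrhenius extrapolation of an accelerated shelf life to storage temperature.
# Reaction rate k ~ exp(-Ea/(R*T)); shelf life scales as 1/k, so the
# acceleration factor is AF = exp(Ea/R * (1/T_storage - 1/T_accelerated)).
import numpy as np

R = 8.314      # J/(mol K)
Ea = 80e3      # J/mol, assumed value for a colour-degradation reaction

def acceleration_factor(t_storage_c, t_accel_c):
    T1, T2 = t_storage_c + 273.15, t_accel_c + 273.15
    return np.exp(Ea / R * (1.0 / T1 - 1.0 / T2))

weeks_at_42c = 11.0   # hypothetical colour-based shelf life found at 42 degC
print(weeks_at_42c * acceleration_factor(20.0, 42.0))  # predicted weeks at 20 degC
```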
NASA Astrophysics Data System (ADS)
Kang, Yan-Mei; Chen, Xi; Lin, Xu-Dong; Tan, Ning
The mean first passage time (MFPT) in a phenomenological gene transcriptional regulatory model with non-Gaussian noise is analytically investigated based on the singular perturbation technique. The effect of the non-Gaussian noise on the phenomenon of stochastic resonance (SR) is then disclosed based on a new combination of adiabatic elimination and linear response approximation. Compared with the results in the Gaussian noise case, it is found that bounded non-Gaussian noise inhibits the transition between different concentrations of protein, while heavy-tailed non-Gaussian noise accelerates the transition. It is also found that the optimal noise intensity for SR in the heavy-tailed noise case is smaller, while the optimal noise intensity in the bounded noise case is larger. These observations can be explained by the heavy-tailed noise easing random transitions.
Tao, Li; Zhu, Kun; Zhu, Jungao; Xu, Xiaohan; Lin, Chen; Ma, Wenjun; Lu, Haiyang; Zhao, Yanying; Lu, Yuanrong; Chen, Jia-Er; Yan, Xueqing
2017-07-07
With the development of laser technology, laser-driven proton acceleration provides a new method for proton tumor therapy. However, it has not been applied in practice because of the wide and decreasing energy spectrum of laser-accelerated proton beams. In this paper, we propose an analytical model to reconstruct the spread-out Bragg peak (SOBP) using laser-accelerated proton beams. Firstly, we present a modified weighting formula for protons of different energies. Secondly, a theoretical model for the reconstruction of SOBPs with laser-accelerated proton beams has been built. It can quickly calculate the number of laser shots needed for each energy interval of the laser-accelerated protons. Finally, we show the 2D reconstruction results of SOBPs for laser-accelerated proton beams and the ideal situation. The final results show that our analytical model can give an SOBP reconstruction scheme that can be used for actual tumor therapy.
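The paper's weighting formula is not reproduced in the abstract, so the sketch below shows a generic toy version of SOBP construction: pristine Bragg curves, here a crude analytic stand-in rather than measured depth-dose data, are summed with non-negative least-squares weights chosen to flatten the dose across the target interval.

```python
# Toy SOBP construction by non-negative least squares (not the authors' model).
import numpy as np
from scipy.optimize import nnls

depth = np.linspace(0.0, 15.0, 301)      # cm
ranges = np.linspace(8.0, 12.0, 9)       # peak depths of the energy layers

def bragg(z, r, sigma=0.3):
    """Crude pristine-peak stand-in: flat entrance plateau plus a Gaussian peak,
    with no dose beyond the distal falloff."""
    entrance = 0.4 * (z < r)
    peak = np.exp(-0.5 * ((z - r) / sigma) ** 2)
    return (entrance + peak) * (z < r + 2 * sigma)

A = np.stack([bragg(depth, r) for r in ranges], axis=1)
target = ((depth >= 8.0) & (depth <= 12.0)).astype(float)   # flat dose goal
w, _ = nnls(A, target)
print(np.round(w / w.max(), 3))   # deepest layer carries the largest weight
```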
Boonen, Kurt; Landuyt, Bart; Baggerman, Geert; Husson, Steven J; Huybrechts, Jurgen; Schoofs, Liliane
2008-02-01
MS is currently one of the most important analytical techniques in biological and medical research. ESI and MALDI launched the field of MS into biology. The performance of mass spectrometers increased tremendously over the past decades. Other technological advances increased the analytical power of biological MS even more. First, the advent of the genome projects allowed an automated analysis of mass spectrometric data. Second, improved separation techniques, like nanoscale HPLC, are essential for MS analysis of biomolecules. The recent progress in bioinformatics is the third factor that accelerated the biochemical analysis of macromolecules. The first part of this review will introduce the basics of these techniques. The field that integrates all these techniques to identify endogenous peptides is called peptidomics and will be discussed in the last section. This integrated approach aims at identifying all the present peptides in a cell, organ or organism (the peptidome). Today, peptidomics is used by several fields of research. Special emphasis will be given to the identification of neuropeptides, a class of short proteins that fulfil several important intercellular signalling functions in every animal. MS imaging techniques and biomarker discovery will also be discussed briefly.
NASA Astrophysics Data System (ADS)
Tomarov, G. V.; Povarov, V. P.; Shipkov, A. A.; Gromov, A. F.; Budanov, V. A.; Golubeva, T. N.
2015-03-01
Matters concerned with making efficient use of the information-analytical system on the flow-accelerated corrosion problem in setting up in-service examination of the metal of pipeline elements operating in the secondary coolant circuit of the VVER-440-based power units at the Novovoronezh NPP are considered. The principles used to select samples of pipeline elements in planning ultrasonic thickness measurements for timely revealing metal thinning due to flow-accelerated corrosion along with reducing the total amount of measurements in the condensate-feedwater path are discussed.
Simulation of FIB-SEM images for analysis of porous microstructures.
Prill, Torben; Schladitz, Katja
2013-01-01
Focused ion beam nanotomography (FIB-SEM tomography), which combines serial sectioning using a focused ion beam with SEM, yields high-quality three-dimensional images of material microstructures at the nanometer scale. However, FIB-SEM tomography of highly porous media leads to shine-through artifacts that prevent automatic segmentation of the solid component. We simulate the SEM process in order to generate synthetic FIB-SEM image data for developing and validating segmentation methods. Monte Carlo techniques yield accurate results, but are too slow for the simulation of FIB-SEM tomography, which requires hundreds of SEM images for one dataset alone. Nevertheless, a quasi-analytic description of the specimen and various acceleration techniques, including a track compression algorithm and an acceleration for the simulation of secondary electrons, cut down the computing time by orders of magnitude, allowing FIB-SEM tomography to be simulated for the first time.
V Soares Maciel, Edvaldo; de Toffoli, Ana Lúcia; Lanças, Fernando Mauro
2018-04-20
The accelerated growth of the world's population has increased food consumption, demanding more rigorous control of residues and contaminants in food products marketed for human consumption. In view of the complexity of most food matrices, including fruits, vegetables, different types of meat, beverages, among others, a sample preparation step is important to provide more reliable results when combined with HPLC separations. An adequate sample preparation step before the chromatographic analysis is mandatory for obtaining higher precision and accuracy and improving the extraction of the target analytes, one of the priorities in analytical chemistry. The recent discovery of new materials such as ionic liquids, graphene-derived materials, molecularly imprinted polymers, restricted access media, magnetic nanoparticles, and carbonaceous nanomaterials has provided highly sensitive and selective results in an extensive variety of applications. These materials, as well as their several possible combinations, have been demonstrated to be highly appropriate for the extraction of different analytes in complex samples such as food products. The main characteristics and applications of these new materials in food analysis are presented and discussed in this paper. Another topic discussed in this review covers the main advantages and limitations of sample preparation microtechniques, as well as their off-line and on-line combination with HPLC for food analysis.
Electro-optic spatial decoding on the spherical-wavefront Coulomb fields of plasma electron sources.
Huang, K; Esirkepov, T; Koga, J K; Kotaki, H; Mori, M; Hayashi, Y; Nakanii, N; Bulanov, S V; Kando, M
2018-02-13
Detection of the pulse duration and arrival timing of relativistic electron beams is an important issue in accelerator physics. Electro-optic diagnostics based on the Coulomb fields of electron beams have the advantage of being single-shot and non-destructive. We present a study introducing the electro-optic spatial decoding technique to laser wakefield acceleration. By placing an electro-optic crystal very close to a gas target, we discovered that the Coulomb field of the electron beam possessed a spherical wavefront and was inconsistent with the previously widely used model. The field structure was demonstrated by experimental measurement, analytic calculations and simulations. A temporal mapping relationship with generality was derived in a geometry where the signals had spherical wavefronts. This study could be helpful for applications of electro-optic diagnostics in laser plasma acceleration experiments.
Second International Conference on Accelerating Biopharmaceutical Development
2009-01-01
The Second International Conference on Accelerating Biopharmaceutical Development was held in Coronado, California. The meeting was organized by the Society for Biological Engineering (SBE) and the American Institute of Chemical Engineers (AIChE); SBE is a technological community of the AIChE. Bob Adamson (Wyeth) and Chuck Goochee (Centocor) were co-chairs of the event, which had the theme “Delivering cost-effective, robust processes and methods quickly and efficiently.” The first day focused on emerging disruptive technologies and cutting-edge analytical techniques. Day two featured presentations on accelerated cell culture process development, critical quality attributes, specifications and comparability, and high throughput protein formulation development. The final day was dedicated to discussion of technology options and new analysis methods provided by emerging disruptive technologies; functional interaction, integration and synergy in platform development; and rapid and economic purification process development.
Huang, Hsuan-Ming; Hsiao, Ing-Tsung
2017-01-01
Over the past decade, image quality in low-dose computed tomography has been greatly improved by various compressive sensing- (CS-) based reconstruction methods. However, these methods have some disadvantages including high computational cost and slow convergence rate. Many different speed-up techniques for CS-based reconstruction algorithms have been developed. The purpose of this paper is to propose a fast reconstruction framework that combines a CS-based reconstruction algorithm with several speed-up techniques. First, total difference minimization (TDM) was implemented using the soft-threshold filtering (STF). Second, we combined TDM-STF with the ordered subsets transmission (OSTR) algorithm for accelerating the convergence. To further speed up the convergence of the proposed method, we applied the power factor and the fast iterative shrinkage thresholding algorithm to OSTR and TDM-STF, respectively. Results obtained from simulation and phantom studies showed that many speed-up techniques could be combined to greatly improve the convergence speed of a CS-based reconstruction algorithm. More importantly, the increased computation time (≤10%) was minor as compared to the acceleration provided by the proposed method. In this paper, we have presented a CS-based reconstruction framework that combines several acceleration techniques. Both simulation and phantom studies provide evidence that the proposed method has the potential to satisfy the requirement of fast image reconstruction in practical CT.
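A compact sketch of two of the acceleration ingredients named above, soft-threshold filtering inside a FISTA-style momentum loop, applied here to a generic l1-regularized least-squares problem rather than full CT reconstruction; the system matrix and parameters are placeholders.

```python
# Soft thresholding plus FISTA momentum for min 0.5*||Ax - b||^2 + lam*||x||_1.
import numpy as np

def soft_threshold(x, tau):
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def fista(A, b, lam=0.1, n_iter=200):
    L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    y = x.copy()
    t = 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)
        x_new = soft_threshold(y - grad / L, lam / L)   # proximal (STF) step
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)   # momentum step
        x, t = x_new, t_new
    return x

# Example on a small random system:
rng = np.random.default_rng(0)
A = rng.standard_normal((60, 100))
x_true = np.zeros(100); x_true[:5] = 3.0
print(np.round(fista(A, A @ x_true)[:8], 2))   # recovers the sparse signal
```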
NASA Astrophysics Data System (ADS)
Clark, A. E.; Yoon, S.; Sheesley, R. J.; Usenko, S.
2014-12-01
DISCOVER-AQ is a NASA mission seeking to better understand air quality in cities across the United States. In September 2013, flight, satellite and ground-based data were collected in Houston, TX and the surrounding metropolitan area. Over 300 particulate matter filter samples were collected as part of the ground-based sampling efforts at four sites across Houston. Samples include total suspended particulate matter (TSP) and fine particulate matter (less than 2.5 μm in aerodynamic diameter; PM2.5). For this project, an analytical method has been developed for the pressurized liquid extraction (PLE) of a wide variety of organic tracers and contaminants from quartz fiber filters (QFFs). Over 100 compounds were selected, including polycyclic aromatic hydrocarbons (PAHs), hopanes, levoglucosan, organochlorine pesticides, polychlorinated biphenyls (PCBs), polybrominated diphenyl ethers (PBDEs), and organophosphate flame retardants (OPFRs). Currently, there is no analytical method validated for the reproducible extraction of all seven compound classes in a single automated technique. Prior to extraction, QFF samples were spiked with known amounts of target analyte standards and isotopically-labeled surrogate standards. The QFFs were then extracted with methylene chloride:acetone at high temperature (100 °C) and pressure (1500 psi) using a Thermo Dionex Accelerated Solvent Extractor system (ASE 350). Extracts were concentrated, spiked with known amounts of isotopically-labeled internal standards, and analyzed by gas chromatography coupled with mass spectrometry utilizing electron ionization and electron capture negative ionization. Target analytes were surrogate recovery-corrected to account for analyte loss during sample preparation. Ambient concentrations of over 100 organic tracers and contaminants will be presented for four sites in Houston during DISCOVER-AQ.
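Surrogate recovery correction reduces to dividing each measured amount by the recovery fraction of the matching isotopically-labeled surrogate. A small sketch with hypothetical analytes and numbers, not values from the study:

```python
# Surrogate recovery correction (sketch; analytes and numbers are hypothetical).
measured_ng = {"benzo[a]pyrene": 12.4, "levoglucosan": 480.0}   # per filter
surrogate_recovery = {"benzo[a]pyrene": 0.82, "levoglucosan": 0.67}

# Divide each measured amount by the recovery fraction of its surrogate,
# compensating for analyte loss during extraction and concentration.
corrected = {a: m / surrogate_recovery[a] for a, m in measured_ng.items()}
print(corrected)   # recovery-corrected amounts in ng per filter
```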
Analysis and control of high-speed wheeled vehicles
NASA Astrophysics Data System (ADS)
Velenis, Efstathios
In this work we reproduce driving techniques to mimic expert race drivers and obtain the open-loop control signals that may be used by auto-pilot agents driving autonomous ground wheeled vehicles. Race drivers operate their vehicles at the limits of the acceleration envelope. An accurate characterization of the acceleration capacity of the vehicle is required. Understanding and reproduction of such complex maneuvers also require a physics-based mathematical description of the vehicle dynamics. While most of the modeling issues of ground-vehicles/automobiles are already well established in the literature, lack of understanding of the physics associated with friction generation results in ad-hoc approaches to tire friction modeling. In this work we revisit this aspect of the overall vehicle modeling and develop a tire friction model that provides physical interpretation of the tire forces. The new model is free of those singularities at low vehicle speed and wheel angular rate that are inherent in the widely used empirical static models. In addition, the dynamic nature of the tire model proposed herein allows the study of dynamic effects such as transients and hysteresis. The trajectory-planning problem for an autonomous ground wheeled vehicle is formulated in an optimal control framework aiming to minimize the time of travel and maximize the use of the available acceleration capacity. The first approach to solve the optimal control problem is using numerical techniques. Numerical optimization allows incorporation of a vehicle model of high fidelity and generates realistic solutions. Such an optimization scheme provides an ideal platform to study the limit operation of the vehicle, which would not be possible via straightforward simulation. In this work we emphasize the importance of online applicability of the proposed methodologies. This underlines the need for optimal solutions that require little computational cost and are able to incorporate real, unpredictable environments. A semi-analytic methodology is developed to generate the optimal velocity profile for minimum time travel along a prescribed path. The semi-analytic nature ensures minimal computational cost while a receding horizon implementation allows application of the methodology in uncertain environments. Extensions to increase fidelity of the vehicle model are finally provided.
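A hedged sketch of the semi-analytic velocity-profile idea described above (a common construction, not necessarily the author's exact algorithm): the speed along a fixed path is capped by the friction circle through the local curvature, then a forward pass enforces the drive-acceleration limit and a backward pass the braking limit. All limits below are assumed numbers.

```python
# Minimum-time velocity profile along a fixed path (forward-backward pass).
import numpy as np

def velocity_profile(s, curvature, mu=1.0, g=9.81, a_max=6.0, b_max=8.0,
                     v_top=90.0):
    """s: arc length samples (m); curvature: path curvature at each sample (1/m)."""
    # lateral (friction-circle) limit from curvature, capped by a top speed
    v = np.minimum(np.sqrt(mu * g / np.maximum(np.abs(curvature), 1e-6)), v_top)
    for i in range(1, len(s)):                   # forward pass: acceleration limit
        ds = s[i] - s[i - 1]
        v[i] = min(v[i], np.sqrt(v[i - 1] ** 2 + 2.0 * a_max * ds))
    for i in range(len(s) - 2, -1, -1):          # backward pass: braking limit
        ds = s[i + 1] - s[i]
        v[i] = min(v[i], np.sqrt(v[i + 1] ** 2 + 2.0 * b_max * ds))
    return v

s = np.linspace(0.0, 500.0, 501)
kappa = 0.01 * np.sin(2 * np.pi * s / 250.0)     # toy path curvature
print(velocity_profile(s, kappa)[:5])            # speeds near the start (m/s)
```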
Neural Networks for Modeling and Control of Particle Accelerators
NASA Astrophysics Data System (ADS)
Edelen, A. L.; Biedron, S. G.; Chase, B. E.; Edstrom, D.; Milton, S. V.; Stabile, P.
2016-04-01
Particle accelerators are host to myriad nonlinear and complex physical phenomena. They often involve a multitude of interacting systems, are subject to tight performance demands, and should be able to run for extended periods of time with minimal interruptions. Often times, traditional control techniques cannot fully meet these requirements. One promising avenue is to introduce machine learning and sophisticated control techniques inspired by artificial intelligence, particularly in light of recent theoretical and practical advances in these fields. Within machine learning and artificial intelligence, neural networks are particularly well-suited to modeling, control, and diagnostic analysis of complex, nonlinear, and time-varying systems, as well as systems with large parameter spaces. Consequently, the use of neural network-based modeling and control techniques could be of significant benefit to particle accelerators. For the same reasons, particle accelerators are also ideal test-beds for these techniques. Many early attempts to apply neural networks to particle accelerators yielded mixed results due to the relative immaturity of the technology for such tasks. The purpose of this paper is to re-introduce neural networks to the particle accelerator community and report on some work in neural network control that is being conducted as part of a dedicated collaboration between Fermilab and Colorado State University (CSU). We describe some of the challenges of particle accelerator control, highlight recent advances in neural network techniques, discuss some promising avenues for incorporating neural networks into particle accelerator control systems, and describe a neural network-based control system that is being developed for resonance control of an RF electron gun at the Fermilab Accelerator Science and Technology (FAST) facility, including initial experimental results from a benchmark controller.
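As a toy illustration of the neural-network modeling idea (not the FAST resonance-control system itself), the following sketch fits a small multilayer perceptron to a hypothetical nonlinear accelerator subsystem response; the input and output choices are invented for the example.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical data: three operating parameters -> one nonlinear response
# (e.g., a cavity detuning proxy). All names and relations are invented.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(5000, 3))          # e.g. [power, flow, ambient T]
y = 0.8 * X[:, 0] ** 2 - 0.3 * X[:, 1] + 0.1 * np.sin(3 * X[:, 2])

model = MLPRegressor(hidden_layer_sizes=(32, 32), activation="tanh",
                     max_iter=2000, random_state=0)
model.fit(X[:4000], y[:4000])
print("held-out R^2:", model.score(X[4000:], y[4000:]))
```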
A new method for flight test determination of propulsive efficiency and drag coefficient
NASA Technical Reports Server (NTRS)
Bull, G.; Bridges, P. D.
1983-01-01
A flight test method is described from which propulsive efficiency as well as parasite and induced drag coefficients can be directly determined using relatively simple instrumentation and analysis techniques. The method uses information contained in the transient response in airspeed for a small power change in level flight in addition to the usual measurement of power required for level flight. Measurements of pitch angle and longitudinal and normal acceleration are eliminated. The theoretical basis for the method, the analytical techniques used, and the results of application of the method to flight test data are presented.
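The steady-state part of such a method rests on the classical decomposition of level-flight power required into parasite and induced terms, P = A V^3 + B/V, which can be fitted by least squares. The sketch below is a hedged illustration with invented data and assumed aircraft parameters, not the paper's transient-response analysis.

```python
import numpy as np

# Level-flight power-required model: P = A*V**3 + B/V, where
# A ~ 0.5*rho*S*CD0 (parasite term) and B relates to induced drag.
V = np.array([30., 35., 40., 45., 50., 55.])        # airspeed, m/s (example data)
P = np.array([21e3, 24e3, 29e3, 36e3, 45e3, 57e3])  # power required, W (example)

M = np.column_stack([V ** 3, 1.0 / V])
(A, B), *_ = np.linalg.lstsq(M, P, rcond=None)

rho, S = 1.225, 16.0                # assumed air density and wing area
CD0 = 2 * A / (rho * S)             # parasite drag coefficient from the fit
print(f"CD0 ≈ {CD0:.4f}, induced term B ≈ {B:.1f} W·m/s")
```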
Algorithms for computing the geopotential using a simple density layer
NASA Technical Reports Server (NTRS)
Morrison, F.
1976-01-01
Several algorithms have been developed for computing the potential and attraction of a simple density layer. These are numerical cubature, Taylor series, and a mixed analytic and numerical integration using a singularity-matching technique. A computer program has been written to combine these techniques for computing the disturbing acceleration on an artificial earth satellite. A total of 1640 equal-area, constant surface density blocks on an oblate spheroid are used. The singularity-matching algorithm is used in the subsatellite region, Taylor series in the surrounding zone, and numerical cubature on the rest of the earth.
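A minimal sketch of the plain numerical-cubature branch (each block collapsed to a point mass) is shown below; the singularity-matching and Taylor-series zones described above are not reproduced, and the block discretization is assumed.

```python
import numpy as np

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def disturbing_accel(r_sat, centers, masses):
    """Crude cubature: each equal-area block of the simple density layer is
    collapsed to a point mass at its center; contributions are summed."""
    d = centers - r_sat                      # (N, 3) vectors satellite -> block
    r3 = np.linalg.norm(d, axis=1) ** 3
    return G * np.sum(masses[:, None] * d / r3[:, None], axis=0)

# Example: a single 1e12 kg surface block directly below a satellite
a = disturbing_accel(np.array([7.0e6, 0.0, 0.0]),
                     np.array([[6.378e6, 0.0, 0.0]]),
                     np.array([1e12]))
print(a)
```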
Modelling and Closed-Loop System Identification of a Quadrotor-Based Aerial Manipulator
NASA Astrophysics Data System (ADS)
Dube, Chioniso; Pedro, Jimoh O.
2018-05-01
This paper presents the modelling and system identification of a quadrotor-based aerial manipulator. The aerial manipulator model is first derived analytically using the Newton-Euler formulation for the quadrotor and Recursive Newton-Euler formulation for the manipulator. The aerial manipulator is then simulated with the quadrotor under Proportional Derivative (PD) control, with the manipulator in motion. The simulation data is then used for system identification of the aerial manipulator. Auto Regressive with eXogenous inputs (ARX) models are obtained from the system identification for linear accelerations $\ddot{X}$ and $\ddot{Y}$ and yaw angular acceleration $\ddot{\psi}$. For linear acceleration $\ddot{Z}$, and pitch and roll angular accelerations $\ddot{\theta}$ and $\ddot{\phi}$, Auto Regressive Moving Average with eXogenous inputs (ARMAX) models are identified.
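For readers unfamiliar with ARX identification, the sketch below shows a generic least-squares estimation of ARX coefficients from input/output data; the model orders and data are assumptions, and the paper's actual identification toolchain may differ.

```python
import numpy as np

def fit_arx(u, y, na=2, nb=2):
    """Least-squares fit of an ARX(na, nb) model:
    y[k] = -a1*y[k-1] - ... - a_na*y[k-na] + b1*u[k-1] + ... + b_nb*u[k-nb]."""
    n = max(na, nb)
    rows = []
    for k in range(n, len(y)):
        # regressor row: [-y[k-1], ..., -y[k-na], u[k-1], ..., u[k-nb]]
        rows.append(np.concatenate([-y[k - na:k][::-1], u[k - nb:k][::-1]]))
    Phi = np.array(rows)
    theta, *_ = np.linalg.lstsq(Phi, y[n:], rcond=None)
    return theta[:na], theta[na:]            # AR coefficients, input coefficients

# Tiny demo on a known first-order system: y[k] = 0.9*y[k-1] + 0.1*u[k-1]
rng = np.random.default_rng(0)
u = rng.normal(size=500)
y = np.zeros(500)
for k in range(1, 500):
    y[k] = 0.9 * y[k - 1] + 0.1 * u[k - 1]
print(fit_arx(u, y, na=1, nb=1))             # ~([-0.9] as a1 sign convention, [0.1])
```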
Late-time structure of the Bunch-Davies FRW wavefunction
NASA Astrophysics Data System (ADS)
Konstantinidis, George; Mahajan, Raghu; Shaghoulian, Edgar
2016-10-01
In this short note we organize a perturbation theory for the Bunch-Davies wavefunction in flat, accelerating cosmologies. The calculational technique avoids the in-in formalism and instead uses an analytic continuation from Euclidean signature. We will consider both massless and conformally coupled self-interacting scalars. These calculations explicitly illustrate two facts. The first is that IR divergences get sharper as the acceleration slows. The second is that UV-divergent contact terms in the Euclidean computation can contribute to the absolute value of the wavefunction in Lorentzian signature. Here UV divergent refers to terms involving inverse powers of the radial cutoff in the Euclidean computation. In Lorentzian signature such terms encode physical time dependence of the wavefunction.
Estimates of effects of residual acceleration on USML-1 experiments
NASA Technical Reports Server (NTRS)
Naumann, Robert J.
1995-01-01
The purpose of this study effort was to develop analytical models to describe the effects of residual accelerations on the experiments to be carried on the first U.S. Microgravity Lab mission (USML-1) and to test the accuracy of these models by comparing the pre-flight predicted effects with the post-flight measured effects. After surveying the experiments to be performed on USML-1, it became evident that the anticipated residual accelerations during the USML-1 mission were well below the threshold for most of the primary experiments and all of the secondary (Glovebox) experiments, and that the only set of experiments that could provide quantifiable effects, and thus provide a definitive test of the analytical models, were the three melt growth experiments using the Bridgman-Stockbarger type Crystal Growth Furnace (CGF). This class of experiments is by far the most sensitive to low-level quasi-steady accelerations that are unavoidable on spacecraft operating in low Earth orbit. Because of this, they have been the drivers for the acceleration requirements imposed on the Space Station. Therefore, it is appropriate that the models on which these requirements are based are tested experimentally. Also, since solidification proceeds directionally over a long period of time, the solidified ingot provides a more or less continuous record of the effects from acceleration disturbances.
Strategies for Analyzing Sub-Micrometer Features with the FE-EPMA
NASA Astrophysics Data System (ADS)
McSwiggen, P.; Armstrong, J. T.; Nielsen, C.
2013-12-01
Changes in column design and electronics, as well as new types of spectrometers and analyzing crystals, have significantly advanced electron microprobes in terms of stability, reproducibility, and detection limits. A major advance in spatial resolution has occurred through the use of the field emission electron gun. The spatial resolution of an analysis is controlled by the diameter of the electron beam and the amount of scatter that takes place within the sample. The beam diameter is controlled by the column and the type of electron gun being used. The accelerating voltage and the average atomic number/density of the sample control the amount of electron scatter within the sample. However, a large electron interaction volume does not necessarily mean a large analytical volume. The beam electrons may spread out within a large volume, but if the electrons lack sufficient energy to produce the X-ray of interest, the analytical volume could be significantly smaller. Therefore, there are two competing strategies for creating the smallest analytical volumes. The first strategy is to reduce the accelerating voltage to produce the smallest electron interaction volume. This low-kV analytical approach is ultimately limited by the size of the electron beam itself. With a field emission gun, the smallest analytical area is normally achieved at around 5-7 kV. At lower accelerating voltages, the increase in the beam diameter begins to overshadow the reduction in internal scattering. For tungsten filament guns, the smallest analytical volume is reached at higher accelerating voltages. The second strategy is to minimize the overvoltage during the analysis. If the accelerating voltage is only 1-3 kV greater than the critical ionization energy for the X-ray line of interest, then even if the overall electron interaction volume is large, those electrons quickly lose sufficient energy to produce the desired X-rays. The portion of the interaction volume in which the desired X-rays will be produced will be very small and very near the surface. Both strategies have advantages and disadvantages depending on the ultimate goal of the analysis and the elements involved. This work will examine a number of considerations when attempting to decide which approach is best for a given analytical situation. These include: (1) the size of the analytical volumes, (2) minimum detection limits, (3) quality of the matrix corrections, (4) secondary fluorescence, and (5) effects of surface contamination, oxide layers, and carbon coatings. This work is based on results largely from the Fe-Ni binary. A simple conclusion cannot be drawn as to which strategy is better overall; the determination is highly system dependent. For many mineral systems, both strategies used in combination will produce the best results. Using multiple accelerating voltages to perform a single analysis allows the analyst to optimize the analytical conditions for each element individually.
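To see why lowering the accelerating voltage shrinks the interaction volume, one can use the standard Kanaya-Okayama range estimate; the sketch below applies it to an Fe-Ni-like target with illustrative composition values.

```python
def kanaya_okayama_range_um(E_keV, A, Z, rho):
    """Kanaya-Okayama electron range estimate (micrometres):
    R = 0.0276 * A * E^1.67 / (Z^0.89 * rho),
    with E in keV, A in g/mol, Z the atomic number, rho in g/cm^3."""
    return 0.0276 * A * E_keV ** 1.67 / (Z ** 0.89 * rho)

# Roughly averaged Fe-Ni values (illustrative, not a real alloy composition)
for kv in (5, 7, 10, 15):
    R = kanaya_okayama_range_um(kv, A=57.3, Z=26.9, rho=8.1)
    print(f"{kv:>2} kV -> ~{R:.2f} um electron range")
```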
Galvão, Elson Silva; Santos, Jane Meri; Lima, Ana Teresa; Reis, Neyval Costa; Orlando, Marcos Tadeu D'Azeredo; Stuetz, Richard Michael
2018-05-01
Epidemiological studies have shown the association of airborne particulate matter (PM) size and chemical composition with health problems affecting the cardiorespiratory and central nervous systems. PM also acts as cloud condensation nuclei (CCN) or ice nuclei (IN), taking part in the cloud formation process, and can therefore impact the climate. Several works have used different analytical techniques for PM chemical and physical characterization to supply information to source apportionment models that help environmental agencies assess accountability for damages. Despite the numerous analytical techniques described in the literature for PM characterization, laboratories are normally limited to their in-house techniques, which raises the question of whether a given technique is suitable for the purpose of a specific experimental work. The aim of this work is to summarize the main available technologies for PM characterization, serving as a guide for readers to find the most appropriate technique(s) for their investigation. Elemental analysis techniques such as atomic spectrometry-based and X-ray-based techniques, organic and carbonaceous techniques, and surface analysis techniques are discussed, illustrating their main features as well as their advantages and drawbacks. We also discuss the trends in analytical techniques used over the last two decades. The choice among all techniques is a function of a number of parameters, such as the relevant particle physical properties, sampling and measuring time, access to available facilities, and the costs associated with equipment acquisition, among other considerations. An analytical guide map is presented as a guideline for choosing the most appropriate technique for the analytical information required.
Gan, Heng Hui; Yan, Bingnan; Linforth, Robert S.T.; Fisk, Ian D.
2016-01-01
Headspace techniques have been extensively employed in food analysis to measure volatile compounds, which play a central role in the perceived quality of food. In this study atmospheric pressure chemical ionisation-mass spectrometry (APCI-MS), coupled with gas chromatography–mass spectrometry (GC–MS), was used to investigate the complex mix of volatile compounds present in Cheddar cheeses of different maturity, processing and recipes to enable characterisation of the cheeses based on their ripening stages. Partial least squares-linear discriminant analysis (PLS-DA) provided a 70% success rate in correct prediction of the age of the cheeses based on their key headspace volatile profiles. In addition to predicting maturity, the analytical results coupled with chemometrics offered a rapid and detailed profiling of the volatile component of Cheddar cheeses, which could offer a new tool for quality assessment and accelerate product development. PMID:26212994
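PLS-DA is commonly implemented as PLS2 regression onto a one-hot class matrix followed by an argmax class assignment. The sketch below illustrates this on synthetic stand-in data; it is not the study's actual pipeline or data.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Toy stand-in: rows = cheese samples, columns = volatile intensities,
# labels = maturity class (0: mild, 1: mature, 2: extra mature).
rng = np.random.default_rng(1)
X = rng.normal(size=(60, 40))
labels = np.repeat([0, 1, 2], 20)
Y = np.eye(3)[labels]                      # one-hot encoding for PLS-DA

pls = PLSRegression(n_components=5).fit(X, Y)
pred = pls.predict(X).argmax(axis=1)       # assign the class with highest score
print("training accuracy:", (pred == labels).mean())
```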
Sabatini, Angelo Maria; Ligorio, Gabriele; Mannini, Andrea
2015-11-23
In biomechanical studies, Optical Motion Capture Systems (OMCS) are considered the gold standard for determining the orientation and the position (pose) of an object in a global reference frame. However, the use of OMCS can be difficult, which has prompted research on alternative sensing technologies, such as body-worn inertial sensors. We developed a drift-free method to estimate the three-dimensional (3D) displacement of a body part during cyclical motions using body-worn inertial sensors. We performed Fourier analysis of the stride-by-stride estimates of the linear acceleration, which were obtained by transposing the specific forces measured by the tri-axial accelerometer into the global frame using a quaternion-based orientation estimation algorithm and detecting when each stride began using a gait-segmentation algorithm. The time integration was performed analytically using the Fourier series coefficients; the inverse Fourier series was then taken to reconstruct the displacement over each single stride. The displacement traces were concatenated and spline-interpolated to obtain the entire trace. The method was applied to estimate the motion of the lower trunk of healthy subjects who walked on a treadmill and was validated using OMCS reference 3D displacement data; different approaches were tested for transposing the measured specific force into the global frame, segmenting the gait, and performing the time integration (numerically and analytically). The widths of the limits of agreement between each tested method and the OMCS reference method were computed for each anatomical direction: Medio-Lateral (ML), VerTical (VT), and Antero-Posterior (AP). Using the proposed method, the vertical component of displacement (VT) was within ±4 mm (±1.96 standard deviations) of OMCS data, and each component of horizontal displacement (ML and AP) was within ±9 mm of OMCS data. Fourier harmonic analysis was applied to model stride-by-stride linear accelerations during walking and to perform their analytical integration. Our results showed that analytical integration based on Fourier series coefficients is a useful approach to accurately estimate 3D displacement from noisy acceleration data.
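The core analytical-integration step can be sketched compactly: expand the (near-)periodic acceleration in Fourier harmonics and divide each harmonic by -w_k^2 to integrate twice. The snippet below assumes the orientation transposition and stride segmentation described above have already been done.

```python
import numpy as np

def displacement_from_accel(a, fs):
    """Analytical double integration of a (near-)periodic acceleration trace
    via its Fourier series: each harmonic A_k * exp(i*w_k*t) integrates twice
    to -A_k/w_k**2 * exp(i*w_k*t). The DC term is dropped (drift-free)."""
    n = len(a)
    A = np.fft.rfft(a)
    w = 2 * np.pi * np.fft.rfftfreq(n, d=1.0 / fs)
    X = np.zeros_like(A)
    X[1:] = -A[1:] / w[1:] ** 2            # skip k = 0 (mean acceleration)
    return np.fft.irfft(X, n)

fs = 100.0
t = np.arange(0, 1.0, 1 / fs)              # one gait-cycle-like segment
a = np.sin(2 * np.pi * 2 * t) + 0.3 * np.sin(2 * np.pi * 4 * t)
d = displacement_from_accel(a, fs)          # displacement over the segment
```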
NASA Astrophysics Data System (ADS)
Takahashi, Tomoko; Thornton, Blair
2017-12-01
This paper reviews methods to compensate for matrix effects and self-absorption during quantitative analysis of the composition of solids measured using Laser Induced Breakdown Spectroscopy (LIBS), and their applications to in-situ analysis. Methods to reduce matrix and self-absorption effects on calibration curves are first introduced. The conditions under which calibration curves are applicable to quantification of the composition of solid samples, and their limitations, are discussed. While calibration-free LIBS (CF-LIBS), which corrects matrix effects theoretically based on the Boltzmann distribution law and the Saha equation, has been applied in a number of studies, several requirements must be satisfied for the calculated chemical compositions to be valid. Also, peaks of all elements contained in the target need to be detected, which is a bottleneck for in-situ analysis of unknown materials. Multivariate analysis techniques are gaining momentum in LIBS analysis. Among the available techniques, principal component regression (PCR) analysis and partial least squares (PLS) regression analysis, which can extract composition-related information from the full spectral data, are well-established methods and have been applied to various fields, including in-situ applications in air and planetary exploration. Artificial neural networks (ANNs), which can model non-linear effects, have also been investigated as a quantitative method and their applications are introduced. The ability to make quantitative estimates based on LIBS signals is seen as a key element for the technique to gain wider acceptance as an analytical method, especially in in-situ applications. In order to accelerate this process, it is recommended that accuracy be described using common figures of merit which express the overall normalised accuracy, such as the normalised root mean square error (NRMSE), when comparing the accuracy obtained from different setups and analytical methods.
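One common convention for the recommended figure of merit, normalising the RMSE by the observed range of the reference values, is sketched below; other normalisations (e.g., by the mean) are also in use.

```python
import numpy as np

def nrmse(y_true, y_pred):
    """Root mean square error normalised by the observed range; one common
    convention (normalisation by the mean is another)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return rmse / (np.max(y_true) - np.min(y_true))

print(nrmse([1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.2, 3.8]))
```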
NASA Technical Reports Server (NTRS)
Shriver, E. L.
1972-01-01
The coaxial plasma accelerator for use as a projectile accelerator is discussed. The accelerator is described physically and analytically by solution of the circuit equations, and by solving for the magnetic pressures formed by the j × B forces on the plasma. It is shown that the plasma density must be increased if the accelerator is to be used as a projectile accelerator. Three different approaches to increasing the plasma density are discussed. When a magnetic field containment scheme was used to increase the plasma density, glass beads of 0.66 millimeter diameter were accelerated to velocities of 7 to 8 kilometers per second. Glass beads of smaller diameter were accelerated to more than twice this velocity.
Comparison Tools for Assessing the Microgravity Environment of Missions, Carriers and Conditions
NASA Technical Reports Server (NTRS)
DeLombard, Richard; McPherson, Kevin; Moskowitz, Milton; Hrovat, Ken
1997-01-01
The Principal Component Spectral Analysis (PCSA) and Quasi-steady Three-dimensional Histogram (QTH) techniques provide the means to describe the microgravity acceleration environment of an entire mission on a single plot. This allows a straightforward comparison of the microgravity environment between missions, carriers, and conditions. As shown in this report, the PCSA and QTH techniques bring both the range and median of the microgravity environment onto a single page for an entire mission or another time period or condition of interest. These single pages may then be used to compare similar analyses of other missions, time periods, or conditions. The PCSA plot is based on the frequency distribution of the vibrational energy and is normally used for an acceleration data set containing frequencies above the lowest natural frequencies of the vehicle. The QTH plot is based on the direction and magnitude of the acceleration and is normally used for acceleration data sets with frequency content less than 0.1 Hz. Various operating conditions are made evident by using PCSA and QTH plots. Equipment operating either full or part time with sufficient magnitude to be considered a disturbance is readily evident, as is equipment contributing to the background acceleration environment. A source's magnitude and/or frequency variability is also evident from the source's appearance on a PCSA plot. The PCSA and QTH techniques are valuable tools for extracting useful information from acceleration data taken over large spans of time. This report shows that these techniques provide a tool for comparison between different sets of microgravity acceleration data, for example different missions, different activities within a mission, and/or different attitudes within a mission. These techniques, as well as others, may be employed to derive useful information from acceleration data.
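A rough sketch of the quasi-steady histogram idea, binning low-frequency acceleration samples by magnitude and direction, is given below; the actual QTH product definition may differ in its choice of coordinates and binning.

```python
import numpy as np

def quasi_steady_histogram(acc, bins=24):
    """Sketch of a quasi-steady three-dimensional histogram: bin low-frequency
    acceleration samples by magnitude and direction (azimuth/elevation).
    `acc` is an (N, 3) array of quasi-steady acceleration vectors."""
    ax, ay, az = acc.T
    mag = np.linalg.norm(acc, axis=1)
    azimuth = np.arctan2(ay, ax)
    elevation = np.arcsin(np.divide(az, mag, out=np.zeros_like(az),
                                    where=mag > 0))
    H, edges = np.histogramdd(np.column_stack([mag, azimuth, elevation]),
                              bins=(bins, bins, bins))
    return H, edges

rng = np.random.default_rng(2)
H, edges = quasi_steady_histogram(1e-6 * rng.normal(size=(10_000, 3)))
print(H.sum())  # all samples accounted for
```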
A systematic FPGA acceleration design for applications based on convolutional neural networks
NASA Astrophysics Data System (ADS)
Dong, Hao; Jiang, Li; Li, Tianjian; Liang, Xiaoyao
2018-04-01
Most FPGA accelerators for convolutional neural networks are designed to optimize the inner accelerator and ignore the optimization of the data path between the inner accelerator and the outer system. This can lead to poor performance in applications like real-time video object detection. We propose a new systematic FPGA acceleration design to solve this problem. This design takes the data-path optimization between the inner accelerator and the outer system into consideration and optimizes the data path using techniques such as hardware format transformation and frame compression. It also applies fixed-point arithmetic and a new pipelining technique to optimize the inner accelerator. Together these give the final system very good performance, reaching about 10 times the performance of the original system.
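As a small illustration of the fixed-point part of such a design, the following sketch quantises values to a signed Q1.7-style grid, a common step when moving CNN data paths onto an FPGA; the bit widths are assumptions, not the paper's configuration.

```python
import numpy as np

def to_fixed_point(x, frac_bits=7, total_bits=8):
    """Quantise activations/weights to a signed fixed-point grid (here Q1.7),
    returning both the integer codes and the dequantised approximations."""
    scale = 1 << frac_bits
    lo = -(1 << (total_bits - 1))
    hi = (1 << (total_bits - 1)) - 1
    q = np.clip(np.round(x * scale), lo, hi).astype(np.int32)
    return q, q.astype(np.float64) / scale

w = np.random.default_rng(3).normal(scale=0.2, size=1000)
q, w_hat = to_fixed_point(w)
print("max abs quantisation error:", np.abs(w - w_hat).max())
```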
Techniques for hazard analysis and their use at CERN.
Nuttall, C; Schönbacher, H
2001-01-01
CERN, the European Organisation for Nuclear Research, is situated near Geneva and has its accelerators and experimental facilities astride the Swiss and French frontiers, attracting physicists from all over the world to this unique laboratory. The main accelerator is situated in a 27 km underground ring, and the experiments take place in huge underground caverns in order to detect the fragments resulting from the collision of subatomic particles at speeds approaching that of light. These detectors contain many hundreds of tons of flammable materials, mainly plastics in cables and structural components, flammable gases in the detectors themselves, and cryogenic fluids such as helium and argon. The experiments consume large amounts of electrical power, and the dangers involved have necessitated the use of analytical techniques to identify the hazards and quantify the risks to personnel and the infrastructure. The techniques described in the paper were developed in the process industries, where they have proven to be of great value. They have been successfully applied to CERN industrial and experimental installations and, in some cases, have been instrumental in changing the philosophy of the experimentalists and their detectors.
Reichert, Janice M; Jacob, Nitya; Amanullah, Ashraf
2009-01-01
The Second International Conference on Accelerating Biopharmaceutical Development was held in Coronado, California. The meeting was organized by the Society for Biological Engineering (SBE) and the American Institute of Chemical Engineers (AIChE); SBE is a technological community of the AIChE. Bob Adamson (Wyeth) and Chuck Goochee (Centocor) were co-chairs of the event, which had the theme "Delivering cost-effective, robust processes and methods quickly and efficiently." The first day focused on emerging disruptive technologies and cutting-edge analytical techniques. Day two featured presentations on accelerated cell culture process development, critical quality attributes, specifications and comparability, and high throughput protein formulation development. The final day was dedicated to discussion of technology options and new analysis methods provided by emerging disruptive technologies; functional interaction, integration and synergy in platform development; and rapid and economic purification process development.
DNA barcode-based delineation of putative species: efficient start for taxonomic workflows
Kekkonen, Mari; Hebert, Paul D N
2014-01-01
The analysis of DNA barcode sequences with varying techniques for cluster recognition provides an efficient approach for recognizing putative species (operational taxonomic units, OTUs). This approach accelerates and improves taxonomic workflows by exposing cryptic species and decreasing the risk of synonymy. This study tested the congruence of OTUs resulting from the application of three analytical methods (ABGD, BIN, GMYC) to sequence data for Australian hypertrophine moths. OTUs supported by all three approaches were viewed as robust, but 20% of the OTUs were only recognized by one or two of the methods. These OTUs were examined for three criteria to clarify their status. Monophyly and diagnostic nucleotides were both uninformative, but information on ranges was useful as sympatric sister OTUs were viewed as distinct, while allopatric OTUs were merged. This approach revealed 124 OTUs of Hypertrophinae, a more than twofold increase from the currently recognized 51 species. Because this analytical protocol is both fast and repeatable, it provides a valuable tool for establishing a basic understanding of species boundaries that can be validated with subsequent studies. PMID:24479435
Support Vector Machine Based on Adaptive Acceleration Particle Swarm Optimization
Abdulameer, Mohammed Hasan; Othman, Zulaiha Ali
2014-01-01
Existing face recognition methods utilize the particle swarm optimizer (PSO) and the opposition-based particle swarm optimizer (OPSO) to optimize the parameters of SVM. However, the utilization of random values in the velocity calculation decreases the performance of these techniques; that is, during the velocity computation, random values are normally used for the acceleration coefficients, and this creates randomness in the solution. To address this problem, an adaptive acceleration particle swarm optimization (AAPSO) technique is proposed. To evaluate our proposed method, we employ both face and iris recognition based on AAPSO with SVM (AAPSO-SVM). In the face and iris recognition systems, performance is evaluated using two human face databases, YALE and CASIA, and the UBIRIS dataset. In this method, we initially perform feature extraction and then recognition on the extracted features. In the recognition process, the extracted features are used for SVM training and testing. During the training and testing, the SVM parameters are optimized with the AAPSO technique; in AAPSO, the acceleration coefficients are computed using the particle fitness values. The SVM parameters optimized by AAPSO perform efficiently for both face and iris recognition. A comparative analysis between our proposed AAPSO-SVM and the PSO-SVM technique is presented. PMID:24790584
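A minimal sketch of a PSO variant with fitness-derived (rather than random) acceleration coefficients is shown below; the specific adaptation rule is an assumption for illustration and is not the paper's exact AAPSO update.

```python
import numpy as np

def aapso_minimise(f, dim=10, n_particles=30, iters=200, w=0.7, seed=0):
    """PSO with fitness-adaptive (non-random) acceleration coefficients:
    better-ranked particles trust their personal best more, worse-ranked
    particles are pulled more strongly toward the global best. The exact
    adaptation rule here is an illustrative assumption."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.apply_along_axis(f, 1, x)
    for _ in range(iters):
        gbest = pbest[pbest_f.argmin()]
        fit = np.apply_along_axis(f, 1, x)
        better = fit < pbest_f
        pbest[better], pbest_f[better] = x[better], fit[better]
        rank = (fit - fit.min()) / (np.ptp(fit) + 1e-12)   # 0 = best, 1 = worst
        c1 = (1.0 + 0.5 * (1.0 - rank))[:, None]   # cognitive pull from fitness
        c2 = (1.0 + 0.5 * rank)[:, None]           # social pull from fitness
        v = w * v + c1 * (pbest - x) + c2 * (gbest - x)
        x = x + v
    return pbest[pbest_f.argmin()]

sphere = lambda z: float(np.dot(z, z))
print(aapso_minimise(sphere)[:3])               # should approach the origin
```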
On the performance of piezoelectric harvesters loaded by finite width impulses
NASA Astrophysics Data System (ADS)
Doria, A.; Medè, C.; Desideri, D.; Maschio, A.; Codecasa, L.; Moro, F.
2018-02-01
The response of cantilevered piezoelectric harvesters loaded by finite width impulses of base acceleration is studied analytically in the frequency domain in order to identify the parameters that influence the generated voltage. Experimental tests are then performed on harvesters loaded by hammer impacts. The latter are used to confirm analytical results and to validate a linear finite element (FE) model of a unimorph harvester. The FE model is, in turn, used to extend analytical results to more general harvesters (tapered, inverse tapered, triangular) and to more general impulses (heel strike in human gait). From analytical and numerical results design criteria for improving harvester performance are obtained.
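The frequency-domain analysis can be illustrated with a base-excited single-degree-of-freedom proxy for the cantilever: compute the response to a finite-width (half-sine) base-acceleration pulse via the transfer function. Piezoelectric coupling is omitted and all parameters are assumed.

```python
import numpy as np

# Base-excited single-DOF proxy for a cantilever harvester: relative motion z
# satisfies z'' + 2*zeta*wn*z' + wn^2*z = -a_base(t). Solve in frequency domain.
fs, T = 10_000.0, 1.0
t = np.arange(0, T, 1 / fs)
wn, zeta = 2 * np.pi * 120.0, 0.02           # assumed natural freq and damping

tau = 0.004                                   # pulse width (s): "finite width impulse"
a = np.where(t < tau, np.sin(np.pi * t / tau), 0.0)   # half-sine base acceleration

A = np.fft.rfft(a)
w = 2 * np.pi * np.fft.rfftfreq(len(t), 1 / fs)
H = 1.0 / (wn ** 2 - w ** 2 + 2j * zeta * wn * w)     # transfer to relative disp.
z = np.fft.irfft(-A * H, len(t))              # generated voltage ~ strain ~ z (proxy)
print("peak relative displacement proxy:", np.abs(z).max())
```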
The Role of a Physical Analysis Laboratory in a 300 mm IC Development and Manufacturing Centre
NASA Astrophysics Data System (ADS)
Kwakman, L. F. Tz.; Bicais-Lepinay, N.; Courtas, S.; Delille, D.; Juhel, M.; Trouiller, C.; Wyon, C.; de la Bardonnie, M.; Lorut, F.; Ross, R.
2005-09-01
To remain competitive, IC manufacturers have to accelerate the development of the most advanced (CMOS) technologies and deliver high-yielding products with best cycle times and at competitive pricing. As technology complexity increases, so does the need for physical characterization support; however, many of the existing techniques are no longer adequate to effectively support the 65-45 nm technology node developments. New and improved techniques are definitely needed to better characterize the often marginal processes, but these should not significantly impact fabrication costs or cycle time. Hence, characterization and metrology challenges in state-of-the-art IC manufacturing are both technical and economical in nature. TEM microscopy is needed for high-quality, high-volume analytical support, but several physical and practical hurdles have to be overcome. The success rate of FIB-SEM based failure analysis drops as defects often are too small to be detected and fault isolation becomes more difficult in nano-scale device structures. To remain effective and efficient, SEM and OBIRCH techniques have to be improved or complemented with other more effective methods. Chemical analysis of novel materials and critical interfaces requires improvements in techniques such as SIMS and ToF-SIMS. Techniques that previously were used only sporadically, like EBSD and XRD, have become a 'must' to properly support backend process development. On the bright side, thanks to major technical advances, techniques that previously were practiced only at laboratory level can now be used effectively for at-line fab metrology: Voltage Contrast based defectivity control, XPS based gate dielectric metrology, and XRD based control of copper metallization processes are practical examples. In this paper, capabilities and shortcomings of several techniques and the corresponding equipment are presented, with practical illustrations of use in our Crolles facilities.
NASA Astrophysics Data System (ADS)
Vallianatos, Filippos; Chatzopoulos, George
2014-05-01
Strong observational indications support the hypothesis that many large earthquakes are preceded by accelerating seismic release rates, which are described by a power-law time-to-failure relation. In the present work, a unified theoretical framework is discussed based on the ideas of non-extensive statistical physics along with fundamental principles of physics such as energy conservation in a faulted crustal volume undergoing stress loading. We derive the time-to-failure power law for: (a) the cumulative number of earthquakes, (b) the cumulative Benioff strain, and (c) the cumulative energy released in a fault system that obeys a hierarchical distribution law extracted from Tsallis entropy. Considering the analytic conditions near the time of failure, we derive from first principles the time-to-failure power law and show that a common critical exponent m(q) exists, which is a function of the non-extensive entropic parameter q. We conclude that the cumulative precursory parameters are functions of the energy supplied to the system and the size of the precursory volume. In addition, the q-exponential distribution which describes the fault system is a crucial factor in the appearance of power-law acceleration in the seismicity. Our results, based on Tsallis entropy and energy conservation, give a new view on the empirical laws derived by other researchers. Examples and applications of this technique to observations of accelerating seismicity will also be presented and discussed. This work was implemented through the project IMPACT-ARC in the framework of the action "ARCHIMEDES III-Support of Research Teams at TEI of Crete" (MIS380353) of the Operational Program "Education and Lifelong Learning" and is co-financed by the European Union (European Social Fund) and Greek national funds.
Model-based Acceleration Control of Turbofan Engines with a Hammerstein-Wiener Representation
NASA Astrophysics Data System (ADS)
Wang, Jiqiang; Ye, Zhifeng; Hu, Zhongzhi; Wu, Xin; Dimirovsky, Georgi; Yue, Hong
2017-05-01
Acceleration control of turbofan engines is conventionally designed through either a schedule-based or an acceleration-based approach. With the widespread acceptance of model-based design in the aviation industry, it becomes necessary to investigate the issues associated with model-based design for acceleration control. In this paper, the challenges of implementing model-based acceleration control are explained; a novel Hammerstein-Wiener representation of engine models is introduced; and, based on the Hammerstein-Wiener model, a nonlinear generalized minimum variance type of optimal control law is derived. A feature of the proposed approach is that it does not require the inversion operation that usually upsets nonlinear control techniques. The effectiveness of the proposed control design method is validated through a detailed numerical study.
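The Hammerstein-Wiener structure itself, a static input nonlinearity feeding linear dynamics feeding a static output nonlinearity, can be sketched as follows; the nonlinearities and coefficients are illustrative, not an engine model.

```python
import numpy as np

def hammerstein_wiener(u, f_in, g_out, a, b):
    """Simulate a Hammerstein-Wiener model: static input nonlinearity f_in,
    linear ARX-style dynamics with coefficients (a, b), then static output
    nonlinearity g_out."""
    v = f_in(u)                              # Hammerstein (input) block
    x = np.zeros(len(u))
    for k in range(len(u)):
        ar = sum(a[i] * x[k - 1 - i] for i in range(len(a)) if k - 1 - i >= 0)
        ex = sum(b[j] * v[k - 1 - j] for j in range(len(b)) if k - 1 - j >= 0)
        x[k] = ar + ex                       # linear dynamic block
    return g_out(x)                          # Wiener (output) block

u = np.linspace(0, 1, 200)                   # e.g., a fuel-flow command ramp
y = hammerstein_wiener(u,
                       f_in=lambda s: np.tanh(2 * s),      # illustrative
                       g_out=lambda s: s + 0.1 * s ** 3,   # illustrative
                       a=[0.9], b=[0.1])
```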
NASA Astrophysics Data System (ADS)
Naga Raju, G. J.; Sarita, P.; Murthy, K. S. R.
2017-08-01
Particle Induced X-ray Emission (PIXE), an accelerator-based analytical technique, has been employed in this work for the analysis of trace elements in the cancerous and non-cancerous tissues of rectal cancer patients. A beam of 3 MeV protons generated from the 3 MV Pelletron accelerator at the Ion Beam Laboratory of the Institute of Physics, Bhubaneswar, India was used as the projectile to excite the atoms present in the tissue samples. PIXE, with its capability to detect several elements simultaneously at very low concentrations, offers an excellent tool for trace element analysis. The characteristic X-rays emitted by the samples were recorded by a high-resolution Si(Li) detector. On the basis of the PIXE spectrum obtained for each sample, the elements Cl, K, Ca, Ti, V, Cr, Mn, Fe, Co, Ni, Cu, Zn, As, and Br were identified and their relative concentrations were estimated in the cancerous and non-cancerous tissues of the rectum. The levels of Mn, Fe, Co, Cu, Zn, and As were higher (p < 0.005) while the levels of Ca, Cr, and Ni were lower (p < 0.005) in the cancer tissues relative to the normal tissues. The alterations in the levels of the trace elements observed in the present work are discussed with respect to their potential role in the initiation, promotion, and inhibition of cancer of the rectum.
Evidence from Central Mexico Supporting the Younger Dryas Extraterrestrial Impact Hypothesis
2012-03-05
identified glassy spherules, CSps, high-temperature melt-rocks, shocked quartz, and a YDB black mat analogue in the Venezuelan Andes. Those authors...debate, we have examined a diverse assemblage of YDB markers at Lake Cuitzeo using a more comprehensive array of analytical techniques than in previous...accelerator mass spectrometry (AMS) 14C dates on bulk sediment and used in a linear interpolation with the YD onset identified at approximately 2.8 m. To
NASA Astrophysics Data System (ADS)
Wahl, N.; Hennig, P.; Wieser, H. P.; Bangert, M.
2017-07-01
The sensitivity of intensity-modulated proton therapy (IMPT) treatment plans to uncertainties can be quantified and mitigated with robust/min-max and stochastic/probabilistic treatment analysis and optimization techniques. Those methods usually rely on sparse random, importance, or worst-case sampling. Inevitably, this imposes a trade-off between computational speed and accuracy of the uncertainty propagation. Here, we investigate analytical probabilistic modeling (APM) as an alternative for uncertainty propagation and minimization in IMPT that does not rely on scenario sampling. APM propagates probability distributions over range and setup uncertainties via a Gaussian pencil-beam approximation into moments of the probability distributions over the resulting dose in closed form. It supports arbitrary correlation models and allows for efficient incorporation of fractionation effects regarding random and systematic errors. We evaluate the trade-off between run-time and accuracy of APM uncertainty computations on three patient datasets. Results are compared against reference computations facilitating importance and random sampling. Two approximation techniques to accelerate uncertainty propagation and minimization based on probabilistic treatment plan optimization are presented. Runtimes are measured on CPU and GPU platforms, and dosimetric accuracy is quantified in comparison to a sampling-based benchmark (5000 random samples). APM accurately propagates range and setup uncertainties into dose uncertainties at competitive run-times (GPU ≤ 5 min). The resulting standard deviations (expectation values) of dose show average global γ(3%/3 mm) pass rates between 94.2% and 99.9% (98.4% and 100.0%). All investigated importance sampling strategies provided less accuracy at higher run-times when considering only a single fraction. Considering fractionation, APM uncertainty propagation and treatment plan optimization were proven to be possible at constant time complexity, while run-times of sampling-based computations are linear in the number of fractions. Using sum sampling within APM, uncertainty propagation can only be accelerated at the cost of reduced accuracy in variance calculations. For probabilistic plan optimization, we were able to approximate the necessary pre-computations within seconds, yielding treatment plans of similar quality as gained from exact uncertainty propagation. APM is suited to enhance the trade-off between speed and accuracy in uncertainty propagation and probabilistic treatment plan optimization, especially in the context of fractionation. This brings fully-fledged APM computations within reach of clinical application.
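The essence of APM's closed-form propagation can be seen in one dimension: a Gaussian pencil-beam dose convolved with a Gaussian setup/range error is again Gaussian, so its expectation needs no sampling. The sketch below checks this toy case against random sampling; the full APM formalism with correlations and higher moments is far richer.

```python
import numpy as np

# 1-D toy: pencil-beam dose D(x) = exp(-(x-mu)^2/(2*s_b^2)) convolved with a
# Gaussian error N(0, s_e^2) gives a Gaussian with variance s_b^2 + s_e^2 and
# amplitude scaled by s_b/sqrt(s_b^2 + s_e^2). No scenario sampling needed.
s_b, s_e, mu = 5.0, 3.0, 0.0
x = np.linspace(-30, 30, 601)

dose = lambda x, m, s: np.exp(-(x - m) ** 2 / (2 * s ** 2))
s_tot = np.sqrt(s_b ** 2 + s_e ** 2)
analytic = (s_b / s_tot) * dose(x, mu, s_tot)          # closed-form expectation

rng = np.random.default_rng(4)
shifts = rng.normal(0.0, s_e, 5000)                    # 5000 error scenarios
sampled = np.mean([dose(x, mu + d, s_b) for d in shifts], axis=0)
print("max |analytic - sampled|:", np.abs(analytic - sampled).max())
```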
Zhang, Chenxi; Hu, Zhaochu; Zhang, Wen; Liu, Yongsheng; Zong, Keqing; Li, Ming; Chen, Haihong; Hu, Shenghong
2016-10-18
Sample preparation of whole-rock powders is the major limitation for their accurate and precise elemental analysis by laser ablation inductively-coupled plasma mass spectrometry (ICPMS). In this study, a green, efficient, and simplified fusion technique using a high energy infrared laser was developed for major and trace elemental analysis. Fusion takes only tens of milliseconds for each sample. Compared to the pressed pellet sample preparation, the analytical precision of the developed laser fusion technique is higher by an order of magnitude for most elements in granodiorite GSP-2. Analytical results obtained for five USGS reference materials (ranging from mafic to intermediate to felsic) using the laser fusion technique generally agree with recommended values with discrepancies of less than 10% for most elements. However, high losses (20-70%) of highly volatile elements (Zn and Pb) and the transition metal Cu are observed. The achieved precision is within 5% for major elements and within 15% for most trace elements. Direct laser fusion of rock powders is a green and notably simple method to obtain homogeneous samples, which will significantly accelerate the application of laser ablation ICPMS for whole-rock sample analysis.
Calibrating a novel multi-sensor physical activity measurement system.
John, D; Liu, S; Sasaki, J E; Howe, C A; Staudenmayer, J; Gao, R X; Freedson, P S
2011-09-01
Advancing the field of physical activity (PA) monitoring requires the development of innovative multi-sensor measurement systems that are feasible in the free-living environment. The use of novel analytical techniques to combine and process these multiple sensor signals is equally important. This paper describes a novel multi-sensor 'integrated PA measurement system' (IMS), the lab-based methodology used to calibrate the IMS, techniques used to predict multiple variables from the sensor signals, and proposes design changes to improve the feasibility of deploying the IMS in the free-living environment. The IMS consists of hip and wrist acceleration sensors, two piezoelectric respiration sensors on the torso, and an ultraviolet radiation sensor to obtain contextual information (indoors versus outdoors) of PA. During lab-based calibration of the IMS, data were collected on participants performing a PA routine consisting of seven different ambulatory and free-living activities while wearing a portable metabolic unit (criterion measure) and the IMS. Data analyses on the first 50 adult participants are presented. These analyses were used to determine if the IMS can be used to predict the variables of interest. Finally, physical modifications for the IMS that could enhance the feasibility of free-living use are proposed and refinement of the prediction techniques is discussed.
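As a hedged illustration of the "predict variables from combined sensor signals" step, the sketch below fits a simple regression from invented hip, wrist, and respiration features to a MET criterion; the IMS's actual prediction models are not specified here.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Toy calibration: combine hip/wrist acceleration features and a breathing-rate
# feature to predict energy expenditure (METs) against a criterion measure.
# All features, coefficients, and noise levels are invented for illustration.
rng = np.random.default_rng(5)
hip, wrist, resp = rng.uniform(0, 1, (3, 200))
mets = 1.2 + 4.0 * hip + 1.5 * wrist + 2.0 * resp + rng.normal(0, 0.3, 200)

X = np.column_stack([hip, wrist, resp])
model = LinearRegression().fit(X[:150], mets[:150])
print("held-out R^2:", model.score(X[150:], mets[150:]))
```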
Extraction Techniques for Polycyclic Aromatic Hydrocarbons in Soils
Lau, E. V.; Gan, S.; Ng, H. K.
2010-01-01
This paper aims to provide a review of the analytical extraction techniques for polycyclic aromatic hydrocarbons (PAHs) in soils. The extraction technologies described here include Soxhlet extraction, ultrasonic and mechanical agitation, accelerated solvent extraction, supercritical and subcritical fluid extraction, microwave-assisted extraction, solid phase extraction and microextraction, thermal desorption and flash pyrolysis, as well as fluidised-bed extraction. The influencing factors in the extraction of PAHs from soil such as temperature, type of solvent, soil moisture, and other soil characteristics are also discussed. The paper concludes with a review of the models used to describe the kinetics of PAH desorption from soils during solvent extraction. PMID:20396670
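The desorption-kinetics models mentioned in the closing section are often of the two-compartment first-order form; the sketch below fits such a model to illustrative extraction-time data with scipy. The data and the model choice are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_site_desorption(t, f_fast, k_fast, k_slow):
    """Two-compartment first-order desorption model, a form often used for
    PAHs: F(t) = f*(1 - exp(-k1*t)) + (1 - f)*(1 - exp(-k2*t))."""
    return (f_fast * (1 - np.exp(-k_fast * t))
            + (1 - f_fast) * (1 - np.exp(-k_slow * t)))

t = np.array([5., 10., 20., 40., 80., 160.])         # extraction time, min
F = np.array([0.35, 0.55, 0.70, 0.80, 0.88, 0.94])   # recovered fraction (example)

p, _ = curve_fit(two_site_desorption, t, F, p0=[0.5, 0.1, 0.01],
                 bounds=([0, 0, 0], [1, np.inf, np.inf]))
print("fast fraction %.2f, k_fast %.3f /min, k_slow %.4f /min" % tuple(p))
```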
Hardware implementation of hierarchical volume subdivision-based elastic registration.
Dandekar, Omkar; Walimbe, Vivek; Shekhar, Raj
2006-01-01
Real-time, elastic and fully automated 3D image registration is critical to the efficiency and effectiveness of many image-guided diagnostic and treatment procedures relying on multimodality image fusion or serial image comparison. True, real-time performance will make many 3D image registration-based techniques clinically viable. Hierarchical volume subdivision-based image registration techniques are inherently faster than most elastic registration techniques, e.g. free-form deformation (FFD)-based techniques, and are more amenable for achieving real-time performance through hardware acceleration. Our group has previously reported an FPGA-based architecture for accelerating FFD-based image registration. In this article we show how our existing architecture can be adapted to support hierarchical volume subdivision-based image registration. A proof-of-concept implementation of the architecture achieved speedups of 100 for elastic registration against an optimized software implementation on a 3.2 GHz Pentium III Xeon workstation. Due to inherent parallel nature of the hierarchical volume subdivision-based image registration techniques further speedup can be achieved by using several computing modules in parallel.
Ghosh, Sourav K; Ostanin, Victor P; Johnson, Christian L; Lowe, Christopher R; Seshia, Ashwin A
2011-11-15
Receptor-based detection of pathogens often suffers from non-specific interactions, and as most detection techniques cannot distinguish between affinities of interactions, false positive responses remain a plaguing reality. Here, we report an anharmonic-acoustics-based method of detection that addresses the inherent weakness of current ligand-dependent assays. Spores of Bacillus subtilis (a Bacillus anthracis simulant) were immobilized on a thickness-shear mode AT-cut quartz crystal functionalized with anti-spore antibody, and the sensor was driven by a pure sinusoidal oscillation at increasing amplitude. Biomolecular interaction forces between the coupled spores and the accelerating surface caused a nonlinear modulation of the acoustic response of the crystal. In particular, the deviation in the third harmonic of the transduced electrical response versus the oscillation amplitude of the sensor (the signal) was found to be significant. Signals from the specifically-bound spores were clearly distinguishable in shape from those of physisorbed streptavidin-coated polystyrene microbeads. The analytical model presented here enables estimation of the biomolecular interaction forces from the measured response. Thus, probing biomolecular interaction forces using the described technique can quantitatively detect pathogens and distinguish specific from non-specific interactions, with potential applicability to rapid point-of-care detection. This also serves as a potential tool for rapid force spectroscopy, affinity-based biomolecular screening, and mapping of molecular interaction networks.
Aslam; Prestwich, W V; McNeill, F E
2003-03-01
The operating conditions at the McMaster KN Van de Graaff accelerator have been optimized to produce neutrons via the (7)Li(p, n)(7)Be reaction for in vivo neutron activation analysis. In a number of earlier studies (development of an accelerator based system for in vivo neutron activation analysis measurements of manganese in humans, Ph.D. Thesis, McMaster University, Hamilton, ON, Canada; Appl. Radiat. Isot. 53 (2000) 657; in vivo measurement of some trace elements in human bone, Ph.D. Thesis, McMaster University, Hamilton, ON, Canada), a significant discrepancy between the experimental and the calculated neutron doses has been pointed out. The hypotheses formulated in the above references to explain the deviation of the experimental results from analytical calculations have been tested experimentally. The performance of the lithium target for neutron production has been evaluated by measuring the (7)Be activity produced as a result of the (p, n) interaction with (7)Li. In contradiction to the formulated hypotheses, lithium target performance was found to be mainly affected by inefficient target cooling and the presence of an oxide layer on the target surface. An appropriate choice of these parameters resulted in neutron yields the same as predicted by analytical calculations.
NASA Astrophysics Data System (ADS)
Schlickeiser, R.; Oppotsch, J.
2017-12-01
The analytical theory of diffusive acceleration of cosmic rays at parallel stationary shock waves of arbitrary speed with magnetostatic turbulence is developed from first principles. The theory is based on the diffusion approximation to the gyrotropic cosmic-ray particle phase-space distribution functions in the respective rest frames of the up- and downstream medium. We derive the correct cosmic-ray jump conditions for the cosmic-ray current and density, and match the up- and downstream distribution functions at the position of the shock. It is essential to account for the different particle momentum coordinates in the up- and downstream media. Analytical expressions for the momentum spectra of shock-accelerated cosmic rays are calculated. These are valid for arbitrary shock speeds including relativistic shocks. The correctly taken limit for nonrelativistic shock speeds leads to a universal broken power-law momentum spectrum of accelerated particles with velocities well above the injection velocity threshold, where the universal power-law spectral index $q \simeq 2 - \gamma_1^{-4}$ is independent of the flow compression ratio r. For nonrelativistic shock speeds, we calculate for the first time the injection velocity threshold, settling the long-standing injection problem for nonrelativistic shock acceleration.
Automated Predictive Big Data Analytics Using Ontology Based Semantics.
Nural, Mustafa V; Cotterell, Michael E; Peng, Hao; Xie, Rui; Ma, Ping; Miller, John A
2015-10-01
Predictive analytics in the big data era is taking on an increasingly important role. Issues related to the choice of modeling technique, estimation procedure (or algorithm), and efficient execution can present significant challenges. For example, selection of appropriate and optimal models for big data analytics often requires careful investigation and considerable expertise which might not always be readily available. In this paper, we propose to use semantic technology to assist data analysts and data scientists in selecting appropriate modeling techniques and building specific models, as well as the rationale for the techniques and models selected. To formally describe the modeling techniques, models, and results, we developed the Analytics Ontology, which supports inferencing for semi-automated model selection. The SCALATION framework, which currently supports over thirty modeling techniques for predictive big data analytics, is used as a testbed for evaluating the use of semantic technology.
Automated Predictive Big Data Analytics Using Ontology Based Semantics
Nural, Mustafa V.; Cotterell, Michael E.; Peng, Hao; Xie, Rui; Ma, Ping; Miller, John A.
2017-01-01
Predictive analytics in the big data era is taking on an increasingly important role. Issues related to the choice of modeling technique, estimation procedure (or algorithm), and efficient execution can present significant challenges. For example, selection of appropriate and optimal models for big data analytics often requires careful investigation and considerable expertise, which might not always be readily available. In this paper, we propose to use semantic technology to assist data analysts and data scientists in selecting appropriate modeling techniques and building specific models, as well as documenting the rationale for the techniques and models selected. To formally describe the modeling techniques, models, and results, we developed the Analytics Ontology, which supports inferencing for semi-automated model selection. The SCALATION framework, which currently supports over thirty modeling techniques for predictive big data analytics, is used as a testbed for evaluating the use of semantic technology. PMID:29657954
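As a loose illustration of the idea, and emphatically not the Analytics Ontology itself, the sketch below encodes a handful of hypothetical selection rules mapping data characteristics to candidate modeling techniques. All rule conditions, profile keys, and technique names are invented for the example.

```python
# Hypothetical sketch of ontology-style model selection: a rule base maps
# data characteristics to candidate techniques. Rules are illustrative
# assumptions only, not the paper's Analytics Ontology.
RULES = [
    ({"response": "continuous", "predictors": "numeric"}, "multiple linear regression"),
    ({"response": "continuous", "nonlinear": True}, "gradient boosted trees"),
    ({"response": "binary"}, "logistic regression"),
    ({"response": "count"}, "Poisson regression"),
]

def suggest_techniques(profile):
    """Return techniques whose rule conditions are all satisfied by the profile."""
    return [name for cond, name in RULES
            if all(profile.get(k) == v for k, v in cond.items())]

print(suggest_techniques({"response": "continuous", "predictors": "numeric"}))
```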
DOE Office of Scientific and Technical Information (OSTI.GOV)
Banerjee, Srutarshi; Rajan, Rehim N.; Singh, Sandeep K.
2014-07-01
DC accelerators undergo different types of discharges during operation. A model depicting these discharges has been simulated to study the different transient conditions. The paper presents a physics-based approach to developing a compact circuit model of the DC accelerator using the Partial Element Equivalent Circuit (PEEC) technique. The equivalent RLC model aids in analyzing the transient behavior of the system and predicting anomalies. The electrical discharges and the properties prevailing in the accelerator can be evaluated with this equivalent model. A parallel coupled voltage multiplier structure is simulated in small scale using a few stages of corona guards, and the theoretical and practical results are compared. The PEEC technique leads to a simple model for studying fault conditions in accelerator systems. Compared to finite element techniques, this technique gives a circuital representation. The lumped components of the PEEC are used to obtain the input impedance, and the result is also compared to that of the FEM technique for a frequency range of 0-200 MHz. (author)
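To make the lumped-circuit idea concrete, here is a minimal sketch of an input-impedance sweep for a small RLC ladder over the 0-200 MHz band. The ladder topology and component values are invented placeholders, not the extracted PEEC parameters of the actual accelerator column.

```python
import numpy as np

# Hedged sketch: input impedance of a lumped RLC ladder (series R-L, shunt C
# per stage) swept over roughly 0-200 MHz. Values are illustrative only.
R, L, C = 0.5, 20e-9, 5e-12            # per-stage values: ohm, henry, farad
stages = 5
f = np.linspace(1e6, 200e6, 400)       # start at 1 MHz so 1/(jwC) stays finite
w = 2 * np.pi * f

# innermost stage: series R-L feeding an open-ended shunt capacitor
z_in = R + 1j * w * L + 1.0 / (1j * w * C)
for _ in range(stages - 1):            # fold in the remaining stages
    z_in = R + 1j * w * L + 1.0 / (1j * w * C + 1.0 / z_in)

idx = np.argmin(np.abs(f - 100e6))
print(f"|Z_in| at 100 MHz ~ {abs(z_in[idx]):.1f} ohm")
```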
Cost and schedule analytical techniques development
NASA Technical Reports Server (NTRS)
1994-01-01
This contract provided technical services and products to the Marshall Space Flight Center's Engineering Cost Office (PP03) and the Program Plans and Requirements Office (PP02) for the period 3 Aug. 1991 - 30 Nov. 1994. The accomplishments summarized cover the REDSTAR data base, the NASCOM hard copy data base, the NASCOM automated data base, the NASCOM cost model, complexity generators, program planning, schedules, NASA computer connectivity, other analytical techniques, and special project support.
Mizuno, T; Taniguchi, M; Kashiwagi, M; Umeda, N; Tobari, H; Watanabe, K; Dairaku, M; Sakamoto, K; Inoue, T
2010-02-01
Heat load on acceleration grids from secondary particles such as electrons, neutrals, and positive ions is a key issue for long-pulse acceleration of negative ion beams. The complicated behaviors of the secondary particles in a multiaperture, multigrid (MAMuG) accelerator have been analyzed using an electrostatic accelerator Monte Carlo code. The analytical result is compared to the experimental one obtained in long-pulse operation of a MeV accelerator, of which the second acceleration grid (A2G) was removed for simplification of the structure. The analytical results show a relatively high heat load on the third acceleration grid (A3G), since stripped electrons are deposited mainly on the A3G. This heat load on the A3G can be suppressed by installing the A2G. Thus, the capability of the MAMuG accelerator to suppress heat load due to secondary particles by means of the intermediate grids is demonstrated.
Beam by design: Laser manipulation of electrons in modern accelerators
NASA Astrophysics Data System (ADS)
Hemsing, Erik; Stupakov, Gennady; Xiang, Dao; Zholents, Alexander
2014-07-01
Accelerator-based light sources such as storage rings and free-electron lasers use relativistic electron beams to produce intense radiation over a wide spectral range for fundamental research in physics, chemistry, materials science, biology, and medicine. More than a dozen such sources operate worldwide, and new sources are being built to deliver radiation that meets with the ever-increasing sophistication and depth of new research. Even so, conventional accelerator techniques often cannot keep pace with new demands and, thus, new approaches continue to emerge. In this article, a variety of recently developed and promising techniques that rely on lasers to manipulate and rearrange the electron distribution in order to tailor the properties of the radiation are reviewed. Basic theories of electron-laser interactions, techniques to create microstructures and nanostructures in electron beams, and techniques to produce radiation with customizable waveforms are reviewed. An overview of laser-based techniques for the generation of fully coherent x rays, mode-locked x-ray pulse trains, light with orbital angular momentum, and attosecond or even zeptosecond long coherent pulses in free-electron lasers is presented. Several methods to generate femtosecond pulses in storage rings are also discussed. Additionally, various schemes designed to enhance the performance of light sources through precision beam preparation including beam conditioning, laser heating, emittance exchange, and various laser-based diagnostics are described. Together these techniques represent a new emerging concept of "beam by design" in modern accelerators, which is the primary focus of this article.
WarpIV: In situ visualization and analysis of ion accelerator simulations
Rubel, Oliver; Loring, Burlen; Vay, Jean -Luc; ...
2016-05-09
The generation of short pulses of ion beams through the interaction of an intense laser with a plasma sheath offers the possibility of compact and cheaper ion sources for many applications, from fast ignition and radiography of dense targets to hadron therapy and injection into conventional accelerators. To enable the efficient analysis of large-scale, high-fidelity particle accelerator simulations using the Warp simulation suite, the authors introduce the Warp In situ Visualization Toolkit (WarpIV). WarpIV integrates state-of-the-art in situ visualization and analysis using VisIt with Warp, supports management and control of complex in situ visualization and analysis workflows, and implements integrated analytics to facilitate query- and feature-based data analytics and efficient large-scale data analysis. WarpIV enables for the first time distributed parallel, in situ visualization of the full simulation data using high-performance compute resources as the data is being generated by Warp. The authors describe the application of WarpIV to study and compare large 2D and 3D ion accelerator simulations, demonstrating significant differences in the acceleration process in 2D and 3D simulations. WarpIV is available to the public via https://bitbucket.org/berkeleylab/warpiv. Supplemental material at https://extras.computer.org/extra/mcg2016030022s1.pdf provides more details regarding the memory profiling and optimization and the Yee grid recentering optimization results discussed in the main article.
An analytical and experimental evaluation of a Fresnel lens solar concentrator
NASA Technical Reports Server (NTRS)
Hastings, L. J.; Allums, S. A.; Cosby, R. M.
1976-01-01
An analytical and experimental evaluation of line-focusing Fresnel lenses with application potential in the 200 to 370 C range was performed. Analytical techniques were formulated to assess the solar transmission and imaging properties of a grooves-down lens. Experimentation was based on a 56 cm wide, f/1.0 lens. A sun-tracking heliostat provided a nonmoving solar source. Measured data indicated more spreading at the profile base than analytically predicted, resulting in a peak concentration 18 percent lower than the computed peak of 57. The measured and computed transmittances were 85 and 87 percent, respectively. Preliminary testing with a subsequent lens indicated that modified manufacturing techniques corrected the profile-spreading problem and should enable improved analytical-experimental correlation.
Accuracy of selected techniques for estimating ice-affected streamflow
Walker, John F.
1991-01-01
This paper compares the accuracy of selected techniques for estimating streamflow during ice-affected periods. The techniques are classified into two categories, subjective and analytical, depending on the degree of judgment required. Discharge measurements were made at three streamflow-gauging sites in Iowa during the 1987-88 winter and used to establish a baseline streamflow record for each site. Using data based on a simulated six-week field-trip schedule, selected techniques are used to estimate discharge during the ice-affected periods. For the subjective techniques, three hydrographers independently compiled each record. Three measures of performance are used to compare the estimated streamflow records with the baseline records: the average discharge for the ice-affected period, and the mean and standard deviation of the daily errors. Based on average ranks for the three performance measures and the three sites, the analytical and subjective techniques are essentially comparable. For two of the three sites, Kruskal-Wallis one-way analysis of variance detects significant differences among the three hydrographers for the subjective methods, indicating that the subjective techniques are less consistent than the analytical techniques. The results suggest analytical techniques may be viable tools for estimating discharge during periods of ice effect, and should be developed further and evaluated for sites across the United States.
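The error measures and the Kruskal-Wallis comparison can be reproduced on synthetic records in a few lines. The sketch below (Python with SciPy) mirrors the structure of the comparison only; all discharge values and error scales are invented, not the Iowa data.

```python
import numpy as np
from scipy import stats

# Hedged sketch: daily errors between estimated and baseline discharge
# records, plus a Kruskal-Wallis test across three hydrographers.
rng = np.random.default_rng(0)
baseline = rng.uniform(5.0, 15.0, 42)              # daily discharge, m^3/s
estimates = {h: baseline + rng.normal(0.0, s, 42)  # one record per hydrographer
             for h, s in (("A", 1.0), ("B", 1.5), ("C", 2.0))}

for h, est in estimates.items():
    err = est - baseline
    print(f"{h}: mean error {err.mean():+.2f}, std {err.std(ddof=1):.2f} m^3/s")

H, p = stats.kruskal(*(est - baseline for est in estimates.values()))
print(f"Kruskal-Wallis H = {H:.2f}, p = {p:.3f}")
```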
Ion Beam Analysis of Diffusion in Diamondlike Carbon Films
NASA Astrophysics Data System (ADS)
Chaffee, Kevin Paul
The Van de Graaff accelerator facility at Case Western Reserve University was developed into an analytical research center capable of performing Rutherford Backscattering Spectrometry, Elastic Recoil Detection Analysis for hydrogen profiling, Proton Enhanced Scattering, and ⁴He resonant scattering for ¹⁶O profiling. These techniques were applied to the study of Au, Na⁺, Cs⁺, and H₂O diffusion in a-C:H films. The results are consistent with the fully constrained network model of the microstructure as described by Angus and Jansen.
Vapor ingestion in a cylindrical tank with a concave elliptical bottom
NASA Technical Reports Server (NTRS)
Klavins, A.
1974-01-01
An approximate analytical technique is presented for estimating the liquid residual in a tank of arbitrary geometry due to vapor ingestion at any drain rate and acceleration level. The bulk liquid depth at incipient pull-through is defined in terms of the Weber and Bond numbers and two functions that describe the fluid velocity field and free surface shape appropriate to the tank geometry. Numerical results are obtained for the Centaur LH2 tank using limiting approximations to these functions.
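The two dimensionless groups invoked above can be computed directly. The sketch below uses illustrative liquid-hydrogen properties and an assumed characteristic length and drain velocity; the tank-specific velocity-field and free-surface functions from the paper are not reproduced.

```python
# Hedged sketch: Weber and Bond numbers used to correlate incipient
# pull-through. Property values are representative of liquid hydrogen;
# geometry and flow values are arbitrary placeholders.
rho = 70.8        # LH2 density, kg/m^3
sigma = 1.9e-3    # LH2 surface tension, N/m
a = 0.01 * 9.81   # axial acceleration, m/s^2 (about 10^-2 g)
L = 1.0           # characteristic length (e.g. tank radius), m
v = 0.05          # mean drain velocity, m/s

weber = rho * v**2 * L / sigma    # inertia vs. surface tension
bond = rho * a * L**2 / sigma     # body force vs. surface tension
print(f"We = {weber:.1f}, Bo = {bond:.1f}")
```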
Protein assay structured on paper by using lithography
NASA Astrophysics Data System (ADS)
Wilhelm, E.; Nargang, T. M.; Al Bitar, W.; Waterkotte, B.; Rapp, B. E.
2015-03-01
There are two main challenges in producing a robust, paper-based analytical device. The first is to create a hydrophobic barrier which, unlike the commonly used wax barriers, does not break if the paper is bent. The second is the creation of the (bio-)specific sensing layer, for which proteins have to be immobilized without diminishing their activity. We solve both problems using light-based fabrication methods that enable fast, efficient manufacturing of paper-based analytical devices. The first technique relies on silanization, by which we create a flexible hydrophobic barrier made of dimethoxydimethylsilane. The second technique demonstrated within this paper uses photobleaching to immobilize proteins by means of maskless projection lithography. Both techniques have been tested on a classical lithography setup using printed toner masks and on a system for maskless lithography. Using these setups we could demonstrate that the proposed manufacturing techniques can be carried out at low cost. The resolution of the paper-based analytical devices obtained with static masks was lower owing to the lower mask resolution; better results were obtained using advanced lithography equipment. By doing so we demonstrated that our technique enables fabrication of effective hydrophobic boundary layers with a thickness of only 342 μm. Furthermore, we showed that fluorescein-5-biotin can be immobilized on the non-structured paper and employed for the detection of streptavidin-alkaline phosphatase. By carrying out this assay on a paper-based analytical device that had been structured using the silanization technique, we proved the biological compatibility of the suggested patterning technique.
Accelerated bridge construction (ABC) decision making and economic modeling tool.
DOT National Transportation Integrated Search
2011-12-01
In this FHWA-sponsored pool funded study, a set of decision making tools, based on the Analytic Hierarchy Process (AHP) was developed. This tool set is prepared for transportation specialists and decision-makers to determine if ABC is more effective ...
Harries, Bruce; Filiatrault, Lyne; Abu-Laban, Riyad B
2018-05-30
Quality improvement (QI) analytic methodology is rarely encountered in the emergency medicine literature. We sought to comparatively apply QI design and analysis techniques to an existing data set, and discuss these techniques as an alternative to standard research methodology for evaluating a change in a process of care. We used data from a previously published randomized controlled trial on triage-nurse initiated radiography using the Ottawa ankle rules (OAR). QI analytic tools were applied to the data set from this study and evaluated comparatively against the original standard research methodology. The original study concluded that triage nurse-initiated radiographs led to a statistically significant decrease in mean emergency department length of stay. Using QI analytic methodology, we applied control charts and interpreted the results using established methods that preserved the time sequence of the data. This analysis found a compelling signal of a positive treatment effect that would have been identified after the enrolment of 58% of the original study sample, and in the 6th month of this 11-month study. Our comparative analysis demonstrates some of the potential benefits of QI analytic methodology. We found that had this approach been used in the original study, insights regarding the benefits of nurse-initiated radiography using the OAR would have been achieved earlier, and thus potentially at a lower cost. In situations where the overarching aim is to accelerate implementation of practice improvement to benefit future patients, we believe that increased consideration should be given to the use of QI analytic methodology.
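The control-chart analysis described above can be sketched compactly. The following is a minimal Shewhart individuals chart with moving-range limits on a synthetic monthly length-of-stay series; the numbers are invented and the chart type is a common QI choice, not necessarily the exact chart used in the study.

```python
import numpy as np

# Hedged sketch: Shewhart individuals control chart on monthly mean ED length
# of stay (synthetic data). Sigma is estimated as mean moving range / 1.128,
# the d2 constant for subgroups of size 2.
los = np.array([112, 108, 115, 110, 109, 94, 92, 95, 90, 93, 91], dtype=float)

center = los.mean()
sigma_hat = np.abs(np.diff(los)).mean() / 1.128
ucl, lcl = center + 3 * sigma_hat, center - 3 * sigma_hat

for month, value in enumerate(los, start=1):
    flag = "  <-- outside limits" if not lcl <= value <= ucl else ""
    print(f"month {month:2d}: {value:5.1f}{flag}")
print(f"center {center:.1f}, LCL {lcl:.1f}, UCL {ucl:.1f}")
```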
Text Mining in Organizational Research.
Kobayashi, Vladimer B; Mol, Stefan T; Berkers, Hannah A; Kismihók, Gábor; Den Hartog, Deanne N
2018-07-01
Despite the ubiquity of textual data, so far few researchers have applied text mining to answer organizational research questions. Text mining, which essentially entails a quantitative approach to the analysis of (usually) voluminous textual data, helps accelerate knowledge discovery by radically increasing the amount of data that can be analyzed. This article aims to acquaint organizational researchers with the fundamental logic underpinning text mining, the analytical stages involved, and contemporary techniques that may be used to achieve different types of objectives. The specific analytical techniques reviewed are (a) dimensionality reduction, (b) distance and similarity computing, (c) clustering, (d) topic modeling, and (e) classification. We describe how text mining may extend contemporary organizational research by allowing the testing of existing or new research questions with data that are likely to be rich, contextualized, and ecologically valid. After an exploration of how evidence for the validity of text mining output may be generated, we conclude the article by illustrating the text mining process in a job analysis setting using a dataset composed of job vacancies.
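Two of the reviewed stages, dimensionality reduction and clustering, can be illustrated with a minimal sketch (Python with scikit-learn). The vacancy texts below are invented stand-ins for a real job-posting corpus.

```python
# Hedged sketch: TF-IDF features plus k-means clustering of short vacancy
# texts, illustrating the "dimensionality reduction" and "clustering" stages.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "data analyst SQL dashboards reporting",
    "nurse patient care clinical ward",
    "machine learning python statistics modeling",
    "registered nurse hospital clinical experience",
]

X = TfidfVectorizer().fit_transform(docs)        # sparse TF-IDF matrix
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)   # e.g. analytics vacancies vs. nursing vacancies
```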
DOE Office of Scientific and Technical Information (OSTI.GOV)
Loebman, Sarah R.; Ivezic, Zeljko; Quinn, Thomas R.
2012-10-10
We search for evidence of dark matter in the Milky Way by utilizing the stellar number density distribution and kinematics measured by the Sloan Digital Sky Survey (SDSS) to heliocentric distances exceeding ~10 kpc. We employ the cylindrically symmetric form of the Jeans equations and focus on the morphology of the resulting acceleration maps, rather than the normalization of the total mass as done in previous, mostly local, studies. The Jeans equations are first applied to a mock catalog based on a cosmologically derived N-body+SPH simulation, and the known acceleration (gradient of the gravitational potential) is successfully recovered. The same simulation is also used to quantify the impact of dark matter on the total acceleration. We use Galfast, a code designed to quantitatively reproduce SDSS measurements and selection effects, to generate a synthetic stellar catalog. We apply the Jeans equations to this catalog and produce two-dimensional maps of stellar acceleration. These maps reveal that in a Newtonian framework, the implied gravitational potential cannot be explained by visible matter alone. The acceleration experienced by stars at galactocentric distances of ~20 kpc is three times larger than what can be explained by purely visible matter. The application of an analytic method for estimating the dark matter halo axis ratio to SDSS data implies an oblate halo with q_DM = 0.47 ± 0.14 within the same distance range. These techniques can be used to map the dark matter halo to much larger distances from the Galactic center using upcoming deep optical surveys, such as LSST.
Mohammadi, Saeed; Busa, Lori Shayne Alamo; Maeki, Masatoshi; Mohamadi, Reza M; Ishida, Akihiko; Tani, Hirofumi; Tokeshi, Manabu
2016-11-01
A novel washing technique for microfluidic paper-based analytical devices (μPADs) that is based on the spontaneous capillary action of paper and eliminates unbound antigen and antibody in a sandwich immunoassay is reported. Liquids can flow through a porous medium (such as paper) in the absence of external pressure as a result of capillary action. Uniform results were achieved when washing a paper substrate in a PDMS holder integrated with a cartridge absorber acting as a porous medium. Our study demonstrated that applying this washing technique would allow μPADs to become the least expensive microfluidic device platform with high reproducibility and sensitivity. In a model μPAD assay that utilized this novel washing technique, C-reactive protein (CRP) was detected with a limit of detection (LOD) of 5 μg mL⁻¹.
SHEAR ACCELERATION IN EXPANDING FLOWS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rieger, F. M.; Duffy, P., E-mail: frank.rieger@mpi-hd.mpg.de, E-mail: peter.duffy@ucd.ie
Shear flows are naturally expected to occur in astrophysical environments and are potential sites of continuous non-thermal Fermi-type particle acceleration. Here we investigate the efficiency of expanding relativistic outflows in facilitating the acceleration of energetic charged particles to higher energies. To this end, the gradual shear acceleration coefficient is derived based on an analytical treatment. The results are applied to the context of the relativistic jets from active galactic nuclei. The inferred acceleration timescale is investigated for a variety of conical flow profiles (i.e., power law, Gaussian, Fermi-Dirac) and compared to the relevant radiative and non-radiative loss timescales. The results exemplify that relativistic shear flows are capable of boosting cosmic rays to extreme energies. Efficient electron acceleration, on the other hand, requires weak magnetic fields and may thus be accompanied by a delayed onset of particle energization and affect the overall jet appearance (e.g., core, ridge line, and limb-brightening).
NASA Technical Reports Server (NTRS)
Coleman, R. A.; Cofer, W. R., III; Edahl, R. A., Jr.
1985-01-01
An analytical technique for the determination of trace (sub-ppbv) quantities of volatile organic compounds in air was developed. A liquid nitrogen-cooled trap operated at reduced pressures in series with a Dupont Nafion-based drying tube and a gas chromatograph was utilized. The technique is capable of analyzing a variety of organic compounds, from simple alkanes to alcohols, while offering a high level of precision, peak sharpness, and sensitivity.
Plasma density characterization at SPARC_LAB through Stark broadening of Hydrogen spectral lines
NASA Astrophysics Data System (ADS)
Filippi, F.; Anania, M. P.; Bellaveglia, M.; Biagioni, A.; Chiadroni, E.; Cianchi, A.; Di Giovenale, D.; Di Pirro, G.; Ferrario, M.; Mostacci, A.; Palumbo, L.; Pompili, R.; Shpakov, V.; Vaccarezza, C.; Villa, F.; Zigler, A.
2016-09-01
Plasma-based acceleration techniques are of great interest for future, compact accelerators due to their high accelerating gradient. Both particle-driven and laser-driven Plasma Wakefield Acceleration experiments are foreseen at the SPARC_LAB Test Facility (INFN National Laboratories of Frascati, Italy), with the aim to accelerate high-brightness electron beams. In order to optimize the efficiency of the acceleration in the plasma and preserve the quality of the accelerated beam, the knowledge of the plasma electron density is mandatory. The Stark broadening of the Hydrogen spectral lines is one of the candidates used to characterize plasma density. The implementation of this diagnostic for plasma-based experiments at SPARC_LAB is presented.
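One commonly used scaling for extracting electron density from the Stark width of the Balmer H-beta line is sketched below. The Gigosos-Gonzalez-Cardenoso coefficients (a full width at half area of 4.8 nm at n_e = 10²³ m⁻³, exponent 0.68116) are quoted from the general plasma-spectroscopy literature and are an assumption here, not necessarily the calibration tables used at SPARC_LAB.

```python
# Hedged sketch: electron density from the Stark-broadened H-beta line using
# the Gigosos-Gonzalez-Cardenoso scaling. Coefficients are assumptions to be
# checked against the tables actually adopted for the diagnostic.
def electron_density_from_hbeta(fwha_nm):
    """Electron density in m^-3 from the H-beta full width at half area (nm)."""
    return 1e23 * (fwha_nm / 4.8) ** (1.0 / 0.68116)

# e.g. a 1.2 nm width corresponds to roughly 1.3e22 m^-3 (1.3e16 cm^-3)
print(f"n_e ~ {electron_density_from_hbeta(1.2):.2e} m^-3")
```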
Fluctuation analysis of relativistic nucleus-nucleus collisions in emulsion chambers
NASA Technical Reports Server (NTRS)
Mcguire, Stephen C.
1988-01-01
An analytical technique was developed for identifying enhanced fluctuations in the angular distributions of secondary particles produced in relativistic nucleus-nucleus collisions. The method is applied under the assumption that the masses of the produced particles are small compared to their linear momenta. The importance of such fluctuations rests in the fact that enhanced fluctuations in the rapidity distributions are considered to be an experimental signal for the creation of the quark-gluon plasma (QGP), a state of nuclear matter predicted by quantum chromodynamics (QCD). In this approach, Monte Carlo simulations are employed that make use of a portable random number generator, allowing the calculations to be performed on a desktop computer. The method is illustrated with data taken from high-altitude emulsion exposures and is immediately applicable to similar data from accelerator-based emulsion exposures.
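A minimal sketch of the Monte Carlo ingredient follows: compare a bin-count statistic of an observed pseudorapidity sample against an ensemble of independent simulated samples. The Gaussian rapidity model and the maximum-bin-count statistic are chosen purely for illustration, not taken from the paper.

```python
import numpy as np

# Hedged sketch: Monte Carlo null distribution for bin-count fluctuations
# in a pseudorapidity histogram (all distributions synthetic).
rng = np.random.default_rng(1)

def max_bin_count(sample, bins, span=(-3.0, 3.0)):
    counts, _ = np.histogram(sample, bins=bins, range=span)
    return counts.max()

n_particles, bins, n_trials = 200, 30, 2000
observed = rng.normal(0.0, 1.0, n_particles)          # stand-in "event"
null = [max_bin_count(rng.normal(0.0, 1.0, n_particles), bins)
        for _ in range(n_trials)]

obs_stat = max_bin_count(observed, bins)
p_value = np.mean([s >= obs_stat for s in null])
print(f"max bin count = {obs_stat}, p = {p_value:.3f}")
```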
Application of real-time digitization techniques in beam measurement for accelerators
NASA Astrophysics Data System (ADS)
Zhao, Lei; Zhan, Lin-Song; Gao, Xing-Shun; Liu, Shu-Bin; An, Qi
2016-04-01
Beam measurement is very important for accelerators. In this paper, modern digital beam measurement techniques based on IQ (in-phase and quadrature-phase) analysis are discussed. Based on this method and high-speed, high-resolution analog-to-digital conversion, we have completed three beam measurement electronics systems designed for the China Spallation Neutron Source (CSNS), Shanghai Synchrotron Radiation Facility (SSRF), and Accelerator Driven Sub-critical system (ADS). Core techniques of hardware design and real-time system calibration are discussed, and performance test results of these three instruments are also presented. Supported by the National Natural Science Foundation of China (11205153, 10875119), the Knowledge Innovation Program of the Chinese Academy of Sciences (KJCX2-YW-N27), the Fundamental Research Funds for the Central Universities (WK2030040029), and the CAS Center for Excellence in Particle Physics (CCEPP).
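The core IQ operation can be sketched in a few lines: mix the digitized signal with quadrature references at the nominal frequency and average over an integer number of periods to recover amplitude and phase. The sample rate, tone frequency, and signal values below are illustrative, not parameters of the CSNS, SSRF, or ADS instruments.

```python
import numpy as np

# Hedged sketch of IQ analysis on a digitized test tone.
fs, f0 = 500e6, 50e6                      # sample rate and RF frequency, Hz
n = 1000                                  # exactly 100 periods of f0
t = np.arange(n) / fs
signal = 2.3 * np.cos(2 * np.pi * f0 * t + 0.7)   # stand-in pickup signal

# quadrature mixing + averaging (acts as the low-pass filter here)
i = 2.0 * np.mean(signal * np.cos(2 * np.pi * f0 * t))
q = -2.0 * np.mean(signal * np.sin(2 * np.pi * f0 * t))

amp, phase = np.hypot(i, q), np.arctan2(q, i)
print(f"amplitude ~ {amp:.3f}, phase ~ {phase:.3f} rad")   # ~2.3 and ~0.7
```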
Islas, Gabriela; Hernandez, Prisciliano
2017-01-01
To achieve analytical success, it is necessary to develop thorough clean-up procedures to extract analytes from the matrix. Dispersive solid-phase extraction (DSPE) has been used as a pretreatment technique for the analysis of several compounds. This technique is based on the dispersion of a solid sorbent in liquid samples for the extraction, isolation, and clean-up of different analytes from complex matrices. DSPE has found a wide range of applications in several fields, and it is considered to be a selective, robust, and versatile technique. The applications of dispersive techniques in the analysis of veterinary drugs in different matrices involve magnetic sorbents, molecularly imprinted polymers, carbon-based nanomaterials, and the Quick, Easy, Cheap, Effective, Rugged, and Safe (QuEChERS) method. Techniques based on DSPE permit minimization of additional steps such as precipitation, centrifugation, and filtration, which decreases sample manipulation. In this review, we describe the main procedures used for the synthesis, characterization, and application of this pretreatment technique and how it has been applied to food analysis. PMID:29181027
Approximate analytical relationships for linear optimal aeroelastic flight control laws
NASA Astrophysics Data System (ADS)
Kassem, Ayman Hamdy
1998-09-01
This dissertation introduces new methods to uncover functional relationships between design parameters of a contemporary control design technique and the resulting closed-loop properties. Three new methods are developed for generating such relationships through analytical expressions: the Direct Eigen-Based Technique, the Order of Magnitude Technique, and the Cost Function Imbedding Technique. Efforts concentrated on the linear-quadratic state-feedback control-design technique applied to an aeroelastic flight control task. For this specific application, simple and accurate analytical expressions for the closed-loop eigenvalues and zeros in terms of basic parameters such as stability and control derivatives, structural vibration damping and natural frequency, and cost function weights are generated. These expressions explicitly indicate how the weights augment the short period and aeroelastic modes, as well as the closed-loop zeros, and by what physical mechanism. The analytical expressions are used to address topics such as damping, nonminimum phase behavior, stability, and performance with robustness considerations, and design modifications. This type of knowledge is invaluable to the flight control designer and would be more difficult to formulate when obtained from numerical-based sensitivity analysis.
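For readers unfamiliar with the underlying design technique, the sketch below computes the closed-loop eigenvalues of a linear-quadratic state-feedback design for an invented 2-state short-period-like model, showing numerically how a state weight shifts the poles. The A and B matrices are placeholders, not the dissertation's aeroelastic model.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hedged sketch: LQ state feedback for an illustrative 2-state model.
A = np.array([[-0.6, 1.0],
              [-4.0, -1.2]])          # stand-in short-period dynamics
B = np.array([[0.0],
              [-2.5]])                # stand-in control effectiveness
R = np.array([[1.0]])                 # control weight

for w in (0.1, 1.0, 10.0):            # sweep the first state weight
    Q = np.diag([w, 0.1])
    P = solve_continuous_are(A, B, Q, R)
    K = np.linalg.solve(R, B.T @ P)   # optimal gain K = R^-1 B^T P
    poles = np.linalg.eigvals(A - B @ K)
    print(f"w = {w:5.1f}: closed-loop poles {np.round(poles, 3)}")
```

Increasing the weight drives the closed-loop poles deeper into the left half-plane, the kind of weight-to-eigenvalue relationship the dissertation derives in closed form.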
Burt, Tal; John, Christy S; Ruckle, Jon L; Vuong, Le T
2017-05-01
Phase-0 studies, including microdosing, also called Exploratory Investigational New Drug (eIND) or exploratory clinical trials, are a regulatory framework for first-in-human (FIH) trials. Common to these approaches is the use and implied safety of limited exposures to test articles. Use of sub-pharmacological doses in phase-0/microdose studies requires sensitive analytic tools such as accelerator mass spectrometer (AMS), Positron Emission Tomography (PET), and Liquid Chromatography Tandem Mass Spectrometry (LC-MS/MS) to determine drug disposition. Areas covered: Here we present a practical guide to the range of methodologies, design options, and conduct strategies that can be used to increase the efficiency of drug development. We provide detailed examples of relevant developmental scenarios. Expert opinion: Validation studies over the past decade demonstrated the reliability of extrapolation of sub-pharmacological to therapeutic-level exposures in more than 80% of cases, an improvement over traditional allometric approaches. Applications of phase-0/microdosing approaches include study of pharmacokinetic and pharmacodynamic properties, target tissue localization, drug-drug interactions, effects in vulnerable populations (e.g. pediatric), and intra-target microdosing (ITM). Study design should take into account the advantages and disadvantages of each analytic tool. Utilization of combinations of these analytic techniques increases the versatility of study designs and the power of data obtained.
Analytic Solution of the Electromagnetic Eigenvalues Problem in a Cylindrical Resonator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Checchin, Mattia; Martinello, Martina
Resonant accelerating cavities are key components in modern particle accelerator facilities. These take advantage of electromagnetic fields resonating at microwave frequencies to accelerate charged particles. Particles gain finite energy at each passage through a cavity if in phase with the resonating field, reaching energies even of the order of TeV when a cascade of accelerating resonators is present. In order to understand how a resonant accelerating cavity transfers energy to charged particles, it is important to determine how the electromagnetic modes are excited in such resonators. In this paper we present a complete analytical calculation of the resonating fields for a simple cylindrical cavity.
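One standard closed-form consequence of such an eigenvalue calculation is the TM010 resonance of a pillbox cavity, which depends only on the radius: f = c j01 / (2 pi R), with j01 the first zero of the Bessel function J0. A minimal sketch (the radius is chosen arbitrarily):

```python
import math

from scipy.constants import c
from scipy.special import jn_zeros

# Hedged sketch: TM010 resonant frequency of an ideal pillbox cavity.
R = 0.10                                   # cavity radius, m (illustrative)
j01 = jn_zeros(0, 1)[0]                    # first zero of J0, ~2.40483
f_tm010 = c * j01 / (2 * math.pi * R)
print(f"TM010 frequency ~ {f_tm010 / 1e9:.3f} GHz")   # ~1.15 GHz for R = 10 cm
```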
MS-Based Analytical Techniques: Advances in Spray-Based Methods and EI-LC-MS Applications
Medina, Isabel; Cappiello, Achille; Careri, Maria
2018-01-01
Mass spectrometry is the most powerful technique for the detection and identification of organic compounds. It can provide molecular weight information and a wealth of structural details that give a unique fingerprint for each analyte. Due to these characteristics, mass spectrometry-based analytical methods are attracting increasing interest in the scientific community, especially in food safety, environmental, and forensic investigation areas, where the simultaneous detection of targeted and nontargeted compounds represents a key factor. In addition, safety risks can be identified at an early stage through online and real-time analytical methodologies. In this context, several efforts have been made to achieve analytical instrumentation able to perform real-time analysis in the native environment of samples and to generate highly informative spectra. This review article provides a survey of some instrumental innovations and their applications, with particular attention to spray-based MS methods and food analysis issues. The survey covers the state of the art from 2012 up to 2017. PMID:29850370
Kole, J S; Beekman, F J
2006-02-21
Statistical reconstruction methods offer possibilities to improve image quality as compared with analytical methods, but current reconstruction times prohibit routine application in clinical and micro-CT. In particular, for cone-beam x-ray CT, the use of graphics hardware has been proposed to accelerate the forward- and back-projection operations, in order to reduce reconstruction times. In the past, wide application of this texture-mapping hardware approach was hampered owing to limited intrinsic accuracy. Recently, however, floating-point precision has become available in the latest generation of commodity graphics cards. In this paper, we utilize this feature to construct a graphics hardware accelerated version of the ordered-subset convex reconstruction algorithm. The aims of this paper are (i) to study the impact of using graphics hardware acceleration for statistical reconstruction on the reconstructed image accuracy and (ii) to measure the speed increase one can obtain by using graphics hardware acceleration. We compare the unaccelerated algorithm with the graphics hardware accelerated version, and for the latter we consider two different interpolation techniques. A simulation study of a micro-CT scanner with a mathematical phantom shows that at almost preserved reconstructed image accuracy, speed-ups of a factor of 40 to 222 can be achieved, compared with the unaccelerated algorithm, depending on the phantom and detector sizes. Reconstruction from physical phantom data reconfirms the usability of the accelerated algorithm for practical cases.
Cryogenic parallel, single phase flows: an analytical approach
NASA Astrophysics Data System (ADS)
Eichhorn, R.
2017-02-01
Managing the cryogenic flows inside a state-of-the-art accelerator cryomodule has become a demanding endeavour: in order to build highly efficient modules, all heat transfers are usually intercepted at various temperatures. For a multi-cavity module operated at 1.8 K, this requires intercepts at 4 K and at 80 K at different locations, with sometimes strongly varying heat loads, which for simplicity are operated in parallel. This contribution describes an analytical approach, based on optimization theories.
Analytical Techniques and Pharmacokinetics of Gastrodia elata Blume and Its Constituents.
Wu, Jinyi; Wu, Bingchu; Tang, Chunlan; Zhao, Jinshun
2017-07-08
Gastrodia elata Blume (G. elata), commonly called Tianma in Chinese, is an important and notable traditional Chinese medicine (TCM), which has been used in China as an anticonvulsant, analgesic, sedative, anti-asthma, and anti-immune drug since ancient times. The aim of this review is to provide an overview of the abundant efforts of scientists in developing analytical techniques and performing pharmacokinetic studies of G. elata and its constituents, including sample pretreatment methods, analytical techniques, absorption, distribution, metabolism, excretion (ADME), and factors influencing its pharmacokinetics. Based on the reported pharmacokinetic property data of G. elata and its constituents, it is hoped that more studies will focus on the development of rapid and sensitive analytical techniques, discovering new therapeutic uses, and understanding the specific in vivo mechanisms of action of G. elata and its constituents from the pharmacokinetic viewpoint in the near future. The present review discusses analytical techniques and pharmacokinetics of G. elata and its constituents reported from 1985 onwards.
On accelerated flow of MHD powell-eyring fluid via homotopy analysis method
NASA Astrophysics Data System (ADS)
Salah, Faisal; Viswanathan, K. K.; Aziz, Zainal Abdul
2017-09-01
The aim of this article is to obtain the approximate analytical solution for incompressible magnetohydrodynamic (MHD) flow of a Powell-Eyring fluid induced by an accelerated plate. Both constant and variable acceleration cases are investigated. The approximate analytical solution in each case is obtained using the Homotopy Analysis Method (HAM). The resulting nonlinear analysis is carried out to generate the series solution. Finally, graphical outcomes for different values of the material parameters on the velocity field are discussed and analyzed.
Accelerator mass spectrometry of small biological samples.
Salehpour, Mehran; Forsgard, Niklas; Possnert, Göran
2008-12-01
Accelerator mass spectrometry (AMS) is an ultra-sensitive technique for isotopic ratio measurements. In the biomedical field, AMS can be used to measure femtomolar concentrations of labeled drugs in body fluids, with direct applications in early drug development such as microdosing. Likewise, the regenerative properties of cells, which are of fundamental significance in stem-cell research, can be determined with an accuracy of a few years by AMS analysis of human DNA. However, AMS nominally requires about 1 mg of carbon per sample, which is not always available when dealing with specific body substances such as localized, organ-specific DNA samples. Consequently, it is of analytical interest to develop methods for the routine analysis of small samples in the range of a few tens of μg. We have used a 5 MV Pelletron tandem accelerator to study small biological samples using AMS. Different methods are presented and compared. A ¹²C-carrier sample preparation method is described which is potentially more sensitive and less susceptible to contamination than the standard procedures.
ERIC Educational Resources Information Center
Toh, Chee-Seng
2007-01-01
A project is described which incorporates nonlaboratory research skills in a graduate level course on analytical chemistry. This project will help students to grasp the basic principles and concepts of modern analytical techniques and also help them develop relevant research skills in analytical chemistry.
IEC-Based Neutron Generator for Security Inspection System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Linchun; Miley, George H.
2002-07-01
Large nuclear reactors are widely employed for electricity generation, but small nuclear radiation sources can also be used for a variety of industrial and government applications. In this paper we discuss the use of a small neutron source based on Inertial Electrostatic Confinement (IEC) of accelerated deuterium ions. There is an urgent need for highly effective detection systems for explosives, especially in airports. While current airport inspection systems are largely based on X-ray techniques, neutron activation, including Thermal Neutron Analysis (TNA) and Fast Neutron Analysis (FNA), is powerful in detecting certain types of explosives in luggage and in cargoes. Basic elements present in the explosives can be measured through the (n, n'γ) reaction initiated by fast neutrons. Combined with a time-of-flight technique, a complete imaging of key elements, and hence of the explosive materials, is obtained. Among the various neutron generators, the IEC is an ideal candidate to meet the neutron activation analysis requirements. Compared with other accelerators and radioisotopes such as ²⁵²Cf, the IEC is simpler, can be switched on or off, and can reliably produce neutrons with minimum maintenance. Theoretical and experimental studies of a spherical IEC have been conducted at the University of Illinois. In a spherical IEC device, yields of ~10⁸ n/s of 2.45-MeV neutrons via DD reactions, or ~2×10¹⁰ n/s of 14-MeV neutrons via DT reactions, have been obtained in recent years using an ion gun injection technique. The possibility of operating a cylindrical IEC in pulsed mode in combination with a pulsed FNA method is also discussed. In this paper we examine the possibility of using an alternative cylindrical IEC configuration. Such a device was studied earlier at the University of Illinois, and it provides a very convenient geometry for security inspection. However, to calculate the neutron yield precisely with this configuration, an understanding of the potential wall trapping and acceleration of ions is needed. The theory engaged is an extension of the original analytic study by R. L. Hirsch on the potential well structure in a spherical IEC device; roughly, a 'line' source of neutrons from a cylindrical IEC corresponds to a 'point' source from the spherical geometry. Thus our present study focuses on the cylindrical IEC for its convenient application in an FNA detection system. The conceptual design and physics of ion trapping and re-circulation in a cylindrical IEC intended for a neutron-based inspection system are presented. (authors)
Covington, Brett C; McLean, John A; Bachmann, Brian O
2017-01-04
Covering: 2000 to 2016. The labor-intensive process of microbial natural product discovery is contingent upon identifying discrete secondary metabolites of interest within complex biological extracts, which contain inventories of all extractable small molecules produced by an organism or consortium. Historically, compound isolation prioritization has been driven by observed biological activity and/or relative metabolite abundance, followed by dereplication via accurate mass analysis. Decades of discovery using variants of these methods have generated the natural pharmacopeia but also contribute to recent high rediscovery rates. However, genomic sequencing reveals substantial untapped potential in previously mined organisms and can provide useful prescience of potentially new secondary metabolites that ultimately enables isolation. Recently, advances in comparative metabolomics analyses have been coupled to secondary metabolic predictions to accelerate bioactivity- and abundance-independent discovery workflows. In this review we discuss the various analytical and computational techniques that enable MS-based metabolomic applications to natural product discovery and discuss the future prospects for comparative metabolomics in natural product discovery.
Thermoelectrically cooled water trap
Micheels, Ronald H [Concord, MA
2006-02-21
A water trap system based on a thermoelectric cooling device is employed to remove a major fraction of the water from air samples prior to analysis of these samples for chemical composition by a variety of analytical techniques in which water vapor interferes with the measurement process. These analytical techniques include infrared spectroscopy, mass spectrometry, ion mobility spectrometry, and gas chromatography. The thermoelectric system for trapping water present in air samples can substantially improve detection sensitivity in these analytical techniques when it is necessary to measure trace analytes with concentrations in the ppm (parts per million) or ppb (parts per billion) partial pressure range. The thermoelectric trap design is compact and amenable to use in portable gas monitoring instrumentation.
Experimental and analytical study of water pipe's rupture for damage identification purposes
NASA Astrophysics Data System (ADS)
Papakonstantinou, Konstantinos G.; Shinozuka, Masanobu; Beikae, Mohsen
2011-04-01
A malfunction, local damage or sudden pipe break of a pipeline system can trigger significant flow variations. As shown in the paper, pressure variations and pipe vibrations are two strongly correlated parameters. A sudden change in the flow velocity and pressure of a pipeline system can induce pipe vibrations. Thus, based on acceleration data, a rapid detection and localization of a possible damage may be carried out by inexpensive, nonintrusive monitoring techniques. To illustrate this approach, an experiment on a single pipe was conducted in the laboratory. Pressure gauges and accelerometers were installed and their correlation was checked during an artificially created transient flow. The experimental findings validated the correlation between the parameters. The interaction between pressure variations and pipe vibrations was also theoretically justified. The developed analytical model explains the connection among flow pressure, velocity, pressure wave propagation and pipe vibration. The proposed method provides a rapid, efficient and practical way to identify and locate sudden failures of a pipeline system and sets firm foundations for the development and implementation of an advanced, new generation Supervisory Control and Data Acquisition (SCADA) system for continuous health monitoring of pipe networks.
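The pressure-velocity coupling invoked above is often first estimated with the classical Joukowsky waterhammer relation, dp = rho * a * dv. A minimal sketch with illustrative values (not the experiment's parameters):

```python
# Hedged sketch: Joukowsky estimate of the pressure surge accompanying a
# sudden velocity change in a water pipe. All values are illustrative.
rho = 1000.0     # water density, kg/m^3
a_wave = 1200.0  # pressure-wave (waterhammer) speed in the pipe, m/s
dv = 0.5         # sudden change in flow velocity, m/s

dp = rho * a_wave * dv          # Joukowsky: dp = rho * a * dv
print(f"pressure surge ~ {dp / 1e5:.1f} bar")   # ~6 bar for these values
```

Surges of this magnitude are what make the accompanying pipe vibrations large enough to detect with ordinary accelerometers, which is the premise of the monitoring approach above.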
Tokuda, Junichi; Plishker, William; Torabi, Meysam; Olubiyi, Olutayo I; Zaki, George; Tatli, Servet; Silverman, Stuart G; Shekher, Raj; Hata, Nobuhiko
2015-06-01
Accuracy and speed are essential for the intraprocedural nonrigid magnetic resonance (MR) to computed tomography (CT) image registration in the assessment of tumor margins during CT-guided liver tumor ablations. Although both accuracy and speed can be improved by limiting the registration to a region of interest (ROI), manual contouring of the ROI prolongs the registration process substantially. To achieve accurate and fast registration without the use of an ROI, we combined a nonrigid registration technique on the basis of volume subdivision with hardware acceleration using a graphics processing unit (GPU). We compared the registration accuracy and processing time of GPU-accelerated volume subdivision-based nonrigid registration technique to the conventional nonrigid B-spline registration technique. Fourteen image data sets of preprocedural MR and intraprocedural CT images for percutaneous CT-guided liver tumor ablations were obtained. Each set of images was registered using the GPU-accelerated volume subdivision technique and the B-spline technique. Manual contouring of ROI was used only for the B-spline technique. Registration accuracies (Dice similarity coefficient [DSC] and 95% Hausdorff distance [HD]) and total processing time including contouring of ROIs and computation were compared using a paired Student t test. Accuracies of the GPU-accelerated registrations and B-spline registrations, respectively, were 88.3 ± 3.7% versus 89.3 ± 4.9% (P = .41) for DSC and 13.1 ± 5.2 versus 11.4 ± 6.3 mm (P = .15) for HD. Total processing time of the GPU-accelerated registration and B-spline registration techniques was 88 ± 14 versus 557 ± 116 seconds (P < .000000002), respectively; there was no significant difference in computation time despite the difference in the complexity of the algorithms (P = .71). The GPU-accelerated volume subdivision technique was as accurate as the B-spline technique and required significantly less processing time. The GPU-accelerated volume subdivision technique may enable the implementation of nonrigid registration into routine clinical practice. Copyright © 2015 AUR. Published by Elsevier Inc. All rights reserved.
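The two reported accuracy metrics can be computed as follows on synthetic masks. This is a hedged sketch: the 95th-percentile Hausdorff distance used in the paper is replaced here by the plain symmetric Hausdorff distance for brevity, and the masks are simple rectangles, not liver segmentations.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

# Hedged sketch: Dice similarity coefficient and symmetric Hausdorff distance
# between two synthetic binary masks.
def dice(mask_a, mask_b):
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum())

a = np.zeros((64, 64), bool); a[16:48, 16:48] = True
b = np.zeros((64, 64), bool); b[20:52, 18:50] = True
print(f"DSC = {dice(a, b):.3f}")

pts_a, pts_b = np.argwhere(a), np.argwhere(b)
hd = max(directed_hausdorff(pts_a, pts_b)[0],
         directed_hausdorff(pts_b, pts_a)[0])
print(f"Hausdorff distance = {hd:.1f} px")
```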
An analytical and experimental evaluation of the plano-cylindrical Fresnel lens solar concentrator
NASA Technical Reports Server (NTRS)
Hastings, L. J.; Allums, S. L.; Cosby, R. M.
1976-01-01
Plastic Fresnel lenses for solar concentration are attractive because of their potential for low-cost mass production. An analytical and experimental evaluation of line-focusing Fresnel lenses with application potential in the 200 to 370 C range is reported. Analytical techniques were formulated to assess the solar transmission and imaging properties of a grooves-down lens. Experimentation was based primarily on a 56 cm wide lens with f-number 1.0. A sun-tracking heliostat provided a nonmoving solar source. Measured data indicated more spreading at the profile base than analytically predicted. The measured and computed transmittances were 85 and 87%, respectively. Preliminary testing with a second lens (1.85 m) indicated that modified manufacturing techniques corrected the profile-spreading problem.
Accelerator-based neutrino oscillation experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harris, Deborah A.; /Fermilab
2007-12-01
Neutrino oscillations were first discovered by experiments looking at neutrinos coming from extra-terrestrial sources, namely the sun and the atmosphere, but we will be depending on earth-based sources to take many of the next steps in this field. This article describes what has been learned so far from accelerator-based neutrino oscillation experiments, and then describes very generally what the next accelerator-based steps are. In section 2 the article discusses how one uses an accelerator to make a neutrino beam, in particular one made from decays in flight of charged pions. There are several different neutrino detection methods currently in use or under development. In section 3 these are presented, with a description of the general concept, an example of such a detector, and a brief discussion of the outstanding issues associated with each detection technique. Finally, section 4 describes how the measurements of oscillation probabilities are made. This includes a description of the near detector technique and how it can be used to make the most precise measurements of neutrino oscillations.
NASA Astrophysics Data System (ADS)
Parvathi, S. P.; Ramanan, R. V.
2018-06-01
An iterative analytical trajectory design technique that includes perturbations in the departure phase of the interplanetary orbiter missions is proposed. The perturbations such as non-spherical gravity of Earth and the third body perturbations due to Sun and Moon are included in the analytical design process. In the design process, first the design is obtained using the iterative patched conic technique without including the perturbations and then modified to include the perturbations. The modification is based on, (i) backward analytical propagation of the state vector obtained from the iterative patched conic technique at the sphere of influence by including the perturbations, and (ii) quantification of deviations in the orbital elements at periapsis of the departure hyperbolic orbit. The orbital elements at the sphere of influence are changed to nullify the deviations at the periapsis. The analytical backward propagation is carried out using the linear approximation technique. The new analytical design technique, named as biased iterative patched conic technique, does not depend upon numerical integration and all computations are carried out using closed form expressions. The improved design is very close to the numerical design. The design analysis using the proposed technique provides a realistic insight into the mission aspects. Also, the proposed design is an excellent initial guess for numerical refinement and helps arrive at the four distinct design options for a given opportunity.
Vacuum electron acceleration by coherent dipole radiation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Troha, A.L.; Van Meter, J.R.; Landahl, E.C.
1999-07-01
The validity of the concept of laser-driven vacuum acceleration has been questioned, based on an extrapolation of the well-known Lawson-Woodward theorem, which stipulates that plane electromagnetic waves cannot accelerate charged particles in vacuum. To formally demonstrate that electrons can indeed be accelerated in vacuum by focusing or diffracting electromagnetic waves, the interaction between a point charge and coherent dipole radiation is studied in detail. The corresponding four-potential exactly satisfies both Maxwell's equations and the Lorentz gauge condition everywhere, and is analytically tractable. It is found that in the far-field region, where the field distribution closely approximates that of a plane wave, we recover the Lawson-Woodward result, while net acceleration is obtained in the near-field region. The scaling of the energy gain with wave-front curvature and wave amplitude is studied systematically. © 1999 The American Physical Society
Particle Acceleration and Radiative Losses at Relativistic Shocks
NASA Astrophysics Data System (ADS)
Dempsey, P.; Duffy, P.
A semi-analytic approach to the relativistic transport equation with isotropic diffusion and consistent radiative losses is presented. It is based on the eigenvalue method first introduced by Kirk & Schneider [5] and Heavens & Drury [3]. We demonstrate the pitch-angle dependence of the cut-off in relativistic shocks.
On the physics of waves in the solar atmosphere: Wave heating and wind acceleration
NASA Technical Reports Server (NTRS)
Musielak, Z. E.
1994-01-01
This paper presents work performed on the generation and physics of acoustic waves in the solar atmosphere. The investigators have incorporated spatial and temporal turbulent energy spectra into a newly corrected version of the Lighthill-Stein theory of acoustic wave generation in order to calculate the acoustic wave energy fluxes generated in the solar convective zone. The investigators have also revised and improved the treatment of the generation of magnetic flux tube waves, which can carry energy along the tubes far away from the region of their origin, and have calculated the tube wave energy fluxes for the sun. They also examine the transfer of the wave energy originating in the solar convective zone to the outer atmospheric layers through computation of wave propagation and dissipation in the highly nonhomogeneous solar atmosphere. These waves may efficiently heat the solar atmosphere, and the heating will be especially significant in the chromospheric network. It is also shown that Alfven waves play the dominant role in solar wind acceleration and coronal hole heating. The second part of the project concerned investigation of wave propagation in highly inhomogeneous stellar atmospheres using an approach based on an analytic tool developed by Musielak, Fontenla, and Moore. In addition, a new technique based on Dirac equations has been developed to investigate coupling between different MHD waves propagating in stratified stellar atmospheres.
On the physics of waves in the solar atmosphere: Wave heating and wind acceleration
NASA Technical Reports Server (NTRS)
Musielak, Z. E.
1993-01-01
This paper presents work performed on the generation and physics of acoustic waves in the solar atmosphere. The investigators have incorporated spatial and temporal turbulent energy spectra in a newly corrected version of the Lighthill-Stein theory of acoustic wave generation in order to calculate the acoustic wave energy fluxes generated in the solar convective zone. The investigators have also revised and improved the treatment of the generation of magnetic flux tube waves, which can carry energy along the tubes far away from the region of their origin, and have calculated the tube wave energy fluxes for the sun. They also examine the transfer of the wave energy originating in the solar convective zone to the outer atmospheric layers through computation of wave propagation and dissipation in the highly nonhomogeneous solar atmosphere. These waves may efficiently heat the solar atmosphere, and the heating will be especially significant in the chromospheric network. It is also shown that the role played by Alfven waves in solar wind acceleration and coronal hole heating is dominant. The second part of the project concerned investigation of wave propagation in highly inhomogeneous stellar atmospheres using an approach based on an analytic tool developed by Musielak, Fontenla, and Moore. In addition, a new technique based on Dirac equations has been developed to investigate coupling between different MHD waves propagating in stratified stellar atmospheres.
Alexovič, Michal; Horstkotte, Burkhard; Solich, Petr; Sabo, Ján
2016-02-04
Simplicity, effectiveness, swiftness, and environmental friendliness - these are the typical requirements for the state of the art development of green analytical techniques. Liquid phase microextraction (LPME) stands for a family of elegant sample pretreatment and analyte preconcentration techniques preserving these principles in numerous applications. By using only fractions of solvent and sample compared to classical liquid-liquid extraction, the extraction kinetics, the preconcentration factor, and the cost efficiency can be increased. Moreover, significant improvements can be made by automation, which is still a hot topic in analytical chemistry. This review surveys comprehensively and in two parts the developments of automation of non-dispersive LPME methodologies performed in static and dynamic modes. Their advantages and limitations and the reported analytical performances are discussed and put into perspective with the corresponding manual procedures. The automation strategies, techniques, and their operation advantages as well as their potentials are further described and discussed. In this first part, an introduction to LPME and their static and dynamic operation modes as well as their automation methodologies is given. The LPME techniques are classified according to the different approaches of protection of the extraction solvent using either a tip-like (needle/tube/rod) support (drop-based approaches), a wall support (film-based approaches), or microfluidic devices. In the second part, the LPME techniques based on porous supports for the extraction solvent such as membranes and porous media are overviewed. An outlook on future demands and perspectives in this promising area of analytical chemistry is finally given. Copyright © 2015 Elsevier B.V. All rights reserved.
Li, Yan; Thomas, Manoj; Osei-Bryson, Kweku-Muata; Levy, Jason
2016-01-01
With the growing popularity of data analytics and data science in the field of environmental risk management, a formalized Knowledge Discovery via Data Analytics (KDDA) process that incorporates all applicable analytical techniques for a specific environmental risk management problem is essential. In this emerging field, there is limited research dealing with the use of decision support to elicit environmental risk management (ERM) objectives and identify analytical goals from ERM decision makers. In this paper, we address problem formulation in the ERM understanding phase of the KDDA process. We build a DM³ ontology to capture ERM objectives and to infer analytical goals and associated analytical techniques. A framework to assist decision making in the problem formulation process is developed. It is shown how the ontology-based knowledge system can provide structured guidance to retrieve relevant knowledge during problem formulation. The importance of not only operationalizing the KDDA approach in a real-world environment but also evaluating the effectiveness of the proposed procedure is emphasized. We demonstrate how ontology inferencing may be used to discover analytical goals and techniques by conceptualizing Hazardous Air Pollutants (HAPs) exposure shifts based on a multilevel analysis of the level of urbanization (and related economic activity) and the degree of Socio-Economic Deprivation (SED) at the local neighborhood level. The HAPs case highlights not only the role of complexity in problem formulation but also the need for integrating data from multiple sources and the importance of employing appropriate KDDA modeling techniques. Challenges and opportunities for KDDA are summarized with an emphasis on environmental risk management and HAPs. PMID:27983713
Li, Yan; Thomas, Manoj; Osei-Bryson, Kweku-Muata; Levy, Jason
2016-12-15
With the growing popularity of data analytics and data science in the field of environmental risk management, a formalized Knowledge Discovery via Data Analytics (KDDA) process that incorporates all applicable analytical techniques for a specific environmental risk management problem is essential. In this emerging field, there is limited research dealing with the use of decision support to elicit environmental risk management (ERM) objectives and identify analytical goals from ERM decision makers. In this paper, we address problem formulation in the ERM understanding phase of the KDDA process. We build a DM³ ontology to capture ERM objectives and to infer analytical goals and associated analytical techniques. A framework to assist decision making in the problem formulation process is developed. It is shown how the ontology-based knowledge system can provide structured guidance to retrieve relevant knowledge during problem formulation. The importance of not only operationalizing the KDDA approach in a real-world environment but also evaluating the effectiveness of the proposed procedure is emphasized. We demonstrate how ontology inferencing may be used to discover analytical goals and techniques by conceptualizing Hazardous Air Pollutants (HAPs) exposure shifts based on a multilevel analysis of the level of urbanization (and related economic activity) and the degree of Socio-Economic Deprivation (SED) at the local neighborhood level. The HAPs case highlights not only the role of complexity in problem formulation but also the need for integrating data from multiple sources and the importance of employing appropriate KDDA modeling techniques. Challenges and opportunities for KDDA are summarized with an emphasis on environmental risk management and HAPs.
Minimum impulse three-body trajectories.
NASA Technical Reports Server (NTRS)
D'Amario, L.; Edelbaum, T. N.
1973-01-01
A rapid and accurate method of calculating optimal impulsive transfers in the restricted problem of three bodies has been developed. The technique combines a multi-conic method of trajectory integration with primer vector theory and an accelerated gradient method of trajectory optimization. A unique feature is that the state transition matrix and the primer vector are found analytically without additional integrations or differentiations. The method has been applied to the determination of optimal two- and three-impulse transfers between the L2 libration point and circular orbits about both the earth and the moon.
Preconditioned upwind methods to solve 3-D incompressible Navier-Stokes equations for viscous flows
NASA Technical Reports Server (NTRS)
Hsu, C.-H.; Chen, Y.-M.; Liu, C. H.
1990-01-01
A computational method for calculating low-speed viscous flowfields is developed. The method uses the implicit upwind-relaxation finite-difference algorithm with a nonsingular eigensystem to solve the preconditioned, three-dimensional, incompressible Navier-Stokes equations in curvilinear coordinates. The technique of local time stepping is incorporated to accelerate the rate of convergence to a steady-state solution. An extensive study of optimizing the preconditioned system is carried out for two viscous flow problems. Computed results are compared with analytical solutions and experimental data.
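As a small illustration of the local time stepping idea used above to accelerate convergence to steady state, the following Python sketch computes a per-cell pseudo-time step from local wave speeds. The 1-D setting and all numbers are assumptions for illustration, not the authors' 3-D preconditioned scheme.

```python
import numpy as np

def local_time_steps(u, c, dx, cfl=0.8):
    """Per-cell pseudo-time steps for steady-state convergence acceleration.

    With local time stepping, each cell marches at the largest stable step
    allowed by its own wave speed instead of the global minimum, so slow
    cells no longer throttle the whole grid (hypothetical 1-D illustration).
    """
    return cfl * dx / (np.abs(u) + c)

# Cells with very different local speeds: the fast cell gets a small step,
# the slow cells get much larger ones.
u = np.array([0.1, 1.0, 5.0, 50.0])   # local convective speeds (assumed)
c = np.ones_like(u)                   # local sound speeds (assumed)
print(local_time_steps(u, c, dx=0.01))
```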
NASA Technical Reports Server (NTRS)
Lee, Jonghyun; Hyers, Robert W.; Rogers, Jan R.; Rathz, Thomas J.; Choo, Hahn; Liaw, Peter
2006-01-01
Responsive access to space requires re-use of components such as rocket nozzles that operate at extremely high temperatures. For such applications, new ultra-high-temperature materials that can operate above 2,000 C are required. At temperatures higher than fifty percent of the melting temperature, the characterization of creep properties is indispensable. Since conventional methods for the measurement of creep are limited to below 1,700 C, a new technique that can be applied at higher temperatures is strongly needed. This research develops a non-contact method for the measurement of creep at temperatures over 2,300 C. Using the electrostatic levitator at NASA MSFC, a spherical sample was rotated to cause creep deformation by centrifugal acceleration. The deforming sample was captured with a digital camera and analyzed to measure creep deformation. Numerical and analytical analyses have also been conducted for comparison with the experimental results. Analytical, numerical, and experimental results showed good agreement with one another.
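For a rough sense of the stresses involved in such a rotating-sphere creep experiment, the following Python sketch evaluates the characteristic centrifugal stress scale ρω²R². The density, radius, and rotation rate are assumed illustrative values, not the paper's experimental parameters.

```python
import numpy as np

# Illustrative values only -- not the experimental parameters of the study.
rho = 16_600.0   # density of a tantalum-like refractory material, kg/m^3
R = 1.5e-3       # sample radius, m
f = 500.0        # rotation rate, rev/s (30,000 rpm)

omega = 2.0 * np.pi * f
# Characteristic centrifugal stress scale sigma ~ rho * omega^2 * R^2;
# the exact stress distribution in a rotating sphere carries geometry
# factors, so this is only an order-of-magnitude check.
sigma = rho * omega**2 * R**2
print(f"stress scale ~ {sigma / 1e6:.2f} MPa")
```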
Analytical N beam position monitor method
NASA Astrophysics Data System (ADS)
Wegscheider, A.; Langner, A.; Tomás, R.; Franchi, A.
2017-11-01
Measurement and correction of focusing errors is of great importance for the performance and machine protection of circular accelerators. Furthermore, the LHC needs to provide equal luminosities to the experiments ATLAS and CMS. High demands are also set on the speed of optics commissioning, as the foreseen operation with β*-leveling on luminosity will require many operational optics. A fast measurement of the β-function around a storage ring is usually done by using the measured phase advance between three consecutive beam position monitors (BPMs). A recent extension of this established technique, called the N-BPM method, was successfully applied for optics measurements at CERN, ALBA, and ESRF. We present here an improved algorithm that uses analytical calculations for both random and systematic errors and takes into account the presence of quadrupole, sextupole, and BPM misalignments, in addition to quadrupolar field errors. This new scheme, called the analytical N-BPM method, is much faster, further improves the measurement accuracy, and is applicable to very pushed beam optics where the existing numerical N-BPM method tends to fail.
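The abstract builds on the classical three-BPM phase-advance measurement; a minimal Python sketch of that underlying estimate (not the analytical N-BPM algorithm itself) is given below, with all numerical values assumed for illustration.

```python
import numpy as np

def beta_from_phases(beta1_model, phi12, phi13, phi12_mdl, phi13_mdl):
    """Classical three-BPM estimate of the beta function at BPM1.

    phi12, phi13: measured betatron phase advances (rad) from BPM1 to
    BPM2 and BPM3; *_mdl are the model values. The analytical N-BPM
    method generalizes this by combining many BPM triplets with
    analytic propagation of random and systematic errors.
    """
    cot = lambda x: 1.0 / np.tan(x)
    return beta1_model * (cot(phi12) - cot(phi13)) / (cot(phi12_mdl) - cot(phi13_mdl))

# Illustrative numbers (not machine data)
print(beta_from_phases(30.0, phi12=1.02, phi13=2.31, phi12_mdl=1.00, phi13_mdl=2.35))
```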
Ion beams provided by small accelerators for material synthesis and characterization
NASA Astrophysics Data System (ADS)
Mackova, Anna; Havranek, Vladimir
2017-06-01
Compact, multipurpose electrostatic tandem accelerators are extensively used to produce ion beams of almost all elements of the periodic system, with energies ranging from 400 keV to 24 MeV, for trace element analysis by means of nuclear analytical methods. The ion beams produced by small accelerators have broad application, mainly for material characterization (Rutherford Back-Scattering spectrometry, Particle Induced X-ray Emission analysis, Nuclear Reaction Analysis, and Ion-Microprobe analysis with 1 μm lateral resolution, among others) and for high-energy implantation. Materials research is a traditionally progressive field of technology. Owing to continuous miniaturization, the structures of interest lie far beyond the analytical limits of most conventional methods. Ion Beam Analysis (IBA) techniques provide this capability, as they use probes of similar or much smaller dimensions (particles, radiation). Ion beams can be used for the synthesis of new functional nanomaterials for optics, electronics, and other applications. They are extensively used in studies of fundamental energetic ion interactions with matter, as well as in novel nanostructure synthesis by ion beam irradiation of various amorphous and crystalline materials in order to obtain structures with extraordinary functional properties. IBA methods serve for the investigation of materials from material research, industry, micro- and nano-technology, electronics, optics and laser technology, and chemical, biological, and environmental investigations in general. Main research directions in laboratories employing small accelerators also include the preparation and characterization of micro- and nano-structured materials of interest for basic and applied research in materials science, along with studies of biological, geological, environmental, and cultural heritage artefacts.
NASA Astrophysics Data System (ADS)
Bisesto, F. G.; Anania, M. P.; Chiadroni, E.; Cianchi, A.; Costa, G.; Curcio, A.; Ferrario, M.; Galletti, M.; Pompili, R.; Schleifer, E.; Zigler, A.
2017-05-01
Plasma wakefield acceleration is the most promising acceleration technique known nowadays, able to provide very high accelerating fields (> 100 GV/m) and thus to accelerate electrons to GeV energies in a few centimeters. Here we present all the plasma-related activities currently underway at SPARC_LAB exploiting the high-power laser FLAME. In particular, we give an overview of the single-shot diagnostics employed: Electro Optic Sampling (EOS) for temporal measurements and optical transition radiation (OTR) for an innovative one-shot emittance measurement. In detail, the EOS technique has been employed to measure for the first time the longitudinal profile of the electric field of fast electrons escaping from a solid target, driving ion and proton acceleration, and to study the impact of using different target shapes. Moreover, a novel scheme for one-shot emittance measurements based on OTR, developed and tested at the SPARC_LAB LINAC, will be shown.
A new perspective on global mean sea level (GMSL) acceleration
NASA Astrophysics Data System (ADS)
Watson, Phil J.
2016-06-01
The vast body of contemporary climate change science is largely underpinned by the premise of a measured acceleration from anthropogenic forcings evident in key climate change proxies -- greenhouse gas emissions, temperature, and mean sea level. Accordingly, over recent years, the issue of whether or not there is a measurable acceleration in global mean sea level has resulted in fierce, widespread professional, social, and political debate. Attempts to measure acceleration in global mean sea level (GMSL) have often used comparatively crude analysis techniques providing little temporal insight into these key questions. This work proposes improved techniques to measure real-time velocity and acceleration based on five GMSL reconstructions spanning the time frame from 1807 to 2014 with substantially improved temporal resolution. While this analysis highlights key differences between the respective reconstructions, there is now more robust, convincing evidence of recent acceleration in the trend of GMSL.
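One way to realize "real-time" velocity and acceleration estimates of the kind described above is smoothed numerical differentiation; the following Python sketch applies a Savitzky-Golay filter to a synthetic sea-level-like series. The data and filter settings are assumptions for illustration, not the paper's reconstructions or method.

```python
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(42)

# Synthetic annual GMSL-like series in mm (NOT a published reconstruction):
# linear trend plus a small quadratic (accelerating) term plus noise.
t = np.arange(1880, 2015)
gmsl = 1.5 * (t - 1880) + 0.005 * (t - 1880) ** 2 + rng.normal(0, 3, t.size)

# Smoothed real-time velocity (mm/yr) and acceleration (mm/yr^2)
vel = savgol_filter(gmsl, window_length=31, polyorder=2, deriv=1, delta=1.0)
acc = savgol_filter(gmsl, window_length=31, polyorder=2, deriv=2, delta=1.0)
print(f"late-period velocity ~ {vel[-5:].mean():.2f} mm/yr, "
      f"acceleration ~ {acc[-5:].mean():.4f} mm/yr^2")
```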
Plasma electron hole kinematics. II. Hole tracking Particle-In-Cell simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, C.; Hutchinson, I. H.
The kinematics of a 1-D electron hole is studied using a novel Particle-In-Cell simulation code. A hole tracking technique enables us to follow the trajectory of a fast-moving solitary hole and study quantitatively hole acceleration and coupling to ions. We observe a transient at the initial stage of hole formation when the hole accelerates to several times the cold-ion sound speed. Artificially imposing slow ion speed changes on a fully formed hole causes its velocity to change even when the ion stream speed in the hole frame greatly exceeds the ion thermal speed, so there are no reflected ions. The behavior that we observe in numerical simulations agrees very well with our analytic theory of hole momentum conservation and the effects of “jetting”.
Recent Advances in Paper-Based Sensors
Liana, Devi D.; Raguse, Burkhard; Gooding, J. Justin; Chow, Edith
2012-01-01
Paper-based sensors are a new alternative technology for fabricating simple, low-cost, portable and disposable analytical devices for many application areas including clinical diagnosis, food quality control and environmental monitoring. The unique properties of paper which allow passive liquid transport and compatibility with chemicals/biochemicals are the main advantages of using paper as a sensing platform. Depending on the main goal to be achieved in paper-based sensors, the fabrication methods and the analysis techniques can be tuned to fulfill the needs of the end-user. Current paper-based sensors are focused on microfluidic delivery of solution to the detection site, whereas more advanced designs involve complex 3-D geometries based on the same microfluidic principles. Although paper-based sensors are very promising, they still suffer from certain limitations such as accuracy and sensitivity. However, it is anticipated that in the future, with advances in fabrication and analytical techniques, there will be more new and innovative developments in paper-based sensors. These sensors could better meet the current objectives of a viable low-cost and portable device in addition to offering high sensitivity and selectivity, and multiple analyte discrimination. This paper is a review of recent advances in paper-based sensors and covers the following topics: existing fabrication techniques, analytical methods and application areas. Finally, the present challenges and future outlooks are discussed. PMID:23112667
Injection of thermal and suprathermal seed particles into coronal shocks of varying obliquity
NASA Astrophysics Data System (ADS)
Battarbee, M.; Vainio, R.; Laitinen, T.; Hietala, H.
2013-10-01
Context. Diffusive shock acceleration in the solar corona can accelerate solar energetic particles to very high energies. Acceleration efficiency is increased by entrapment through self-generated waves, which is highly dependent on the amount of accelerated particles. This, in turn, is determined by the efficiency of particle injection into the acceleration process. Aims: We present an analysis of the injection efficiency at coronal shocks of varying obliquity. We assessed injection through reflection and downstream scattering, including the effect of a cross-shock potential. Both quasi-thermal and suprathermal seed populations were analysed. We present results on the effect of cross-field diffusion downstream of the shock on the injection efficiency. Methods: Using analytical methods, we present applicable injection speed thresholds that were compared with both semi-analytical flux integration and Monte Carlo simulations, which do not resort to binary thresholds. Shock-normal angle θBn and shock-normal velocity Vs were varied to assess the injection efficiency with respect to these parameters. Results: We present evidence of a significant bias of thermal seed particle injection at small shock-normal angles. We show that downstream isotropisation methods affect the θBn-dependence of this result. We show a non-negligible effect caused by the cross-shock potential, and that the effect of downstream cross-field diffusion is highly dependent on boundary definitions. Conclusions: Our results show that for Monte Carlo simulations of coronal shock acceleration a full distribution function assessment with downstream isotropisation through scatterings is necessary to realistically model particle injection. Based on our results, seed particle injection at quasi-parallel coronal shocks can result in significant acceleration efficiency, especially when combined with varying field-line geometry. Appendices are available in electronic form at http://www.aanda.org
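The binary injection-threshold picture that the study compares against full Monte Carlo treatment can be sketched in a few lines; the following Python example estimates the fraction of an isotropic Maxwellian seed population exceeding an assumed injection speed threshold. All values are illustrative and not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def injection_fraction(v_thermal, v_threshold, n=1_000_000):
    """Fraction of an isotropic Maxwellian seed population whose speed
    exceeds a binary injection threshold -- the crude picture that the
    study refines with flux integration and Monte Carlo scattering."""
    v = rng.normal(0.0, v_thermal, size=(n, 3))   # 3 Gaussian components
    return float(np.mean(np.linalg.norm(v, axis=1) > v_threshold))

# Thresholds of a few thermal speeds, as might arise at shocks of
# different obliquity (values assumed, not from the paper):
for x in (1.0, 2.0, 3.0):
    print(f"threshold {x:.0f} v_th: injected fraction {injection_fraction(1.0, x):.4f}")
```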
Web-based Visual Analytics for Extreme Scale Climate Science
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steed, Chad A; Evans, Katherine J; Harney, John F
In this paper, we introduce a Web-based visual analytics framework for democratizing advanced visualization and analysis capabilities pertinent to large-scale earth system simulations. We address significant limitations of present climate data analysis tools such as tightly coupled dependencies, inefficient data movements, complex user interfaces, and static visualizations. Our Web-based visual analytics framework removes critical barriers to the widespread accessibility and adoption of advanced scientific techniques. Using distributed connections to back-end diagnostics, we minimize data movements and leverage HPC platforms. We also mitigate system dependency issues by employing a RESTful interface. Our framework embraces the visual analytics paradigm via new visual navigation techniques for hierarchical parameter spaces, multi-scale representations, and interactive spatio-temporal data mining methods that retain details. Although generalizable to other science domains, the current work focuses on improving exploratory analysis of large-scale Community Land Model (CLM) and Community Atmosphere Model (CAM) simulations.
Goal-Oriented Probability Density Function Methods for Uncertainty Quantification
2015-12-11
…approximations or data-driven approaches. We investigated the accuracy of analytical techniques based on Kubo-Van Kampen operator cumulant expansions for Langevin equations driven by fractional Brownian motion and other noises.
Norwood, Daniel L; Mullis, James O; Davis, Mark; Pennino, Scott; Egert, Thomas; Gonnella, Nina C
2013-01-01
The structural analysis (i.e., identification) of organic chemical entities leached into drug product formulations has traditionally been accomplished with techniques involving the combination of chromatography with mass spectrometry. These include gas chromatography/mass spectrometry (GC/MS) for volatile and semi-volatile compounds, and various forms of liquid chromatography/mass spectrometry (LC/MS or HPLC/MS) for semi-volatile and relatively non-volatile compounds. GC/MS and LC/MS techniques are complementary for structural analysis of leachables and potentially leachable organic compounds produced via laboratory extraction of pharmaceutical container closure/delivery system components and corresponding materials of construction. Both hyphenated analytical techniques possess the separating capability, compound specific detection attributes, and sensitivity required to effectively analyze complex mixtures of trace level organic compounds. However, hyphenated techniques based on mass spectrometry are limited by the inability to determine complete bond connectivity, the inability to distinguish between many types of structural isomers, and the inability to unambiguously determine aromatic substitution patterns. Nuclear magnetic resonance spectroscopy (NMR) does not have these limitations; hence it can serve as a complement to mass spectrometry. However, NMR technology is inherently insensitive and its ability to interface with chromatography has been historically challenging. This article describes the application of NMR coupled with liquid chromatography and automated solid phase extraction (SPE-LC/NMR) to the structural analysis of extractable organic compounds from a pharmaceutical packaging material of construction. The SPE-LC/NMR technology combined with micro-cryoprobe technology afforded the sensitivity and sample mass required for full structure elucidation. Optimization of the SPE-LC/NMR analytical method was achieved using a series of model compounds representing the chemical diversity of extractables. This study demonstrates the complementary nature of SPE-LC/NMR with LC/MS for this particular pharmaceutical application. The identification of impurities leached into drugs from the components and materials associated with pharmaceutical containers, packaging components, and materials has historically been done using laboratory techniques based on the combination of chromatography with mass spectrometry. Such analytical techniques are widely recognized as having the selectivity and sensitivity required to separate the complex mixtures of impurities often encountered in such identification studies, including both the identification of leachable impurities as well as potential leachable impurities produced by laboratory extraction of packaging components and materials. However, while mass spectrometry-based analytical techniques have limitations for this application, newer analytical techniques based on the combination of chromatography with nuclear magnetic resonance spectroscopy provide an added dimension of structural definition. This article describes the development, optimization, and application of an analytical technique based on the combination of chromatography and nuclear magnetic resonance spectroscopy to the identification of potential leachable impurities from a pharmaceutical packaging material. The complementary nature of the analytical techniques for this particular pharmaceutical application is demonstrated.
NASA Technical Reports Server (NTRS)
Hipol, Philip J.
1990-01-01
The development of force and acceleration control spectra for vibration testing of Space Shuttle (STS) orbiter sidewall-mounted payloads requires reliable estimates of the sidewall apparent weight and free (i.e., unloaded) vibration during lift-off. The feasibility of analytically predicting these quantities has been investigated through the development and analysis of a finite element model of the STS cargo bay. Analytical predictions of the sidewall apparent weight were compared with apparent weight measurements made on OV-101, and analytical predictions of the sidewall free vibration response during lift-off were compared with flight measurements obtained from STS-3 and STS-4. These analyses suggest that the cargo bay finite element model has potential application for the estimation of force and acceleration control spectra for STS sidewall-mounted payloads.
Innovative single-shot diagnostics for electrons from laser wakefield acceleration at FLAME
NASA Astrophysics Data System (ADS)
Bisesto, F. G.; Anania, M. P.; Cianchi, A.; Chiadroni, E.; Curcio, A.; Ferrario, M.; Pompili, R.; Zigler, A.
2017-07-01
Plasma wakefield acceleration is the most promising acceleration technique known nowadays, able to provide very high accelerating fields (> 100 GV/m) and thus to accelerate electrons to GeV energies in a few centimeters. Here we present all the plasma-related activities currently underway at SPARC_LAB exploiting the high-power laser FLAME. In particular, we give an overview of the single-shot diagnostics employed: Electro Optic Sampling (EOS) for temporal measurements and Optical Transition Radiation (OTR) for an innovative one-shot emittance measurement. In detail, the EOS technique has been employed to measure for the first time the longitudinal profile of the electric field of fast electrons escaping from a solid target, driving ion and proton acceleration, and to study the impact of using different target shapes. Moreover, a novel scheme for one-shot emittance measurements based on OTR, developed and tested at the SPARC_LAB LINAC and used in a still ongoing experiment on electrons from laser wakefield acceleration, will be shown.
Hyphenated analytical techniques for materials characterisation
NASA Astrophysics Data System (ADS)
Armstrong, Gordon; Kailas, Lekshmi
2017-09-01
This topical review will provide a survey of the current state of the art in ‘hyphenated’ techniques for characterisation of bulk materials, surfaces, and interfaces, whereby two or more analytical methods investigating different properties are applied simultaneously to the same sample to characterise it better than can be achieved by conducting separate analyses in series using different instruments. It is intended for final year undergraduates and recent graduates, who may have some background knowledge of standard analytical techniques, but are not familiar with ‘hyphenated’ techniques or hybrid instrumentation. The review will begin by defining ‘complementary’, ‘hybrid’ and ‘hyphenated’ techniques, as there is not a broad consensus among analytical scientists as to what each term means. The motivating factors driving increased development of hyphenated analytical methods will also be discussed. This introduction will conclude with a brief discussion of gas chromatography-mass spectrometry and energy dispersive x-ray analysis in electron microscopy as two examples, in the context that such combinations of complementary techniques for chemical analysis were among the earliest examples of hyphenated characterisation methods. The emphasis of the main review will be on techniques which are sufficiently well-established that the instrumentation is commercially available, to examine physical properties including mechanical, electrical and thermal properties, in addition to variations in composition, rather than methods solely to identify and quantify chemical species. Therefore, the review will address three broad categories of techniques that the reader may expect to encounter in a well-equipped materials characterisation laboratory: microscopy-based techniques, scanning probe-based techniques, and thermal analysis based techniques. Examples drawn from recent literature, and a concluding case study, will be used to explain the practical issues that arise in combining different techniques. We will consider how the complementary and varied information obtained by combining these techniques may be interpreted together to understand the sample in greater detail than was possible before, and also how combining different techniques can simplify sample preparation and ensure reliable comparisons are made between multiple analyses on the same samples—a topic of particular importance as nanoscale technologies become more prevalent in applied and industrial research and development (R&D). The review will conclude with a brief outline of the emerging state of the art in the research laboratory, and a suggested approach to using hyphenated techniques, whether in the teaching, quality control or R&D laboratory.
Parker, Richard; Markov, Marko
2015-09-01
This article presents a novel modality for accelerating the repair of tendon and ligament lesions by means of a specifically designed electromagnetic field in an equine model. This novel therapeutic approach employs a delivery system that induces a specific electrical signal from an external magnetic field derived from Superconducting QUantum Interference Device (SQUID) measurements of injured vs. healthy tissue. Evaluation of this therapy technique is enabled by a proposed new technology described as Predictive Analytical Imagery (PAI™). This technique examines an ultrasound grayscale image and evaluates it by means of look-ahead predictive algorithms and digital signal processing. The net result is a significant reduction in background noise and the production of a high-resolution grayscale or digital image.
Use of pressure manifestations following the water plasma expansion for phytomass disintegration.
Maroušek, Josef; Kwan, Jason Tai Hong
2013-01-01
A prototype capable of generating underwater high-voltage discharges (3.5 kV) coupled with water plasma expansion was constructed. The level of phytomass disintegration caused by transmission of the pressure shockwaves (50-60 MPa) generated by this expansion was analyzed using gas adsorption techniques. The dynamics of the external surface area and the micropore volume over multiple pretreatment stages of maize silage and sunflower seeds was approximated with robust analytical techniques. The multiplied reaction surface translated into up to a 15% increase in cumulative methane production, which in turn accelerated the overall anaerobic fermentation process. Disintegration of the sunflower seeds allowed up to 45% higher oil yields at the same operating pressure.
Laboratory technology and cosmochemistry
Zinner, Ernst K.; Moynier, Frederic; Stroud, Rhonda M.
2011-01-01
Recent developments in analytical instrumentation have led to revolutionary discoveries in cosmochemistry. Instrumental advances have been made along two lines: (i) increase in spatial resolution and sensitivity of detection, allowing for the study of increasingly smaller samples, and (ii) increase in the precision of isotopic analysis that allows more precise dating, the study of isotopic heterogeneity in the Solar System, and other studies. A variety of instrumental techniques are discussed, and important examples of discoveries are listed. Instrumental techniques and instruments include the ion microprobe, laser ablation gas MS, Auger EM, resonance ionization MS, accelerator MS, transmission EM, focused ion-beam microscopy, atom probe tomography, X-ray absorption near-edge structure/electron loss near-edge spectroscopy, Raman microprobe, NMR spectroscopy, and inductively coupled plasma MS. PMID:21498689
NASA Astrophysics Data System (ADS)
Cao, Liang; Liu, Jiepeng; Li, Jiang; Zhang, Ruizhi
2018-04-01
An extensive experimental and theoretical research study was undertaken to study the vibration serviceability of a long-span prestressed concrete floor system to be used in the lounge of a major airport. Specifically, jumping impact tests were carried out to obtain the floor's modal parameters, followed by an analysis of the distribution of peak accelerations. Running tests were also performed to capture the acceleration responses. The prestressed concrete floor was found to have a low fundamental natural frequency (≈ 8.86 Hz) with an average modal damping ratio of ≈ 2.17%. A coefficient β_rp is proposed for convenient calculation of the maximum root-mean-square acceleration for running. In the theoretical analysis, the prestressed concrete floor under running excitation is treated as a two-span continuous anisotropic rectangular plate with simply-supported edges. The calculated analytical results (natural frequencies and root-mean-square acceleration) agree well with the experimental ones. The analytical approach is thus validated.
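As a small illustration of the root-mean-square acceleration metric used in the serviceability assessment above, the following Python sketch computes a running RMS over a synthetic footfall-like record; the sampling rate, window, and signal are assumptions, not the measured floor response.

```python
import numpy as np

def running_rms(a, fs, window_s=1.0):
    """Running root-mean-square of an acceleration record a (m/s^2)
    sampled at fs (Hz) -- the response quantity that the proposed
    beta_rp coefficient is meant to bound for running excitation."""
    n = int(window_s * fs)
    kernel = np.ones(n) / n
    return np.sqrt(np.convolve(a ** 2, kernel, mode="valid"))

# Synthetic footfall-like record near the floor's ~8.86 Hz fundamental
# frequency (all values assumed, not the test data).
fs = 200.0
t = np.arange(0.0, 10.0, 1.0 / fs)
a = 0.05 * np.sin(2 * np.pi * 8.86 * t) * (1 + 0.3 * np.sin(2 * np.pi * 2.6 * t))
print(f"max running RMS: {running_rms(a, fs).max():.4f} m/s^2")
```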
Critical review of analytical techniques for safeguarding the thorium-uranium fuel cycle
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hakkila, E.A.
1978-10-01
Conventional analytical methods applicable to the determination of thorium, uranium, and plutonium in feed, product, and waste streams from reprocessing thorium-based nuclear reactor fuels are reviewed. Separations methods of interest for these analyses are discussed. Recommendations concerning the applicability of various techniques to reprocessing samples are included. 15 tables, 218 references.
NASA Technical Reports Server (NTRS)
Betts, W. S., Jr.
1972-01-01
A computer program called HOPI was developed to predict reorientation flow dynamics, wherein liquids move from one end of a closed, partially filled, rigid container to the other end under the influence of container acceleration. The program uses the simplified marker-and-cell numerical technique and, using explicit finite-differencing, solves the Navier-Stokes equations for an incompressible viscous fluid. The effects of turbulence are also simulated in the program. HOPI can consider curved as well as straight-walled boundaries. Both free-surface and confined flows can be calculated. The program was used to simulate five liquid reorientation cases. Three of these cases simulated actual NASA LeRC drop tower test conditions, while two cases simulated full-scale Centaur tank conditions. It was concluded that while HOPI can be used to analytically determine the fluid motion in a typical settling problem, there is a current need to optimize it, both by reducing computer time and by reducing the core storage required for a given problem size.
Electron Beam Analysis of Micrometeoroids Captured in Aerogel as Stardust Analogues
NASA Technical Reports Server (NTRS)
Graham, G. A.; Sheffield-Parker, J.; Bradley, P.; Kearsley, A. T.; Dai, Z. R.; Mayo, S. C.; Teslich, N.; Snead, C.; Westphal, A. J.; Ishii, H.
2005-01-01
In January 2004, NASA's Stardust spacecraft passed through the tail of Comet 81P/Wild-2. The on-board dust flux monitor instrument indicated that numerous micro- and nano-meter sized cometary dust particles were captured by the dedicated silica aerogel capture cell. The collected cometary particles will be returned to Earth in January 2006. Current Stardust analogues are: (i) light-gas-gun accelerated individual mineral grains and carbonaceous meteoritic material in aerogels at the Stardust encounter velocity of approximately 6 kilometers per second; (ii) aerogels exposed in low-Earth orbit (LEO) containing preserved cosmic dust grains. Studies of these impacts offer insight into the potential state of the cometary dust captured by Stardust and the suitability of various analytical techniques. A number of papers have discussed the application of sophisticated synchrotron analytical techniques to analyze Stardust particles. Yet much of the understanding gained on the composition and mineralogy of interplanetary dust particles (IDPs) has come from electron microscopy studies. Here we discuss the application of scanning electron microscopy (SEM) for Stardust during the preliminary phase of post-return investigations.
Reddy, S Srikanth; Revathi, Kakkirala; Reddy, S Kranthikumar
2013-01-01
The conventional casting technique is time-consuming compared with the accelerated casting technique. In this study, the marginal accuracy of castings fabricated using the accelerated and conventional casting techniques was compared. Twenty wax patterns were fabricated, and the marginal discrepancy between the die and the patterns was measured using an optical stereomicroscope. Ten wax patterns were used for conventional casting and the rest for accelerated casting. A nickel-chromium alloy was used for the castings. The castings were measured for marginal discrepancies and compared. Castings fabricated using the conventional technique showed less vertical marginal discrepancy than castings fabricated by the accelerated technique, and the difference was statistically highly significant. The conventional casting technique thus produced better marginal accuracy, but the vertical marginal discrepancy produced by the accelerated technique was well within the maximum clinical tolerance limits. Accelerated casting can therefore be used to save laboratory time while fabricating clinical crowns with acceptable vertical marginal discrepancy.
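A between-group comparison of marginal discrepancies of this kind would typically be tested with a two-sample test; the following Python sketch shows one plausible analysis on hypothetical data (the study's actual measurements and statistical test are not reproduced here).

```python
import numpy as np
from scipy import stats

# Hypothetical vertical marginal discrepancies in micrometres; the
# study's actual measurements are not reproduced here.
conventional = np.array([38, 42, 35, 40, 37, 41, 39, 36, 43, 38], float)
accelerated = np.array([55, 60, 52, 58, 57, 61, 54, 59, 56, 60], float)

t_stat, p_val = stats.ttest_ind(conventional, accelerated)
print(f"t = {t_stat:.2f}, p = {p_val:.2g}")   # p << 0.01 -> highly significant
```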
NASA Astrophysics Data System (ADS)
Yazdchi, K.; Salehi, M.; Shokrieh, M. M.
2009-03-01
By introducing a new simplified 3D representative volume element for wavy carbon nanotubes, an analytical model is developed to study stress transfer in single-walled carbon nanotube-reinforced polymer composites. Based on the pull-out modeling technique, the effects of waviness, aspect ratio, and Poisson's ratio on the axial and interfacial shear stresses are analyzed in detail. The results of the present analytical model are in good agreement with corresponding results for straight nanotubes.
An introduction to the physics of high energy accelerators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Edwards, D.A.; Syphers, J.J.
1993-01-01
This book is an outgrowth of a course given by the authors at various universities and particle accelerator schools. It starts from the basic physics principles governing particle motion inside an accelerator, and leads to a full description of the complicated phenomena and analytical tools encountered in the design and operation of a working accelerator. The book covers acceleration and longitudinal beam dynamics, transverse motion and nonlinear perturbations, intensity dependent effects, emittance preservation methods and synchrotron radiation. These subjects encompass the core concerns of a high energy synchrotron. The authors do not assume the reader has much previous knowledge about accelerator physics. Hence, they take great care to introduce the physical phenomena encountered and the concepts used to describe them. The mathematical formulae and derivations are deliberately kept at a level suitable for beginners. After mastering this course, any interested reader will not find it difficult to follow subjects of more current interest. Useful homework problems are provided at the end of each chapter. Many of the problems are based on actual activities associated with the design and operation of existing accelerators.
NASA Astrophysics Data System (ADS)
Chandramouli, Rajarathnam; Li, Grace; Memon, Nasir D.
2002-04-01
Steganalysis techniques attempt to differentiate between stego-objects and cover-objects. In recent work we developed an explicit analytic upper bound for the steganographic capacity of LSB-based embedding techniques for a given false probability of detection. In this paper we look at adaptive steganographic techniques, which take explicit steps to escape detection. We explore different techniques that can be used to adapt message embedding to the image content or to a known steganalysis technique. We investigate the advantages of adaptive steganography within an analytical framework. We also give experimental results with a state-of-the-art steganalysis technique demonstrating that adaptive embedding results in a significant number of bits embedded without detection.
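For context, plain (non-adaptive) LSB embedding, the baseline whose capacity the earlier work bounded, can be written compactly; the following Python sketch embeds a bit string into the least significant bits of an image array. Array sizes and data are illustrative.

```python
import numpy as np

def embed_lsb(cover, bits):
    """Plain (non-adaptive) LSB embedding: overwrite the least significant
    bit of the first len(bits) pixels. Adaptive schemes would instead pick
    embedding locations based on local image content."""
    stego = cover.copy().ravel()
    bits = np.asarray(bits, dtype=np.uint8)
    stego[: bits.size] = (stego[: bits.size] & 0xFE) | bits
    return stego.reshape(cover.shape)

rng = np.random.default_rng(1)
cover = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
message = rng.integers(0, 2, size=16)
stego = embed_lsb(cover, message)
# On average about half of the embedded positions already held the right bit.
print(np.count_nonzero(cover != stego), "pixels changed")
```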
A reference web architecture and patterns for real-time visual analytics on large streaming data
NASA Astrophysics Data System (ADS)
Kandogan, Eser; Soroker, Danny; Rohall, Steven; Bak, Peter; van Ham, Frank; Lu, Jie; Ship, Harold-Jeffrey; Wang, Chun-Fu; Lai, Jennifer
2013-12-01
Monitoring and analysis of streaming data, such as social media, sensors, and news feeds, has become increasingly important for business and government. The volume and velocity of incoming data are key challenges. To effectively support monitoring and analysis, statistical and visual analytics techniques need to be seamlessly integrated; analytic techniques for a variety of data types (e.g., text, numerical) and scope (e.g., incremental, rolling-window, global) must be properly accommodated; interaction, collaboration, and coordination among several visualizations must be supported in an efficient manner; and the system should support the use of different analytics techniques in a pluggable manner. Especially in web-based environments, these requirements pose restrictions on the basic visual analytics architecture for streaming data. In this paper we report on our experience of building a reference web architecture for real-time visual analytics of streaming data, identify and discuss architectural patterns that address these challenges, and report on applying the reference architecture for real-time Twitter monitoring and analysis.
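One of the analytic scopes the architecture must accommodate is rolling-window computation over a stream; a minimal Python sketch of an incremental rolling-window mean follows. The class name and API are hypothetical, not part of the reference architecture.

```python
from collections import deque

class RollingWindowMean:
    """Incremental rolling-window mean over a timestamped stream -- one of
    the analytic scopes (incremental / rolling-window / global) that a
    streaming visual-analytics back end must accommodate."""

    def __init__(self, window_s=60.0):
        self.window_s = window_s
        self.items = deque()   # (timestamp, value) pairs
        self.total = 0.0

    def add(self, ts, value):
        self.items.append((ts, value))
        self.total += value
        # Evict samples that have fallen out of the time window.
        while self.items and ts - self.items[0][0] > self.window_s:
            _, old = self.items.popleft()
            self.total -= old
        return self.total / len(self.items)

w = RollingWindowMean(window_s=2.5)
for ts, v in enumerate([1.0, 2.0, 3.0, 10.0]):
    print(w.add(float(ts), v))   # windowed mean after each arrival
```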
NASA Astrophysics Data System (ADS)
Miranda Guedes, Rui
2018-02-01
Long-term creep of viscoelastic materials is experimentally inferred through accelerating techniques based on the time-temperature superposition principle (TTSP) or the time-stress superposition principle (TSSP). According to these principles, a given property measured for short times at a higher temperature or higher stress level remains the same as that obtained for longer times at a lower temperature or lower stress level, except that the curves are shifted parallel to the horizontal axis to match a master curve. These procedures enable the construction of creep master curves from short-term experimental tests. The Stepped Isostress Method (SSM) is an evolution of the classical TSSP method. The SSM further reduces the number of test specimens required to obtain the master curve, since only one specimen is necessary. The classical approach, using creep tests, demands at least one specimen per stress level to produce a set of creep curves upon which the TSSP is applied to obtain the master curve. This work proposes an analytical method to process the SSM raw data. The method is validated using numerical simulations to reproduce the SSM tests based on two different viscoelastic models. One model represents the viscoelastic behavior of a graphite/epoxy laminate and the other an adhesive based on epoxy resin.
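The superposition shifting that underlies TTSP/TSSP master-curve construction can be illustrated directly; the following Python sketch concatenates a reference curve with a horizontally shifted accelerated curve on a log-time axis. The toy compliance function and shift factor are assumptions, not the paper's SSM analysis, which derives the shifts from a single stepped-stress test.

```python
import numpy as np

# Toy log-linear creep compliance (assumed, purely illustrative):
compliance = lambda log_t: 1.0 + 0.30 * log_t

log_t = np.linspace(0.0, 2.0, 50)        # log10(time/s): 1 s .. 100 s tests
ref_curve = compliance(log_t)            # reference stress level
acc_curve = compliance(log_t + 1.5)      # accelerated level: same response
                                         # reached 10**1.5 times sooner

log_aT = 1.5   # shift factor, assumed known here; the SSM analysis in the
               # paper extracts the shifts from a single stepped-stress test
master_logt = np.concatenate([log_t, log_t + log_aT])
master_comp = np.concatenate([ref_curve, acc_curve])
print(f"master curve spans 1 s .. {10 ** master_logt.max():.0f} s")
```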
Spietelun, Agata; Marcinkowski, Łukasz; de la Guardia, Miguel; Namieśnik, Jacek
2013-12-20
Solid-phase microextraction techniques find increasing application in the sample preparation step before chromatographic determination of analytes in samples with a complex composition. These techniques integrate several operations, such as sample collection, extraction, analyte enrichment above the detection limit of a given measuring instrument, and the isolation of analytes from the sample matrix. This work presents information about novel methodological and instrumental solutions for different variants of solid-phase extraction techniques, solid-phase microextraction (SPME), stir bar sorptive extraction (SBSE) and magnetic solid phase extraction (MSPE), including practical applications of these techniques and a critical discussion of their advantages and disadvantages. The proposed solutions fulfill the requirements resulting from the concept of sustainable development, and specifically from the implementation of green chemistry principles in analytical laboratories. Therefore, particular attention is paid to the description of possible uses of novel, selective stationary phases in extraction techniques, inter alia, polymeric ionic liquids, carbon nanotubes, and silica- and carbon-based sorbents. The methodological solutions, together with properly matched sampling devices for collecting analytes from samples with varying matrix composition, enable us to reduce the number of errors during sample preparation prior to chromatographic analysis as well as to limit the negative impact of this analytical step on the natural environment and the health of laboratory employees. Copyright © 2013 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Cornelius, Reinold R.; Voight, Barry
1995-03-01
The Materials Failure Forecasting Method for volcanic eruptions (FFM) analyses the rate of precursory phenomena. The time of eruption onset is derived from the time of "failure" implied by an accelerating rate of deformation. The approach attempts to fit data, Ω, to the differential relationship Ω̈ = A Ω̇^α, where the dot superscript represents the time derivative, and the data Ω may be any of several parameters describing the accelerating deformation or energy release of the volcanic system. The rate coefficients, A and α, may be derived from appropriate data sets to provide an estimate of the time to "failure". As the method is still an experimental technique, it should be used with appropriate judgment during times of volcanic crisis. Limitations of the approach are identified and discussed. Several kinds of eruption precursory phenomena, all simulating accelerating creep during the mechanical deformation of the system, can be used with FFM. Among these are tilt data, slope-distance measurements, crater fault movements and seismicity. The use of seismic coda, seismic amplitude-derived energy release and time-integrated amplitudes or coda lengths is examined. Using cumulative coda length directly has some practical advantages over more rigorously derived parameters, and RSAM and SSAM technologies appear to be well suited to real-time applications. One graphical and four numerical techniques of applying FFM are discussed. The graphical technique is based on an inverse representation of rate versus time. For α = 2, the inverse rate plot is linear; it is concave upward for α < 2 and concave downward for α > 2. The eruption time is found by simple extrapolation of the data set toward the time axis. Three numerical techniques are based on linear least-squares fits to linearized data sets. The "linearized least-squares technique" is the most robust and is expected to be the most practical numerical technique; it is based on an iterative linearization of the given rate-time series. The hindsight technique is disadvantaged by a bias favouring too early an eruption time in foresight applications. The "log rate versus log acceleration technique", utilizing a logarithmic representation of the fundamental differential equation, is disadvantaged by large data scatter after interpolation of accelerations. One further numerical technique, a nonlinear least-squares fit to rate data, requires special and more complex software. PC-oriented computer codes were developed for data manipulation, application of the three linearizing numerical methods, and curve fitting; separate software is required for graphing purposes. All three linearizing techniques facilitate an eruption window based on a data envelope according to the linear least-squares fit, at a specific level of confidence, and an estimated rate at the time of failure.
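The graphical inverse-rate technique described above admits a compact numerical form for the α = 2 case: fit a straight line to inverse rate versus time and extrapolate to the time axis. The following Python sketch demonstrates this on a synthetic accelerating precursor; it illustrates the published idea, not the authors' software.

```python
import numpy as np

def failure_time_inverse_rate(t, rate):
    """Inverse-rate FFM technique for alpha = 2: fit a straight line to
    1/rate versus time and extrapolate to the time axis (1/rate -> 0),
    which estimates the time of 'failure' (eruption onset)."""
    inv = 1.0 / np.asarray(rate, dtype=float)
    slope, intercept = np.polyfit(t, inv, 1)
    return -intercept / slope   # zero crossing of the fitted line

# Synthetic accelerating precursor with alpha = 2: rate = 1/(k*(tf - t)),
# so the inverse rate decays linearly to zero at the true failure time tf.
tf, k = 100.0, 0.05
t = np.linspace(0.0, 90.0, 40)
rate = 1.0 / (k * (tf - t))
print(f"predicted failure time: {failure_time_inverse_rate(t, rate):.1f}")  # ~100.0
```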
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mahadevan, Sankaran; Agarwal, Vivek; Neal, Kyle
Assessment and management of aging concrete structures in nuclear power plants require a more systematic approach than simple reliance on existing code margins of safety. Structural health monitoring of concrete structures aims to understand the current health condition of a structure based on heterogeneous measurements to produce high-confidence actionable information regarding structural integrity that supports operational and maintenance decisions. This ongoing research project is seeking to develop a probabilistic framework for health diagnosis and prognosis of aging concrete structures in a nuclear power plant that are subjected to physical, chemical, environmental, and mechanical degradation. The proposed framework consists of four elements: monitoring, data analytics, uncertainty quantification, and prognosis. This report focuses on degradation caused by alkali-silica reaction (ASR). Controlled specimens were prepared to develop accelerated ASR degradation. Different monitoring techniques – thermography, digital image correlation (DIC), mechanical deformation measurements, nonlinear impact resonance acoustic spectroscopy (NIRAS), and vibro-acoustic modulation (VAM) – were used to detect the damage caused by ASR. Heterogeneous data from the multiple techniques were used for damage diagnosis and prognosis, and for quantification of the associated uncertainty, using a Bayesian network approach. Additionally, the MapReduce technique has been demonstrated with synthetic data; it can be used in the future to handle the large amounts of observation data obtained from online monitoring of realistic structures.
Cortez, Juliana; Pasquini, Celio
2013-02-05
The ring-oven technique, originally applied to classical qualitative analysis from the 1950s to the 1970s, is revisited for use in a simple though highly efficient and green procedure for analyte preconcentration prior to determination by the microanalytical techniques presently available. The proposed preconcentration technique is based on the dropwise delivery of a small volume of sample to a filter paper substrate, assisted by a flow-injection-like system. The filter paper is maintained in a small circular heated oven (the ring oven). Drops of the sample solution diffuse by capillarity from the center to a circular area of the paper substrate. After the total sample volume has been delivered, a ring with a sharp (ca. 350 μm) circular contour, about 2.0 cm in diameter, is formed on the paper and contains most of the analytes originally present in the sample volume. Preconcentration coefficients of the analyte can reach 250-fold (on a m/m basis) for a sample volume as small as 600 μL. The proposed system and procedure have been evaluated to concentrate Na, Fe, and Cu in fuel ethanol, followed by simultaneous direct determination of these species in the ring contour, employing the microanalytical technique of laser-induced breakdown spectroscopy (LIBS). Detection limits of 0.7, 0.4, and 0.3 μg mL(-1) and mean recoveries of (109 ± 13)%, (92 ± 18)%, and (98 ± 12)% for Na, Fe, and Cu, respectively, were obtained in fuel ethanol. It is possible to anticipate the application of the technique, coupled to modern microanalytical and multianalyte techniques, to several analytical problems requiring analyte preconcentration and/or sample stabilization.
Quantitative metabolomics by H-NMR and LC-MS/MS confirms altered metabolic pathways in diabetes.
Lanza, Ian R; Zhang, Shucha; Ward, Lawrence E; Karakelides, Helen; Raftery, Daniel; Nair, K Sreekumaran
2010-05-10
Insulin is a major postprandial hormone with profound effects on carbohydrate, fat, and protein metabolism. In the absence of exogenous insulin, patients with type 1 diabetes exhibit a variety of metabolic abnormalities including hyperglycemia, glycosuria, accelerated ketogenesis, and muscle wasting due to increased proteolysis. We analyzed plasma from type 1 diabetic (T1D) humans during insulin treatment (I+) and acute insulin deprivation (I-), and from non-diabetic participants (ND), by (1)H nuclear magnetic resonance spectroscopy and liquid chromatography-tandem mass spectrometry. The aim was to determine whether this combination of analytical methods could provide information on metabolic pathways known to be altered by insulin deficiency. Multivariate statistics differentiated proton spectra from I- and I+ based on several derived plasma metabolites that were elevated during insulin deprivation (lactate, acetate, allantoin, ketones). Mass spectrometry revealed significant perturbations in the levels of plasma amino acids and amino acid metabolites during insulin deprivation. Further analysis of the metabolite levels measured by the two analytical techniques indicates several known metabolic pathways that are perturbed in T1D (I-) (protein synthesis and breakdown, gluconeogenesis, ketogenesis, amino acid oxidation, mitochondrial bioenergetics, and oxidative stress). This work demonstrates the promise of combining multiple analytical methods with advanced statistical methods in quantitative metabolomics research, which we have applied to the clinical situation of acute insulin deprivation in T1D to reflect the numerous metabolic pathways known to be affected by insulin deficiency.
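Multivariate separation of the kind reported above is often examined with principal component analysis; the following Python sketch applies PCA to a synthetic metabolite table with one group offset to mimic insulin deprivation. The data are invented solely to illustrate the workflow and are not the study's measurements.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Synthetic metabolite table: 30 plasma samples x 12 metabolites, with one
# group elevated in a few features to mimic insulin deprivation.
X = rng.normal(0.0, 1.0, size=(30, 12))
X[15:, :4] += 2.0   # 'insulin-deprived' group elevated in 4 metabolites

scores = PCA(n_components=2).fit_transform(X)
# Group separation shows up along the first principal component.
print("group means on PC1:", scores[:15, 0].mean(), scores[15:, 0].mean())
```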
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roberts, Kenneth Paul
Capillary electrophoresis (CE) and high-performance liquid chromatography (HPLC) are widely used analytical separation techniques with many applications in the chemical, biochemical, and biomedical sciences. Conventional analyte identification in these techniques is based on the retention/migration times of standards, requiring a high degree of reproducibility, the availability of reliable standards, and the absence of coelution. From this, several new information-rich detection methods (also known as hyphenated techniques) are being explored that would be capable of providing unambiguous on-line identification of separating analytes in CE and HPLC. As further discussed, a number of such on-line detection methods have shown considerable success, including Raman, nuclear magnetic resonance (NMR), mass spectrometry (MS), and fluorescence line-narrowing spectroscopy (FLNS). In this thesis, the feasibility and potential of combining the highly sensitive and selective laser-based detection method of FLNS with analytical separation techniques are discussed and presented. A summary of previously demonstrated FLNS detection interfaced with chromatography and electrophoresis is given, and recent results from on-line FLNS detection in CE (CE-FLNS), and the new combination of HPLC-FLNS, are shown.
Konstantinidis, Spyridon; Heldin, Eva; Chhatre, Sunil; Velayudhan, Ajoy; Titchener-Hooker, Nigel
2012-01-01
High-throughput approaches to facilitate the development of chromatographic separations have now been adopted widely in the biopharmaceutical industry, but issues of how to reduce the associated analytical burden remain. For example, acquiring experimental data by high-level factorial designs in 96-well plates can place a considerable strain upon assay capabilities, generating a bottleneck that limits significantly the speed of process characterization. This article proposes an approach designed to counter this challenge: Strategic Assay Deployment (SAD). In SAD, a set of available analytical methods is investigated to determine which set of techniques is the most appropriate to use and how best to deploy these to reduce the consumption of analytical resources while still enabling accurate and complete process characterization. The approach is demonstrated by investigating how salt concentration and pH affect the binding of green fluorescent protein from Escherichia coli homogenate to an anion exchange resin presented in a 96-well filter plate format. Compared with the deployment of routinely used analytical methods alone, the application of SAD reduced the total assay time and the total assay material consumption by at least 40% and 5%, respectively. SAD has significant utility in accelerating bioprocess development activities. Copyright © 2012 American Institute of Chemical Engineers (AIChE).
Shaida, Nadeem; Priest, Andrew N; See, T C; Winterbottom, Andrew P; Graves, Martin J; Lomas, David J
2017-06-01
To evaluate the diagnostic performance of velocity- and acceleration-sensitized noncontrast-enhanced magnetic resonance angiography (NCE-MRA) of the infrageniculate arteries, using contrast-enhanced MRA (CE-MRA) as a reference standard. Twenty-four patients with symptoms of peripheral arterial disease were recruited. Each patient's infrageniculate arterial tree was examined using a velocity-dependent flow-sensitized dephasing (VEL-FSD) technique, an acceleration-dependent (ACC-FSD) technique, and our conventional CE-MRA technique performed at 1.5T. The images were independently reviewed by two experienced vascular radiologists, who evaluated each vessel segment to assess visibility, diagnostic confidence, venous contamination, and detection of pathology. In all, 432 segments were evaluated with each of the three techniques by each reader. Overall diagnostic confidence was rated as moderate or high in 98.5% of segments with CE-MRA, 92.1% with VEL-FSD, and 79.9% with ACC-FSD. No venous contamination was seen in 96% of segments with CE-MRA, 72.2% with VEL-FSD, and 85.8% with ACC-FSD. Per-segment, per-limb, and per-patient sensitivities for detecting significant stenotic disease were 63.4%, 73%, and 92%, respectively, for ACC-FSD, and 65.3%, 87.2%, and 96% for VEL-FSD; these differences were not statistically significant by McNemar's chi-squared test (P = 1.00, 0.13, and 0.77, respectively). Flow-dependent NCE-MRA techniques may have a role to play in the evaluation of patients with peripheral vascular disease. The increased sensitivity of the velocity-based technique compared to the acceleration-based technique comes at the expense of greater venous contamination. Level of Evidence: 2. Technical Efficacy: Stage 2. J. Magn. Reson. Imaging 2017;45:1846-1853. © 2016 International Society for Magnetic Resonance in Medicine.
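A minimal sketch of the paired comparison used above, with invented discordant-pair counts (the abstract does not reproduce the underlying 2x2 tables): McNemar's continuity-corrected statistic tests whether the segments detected by one technique but not the other are symmetric.

from scipy.stats import chi2

# Hypothetical discordant pairs: b = positive on VEL-FSD only,
# c = positive on ACC-FSD only (not the study's actual counts).
b, c = 9, 6
stat = (abs(b - c) - 1) ** 2 / (b + c)  # continuity-corrected McNemar statistic
p_value = chi2.sf(stat, df=1)           # chi-squared with 1 degree of freedom
print(f"chi2 = {stat:.2f}, p = {p_value:.3f}")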
Pavement Performance : Approaches Using Predictive Analytics
DOT National Transportation Integrated Search
2018-03-23
Acceptable pavement condition is paramount to road safety. Using predictive analytics techniques, this project attempted to develop models that provide an assessment of pavement condition based on an array of indicators that include pavement distress,...
Analytical learning and term-rewriting systems
NASA Technical Reports Server (NTRS)
Laird, Philip; Gamble, Evan
1990-01-01
Analytical learning is a set of machine learning techniques for revising the representation of a theory based on a small set of examples of that theory. When the representation of the theory is correct and complete but perhaps inefficient, an important objective of such analysis is to improve the computational efficiency of the representation. Several algorithms with this purpose have been suggested, most of which are closely tied to a first-order logical language and are variants of goal regression, such as the familiar explanation-based generalization (EBG) procedure. But because predicate calculus is a poor representation for some domains, these learning algorithms are extended here to apply to other computational models. It is shown that the goal regression technique applies to a large family of programming languages, all based on a kind of term-rewriting system. Included in this family are three language families of importance to artificial intelligence: logic programming, such as Prolog; lambda calculus, such as LISP; and combinator-based languages, such as FP. A new analytical learning algorithm, AL-2, is exhibited that learns from success but is otherwise quite different from EBG. These results suggest that term-rewriting systems are a good framework for analytical learning research in general, and that further research should be directed toward developing new techniques.
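To make the term-rewriting setting concrete, the following minimal Python sketch (all names invented, not from the paper) implements matching, substitution, and innermost rewriting for the two Peano-addition rules add(0, y) -> y and add(s(x), y) -> s(add(x, y)); goal regression operates over exactly this kind of rule application, run backwards.

# Terms are nested tuples ('op', args...); pattern variables are strings.
def match(pattern, term, env):
    if isinstance(pattern, str):                       # variable
        if pattern in env:
            return env if env[pattern] == term else None
        env = dict(env)
        env[pattern] = term
        return env
    if (not isinstance(term, tuple) or pattern[0] != term[0]
            or len(pattern) != len(term)):
        return None
    for p, t in zip(pattern[1:], term[1:]):
        env = match(p, t, env)
        if env is None:
            return None
    return env

def substitute(pattern, env):
    if isinstance(pattern, str):
        return env[pattern]
    return (pattern[0],) + tuple(substitute(p, env) for p in pattern[1:])

RULES = [(('add', ('0',), 'y'), 'y'),
         (('add', ('s', 'x'), 'y'), ('s', ('add', 'x', 'y')))]

def rewrite(term):
    """Apply rules to normal form (innermost strategy)."""
    if isinstance(term, tuple):
        term = (term[0],) + tuple(rewrite(t) for t in term[1:])
    changed = True
    while changed:
        changed = False
        for lhs, rhs in RULES:
            env = match(lhs, term, {})
            if env is not None:
                term = rewrite(substitute(rhs, env))
                changed = True
    return term

two = ('s', ('s', ('0',)))
print(rewrite(('add', two, two)))   # ('s', ('s', ('s', ('s', ('0',)))))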
MIBPB: a software package for electrostatic analysis.
Chen, Duan; Chen, Zhan; Chen, Changjun; Geng, Weihua; Wei, Guo-Wei
2011-03-01
The Poisson-Boltzmann equation (PBE) is an established model for the electrostatic analysis of biomolecules. The development of advanced computational techniques for the solution of the PBE has been an important topic in the past two decades. This article presents a matched interface and boundary (MIB)-based PBE software package, the MIBPB solver, for electrostatic analysis. The MIBPB has a unique feature in that it is the first interface-technique-based PBE solver to rigorously enforce the solution and flux continuity conditions at the dielectric interface between the biomolecule and the solvent. For protein molecular surfaces, which may possess troublesome geometrical singularities, the MIB scheme makes the MIBPB by far the only existing PBE solver able to deliver second-order convergence, that is, the accuracy increases four times when the mesh size is halved. The MIBPB method is also equipped with a Dirichlet-to-Neumann mapping technique that builds a Green's function approach to analytically resolve the singular charge distribution in biomolecules, in order to obtain reliable solutions at meshes as coarse as 1 Å, whereas other traditional PB solvers usually require meshes of 0.25 Å to reach a similar level of reliability. This work further accelerates the convergence of the linear equation systems resulting from the MIBPB by using Krylov subspace (KS) techniques. Condition numbers of the MIBPB matrices are significantly reduced by using appropriate KS solver and preconditioner combinations. Both the linear and nonlinear PBE solvers in the MIBPB package are tested by protein-solvent solvation energy calculations and by analysis of salt effects on protein-protein binding energies, respectively. Copyright © 2010 Wiley Periodicals, Inc.
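The second-order convergence claim can be checked with a standard observed-order computation: halving the mesh should divide the error by about four, i.e. log2 of the error ratio should approach 2. The errors below are placeholders, not MIBPB output.

import math

errors = {0.50: 2.0e-2, 0.25: 5.1e-3, 0.125: 1.3e-3}   # mesh size -> error (assumed)
hs = sorted(errors, reverse=True)
for h_coarse, h_fine in zip(hs, hs[1:]):
    p = math.log(errors[h_coarse] / errors[h_fine], 2)
    print(f"h {h_coarse} -> {h_fine}: observed order ~ {p:.2f}")   # ~2 for second order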
NASA Technical Reports Server (NTRS)
Baker, John; Thorpe, Ira
2012-01-01
Thoroughly studied classic space-based gravitational-wave mission concepts such as the Laser Interferometer Space Antenna (LISA) are based on laser-interferometry techniques. Ongoing developments in atom-interferometry techniques have spurred recently proposed alternative mission concepts. These different approaches can be understood on a common footing. We present a comparative analysis of how each type of instrument responds to some of the noise sources that may limit gravitational-wave mission concepts. Sensitivity to laser frequency instability is essentially the same for either approach. Spacecraft acceleration reference stability sensitivities are different, allowing smaller spacecraft separations in the atom-interferometry approach, but acceleration noise requirements are nonetheless similar. Each approach has distinct additional measurement noise issues.
Polymer architectures via mass spectrometry and hyphenated techniques: A review.
Crotty, Sarah; Gerişlioğlu, Selim; Endres, Kevin J; Wesdemiotis, Chrys; Schubert, Ulrich S
2016-08-17
This review covers the application of mass spectrometry (MS) and its hyphenated techniques to synthetic polymers of varying architectural complexity. The synthetic polymers are discussed according to their architectural complexity, from linear homopolymers and copolymers to stars, dendrimers, cyclic copolymers, and other polymers. MS and tandem MS (MS/MS) have been used extensively for the analysis of synthetic polymers. However, increasing structural or architectural complexity can pose analytical challenges that MS or MS/MS cannot overcome alone. Hyphenation of MS with different chromatographic techniques (2D × LC, SEC, HPLC, etc.), utilization of other ionization methods (APCI, DESI, etc.), and various mass analyzers (FT-ICR, quadrupole, time-of-flight, ion trap, etc.) are applied to overcome these challenges and achieve more detailed structural characterization of complex polymeric systems. In addition, computational methods (software: MassChrom2D, COCONUT, 2D maps, etc.) have also reached polymer science to facilitate and accelerate data interpretation. Developments in technology and in the comprehension of different polymer classes with diverse architectures have improved significantly, allowing smart polymer designs to be examined and advanced. We present specific examples covering diverse analytical aspects as well as forthcoming prospects in polymer science. Copyright © 2016 Elsevier B.V. All rights reserved.
Miniature penetrator (MinPen) acceleration recorder development test
DOE Office of Scientific and Technical Information (OSTI.GOV)
Franco, R.J.; Platzbecker, M.R.
1998-08-01
The Telemetry Technology Development Department at Sandia National Laboratories actively develops and tests acceleration recorders for penetrating weapons. This new acceleration recorder (MinPen) utilizes a microprocessor-based architecture for operational flexibility while retaining electronics and packaging techniques developed over years of penetrator testing. MinPen has been demonstrated to function in shock environments up to 20,000 g. The MinPen instrumentation development has resulted in a rugged, versatile, miniature acceleration recorder that is a valuable tool for penetrator testing in a wide range of applications.
NASA Technical Reports Server (NTRS)
Moskowitz, Milton E.; Bly, Jennifer M.; Matthiesen, David H.
1997-01-01
Experiments were conducted in the crystal growth furnace (CGF) during the first United States Microgravity Laboratory (USML-1), the STS-50 flight of the Space Shuttle Columbia, to determine the segregation behavior of selenium in bulk GaAs in a microgravity environment. After the flight, the selenium-doped GaAs crystals were sectioned, polished, and analyzed to determine the free carrier concentration as a function of position. One of the two crystals initially exhibited an axial concentration profile indicative of diffusion-controlled growth, but this profile then changed to that predicted for complete-mixing growth. An analytical model, proposed by Naumann [R.J. Naumann, J. Crystal Growth 142 (1994) 253], was utilized to predict the maximum allowable microgravity disturbances transverse to the growth direction during the two different translation rates used for each of the experiments. The predicted allowable acceleration levels were 4.86 μg for the 2.5 μm/s furnace translation rate and 38.9 μg for the 5.0 μm/s rate. These predicted values were compared to the Orbital Acceleration Research Experiment (OARE) accelerometer data recorded during the crystal growth periods for these experiments. Based on the analysis of the OARE acceleration data and the predictions of the analytical model, it is concluded that the change in segregation behavior was not caused by any acceleration events in the microgravity environment.
Peng, Kuan; He, Ling; Zhu, Ziqiang; Tang, Jingtian; Xiao, Jiaying
2013-12-01
Compared with commonly used analytical reconstruction methods, the frequency-domain finite element method (FEM)-based approach has proven to be an accurate and flexible algorithm for photoacoustic tomography. However, the FEM-based algorithm is computationally demanding, especially for three-dimensional cases. To enhance the algorithm's efficiency, in this work a parallel computational strategy is implemented in the framework of the FEM-based reconstruction algorithm using the graphics-processing-unit (GPU) parallel framework known as the compute unified device architecture (CUDA). A series of simulation experiments is carried out to test the accuracy and the accelerating effect of the improved method. The results obtained indicate that the parallel calculation does not change the accuracy of the reconstruction algorithm, while its computational cost is reduced significantly, by a factor of 38.9 on a GTX 580 graphics card.
Accelerating EPI distortion correction by utilizing a modern GPU-based parallel computation.
Yang, Yao-Hao; Huang, Teng-Yi; Wang, Fu-Nien; Chuang, Tzu-Chao; Chen, Nan-Kuei
2013-04-01
The combination of phase demodulation and field mapping is a practical method to correct echo planar imaging (EPI) geometric distortion. However, since phase dispersion accumulates in each phase-encoding step, the calculation complexity of phase demodulation is Ny-fold higher than that of conventional image reconstruction. Thus, correcting EPI images via phase demodulation is generally a time-consuming task. Parallel computing employing general-purpose calculations on graphics processing units (GPU) can accelerate scientific computing if the algorithm is parallelized. This study proposes a method that incorporates the GPU-based technique into phase demodulation calculations to reduce computation time. The proposed parallel algorithm was applied to a PROPELLER-EPI diffusion tensor data set. The GPU-based phase demodulation method correctly reduced the EPI distortion and accelerated the computation. The total reconstruction time for the 16-slice PROPELLER-EPI diffusion tensor images with a matrix size of 128 × 128 was reduced from 1,754 seconds to 101 seconds by the parallelized 4-GPU program. GPU computing is a promising method for accelerating EPI geometric correction. The resulting reduction in the computation time of phase demodulation should accelerate postprocessing for studies performed with EPI, and should make the PROPELLER-EPI technique practical for clinical use. Copyright © 2011 by the American Society of Neuroimaging.
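The Ny-fold cost arises because each phase-encode line carries its own off-resonance phase, so a conjugate-phase-style demodulation must process the lines one at a time. A toy NumPy sketch (synthetic sizes, field map, and echo spacing; sign conventions simplified) makes the loop structure explicit:

import numpy as np

Ny, Nx = 64, 64
kspace = np.random.randn(Ny, Nx) + 1j * np.random.randn(Ny, Nx)   # toy EPI data
fieldmap = 5.0 * np.random.randn(Ny, Nx)    # off-resonance in Hz (assumed)
esp = 0.5e-3                                # echo spacing in seconds (assumed)

y = np.arange(Ny)[:, None] - Ny // 2
image = np.zeros((Ny, Nx), dtype=complex)
for ky_idx in range(Ny):                    # one pass per phase-encode step
    ky = ky_idx - Ny // 2
    t = ky_idx * esp                        # acquisition time of this line
    line = np.fft.ifft(np.fft.ifftshift(kspace[ky_idx]))[None, :]
    phase = np.exp(2j * np.pi * (ky * y / Ny + fieldmap * t))
    image += line * phase                   # demodulate the accrued phase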
Acceleration display system for aircraft zero-gravity research
NASA Technical Reports Server (NTRS)
Millis, Marc G.
1987-01-01
The features, design, calibration, and testing of Lewis Research Center's acceleration display system for aircraft zero-gravity research are described. Specific circuit schematics and system specifications are included, as well as representative data traces from flown trajectories. Other observations learned from developing and using this system are mentioned where appropriate. The system, now a permanent part of the Lewis Learjet zero-gravity program, provides legible, concise, and necessary guidance information enabling pilots to routinely fly accurate zero-gravity trajectories. Regular use of this system resulted in improvements to the Learjet zero-gravity flight techniques, including a technique to minimize lateral accelerations. Lewis Gates Learjet trajectory data show that accelerations can be reliably sustained within 0.01 g for 5 consecutive seconds, within 0.02 g for 7 consecutive seconds, and within 0.04 g for up to 20 seconds. Lewis followed past practices of acceleration measurement, yet focused on the acceleration displays. Refinements based on flight experience included evolving the ranges, resolutions, and frequency responses to fit the pilot and the Learjet responses.
NASA Astrophysics Data System (ADS)
Sri Purnami, Agustina; Adi Widodo, Sri; Charitas Indra Prahmana, Rully
2018-01-01
This study aimed to assess the improvement in achievement and motivation in learning mathematics produced by Team Accelerated Instruction. The research method was an experimental pre-test/post-test design. The population was all class VIII junior high school students in Jogjakarta, and the sample was taken using a cluster random sampling technique. The instruments used in this research were a questionnaire and a test, and the data were analyzed with the Wilcoxon test. It is concluded that there was an increase in motivation and student achievement in class VIII on linear equation system material when the Team Accelerated Instruction model was used. Based on these results, Team Accelerated Instruction can be used as an alternative model in learning mathematics.
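For readers unfamiliar with the test, a paired Wilcoxon signed-rank comparison of pre- and post-test scores can be run as below; the scores are invented, not the study's data.

from scipy.stats import wilcoxon

pre  = [55, 60, 48, 71, 65, 58, 62, 50, 67, 54]   # hypothetical pre-test scores
post = [68, 64, 59, 78, 70, 66, 69, 61, 75, 60]   # hypothetical post-test scores
stat, p = wilcoxon(pre, post)
print(f"W = {stat}, p = {p:.4f}")                 # small p -> significant change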
OpenFOAM Modeling of Particle Heating and Acceleration in Cold Spraying
NASA Astrophysics Data System (ADS)
Leitz, K.-H.; O'Sullivan, M.; Plankensteiner, A.; Kestler, H.; Sigl, L. S.
2018-01-01
In cold spraying, a powder material is accelerated and heated in the gas flow of a supersonic nozzle to velocities and temperatures sufficient to obtain cohesion of the particles to a substrate. The deposition efficiency of the particles is significantly determined by their velocity and temperature. Particle velocity correlates with the amount of kinetic energy that is converted to plastic deformation and thermal heating, while the initial particle temperature significantly influences the mechanical properties of the particle. The velocity and temperature of the particles depend nonlinearly on the pressure and temperature of the gas at the nozzle entrance. In this contribution, a simulation model based on the reactingParcelFoam solver of OpenFOAM is presented and applied to an analysis of particle velocity and temperature in the cold spray nozzle. The model combines a compressible description of the gas flow in the nozzle with Lagrangian particle tracking. The predictions of the simulation model are verified against an analytical description of the gas flow and of particle acceleration and heating in the nozzle. Based on experimental data, the drag model of Plessis and Masliyah is identified as best suited for OpenFOAM modeling of particle heating and acceleration in cold spraying.
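The particle side of such a model reduces to integrating a drag equation along the nozzle axis. The sketch below uses a simple assumed gas-velocity profile and the standard Schiller-Naumann drag law rather than the Plessis-Masliyah model the paper identifies, so it is a structural illustration only:

import math

rho_p, d_p = 8900.0, 20e-6     # particle density (kg/m^3) and diameter (m), assumed
rho_g, mu_g = 1.0, 1.8e-5      # assumed gas density (kg/m^3) and viscosity (Pa s)

def gas_velocity(x):           # assumed axial gas-velocity profile (m/s)
    return 300.0 + 700.0 * min(x, 0.1) / 0.1

x, v, dt = 0.0, 0.0, 1e-7
while x < 0.2:                 # nozzle plus free jet, 0.2 m (assumed)
    u = gas_velocity(x)
    Re = rho_g * abs(u - v) * d_p / mu_g
    cd = 24.0 / max(Re, 1e-9) * (1.0 + 0.15 * Re**0.687)   # Schiller-Naumann drag
    a = 0.75 * cd * rho_g * abs(u - v) * (u - v) / (rho_p * d_p)
    v += a * dt
    x += v * dt
print(f"particle exit velocity ~ {v:.0f} m/s")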
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kazakevich, G.; Johnson, R.; Lebedev, V.
State-of-the-art high-current superconducting accelerators require efficient RF sources with fast dynamic phase and power control. This allows for compensation of the phase and amplitude deviations of the accelerating voltage in the superconducting RF (SRF) cavities caused by microphonics, etc. Efficient magnetron transmitters with fast phase and power control are attractive RF sources for this application. They are more cost effective than traditional RF sources such as klystrons, IOTs, and solid-state amplifiers used in large-scale accelerator projects. However, unlike traditional RF sources, controlled magnetrons operate as forced oscillators. Study of the impact of the controlling signal on magnetron stability, noise, and efficiency is therefore important. This paper discusses experiments with 2.45 GHz, 1 kW tubes and verifies our analytical model, which is based on the charge drift approximation.
NASA Astrophysics Data System (ADS)
Lin, Chiao-Chi; Lyu, Yadong; Yu, Li-Chieh; Gu, Xiaohong
2016-09-01
Channel cracking fragmentation testing and attenuated total reflectance Fourier transform infrared (ATR-FTIR) spectroscopy were utilized to study the mechanical and chemical degradation of a multilayered backsheet after outdoor and accelerated laboratory aging. A model sample of commercial PPE backsheet, namely polyethylene terephthalate/polyethylene terephthalate/ethylene vinyl acetate (PET/PET/EVA), was investigated. Outdoor aging was performed in Gaithersburg, Maryland, USA for up to 510 days, and complementary accelerated laboratory aging was conducted on the NIST (National Institute of Standards and Technology) SPHERE (Simulated Photodegradation via High Energy Radiant Exposure). Fracture energy, mode I stress intensity factor, and film strength were analyzed using an analytical model based on the channel cracking fragmentation test results. The correlation between mechanical and chemical degradation is discussed for both outdoor and accelerated laboratory aging. The results of this work provide a preliminary understanding of the failure mechanisms of backsheets after weathering, laying the groundwork for linking outdoor and indoor accelerated laboratory testing of multilayer photovoltaic backsheets.
NASA Astrophysics Data System (ADS)
Zarifi, Keyvan; Gershman, Alex B.
2006-12-01
We analyze the performance of two popular blind subspace-based signature waveform estimation techniques proposed by Wang and Poor and Buzzi and Poor for direct-sequence code division multiple-access (DS-CDMA) systems with unknown correlated noise. Using the first-order perturbation theory, analytical expressions for the mean-square error (MSE) of these algorithms are derived. We also obtain simple high SNR approximations of the MSE expressions which explicitly clarify how the performance of these techniques depends on the environmental parameters and how it is related to that of the conventional techniques that are based on the standard white noise assumption. Numerical examples further verify the consistency of the obtained analytical results with simulation results.
NASA Technical Reports Server (NTRS)
Weinberg, M. C.; Oronato, P. I.; Uhlmann, D. R.
1984-01-01
An analytical expression is used to calculate the time it takes for stationary bubbles of oxygen and carbon dioxide to dissolve from a glass melt. The technique is based on an analytical expression for the bubble radius as a function of time, with the consequences of surface tension included.
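The tech brief does not reproduce the expression itself; as a stand-in, an assumed diffusion-limited parabolic shrinkage law R(t)^2 = R0^2 - k t (surface tension neglected, both constants invented) gives the flavor of such a calculation:

R0 = 50e-6                # initial bubble radius in m (assumed)
k = 1e-12                 # shrinkage constant in m^2/s (assumed)
t_dissolve = R0**2 / k    # time at which R(t) reaches zero
print(f"dissolution time ~ {t_dissolve:.0f} s")   # 2500 s for these values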
NASA Astrophysics Data System (ADS)
Krishna, M. Veera; Swarnalathamma, B. V.
2017-07-01
We consider the transient MHD flow of a reactive second-grade fluid through a porous medium between two infinitely long horizontal parallel plates, when one of the plates is set into uniformly accelerated motion in the presence of a uniform transverse magnetic field under an Arrhenius reaction rate. The governing equations are solved by the Laplace transform technique. The effects of the pertinent parameters on the velocity and temperature are discussed in detail. The shear stress and Nusselt number at the plates are also obtained analytically and discussed computationally with reference to the governing parameters.
High-performance wavelet engine
NASA Astrophysics Data System (ADS)
Taylor, Fred J.; Mellot, Jonathon D.; Strom, Erik; Koren, Iztok; Lewis, Michael P.
1993-11-01
Wavelet processing has shown great promise for a variety of image and signal processing applications. Wavelets are also among the most computationally expensive techniques in signal processing. It is demonstrated that a wavelet engine constructed with residue number system arithmetic elements offers significant advantages over commercially available wavelet accelerators based upon conventional arithmetic elements. An analysis is presented predicting the dynamic range requirements of the reported residue-number-system-based wavelet accelerator.
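The appeal of residue number system (RNS) arithmetic for such an engine is that addition and multiplication act independently on each residue channel, with no carry propagation between channels. A minimal sketch with illustrative moduli:

from math import prod

MODULI = (13, 15, 16)                 # pairwise coprime; dynamic range = 3120

def to_rns(x):
    return tuple(x % m for m in MODULI)

def from_rns(residues):               # Chinese remainder theorem reconstruction
    M = prod(MODULI)
    x = 0
    for r, m in zip(residues, MODULI):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)  # modular inverse (Python 3.8+)
    return x % M

a, b = to_rns(97), to_rns(23)
s = tuple((i + j) % m for i, j, m in zip(a, b, MODULI))   # carry-free addition
p = tuple((i * j) % m for i, j, m in zip(a, b, MODULI))   # carry-free multiplication
print(from_rns(s), from_rns(p))       # 120 2231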
NASA Astrophysics Data System (ADS)
Trivedi, T.; Patel, Shiv P.; Chandra, P.; Bajpai, P. K.
A 3.0 MV low-energy ion accelerator (Pelletron 9 SDH 4, NEC, USA) has recently been installed as the National Centre for Accelerator-based Research (NCAR) at the Department of Pure & Applied Physics, Guru Ghasidas Vishwavidyalaya, Bilaspur, India. The facility is aimed at interdisciplinary research using ion beams, with a high-current TORVIS source (for H and He ions) and a SNICS source (for heavy ions). The facility includes two dedicated beam lines, one for ion beam analysis (IBA) and the other for ion implantation/irradiation, on the +20° and -10° ports of the switching magnet, respectively. Ions are injected at 60 kV into the accelerator tank, where, after stripping, positively charged ions are accelerated up to 29 MeV for Au. The installed ion beam analysis techniques include RBS, PIXE, ERDA, and channelling.
O'Shea, B. D.; Andonian, G.; Barber, S. K.; Fitzmorris, K. L.; Hakimi, S.; Harrison, J.; Hoang, P. D.; Hogan, M. J.; Naranjo, B.; Williams, O. B.; Yakimenko, V.; Rosenzweig, J. B.
2016-01-01
There is urgent need to develop new acceleration techniques capable of exceeding gigaelectron-volt-per-metre (GeV m−1) gradients in order to enable future generations of both light sources and high-energy physics experiments. To address this need, short-wavelength accelerators based on wakefields, where an intense relativistic electron beam radiates the demanded fields directly into the accelerator structure or medium, are currently under intense investigation. One such wakefield-based accelerator, the dielectric wakefield accelerator, uses a dielectric-lined waveguide to support a wakefield used for acceleration. Here we show gradients of 1.347±0.020 GeV m−1 using a dielectric wakefield accelerator of 15 cm length, with sub-millimetre transverse aperture, by measuring changes of the kinetic state of relativistic electron beams. We follow this measurement by demonstrating accelerating gradients of 320±17 MeV m−1. Both measurements improve on previous measurements by an order of magnitude and show promise for dielectric wakefield accelerators as sources of high-energy electrons. PMID:27624348
Considerations on the Use of Custom Accelerators for Big Data Analytics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Castellana, Vito G.; Tumeo, Antonino; Minutoli, Marco
Accelerators, including graphics processing units (GPUs) for general-purpose computation and many-core designs with wide vector units (e.g., Intel Phi), have become a common component of many high performance clusters. The appearance of more stable and reliable tools that can automatically convert code written in high-level specifications with annotations (such as C or C++) to hardware description languages (High-Level Synthesis - HLS) is also setting the stage for a broader use of reconfigurable devices (e.g., Field Programmable Gate Arrays - FPGAs) in high performance systems for the implementation of custom accelerators, helped by the fact that new processors include advanced cache-coherent interconnects for these components. In this chapter, we briefly survey the status of the use of accelerators in high performance systems targeted at big data analytics applications. We argue that, although progress in the use of accelerators for this class of applications has been significant, differently from scientific simulations there still are gaps to close. This is particularly true for the "irregular" behaviors exhibited by no-SQL, graph databases. We focus our attention on the limits of HLS tools for data analytics and graph methods, and discuss a new architectural template that better fits the requirements of this class of applications. We validate the new architectural template by modifying the Graph Engine for Multithreaded System (GEMS) framework to support accelerators generated with such a methodology, and by testing with queries coming from the Lehigh University Benchmark (LUBM). The architectural template enables better support for the task- and memory-level parallelism present in graph methods through a new control model and an enhanced memory interface. We show that our solution allows generating parallel accelerators, providing speedups with respect to conventional HLS flows. We finally draw conclusions and present a perspective on the use of reconfigurable devices and design automation tools for data analytics.
New isotope technologies in environmental physics
NASA Astrophysics Data System (ADS)
Povinec, P. P.; Betti, M.; Jull, A. J. T.; Vojtyla, P.
2008-02-01
As the levels of radionuclides observed at present in the environment are very low, highly sensitive analytical systems are required for carrying out environmental investigations. We review recent progress in low-level counting techniques in both the radiometric and mass spectrometry sectors, with emphasis on underground laboratories, Monte Carlo (GEANT) simulation of the background of HPGe detectors operating in various configurations, secondary ionisation mass spectrometry, and accelerator mass spectrometry. Applications of radiometric and mass spectrometry techniques in radioecology and climate change studies are presented and discussed as well. The review should help orient readers to recent developments in the field of low-level counting and spectrometry, advise on construction principles for underground laboratories, and offer criteria for choosing low- or high-energy mass spectrometers for environmental investigations.
Modeling magnetic field amplification in nonlinear diffusive shock acceleration
NASA Astrophysics Data System (ADS)
Vladimirov, Andrey
2009-02-01
This research was motivated by the recent observations indicating very strong magnetic fields at some supernova remnant shocks, which suggests in-situ generation of magnetic turbulence. The dissertation presents a numerical model of collisionless shocks with strong amplification of stochastic magnetic fields, self-consistently coupled to efficient shock acceleration of charged particles. Based on a Monte Carlo simulation of particle transport and acceleration in nonlinear shocks, the model describes magnetic field amplification using the state-of-the-art analytic models of instabilities in magnetized plasmas in the presence of non-thermal particle streaming. The results help one understand the complex nonlinear connections between the thermal plasma, the accelerated particles and the stochastic magnetic fields in strong collisionless shocks. Also, predictions regarding the efficiency of particle acceleration and magnetic field amplification, the impact of magnetic field amplification on the maximum energy of accelerated particles, and the compression and heating of the thermal plasma by the shocks are presented. Particle distribution functions and turbulence spectra derived with this model can be used to calculate the emission of observable nonthermal radiation.
NASA Technical Reports Server (NTRS)
Hess, Ronald A.
1999-01-01
This paper presents an analytical and experimental methodology for studying flight simulator fidelity. The original task was a rotorcraft bob-up/down maneuver in which vertical acceleration constituted the motion cue. The task considered here is a side-step maneuver that differs from the bob-up in one important way: both roll and lateral acceleration cues are available to the pilot. It has been communicated to the author that in some Vertical Motion Simulator (VMS) studies, the lateral acceleration cue has been found to be the most important. It is of some interest to hypothesize how this motion cue, associated with "outer-loop" lateral translation, fits into the modeling procedure in which only "inner-loop" motion cues were considered. This Note is an attempt at formulating such a hypothesis and analytically comparing a large-motion simulator, e.g., the VMS, with a small-motion simulator, e.g., a hexapod.
Byliński, Hubert; Gębicki, Jacek; Dymerski, Tomasz; Namieśnik, Jacek
2017-07-04
One of the major sources of error in chemical analysis with conventional, established analytical techniques is the possibility of losing part of the analytes during the sample preparation stage. Unfortunately, this sample preparation stage is usually required to achieve adequate analytical sensitivity and precision. Direct techniques have helped to shorten or even bypass the sample preparation stage; in this review, we comment on some of the new mass-spectrometry-based direct techniques. The study presents information about measurement techniques using mass spectrometry that allow direct sample analysis, without sample preparation or with only limited pre-concentration steps. The MALDI-MS, PTR-MS, SIFT-MS, and DESI-MS techniques are discussed. These solutions have numerous applications in different fields of human activity owing to their favorable properties. The advantages and disadvantages of these techniques are presented, along with trends in the development of direct analysis using them.
NASA Technical Reports Server (NTRS)
Greene, William H.
1989-01-01
A study has been performed focusing on the calculation of sensitivities of displacements, velocities, accelerations, and stresses in linear, structural, transient response problems. One significant goal was to develop and evaluate sensitivity calculation techniques suitable for large-order finite element analyses. Accordingly, approximation vectors such as vibration mode shapes are used to reduce the dimensionality of the finite element model. Much of the research focused on the accuracy of both response quantities and sensitivities as a function of the number of vectors used. Two types of sensitivity calculation techniques were developed and evaluated. The first is an overall finite difference method, in which the analysis is repeated for perturbed designs. The second is termed semianalytical because it involves direct, analytical differentiation of the equations of motion, with finite difference approximation of the coefficient matrices. To be computationally practical in large-order problems, the overall finite difference method must use the approximation vectors from the original design in the analyses of the perturbed models.
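The two routes can be contrasted on a single-degree-of-freedom oscillator m x'' + k x = f(t): the overall finite difference repeats the transient analysis at a perturbed stiffness, while the semianalytical route integrates the differentiated equation m s'' + k s = -x for s = dx/dk. All parameters below are illustrative.

import numpy as np

m, k, dt, n = 1.0, 100.0, 1e-3, 5000

def integrate(kval, rhs):
    """March m*y'' + kval*y = rhs[i] with semi-implicit Euler."""
    y = np.zeros(n)
    v = 0.0
    for i in range(1, n):
        v += dt * (rhs[i - 1] - kval * y[i - 1]) / m
        y[i] = y[i - 1] + dt * v
    return y

load = np.ones(n)                       # step load f(t) = 1
x = integrate(k, load)                  # primal transient response

dk = 1e-4 * k                           # overall finite difference: rerun analysis
s_fd = (integrate(k + dk, load) - x) / dk

s_sa = integrate(k, -x)                 # semianalytical: differentiated equation

print(np.max(np.abs(s_fd - s_sa)))      # small: the two estimates agree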
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kazakevich, G.; Johnson, R.; Lebedev, V.
A simplified analytical model of the resonant interaction of the beam of Larmor electrons drifting in the crossed constant fields of a magnetron with a synchronous wave providing phase grouping of the drifting charge was developed to optimize the parameters of the RF resonant injected signal driving the magnetrons, for management of the phase and power of RF sources at the rate required for superconducting high-current accelerators. The model, which considers the impact of the RF resonant signal injected into the magnetron on the operation of the injection-locked tube, substantiates the recently developed method of fast power control of magnetrons in a range up to 10 dB at the highest generation efficiency, with low noise, precise stability of the carrier frequency, and the possibility of wideband phase control. Experiments with continuous-wave 2.45 GHz, 1 kW microwave oven magnetrons have verified the correspondence of the behavior of these tubes to the analytical model. A proof of principle of the novel method of power control in magnetrons, based on the developed model, was demonstrated in the experiments. The method is attractive for high-current superconducting RF accelerators. This study also discusses vector methods of power control at the rates required for superconducting accelerators, the impact of the RF resonant signal injected into the magnetron on the rate of phase control of injection-locked tubes, and a conceptual scheme of a magnetron transmitter with highest efficiency for high-current accelerators.
A Meta-Analytic Review of School-Based Prevention for Cannabis Use
ERIC Educational Resources Information Center
Porath-Waller, Amy J.; Beasley, Erin; Beirness, Douglas J.
2010-01-01
This investigation used meta-analytic techniques to evaluate the effectiveness of school-based prevention programming in reducing cannabis use among youth aged 12 to 19. It summarized the results from 15 studies published in peer-reviewed journals since 1999 and identified features that influenced program effectiveness. The results from the set of…
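The core meta-analytic operation is inverse-variance pooling of per-study effect sizes; the sketch below pools three invented standardized mean differences under a fixed-effect model, purely to show the mechanics (not the review's data).

import math

studies = [(-0.30, 0.010), (-0.12, 0.005), (-0.45, 0.020)]  # (effect d, variance), assumed
weights = [1.0 / var for _, var in studies]
pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
se = math.sqrt(1.0 / sum(weights))
print(f"pooled d = {pooled:.3f}, 95% CI +/- {1.96 * se:.3f}")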
NASA Astrophysics Data System (ADS)
Wilson, Robert H.; Vishwanath, Karthik; Mycek, Mary-Ann
2009-02-01
Monte Carlo (MC) simulations are considered the "gold standard" for mathematical description of photon transport in tissue, but they can require large computation times. Therefore, it is important to develop simple and efficient methods for accelerating MC simulations, especially when a large "library" of related simulations is needed. A semi-analytical method involving MC simulations and a path-integral (PI) based scaling technique generated time-resolved reflectance curves from layered tissue models. First, a zero-absorption MC simulation was run for a tissue model with fixed scattering properties in each layer. Then, a closed-form expression for the average classical path of a photon in tissue was used to determine the percentage of time that the photon spent in each layer, to create a weighted Beer-Lambert factor to scale the time-resolved reflectance of the simulated zero-absorption tissue model. This method is a unique alternative to other scaling techniques in that it does not require the path length or number of collisions of each photon to be stored during the initial simulation. Effects of various layer thicknesses and absorption and scattering coefficients on the accuracy of the method will be discussed.
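In code, the scaling step amounts to one multiplication of the stored zero-absorption curve by a time-dependent Beer-Lambert factor built from the per-layer time fractions. Everything below (the curve, the fractions, the optical properties) is a placeholder standing in for the MC output and the path-integral estimate.

import numpy as np

t = np.linspace(0.05e-9, 2e-9, 200)        # time after the input pulse (s)
R0 = t**-1.5 * np.exp(-t / 0.5e-9)         # stand-in zero-absorption reflectance
v = 3e8 / 1.4                              # photon speed in tissue, n = 1.4 (m/s)

mu_a = np.array([5.0, 15.0])               # absorption per layer (1/m), assumed
f = np.array([0.7, 0.3])                   # fraction of time in each layer, assumed

R = R0 * np.exp(-np.dot(f, mu_a) * v * t)  # weighted Beer-Lambert scaling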
Anandakrishnan, Ramu; Scogland, Tom R. W.; Fenley, Andrew T.; Gordon, John C.; Feng, Wu-chun; Onufriev, Alexey V.
2010-01-01
Tools that compute and visualize biomolecular electrostatic surface potential have been used extensively for studying biomolecular function. However, determining the surface potential for large biomolecules on a typical desktop computer can take days or longer using currently available tools and methods. Two commonly used techniques to speed up these types of electrostatic computations are approximations based on multi-scale coarse-graining and parallelization across multiple processors. This paper demonstrates that for the computation of electrostatic surface potential, these two techniques can be combined to deliver significantly greater speed-up than either one separately, something that is in general not always possible. Specifically, the electrostatic potential computation, using an analytical linearized Poisson Boltzmann (ALPB) method, is approximated using the hierarchical charge partitioning (HCP) multiscale method, and parallelized on an ATI Radeon 4870 graphical processing unit (GPU). The implementation delivers a combined 934-fold speed-up for a 476,040 atom viral capsid, compared to an equivalent non-parallel implementation on an Intel E6550 CPU without the approximation. This speed-up is significantly greater than the 42-fold speed-up for the HCP approximation alone or the 182-fold speed-up for the GPU alone. PMID:20452792
General Analytical Schemes for the Characterization of Pectin-Based Edible Gelled Systems
Haghighi, Maryam; Rezaei, Karamatollah
2012-01-01
Pectin-based gelled systems have gained increasing attention for the design of newly developed food products. For this reason, the characterization of such formulas is a necessity in order to present scientific data and to introduce an appropriate finished product to the industry. Various analytical techniques are available for the evaluation of the systems formulated on the basis of pectin and the designed gel. In this paper, general analytical approaches for the characterization of pectin-based gelled systems were categorized into several subsections including physicochemical analysis, visual observation, textural/rheological measurement, microstructural image characterization, and psychorheological evaluation. Three-dimensional trials to assess correlations among microstructure, texture, and taste were also discussed. Practical examples of advanced objective techniques including experimental setups for small and large deformation rheological measurements and microstructural image analysis were presented in more details. PMID:22645484
Algal Biomass Analysis by Laser-Based Analytical Techniques—A Review
Pořízka, Pavel; Prochazková, Petra; Prochazka, David; Sládková, Lucia; Novotný, Jan; Petrilak, Michal; Brada, Michal; Samek, Ota; Pilát, Zdeněk; Zemánek, Pavel; Adam, Vojtěch; Kizek, René; Novotný, Karel; Kaiser, Jozef
2014-01-01
Algal biomass, represented mainly by commercially grown algal strains, has recently found many potential applications in various fields of interest. Its utilization has been found advantageous in the fields of bioremediation, biofuel production, and the food industry. This paper reviews recent developments in the analysis of algal biomass, with the main focus on Laser-Induced Breakdown Spectroscopy and Raman spectroscopy, and partly on Laser-Ablation Inductively Coupled Plasma techniques. The advantages of the selected laser-based analytical techniques are revealed and their fields of use are discussed in detail. PMID:25251409
Plasma production for electron acceleration by resonant plasma wave
NASA Astrophysics Data System (ADS)
Anania, M. P.; Biagioni, A.; Chiadroni, E.; Cianchi, A.; Croia, M.; Curcio, A.; Di Giovenale, D.; Di Pirro, G. P.; Filippi, F.; Ghigo, A.; Lollo, V.; Pella, S.; Pompili, R.; Romeo, S.; Ferrario, M.
2016-09-01
Plasma wakefield acceleration is the most promising acceleration technique known nowadays, able to provide very high accelerating fields (10-100 GV/m) and thereby enabling acceleration of electrons to GeV energies in a few centimeters. However, the quality of the electron bunches accelerated with this technique is still not comparable with that of conventional accelerators (large energy spread, low repetition rate, and large emittance); radiofrequency-based accelerators, in turn, are limited in accelerating field (10-100 MV/m) and therefore require hundreds of meters to reach GeV energies, but can provide very bright electron bunches. Combining high-brightness electron bunches from conventional accelerators with the high accelerating fields reachable in plasmas could be a good compromise, allowing further acceleration of high-brightness electron bunches from a LINAC while preserving electron beam quality. Following the idea of resonant plasma wave excitation driven by a train of short bunches, we have started to study the plasma requirements for SPARC_LAB (Ferrario et al., 2013 [1]). In particular, we focus here on the hydrogen plasma discharge, especially on the theoretical and numerical estimates of the ionization process, which are very useful for designing the discharge circuit and for evaluating the current that must be supplied to the gas in order to reach full ionization. Finally, the simulated current supplied to the gas is compared to that measured experimentally.
On-Chip Laser-Power Delivery System for Dielectric Laser Accelerators
NASA Astrophysics Data System (ADS)
Hughes, Tyler W.; Tan, Si; Zhao, Zhexin; Sapra, Neil V.; Leedle, Kenneth J.; Deng, Huiyang; Miao, Yu; Black, Dylan S.; Solgaard, Olav; Harris, James S.; Vuckovic, Jelena; Byer, Robert L.; Fan, Shanhui; England, R. Joel; Lee, Yun Jo; Qi, Minghao
2018-05-01
We propose an on-chip optical-power delivery system for dielectric laser accelerators based on a fractal "tree-network" dielectric waveguide geometry. This system replaces experimentally demanding free-space manipulations of the driving laser beam with chip-integrated techniques based on precise nanofabrication, enabling access to orders-of-magnitude increases in the interaction length and total energy gain for these miniature accelerators. Based on computational modeling, in the relativistic regime, our laser delivery system is estimated to provide 21 keV of energy gain over an acceleration length of 192 μ m with a single laser input, corresponding to a 108-MV/m acceleration gradient. The system may achieve 1 MeV of energy gain over a distance of less than 1 cm by sequentially illuminating 49 identical structures. These findings are verified by detailed numerical simulation and modeling of the subcomponents, and we provide a discussion of the main constraints, challenges, and relevant parameters with regard to on-chip laser coupling for dielectric laser accelerators.
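The quoted figures are mutually consistent, as a two-line check shows (numbers taken directly from the abstract):

gain_eV, length_m = 21e3, 192e-6
print(gain_eV / length_m / 1e6)   # ~109 MV/m, matching the stated ~108 MV/m gradient
print(1e6 / gain_eV)              # ~47.6 stages, hence 49 identical structures for 1 MeV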
DOE Office of Scientific and Technical Information (OSTI.GOV)
Synovec, R.E.; Johnson, E.L.; Bahowick, T.J.
1990-08-01
This paper describes a new technique for data analysis in chromatography, based on taking the point-by-point ratio of sequential chromatograms that have been baseline corrected. This ratio chromatogram provides a robust means for the identification and quantitation of analytes. In addition, the appearance of an interferent is made highly visible, even when it coelutes with desired analytes. For quantitative analysis, the region of the ratio chromatogram corresponding to the pure elution of an analyte is identified and used to calculate a ratio value equal to the ratio of the concentrations of the analyte in sequential injections. For the ratio value calculation, a variance-weighted average is used, which compensates for the varying signal-to-noise ratio. This ratio value, or equivalently the percent change in concentration, is the basis of a chromatographic standard addition method and of an algorithm to monitor analyte concentration in a process stream. In the case of overlapped peaks, a spiking procedure is used to calculate both the original concentration of an analyte and its signal contribution to the original chromatogram. Thus, quantitation and curve resolution may be performed simultaneously, without peak modeling or curve fitting. These concepts are demonstrated using data from ion chromatography, but the technique should be applicable to all chromatographic techniques.
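A sketch of the ratio-chromatogram computation on synthetic Gaussian peaks, with the variance weighting approximated by the squared signal (a rough signal-to-noise proxy; the paper's exact weighting is not reproduced here):

import numpy as np

t = np.linspace(0.0, 10.0, 1000)
peak = np.exp(-0.5 * ((t - 5.0) / 0.2) ** 2)
c1 = 1.0 * peak + np.random.normal(0, 0.005, t.size)   # first injection
c2 = 1.3 * peak + np.random.normal(0, 0.005, t.size)   # second injection, +30% analyte

window = (t > 4.5) & (t < 5.5)             # pure-elution region of the analyte
ratio = c2[window] / c1[window]            # point-by-point ratio chromatogram
w = c1[window] ** 2                        # variance-style weights (SNR proxy)
ratio_value = np.sum(w * ratio) / np.sum(w)
print(f"concentration change ~ {100 * (ratio_value - 1):.1f}%")   # ~30%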
Isotope-ratio-monitoring gas chromatography-mass spectrometry: methods for isotopic calibration
NASA Technical Reports Server (NTRS)
Merritt, D. A.; Brand, W. A.; Hayes, J. M.
1994-01-01
In trial analyses of a series of n-alkanes, precise determinations of 13C contents were based on isotopic standards introduced by five different techniques, and the results were compared. Specifically, organic-compound standards were coinjected with the analytes and carried through chromatography and combustion with them; or CO2 was supplied from a conventional inlet and mixed with the analyte in the ion source; or CO2 was supplied from an auxiliary mixing volume and transmitted to the source without interruption of the analyte stream. Additionally, two techniques were investigated in which the analyte stream was diverted and CO2 standards were placed on a near-zero background. All methods provided accurate results. Where applicable, methods not involving interruption of the analyte stream provided the highest performance (σ = 0.00006 at.% 13C, or 0.06‰, for 250 pmol C as CO2 reaching the ion source), but great care was required. Techniques involving diversion of the analyte stream were immune to interference from coeluting sample components and still provided high precision (0.0001 ≤ σ ≤ 0.0002 at.%, or 0.1 ≤ σ ≤ 0.2‰).
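To connect the reported units: a measured delta-13C value against a reference of known isotope ratio converts to atom percent 13C as below (the VPDB reference ratio is standard; the delta value is illustrative).

R_std = 0.0112372            # 13C/12C of the VPDB reference
delta = -27.0                # sample delta-13C vs. VPDB in permil (assumed)
R = R_std * (1.0 + delta / 1000.0)
at_percent = 100.0 * R / (1.0 + R)   # atom percent 13C
print(f"{at_percent:.4f} at.%")      # ~1.08 at.% for these values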
Porter; Eastman; Pace; Bradley
2000-09-01
Polymer-based materials can be incorporated as the active sensing elements in chemiresistor devices. Most of these devices take advantage of the fact that certain polymers will swell when exposed to gaseous analytes. To measure this response, a conducting material such as carbon black is incorporated within the nonconducting polymer matrix. In response to analytes, polymer swelling results in a measurable change in the conductivity of the polymer/carbon composite material. Arrays of these sensors may be used in conjunction with pattern recognition techniques for purposes of analyte recognition and quantification. We have used the technique of scanning force microscopy (SFM) to investigate microstructural changes in carbon-polymer composites formed from the polymers poly(isobutylene) (PIB), poly(vinyl alcohol) (PVA), and poly(ethylene-vinyl acetate) (PEVA) when exposed to the analytes hexane, toluene, water, ethanol, and acetone. Using phase-contrast imaging (PI), changes in the carbon nanoparticle distribution on the surface of the polymer matrix are measured as the polymers are exposed to the analytes in the vapor phase. In some but not all cases, the changes were reversible (at the scale of the SFM measurements) upon removal of the analyte vapor. In this paper, we also describe a new type of microsensor based on piezoresistive microcantilever technology. With these new devices, polymeric volume changes accompanying exposure to analyte vapor are measured directly by a piezoresistive microcantilever in direct contact with the polymer. These devices may offer a number of advantages over standard chemiresistor-based sensors.
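A toy version of the array-plus-pattern-recognition step: normalized response vectors from a three-polymer array are classified by nearest centroid. The response values are invented for illustration.

import numpy as np

train = {   # analyte -> normalized (PIB, PVA, PEVA) response patterns, assumed
    "hexane":  [[0.80, 0.05, 0.60], [0.78, 0.07, 0.62]],
    "ethanol": [[0.10, 0.70, 0.20], [0.12, 0.68, 0.22]],
}
centroids = {a: np.mean(v, axis=0) for a, v in train.items()}
centroids = {a: c / np.linalg.norm(c) for a, c in centroids.items()}

def classify(response):
    r = np.asarray(response, dtype=float)
    r /= np.linalg.norm(r)
    return min(centroids, key=lambda a: np.linalg.norm(centroids[a] - r))

print(classify([0.75, 0.08, 0.55]))   # -> hexane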
NASA Astrophysics Data System (ADS)
Del McDaniel, Floyd; Doyle, Barney L.
Jerry Duggan was an experimental MeV-accelerator-based nuclear and atomic physicist who, over the past few decades, played a key role in the important transition of this field from basic to applied physics. His fascination for and application of particle accelerators spanned almost 60 years, and led to important discoveries in the following fields: accelerator-based analysis (accelerator mass spectrometry, ion beam techniques, nuclear-based analysis, nuclear microprobes, neutron techniques); accelerator facilities, stewardship, and technology development; accelerator applications (industrial, medical, security and defense, and teaching with accelerators); applied research with accelerators (advanced synthesis and modification, radiation effects, nanosciences and technology); physics research (atomic and molecular physics, and nuclear physics); and many other areas and applications. Here we describe Jerry’s physics education at the University of North Texas (B. S. and M. S.) and Louisiana State University (Ph.D.). We also discuss his research at UNT, LSU, and Oak Ridge National Laboratory, his involvement with the industrial aspects of accelerators, and his impact on many graduate students, colleagues at UNT and other universities, national laboratories, and industry and acquaintances around the world. Along the way, we found it hard not to also talk about his love of family, sports, fishing, and other recreational activities. While these were significant accomplishments in his life, Jerry will be most remembered for his insight in starting and his industry in maintaining and growing what became one of the most diverse accelerator conferences in the world — the International Conference on the Application of Accelerators in Research and Industry, or what we all know as CAARI. Through this conference, which he ran almost single-handed for decades, Jerry came to know, and became well known by, literally thousands of atomic and nuclear physicists, accelerator engineers and vendors, medical doctors, cultural heritage experts... the list goes on and on. While thousands of his acquaintances already miss Jerry, this is being felt most by his family and us (B.D. and F.D.M).
Big Data Analytics with Datalog Queries on Spark
Shkapsky, Alexander; Yang, Mohan; Interlandi, Matteo; Chiu, Hsuan; Condie, Tyson; Zaniolo, Carlo
2017-01-01
There is great interest in exploiting the opportunity provided by cloud computing platforms for large-scale analytics. Among these platforms, Apache Spark is growing in popularity for machine learning and graph analytics. Developing efficient complex analytics in Spark requires deep understanding of both the algorithm at hand and the Spark API or subsystem APIs (e.g., Spark SQL, GraphX). Our BigDatalog system addresses the problem by providing concise declarative specification of complex queries amenable to efficient evaluation. Towards this goal, we propose compilation and optimization techniques that tackle the important problem of efficiently supporting recursion in Spark. We perform an experimental comparison with other state-of-the-art large-scale Datalog systems and verify the efficacy of our techniques and effectiveness of Spark in supporting Datalog-based analytics. PMID:28626296
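As a concrete illustration of the recursion at the heart of such queries, the sketch below evaluates the classic transitive-closure Datalog program semi-naively in plain Python. It is an illustrative toy under our own naming, not BigDatalog's API or its Spark execution plan; the delta-join it performs is the core optimization a recursive-query engine applies on Spark as well.

```python
# Semi-naive evaluation of:  tc(X,Y) <- edge(X,Y).  tc(X,Y) <- tc(X,Z), edge(Z,Y).
# Each iteration joins only the newly derived facts (the "delta") against edge.

def transitive_closure(edges):
    tc = set(edges)          # all facts derived so far
    delta = set(edges)       # facts new in the last iteration
    while delta:
        # join delta with edges: (x,z) in delta, (z,y) in edges -> (x,y)
        derived = {(x, y) for (x, z) in delta for (z2, y) in edges if z == z2}
        delta = derived - tc  # keep only genuinely new facts
        tc |= delta
    return tc

print(sorted(transitive_closure({(1, 2), (2, 3), (3, 4)})))
# [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
```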
Theory, simulation and experiments for precise deflection control of radiotherapy electron beams.
Figueroa, R; Leiva, J; Moncada, R; Rojas, L; Santibáñez, M; Valente, M; Velásquez, J; Young, H; Zelada, G; Yáñez, R; Guillen, Y
2018-03-08
Conventional radiotherapy is delivered mainly by linear accelerators. Although linear accelerators provide dual (electron/photon) beam modalities, both are intrinsically produced by a megavoltage electron current. Modern radiotherapy treatment techniques are based on suitable devices inserted into or attached to conventional linear accelerators, so precise control of the delivered beam becomes a key issue. This work presents an integral description of electron beam deflection control as required for a novel radiotherapy technique based on convergent photon beam production. Theoretical and Monte Carlo approaches were initially used to design and optimize the device's components. Dedicated instrumentation was then developed for experimental verification of the electron beam deflection produced by the designed magnets. Both Monte Carlo simulations and experimental results support the reliability of the electrodynamic models used to predict megavoltage electron beam control. Copyright © 2018 Elsevier Ltd. All rights reserved.
GraphReduce: Large-Scale Graph Analytics on Accelerator-Based HPC Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sengupta, Dipanjan; Agarwal, Kapil; Song, Shuaiwen
2015-09-30
Recent work on real-world graph analytics has sought to leverage the massive amount of parallelism offered by GPU devices, but challenges remain due to the inherent irregularity of graph algorithms and limitations in GPU-resident memory for storing large graphs. We present GraphReduce, a highly efficient and scalable GPU-based framework that operates on graphs that exceed the device's internal memory capacity. GraphReduce adopts a combination of both edge- and vertex-centric implementations of the Gather-Apply-Scatter programming model and operates on multiple asynchronous GPU streams to fully exploit the high degrees of parallelism in GPUs with efficient graph data movement between the host and the device.
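The Gather-Apply-Scatter model referred to above can be illustrated with a minimal single-machine PageRank sketch. This is our own simplification in Python/NumPy; GraphReduce's graph partitioning, GPU streams, and out-of-core data movement are not shown.

```python
import numpy as np

# Minimal single-machine sketch of Gather-Apply-Scatter (GAS).
# Example vertex program: PageRank on an edge list (src -> dst).

def pagerank_gas(src, dst, n, d=0.85, iters=20):
    rank = np.full(n, 1.0 / n)
    out_deg = np.bincount(src, minlength=n).clip(min=1)
    for _ in range(iters):
        # Gather: each vertex accumulates contributions along its in-edges
        contrib = rank[src] / out_deg[src]
        gathered = np.bincount(dst, weights=contrib, minlength=n)
        # Apply: update vertex state from the gathered sum
        rank = (1 - d) / n + d * gathered
        # Scatter: neighbors read the updated rank next iteration,
        # so the scatter phase is implicit in reusing `rank`.
    return rank

src = np.array([0, 0, 1, 2])   # edges 0->1, 0->2, 1->2, 2->0
dst = np.array([1, 2, 2, 0])
print(pagerank_gas(src, dst, 3))
```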
Residual acceleration data on IML-1: Development of a data reduction and dissemination plan
NASA Technical Reports Server (NTRS)
Rogers, Melissa J. B.; Alexander, J. Iwan D.; Wolf, Randy
1992-01-01
The main thrust of our work in the third year of contract NAG8-759 was the development and analysis of various data processing techniques that may be applicable to residual acceleration data. Our goal is the development of a data processing guide that low gravity principal investigators can use to assess their need for accelerometer data and then formulate an acceleration data analysis strategy. The work focused on the flight of the first International Microgravity Laboratory (IML-1) mission. We are also developing a data base management system to handle large quantities of residual acceleration data. This type of system should be an integral tool in the detailed analysis of accelerometer data. The system will manage a large graphics data base in the support of supervised and unsupervised pattern recognition. The goal of the pattern recognition phase is to identify specific classes of accelerations so that these classes can be easily recognized in any data base. The data base management system is being tested on the Spacelab 3 (SL3) residual acceleration data.
Kamesh Iyer, Srikant; Tasdizen, Tolga; Burgon, Nathan; Kholmovski, Eugene; Marrouche, Nassir; Adluru, Ganesh; DiBella, Edward
2016-09-01
Current late gadolinium enhancement (LGE) imaging of left atrial (LA) scar or fibrosis is relatively slow and requires 5-15 min to acquire an undersampled (R=1.7) 3D navigated dataset. The GeneRalized Autocalibrating Partially Parallel Acquisitions (GRAPPA)-based parallel imaging method is the current clinical standard for accelerating 3D LGE imaging of the LA and permits an acceleration factor of ~R=1.7. Two compressed sensing (CS) methods have been developed to achieve higher acceleration factors: a patch-based collaborative filtering technique tested with acceleration factor R~3, and a technique that uses a 3D radial stack-of-stars acquisition pattern (R~1.8) with a 3D total variation constraint. The long reconstruction time of these CS methods makes them unwieldy to use, especially the patch-based collaborative filtering technique. In addition, the effect of CS techniques on the quantification of the percentage of scar/fibrosis is not known. We sought to develop a practical compressed sensing method for imaging the LA at high acceleration factors. In order to develop a clinically viable method with a short reconstruction time, a Split Bregman (SB) reconstruction method with 3D total variation (TV) constraints was developed and implemented. The method was tested on 8 atrial fibrillation patients (4 pre-ablation and 4 post-ablation datasets). Blur metric, normalized mean squared error and peak signal-to-noise ratio were used as metrics to analyze the quality of the reconstructed images. Quantification of the extent of LGE was performed on the undersampled images and compared with the fully sampled images. Quantification of scar from post-ablation datasets and quantification of fibrosis from pre-ablation datasets showed that acceleration factors up to R~3.5 gave good 3D LGE images of the LA wall, using 3D TV constraints and the constrained SB method. This corresponds to halving the scan time compared to currently used GRAPPA methods. Reconstruction of 3D LGE images using the SB method was over 20 times faster than standard gradient descent methods. Copyright © 2016 Elsevier Inc. All rights reserved.
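For readers unfamiliar with the Split Bregman structure, the following is a minimal sketch of SB iterations for anisotropic 2D TV denoising, assuming periodic boundaries and illustrative parameter choices (mu, lam, step). The paper's actual problem is a 3D, undersampled k-space reconstruction; only the SB skeleton (quadratic u-update, shrinkage d-update, Bregman variable b) is kept here.

```python
import numpy as np

def grad(u):
    return np.roll(u, -1, 0) - u, np.roll(u, -1, 1) - u

def div(px, py):  # negative adjoint of grad (periodic boundaries)
    return (px - np.roll(px, 1, 0)) + (py - np.roll(py, 1, 1))

def shrink(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def sb_tv_denoise(f, mu=10.0, lam=1.0, outer=60, inner=3, step=0.05):
    u = f.copy()
    dx, dy = np.zeros_like(f), np.zeros_like(f)
    bx, by = np.zeros_like(f), np.zeros_like(f)
    for _ in range(outer):
        for _ in range(inner):  # a few gradient steps approximate the u-solve
            gx, gy = grad(u)
            g = mu * (u - f) - lam * div(gx + bx - dx, gy + by - dy)
            u -= step * g
        gx, gy = grad(u)
        dx, dy = shrink(gx + bx, 1.0 / lam), shrink(gy + by, 1.0 / lam)
        bx += gx - dx
        by += gy - dy
    return u

noisy = np.kron(np.eye(2), np.ones((16, 16))) + 0.2 * np.random.randn(32, 32)
clean = sb_tv_denoise(noisy)
```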
Eco-analytical Methodology in Environmental Problems Monitoring
NASA Astrophysics Data System (ADS)
Agienko, M. I.; Bondareva, E. P.; Chistyakova, G. V.; Zhironkina, O. V.; Kalinina, O. I.
2017-01-01
Among the problems common to all mankind, whose solutions influence the prospects of civilization, the monitoring of the ecological situation occupies a very important place. Solving this problem requires a specific methodology based on eco-analytical comprehension of global issues. Eco-analytical methodology should help in searching for the optimum balance between environmental problems and accelerating scientific and technical progress. The focus of governments, corporations, scientists and nations on the production and consumption of material goods causes great damage to the environment. As a result, the activity of environmentalists develops quite spontaneously, as a complement to productive activities. The challenge that environmental problems pose for science is therefore the formation of eco-analytical reasoning and the monitoring of global problems common to the whole of humanity. The aim is to find the optimal trajectory of industrial development, one that prevents irreversible damage to the biosphere that could halt the progress of civilization.
A graphical approach to radio frequency quadrupole design
NASA Astrophysics Data System (ADS)
Turemen, G.; Unel, G.; Yasatekin, B.
2015-07-01
The design of a radio frequency quadrupole, an important section of all ion accelerators, and the calculation of its beam dynamics properties can be achieved using existing computational tools. These programs, originally designed in the 1980s, show the effects of aging in their user interfaces and in their output. The authors believe there is room for improvement both in design techniques using a graphical approach and in the amount of analytical calculation performed before resorting to CPU-burning finite element analysis. Additionally, an emphasis on graphically controlling the evolution of the relevant parameters using the drag-to-change paradigm is bound to be beneficial to the designer. A computer code, named DEMIRCI, has been written in C++ to demonstrate these ideas. This tool has been used in the design of the Turkish Atomic Energy Authority (TAEK)'s 1.5 MeV proton beamline at the Saraykoy Nuclear Research and Training Center (SANAEM). DEMIRCI starts with a simple analytical model, calculates the RFQ behavior and produces 3D design files that can be fed to a milling machine. The paper discusses the experience gained during the design process of the SANAEM Project Prometheus (SPP) RFQ and underlines some of DEMIRCI's capabilities.
NASA Astrophysics Data System (ADS)
Pidl, Renáta
2018-01-01
The overwhelming majority of intercontinental long-haul transportation of goods is carried out by road, using semi-trailer trucks. Vibration has a major effect on the safety of the transport, the load and the transported goods. This paper addresses the logistics goals from the point of view of vibration and summarizes the methods used to predict or measure the vibration load in order to design a proper system. Among these methods, the focus of this paper is on computer simulation of the vibration. An analytical method is presented to calculate the vertical dynamics of a semi-trailer truck with general viscous damping exposed to harmonic base excitation. For clarity, the method is presented through a simplified four degrees-of-freedom (DOF) half-vehicle model, which neglects the stiffness and damping of the tires; the four degrees of freedom are thus the vertical and angular displacements of the truck and the trailer. From the vertical and angular accelerations of the trailer, the vertical acceleration of each point of the trailer platform can easily be determined, from which the forces acting on the transported goods follow, as in the schematic below. As a result, the response of the full platform-load-packaging system to any kind of vehicle, load and road condition can be analyzed, and the peak acceleration of any point on the platform can be determined by the presented analytical method.
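In schematic form (generic symbols, not necessarily the paper's notation), the steady-state harmonic solution underlying such a linear model is:

```latex
% For mass, damping and stiffness matrices M, C, K under harmonic base excitation:
\begin{align*}
  \mathbf{M}\ddot{\mathbf{q}} + \mathbf{C}\dot{\mathbf{q}} + \mathbf{K}\mathbf{q}
    &= \mathbf{f}_0\, e^{i\omega t},
  & \mathbf{q}(t) &= \hat{\mathbf{q}}\, e^{i\omega t},\\
  \hat{\mathbf{q}} &= \left(\mathbf{K} - \omega^2 \mathbf{M} + i\omega\,\mathbf{C}\right)^{-1}\mathbf{f}_0,
  & \ddot{\mathbf{q}}(t) &= -\omega^2\, \hat{\mathbf{q}}\, e^{i\omega t}.
\end{align*}
% With trailer heave z(t) and pitch angle theta(t) among the DOFs, a platform point
% at distance x_P from the trailer reference has vertical acceleration
%   a_P(t) = z''(t) + x_P * theta''(t)   (small angles assumed).
```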
Solenoid Fringe Field Effects for the Neutrino Factory Linac - MAD-X Investigation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aslaninejad, M.; Bontoiu, C.; Pasternak, J.; Pozimski, J.; Bogacz, Alex
2010-05-01
The International Design Study for the Neutrino Factory (IDS-NF) assumes the first stage of muon acceleration (up to 900 MeV) to be implemented with a solenoid-based Linac. The Linac consists of three styles of cryo-modules, containing focusing solenoids and a varying number of SRF cavities for acceleration. Fringe fields of the solenoids and the focusing effects in the SRF cavities have a significant impact on the transverse beam dynamics. Using an analytical formula, the effects of the fringe fields are studied in MAD-X. The resulting betatron functions are compared with the results of beam dynamics simulations using the OptiM code.
NASA Astrophysics Data System (ADS)
Mashayekhi, Mohammad Jalali; Behdinan, Kamran
2017-10-01
The increasing demand to minimize undesired vibration and noise levels in several high-tech industries has generated a renewed interest in vibration transfer path analysis. Analyzing vibration transfer paths within a system is of crucial importance in designing an effective vibration isolation strategy. Most existing vibration transfer path analysis techniques are empirical and suited to diagnosis and troubleshooting purposes. The lack of an analytical transfer path analysis that can be used in the design stage is the main motivation behind this research. In this paper an analytical transfer path analysis based on the four-pole theory is proposed for multi-energy-domain systems. The bond graph modeling technique, an effective approach to modeling multi-energy-domain systems, is used to develop the system model. An electro-mechanical system is used as a benchmark example to elucidate the effectiveness of the proposed technique. An algorithm to obtain the equivalent four-pole representation of a dynamical system based on the corresponding bond graph model is also presented.
Multi-scale analytical investigation of fly ash in concrete
NASA Astrophysics Data System (ADS)
Aboustait, Mohammed B.
Much research has been conducted to find an acceptable concrete ingredient that could act as a cement replacement. One promising material is fly ash, a by-product of coal-fired power plants. This document explores the characterization of fly ash structure and composition, carried out through a combination of cutting-edge, multi-scale analytical X-ray techniques spanning both bulk experimentation and nano/micro analysis. This examination was complemented by physical/mechanical ASTM-based testing of fly-ash-bearing concrete to examine the effects of fly ash introduction. The most exotic of the characterization techniques employed in this work uses nano-computed tomography and nano X-ray fluorescence at Argonne National Laboratory to investigate single fly ash particles. Additional work on individual fly ash particles was completed with laboratory-based micro-computed tomography and scanning electron microscopy. By combining the results of individual particles and bulk property tests, a compiled perspective is introduced and assessed to provide new insights into the reactivity of fly ash within concrete.
Multimodal system planning technique : an analytical approach to peak period operation
DOT National Transportation Integrated Search
1995-11-01
The multimodal system planning technique described in this report is an improvement of the methodology used in the Dallas System Planning Study. The technique includes a spreadsheet-based process to identify the costs of congestion, construction, and...
Localized Spatio-Temporal Constraints for Accelerated CMR Perfusion
Akçakaya, Mehmet; Basha, Tamer A.; Pflugi, Silvio; Foppa, Murilo; Kissinger, Kraig V.; Hauser, Thomas H.; Nezafat, Reza
2013-01-01
Purpose To develop and evaluate an image reconstruction technique for cardiac MRI (CMR) perfusion that utilizes localized spatio-temporal constraints. Methods CMR perfusion plays an important role in detecting myocardial ischemia in patients with coronary artery disease. Breath-hold k-t based image acceleration techniques are typically used in CMR perfusion for superior spatial/temporal resolution and improved coverage. In this study, we propose a novel compressed sensing-based image reconstruction technique for CMR perfusion, with applicability to free-breathing examinations. This technique uses local spatio-temporal constraints by regularizing image patches across a small number of dynamics. The technique is compared to conventional dynamic-by-dynamic reconstruction, sparsity regularization using a temporal principal-component (pc) basis, and zero-filled data, in multi-slice 2D and 3D CMR perfusion. Qualitative image scores (1=poor, 4=excellent) are used to evaluate the technique in 3D perfusion in 10 patients and 5 healthy subjects. In 4 healthy subjects, the proposed technique was also compared to a breath-hold multi-slice 2D acquisition with parallel imaging in terms of signal intensity curves. Results The proposed technique results in images that are superior in terms of spatial and temporal blurring compared to the other techniques, even in free-breathing datasets. The image scores indicate a significant improvement compared to other techniques in 3D perfusion (2.8±0.5 vs. 2.3±0.5 for x-pc regularization, 1.7±0.5 for dynamic-by-dynamic, 1.1±0.2 for zero-filled). Signal intensity curves indicate similar dynamics of uptake between the proposed method with a 3D acquisition and the breath-hold multi-slice 2D acquisition with parallel imaging. Conclusion The proposed reconstruction utilizes sparsity regularization based on localized information in both spatial and temporal domains for highly accelerated CMR perfusion, with potential utility in free-breathing 3D acquisitions. PMID:24123058
Lin, Jia-Hui; Tseng, Wei-Lung
2015-01-01
Detection of salt- and analyte-induced aggregation of gold nanoparticles (AuNPs) mostly relies on costly and bulky analytical instruments. To address this drawback, a portable, miniaturized, sensitive, and cost-effective detection technique is urgently required for rapid field detection and monitoring of target analytes with AuNP-based sensors. This study combined a miniaturized spectrometer with a 532-nm laser to develop a laser-induced Rayleigh scattering technique, allowing the sensitive and selective detection of Rayleigh scattering from aggregated AuNPs. Three AuNP-based sensing systems, involving salt-, thiol- and metal ion-induced aggregation of the AuNPs, were used to examine the sensitivity of the laser-induced Rayleigh scattering technique. Salt-, thiol-, and metal ion-promoted NP aggregation were exemplified by the use of aptamer-adsorbed, fluorosurfactant-stabilized, and gallic acid-capped AuNPs for probing K(+), S-adenosylhomocysteine hydrolase-induced hydrolysis of S-adenosylhomocysteine, and Pb(2+), in sequence. Compared to previously reported methods for monitoring aggregated AuNPs, the proposed system provided distinct advantages in sensitivity. The technique was further made convenient, cheap, and portable by replacing the diode laser and miniaturized spectrometer with a laser pointer and a smartphone. Using this smartphone-based detection platform, we can determine whether the Pb(2+) concentration exceeds the maximum allowable level of Pb(2+) in drinking water. Copyright © 2014 Elsevier B.V. All rights reserved.
Nonlinear analysis for dual-frequency concurrent energy harvesting
NASA Astrophysics Data System (ADS)
Yan, Zhimiao; Lei, Hong; Tan, Ting; Sun, Weipeng; Huang, Wenhu
2018-05-01
The dual-frequency responses of a hybrid energy harvester undergoing base excitation and galloping were analyzed numerically. In this work, an approximate dual-frequency analytical method is proposed for the nonlinear analysis of such a system. To obtain approximate analytical solutions of the fully coupled distributed-parameter model, the forcing interactions are first neglected. Then, the electromechanically decoupled governing equation is developed using the equivalent structure method. The hybrid mechanical response is finally separated into self-excited and forced responses for deriving the analytical solutions, which are confirmed by numerical simulations of the fully coupled model. The forced response has a great impact on the self-excited response. The boundary of the Hopf bifurcation is analytically determined by the onset wind speed of galloping, which increases linearly with the electrical damping. A quenching phenomenon appears when the increasing base excitation suppresses the galloping. The theoretical quenching boundary depends on the forced mode velocity. The quenching region increases with the base acceleration and electrical damping, but decreases with the wind speed. In contrast to the base-excitation-alone case, the existence of the aerodynamic force protects the hybrid energy harvester at resonance from damage caused by excessively large displacement. From the point of view of harvested power, the hybrid system surpasses both the base-excitation-alone and the galloping-alone systems. This study advances our knowledge of the intrinsic nonlinear dynamics of dual-frequency energy harvesting systems by taking advantage of the analytical solutions.
Hybrid perturbation methods based on statistical time series models
NASA Astrophysics Data System (ADS)
San-Juan, Juan Félix; San-Martín, Montserrat; Pérez, Iván; López, Rosario
2016-04-01
In this work we present a new methodology for orbit propagation, the hybrid perturbation theory, based on the combination of an integration method and a prediction technique. The former, which can be a numerical, analytical or semianalytical theory, generates an initial approximation that contains some inaccuracies, derived from the fact that, in order to simplify the expressions and subsequent computations, not all the involved forces are taken into account and only low-order terms are considered, not to mention the fact that mathematical models of perturbations do not always reproduce physical phenomena with absolute precision. The prediction technique, which can be based on either statistical time series models or computational intelligence methods, is aimed at modelling and reproducing the missing dynamics in the previously integrated approximation. This combination results in improved precision of conventional numerical, analytical and semianalytical theories for determining the position and velocity of any artificial satellite or space debris object. In order to validate this methodology, we present a family of three hybrid orbit propagators formed by the combination of three different orders of approximation of an analytical theory and a statistical time series model, and analyse their capability to process the effect produced by the flattening of the Earth. The three considered analytical components are the integration of the Kepler problem, a first-order and a second-order analytical theory, whereas the prediction technique is the same in all three cases, namely an additive Holt-Winters method.
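A minimal sketch of the hybrid idea, under our own illustrative parameters and synthetic data: propagate with a cheap analytical model, fit an additive Holt-Winters filter to the residual (truth minus model) on a training arc, then forecast the residual and add it back to the analytical prediction.

```python
import numpy as np

def holt_winters_additive(x, m, alpha=0.3, beta=0.05, gamma=0.2, horizon=50):
    # Standard additive Holt-Winters: level, trend, and m seasonal terms.
    level = x[:m].mean()
    trend = (x[m:2 * m].mean() - x[:m].mean()) / m
    season = list(x[:m] - level)
    for i in range(m, len(x)):
        prev = level
        s = season[i % m]
        level = alpha * (x[i] - s) + (1 - alpha) * (level + trend)
        trend = beta * (level - prev) + (1 - beta) * trend
        season[i % m] = gamma * (x[i] - level) + (1 - gamma) * s
    return np.array([level + (h + 1) * trend + season[(len(x) + h) % m]
                     for h in range(horizon)])

t = np.arange(400)
m = 100                                      # samples per orbital period, say
# synthetic residual of the analytical theory: secular drift + periodic error
residual = 0.002 * t + 0.5 * np.sin(2 * np.pi * t / m)
forecast = holt_winters_additive(residual[:300], m)   # train on first 300 samples
analytic_prediction = np.zeros(50)           # stand-in for the analytical theory
hybrid_prediction = analytic_prediction + forecast    # corrected propagation
```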
A developed nearly analytic discrete method for forward modeling in the frequency domain
NASA Astrophysics Data System (ADS)
Liu, Shaolin; Lang, Chao; Yang, Hui; Wang, Wenshuai
2018-02-01
High-efficiency forward modeling methods play a fundamental role in full waveform inversion (FWI). In this paper, the developed nearly analytic discrete (DNAD) method is proposed to accelerate frequency-domain forward modeling processes. We first derive the discretization of frequency-domain wave equations via numerical schemes based on the nearly analytic discrete (NAD) method to obtain a linear system. The coefficients of numerical stencils are optimized to make the linear system easier to solve and to minimize computing time. Wavefield simulation and numerical dispersion analysis are performed to compare the numerical behavior of DNAD method with that of the conventional NAD method. The results demonstrate the superiority of our proposed method. Finally, the DNAD method is implemented in frequency-domain FWI, and high-resolution inverse results are obtained.
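Schematically (generic notation, not the DNAD stencils themselves), frequency-domain modeling turns the time-domain wave equation into a Helmholtz problem per frequency, whose discretization yields one sparse linear system per frequency to be solved inside FWI:

```latex
\[
  \left( \Delta + \frac{\omega^2}{v^2(\mathbf{x})} \right) u(\mathbf{x}, \omega)
    = -s(\mathbf{x}, \omega)
  \quad\xrightarrow{\ \text{discretize}\ }\quad
  \mathbf{A}(\omega)\,\mathbf{u} = \mathbf{b}.
\]
% Optimizing the stencil coefficients (as DNAD does) changes the spectrum and
% numerical-dispersion behavior of A, not the underlying physics.
```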
Epilepsy analytic system with cloud computing.
Shen, Chia-Ping; Zhou, Weizhi; Lin, Feng-Seng; Sung, Hsiao-Ya; Lam, Yan-Yu; Chen, Wei; Lin, Jeng-Wei; Pan, Ming-Kai; Chiu, Ming-Jang; Lai, Feipei
2013-01-01
Biomedical data analytic systems have played an important role in clinical diagnosis for several decades. Analyzing such big data to provide decision support for physicians is now an emerging research area. This paper presents a parallelized web-based tool with a cloud computing service architecture to analyze epilepsy. The system cascades several modern analytic functions: wavelet transform, genetic algorithm (GA), and support vector machine (SVM). To demonstrate the effectiveness of the system, it was verified with two kinds of electroencephalography (EEG) data: short-term EEG and long-term EEG. The results reveal that our approach achieves a total classification accuracy higher than 90%. In addition, training is accelerated about 4.66 times, and the prediction time also meets real-time requirements.
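A minimal sketch of a wavelet-feature plus SVM classifier in the spirit of this pipeline, run on synthetic signals rather than real EEG; the GA-based parameter/feature search and the cloud back end are omitted, and PyWavelets and scikit-learn are assumed available.

```python
import numpy as np
import pywt
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

def wavelet_energy_features(sig, wavelet="db4", level=4):
    # Energy per decomposition level as a compact feature vector.
    coeffs = pywt.wavedec(sig, wavelet, level=level)
    return np.array([np.sum(c ** 2) / len(c) for c in coeffs])

rng = np.random.default_rng(0)
normal = rng.standard_normal((100, 512))                       # baseline-like
spiky = rng.standard_normal((100, 512)) \
    + 3.0 * (rng.random((100, 512)) > 0.98)                    # spike-laden
X = np.array([wavelet_energy_features(s) for s in np.vstack([normal, spiky])])
y = np.array([0] * 100 + [1] * 100)

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf", C=1.0).fit(Xtr, ytr)
print("accuracy:", clf.score(Xte, yte))
```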
Modern analytical methods for the detection of food fraud and adulteration by food category.
Hong, Eunyoung; Lee, Sang Yoo; Jeong, Jae Yun; Park, Jung Min; Kim, Byung Hee; Kwon, Kisung; Chun, Hyang Sook
2017-09-01
This review provides current information on the analytical methods used to identify food adulteration in the six most adulterated food categories: animal origin and seafood, oils and fats, beverages, spices and sweet foods (e.g. honey), grain-based food, and others (organic food and dietary supplements). The analytical techniques (both conventional and emerging) used to identify adulteration in these six food categories involve sensory, physicochemical, DNA-based, chromatographic and spectroscopic methods, and have been combined with chemometrics, making these techniques more convenient and effective for the analysis of a broad variety of food products. Despite recent advances, the need remains for suitably sensitive and widely applicable methodologies that encompass all the various aspects of food adulteration. © 2017 Society of Chemical Industry.
Willert, Jeffrey; Park, H.; Taitano, William
2015-11-01
High-order/low-order (or moment-based acceleration) algorithms have been used to significantly accelerate the solution to the neutron transport k-eigenvalue problem over the past several years. Recently, the nonlinear diffusion acceleration algorithm has been extended to solve fixed-source problems with anisotropic scattering sources. In this paper, we demonstrate that we can extend this algorithm to k-eigenvalue problems in which the scattering source is anisotropic and a significant acceleration can be achieved. Lastly, we demonstrate that the low-order, diffusion-like eigenvalue problem can be solved efficiently using a technique known as nonlinear elimination.
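In schematic, generic notation (hedged; not the paper's exact formulation), the k-eigenvalue problem and the power-iteration-style update that a high-order/low-order scheme accelerates can be written as:

```latex
% Transport k-eigenvalue problem:  L psi = (1/k) F psi.  A high-order transport
% sweep supplies closure terms to a cheap low-order (diffusion-like)
% eigenproblem, and k is updated from a ratio of fission sources:
\[
  \mathbf{L}\,\psi^{(n+1/2)} = \frac{1}{k^{(n)}}\,\mathbf{F}\,\phi^{(n)},
  \qquad
  k^{(n+1)} = k^{(n)}\,
    \frac{\langle \mathbf{F}\,\phi^{(n+1)} \rangle}{\langle \mathbf{F}\,\phi^{(n)} \rangle},
\]
% where phi is the low-order scalar flux. Anisotropic scattering enters L and
% the low-order closures, which is what the extension described above handles.
```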
Synthesis of porous SnO2 nanocubes via selective leaching and enhanced gas-sensing properties
NASA Astrophysics Data System (ADS)
Li, Yining; Wei, Qi; Song, Peng; Wang, Qi
2016-01-01
Porous micro-/nanostructures are of great interest in many current and emerging areas of technology. In this paper, porous SnO2 nanocubes have been successfully fabricated via a selective leaching strategy using CoSn(OH)6 as the precursor. The structure and morphology of the as-prepared samples were investigated by several techniques, such as X-ray diffraction (XRD), scanning electron microscopy (SEM), thermogravimetric and differential scanning calorimetry analysis (TG-DSC), transmission electron microscopy (TEM) and N2 adsorption-desorption analyses. On the basis of those characterizations, a mechanism for the formation of the porous SnO2 nanocubes has been proposed. Owing to their well-defined and uniform porous structures, the porous SnO2 nanocubes adsorb a larger amount of analyte gas and accelerate gas transport, thereby enhancing the gas-sensing properties. The gas-sensing investigation showed that a sensor based on the porous SnO2 nanocubes exhibited high response, short response-recovery times and good selectivity to ethanol gas.
NASA Astrophysics Data System (ADS)
Ma, Xu; Li, Yanqiu; Guo, Xuejia; Dong, Lisong
2012-03-01
Optical proximity correction (OPC) and phase-shifting masks (PSM) are the most widely used resolution enhancement techniques (RET) in the semiconductor industry. Recently, a set of OPC and PSM optimization algorithms has been developed to solve the inverse lithography problem; these are designed only for the nominal imaging parameters, without giving sufficient attention to the process variations due to aberrations, defocus and dose variation. However, the effects of process variations in practical optical lithography systems become more pronounced as the critical dimension (CD) continuously shrinks. On the other hand, lithography systems with larger NA (NA>0.6) are now extensively used, rendering scalar imaging models inadequate to describe the vector nature of the electromagnetic field in current optical lithography systems. To tackle the above problems, this paper focuses on developing gradient-based OPC and PSM optimization algorithms that are robust to process variations under a vector imaging model. To achieve this goal, an integrative and analytic vector imaging model is applied to formulate the optimization problem, where the effects of process variations are explicitly incorporated in the optimization framework. The steepest descent algorithm is used to optimize the mask iteratively. To improve the efficiency of the proposed algorithms, a set of algorithm acceleration techniques (AAT) is exploited during the optimization procedure.
Algorithms for the Euler and Navier-Stokes equations for supercomputers
NASA Technical Reports Server (NTRS)
Turkel, E.
1985-01-01
The steady state Euler and Navier-Stokes equations are considered for both compressible and incompressible flow. Methods are found for accelerating the convergence to a steady state. This acceleration is based on preconditioning the system so that it is no longer time consistent. In order that the acceleration technique be scheme-independent, this preconditioning is done at the differential equation level. Applications are presented for very slow flows and also for the incompressible equations.
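The preconditioning idea can be written schematically (generic notation; a simplified view of the approach described above): for a steady state defined by R(u) = 0, time accuracy is deliberately sacrificed by preconditioning the pseudo-time iteration,

```latex
\[
  \frac{\partial \mathbf{u}}{\partial t} = \mathbf{R}(\mathbf{u})
  \quad\longrightarrow\quad
  \frac{\partial \mathbf{u}}{\partial t} = \mathbf{P}(\mathbf{u})\,\mathbf{R}(\mathbf{u}),
\]
% which leaves the converged solution (R(u) = 0) unchanged while reshaping the
% eigenvalues of the iteration so each pseudo-time step removes error faster.
% Applying P at the differential-equation level keeps the acceleration
% independent of the particular discretization scheme.
```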
Loit, Evelin; Tricco, Andrea C; Tsouros, Sophia; Sears, Margaret; Ansari, Mohammed T; Booth, Ronald A
2011-07-01
Low thiopurine S-methyltransferase (TPMT) enzyme activity is associated with increased thiopurine drug toxicity, particularly myelotoxicity. Pre-analytic and analytic variables for TPMT genotype and phenotype (enzyme activity) testing were reviewed. A systematic literature review was performed, and diagnostic laboratories were surveyed. Thirty-five studies reported relevant data for pre-analytic variables (patient age, gender, race, hematocrit, co-morbidity, co-administered drugs and specimen stability) and thirty-three for analytic variables (accuracy, reproducibility). TPMT is stable in blood when stored for up to 7 days at room temperature, and 3 months at -30°C. Pre-analytic patient variables do not affect TPMT activity. Fifteen drugs studied to date exerted no clinically significant effects in vivo. Enzymatic assay is the preferred technique. Radiochemical and HPLC techniques had intra- and inter-assay coefficients of variation (CVs) below 10%. TPMT is a stable enzyme, and its assay is not affected by age, gender, race or co-morbidity. Copyright © 2011. Published by Elsevier Inc.
Optical Diagnostics for Plasma-based Particle Accelerators
NASA Astrophysics Data System (ADS)
Muggli, Patric
2009-05-01
One of the challenges for plasma-based particle accelerators is to measure the spatio-temporal characteristics of the accelerated particle bunch. "Optical" diagnostics are particularly interesting and useful because of the large number of techniques that exist to determine the properties of photon pulses. The accelerated bunch can produce photon pulses that carry information about its characteristics, for example through synchrotron radiation in a magnet, Cherenkov radiation in a gas, and transition radiation (TR) at the boundary between two media with different dielectric constants. Depending on the wavelength of the emission compared to the particle bunch length, the radiation can be incoherent or coherent. Incoherent TR in the optical range (OTR) is useful to measure the transverse spatial characteristics of the beam, such as charge distribution and size. Coherent TR (CTR) carries information about the bunch length that can in principle be retrieved by standard auto-correlation or interferometric techniques, as well as by spectral measurements. A measurement of the total CTR energy emitted by bunches with constant charge can also be used as a shot-to-shot measurement of the relative bunch length, since the CTR energy is proportional to the square of the bunch population and inversely proportional to its length (for a fixed distribution). Spectral interferometry can also yield the spacing between bunches in the case where multiple bunches are trapped in subsequent buckets of the plasma wave. Cherenkov radiation can be used as an energy-threshold diagnostic for low-energy particles. Cherenkov, synchrotron and transition radiation can be used in a dispersive section of the beam line to measure the bunch energy spectrum. The application of these diagnostics to plasma-based particle accelerators, with emphasis on the beam-driven plasma wakefield accelerator (PWFA) at the SLAC National Accelerator Laboratory, is discussed.
Research Data Alliance: Understanding Big Data Analytics Applications in Earth Science
NASA Astrophysics Data System (ADS)
Riedel, Morris; Ramachandran, Rahul; Baumann, Peter
2014-05-01
The Research Data Alliance (RDA) enables data to be shared across barriers through focused working groups and interest groups, formed of experts from around the world, from academia, industry and government. Its Big Data Analytics (BDA) interest group seeks to develop community-based recommendations on feasible data analytics approaches to address the scientific community's needs for utilizing large quantities of data. BDA seeks to analyze different scientific domain applications (e.g., earth science use cases) and their potential use of various big data analytics techniques. These techniques range from hardware deployment models to various algorithms (e.g., machine learning algorithms such as support vector machines for classification). A systematic classification of feasible combinations of analysis algorithms, analytical tools, data and resource characteristics and scientific queries will be covered in these recommendations. This contribution outlines initial parts of such a classification and recommendations in the specific context of the Earth Sciences. The lessons learned and experiences presented are based on a survey of use cases, with detailed insights into a few of them.
Theoretical and Observational Analysis of Particle Acceleration Mechanisms at Astrophysical Shocks
NASA Astrophysics Data System (ADS)
Lever, Edward Lawrence
We analytically and numerically investigate the viability of Shock Surfing as a pre-injection mechanism for Diffusive Shock Acceleration, believed to be responsible for the production of Cosmic Rays. We demonstrate mathematically and from computer simulations that four critical conditions must be satisfied for Shock Surfing to function; the shock ramp must be narrow, the shock front must be smooth, the magnetic field angle must be very nearly perpendicular and, finally, these conditions must persist without interruption over substantial time periods and spatial scales. We quantify these necessary conditions, exhibit predictive functions for velocity maxima and accelerated ion fluxes based on observable shock parameters, and show unequivocally from current observational evidence that all of these necessary conditions are violated at shocks within the heliosphere, at the heliospheric Termination Shock, and also at Supernovae.
Designing for aircraft structural crashworthiness
NASA Technical Reports Server (NTRS)
Thomson, R. G.; Caiafa, C.
1981-01-01
This report describes structural aviation crash dynamics research activities being conducted on general aviation aircraft and transport aircraft. The report includes experimental and analytical correlations of load-limiting subfloor and seat configurations tested dynamically in vertical drop tests and in a horizontal sled deceleration facility. Computer predictions using a finite-element nonlinear computer program, DYCAST, of the acceleration time-histories of these innovative seat and subfloor structures are presented. Proposed application of these computer techniques, and the nonlinear lumped mass computer program KRASH, to transport aircraft crash dynamics is discussed. A proposed FAA full-scale crash test of a fully instrumented radio controlled transport airplane is also described.
Characterizing nonconstant instrumental variance in emerging miniaturized analytical techniques.
Noblitt, Scott D; Berg, Kathleen E; Cate, David M; Henry, Charles S
2016-04-07
Measurement variance is a crucial aspect of quantitative chemical analysis. Variance directly affects important analytical figures of merit, including detection limit, quantitation limit, and confidence intervals. Most reported analyses for emerging analytical techniques implicitly assume constant variance (homoskedasticity) by using unweighted regression calibrations. Despite the assumption of constant variance, it is known that most instruments exhibit heteroskedasticity, where variance changes with signal intensity. Ignoring nonconstant variance results in suboptimal calibrations, invalid uncertainty estimates, and incorrect detection limits. Three techniques where homoskedasticity is often assumed were covered in this work to evaluate whether heteroskedasticity had a significant quantitative impact: naked-eye, distance-based detection using paper-based analytical devices (PADs); cathodic stripping voltammetry (CSV) with disposable carbon-ink electrode devices; and microchip electrophoresis (MCE) with conductivity detection. Despite these techniques representing a wide range of chemistries and precision, heteroskedastic behavior was confirmed for each. The general variance forms were analyzed, and recommendations for accounting for nonconstant variance discussed. Monte Carlo simulations of instrument responses were performed to quantify the benefits of weighted regression, and the sensitivity to uncertainty in the variance function was tested. Results show that heteroskedasticity should be considered during development of new techniques; even moderate uncertainty (30%) in the variance function still results in weighted regression outperforming unweighted regressions. We recommend utilizing the power model of variance because it is easy to apply, requires little additional experimentation, and produces higher-precision results and more reliable uncertainty estimates than assuming homoskedasticity. Copyright © 2016 Elsevier B.V. All rights reserved.
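A minimal sketch of the recommended workflow on synthetic data: estimate the power model of variance, sd = a * signal^b, from replicate measurements, then use weights w = 1/sd^2 in a weighted linear calibration. All names and numbers here are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)
conc = np.array([1.0, 2.0, 5.0, 10.0, 20.0, 50.0])     # calibration levels
true_signal = 3.0 * conc + 0.5
# heteroskedastic noise: sd grows with signal (power model, b ~ 0.7)
signals = true_signal[:, None] + \
    (0.2 * true_signal[:, None] ** 0.7) * rng.standard_normal((6, 8))

mean_s = signals.mean(axis=1)
sd_s = signals.std(axis=1, ddof=1)

# power model via log-log regression: log sd = log a + b * log(mean signal)
b, log_a = np.polyfit(np.log(mean_s), np.log(sd_s), 1)
weights = 1.0 / (np.exp(log_a) * mean_s ** b) ** 2

# weighted least squares for the calibration  signal = m*conc + c
X = np.column_stack([conc, np.ones_like(conc)])
W = np.diag(weights)
m, c = np.linalg.solve(X.T @ W @ X, X.T @ W @ mean_s)
print(f"slope={m:.3f} intercept={c:.3f} variance exponent b={b:.2f}")
```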
Application of ionic liquid for extraction and separation of bioactive compounds from plants.
Tang, Baokun; Bi, Wentao; Tian, Minglei; Row, Kyung Ho
2012-09-01
In recent years, ionic liquids (ILs), as green and designer solvents, have accelerated research in analytical chemistry. This review highlights some of the unique properties of ILs and provides an overview of the preparation and application of ILs or IL-based materials for extracting bioactive compounds from plants. ILs or IL-based materials, in conjunction with analytical technologies such as liquid-liquid extraction (LLE), ultrasonic-assisted extraction (UAE), microwave-assisted extraction (MAE), high performance liquid chromatography (HPLC) and solid-phase extraction (SPE), have been applied successfully to the extraction or separation of bioactive compounds from plants. This paper reviews the available data and references to examine the advantages of ILs and IL-based materials in these applications. The main target compounds reviewed are bioactive compounds with multiple therapeutic effects and pharmacological activities. Based on the importance of these targets, this paper reviews the applications of ILs and IL-based materials, alone or in combination with analytical technologies. The exploitation of new applications of ILs in the extraction of bioactive compounds from plant samples is expected to increase. Copyright © 2012 Elsevier B.V. All rights reserved.
Mantovani, Cínthia de Carvalho; Lima, Marcela Bittar; Oliveira, Carolina Dizioli Rodrigues de; Menck, Rafael de Almeida; Diniz, Edna Maria de Albuquerque; Yonamine, Mauricio
2014-04-15
A method using accelerated solvent extraction (ASE) for the isolation of cocaine/crack biomarkers in meconium samples, followed by solid-phase extraction (SPE) and simultaneous quantification by gas chromatography-mass spectrometry (GC-MS), was developed and validated. Initially, meconium samples were submitted to an ASE procedure, which was followed by SPE with Bond Elut Certify I cartridges. The analytes were derivatized with PFP/PFPA and analyzed by GC-MS. The limits of detection (LOD) were between 11 and 17 ng/g for all analytes. The limits of quantification (LOQ) were 30 ng/g for anhydroecgonine methyl ester, and 20 ng/g for cocaine, benzoylecgonine, ecgonine methyl ester and cocaethylene. Linearity ranged from the LOQ to 1500 ng/g for all analytes, with coefficients of determination greater than 0.991, except for m-hydroxybenzoylecgonine, which was only qualitatively detected. Precision and accuracy were evaluated at three concentration levels. For all analytes, inter-assay precision ranged from 3.2 to 18.1%, and intra-assay precision did not exceed 12.7%. The accuracy results were between 84.5 and 114.2%, and the average recovery ranged from 17 to 84%. The method was applied to 342 meconium samples randomly collected at the University Hospital-University of São Paulo (HU-USP), Brazil. Cocaine biomarkers were detected in 19 samples, which represents a 5.6% prevalence of exposure. Significantly lower birth weight, length and head circumference were found for the exposed newborns compared with the non-exposed group. This is the first report in which ASE was used as a sample preparation technique to extract cocaine biomarkers from a complex biological matrix such as meconium. The advantages of the developed method are the smaller demand for organic solvents and the reduced sample handling, which allow a faster and accurate procedure, appropriate to confirm fetal exposure to cocaine/crack. Copyright © 2014 Elsevier B.V. All rights reserved.
Lin, Shengxuan; Zhou, Xuedong; Ge, Liya; Ng, Sum Huan; Zhou, Xiaodong; Chang, Victor Wei-Chung
2016-10-01
Heavy metals and some metalloids are the most significant inorganic contaminants specified in the toxicity characteristic leaching procedure (TCLP) for determining the safety of landfills or further utilization. Consequently, a great deal of effort has been made on the development of miniaturized analytical devices, such as microchip electrophoresis (ME) and μTAS, for on-site testing of heavy metals and metalloids to prevent the spreading of those pollutants or to shorten the reutilization period of waste materials such as incineration bottom ash. However, the bottleneck lay in the long and tedious conventional TCLP, which requires 18 h of leaching. Without accelerating the TCLP process, on-site testing of waste material leachates was impossible. In this study, therefore, a new accelerated leaching method (ALM) combining ultrasonic-assisted leaching with tumbling was developed to reduce the total leaching time from 18 h to 30 min. After leaching, the concentrations of heavy metals and metalloids were determined with ICP-MS or ICP-optical emission spectroscopy. No statistically significant difference between ALM and TCLP was observed for most heavy metals (i.e., cobalt, manganese, mercury, molybdenum, nickel, silver, strontium, and tin) and metalloids (i.e., arsenic and selenium). For the heavy metals with statistically significant differences, correlation factors derived between ALM and TCLP were 0.56, 0.20, 0.037, and 0.019 for barium, cadmium, chromium, and lead, respectively. Combined with appropriate analytical techniques (e.g., ME), the ALM can be applied to rapidly prepare incineration bottom ash samples as well as other environmental samples for on-site determination of heavy metals and metalloids. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Xu, Xiaohui Sophia; Dueker, Stephen R; Christopher, Lisa J; Lohstroh, Pete N; Keung, Chi Fung Anther; Cao, Kai Kevin; Bonacorsi, Samuel J; Cojocaru, Laura; Shen, Jim X; Humphreys, W Griffith; Stouffer, Bruce; Arnold, Mark E
2012-08-01
An absolute bioavailability study that utilized an intravenous [(14)C]microdose was conducted for saxagliptin (Onglyza®), a marketed drug product for the treatment of Type 2 diabetes mellitus. Concentrations of [(14)C]saxagliptin were determined by accelerator MS (AMS) after protein precipitation, chromatographic separation by UPLC and analyte fraction collection. A series of investigative experiments was conducted to maximize the release of the drug from high-affinity receptors and nonspecific adsorption, and to determine a suitable quantitation range. A technique-appropriate validation demonstrated the accuracy, precision, specificity, stability and recovery of the AMS methodology across the concentration range of 0.025 to 15.0 dpm/ml (disintegrations per minute per milliliter), the equivalent of 1.91-1144 pg/ml. Based on the study sample analysis, the mean absolute bioavailability of saxagliptin was 50% in the eight subjects, with a CV of 6.6%. Incurred-sample reanalysis data fell well within acceptable limits. This study demonstrated that the optimized sample pretreatment and chromatographic separation procedures were critical for the successful implementation of a UPLC-plus-AMS method for [(14)C]saxagliptin. Multiple-point standards are useful, particularly during method development and validation, for evaluating and correcting concentration-dependent recovery, if observed, and for monitoring and controlling process loss and operational variations.
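As a consistency check on the two quoted ranges (our arithmetic, assuming a fixed specific activity of the labeled drug), both endpoints imply the same dpm-to-mass conversion:

```latex
\[
  \frac{0.025\ \text{dpm/ml}}{1.91\ \text{pg/ml}} \approx 13.1\ \text{dpm/ng},
  \qquad
  \frac{15.0\ \text{dpm/ml}}{1144\ \text{pg/ml}} \approx 13.1\ \text{dpm/ng},
\]
% i.e., the activity-to-concentration conversion is linear across the
% roughly 600-fold validated range.
```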
Multiplexed Paper Analytical Device for Quantification of Metals using Distance-Based Detection
Cate, David M.; Noblitt, Scott D.; Volckens, John; Henry, Charles S.
2015-01-01
Exposure to metal-containing aerosols has been linked with adverse health outcomes for almost every organ in the human body. Commercially available techniques for quantifying particulate metals are time-intensive, laborious, and expensive; often sample analysis exceeds $100. We report a simple technique, based upon a distance-based detection motif, for quantifying metal concentrations of Ni, Cu, and Fe in airborne particulate matter using microfluidic paper-based analytical devices. Paper substrates are used to create sensors that are self-contained, self-timing, and require only a drop of sample for operation. Unlike other colorimetric approaches in paper microfluidics that rely on optical instrumentation for analysis, with distance-based detection, analyte is quantified visually based on the distance of a colorimetric reaction, similar to reading temperature on a thermometer. To demonstrate the effectiveness of this approach, Ni, Cu, and Fe were measured individually in single-channel devices; detection limits as low as 0.1, 0.1, and 0.05 µg were reported for Ni, Cu, and Fe. Multiplexed analysis of all three metals was achieved with detection limits of 1, 5, and 1 µg for Ni, Cu, and Fe. We also extended the dynamic range for multi-analyte detection by printing concentration gradients of colorimetric reagents using an off-the-shelf inkjet printer. Analyte selectivity was demonstrated for common interferences. To demonstrate utility of the method, Ni, Cu, and Fe were measured from samples of certified welding fume; levels measured with paper sensors matched known values determined gravimetrically. PMID:26009988
Analytical investigation of the dynamics of tethered constellations in Earth orbit, phase 2
NASA Technical Reports Server (NTRS)
Lorenzini, Enrico C.
1987-01-01
A control law was developed to control the elevator during short-distance maneuvers along the tether of a 4-mass tethered system. This control law (called retarded exponential or RE) was analyzed parametrically in order to assess which control parameters provide a good dynamic response and a smooth time history of the acceleration on board the elevator. The short-distance maneuver under investigation consists of a slow crawling of the elevator over a distance of 10 m, which represents a typical maneuver for fine-tuning the acceleration level on board the elevator. The contribution of aerodynamic and thermal perturbations to acceleration levels was also evaluated, and the acceleration levels obtained when such perturbations are taken into account were compared to those obtained by neglecting the thermal and aerodynamic forces. In addition, the preparation of a tether simulation questionnaire is illustrated. Analytic solutions to be compared to numerical cases and simulator test cases are also discussed.
Velikina, Julia V; Samsonov, Alexey A
2015-11-01
To accelerate dynamic MR imaging through development of a novel image reconstruction technique using low-rank temporal signal models preestimated from training data. We introduce the model consistency condition (MOCCO) technique, which utilizes temporal models to regularize reconstruction without constraining the solution to be low-rank, as is performed in related techniques. This is achieved by using a data-driven model to design a transform for compressed sensing-type regularization. The enforcement of general compliance with the model without excessively penalizing deviating signal allows recovery of a full-rank solution. Our method was compared with a standard low-rank approach utilizing model-based dimensionality reduction in phantoms and patient examinations for time-resolved contrast-enhanced angiography (CE-MRA) and cardiac CINE imaging. We studied the sensitivity of all methods to rank reduction and temporal subspace modeling errors. MOCCO demonstrated reduced sensitivity to modeling errors compared with the standard approach. Full-rank MOCCO solutions showed significantly improved preservation of temporal fidelity and aliasing/noise suppression in highly accelerated CE-MRA (acceleration up to 27) and cardiac CINE (acceleration up to 15) data. MOCCO overcomes several important deficiencies of previously proposed methods based on pre-estimated temporal models and allows high quality image restoration from highly undersampled CE-MRA and cardiac CINE data. © 2014 Wiley Periodicals, Inc.
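One schematic way to write a MOCCO-style penalty (our notation; the published formulation may differ in norm, weighting, and solver) is to regularize deviation from the pre-estimated temporal subspace rather than forcing the solution into it:

```latex
% E: encoding operator, d: measured k-space data, Phi: orthonormal basis for
% the pre-estimated temporal subspace.
\[
  \hat{\mathbf{x}} \;=\; \arg\min_{\mathbf{x}}\;
    \big\| \mathbf{E}\mathbf{x} - \mathbf{d} \big\|_2^2
    \;+\; \lambda\, \big\| (\mathbf{I} - \boldsymbol{\Phi}\boldsymbol{\Phi}^{H})\,\mathbf{x} \big\|,
\]
% Signal outside the span of Phi is penalized but not forbidden, which is why
% a full-rank solution can be recovered, unlike hard rank truncation x = Phi c.
```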
Laser-driven three-stage heavy-ion acceleration from relativistic laser-plasma interaction.
Wang, H Y; Lin, C; Liu, B; Sheng, Z M; Lu, H Y; Ma, W J; Bin, J H; Schreiber, J; He, X T; Chen, J E; Zepf, M; Yan, X Q
2014-01-01
A three-stage heavy ion acceleration scheme for the generation of high-energy quasimonoenergetic heavy ion beams is investigated using two-dimensional particle-in-cell simulation and analytical modeling. The scheme is based on the interaction of an intense linearly polarized laser pulse with a compound two-layer target (a front heavy ion layer + a second light ion layer). We identify that, under appropriate conditions, the heavy ions preaccelerated by a two-stage acceleration process in the front layer can be injected into the light ion shock wave in the second layer for a further third-stage acceleration. These injected heavy ions are not influenced by the screening effect from the light ions, and an isolated high-energy heavy ion beam with relatively low energy spread is thus formed. Two-dimensional particle-in-cell simulations show that ∼100 MeV/u quasimonoenergetic Fe^(24+) beams can be obtained with linearly polarized laser pulses at intensities of 1.1×10^21 W/cm^2.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Halavanau, A.; Eddy, N.; Edstrom, D.
Superconducting linacs are capable of producing intense, ultra-stable, high-quality electron beams that have widespread applications in science and industry. Many projects are based on the 1.3-GHz TESLA-type superconducting cavity. In this paper we provide an update on a recent experiment aimed at measuring the transfer matrix of a TESLA cavity at the Fermilab Accelerator Science and Technology (FAST) facility. The results are discussed and compared with analytical and numerical simulations.
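For context, "measuring the transfer matrix" in one transverse plane amounts to determining the four elements of a linear map (generic beam-optics notation, not the experiment's specifics):

```latex
\[
  \begin{pmatrix} x \\ x' \end{pmatrix}_{\text{out}}
  =
  \begin{pmatrix} M_{11} & M_{12} \\ M_{21} & M_{22} \end{pmatrix}
  \begin{pmatrix} x \\ x' \end{pmatrix}_{\text{in}},
\]
% Scanning a set of known incoming (x, x') pairs and recording the outgoing
% ones overdetermines the four elements M_ij, which can then be fit by least
% squares and compared with analytical models and simulation.
```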
The Role of a Reference Synthetic Data Generator within the Field of Learning Analytics
ERIC Educational Resources Information Center
Berg, Alan M.; Mol, Stefan T.; Kismihók, Gábor; Sclater, Niall
2016-01-01
This paper details the anticipated impact of synthetic "big" data on learning analytics (LA) infrastructures, with a particular focus on data governance, the acceleration of service development, and the benchmarking of predictive models. By reviewing two cases, one at the sector-wide level (the Jisc learning analytics architecture) and…
Dielectrophoretic label-free immunoassay for rare-analyte quantification in biological samples
NASA Astrophysics Data System (ADS)
Velmanickam, Logeeshan; Laudenbach, Darrin; Nawarathna, Dharmakeerthi
2016-10-01
The current gold standard for detecting or quantifying target analytes from blood samples is the ELISA (enzyme-linked immunosorbent assay). The detection limit of ELISA is about 250 pg/ml. However, quantifying analytes related to the various stages of tumors, including early detection, requires detecting well below this limit. For example, interleukin 6 (IL-6) levels of early oral cancer patients are <100 pg/ml, and the prostate-specific antigen level in early-stage prostate cancer is about 1 ng/ml. Further, it has been reported that analyte levels in early-stage tumors can be significantly below 1 pg/ml. Therefore, depending on the tumor type and stage, analyte levels ranging from ng/ml down to pg/ml must be quantified. To accommodate these critical needs in current diagnosis, a technique is needed that has a large dynamic range with the ability to detect extremely low levels of target analytes (
Pilot testing of SHRP 2 reliability data and analytical products: Florida. [supporting datasets
DOT National Transportation Integrated Search
2014-01-01
SHRP 2 initiated the L38 project to pilot test products from five of the program's completed projects. The products support reliability estimation and use based on data analyses, analytical techniques, and a decision-making framework. The L38 project...
Sensor failure detection for jet engines using analytical redundancy
NASA Technical Reports Server (NTRS)
Merrill, W. C.
1984-01-01
Sensor failure detection, isolation, and accommodation techniques based on analytical redundancy for gas turbine engines are surveyed. Both the theoretical technology base and demonstrated concepts are discussed. Also included is a discussion of current technology needs and ongoing government-sponsored programs to meet those needs.
Analytical methods for determination of mycotoxins: a review.
Turner, Nicholas W; Subrahmanyam, Sreenath; Piletsky, Sergey A
2009-01-26
Mycotoxins are small (MW approximately 700), toxic chemical products formed as secondary metabolites by a few fungal species that readily colonise crops and contaminate them with toxins in the field or after harvest. Ochratoxins and aflatoxins are mycotoxins of major significance, and hence there has been significant research on a broad range of analytical and detection techniques that could be useful and practical. Due to the variety of structures of these toxins, it is impossible to use one standard technique for analysis and/or detection. Practical requirements for high-sensitivity analysis and the need for a specialist laboratory setting create challenges for routine analysis. Several existing analytical techniques, which offer flexible and broad-based methods of analysis and in some cases detection, are discussed in this manuscript. A number of methods are used, many of them lab-based, but to our knowledge no single technique stands out above the rest, although analytical liquid chromatography, commonly coupled with mass spectrometry, is likely to be popular. This review discusses (a) sample pre-treatment methods such as liquid-liquid extraction (LLE), supercritical fluid extraction (SFE), and solid-phase extraction (SPE); (b) separation methods such as thin-layer chromatography (TLC), high-performance liquid chromatography (HPLC), gas chromatography (GC), and capillary electrophoresis (CE); and (c) others such as ELISA. Current trends, advantages and disadvantages, and future prospects of these methods are also discussed.
Note: Model identification and analysis of bivalent analyte surface plasmon resonance data.
Tiwari, Purushottam Babu; Üren, Aykut; He, Jin; Darici, Yesim; Wang, Xuewen
2015-10-01
Surface plasmon resonance (SPR) is a widely used, affinity-based, label-free biophysical technique to investigate biomolecular interactions. The extraction of rate constants requires accurate identification of the particular binding model. The bivalent analyte model involves coupled non-linear differential equations. No clear procedure to identify the bivalent analyte mechanism has been established. In this report, we propose a unique signature for the bivalent analyte model. This signature can be used to distinguish the bivalent analyte model from other biphasic models. The proposed method is demonstrated using experimentally measured SPR sensorgrams.
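For readers unfamiliar with the model, a minimal sketch of one common parameterization of the bivalent analyte scheme (A + B <-> AB, AB + B <-> AB2) follows; the rate constants, Rmax and concentration are hypothetical, and simulating sensorgrams this way is one route to testing signatures that distinguish the model from other biphasic schemes. Note the Biacore-style convention that ka2 is expressed per response unit.

```python
import numpy as np
from scipy.integrate import solve_ivp

ka1, kd1 = 1e5, 1e-2     # assumed, M^-1 s^-1 and s^-1
ka2, kd2 = 1e-4, 1e-3    # assumed, RU^-1 s^-1 and s^-1
Rmax, C = 100.0, 50e-9   # assumed surface capacity (RU), analyte conc. (M)

def rhs(t, y):
    r1, r2 = y                  # singly / doubly bound complexes (RU)
    b = Rmax - r1 - r2          # free ligand sites
    dr1 = 2 * ka1 * C * b - kd1 * r1 - ka2 * r1 * b + 2 * kd2 * r2
    dr2 = ka2 * r1 * b - 2 * kd2 * r2
    return [dr1, dr2]

sol = solve_ivp(rhs, (0, 600), [0.0, 0.0], dense_output=True)
t = np.linspace(0, 600, 7)
print(np.round(sol.sol(t).sum(axis=0), 2))   # total response R1 + R2
```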
Ultramicroelectrode Array Based Sensors: A Promising Analytical Tool for Environmental Monitoring
Orozco, Jahir; Fernández-Sánchez, César; Jiménez-Jorquera, Cecilia
2010-01-01
The particular analytical performance of ultramicroelectrode arrays (UMEAs) has attracted high interest from the research community and has led to the development of a variety of electroanalytical applications. UMEA-based approaches have been demonstrated to be powerful, simple, rapid and cost-effective analytical tools for environmental analysis compared to available conventional electrodes and standardised analytical techniques. An overview of the fabrication processes of UMEAs, their characterization and applications carried out by the Spanish scientific community is presented. A brief explanation of the theoretical aspects that underlie their electrochemical behavior is also given. Finally, the applications of this transducer platform in the environmental field are discussed. PMID:22315551
Studying Upper-Limb Kinematics Using Inertial Sensors Embedded in Mobile Phones
Bennett, Paul
2015-01-01
Background: In recent years, there has been great interest in analyzing upper-limb kinematics. Inertial measurement with mobile phones is a convenient and portable analysis method for studying humerus kinematics in terms of angular mobility and linear acceleration. Objective: The aim of this analysis was to study upper-limb kinematics via mobile phones through six physical properties that correspond to angular mobility and acceleration in the three axes of space. Methods: This cross-sectional study recruited healthy young adult subjects. Humerus kinematics was studied in 10 young adults with the iPhone 4. They performed flexion and abduction analytical tasks. Mobility angle and linear acceleration in each of the axes (yaw, pitch, and roll) were obtained with the iPhone 4, which was placed on the right half of the body of each subject, in the middle third of the humerus, slightly posterior. Descriptive statistics were calculated. Results: Descriptive graphics of the analytical tasks performed were obtained. The biggest range of motion was found in the pitch angle, and the biggest acceleration was found in the y-axis in both analytical tasks. Focusing on tridimensional kinematics, a greater range of motion and acceleration were found in abduction (209.69 degrees and 23.31 degrees per second, respectively). A very strong correlation was also found between angular mobility and linear acceleration in abduction (r=.845) and flexion (r=.860). Conclusions: The use of an iPhone for humerus tridimensional kinematics is feasible. This supports the use of the mobile phone as a device to analyze upper-limb kinematics and to facilitate the evaluation of the patient. PMID:28582241
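A short sketch of the kind of descriptive analysis reported above (range of motion and the angle-acceleration correlation), run on synthetic traces rather than the study's iPhone data:

```python
import numpy as np

rng = np.random.default_rng(8)
t = np.linspace(0, 5, 500)                   # a 5 s task sampled at 100 Hz
angle = 100 * np.sin(2 * np.pi * 0.4 * t)    # synthetic pitch angle (degrees)
# Noisy stand-in for the measured linear acceleration along one axis;
# for a sinusoid this is proportional to -angle, so |r| will be near 1.
accel = np.gradient(np.gradient(angle, t), t) / 50 + rng.normal(0, 0.5, t.size)

range_of_motion = angle.max() - angle.min()
r = np.corrcoef(angle, accel)[0, 1]
print(f"ROM = {range_of_motion:.1f} deg, Pearson r = {r:.2f}")
```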
Nsouli, Bilal; Bejjani, Alice; Negra, Serge Della; Gardon, Alain; Thomas, Jean-Paul
2010-09-01
In order to evaluate the potential of accelerator-based analytical techniques (particle-induced X-ray emission (PIXE), particle-induced gamma-ray emission (PIGE), and particle desorption mass spectrometry (PD-MS)) for the analysis of commercial pharmaceutical products in their solid dosage form, the fluphenazine drug was taken as a representative example. It is demonstrated that PIXE and PIGE are by far the best choice for quantification of the active ingredient (AI) (certification with 7% precision) from the reactions induced on its specific heteroatoms, fluorine and sulfur, using pellets made from original tablets. Since heteroatoms cannot be present in all types of drugs, the PD-MS technique, which readily distinguishes between AI(s) and excipients, has been evaluated for the same material. It is shown that quantification of the AI is obtained via the detection of its protonated molecule. However, calibration curves have to be made from the secondary-ion yield variations, since matrix effects of various natures are characteristic of such mixtures of heterogeneous materials (including deposits from soluble components). From the analysis of solid tablets (either pressed into pellets or even as received), it is strongly suggested that the physical state of the grains in the mixture is a crucial parameter in the ion emission and, accordingly, for the calibration curves. Under our specific (but not optimized) conditions, the resulting precision is <17%, with an almost linear range extending from 0.04 to 7.87 mg of AI in a tablet made under the manufacturer's conditions (the commercial drug product is labeled at 5 mg).
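A minimal sketch of the calibration-curve step with hypothetical numbers (the mass range matches the one quoted above, the ion yields are invented): fit the protonated-molecule ion yield against known AI mass, then invert the fit to quantify an unknown tablet.

```python
import numpy as np

mass_mg = np.array([0.04, 0.5, 1.0, 2.5, 5.0, 7.87])    # calibration pellets
ion_yield = np.array([21, 250, 485, 1230, 2400, 3700])  # assumed ion counts

slope, intercept = np.polyfit(mass_mg, ion_yield, 1)    # near-linear range
unknown_yield = 2500.0                                  # hypothetical sample
print("estimated AI mass (mg):", (unknown_yield - intercept) / slope)
```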
Biosensor Regeneration: A Review of Common Techniques and Outcomes.
Goode, J A; Rushworth, J V H; Millner, P A
2015-06-16
Biosensors are ideally portable, low-cost tools for the rapid detection of pathogens, proteins, and other analytes. The global biosensor market is currently worth over 10 billion dollars annually and is a burgeoning field of interdisciplinary research that is hailed as a potential revolution in consumer, healthcare, and industrial testing. A key barrier to the widespread adoption of biosensors, however, is their cost. Although many systems have been validated in the laboratory setting and biosensors for a range of analytes are proven at the concept level, many have yet to make a strong commercial case for their acceptance. While the development of cheaper electrodes, circuits, and components exerts downward pressure on costs, the emerging trend toward multianalyte biosensors pushes in the other direction. One way to reduce cost that is suitable for certain systems is to enable their reuse, thus reducing the cost per test. Regeneration is a technique that can often be used in conjunction with existing biosensor systems to reduce costs and accelerate the commercialization process. This article discusses the merits and drawbacks of regeneration schemes that have been proven in various biosensor systems and indicates parameters for successful regeneration based on a systematic review of the literature. It also outlines some of the difficulties encountered when considering the role of regeneration at the point of use. A brief meta-analysis has been included in this review to develop a working definition for biosensor regeneration; by this analysis, only ∼60% of the reported studies analyzed were deemed a success. This highlights the variation within the field and the need to normalize regeneration as a standard process by establishing a consensus term.
Development and validation of a sensor-based health monitoring model for the Parkview Bridge deck.
DOT National Transportation Integrated Search
2012-01-31
Accelerated bridge construction (ABC) using full-depth precast deck panels is an innovative technique that brings all the benefits listed under ABC to full fruition. However, this technique needs to be evaluated and the performance of the bridge ...
Fast dictionary-based reconstruction for diffusion spectrum imaging.
Bilgic, Berkin; Chatnuntawech, Itthi; Setsompop, Kawin; Cauley, Stephen F; Yendiki, Anastasia; Wald, Lawrence L; Adalsteinsson, Elfar
2013-11-01
Diffusion spectrum imaging reveals detailed local diffusion properties at the expense of substantially long imaging times. It is possible to accelerate acquisition by undersampling in q-space, followed by image reconstruction that exploits prior knowledge on the diffusion probability density functions (pdfs). Previously proposed methods impose this prior in the form of sparsity under wavelet and total variation transforms, or under adaptive dictionaries that are trained on example datasets to maximize the sparsity of the representation. These compressed sensing (CS) methods require full-brain processing times on the order of hours using MATLAB running on a workstation. This work presents two dictionary-based reconstruction techniques that use analytical solutions, and are two orders of magnitude faster than the previously proposed dictionary-based CS approach. The first method generates a dictionary from the training data using principal component analysis (PCA), and performs the reconstruction in the PCA space. The second proposed method applies reconstruction using a pseudoinverse with Tikhonov regularization with respect to a dictionary. This dictionary can either be obtained using the K-SVD algorithm, or it can simply be the training dataset of pdfs without any training. All of the proposed methods achieve reconstruction times on the order of seconds per imaging slice, and have reconstruction quality comparable to that of the dictionary-based CS algorithm.
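The second route above (a regularized pseudoinverse against a dictionary) is fast because the expensive matrix inversion is precomputed once and each voxel then costs only a matrix-vector multiply. A minimal sketch with random stand-ins for the pdf dictionary and the q-space undersampling pattern; sizes and lambda are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n_q, n_atoms = 515, 300                  # q-space samples, dictionary atoms
D = rng.standard_normal((n_q, n_atoms))  # stand-in for a trained pdf dictionary

keep = rng.permutation(n_q)[:170]        # ~3x undersampling of q-space
Du = D[keep]                             # rows of D actually measured

lam = 0.1
# Precompute the Tikhonov-regularized pseudoinverse (done once, offline).
P = np.linalg.solve(Du.T @ Du + lam * np.eye(n_atoms), Du.T)

coeffs_true = rng.standard_normal(n_atoms)
y = Du @ coeffs_true                     # one voxel's undersampled data
recon = D @ (P @ y)                      # reconstructed full q-space signal
print(recon.shape)
```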
Principles for timing at spallation neutron sources based on developments at LANSCE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nelson, R. O.; Merl, R. B.; Rose, C. R.
2001-01-01
Due to AC-power-grid frequency fluctuations, the designers of accelerator-based spallation-neutron facilities have worked to optimize the conflicting demands of accelerator and neutron chopper performance. For the first time, we are able to quantitatively assess the tradeoffs between these two constraints and design or upgrade a facility to optimize total system performance using powerful new simulation techniques. We have modeled timing systems that integrate chopper controllers and chopper hardware, and built new systems. Thus, at LANSCE, we now operate multiple chopper systems and the accelerator as simple slaves to a single master-timing-reference generator. Based on this experience we recommend that spallation neutron sources adhere to three principles. First, timing for pulsed sources should be planned starting with extraction at a fixed phase and working backwards toward the leading edge of the beam pulse. Second, accelerator triggers and storage-ring extraction commands from neutron choppers offer only marginal benefits to accelerator-based spallation sources. Third, the storage-ring RF should be phase-synchronized with neutron choppers to provide extraction without the one-orbit timing uncertainty.
Sharma, Upendra Kumar; Sharma, Nandini; Gupta, Ajai Prakash; Kumar, Vinod; Sinha, Arun Kumar
2007-12-01
A simple, fast and sensitive RP-HPTLC method was developed for the simultaneous quantitative determination of vanillin and related phenolic compounds in ethanolic extracts of Vanilla planifolia pods. In addition, the applicability of accelerated solvent extraction (ASE) as an alternative to microwave-assisted extraction (MAE), ultrasound-assisted extraction (UAE) and Soxhlet extraction was explored for the rapid extraction of phenolic compounds from vanilla pods. Good separation was achieved on aluminium plates precoated with silica gel RP-18 F(254S) in a mobile phase of methanol/water/isopropanol/acetic acid (30:65:2:3, by volume). The method showed good linearity, high precision and good recovery of the compounds of interest. ASE showed good extraction efficiency in less time compared with the other techniques for all the phenolic compounds. The present method would be useful for analytical research and for routine analysis of vanilla extracts for their quality control.
Accelerating Families of Fuzzy K-Means Algorithms for Vector Quantization Codebook Design
Mata, Edson; Bandeira, Silvio; de Mattos Neto, Paulo; Lopes, Waslon; Madeiro, Francisco
2016-01-01
The performance of signal processing systems based on vector quantization depends on codebook design. In the image compression scenario, the quality of the reconstructed images depends on the codebooks used. In this paper, alternatives are proposed for accelerating families of fuzzy K-means algorithms for codebook design. The acceleration is obtained by reducing the number of iterations of the algorithms and applying efficient nearest neighbor search techniques. Simulation results concerning image vector quantization have shown that the acceleration obtained so far does not decrease the quality of the reconstructed images. Codebook design time savings up to about 40% are obtained by the accelerated versions with respect to the original versions of the algorithms. PMID:27886061
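A minimal fuzzy K-means (fuzzy c-means) codebook-design sketch follows. Only the simplest of the acceleration ideas above is reproduced, stopping early once the distortion stagnates; the efficient nearest-neighbor search variants are not shown, and all parameters are illustrative.

```python
import numpy as np

def fuzzy_kmeans(X, k, m=2.0, tol=1e-4, max_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), k, replace=False)]      # initial codebook
    last = np.inf
    for it in range(max_iter):
        d2 = ((X[:, None, :] - C[None]) ** 2).sum(-1) + 1e-12
        u = 1.0 / (d2 ** (1 / (m - 1)))              # fuzzy memberships
        u /= u.sum(1, keepdims=True)
        w = u ** m
        C = (w.T @ X) / w.sum(0)[:, None]            # weighted centroids
        dist = (w * d2).sum()                        # fuzzy distortion
        if last - dist < tol * last:                 # early stop saves iterations
            break
        last = dist
    return C, it + 1

X = np.random.default_rng(1).standard_normal((1000, 4))  # training vectors
codebook, iters = fuzzy_kmeans(X, k=16)
print(codebook.shape, "iterations:", iters)
```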
Comparison between DCA - SSO - VDR and VMAT dose delivery techniques for 15 SRS/SRT patients
NASA Astrophysics Data System (ADS)
Tas, B.; Durmus, I. F.
2018-02-01
To evaluate dose delivery between the Dynamic Conformal Arc (DCA) - Segment Shape Optimization (SSO) - Variable Dose Rate (VDR) and Volumetric Modulated Arc Therapy (VMAT) techniques for fifteen SRS patients using a Versa HD® linear accelerator. Optimized treatment plans for fifteen SRS/SRT patients were generated with the Monaco 5.11® treatment planning system (TPS) using 1 coplanar and 3 non-coplanar fields for the VMAT technique; the plans were then reoptimized with the same optimization parameters for the DCA-SSO-VDR technique. The advantages of the DCA-SSO-VDR technique were fewer MUs and less beam-on time; its larger segments also decrease the dosimetric uncertainties of small-field quality assurance. The advantages of the VMAT technique were slightly better GI, CI and PCI, brain V12Gy and mean brain dose. The results show that the plans for both techniques satisfied all clinical objectives and organ-at-risk (OAR) dose constraints. Depending on the shape and localization of the target, either technique can be chosen for linear-accelerator-based SRS/SRT treatment.
Flexible aircraft dynamic modeling for dynamic analysis and control synthesis
NASA Technical Reports Server (NTRS)
Schmidt, David K.
1989-01-01
The linearization and simplification of a nonlinear, literal model for flexible aircraft is highlighted. Areas of model fidelity that are critical if the model is to be used for control system synthesis are developed, and several simplification techniques that can deliver the necessary model fidelity are discussed. These techniques include both numerical and analytical approaches. An analytical approach, based on first-order sensitivity theory, is shown to lead not only to excellent numerical results, but also to closed-form analytical expressions for key system dynamic properties such as the pole/zero factors of the vehicle transfer-function matrix. The analytical results are expressed in terms of vehicle mass properties, vibrational characteristics, and rigid-body and aeroelastic stability derivatives, thus revealing the underlying causes for critical dynamic characteristics.
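A minimal sketch of the first-order sensitivity idea on a random stand-in system matrix (not an aircraft model): the first-order shifts of the poles (eigenvalues) of A under a perturbation dA are the diagonal entries of V^{-1} dA V, where V holds the right eigenvectors of A.

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((5, 5))            # stand-in system matrix
dA = 1e-4 * rng.standard_normal((5, 5))    # small parameter perturbation

lam, V = np.linalg.eig(A)
pred = lam + np.diag(np.linalg.inv(V) @ dA @ V)   # first-order prediction

actual = np.linalg.eig(A + dA)[0]
# Match each predicted eigenvalue to the nearest exact perturbed one.
err = [np.abs(actual - p).min() for p in pred]
print("max first-order error:", max(err))
```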
USDA-ARS?s Scientific Manuscript database
Analytical methods for the determination of mycotoxins in foods are commonly based on chromatographic techniques (GC, HPLC or LC-MS). Although these methods permit a sensitive and accurate determination of the analyte, they require skilled personnel and are time-consuming, expensive, and unsuitable ...
Accelerated rescaling of single Monte Carlo simulation runs with the Graphics Processing Unit (GPU).
Yang, Owen; Choi, Bernard
2013-01-01
To interpret fiber-based and camera-based measurements of remitted light from biological tissues, researchers typically use analytical models, such as the diffusion approximation to light transport theory, or stochastic models, such as Monte Carlo modeling. To achieve rapid (ideally real-time) measurement of tissue optical properties, especially in clinical situations, there is a critical need to accelerate Monte Carlo simulation runs. In this manuscript, we report on our approach of using the graphics processing unit (GPU) to accelerate rescaling of single Monte Carlo runs to rapidly calculate diffuse reflectance values for different sets of tissue optical properties. We selected MATLAB to enable non-specialists in C and CUDA-based programming to use the generated open-source code. We developed a software package with four abstraction layers. To calculate a set of diffuse reflectance values from a simulated tissue with homogeneous optical properties, our rescaling GPU-based approach achieves a reduction in computation time of several orders of magnitude compared to other GPU-based approaches. Specifically, our GPU-based approach generated a diffuse reflectance value in 0.08 ms. The transfer time from CPU to GPU memory is currently a limiting factor for GPU-based calculations. However, for calculation of multiple diffuse reflectance values, our GPU-based approach can still lead to processing that is ~3400 times faster than other GPU-based approaches.
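A minimal CPU-only sketch of the rescaling idea itself (the paper's contribution is running this step on a GPU for many optical-property sets at once): a single baseline Monte Carlo run with zero absorption records each detected photon's total path length L, and diffuse reflectance for any absorption coefficient then follows from Beer-Lambert reweighting. The path lengths below are a synthetic stand-in for a real run's output.

```python
import numpy as np

rng = np.random.default_rng(4)
L_cm = rng.exponential(scale=2.0, size=100_000)   # recorded path lengths (cm)

def diffuse_reflectance(mua_per_cm):
    # Reweight every detected photon by exp(-mua * L): one run, many answers.
    return np.mean(np.exp(-mua_per_cm * L_cm))

for mua in (0.01, 0.1, 1.0):
    print(f"mua={mua}: R={diffuse_reflectance(mua):.4f}")
```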
Gas-filled capillaries for plasma-based accelerators
NASA Astrophysics Data System (ADS)
Filippi, F.; Anania, M. P.; Brentegani, E.; Biagioni, A.; Cianchi, A.; Chiadroni, E.; Ferrario, M.; Pompili, R.; Romeo, S.; Zigler, A.
2017-07-01
Plasma wakefield accelerators are based on the excitation of large-amplitude plasma waves by either a laser or a particle driver beam. The amplitude of the waves, as well as their spatial dimensions and the consequent accelerating gradient, depend strongly on the background electron density along the path of the accelerated particles. The process needs stable and reliable plasma sources, whose density profile must be controlled and properly engineered to ensure the appropriate accelerating mechanism. Plasma confinement inside gas-filled capillaries has been studied in the past, since this technique allows control of the evolution of the plasma, ensuring a stable and repeatable plasma density distribution during the interaction with the drivers. Moreover, in a gas-filled capillary the plasma can be pre-ionized by a current discharge to avoid ionization losses. Different capillary geometries have been studied to allow the proper temporal and spatial evolution of the plasma along the acceleration length. Results of this analysis, obtained by varying the length and the number of gas inlets, will be presented.
Role of Knowledge Management and Analytical CRM in Business: Data Mining Based Framework
ERIC Educational Resources Information Center
Ranjan, Jayanthi; Bhatnagar, Vishal
2011-01-01
Purpose: The purpose of the paper is to provide a thorough analysis of the concepts of business intelligence (BI), knowledge management (KM) and analytical CRM (aCRM) and to establish a framework for integrating all the three to each other. The paper also seeks to establish a KM and aCRM based framework using data mining (DM) techniques, which…
NASA Astrophysics Data System (ADS)
Bescond, Marc; Li, Changsheng; Mera, Hector; Cavassilas, Nicolas; Lannoo, Michel
2013-10-01
We present a one-shot current-conserving approach to model the influence of electron-phonon scattering in nano-transistors using the non-equilibrium Green's function formalism. The approach is based on the lowest order approximation (LOA) to the current and its simplest analytic continuation (LOA+AC). By means of a scaling argument, we show how both LOA and LOA+AC can be easily obtained from the first iteration of the usual self-consistent Born approximation (SCBA) algorithm. Both LOA and LOA+AC are then applied to model n-type silicon nanowire field-effect transistors and are compared to SCBA current characteristics. In this system, the LOA fails to describe electron-phonon scattering, mainly because of the interactions with acoustic phonons at the band edges. In contrast, LOA+AC still approximates the SCBA current characteristics well, thus demonstrating the power of analytic continuation techniques. The limits of validity of LOA+AC are also discussed, and more sophisticated and general analytic continuation techniques are suggested for more demanding cases.
NASA Astrophysics Data System (ADS)
Sapra, Karan; Gupta, Saurabh; Atchley, Scott; Anantharaj, Valentine; Miller, Ross; Vazhkudai, Sudharshan
2016-04-01
Efficient resource utilization is critical for improved end-to-end computing and workflow of scientific applications. Heterogeneous node architectures, such as the GPU-enabled Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF), present us with further challenges. In many HPC applications on Titan, the accelerators are the primary compute engines while the CPUs orchestrate the offloading of work onto the accelerators and move the output back to main memory. On the other hand, in applications that do not exploit GPUs, CPU usage is dominant while the GPUs idle. We utilized the Heterogeneous Functional Partitioning (HFP) runtime framework, which can optimize usage of resources on a compute node to expedite an application's end-to-end workflow. This approach differs from existing techniques for in-situ analyses in that it provides a framework for on-the-fly, on-node analysis by dynamically exploiting under-utilized resources therein. We have implemented in the Community Earth System Model (CESM) a new concurrent diagnostic processing capability enabled by the HFP framework. Various univariate statistics, such as means and distributions, are computed in situ by launching HFP tasks on the GPU via the node-local HFP daemon. Since our current configuration of CESM does not use GPU resources heavily, we can move these tasks to the GPU using the HFP framework. Each rank running the atmospheric model in CESM pushes the variables of interest via HFP function calls to the HFP daemon. This node-local daemon is responsible for receiving the data from the main program and launching the designated analytics tasks on the GPU. We have implemented these analytics tasks in C and use OpenACC directives to enable GPU acceleration. This methodology is also advantageous when executing GPU-enabled configurations of CESM, since the CPUs would otherwise be idle during portions of the runtime. Our results demonstrate that it is more efficient to use the HFP framework to offload tasks to the GPU than to perform them in the main application: we observe increased resource utilization and overall productivity with this approach.
Longitudinal phase space tomography using a booster cavity at PITZ
NASA Astrophysics Data System (ADS)
Malyutin, D.; Gross, M.; Isaev, I.; Khojoyan, M.; Kourkafas, G.; Krasilnikov, M.; Marchetti, B.; Otevrel, M.; Stephan, F.; Vashchenko, G.
2017-11-01
Knowledge of the longitudinal phase space (LPS) of electron beams is of great importance for optimizing the performance of high-brightness photoinjectors. To obtain the longitudinal phase space of an electron bunch in a linear accelerator, a tomographic technique can be used. The method is based on measurements of the bunch momentum spectra while varying the bunch energy chirp. The energy chirp can be varied by one of the RF accelerating structures in the accelerator, and the resulting momentum distribution can be measured with a dipole spectrometer further downstream. As a result, the longitudinal phase space can be reconstructed. Application of the tomographic technique to reconstruction of the longitudinal phase space is introduced in detail in this paper. Measurement results from the PITZ facility are shown and analyzed.
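A minimal sketch of the tomographic step, not the PITZ analysis code: varying the RF-induced chirp effectively rotates the longitudinal phase space, so each measured momentum spectrum is a 1D projection of it, and the density can be recovered with standard filtered back-projection. A toy phase-space density stands in for the bunch, and scikit-image is an assumed dependency.

```python
import numpy as np
from skimage.transform import radon, iradon

# Toy longitudinal phase space: a tilted Gaussian bunch on a 128x128 grid.
z, p = np.meshgrid(np.linspace(-3, 3, 128), np.linspace(-3, 3, 128))
phase_space = np.exp(-((z + 0.5 * p) ** 2) / 0.3 - p ** 2 / 1.5)

# Momentum spectra at many chirp settings = projections at many angles.
angles = np.linspace(0.0, 180.0, 60, endpoint=False)
spectra = radon(phase_space, theta=angles, circle=False)   # sinogram

recon = iradon(spectra, theta=angles, circle=False)        # filtered back-proj.
err = np.abs(recon - phase_space).max() / phase_space.max()
print("max relative error:", round(err, 3))
```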
Kojima, A; Hanada, M; Tobari, H; Nishikiori, R; Hiratsuka, J; Kashiwagi, M; Umeda, N; Yoshida, M; Ichikawa, M; Watanabe, K; Yamano, Y; Grisham, L R
2016-02-01
Design techniques for vacuum insulation have been developed in order to realize a reliable voltage-holding capability of multi-aperture multi-grid (MAMuG) accelerators for fusion applications. In this method, the nested multi-stage configuration of the MAMuG accelerator can be uniquely designed to satisfy the target voltage within given boundary conditions. The evaluation of the voltage-holding capabilities of each acceleration stage was based on previous experimental results on the area effect and the multi-aperture effect. Since the multi-grid effect was found to be the extension of the area effect by the total facing area, the total voltage-holding capability of the multi-stage configuration can be estimated from that of a single stage by assuming the stage with the highest electric field, the total facing area, and the total number of apertures. Applying these considerations, the analysis of the 3-stage MAMuG accelerator for JT-60SA agreed with past gap-scan experiments to within 10%, which demonstrates the high reliability of this approach to designing MAMuG accelerators and multi-stage high-voltage bushings.
Advanced Accelerators: Particle, Photon and Plasma Wave Interactions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, Ronald L.
2017-06-29
The overall objective of this project was to study the acceleration of electrons to very high energies over very short distances, based on trapping slowly moving electrons in the fast-moving potential wells of large-amplitude plasma waves, which have relativistic phase velocities. These relativistic plasma waves, or wakefields, are the basis of table-top accelerators that have been shown to accelerate electrons to the same high energies as kilometer-length linear particle colliders operating with traditional, decades-old acceleration techniques. The accelerating electrostatic fields of relativistic plasma wave accelerators can be as large as gigavolts/meter, and our goal was to study techniques for remotely measuring these large fields by injecting low-energy probe electron beams across the plasma wave and measuring the beam's deflection. Our method of study was computer simulation, and the results suggested that the deflection of the probe electron beam is directly proportional to the amplitude of the plasma wave. This is the basis of a proposed diagnostic technique, and numerous studies were performed to determine the effects of changing the electron beam, plasma wave and laser beam parameters. Further simulation studies included copropagating laser beams with the relativistic plasma waves. New and interesting results came out of these studies, including the prediction that very small-scale electron beam bunching occurs, and that an anomalous line focusing of the electron beam occurs under certain conditions. These studies were summarized in the dissertation of a graduate student who obtained the Ph.D. in physics. This past research program has motivated ideas for further research to corroborate these results using particle-in-cell simulation tools, which will help design a proof-of-concept experiment in our laboratory and a scaled-up version for testing at a major wakefield accelerator facility.
Considerations for monitoring raptor population trends based on counts of migrants
Titus, K.; Fuller, M.R.; Ruos, J.L.; Meyburg, B-U.; Chancellor, R.D.
1989-01-01
Various problems were identified with standardized hawk-count data as annually collected at six sites. Some of the hawk lookouts increased their hours of observation from 1979-1985, thereby confounding the total counts. Data-recording practices and missing data hamper the coding of the data and their use with modern analytical techniques. Coefficients of variation among years in counts averaged about 40%. The advantages and disadvantages of various analytical techniques are discussed, including regression, non-parametric rank-correlation trend analysis, and moving averages.
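A minimal sketch comparing two of the trend techniques discussed: linear regression on log counts and a non-parametric rank correlation. The counts are synthetic, generated with roughly the 40% among-year coefficient of variation noted above and an assumed 3%/yr underlying increase.

```python
import numpy as np
from scipy.stats import spearmanr, linregress

rng = np.random.default_rng(5)
years = np.arange(1979, 1986)
trend = 500 * 1.03 ** (years - years[0])           # assumed 3%/yr increase
counts = rng.normal(trend, 0.4 * trend).clip(1)    # ~40% CV among years

rho, p_rank = spearmanr(years, counts)             # rank-correlation trend test
fit = linregress(years, np.log(counts))            # regression on log counts
print(f"Spearman rho={rho:.2f} (p={p_rank:.2f}); "
      f"regression trend={np.expm1(fit.slope):+.1%}/yr (p={fit.pvalue:.2f})")
```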
NASA Astrophysics Data System (ADS)
Korytov, M. S.; Shcherbakov, V. S.; Titenko, V. V.
2018-01-01
Limiting the swing of a bridge crane's cargo rope is a matter of urgency, as it can significantly improve the efficiency and safety of the work performed. In order to completely damp the pendulum swing after a bridge or bridge-crane freight cart accelerates to maximum speed, it is necessary, under ordinary control of the electric motor, to split the acceleration process into a minimum of three intervals. For the dynamic system of a bridge-crane load swinging on a flexible cable in a single vertical plane, an analytical solution was obtained for the time dependence of the cargo rope angle relative to the gravitational vertical when the cargo suspension point moves with constant acceleration. The resulting analytical expressions for the cargo rope angle and its first derivative make it possible to split the motion of the cargo suspension point into three stages of acceleration and braking with different accelerations and to reach the maximum travel speed of the suspension point, while fulfilling the condition of eliminating the swing of the cargo rope relative to the gravitational vertical. Examples are given of reaching the maximum speed while eliminating rope swing.
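A minimal check, not the authors' derivation, of the small-angle analytical solution for the rope angle when the suspension point moves with constant horizontal acceleration a: starting from rest, phi(t) = -(a/g)(1 - cos(w t)) with w = sqrt(g/l). It is compared against direct integration of the full pendulum equation; all parameter values are assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

g, l, a = 9.81, 10.0, 0.5          # gravity, rope length, cart acceleration
w = np.sqrt(g / l)

def rhs(t, y):
    phi, dphi = y                  # rope angle from vertical and its rate
    return [dphi, -(g / l) * np.sin(phi) - (a / l) * np.cos(phi)]

t = np.linspace(0, 10, 6)
num = solve_ivp(rhs, (0, 10), [0.0, 0.0], t_eval=t).y[0]
ana = -(a / g) * (1 - np.cos(w * t))   # small-angle analytical solution
print(np.round(num, 4))
print(np.round(ana, 4))
```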
Microprocessor Based Temperature Control of Liquid Delivery with Flow Disturbances.
ERIC Educational Resources Information Center
Kaya, Azmi
1982-01-01
Discusses the analytical design and experimental verification of a PID controller for a temperature-controlled liquid delivery system, demonstrating that the analytical design techniques can be experimentally verified by using digital controls as a tool. Digital control instrumentation and implementation are also demonstrated and documented for…
Artifact Noise Removal Techniques on Seismocardiogram Using Two Tri-Axial Accelerometers
Luu, Loc; Dinh, Anh
2018-01-01
The aim of this study is to investigate motion-noise removal techniques using a two-accelerometer sensor system and various placements of the sensors during gentle movement and walking of the patients. A Wi-Fi-based data acquisition system and a framework in Matlab were developed to collect and process data while the subjects are in motion. The tests include eight volunteers who have no record of heart disease. The walking and running data of the subjects are analyzed to find the minimal-noise bandwidth of the SCG signal. This bandwidth is used to design filters for the motion-noise removal techniques and peak signal detection. There are two main techniques for combining signals from the two sensors to mitigate the motion artifact: analog processing and digital processing. The analog processing comprises analog circuits performing adding or subtracting functions and a bandpass filter to remove artifact noise before entering the data acquisition system. The digital processing processes all the data using combinations of total acceleration and z-axis-only acceleration. The two techniques are tested on three placements of the accelerometer sensors (horizontal, vertical, and diagonal) during gentle motion and walking. In general, total acceleration and z-axis acceleration are the best techniques for dealing with gentle motion on all sensor placements, improving the average systolic signal-to-noise ratio (SNR) about 2-fold and the average diastolic SNR about 3-fold compared with traditional methods using only one accelerometer. With walking motion, the ADDER and z-axis acceleration are the best techniques for all placements of the sensors on the body, enhancing the average systolic SNR about 7-fold and the average diastolic SNR about 11-fold compared with the one-accelerometer method. Among the sensor placements, the performance of the horizontal placement of the sensors is outstanding compared with the other positions for all motions. PMID:29614821
Zhdanov, Michael S. [Salt Lake City, UT]
2008-01-29
Mineral exploration needs a reliable method to distinguish between uneconomic mineral deposits and economic mineralization. A method and system includes a geophysical technique for subsurface material characterization, mineral exploration and mineral discrimination. The technique introduced in this invention detects induced polarization effects in electromagnetic data and uses remote geophysical observations to determine the parameters of an effective conductivity relaxation model using a composite analytical multi-phase model of the rock formations. The conductivity relaxation model and analytical model can be used to determine parameters related by analytical expressions to the physical characteristics of the microstructure of the rocks and minerals. These parameters are ultimately used for the discrimination of different components in underground formations, and in this way provide an ability to distinguish between uneconomic mineral deposits and zones of economic mineralization using geophysical remote sensing technology.
NASA Astrophysics Data System (ADS)
Chen, Guohai; Meng, Zeng; Yang, Dixiong
2018-01-01
This paper develops an efficient method, termed PE-PIM, to obtain the exact nonstationary responses of a pavement structure, which is modeled as a rectangular thin plate resting on a bi-parametric Pasternak elastic foundation subjected to stochastic moving loads with constant acceleration. Firstly, analytical power spectral density (PSD) functions of the random responses of the thin plate are derived by integrating the pseudo-excitation method (PEM) with Duhamel's integral. Based on the PEM, a new equivalent von Mises stress (NEVMS) is proposed, whose PSD function contains all cross-PSD functions between stress components. Then, the PE-PIM, which combines the PEM with the precise integration method (PIM), is presented to compute the stochastic responses of the plate efficiently by replacing Duhamel's integral with the PIM. Moreover, semi-analytical Monte Carlo simulation is employed to verify the computational results of the developed PE-PIM. Finally, numerical examples demonstrate the high accuracy and efficiency of the PE-PIM for nonstationary random vibration analysis. The effects of the velocity and acceleration of the moving load, the boundary conditions of the plate and the foundation stiffness on the deflection and NEVMS responses are scrutinized.
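A minimal illustration of the pseudo-excitation method itself on a single-DOF oscillator, not the plate-on-Pasternak-foundation model above: a pseudo harmonic load sqrt(S_ff(w)) e^{iwt} is applied, and the squared modulus of the harmonic response recovers the exact response PSD |H|^2 S_ff. All system parameters and the load PSD are assumptions.

```python
import numpy as np

m, c, k = 1.0, 0.4, 100.0                # assumed mass, damping, stiffness
omega = np.linspace(0.1, 30.0, 500)
S_ff = 1.0 / (1.0 + (omega / 10) ** 2)   # assumed load PSD

H = 1.0 / (k - m * omega**2 + 1j * c * omega)   # receptance FRF

y_pseudo = H * np.sqrt(S_ff)       # harmonic response to the pseudo load
S_yy_pem = np.abs(y_pseudo) ** 2   # PEM estimate of the response PSD
S_yy_ref = np.abs(H) ** 2 * S_ff   # textbook result for comparison

print(np.allclose(S_yy_pem, S_yy_ref))   # True
```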
Spatiotemporal Airy Ince-Gaussian wave packets in strongly nonlocal nonlinear media.
Peng, Xi; Zhuang, Jingli; Peng, Yulian; Li, DongDong; Zhang, Liping; Chen, Xingyu; Zhao, Fang; Deng, Dongmei
2018-03-08
The self-accelerating Airy Ince-Gaussian (AiIG) and Airy helical Ince-Gaussian (AihIG) wave packets in strongly nonlocal nonlinear media (SNNM) are obtained by solving the strongly nonlocal nonlinear Schrödinger equation. For the first time, the propagation properties of three-dimensional localized AiIG and AihIG breathers and solitons in the SNNM are demonstrated: these spatiotemporal wave packets maintain their self-accelerating and approximately non-dispersive properties in the temporal dimension while oscillating periodically (breather state) or remaining steady (soliton state) in the spatial dimension. In particular, numerical experiments of their spatial intensity distribution, numerical simulations of their spatiotemporal distribution, as well as the transverse energy flow and the angular momentum in the SNNM are presented. Typical examples of the obtained solutions are based on the ratio between the input power and the critical power, the ellipticity, and the strong-nonlocality parameter. Comparisons of the analytical solutions with numerical simulations and numerical experiments of the AiIG and AihIG optical solitons show that the numerical results agree well with the analytical solutions in the case of strong nonlocality.
NASA Astrophysics Data System (ADS)
Hou, X. Y.; Koh, C. G.; Kuang, K. S. C.; Lee, W. H.
2017-07-01
This paper investigates the capability of a novel piezoelectric sensor for low-frequency and low-amplitude vibration measurement. The proposed design effectively amplifies the input acceleration via two amplifying mechanisms and thus eliminates the external charge amplifier or conditioning amplifier typically employed in measurement systems. The sensor is also self-powered, i.e. no external power unit is required. Consequently, wiring and electrical insulation for on-site measurement are considerably simpler. In addition, the design greatly reduces interference from the rotational motion that often accompanies the translational acceleration to be measured. An analytical model is developed based on a set of piezoelectric constitutive equations and beam theory. A closed-form expression is derived to correlate the sensor geometry and material properties with its dynamic performance. Experimental calibration is then carried out to validate the analytical model. After calibration, experiments are carried out to check the feasibility of the new sensor for structural vibration detection. From the experimental results, it is concluded that the proposed sensor is suitable for measuring low-frequency and low-amplitude vibrations.
NASA Astrophysics Data System (ADS)
Zhang, Zhen; Xia, Changliang; Yan, Yan; Geng, Qiang; Shi, Tingna
2017-08-01
Due to the complicated rotor structure and nonlinear saturation of the rotor bridges, it is difficult to build a fast and accurate analytical field-calculation model for multilayer interior permanent magnet (IPM) machines. In this paper, a hybrid analytical model suitable for the open-circuit field calculation of multilayer IPM machines is proposed by coupling the magnetic equivalent circuit (MEC) method and the subdomain technique. In the proposed analytical model, the rotor magnetic field is calculated by the MEC method based on Kirchhoff's law, while the field in the stator slot, slot opening and air gap is calculated by the subdomain technique based on Maxwell's equations. To solve the whole field distribution of multilayer IPM machines, coupled boundary conditions on the rotor surface are deduced for the coupling of the rotor MEC and the analytical field distribution of the stator slot, slot opening and air gap. The hybrid analytical model can be used to calculate the open-circuit air-gap field distribution, back electromotive force (EMF) and cogging torque of multilayer IPM machines. Compared with finite element analysis (FEA), it has the advantages of faster modeling, lower computational cost and shorter computation time, while achieving comparable accuracy. The analytical model is helpful and applicable for the open-circuit field calculation of multilayer IPM machines with any size and pole/slot number combination.
Anandakrishnan, Ramu; Scogland, Tom R W; Fenley, Andrew T; Gordon, John C; Feng, Wu-chun; Onufriev, Alexey V
2010-06-01
Tools that compute and visualize biomolecular electrostatic surface potential have been used extensively for studying biomolecular function. However, determining the surface potential for large biomolecules on a typical desktop computer can take days or longer using currently available tools and methods. Two commonly used techniques to speed up these types of electrostatic computations are approximations based on multi-scale coarse-graining and parallelization across multiple processors. This paper demonstrates that, for the computation of electrostatic surface potential, these two techniques can be combined to deliver significantly greater speed-up than either one separately, something that is not always possible in general. Specifically, the electrostatic potential computation, using an analytical linearized Poisson-Boltzmann (ALPB) method, is approximated using the hierarchical charge partitioning (HCP) multi-scale method and parallelized on an ATI Radeon 4870 graphics processing unit (GPU). The implementation delivers a combined 934-fold speed-up for a 476,040-atom viral capsid, compared to an equivalent non-parallel implementation on an Intel E6550 CPU without the approximation. This speed-up is significantly greater than the 42-fold speed-up for the HCP approximation alone or the 182-fold speed-up for the GPU alone.
Special issue on compact x-ray sources
NASA Astrophysics Data System (ADS)
Hooker, Simon; Midorikawa, Katsumi; Rosenzweig, James
2014-04-01
Journal of Physics B: Atomic, Molecular and Optical Physics is delighted to announce a forthcoming special issue on compact x-ray sources, to appear in the winter of 2014, and invites you to submit a paper. The potential for high-brilliance x- and gamma-ray sources driven by advanced, compact accelerators has gained increasing attention in recent years. These novel sources—sometimes dubbed 'fifth generation sources'—will build on the revolutionary advance of the x-ray free-electron laser (FEL). New radiation sources of this type have widespread applications, including in ultra-fast imaging, diagnostic and therapeutic medicine, and studies of matter under extreme conditions. Rapid advances in compact accelerators and in FEL techniques make this an opportune moment to consider the opportunities which could be realized by bringing these two fields together. Further, the successful development of compact radiation sources driven by compact accelerators will be a significant milestone on the road to the development of high-gradient colliders able to operate at the frontiers of particle physics. Thus the time is right to publish a peer-reviewed collection of contributions concerning the state-of-the-art in: advanced and novel acceleration techniques; sophisticated physics at the frontier of FELs; and the underlying and enabling techniques of high-brightness electron beam physics. Interdisciplinary research connecting two or more of these fields is also increasingly represented, as exemplified by entirely new concepts such as plasma-based electron beam sources, and coherent imaging with fs-class electron beams. We hope that in producing this special edition of Journal of Physics B: Atomic, Molecular and Optical Physics (iopscience.iop.org/0953-4075/) we may help further a challenging mission and ongoing intellectual adventure: the harnessing of newly emergent, compact advanced accelerators to the creation of new, agile light sources with unprecedented capabilities. Topics include:
- New schemes for compact accelerators: laser- and beam-driven plasma accelerators; dielectric laser accelerators; THz accelerators.
- Latest results for compact accelerators.
- Target design and staging of advanced accelerators.
- Advanced injection and phase space manipulation techniques.
- Novel diagnostics: single-shot measurement of sub-fs bunch duration; measurement of ultra-low emittance.
- Generation and characterization of incoherent radiation: betatron and undulator radiation; Thomson/Compton scattering sources; novel THz sources.
- Generation and characterization of coherent radiation.
- Novel FEL simulation techniques.
- Advances in simulations of novel accelerators: simulations of injection and acceleration processes; simulations of coherent and incoherent radiation sources; start-to-end simulations of fifth generation light sources.
- Novel undulator schemes.
- Novel laser drivers for laser-driven accelerators: high-repetition-rate laser systems; high wall-plug efficiency systems.
- Applications of compact accelerators: imaging; radiography; medical applications; electron diffraction and microscopy.
Please submit your article by 15 May 2014 (expected web publication: winter 2014); submissions received after this date will be considered for the journal, but may not be included in the special issue.
Qiu, Junlang; Wang, Fuxin; Zhang, Tianlang; Chen, Le; Liu, Yuan; Zhu, Fang; Ouyang, Gangfeng
2018-01-02
Decreasing the tedious sample-preparation time is one of the most important concerns in environmental analytical chemistry, especially for in vivo experiments. However, due to the slow mass-diffusion paths of most conventional methods, ultrafast in vivo sampling remains challenging. Herein, for the first time, we report an ultrafast in vivo solid-phase microextraction (SPME) device based on electrosorption enhancement and a novel custom-made CNT@PPY@pNE fiber for in vivo sampling of ionized acidic pharmaceuticals in fish. This sampling device exhibited excellent robustness, reproducibility, matrix-effect resistance, and quantitative ability. Importantly, the extraction kinetics of the targeted ionized pharmaceuticals were significantly accelerated using the device, which significantly improved the sensitivity of the SPME in vivo sampling method (limits of detection ranged from 0.12 ng·g^-1 to 0.25 ng·g^-1) and shortened the sampling time (only 1 min). The proposed approach was successfully applied to monitor the concentrations of ionized pharmaceuticals in living fish, demonstrating that the device and fiber are suitable for ultrafast in vivo sampling and continuous monitoring. In addition, the bioconcentration factor (BCF) values of the pharmaceuticals were derived in tilapia (Oreochromis mossambicus) for the first time, based on the data from ultrafast in vivo sampling. Therefore, we developed and validated an effective and ultrafast SPME sampling device for in vivo sampling of ionized analytes in living organisms, and this state-of-the-art method provides an alternative technique for future in vivo studies.
A Multivariate Distance-Based Analytic Framework for Connectome-Wide Association Studies
Shehzad, Zarrar; Kelly, Clare; Reiss, Philip T.; Craddock, R. Cameron; Emerson, John W.; McMahon, Katie; Copland, David A.; Castellanos, F. Xavier; Milham, Michael P.
2014-01-01
The identification of phenotypic associations in high-dimensional brain connectivity data represents the next frontier in the neuroimaging connectomics era. Exploration of brain-phenotype relationships remains limited by statistical approaches that are computationally intensive, depend on a priori hypotheses, or require stringent correction for multiple comparisons. Here, we propose a computationally efficient, data-driven technique for connectome-wide association studies (CWAS) that provides a comprehensive voxel-wise survey of brain-behavior relationships across the connectome; the approach identifies voxels whose whole-brain connectivity patterns vary significantly with a phenotypic variable. Using resting-state fMRI data, we demonstrate the utility of our analytic framework by identifying significant connectivity-phenotype relationships for full-scale IQ and assessing their overlap with existing neuroimaging findings, as synthesized by openly available automated meta-analysis (www.neurosynth.org). The results appeared to be robust to the removal of nuisance covariates (i.e., mean connectivity, global signal, and motion) and to varying brain resolution (i.e., voxelwise results are highly similar to results using 800 parcellations). We show that CWAS findings can be used to guide subsequent seed-based correlation analyses. Finally, we demonstrate the applicability of the approach by examining CWAS for three additional datasets, each encompassing a distinct phenotypic variable: neurotypical development, Attention-Deficit/Hyperactivity Disorder diagnostic status, and L-dopa pharmacological manipulation. For each phenotype, our approach to CWAS identified distinct connectome-wide association profiles, not previously attainable in a single study utilizing traditional univariate approaches. As a computationally efficient, extensible, and scalable method, our CWAS framework can accelerate the discovery of brain-behavior relationships in the connectome. PMID:24583255
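A minimal sketch of a distance-based (MDMR-style) test of the kind used per voxel in such frameworks, with synthetic data standing in for connectivity maps: subjects' whole-brain connectivity patterns give a pairwise distance matrix, a pseudo-F statistic measures association with a phenotype, and significance comes from permutation. The exact statistic used in the paper may differ; this follows one common formulation.

```python
import numpy as np

rng = np.random.default_rng(6)
n, v = 40, 200                                   # subjects, connectivity features
pheno = rng.standard_normal(n)                   # e.g., IQ (synthetic)
maps = rng.standard_normal((n, v)) + 0.3 * pheno[:, None]  # weak true effect

D = np.linalg.norm(maps[:, None] - maps[None], axis=-1)    # subject distances

def pseudo_f(D, x):
    n = len(x)
    J = np.eye(n) - np.ones((n, n)) / n
    G = -0.5 * J @ (D ** 2) @ J                  # Gower-centered inner products
    X = np.column_stack([np.ones(n), x])         # intercept + phenotype
    H = X @ np.linalg.solve(X.T @ X, X.T)        # hat matrix
    m = X.shape[1] - 1
    num = np.trace(H @ G @ H) / m
    den = np.trace((np.eye(n) - H) @ G @ (np.eye(n) - H)) / (n - m - 1)
    return num / den

f_obs = pseudo_f(D, pheno)
perms = [pseudo_f(D, rng.permutation(pheno)) for _ in range(999)]
p = (1 + sum(f >= f_obs for f in perms)) / 1000
print(f"pseudo-F={f_obs:.2f}, permutation p={p:.3f}")
```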
Ammari, Faten; Jouan-Rimbaud-Bouveresse, Delphine; Boughanmi, Néziha; Rutledge, Douglas N
2012-09-15
The aim of this study was to find objective analytical methods to study the degradation of edible oils during heating and thus to suggest solutions to improve their stability. The efficiency of Nigella seed extract as a natural antioxidant was compared with butylated hydroxytoluene (BHT) during accelerated oxidation of edible vegetable oils at 120 and 140 °C. The modifications during heating were monitored by 3D front-face fluorescence spectroscopy along with independent components analysis (ICA), (1)H NMR spectroscopy, and classical physico-chemical methods such as anisidine value and viscosity. The results of the study clearly indicate that the natural seed extract at a level of 800 ppm exhibited antioxidant effects similar to those of the synthetic antioxidant BHT at a level of 200 ppm, thus contributing to an increase in the oxidative stability of the oil.
Aerodynamic force measurement on a large-scale model in a short duration test facility
NASA Astrophysics Data System (ADS)
Tanno, H.; Kodera, M.; Komuro, T.; Sato, K.; Takahasi, M.; Itoh, K.
2005-03-01
A force measurement technique has been developed for large-scale aerodynamic models in short-duration test facilities. The technique is based on direct acceleration measurements, with miniature accelerometers mounted on a test model suspended by wires. By measuring acceleration at two different locations, the technique can eliminate oscillations from the natural vibration of the model. The technique was used for drag-force measurements on a 3 m long supersonic combustor model in the HIEST free-piston-driven shock tunnel. A time resolution of 350 μs is guaranteed during measurements, which is sufficient for the ms-order test times in HIEST. To evaluate measurement reliability and accuracy, the measured values were compared with results from a three-dimensional Navier-Stokes numerical simulation. The difference between the measured and simulated values was less than 5%. We conclude that this measurement technique is sufficiently reliable for measuring aerodynamic force within test durations of 1 ms.
Determination of a Limited Scope Network's Lightning Detection Efficiency
NASA Technical Reports Server (NTRS)
Rompala, John T.; Blakeslee, R.
2008-01-01
This paper outlines a modeling technique to map lightning detection efficiency variations over a region surveyed by a sparse array of ground based detectors. A reliable flash peak current distribution (PCD) for the region serves as the technique's base. This distribution is recast as an event probability distribution function. The technique then uses the PCD together with information regarding site signal detection thresholds, the type of solution algorithm used, and range attenuation to formulate the probability that a flash at a specified location will yield a solution. Applying this technique to the full region produces detection efficiency contour maps specific to the parameters employed. These contours facilitate a comparative analysis of each parameter's effect on the network's detection efficiency. In an alternate application, this modeling technique gives an estimate of the number, strength, and distribution of events going undetected. This approach leads to a variety of event density contour maps. This application is also illustrated. The technique's base PCD can be empirical or analytical. A process for formulating an empirical PCD specific to the region and network being studied is presented. A new method for producing an analytical representation of the empirical PCD is also introduced.
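A minimal Monte Carlo version of that probability calculation is sketched below, assuming a toy log-normal peak current distribution, 1/r range attenuation, fixed site thresholds, and a solution algorithm that requires four reporting sites. Every one of these choices is an illustrative stand-in for the paper's empirically derived inputs.

```python
import numpy as np

rng = np.random.default_rng(0)
sites = np.array([[0, 0], [120, 0], [0, 120], [120, 120]])  # km (assumed)
threshold = 2.0      # minimum detectable signal per site (arbitrary units)
n_required = 4       # stations needed for a solution (assumed algorithm)

def sample_peak_current(n):
    """Toy log-normal stand-in for the empirical PCD."""
    return rng.lognormal(mean=2.3, sigma=0.7, size=n)       # kA

def detection_efficiency(x, y, n_flash=2000):
    """Fraction of flashes at (x, y) detected by >= n_required sites,
    assuming signal ~ peak current / range (1/r attenuation)."""
    ip = sample_peak_current(n_flash)
    r = np.hypot(sites[:, 0] - x, sites[:, 1] - y)          # km to each site
    signal = ip[:, None] / np.maximum(r, 1.0)[None, :]
    return np.mean((signal >= threshold).sum(axis=1) >= n_required)

# Evaluate on a coarse grid to produce contour-ready efficiency values.
xs = ys = np.linspace(-100, 220, 9)
grid = np.array([[detection_efficiency(x, y) for x in xs] for y in ys])
print(np.round(grid, 2))
```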
Chemical and Biological Dynamics Using Droplet-Based Microfluidics.
Dressler, Oliver J; Casadevall I Solvas, Xavier; deMello, Andrew J
2017-06-12
Recent years have witnessed an increased use of droplet-based microfluidic techniques in a wide variety of chemical and biological assays. Nevertheless, obtaining dynamic data from these platforms has remained challenging, as this often requires reading the same droplets (possibly thousands of them) multiple times over a wide range of intervals (from milliseconds to hours). In this review, we introduce the elemental techniques for the formation and manipulation of microfluidic droplets, together with the most recent developments in these areas. We then discuss a wide range of analytical methods that have been successfully adapted for analyte detection in droplets. Finally, we highlight a diversity of studies where droplet-based microfluidic strategies have enabled the characterization of dynamic systems that would otherwise have remained unexplorable.
Automatic Beam Path Analysis of Laser Wakefield Particle Acceleration Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rubel, Oliver; Geddes, Cameron G.R.; Cormier-Michel, Estelle
2009-10-19
Numerical simulations of laser wakefield particle accelerators play a key role in the understanding of the complex acceleration process and in the design of expensive experimental facilities. As the size and complexity of simulation output grows, an increasingly acute challenge is the practical need for computational techniques that aid in scientific knowledge discovery. To that end, we present a set of data-understanding algorithms that work in concert in a pipeline fashion to automatically locate and analyze high energy particle bunches undergoing acceleration in very large simulation datasets. These techniques work cooperatively by first identifying features of interest in individual timesteps, then integrating features across timesteps, and, based on the information derived, performing analysis of temporally dynamic features. This combination of techniques supports accurate detection of particle beams, enabling a deeper level of scientific understanding of physical phenomena than has been possible before. By combining efficient data analysis algorithms and state-of-the-art data management we enable high-performance analysis of extremely large particle datasets in 3D. We demonstrate the usefulness of our methods for a variety of 2D and 3D datasets and discuss the performance of our analysis pipeline.
Sanyal, Doyeli; Rani, Anita; Alam, Samsul; Gujral, Seema; Gupta, Ruchi
2011-11-01
Simple and efficient multi-residue analytical methods were developed and validated for the determination of 13 organochlorine and 17 organophosphorous pesticides in soil, spinach and eggplant. Techniques, namely accelerated solvent extraction and dispersive SPE, were used for sample preparation. The recovery studies were carried out by spiking the samples at three concentration levels (1x, 5x, and 10x the limit of quantification (LOQ)). The methods were subjected to a thorough validation procedure. The mean recoveries for soil, spinach and eggplant were in the range of 70-120% with median CV (%) below 10%. The total uncertainty was evaluated taking into consideration four main independent sources, viz. weighing, purity of the standard, the GC calibration curve, and repeatability. The expanded uncertainty was well below 10% for most of the pesticides, and the rest fell in the range of 10-20%.
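The uncertainty budget described here is a standard GUM-style combination: the relative standard uncertainties of the four sources are summed in quadrature and multiplied by a coverage factor of k = 2 to give the expanded uncertainty. A minimal sketch with illustrative numbers (not values from the study):

```python
import math

# Illustrative relative standard uncertainties (%), one per source.
u_weighing, u_purity, u_cal, u_repeat = 0.2, 0.5, 2.1, 3.0

# Combine independent sources in quadrature, then expand with k = 2 (~95%).
u_combined = math.sqrt(u_weighing**2 + u_purity**2 + u_cal**2 + u_repeat**2)
U_expanded = 2 * u_combined
print(f"combined: {u_combined:.1f} %, expanded (k=2): {U_expanded:.1f} %")
```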
Many-core graph analytics using accelerated sparse linear algebra routines
NASA Astrophysics Data System (ADS)
Kozacik, Stephen; Paolini, Aaron L.; Fox, Paul; Kelmelis, Eric
2016-05-01
Graph analytics is a key component in identifying emerging trends and threats in many real-world applications. Large-scale graph analytics frameworks provide a convenient and highly-scalable platform for developing algorithms to analyze large datasets. Although conceptually scalable, these techniques exhibit poor performance on modern computational hardware. Another model of graph computation has emerged that promises improved performance and scalability by using abstract linear algebra operations as the basis for graph analysis, as laid out by the GraphBLAS standard. By using sparse linear algebra as the basis, existing highly efficient algorithms can be adapted to perform computations on the graph. This approach, however, is often less intuitive to graph analytics experts, who are accustomed to vertex-centric APIs such as Giraph, GraphX, and Tinkerpop. We are developing an implementation of the high-level operations supported by these APIs in terms of linear algebra operations. This implementation is backed by many-core implementations of the fundamental GraphBLAS operations required, and offers the advantages of both the intuitive programming model of a vertex-centric API and the performance of a sparse linear algebra implementation. This technology can reduce the number of nodes required, as well as the run-time for a graph analysis problem, enabling customers to perform more complex analysis with less hardware at lower cost. All of this can be accomplished without requiring the customer to make any changes to their analytics code, thanks to the compatibility with existing graph APIs.
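The central idea, expressing a vertex-centric traversal in terms of sparse linear algebra, can be sketched with an ordinary sparse matrix library: one level of breadth-first search is a sparse matrix-vector product. The sketch below uses scipy.sparse purely for illustration rather than a GraphBLAS implementation, and the small graph is invented.

```python
import numpy as np
from scipy.sparse import csr_matrix

# Small directed graph as an adjacency matrix A (A[i, j] = edge i -> j).
edges = [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4)]
n = 5
rows, cols = zip(*edges)
A = csr_matrix((np.ones(len(edges), dtype=np.int8), (rows, cols)), shape=(n, n))

def bfs_levels(A, source):
    """Level-synchronous BFS: one sparse mat-vec per frontier expansion."""
    n = A.shape[0]
    levels = np.full(n, -1)
    frontier = np.zeros(n, dtype=bool)
    frontier[source] = True
    level = 0
    while frontier.any():
        levels[frontier] = level
        # Next frontier: vertices reachable from the current frontier
        # (A.T @ frontier) that have not been visited yet.
        frontier = (A.T @ frontier).astype(bool) & (levels == -1)
        level += 1
    return levels

print(bfs_levels(A, 0))   # -> [0 1 1 2 3]
```

In a GraphBLAS setting the same mat-vec runs over a boolean semiring and with masks, which is what enables the highly tuned many-core kernels the abstract describes.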
Simulations of heart valves by thin shells with non-linear material properties
NASA Astrophysics Data System (ADS)
Borazjani, Iman; Asgharzadeh, Hafez; Hedayat, Mohammadali
2016-11-01
The primary function of a heart valve is to allow blood to flow in only one direction through the heart. A triangular thin-shell finite element formulation that considers only translational degrees of freedom is implemented in a three-dimensional domain to simulate heart valves undergoing large deformations. The formulation is based on the nonlinear Kirchhoff thin-shell theory. The developed method is extensively validated against numerical and analytical benchmarks. This method is added to a previously developed membrane method to obtain more realistic results, since ignoring bending forces can result in unrealistic wrinkling of heart valves. A nonlinear Fung-type constitutive relation, based on experimentally measured biaxial loading tests, is used to model the material response of the in-plane motion of heart valves. Furthermore, an experimentally measured linear constitutive relation is used to model the material properties that capture the flexural motion of heart valves. The fluid-structure interaction solver adopts a strongly coupled partitioned approach that is stabilized with under-relaxation and the Aitken acceleration technique. This work was supported by American Heart Association (AHA) Grant 13SDG17220022 and the Center of Computational Research (CCR) of University at Buffalo.
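Aitken acceleration in this partitioned-coupling context is a dynamic under-relaxation: the relaxation factor is re-estimated at every iteration from successive interface residuals. The generic fixed-point sketch below uses the standard Aitken update from the partitioned-FSI literature; the demo map is a made-up stand-in for the fluid and structure solvers.

```python
import numpy as np

def aitken_fixed_point(G, x0, omega0=0.5, tol=1e-10, max_iter=100):
    """Fixed-point iteration x = G(x) with Aitken dynamic under-relaxation,
    as used to stabilize strongly coupled partitioned FSI schemes."""
    x = np.asarray(x0, dtype=float)
    r_old = None
    omega = omega0
    for k in range(max_iter):
        r = G(x) - x                                 # interface residual
        if np.linalg.norm(r) < tol:
            return x, k
        if r_old is not None:
            dr = r - r_old
            # Secant-like estimate of the best relaxation factor.
            omega = -omega * (r_old @ dr) / (dr @ dr)
        x = x + omega * r
        r_old = r
    return x, max_iter

# Demo on a simple contractive nonlinear map.
G = lambda x: np.array([0.5 * np.cos(x[1]), 0.5 * np.sin(x[0]) + 0.3])
x, iters = aitken_fixed_point(G, np.zeros(2))
print(x, "converged in", iters, "iterations")
```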
Punctuated evolution and robustness in morphogenesis
Grigoriev, D.; Reinitz, J.; Vakulenko, S.; Weber, A.
2014-01-01
This paper presents an analytic approach to the pattern stability and evolution problem in morphogenesis. The approach used here is based on ideas from gene and neural network theory. We assume that gene networks contain a number of small groups of genes (called hubs) controlling the morphogenesis process. Hub genes represent an important element of gene network architecture, and their existence is empirically confirmed. We show that hubs can stabilize the morphogenetic pattern and accelerate morphogenesis. The hub activity exhibits an abrupt change depending on the mutation frequency. When the mutation frequency is small, these hubs suppress all mutations and gene product concentrations do not change; thus, the pattern is stable. When the environmental pressure increases and the population needs new genotypes, genetic drift and other effects increase the mutation frequency. For frequencies larger than a critical value the hubs turn off, and as a result many mutations can affect the phenotype. This effect can serve as an engine for evolution. We show that this engine is very effective: the evolution acceleration is an exponential function of gene redundancy. Finally, we show that the Eldredge-Gould concept of punctuated evolution results from the network architecture, which provides fast evolution, control of evolvability, and pattern robustness. To describe the effect of exponential acceleration analytically, we use mathematical methods developed recently for hard combinatorial problems, in particular for the so-called k-SAT problem, together with numerical simulations. PMID:24996115
NASA Astrophysics Data System (ADS)
Trombetti, Tomaso
This thesis presents an experimental/analytical approach to modeling and calibrating shaking tables for structural dynamic applications. This approach was successfully applied to the shaking table recently built in the structural laboratory of the Civil Engineering Department at Rice University. This shaking table is capable of reproducing model earthquake ground motions with a peak acceleration of 6 g, a peak velocity of 40 inches per second, and a peak displacement of 3 inches, for a maximum payload of 1500 pounds. It has a frequency bandwidth of approximately 70 Hz and is designed to test structural specimens up to 1/5 scale. The rail/table system is mounted on a reaction mass of about 70,000 pounds consisting of three 12 ft x 12 ft x 1 ft reinforced concrete slabs, post-tensioned together and connected to the strong laboratory floor. The slip table is driven by a hydraulic actuator governed by an MTS 407 controller, which employs a proportional-integral-derivative-feedforward-differential pressure algorithm to control the actuator displacement. Feedback signals are provided by two LVDTs (monitoring the slip table relative displacement and the servovalve main stage spool position) and by one differential pressure transducer (monitoring the actuator force). The dynamic actuator-foundation-specimen system is modeled and analyzed by combining linear control theory and linear structural dynamics. The analytical model developed accounts for the effects of actuator oil compressibility, oil leakage in the actuator, time delay in the response of the servovalve spool to a given electrical signal, foundation flexibility, and the dynamic characteristics of multi-degree-of-freedom specimens. In order to study the actual dynamic behavior of the shaking table, the transfer function between target and actual table accelerations was identified using experimental results and spectral estimation techniques. The power spectral density of the system input and the cross power spectral density of the table input and output were estimated using Bartlett's spectral estimation method. The experimentally estimated table acceleration transfer functions obtained for different working conditions are correlated with their analytical counterparts. As a result of this comprehensive correlation study, a thorough understanding of the shaking table dynamics and its sensitivities to control and payload parameters is obtained. Moreover, the correlation study leads to a calibrated analytical model of the shaking table of high predictive ability. It is concluded that, in its present condition, the Rice shaking table is able to reproduce, with a high degree of accuracy, model earthquake acceleration time histories in the frequency bandwidth from 0 to 75 Hz. Furthermore, the exhaustive analysis performed indicates that the table transfer function is not significantly affected by the presence of a large (in terms of weight) payload with a fundamental frequency up to 20 Hz. Payloads having a higher fundamental frequency do affect the shaking table performance significantly and require a modification of the table control gain setting, which can easily be obtained using the predictive analytical model of the shaking table. The complete description of a structural dynamic experiment performed using the Rice shaking table facility is also reported herein.
The objective of this experiment was twofold: (1) to verify the testing capability of the shaking table and (2) to experimentally validate a simplified theory developed by the author, which predicts the maximum rotational response developed by seismically isolated building structures characterized by non-coincident centers of mass and rigidity when subjected to strong earthquake ground motions.
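The transfer-function identification step described above (dividing the input/output cross power spectral density by the input power spectral density, each obtained from averaged periodograms) can be sketched in a few lines. The second-order "table dynamics" below is a synthetic stand-in for the real actuator-table system, and scipy's Welch-type averaging is used in place of a strict Bartlett estimate.

```python
import numpy as np
from scipy.signal import welch, csd, lsim, TransferFunction

fs = 500.0
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(4)
target = rng.standard_normal(t.size)       # broadband target acceleration

# Stand-in "table dynamics": a second-order system resonant near 70 Hz.
wn = 2 * np.pi * 70
sys = TransferFunction([wn**2], [1, 2 * 0.4 * wn, wn**2])
_, actual, _ = lsim(sys, target, t)

# H(f) = S_xy(f) / S_xx(f): cross-spectrum over input auto-spectrum,
# both estimated from averaged periodograms.
f, Sxx = welch(target, fs=fs, nperseg=1024)
_, Sxy = csd(target, actual, fs=fs, nperseg=1024)
H = Sxy / Sxx
print(f"|H| peaks at ~{f[np.argmax(np.abs(H))]:.0f} Hz, value {np.abs(H).max():.2f}")
```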
Yan, Guanyong; Wang, Xiangzhao; Li, Sikun; Yang, Jishuo; Xu, Dongbo; Erdmann, Andreas
2014-03-10
We propose an in situ aberration measurement technique based on an analytical linear model of through-focus aerial images. The aberrations are retrieved from aerial images of six isolated space patterns, which have the same width but different orientations. The imaging formulas of the space patterns are investigated and simplified, and then an analytical linear relationship between the aerial image intensity distributions and the Zernike coefficients is established. The linear relationship is composed of linear fitting matrices and rotation matrices, which can be calculated numerically in advance and utilized to retrieve Zernike coefficients. Numerical simulations using the lithography simulators PROLITH and Dr.LiTHO demonstrate that the proposed method can measure wavefront aberrations up to Z37. Experiments on a real lithography tool confirm that our method can monitor lens aberration offset with an accuracy of 0.7 nm.
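The essence of such a linear retrieval is that, once a sensitivity matrix mapping Zernike coefficients to intensity perturbations has been precomputed, each measurement reduces to a least-squares solve. The sketch below uses a random stand-in sensitivity matrix and toy dimensions; it is not derived from the paper's imaging formulas.

```python
import numpy as np

rng = np.random.default_rng(5)
n_pix, n_zern = 600, 33        # intensity samples, Zernike terms (toy sizes)

# Precomputed sensitivity matrix: column j = derivative of the aerial-image
# intensity with respect to Zernike coefficient j (random stand-ins here).
S = rng.standard_normal((n_pix, n_zern))
I0 = rng.standard_normal(n_pix)               # aberration-free image

z_true = 1e-3 * rng.standard_normal(n_zern)   # coefficients [waves]
image = I0 + S @ z_true + 1e-5 * rng.standard_normal(n_pix)   # "measured"

# Linear retrieval: fit dI = S z in the least-squares sense.
z_est, *_ = np.linalg.lstsq(S, image - I0, rcond=None)
print("max retrieval error:", np.abs(z_est - z_true).max())
```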
A GPU-accelerated and Monte Carlo-based intensity modulated proton therapy optimization system.
Ma, Jiasen; Beltran, Chris; Seum Wan Chan Tseung, Hok; Herman, Michael G
2014-12-01
Conventional spot scanning intensity modulated proton therapy (IMPT) treatment planning systems (TPSs) optimize proton spot weights based on analytical dose calculations. These analytical dose calculations have been shown to have severe limitations in heterogeneous materials. Monte Carlo (MC) methods do not have these limitations; however, MC-based systems have been of limited clinical use due to the large number of beam spots in IMPT and the extremely long calculation time of traditional MC techniques. In this work, the authors present a clinically applicable IMPT TPS that utilizes a very fast MC calculation. An in-house graphics processing unit (GPU)-based MC dose calculation engine was employed to generate the dose influence map for each proton spot. With the MC generated influence map, a modified least-squares optimization method was used to achieve the desired dose volume histograms (DVHs). The intrinsic CT image resolution was adopted for voxelization in simulation and optimization to preserve spatial resolution. The optimizations were computed on a multi-GPU framework to mitigate the memory limitation issues for the large dose influence maps that resulted from maintaining the intrinsic CT resolution. The effects of tail cutoff and starting condition were studied and minimized in this work. For relatively large and complex three-field head and neck cases, i.e., >100,000 spots with a target volume of ∼1000 cm³ and multiple surrounding critical structures, the optimization together with the initial MC dose influence map calculation was done in a clinically viable time frame (less than 30 min) on a GPU cluster consisting of 24 Nvidia GeForce GTX Titan cards. The in-house MC TPS plans were comparable to commercial TPS plans based on DVH comparisons. A MC-based treatment planning system was developed. The treatment planning can be performed in a clinically viable time frame on a hardware system costing around 45,000 dollars. The fast calculation and optimization make the system easily expandable to robust and multicriteria optimization.
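The optimization stage pairs a precomputed dose influence matrix (dose per voxel per unit spot weight) with a least-squares objective and a nonnegativity constraint on the weights. A minimal projected-gradient sketch on random sparse influence data is shown below; the authors' modified least-squares method, DVH-based objectives, and GPU implementation are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(6)
n_vox, n_spots = 2000, 300
# Sparse random influence matrix: D[i, j] = dose to voxel i per unit
# weight of spot j (toy stand-in for the MC-generated influence map).
D = rng.random((n_vox, n_spots)) * (rng.random((n_vox, n_spots)) < 0.05)
target = np.full(n_vox, 2.0)             # prescribed dose per voxel [Gy]

w = np.ones(n_spots)                     # spot weights, constrained >= 0
step = 1.0 / np.linalg.norm(D, 2) ** 2   # safe gradient step size
for _ in range(500):
    grad = D.T @ (D @ w - target)        # gradient of 0.5 * ||D w - t||^2
    w = np.maximum(w - step * grad, 0.0) # project onto the constraint w >= 0
print("mean |dose error| per voxel:", np.abs(D @ w - target).mean())
```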
Interactions between creep, fatigue and strain aging in two refractory alloys
NASA Technical Reports Server (NTRS)
Sheffler, K. D.
1972-01-01
The application of low-amplitude, high-frequency fatigue vibrations during creep testing of two strain-aging refractory alloys (molybdenum-base TZC and tantalum-base T-111) significantly reduced the creep strength of these materials. This strength reduction caused dramatic increases in both the first stage creep strain and the second stage creep rate. The magnitude of the creep rate acceleration varied directly with both frequency and A ratio (ratio of alternating to mean stress), and also varied with temperature, being greatest in the range where the strain-aging phenomenon was most prominent. It was concluded that the creep rate acceleration resulted from a negative strain rate sensitivity which is associated with the strain aging phenomenon in these materials. (A negative rate sensitivity causes flow stress to decrease with increasing strain rate, instead of increasing as in normal materials). By combining two analytical expressions which are normally used to describe creep and strain aging behavior, an expression was developed which correctly described the influence of temperature, frequency, and A ratio on the TZC creep rate acceleration.
Studying Upper-Limb Kinematics Using Inertial Sensors Embedded in Mobile Phones.
Roldan-Jimenez, Cristina; Cuesta-Vargas, Antonio; Bennett, Paul
2015-05-20
In recent years, there has been great interest in analyzing upper-limb kinematics. Inertial measurement with mobile phones is a convenient and portable analysis method for studying humerus kinematics in terms of angular mobility and linear acceleration. The aim of this analysis was to study upper-limb kinematics via mobile phones through six physical properties that correspond to angular mobility and acceleration in the three axes of space. This cross-sectional study recruited healthy young adult subjects. Humerus kinematics was studied in 10 young adults with an iPhone 4. They performed flexion and abduction analytical tasks. Mobility angle and linear acceleration in each of the axes (yaw, pitch, and roll) were obtained with the iPhone 4. This device was placed on the right half of the body of each subject, in the middle third of the humerus, slightly posterior. Descriptive statistics were calculated. Descriptive graphics of the analytical tasks performed were obtained. The biggest range of motion was found in the pitch angle, and the biggest acceleration was found in the y-axis in both analytical tasks. Focusing on tridimensional kinematics, a bigger range of motion and acceleration were found in abduction (209.69 degrees and 23.31 degrees per second respectively). Also, very strong correlation was found between angular mobility and linear acceleration in abduction (r=.845) and flexion (r=.860). The use of an iPhone for humerus tridimensional kinematics is feasible. This supports the use of the mobile phone as a device to analyze upper-limb kinematics and to facilitate the evaluation of the patient. ©Cristina Roldan-Jimenez, Antonio Cuesta-Vargas, Paul Bennett. Originally published in JMIR Rehabilitation and Assistive Technology (http://rehab.jmir.org), 20.05.2015.
Beam manipulation with velocity bunching for PWFA applications
NASA Astrophysics Data System (ADS)
Pompili, R.; Anania, M. P.; Bellaveglia, M.; Biagioni, A.; Bisesto, F.; Chiadroni, E.; Cianchi, A.; Croia, M.; Curcio, A.; Di Giovenale, D.; Ferrario, M.; Filippi, F.; Galletti, M.; Gallo, A.; Giribono, A.; Li, W.; Marocchino, A.; Mostacci, A.; Petrarca, M.; Petrillo, V.; Di Pirro, G.; Romeo, S.; Rossi, A. R.; Scifo, J.; Shpakov, V.; Vaccarezza, C.; Villa, F.; Zhu, J.
2016-09-01
The activity of the SPARC_LAB test facility (LNF-INFN, Frascati) is currently focused on the development of new plasma-based accelerators. Particle accelerators are used in many fields of science, with applications ranging from particle physics research to advanced radiation sources (e.g. FELs). The demand to accelerate particles to ever higher energies is currently limited by the effective efficiency of the acceleration process, which requires the development of km-size facilities. By increasing the accelerating gradient, the compactness can be improved and costs reduced. Recently, the technique attracting the greatest effort is plasma acceleration. In the following, the current status of plasma-based activities at SPARC_LAB is presented. Both laser- and beam-driven schemes will be adopted with the aim of providing an adequate accelerating gradient (1-10 GV/m) while preserving the brightness of the accelerated beams at the level of conventional photo-injectors. This aspect, in particular, requires the use of ultra-short (< 100 fs) electron beams, consisting of one or more bunches. We show, with the support of simulations and experimental results, that such beams can be produced using RF compression by velocity bunching.
Pang, Susan; Cowen, Simon
2017-12-13
We describe a novel generic method to derive the unknown endogenous concentration of an analyte within complex biological matrices (e.g. serum or plasma), based upon the relationship between the immunoassay signal response of a biological test sample spiked with known analyte concentrations and the log-transformed estimated total concentration. If the estimated total analyte concentration is correct, a portion of the sigmoid on a log-log plot is very close to linear, allowing the unknown endogenous concentration to be estimated using a numerical method. This approach obviates conventional relative quantification using an internal standard curve and the need for calibrant diluent, and takes into account the individual matrix interference on the immunoassay by spiking the test sample itself. The technique is based on standard additions for chemical analytes. Unknown endogenous analyte concentrations within even 2-fold diluted human plasma may be determined reliably using as few as four reaction wells.
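A numerical search of the kind described (choose the candidate endogenous concentration that makes the log signal versus log total-concentration plot most linear) can be sketched as follows. The toy response below is a power law, so its log-log plot is exactly linear when the candidate is correct; a real immunoassay is sigmoidal and only approximately linear over a portion of the curve, which is the paper's working assumption.

```python
import numpy as np

# Toy assay response with a log-log-linear working range (a real assay
# would be a 4PL sigmoid; this simplification keeps the demo exact).
def signal(conc):
    return 0.2 * conc ** 0.8

c_endog = 3.0                               # unknown we try to recover
spikes = np.array([1.0, 3.0, 10.0, 30.0])   # four spiked reaction wells
y = signal(spikes + c_endog)                # measured responses

def linearity_residual(c0):
    """Residual of a straight-line fit on the log-log plot, given a
    candidate endogenous concentration c0."""
    x = np.log(spikes + c0)
    A = np.vstack([x, np.ones_like(x)]).T
    res = np.linalg.lstsq(A, np.log(y), rcond=None)[1]
    return res[0] if res.size else 0.0

# Scan candidates; the correct endogenous level makes the plot most linear.
cands = np.linspace(0.1, 10, 400)
best = cands[np.argmin([linearity_residual(c) for c in cands])]
print(f"estimated endogenous concentration: {best:.2f} (true {c_endog})")
```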
NASA Astrophysics Data System (ADS)
Gabrielse, C.; Angelopoulos, V.; Artemyev, A.; Runov, A.; Harris, C.
2016-12-01
We study energetic electron injections using an analytical model that self-consistently describes electric and magnetic field perturbations of transient, localized dipolarizing flux bundles (DFBs). Previous studies using THEMIS, the Van Allen Probes, and the Magnetospheric Multiscale mission have shown that injections can occur on short (minutes) or long (tens of minutes) timescales. These studies suggest that the short timescale injections correspond to a single DFB, whereas long timescale injections are likely caused by an aggregate of multiple DFBs, each incrementally heating the particle population. We therefore model the effects of multiple DFBs on the electron population using multi-spacecraft observations of the fields and particle fluxes to constrain the model parameters. The analytical model is the first of its kind to model multiple dipolarization fronts in order to better understand the transport and acceleration process throughout the plasma sheet. It can reproduce most injection signatures at multiple locations simultaneously, reaffirming earlier findings that multiple earthward-traveling DFBs can both transport and accelerate electrons to suprathermal energies, and can thus be considered the injections' primary driver.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dowson, Scott T.; Bruce, Joseph R.; Best, Daniel M.
2009-04-14
This paper presents key components of the Law Enforcement Information Framework (LEIF) that provides communications, situational awareness, and visual analytics tools in a service-oriented architecture supporting web-based desktop and handheld device users. LEIF simplifies interfaces and visualizations of well-established visual analytical techniques to improve usability. Advanced analytics capability is maintained by enhancing the underlying processing to support the new interface. LEIF development is driven by real-world user feedback gathered through deployments at three operational law enforcement organizations in the US. LEIF incorporates a robust information ingest pipeline supporting a wide variety of information formats. LEIF also insulates interface and analytical components from information sources, making it easier to adapt the framework for many different data repositories.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoogcarspel, S J; Kontaxis, C; Velden, J M van der
2014-06-01
Purpose: To develop an MR accelerator-enabled online planning-to-delivery technique for stereotactic palliative radiotherapy treatment of spinal metastases. The technical challenges include automated stereotactic treatment planning, online MR-based dose calculation, and MR guidance during treatment. Methods: Using the CT data of 20 patients previously treated at our institution, a class solution for automated treatment planning for spinal bone metastases was created. For accurate dose simulation right before treatment, we fused geometrically correct online MR data with pretreatment CT data of the target volume (TV). For target tracking during treatment, a dynamic T2-weighted TSE MR sequence was developed. An in-house developed GPU-based IMRT optimization and dose calculation algorithm was used for fast treatment planning and simulation. An automatically generated treatment plan developed with this treatment planning system was irradiated on a clinical 6 MV linear accelerator and evaluated using a Delta4 dosimeter. Results: The automated treatment planning method yielded clinically viable plans for all patients. The MR-CT fusion based dose calculation accuracy was within 2% as compared to calculations performed with original CT data. The dynamic T2-weighted TSE MR sequence was able to provide an update of the anatomical location of the TV every 10 seconds. Dose calculation and optimization of the automatically generated treatment plans using only one GPU took on average 8 minutes. The Delta4 measurement of the irradiated plan agreed with the dose calculation with a 3%/3mm gamma pass rate of 86.4%. Conclusions: The development of an MR accelerator-enabled planning-to-delivery technique for stereotactic palliative radiotherapy treatment of spinal metastases was presented. Future work will involve developing an intrafraction motion adaptation strategy, MR-only dose calculation, radiotherapy quality assurance in a magnetic field, and streamlining the entire treatment process on an MR accelerator.
Sci—Fri PM: Topics — 05: Experience with linac simulation software in a teaching environment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carlone, Marco; Harnett, Nicole; Jaffray, David
Medical linear accelerator education is usually restricted to the use of academic textbooks and supervised access to accelerators. To facilitate the learning process, simulation software was developed to reproduce the effect of medical linear accelerator beam adjustments on the resulting clinical photon beams. The purpose of this report is to briefly describe the method of operation of the software as well as the initial experience with it in a teaching environment. To first and higher orders, all components of medical linear accelerators can be described by analytical solutions. When appropriate calibrations are applied, these analytical solutions can accurately simulate the performance of all linear accelerator sub-components. Grouped together, an overall medical linear accelerator model can be constructed. Fifteen expressions in total were coded using MATLAB v7.14. The program was called SIMAC. The SIMAC program was used in an accelerator technology course offered at our institution; 14 delegates attended the course. The professional breakdown of the participants was: 5 physics residents, 3 accelerator technologists, 4 regulators and 1 physics associate. The course consisted of didactic lectures supported by labs using SIMAC. At the conclusion of the course, eight of thirteen delegates were able to successfully perform advanced beam adjustments after two days of theory and use of the linac simulator program. We suggest that this demonstrates a good understanding of the accelerator physics, which we hope will translate to a better ability to understand real-world beam adjustments on a functioning medical linear accelerator.
Gómez-Caravaca, Ana M; Maggio, Rubén M; Cerretani, Lorenzo
2016-03-24
Today, virgin and extra-virgin olive oils (VOO and EVOO) are foods subject to a large number of analytical tests intended to ensure their quality and genuineness. Almost all official methods demand large amounts of reagents and manpower. Because of that, analytical development in this area is continuously evolving. Therefore, this review focuses on analytical methods for EVOO/VOO that use fast and smart approaches based on chemometric techniques in order to reduce time of analysis, reagent consumption, costly equipment and manpower. Experimental approaches coupling chemometrics with fast analytical techniques such as UV-Vis spectroscopy, fluorescence, vibrational spectroscopies (NIR, MIR and Raman), NMR spectroscopy, and other more complex techniques like chromatography, calorimetry and electrochemical techniques, applied to EVOO/VOO production and analysis, are discussed throughout this work. The advantages and drawbacks of this association have also been highlighted. Chemometrics has been shown to be a powerful tool for the oil industry. In fact, it has been shown how chemometrics can be implemented along the different steps of EVOO/VOO production: raw material input control, monitoring during processing, and quality control of the final product. Copyright © 2016 Elsevier B.V. All rights reserved.
Schwantes, Jon M.; Marsden, Oliva; Pellegrini, Kristi L.
2016-09-16
The Nuclear Forensics International Technical Working Group (ITWG) recently completed its fourth Collaborative Materials Exercise (CMX-4) in the 21 year history of the Group. This was also the largest materials exercise to date, with participating laboratories from 16 countries or international organizations. Moreover, exercise samples (including three separate samples of low enriched uranium oxide) were shipped as part of an illicit trafficking scenario, for which each laboratory was asked to conduct nuclear forensic analyses in support of a fictitious criminal investigation. In all, over 30 analytical techniques were applied to characterize exercise materials, of which ten were applied to ITWG exercises for the first time. An objective review of the state of practice and emerging applications of analytical techniques for nuclear forensic analysis, based upon the outcome of this most recent exercise, is provided.
Neutron H*(10) estimation and measurements around 18MV linac.
Cerón Ramírez, Pablo Víctor; Díaz Góngora, José Antonio Irán; Paredes Gutiérrez, Lydia Concepción; Rivera Montalvo, Teodoro; Vega Carrillo, Héctor René
2016-11-01
Thermoluminescent dosimetry, analytical techniques and Monte Carlo calculations were used to estimate the neutron radiation dose in a treatment room with an 18 MV linear electron accelerator. Measurements were carried out with neutron ambient dose monitors consisting of pairs of thermoluminescent dosimeters TLD 600 (6LiF:Mg,Ti) and TLD 700 (7LiF:Mg,Ti) placed inside paraffin spheres. The measurements allowed the use of the NCRP 151 equations, which are useful for deriving the relevant dosimetric quantities. In addition, the photoneutrons produced by the linac head were calculated with the MCNPX code, taking into account the geometry and composition of the principal parts of the linac head. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Agrawal, Ankit; Choudhary, Alok
2016-05-01
Our ability to collect "big data" has greatly surpassed our capability to analyze it, underscoring the emergence of the fourth paradigm of science, which is data-driven discovery. The need for data informatics is also emphasized by the Materials Genome Initiative (MGI), further boosting the emerging field of materials informatics. In this article, we look at how data-driven techniques are playing a big role in deciphering processing-structure-property-performance relationships in materials, with illustrative examples of both forward models (property prediction) and inverse models (materials discovery). Such analytics can significantly reduce time-to-insight and accelerate cost-effective materials discovery, which is the goal of MGI.
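As a concrete illustration of a forward model, the sketch below fits a regression model mapping synthetic composition/processing descriptors to a made-up property; an inverse model would then search such a surrogate for promising candidates. The dataset, features, and target are all invented for illustration, not drawn from MGI resources.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
# Synthetic dataset: descriptors (e.g., composition fractions, processing
# temperature) -> property (e.g., yield strength); relationship is made up.
X = rng.random((500, 4))
y = (300 + 200 * X[:, 0] - 80 * X[:, 1] ** 2
     + 50 * X[:, 2] * X[:, 3] + rng.normal(0, 5, 500))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"held-out R^2: {model.score(X_te, y_te):.2f}")
# An "inverse model" would search this forward model for descriptor values
# that maximize the predicted property (materials discovery).
```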
NASA Astrophysics Data System (ADS)
Lu, Qianbo; Bai, Jian; Wang, Kaiwei; Lou, Shuqi; Jiao, Xufen; Han, Dandan; Yang, Guoguang
2016-08-01
The ultrahigh static displacement-acceleration sensitivity of a mechanical sensing chip is essential for an ultrasensitive accelerometer. In this paper, an optimal design applied to a single-axis MOEMS accelerometer consisting of a grating interferometry cavity and a micromachined sensing chip is presented. The micromachined sensing chip is composed of a proof mass along with its mechanical cantilever suspension and substrate. The dimensional parameters of the sensing chip, including the length, width, thickness and position of the cantilevers, are evaluated and optimized both analytically and by finite-element-method (FEM) simulation to yield an unprecedented acceleration-displacement sensitivity. Compared with one of the most sensitive single-axis MOEMS accelerometers reported in the literature, the optimal mechanical design yields a profound sensitivity improvement with an equal footprint area, specifically a 200% improvement in displacement-acceleration sensitivity with moderate resonant frequency and dynamic range. The modified design was microfabricated, packaged with the grating interferometry cavity, and tested. The experimental results demonstrate that the MOEMS accelerometer with the modified design can achieve an acceleration-displacement sensitivity of about 150 μm/g and an acceleration sensitivity of greater than 1500 V/g, which validates the effectiveness of the optimal design.
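The figure of merit here follows directly from the spring-mass behavior of the proof mass: the static displacement per unit acceleration is x/a = m/k = 1/ω₀², so sensitivity trades quadratically against resonant frequency. A quick numerical check (the device values below are assumed, not taken from the paper):

```python
import numpy as np

def disp_sensitivity_um_per_g(f0_hz):
    """Static displacement per g for a spring-mass accelerometer:
    x/a = 1/omega0^2, expressed in micrometers per g."""
    w0 = 2 * np.pi * f0_hz
    return 9.81 / w0**2 * 1e6

for f0 in (40, 100, 500):
    print(f"f0 = {f0:4d} Hz -> {disp_sensitivity_um_per_g(f0):8.2f} um/g")
# A ~40 Hz device gives ~150 um/g, the scale reported above; raising f0
# to 500 Hz cuts the sensitivity by a factor of (500/40)^2 ~ 156.
```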
Cantilever Beam Natural Frequencies in Centrifugal Inertia Field
NASA Astrophysics Data System (ADS)
Jivkov, V. S.; Zahariev, E. V.
2018-03-01
It is well known in advanced mechanics that gravity influences the natural frequencies and mode shapes even of vertical structures and pillars. The condition that must be fulfilled for gravity to be taken into account involves the ratio between the gravity load and the geometric cross-section inertia. Gravity is tied to the earth's acceleration, but for moving structures many other acceleration-induced forces exist, such as those caused by centrifugal accelerations. Large rotating structures, such as wind power generators, helicopter blades, large antennas and radars, unfolding space structures and many others, are examples. Acceleration-induced forces are therefore expected to influence the modal and frequency properties of the structure, which is the subject of the present investigation. In this paper, rotating beams are investigated and modal and frequency analysis is carried out. Analytical expressions for the natural frequencies are derived, together with their dependence on the angular velocity and the centrifugal accelerations. Several examples of large rotating beams with different orientations of the rotating shaft are presented. Numerical experiments are conducted. Time histories of the beam tip deflections, which depict the beam oscillations, are presented.
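The qualitative trend (centrifugal stiffening raising the natural frequency with rotation speed) is often summarized by Southwell's approximation, f² ≈ f₀² + κ(Ω/2π)², with a mode- and geometry-dependent coefficient κ. The sketch below is generic and not fitted to the paper's beams; both f₀ and κ are assumed illustrative values.

```python
import numpy as np

def rotating_beam_freq(f0_hz, omega_rad_s, southwell=1.2):
    """Southwell approximation for the first flapwise frequency of a
    rotating cantilever: f^2 ~ f0^2 + k * (Omega / 2 pi)^2. The Southwell
    coefficient k depends on mode and geometry; 1.2 is illustrative."""
    return np.sqrt(f0_hz**2 + southwell * (omega_rad_s / (2 * np.pi)) ** 2)

f0 = 0.8   # non-rotating first frequency [Hz] (large-rotor scale, assumed)
for rpm in (0, 10, 20, 30):
    om = rpm * 2 * np.pi / 60
    print(f"{rpm:2d} rpm -> f1 = {rotating_beam_freq(f0, om):.2f} Hz")
```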
Vibration-Based Method Developed to Detect Cracks in Rotors During Acceleration Through Resonance
NASA Technical Reports Server (NTRS)
Sawicki, Jerzy T.; Baaklini, George Y.; Gyekenyesi, Andrew L.
2004-01-01
In recent years, there has been an increasing interest in developing rotating machinery shaft crack-detection methodologies and online techniques. Shaft crack problems present a significant safety and loss hazard in nearly every application of modern turbomachinery. In many cases, the rotors of modern machines are rapidly accelerated from rest to operating speed, to reduce the excessive vibrations at the critical speeds. The vibration monitoring during startup or shutdown has been receiving growing attention (ref. 1), especially for machines such as aircraft engines, which are subjected to frequent starts and stops, as well as high speeds and acceleration rates. It has been recognized that the presence of angular acceleration strongly affects the rotor's maximum response to unbalance and the speed at which it occurs. Unfortunately, conventional nondestructive evaluation (NDE) methods have unacceptable limits in terms of their application for online crack detection. Some of these techniques are time consuming and inconvenient for turbomachinery service testing. Almost all of these techniques require that the vicinity of the damage be known in advance, and they can provide only local information, with no indication of the structural strength at a component or system level. In addition, the effectiveness of these experimental techniques is affected by the high measurement noise levels existing in complex turbomachine structures. Therefore, the use of vibration monitoring along with vibration analysis has been receiving increasing attention.
A Survey of Shape Parameterization Techniques
NASA Technical Reports Server (NTRS)
Samareh, Jamshid A.
1999-01-01
This paper provides a survey of shape parameterization techniques for multidisciplinary optimization and highlights some emerging ideas. The survey focuses on the suitability of available techniques for complex configurations, with suitability criteria based on the efficiency, effectiveness, ease of implementation, and availability of analytical sensitivities for geometry and grids. The paper also contains a section on field grid regeneration, grid deformation, and sensitivity analysis techniques.
NASA Astrophysics Data System (ADS)
Aono, T.; Kazama, A.; Okada, R.; Iwasaki, T.; Isono, Y.
2018-03-01
We developed a eutectic-based wafer-level-packaging (WLP) technique for piezoresistive micro-electromechanical systems (MEMS) accelerometers on the basis of molecular dynamics analyses and shear tests of WLP accelerometers. The bonding conditions were experimentally and analytically determined to realize a high shear strength without solder material atoms diffusing into the adhesion layers. Molecular dynamics (MD) simulations and energy dispersive x-ray (EDX) spectrometry performed after the shear tests clarified the eutectic reaction of the solder materials used in this research. Energy relaxation calculations in MD showed that the diffusion of solder material atoms into the adhesive layer was promoted at higher temperature. Tensile creep MD simulations also suggested that the local potential energy in a solder material model determined the fracture points of the model. These numerical results were supported by the shear tests and EDX analyses of the WLP accelerometers. Consequently, a bonding load of 9.8 kN and a temperature of 300 °C were found to be rational conditions, because the shear strength was sufficient to endure the polishing step after the WLP process and there was little diffusion of solder material atoms into the adhesion layer. Also, eutectic-bonding-based WLP was effective for controlling the attenuation of the accelerometers by setting the thickness of the electroplated solder, which defined the cavity gap between the accelerometers and the lids. If the gap distance between the two was less than 6.2 µm, the signal gains for x- and z-axis acceleration were less than 20 dB even at the resonance frequency, due to air damping.
Dictionary learning and time sparsity in dynamic MRI.
Caballero, Jose; Rueckert, Daniel; Hajnal, Joseph V
2012-01-01
Sparse representation methods have been shown to tackle adequately the inherent speed limits of magnetic resonance imaging (MRI) acquisition. Recently, learning-based techniques have been used to further accelerate the acquisition of 2D MRI. The extension of such algorithms to dynamic MRI (dMRI) requires careful examination of the signal sparsity distribution among the different dimensions of the data. Notably, the potential of temporal gradient (TG) sparsity in dMRI has not yet been explored. In this paper, a novel method for the acceleration of cardiac dMRI is presented which investigates the potential benefits of enforcing sparsity constraints on patch-based learned dictionaries and TG at the same time. We show that an algorithm exploiting sparsity on these two domains can outperform previous sparse reconstruction techniques.
Classifying Correlation Matrices into Relatively Homogeneous Subgroups: A Cluster Analytic Approach
ERIC Educational Resources Information Center
Cheung, Mike W.-L.; Chan, Wai
2005-01-01
Researchers are becoming interested in combining meta-analytic techniques and structural equation modeling to test theoretical models from a pool of studies. Most existing procedures are based on the assumption that all correlation matrices are homogeneous. Few studies have addressed what the next step should be when studies being analyzed are…
ERIC Educational Resources Information Center
Gao, Ruomei
2015-01-01
In a typical chemistry instrumentation laboratory, students learn analytical techniques through a well-developed procedure. Such an approach, however, does not engage students in a creative endeavor. To foster the intrinsic motivation of students' desire to learn, improve their confidence in self-directed learning activities and enhance their…
Zhao, Ding; Lam, Henry; Peng, Huei; Bao, Shan; LeBlanc, David J.; Nobukawa, Kazutoshi; Pan, Christopher S.
2016-01-01
Automated vehicles (AVs) must be thoroughly evaluated before their release and deployment. A widely used evaluation approach is the Naturalistic-Field Operational Test (N-FOT), which tests prototype vehicles directly on the public roads. Due to the low exposure to safety-critical scenarios, N-FOTs are time consuming and expensive to conduct. In this paper, we propose an accelerated evaluation approach for AVs. The results can be used to generate motions of the other primary vehicles to accelerate the verification of AVs in simulations and controlled experiments. Frontal collision due to unsafe cut-ins is the target crash type of this paper. Human-controlled vehicles making unsafe lane changes are modeled as the primary disturbance to AVs based on data collected by the University of Michigan Safety Pilot Model Deployment Program. The cut-in scenarios are generated based on skewed statistics of collected human driver behaviors, which generate risky testing scenarios while preserving the statistical information so that the safety benefits of AVs in nonaccelerated cases can be accurately estimated. The cross-entropy method is used to recursively search for the optimal skewing parameters. The frequencies of the occurrences of conflicts, crashes, and injuries are estimated for a modeled AV, and the achieved acceleration rate is around 2,000 to 20,000. In other words, in the accelerated simulations, driving for 1000 miles will expose the AV to challenging scenarios that would take about 2 to 20 million miles of real-world driving to encounter. This technique thus has the potential to greatly reduce the development and validation time for AVs. PMID:27840592
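The cross-entropy step (iteratively tilting the sampling distribution toward the rare, safety-critical region, then correcting estimates with likelihood-ratio weights) can be sketched on a one-dimensional toy problem. The Gaussian "criticality" statistic and the threshold below are invented for illustration; the paper applies the same machinery to empirical lane-change models.

```python
import numpy as np

rng = np.random.default_rng(8)
# Rare event: a scalar criticality statistic X ~ N(0, 1) exceeding 4
# (a stand-in for, e.g., an extremely aggressive cut-in). P ~ 3.2e-5.
threshold = 4.0

# Cross-entropy search for a good importance-sampling mean.
mu, n, rho = 0.0, 2000, 0.1
for _ in range(10):
    x = rng.normal(mu, 1.0, n)
    elite = np.quantile(x, 1 - rho)      # best (1 - rho) tail this round
    level = min(elite, threshold)
    mu = x[x >= level].mean()            # CE update of the tilt parameter
    if level >= threshold:
        break

# Importance-sampling estimate with likelihood-ratio weights.
x = rng.normal(mu, 1.0, 100_000)
w = np.exp(-mu * x + 0.5 * mu**2)        # N(0,1) / N(mu,1) density ratio
p_hat = np.mean(w * (x >= threshold))
print(f"tilted mean {mu:.2f}, estimated P = {p_hat:.2e} (exact ~3.17e-5)")
```

Sampling from the tilted distribution makes the rare event routine, and the weights undo the skew so the probability estimate remains unbiased, which is exactly why the accelerated tests preserve the statistical information needed for non-accelerated safety estimates.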
Ionic liquids in solid-phase microextraction: a review.
Ho, Tien D; Canestraro, Anthony J; Anderson, Jared L
2011-06-10
Solid-phase microextraction (SPME) has undergone a surge in popularity within the field of analytical chemistry in the two decades since its introduction. Owing to the nature of its extraction mechanism, SPME has become widely known as a quick and cost-effective sample preparation technique. Although SPME has demonstrated extraordinary versatility in sampling capabilities, the technique continues to experience tremendous growth in innovation. Presently, increasing efforts have been directed towards the engineering of novel sorbent materials in order to expand the applicability of SPME to a wider range of analytes and matrices. This review highlights the application of ionic liquids (ILs) and polymeric ionic liquids (PILs) as innovative sorbent materials for SPME. Characterized by their unique physico-chemical properties, these compounds can be structurally designed to selectively extract target analytes based on unique molecular interactions. To examine the advantages of IL- and PIL-based sorbent coatings in SPME, the field is reviewed by gathering available experimental data and exploring the sensitivity, linear calibration range, and detection limits for a variety of target analytes in the methods that have been developed. Copyright © 2011 Elsevier B.V. All rights reserved.
Overview of graduate training program of John Adams Institute for Accelerator Science
NASA Astrophysics Data System (ADS)
Seryi, Andrei
The John Adams Institute for Accelerator Science is a center of excellence in the UK for advanced and novel accelerator technology, providing expertise, research, development and training in accelerator techniques, and promoting advanced accelerator applications in science and society. At JAI we work on the design of novel light sources, on upgrades of third-generation sources and novel FELs, on plasma acceleration and its application to industrial and medical fields, on novel compact energy-recovery linacs and advanced beam diagnostics, and on many other projects. The JAI is based at three universities: the University of Oxford, Imperial College London and Royal Holloway, University of London. Every year, 6 to 10 accelerator science experts, trained via research on cutting-edge projects, defend their PhD theses at JAI partner universities. In this presentation we will overview the research and, in particular, the highly successful graduate training program of JAI.
NASA Astrophysics Data System (ADS)
Ovsyannikov, A. D.; Kozynchenko, S. A.; Kozynchenko, V. A.
2017-12-01
When developing a particle accelerator for generating high-precision beams, the design of the injection system is of particular importance, because it largely determines the output characteristics of the beam. In the present paper we consider injection systems consisting of electrodes with given potentials. The design of such systems requires simulation of the beam dynamics in the electrostatic fields. For external field simulation we use a new approach, proposed by A.D. Ovsyannikov, which is based on analytical approximations, or the finite difference method, taking into account the real geometry of the injection system. Software for beam dynamics simulation and optimization in injection systems for non-relativistic beams has been developed. Beam dynamics and electric field simulations of the injection system, using both the analytical approach and the finite difference method, have been carried out, and the results are presented in this paper.
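For the finite-difference branch of the field simulation, the electrostatic potential satisfies Laplace's equation with the electrode surfaces as fixed-potential boundary conditions. A toy 2D Jacobi-iteration sketch is given below; the geometry, grid size, and voltages are invented and do not represent the described injection system.

```python
import numpy as np

# 2D Laplace solver by Jacobi iteration: a grounded box containing one
# biased electrode strip, a toy stand-in for an injection-system geometry.
n = 41
V = np.zeros((n, n))
fixed = np.zeros((n, n), dtype=bool)
fixed[0, :] = fixed[-1, :] = fixed[:, 0] = fixed[:, -1] = True  # walls, 0 V
V[15:26, 10] = 10_000.0                                         # electrode
fixed[15:26, 10] = True

for it in range(10_000):
    V_new = V.copy()
    # Each interior point becomes the average of its four neighbors.
    V_new[1:-1, 1:-1] = 0.25 * (V[2:, 1:-1] + V[:-2, 1:-1]
                                + V[1:-1, 2:] + V[1:-1, :-2])
    V_new[fixed] = V[fixed]                  # re-impose boundary conditions
    if np.max(np.abs(V_new - V)) < 1e-2:     # convergence tolerance [V]
        V = V_new
        break
    V = V_new

Ey, Ex = np.gradient(-V)   # field components, ready for particle tracking
print(f"stopped after {it + 1} sweeps; |E|max = {np.hypot(Ex, Ey).max():.1f} V/cell")
```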
Linearization of the longitudinal phase space without higher harmonic field
NASA Astrophysics Data System (ADS)
Zeitler, Benno; Floettmann, Klaus; Grüner, Florian
2015-12-01
Accelerator applications like free-electron lasers, time-resolved electron diffraction, and advanced accelerator concepts like plasma acceleration desire bunches of ever shorter longitudinal extent. However, apart from space charge repulsion, the internal bunch structure and its development along the beam line can limit the achievable compression due to nonlinear phase space correlations. In order to improve such a limited longitudinal focus, a correction by properly linearizing the phase space is required. At large-scale facilities like FLASH at DESY or the European XFEL, a higher harmonic cavity is installed for this purpose. In this paper, another method is described and evaluated: expanding the beam after the electron source enables a higher-order correction of the longitudinal focus by a subsequent accelerating cavity which is operated at the same frequency as the electron gun. The elaboration of this idea presented here is based on a ballistic bunching scheme, but can be extended to bunch compression based on magnetic chicanes. The core of this article is an analytic model describing this approach, which is verified by simulations, predicting possible bunch lengths below 1 fs at low bunch charge. Minimizing the energy spread down to σE/E < 10^-5 while keeping the bunch long is another interesting possibility, which finds applications, e.g., in time-resolved transmission electron microscopy concepts.
Mitigating the Hook Effect in Lateral Flow Sandwich Immunoassays Using Real-Time Reaction Kinetics.
Rey, Elizabeth G; O'Dell, Dakota; Mehta, Saurabh; Erickson, David
2017-05-02
The quantification of analyte concentrations using lateral flow assays is a low-cost and user-friendly alternative to traditional lab-based assays. However, sandwich-type immunoassays are often limited by the high-dose hook effect, which causes falsely low results when analytes are present at very high concentrations. In this paper, we present a reaction kinetics-based technique that solves this problem, significantly increasing the dynamic range of these devices. With the use of a traditional sandwich lateral flow immunoassay, a portable imaging device, and a mobile interface, we demonstrate the technique by quantifying C-reactive protein concentrations in human serum over a large portion of the physiological range. The technique could be applied to any hook effect-limited sandwich lateral flow assay and has a high level of accuracy even in the hook effect range.
Rotational Acceleration during Head Impact Resulting from Different Judo Throwing Techniques
MURAYAMA, Haruo; HITOSUGI, Masahito; MOTOZAWA, Yasuki; OGINO, Masahiro; KOYAMA, Katsuhiro
2014-01-01
Most severe head injuries in judo are reported as acute subdural hematoma. It is thus necessary to examine the rotational acceleration of the head to clarify the mechanism of head injuries. We determined the rotational acceleration of the head when the subject is thrown by judo techniques. One Japanese male judo expert threw an anthropomorphic test device using two throwing techniques, Osoto-gari and Ouchi-gari. Rotational and translational head accelerations were measured with and without an under-mat. For Osoto-gari, peak resultant rotational acceleration ranged from 4,284.2 rad/s² to 5,525.9 rad/s² and peak resultant translational acceleration ranged from 64.3 g to 87.2 g; for Ouchi-gari, the accelerations respectively ranged from 1,708.0 rad/s² to 2,104.1 rad/s² and from 120.2 g to 149.4 g. The resultant rotational acceleration did not decrease with installation of an under-mat for either Ouchi-gari or Osoto-gari. We found that head contact with the tatami could produce the peak values of translational and rotational acceleration. In general, because the kinematics of the body strongly affects the translational and rotational accelerations of the head, both accelerations should be measured to analyze the underlying mechanism of head injury. As a primary preventive measure, throwing techniques should be restricted to participants demonstrating ability in ukemi techniques, to avoid head contact with the tatami. PMID:24477065
Babbs, Charles F
2006-06-01
Periodic z-axis acceleration (pGz) CPR involves oscillating the whole patient in the head-to-foot dimension on a mechanized table. The method can sustain blood flow and long-term survival during and after prolonged cardiac arrest in anesthetized pigs; however, the exact mechanism by which circulation of blood is created has remained unknown. The aims of this study were to explain the hemodynamic mechanism of pGz-CPR and to suggest some theoretically useful improvements. The approach was computer modeling using a hybrid analytical-numerical method, based upon Newton's second law of motion for fluid columns in the aorta and vena cavae, Ohm's law for resistive flow through vascular beds, and a 10-compartment representation of the adult human circulation. This idealized 70-kg human model is exercised to explore the effects upon systemic perfusion pressure of whole-body z-axis acceleration at frequencies ranging from 0.5 to 5 Hz. The results, in turn, suggested studies of abdominal compression at these frequencies. Blood motion induced in the great vessels by periodic z-axis acceleration causes systemic perfusion when cardiac valves are competent. Blood flow is a function of the frequency of oscillation. At 3.5 Hz, periodic acceleration using +/-0.6 G and +/-1.2 cm oscillations induces forward blood flow of 2.1 L/min and systemic perfusion pressure of 47 mmHg. A form of resonance occurs at the peak-flow frequency, at which the period of oscillation matches the round-trip transit time for reflected pulse waves in the aorta. For +/-1.0 G acceleration at 3.5 Hz, systemic perfusion pressure is 80 mmHg and forward flow is 3.8 L/min in the adult human model with longitudinal z-axis motion of only +/-2 cm. Similar results can be obtained using abdominal compression to excite resonant pressure-volume waves in the aorta. For 20 mmHg abdominal pressure pulses at 3.8 Hz, systemic perfusion pressure is 7 mmHg and forward flow is 2.8 L/min. pGz-CPR and high-frequency abdominal CPR are physically realistic means of generating artificial circulation during cardiac arrest. These techniques have fundamental mechanisms and practical features quite different from those of conventional CPR, and the potential to generate superior systemic perfusion.
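The resonance the model predicts can be illustrated with a much simpler stand-in than the paper's 10-compartment circulation: a single damped fluid column driven at the body-oscillation frequency. In the sketch below (Python; the natural frequency and damping ratio are assumed values, not fitted to the paper), the relative flow amplitude peaks when the drive frequency matches the column's natural frequency:

```python
import numpy as np

# Single-column stand-in for pGz-CPR (NOT the paper's 10-compartment model):
# a damped oscillator of natural frequency f0 driven by whole-body z-axis
# acceleration; the steady-state response peaks at resonance.
f0, zeta = 3.5, 0.3                      # assumed natural frequency (Hz), damping
w0 = 2 * np.pi * f0

for f in [0.5, 1.0, 2.0, 3.0, 3.5, 4.0, 5.0]:
    w = 2 * np.pi * f
    # Standard amplitude gain of a driven damped oscillator (per unit drive):
    gain = w0**2 / np.sqrt((w0**2 - w**2) ** 2 + (2 * zeta * w0 * w) ** 2)
    print(f"drive {f:3.1f} Hz -> relative flow amplitude {gain:4.2f}")
```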
DEPEND: A simulation-based environment for system level dependability analysis
NASA Technical Reports Server (NTRS)
Goswami, Kumar; Iyer, Ravishankar K.
1992-01-01
The design and evaluation of highly reliable computer systems is a complex issue. Designers mostly develop such systems based on prior knowledge and experience, and occasionally from analytical evaluations of simplified designs. A simulation-based environment called DEPEND, especially geared to the design and evaluation of fault-tolerant architectures, is presented. DEPEND is unique in that it exploits the properties of object-oriented programming to provide a flexible framework with which a user can rapidly model and evaluate various fault-tolerant systems. The key features of the DEPEND environment are described, and its capabilities are illustrated with a detailed analysis of a real design. In particular, DEPEND is used to simulate the Unix-based Tandem Integrity fault-tolerant system and to evaluate how well it handles near-coincident errors caused by correlated and latent faults. Issues such as memory scrubbing, re-integration policies, and workload-dependent repair times, which affect how the system handles near-coincident errors, are also evaluated. The method used by DEPEND to simulate error latency and the time-acceleration technique that provides enormous simulation speed-up are also discussed. Unlike other simulation-based dependability studies, the use of these approaches and the accuracy of the simulation model are validated by comparing the results of the simulations with measurements obtained from fault-injection experiments conducted on a production Tandem Integrity machine.
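The near-coincident-error question DEPEND addresses can be sketched with a few lines of Monte Carlo (Python; the fault rate, mission length, and scrubbing model are invented, and this is in no way DEPEND itself): how often does a second fault arrive while an earlier latent fault is still unrepaired, as a function of the memory-scrub interval?

```python
import random

# Toy Monte Carlo (not DEPEND): probability that a second fault arrives while an
# earlier latent fault is still unrepaired, for several memory-scrub intervals.
FAULT_RATE = 1e-3        # faults per hour (assumed)
MISSION = 10_000.0       # mission length in hours (assumed)

def near_coincident_prob(scrub_interval, trials=20_000):
    hits = 0
    for _ in range(trials):
        t1 = random.expovariate(FAULT_RATE)             # first (latent) fault
        if t1 > MISSION:
            continue
        t2 = t1 + random.expovariate(FAULT_RATE)        # next independent fault
        repair = (t1 // scrub_interval + 1) * scrub_interval  # next scrub pass
        if t2 < repair:
            hits += 1
    return hits / trials

for interval in (1000.0, 100.0, 10.0):
    p = near_coincident_prob(interval)
    print(f"scrub every {interval:6.1f} h -> near-coincident probability {p:.4f}")
```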
Lores, Marta; Llompart, Maria; Alvarez-Rivera, Gerardo; Guerra, Eugenia; Vila, Marlene; Celeiro, Maria; Lamas, J Pablo; Garcia-Jares, Carmen
2016-04-07
Cosmetic products placed on the market, and their ingredients, must be safe under reasonable conditions of use, in accordance with current legislation. Regulated and allowed chemical substances must therefore meet the regulatory criteria for use as ingredients in cosmetics and personal care products, and adequate analytical methodology is needed to evaluate the degree of compliance. This article reviews the most recent methods (2005-2015) used for the extraction and analytical determination of the ingredients included in the positive lists of the European Regulation on Cosmetic Products (EC 1223/2009): colorants, preservatives and UV filters. It summarizes the analytical properties of the most relevant methods along with their potential for meeting current regulatory requirements. Cosmetic legislation is frequently updated; consequently, the analytical methodology must be constantly revised and improved to meet safety requirements. The article highlights the most important advances in analytical methodology for cosmetics control, both in sample pretreatment and extraction and in the different instrumental approaches developed to meet this challenge. Cosmetics are complex samples, and most require sample pretreatment before analysis. Recent research on this aspect has tended toward green extraction and microextraction techniques. Analytical methods have generally been based on liquid chromatography with UV detection, and on gas and liquid chromatographic techniques hyphenated with single or tandem mass spectrometry; some interesting proposals based on electrophoresis have also been reported, together with some electroanalytical approaches. Regarding the number of ingredients considered for analytical control, single-analyte methods have been proposed, although the most useful in real-life cosmetic analysis are the multianalyte approaches. Copyright © 2016 Elsevier B.V. All rights reserved.
Towards real-time thermometry using simultaneous multislice MRI
NASA Astrophysics Data System (ADS)
Borman, P. T. S.; Bos, C.; de Boorder, T.; Raaymakers, B. W.; Moonen, C. T. W.; Crijns, S. P. M.
2016-09-01
MR-guided thermal therapies, such as high-intensity focused ultrasound (MRgHIFU) and laser-induced thermal therapy (MRgLITT), are increasingly being applied in oncology and neurology. MRI is used for guidance since it can measure temperature noninvasively based on the proton resonance frequency shift (PRFS). For therapy guidance using PRFS thermometry, high temporal resolution and large spatial coverage are desirable. We propose to use the parallel imaging technique simultaneous multislice (SMS) in combination with controlled aliasing (CAIPIRINHA) to accelerate the acquisition, and we compare this with the sensitivity encoding (SENSE) acceleration technique. Two experiments were performed to validate that SMS can be used to increase the spatial coverage or the temporal resolution. The first was performed in agar gel using LITT heating and a gradient-echo sequence with echo-planar imaging (EPI), and the second was performed in bovine muscle using HIFU heating and a gradient-echo sequence without EPI. In both experiments temperature curves from an unaccelerated scan and from SMS, SENSE, and SENSE/SMS accelerated scans were compared. The precision was quantified by a standard-deviation analysis of scans without heating. Both experiments showed good agreement between the temperature curves obtained from the unaccelerated and SMS-accelerated scans, confirming that accuracy was maintained during SMS acceleration. The standard deviations of the temperature measurements obtained with SMS were significantly smaller than when SENSE was used, implying that SMS allows for higher acceleration. In the LITT and HIFU experiments SMS factors up to 4 and 3 were reached, respectively, with a loss of precision of less than a factor of 3. Based on these results we conclude that SMS acceleration of PRFS thermometry is a valuable addition to SENSE, because it allows for a higher temporal resolution or greater spatial coverage, with a higher precision.
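The PRFS temperature maps behind these curves come from a simple phase-difference relation, ΔT = Δφ / (2π·γ·α·B0·TE), with the thermal coefficient α ≈ -0.01 ppm/°C. A minimal sketch follows (Python; the 1.5 T field and 20 ms echo time are example assumptions, not the scan parameters of the paper):

```python
import numpy as np

# PRFS thermometry: temperature change from the phase difference between a
# dynamic image and a pre-heating baseline. B0 and TE are example assumptions.
GAMMA = 42.58e6      # proton gyromagnetic ratio, Hz/T
ALPHA = -0.01e-6     # PRF thermal coefficient, approx. -0.01 ppm/degC
B0, TE = 1.5, 20e-3  # tesla, seconds

def delta_T(phase_now, phase_baseline):
    """Temperature-change map (degC) from two phase images (radians)."""
    dphi = np.angle(np.exp(1j * (phase_now - phase_baseline)))  # wrap to (-pi, pi]
    return dphi / (2 * np.pi * GAMMA * ALPHA * B0 * TE)

# A -0.32 rad phase shift corresponds to roughly +4 degC at these settings.
print(delta_T(np.array([-0.32]), np.array([0.0])))
```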
Kadlecek, Stephen; Hamedani, Hooman; Xu, Yinan; Emami, Kiarash; Xin, Yi; Ishii, Masaru; Rizi, Rahim
2013-10-01
Alveolar oxygen tension (Pao2) is sensitive to the interplay between local ventilation, perfusion, and alveolar-capillary membrane permeability, and thus reflects physiologic heterogeneity of healthy and diseased lung function. Several hyperpolarized helium ((3)He) magnetic resonance imaging (MRI)-based Pao2 mapping techniques have been reported, and considerable effort has gone toward reducing Pao2 measurement error. We present a new Pao2 imaging scheme, using parallel accelerated MRI, which significantly reduces measurement error. The proposed Pao2 mapping scheme was computer-simulated and was tested on both phantoms and five human subjects. Where possible, correspondence between actual local oxygen concentration and derived values was assessed for both bias (deviation from the true mean) and imaging artifact (deviation from the true spatial distribution). Phantom experiments demonstrated a significantly reduced coefficient of variation using the accelerated scheme. Simulation results support this observation and predict that correspondence between the true spatial distribution and the derived map is always superior using the accelerated scheme, although the improvement becomes less significant as the signal-to-noise ratio increases. Paired measurements in the human subjects, comparing accelerated and fully sampled schemes, show a reduced Pao2 distribution width for 41 of 46 slices. In contrast to proton MRI, acceleration of hyperpolarized imaging has no signal-to-noise penalty; its use in Pao2 measurement is therefore always beneficial. Comparison of multiple schemes shows that the benefit arises from a longer time-base during which oxygen-induced depolarization modifies the signal strength. Demonstration of the accelerated technique in human studies shows the feasibility of the method and suggests that measurement error is reduced here as well, particularly at low signal-to-noise levels. Copyright © 2013 AUR. Published by Elsevier Inc. All rights reserved.
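The physics the measurement relies on is that oxygen shortens the (3)He longitudinal relaxation roughly as 1/T1 = Pao2/ξ with ξ ≈ 2.6 bar·s, so Pao2 can be regressed from the image-to-image signal decay once RF depletion is accounted for. A minimal single-voxel sketch (Python; the flip angle, TR, and noise level are assumptions, and the paper's accelerated sampling scheme is not reproduced):

```python
import numpy as np

# Single-voxel PaO2 estimate from a hyperpolarized 3He image series.
# Signal decays by cos(theta) per RF excitation and by O2-induced relaxation,
# 1/T1 = pO2 / xi with xi ~ 2.6 bar*s (assumed literature value).
XI = 2.6                       # bar*s
theta = np.radians(5.0)        # assumed flip angle
TR = 0.2                       # s between images (assumed)

n = np.arange(8)
pO2_true = 0.13                # bar, roughly alveolar oxygen tension
signal = 100.0 * np.cos(theta) ** n * np.exp(-n * TR * pO2_true / XI)
signal *= 1 + np.random.default_rng(0).normal(0, 0.001, n.size)  # mild noise

slope = np.polyfit(n, np.log(signal), 1)[0]        # log-linear decay rate
pO2_est = -(slope - np.log(np.cos(theta))) * XI / TR
print(f"estimated PaO2 ~ {pO2_est:.3f} bar (true {pO2_true} bar)")
```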
BIOCONAID System (Bionic Control of Acceleration Induced Dimming). Final Report.
ERIC Educational Resources Information Center
Rogers, Dana B.; And Others
The system described represents a new technique for enhancing the fidelity of flight simulators during high acceleration maneuvers. This technique forces the simulator pilot into active participation and energy expenditure similar to the aircraft pilot undergoing actual accelerations. The Bionic Control of Acceleration Induced Dimming (BIOCONAID)…
Mandal, Arundhoti; Singha, Monisha; Addy, Partha Sarathi; Basak, Amit
2017-10-13
MALDI-based mass spectrometry has, over the last three decades, become an important analytical tool. It is a gentle ionization technique, usually applied to detect and characterize high-molecular-weight analytes such as proteins and other macromolecules. The earlier difficulty in detecting low-molecular-weight analytes, such as small organic molecules and metal-ion complexes, arose from the cluster of matrix-derived peaks in the low-molecular-weight region. To detect such molecules and metal-ion complexes, a four-pronged strategy has been developed: the use of alternative matrix materials, the employment of new surface materials that require no matrix, the use of metabolites that directly absorb the laser light, and laser-absorbing label-assisted LDI-MS (popularly known as LALDI-MS). This review highlights the developments in all these strategies, with special emphasis on LALDI-MS. © 2017 Wiley Periodicals, Inc.
Benhammouda, Brahim; Vazquez-Leal, Hector
2016-01-01
This work presents an analytical solution of some nonlinear delay differential equations (DDEs) with variable delays. Such DDEs are difficult to treat numerically and cannot be solved by existing general-purpose codes. A new method of steps combined with the differential transform method (DTM) is proposed as a powerful tool to solve these DDEs. This method reduces the DDEs to ordinary differential equations that are then solved by the DTM. Furthermore, we show that the solutions can be improved by the Laplace-Padé resummation method. Two examples are presented to show the efficiency of the proposed technique. The main advantage of this technique is that it rests on a simple procedure of a few straightforward steps and can be combined with analytical methods other than the DTM, such as the homotopy perturbation method.
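The idea of the method of steps is to advance interval by interval, using the already-computed history to evaluate the delayed term. The sketch below (Python) does this numerically with explicit Euler for the constant-delay test problem y'(t) = -y(t - 1) with history y = 1 for t ≤ 0; the paper's analytic DTM/Laplace-Padé machinery is not reproduced here:

```python
import numpy as np

# Numerical method of steps for y'(t) = -y(t - tau), y(t) = 1 for t <= 0.
tau, h, T = 1.0, 0.001, 4.0
t = np.arange(0.0, T + h, h)
y = np.empty_like(t)
y[0] = 1.0

for i in range(1, t.size):
    s = t[i - 1] - tau
    # Delayed value: from the prescribed history on s <= 0, otherwise
    # interpolated from the solution already computed on earlier intervals.
    y_delayed = 1.0 if s <= 0 else np.interp(s, t[:i], y[:i])
    y[i] = y[i - 1] + h * (-y_delayed)     # explicit Euler step

# On the first interval [0, tau] the exact solution is y = 1 - t:
print(f"y(1) = {y[int(tau / h)]:.6f}  (exact: 0.0)")
```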
Preparing the MAX IV storage rings for timing-based experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stråhlman, C., E-mail: Christian.Strahlman@maxlab.lu.se; Olsson, T., E-mail: Teresia.Olsson@maxlab.lu.se; Leemann, S. C.
2016-07-27
Time-resolved experimental techniques are increasingly abundant at storage ring facilities. Recent developments in accelerator technology and beamline instrumentation allow for simultaneous operation of high-intensity and timing-based experiments. The MAX IV facility is a state-of-the-art synchrotron light source in Lund, Sweden, that will come into operation in 2016. As many storage ring facilities are pursuing upgrade programs employing strong-focusing multibend achromats and passive harmonic cavities (HCs) in high-current operation, it is of broad interest to study the accelerator and instrumentation developments required to enable timing-based experiments at such machines. In particular, the use of hybrid filling modes combined with pulse picking by resonant excitation or pseudo single bunch has shown promising results. These methods can be combined with novel beamline instrumentation, such as choppers and instrument gating. In this paper we discuss how these techniques can be implemented and employed at MAX IV.
Numerical studies of the Bethe-Salpeter equation for a two-fermion bound state
NASA Astrophysics Data System (ADS)
de Paula, W.; Frederico, T.; Salmè, G.; Viviani, M.
2018-03-01
Some recent advances on the solution of the Bethe-Salpeter equation (BSE) for a two-fermion bound system directly in Minkowski space are presented. The calculations are based on the expression of the Bethe-Salpeter amplitude in terms of the so-called Nakanishi integral representation and on the light-front projection (i.e. the integration over the light-front variable k^- = k^0 - k^3). The latter technique allows for the analytically exact treatment of the singularities plaguing the two-fermion BSE in Minkowski space. The good agreement observed between our results and those obtained using other existing numerical methods, based on both Minkowski- and Euclidean-space techniques, fully corroborates our analytical treatment.
2016-08-17
...thereby opening up new avenues for accelerated materials discovery and design. The need for such data analytics has also been emphasized by the... The construction of inverse models is typically formulated as an optimization problem wherein a property or performance metric of... ...feature extraction, feature selection, etc. Such data preprocessing can either be supervised or unsupervised, based on whether the...
Acceleration and stability of a high-current ion beam in induction fields
NASA Astrophysics Data System (ADS)
Karas', V. I.; Manuilenko, O. V.; Tarakanov, V. P.; Federovskaya, O. V.
2013-03-01
A one-dimensional nonlinear analytic theory of the filamentation instability of a high-current ion beam is formulated. The results of 2.5-dimensional numerical particle-in-cell simulations of acceleration and stability of an annular compensated ion beam (CIB) in a linear induction particle accelerator are presented. It is shown that additional transverse injection of electron beams in magnetically insulated gaps (cusps) improves the quality of the ion-beam distribution function and provides uniform beam acceleration along the accelerator. The CIB filamentation instability in both the presence and the absence of an external magnetic field is considered.
Accelerometer Data Analysis and Presentation Techniques
NASA Technical Reports Server (NTRS)
Rogers, Melissa J. B.; Hrovat, Kenneth; McPherson, Kevin; Moskowitz, Milton E.; Reckart, Timothy
1997-01-01
The NASA Lewis Research Center's Principal Investigator Microgravity Services project analyzes Orbital Acceleration Research Experiment and Space Acceleration Measurement System data for principal investigators of microgravity experiments. Principal investigators need a thorough understanding of data analysis techniques so that they can request appropriate analyses to best interpret accelerometer data. Accelerometer data sampling and filtering is introduced along with the related topics of resolution and aliasing. Specific information about the Orbital Acceleration Research Experiment and Space Acceleration Measurement System data sampling and filtering is given. Time domain data analysis techniques are discussed and example environment interpretations are made using plots of acceleration versus time, interval average acceleration versus time, interval root-mean-square acceleration versus time, trimmean acceleration versus time, quasi-steady three dimensional histograms, and prediction of quasi-steady levels at different locations. An introduction to Fourier transform theory and windowing is provided along with specific analysis techniques and data interpretations. The frequency domain analyses discussed are power spectral density versus frequency, cumulative root-mean-square acceleration versus frequency, root-mean-square acceleration versus frequency, one-third octave band root-mean-square acceleration versus frequency, and power spectral density versus frequency versus time (spectrogram). Instructions for accessing NASA Lewis Research Center accelerometer data and related information using the internet are provided.
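Two of the listed analyses, interval RMS acceleration and the power spectral density, are easy to demonstrate on a synthetic record (Python with scipy; the 250 Hz sample rate and the 17 Hz, roughly 1 mg vibration are invented, and the real OARE/SAMS data formats are not reproduced):

```python
import numpy as np
from scipy.signal import welch

fs = 250.0                                   # assumed sample rate, Hz
t = np.arange(0, 60, 1 / fs)
accel = 1e-3 * np.sin(2 * np.pi * 17 * t)    # 17 Hz vibration, ~1 mg amplitude
accel += np.random.default_rng(1).normal(0, 2e-4, t.size)

# Interval RMS acceleration: one value per 1-second window.
win = int(fs)
rms = np.sqrt((accel[: t.size // win * win].reshape(-1, win) ** 2).mean(axis=1))
print("interval RMS (g), first 3 windows:", np.round(rms[:3], 5))

# Power spectral density via Welch's method; the 17 Hz line should dominate.
f, psd = welch(accel, fs=fs, nperseg=4096)
print(f"dominant frequency: {f[np.argmax(psd)]:.1f} Hz")
```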
Brockmeyer, Berit; Kraus, Uta R; Theobald, Norbert
2015-12-01
Silicone passive samplers have gained increasing attention in recent years as single-phase, practical and robust samplers for monitoring organic contaminants in the aquatic environment. However, analytical challenges arise in routine application during the extraction of analytes, as silicone oligomers are co-extracted and interfere severely with chemical analyses (e.g. gas chromatographic techniques). In this study, we present a fast, practical pre-cleaning method for silicone passive samplers that applies accelerated solvent extraction (ASE) to remove silicone oligomers prior to water deployment (hexane/dichloromethane, 100 °C, 70 min). ASE was also shown to be a very fast (10 min) and efficient extraction method for the non-polar contaminants sampled by the silicone membrane (non-exposed PRC recoveries 66-101 %). For both applications, the temperature, extraction time and solvent used for ASE were optimized. Purification of the ASE extract was carried out by silica gel and high-pressure liquid size exclusion chromatography (HPLC-SEC). The silicone oligomer content was checked by total reflection X-ray fluorescence spectroscopy (TXRF) to confirm the absence of silicone oligomers prior to analysis of passive sampler extracts. The established method was applied to real silicone samplers from the North Sea and Baltic Sea and showed no matrix effects during analysis of organic pollutants. Internal laboratory standard recoveries were in the same range for laboratory, transport and exposed samplers (85-126 %).
Analysis of Gold Ores by Fire Assay
ERIC Educational Resources Information Center
Blyth, Kristy M.; Phillips, David N.; van Bronswijk, Wilhelm
2004-01-01
Students of an Applied Chemistry degree course carried out a fire-assay exercise. The exercise showed fire assay to be a worthwhile quantitative analytical technique; it covered interesting theory, including acid-base and redox chemistry, and other concepts such as inquarting and cupelling.
A REVIEW OF APPLICATIONS OF LUMINESCENCE TO MONITORING OF CHEMICAL CONTAMINANTS IN THE ENVIRONMENT
The recent analytical literature on the application of luminescence techniques to the measurement of various classes of environmentally significant chemicals has been reviewed. Luminescence-spectroscopy-based methods are compared with other current techniques. Also, examples of rece...
Fujiyoshi, Tomoharu; Ikami, Takahito; Sato, Takashi; Kikukawa, Koji; Kobayashi, Masato; Ito, Hiroshi; Yamamoto, Atsushi
2016-02-19
The consequences of matrix effects in GC are a major issue of concern in pesticide residue analysis. The aim of this study was to evaluate the applicability of an analyte protectant generator in pesticide residue analysis using a GC-MS system. The technique is based on continuous introduction of ethylene glycol into the carrier gas. Ethylene glycol as an analyte protectant effectively compensated for matrix effects in agricultural product extracts. All peak intensities were increased by this technique without affecting GC-MS performance. Calibration curves for ethylene glycol in GC-MS systems with various degrees of pollution were compared, and similar response enhancements were observed. This result suggests a convenient multi-residue GC-MS method using an analyte protectant generator in place of the conventional compensation method for matrix-induced response enhancement, in which a mixture of analyte protectants is added to both neat and sample solutions. Copyright © 2016 Elsevier B.V. All rights reserved.
Sitt, Amit; Hess, Henry
2015-05-13
Nanoscale detectors hold great promise for single molecule detection and the analysis of small volumes of dilute samples. However, the probability of an analyte reaching the nanosensor in a dilute solution is extremely low due to the sensor's small size. Here, we examine the use of a chemical potential gradient along a surface to accelerate analyte capture by nanoscale sensors. Utilizing a simple model for transport induced by surface binding energy gradients, we study the effect of the gradient on the efficiency of collecting nanoparticles and single and double stranded DNA. The results indicate that chemical potential gradients along a surface can lead to an acceleration of analyte capture by several orders of magnitude compared to direct collection from the solution. The improvement in collection is limited to a relatively narrow window of gradient slopes, and its extent strongly depends on the size of the gradient patch. Our model allows the optimization of gradient layouts and sheds light on the fundamental characteristics of chemical potential gradient induced transport.
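A crude way to see the reported acceleration is a one-dimensional biased random walk: an analyte landing on a patch either diffuses freely toward the sensor at one edge or also feels a constant drift from the binding-energy gradient. In this sketch (Python; the patch size, diffusivity, and drift speeds are arbitrary illustration values, not the paper's parameters), the mean capture time drops sharply once drift dominates diffusion:

```python
import numpy as np

rng = np.random.default_rng(2)
L, dt, D = 50.0, 1.0, 0.5        # patch width, time step, diffusivity (arbitrary)

def capture_time(drift):
    """Steps a walker from a random start until it reaches the sensor at x=0."""
    x, t = rng.uniform(0, L), 0.0
    while x > 0:
        # Drift toward the sensor plus diffusive noise; reflect at the far edge.
        x = min(x + rng.normal(-drift * dt, np.sqrt(2 * D * dt)), L)
        t += dt
    return t

for v in (0.0, 0.05, 0.2):       # drift speed set by the gradient slope
    mean_t = np.mean([capture_time(v) for _ in range(200)])
    print(f"drift {v:4.2f} -> mean capture time {mean_t:8.1f} steps")
```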
Accelerator-based techniques for the support of senior-level undergraduate physics laboratories
NASA Astrophysics Data System (ADS)
Williams, J. R.; Clark, J. C.; Isaacs-Smith, T.
2001-07-01
Approximately three years ago, Auburn University replaced its aging Dynamitron accelerator with a new 2MV tandem machine (Pelletron) manufactured by the National Electrostatics Corporation (NEC). This new machine is maintained and operated for the University by Physics Department personnel, and the accelerator supports a wide variety of materials modification/analysis studies. Computer software is available that allows the NEC Pelletron to be operated from a remote location, and an Internet link has been established between the Accelerator Laboratory and the Upper-Level Undergraduate Teaching Laboratory in the Physics Department. Additional software supplied by Canberra Industries has also been used to create a second Internet link that allows live-time data acquisition in the Teaching Laboratory. Our senior-level undergraduates and first-year graduate students perform a number of experiments related to radiation detection and measurement as well as several standard accelerator-based experiments that have been added recently. These laboratory exercises will be described, and the procedures used to establish the Internet links between our Teaching Laboratory and the Accelerator Laboratory will be discussed.
Optical control of hard X-ray polarization by electron injection in a laser wakefield accelerator
Schnell, Michael; Sävert, Alexander; Uschmann, Ingo; Reuter, Maria; Nicolai, Maria; Kämpfer, Tino; Landgraf, Björn; Jäckel, Oliver; Jansen, Oliver; Pukhov, Alexander; Kaluza, Malte Christoph; Spielmann, Christian
2013-01-01
Laser-plasma particle accelerators could provide more compact sources of high-energy radiation than conventional accelerators. Moreover, because they deliver radiation in femtosecond pulses, they could improve the time resolution of X-ray absorption techniques. Here we show that we can measure and control the polarization of ultra-short, broad-band keV photon pulses emitted from a laser-plasma-based betatron source. The electron trajectories, and hence the polarization of the emitted X-rays, are experimentally controlled by the pulse-front tilt of the driving laser pulses. Particle-in-cell simulations show that an asymmetric plasma wave can be driven by a tilted pulse front and a non-symmetric intensity distribution of the focal spot. Both lead to notable off-axis electron injection followed by collective electron-betatron oscillations. We expect that our method of all-optical steering is not only useful for plasma-based X-ray sources but also has significance for future laser-based particle accelerators. PMID:24026068
Hallier, Arnaud; Celette, Florian; Coutarel, Julie; David, Christophe
2013-01-01
Fusarium head blight, caused by different Fusarium species, is one of the most serious diseases in wheat production worldwide. It is therefore important to be able to quantify the deoxynivalenol concentration in wheat. Unfortunately, in mycotoxin quantification, due to the uneven distribution of mycotoxins within the initial lot, it is difficult, or even impossible, to obtain a truly representative analytical sample. In previous work we showed that the sampling step most responsible for variability was grain sampling. In this paper, we investigate in particular the step of scaling down from a laboratory sample of several kilograms to an analytical sample of a few grams. The naturally contaminated wheat lot was obtained from an organic field in the Rhône-Alpes region of southeastern France from the 2008-2009 cropping season. The deoxynivalenol level was found to be 50.6 ± 2.3 ng g⁻¹. Deoxynivalenol was extracted with an acetonitrile-water mixture and quantified by gas chromatography-electron capture detection (GC-ECD). Three different grain sampling techniques were tested to obtain analytical samples: one based on manual homogenisation and division, a second based on the use of a rotating shaker, and a third on the use of compressed air. Both the rotating shaker and the compressed air techniques enabled a homogeneous laboratory sample to be obtained, from which representative analytical samples could be taken. Moreover, these techniques did away with many repetitions and with grinding. This study therefore contributes to reducing sampling variability in the evaluation of deoxynivalenol contamination of organic wheat grain, and does so at a reasonable cost.
Full Flight Envelope Direct Thrust Measurement on a Supersonic Aircraft
NASA Technical Reports Server (NTRS)
Conners, Timothy R.; Sims, Robert L.
1998-01-01
Direct thrust measurement using strain gages offers advantages over analytically-based thrust calculation methods. For flight test applications, the direct measurement method typically uses a simpler sensor arrangement and minimal data processing compared to analytical techniques, which normally require costly engine modeling and multisensor arrangements throughout the engine. Conversely, direct thrust measurement has historically produced less than desirable accuracy because of difficulty in mounting and calibrating the strain gages and the inability to account for secondary forces that influence the thrust reading at the engine mounts. Consequently, the strain-gage technique has normally been used for simple engine arrangements and primarily in the subsonic speed range. This paper presents the results of a strain gage-based direct thrust-measurement technique developed by the NASA Dryden Flight Research Center and successfully applied to the full flight envelope of an F-15 aircraft powered by two F100-PW-229 turbofan engines. Measurements have been obtained at quasi-steady-state operating conditions at maximum non-augmented and maximum augmented power throughout the altitude range of the vehicle and to a maximum speed of Mach 2.0 and are compared against results from two analytically-based thrust calculation methods. The strain-gage installation and calibration processes are also described.
Vaidya, Sharad; Parkash, Hari; Bhargava, Akshay; Gupta, Sharad
2014-01-01
Abundant resources and techniques have been used for complete-coverage crown fabrication. Conventional investing and casting procedures for phosphate-bonded investments require a 2- to 4-h procedure before completion. Accelerated casting techniques have been used, but may not produce castings with matching marginal accuracy. This study measured the marginal gap and determined the clinical acceptability of single cast copings invested in a phosphate-bonded investment using conventional and accelerated methods. One hundred and twenty cast coping samples were fabricated using conventional and accelerated methods, with three finish lines: chamfer, shoulder, and shoulder with bevel. Sixty copings were prepared with each technique. Each coping was examined with a stereomicroscope at four predetermined sites, and the marginal gap at each was documented. A master chart was prepared for all the data, which were analyzed using the Statistical Package for the Social Sciences. The marginal gap was then evaluated by t-test; analysis of variance and post-hoc analysis were used to compare the two groups as well as to make comparisons among the three subgroups. The measurements recorded showed no statistically significant difference between the conventional and accelerated groups. Among the three marginal designs studied, shoulder with bevel showed the best marginal fit with both conventional and accelerated casting techniques. The accelerated casting technique could be a viable alternative to the time-consuming conventional casting technique, and the marginal fit between the two casting techniques showed no statistical difference.
2014-01-01
This review aims to highlight the recent advances and methodological improvements in instrumental techniques applied for the analysis of different brominated flame retardants (BFRs). The literature search strategy was based on the recent analytical reviews published on BFRs. The main selection criteria involved the successful development and application of analytical methods for determination of the target compounds in various environmental matrices. Different factors affecting chromatographic separation and mass spectrometric detection of brominated analytes were evaluated and discussed. Techniques using advanced instrumentation to achieve outstanding results in quantification of different BFRs and their metabolites/degradation products were highlighted. Finally, research gaps in the field of BFR analysis were identified and recommendations for future research were proposed. PMID:27433482
The effect of buccal corticotomy on accelerating orthodontic tooth movement of maxillary canine
Jahanbakhshi, Mohammad Reza; Motamedi, Ali Mohammad Kalantar; Feizbakhsh, Masoud; Mogharehabed, Ahmad
2016-01-01
Background: Selective alveolar corticotomy is defined as an intentional injury to cortical bone. This technique is an effective means of accelerating orthodontic tooth movement. The aim of this study was to evaluate the effect of buccal corticotomy on accelerating maxillary canine retraction. Materials and Methods: The sample in this clinical trial consisted of 15 adult female patients with a therapeutic need for extraction of the maxillary first premolars and maximum canine retraction. Using a split-mouth design, at the time of premolar extraction, buccal corticotomy was performed around the maxillary first premolar on a randomly chosen side of the maxilla, with the other side reserved as the control. Canine retraction was performed using frictionless mechanics with a simple vertical loop. Every 2 weeks, the distance between the canines and second premolars was measured until complete space closure. The velocity of space closure was calculated to evaluate the effect of this technique in accelerating orthodontic tooth movement. The data were statistically analyzed using the independent t-test, with significance set at 0.05. Results: The rate of canine retraction was significantly higher on the corticotomy side than on the control side, averaging 1.8 mm/month versus 1.1 mm/month, respectively (P < 0.001). Conclusion: Based on the results of this study, corticotomy can accelerate orthodontic tooth movement to about twice the rate of conventional orthodontics, and the effect is most significant in the early stages after the surgical procedure. Buccal corticotomy is therefore a useful adjunct technique for accelerating orthodontic tooth movement. PMID:27605986
NASA Astrophysics Data System (ADS)
Colby, Eric R.; Len, L. K.
Most particle accelerators today are expensive devices found only in the largest laboratories, industries, and hospitals. Using techniques developed nearly a century ago, the limiting performance of these accelerators is often traceable to material limitations, power source capabilities, and the cost tolerance of the application. Advanced accelerator concepts aim to increase the gradient of accelerators by orders of magnitude, using new power sources (e.g. lasers and relativistic beams) and new materials (e.g. dielectrics, metamaterials, and plasmas). Worldwide, research in this area has grown steadily in intensity since the 1980s, resulting in demonstrations of accelerating gradients that are orders of magnitude higher than for conventional techniques. While research is still in the early stages, these techniques have begun to demonstrate the potential to radically change accelerators, making them much more compact, and extending the reach of these tools of science into the angstrom and attosecond realms. Maturation of these techniques into robust, engineered devices will require sustained interdisciplinary, collaborative R&D and coherent use of test infrastructure worldwide. The outcome can potentially transform how accelerators are used.
Timescale Correlation between Marine Atmospheric Exposure and Accelerated Corrosion Testing - Part 2
NASA Technical Reports Server (NTRS)
Montgomery, Eliza L.; Calle, Luz Marina; Curran, Jerome C.; Kolody, Mark R.
2012-01-01
Evaluation of metals to predict service life of metal-based structures in corrosive environments has long relied on atmospheric exposure test sites. Traditional accelerated corrosion testing relies on mimicking the exposure conditions, often incorporating salt spray and ultraviolet (UV) radiation, and exposing the metal to continuous or cyclic conditions similar to those of the corrosive environment. Their reliability to correlate to atmospheric exposure test results is often a concern when determining the timescale to which the accelerated tests can be related. Accelerated corrosion testing has yet to be universally accepted as a useful tool in predicting the long-term service life of a metal, despite its ability to rapidly induce corrosion. Although visual and mass loss methods of evaluating corrosion are the standard, and their use is crucial, a method that correlates timescales from accelerated testing to atmospheric exposure would be very valuable. This paper presents work that began with the characterization of the atmospheric environment at the Kennedy Space Center (KSC) Beachside Corrosion Test Site. The chemical changes that occur on low carbon steel, during atmospheric and accelerated corrosion conditions, were investigated using surface chemistry analytical methods. The corrosion rates and behaviors of panels subjected to long-term and accelerated corrosion conditions, involving neutral salt fog and alternating seawater spray, were compared to identify possible timescale correlations between accelerated and long-term corrosion performance. The results, as well as preliminary findings on the correlation investigation, are presented.
Digital Mapping Techniques '11–12 workshop proceedings
Soller, David R.
2014-01-01
At these meetings, oral and poster presentations and special discussion sessions emphasized: (1) methods for creating and publishing map products (here, "publishing" includes Web-based release); (2) field data capture software and techniques, including the use of LiDAR; (3) digital cartographic techniques; (4) migration of digital maps into ArcGIS Geodatabase formats; (5) analytical GIS techniques; and (6) continued development of the National Geologic Map Database.
Technology evaluation of man-rated acceleration test equipment for vestibular research
NASA Technical Reports Server (NTRS)
Taback, I.; Kenimer, R. L.; Butterfield, A. J.
1983-01-01
The considerations for eliminating acceleration noise cues in horizontal, linear, cyclic-motion sleds intended for both ground and shuttle-flight applications are addressed. The principal concerns are the acceleration transients associated with the change in direction of motion of the carriage. The study presents a design limit for acceleration cues or transients based upon published measurements of thresholds of human perception of linear cyclic motion. The sources and levels of motion transients are presented based upon measurements obtained from existing sled systems. The approach to a noise-free system recommends the use of air bearings for the carriage support and moving-coil linear induction motors operating at low frequency as the drive system. Metal belts running on air-bearing pulleys provide an alternate approach to the drive system. The appendix presents a discussion of alternate testing techniques intended to provide preliminary data by means of pendulums, linear motion devices, and commercial air-bearing tables.
A method to align a bent crystal for channeling experiments by using quasichanneling oscillations
NASA Astrophysics Data System (ADS)
Sytov, A. I.; Guidi, V.; Tikhomirov, V. V.; Bandiera, L.; Bagli, E.; Germogli, G.; Mazzolari, A.; Romagnoni, M.
2018-04-01
A method to determine both a bent crystal's angle of alignment and its radius of curvature from a single distribution of deflection angles has been developed. The method is based on measuring the angular position of the recently predicted and observed quasichanneling oscillations in the deflection-angle distribution and subsequently fitting both the radius and the angular alignment with analytic formulae. In this paper the method is applied to simulated angular distributions for electrons over a wide range of values of both radius and alignment. It is demonstrated for the (111) nonequidistant planes, though the technique is general and could be applied to any kind of planes. The method's application constraints are also discussed. It is shown by simulations that this method, being in effect a form of beam diagnostics, allows one in certain cases to increase the crystal alignment accuracy as well as to control precisely the radius of curvature inside an accelerator tube without breaking vacuum. In addition, it speeds up the procedure of crystal alignment in channeling experiments, reducing beamtime consumption.
Yang, Jianmin; Li, Hai-Fang; Li, Meilan; Lin, Jin-Ming
2012-08-21
The presence of inorganic elements in fuel gas generally accelerates the corrosion and depletion of materials used in the fuel gas industry, and even leads to serious accidents. For identification of existing trace inorganic contaminants in fuel gas in a portable way, a highly efficient gas-liquid sampling collection system based on gas dispersion concentration is introduced in this work. Using the constructed dual path gas-liquid collection setup, inorganic cations and anions were simultaneously collected from real liquefied petroleum gas (LPG) and analyzed by capillary electrophoresis (CE) with indirect UV absorbance detection. The head-column field-amplified sample stacking technique was applied to improve the detection limits to 2-25 ng mL(-1). The developed collection and analytical methods have successfully determined existing inorganic contaminants in a real LPG sample in the range of 4.59-138.69 μg m(-3). The recoveries of cations and anions with spiked LPG samples were between 83.98 and 105.63%, and the relative standard deviations (RSDs) were less than 7.19%.
Optimal time-domain technique for pulse width modulation in power electronics
NASA Astrophysics Data System (ADS)
Mayergoyz, I.; Tyagi, S.
2018-05-01
An optimal time-domain technique for pulse width modulation is presented. It is based on exact and explicit analytical solutions for inverter circuits, obtained for any sequence of rectangular input voltage pulses. Two optimality criteria are discussed and illustrated by numerical examples.
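Because the load is linear and the input is piecewise constant, the circuit response can be composed exactly, segment by segment, with no numerical integration. A minimal sketch for a series R-L load (Python; the component values and the 10 kHz, 50% duty pattern are illustrative assumptions, not the paper's circuit):

```python
import numpy as np

R, L = 1.0, 1e-3                     # ohms, henries (assumed)
tau = L / R

def rl_current(pulses, i0=0.0):
    """End-of-segment currents for a series R-L load driven by a sequence of
    (voltage, duration) segments. Within each segment the drive is constant,
    so the exact solution is i(t) = V/R + (i0 - V/R) * exp(-t / tau)."""
    out, i = [], i0
    for V, dt in pulses:
        i = V / R + (i - V / R) * np.exp(-dt / tau)
        out.append(i)
    return out

pattern = [(10.0, 50e-6), (0.0, 50e-6)] * 5   # 10 kHz, 50% duty PWM burst
print([f"{i:.3f}" for i in rl_current(pattern)])
```

Composing exact exponentials in this way is what makes a time-domain optimization over pulse edges tractable: each candidate pulse sequence is evaluated in closed form rather than by simulation.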
A simulation-based evaluation of methods for inferring linear barriers to gene flow
Christopher Blair; Dana E. Weigel; Matthew Balazik; Annika T. H. Keeley; Faith M. Walker; Erin Landguth; Sam Cushman; Melanie Murphy; Lisette Waits; Niko Balkenhol
2012-01-01
Different analytical techniques used on the same data set may lead to different conclusions about the existence and strength of genetic structure. Therefore, reliable interpretation of the results from different methods depends on the efficacy and reliability of different statistical methods. In this paper, we evaluated the performance of multiple analytical methods to...
Mining Interactions in Immersive Learning Environments for Real-Time Student Feedback
ERIC Educational Resources Information Center
Kennedy, Gregor; Ioannou, Ioanna; Zhou, Yun; Bailey, James; O'Leary, Stephen
2013-01-01
The analysis and use of data generated by students' interactions with learning systems or programs--learning analytics--has recently gained widespread attention in the educational technology community. Part of the reason for this interest is based on the potential of learning analytic techniques such as data mining to find hidden patterns in…
ERIC Educational Resources Information Center
Graudins, Maija M.; Rehfeldt, Ruth Anne; DeMattei, Ronda; Baker, Jonathan C.; Scaglia, Fiorella
2012-01-01
Performing oral care procedures with children with autism who exhibit noncompliance can be challenging for oral care professionals. Previous research has elucidated a number of effective behavior analytic procedures for increasing compliance, but some procedures are likely to be too time consuming and expensive for community-based oral care…
Treatment of a Disorder of Self through Functional Analytic Psychotherapy
ERIC Educational Resources Information Center
Ferro-Garcia, Rafael; Lopez-Bermudez, Miguel Angel; Valero-Aguayo, Luis
2012-01-01
This paper presents a clinical case study of a depressed female, treated by means of Functional Analytic Psychotherapy (FAP) based on the theory and techniques for treating an "unstable self" (Kohlenberg & Tsai, 1991), instead of the classic treatment for depression. The client was a 20-year-old college student. The trigger for her problems was a…
1998-02-01
...provide the aircrew and passengers with a level of protection commensurate with the risk of operating aircraft in the military and civilian... the time taken to reach peak acceleration and upon the peak acceleration level attained. Long-duration acceleration, which can be experienced in... acceleration depends principally on the plateau level of the acceleration imposed on the body, as the response to long-duration acceleration is due...
Analytical techniques for steroid estrogens in water samples - A review.
Fang, Ting Yien; Praveena, Sarva Mangala; deBurbure, Claire; Aris, Ahmad Zaharin; Ismail, Sharifah Norkhadijah Syed; Rasdi, Irniza
2016-12-01
In recent years, environmental concern over ultra-trace levels of steroid estrogens in water samples has increased because of their adverse effects on human and animal life. Attention to the analytical techniques used to quantify steroid estrogens in water samples is therefore increasingly important. The objective of this review was to present an overview of both instrumental and non-instrumental analytical techniques available for the determination of steroid estrogens in water samples, evidencing their respective potential advantages and limitations using the Need, Approach, Benefit, and Competition (NABC) approach. The techniques highlighted in this review were gas chromatography mass spectrometry (GC-MS), liquid chromatography mass spectrometry (LC-MS), enzyme-linked immunosorbent assay (ELISA), radioimmunoassay (RIA), the yeast estrogen screen (YES) assay, and the human breast cancer cell line proliferation (E-screen) assay. The complexity of water samples and their low estrogenic concentrations necessitate the use of highly sensitive instrumental analytical techniques (GC-MS and LC-MS) and non-instrumental analytical techniques (ELISA, RIA, the YES assay and the E-screen assay) to quantify steroid estrogens. Both instrumental and non-instrumental techniques have their own advantages and limitations. However, the non-instrumental ELISA technique, thanks to its low detection limit, simplicity, rapidity and cost-effectiveness, currently appears to be the most reliable for determining steroid estrogens in water samples. Copyright © 2016 Elsevier Ltd. All rights reserved.
YORP: Influence on Rotation Rate
NASA Astrophysics Data System (ADS)
Golubov, A. A.; Krugly, Yu. N.
2010-06-01
We have developed a semi-analytical model for calculating the angular acceleration of asteroids due to the Yarkovsky-O'Keefe-Radzievskii-Paddack (YORP) effect. The calculation of the YORP effect has been generalized to the case of elliptic orbits. It has been shown that the acceleration does not depend on the thermal inertia of the asteroid's surface. The model was applied to the asteroid 1620 Geographos and yielded an acceleration of 2×10^{-18} s^{-2}. This value is close to the acceleration obtained from photometric observations of Geographos by Durech et al. [1].
Jadhav, Vivek Dattatray; Motwani, Bhagwan K; Shinde, Jitendra; Adhapure, Prasad
2017-01-01
The aim of this study was to evaluate the marginal fit and surface roughness of complete cast crowns made by a conventional and an accelerated casting technique. The study was divided into three parts. In Part I, the marginal fit of full metal crowns made by both casting techniques was checked in the vertical direction; in Part II, the fit of sectional metal crowns was checked in the horizontal direction; and in Part III, the surface roughness of disc-shaped metal plate specimens made by both casting techniques was checked. In each part, a conventional technique was compared with an accelerated technique. The results of the t-test and independent-samples test did not indicate statistically significant differences in the marginal discrepancy detected between the two casting techniques. For the marginal discrepancy and surface roughness, crowns fabricated with the accelerated technique were significantly different from those fabricated with the conventional technique. The accelerated casting technique showed quite satisfactory results, but the conventional technique was superior in terms of marginal fit and surface roughness.
Positive-Negative Birefringence in Multiferroic Layered Metasurfaces.
Khomeriki, R; Chotorlishvili, L; Tralle, I; Berakdar, J
2016-11-09
We uncover and identify the regime for a magnetically and ferroelectrically controllable negative refraction of light traversing a multiferroic, oxide-based metastructure consisting of alternating nanoscopic ferroelectric (SrTiO3) and ferromagnetic (Y3Fe2(FeO4)3, YIG) layers. We perform analytical and numerical simulations based on discretized, coupled equations for the self-consistent Maxwell/ferroelectric/ferromagnetic dynamics and obtain a biquadratic relation for the refractive index. Various scenarios of ordinary and negative refraction in different frequency ranges are analyzed and quantified by simple analytical formulae that are confirmed by full-fledged numerical simulations. Electromagnetic waves injected at the edges of the sample are propagated exactly numerically. We discover that, for particular GHz frequencies, waves with different polarizations are characterized by different signs of the refractive index, giving rise to novel phenomena such as a positive-negative birefringence effect and magnetically controlled light trapping and acceleration.
Van Poucke, Sven; Thomeer, Michiel; Heath, John; Vukicevic, Milan
2016-07-06
Despite the accelerating pace of scientific discovery, the current clinical research enterprise does not sufficiently address pressing clinical questions. Given the constraints on clinical trials, for a majority of clinical questions, the only relevant data available to aid in decision making are based on observation and experience. Our purpose here is 3-fold. First, we describe the classic context of medical research guided by Popper's scientific epistemology of "falsificationism." Second, we discuss challenges and shortcomings of randomized controlled trials and present the potential of observational studies based on big data. Third, we cover several obstacles related to the use of observational (retrospective) data in clinical studies. We conclude that randomized controlled trials are not at risk for extinction, but innovations in statistics, machine learning, and big data analytics may generate a completely new ecosystem for exploration and validation. PMID:27383622
Laborda, Francisco; Bolea, Eduardo; Cepriá, Gemma; Gómez, María T; Jiménez, María S; Pérez-Arantegui, Josefina; Castillo, Juan R
2016-01-21
The increasing demand for analytical information related to inorganic engineered nanomaterials requires the adaptation of existing techniques and methods, or the development of new ones. The challenge for the analytical sciences has been to treat nanoparticles as a new sort of analyte, involving both chemical (composition, mass and number concentration) and physical information (e.g. size, shape, aggregation). Moreover, information about the species derived from the nanoparticles themselves and their transformations must also be supplied. Whereas techniques commonly used for nanoparticle characterization, such as light scattering techniques, show serious limitations when applied to complex samples, other well-established techniques, like electron microscopy and atomic spectrometry, can provide useful information in most cases. Furthermore, separation techniques, including flow field-flow fractionation, capillary electrophoresis and hydrodynamic chromatography, are moving into the nano domain, mostly hyphenated to inductively coupled plasma mass spectrometry as an element-specific detector. Emerging techniques based on the detection of single nanoparticles, using ICP-MS but also coulometry, are on their way to gaining a position. Chemical sensors selective to nanoparticles are in their early stages, but they are very promising considering their portability and simplicity. Although the field is in continuous evolution, at this moment it is moving from proofs-of-concept in simple matrices to methods dealing with matrices of higher complexity and relevant analyte concentrations. To achieve this goal, sample preparation methods are essential for managing such complex situations. Apart from size fractionation methods, matrix digestion, extraction and concentration methods capable of preserving the nature of the nanoparticles are being developed. This review presents and discusses the state-of-the-art analytical techniques and sample preparation methods suitable for dealing with complex samples. Single- and multi-method approaches applied to solve the nanometrological challenges posed by a variety of stakeholders are also presented. Copyright © 2015 Elsevier B.V. All rights reserved.
Ambient Mass Spectrometry Imaging Using Direct Liquid Extraction Techniques
DOE Office of Scientific and Technical Information (OSTI.GOV)
Laskin, Julia; Lanekoff, Ingela
2015-11-13
Mass spectrometry imaging (MSI) is a powerful analytical technique that enables label-free spatial localization and identification of molecules in complex samples.1-4 MSI applications range from forensics5 to clinical research6 and from understanding microbial communication7-8 to imaging biomolecules in tissues.1, 9-10 Recently, MSI protocols have been reviewed.11 Ambient ionization techniques enable direct analysis of complex samples under atmospheric pressure without special sample pretreatment.3, 12-16 In fact, in ambient ionization mass spectrometry, sample processing (e.g., extraction, dilution, preconcentration, or desorption) occurs during the analysis.17 This substantially speeds up analysis and eliminates any possible effects of sample preparation on the localization of molecules in the sample.3, 8, 12-14, 18-20 Venter and co-workers have classified ambient ionization techniques into three major categories based on the sample processing steps involved: 1) liquid extraction techniques, in which analyte molecules are removed from the sample and extracted into a solvent prior to ionization; 2) desorption techniques capable of generating free ions directly from substrates; and 3) desorption techniques that produce larger particles subsequently captured by an electrospray plume and ionized.17 This review focuses on localized analysis and ambient imaging of complex samples using a subset of ambient ionization methods broadly defined as “liquid extraction techniques” based on the classification introduced by Venter and co-workers.17 Specifically, we include techniques where analyte molecules are desorbed from solid or liquid samples using charged droplet bombardment, liquid extraction, physisorption, chemisorption, mechanical force, laser ablation, or laser capture microdissection. Analyte extraction is followed by soft ionization that generates ions corresponding to intact species. Some of the key advantages of liquid extraction techniques include the ease of operation, ability to analyze samples in their native environments, speed of analysis, and ability to tune the extraction solvent composition to a problem at hand. For example, solvent composition may be optimized for efficient extraction of different classes of analytes from the sample or for quantification or online derivatization through reactive analysis. In this review, we will: 1) introduce individual liquid extraction techniques capable of localized analysis and imaging, 2) describe approaches for quantitative MSI experiments free of matrix effects, 3) discuss advantages of reactive analysis for MSI experiments, and 4) highlight selected applications (published between 2012 and 2015) that focus on imaging and spatial profiling of molecules in complex biological and environmental samples.
Prognostics Approach for Power MOSFET Under Thermal-Stress
NASA Technical Reports Server (NTRS)
Galvan, Jose Ramon Celaya; Saxena, Abhinav; Kulkarni, Chetan S.; Saha, Sankalita; Goebel, Kai
2012-01-01
The prognostic technique for a power MOSFET presented in this paper is based on accelerated aging of IRF520Npbf MOSFETs in a TO-220 package. The methodology uses thermal and power cycling to accelerate the life of the devices. The major failure mechanism under these stress conditions is die-attach degradation, typical for discrete devices with lead-free solder die attachment. It has been determined that die-attach degradation results in an increase in ON-state resistance due to its dependence on junction temperature. Increasing resistance can therefore be used as a precursor of failure for the die-attach failure mechanism under thermal stress. A feature based on normalized ON-resistance is computed from in-situ measurements of the electro-thermal response. An extended Kalman filter is used as a model-based prognostic technique within the Bayesian tracking framework. This paper reports preliminary work that serves as a case study on the prediction of remaining life of power MOSFETs and builds upon the work presented in [1]. The algorithm considered in this study has been used as a prognostic algorithm in different applications and is regarded as a suitable candidate for component-level prognostics. This work attempts to further validate the algorithm by presenting it with real degradation data, including measurements from real sensors that contain all the complications (noise, bias, etc.) typically not captured in simulated degradation data. The algorithm is developed and tested on the accelerated-aging timescale. In real-world operation, the timescale of the degradation process, and therefore of the RUL predictions, will be considerably larger. It is hypothesized that even though the timescale will be larger, it remains constant through the degradation process, and the algorithm and model would still apply under the slower degradation process. By using accelerated aging data with actual device measurements and real sensors (no simulated behavior), we attempt to assess how the algorithm behaves under realistic conditions.
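As an illustration of the scheme described above, the sketch below tracks a normalized ON-resistance feature with a simple extended Kalman filter and extrapolates to a failure threshold to estimate remaining useful life (RUL). The exponential degradation model, the noise levels, and the 5% resistance-increase threshold are illustrative assumptions, not the parameters used in the paper.

    import numpy as np

    # State x = [r, lam]: r is the normalized ON-resistance growth feature,
    # lam is the (unknown) exponential degradation rate.
    # Assumed model: r_{k+1} = r_k * exp(lam * dt)  (illustrative only).
    dt = 1.0                      # aging cycles between measurements
    FAIL = 0.05                   # hypothetical failure threshold (5% increase)

    def f(x):                     # state transition
        r, lam = x
        return np.array([r * np.exp(lam * dt), lam])

    def F_jac(x):                 # Jacobian of f
        r, lam = x
        e = np.exp(lam * dt)
        return np.array([[e, r * dt * e], [0.0, 1.0]])

    H = np.array([[1.0, 0.0]])    # only r is measured
    Q = np.diag([1e-10, 1e-8])    # process noise (assumed)
    R = np.array([[1e-6]])        # measurement noise (assumed)

    x = np.array([1e-3, 0.01])    # initial guess
    P = np.diag([1e-4, 1e-4])

    rng = np.random.default_rng(0)
    truth = 1e-3 * np.exp(0.02 * dt * np.arange(150))     # synthetic degradation
    meas = truth + rng.normal(0.0, 1e-3, size=truth.size)

    for z in meas:
        x, Fk = f(x), F_jac(x)               # predict (Jacobian at prior state)
        P = Fk @ P @ Fk.T + Q
        S = H @ P @ H.T + R                  # update
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([z]) - H @ x)
        P = (np.eye(2) - K @ H) @ P

    r_hat, lam_hat = x
    rul = np.log(FAIL / r_hat) / lam_hat if lam_hat > 0 else float("inf")
    print(f"estimated rate = {lam_hat:.4f}/cycle, RUL ~ {rul:.0f} cycles")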
NASA Astrophysics Data System (ADS)
Shao, Lin; Gigax, Jonathan; Chen, Di; Kim, Hyosim; Garner, Frank A.; Wang, Jing; Toloczko, Mychailo B.
2017-10-01
Self-ion irradiation is widely used as a method to simulate neutron damage in reactor structural materials. Accelerator-based simulation of void swelling, however, introduces a number of neutron-atypical features which require careful data extraction and, in some cases, the introduction of innovative irradiation techniques to alleviate these issues. We briefly summarize three such atypical features: defect imbalance effects, pulsed beam effects, and carbon contamination. The latter issue has only recently been recognized as relevant to the simulation of void swelling and is discussed here in greater detail. It is shown that carbon ions are entrained in the ion beam by Coulomb drag and accelerated toward the target surface. Beam-contaminant interactions are modeled using molecular dynamics simulation. By applying a multiple-beam-deflection technique, carbon and other contaminants can be effectively filtered out, as demonstrated in an irradiation of HT-9 alloy by 3.5 MeV Fe ions.
NASA Technical Reports Server (NTRS)
Abel, I.
1979-01-01
An analytical technique for predicting the performance of an active flutter-suppression system is presented. This technique is based on the use of an interpolating function to approximate the unsteady aerodynamics. The resulting equations are formulated in terms of linear, ordinary differential equations with constant coefficients. This technique is then applied to an aeroelastic model wing equipped with an active flutter-suppression system. Comparisons between wind-tunnel data and analysis are presented for the wing both with and without active flutter suppression. Results indicate that the wing flutter characteristics without flutter suppression can be predicted very well but that a more adequate model of wind-tunnel turbulence is required when the active flutter-suppression system is used.
Torque Transient of Magnetically Driven Flow for Viscosity Measurement
NASA Technical Reports Server (NTRS)
Ban, Heng; Li, Chao; Su, Ching-Hua; Lin, Bochuan; Scripa, Rosalia N.; Lehoczky, Sandor L.
2004-01-01
Viscosity is a good indicator of structural changes in complex liquids, such as semiconductor melts with chain or ring structures. This paper discusses the theoretical and experimental results of the transient torque technique for non-intrusive viscosity measurement. Such a technique is essential for high-temperature viscosity measurement of high-pressure and toxic semiconductor melts. In this paper, our previous work on the oscillating-cup technique was extended to the transient process of a magnetically driven melt flow in a damped oscillation system. Based on the analytical solution for the fluid flow and cup oscillation, a semi-empirical model was established to extract the fluid viscosity. The analytical and experimental results indicated that this technique has the advantages of short measurement time and straightforward data analysis procedures.
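The transient-decay idea can be conveyed with a short fit: the response of a damped oscillation system is modeled as an exponentially damped sinusoid, and the decay constant extracted by least squares is what a semi-empirical model would then map to viscosity. The functional form and the synthetic data below are illustrative assumptions, not the paper's actual calibration.

    import numpy as np
    from scipy.optimize import curve_fit

    # Damped oscillation: theta(t) = A * exp(-delta*t) * cos(omega*t + phi)
    def damped(t, A, delta, omega, phi):
        return A * np.exp(-delta * t) * np.cos(omega * t + phi)

    rng = np.random.default_rng(1)
    t = np.linspace(0.0, 30.0, 600)                       # seconds (synthetic)
    y = damped(t, 1.0, 0.12, 2.0, 0.3) + rng.normal(0, 0.01, t.size)

    popt, pcov = curve_fit(damped, t, y, p0=[1.0, 0.1, 2.1, 0.0])
    A, delta, omega, phi = popt
    print(f"decay constant = {delta:.4f} 1/s, frequency = {omega:.4f} rad/s")
    # In a semi-empirical viscosity model, delta (and the frequency shift)
    # would be mapped to viscosity through the analytical flow solution.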
Chemical and biological threat-agent detection using electrophoresis-based lab-on-a-chip devices.
Borowsky, Joseph; Collins, Greg E
2007-10-01
The ability to separate complex mixtures of analytes has made capillary electrophoresis (CE) a powerful analytical tool since its modern configuration was first introduced over 25 years ago. The technique found new utility with its application to the microfluidics-based lab-on-a-chip platform (i.e., microchip), which resulted in ever smaller footprints, sample volumes, and analysis times. These features, coupled with the technique's potential for portability, have prompted recent interest in the development of novel analyzers for chemical and biological threat agents. This article comments on three main areas of microchip CE as applied to the separation and detection of threat agents: detection techniques and their corresponding limits of detection, sampling protocol and preparation time, and system portability. These three areas typify the broad utility of lab-on-a-chip for meeting critical, present-day security needs, in addition to illustrating areas wherein advances are necessary.
Shape optimization of disc-type flywheels
NASA Technical Reports Server (NTRS)
Nizza, R. S.
1976-01-01
Techniques were developed to provide an analytical and graphical means for selecting an optimum flywheel system design, based on system requirements, geometric constraints, and weight limitations. The techniques for creating an analytical solution are formulated from energy and structural principles. The resulting flywheel design relates stress and strain distribution, operating speeds, geometry, and specific energy levels. The design techniques yield the lowest-stressed flywheel for any particular application and achieve the highest possible specific energy per unit flywheel weight. Stress and strain contour mapping and sectional profile plotting reflect the structural behavior manifested under rotating conditions. This approach toward flywheel design is applicable to any metal flywheel, and permits the selection of the flywheel design to be based solely on the criteria of the system requirements that must be met, those that must be optimized, and those system parameters that may be permitted to vary.
Immobilization of Fab' fragments onto substrate surfaces: A survey of methods and applications.
Crivianu-Gaita, Victor; Thompson, Michael
2015-08-15
Antibody immobilization onto surfaces has widespread applications in many different fields. It is desirable to bind antibodies such that their fragment-antigen-binding (Fab) units are oriented away from the surface in order to maximize analyte binding. The immobilization of only Fab' fragments yields benefits over the more traditional whole antibody immobilization technique. Bound Fab' fragments display higher surface densities, yielding a higher binding capacity for the analyte. The nucleophilic sulfide of the Fab' fragments allows for specific orientations to be achieved. For biosensors, this indicates a higher sensitivity and lower detection limit for a target analyte. The last thirty years have shown tremendous progress in the immobilization of Fab' fragments onto gold, Si-based, polysaccharide-based, plastic-based, magnetic, and inorganic surfaces. This review will show the current scope of Fab' immobilization techniques available and illustrate methods employed to minimize non-specific adsorption of undesirables. Furthermore, a variety of examples will be given to show the versatility of immobilized Fab' fragments in different applications and future directions of the field will be addressed, especially regarding biosensors. Copyright © 2015 Elsevier B.V. All rights reserved.
2017-08-01
…of metallic additive manufacturing processes and show that combining experimental data with modelling and advanced data processing and analytics methods will accelerate that… …geometries, we develop a methodology that couples experimental data and modelling to convert the scan paths into spatially resolved local thermal histories…
Rangel-Magdaleno, Jose J; Romero-Troncoso, Rene J; Osornio-Rios, Roque A; Cabal-Yepez, Eduardo
2009-01-01
Monitoring of jerk, defined as the first derivative of acceleration, has become a major issue in computerized numerical control (CNC) machines. Several works highlight the necessity of measuring jerk in a reliable way to improve production processes. Nowadays, the computation of jerk is done by finite differences of the acceleration signal, computed at the Nyquist rate, which leads to a low signal-to-quantization-noise ratio (SQNR) in the estimation. The novelty of this work is the development of a smart sensor for jerk monitoring from a standard accelerometer, with improved SQNR. The proposal is based on oversampling techniques that give a better estimation of jerk than that produced by a Nyquist-rate differentiator. Simulations and experimental results are presented to show the overall methodology performance.
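A minimal numerical illustration of the idea: a quantized acceleration signal is differentiated once at the base rate and once after oversampling-and-averaging, and the quantization-induced error of the jerk estimate is compared. The oversampling factor, quantizer step, and test signal are arbitrary assumptions for the demo, not the paper's smart-sensor design.

    import numpy as np

    f0, A = 5.0, 1.0                  # test acceleration a(t) = A*sin(2*pi*f0*t)
    fs, R = 100.0, 64                 # base (Nyquist-ish) rate, oversampling factor
    q = 1e-3                          # quantizer step (assumed)

    t = np.arange(0.0, 2.0, 1.0 / (fs * R))
    a = A * np.sin(2 * np.pi * f0 * t)
    aq = np.round(a / q) * q          # quantized accelerometer samples

    def decimate_first(x):            # keep one sample per block of R
        return x[::R]

    def block_mean(x):                # average each block of R samples
        return x[: len(x) // R * R].reshape(-1, R).mean(axis=1)

    # Jerk by finite differences at the base rate, with and without averaging.
    jerk_nyq = np.diff(decimate_first(aq)) * fs
    jerk_os = np.diff(block_mean(aq)) * fs

    # References built from the unquantized signal isolate quantization error.
    err_nyq = jerk_nyq - np.diff(decimate_first(a)) * fs
    err_os = jerk_os - np.diff(block_mean(a)) * fs

    rms = lambda e: np.sqrt(np.mean(e**2))
    print(f"quantization-induced jerk error, Nyquist:     {rms(err_nyq):.4f}")
    print(f"quantization-induced jerk error, oversampled: {rms(err_os):.4f}")

Averaging R roughly independent quantization errors before differentiating cuts their standard deviation by about the square root of R, which is the SQNR gain the oversampling approach exploits.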
Recent developments in computer vision-based analytical chemistry: A tutorial review.
Capitán-Vallvey, Luis Fermín; López-Ruiz, Nuria; Martínez-Olmos, Antonio; Erenas, Miguel M; Palma, Alberto J
2015-10-29
Chemical analysis based on colour changes recorded with imaging devices is gaining increasing interest. This is due to several significant advantages, such as simplicity of use, and the fact that it is easily combined with portable and widely distributed imaging devices, resulting in user-friendly analytical procedures in many areas that demand out-of-lab applications for in situ and real-time monitoring. This tutorial review covers computer vision-based analytical chemistry (CVAC) procedures and systems from 2005 to 2015, a period of time when 87.5% of the papers on this topic were published. The background regarding colour spaces and recent analytical system architectures of interest in analytical chemistry is presented in the form of a tutorial. Moreover, issues regarding images, such as the influence of illuminants, and the most relevant techniques for processing and analysing digital images are addressed. Some of the most relevant applications are then detailed, highlighting their main characteristics. Finally, our opinion about future perspectives is discussed. Copyright © 2015 Elsevier B.V. All rights reserved.
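As a toy example of the colour-space step such systems rely on, the sketch below converts RGB readings from an imaged colorimetric assay into hue and fits a linear hue-versus-concentration calibration. The RGB values and concentrations are invented numbers; real procedures also correct for illuminant and camera response, as the review discusses.

    import colorsys
    import numpy as np

    # Hypothetical mean RGB (0-255) of a sensing zone at known concentrations.
    conc = np.array([0.0, 2.5, 5.0, 7.5, 10.0])            # e.g. mg/L (assumed)
    rgb = np.array([[250, 235, 120], [240, 210, 110], [235, 185, 105],
                    [230, 160, 100], [225, 135, 95]], dtype=float)

    # Convert each reading to HSV and keep the hue channel (0-1 scale).
    hue = np.array([colorsys.rgb_to_hsv(*(px / 255.0))[0] for px in rgb])

    # Linear calibration hue = m*conc + b by least squares.
    m, b = np.polyfit(conc, hue, 1)
    unknown_hue = colorsys.rgb_to_hsv(232 / 255, 172 / 255, 102 / 255)[0]
    print(f"estimated concentration: {(unknown_hue - b) / m:.2f} mg/L")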
Big data and high-performance analytics in structural health monitoring for bridge management
NASA Astrophysics Data System (ADS)
Alampalli, Sharada; Alampalli, Sandeep; Ettouney, Mohammed
2016-04-01
Structural Health Monitoring (SHM) can be a vital tool for effective bridge management. Combining large data sets from multiple sources to create a data-driven decision-making framework is crucial for the success of SHM. This paper presents a big data analytics framework that combines multiple data sets correlated with functional relatedness to convert data into actionable information that empowers risk-based decision-making. The integrated data environment incorporates near-real-time streams of semi-structured data from remote sensors, historical visual inspection data, and observations from structural analysis models to monitor, assess, and manage risks associated with aging bridge inventories. Accelerated processing of the data sets is made possible by four technologies: cloud computing, relational database processing, support from NoSQL databases, and in-memory analytics. The framework is being validated on a railroad corridor that can be subjected to multiple hazards. The framework enables computation of reliability indices for critical bridge components and individual bridge spans. In addition, the framework includes a risk-based decision-making process that enumerates the costs and consequences of poor bridge performance at span and network levels when rail networks are exposed to natural hazard events such as floods and earthquakes. Big data and high-performance analytics enable insights that assist bridge owners in addressing problems faster.
Chao, Yu-Ying; Jian, Zhi-Xuan; Tu, Yi-Ming; Wang, Hsaio-Wen; Huang, Yeou-Lih
2013-06-07
In this study, we employed a novel on-line method, push/pull perfusion hollow-fiber liquid-phase microextraction (PPP-HF-LPME), to extract 4-tert-butylphenol, 2,4-di-tert-butylphenol, 4-n-nonylphenol, and 4-n-octylphenol from river and tap water samples; we then separated and quantified the extracted analytes through high-performance liquid chromatography (HPLC). Using this approach, we overcame the problem of fluid loss across the porous HF membrane to the donor phase, permitting on-line coupling of HF-LPME to HPLC. In our PPP-HF-LPME system, we used a push/pull syringe pump as the driving source to perfuse the acceptor phase, while employing a heating mantle and an ultrasonic probe to accelerate mass transfer. We optimized the experimental conditions, such as the nature of the HF-supported intermediary phase and the acceptor phase, the composition of the donor and acceptor phases, the sample temperature, and the sonication conditions. Our proposed method provided relative standard deviations of 3.1-6.2%, coefficients of determination (r²) of 0.9989-0.9998, and limits of detection of 0.03-0.2 ng mL⁻¹ for the analytes under the optimized conditions. When we applied this method to analyses of river and tap water samples, our results confirmed that this microextraction technique allows reliable monitoring of alkylphenols in water samples.
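For context, figures of merit like the r² and LOD quoted above are typically derived from a linear calibration curve. A minimal sketch of that standard computation follows, using invented calibration data and the common 3.3·s/slope convention for the detection limit; this is the generic ICH-style recipe, not necessarily the authors' exact protocol.

    import numpy as np

    # Hypothetical calibration: concentration (ng/mL) vs. HPLC peak area.
    c = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0])
    area = np.array([11.8, 24.1, 47.5, 121.0, 239.6, 482.3])

    slope, intercept = np.polyfit(c, area, 1)
    pred = slope * c + intercept

    # Residual standard deviation and coefficient of determination.
    s_res = np.sqrt(np.sum((area - pred) ** 2) / (len(c) - 2))
    r2 = 1.0 - np.sum((area - pred) ** 2) / np.sum((area - area.mean()) ** 2)

    lod = 3.3 * s_res / slope        # ICH-style limit of detection
    loq = 10.0 * s_res / slope       # limit of quantification
    print(f"r^2 = {r2:.5f}, LOD = {lod:.3f} ng/mL, LOQ = {loq:.3f} ng/mL")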
The MIND PALACE: A Multi-Spectral Imaging and Spectroscopy Database for Planetary Science
NASA Astrophysics Data System (ADS)
Eshelman, E.; Doloboff, I.; Hara, E. K.; Uckert, K.; Sapers, H. M.; Abbey, W.; Beegle, L. W.; Bhartia, R.
2017-12-01
The Multi-Instrument Database (MIND) is the web-based home to a well-characterized set of analytical data collected by a suite of deep-UV fluorescence/Raman instruments built at the Jet Propulsion Laboratory (JPL). Samples derive from a growing body of planetary surface analogs, mineral and microbial standards, meteorites, spacecraft materials, and other astrobiologically relevant materials. In addition to deep-UV spectroscopy, datasets stored in MIND are obtained from a variety of analytical techniques spanning multiple spatial and spectral scales, including electron microscopy, optical microscopy, infrared spectroscopy, X-ray fluorescence, and direct fluorescence imaging. Multivariate statistical analysis techniques, primarily Principal Component Analysis (PCA), are used to guide interpretation of these large multi-analytical spectral datasets. Spatial co-referencing of integrated spectral/visual maps is performed using QGIS (geographic information system software). Georeferencing techniques transform individual instrument data maps into a layered, co-registered data cube for analysis across spectral and spatial scales. The body of data in MIND is intended to serve as a permanent, reliable, and expanding database of deep-UV spectroscopy datasets generated by this unique suite of JPL-based instruments on samples of broad planetary science interest.
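A minimal sketch of the PCA step used to explore such stacked spectra, here with scikit-learn on a synthetic placeholder matrix standing in for a (samples × wavelength channels) block exported from the database; the array names, shapes, and band positions are assumptions for illustration.

    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(42)
    # Placeholder: 60 spectra x 512 channels built from two latent components.
    wl = np.linspace(0.0, 1.0, 512)
    band1 = np.exp(-((wl - 0.3) ** 2) / 0.002)        # fake Raman-like band
    band2 = np.exp(-((wl - 0.7) ** 2) / 0.005)
    mix = rng.uniform(0, 1, size=(60, 2))
    spectra = mix @ np.vstack([band1, band2]) + rng.normal(0, 0.01, (60, 512))

    pca = PCA(n_components=5)
    scores = pca.fit_transform(spectra)               # sample coordinates
    print("explained variance ratios:", np.round(pca.explained_variance_ratio_, 3))
    # The loadings (pca.components_) can be plotted against wavelength to see
    # which spectral features drive the separation between sample groups.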
Duester, Lars; Fabricius, Anne-Lena; Jakobtorweihen, Sven; Philippe, Allan; Weigl, Florian; Wimmer, Andreas; Schuster, Michael; Nazar, Muhammad Faizan
2016-11-01
Coacervate-based techniques are intensively used in environmental analytical chemistry to enrich and extract different kinds of analytes. Most methods focus on the total content or the speciation of inorganic and organic substances; size fractionation is less commonly addressed. Within coacervate-based techniques, cloud point extraction (CPE) is characterized by a phase separation of non-ionic surfactants dispersed in an aqueous solution when the respective cloud point temperature is exceeded. In this context, this feature article raises the following question: may CPE in future studies serve as a key tool (i) to enrich and extract nanoparticles (NPs) from complex environmental matrices prior to analysis and (ii) to preserve the colloidal status of unstable environmental samples? With respect to engineered NPs, a significant gap between environmental concentrations and size- and element-specific analytical capabilities is still visible. CPE may support efforts to overcome this "concentration gap" via analyte enrichment. In addition, most environmental colloidal systems are known to be unstable, dynamic, and sensitive to changes in environmental conditions during sampling and sample preparation. This presents a so-far-unsolved "sample preparation dilemma" in the analytical process. The authors are of the opinion that CPE-based methods have the potential to preserve the colloidal status of these unstable samples. Focusing on NPs, this feature article aims to support the discussion on the creation of a convention called the "CPE-extractable fraction" by connecting current knowledge on CPE mechanisms and on available applications, via the visible uncertainties and available modeling approaches, with potential future benefits of CPE protocols.
NASA Astrophysics Data System (ADS)
Chen, Zi-Yu; Chen, Shi; Dan, Jia-Kun; Li, Jian-Feng; Peng, Qi-Xian
2011-10-01
A simple one-dimensional analytical model for electromagnetic emission from an unmagnetized wakefield excited by an intense short-pulse laser in the nonlinear regime has been developed in this paper. The expressions for the spectral and angular distributions of the radiation have been derived. The model suggests that the origin of the radiation can be attributed to the violent sudden acceleration of plasma electrons experiencing the accelerating potential of the laser wakefield. The radiation process could help to provide a qualitative interpretation of existing experimental results, and offers useful information for future laser wakefield experiments.
Surface modifications of AISI 420 stainless steel by low energy Yttrium ions
NASA Astrophysics Data System (ADS)
Nassisi, Vincenzo; Delle Side, Domenico; Turco, Vito; Martina, Luigi
2018-01-01
In this work, we study surface modifications of AISI 420 stainless steel specimens in order to improve their surface properties. Oxidation resistance and surface micro-hardness were analyzed. Using an ion beam delivered by a Laser Ion Source (LIS) coupled to an electrostatic accelerator, we performed implantation of low-energy yttrium ions into the samples. The ions were accelerated through a gap whose ends had a potential difference of 60 kV, placed immediately before the sample surface. The LIS produced high ion fluxes per laser pulse, up to 3×10¹¹ ions/cm², resulting in a total implanted fluence of 7×10¹⁵ ions/cm². The samples were characterized before and after ion implantation using two analytical techniques. They were also thermally treated to investigate the oxide scale. The crystal phases were identified by X-ray diffractometry, while the micro-hardness was assayed using the scratch test and a profilometer. The first analysis was applied to blank, implanted, and thermally treated sample surfaces, while the latter was applied only to blank and implanted sample surfaces. We found a slight increase in the hardness values and an increase in oxidation resistance. The implantation technique we used has the advantage over conventional methods of modifying the samples at low temperature, avoiding stray diffusion of ions into the substrate bulk.
Retail video analytics: an overview and survey
NASA Astrophysics Data System (ADS)
Connell, Jonathan; Fan, Quanfu; Gabbur, Prasad; Haas, Norman; Pankanti, Sharath; Trinh, Hoang
2013-03-01
Today retail video analytics has gone beyond the traditional domain of security and loss prevention by providing retailers insightful business intelligence such as store traffic statistics and queue data. Such information allows for enhanced customer experience, optimized store performance, reduced operational costs, and ultimately higher profitability. This paper gives an overview of various camera-based applications in retail as well as the state-of-the-art computer vision techniques behind them. It also presents some of the promising technical directions for exploration in retail video analytics.
Fabrication and Operation of Paper-Based Analytical Devices
NASA Astrophysics Data System (ADS)
Jiang, Xiao; Fan, Z. Hugh
2016-06-01
This review focuses on the fabrication techniques and operational components of microfluidic paper-based analytical devices (μPADs). Being low-cost, user-friendly, fast, and simple, μPADs have seen explosive growth in the literature in the last decade. Many different materials and technologies have been employed to fabricate μPADs for various applications, including those that employ patterning, the creation of physical boundaries, and three-dimensional structures. In addition to fabrication techniques, flow control and other operational components in μPADs are of great interest. These components enable μPADs to control flow rates, direct flow paths via valves, sequentially deliver reagents automatically, and display test results, all of which will make μPADs more suitable for point-of-care applications.
Lim, Wei Yin; Goh, Boon Tong; Khor, Sook Mei
2017-08-15
Clinicians working in the health-care diagnostic systems of developing countries currently face the challenges of rising costs, increased numbers of patient visits, and limited resources. A significant trend is the use of low-cost substrates to develop microfluidic devices for diagnostic purposes. Various fabrication techniques, materials, and detection methods have been explored to develop these devices. Microfluidic paper-based analytical devices (μPADs) have gained attention for sensing multiplex analytes, confirming diagnostic test results, rapid sample analysis, and reducing the volume of samples and analytical reagents. μPADs, which can provide accurate and reliable direct measurement without sample pretreatment, can reduce patient medical burden and yield rapid test results, aiding physicians in choosing appropriate treatment. The objectives of this review are to provide an overview of the strategies used for developing paper-based sensors with enhanced analytical performances and to discuss the current challenges, limitations, advantages, disadvantages, and future prospects of paper-based microfluidic platforms in clinical diagnostics. μPADs, with validated and justified analytical performances, can potentially improve the quality of life by providing inexpensive, rapid, portable, biodegradable, and reliable diagnostics. Copyright © 2017 Elsevier B.V. All rights reserved.
Engineering fluidic delays in paper-based devices using laser direct-writing.
He, P J W; Katis, I N; Eason, R W; Sones, C L
2015-10-21
We report the use of a new laser-based direct-write technique that allows programmable and timed fluid delivery in channels within a paper substrate, enabling implementation of multi-step analytical assays. The technique is based on laser-induced photo-polymerisation; through adjustment of the laser writing parameters, such as laser power and scan speed, we can control the depth and/or porosity of hydrophobic barriers which, when fabricated in the fluid path, produce a controllable fluid delay. We have patterned these flow-delaying barriers at pre-defined locations in the fluidic channels using either a continuous-wave laser at 405 nm or a pulsed laser operating at 266 nm. Using this delay-patterning protocol we generated flow delays spanning from a few minutes to over half an hour. Since the channels and flow delay barriers can be written via a common laser-writing process, this is a distinct improvement over other methods that require specialist operating environments or custom-designed equipment. This technique can therefore be used for rapid fabrication of paper-based microfluidic devices that can perform single- or multi-step analytical assays.
Observation of the development of secondary features in a Richtmyer–Meshkov instability driven flow
Bernard, Tennille; Truman, C. Randall; Vorobieff, Peter; ...
2014-09-10
Richtmyer–Meshkov instability (RMI) has long been the subject of interest for analytical, numerical, and experimental studies. In comparing results of experiment with numerics, it is important to understand the limitations of experimental techniques inherent in the chosen method(s) of data acquisition. We discuss results of an experiment where a laminar, gravity-driven column of heavy gas is injected into surrounding light gas and accelerated by a planar shock. A popular and well-studied method of flow visualization (using glycol droplet tracers) does not produce a flow pattern that matches the numerical model of the same conditions, while revealing the primary feature of the flow developing after shock acceleration: the pair of counter-rotating vortex columns. However, visualization using a fluorescent gaseous tracer confirms the presence of features suggested by the numerics; in particular, a central spike formed due to shock focusing in the heavy-gas column. Furthermore, the streamwise growth rate of the spike appears to exhibit the same scaling with Mach number as that of the counter-rotating vortex pair (CRVP).
Ishibashi, Midori
2015-01-01
Cost, speed, and quality are the three important factors recently indicated by the Ministry of Health, Labour and Welfare (MHLW) for the purpose of accelerating clinical studies. Against this background, the importance of laboratory tests is increasing, especially in the evaluation of clinical study participants' entry and safety, and of drug efficacy. To assure the quality of laboratory tests, providing high-quality laboratory testing is mandatory. For adequate quality assurance in laboratory tests, quality control in the three fields of pre-analytical, analytical, and post-analytical processes is extremely important. There are, however, no detailed written requirements concerning specimen collection, handling, preparation, storage, and shipping. Most laboratory tests for clinical studies are performed onsite in a local laboratory; however, some laboratory tests are done in offsite central laboratories after specimen shipping. Individual and inter-individual variations are well-known factors affecting laboratory tests. Besides these factors, standardizing specimen collection, handling, preparation, storage, and shipping may improve and maintain the high quality of clinical studies in general. Furthermore, the analytical method, units, and reference intervals are also important factors. It is concluded that, to overcome the problems derived from pre-analytical processes, it is necessary to standardize specimen handling in a broad sense.
Artemyev, A V; Neishtadt, A I; Zelenyi, L M; Vainchtein, D L
2010-12-01
We present an analytical and numerical study of the surfatron acceleration of nonrelativistic charged particles by electromagnetic waves. The acceleration is caused by capture of particles into resonance with one of the waves. We investigate capture for systems with one or two waves and provide conditions under which the obtained results can be applied to systems with more than two waves. In the case of a single wave, the once captured particles never leave the resonance and their velocity grows linearly with time. However, if there are two waves in the system, the upper bound of the energy gain may exist and we find the analytical value of that bound. We discuss several generalizations including the relativistic limit, different wave amplitudes, and a wide range of the waves' wavenumbers. The obtained results are used for qualitative description of some phenomena observed in the Earth's magnetosphere. © 2010 American Institute of Physics.
NASA Astrophysics Data System (ADS)
Haghi, Hosein; Baumgardt, Holger; Kroupa, Pavel; Grebel, Eva K.; Hilker, Michael; Jordi, Katrin
2009-05-01
We investigate the mean velocity dispersion and the velocity dispersion profile of stellar systems in modified Newtonian dynamics (MOND), using the N-body code N-MODY, a particle-mesh-based code with a numerical MOND potential solver developed by Ciotti, Londrillo & Nipoti. We have calculated mean velocity dispersions for stellar systems following Plummer density distributions with masses in the range of 10⁴ to 10⁹ M☉, which are either isolated or immersed in an external field. Our integrations reproduce previous analytic estimates for stellar velocities in systems in the deep MOND regime (a_i, a_e << a_0), where the motion of stars is dominated either by internal accelerations (a_i >> a_e) or by constant external accelerations (a_e >> a_i). In addition, we derive for the first time analytic formulae for the line-of-sight velocity dispersion in the intermediate regime (a_i ~ a_e ~ a_0). This allows for a much-improved comparison of MOND with observed velocity dispersions of stellar systems. We finally derive the velocity dispersion of the globular cluster Pal 14 as one of the outer Milky Way halo globular clusters that have recently been proposed as a differentiator between Newtonian and MONDian dynamics.
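For orientation, the sketch below evaluates the widely quoted deep-MOND relation for an isolated, pressure-supported system, σ⁴ ≈ (4/81) G M a₀, across the quoted mass range; this is the classic limiting formula for the isolated regime, not the paper's new intermediate-regime result.

    import numpy as np

    G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
    a0 = 1.2e-10           # MOND acceleration scale, m/s^2
    MSUN = 1.989e30        # solar mass, kg

    for logM in range(4, 10):
        M = 10.0 ** logM * MSUN
        sigma = ((4.0 / 81.0) * G * M * a0) ** 0.25   # deep-MOND dispersion
        print(f"M = 1e{logM} Msun  ->  sigma ~ {sigma / 1e3:.2f} km/s")

For a 10⁴ M☉ cluster this gives a dispersion of order 1 km/s, which is why outer-halo globular clusters such as Pal 14 are interesting discriminators: the Newtonian prediction at these low internal accelerations is markedly smaller.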
CERN-derived analysis of lunar radiation backgrounds
NASA Technical Reports Server (NTRS)
Wilson, Thomas L.; Svoboda, Robert
1993-01-01
The Moon produces radiation which background-limits scientific experiments there. Early analyses of these backgrounds have either failed to take into consideration the effect of charm in particle physics (because they pre-dated its discovery), or have used branching ratios which are no longer strictly valid (due to new accelerator data). We are presently investigating an analytical program for deriving muon and neutrino spectra generated by the Moon, converting an existing CERN computer program known as GEANT which does the same for the Earth. In so doing, this will (1) determine an accurate prompt neutrino spectrum produced by the lunar surface; (2) determine the lunar subsurface particle flux; (3) determine the consequence of charm production physics upon the lunar background radiation environment; and (4) provide an analytical tool for the NASA astrophysics community with which to begin an assessment of the Moon as a scientific laboratory versus its particle radiation environment. This will be done on a recurring basis with the latest experimental results of the particle data groups at Earth-based high-energy accelerators, in particular with the latest branching ratios for charmed meson decay. This will be accomplished for the first time as a full 3-dimensional simulation.
Silina, Yuliya E; Volmer, Dietrich A
2013-12-07
Analytical applications often require rapid measurement of compounds from complex sample mixtures. High-speed mass spectrometry approaches frequently utilize techniques based on direct ionization of the sample by laser irradiation, mostly by means of matrix-assisted laser desorption/ionization (MALDI). Compounds of low molecular weight are difficult to analyze by MALDI, however, because of severe interferences in the low m/z range from the organic matrix used for desorption/ionization. In recent years, surface-assisted laser desorption/ionization (SALDI) techniques have shown promise for small molecule analysis, due to the unique properties of nanostructured surfaces, in particular, the lack of a chemical background in the low m/z range and enhanced production of analyte ions by SALDI. This short review article presents a summary of the most promising recent developments in SALDI materials for MS analysis of low molecular weight analytes, with emphasis on nanostructured materials based on metals and semiconductors.
Heat transfer with hockey-stick steam generator. [LMFBR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moody, E; Gabler, M J
1977-11-01
The hockey-stick modular design concept is a good answer to future needs for reliable, economic LMFBR steam generators. The concept was successfully demonstrated in the 30 MWt MSG test unit; scaled-up versions are currently in fabrication for CRBRP usage, and further scaling has been accomplished for PLBR applications. Design and performance characteristics are presented for the three generations of hockey-stick steam generators. The key features of the design are presented based on extensive analytical effort backed up by extensive ancillary test data. The bases for performance evaluations, and the actual evaluations, are presented with emphasis on the CRBRP design. The design effort on these units has resulted in the development of analytical techniques that are directly applicable to steam generators for any LMFBR application. In conclusion, the hockey-stick steam generator concept has been proven to perform both thermally and hydraulically as predicted. The heat transfer characteristics are well defined, and proven analytical techniques are available, as are personnel experienced in their use.
NASA Astrophysics Data System (ADS)
Parrado, G.; Cañón, Y.; Peña, M.; Sierra, O.; Porras, A.; Alonso, D.; Herrera, D. C.; Orozco, J.
2016-07-01
The Neutron Activation Analysis (NAA) laboratory at the Colombian Geological Survey has developed a technique for multi-elemental analysis of soil and plant matrices, based on Instrumental Neutron Activation Analysis (INAA) using the comparator method. In order to evaluate the analytical capabilities of the technique, the laboratory has been participating in inter-comparison tests organized by Wepal (Wageningen Evaluating Programs for Analytical Laboratories). In this work, the experimental procedure and results for the multi-elemental analysis of four soil and four plant samples during participation in the first round of 2015 of the Wepal proficiency test are presented. Only elements with radioactive isotopes of medium and long half-lives have been evaluated: 15 elements for soils (As, Ce, Co, Cr, Cs, Fe, K, La, Na, Rb, Sb, Sc, Th, U and Zn) and 7 elements for plants (Br, Co, Cr, Fe, K, Na and Zn). The performance assessment by Wepal, based on Z-score distributions, showed that most results obtained |Z-score| ≤ 3.
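The Z-score convention used in such proficiency tests is straightforward to reproduce: z = (x_lab − x_assigned)/σ_p, with |z| ≤ 2 usually rated satisfactory, 2 < |z| < 3 questionable, and |z| ≥ 3 unsatisfactory (the ISO 13528-style convention; Wepal's exact rating bands may differ). The values below are invented for illustration.

    def z_score(x_lab: float, x_assigned: float, sigma_p: float) -> float:
        """Proficiency-test Z-score for one element."""
        return (x_lab - x_assigned) / sigma_p

    def rating(z: float) -> str:
        az = abs(z)
        if az <= 2:
            return "satisfactory"
        return "questionable" if az < 3 else "unsatisfactory"

    # Hypothetical results: element -> (lab value, assigned value, sigma_p), mg/kg
    results = {"As": (12.4, 11.8, 0.9), "Co": (8.1, 9.5, 0.6), "Zn": (54.0, 51.0, 4.0)}
    for elem, (x, mu, sp) in results.items():
        z = z_score(x, mu, sp)
        print(f"{elem}: z = {z:+.2f} ({rating(z)})")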
Recent Electrochemical and Optical Sensors in Flow-Based Analysis
Chailapakul, Orawon; Ngamukot, Passapol; Yoosamran, Alongkorn; Siangproh, Weena; Wangfuengkanagul, Nattakarn
2006-01-01
Some recent analytical sensors based on electrochemical and optical detection coupled with different flow techniques are surveyed in this overview. Brief descriptions of the fundamental concepts and applications of each flow technique, such as flow injection analysis (FIA), sequential injection analysis (SIA), all injection analysis (AIA), batch injection analysis (BIA), multicommutated FIA (MCFIA), multisyringe FIA (MSFIA), and multipumped FIA (MPFIA), are given.
Asadollahi, Aziz; Khazanovich, Lev
2018-04-11
The emergence of ultrasonic dry point contact (DPC) transducers that emit horizontal shear waves has enabled efficient collection of high-quality data in the context of a nondestructive evaluation of concrete structures. This offers an opportunity to improve the quality of evaluation by adapting advanced imaging techniques. Reverse time migration (RTM) is a simulation-based reconstruction technique that offers advantages over conventional methods, such as the synthetic aperture focusing technique. RTM is capable of imaging boundaries and interfaces with steep slopes and the bottom boundaries of inclusions and defects. However, this imaging technique requires a massive amount of memory and its computation cost is high. In this study, both bottlenecks of the RTM are resolved when shear transducers are used for data acquisition. An analytical approach was developed to obtain the source and receiver wavefields needed for imaging using reverse time migration. It is shown that the proposed analytical approach not only eliminates the high memory demand, but also drastically reduces the computation time from days to minutes. Copyright © 2018 Elsevier B.V. All rights reserved.
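RTM's core step can be summarized independently of the paper's analytical shortcut: propagate a source wavefield forward in time, propagate the recorded data backward in time, and apply a zero-lag cross-correlation imaging condition, I(x) = Σ_t S(x,t)·R(x,t). The sketch below shows only that final contraction on placeholder arrays; generating the wavefields (the expensive part that the authors replace with an analytical solution for shear transducers) is omitted.

    import numpy as np

    nt, nx, nz = 400, 128, 96
    rng = np.random.default_rng(7)

    # Placeholders for the forward-propagated source wavefield S(t, x, z) and
    # the time-reversed receiver wavefield R(t, x, z); a real implementation
    # would fill these with finite-difference (or, per the paper, analytical)
    # shear-wave solutions.
    S = rng.normal(size=(nt, nx, nz)).astype(np.float32)
    Rw = rng.normal(size=(nt, nx, nz)).astype(np.float32)

    # Zero-lag cross-correlation imaging condition, summed over time.
    image = np.einsum("txz,txz->xz", S, Rw)

    # Optional source-illumination normalization to balance amplitudes.
    illum = np.einsum("txz,txz->xz", S, S) + 1e-12
    image_norm = image / illum
    print(image_norm.shape)     # (nx, nz) reflectivity-like map

Holding full (nt, nx, nz) wavefields in memory is exactly the bottleneck the abstract mentions; an analytical wavefield expression lets each time slice be evaluated on the fly instead of stored.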
Rapid acceleration of protons upstream of earthward propagating dipolarization fronts
Ukhorskiy, AY; Sitnov, MI; Merkin, VG; Artemyev, AV
2013-01-01
Transport and acceleration of ions in the magnetotail largely occurs in the form of discrete impulsive events associated with a steep increase of the tail magnetic field normal to the neutral plane (B_z), which are referred to as dipolarization fronts. The goal of this paper is to investigate how protons initially located upstream of earthward moving fronts are accelerated at their encounter. According to our analytical analysis and simplified two-dimensional test-particle simulations of equatorially mirroring particles, there are two regimes of proton acceleration: trapping and quasi-trapping, which are realized depending on whether the front is preceded by a negative depletion in B_z. We then use three-dimensional test-particle simulations to investigate how these acceleration processes operate in a realistic magnetotail geometry. For this purpose we construct an analytical model of the front which is superimposed onto the ambient field of the magnetotail. According to our numerical simulations, both trapping and quasi-trapping can produce rapid acceleration of protons by more than an order of magnitude. In the case of trapping, the acceleration levels depend on the amount of time particles stay in phase with the front, which is controlled by the magnetic field curvature ahead of the front and the front width. Quasi-trapping does not cause particle scattering out of the equatorial plane. Energization levels in this case are limited by the number of encounters particles have with the front before they get magnetized behind it. PMID:26167430
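Test-particle studies of this kind integrate the Lorentz force for protons in prescribed E and B fields; the standard integrator is the Boris pusher, sketched below with a uniform-field sanity check. The field values here are placeholders of magnetotail-like magnitude, not the authors' analytical dipolarization-front model.

    import numpy as np

    QP, MP = 1.602e-19, 1.673e-27        # proton charge (C) and mass (kg)

    def boris_step(x, v, E, B, dt, q=QP, m=MP):
        """One step of the (nonrelativistic) Boris algorithm."""
        t = (q * dt / (2.0 * m)) * B
        s = 2.0 * t / (1.0 + np.dot(t, t))
        v_minus = v + (q * dt / (2.0 * m)) * E   # half electric kick
        v_prime = v_minus + np.cross(v_minus, t) # magnetic rotation
        v_plus = v_minus + np.cross(v_prime, s)
        v_new = v_plus + (q * dt / (2.0 * m)) * E  # second half kick
        return x + v_new * dt, v_new

    # Sanity check: gyration in a uniform B_z with no E field.
    B = np.array([0.0, 0.0, 20e-9])      # 20 nT, magnetotail-like magnitude
    E = np.zeros(3)
    x = np.zeros(3)
    v = np.array([100e3, 0.0, 0.0])      # 100 km/s proton

    dt = 0.05 / (QP * np.linalg.norm(B) / MP)    # small fraction of a gyroperiod
    for _ in range(2000):
        x, v = boris_step(x, v, E, B, dt)
    print(f"speed drift after 2000 steps: {abs(np.linalg.norm(v) - 100e3):.3e} m/s")

The Boris rotation conserves kinetic energy exactly in a pure magnetic field, which is why it is preferred over naive Runge-Kutta stepping for long trapping and quasi-trapping integrations.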
Systematic comparison of static and dynamic headspace sampling techniques for gas chromatography.
Kremser, Andreas; Jochmann, Maik A; Schmidt, Torsten C
2016-09-01
Six automated, headspace-based sample preparation techniques were used to extract volatile analytes from water with the goal of establishing a systematic comparison between commonly available instrumental alternatives. To that end, these six techniques were used in conjunction with the same gas chromatography instrument for analysis of a common set of volatile organic compound (VOC) analytes. The methods were thereby divided into three classes: static sampling (by syringe or loop), static enrichment (SPME and PAL SPME Arrow), and dynamic enrichment (ITEX and trap sampling). For PAL SPME Arrow, different sorption phase materials were also included in the evaluation. To enable an effective comparison, method detection limits (MDLs), relative standard deviations (RSDs), and extraction yields were determined and are discussed for all techniques. While static sampling techniques exhibited extraction yields (approx. 10-20%) sufficient for reliable use down to approx. 100 ng L⁻¹, enrichment techniques displayed extraction yields of up to 80%, resulting in MDLs down to the picogram-per-liter range. RSDs for all techniques were below 27%. The choice among the different instrumental modes of operation (the aforementioned classes) was thereby the most influential parameter in terms of extraction yields and MDLs. Individual methods within each class showed smaller deviations, and the smallest influences were observed when evaluating different sorption phase materials for the individual enrichment techniques. The option of selecting specialized sorption phase materials may, however, be more important when analyzing analytes with different properties, such as high polarity or the capability of specific molecular interactions. Graphical Abstract: PAL SPME Arrow during the extraction of volatile analytes from the headspace of an aqueous sample.
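The MDL figure of merit quoted above is conventionally computed from replicate low-level spikes as MDL = t(n−1, 0.99)·s, the EPA-style single-laboratory procedure; a minimal sketch with invented replicate data follows (the authors' exact statistics may differ).

    import numpy as np
    from scipy import stats

    # Hypothetical: seven replicate analyses of a low-level spike, ng/L.
    replicates = np.array([9.1, 10.4, 9.8, 10.9, 9.5, 10.1, 9.7])

    n = replicates.size
    s = replicates.std(ddof=1)             # sample standard deviation
    t99 = stats.t.ppf(0.99, df=n - 1)      # one-sided 99% Student's t
    mdl = t99 * s
    print(f"s = {s:.3f} ng/L, t = {t99:.3f}, MDL = {mdl:.3f} ng/L")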
A new experimental method for the accelerated characterization of composite materials
NASA Technical Reports Server (NTRS)
Yeow, Y. T.; Morris, D. H.; Brinson, H. F.
1978-01-01
The use of composite materials for a variety of practical structural applications is presented and the need for an accelerated characterization procedure is assessed. A new experimental and analytical method is presented which allows the prediction of long term properties from short term tests. Some preliminary experimental results are presented.
Comparative study of active plasma lenses in high-quality electron accelerator transport lines
NASA Astrophysics Data System (ADS)
van Tilborg, J.; Barber, S. K.; Benedetti, C.; Schroeder, C. B.; Isono, F.; Tsai, H.-E.; Geddes, C. G. R.; Leemans, W. P.
2018-05-01
Electrically discharged active plasma lenses (APLs) are actively pursued in compact high-brightness plasma-based accelerators due to their high-gradient, tunable, and radially symmetric focusing properties. In this manuscript, the APL is experimentally compared with a conventional quadrupole triplet, highlighting the favorable reduction in the energy dependence (chromaticity) in the transport line. Through transport simulations, it is explored how the non-uniform radial discharge current distribution leads to beam-integrated emittance degradation and a charge density reduction at focus. However, positioning an aperture at the APL entrance will significantly reduce emittance degradation without additional loss of charge in the high-quality core of the beam. An analytical model is presented that estimates the emittance degradation from a short beam driving a longitudinally varying wakefield in the APL. Optimizing laser plasma accelerator operation is discussed where emittance degradation from the non-uniform discharge current (favoring small beams inside the APL) and wakefield effects (favoring larger beam sizes) is minimized.
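The chromaticity at issue can be seen from the textbook uniform-current-density APL model: inside a discharge of current I and radius R the focusing gradient is g = μ₀I/(2πR²), and the thin-lens focal length f = Bρ/(gL) grows linearly with beam momentum. The numbers below are illustrative assumptions, not the experiment's parameters.

    import numpy as np

    MU0 = 4e-7 * np.pi            # vacuum permeability, T*m/A
    E0 = 0.511e-3                 # electron rest energy, GeV

    I, R, L = 500.0, 500e-6, 0.015   # current (A), lens radius (m), length (m)
    g = MU0 * I / (2.0 * np.pi * R**2)           # focusing gradient, T/m

    for E_GeV in (0.090, 0.100, 0.110):          # +-10% energy spread
        p = np.sqrt(E_GeV**2 - E0**2)            # momentum, GeV/c
        brho = p / 0.2998                        # magnetic rigidity, T*m
        f = brho / (g * L)                       # thin-lens focal length, m
        print(f"E = {E_GeV*1e3:5.1f} MeV: g = {g:.0f} T/m, f = {f*100:.1f} cm")

With these toy numbers the gradient is 400 T/m and the focal length shifts by roughly ±10% across the energy band, which is the single-lens chromaticity that the quadrupole triplet comparison magnifies and the APL reduces.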
Analytic model for the dynamic Z-pinch
DOE Office of Scientific and Technical Information (OSTI.GOV)
Piriz, A. R., E-mail: roberto.piriz@uclm.es; Sun, Y. B.; Tahir, N. A.
2015-06-15
A model is presented for describing the cylindrical implosion of a shock wave driven by an accelerated piston. It is based on the identification of the acceleration of the shocked mass with the acceleration of the piston. The model yields the separate paths of the piston and the shock. In addition, by considering that the shocked region evolves isentropically, the approximate profiles of all the quantities in the shocked region are obtained. The application to the dynamic Z-pinch is presented and the results are compared with the well-known snowplow and slug models, which are also derived as limiting cases of the present model. The snowplow model is seen to yield a trajectory in between those of the shock and the piston. Instead, the neglect of inertial effects in the slug model is seen to produce an implosion that is too fast, and the assumption of pressure uniformity is shown to lead to an unphysical instantaneous stopping of the piston when the shock arrives at the axis.
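For reference, the textbook snowplow limit that the paper compares against can be written, per unit length, for a shell imploding from initial radius R₀ into a fill of density ρ₀ under a drive current I(t) (this is the standard form, not the paper's generalized model):

    \frac{d}{dt}\!\left[\rho_0\,\pi\left(R_0^{2}-r^{2}\right)\frac{dr}{dt}\right]
      = -\,\frac{\mu_0\, I^{2}(t)}{4\pi r}

The left side is the momentum of the swept-up mass, and the right side is the magnetic pressure force, −(B²/2μ₀)·2πr with B = μ₀I/(2πr), evaluated at the current sheath.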
Chromatic energy filter and characterization of laser-accelerated proton beams for particle therapy
NASA Astrophysics Data System (ADS)
Hofmann, Ingo; Meyer-ter-Vehn, Jürgen; Yan, Xueqing; Al-Omari, Husam
2012-07-01
The application of laser-accelerated protons or ions for particle therapy has to cope with relatively large energy and angular spreads as well as possibly significant random fluctuations. We suggest a method for combined focusing and energy selection, which is an effective alternative to the commonly considered dispersive energy selection by magnetic dipoles. Our method is based on the chromatic effect of a magnetic solenoid (or any other energy-dependent focusing device) in combination with an aperture to select a certain energy width defined by the aperture radius. It is applied to an initial 6D phase space distribution of protons following the simulation output from a Radiation Pressure Acceleration model. Analytical formulae for the selection aperture and chromatic emittance are confirmed by simulation results using the TRACEWIN code. The energy selection is supported by properly placed scattering targets to remove the imprint of the chromatic effect on the beam and to enable well-controlled and shot-to-shot-reproducible energy and transverse density profiles.
Opportunity and Challenges for Migrating Big Data Analytics in Cloud
NASA Astrophysics Data System (ADS)
Amitkumar Manekar, S.; Pradeepini, G., Dr.
2017-08-01
Big data analytics is a major topic nowadays. As data generation capabilities grow in scale and demand, data acquisition and storage become crucial issues. Cloud storage is a widely used platform, and the technology will become crucial to executives handling data powered by analytics. The trend towards "big data as a service" is now discussed everywhere. On one hand, cloud-based big data analytics directly tackles ongoing issues of scale, speed, and cost; on the other, researchers are still working to solve security and other real-time problems of big data migration to cloud-based platforms. This article is specially focused on finding possible ways to migrate big data to the cloud. Technology that supports coherent data migration, and the possibility of performing big data analytics on cloud platforms, is in demand for a new era of growth. This article also gives information about available technologies and techniques for the migration of big data to the cloud.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spentzouris, Linda
The objective of the proposal was to develop graduate student training in materials and engineering research relevant to the development of particle accelerators. Many components used in today's accelerators or storage rings are at the limit of performance. The path forward in many cases requires the development of new materials or fabrication techniques, or a novel engineering approach. Often, accelerator-based laboratories find it difficult to attract top-level engineers or materials experts with the motivation to work on these problems. The three years of funding provided by this grant were used to support development of accelerator components through a multidisciplinary approach that cut across the disciplinary boundaries of accelerator physics, materials science, and surface chemistry. The following results were achieved: (1) significant scientific results on fabrication of novel photocathodes, (2) application of surface science and superconducting materials expertise to accelerator problems through faculty involvement, (3) development of instrumentation for fabrication and characterization of materials for accelerator components, and (4) student involvement with problems at the interface of materials science and accelerator physics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kroon, John J.; Becker, Peter A.; Dermer, Charles D.
The γ-ray flares from the Crab Nebula observed by AGILE and Fermi-LAT, reaching GeV energies and lasting several days, challenge the standard models for particle acceleration in pulsar-wind nebulae, because the radiating electrons have energies exceeding the classical radiation-reaction limit for synchrotron emission. Previous modeling has suggested that the synchrotron limit can be exceeded if the electrons experience electrostatic acceleration, but the resulting spectra do not agree very well with the data. As a result, there are still some important unanswered questions about the detailed particle acceleration and emission processes occurring during the flares. We revisit the problem using a new analytical approach based on an electron transport equation that includes terms describing electrostatic acceleration, stochastic wave-particle acceleration, shock acceleration, synchrotron losses, and particle escape. An exact solution is obtained for the electron distribution, which is used to compute the associated γ-ray synchrotron spectrum. We find that in our model the γ-ray flares are mainly powered by electrostatic acceleration, but the contributions from stochastic and shock acceleration play an important role in producing the observed spectral shapes. Our model can reproduce the spectra of all the Fermi-LAT and AGILE flares from the Crab Nebula, using magnetic field strengths in agreement with the multi-wavelength observational constraints. We also compute the spectrum and duration of the synchrotron afterglow created by the accelerated electrons, after they escape into the region on the downstream side of the pulsar-wind termination shock. The afterglow is expected to fade over a maximum period of about three weeks after the γ-ray flare.
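A schematic form of such a transport equation (a Fokker-Planck equation in momentum space; the paper's exact coefficients and sign conventions may differ) collects the five named processes as

    \frac{\partial f}{\partial t}
      = \frac{1}{p^{2}}\frac{\partial}{\partial p}\!\left[p^{2} D(p)\,
        \frac{\partial f}{\partial p}\right]
      - \frac{1}{p^{2}}\frac{\partial}{\partial p}\!\left[p^{2}\left(
        \dot p_{\rm acc} - \dot p_{\rm syn}\right) f\right]
      - \frac{f}{t_{\rm esc}} + S(p)

where ṗ_acc = a_E + a_sh p gathers electrostatic (momentum-independent) and first-order shock (proportional to p) gains, ṗ_syn ∝ p² is the synchrotron loss rate, D(p) ∝ p² describes hard-sphere stochastic wave-particle acceleration, t_esc is the escape timescale, and S(p) injects particles.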
NASA Astrophysics Data System (ADS)
Safouhi, Hassan; Hoggan, Philip
2003-01-01
This review on molecular integrals for large electronic systems (MILES) places the problem of analytical integration over exponential-type orbitals (ETOs) in a historical context. After reference to the pioneering work, particularly by Barnett, Shavitt and Yoshimine, it focuses on recent progress towards rapid and accurate analytic solutions of MILES over ETOs. Software such as the hydrogen-like wavefunction package Alchemy by Yoshimine and collaborators is described. The review focuses on convergence acceleration of these highly oscillatory integrals and in particular highlights suitable nonlinear transformations. Work by Levin and Sidi is described and applied to MILES. A step-by-step description of progress in the use of nonlinear transformation methods to obtain efficient codes is provided. The recent approach developed by Safouhi is also presented. The current state of the art in this field is summarized to show that ab initio analytical work over ETOs is now a viable option.
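The flavor of these convergence-acceleration methods can be conveyed with a simpler relative of the Levin/Sidi transformations: Wynn's ε-algorithm applied to the partial sums of the slowly convergent alternating series Σ(−1)^(k+1)/k = ln 2. This is a generic demonstration of sequence acceleration, not the actual ETO integral transforms.

    import math

    def wynn_epsilon(S):
        """Accelerate a sequence of partial sums with Wynn's epsilon algorithm."""
        n = len(S)
        eps = [[0.0] * (n + 2) for _ in range(n + 1)]
        for i in range(n):
            eps[i][1] = S[i]                  # column 1 holds the raw sums
        for c in range(2, n + 1):
            for i in range(n - c + 1):
                denom = eps[i + 1][c - 1] - eps[i][c - 1]
                eps[i][c] = eps[i + 1][c - 2] + 1.0 / denom
        m = n if n % 2 == 1 else n - 1        # best estimate: last odd column
        return eps[0][m]

    # Partial sums of the alternating harmonic series (converges to ln 2).
    partial, s = [], 0.0
    for k in range(1, 12):
        s += (-1) ** (k + 1) / k
        partial.append(s)

    print(f"raw partial sum  : {partial[-1]:.10f}")
    print(f"Wynn-accelerated : {wynn_epsilon(partial):.10f}")
    print(f"ln 2             : {math.log(2):.10f}")

Eleven raw terms agree with ln 2 to about two digits, while the accelerated value is accurate to roughly eight, illustrating why such transformations make analytic ETO integration practical.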
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bozkaya, Uğur, E-mail: ugur.bozkaya@hacettepe.edu.tr; Department of Chemistry, Atatürk University, Erzurum 25240; Sherrill, C. David
2016-05-07
An efficient implementation is presented for analytic gradients of the coupled-cluster singles and doubles (CCSD) method with the density-fitting approximation, denoted DF-CCSD. Frozen-core terms are also included. When applied to a set of alkanes, the DF-CCSD analytic gradients are significantly accelerated compared to conventional CCSD for larger molecules. The efficiency of our DF-CCSD algorithm arises from the acceleration of several different terms, which are designated as the "gradient terms": computation of particle density matrices (PDMs), the generalized Fock matrix (GFM), solution of the Z-vector equation, formation of the relaxed PDMs and GFM, back-transformation of the PDMs and GFM to the atomic orbital (AO) basis, and evaluation of gradients in the AO basis. For the largest member of the alkane set (C₁₀H₂₂), the computational times for the gradient terms (with the cc-pVTZ basis set) are 2582.6 (CCSD) and 310.7 (DF-CCSD) min, respectively, a speedup of more than 8-fold. For gradient-related terms, the DF approach avoids the use of four-index electron repulsion integrals. Based on our previous study [U. Bozkaya, J. Chem. Phys. 141, 124108 (2014)], our formalism completely avoids construction or storage of the 4-index two-particle density matrix (TPDM), using instead 2- and 3-index TPDMs. The DF approach introduces negligible errors for equilibrium bond lengths and harmonic vibrational frequencies.
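The density-fitting factorization that underlies this speedup replaces four-index electron-repulsion integrals with three-index quantities, (pq|rs) ≈ Σ_Q B_pq^Q B_rs^Q with B = (pq|P)[J^(-1/2)]_PQ. A NumPy sketch of that contraction on random placeholder tensors (not a real integral engine) follows.

    import numpy as np

    n, naux = 20, 60                    # orbital and auxiliary basis sizes (toy)
    rng = np.random.default_rng(3)

    # Placeholder three-index integrals (pq|P) and a symmetric positive-definite
    # Coulomb metric J_PQ = (P|Q); real values would come from an integral code.
    pqP = rng.normal(size=(n, n, naux))
    pqP = 0.5 * (pqP + pqP.transpose(1, 0, 2))      # enforce (pq|P) = (qp|P)
    A = rng.normal(size=(naux, naux))
    J = A @ A.T + naux * np.eye(naux)

    # J^{-1/2} via symmetric eigendecomposition.
    w, U = np.linalg.eigh(J)
    J_inv_half = (U / np.sqrt(w)) @ U.T

    # B_pq^Q = sum_P (pq|P) [J^{-1/2}]_PQ  -- the only large tensor stored.
    B = np.einsum("pqP,PQ->pqQ", pqP, J_inv_half)

    # Reassemble an approximate four-index block only when actually needed.
    eri = np.einsum("pqQ,rsQ->pqrs", B, B)
    print(B.nbytes, "bytes for B vs", eri.nbytes, "bytes for (pq|rs)")

Even at this toy size, the three-index tensor is several times smaller than the four-index block; at realistic basis sizes the storage and I/O savings are what make the gradient terms so much faster.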
Closed-loop, pilot/vehicle analysis of the approach and landing task
NASA Technical Reports Server (NTRS)
Schmidt, D. K.; Anderson, M. R.
1985-01-01
Optimal-control-theoretic modeling and frequency-domain analysis together constitute the methodology proposed for analytically evaluating the handling qualities of higher-order manually controlled dynamic systems. Fundamental to the methodology is evaluating the interplay between pilot workload and closed-loop pilot/vehicle performance and stability robustness. The model-based metric for pilot workload is the required pilot phase compensation. Pilot/vehicle performance and loop stability are then evaluated using frequency-domain techniques. When these techniques were applied to flight-test data for thirty-two highly augmented fighter configurations, strong correlation was obtained between the analytical and experimental results.
Qualitative evaluation of water displacement in simulated analytical breaststroke movements.
Martens, Jonas; Daly, Daniel
2012-05-01
One purpose of evaluating a swimmer is to establish the individualized optimal technique. A swimmer's particular body structure and the resulting movement pattern will cause the surrounding water to react in differing ways. Consequently, an assessment method based on flow visualization was developed, complementary to movement analysis and body structure quantification. A fluorescent dye was used to make the water displaced by the body visible on video. To examine the hypothesis on the propulsive mechanisms applied in breaststroke swimming, we analyzed the movements of the surrounding water during four analytical breaststroke movements using the flow visualization technique.
A comparative review of optical surface contamination assessment techniques
NASA Technical Reports Server (NTRS)
Heaney, James B.
1987-01-01
This paper reviews the relative sensitivities and practicalities of the common surface analytical methods that are used to detect and identify unwanted adsorbates on optical surfaces. The compared methods include visual inspection, simple reflectometry and transmissometry, ellipsometry, infrared absorption and attenuated total reflectance spectroscopy (ATR), Auger electron spectroscopy (AES), scanning electron microscopy (SEM), secondary ion mass spectrometry (SIMS), and mass accretion determined by quartz crystal microbalance (QCM). The discussion is biased toward those methods that apply optical thin film analytical techniques to spacecraft optical contamination problems. Examples are cited from both ground-based and in-orbit experiments.
In this study, a new analytical technique was developed for the identification and quantification of multi-functional compounds containing simultaneously at least one hydroxyl or one carboxylic group, or both. This technique is based on derivatizing first the carboxylic group(s) ...
Manual Solid-Phase Peptide Synthesis of Metallocene-Peptide Bioconjugates
ERIC Educational Resources Information Center
Kirin, Srecko I.; Noor, Fozia; Metzler-Nolte, Nils; Mier, Walter
2007-01-01
A simple and relatively inexpensive procedure for preparing a biologically active peptide using solid phase peptide synthesis (SPPS) is described. Fourth-year undergraduate students have gained firsthand experience from the solid-phase synthesis techniques and they have become familiar with modern analytical techniques based on the particular…
Screen Space Ambient Occlusion Based Multiple Importance Sampling for Real-Time Rendering
NASA Astrophysics Data System (ADS)
Zerari, Abd El Mouméne; Babahenini, Mohamed Chaouki
2018-03-01
We propose a new approximation technique for accelerating the Global Illumination algorithm for real-time rendering. The proposed approach is based on the Screen-Space Ambient Occlusion (SSAO) method, which approximates the global illumination for large, fully dynamic scenes at interactive frame rates. Current algorithms that are based on the SSAO method suffer from difficulties due to the large number of samples that are required. In this paper, we propose an improvement to the SSAO technique by integrating it with a Multiple Importance Sampling technique that combines a stratified sampling method with an importance sampling method, with the objective of reducing the number of samples. Experimental evaluation demonstrates that our technique can produce high-quality images in real time and is significantly faster than traditional techniques.
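A generic sketch of multiple importance sampling with the balance heuristic, on a toy 1-D integral (the integrand, densities, and sample counts are our illustrative assumptions, not the paper's SSAO kernels):

```python
import numpy as np

rng = np.random.default_rng(1)
f = lambda x: x * np.sin(8 * x) ** 2            # toy integrand on [0, 1]
n = 512

# Strategy A: stratified-uniform samples, density pa(x) = 1.
xa = (np.arange(n) + rng.random(n)) / n
# Strategy B: importance samples from pb(x) = 2x (inverse CDF is sqrt(u)).
xb = np.sqrt(rng.random(n))

# Balance heuristic: each sample is weighted by w_i = p_i / (pa + pb), and
# w_i * f / p_i collapses to f / (pa + pb) for both sample sets.
est = (np.sum(f(xa) / (1.0 + 2 * xa))
     + np.sum(f(xb) / (1.0 + 2 * xb))) / n
print(est)
```

The balance heuristic keeps the combined estimator unbiased while letting whichever strategy better matches the integrand dominate, which is why fewer samples suffice.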
Dental movement acceleration: Literature review by an alternative scientific evidence method
Camacho, Angela Domínguez; Cujar, Sergio Andres Velásquez
2014-01-01
The aim of this study was to analyze the majority of publications using effective methods to speed up orthodontic treatment and determine which publications carry high evidence-based value. The literature published in PubMed from 1984 to 2013 was reviewed, in addition to well-known reports that were not classified under this database. To facilitate evidence-based decision making, guidelines such as the Consolidated Standards of Reporting Trials, the Preferred Reporting Items for Systematic Reviews and Meta-Analyses, and the Transparent Reporting of Evaluations with Nonrandomized Designs checklist were used. The studies were initially divided into three groups: local application of cell mediators, physical stimuli, and techniques that took advantage of the regional acceleration phenomenon. The articles were classified according to their level of evidence using an alternative method for orthodontic scientific article classification: 1a, systematic reviews (SR) of randomized clinical trials (RCTs); 1b, individual RCTs; 2a, SR of cohort studies; 2b, individual cohort studies, controlled clinical trials, and low-quality RCTs; 3a, SR of case-control studies; 3b, individual case-control studies, low-quality cohort studies, and split-mouth designs with short follow-up; 4, case series, low-quality case-control studies, and non-systematic reviews; and 5, expert opinion. The highest level of evidence for each group was as follows. (1) Local application of cell mediators: the highest level of evidence corresponds to level 3b, for prostaglandins and vitamin D. (2) Physical stimuli: vibratory forces and low-level laser irradiation have evidence level 2b, electrical current is classified at evidence level 3b, and pulsed electromagnetic fields are placed at level 4 on the evidence scale. (3) Techniques related to the regional acceleration phenomenon: for corticotomy, the majority of reports belong to level 4. Piezocision, dentoalveolar distraction, alveocentesis, monocortical tooth dislocation, and the ligament distraction technique had only case series or single case reports (evidence level 4). Surgery first and periodontal distraction each have one study at level 2b, and corticision one report at level 5. Multiple reports of orthodontic acceleration in humans were identified by an alternative evidence-level scale, which is a simple and accurate way of determining which techniques are better and have a higher rate of effectiveness. Up to October 2013, the highest level of evidence for a specific procedure to accelerate orthodontic tooth movement was surgery first, followed by low-level laser application, corticotomy, and periodontal distraction, located at level 2, recommendation grade b, on this proposed scientific evidence-based scale. PMID:25332914
Simon van der Meer (1925-2011):. A Modest Genius of Accelerator Science
NASA Astrophysics Data System (ADS)
Chohan, Vinod C.
2011-02-01
Simon van der Meer was a brilliant scientist and a true giant of accelerator science. His seminal contributions to accelerator science have been essential to this day in our quest to satisfy the demands of modern particle physics. Whether we speak of long-baseline neutrino physics, antiproton-proton physics at Fermilab, or proton-proton physics at the LHC, his techniques and inventions have been a vital part of modern-day successes. Simon van der Meer and Carlo Rubbia were the first CERN scientists to become Nobel laureates in Physics, in 1984. Van der Meer's lesser-known contributions spanned a whole range of subjects in accelerator science, from magnet design to power supply design, beam measurements, slow beam extraction, and sophisticated programs and controls.
Jadhav, Vivek Dattatray; Motwani, Bhagwan K.; Shinde, Jitendra; Adhapure, Prasad
2017-01-01
Aims: The aim of this study was to evaluate the marginal fit and surface roughness of complete cast crowns made by a conventional and an accelerated casting technique. Settings and Design: This study was divided into three parts. In Part I, the marginal fit of full metal crowns made by both casting techniques was checked in the vertical direction; in Part II, the fit of sectional metal crowns made by both casting techniques was checked in the horizontal direction; and in Part III, the surface roughness of disc-shaped metal plate specimens made by both casting techniques was checked. Materials and Methods: A conventional technique was compared with an accelerated technique; the marginal fit of the full metal crowns (Part I) and the horizontal fit of the sectional metal crowns (Part II) were determined, and the surface roughness of castings made with the same techniques (Part III) was compared. Statistical Analysis Used: The results of the t-test and independent-samples test did not indicate statistically significant differences in the marginal discrepancy detected between the two casting techniques. Results: For marginal discrepancy and surface roughness, crowns fabricated with the accelerated technique were significantly different from those fabricated with the conventional technique. Conclusions: The accelerated casting technique showed quite satisfactory results, but the conventional technique was superior in terms of marginal fit and surface roughness. PMID:29042726
NASA Astrophysics Data System (ADS)
Pritykin, F. N.; Nebritov, V. I.
2017-06-01
A structure is proposed for a graphic database specifying the shape and the projected position of the work envelope of an android arm mechanism for various positions of forbidden zones known in advance. A technique for analytical assignment of the work envelope, based on the methods of analytical geometry and set theory, is presented. These studies can be applied in creating knowledge bases for intelligent android-control systems that operate autonomously in complex environments.
Teran-Escobar, Gerardo; Tanenbaum, David M; Voroshazi, Eszter; Hermenau, Martin; Norrman, Kion; Lloyd, Matthew T; Galagan, Yulia; Zimmermann, Birger; Hösel, Markus; Dam, Henrik F; Jørgensen, Mikkel; Gevorgyan, Suren; Kudret, Suleyman; Maes, Wouter; Lutsen, Laurence; Vanderzande, Dirk; Würfel, Uli; Andriessen, Ronn; Rösch, Roland; Hoppe, Harald; Rivaton, Agnès; Uzunoğlu, Gülşah Y; Germack, David; Andreasen, Birgitta; Madsen, Morten V; Bundgaard, Eva; Krebs, Frederik C; Lira-Cantu, Monica
2012-09-07
This work is part of an inter-laboratory collaboration to study the stability of seven distinct sets of state-of-the-art organic photovoltaic (OPV) devices prepared by leading research laboratories. All devices were shipped to and degraded at RISØ-DTU for up to 1830 hours in accordance with established ISOS-3 protocols under defined illumination conditions. In this work, we apply the Incident Photon-to-Electron Conversion Efficiency (IPCE) and the in situ IPCE techniques to determine the relation between solar cell performance and solar cell stability. Different ageing conditions were considered: accelerated full sun simulation, low-level indoor fluorescent lighting, and dark storage. The devices were also monitored under ambient and inert (N(2)) atmospheres, which allows for the identification of the solar cell materials most susceptible to degradation by ambient air (oxygen and moisture). The different OPV configurations permitted the study of the intrinsic stability of the devices depending on: two different ITO-replacement alternatives, two different hole extraction layers (PEDOT:PSS and MoO(3)), and two different P3HT-based polymers. The response of un-encapsulated devices to ambient atmosphere offered insight into the importance of moisture in solar cell performance. Our results demonstrate that the IPCE and in situ IPCE techniques are valuable analytical methods for understanding device degradation and solar cell lifetime.
Arnau, Antonio
2008-01-01
Since the first applications of AT-cut quartz crystals as sensors in solutions more than 20 years ago, the so-called quartz crystal microbalance (QCM) has become a valuable alternative analytical method in a wide range of applications, such as biosensors, analysis of biomolecular interactions, study of bacterial adhesion at specific interfaces, pathogen and microorganism detection, study of polymer film-biomolecule or cell-substrate interactions, and immunosensors, along with extensive use in fluid and polymer characterization and in electrochemical applications, among others. The appropriate evaluation of this analytical method requires recognizing the different steps involved and being conscious of their importance and limitations. The first step in a QCM system is the accurate and appropriate characterization of the sensor in relation to the specific application. Use of the piezoelectric sensor in contact with solutions strongly affects its behavior, and appropriate electronic interfaces must be used for adequate sensor characterization. Systems based on different principles and techniques have been implemented during the last 25 years. The interface selected for a specific application is important, and its limitations must be known in order to judge its suitability and to avoid error propagation in the interpretation of results. This article presents a comprehensive overview of the different techniques used for AT-cut quartz crystal microbalances in in-solution applications, which are based on the following principles: network or impedance analyzers, decay methods, oscillators, and lock-in techniques. The electronic interfaces based on oscillators and phase-locked techniques are treated in detail, with descriptions of different configurations, since these techniques are the most used in applications for detection of analytes in solutions and in those where a fast sensor response is necessary. PMID:27879713
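For orientation only (this is the gravimetric principle behind every interface discussed, not part of the article's circuit treatment): in the thin rigid-film limit, the Sauerbrey relation converts a resonance frequency shift into an areal mass change; in solution, viscoelastic and liquid-loading corrections apply. A minimal sketch using the standard quartz constants:

```python
import math

RHO_Q = 2.648      # quartz density, g/cm^3
MU_Q = 2.947e11    # quartz shear stiffness, g/(cm s^2)

def sauerbrey_mass(delta_f_hz, f0_hz=5.0e6, area_cm2=1.0):
    """Mass change (g) implied by a shift delta_f, rigid thin-film limit:
    delta_f = -2 f0^2 delta_m / (A sqrt(rho_q mu_q))."""
    cf = 2.0 * f0_hz ** 2 / math.sqrt(RHO_Q * MU_Q)  # ~56.6 Hz cm^2/ug at 5 MHz
    return -delta_f_hz * area_cm2 / cf

# A -50 Hz shift on a 5 MHz, 1 cm^2 crystal corresponds to roughly 0.88 ug.
print(sauerbrey_mass(-50.0) * 1e6, "ug")
```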
Gebauer, Petr; Malá, Zdena; Boček, Petr
2014-03-01
This contribution is the third part of the project on strategies used in the selection and tuning of electrolyte systems for anionic ITP with ESI-MS detection. The strategy presented here is based on the creation of self-maintained ITP subsystems in moving-boundary systems and describes two new principal approaches offering physical separation of analyte zones from their common ITP stack and/or simultaneous selective stacking of two different analyte groups. Both strategic directions are based on extending the number of components forming the electrolyte system by adding a third suitable anion. The first method is the application of the spacer technique to moving-boundary anionic ITP systems, the second method is a technique utilizing a moving-boundary ITP system in which two ITP subsystems exist and move with mutually different velocities. It is essential for ESI detection that both methods can be based on electrolyte systems containing only several simple chemicals, such as simple volatile organic acids (formic and acetic) and their ammonium salts. The properties of both techniques are defined theoretically and discussed from the viewpoint of their applicability to trace analysis by ITP-ESI-MS. Examples of system design for selected model separations of preservatives and pharmaceuticals illustrate the validity of the theoretical model and application potential of the proposed techniques by both computer simulations and experiments. Both new methods enhance the application range of ITP-MS and may be beneficial particularly for complex multicomponent samples or for analytes with identical molecular mass. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Applying Case-Based Reasoning in Knowledge Management to Support Organizational Performance
ERIC Educational Resources Information Center
Wang, Feng-Kwei
2006-01-01
Research and practice in human performance technology (HPT) has recently accelerated the search for innovative approaches to supplement or replace traditional training interventions for improving organizational performance. This article examines a knowledge management framework built upon the theories and techniques of case-based reasoning (CBR)…
BINP accelerator based epithermal neutron source.
Aleynik, V; Burdakov, A; Davydenko, V; Ivanov, A; Kanygin, V; Kuznetsov, A; Makarov, A; Sorokin, I; Taskaev, S
2011-12-01
An innovative facility for neutron capture therapy has been built at BINP. This facility is based on a compact vacuum-insulation tandem accelerator designed to produce proton currents up to 10 mA. Epithermal neutrons are proposed to be generated by 1.915-2.5 MeV protons bombarding a lithium target via the (7)Li(p,n)(7)Be threshold reaction. Diagnostic techniques developed for the proton beam and the neutrons are described; results of experiments on proton beam transport and neutron generation are shown and discussed, and plans are presented. Copyright © 2011 Elsevier Ltd. All rights reserved.
Convergence acceleration of the Proteus computer code with multigrid methods
NASA Technical Reports Server (NTRS)
Demuren, A. O.; Ibraheem, S. O.
1992-01-01
Presented here is the first part of a study to implement convergence acceleration techniques based on the multigrid concept in the Proteus computer code. A review is given of previous studies on the implementation of multigrid methods in computer codes for compressible flow analysis. Also presented is a detailed stability analysis of upwind and central-difference based numerical schemes for solving the Euler and Navier-Stokes equations. Results are given of a convergence study of the Proteus code on computational grids of different sizes. The results presented here form the foundation for the implementation of multigrid methods in the Proteus code.
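As a generic illustration of the multigrid concept (a 1-D Poisson sketch under assumed grid sizes, not the Proteus implementation): a few smoothing sweeps damp the high-frequency error on the fine grid, and the smooth remainder is corrected cheaply on a coarser grid.

```python
import numpy as np

def jacobi(u, f, h, sweeps, w=2/3):
    """Weighted-Jacobi smoothing for -u'' = f with Dirichlet boundaries."""
    for _ in range(sweeps):
        u[1:-1] = (1 - w) * u[1:-1] + w * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
    return u

def two_grid(u, f, h):
    """One two-grid cycle: pre-smooth, coarse-grid correction, post-smooth."""
    u = jacobi(u, f, h, 3)
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)  # residual
    ec = jacobi(np.zeros_like(r[::2]), r[::2].copy(), 2 * h, 50)  # coarse solve
    e = np.zeros_like(u)
    e[::2] = ec                                   # prolong coarse correction
    e[1:-1:2] = 0.5 * (e[:-2:2] + e[2::2])        # interpolate fine midpoints
    return jacobi(u + e, f, h, 3)

# e.g. -u'' = pi^2 sin(pi x) on [0, 1]; a few cycles beat plain Jacobi easily
N = 64
x = np.linspace(0.0, 1.0, N + 1)
u, f = np.zeros(N + 1), np.pi ** 2 * np.sin(np.pi * x)
for _ in range(10):
    u = two_grid(u, f, 1.0 / N)
```

Recursing on the coarse solve instead of iterating Jacobi there yields the usual V-cycle.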
Evidence-based perianesthesia care: accelerated postoperative recovery programs.
Pasero, Chris; Belden, Jan
2006-06-01
Prolonged stress response after surgery can cause numerous adverse effects, including gastrointestinal dysfunction, muscle wasting, impaired cognition, and cardiopulmonary, infectious, and thromboembolic complications. These events can delay hospital discharge, extend convalescence, and negatively impact long-term prognosis. Recent advances in perioperative management practices have allowed better control of the stress response and improved outcomes for patients undergoing surgery. At the center of the current focus on improved outcomes are evidence-based fast-track surgical techniques and what is commonly referred to as "accelerated postoperative recovery programs." These programs require a multidisciplinary, coordinated effort, and nurses are essential to their successful implementation.
Future Synchrotron Light Sources Based on Ultimate Storage Rings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cai, Yunhai (SLAC)
2012-04-09
The main purpose of this talk is to describe how far one might push the state of the art in storage ring design. The talk will start with an overview of the latest developments and advances in the design of synchrotron light sources based on the concept of an 'ultimate' storage ring. The review will establish how bright a ring-based light source might be, where the frontiers of technological challenge lie, and what the limits of accelerator physics are. Emphasis will be given to possible improvements in accelerator design and developments in technology toward the goal of achieving an ultimate storage ring. An ultimate storage ring (USR), defined as an electron ring-based light source having an emittance in both transverse planes at the diffraction limit for the range of X-ray wavelengths of interest for a scientific community, would provide very high brightness photons having high transverse coherence that would extend the capabilities of X-ray imaging and probe techniques beyond today's performance. It would be a cost-effective, high-coherence 4th-generation light source, competitive with one based on energy recovery linac (ERL) technology, serving a large number of users studying material, chemical, and biological sciences. Furthermore, because of the experience accumulated over many decades of ring operation, it would have the great advantage of stability and reliability. In this paper we consider the design of a USR having 10-pm-rad emittance. It is a tremendous challenge to design a storage ring having such an extremely low emittance, a factor of 100 smaller than those in existing light sources, especially such that it has adequate dynamic aperture and beam lifetime. In many ultra-low-emittance designs, the injection acceptances are not large enough for accumulation of the electron beam, necessitating on-axis injection, where stored electron bunches are completely replaced with newly injected ones. Recently, starting with the MAX-IV 7-bend achromatic cell, we have made significant progress with the design of PEP-X, a USR that would inhabit the decommissioned PEP-II tunnel at SLAC. The enlargement of the dynamic aperture is largely a result of the cancellation of the 4th-order resonances in the 3rd-order achromats and the effective use of lattice optimization programs. In this paper, we will show those cancellations of the 4th-order resonances using an analytical approach based on exponential Lie operators and Poisson brackets. Wherever possible, our analytical results will be compared with their numerical counterparts. Using the derived formulae, we will construct 4th-order geometric achromats and use them as modules for the lattice of the PEP-X USR, noting that only geometric terms are canceled to the 4th order.
Aksyonov, S A; Williams, P
2001-01-01
Impact desolvation of electrosprayed microdroplets (IDEM) is a new method for producing gas-phase ions of large biomolecules. Analytes are dissolved in an electrolyte solution which is electrosprayed in vacuum, producing highly charged micron and sub-micron sized droplets (microdroplets). These microdroplets are accelerated through potential differences of approximately 5-10 kV to velocities of several km/s and allowed to impact a target surface. The energetic impacts vaporize the droplets and release desolvated gas-phase ions of the analyte molecules. Oligonucleotides (2- to 12-mer) and peptides (bradykinin, neurotensin) yield singly and doubly charged molecular ions with no detectable fragmentation. Because the extent of multiple charging is significantly less than in atmospheric pressure electrospray ionization, and the method produces ions largely free of adducts from solutions of high ionic strength, IDEM has some promise as a method for coupling to liquid chromatographic techniques and for mixture analysis. Ions are produced in vacuum at a flat equipotential surface, potentially allowing efficient ion extraction. Copyright 2001 John Wiley & Sons, Ltd.
Optimizing cosmological surveys in a crowded market
NASA Astrophysics Data System (ADS)
Bassett, Bruce A.
2005-04-01
Optimizing the major next-generation cosmological surveys (such as SNAP, KAOS, etc.) is a key problem given our ignorance of the physics underlying cosmic acceleration and the plethora of surveys planned. We propose a Bayesian design framework which (1) maximizes the discrimination power of a survey without assuming any underlying dark-energy model, (2) finds the best niche survey geometry given current data and future competing experiments, (3) maximizes the cross section for serendipitous discoveries, and (4) can be adapted to answer specific questions (such as “is dark energy dynamical?”). Integrated parameter-space optimization (IPSO) is a design framework that integrates projected parameter errors over an entire dark-energy parameter space and then extremizes a figure of merit (such as the Shannon entropy gain, which we show is stable to off-diagonal covariance matrix perturbations) as a function of survey parameters using analytical, grid, or MCMC techniques. We discuss examples where the optimization can be performed analytically. IPSO is thus a general, model-independent and scalable framework that allows us to appropriately use prior information to design the best possible surveys.
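A caricature of the IPSO loop in code (all numerical ingredients below, the toy distance-modulus sensitivities, model grid, prior, and survey parameter, are our own assumptions for illustration): project errors for each candidate survey, average an entropy-gain figure of merit over the dark-energy parameter grid, and extremize.

```python
import numpy as np

def fisher(z_max, w0, wa, n_sn=300, sigma=0.15):
    """Toy Fisher matrix for (w0, wa) from n_sn supernovae out to z_max."""
    z = np.linspace(0.05, z_max, n_sn)
    dmu_dw0 = 0.5 * z / (1 + z) * (1 + 0.2 * wa * z)     # toy sensitivities
    dmu_dwa = 0.5 * (z / (1 + z)) ** 2 * (1 + 0.2 * w0)
    Jm = np.stack([dmu_dw0, dmu_dwa], axis=1)
    return Jm.T @ Jm / sigma ** 2

prior = np.eye(2)                                        # current knowledge
models = [(w0, wa) for w0 in np.linspace(-1.2, -0.8, 5)
                   for wa in np.linspace(-0.5, 0.5, 5)]

def merit(z_max):
    """Shannon entropy gain, integrated (here averaged) over the model grid."""
    return np.mean([0.5 * np.log(np.linalg.det(prior + fisher(z_max, w0, wa))
                                 / np.linalg.det(prior))
                    for w0, wa in models])

# A real design would trade depth against area at fixed observing time; this
# toy merely extremizes the figure of merit over one survey parameter.
best = max(np.linspace(0.6, 1.8, 13), key=merit)
print(best)
```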
The motion and stability of a dual spin satellite during the momentum wheel spin-up maneuver
NASA Technical Reports Server (NTRS)
Bainum, P. M.; Sen, S.
1972-01-01
The stability of a dual-spin satellite system during the momentum wheel spin-up maneuver is treated both analytically and numerically. The dual-spin system consists of: a slowly rotating or despun main body; a momentum wheel (or rotor) which is accelerated by a torque motor to change its initial angular velocity relative to the main part to some high terminal value; and a nutation damper. A closed-form solution for the case of a symmetrical satellite indicates that when the nutation damper is physically constrained from moving (i.e., by use of a mechanical clamp) the magnitude of the vector sum of the transverse angular velocity components remains bounded during the wheel spin-up under the influence of a constant motor torque. The analysis is extended to consider such effects as: the motion of the nutation damper during spin-up; a non-uniform motor torque; and the effect of a non-symmetrical mass distribution in the main spacecraft and the rotor. An approximate analytical solution using perturbation techniques is developed for the case of a slightly asymmetric main spacecraft.
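The bounded transverse rate in the clamped-damper, symmetric case is easy to reproduce numerically. Below is a sketch of the rigid two-body model without the nutation damper; all inertias and the motor torque are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

It, Ia, Iw, g = 120.0, 80.0, 2.0, 0.05   # inertias (kg m^2), motor torque (N m)

def rhs(t, y):
    wx, wy, wz, sigma = y                # body rates; wheel rate relative to body
    hz = Ia * wz + Iw * sigma            # axial angular momentum
    dwx = wy * (It * wz - hz) / It       # transverse Euler equations
    dwy = wx * (hz - It * wz) / It
    dwz = -g / (Ia - Iw)                 # platform reaction to the motor torque
    dsigma = g / Iw - dwz                # wheel spin-up
    return [dwx, dwy, dwz, dsigma]

sol = solve_ivp(rhs, (0.0, 600.0), [0.02, 0.0, 0.1, 0.0], max_step=0.5)
wt = np.hypot(sol.y[0], sol.y[1])        # transverse angular velocity magnitude
print(wt.min(), wt.max())                # stays bounded during spin-up
```

In this idealized symmetric model the transverse magnitude is in fact exactly conserved; the asymmetries, damper motion, and non-uniform torque treated in the paper perturb that picture.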
NASA Astrophysics Data System (ADS)
Dattoli, G.; Migliorati, M.; Schiavi, A.
2007-05-01
The coherent synchrotron radiation (CSR) is one of the main problems limiting the performance of high-intensity electron accelerators. The complexity of the physical mechanisms underlying the onset of instabilities due to CSR demands accurate descriptions, capable of including the large number of features of an actual accelerating device. A code devoted to the analysis of these types of problems should be fast and reliable, conditions that are rarely achieved simultaneously. In the past, codes based on Lie algebraic techniques have been very efficient for treating transport problems in accelerators. The extension of these methods to the non-linear case is ideally suited to treat CSR instability problems. We report on the development of a numerical code, based on the solution of the Vlasov equation, with the inclusion of non-linear contributions due to wake field effects. The proposed solution method exploits an algebraic technique that uses exponential operators. We show that the integration procedure is capable of reproducing the onset of instability and the effects associated with bunching mechanisms leading to the growth of the instability itself. In addition, considerations on the threshold of the instability are also developed.
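The exponential-operator idea can be sketched for a bare 1-D Vlasov equation, df/dt + v df/dx + E df/dv = 0, advanced by the Strang splitting exp(dt A/2) exp(dt B) exp(dt A/2), where A is streaming and B is the kick; each exponential is applied exactly by shifting the distribution along characteristics. The grid sizes and the frozen toy field below are our assumptions; the code described in the abstract solves the fields self-consistently and includes the CSR wake.

```python
import numpy as np

nx, nv, L, vmax, dt = 64, 64, 2 * np.pi, 4.0, 0.05
x = np.linspace(0, L, nx, endpoint=False)
v = np.linspace(-vmax, vmax, nv)
f = np.exp(-v[None, :] ** 2 / 2) * (1 + 0.1 * np.cos(x)[:, None])  # perturbed Maxwellian
E = 0.1 * np.sin(x)                       # frozen toy field (not self-consistent)

def stream(f, tau):                       # exp(tau A): shift each v-row in x
    out = np.empty_like(f)
    for j in range(nv):
        out[:, j] = np.interp((x - v[j] * tau) % L, x, f[:, j], period=L)
    return out

def kick(f, tau):                         # exp(tau B): shift each x-column in v
    out = np.empty_like(f)
    for i in range(nx):
        out[i] = np.interp(v - E[i] * tau, v, f[i], left=0.0, right=0.0)
    return out

for _ in range(100):                      # Strang-split time marching
    f = stream(kick(stream(f, dt / 2), dt), dt / 2)
```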
Ahn, Kang-Ho; Kim, Sun-Man; Jung, Hae-Jin; Lee, Mi-Jung; Eom, Hyo-Jin; Maskey, Shila; Ro, Chul-Un
2010-10-01
In this work, an analytical method for the characterization of the hygroscopic property, chemical composition, and morphology of individual aerosol particles is introduced. The method, which is based on the combined use of optical and electron microscopic techniques, is simple and easy to apply. An optical microscopic technique was used to perform the visual observation of the phase transformation and hygroscopic growth of aerosol particles on a single particle level. A quantitative energy-dispersive electron probe X-ray microanalysis, named low-Z particle EPMA, was used to perform a quantitative chemical speciation of the same individual particles after the measurement of the hygroscopic property. To validate the analytical methodology, the hygroscopic properties of artificially generated NaCl, KCl, (NH(4))(2)SO(4), and Na(2)SO(4) aerosol particles of micrometer size were investigated. The practical applicability of the analytical method for studying the hygroscopic property, chemical composition, and morphology of ambient aerosol particles is demonstrated.
[Developments in preparation and experimental method of solid phase microextraction fibers].
Yi, Xu; Fu, Yujie
2004-09-01
Solid phase microextraction (SPME) is a simple and effective adsorption/desorption technique that concentrates volatile or nonvolatile compounds from liquid samples or from the headspace above samples. SPME is compatible with analyte separation and detection by gas chromatography, high performance liquid chromatography, and other instrumental methods. It offers many advantages, such as a wide linear range, low solvent and sample consumption, short analysis times, low detection limits, and simple apparatus. The theory of SPME, covering both equilibrium and non-equilibrium treatments, is introduced. Novel developments in fiber preparation methods and related experimental techniques are discussed. In addition to commercial fibers, newly developed fabrication techniques, such as sol-gel coating, electrodeposition, carbon-based adsorption, and high-temperature epoxy immobilization, are presented. Extraction modes, selection of the fiber coating, optimization of operating conditions, method sensitivity and precision, and system automation are taken into consideration in the analytical process of SPME. Finally, a brief outlook on SPME is given.
De Vore, Karl W; Fatahi, Nadia M; Sass, John E
2016-08-01
Arrhenius modeling of analyte recovery at increased temperatures to predict long-term colder-storage stability of biological raw materials, reagents, calibrators, and controls is standard practice in the diagnostics industry. Predicting subzero-temperature stability using the same practice is frequently criticized but nevertheless heavily relied upon. We compared the ability to predict analyte recovery during frozen storage using 3 separate strategies: traditional accelerated studies with Arrhenius modeling, and extrapolation of recovery at 20% of shelf life using either ordinary least squares or a radical equation y = B1x^0.5 + B0. Computer simulations were performed to establish equivalence of statistical power to discern the expected changes during frozen storage or accelerated stress. This was followed by actual predictive and follow-up confirmatory testing of 12 chemistry and immunoassay analytes. Linear extrapolations tended to be the most conservative in the predicted percent recovery, reducing customer and patient risk. However, the majority of analytes followed a rate of change that slowed over time, which was best fit by a radical equation of the form y = B1x^0.5 + B0. Other evidence strongly suggested that the slowing of the rate was not due to higher-order kinetics but to changes in the matrix during storage. Predicting the shelf life of frozen products through extrapolation of early initial real-time storage analyte recovery should be considered the most accurate method. Although in this study the time required for a prediction was longer than for a typical accelerated testing protocol, there are fewer potential sources of error, reduced costs, and a lower expenditure of resources. © 2016 American Association for Clinical Chemistry.
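A minimal sketch of the two extrapolation strategies compared, on hypothetical monthly recovery data (all numbers invented for illustration):

```python
import numpy as np

t = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 6.0])         # months of real-time storage
y = np.array([100.0, 97.8, 96.9, 96.2, 95.6, 94.7])  # % recovery (hypothetical)

# Fit the radical model y = B1*x^0.5 + B0 and a straight line by least squares,
# then extrapolate both to a 36-month shelf-life claim.
(b1, b0), *_ = np.linalg.lstsq(np.column_stack([np.sqrt(t), np.ones_like(t)]),
                               y, rcond=None)
(m, c), *_ = np.linalg.lstsq(np.column_stack([t, np.ones_like(t)]),
                             y, rcond=None)
print("radical @ 36 mo:", b1 * 6.0 + b0)   # rate slows like 1/(2*sqrt(x))
print("linear  @ 36 mo:", m * 36.0 + c)    # keeps falling: more conservative
```

Because the radical model's rate of change decays with time, it levels off at long storage times, while the straight line keeps falling and therefore yields the more conservative shelf-life claim.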
Enhance your team-based qualitative research.
Fernald, Douglas H; Duclos, Christine W
2005-01-01
Qualitative research projects often involve the collaborative efforts of a research team. Challenges inherent in teamwork include changes in membership and differences in analytical style, philosophy, training, experience, and skill. This article discusses teamwork issues and tools and techniques used to improve team-based qualitative research. We drew on our experiences in working on numerous projects of varying size, duration, and purpose. Through trials of different tools and techniques, expert consultation, and review of the literature, we learned to improve how we build teams, manage information, and disseminate results. Attention given to team members and team processes is as important as choosing appropriate analytical tools and techniques. Attentive team leadership, commitment to early and regular team meetings, and discussion of roles, responsibilities, and expectations all help build more effective teams and establish clear norms. As data are collected and analyzed, it is important to anticipate potential problems from differing skills and styles, and how information and files are managed. Discuss analytical preferences and biases and set clear guidelines and practices for how data will be analyzed and handled. As emerging ideas and findings disperse across team members, common tools (such as summary forms and data grids), coding conventions, intermediate goals or products, and regular documentation help capture essential ideas and insights. In a team setting, little should be left to chance. This article identifies ways to improve team-based qualitative research with a more considered and systematic approach. Qualitative researchers will benefit from further examination and discussion of effective, field-tested, team-based strategies.
Handling nonnormality and variance heterogeneity for quantitative sublethal toxicity tests.
Ritz, Christian; Van der Vliet, Leana
2009-09-01
The advantages of using regression-based techniques to derive endpoints from environmental toxicity data are clear, and slowly, this superior analytical technique is gaining acceptance. As use of regression-based analysis becomes more widespread, some of the associated nuances and potential problems come into sharper focus. Looking at data sets that cover a broad spectrum of standard test species, we noticed that some model fits to data failed to meet two key assumptions (variance homogeneity and normality) that are necessary for correct statistical analysis via regression-based techniques. Failure to meet these assumptions often is caused by reduced variance at the concentrations showing severe adverse effects. Although commonly used with linear regression analysis, transformation of the response variable alone is not appropriate when fitting data using nonlinear regression techniques. Through analysis of sample data sets, including Lemna minor, Eisenia andrei (terrestrial earthworm), and algae, we show that both the so-called Box-Cox transformation and use of the Poisson distribution can help to correct variance heterogeneity and nonnormality and so allow nonlinear regression analysis to be implemented. Both the Box-Cox transformation and the Poisson distribution can be readily implemented into existing protocols for statistical analysis. By correcting for nonnormality and variance heterogeneity, these two statistical tools can be used to encourage the transition to regression-based analysis and the retirement of less desirable and less flexible analytical techniques, such as linear interpolation.
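A short sketch of one of the two remedies, the Box-Cox transformation, applied to hypothetical concentration-response data whose standard deviation scales with the mean (all numbers assumed; scipy's boxcox picks the transform exponent by maximum likelihood):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
conc = np.repeat([0.0, 1.0, 10.0, 100.0], 8)            # exposure concentrations
mean = np.select([conc == c for c in (0.0, 1.0, 10.0, 100.0)],
                 [40.0, 35.0, 20.0, 4.0])               # severe effect at the top
y = np.clip(rng.normal(mean, 0.15 * mean), 0.01, None)  # sd proportional to mean

# Variance-stabilize before the nonlinear concentration-response fit.
y_bc, lam = stats.boxcox(y)
print("estimated Box-Cox lambda:", lam)
```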
Particle acceleration in laser-driven magnetic reconnection
Totorica, S. R.; Abel, T.; Fiuza, F.
2017-04-03
Particle acceleration induced by magnetic reconnection is thought to be a promising candidate for producing the nonthermal emissions associated with explosive phenomena such as solar flares, pulsar wind nebulae, and jets from active galactic nuclei. Laboratory experiments can play an important role in the study of the detailed microphysics of magnetic reconnection and the dominant particle acceleration mechanisms. We have used two- and three-dimensional particle-in-cell simulations to study particle acceleration in high Lundquist number reconnection regimes associated with laser-driven plasma experiments. For current experimental conditions, we show that nonthermal electrons can be accelerated to energies more than an order of magnitude larger than the initial thermal energy. The nonthermal electrons gain their energy mainly from the reconnection electric field near the X points, and particle injection into the reconnection layer and escape from the finite system establish a distribution of energies that resembles a power-law spectrum. Energetic electrons can also become trapped inside the plasmoids that form in the current layer and gain additional energy from the electric field arising from the motion of the plasmoid. We compare simulations for finite and infinite periodic systems to demonstrate the importance of particle escape on the shape of the spectrum. Based on our findings, we provide an analytical estimate of the maximum electron energy and threshold condition for observing suprathermal electron acceleration in terms of experimentally tunable parameters. We also discuss experimental signatures, including the angular distribution of the accelerated particles, and construct synthetic detector spectra. Finally, these results open the way for novel experimental studies of particle acceleration induced by reconnection.
An orientation soil survey at the Pebble Cu-Au-Mo porphyry deposit, Alaska
Smith, Steven M.; Eppinger, Robert G.; Fey, David L.; Kelley, Karen D.; Giles, S.A.
2009-01-01
Soil samples were collected in 2007 and 2008 along three traverses across the giant Pebble Cu-Au-Mo porphyry deposit. Within each soil pit, four subsamples were collected following recommended protocols for each of ten commonly used and proprietary leach/digestion techniques. The significance of the geochemical patterns generated by these techniques was classified by visual inspection of plots showing individual element concentrations by each analytical method along the 2007 traverse. A simple element-versus-method matrix, populated with values based on this significance classification, provides a means of ranking the utility of methods and elements at this deposit. The interpretation of a complex multi-element dataset derived from multiple analytical techniques is challenging. An example using vanadium results from a single leach technique illustrates several possible interpretations of the data.
The role of analytical chemistry in Niger Delta petroleum exploration: a review.
Akinlua, Akinsehinwa
2012-06-12
Petroleum, and the organic matter from which it is derived, is composed of organic compounds with some trace elements. These compounds give insight into the origin, thermal maturity, and paleoenvironmental history of petroleum, which are essential elements of petroleum exploration. Analytical techniques are the main tools for acquiring such geochemical data. Owing to progress in the development of new analytical techniques, many hitherto intractable petroleum exploration problems have been resolved. Analytical chemistry has played a significant role in the development of the petroleum resources of the Niger Delta. The various analytical techniques that have aided the success of petroleum exploration in the Niger Delta are discussed, as are the techniques that have helped in understanding the petroleum system of the basin. Recent and emerging analytical methodologies, including green analytical methods applicable to petroleum exploration, particularly in the Niger Delta petroleum province, are also discussed. Analytical chemistry is an invaluable tool in finding Niger Delta oils. Copyright © 2011 Elsevier B.V. All rights reserved.
Analytical techniques: A compilation
NASA Technical Reports Server (NTRS)
1975-01-01
A compilation, containing articles on a number of analytical techniques for quality control engineers and laboratory workers, is presented. Data cover techniques for testing electronic, mechanical, and optical systems, nondestructive testing techniques, and gas analysis techniques.
Joyner, William B.; Boore, David M.
1981-01-01
We have taken advantage of the recent increase in strong-motion data at close distances to derive new attenuation relations for peak horizontal acceleration and velocity. This new analysis uses a magnitude-independent shape, based on geometrical spreading and anelastic attenuation, for the attenuation curve. An innovation in technique is introduced that decouples the determination of the distance dependence of the data from the magnitude dependence.
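The magnitude-independent shape can be written schematically as log10 A = a + b(M - 6) - log10 r - k r, with r = (d^2 + h^2)^0.5, where -log10 r represents geometrical spreading and -k r anelastic attenuation. A sketch with placeholder coefficients (not the paper's fitted values):

```python
import numpy as np

def peak_accel(M, d, a=0.0, b=0.25, h=7.0, k=0.0026):
    """Attenuation-curve shape; a, b, h, k are illustrative placeholders."""
    r = np.sqrt(d ** 2 + h ** 2)     # d = closest distance to the rupture, km
    return 10.0 ** (a + b * (M - 6.0) - np.log10(r) - k * r)

# Magnitude shifts the whole curve up or down; the shape in distance is fixed.
print(peak_accel(6.5, 10.0), peak_accel(5.5, 10.0))
```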
Using machine learning to accelerate sampling-based inversion
NASA Astrophysics Data System (ADS)
Valentine, A. P.; Sambridge, M.
2017-12-01
In most cases, a complete solution to a geophysical inverse problem (including robust understanding of the uncertainties associated with the result) requires a sampling-based approach. However, the computational burden is high, and proves intractable for many problems of interest. There is therefore considerable value in developing techniques that can accelerate sampling procedures. The main computational cost lies in evaluation of the forward operator (e.g. calculation of synthetic seismograms) for each candidate model. Modern machine learning techniques, such as Gaussian Processes, offer a route for constructing a computationally cheap approximation to this calculation, which can replace the accurate solution during sampling. Importantly, the accuracy of the approximation can be refined as inversion proceeds, to ensure high-quality results. In this presentation, we describe and demonstrate this approach, which can be seen as an extension of popular current methods, such as the Neighbourhood Algorithm, and which bridges the gap between prior- and posterior-sampling frameworks.
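A schematic of the approach (illustrative forward operator, kernel, and refinement rule; not the presenters' code): a Gaussian-process surrogate stands in for the expensive misfit inside a Metropolis sampler and is refit with a true forward solve whenever its predictive uncertainty is too large.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

expensive_forward = lambda m: 40.0 * np.sum((m - 0.3) ** 2)  # stand-in misfit

X = np.random.uniform(0, 1, (8, 2))                  # initial design points
y = np.array([expensive_forward(m) for m in X])
gp = GaussianProcessRegressor(RBF(0.3)).fit(X, y)

m, phi = X[0], y[0]
for _ in range(2000):
    m_new = np.clip(m + 0.1 * np.random.randn(2), 0, 1)
    mu, sd = gp.predict(m_new[None], return_std=True)
    if sd[0] > 1.0:                                  # surrogate too uncertain:
        mu = np.array([expensive_forward(m_new)])    # pay for a real solve...
        X, y = np.vstack([X, m_new]), np.append(y, mu[0])
        gp = gp.fit(X, y)                            # ...and refine the GP
    if np.log(np.random.rand()) < phi - mu[0]:       # Metropolis test, target
        m, phi = m_new, mu[0]                        # density ~ exp(-misfit)
```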
Advances in developing rapid, reliable and portable detection systems for alcohol.
Thungon, Phurpa Dema; Kakoti, Ankana; Ngashangva, Lightson; Goswami, Pranab
2017-11-15
Development of portable, reliable, sensitive, simple, and inexpensive detection systems for alcohol has been a persistent demand not only in the traditional brewing, pharmaceutical, food, and clinical industries but also in the rapidly growing alcohol-based fuel industries. Highly sensitive, selective, and reliable alcohol detection is currently achievable mainly through sophisticated instrument-based analyses confined mostly to state-of-the-art analytical laboratory facilities. With the growing demand for rapid and reliable alcohol detection systems, an all-round attempt has been made over the past decade encompassing various disciplines from the basic and engineering sciences. Of late, research on developing small-scale portable alcohol detection systems has accelerated with the advent of emerging miniaturization techniques and of advanced materials and sensing platforms such as lab-on-chip, lab-on-CD, and lab-on-paper. With these new interdisciplinary approaches, along with support from the parallel growth of knowledge on rapid detection systems being pursued for various targets, progress in translating proof-of-concept devices into commercially viable and environmentally friendly portable alcohol detection systems is gaining pace. Here, we summarize the progress made over the years on alcohol detection systems, with a focus on recent advances toward developing portable, simple, and efficient alcohol sensors. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Hall, Edward J.; Delaney, Robert A.
1993-01-01
The primary objective of this study was the development of a time-marching three-dimensional Euler/Navier-Stokes aerodynamic analysis to predict steady and unsteady compressible transonic flows about ducted and unducted propfan propulsion systems employing multiple blade rows. The computer codes resulting from this study are referred to as ADPAC-AOACR (Advanced Ducted Propfan Analysis Codes-Angle of Attack Coupled Row). This document is the final report describing the theoretical basis and analytical results from the ADPAC-AOACR codes developed under task 5 of NASA Contract NAS3-25270, Unsteady Counterrotating Ducted Propfan Analysis. The ADPAC-AOACR program is based on a flexible multiple-block grid discretization scheme permitting coupled 2-D/3-D mesh block solutions with application to a wide variety of geometries. For convenience, several standard mesh block structures are described for turbomachinery applications. Aerodynamic calculations are based on a four-stage Runge-Kutta time-marching finite volume solution technique with added numerical dissipation. Steady flow predictions are accelerated by a multigrid procedure. Numerical calculations are compared with experimental data for several test cases to demonstrate the utility of this approach for predicting the aerodynamics of modern turbomachinery configurations employing multiple blade rows.
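The four-stage Runge-Kutta time-marching idea can be sketched generically (a Jameson-style multistage update applied to a stand-in 1-D upwind residual; the coefficients and operator are illustrative, not the ADPAC-AOACR discretization):

```python
import numpy as np

ALPHAS = (1 / 4, 1 / 3, 1 / 2, 1.0)        # classic four-stage coefficients

def residual(u, dx=0.02, a=1.0):
    """Stand-in finite-volume residual: first-order upwind linear advection."""
    return -a * (u - np.roll(u, 1)) / dx

def rk4_step(u, dt):
    """Multistage update u_k = u_0 + alpha_k * dt * R(u_{k-1})."""
    u0 = u.copy()
    for ak in ALPHAS:
        u = u0 + ak * dt * residual(u)
    return u

x = np.linspace(0.0, 1.0, 50, endpoint=False)
u = np.exp(-50 * (x - 0.3) ** 2)           # initial pulse, periodic domain
for _ in range(200):
    u = rk4_step(u, dt=0.01)               # CFL = 0.5 here
```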
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Suet Yi; Kleber, Markus; Takahashi, Lynelle K.
2013-04-01
Soil organic matter (OM) is important because its decay drives life processes in the biosphere. Analysis of organic compounds in geological systems is difficult because of their intimate association with mineral surfaces. To date there is no procedure capable of quantitatively separating organic from mineral phases without creating artifacts or mass loss. Therefore, analytical techniques that can (a) generate information about both organic and mineral phases simultaneously and (b) allow the examination of predetermined high-interest regions of the sample, as opposed to conventional bulk analytical techniques, are valuable. Laser Desorption Synchrotron Postionization (synchrotron-LDPI) mass spectrometry is introduced as a novel analytical tool to characterize the molecular properties of organic compounds in mineral-organic samples from terrestrial systems, and it is demonstrated that, when combined with Secondary Ion Mass Spectrometry (SIMS), it can provide complementary information on mineral composition. Mass spectrometry along a decomposition gradient in density fractions verifies the consistency of our results with bulk analytical techniques. We further demonstrate that, by changing laser and photoionization energies, variations in the molecular stability of organic compounds associated with mineral surfaces can be determined. The combination of synchrotron-LDPI and SIMS shows that the energetic conditions involved in desorption and ionization of organic matter may be a greater determinant of mass spectral signatures than the inherent molecular structure of the organic compounds investigated. The latter has implications for molecular models of natural organic matter that are based on mass spectrometric information.
Charged aerodynamics of a Low Earth Orbit cylinder
NASA Astrophysics Data System (ADS)
Capon, C. J.; Brown, M.; Boyce, R. R.
2016-11-01
This work investigates the charged aerodynamic interaction of a Low Earth Orbiting (LEO) cylinder with the ionosphere. The ratio of charged to neutral drag force on a 2D LEO cylinder with diffusely reflecting cool walls is derived analytically and compared against self-consistent electrostatic Particle-in-Cell (PIC) simulations. Analytical calculations predict that neglecting charged drag in an O+ dominated LEO plasma with a neutral-to-ion number density ratio of 10^2 will cause a 10% over-prediction of O density inferred from body accelerations when the body potential (ɸB) is ≤ -390 V. Above 900 km altitude in LEO, where H+ becomes the dominant ion species, analytical predictions suggest charged drag becomes equivalent to neutral drag for ɸB ≤ -0.75 V. Comparing analytical predictions against PIC simulations in the range 0 < -ɸB < 50 V showed that the analytical charged drag was under-estimated for all body potentials, with the degree of under-estimation increasing with -ɸB. Based on the -50 V PIC simulations, our in-house 6-degree-of-freedom orbital propagator saw a reduction in the semi-major axis of a 10 kg satellite of 6.9 m/day at 700 km and 0.98 m/day at 900 km, compared to 0.67 m/day and 0.056 m/day, respectively, from neutral drag alone. Hence, this work provides initial evidence that charged aerodynamics may become significant compared to neutral aerodynamics for high-voltage LEO bodies.