Sample records for accelerator-based analytical technique

  1. Analysis of Cultural Heritage by Accelerator Techniques and Analytical Imaging

    NASA Astrophysics Data System (ADS)

    Ide-Ektessabi, Ari; Toque, Jay Arre; Murayama, Yusuke

    2011-12-01

    In this paper we present the results of an experimental investigation using two very important accelerator techniques, (1) synchrotron radiation XRF and XAFS and (2) accelerator mass spectrometry, together with multispectral analytical imaging, for the investigation of cultural heritage. We also introduce a complementary approach to the investigation of artworks that is noninvasive and nondestructive and can be applied in situ. Four major projects are discussed to illustrate the potential applications of these accelerator and analytical imaging techniques: (1) investigation of Mongolian textiles (Genghis Khan and Kublai Khan period) using XRF, AMS and electron microscopy; (2) XRF studies of pigments collected from Korean Buddhist paintings; (3) creation of a database of the elemental composition and spectral reflectance of more than 1000 Japanese pigments used in traditional Japanese paintings; and (4) visible light-near infrared spectroscopy and multispectral imaging of degraded malachite and azurite. The XRF measurements of the Japanese and Korean pigments can be used to complement the results of pigment identification by analytical imaging through spectral reflectance reconstruction. Analysis of the Mongolian textiles revealed that they were produced between the 12th and 13th centuries. Elemental analysis of the samples showed that they contained traces of gold, copper, iron and titanium. Based on the age and trace elements in the samples, it was concluded that the textiles were produced during the height of power of the Mongol empire, which makes them a valuable cultural heritage. Finally, the analysis of the degraded and discolored malachite and azurite demonstrates how multispectral analytical imaging can complement the results of high energy-based techniques.

  2. Accelerator-based analytical technique in the evaluation of some Nigeria’s natural minerals: Fluorite, tourmaline and topaz

    NASA Astrophysics Data System (ADS)

    Olabanji, S. O.; Ige, O. A.; Mazzoli, C.; Ceccato, D.; Akintunde, J. A.; De Poli, M.; Moschini, G.

    2005-10-01

    For the first time, the complementary accelerator-based analytical techniques of PIXE and electron microprobe analysis (EMPA) were employed for the characterization of some of Nigeria's natural minerals, namely fluorite, tourmaline and topaz. These minerals occur in different areas of Nigeria. They are mainly used as gemstones and for other scientific and technological applications and are therefore very important. There is a need to characterize them to establish the quality of these gemstones and to update the geochemical data on them with a view to useful applications. PIXE analysis was carried out using the 1.8 MeV collimated proton beam from the 2.5 MV AN 2000 Van de Graaff accelerator at INFN, LNL, Legnaro, Padova, Italy. The novel results, which show many elements at different concentrations in these minerals, are presented and discussed.

  3. Quantitative elemental analysis of an industrial mineral talc, using accelerator-based analytical technique

    NASA Astrophysics Data System (ADS)

    Olabanji, S. O.; Ige, A. O.; Mazzoli, C.; Ceccato, D.; Ajayi, E. O. B.; De Poli, M.; Moschini, G.

    2005-10-01

    The accelerator-based technique of PIXE was employed for the determination of the elemental concentrations of an industrial mineral, talc. Talc is a very versatile industrial mineral with several applications. There is therefore a need to know its constituents, to ensure that workers are not exposed to health risks. Besides, microscopic tests on some talc samples in Nigeria confirm that they fall within the British Pharmacopoeia (BP) standard for tablet formulation. However, for these samples to become a local source of raw material for pharmaceutical-grade talc, their precise elemental compositions should be established, which is the focus of this work. The proton beam produced by the 2.5 MV AN 2000 Van de Graaff accelerator at INFN, LNL, Legnaro, Padova, Italy was used for the PIXE measurements. The results, which show the concentrations of different elements in the talc samples, together with their health implications and metabolic roles, are presented and discussed.

  4. Studies of industrial emissions by accelerator-based techniques: A review of applications at CEDAD

    NASA Astrophysics Data System (ADS)

    Calcagnile, L.; Quarta, G.

    2012-04-01

    Different research activities are in progress at the Centre for Dating and Diagnostics (CEDAD), University of Salento, in the field of environmental monitoring, exploiting the potential offered by the different experimental beam lines implemented on the 3 MV Tandetron accelerator and dedicated to AMS (Accelerator Mass Spectrometry) radiocarbon dating and IBA (Ion Beam Analysis). An overview of these activities is presented, showing how accelerator-based analytical techniques can be a powerful tool for monitoring anthropogenic carbon dioxide emissions from industrial sources and for assessing the biogenic content of SRF (Solid Recovered Fuel) burned in WTE (Waste to Energy) plants.

  5. Accelerator-based analytical technique in the study of some anti-diabetic medicinal plants of Nigeria

    NASA Astrophysics Data System (ADS)

    Olabanji, S. O.; Omobuwajo, O. R.; Ceccato, D.; Adebajo, A. C.; Buoso, M. C.; Moschini, G.

    2008-05-01

    Diabetes mellitus, a clinical syndrome characterized by hyperglycemia due to a deficiency of insulin, is a disease involving the endocrine pancreas that causes considerable morbidity and mortality worldwide. In Nigeria, many plants, especially those used in herbal recipes for the treatment of diabetes, have not been screened for their elemental constituents, while information on the phytochemistry of some of them is not available. There is therefore a need to document these constituents, as some of these plants are becoming increasingly important as herbal drugs or food additives. The accelerator-based technique PIXE, using the 1.8 MeV collimated proton beam from the 2.5 MV AN 2000 Van de Graaff accelerator at INFN, LNL, Legnaro (Padova), Italy, was employed in the determination of the elemental constituents of these anti-diabetic medicinal plants. Leaves of Gardenia ternifolia, Caesalpinia pulcherrima and Solenostemon monostachys, the whole plant of Momordica charantia, and the leaf and stem bark of Hunteria umbellata could be taken as vegetables, nutraceuticals, food additives and supplements in the management of diabetes. However, Hexalobus monopetalus root should be used only under prescription.

  6. Bioimaging of cells and tissues using accelerator-based sources.

    PubMed

    Petibois, Cyril; Cestelli Guidi, Mariangela

    2008-07-01

    A variety of techniques exist that provide chemical information in the form of a spatially resolved image: electron microprobe analysis, nuclear microprobe analysis, synchrotron radiation microprobe analysis, secondary ion mass spectrometry, and confocal fluorescence microscopy. Linear (LINAC) and circular (synchrotron) particle accelerators have been constructed worldwide to provide the scientific community with unprecedented analytical performance. These facilities now match at least one of the three analytical features required for the biological field: (1) a sufficient spatial resolution for single-cell (<1 μm) or tissue (<1 mm) analyses, (2) a temporal resolution to follow molecular dynamics, and (3) a sensitivity in the micromolar to nanomolar range, thus allowing true investigations of biological dynamics. Third-generation synchrotrons now offer the opportunity of bioanalytical measurements at nanometer resolutions with remarkable sensitivity. Linear accelerators are more specialized in their physical features but may exceed synchrotron performance. All these techniques have become irreplaceable tools for developing knowledge in biology. This review highlights the pros and cons of the most popular techniques that have been implemented on accelerator-based sources to address analytical issues on biological specimens.

  7. An accelerated photo-magnetic imaging reconstruction algorithm based on an analytical forward solution and a fast Jacobian assembly method

    NASA Astrophysics Data System (ADS)

    Nouizi, F.; Erkol, H.; Luk, A.; Marks, M.; Unlu, M. B.; Gulsen, G.

    2016-10-01

    We previously introduced photo-magnetic imaging (PMI), an imaging technique that illuminates the medium under investigation with near-infrared light and measures the induced temperature increase using magnetic resonance thermometry (MRT). Using a multiphysics solver combining photon migration and heat diffusion, PMI models the spatiotemporal distribution of temperature variation and recovers high-resolution optical absorption images from these temperature maps. In this paper, we present a new fast non-iterative reconstruction algorithm for PMI. The new algorithm uses analytic methods for both the resolution of the forward problem and the assembly of the sensitivity matrix. We validate our analytic-based algorithm against the first-generation finite element method (FEM) based reconstruction algorithm previously developed by our team. The validation is performed using first synthetic data and afterwards real MRT-measured temperature maps. Our new method accelerates the reconstruction process 30-fold compared to a single iteration of the FEM-based algorithm.

  8. Intracavity optogalvanic spectroscopy. An analytical technique for 14C analysis with subattomole sensitivity.

    PubMed

    Murnick, Daniel E; Dogru, Ozgur; Ilkmen, Erhan

    2008-07-01

    We show a new ultrasensitive laser-based analytical technique, intracavity optogalvanic spectroscopy, allowing extremely high sensitivity for detection of ¹⁴C-labeled carbon dioxide. Capable of replacing large accelerator mass spectrometers, the technique quantifies attomoles of ¹⁴C in submicrogram samples. Based on the specificity of narrow laser resonances, coupled with the sensitivity provided by standing waves in an optical cavity and detection via impedance variations, limits of detection near 10⁻¹⁵ in the ¹⁴C/¹²C ratio are obtained. Using a 15 W ¹⁴CO₂ laser, a linear calibration with samples from 10⁻¹⁵ to >1.5 × 10⁻¹² in ¹⁴C/¹²C ratio, as determined by accelerator mass spectrometry, is demonstrated. Possible applications include microdosing studies in drug development, individualized subtherapeutic tests of drug metabolism, carbon dating and real-time monitoring of atmospheric radiocarbon. The method can also be applied to the detection of other trace entities.
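
    As a rough back-of-the-envelope check of the "subattomole" claim (not taken from the paper; the sample mass and ratio below are illustrative round numbers near the reported detection limit), the following sketch counts the ¹⁴C present in one microgram of carbon at a ¹⁴C/¹²C ratio of 10⁻¹⁵:

```python
# Back-of-the-envelope check: how many 14C atoms are in 1 microgram of
# carbon at a 14C/12C ratio of 1e-15 (near the reported detection limit)?
N_A = 6.022e23        # Avogadro's number, atoms/mol
M_C = 12.0            # molar mass of carbon, g/mol (12C approximation)
sample_g = 1e-6       # 1 microgram of carbon (illustrative)
ratio = 1e-15         # assumed 14C/12C isotope ratio

mol_C = sample_g / M_C            # ~8.3e-8 mol of carbon
mol_14C = mol_C * ratio           # ~8.3e-23 mol of 14C
atoms_14C = mol_14C * N_A         # ~50 atoms

print(f"14C amount: {mol_14C:.1e} mol = {mol_14C/1e-18:.1e} attomol")
print(f"14C atoms : {atoms_14C:.0f}")
```

    The result, on the order of 10⁻⁵ attomole (about fifty atoms), is consistent with the subattomole sensitivity quoted in the title.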

  9. MS-Based Analytical Techniques: Advances in Spray-Based Methods and EI-LC-MS Applications

    PubMed Central

    Medina, Isabel; Cappiello, Achille; Careri, Maria

    2018-01-01

    Mass spectrometry is the most powerful technique for the detection and identification of organic compounds. It can provide molecular weight information and a wealth of structural details that give a unique fingerprint for each analyte. Due to these characteristics, mass spectrometry-based analytical methods are attracting increasing interest in the scientific community, especially in the food safety, environmental, and forensic investigation areas, where the simultaneous detection of targeted and nontargeted compounds represents a key factor. In addition, safety risks can be identified at an early stage through online and real-time analytical methodologies. In this context, several efforts have been made to develop analytical instrumentation able to perform real-time analysis in the native environment of samples and to generate highly informative spectra. This review article provides a survey of instrumental innovations and their applications, with particular attention to spray-based MS methods and food analysis issues. The survey attempts to cover the state of the art from 2012 up to 2017. PMID:29850370

  10. Extended analytical formulas for the perturbed Keplerian motion under a constant control acceleration

    NASA Astrophysics Data System (ADS)

    Zuiani, Federico; Vasile, Massimiliano

    2015-03-01

    This paper presents a set of analytical formulae for the perturbed Keplerian motion of a spacecraft under the effect of a constant control acceleration. The proposed set of formulae can treat control accelerations that are fixed in either a rotating or an inertial reference frame. Moreover, the contribution of the zonal harmonic is included in the analytical formulae. It is shown that the proposed analytical theory allows for the fast computation of long, multi-revolution spirals while maintaining good accuracy. The combined effect of different perturbations and of the shadow regions due to solar eclipse is also included. Furthermore, a simplified control parameterisation is introduced to optimise thrusting patterns with two thrust arcs and two coast arcs per revolution. This simple parameterisation is shown to ensure enough flexibility to describe complex low-thrust spirals. The accuracy and speed of the proposed analytical formulae are compared against a full numerical integration with different integration schemes. An averaging technique is then proposed as an application of the analytical formulae. Finally, the paper presents an example of the design of an optimal low-thrust spiral to transfer a spacecraft from an elliptical to a circular orbit around the Earth.
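
    The paper's formulae cover fixed accelerations in rotating or inertial frames with zonal-harmonic and eclipse effects; those are not reproduced here. As a minimal illustration of the kind of closed-form result such theories rest on, the sketch below uses the textbook special case of a near-circular orbit under constant tangential acceleration, where the Gauss equation da/dt = 2 sqrt(a³/μ) f integrates to 1/√a(t) = 1/√a₀ − f t/√μ (all numbers are hypothetical):

```python
import math

mu = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
a0 = 7.0e6            # initial semi-major axis, m (near LEO, hypothetical)
f  = 1.0e-4           # constant tangential acceleration, m/s^2 (hypothetical)
t  = 5.0 * 86400.0    # 5 days of continuous thrust

# Closed form of da/dt = 2*sqrt(a^3/mu)*f for a near-circular orbit:
#   1/sqrt(a(t)) = 1/sqrt(a0) - f*t/sqrt(mu)
a_t = (1.0 / math.sqrt(a0) - f * t / math.sqrt(mu)) ** -2

# Cross-check against a crude forward-Euler integration of the same ODE
a_num, dt = a0, 60.0
for _ in range(int(t / dt)):
    a_num += 2.0 * math.sqrt(a_num**3 / mu) * f * dt

print(f"analytic: a = {a_t/1e3:.1f} km, numeric: a = {a_num/1e3:.1f} km")
```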

  11. Hyphenated analytical techniques for materials characterisation

    NASA Astrophysics Data System (ADS)

    Armstrong, Gordon; Kailas, Lekshmi

    2017-09-01

    This topical review provides a survey of the current state of the art in ‘hyphenated’ techniques for the characterisation of bulk materials, surfaces, and interfaces, whereby two or more analytical methods investigating different properties are applied simultaneously to the same sample to characterise it better than can be achieved by conducting separate analyses in series using different instruments. It is intended for final-year undergraduates and recent graduates who may have some background knowledge of standard analytical techniques but are not familiar with ‘hyphenated’ techniques or hybrid instrumentation. The review begins by defining ‘complementary’, ‘hybrid’ and ‘hyphenated’ techniques, as there is no broad consensus among analytical scientists as to what each term means. The motivating factors driving the increased development of hyphenated analytical methods are also discussed. The introduction concludes with a brief discussion of gas chromatography-mass spectrometry and energy-dispersive x-ray analysis in electron microscopy as two examples, in the context that combining complementary techniques for chemical analysis was among the earliest examples of hyphenated characterisation methods. The emphasis of the main review is on techniques that are sufficiently well established that the instrumentation is commercially available, examining physical properties including mechanical, electrical and thermal behaviour, in addition to variations in composition, rather than methods solely to identify and quantify chemical species. The review therefore addresses three broad categories of techniques that the reader may expect to encounter in a well-equipped materials characterisation laboratory: microscopy-based techniques, scanning probe-based techniques, and thermal analysis-based techniques. Examples drawn from recent literature, and a concluding case study, are used to explain the …

  12. Analytical tools in accelerator physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Litvinenko, V.N.

    2010-09-01

    This paper is a sub-set of my lectures presented in the Accelerator Physics course (USPAS, Santa Rosa, California, January 14-25, 2008). It is based on notes I wrote during the period from 1976 to 1979 in Novosibirsk. Only a few copies (in Russian) were distributed to my colleagues in the Novosibirsk Institute of Nuclear Physics. The goal of these notes is a complete description starting from an arbitrary reference orbit, through explicit expressions for the 4-potential and the accelerator Hamiltonian, and finishing with the parameterization with action and angle variables. To a large degree they follow the logic developed in Theory of Cyclic Particle Accelerators by A.A. Kolomensky and A.N. Lebedev [Kolomensky], but go beyond the book in a number of directions. One unusual feature of these notes is the use of matrix functions and the Sylvester formula for calculating the matrices of arbitrary elements. Teaching the USPAS course motivated me to translate a significant part of my notes into English. I also included some introductory materials following Classical Theory of Fields by L.D. Landau and E.M. Lifshitz [Landau]. A large number of short notes covering various techniques are placed in the Appendices.

  13. Analytical techniques: A compilation

    NASA Technical Reports Server (NTRS)

    1975-01-01

    A compilation, containing articles on a number of analytical techniques for quality control engineers and laboratory workers, is presented. Data cover techniques for testing electronic, mechanical, and optical systems, nondestructive testing techniques, and gas analysis techniques.

  14. Cost and schedule analytical techniques development

    NASA Technical Reports Server (NTRS)

    1994-01-01

    This contract provided technical services and products to the Marshall Space Flight Center's Engineering Cost Office (PP03) and the Program Plans and Requirements Office (PP02) for the period 3 Aug. 1991 - 30 Nov. 1994. The accomplishments summarized cover the REDSTAR data base, the NASCOM hard-copy data base, the NASCOM automated data base, the NASCOM cost model, complexity generators, program planning, schedules, NASA computer connectivity, other analytical techniques, and special project support.

  15. Comparison of extraction techniques and modeling of accelerated solvent extraction for the authentication of natural vanilla flavors.

    PubMed

    Cicchetti, Esmeralda; Chaintreau, Alain

    2009-06-01

    Accelerated solvent extraction (ASE) of vanilla beans has been optimized using ethanol as the solvent. A theoretical model is proposed to account for this multistep extraction. This allows the determination, for the first time, of the total amount of analytes initially present in the beans, and thus the calculation of recoveries for ASE or any other extraction technique. On this basis, ASE and Soxhlet extraction were determined to be efficient methods, whereas recoveries are modest for maceration techniques and depend on the solvent used. Because industrial extracts are obtained by many different procedures, including maceration in various solvents, authenticating vanilla extracts using quantitative ratios between the amounts of vanilla flavor constituents appears to be unreliable. When authentication techniques based on isotopic ratios are used, ASE is a valid sample preparation technique because it does not induce isotopic fractionation.
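
    The abstract does not reproduce the model itself; as a sketch of how a multistep extraction can yield the total initial analyte amount, assume each ASE cycle removes a constant fraction p of what remains, so per-cycle yields decay geometrically (this constant-efficiency assumption and all numbers are illustrative, not the authors' exact formulation):

```python
import numpy as np

# Toy multistep-extraction model: y_k = M * p * (1 - p)**(k - 1), where M is
# the total analyte initially in the beans and p the per-cycle efficiency.
yields = np.array([12.4, 6.1, 3.0, 1.5])     # hypothetical mg per ASE cycle

k = np.arange(1, len(yields) + 1)
slope, _ = np.polyfit(k, np.log(yields), 1)  # slope = log(1 - p)
p = 1.0 - np.exp(slope)                      # per-cycle extraction efficiency
M = yields[0] / p                            # inferred total initial amount

print(f"per-cycle efficiency p ~ {p:.2f}")
print(f"total analyte M ~ {M:.1f} mg")
print(f"recovery after {len(yields)} cycles ~ {yields.sum()/M:.1%}")
```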

  16. Accelerator-based neutrino oscillation experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harris, Deborah A.; /Fermilab

    2007-12-01

    Neutrino oscillations were first discovered by experiments looking at neutrinos coming from extra-terrestrial sources, namely the sun and the atmosphere, but we will be depending on earth-based sources to take many of the next steps in this field. This article describes what has been learned so far from accelerator-based neutrino oscillation experiments, and then describes very generally what the next accelerator-based steps are. Section 2 discusses how one uses an accelerator to make a neutrino beam, in particular one made from decays in flight of charged pions. There are several different neutrino detection methods currently in use or under development. These are presented in section 3, with a description of the general concept, an example of such a detector, and a brief discussion of the outstanding issues associated with each detection technique. Finally, section 4 describes how measurements of oscillation probabilities are made. This includes a description of the near detector technique and how it can be used to make the most precise measurements of neutrino oscillations.

  17. Analytical Techniques and Pharmacokinetics of Gastrodia elata Blume and Its Constituents.

    PubMed

    Wu, Jinyi; Wu, Bingchu; Tang, Chunlan; Zhao, Jinshun

    2017-07-08

    Gastrodia elata Blume (G. elata), commonly called Tianma in Chinese, is an important and notable traditional Chinese medicine (TCM), which has been used in China as an anticonvulsant, analgesic, sedative, anti-asthma and anti-immune drug since ancient times. The aim of this review is to provide an overview of the abundant efforts of scientists in developing analytical techniques and performing pharmacokinetic studies of G. elata and its constituents, including sample pretreatment methods, analytical techniques, absorption, distribution, metabolism, excretion (ADME) and factors influencing its pharmacokinetics. Based on the reported pharmacokinetic data for G. elata and its constituents, it is hoped that more studies will focus on the development of rapid and sensitive analytical techniques, the discovery of new therapeutic uses and the understanding of the specific in vivo mechanisms of action of G. elata and its constituents from the pharmacokinetic viewpoint in the near future. The present review discusses analytical techniques and pharmacokinetics of G. elata and its constituents reported from 1985 onwards.

  18. Torque-based optimal acceleration control for electric vehicle

    NASA Astrophysics Data System (ADS)

    Lu, Dongbin; Ouyang, Minggao

    2014-03-01

    Existing research on acceleration control mainly focuses on optimization of the velocity trajectory with respect to a criterion that weights acceleration time and fuel consumption. The minimum-fuel acceleration problem in conventional vehicles has been solved by Pontryagin's maximum principle and by dynamic programming. Acceleration control with minimum energy consumption for a battery electric vehicle (EV) has not been reported. In this paper, the permanent magnet synchronous motor (PMSM) is controlled by the field-oriented control (FOC) method, and the electric drive system of the EV (including the PMSM, the inverter and the battery) is modeled physically in preference to a detailed consumption map. An analytical algorithm is proposed to analyze the optimal acceleration control, and the optimal torque-versus-speed curve in the acceleration process is obtained. Considering the acceleration time, a penalty function is introduced to realize fast vehicle speed tracking. The optimal acceleration control is also addressed with dynamic programming (DP). This method can solve the optimal acceleration problem with a precise time constraint, but it consumes a large amount of computation time. The EV used in simulation and experiment is a four-wheel hub-motor-drive electric vehicle. The simulation and experimental results show that the required battery energy differs little between the acceleration control found by the analytical algorithm and that found by DP, and is greatly reduced compared with constant-pedal-opening acceleration. The proposed analytical and DP algorithms can minimize the energy consumption in the EV's acceleration process, and the analytical algorithm is easy to implement in real-time control.
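
    As a toy illustration of the dynamic-programming branch of such a problem (the paper's PMSM/inverter/battery model is not reproduced; the mass, drag and efficiency numbers below are invented), the sketch discretizes velocity and steps backward in time to find a minimum-energy acceleration to a target speed within a fixed time:

```python
import numpy as np

m, dt, T = 1500.0, 0.5, 10.0                 # mass [kg], step [s], horizon [s]
v_grid = np.arange(0.0, 20.25, 0.25)         # velocity grid [m/s]
v_target, a_max = 20.0, 3.0                  # final speed, accel limit [m/s^2]
c_drag, eta = 0.8, 0.85                      # toy drag coefficient, efficiency

def step_energy(v, v_next):
    a = (v_next - v) / dt
    P = (m * a * v + c_drag * v**3) / eta    # toy battery power draw [W]
    return max(P, 0.0) * dt                  # no regeneration in this toy

# cost[i] = minimal energy from v_grid[i] to v_target at the horizon
cost = np.where(np.isclose(v_grid, v_target), 0.0, np.inf)
for _ in range(int(T / dt)):
    new_cost = np.full_like(cost, np.inf)
    for i, v in enumerate(v_grid):
        for j, v_next in enumerate(v_grid):
            if abs(v_next - v) / dt <= a_max:
                c = step_energy(v, v_next) + cost[j]
                if c < new_cost[i]:
                    new_cost[i] = c
    cost = new_cost

print(f"minimum energy 0 -> {v_target} m/s in {T} s: {cost[0]/1e3:.1f} kJ")
```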

  19. Glycoprotein Enrichment Analytical Techniques: Advantages and Disadvantages.

    PubMed

    Zhu, R; Zacharias, L; Wooding, K M; Peng, W; Mechref, Y

    2017-01-01

    Protein glycosylation is one of the most important posttranslational modifications. Numerous biological functions are related to protein glycosylation. However, analytical challenges remain in glycoprotein analysis. To overcome the challenges associated with glycoprotein analysis, many analytical techniques have been developed in recent years. Enrichment methods were used to improve the sensitivity of detection, while HPLC and mass spectrometry methods were developed to facilitate the separation of glycopeptides/proteins and enhance detection, respectively. Fragmentation techniques applied in modern mass spectrometers allow the structural interpretation of glycopeptides/proteins, while automated software tools have started replacing manual processing to improve the reliability and throughput of the analysis. In this chapter, the current methodologies of glycoprotein analysis are discussed. Multiple analytical techniques are compared, and the advantages and disadvantages of each technique are highlighted. © 2017 Elsevier Inc. All rights reserved.

  20. Considerations on the Use of Custom Accelerators for Big Data Analytics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Castellana, Vito G.; Tumeo, Antonino; Minutoli, Marco

    Accelerators, including Graphic Processing Units (GPUs) for general-purpose computation and many-core designs with wide vector units (e.g., Intel Phi), have become a common component of many high performance clusters. The appearance of more stable and reliable tools that can automatically convert code written in high-level specifications with annotations (such as C or C++) to hardware description languages (High-Level Synthesis - HLS) is also setting the stage for a broader use of reconfigurable devices (e.g., Field Programmable Gate Arrays - FPGAs) in high performance systems for the implementation of custom accelerators, helped by the fact that new processors include advanced cache-coherent interconnects for these components. In this chapter, we briefly survey the status of the use of accelerators in high performance systems targeted at big data analytics applications. We argue that, although the progress in the use of accelerators for this class of applications has been significant, differently from scientific simulations there still are gaps to close. This is particularly true for the "irregular" behaviors exhibited by no-SQL graph databases. We focus our attention on the limits of HLS tools for data analytics and graph methods, and discuss a new architectural template that better fits the requirements of this class of applications. We validate the new architectural template by modifying the Graph Engine for Multithreaded Systems (GEMS) framework to support accelerators generated with such a methodology, and by testing with queries coming from the Lehigh University Benchmark (LUBM). The architectural template better supports the task and memory level parallelism present in graph methods through a new control model and an enhanced memory interface. We show that our solution allows generating parallel accelerators, providing speed-ups with respect to conventional HLS flows. We finally draw conclusions and present a …

  1. An analytic linear accelerator source model for GPU-based Monte Carlo dose calculations.

    PubMed

    Tian, Zhen; Li, Yongbao; Folkerts, Michael; Shi, Feng; Jiang, Steve B; Jia, Xun

    2015-10-21

    Recently, there has been much research interest in developing fast Monte Carlo (MC) dose calculation methods on graphics processing unit (GPU) platforms. A good linear accelerator (linac) source model is critical for both accuracy and efficiency. In principle, an analytical source model should be preferred over a phase-space file-based model for GPU-based MC dose engines, in that data loading and CPU-GPU data transfer can be avoided. In this paper, we present an analytical, field-independent source model specifically developed for GPU-based MC dose calculations, together with a GPU-friendly sampling scheme. A key concept called the phase-space ring (PSR) was proposed. Each PSR contained a group of particles that were of the same type, close in energy and resided in a narrow ring on the phase-space plane located just above the upper jaws. The model parameterized the probability densities of particle location, direction and energy for each primary photon PSR, scattered photon PSR and electron PSR. Models of one 2D Gaussian distribution or multiple Gaussian components were employed to represent the particle direction distributions of these PSRs. A method was developed to analyze a reference phase-space file and derive the corresponding model parameters. To use our model efficiently in MC dose calculations on GPU, we proposed a GPU-friendly sampling strategy, which ensured that the particles sampled and transported simultaneously were of the same type and close in energy, to alleviate GPU thread divergence. To test the accuracy of our model, dose distributions of a set of open fields in a water phantom were calculated using our source model and compared to those calculated using the reference phase-space files. For the high dose gradient regions, the average distance-to-agreement (DTA) was within 1 mm and the maximum DTA within 2 mm. For relatively low dose gradient regions, the root-mean-square (RMS) dose difference was within 1.1% and the maximum …
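
    The grouping idea can be sketched in a few lines: all particles drawn from one ring share a type and a narrow energy bin, so the corresponding GPU threads follow the same code path. The following NumPy stand-in (ring radii, energy bin and direction covariance are hypothetical, not values from the paper) samples one such ring:

```python
import numpy as np

rng = np.random.default_rng(0)

# One hypothetical phase-space ring (PSR): shared particle type, narrow
# energy bin, annular position, 2D Gaussian direction model.
psr = {"E_lo": 5.9, "E_hi": 6.1,            # energy bin [MeV]
       "r_lo": 2.0, "r_hi": 2.3,            # ring radii on the plane [cm]
       "mu": np.zeros(2),                   # mean (dx, dy) direction
       "cov": np.diag([1e-4, 1e-4])}        # direction covariance

def sample_psr(psr, n):
    """Sample n particles from one ring; a whole batch stays coherent."""
    E = rng.uniform(psr["E_lo"], psr["E_hi"], n)
    r = np.sqrt(rng.uniform(psr["r_lo"]**2, psr["r_hi"]**2, n))  # uniform in area
    phi = rng.uniform(0.0, 2.0 * np.pi, n)
    pos = np.stack([r * np.cos(phi), r * np.sin(phi)], axis=1)
    dxdy = rng.multivariate_normal(psr["mu"], psr["cov"], n)
    dz = np.sqrt(np.clip(1.0 - (dxdy**2).sum(axis=1), 0.0, 1.0))
    return E, pos, np.column_stack([dxdy, dz])   # unit direction vectors

E, pos, dirs = sample_psr(psr, 10_000)
print(E.min(), E.max(), dirs.mean(axis=0))
```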

  2. Deriving Earth Science Data Analytics Tools/Techniques Requirements

    NASA Astrophysics Data System (ADS)

    Kempler, S. J.

    2015-12-01

    Data analytics applications have made successful strides in the business world, where co-analyzing extremely large sets of independent variables has proven profitable. Today, most data analytics tools and techniques, sometimes applicable to Earth science, have targeted the business industry. In fact, discussion of Earth science data analytics is nearly absent from the literature. Earth science data analytics (ESDA) is the process of examining large amounts of data from a variety of sources to uncover hidden patterns, unknown correlations, and other useful information. ESDA is most often applied to data preparation, data reduction, and data analysis. Co-analysis of a growing number and volume of Earth science data sets has become more prevalent, ushered in by the plethora of Earth science data sources generated by US programs, international programs, field experiments, ground stations, and citizen scientists. Through work associated with the Earth Science Information Partners (ESIP) Federation, ESDA types have been defined in terms of data analytics end goals, goals which are very different from those in business and which require different tools and techniques. A sampling of use cases has been collected and analyzed in terms of data analytics end goal types, volume, specialized processing, and other attributes. The goal of collecting these use cases is to better understand and specify requirements for data analytics tools and techniques yet to be implemented. This presentation will describe the attributes and preliminary findings of ESDA use cases, as well as provide an early analysis of data analytics tools/techniques requirements that would support specific ESDA type goals. Representative existing data analytics tools/techniques relevant to ESDA will also be addressed.

  3. Compensation Techniques in Accelerator Physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sayed, Hisham Kamal

    2011-05-01

    Accelerator physics is one of the most diverse multidisciplinary fields of physics, wherein the dynamics of particle beams is studied. It takes more than an understanding of basic electromagnetic interactions to be able to predict the beam dynamics and to develop new techniques to produce, maintain, and deliver high quality beams for different applications. In this work, some basic theory regarding particle beam dynamics in accelerators is presented. This basic theory, along with state-of-the-art techniques in beam dynamics, is used in this dissertation to study and solve accelerator physics problems. Two problems involving compensation are studied in the context of the MEIC (Medium Energy Electron Ion Collider) project at Jefferson Laboratory. Several chromaticity (the energy dependence of the particle tune) compensation methods are evaluated numerically and deployed in a figure-eight ring designed for the electrons in the collider. Furthermore, transverse coupling optics have been developed to compensate the coupling introduced by the spin rotators in the MEIC electron ring design.

  4. Fate of the chemical warfare agent O-ethyl S-2-diisopropylaminoethyl methylphosphonothiolate (VX) on soil following accelerant-based fire and liquid decontamination.

    PubMed

    Gravett, M R; Hopkins, F B; Self, A J; Webb, A J; Timperley, C M; Riches, J R

    2014-08-01

    In the event of alleged use of organophosphorus nerve agents, all kinds of environmental samples can be received for analysis. These might include decontaminated and charred matter collected from the site of a suspected chemical attack. In other scenarios, such matter might be sampled to confirm the site of a chemical weapon test, or of a clandestine laboratory that was decontaminated and burned to prevent discovery. To provide an analytical capability for these contingencies, we present a preliminary investigation of the effect of accelerant-based fire and liquid decontamination on soil contaminated with the nerve agent O-ethyl S-2-diisopropylaminoethyl methylphosphonothiolate (VX). The objectives were (a) to determine whether VX or its degradation products were detectable in soil after an accelerant-based fire promoted by aviation fuel, including following decontamination with Decontamination Solution 2 (DS2) or aqueous sodium hypochlorite, (b) to develop analytical methods to support forensic analysis of accelerant-soaked, decontaminated and charred soil, and (c) to inform the design of future experiments of this type to improve analytical fidelity. Our results show for the first time that modern analytical techniques can be used to identify residual VX and its degradation products in contaminated soil after an accelerant-based fire, and after chemical decontamination followed by fire. Comparison of the gas chromatography-mass spectrometry (GC-MS) profiles of VX and its impurities/degradation products from contaminated burnt soil, and from burnt soil spiked with VX, indicated that the fire resulted in the production of diethyl methylphosphonate and O,S-diethyl methylphosphonothiolate (by an unknown mechanism). Other products identified were indicative of chemical decontamination, and some of these provided evidence of the decontaminant used, for example ethyl 2-methoxyethyl methylphosphonate and bis(2-methoxyethyl) methylphosphonate following decontamination with DS2. Sample preparation …

  5. An analytical reconstruction model of the spread-out Bragg peak using laser-accelerated proton beams.

    PubMed

    Tao, Li; Zhu, Kun; Zhu, Jungao; Xu, Xiaohan; Lin, Chen; Ma, Wenjun; Lu, Haiyang; Zhao, Yanying; Lu, Yuanrong; Chen, Jia-Er; Yan, Xueqing

    2017-07-07

    With the development of laser technology, laser-driven proton acceleration provides a new method for proton tumor therapy. However, it has not been applied in practice because of the wide and decreasing energy spectrum of laser-accelerated proton beams. In this paper, we propose an analytical model to reconstruct the spread-out Bragg peak (SOBP) using laser-accelerated proton beams. Firstly, we present a modified weighting formula for protons of different energies. Secondly, a theoretical model for the reconstruction of SOBPs with laser-accelerated proton beams has been built. It can quickly calculate the number of laser shots needed for each energy interval of the laser-accelerated protons. Finally, we show the 2D reconstruction results of SOBPs for laser-accelerated proton beams and the ideal situation. The final results show that our analytical model can give an SOBP reconstruction scheme that can be used for actual tumor therapy.
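
    The reconstruction reduces to choosing nonnegative weights for pristine peaks of different ranges so that the summed dose is flat across the target. The sketch below illustrates that least-squares weighting step with a crude analytic stand-in for the depth-dose curve (neither the peak shape nor the numbers come from the paper):

```python
import numpy as np

z = np.linspace(0.0, 15.0, 301)              # depth [cm]

def bragg_peak(z, R):
    # crude stand-in: low entrance plateau plus a Gaussian peak near range R
    return 0.3 * np.exp(0.05 * (z - R)) * (z < R + 0.5) \
           + np.exp(-((z - R) / 0.25) ** 2)

ranges = np.linspace(8.0, 12.0, 9)           # peak positions to superpose
A = np.stack([bragg_peak(z, R) for R in ranges], axis=1)

target = ((z >= 8.0) & (z <= 12.0)).astype(float)   # flat dose over 8-12 cm
w, *_ = np.linalg.lstsq(A, target, rcond=None)
w = np.clip(w, 0.0, None)                    # keep weights physical

sobp = A @ w
plateau = sobp[(z >= 8.5) & (z <= 11.5)]
print(f"SOBP ripple: {100*(plateau.max()-plateau.min())/plateau.mean():.1f}%")
```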

  6. Characterizing nonconstant instrumental variance in emerging miniaturized analytical techniques.

    PubMed

    Noblitt, Scott D; Berg, Kathleen E; Cate, David M; Henry, Charles S

    2016-04-07

    Measurement variance is a crucial aspect of quantitative chemical analysis. Variance directly affects important analytical figures of merit, including detection limit, quantitation limit, and confidence intervals. Most reported analyses for emerging analytical techniques implicitly assume constant variance (homoskedasticity) by using unweighted regression calibrations. Despite the assumption of constant variance, it is known that most instruments exhibit heteroskedasticity, where variance changes with signal intensity. Ignoring nonconstant variance results in suboptimal calibrations, invalid uncertainty estimates, and incorrect detection limits. Three techniques where homoskedasticity is often assumed were covered in this work to evaluate whether heteroskedasticity had a significant quantitative impact: naked-eye, distance-based detection using paper-based analytical devices (PADs); cathodic stripping voltammetry (CSV) with disposable carbon-ink electrode devices; and microchip electrophoresis (MCE) with conductivity detection. Despite these techniques representing a wide range of chemistries and precision, heteroskedastic behavior was confirmed for each. The general variance forms were analyzed, and recommendations for accounting for nonconstant variance are discussed. Monte Carlo simulations of instrument responses were performed to quantify the benefits of weighted regression, and the sensitivity to uncertainty in the variance function was tested. Results show that heteroskedasticity should be considered during the development of new techniques; even moderate uncertainty (30%) in the variance function still results in weighted regression outperforming unweighted regression. We recommend the power model of variance because it is easy to apply, requires little additional experimentation, and produces higher-precision results and more reliable uncertainty estimates than assuming homoskedasticity. Copyright © 2016 Elsevier B.V. All rights reserved.
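
    The recommended power model is straightforward to apply; the sketch below (with invented calibration data) estimates sigma(x) = a·x^b from replicate standard deviations and then uses w = 1/sigma² in a weighted least-squares calibration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated heteroskedastic calibration: noise follows sigma(x) = a * x**b.
conc = np.repeat([1.0, 2.0, 5.0, 10.0, 20.0, 50.0], 5)   # invented levels
signal = 2.0 * conc + rng.normal(0.0, 0.05 * conc**0.8)  # true slope = 2

# Fit the power variance model from replicate standard deviations
levels = np.unique(conc)
sds = np.array([signal[conc == c].std(ddof=1) for c in levels])
b_hat, log_a_hat = np.polyfit(np.log(levels), np.log(sds), 1)

# Weighted least squares (through the origin) with w = 1 / sigma(x)^2
w = np.exp(-2.0 * log_a_hat) * conc ** (-2.0 * b_hat)
slope_wls = np.sum(w * conc * signal) / np.sum(w * conc**2)
slope_ols = np.sum(conc * signal) / np.sum(conc**2)

print(f"fitted exponent b ~ {b_hat:.2f}")
print(f"WLS slope {slope_wls:.3f} vs OLS slope {slope_ols:.3f} (true 2.000)")
```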

  7. Analytical techniques for steroid estrogens in water samples - A review.

    PubMed

    Fang, Ting Yien; Praveena, Sarva Mangala; deBurbure, Claire; Aris, Ahmad Zaharin; Ismail, Sharifah Norkhadijah Syed; Rasdi, Irniza

    2016-12-01

    In recent years, environmental concern over ultra-trace levels of steroid estrogens in water samples has increased because of their adverse effects on human and animal life. Special attention to the analytical techniques used to quantify steroid estrogens in water samples is therefore increasingly important. The objective of this review was to present an overview of both instrumental and non-instrumental analytical techniques available for the determination of steroid estrogens in water samples, evidencing their respective potential advantages and limitations using the Need, Approach, Benefit, and Competition (NABC) approach. The analytical techniques highlighted in this review were gas chromatography mass spectrometry (GC-MS), liquid chromatography mass spectrometry (LC-MS), enzyme-linked immunosorbent assay (ELISA), radioimmunoassay (RIA), the yeast estrogen screen (YES) assay, and the human breast cancer cell line proliferation (E-screen) assay. The complexity of water samples and their low estrogenic concentrations necessitate the use of highly sensitive instrumental analytical techniques (GC-MS and LC-MS) and non-instrumental analytical techniques (ELISA, RIA, YES assay and E-screen assay) to quantify steroid estrogens. Both instrumental and non-instrumental analytical techniques have their own advantages and limitations. However, the non-instrumental ELISA technique, thanks to its lower detection limit, simplicity, rapidity and cost-effectiveness, currently appears to be the most reliable for determining steroid estrogens in water samples. Copyright © 2016 Elsevier Ltd. All rights reserved.

  8. Evaluation of marginal gap of Ni-Cr copings made with conventional and accelerated casting techniques.

    PubMed

    Tannamala, Pavan Kumar; Azhagarasan, Nagarasampatti Sivaprakasam; Shankar, K Chitra

    2013-01-01

    Conventional casting techniques following the manufacturers' recommendations are time consuming. Accelerated casting techniques have been reported, but their accuracy with base metal alloys has not been adequately studied. We measured the vertical marginal gap of nickel-chromium copings made by conventional and accelerated casting techniques and determined the clinical acceptability of the cast copings in this study. The study used an experimental in vitro design in a laboratory setting. Ten copings each were cast by the conventional and the accelerated casting technique. All copings were identical; only their mold preparation schedules differed. Microscopic measurements were recorded at ×80 magnification perpendicular to the axial wall at four predetermined sites. The marginal gap values were evaluated by a paired t test. The mean marginal gap obtained with the conventional technique (34.02 μm) is approximately 10 μm less than that of the accelerated casting technique (44.62 μm). As the P value is less than 0.0001, there is a highly significant difference between the two techniques with regard to vertical marginal gap. The accelerated casting technique is time saving, and the marginal gap measured was within clinically acceptable limits, so it could be an alternative to time-consuming conventional techniques.

  9. Accelerated Evaluation of Automated Vehicles Safety in Lane-Change Scenarios Based on Importance Sampling Techniques

    PubMed Central

    Zhao, Ding; Lam, Henry; Peng, Huei; Bao, Shan; LeBlanc, David J.; Nobukawa, Kazutoshi; Pan, Christopher S.

    2016-01-01

    Automated vehicles (AVs) must be thoroughly evaluated before their release and deployment. A widely used evaluation approach is the Naturalistic-Field Operational Test (N-FOT), which tests prototype vehicles directly on the public roads. Due to the low exposure to safety-critical scenarios, N-FOTs are time consuming and expensive to conduct. In this paper, we propose an accelerated evaluation approach for AVs. The results can be used to generate motions of the other primary vehicles to accelerate the verification of AVs in simulations and controlled experiments. Frontal collision due to unsafe cut-ins is the target crash type of this paper. Human-controlled vehicles making unsafe lane changes are modeled as the primary disturbance to AVs based on data collected by the University of Michigan Safety Pilot Model Deployment Program. The cut-in scenarios are generated based on skewed statistics of collected human driver behaviors, which generate risky testing scenarios while preserving the statistical information so that the safety benefits of AVs in nonaccelerated cases can be accurately estimated. The cross-entropy method is used to recursively search for the optimal skewing parameters. The frequencies of the occurrences of conflicts, crashes, and injuries are estimated for a modeled AV, and the achieved accelerated rate is around 2000 to 20 000. In other words, in the accelerated simulations, driving for 1000 miles will expose the AV with challenging scenarios that will take about 2 to 20 million miles of real-world driving to encounter. This technique thus has the potential to greatly reduce the development and validation time for AVs. PMID:27840592

  10. Accelerated Evaluation of Automated Vehicles Safety in Lane-Change Scenarios Based on Importance Sampling Techniques.

    PubMed

    Zhao, Ding; Lam, Henry; Peng, Huei; Bao, Shan; LeBlanc, David J; Nobukawa, Kazutoshi; Pan, Christopher S

    2017-03-01

    Automated vehicles (AVs) must be thoroughly evaluated before their release and deployment. A widely used evaluation approach is the Naturalistic-Field Operational Test (N-FOT), which tests prototype vehicles directly on the public roads. Due to the low exposure to safety-critical scenarios, N-FOTs are time consuming and expensive to conduct. In this paper, we propose an accelerated evaluation approach for AVs. The results can be used to generate motions of the other primary vehicles to accelerate the verification of AVs in simulations and controlled experiments. Frontal collision due to unsafe cut-ins is the target crash type of this paper. Human-controlled vehicles making unsafe lane changes are modeled as the primary disturbance to AVs based on data collected by the University of Michigan Safety Pilot Model Deployment Program. The cut-in scenarios are generated based on skewed statistics of collected human driver behaviors, which generate risky testing scenarios while preserving the statistical information so that the safety benefits of AVs in nonaccelerated cases can be accurately estimated. The cross-entropy method is used to recursively search for the optimal skewing parameters. The frequencies of the occurrences of conflicts, crashes, and injuries are estimated for a modeled AV, and the achieved accelerated rate is around 2000 to 20 000. In other words, in the accelerated simulations, driving for 1000 miles will expose the AV with challenging scenarios that will take about 2 to 20 million miles of real-world driving to encounter. This technique thus has the potential to greatly reduce the development and validation time for AVs.
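
    The core mechanism (sample from a skewed distribution, reweight by the likelihood ratio so the estimate stays unbiased) can be shown on a toy rare event; the Gaussian model below is purely illustrative and far simpler than the paper's lane-change statistics:

```python
import math
import numpy as np

rng = np.random.default_rng(2)
n, thresh, shift = 100_000, 4.0, 4.0     # estimate P(X > 4) for X ~ N(0,1)

# Naive Monte Carlo: the event is so rare that few (if any) samples hit it
x = rng.normal(0.0, 1.0, n)
p_naive = np.mean(x > thresh)

# Importance sampling from the "skewed" N(shift, 1), reweighting each sample
# by the density ratio N(0,1)/N(shift,1) = exp(-shift*y + shift^2/2)
y = rng.normal(shift, 1.0, n)
weights = np.exp(-shift * y + 0.5 * shift**2)
p_is = np.mean((y > thresh) * weights)

p_true = 0.5 * math.erfc(thresh / math.sqrt(2.0))
print(f"true {p_true:.3e}  naive {p_naive:.3e}  importance {p_is:.3e}")
```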

  11. Nuclear analytical techniques in medicine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cesareo, R.

    1988-01-01

    This book acquaints the reader with the fundamental principles and instrumentation relevant to analytical techniques based on atomic and nuclear physics, as well as present and future biomedical applications. Besides providing a theoretical description of the physical phenomena, a large part of the book is devoted to applications in the medical and biological field, particularly in hematology, forensic medicine and environmental science. This volume reviews methods such as the possibility of carrying out rapid multi-element analysis of trace elements on biomedical samples, in vitro and in vivo, by XRF analysis; the ability of the PIXE microprobe to analyze trace elements in detail and to map them in fragments of biomedical samples or inside cells; and the potential of in vivo nuclear activation analysis for diagnostic purposes. Finally, techniques are described such as radiation scattering (elastic and inelastic) and attenuation measurements, which will undoubtedly see great development in the immediate future.

  12. Application of real-time digitization techniques in beam measurement for accelerators

    NASA Astrophysics Data System (ADS)

    Zhao, Lei; Zhan, Lin-Song; Gao, Xing-Shun; Liu, Shu-Bin; An, Qi

    2016-04-01

    Beam measurement is very important for accelerators. In this paper, modern digital beam measurement techniques based on IQ (in-phase and quadrature-phase) analysis are discussed. Based on this method and on high-speed, high-resolution analog-to-digital conversion, we have completed three beam measurement electronics systems designed for the China Spallation Neutron Source (CSNS), the Shanghai Synchrotron Radiation Facility (SSRF), and the Accelerator Driven Sub-critical system (ADS). Core techniques of hardware design and real-time system calibration are discussed, and performance test results of these three instruments are presented. Supported by the National Natural Science Foundation of China (11205153, 10875119), the Knowledge Innovation Program of the Chinese Academy of Sciences (KJCX2-YW-N27), the Fundamental Research Funds for the Central Universities (WK2030040029), and the CAS Center for Excellence in Particle Physics (CCEPP).
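
    The IQ analysis itself reduces to mixing the digitized signal with quadrature references and averaging over an integer number of cycles; amplitude and phase then follow from the I and Q products. A minimal sketch (sampling rate, frequency and noise level are illustrative, not CSNS/SSRF/ADS parameters):

```python
import numpy as np

rng = np.random.default_rng(3)

fs, f0, n = 500.0e6, 125.0e6, 4096   # sample rate, IF frequency (fs/4), samples
t = np.arange(n) / fs

amp_true, phase_true = 0.73, 0.6
sig = amp_true * np.cos(2*np.pi*f0*t + phase_true) + 0.01 * rng.normal(size=n)

# Quadrature products averaged over an integer number of cycles:
#   mean(sig*cos) = A/2 * cos(phi),  mean(sig*sin) = -A/2 * sin(phi)
I = 2.0 * np.mean(sig * np.cos(2*np.pi*f0*t))
Q = -2.0 * np.mean(sig * np.sin(2*np.pi*f0*t))

print(f"amplitude {np.hypot(I, Q):.4f} (true {amp_true})")
print(f"phase     {np.arctan2(Q, I):.4f} rad (true {phase_true})")
```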

  13. Analytical impact time and angle guidance via time-varying sliding mode technique.

    PubMed

    Zhao, Yao; Sheng, Yongzhi; Liu, Xiangdong

    2016-05-01

    To provide a concrete, feasible solution for homing missiles with precise impact time and angle, this paper develops a novel guidance law based on the nonlinear engagement dynamics. The guidance law is first designed under the assumption of a stationary target, followed by a practical extension to the moving-target scenario. The time-varying sliding mode (TVSM) technique is applied to fulfill the terminal constraints, in which a specific TVSM surface is constructed with two unknown coefficients. One is tuned to meet the impact time requirement and the other is targeted with a global sliding mode, so that the impact angle constraint as well as zero miss distance can be satisfied. Because the proposed law possesses three guidance gains as design parameters, the intercept trajectory can be shaped according to the operational conditions and the missile's capability. To improve the tolerance of initial heading errors and broaden the applicability, a new frame of reference is also introduced. Furthermore, analytical solutions of the flight trajectory, heading angle and acceleration command can be expressed in full for prediction and offline parameter selection by solving a first-order linear differential equation. Numerical simulation results for various scenarios validate the effectiveness of the proposed guidance law and demonstrate the accuracy of the analytic solutions. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
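
    The abstract's closing remark rests on the standard integrating-factor solution of a first-order linear differential equation; in generic form (not the paper's specific guidance equation):

```latex
\dot{y}(t) + p(t)\,y(t) = q(t)
\quad\Longrightarrow\quad
y(t) = e^{-\int_{t_0}^{t} p(\tau)\,\mathrm{d}\tau}
\left( y(t_0) + \int_{t_0}^{t} q(s)\,
e^{\int_{t_0}^{s} p(\tau)\,\mathrm{d}\tau}\,\mathrm{d}s \right)
```

    Once p(t) and q(t) are fixed by the engagement geometry, the trajectory, heading angle and acceleration command follow in closed form.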

  14. Comparison of marginal accuracy of castings fabricated by conventional casting technique and accelerated casting technique: an in vitro study.

    PubMed

    Reddy, S Srikanth; Revathi, Kakkirala; Reddy, S Kranthikumar

    2013-01-01

    Conventional casting is time consuming compared to accelerated casting. In this study, the marginal accuracy of castings fabricated using the accelerated and the conventional casting technique was compared. Twenty wax patterns were fabricated, and the marginal discrepancy between the die and the patterns was measured using an optical stereomicroscope. Ten wax patterns were used for conventional casting and the rest for accelerated casting. A nickel-chromium alloy was used for the castings. The castings were measured for marginal discrepancies and compared. Castings fabricated using the conventional technique showed less vertical marginal discrepancy than castings fabricated by the accelerated technique, and the difference was statistically highly significant. Conventional casting thus produced better marginal accuracy than accelerated casting. However, the vertical marginal discrepancy produced by the accelerated casting technique was well within the maximum clinical tolerance limits, so accelerated casting can be used to save laboratory time and fabricate clinical crowns with acceptable vertical marginal discrepancy.

  15. Aberration measurement technique based on an analytical linear model of a through-focus aerial image.

    PubMed

    Yan, Guanyong; Wang, Xiangzhao; Li, Sikun; Yang, Jishuo; Xu, Dongbo; Erdmann, Andreas

    2014-03-10

    We propose an in situ aberration measurement technique based on an analytical linear model of through-focus aerial images. The aberrations are retrieved from aerial images of six isolated space patterns, which have the same width but different orientations. The imaging formulas of the space patterns are investigated and simplified, and an analytical linear relationship between the aerial image intensity distributions and the Zernike coefficients is then established. The linear relationship is composed of linear fitting matrices and rotation matrices, which can be calculated numerically in advance and utilized to retrieve the Zernike coefficients. Numerical simulations using the lithography simulators PROLITH and Dr.LiTHO demonstrate that the proposed method can measure wavefront aberrations up to Z37. Experiments on a real lithography tool confirm that our method can monitor lens aberration offsets with an accuracy of 0.7 nm.
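
    In such a linear model the retrieval itself is ordinary least squares: with intensities I = I0 + S z, the Zernike coefficients z follow from the precomputed matrix S. The sketch below uses a random stand-in for S rather than a lithography image model, purely to show the algebra:

```python
import numpy as np

rng = np.random.default_rng(4)

n_meas, n_zern = 600, 33                 # intensity samples, coefficients
S = rng.normal(size=(n_meas, n_zern))    # stand-in for the fitting matrices
z_true = rng.normal(scale=5e-3, size=n_zern)   # "aberrations" in waves
I0 = rng.normal(size=n_meas)             # aberration-free intensities

I = I0 + S @ z_true + rng.normal(scale=1e-4, size=n_meas)  # noisy images

z_hat, *_ = np.linalg.lstsq(S, I - I0, rcond=None)
print(f"max retrieval error: {np.abs(z_hat - z_true).max():.2e} waves")
```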

  16. A general, cryogenically-based analytical technique for the determination of trace quantities of volatile organic compounds in the atmosphere

    NASA Technical Reports Server (NTRS)

    Coleman, R. A.; Cofer, W. R., III; Edahl, R. A., Jr.

    1985-01-01

    An analytical technique for the determination of trace (sub-ppbv) quantities of volatile organic compounds in air was developed. A liquid nitrogen-cooled trap operated at reduced pressure, in series with a DuPont Nafion-based drying tube and a gas chromatograph, was utilized. The technique can analyze a variety of organic compounds, from simple alkanes to alcohols, while offering a high level of precision, peak sharpness, and sensitivity.

  17. Analytical techniques and instrumentation: A compilation. [analytical instrumentation, materials performance, and systems analysis

    NASA Technical Reports Server (NTRS)

    1974-01-01

    Technical information is presented covering the areas of: (1) analytical instrumentation useful in the analysis of physical phenomena; (2) analytical techniques used to determine the performance of materials; and (3) systems and component analyses for design and quality control.

  18. Network acceleration techniques

    NASA Technical Reports Server (NTRS)

    Crowley, Patricia (Inventor); Maccabe, Arthur Barney (Inventor); Awrach, James Michael (Inventor)

    2012-01-01

    Splintered offloading techniques with receive batch processing are described for network acceleration. Such techniques offload specific functionality to a NIC while maintaining the bulk of the protocol processing in the host operating system ("OS"). The resulting protocol implementation allows the application to bypass the protocol processing of the received data. This is accomplished by moving data from the NIC directly to the application through direct memory access ("DMA") and batch-processing the receive headers in the host OS when the host OS is interrupted to perform other work. Batch processing of receive headers allows the data path to be separated from the control path. Unlike operating system bypass, however, the operating system still fully manages the network resource and has relevant feedback about traffic and flows. Embodiments of the present disclosure can therefore address the challenges of networks with extreme bandwidth delay products (BWDP).

  19. Combining Acceleration Techniques for Low-Dose X-Ray Cone Beam Computed Tomography Image Reconstruction.

    PubMed

    Huang, Hsuan-Ming; Hsiao, Ing-Tsung

    2017-01-01

    Over the past decade, image quality in low-dose computed tomography has been greatly improved by various compressive sensing (CS) based reconstruction methods. However, these methods have some disadvantages, including high computational cost and slow convergence rate. Many different speed-up techniques for CS-based reconstruction algorithms have been developed. The purpose of this paper is to propose a fast reconstruction framework that combines a CS-based reconstruction algorithm with several speed-up techniques. First, total difference minimization (TDM) was implemented using soft-threshold filtering (STF). Second, we combined TDM-STF with the ordered subsets transmission (OSTR) algorithm to accelerate convergence. To further speed up the convergence of the proposed method, we applied the power factor and the fast iterative shrinkage-thresholding algorithm to OSTR and TDM-STF, respectively. Results obtained from simulation and phantom studies showed that many speed-up techniques can be combined to greatly improve the convergence speed of a CS-based reconstruction algorithm. More importantly, the increase in computation time (≤10%) was minor compared to the acceleration provided by the proposed method. In this paper, we have presented a CS-based reconstruction framework that combines several acceleration techniques. Both simulation and phantom studies provide evidence that the proposed method has the potential to satisfy the requirement of fast image reconstruction in practical CT.
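
    Two of the named ingredients are compact enough to show directly: the soft-threshold operator used by STF (the proximal map of the l1 norm) and the FISTA momentum sequence that supplies the extrapolation weights. This generic sketch does not reproduce the paper's full TDM-STF/OSTR pipeline:

```python
import numpy as np

def soft_threshold(x, tau):
    """Shrink values toward zero by tau; the core of soft-threshold filtering."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

print(soft_threshold(np.array([-2.0, -0.3, 0.1, 1.5]), 0.5))
# -> [-1.5 -0.  0.  1. ]

# FISTA momentum: t_{k+1} = (1 + sqrt(1 + 4 t_k^2)) / 2; the iterate is
# extrapolated with weight (t_k - 1) / t_{k+1} before the next update.
t = 1.0
for k in range(5):
    t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
    print(f"iter {k}: extrapolation weight = {(t - 1.0) / t_next:.3f}")
    t = t_next
```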

  20. Analytical Electrochemistry: Methodology and Applications of Dynamic Techniques.

    ERIC Educational Resources Information Center

    Heineman, William R.; Kissinger, Peter T.

    1980-01-01

    Reports developments involving the experimental aspects of finite-current analytical electrochemistry, including electrode materials (97 cited references), hydrodynamic techniques (56), spectroelectrochemistry (62), stripping voltammetry (70), voltammetric techniques (27), polarographic techniques (59), and miscellany (12). (CS)

  1. An iterative analytical technique for the design of interplanetary direct transfer trajectories including perturbations

    NASA Astrophysics Data System (ADS)

    Parvathi, S. P.; Ramanan, R. V.

    2018-06-01

    An iterative analytical trajectory design technique that includes perturbations in the departure phase of interplanetary orbiter missions is proposed. Perturbations such as the non-spherical gravity of Earth and third-body perturbations due to the Sun and Moon are included in the analytical design process. In the design process, the design is first obtained using the iterative patched conic technique without the perturbations and is then modified to include them. The modification is based on (i) backward analytical propagation of the state vector obtained from the iterative patched conic technique at the sphere of influence, including the perturbations, and (ii) quantification of deviations in the orbital elements at the periapsis of the departure hyperbolic orbit. The orbital elements at the sphere of influence are changed to nullify the deviations at the periapsis. The analytical backward propagation is carried out using the linear approximation technique. The new analytical design technique, named the biased iterative patched conic technique, does not depend upon numerical integration, and all computations are carried out using closed-form expressions. The improved design is very close to the numerical design. Design analysis using the proposed technique provides realistic insight into the mission aspects. The proposed design is also an excellent initial guess for numerical refinement and helps arrive at the four distinct design options for a given opportunity.

  2. CHAPTER 7: Glycoprotein Enrichment Analytical Techniques: Advantages and Disadvantages

    PubMed Central

    Zhu, Rui; Zacharias, Lauren; Wooding, Kerry M.; Peng, Wenjing; Mechref, Yehia

    2017-01-01

    Protein glycosylation is one of the most important posttranslational modifications. Numerous biological functions are related to protein glycosylation. However, analytical challenges remain in glycoprotein analysis. To overcome these challenges, many analytical techniques have been developed in recent years. Enrichment methods are used to improve the sensitivity of detection, while HPLC and mass spectrometry methods have been developed to facilitate the separation of glycopeptides/proteins and enhance detection, respectively. Fragmentation techniques applied in modern mass spectrometers allow the structural interpretation of glycopeptides/proteins, while automated software tools have started replacing manual processing to improve the reliability and throughput of the analysis. In this chapter, the current methodologies of glycoprotein analysis are discussed. Multiple analytical techniques are compared, and the advantages and disadvantages of each technique are highlighted. PMID:28109440

  3. Trends in analytical techniques applied to particulate matter characterization: A critical review of fundaments and applications.

    PubMed

    Galvão, Elson Silva; Santos, Jane Meri; Lima, Ana Teresa; Reis, Neyval Costa; Orlando, Marcos Tadeu D'Azeredo; Stuetz, Richard Michael

    2018-05-01

    Epidemiological studies have shown the association of airborne particulate matter (PM) size and chemical composition with health problems affecting the cardiorespiratory and central nervous systems. PM also acts as cloud condensation nuclei (CCN) or ice nuclei (IN), taking part in the cloud formation process, and can therefore impact the climate. Several works have used different analytical techniques for PM chemical and physical characterization to supply information to source apportionment models that help environmental agencies assess accountability for damages. Despite the numerous analytical techniques described in the literature for PM characterization, laboratories are normally limited to the techniques available in-house, which raises the question of whether a given technique is suitable for the purpose of a specific experimental work. The aim of this work is to summarize the main available technologies for PM characterization, serving as a guide for readers to find the most appropriate technique(s) for their investigation. Elemental analysis techniques, such as atomic spectrometry-based and X-ray-based techniques, organic and carbonaceous techniques, and surface analysis techniques are discussed, illustrating their main features as well as their advantages and drawbacks. We also discuss trends in the analytical techniques used over the last two decades. The choice among all techniques is a function of a number of parameters, such as the relevant particle physical properties, sampling and measuring time, access to available facilities, and the costs associated with equipment acquisition, among other considerations. An analytical guide map is presented as a guideline for choosing the most appropriate technique for the analytical information required. Copyright © 2018 Elsevier Ltd. All rights reserved.

  4. Analytical Applications of Monte Carlo Techniques.

    ERIC Educational Resources Information Center

    Guell, Oscar A.; Holcombe, James A.

    1990-01-01

    Described are analytical applications of the theory of random processes, in particular solutions obtained by using statistical procedures known as Monte Carlo techniques. Supercomputer simulations, sampling, integration, ensemble, annealing, and explicit simulation are discussed. (CW)
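    As a concrete instance of the Monte Carlo integration idea mentioned above, the following short Python example estimates a definite integral by uniform random sampling and reports the standard error of the estimate; the integrand is an arbitrary illustrative choice.

```python
# Monte Carlo estimate of the integral of f(x) = exp(-x^2) on [0, 2].
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
x = rng.uniform(0.0, 2.0, n)          # uniform samples over the interval
f = np.exp(-x**2)
estimate = (2.0 - 0.0) * f.mean()     # (b - a) * E[f]
stderr = (2.0 - 0.0) * f.std(ddof=1) / np.sqrt(n)
print(f"integral ≈ {estimate:.5f} ± {stderr:.5f}")   # true value ≈ 0.88208
```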

  5. Analytical and experimental investigation of the coaxial plasma gun for use as a particle accelerator

    NASA Technical Reports Server (NTRS)

    Shriver, E. L.

    1972-01-01

    The coaxial plasma accelerator for use as a projectile accelerator is discussed. The accelerator is described physically and analytically by solution of circuit equations, and by solving for the magnetic pressures formed by the j × B forces on the plasma. It is shown that the plasma density must be increased if the accelerator is to be used as a projectile accelerator. Three different approaches to increasing plasma density are discussed. When a magnetic field containment scheme was used to increase the plasma density, glass beads of 0.66 millimeter diameter were accelerated to velocities of 7 to 8 kilometers per second. Glass beads of smaller diameter were accelerated to more than twice this velocity.

  6. Rotational Acceleration during Head Impact Resulting from Different Judo Throwing Techniques

    PubMed Central

    MURAYAMA, Haruo; HITOSUGI, Masahito; MOTOZAWA, Yasuki; OGINO, Masahiro; KOYAMA, Katsuhiro

    2014-01-01

    Most severe head injuries in judo are reported as acute subdural hematoma. It is thus necessary to examine the rotational acceleration of the head to clarify the mechanism of head injuries. We determined the rotational acceleration of the head when the subject is thrown by judo techniques. One Japanese male judo expert threw an anthropomorphic test device using two throwing techniques, Osoto-gari and Ouchi-gari. Rotational and translational head accelerations were measured with and without an under-mat. For Osoto-gari, peak resultant rotational acceleration ranged from 4,284.2 rad/s² to 5,525.9 rad/s² and peak resultant translational acceleration ranged from 64.3 g to 87.2 g; for Ouchi-gari, the accelerations respectively ranged from 1,708.0 rad/s² to 2,104.1 rad/s² and from 120.2 g to 149.4 g. The resultant rotational acceleration did not decrease with installation of an under-mat for either Ouchi-gari or Osoto-gari. We found that head contact with the tatami could produce the peak values of both translational and rotational acceleration. In general, because the kinematics of the body strongly affects translational and rotational accelerations of the head, both accelerations should be measured to analyze the underlying mechanism of head injury. As a primary preventative measure, throwing techniques should be restricted to participants demonstrating ability in ukemi techniques, to avoid head contact with the tatami. PMID:24477065
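    For readers unfamiliar with how peak resultant values like those above are obtained from triaxial sensor data, the following Python sketch computes the per-sample resultant and its peak; the waveform, axis layout, and sampling rate are synthetic assumptions, not the study's data.

```python
# Peak resultant rotational acceleration from (synthetic) triaxial data.
import numpy as np

fs = 10_000                                  # assumed sampling rate, Hz
t = np.arange(0, 0.05, 1.0 / fs)
alpha = np.stack([                           # rad/s^2 about x, y, z (synthetic)
    5000 * np.exp(-((t - 0.020) / 0.004) ** 2),
    1500 * np.exp(-((t - 0.021) / 0.005) ** 2),
     800 * np.exp(-((t - 0.019) / 0.006) ** 2),
], axis=1)

resultant = np.linalg.norm(alpha, axis=1)    # sqrt(ax^2 + ay^2 + az^2) per sample
print(f"peak resultant rotational acceleration: {resultant.max():.1f} rad/s²")
```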

  7. Rotational acceleration during head impact resulting from different judo throwing techniques.

    PubMed

    Murayama, Haruo; Hitosugi, Masahito; Motozawa, Yasuki; Ogino, Masahiro; Koyama, Katsuhiro

    2014-01-01

    Most severe head injuries in judo are reported as acute subdural hematoma. It is thus necessary to examine the rotational acceleration of the head to clarify the mechanism of head injuries. We determined the rotational acceleration of the head when the subject is thrown by judo techniques. One Japanese male judo expert threw an anthropomorphic test device using two throwing techniques, Osoto-gari and Ouchi-gari. Rotational and translational head accelerations were measured with and without an under-mat. For Osoto-gari, peak resultant rotational acceleration ranged from 4,284.2 rad/s² to 5,525.9 rad/s² and peak resultant translational acceleration ranged from 64.3 g to 87.2 g; for Ouchi-gari, the accelerations respectively ranged from 1,708.0 rad/s² to 2,104.1 rad/s² and from 120.2 g to 149.4 g. The resultant rotational acceleration did not decrease with installation of an under-mat for either Ouchi-gari or Osoto-gari. We found that head contact with the tatami could produce the peak values of both translational and rotational acceleration. In general, because the kinematics of the body strongly affects translational and rotational accelerations of the head, both accelerations should be measured to analyze the underlying mechanism of head injury. As a primary preventative measure, throwing techniques should be restricted to participants demonstrating ability in ukemi techniques, to avoid head contact with the tatami.

  8. Accelerator-based techniques for the support of senior-level undergraduate physics laboratories

    NASA Astrophysics Data System (ADS)

    Williams, J. R.; Clark, J. C.; Isaacs-Smith, T.

    2001-07-01

    Approximately three years ago, Auburn University replaced its aging Dynamitron accelerator with a new 2MV tandem machine (Pelletron) manufactured by the National Electrostatics Corporation (NEC). This new machine is maintained and operated for the University by Physics Department personnel, and the accelerator supports a wide variety of materials modification/analysis studies. Computer software is available that allows the NEC Pelletron to be operated from a remote location, and an Internet link has been established between the Accelerator Laboratory and the Upper-Level Undergraduate Teaching Laboratory in the Physics Department. Additional software supplied by Canberra Industries has also been used to create a second Internet link that allows live-time data acquisition in the Teaching Laboratory. Our senior-level undergraduates and first-year graduate students perform a number of experiments related to radiation detection and measurement as well as several standard accelerator-based experiments that have been added recently. These laboratory exercises will be described, and the procedures used to establish the Internet links between our Teaching Laboratory and the Accelerator Laboratory will be discussed.

  9. Recent developments and future trends in solid phase microextraction techniques towards green analytical chemistry.

    PubMed

    Spietelun, Agata; Marcinkowski, Łukasz; de la Guardia, Miguel; Namieśnik, Jacek

    2013-12-20

    Solid-phase microextraction techniques find increasing application in the sample preparation step before chromatographic determination of analytes in samples with a complex composition. These techniques allow several operations to be integrated, such as sample collection, extraction, analyte enrichment above the detection limit of a given measuring instrument, and the isolation of analytes from the sample matrix. In this work, information about novel methodological and instrumental solutions for different variants of solid-phase extraction techniques, namely solid-phase microextraction (SPME), stir bar sorptive extraction (SBSE), and magnetic solid phase extraction (MSPE), is presented, including practical applications of these techniques and a critical discussion of their advantages and disadvantages. The proposed solutions fulfill the requirements resulting from the concept of sustainable development, and specifically from the implementation of green chemistry principles in analytical laboratories. Therefore, particular attention is paid to the description of possible uses of novel, selective stationary phases in extraction techniques, inter alia polymeric ionic liquids, carbon nanotubes, and silica- and carbon-based sorbents. The methodological solutions, together with properly matched sampling devices for collecting analytes from samples with varying matrix composition, enable us to reduce the number of errors during sample preparation prior to chromatographic analysis as well as to limit the negative impact of this analytical step on the natural environment and the health of laboratory employees. Copyright © 2013 Elsevier B.V. All rights reserved.

  10. Text-based Analytics for Biosurveillance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Charles, Lauren E.; Smith, William P.; Rounds, Jeremiah

    The ability to prevent, mitigate, or control a biological threat depends on how quickly the threat is identified and characterized. Ensuring the timely delivery of data and analytics is an essential aspect of providing adequate situational awareness in the face of a disease outbreak. This chapter outlines an analytic pipeline for supporting an advanced early warning system that can integrate multiple data sources and provide situational awareness of potential and occurring disease situations. The pipeline includes real-time automated data analysis founded on natural language processing (NLP), semantic concept matching, and machine learning techniques to enrich content with metadata related to biosurveillance. Online news articles are presented as an example use case for the pipeline, but the processes can be generalized to any textual data. In this chapter, the mechanics of a streaming pipeline are briefly discussed, as well as the major steps required to provide targeted situational awareness. The text-based analytic pipeline includes various processing steps as well as identifying article relevance to biosurveillance (e.g., a relevance algorithm) and article feature extraction (who, what, where, why, how, and when).
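    A toy sketch of the article-relevance step may clarify the pipeline stage named above. The weighted-keyword scorer below is a transparent stand-in for the chapter's NLP/machine-learning relevance algorithm, whose actual features and model are not specified here; all terms, weights, and the threshold are illustrative.

```python
# Toy relevance scorer standing in for a learned biosurveillance classifier.
BIOSURVEILLANCE_TERMS = {
    "outbreak": 3.0, "virus": 2.0, "infection": 2.0,
    "hospital": 1.0, "quarantine": 2.5, "vaccine": 1.5,
}

def relevance_score(text, threshold=3.0):
    """Return (score, is_relevant) from weighted keyword matches."""
    tokens = text.lower().split()
    score = sum(BIOSURVEILLANCE_TERMS.get(tok.strip(".,;:"), 0.0) for tok in tokens)
    return score, score >= threshold

score, relevant = relevance_score(
    "Officials report a measles outbreak; quarantine ordered.")
```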

  11. Optical Diagnostics for Plasma-based Particle Accelerators

    NASA Astrophysics Data System (ADS)

    Muggli, Patric

    2009-05-01

    One of the challenges for plasma-based particle accelerators is to measure the spatio-temporal characteristics of the accelerated particle bunch. "Optical" diagnostics are particularly interesting and useful because of the large number of techniques that exist to determine the properties of photon pulses. The accelerated bunch can produce photon pulses that carry information about its characteristics, for example through synchrotron radiation in a magnet, Cherenkov radiation in a gas, and transition radiation (TR) at the boundary between two media with different dielectric constants. Depending on the wavelength of the emission when compared to the particle bunch length, the radiation can be incoherent or coherent. Incoherent TR in the optical range (or OTR) is useful to measure the transverse spatial characteristics of the beam, such as charge distribution and size. Coherent TR (or CTR) carries information about the bunch length that can in principle be retrieved by standard auto-correlation or interferometric techniques, as well as by spectral measurements. A measurement of the total CTR energy emitted by bunches with constant charge can also be used as a shot-to-shot measurement of the relative bunch length, as the CTR energy is proportional to the square of the bunch population and inversely proportional to its length (for a fixed distribution). Spectral interferometry can also yield the spacing between bunches in the case where multiple bunches are trapped in subsequent buckets of the plasma wave. Cherenkov radiation can be used as an energy threshold diagnostic for low energy particles. Cherenkov, synchrotron and transition radiation can be used in a dispersive section of the beam line to measure the bunch energy spectrum. The application of these diagnostics to plasma-based particle accelerators, with emphasis on the beam-driven plasma wakefield accelerator (PWFA) at the SLAC National Accelerator Laboratory, will be discussed.
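    The shot-to-shot bunch-length argument above can be written out explicitly. For a fixed longitudinal distribution, the stated scaling gives (a restatement of the abstract's claim, not a derivation from first principles):

```latex
% Total CTR energy U versus bunch population N and bunch length \sigma_z;
% at constant charge it yields a shot-to-shot relative bunch-length monitor.
U_{\mathrm{CTR}} \propto \frac{N^{2}}{\sigma_{z}}
\quad\Longrightarrow\quad
\frac{\sigma_{z,1}}{\sigma_{z,2}} = \frac{U_{\mathrm{CTR},2}}{U_{\mathrm{CTR},1}}
\quad \text{(for } N_{1} = N_{2}\text{)}.
```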

  12. Conventional and Accelerated-Solvent Extractions of Green Tea (Camellia sinensis) for Metabolomics-based Chemometrics

    PubMed Central

    Kellogg, Joshua J.; Wallace, Emily D.; Graf, Tyler N.; Oberlies, Nicholas H.; Cech, Nadja B.

    2018-01-01

    Metabolomics has emerged as an important analytical technique for multiple applications. The value of information obtained from metabolomics analysis depends on the degree to which the entire metabolome is present and the reliability of sample treatment to ensure reproducibility across the study. The purpose of this study was to compare methods of preparing complex botanical extract samples prior to metabolomics profiling. Two extraction methodologies, accelerated solvent extraction and a conventional solvent maceration, were compared using commercial green tea [Camellia sinensis (L.) Kuntze (Theaceae)] products as a test case. The accelerated solvent protocol was first evaluated to ascertain critical factors influencing extraction using a D-optimal experimental design study. The accelerated solvent and conventional extraction methods yielded similar metabolite profiles for the green tea samples studied. The accelerated solvent extraction yielded higher total amounts of extracted catechins, was more reproducible, and required less active bench time to prepare the samples. This study demonstrates the effectiveness of accelerated solvent as an efficient methodology for metabolomics studies. PMID:28787673

  13. Emittance preservation in plasma-based accelerators with ion motion

    DOE PAGES

    Benedetti, C.; Schroeder, C. B.; Esarey, E.; ...

    2017-11-01

    In a plasma-accelerator-based linear collider, the density of matched, low-emittance, high-energy particle bunches required for collider applications can be orders of magnitude above the background ion density, leading to ion motion, perturbation of the focusing fields, and, hence, to beam emittance growth. By analyzing the response of the background ions to an ultrahigh density beam, analytical expressions, valid for nonrelativistic ion motion, are derived for the transverse wakefield and for the final (i.e., after saturation) bunch emittance. Analytical results are validated against numerical modeling. Initial beam distributions are derived that are equilibrium solutions, which require head-to-tail bunch shaping, enabling emittance preservation with ion motion.

  14. Analysis techniques for residual acceleration data

    NASA Technical Reports Server (NTRS)

    Rogers, Melissa J. B.; Alexander, J. Iwan D.; Snyder, Robert S.

    1990-01-01

    Various aspects of residual acceleration data are of interest to low-gravity experimenters. Maximum and mean values and various other statistics can be obtained from data as collected in the time domain. Additional information may be obtained through manipulation of the data. Fourier analysis is discussed as a means of obtaining information about the dominant frequency components of a given data window. Transformation of data into different coordinate axes is useful in the analysis of experiments with different orientations and can be achieved by the use of a transformation matrix. Application of such analysis techniques to residual acceleration data provides information beyond that contained in a time history and increases the effectiveness of post-flight analysis of low-gravity experiments.
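    The two manipulations described above, Fourier analysis of a data window and transformation into experiment coordinates, can be sketched as follows in Python; the synthetic three-axis record, sampling rate, and rotation angle are illustrative assumptions.

```python
# Fourier analysis and coordinate rotation of a residual-acceleration window.
import numpy as np

fs = 100.0                                    # assumed sample rate, Hz
t = np.arange(0, 60, 1 / fs)
acc = np.stack([                              # synthetic g-jitter, 3 axes
    1e-4 * np.sin(2 * np.pi * 0.5 * t),       # 0.5 Hz component on x
    5e-5 * np.sin(2 * np.pi * 2.0 * t),       # 2.0 Hz component on y
    np.full_like(t, 1e-6),                    # quasi-steady z offset
], axis=1)

# Dominant frequency per axis from the amplitude spectrum of the window.
freqs = np.fft.rfftfreq(len(t), 1 / fs)
spectrum = np.abs(np.fft.rfft(acc, axis=0))
dominant = freqs[spectrum[1:].argmax(axis=0) + 1]   # skip the DC bin
print("dominant frequencies (Hz):", dominant)

# Transform into an experiment frame rotated 30 degrees about z.
th = np.radians(30.0)
R = np.array([[np.cos(th), -np.sin(th), 0],
              [np.sin(th),  np.cos(th), 0],
              [0,           0,          1]])
acc_experiment = acc @ R.T                    # row vectors: a' = R a
```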

  15. Web-based Visual Analytics for Extreme Scale Climate Science

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steed, Chad A; Evans, Katherine J; Harney, John F

    In this paper, we introduce a Web-based visual analytics framework for democratizing advanced visualization and analysis capabilities pertinent to large-scale earth system simulations. We address significant limitations of present climate data analysis tools such as tightly coupled dependencies, inefficient data movements, complex user interfaces, and static visualizations. Our Web-based visual analytics framework removes critical barriers to the widespread accessibility and adoption of advanced scientific techniques. Using distributed connections to back-end diagnostics, we minimize data movements and leverage HPC platforms. We also mitigate system dependency issues by employing a RESTful interface. Our framework embraces the visual analytics paradigm via new visual navigation techniques for hierarchical parameter spaces, multi-scale representations, and interactive spatio-temporal data mining methods that retain details. Although generalizable to other science domains, the current work focuses on improving exploratory analysis of large-scale Community Land Model (CLM) and Community Atmosphere Model (CAM) simulations.

  16. A Comparison of the Glass Meta-Analytic Technique with the Hunter-Schmidt Meta-Analytic Technique on Three Studies from the Education Literature.

    ERIC Educational Resources Information Center

    Hough, Susan L.; Hall, Bruce W.

    The meta-analytic techniques of G. V. Glass (1976) and J. E. Hunter and F. L. Schmidt (1977) were compared through their application to three meta-analytic studies from education literature. The following hypotheses were explored: (1) the overall mean effect size would be larger in a Hunter-Schmidt meta-analysis (HSMA) than in a Glass…

  17. Model-based Acceleration Control of Turbofan Engines with a Hammerstein-Wiener Representation

    NASA Astrophysics Data System (ADS)

    Wang, Jiqiang; Ye, Zhifeng; Hu, Zhongzhi; Wu, Xin; Dimirovsky, Georgi; Yue, Hong

    2017-05-01

    Acceleration control of turbofan engines is conventionally designed through either a schedule-based or an acceleration-based approach. With the widespread acceptance of model-based design in the aviation industry, it becomes necessary to investigate the issues associated with model-based design for acceleration control. In this paper, the challenges of implementing model-based acceleration control are explained; a novel Hammerstein-Wiener representation of engine models is introduced; and, based on the Hammerstein-Wiener model, a nonlinear generalized minimum variance type of optimal control law is derived. A feature of the proposed approach is that it does not require the inversion operation that usually hampers nonlinear control techniques. The effectiveness of the proposed control design method is validated through a detailed numerical study.
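    A Hammerstein-Wiener model is a static input nonlinearity feeding a linear dynamic block whose output passes through a second static nonlinearity. The following minimal Python sketch simulates such a cascade; the chosen nonlinearities and the first-order linear block are illustrative stand-ins, not the paper's engine representation.

```python
# Minimal discrete-time Hammerstein-Wiener cascade:
# static nonlinearity -> linear dynamics -> static nonlinearity.
import numpy as np

def f_in(u):                    # Hammerstein block (e.g., actuator saturation)
    return np.tanh(u)

def f_out(x):                   # Wiener block (e.g., sensor nonlinearity)
    return x + 0.1 * x**3

def simulate(u_seq, a=0.9, b=0.1):
    """x[k+1] = a*x[k] + b*f_in(u[k]);  y[k] = f_out(x[k])."""
    x, y = 0.0, []
    for u in u_seq:
        y.append(f_out(x))
        x = a * x + b * f_in(u)
    return np.array(y)

y = simulate(np.ones(100))      # step response of the cascaded model
```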

  18. Enabling the High Level Synthesis of Data Analytics Accelerators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Minutoli, Marco; Castellana, Vito G.; Tumeo, Antonino

    Conventional High Level Synthesis (HLS) tools mainly target compute-intensive kernels typical of digital signal processing applications. We are developing techniques and architectural templates to enable HLS of data analytics applications. These applications are memory intensive, present fine-grained, unpredictable data accesses, and exhibit irregular, dynamic task parallelism. We discuss an architectural template based around a distributed controller to efficiently exploit thread-level parallelism. We present a memory interface that supports parallel memory subsystems and enables implementing atomic memory operations. We introduce a dynamic task scheduling approach to efficiently execute heavily unbalanced workloads. The templates are validated by synthesizing queries from the Lehigh University Benchmark (LUBM), a well-known SPARQL benchmark.

  19. Support Vector Machine Based on Adaptive Acceleration Particle Swarm Optimization

    PubMed Central

    Abdulameer, Mohammed Hasan; Othman, Zulaiha Ali

    2014-01-01

    Existing face recognition methods utilize the particle swarm optimizer (PSO) and the opposition-based particle swarm optimizer (OPSO) to optimize the parameters of SVM. However, the use of random values in the velocity calculation decreases the performance of these techniques: during the velocity computation, random values are normally used for the acceleration coefficients, which introduces randomness into the solution. To address this problem, an adaptive acceleration particle swarm optimization (AAPSO) technique is proposed. To evaluate the proposed method, we employ both face and iris recognition based on AAPSO with SVM (AAPSO-SVM). In the face and iris recognition systems, performance is evaluated using two human face databases, YALE and CASIA, and the UBiris dataset. In this method, we initially perform feature extraction and then recognition on the extracted features. In the recognition process, the extracted features are used for SVM training and testing. During the training and testing, the SVM parameters are optimized with the AAPSO technique, and in AAPSO the acceleration coefficients are computed using the particle fitness values. The parameters in SVM, which are optimized by AAPSO, perform efficiently for both face and iris recognition. A comparative analysis between the proposed AAPSO-SVM and the PSO-SVM technique is presented. PMID:24790584
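    One plausible reading of the adaptive-coefficient idea summarized above is a velocity update in which the acceleration coefficients are computed from normalized particle fitness rather than drawn at random. The Python sketch below illustrates that reading; the exact fitness-to-coefficient mapping in the paper may differ.

```python
# Illustrative PSO velocity update with fitness-derived (non-random)
# acceleration coefficients, one reading of the AAPSO idea.
import numpy as np

def aapso_velocity(v, x, pbest, gbest, fit, fit_best, fit_worst, w=0.7):
    """v, x, pbest, gbest are arrays of equal shape; fit values are scalars."""
    # Normalize fitness to [0, 1]; poorly performing particles get s near 1.
    s = (fit - fit_best) / (fit_worst - fit_best + 1e-12)
    c1 = 1.0 + s          # cognitive pull grows for poorly performing particles
    c2 = 2.0 - s          # social pull grows for well performing particles
    return w * v + c1 * (pbest - x) + c2 * (gbest - x)

v_new = aapso_velocity(np.zeros(4), np.ones(4), np.ones(4) * 0.5,
                       np.ones(4) * 0.2, fit=3.0, fit_best=1.0, fit_worst=5.0)
```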

  20. Applications of surface analytical techniques in Earth Sciences

    NASA Astrophysics Data System (ADS)

    Qian, Gujie; Li, Yubiao; Gerson, Andrea R.

    2015-03-01

    This review covers a wide range of surface analytical techniques: X-ray photoelectron spectroscopy (XPS), scanning photoelectron microscopy (SPEM), photoemission electron microscopy (PEEM), dynamic and static secondary ion mass spectroscopy (SIMS), electron backscatter diffraction (EBSD), and atomic force microscopy (AFM). Others that are relatively less widely used but are also important to the Earth Sciences are also included: Auger electron spectroscopy (AES), low energy electron diffraction (LEED) and scanning tunnelling microscopy (STM). All these techniques probe only the very top sample surface layers (sub-nm to several tens of nm). In addition, we also present several other techniques, i.e. Raman microspectroscopy, reflection infrared (IR) microspectroscopy and quantitative evaluation of minerals by scanning electron microscopy (QEMSCAN), that penetrate deeper into the sample, up to several μm, as all of them are fundamental analytical tools for the Earth Sciences. Grazing incidence synchrotron techniques, sensitive to surface measurements, are also briefly introduced at the end of this review. (Scanning) transmission electron microscopy (TEM/STEM) is a special case that can be applied to the characterisation of mineralogical and geological sample surfaces. Since TEM/STEM is such an important technique for Earth Scientists, we have also included it to draw attention to the capability of TEM/STEM applied as a surface-equivalent tool. While this review presents most of the important techniques for the Earth Sciences, it is not an all-inclusive bibliography of those analytical techniques. Instead, for each technique that is discussed, we first give a very brief introduction about its principle and background, followed by a short section on approaches to sample preparation that are important for researchers to appreciate prior to the actual sample analysis. We then use examples from publications (and also some of our known unpublished results) within the Earth Sciences

  1. Gas-filled capillaries for plasma-based accelerators

    NASA Astrophysics Data System (ADS)

    Filippi, F.; Anania, M. P.; Brentegani, E.; Biagioni, A.; Cianchi, A.; Chiadroni, E.; Ferrario, M.; Pompili, R.; Romeo, S.; Zigler, A.

    2017-07-01

    Plasma wakefield accelerators are based on large-amplitude plasma waves excited by either a laser or a particle driver beam. The amplitude of the waves, as well as their spatial dimensions and the consequent accelerating gradient, depends strongly on the background electron density along the path of the accelerated particles. The process needs stable and reliable plasma sources, whose density profile must be controlled and properly engineered to ensure the appropriate accelerating mechanism. Plasma confinement inside gas-filled capillaries has been studied in the past, since this technique allows control of the evolution of the plasma, ensuring a stable and repeatable plasma density distribution during the interaction with the drivers. Moreover, in a gas-filled capillary the plasma can be pre-ionized by a current discharge to avoid ionization losses. Different capillary geometries have been studied to allow the proper temporal and spatial evolution of the plasma along the acceleration length. Results of this analysis, obtained by varying the length and the number of gas inlets, will be presented.

  2. Critical review of analytical techniques for safeguarding the thorium-uranium fuel cycle

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hakkila, E.A.

    1978-10-01

    Conventional analytical methods applicable to the determination of thorium, uranium, and plutonium in feed, product, and waste streams from reprocessing thorium-based nuclear reactor fuels are reviewed. Separations methods of interest for these analyses are discussed. Recommendations concerning the applicability of various techniques to reprocessing samples are included. 15 tables, 218 references.

  3. WetDATA Hub: Democratizing Access to Water Data to Accelerate Innovation through Data Visualization, Predictive Analytics and Artificial Intelligence Applications

    NASA Astrophysics Data System (ADS)

    Sarni, W.

    2017-12-01

    Water scarcity and poor quality impact economic development, business growth, and social well-being. Water has become, in our generation, the foremost critical local, regional, and global issue of our time. Despite these needs, there is no water hub or water technology accelerator solely dedicated to water data and tools. There is a need in the public and private sectors for vastly improved data management and visualization tools. This is the WetDATA opportunity: to develop a water data tech hub dedicated to water data acquisition, analytics, and visualization tools for informed policy and business decisions. WetDATA's tools will help incubate disruptive water data technologies and accelerate adoption of current water data solutions. WetDATA is a Colorado-based (501c3) global hub for water data analytics and technology innovation. WetDATA's vision is to be a global leader in water information and data technology innovation and to collaborate with other US and global water technology hubs. Roadmap: (1) a portal (www.wetdata.org) to provide stakeholders with tools and resources for understanding related water risks; (2) initial activities providing education, awareness, and tools to stakeholders to support the implementation of the Colorado State Water Plan; (3) leveraging the Western States Water Council Water Data Exchange database; and (4) development of visualization, predictive analytics, and AI tools to engage stakeholders and provide actionable data and information. Tools: Education: information on water issues and risks at the local, state, national, and global scales. Visualizations: data analytics and visualization tools based upon the 2030 Water Resources Group methodology to support the implementation of the Colorado State Water Plan. Predictive analytics: accessing publicly available water databases and using machine learning to develop water-availability forecasting tools and time-lapse images to support city/urban planning.

  4. Conventional and accelerated-solvent extractions of green tea (camellia sinensis) for metabolomics-based chemometrics.

    PubMed

    Kellogg, Joshua J; Wallace, Emily D; Graf, Tyler N; Oberlies, Nicholas H; Cech, Nadja B

    2017-10-25

    Metabolomics has emerged as an important analytical technique for multiple applications. The value of information obtained from metabolomics analysis depends on the degree to which the entire metabolome is present and the reliability of sample treatment to ensure reproducibility across the study. The purpose of this study was to compare methods of preparing complex botanical extract samples prior to metabolomics profiling. Two extraction methodologies, accelerated solvent extraction and a conventional solvent maceration, were compared using commercial green tea [Camellia sinensis (L.) Kuntze (Theaceae)] products as a test case. The accelerated solvent protocol was first evaluated to ascertain critical factors influencing extraction using a D-optimal experimental design study. The accelerated solvent and conventional extraction methods yielded similar metabolite profiles for the green tea samples studied. The accelerated solvent extraction yielded higher total amounts of extracted catechins, was more reproducible, and required less active bench time to prepare the samples. This study demonstrates the effectiveness of accelerated solvent as an efficient methodology for metabolomics studies. Copyright © 2017. Published by Elsevier B.V.

  5. A novel analytical technique suitable for the identification of plastics.

    PubMed

    Nečemer, Marijan; Kump, Peter; Sket, Primož; Plavec, Janez; Grdadolnik, Jože; Zvanut, Maja

    2013-01-01

    The enormous development and production of plastic materials in the last century has resulted in an increasing number of such objects. A simple and fast technique to classify different types of plastics could be used in many activities dealing with plastic materials, such as packaging of food and sorting of used plastic materials, and also, if the technique were non-destructive, for conservation of plastic artifacts in museum collections, a relatively new field of interest since 1990. In our previous paper we introduced a non-destructive technique for fast identification of unknown plastics based on EDXRF spectrometry, using as a case study some plastic artifacts archived in the Museum in order to show the advantages of nondestructive identification of plastic materials. In order to validate our technique, it was necessary to compare these analyses with some of the analytical techniques that are more suitable and so far rather widely applied in identifying the most common sorts of plastic materials.

  6. Analytical Validation of Accelerator Mass Spectrometry for Pharmaceutical Development: the Measurement of Carbon-14 Isotope Ratio.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keck, B D; Ognibene, T; Vogel, J S

    2010-02-05

    Accelerator mass spectrometry (AMS) is an isotope-based measurement technology that utilizes carbon-14 labeled compounds in the pharmaceutical development process to measure compounds at very low concentrations, empowers microdosing as an investigational tool, and extends the utility of ¹⁴C labeled compounds to dramatically lower levels. It is a form of isotope ratio mass spectrometry that can provide either measurements of total compound equivalents or, when coupled to separation technology such as chromatography, quantitation of specific compounds. The properties of AMS as a measurement technique are investigated here, and the parameters of method validation are shown. AMS, independent of any separation technique to which it may be coupled, is shown to be accurate, linear, precise, and robust. As the sensitivity and universality of AMS are constantly being explored and expanded, this work underpins many areas of pharmaceutical development, including drug metabolism as well as absorption, distribution and excretion of pharmaceutical compounds, as a fundamental step in drug development. The validation parameters for pharmaceutical analyses were examined for the accelerator mass spectrometry measurement of the ¹⁴C/C ratio, independent of chemical separation procedures. The isotope ratio measurement was specific (owing to the ¹⁴C label), stable across sample storage conditions for at least one year, and linear over 4 orders of magnitude, with an analytical range from one tenth Modern to at least 2000 Modern (instrument specific). Further, accuracy was excellent, between 1 and 3 percent, while precision, expressed as coefficient of variation, was between 1 and 6%, determined primarily by radiocarbon content and the time spent analyzing a sample. Sensitivity, expressed as LOD and LLOQ, was 1 and 10 attomoles of carbon-14 (which can be expressed as compound equivalents) and for a typical small molecule labeled at 10% incorporation with ¹⁴C corresponds to 30

  7. Big data analytics as a service infrastructure: challenges, desired properties and solutions

    NASA Astrophysics Data System (ADS)

    Martín-Márquez, Manuel

    2015-12-01

    CERN's accelerator complex generates a very large amount of data. A large volume of heterogeneous data is constantly generated from control equipment and monitoring agents. These data must be stored and analysed. Over the decades, CERN's research and engineering teams have applied different approaches, techniques and technologies for this purpose. This situation has minimised the necessary collaboration and, more relevantly, cross-domain data analytics. These two factors are essential to unlock hidden insights and correlations between the underlying processes, which enable better and more efficient day-to-day accelerator operations and more informed decisions. The proposed Big Data Analytics as a Service Infrastructure aims to: (1) integrate the existing developments; (2) centralise and standardise the complex data analytics needs of CERN's research and engineering community; (3) deliver real-time, batch data analytics and information discovery capabilities; and (4) provide transparent access and Extract, Transform and Load (ETL) mechanisms to the various mission-critical existing data repositories. This paper presents the desired objectives and properties resulting from the analysis of CERN's data analytics requirements; the main challenges (technological, collaborative, and educational); and potential solutions.

  8. GraphReduce: Large-Scale Graph Analytics on Accelerator-Based HPC Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sengupta, Dipanjan; Agarwal, Kapil; Song, Shuaiwen

    2015-09-30

    Recent work on real-world graph analytics has sought to leverage the massive amount of parallelism offered by GPU devices, but challenges remain due to the inherent irregularity of graph algorithms and limitations in GPU-resident memory for storing large graphs. We present GraphReduce, a highly efficient and scalable GPU-based framework that operates on graphs that exceed the device's internal memory capacity. GraphReduce adopts a combination of both edge- and vertex-centric implementations of the Gather-Apply-Scatter programming model and operates on multiple asynchronous GPU streams to fully exploit the high degrees of parallelism in GPUs, with efficient graph data movement between the host and the device.
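    To make the Gather-Apply-Scatter programming model named above concrete, the following plain-Python sketch runs one PageRank step in vertex-centric GAS form. It is a CPU illustration only and reflects none of GraphReduce's GPU streaming or out-of-core machinery.

```python
# One vertex-centric Gather-Apply-Scatter iteration (a PageRank step).
import numpy as np

def gas_pagerank_step(edges, rank, out_deg, d=0.85):
    n = len(rank)
    gathered = np.zeros(n)
    for src, dst in edges:                    # Gather: sum contributions per vertex
        gathered[dst] += rank[src] / out_deg[src]
    new_rank = (1 - d) / n + d * gathered     # Apply: update vertex state
    return new_rank                           # Scatter is implicit in the next Gather

edges = [(0, 1), (1, 2), (2, 0), (2, 1)]
out_deg = np.array([1, 1, 2])
rank = np.full(3, 1 / 3)
for _ in range(20):
    rank = gas_pagerank_step(edges, rank, out_deg)
```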

  9. Analytical modeling and feasibility study of a multi-GPU cloud-based server (MGCS) framework for non-voxel-based dose calculations.

    PubMed

    Neylon, J; Min, Y; Kupelian, P; Low, D A; Santhanam, A

    2017-04-01

    In this paper, a multi-GPU cloud-based server (MGCS) framework is presented for dose calculations, exploring the feasibility of remote computing power for parallelization and acceleration of computationally and time-intensive radiotherapy tasks in moving toward online adaptive therapies. An analytical model was developed to estimate theoretical MGCS performance acceleration and intelligently determine workload distribution. Numerical studies were performed with a computing setup of 14 GPUs distributed over 4 servers interconnected by a 1 Gigabit per second (Gbps) network. Inter-process communication methods were optimized to facilitate resource distribution and minimize data transfers over the server interconnect. The analytically predicted computation times matched experimental observations within 1-5%. MGCS performance approached a theoretical limit of acceleration proportional to the number of GPUs utilized when computational tasks far outweighed memory operations. The MGCS implementation reproduced ground-truth dose computations with negligible differences, by distributing the work among several processes and implementing optimization strategies. The results showed that a cloud-based computation engine is a feasible solution for enabling clinics to make use of fast dose calculations for advanced treatment planning and adaptive radiotherapy. The cloud-based system was able to exceed the performance of a local machine even for optimized calculations, and provided significant acceleration for computationally intensive tasks. Such a framework can provide access to advanced technology and computational methods to many clinics, providing an avenue for standardization across institutions without the requirements of purchasing, maintaining, and continually updating hardware.
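    The abstract does not give the analytical model's form, but a model of the kind described can be sketched as ideal compute scaling plus a serialized interconnect cost, which reproduces the observed saturation of acceleration as GPUs are added. All constants below are placeholder assumptions, not the paper's fitted values.

```python
# Hedged reconstruction of a compute-plus-transfer performance model.
def predicted_time(work_gflop, data_gb, n_gpus,
                   gpu_gflops=4000.0, link_gbps=1.0):
    compute = work_gflop / (gpu_gflops * n_gpus)    # ideal compute scaling
    transfer = (data_gb * 8.0) / link_gbps          # serialized interconnect cost
    return compute + transfer

# Acceleration saturates once transfer dominates compute.
for n in (1, 2, 4, 8, 14):
    print(n, predicted_time(work_gflop=2e5, data_gb=0.5, n_gpus=n))
```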

  10. Accelerator-based BNCT.

    PubMed

    Kreiner, A J; Baldo, M; Bergueiro, J R; Cartelli, D; Castell, W; Thatar Vento, V; Gomez Asoia, J; Mercuri, D; Padulo, J; Suarez Sandin, J C; Erhardt, J; Kesque, J M; Valda, A A; Debray, M E; Somacal, H R; Igarzabal, M; Minsky, D M; Herrera, M S; Capoulat, M E; Gonzalez, S J; del Grosso, M F; Gagetti, L; Suarez Anzorena, M; Gun, M; Carranza, O

    2014-06-01

    The activity in accelerator development for accelerator-based BNCT (AB-BNCT), both worldwide and in Argentina, is described. Projects in Russia, UK, Italy, Japan, Israel, and Argentina to develop AB-BNCT around different types of accelerators are briefly presented. In particular, the present status and recent progress of the Argentine project will be reviewed. The topics will cover: intense ion sources, accelerator tubes, transport of intense beams, beam diagnostics, the ⁹Be(d,n) reaction as a possible neutron source, Beam Shaping Assemblies (BSA), a treatment room, and treatment planning in realistic cases. © 2013 Elsevier Ltd. All rights reserved.

  11. ACCELERATING MR PARAMETER MAPPING USING SPARSITY-PROMOTING REGULARIZATION IN PARAMETRIC DIMENSION

    PubMed Central

    Velikina, Julia V.; Alexander, Andrew L.; Samsonov, Alexey

    2013-01-01

    MR parameter mapping requires sampling along an additional (parametric) dimension, which often limits its clinical appeal due to a several-fold increase in scan times compared to conventional anatomic imaging. Data undersampling combined with parallel imaging is an attractive way to reduce scan time in such applications. However, the inherent SNR penalties of parallel MRI due to noise amplification often limit its utility even at moderate acceleration factors, requiring regularization by prior knowledge. In this work, we propose a novel regularization strategy which utilizes smoothness of signal evolution in the parametric dimension within a compressed sensing framework (p-CS) to provide accurate and precise estimation of parametric maps from undersampled data. The performance of the method was demonstrated with variable flip angle T1 mapping and compared favorably to two representative reconstruction approaches: image-space-based total variation regularization and an analytical model-based reconstruction. The proposed p-CS regularization was found to provide efficient suppression of noise amplification and preservation of parameter mapping accuracy without explicit utilization of analytical signal models. The developed method may facilitate acceleration of quantitative MRI techniques that are not suitable for model-based reconstruction because of complex signal models or when signal deviations from the expected analytical model exist. PMID:23213053
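    For orientation, one common way to write a compressed-sensing objective with regularization along the parametric dimension is shown below; this is an illustrative formulation, not necessarily the exact functional used in the paper.

```latex
% Data consistency for undersampled k-space data d, plus an l1 penalty on a
% sparsifying transform \Phi_p acting along the parametric dimension of the
% image series u (illustrative formulation).
\hat{u} = \arg\min_{u} \tfrac{1}{2}\left\| \mathcal{F}_{\Omega}\, u - d \right\|_{2}^{2}
          + \lambda \left\| \Phi_{p}\, u \right\|_{1}
```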

  12. Detecting sea-level hazards: Simple regression-based methods for calculating the acceleration of sea level

    USGS Publications Warehouse

    Doran, Kara S.; Howd, Peter A.; Sallenger,, Asbury H.

    2016-01-04

    Recent studies, and most of their predecessors, use tide gage data to quantify sea-level (SL) acceleration, ASL(t). In the current study, three techniques were used to calculate acceleration from tide gage data, and of those examined, it was determined that the two techniques based on sliding a regression window through the time series are more robust than the technique that fits a single quadratic form to the entire time series, particularly if there is temporal variation in the magnitude of the acceleration. The single-fit quadratic regression method has been the most commonly used technique for determining acceleration in tide gage data. The inability of the single-fit method to account for time-varying acceleration may explain some of the inconsistent findings between investigators. Properly quantifying ASL(t) from field measurements is of particular importance in evaluating numerical models of past, present, and future sea-level rise (SLR) resulting from anticipated climate change.
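    The sliding-window technique favored above can be sketched directly: within each window, fit a quadratic h(t) = a + bt + ct² and report 2c as the acceleration at the window centre. In the Python sketch below, the tide-gauge-like series is synthetic and the window length is an arbitrary illustrative choice.

```python
# Sliding-window quadratic regression for time-varying acceleration.
import numpy as np

def sliding_acceleration(t, h, window):
    """Return window-centre times and accelerations (2 * quadratic coeff)."""
    centres, accels = [], []
    for i in range(len(t) - window + 1):
        ts, hs = t[i:i + window], h[i:i + window]
        c2, c1, c0 = np.polyfit(ts - ts.mean(), hs, 2)   # centre t for conditioning
        centres.append(ts.mean())
        accels.append(2.0 * c2)
    return np.array(centres), np.array(accels)

# Synthetic record: linear trend + constant acceleration + noise.
t = np.arange(1900, 2015, 1 / 12)                        # monthly samples
h = (2.0 * (t - 1900) + 0.01 * (t - 1900) ** 2
     + np.random.default_rng(3).normal(0, 5, t.size))
tc, a = sliding_acceleration(t, h, window=40 * 12)       # 40-year windows
# a should scatter around the true acceleration of 0.02 (series units per yr^2)
```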

  13. On Convergence Acceleration Techniques for Unstructured Meshes

    NASA Technical Reports Server (NTRS)

    Mavriplis, Dimitri J.

    1998-01-01

    A discussion of convergence acceleration techniques as they relate to computational fluid dynamics problems on unstructured meshes is given. Rather than providing a detailed description of particular methods, the various different building blocks of current solution techniques are discussed and examples of solution strategies using one or several of these ideas are given. Issues relating to unstructured grid CFD problems are given additional consideration, including suitability of algorithms to current hardware trends, memory and cpu tradeoffs, treatment of non-linearities, and the development of efficient strategies for handling anisotropy-induced stiffness. The outlook for future potential improvements is also discussed.

  14. Recent Developments in the Speciation and Determination of Mercury Using Various Analytical Techniques

    PubMed Central

    Suvarapu, Lakshmi Narayana; Baek, Sung-Ok

    2015-01-01

    This paper reviews the speciation and determination of mercury by various analytical techniques such as atomic absorption spectrometry, voltammetry, inductively coupled plasma techniques, spectrophotometry, spectrofluorometry, high performance liquid chromatography, and gas chromatography. Approximately 126 research papers on the speciation and determination of mercury by various analytical techniques published in international journals since 2013 are reviewed. PMID:26236539

  15. The LeRC rail accelerators: Test designs and diagnostic techniques

    NASA Technical Reports Server (NTRS)

    Zana, L. M.; Kerslake, W. R.; Sturman, J. C.; Wang, S. Y.; Terdan, F. F.

    1983-01-01

    The feasibility of using rail accelerators for various in-space and to-space propulsion applications was investigated. A 1 meter, 24 sq mm bore accelerator was designed with the goal of demonstrating projectile velocities of 15 km/sec using a peak current of 200 kA. A second rail accelerator, 1 meter long with a 156.25 sq mm bore, was designed with clear polycarbonate sidewalls to permit visual observation of the plasma arc. A study of available diagnostic techniques and their application to the rail accelerator is presented. Specific topics of discussion include the use of interferometry and spectroscopy to examine the plasma armature as well as the use of optical sensors to measure rail displacement during acceleration. Standard diagnostics such as current and voltage measurements are also discussed.

  16. Analytical technique characterizes all trace contaminants in water

    NASA Technical Reports Server (NTRS)

    Foster, J. N.; Lysyj, I.; Nelson, K. H.

    1967-01-01

    A properly programmed combination of advanced chemical and physical analytical techniques critically characterizes all trace contaminants in both the potable and waste water from the Apollo Command Module. This methodology can also be applied to investigating the source of water pollution.

  17. Visual Analytics for Law Enforcement: Deploying a Service-Oriented Analytic Framework for Web-based Visualization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dowson, Scott T.; Bruce, Joseph R.; Best, Daniel M.

    2009-04-14

    This paper presents key components of the Law Enforcement Information Framework (LEIF) that provides communications, situational awareness, and visual analytics tools in a service-oriented architecture supporting web-based desktop and handheld device users. LEIF simplifies interfaces and visualizations of well-established visual analytical techniques to improve usability. Advanced analytics capability is maintained by enhancing the underlying processing to support the new interface. LEIF development is driven by real-world user feedback gathered through deployments at three operational law enforcement organizations in the US. LEIF incorporates a robust information ingest pipeline supporting a wide variety of information formats. LEIF also insulates interface and analytical components from information sources, making it easier to adapt the framework for many different data repositories.

  18. Ion Beam Facilities at the National Centre for Accelerator based Research using a 3 MV Pelletron Accelerator

    NASA Astrophysics Data System (ADS)

    Trivedi, T.; Patel, Shiv P.; Chandra, P.; Bajpai, P. K.

    A 3.0 MV (Pelletron 9SDH-4, NEC, USA) low-energy ion accelerator has recently been installed as the National Centre for Accelerator based Research (NCAR) at the Department of Pure & Applied Physics, Guru Ghasidas Vishwavidyalaya, Bilaspur, India. The facility is aimed at carrying out interdisciplinary research using ion beams, with a high-current TORVIS ion source (for H and He ions) and a SNICS ion source (for heavy ions). The facility includes two dedicated beam lines, one for ion beam analysis (IBA) and the other for ion implantation/irradiation, on switching-magnet ports at +20 and -10 degrees, respectively. Ions are injected into the accelerator tank at 60 kV, where, after stripping, positively charged ions are accelerated to energies of up to 29 MeV (for Au). The installed ion beam analysis techniques include RBS, PIXE, ERDA and channelling.

  19. Accelerated Peer-Review Journal Usage Technique for Undergraduates

    ERIC Educational Resources Information Center

    Wallace, J. D.

    2008-01-01

    The internet has given undergraduate students ever-increasing access to academic journals via search engines and online databases. However, students typically do not have the ability to use these journals effectively. This often poses a dilemma for instructors. The accelerated peer-review journal usage (APJU) technique provides a way for…

  20. Development and application of an information-analytic system on the problem of flow accelerated corrosion of pipeline elements in the secondary coolant circuit of VVER-440-based power units at the Novovoronezh nuclear power plant

    NASA Astrophysics Data System (ADS)

    Tomarov, G. V.; Povarov, V. P.; Shipkov, A. A.; Gromov, A. F.; Kiselev, A. N.; Shepelev, S. V.; Galanin, A. V.

    2015-02-01

    Specific features relating to the development of an information-analytical system on the problem of flow-accelerated corrosion of pipeline elements in the secondary coolant circuit of the VVER-440-based power units at the Novovoronezh nuclear power plant are considered. The results from a statistical analysis of data on the quantity, location, and operating conditions of the elements and preinserted segments of pipelines used in the condensate-feedwater and wet-steam paths are presented. The principles of preparing and using the information-analytical system for determining the time to reach inadmissible wall thinning in elements of pipelines used in the secondary coolant circuit of the VVER-440-based power units at the Novovoronezh NPP are considered.

  1. Analytical and numerical techniques for predicting the interfacial stresses of wavy carbon nanotube/polymer composites

    NASA Astrophysics Data System (ADS)

    Yazdchi, K.; Salehi, M.; Shokrieh, M. M.

    2009-03-01

    By introducing a new simplified 3D representative volume element for wavy carbon nanotubes, an analytical model is developed to study stress transfer in single-walled carbon nanotube-reinforced polymer composites. Based on the pull-out modeling technique, the effects of waviness, aspect ratio, and Poisson ratio on the axial and interfacial shear stresses are analyzed in detail. The results of the present analytical model are in good agreement with the corresponding results for straight nanotubes.

  2. Accuracy of ringless casting and accelerated wax-elimination technique: a comparative in vitro study.

    PubMed

    Prasad, Rahul; Al-Keraif, Abdulaziz Abdullah; Kathuria, Nidhi; Gandhi, P V; Bhide, S V

    2014-02-01

    The purpose of this study was to determine whether the ringless casting and accelerated wax-elimination techniques can be combined to offer a cost-effective, clinically acceptable, and time-saving alternative for fabricating single-unit castings in fixed prosthodontics. Sixty standardized wax copings were fabricated on a type IV stone replica of a stainless steel die. The wax patterns were divided into four groups. The first group was cast using the ringless investment technique and conventional wax-elimination method; the second group was cast using the ringless investment technique and accelerated wax-elimination method; the third group was cast using the conventional metal ring investment technique and conventional wax-elimination method; the fourth group was cast using the metal ring investment technique and accelerated wax-elimination method. The vertical marginal gap was measured at four sites per specimen, using a digital optical microscope at 100× magnification. The results were analyzed using two-way ANOVA to determine statistical significance. The vertical marginal gaps of castings fabricated using the ringless technique (76.98 ± 7.59 μm) were significantly smaller (p < 0.05) than those of castings fabricated using the conventional metal ring technique (138.44 ± 28.59 μm); however, the gaps of castings produced with conventional (102.63 ± 36.12 μm) and accelerated (112.79 ± 38.34 μm) wax elimination did not differ significantly (p > 0.05). The ringless investment technique can produce castings with higher accuracy and can be favorably combined with the accelerated wax-elimination method as a viable alternative to the time-consuming conventional technique of casting restorations in fixed prosthodontics. © 2013 by the American College of Prosthodontists.

  3. Novel concept of washing for microfluidic paper-based analytical devices based on capillary force of paper substrates.

    PubMed

    Mohammadi, Saeed; Busa, Lori Shayne Alamo; Maeki, Masatoshi; Mohamadi, Reza M; Ishida, Akihiko; Tani, Hirofumi; Tokeshi, Manabu

    2016-11-01

    A novel washing technique for microfluidic paper-based analytical devices (μPADs) that is based on the spontaneous capillary action of paper and eliminates unbound antigen and antibody in a sandwich immunoassay is reported. Liquids can flow through a porous medium (such as paper) in the absence of external pressure as a result of capillary action. Uniform results were achieved when washing a paper substrate in a PDMS holder integrated with a cartridge absorber acting as a porous medium. Our study demonstrated that applying this washing technique would allow μPADs to become the least expensive microfluidic device platform with high reproducibility and sensitivity. In a model μPAD assay that utilized this novel washing technique, C-reactive protein (CRP) was detected with a limit of detection (LOD) of 5 μg mL⁻¹.

  4. Modeling of ion acceleration through drift and diffusion at interplanetary shocks

    NASA Technical Reports Server (NTRS)

    Decker, R. B.; Vlahos, L.

    1986-01-01

    A test particle simulation designed to model ion acceleration through drift and diffusion at interplanetary shocks is described. The technique consists of integrating along exact particle orbits in a system where the angle between the shock normal and mean upstream magnetic field, the level of magnetic fluctuations, and the energy of injected particles can assume a range of values. The technique makes it possible to study time-dependent shock acceleration under conditions not amenable to analytical techniques. To illustrate the capability of the numerical model, proton acceleration was considered under conditions appropriate for interplanetary shocks at 1 AU, including large-amplitude transverse magnetic fluctuations derived from power spectra of both ambient and shock-associated MHD waves.
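
    As a rough, non-authoritative illustration of the orbit-integration idea (not the authors' code): the sketch below advances a proton through prescribed electric and magnetic fields with the standard Boris pusher, here non-relativistic and with uniform fields; the field values, initial state, and time step are assumptions chosen only for the example.

    ```python
    import numpy as np

    def boris_push(x, v, E, B, q_over_m, dt):
        """Advance one charged-particle step with the standard Boris scheme."""
        v_minus = v + 0.5 * q_over_m * E * dt        # first half electric kick
        t = 0.5 * q_over_m * B * dt                  # magnetic rotation vector
        s = 2.0 * t / (1.0 + np.dot(t, t))
        v_prime = v_minus + np.cross(v_minus, t)
        v_plus = v_minus + np.cross(v_prime, s)      # rotated velocity
        v_new = v_plus + 0.5 * q_over_m * E * dt     # second half electric kick
        return x + v_new * dt, v_new

    # Hypothetical values loosely inspired by 1 AU conditions
    B = np.array([0.0, 0.0, 5.0e-9])                 # ~5 nT magnetic field
    u_sw = np.array([4.0e5, 0.0, 0.0])               # solar wind speed, m/s
    E = -np.cross(u_sw, B)                           # convection electric field
    x, v = np.zeros(3), np.array([0.0, 1.0e5, 0.0])  # initial state, m and m/s
    for _ in range(5000):
        x, v = boris_push(x, v, E, B, 9.58e7, 0.1)   # proton q/m ~ 9.58e7 C/kg
    print(x, np.linalg.norm(v))
    ```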

  5. Comparison of commercial analytical techniques for measuring chlorine dioxide in urban desalinated drinking water.

    PubMed

    Ammar, T A; Abid, K Y; El-Bindary, A A; El-Sonbati, A Z

    2015-12-01

    Most drinking water industries are closely examining options to maintain a certain level of disinfectant residual through the entire distribution system. Chlorine dioxide is one of the promising disinfectants that is usually used as a secondary disinfectant, whereas the selection of the proper monitoring analytical technique to ensure disinfection and regulatory compliance has been debated within the industry. This research endeavored to objectively compare the performance of commercially available analytical techniques used for chlorine dioxide measurements (namely, chronoamperometry, DPD (N,N-diethyl-p-phenylenediamine), Lissamine Green B (LGB WET) and amperometric titration), to determine the superior technique. The commonly available commercial analytical techniques were evaluated over a wide range of chlorine dioxide concentrations. In reference to pre-defined criteria, the superior analytical technique was determined. To discern the effectiveness of this superior technique, various factors that might influence its performance, such as sample temperature, high ionic strength, and other interferences, were examined. Among the four techniques, the chronoamperometry technique showed a significant level of accuracy and precision. Furthermore, the various influencing factors studied did not diminish the technique's performance, which was fairly adequate in all matrices. This study is a step towards proper disinfection monitoring, and it confidently assists engineers with chlorine dioxide disinfection system planning and management.

  6. Automated Predictive Big Data Analytics Using Ontology Based Semantics.

    PubMed

    Nural, Mustafa V; Cotterell, Michael E; Peng, Hao; Xie, Rui; Ma, Ping; Miller, John A

    2015-10-01

    Predictive analytics in the big data era is taking on an ever increasingly important role. Issues related to the choice of modeling technique, estimation procedure (or algorithm), and efficient execution can present significant challenges. For example, selection of appropriate and optimal models for big data analytics often requires careful investigation and considerable expertise which might not always be readily available. In this paper, we propose to use semantic technology to assist data analysts and data scientists in selecting appropriate modeling techniques and building specific models, as well as in documenting the rationale for the techniques and models selected. To formally describe the modeling techniques, models and results, we developed the Analytics Ontology that supports inferencing for semi-automated model selection. The SCALATION framework, which currently supports over thirty modeling techniques for predictive big data analytics, is used as a testbed for evaluating the use of semantic technology.

  7. Automated Predictive Big Data Analytics Using Ontology Based Semantics

    PubMed Central

    Nural, Mustafa V.; Cotterell, Michael E.; Peng, Hao; Xie, Rui; Ma, Ping; Miller, John A.

    2017-01-01

    Predictive analytics in the big data era is taking on an ever increasingly important role. Issues related to the choice of modeling technique, estimation procedure (or algorithm), and efficient execution can present significant challenges. For example, selection of appropriate and optimal models for big data analytics often requires careful investigation and considerable expertise which might not always be readily available. In this paper, we propose to use semantic technology to assist data analysts and data scientists in selecting appropriate modeling techniques and building specific models, as well as in documenting the rationale for the techniques and models selected. To formally describe the modeling techniques, models and results, we developed the Analytics Ontology that supports inferencing for semi-automated model selection. The SCALATION framework, which currently supports over thirty modeling techniques for predictive big data analytics, is used as a testbed for evaluating the use of semantic technology. PMID:29657954

  8. Paper-based analytical devices for clinical diagnosis: recent advances in the fabrication techniques and sensing mechanisms

    PubMed Central

    Sher, Mazhar; Zhuang, Rachel; Demirci, Utkan; Asghar, Waseem

    2017-01-01

    Introduction: There is a significant interest in developing inexpensive portable biosensing platforms for various applications including disease diagnostics, environmental monitoring, food safety, and water testing at the point-of-care (POC) settings. Current diagnostic assays available in the developed world require sophisticated laboratory infrastructure and expensive reagents. Hence, they are not suitable for resource-constrained settings with limited financial resources, basic health infrastructure, and few trained technicians. Cellulose and flexible transparency paper-based analytical devices have demonstrated enormous potential for developing robust, inexpensive and portable devices for disease diagnostics. These devices offer promising solutions to disease management in resource-constrained settings where the vast majority of the population cannot afford expensive and highly sophisticated treatment options. Areas covered: In this review, the authors describe currently developed cellulose and flexible transparency paper-based microfluidic devices, device fabrication techniques, and sensing technologies that are integrated with these devices. The authors also discuss the limitations and challenges associated with these devices and their potential in clinical settings. Expert commentary: In recent years, cellulose and flexible transparency paper-based microfluidic devices have demonstrated the potential to become future healthcare options despite a few limitations such as low sensitivity and reproducibility. PMID:28103450

  9. Paper-based analytical devices for clinical diagnosis: recent advances in the fabrication techniques and sensing mechanisms.

    PubMed

    Sher, Mazhar; Zhuang, Rachel; Demirci, Utkan; Asghar, Waseem

    2017-04-01

    There is a significant interest in developing inexpensive portable biosensing platforms for various applications including disease diagnostics, environmental monitoring, food safety, and water testing at the point-of-care (POC) settings. Current diagnostic assays available in the developed world require sophisticated laboratory infrastructure and expensive reagents. Hence, they are not suitable for resource-constrained settings with limited financial resources, basic health infrastructure, and few trained technicians. Cellulose and flexible transparency paper-based analytical devices have demonstrated enormous potential for developing robust, inexpensive and portable devices for disease diagnostics. These devices offer promising solutions to disease management in resource-constrained settings where the vast majority of the population cannot afford expensive and highly sophisticated treatment options. Areas covered: In this review, the authors describe currently developed cellulose and flexible transparency paper-based microfluidic devices, device fabrication techniques, and sensing technologies that are integrated with these devices. The authors also discuss the limitations and challenges associated with these devices and their potential in clinical settings. Expert commentary: In recent years, cellulose and flexible transparency paper-based microfluidic devices have demonstrated the potential to become future healthcare options despite a few limitations such as low sensitivity and reproducibility.

  10. Pre-concentration technique for reduction in "Analytical instrument requirement and analysis"

    NASA Astrophysics Data System (ADS)

    Pal, Sangita; Singha, Mousumi; Meena, Sher Singh

    2018-04-01

    The limited availability of analytical instruments for the methodical detection of known and unknown effluents imposes a serious hindrance on their qualification and quantification. Analytical instruments such as elemental analyzers, ICP-MS, ICP-AES, EDXRF, ion chromatography, and electro-analytical instruments are not only expensive and time-consuming to use, but also require maintenance and the replacement of damaged essential parts, which is a serious concern. Moreover, for field studies and instant detection, installation of these instruments is not convenient at every location. Therefore, a technique based on the pre-concentration of metal ions, especially for lean streams, is elaborated and justified here. Chelation/sequestration is the key to this immobilization technique, which is simple, user-friendly, highly effective, inexpensive and time-efficient; the demonstrated reagent (a 10 g - 20 g vial) is easy to carry to the experimental field/site.

  11. Analysis of Volatile Compounds by Advanced Analytical Techniques and Multivariate Chemometrics.

    PubMed

    Lubes, Giuseppe; Goodarzi, Mohammad

    2017-05-10

    Smelling is one of the five senses, and it plays an important role in our everyday lives. Volatile compounds are, for example, characteristic of foods, where some of them can be perceived by humans because of their aroma. They have a great influence on consumers' decisions about whether to use a product or not. In the case where a product has an offensive and strong aroma, many consumers might not appreciate it. On the contrary, soft and fresh natural aromas definitely increase the acceptance of a given product. These properties can drastically influence the economy; thus, it has been of great importance to manufacturers that the aroma of their food products be characterized by analytical means to provide a basis for further optimization processes. A lot of research has been devoted to this domain in order to link the quality of, e.g., a food to its aroma. By knowing the aromatic profile of a food, one can understand the nature of a given product, leading to the development of new products which are more acceptable to consumers. There are two ways to analyze volatiles: one is to use human senses and/or sensory instruments, and the other is based on advanced analytical techniques. This work focuses on the latter. Although simple, low-cost technology is an attractive research target in this domain, most of the data are generated with very high-resolution analytical instruments. Such data gathered from different analytical instruments normally have broad, overlapping sensitivity profiles and require substantial data analysis. In this review, we have addressed not only the question of the application of chemometrics for aroma analysis but also the use of different analytical instruments in this field, highlighting the research needed for future focus.

  12. Theory of unfolded cyclotron accelerator

    NASA Astrophysics Data System (ADS)

    Rax, J.-M.; Robiche, J.

    2010-10-01

    An acceleration process based on the interaction between an ion, a tapered periodic magnetic structure, and a circularly polarized oscillating electric field is identified and analyzed, and its potential is evaluated. A Hamiltonian analysis is developed in order to describe the interplay between the cyclotron motion, the electric acceleration, and the magnetic modulation. The parameters of this universal class of magnetic modulation leading to continuous acceleration without Larmor radius increase are expressed analytically. Thus, this study provides the basic scaling of what appears as a compact unfolded cyclotron accelerator.

  13. Synchrotron X-ray Analytical Techniques for Studying Materials Electrochemistry in Rechargeable Batteries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Feng; Liu, Yijin; Yu, Xiqian

    Rechargeable battery technologies have ignited major breakthroughs in contemporary society, including but not limited to revolutions in transportation, electronics, and grid energy storage. The remarkable development of rechargeable batteries is largely attributed to in-depth efforts to improve battery electrode and electrolyte materials. There are, however, still intimidating challenges of lower cost, longer cycle and calendar life, higher energy density, and better safety for large scale energy storage and vehicular applications. Further progress with rechargeable batteries may require new chemistries (lithium ion batteries and beyond) and better understanding of materials electrochemistry in the various battery technologies. In the past decade, advancement of battery materials has been complemented by new analytical techniques that are capable of probing battery chemistries at various length and time scales. Synchrotron X-ray techniques stand out as one of the most effective methods that allows for nearly nondestructive probing of materials characteristics such as electronic and geometric structures with various depth sensitivities through spectroscopy, scattering, and imaging capabilities. This article begins with the discussion of various rechargeable batteries and associated important scientific questions in the field, followed by a review of synchrotron X-ray based analytical tools (scattering, spectroscopy and imaging) and their successful applications (ex situ, in situ, and in operando) in gaining fundamental insights into these scientific questions. Furthermore, electron microscopy and spectroscopy complement the detection length scales of synchrotron X-ray tools, and are also discussed towards the end. We highlight the importance of studying battery materials by combining analytical techniques with complementary length sensitivities, such as the combination of X-ray absorption spectroscopy and electron spectroscopy with spatial resolution

  14. Synchrotron X-ray Analytical Techniques for Studying Materials Electrochemistry in Rechargeable Batteries

    DOE PAGES

    Lin, Feng; Liu, Yijin; Yu, Xiqian; ...

    2017-08-30

    Rechargeable battery technologies have ignited major breakthroughs in contemporary society, including but not limited to revolutions in transportation, electronics, and grid energy storage. The remarkable development of rechargeable batteries is largely attributed to in-depth efforts to improve battery electrode and electrolyte materials. There are, however, still intimidating challenges of lower cost, longer cycle and calendar life, higher energy density, and better safety for large scale energy storage and vehicular applications. Further progress with rechargeable batteries may require new chemistries (lithium ion batteries and beyond) and better understanding of materials electrochemistry in the various battery technologies. In the past decade, advancement of battery materials has been complemented by new analytical techniques that are capable of probing battery chemistries at various length and time scales. Synchrotron X-ray techniques stand out as one of the most effective methods that allows for nearly nondestructive probing of materials characteristics such as electronic and geometric structures with various depth sensitivities through spectroscopy, scattering, and imaging capabilities. This article begins with the discussion of various rechargeable batteries and associated important scientific questions in the field, followed by a review of synchrotron X-ray based analytical tools (scattering, spectroscopy and imaging) and their successful applications (ex situ, in situ, and in operando) in gaining fundamental insights into these scientific questions. Furthermore, electron microscopy and spectroscopy complement the detection length scales of synchrotron X-ray tools, and are also discussed towards the end. We highlight the importance of studying battery materials by combining analytical techniques with complementary length sensitivities, such as the combination of X-ray absorption spectroscopy and electron spectroscopy with spatial resolution

  15. Analytic study of 1D diffusive relativistic shock acceleration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keshet, Uri, E-mail: ukeshet@bgu.ac.il

    2017-10-01

    Diffusive shock acceleration (DSA) by relativistic shocks is thought to generate the dN/dE ∝ E^(−p) spectra of charged particles in various astronomical relativistic flows. We show that for test particles in one dimension (1D), p^(−1) = 1 − ln[γ_d(1 + β_d)]/ln[γ_u(1 + β_u)], where β_u (β_d) is the upstream (downstream) normalized velocity, and γ is the respective Lorentz factor. This analytically captures the main properties of relativistic DSA in higher dimensions, with no assumptions on the diffusion mechanism. Unlike in 2D and 3D, here the spectrum is sensitive to the equation of state even in the ultra-relativistic limit, and (for a Jüttner-Synge equation of state) noticeably hardens with increasing 1
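
    For illustration, the quoted 1D result can be evaluated numerically; the velocity jump below is hypothetical, chosen only to show how the spectral index follows from the upstream and downstream velocities.

    ```python
    import numpy as np

    def spectral_index_1d(beta_u, beta_d):
        """1/p = 1 - ln[gamma_d(1+beta_d)] / ln[gamma_u(1+beta_u)], the quoted 1D result."""
        gamma_u = 1.0 / np.sqrt(1.0 - beta_u**2)
        gamma_d = 1.0 / np.sqrt(1.0 - beta_d**2)
        inv_p = 1.0 - np.log(gamma_d * (1.0 + beta_d)) / np.log(gamma_u * (1.0 + beta_u))
        return 1.0 / inv_p

    # Hypothetical velocity jump, for illustration only
    print(spectral_index_1d(beta_u=0.9, beta_d=0.3))   # ~1.27
    ```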

  16. Ultra-Compact Accelerator Technologies for Application in Nuclear Techniques

    NASA Astrophysics Data System (ADS)

    Sampayan, S.; Caporaso, G.; Chen, Y.-J.; Carazo, V.; Falabella, S.; Guethlein, G.; Guse, S.; Harris, J. R.; Hawkins, S.; Holmes, C.; Krogh, M.; Nelson, S.; Paul, A. C.; Pearson, D.; Poole, B.; Schmidt, R.; Sanders, D.; Selenes, K.; Sitaraman, S.; Sullivan, J.; Wang, L.; Watson, J.

    2009-12-01

    We report on compact accelerator technology development for potential use as a pulsed neutron source quantitative post verifier. The technology is derived from our on-going compact accelerator technology development program for radiography under the US Department of Energy and for clinic-sized compact proton therapy systems under an industry-sponsored Cooperative Research and Development Agreement. The accelerator technique relies on the synchronous discharge of a prompt-pulse-generating stacked transmission line structure with the beam transit. The goal of this technology is to achieve ~10 MV/m gradients for pulses of tens of nanoseconds and ~100 MV/m gradients for ~1 ns systems. As a post verifier for supplementing existing x-ray equipment, this system can remain in a charged, stand-by state with little or no energy consumption. We describe the progress of our overall component development effort with the multilayer dielectric wall insulators (i.e., the accelerator wall), compact power supply technology, kHz repetition-rate surface flashover ion sources, and the prompt pulse generation system consisting of wide-bandgap switches and high performance dielectric materials.

  17. Accelerated Learning Techniques for the Foreign Language Class: A Personal View.

    ERIC Educational Resources Information Center

    Bancroft, W. Jane

    Foreign language instructors cope with problems of learner anxiety in the classroom, fossilization of language use and language skill loss. Relaxation and concentration techniques can alleviate stress and fatigue and improve students' capabilities. Three categories of accelerated learning techniques are: (1) those that serve as a preliminary to…

  18. A calibration method for fringe reflection technique based on the analytical phase-slope description

    NASA Astrophysics Data System (ADS)

    Wu, Yuxiang; Yue, Huimin; Pan, Zhipeng; Liu, Yong

    2018-05-01

    The fringe reflection technique (FRT) has become one of the most popular methods for measuring the shape of specular surfaces in recent years. The existing system calibration methods for FRT usually contain two parts: camera calibration and geometric calibration. In geometric calibration, the liquid crystal display (LCD) screen position calibration is one of the most difficult steps among all the calibration procedures, and its accuracy is affected by factors such as imaging aberration, the flatness of the plane mirror, and the accuracy of the LCD screen pixel size. In this paper, based on the deduction of the FRT analytical phase-slope description, we present a novel calibration method with no requirement to calibrate the position of the LCD screen. On the other hand, the system can be arbitrarily arranged, and the imaging system can be either telecentric or non-telecentric. In our experiment measuring a sphere mirror with a 5000 mm radius, the proposed calibration method achieves a 2.5 times smaller measurement error than the geometric calibration method. In the wafer surface measuring experiment, the measurement result with the proposed calibration method is closer to the interferometer result than that of the geometric calibration method.

  19. Challenges of accelerated aging techniques for elastomer lifetime predictions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gillen, Kenneth T.; Bernstein, R.; Celina, M.

    Elastomers are often degraded when exposed to air or high humidity for extended times (years to decades). Lifetime estimates normally involve extrapolating accelerated aging results made at higher than ambient environments. Several potential problems associated with such studies are reviewed, and experimental and theoretical methods to address them are provided. The importance of verifying time–temperature superposition of degradation data is emphasized as evidence that the overall nature of the degradation process remains unchanged versus acceleration temperature. The confounding effects that occur when diffusion-limited oxidation (DLO) contributes under accelerated conditions are described, and it is shown that the DLO magnitude can be modeled by measurements or estimates of the oxygen permeability coefficient (P_Ox) and oxygen consumption rate (Φ). P_Ox and Φ measurements can be influenced by DLO, and it is demonstrated how confident values can be derived. In addition, several experimental profiling techniques that screen for DLO effects are discussed. Values of Φ taken from high temperature to temperatures approaching ambient can be used to more confidently extrapolate accelerated aging results for air-aged materials, and many studies now show that Arrhenius extrapolations bend to lower activation energies as aging temperatures are lowered. Furthermore, best approaches for accelerated aging extrapolations of humidity-exposed materials are also offered.

  20. Challenges of accelerated aging techniques for elastomer lifetime predictions

    DOE PAGES

    Gillen, Kenneth T.; Bernstein, R.; Celina, M.

    2015-03-01

    Elastomers are often degraded when exposed to air or high humidity for extended times (years to decades). Lifetime estimates normally involve extrapolating accelerated aging results made at higher than ambient environments. Several potential problems associated with such studies are reviewed, and experimental and theoretical methods to address them are provided. The importance of verifying time–temperature superposition of degradation data is emphasized as evidence that the overall nature of the degradation process remains unchanged versus acceleration temperature. The confounding effects that occur when diffusion-limited oxidation (DLO) contributes under accelerated conditions are described, and it is shown that the DLO magnitude can be modeled by measurements or estimates of the oxygen permeability coefficient (P_Ox) and oxygen consumption rate (Φ). P_Ox and Φ measurements can be influenced by DLO, and it is demonstrated how confident values can be derived. In addition, several experimental profiling techniques that screen for DLO effects are discussed. Values of Φ taken from high temperature to temperatures approaching ambient can be used to more confidently extrapolate accelerated aging results for air-aged materials, and many studies now show that Arrhenius extrapolations bend to lower activation energies as aging temperatures are lowered. Furthermore, best approaches for accelerated aging extrapolations of humidity-exposed materials are also offered.
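
    As a hedged illustration of the Arrhenius extrapolation discussed above (not the authors' analysis): the sketch below shifts a hypothetical high-temperature failure time to ambient temperature with a single activation energy. As the review stresses, Arrhenius plots often bend to lower activation energies near ambient, so a single-Ea extrapolation of this kind can overestimate lifetime.

    ```python
    import numpy as np

    R = 8.314  # gas constant, J/(mol K)

    def shift_factor(T, T_ref, Ea):
        """Arrhenius time-temperature shift factor a_T = exp[(Ea/R)(1/T_ref - 1/T)].

        Degradation times measured at temperature T are multiplied by a_T to
        superpose them onto the reference temperature T_ref.
        """
        return np.exp((Ea / R) * (1.0 / T_ref - 1.0 / T))

    # Hypothetical example: accelerated aging at 110 C extrapolated to 25 C
    Ea = 90e3     # activation energy in J/mol (assumed, not from the paper)
    t_110 = 30.0  # days to reach a failure criterion at 110 C (assumed)
    a_T = shift_factor(T=383.15, T_ref=298.15, Ea=Ea)
    print(f"naive single-Ea lifetime at 25 C: {t_110 * a_T:.0f} days")
    ```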

  1. Accelerated GPU based SPECT Monte Carlo simulations.

    PubMed

    Garcia, Marie-Paule; Bert, Julien; Benoit, Didier; Bardiès, Manuel; Visvikis, Dimitris

    2016-06-07

    Monte Carlo (MC) modelling is widely used in the field of single photon emission computed tomography (SPECT) as it is a reliable technique to simulate very high quality scans. This technique provides very accurate modelling of the radiation transport and particle interactions in a heterogeneous medium. Various MC codes exist for nuclear medicine imaging simulations. Recently, new strategies exploiting the computing capabilities of graphical processing units (GPU) have been proposed. This work aims at evaluating the accuracy of such GPU implementation strategies in comparison to standard MC codes in the context of SPECT imaging. GATE was considered the reference MC toolkit and used to evaluate the performance of newly developed GPU Geant4-based Monte Carlo simulation (GGEMS) modules for SPECT imaging. Radioisotopes with different photon energies were used with these various CPU and GPU Geant4-based MC codes in order to assess the best strategy for each configuration. Three different isotopes were considered: (99m)Tc, (111)In and (131)I, using a low energy high resolution (LEHR) collimator, a medium energy general purpose (MEGP) collimator and a high energy general purpose (HEGP) collimator, respectively. Point source, uniform source, cylindrical phantom and anthropomorphic phantom acquisitions were simulated using a model of the GE Infinia II 3/8" gamma camera. Both simulation platforms yielded a similar system sensitivity and image statistical quality for the various combinations. The overall acceleration factor between the GATE and GGEMS platforms derived from the same cylindrical phantom acquisition was between 18 and 27 for the different radioisotopes. Besides, a full MC simulation using an anthropomorphic phantom showed the full potential of the GGEMS platform, with a resulting acceleration factor up to 71. The good agreement with reference codes and the acceleration factors obtained support the use of GPU implementation strategies for improving computational efficiency.

  2. Paper-based analytical devices for environmental analysis.

    PubMed

    Meredith, Nathan A; Quinn, Casey; Cate, David M; Reilly, Thomas H; Volckens, John; Henry, Charles S

    2016-03-21

    The field of paper-based microfluidics has experienced rapid growth over the past decade. Microfluidic paper-based analytical devices (μPADs), originally developed for point-of-care medical diagnostics in resource-limited settings, are now being applied in new areas, such as environmental analyses. Low-cost paper sensors show great promise for on-site environmental analysis; the theme of ongoing research complements existing instrumental techniques by providing high spatial and temporal resolution for environmental monitoring. This review highlights recent applications of μPADs for environmental analysis along with technical advances that may enable μPADs to be more widely implemented in field testing.

  3. BINP accelerator based epithermal neutron source.

    PubMed

    Aleynik, V; Burdakov, A; Davydenko, V; Ivanov, A; Kanygin, V; Kuznetsov, A; Makarov, A; Sorokin, I; Taskaev, S

    2011-12-01

    An innovative facility for neutron capture therapy has been built at BINP. This facility is based on a compact vacuum-insulation tandem accelerator designed to produce proton currents up to 10 mA. Epithermal neutrons are proposed to be generated by 1.915-2.5 MeV protons bombarding a lithium target using the (7)Li(p,n)(7)Be threshold reaction. In this article, the diagnostic techniques developed for the proton beam and for neutrons are described; results of experiments on proton beam transport and neutron generation are shown and discussed, and plans are presented. Copyright © 2011 Elsevier Ltd. All rights reserved.
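
    For context, the quoted 1.915-2.5 MeV proton energies sit just above the (7)Li(p,n)(7)Be reaction threshold, which a standard non-relativistic kinematics estimate reproduces; this is a back-of-the-envelope check, not a calculation from the paper.

    ```python
    # Non-relativistic threshold estimate for the 7Li(p,n)7Be reaction:
    # E_th = -Q * (m_p + m_Li) / m_Li, with Q = -1.644 MeV.
    Q = -1.644                       # MeV, reaction Q-value
    m_p, m_Li = 1.00783, 7.01600     # projectile and target masses in u
    E_th = -Q * (m_p + m_Li) / m_Li
    print(f"E_th = {E_th:.3f} MeV")  # ~1.880 MeV, just below the 1.915 MeV quoted above
    ```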

  4. Electron Injections: A Study of Electron Acceleration by Multiple Dipolarizing Flux Bundles Using an Analytical Model

    NASA Astrophysics Data System (ADS)

    Gabrielse, C.; Angelopoulos, V.; Artemyev, A.; Runov, A.; Harris, C.

    2016-12-01

    We study energetic electron injections using an analytical model that self-consistently describes electric and magnetic field perturbations of transient, localized dipolarizing flux bundles (DFBs). Previous studies using THEMIS, Van Allen Probes, and the Magnetospheric Multiscale Mission have shown that injections can occur on short (minutes) or long (10s of minutes) timescales. These studies suggest that the short timescale injections correspond to a single DFB, whereas long timescale injections are likely caused by an aggregate of multiple DFBs, each incrementally heating the particle population. We therefore model the effects of multiple DFBs on the electron population using multi-spacecraft observations of the fields and particle fluxes to constrain the model parameters. The analytical model is the first of its kind to model multiple dipolarization fronts in order to better understand the transport and acceleration process throughout the plasma sheet. It can reproduce most injection signatures at multiple locations simultaneously, reaffirming earlier findings that multiple earthward-traveling DFBs can both transport and accelerate electrons to suprathermal energies, and can thus be considered the injections' primary driver.

  5. Modeling of phonon scattering in n-type nanowire transistors using one-shot analytic continuation technique

    NASA Astrophysics Data System (ADS)

    Bescond, Marc; Li, Changsheng; Mera, Hector; Cavassilas, Nicolas; Lannoo, Michel

    2013-10-01

    We present a one-shot current-conserving approach to model the influence of electron-phonon scattering in nano-transistors using the non-equilibrium Green's function formalism. The approach is based on the lowest order approximation (LOA) to the current and its simplest analytic continuation (LOA+AC). By means of a scaling argument, we show how both LOA and LOA+AC can be easily obtained from the first iteration of the usual self-consistent Born approximation (SCBA) algorithm. Both LOA and LOA+AC are then applied to model n-type silicon nanowire field-effect-transistors and are compared to SCBA current characteristics. In this system, the LOA fails to describe electron-phonon scattering, mainly because of the interactions with acoustic phonons at the band edges. In contrast, the LOA+AC still well approximates the SCBA current characteristics, thus demonstrating the power of analytic continuation techniques. The limits of validity of LOA+AC are also discussed, and more sophisticated and general analytic continuation techniques are suggested for more demanding cases.
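
    As a hedged sketch of the general idea (the actual LOA terms are current contributions computed from the first SCBA iteration): taking the "simplest analytic continuation" to be the geometric, Padé-type resummation of the first two series terms gives an estimate that matches the two-term sum at weak coupling but stays bounded when the second term grows.

    ```python
    def loa(i1, i2):
        """Lowest-order approximation: keep the first two terms of the series."""
        return i1 + i2

    def loa_ac(i1, i2):
        """Geometric (Pade [0/1]) resummation I = i1 / (1 - i2/i1): agrees with
        the LOA to second order but remains finite when i2 is comparable to i1."""
        return i1 / (1.0 - i2 / i1)

    # Toy series with known sum: geometric series i_n = x**n, exact sum 1/(1-x)
    x = 0.6
    i1, i2 = 1.0, x
    print(loa(i1, i2), loa_ac(i1, i2), 1.0 / (1.0 - x))  # 1.6, 2.5, 2.5
    ```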

  6. Accelerating EPI distortion correction by utilizing a modern GPU-based parallel computation.

    PubMed

    Yang, Yao-Hao; Huang, Teng-Yi; Wang, Fu-Nien; Chuang, Tzu-Chao; Chen, Nan-Kuei

    2013-04-01

    The combination of phase demodulation and field mapping is a practical method to correct echo planar imaging (EPI) geometric distortion. However, since phase dispersion accumulates in each phase-encoding step, the calculation complexity of phase demodulation is N_y-fold higher than conventional image reconstructions. Thus, correcting EPI images via phase demodulation is generally a time-consuming task. Parallel computing by employing general-purpose calculations on graphics processing units (GPU) can accelerate scientific computing if the algorithm is parallelized. This study proposes a method that incorporates the GPU-based technique into phase demodulation calculations to reduce computation time. The proposed parallel algorithm was applied to a PROPELLER-EPI diffusion tensor data set. The GPU-based phase demodulation method correctly reduced the EPI distortion and accelerated the computation. The total reconstruction time of the 16-slice PROPELLER-EPI diffusion tensor images with matrix size of 128 × 128 was reduced from 1,754 seconds to 101 seconds by utilizing the parallelized 4-GPU program. GPU computing is a promising method to accelerate EPI geometric correction. The resulting reduction in computation time of phase demodulation should accelerate postprocessing for studies performed with EPI, and should facilitate the use of the PROPELLER-EPI technique in clinical practice. Copyright © 2011 by the American Society of Neuroimaging.

  7. A technique for setting analytical thresholds in massively parallel sequencing-based forensic DNA analysis

    PubMed Central

    2017-01-01

    Amplicon (targeted) sequencing by massively parallel sequencing (PCR-MPS) is a potential method for use in forensic DNA analyses. In this application, PCR-MPS may supplement or replace other instrumental analysis methods such as capillary electrophoresis and Sanger sequencing for STR and mitochondrial DNA typing, respectively. PCR-MPS also may enable the expansion of forensic DNA analysis methods to include new marker systems such as single nucleotide polymorphisms (SNPs) and insertion/deletions (indels) that currently are assayable using various instrumental analysis methods including microarray and quantitative PCR. Acceptance of PCR-MPS as a forensic method will depend in part upon developing protocols and criteria that define the limitations of a method, including a defensible analytical threshold or method detection limit. This paper describes an approach to establish objective analytical thresholds suitable for multiplexed PCR-MPS methods. A definition is proposed for PCR-MPS method background noise, and an analytical threshold based on background noise is described. PMID:28542338

  8. A technique for setting analytical thresholds in massively parallel sequencing-based forensic DNA analysis.

    PubMed

    Young, Brian; King, Jonathan L; Budowle, Bruce; Armogida, Luigi

    2017-01-01

    Amplicon (targeted) sequencing by massively parallel sequencing (PCR-MPS) is a potential method for use in forensic DNA analyses. In this application, PCR-MPS may supplement or replace other instrumental analysis methods such as capillary electrophoresis and Sanger sequencing for STR and mitochondrial DNA typing, respectively. PCR-MPS also may enable the expansion of forensic DNA analysis methods to include new marker systems such as single nucleotide polymorphisms (SNPs) and insertion/deletions (indels) that currently are assayable using various instrumental analysis methods including microarray and quantitative PCR. Acceptance of PCR-MPS as a forensic method will depend in part upon developing protocols and criteria that define the limitations of a method, including a defensible analytical threshold or method detection limit. This paper describes an approach to establish objective analytical thresholds suitable for multiplexed PCR-MPS methods. A definition is proposed for PCR-MPS method background noise, and an analytical threshold based on background noise is described.
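
    A minimal sketch of a background-noise-based threshold, assuming a simple mean-plus-k-standard-deviations convention; the paper derives its threshold from a formal model of PCR-MPS background noise, which this toy version only approximates, and the read counts below are invented.

    ```python
    import numpy as np

    def analytical_threshold(noise_counts, k=3.0):
        """Set a read-count threshold from observed method background noise.

        noise_counts: read counts at positions known to be noise (e.g.,
        non-allelic sequences in a negative control). Counts above the
        returned threshold are treated as true signal.
        """
        noise = np.asarray(noise_counts, dtype=float)
        return noise.mean() + k * noise.std(ddof=1)

    # Hypothetical noise reads from a negative-control run
    noise = [3, 7, 2, 5, 4, 6, 3, 8, 2, 5]
    print(f"call alleles only above {analytical_threshold(noise):.1f} reads")  # ~10.7
    ```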

  9. Review on microfluidic paper-based analytical devices towards commercialisation.

    PubMed

    Akyazi, Tugce; Basabe-Desmonts, Lourdes; Benito-Lopez, Fernando

    2018-02-25

    Paper-based analytical devices introduce an innovative platform technology for fluid handling and analysis, with a wide range of applications, promoting low cost, ease of fabrication/operation, and equipment independence. This review gives a general overview of the fabrication techniques reported to date, revealing and discussing their weak points as well as the newest approaches in order to overcome current mass-production limitations and therefore enable commercialisation. Moreover, this review aims especially to highlight novel technologies appearing in the literature for the effective handling and controlling of fluids. The lack of flow control is the main problem of paper-based analytical devices, which generates obstacles for marketing and slows down the transition of paper devices from the laboratory into the consumers' hands. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. Visual analytics techniques for large multi-attribute time series data

    NASA Astrophysics Data System (ADS)

    Hao, Ming C.; Dayal, Umeshwar; Keim, Daniel A.

    2008-01-01

    Time series data commonly occur when variables are monitored over time. Many real-world applications involve the comparison of long time series across multiple variables (multi-attributes). Often business people want to compare this year's monthly sales with last year's sales to make decisions. Data warehouse administrators (DBAs) want to know their daily data loading job performance. DBAs need to detect the outliers early enough to act upon them. In this paper, two new visual analytic techniques are introduced: The color cell-based Visual Time Series Line Charts and Maps highlight significant changes over time in a long time series data and the new Visual Content Query facilitates finding the contents and histories of interesting patterns and anomalies, which leads to root cause identification. We have applied both methods to two real-world applications to mine enterprise data warehouse and customer credit card fraud data to illustrate the wide applicability and usefulness of these techniques.

  11. Multiplexed Paper Analytical Device for Quantification of Metals using Distance-Based Detection

    PubMed Central

    Cate, David M.; Noblitt, Scott D.; Volckens, John; Henry, Charles S.

    2015-01-01

    Exposure to metal-containing aerosols has been linked with adverse health outcomes for almost every organ in the human body. Commercially available techniques for quantifying particulate metals are time-intensive, laborious, and expensive; often sample analysis exceeds $100. We report a simple technique, based upon a distance-based detection motif, for quantifying metal concentrations of Ni, Cu, and Fe in airborne particulate matter using microfluidic paper-based analytical devices. Paper substrates are used to create sensors that are self-contained, self-timing, and require only a drop of sample for operation. Unlike other colorimetric approaches in paper microfluidics that rely on optical instrumentation for analysis, with distance-based detection, analyte is quantified visually based on the distance of a colorimetric reaction, similar to reading temperature on a thermometer. To demonstrate the effectiveness of this approach, Ni, Cu, and Fe were measured individually in single-channel devices; detection limits as low as 0.1, 0.1, and 0.05 µg were reported for Ni, Cu, and Fe. Multiplexed analysis of all three metals was achieved with detection limits of 1, 5, and 1 µg for Ni, Cu, and Fe. We also extended the dynamic range for multi-analyte detection by printing concentration gradients of colorimetric reagents using an off-the-shelf inkjet printer. Analyte selectivity was demonstrated for common interferences. To demonstrate utility of the method, Ni, Cu, and Fe were measured from samples of certified welding fume; levels measured with paper sensors matched known values determined gravimetrically. PMID:26009988
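
    A minimal sketch of distance-based quantification under a linear-calibration assumption; all numbers below are hypothetical, not measurements from the paper.

    ```python
    import numpy as np

    # Hypothetical calibration: color-development distance (mm) vs. metal mass (ug)
    mass_ug = np.array([0.0, 1.0, 2.0, 5.0, 10.0])
    dist_mm = np.array([0.0, 4.1, 7.9, 19.5, 38.8])

    # Distance-based detection is read like a thermometer, so a linear fit of
    # distance against deposited mass serves as the calibration curve.
    slope, intercept = np.polyfit(mass_ug, dist_mm, 1)

    def mass_from_distance(d_mm):
        """Invert the calibration to quantify an unknown sample."""
        return (d_mm - intercept) / slope

    print(f"{mass_from_distance(15.0):.2f} ug")  # unknown that developed 15 mm of color
    ```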

  12. Acceleration of FDTD mode solver by high-performance computing techniques.

    PubMed

    Han, Lin; Xi, Yanping; Huang, Wei-Ping

    2010-06-21

    A two-dimensional (2D) compact finite-difference time-domain (FDTD) mode solver is developed based on wave equation formalism in combination with the matrix pencil method (MPM). The method is validated for calculation of both real guided and complex leaky modes of typical optical waveguides against the benchmark finite-difference (FD) eigen mode solver. By taking advantage of the inherent parallel nature of the FDTD algorithm, the mode solver is implemented on graphics processing units (GPUs) using the compute unified device architecture (CUDA). It is demonstrated that the high-performance computing technique leads to significant acceleration of the FDTD mode solver, with more than 30 times improvement in computational efficiency in comparison with the conventional FDTD mode solver running on the CPU of a standard desktop computer. The computational efficiency of the accelerated FDTD method is of the same order of magnitude as that of the standard finite-difference eigen mode solver, yet it requires much less memory (e.g., less than 10%). Therefore, the new method may serve as an efficient, accurate and robust tool for mode calculation of optical waveguides even when the conventional eigen value mode solvers are no longer applicable due to memory limitation.

  13. Ultramicroelectrode Array Based Sensors: A Promising Analytical Tool for Environmental Monitoring

    PubMed Central

    Orozco, Jahir; Fernández-Sánchez, César; Jiménez-Jorquera, Cecilia

    2010-01-01

    The particular analytical performance of ultramicroelectrode arrays (UMEAs) has attracted considerable interest from the research community and has led to the development of a variety of electroanalytical applications. UMEA-based approaches have been demonstrated to be powerful, simple, rapid and cost-effective analytical tools for environmental analysis compared to available conventional electrodes and standardised analytical techniques. An overview of the fabrication processes of UMEAs, their characterization and applications carried out by the Spanish scientific community is presented. A brief explanation of theoretical aspects that highlight their electrochemical behavior is also given. Finally, the applications of this transducer platform in the environmental field are discussed. PMID:22315551

  14. ARPEFS as an analytic technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schach von Wittenau, A.E.

    1991-04-01

    Two modifications to the ARPEFS technique are introduced. These are studied using p(2 × 2)S/Cu(001) as a model system. The first modification is the obtaining of ARPEFS χ(k) curves at temperatures as low as our equipment will permit. While adding to the difficulty of the experiment, this modification is shown to almost double the signal-to-noise ratio of normal emission p(2 × 2)S/Cu(001) χ(k) curves. This is shown by visual comparison of the raw data and by the improved precision of the extracted structural parameters. The second change is the replacement of manual fitting of the Fourier-filtered χ(k) curves by the use of the simplex algorithm for parameter determination. Again using p(2 × 2)S/Cu(001) data, this is shown to result in better agreement between experimental χ(k) curves and curves calculated based on model structures. The improved ARPEFS is then applied to p(2 × 2)S/Ni(111) and (√3 × √3)R30°S/Ni(111). For p(2 × 2)S/Cu(001) we find a S-Cu bond length of 2.26 Å, with the S adatom 1.31 Å above the fourfold hollow site. The second Cu layer appears to be corrugated. Analysis of the p(2 × 2)S/Ni(111) data indicates that the S adatom adsorbs onto the FCC threefold hollow site 1.53 Å above the Ni surface. The S-Ni bond length is determined to be 2.13 Å, indicating an outwards shift of the first-layer Ni atoms. We are unable to assign a unique structure to (√3 × √3)R30°S/Ni(111). An analysis of the strengths and weaknesses of ARPEFS as an experimental and analytic technique is presented, along with a summary of problems still to be addressed.

  15. Cost and Schedule Analytical Techniques Development

    NASA Technical Reports Server (NTRS)

    1998-01-01

    This Final Report summarizes the activities performed by Science Applications International Corporation (SAIC) under contract NAS 8-40431 "Cost and Schedule Analytical Techniques Development Contract" (CSATD) during Option Year 3 (December 1, 1997 through November 30, 1998). This Final Report is in compliance with Paragraph 5 of Section F of the contract. This CSATD contract provides technical products and deliverables in the form of parametric models, databases, methodologies, studies, and analyses to the NASA Marshall Space Flight Center's (MSFC) Engineering Cost Office (PP03) and the Program Plans and Requirements Office (PP02) and other user organizations. Detailed Monthly Reports were submitted to MSFC in accordance with the contract's Statement of Work, Section IV "Reporting and Documentation". These reports spelled out each month's specific work performed, deliverables submitted, major meetings conducted, and other pertinent information. Therefore, this Final Report will summarize these activities at a higher level. During this contract Option Year, SAIC expended 25,745 hours in the performance of tasks called out in the Statement of Work. This represents approximately 14 full-time EPs. Included are the Huntsville-based team, plus SAIC specialists in San Diego, Ames Research Center, Tampa, and Colorado Springs performing specific tasks for which they are uniquely qualified.

  16. An Example of a Hakomi Technique Adapted for Functional Analytic Psychotherapy

    ERIC Educational Resources Information Center

    Collis, Peter

    2012-01-01

    Functional Analytic Psychotherapy (FAP) is a model of therapy that lends itself to integration with other therapy models. This paper aims to provide an example to assist others in assimilating techniques from other forms of therapy into FAP. A technique from the Hakomi Method is outlined and modified for FAP. As, on the whole, psychotherapy…

  17. Many-core graph analytics using accelerated sparse linear algebra routines

    NASA Astrophysics Data System (ADS)

    Kozacik, Stephen; Paolini, Aaron L.; Fox, Paul; Kelmelis, Eric

    2016-05-01

    Graph analytics is a key component in identifying emerging trends and threats in many real-world applications. Large-scale graph analytics frameworks provide a convenient and highly-scalable platform for developing algorithms to analyze large datasets. Although conceptually scalable, these techniques exhibit poor performance on modern computational hardware. Another model of graph computation has emerged that promises improved performance and scalability by using abstract linear algebra operations as the basis for graph analysis, as laid out by the GraphBLAS standard. By using sparse linear algebra as the basis, existing highly efficient algorithms can be adapted to perform computations on the graph. This approach, however, is often less intuitive to graph analytics experts, who are accustomed to vertex-centric APIs such as Giraph, GraphX, and Tinkerpop. We are developing an implementation of the high-level operations supported by these APIs in terms of linear algebra operations. This implementation is backed by many-core implementations of the fundamental GraphBLAS operations required, and offers the advantages of both the intuitive programming model of a vertex-centric API and the performance of a sparse linear algebra implementation. This technology can reduce the number of nodes required, as well as the run-time for a graph analysis problem, enabling customers to perform more complex analysis with less hardware at lower cost. All of this can be accomplished without the requirement for the customer to make any changes to their analytics code, thanks to the compatibility with existing graph APIs.
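
    As a hedged illustration of the GraphBLAS idea that one frontier expansion of breadth-first search equals one sparse matrix-vector product (a toy CPU sketch with an invented graph, not the many-core implementation described above):

    ```python
    import numpy as np
    from scipy.sparse import csr_matrix

    # Breadth-first search expressed as repeated sparse matrix-vector products
    edges = [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4)]   # hypothetical toy graph
    n = 5
    rows, cols = zip(*edges)
    A = csr_matrix((np.ones(len(edges)), (rows, cols)), shape=(n, n))
    A = A + A.T                                        # make the graph undirected

    level = np.full(n, -1)                             # BFS level per vertex
    frontier = np.zeros(n); frontier[0] = 1.0          # start BFS at vertex 0
    level[0] = 0
    depth = 0
    while frontier.any():
        depth += 1
        reached = A.T.dot(frontier) > 0                # one SpMV = one BFS hop
        new = reached & (level < 0)                    # unvisited vertices only
        level[new] = depth
        frontier = new.astype(float)
    print(level)                                       # [0 1 1 2 3]
    ```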

  18. Analytical techniques of pilot scanning behavior and their application

    NASA Technical Reports Server (NTRS)

    Harris, R. L., Sr.; Glover, B. J.; Spady, A. A., Jr.

    1986-01-01

    The state of the art of oculometric data analysis techniques and their applications in certain research areas such as pilot workload, information transfer provided by various display formats, crew role in automated systems, and pilot training are documented. These analytical techniques produce the following data: real-time viewing of the pilot's scanning behavior, average dwell times, dwell percentages, instrument transition paths, dwell histograms, and entropy rate measures. These types of data are discussed, and overviews of the experimental setup, data analysis techniques, and software are presented. A glossary of terms frequently used in pilot scanning behavior and a bibliography of reports on related research sponsored by NASA Langley Research Center are also presented.
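
    A minimal sketch of three of the listed quantities (dwell percentages, an instrument transition matrix, and an entropy-rate measure) computed from a hypothetical fixation sequence; the actual oculometric pipeline is more involved.

    ```python
    import numpy as np
    from collections import Counter

    # Hypothetical fixation sequence over cockpit instruments
    seq = ["ADI", "ALT", "ADI", "ASI", "ADI", "ALT", "ADI", "ASI", "ADI", "ADI"]

    # Dwell percentages: fraction of fixations on each instrument
    counts = Counter(seq)
    dwell_pct = {k: 100.0 * v / len(seq) for k, v in counts.items()}

    # First-order transition matrix between instruments
    names = sorted(counts)
    idx = {k: i for i, k in enumerate(names)}
    T = np.zeros((len(names), len(names)))
    for a, b in zip(seq, seq[1:]):
        T[idx[a], idx[b]] += 1
    P = T / T.sum(axis=1, keepdims=True)          # row-stochastic probabilities

    # Entropy rate (bits per transition) of the scan pattern
    pi = T.sum(axis=1) / T.sum()                  # empirical state frequencies
    H = -sum(pi[i] * P[i, j] * np.log2(P[i, j])
             for i in range(len(names)) for j in range(len(names)) if P[i, j] > 0)
    print(dwell_pct, H)
    ```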

  19. General Analytical Schemes for the Characterization of Pectin-Based Edible Gelled Systems

    PubMed Central

    Haghighi, Maryam; Rezaei, Karamatollah

    2012-01-01

    Pectin-based gelled systems have gained increasing attention for the design of newly developed food products. For this reason, the characterization of such formulas is a necessity in order to present scientific data and to introduce an appropriate finished product to the industry. Various analytical techniques are available for the evaluation of the systems formulated on the basis of pectin and the designed gel. In this paper, general analytical approaches for the characterization of pectin-based gelled systems were categorized into several subsections including physicochemical analysis, visual observation, textural/rheological measurement, microstructural image characterization, and psychorheological evaluation. Three-dimensional trials to assess correlations among microstructure, texture, and taste were also discussed. Practical examples of advanced objective techniques including experimental setups for small and large deformation rheological measurements and microstructural image analysis were presented in more details. PMID:22645484

  20. Efficiency of analytical and sampling-based uncertainty propagation in intensity-modulated proton therapy

    NASA Astrophysics Data System (ADS)

    Wahl, N.; Hennig, P.; Wieser, H. P.; Bangert, M.

    2017-07-01

    The sensitivity of intensity-modulated proton therapy (IMPT) treatment plans to uncertainties can be quantified and mitigated with robust/min-max and stochastic/probabilistic treatment analysis and optimization techniques. Those methods usually rely on sparse random, importance, or worst-case sampling. Inevitably, this imposes a trade-off between computational speed and accuracy of the uncertainty propagation. Here, we investigate analytical probabilistic modeling (APM) as an alternative for uncertainty propagation and minimization in IMPT that does not rely on scenario sampling. APM propagates probability distributions over range and setup uncertainties via a Gaussian pencil-beam approximation into moments of the probability distributions over the resulting dose in closed form. It supports arbitrary correlation models and allows for efficient incorporation of fractionation effects regarding random and systematic errors. We evaluate the trade-off between run-time and accuracy of APM uncertainty computations on three patient datasets. Results are compared against reference computations facilitating importance and random sampling. Two approximation techniques to accelerate uncertainty propagation and minimization based on probabilistic treatment plan optimization are presented. Runtimes are measured on CPU and GPU platforms, dosimetric accuracy is quantified in comparison to a sampling-based benchmark (5000 random samples). APM accurately propagates range and setup uncertainties into dose uncertainties at competitive run-times (GPU ≤ 5 min). The resulting standard deviation (expectation value) of dose shows average global γ(3%/3 mm) pass rates between 94.2% and 99.9% (98.4% and 100.0%). All investigated importance sampling strategies provided less accuracy at higher run-times considering only a single fraction. Considering fractionation, APM uncertainty propagation and treatment plan optimization was proven to be possible at constant time complexity

  1. Efficiency of analytical and sampling-based uncertainty propagation in intensity-modulated proton therapy.

    PubMed

    Wahl, N; Hennig, P; Wieser, H P; Bangert, M

    2017-06-26

    The sensitivity of intensity-modulated proton therapy (IMPT) treatment plans to uncertainties can be quantified and mitigated with robust/min-max and stochastic/probabilistic treatment analysis and optimization techniques. Those methods usually rely on sparse random, importance, or worst-case sampling. Inevitably, this imposes a trade-off between computational speed and accuracy of the uncertainty propagation. Here, we investigate analytical probabilistic modeling (APM) as an alternative for uncertainty propagation and minimization in IMPT that does not rely on scenario sampling. APM propagates probability distributions over range and setup uncertainties via a Gaussian pencil-beam approximation into moments of the probability distributions over the resulting dose in closed form. It supports arbitrary correlation models and allows for efficient incorporation of fractionation effects regarding random and systematic errors. We evaluate the trade-off between run-time and accuracy of APM uncertainty computations on three patient datasets. Results are compared against reference computations facilitating importance and random sampling. Two approximation techniques to accelerate uncertainty propagation and minimization based on probabilistic treatment plan optimization are presented. Runtimes are measured on CPU and GPU platforms, dosimetric accuracy is quantified in comparison to a sampling-based benchmark (5000 random samples). APM accurately propagates range and setup uncertainties into dose uncertainties at competitive run-times (GPU ≤ 5 min). The resulting standard deviation (expectation value) of dose shows average global γ(3%/3 mm) pass rates between 94.2% and 99.9% (98.4% and 100.0%). All investigated importance sampling strategies provided less accuracy at higher run-times considering only a single fraction. Considering fractionation, APM uncertainty propagation and treatment plan optimization was proven to be possible at constant time complexity
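
    As a hedged sketch of the closed-form moment-propagation idea (not APM's actual Gaussian pencil-beam integrals): if the dose is linearized in the range and setup errors, the mean and covariance of the dose follow in closed form from the input covariance, with no scenario sampling; all matrices below are invented placeholders.

    ```python
    import numpy as np

    # Dose model linearized around the nominal scenario: d(x) ~ d0 + J (x - mu),
    # with x the vector of range/setup errors (zero-mean Gaussian assumed).
    rng = np.random.default_rng(0)
    n_vox, n_err = 4, 3
    d0 = rng.uniform(1.0, 2.0, n_vox)          # nominal dose per voxel (hypothetical)
    J = rng.normal(0.0, 0.1, (n_vox, n_err))   # dose sensitivity to each error (hypothetical)
    Sigma = np.diag([1.0, 2.0, 2.0]) ** 2      # input covariance, mm^2 (assumed)

    mean_dose = d0                             # E[d] = d0 for zero-mean errors
    cov_dose = J @ Sigma @ J.T                 # Cov[d] = J Sigma J^T, in closed form
    std_dose = np.sqrt(np.diag(cov_dose))      # per-voxel dose standard deviation
    print(mean_dose, std_dose)
    ```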

  2. Analytical solution of the problem of acceleration of cargo by a bridge crane with constant acceleration at elimination of swings of a cargo rope

    NASA Astrophysics Data System (ADS)

    Korytov, M. S.; Shcherbakov, V. S.; Titenko, V. V.

    2018-01-01

    Limiting the swing of a bridge crane's cargo rope is a matter of urgency, as it can significantly improve the efficiency and safety of the work performed. To completely damp the pendulum swinging of the cargo after the bridge or the crane trolley has accelerated to its maximum speed, the acceleration process must, under conventional electric motor control, be split into at least three intervals. For a dynamic system describing the swinging of a bridge crane load on a flexible cable suspension in a vertical plane, an analytical solution was obtained for the time dependence of the cargo rope angle relative to the gravitational vertical when the cargo suspension point moves with constant acceleration. The resulting analytical dependence of the cargo rope angle and its first derivative makes it possible to split the motion of the cargo suspension point into three stages of acceleration and braking with different accelerations and to bring the suspension point to its maximum speed, while satisfying the condition that swings of the cargo rope relative to the gravitational vertical are eliminated. Examples are given of bringing the system to the maximum speed under these constraints while eliminating rope swing.
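
    A small simulation sketch of the underlying mechanism, assuming small rope angles: with the suspension point under constant acceleration, the load oscillates about a shifted equilibrium, and an acceleration stage lasting an integer number of pendulum periods ends with no residual swing, which is why splitting the dispersal into stages can eliminate swinging. The paper's three-stage schedule differs in detail; rope length and acceleration below are assumed.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    g, L = 9.81, 10.0                       # gravity (m/s^2) and rope length (m, assumed)
    T = 2 * np.pi * np.sqrt(L / g)          # small-swing pendulum period

    def rhs(t, y, a):
        th, om = y
        return [om, -(g / L) * th - a / L]  # linearized rope-angle dynamics, pivot accel a

    # One stage of constant acceleration lasting exactly one pendulum period
    a = 0.5                                 # m/s^2, assumed
    sol = solve_ivp(rhs, (0.0, T), [0.0, 0.0], args=(a,), rtol=1e-9, atol=1e-12)
    theta_end, omega_end = sol.y[:, -1]
    print(theta_end, omega_end)             # both ~0: the stage ends with no residual swing
    ```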

  3. Analytical techniques for characterization of cyclodextrin complexes in the solid state: A review.

    PubMed

    Mura, Paola

    2015-09-10

    Cyclodextrins are cyclic oligosaccharides able to form inclusion complexes with a variety of hydrophobic guest molecules, positively modifying their physicochemical properties. A thorough analytical characterization of cyclodextrin complexes is of fundamental importance to provide adequate support in the selection of the most suitable cyclodextrin for each guest molecule, and also in view of possible future patenting and marketing of drug-cyclodextrin formulations. The demonstration of the actual formation of a drug-cyclodextrin inclusion complex in solution does not guarantee its existence also in the solid state. Moreover, the technique used to prepare the solid complex can strongly influence the properties of the final product. Therefore, an appropriate characterization of the drug-cyclodextrin solid systems obtained also has a key role in guiding the choice of the most effective preparation method, i.e., the one best able to maximize host-guest interactions. The analytical characterization of drug-cyclodextrin solid systems and the assessment of the actual inclusion complex formation is not a simple task and involves the combined use of several analytical techniques, whose results have to be evaluated together. The objective of the present review is to present a general prospect of the principal analytical techniques which can be employed for a suitable characterization of drug-cyclodextrin systems in the solid state, evidencing their respective potential advantages and limits. The applications of each examined technique are described and discussed by pertinent examples from the literature. Copyright © 2015 Elsevier B.V. All rights reserved.

  4. Recent developments in computer vision-based analytical chemistry: A tutorial review.

    PubMed

    Capitán-Vallvey, Luis Fermín; López-Ruiz, Nuria; Martínez-Olmos, Antonio; Erenas, Miguel M; Palma, Alberto J

    2015-10-29

    Chemical analysis based on colour changes recorded with imaging devices is gaining increasing interest. This is due to its several significant advantages, such as simplicity of use, and the fact that it is easily combinable with portable and widely distributed imaging devices, resulting in friendly analytical procedures in many areas that demand out-of-lab applications for in situ and real-time monitoring. This tutorial review covers computer vision-based analytical chemistry (CVAC) procedures and systems from 2005 to 2015, a period of time when 87.5% of the papers on this topic were published. The background regarding colour spaces and recent analytical system architectures of interest in analytical chemistry is presented in the form of a tutorial. Moreover, issues regarding images, such as the influence of illuminants, and the most relevant techniques for processing and analysing digital images are addressed. Some of the most relevant applications are then detailed, highlighting their main characteristics. Finally, our opinion about future perspectives is discussed. Copyright © 2015 Elsevier B.V. All rights reserved.

  5. On the Applications of IBA Techniques to Biological Samples Analysis: PIXE and RBS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Falcon-Gonzalez, J. M.; Bernal-Alvarado, J.; Sosa, M.

    2008-08-11

    The analytical techniques based on ion beams, or IBA techniques, give quantitative information on elemental concentration in samples of a wide variety of nature. In this work, we focus on the PIXE technique, analyzing thick target biological specimens (TTPIXE) using 3 MeV protons produced by an electrostatic accelerator. A nuclear microprobe was used performing PIXE and RBS simultaneously, in order to resolve the uncertainties produced in absolute PIXE quantification. The advantages of using both techniques and a nuclear microprobe are discussed. Quantitative results are shown to illustrate the multielemental resolution of the PIXE technique; for this, a blood standard was used.

  6. Fourier-based integration of quasi-periodic gait accelerations for drift-free displacement estimation using inertial sensors.

    PubMed

    Sabatini, Angelo Maria; Ligorio, Gabriele; Mannini, Andrea

    2015-11-23

    In biomechanical studies Optical Motion Capture Systems (OMCS) are considered the gold standard for determining the orientation and the position (pose) of an object in a global reference frame. However, the use of OMCS can be difficult, which has prompted research on alternative sensing technologies, such as body-worn inertial sensors. We developed a drift-free method to estimate the three-dimensional (3D) displacement of a body part during cyclical motions using body-worn inertial sensors. We performed the Fourier analysis of the stride-by-stride estimates of the linear acceleration, which were obtained by transposing the specific forces measured by the tri-axial accelerometer into the global frame using a quaternion-based orientation estimation algorithm and detecting when each stride began using a gait-segmentation algorithm. The time integration was performed analytically using the Fourier series coefficients; the inverse Fourier series was then taken for reconstructing the displacement over each single stride. The displacement traces were concatenated and spline-interpolated to obtain the entire trace. The method was applied to estimate the motion of the lower trunk of healthy subjects who walked on a treadmill and it was validated using OMCS reference 3D displacement data; different approaches were tested for transposing the measured specific force into the global frame, segmenting the gait and performing time integration (numerically and analytically). The widths of the limits of agreement were computed between each tested method and the OMCS reference method for each anatomical direction: Medio-Lateral (ML), VerTical (VT) and Antero-Posterior (AP); using the proposed method, it was observed that the vertical component of displacement (VT) was within ±4 mm (±1.96 standard deviation) of OMCS data and each component of horizontal displacement (ML and AP) was within ±9 mm of OMCS data. Fourier harmonic analysis was applied to model stride-by-stride linear accelerations.
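
    The analytic integration step described above is compact enough to sketch directly (synthetic harmonics in place of real gait data; the sampling rate and stride period are illustrative): each Fourier coefficient of the acceleration is divided by i·k·ω₀ once per integration, and the DC term is dropped, which is what makes the estimate drift-free.

    ```python
    import numpy as np

    # Frequency-domain double integration of a periodic acceleration signal:
    # each Fourier coefficient is divided by (i k w0) per integration and the
    # DC term is dropped, which removes integration drift. Synthetic harmonics
    # stand in for real gait data; sampling rate and stride period are made up.
    fs, T = 200.0, 1.2                       # [Hz], stride duration [s]
    t = np.arange(0.0, T, 1.0 / fs)
    w0 = 2.0 * np.pi / T                     # fundamental angular frequency

    acc = 1.5 * np.cos(w0 * t) + 0.4 * np.sin(2.0 * w0 * t)

    A = np.fft.rfft(acc)
    k = np.arange(A.size)                    # harmonic index
    A[0] = 0.0                               # drop the DC term -> drift-free
    denom = (1j * k * w0) ** 2               # two analytic integrations
    denom[0] = 1.0                           # avoid 0/0 at k = 0
    disp = np.fft.irfft(A / denom, n=t.size)

    # closed-form displacement of the same harmonics, for comparison
    ref = -1.5 / w0**2 * np.cos(w0 * t) - 0.4 / (2.0 * w0) ** 2 * np.sin(2.0 * w0 * t)
    print(np.max(np.abs(disp - ref)))        # ~1e-16 for an exactly periodic input
    ```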

  7. Staging of RF-accelerating Units in a MEMS-based Ion Accelerator

    NASA Astrophysics Data System (ADS)

    Persaud, A.; Seidl, P. A.; Ji, Q.; Feinberg, E.; Waldron, W. L.; Schenkel, T.; Ardanuc, S.; Vinayakumar, K. B.; Lal, A.

    Multiple Electrostatic Quadrupole Array Linear Accelerators (MEQALACs) provide an opportunity to realize compact radio-frequency (RF) accelerator structures that can deliver very high beam currents. MEQALACs have been previously realized with acceleration gap distances and beam aperture sizes of the order of centimeters. Through advances in Micro-Electro-Mechanical Systems (MEMS) fabrication, MEQALACs can now be scaled down to the sub-millimeter regime and batch processed on wafer substrates. In this paper we show first results from using three RF stages in a compact MEMS-based ion accelerator. The results presented show proof-of-concept with accelerator structures formed from printed circuit boards using a 3 × 3 beamlet arrangement and noble gas ions at 10 keV. We present a simple model to describe the measured results. We also discuss some of the scaling behaviour of a compact MEQALAC. The MEMS-based approach enables a low-cost, highly versatile accelerator covering a wide range of currents (10 μA to 100 mA) and beam energies (100 keV to several MeV). Applications include ion-beam analysis, mass spectrometry, materials processing, and at very high beam powers, plasma heating.

  8. Staging of RF-accelerating Units in a MEMS-based Ion Accelerator

    DOE PAGES

    Persaud, A.; Seidl, P. A.; Ji, Q.; ...

    2017-10-26

    Multiple Electrostatic Quadrupole Array Linear Accelerators (MEQALACs) provide an opportunity to realize compact radio-frequency (RF) accelerator structures that can deliver very high beam currents. MEQALACs have been previously realized with acceleration gap distances and beam aperture sizes of the order of centimeters. Through advances in Micro-Electro-Mechanical Systems (MEMS) fabrication, MEQALACs can now be scaled down to the sub-millimeter regime and batch processed on wafer substrates. In this paper we show first results from using three RF stages in a compact MEMS-based ion accelerator. The results presented show proof-of-concept with accelerator structures formed from printed circuit boards using a 3 × 3 beamlet arrangement and noble gas ions at 10 keV. We present a simple model to describe the measured results. We also discuss some of the scaling behaviour of a compact MEQALAC. The MEMS-based approach enables a low-cost, highly versatile accelerator covering a wide range of currents (10 μA to 100 mA) and beam energies (100 keV to several MeV). Applications include ion-beam analysis, mass spectrometry, materials processing, and at very high beam powers, plasma heating.

  9. Staging of RF-accelerating Units in a MEMS-based Ion Accelerator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Persaud, A.; Seidl, P. A.; Ji, Q.

    Multiple Electrostatic Quadrupole Array Linear Accelerators (MEQALACs) provide an opportunity to realize compact radio-frequency (RF) accelerator structures that can deliver very high beam currents. MEQALACs have been previously realized with acceleration gap distances and beam aperture sizes of the order of centimeters. Through advances in Micro-Electro-Mechanical Systems (MEMS) fabrication, MEQALACs can now be scaled down to the sub-millimeter regime and batch processed on wafer substrates. In this paper we show first results from using three RF stages in a compact MEMS-based ion accelerator. The results presented show proof-of-concept with accelerator structures formed from printed circuit boards using a 3 × 3 beamlet arrangement and noble gas ions at 10 keV. We present a simple model to describe the measured results. We also discuss some of the scaling behaviour of a compact MEQALAC. The MEMS-based approach enables a low-cost, highly versatile accelerator covering a wide range of currents (10 μA to 100 mA) and beam energies (100 keV to several MeV). Applications include ion-beam analysis, mass spectrometry, materials processing, and at very high beam powers, plasma heating.

  10. A New Paradigm for Flare Particle Acceleration

    NASA Astrophysics Data System (ADS)

    Guidoni, Silvina E.; Karpen, Judith T.; DeVore, C. Richard

    2017-08-01

    The mechanism that accelerates particles to the energies required to produce the observed high-energy impulsive emission and its spectra in solar flares is not well understood. Here, we propose a first-principle-based model of particle acceleration that produces energy spectra that closely resemble those derived from hard X-ray observations. Our mechanism uses contracting magnetic islands formed during fast reconnection in solar flares to accelerate electrons, as first proposed by Drake et al. (2006) for kinetic-scale plasmoids. We apply these ideas to MHD-scale islands formed during fast reconnection in a simulated eruptive flare. A simple analytic model based on the particles’ adiabatic invariants is used to calculate the energy gain of particles orbiting field lines in our ultrahigh-resolution, 2.5D, MHD numerical simulation of a solar eruption (flare + coronal mass ejection). Then, we analytically model electrons visiting multiple contracting islands to account for the observed high-energy flare emission. Our acceleration mechanism inherently produces sporadic emission because island formation is intermittent. Moreover, a large number of particles could be accelerated in each macroscopic island, which may explain the inferred rates of energetic-electron production in flares. We conclude that island contraction in the flare current sheet is a promising candidate for electron acceleration in solar eruptions. This work was supported in part by the NASA LWS and H-SR programs.

  11. Algal Biomass Analysis by Laser-Based Analytical Techniques—A Review

    PubMed Central

    Pořízka, Pavel; Prochazková, Petra; Prochazka, David; Sládková, Lucia; Novotný, Jan; Petrilak, Michal; Brada, Michal; Samek, Ota; Pilát, Zdeněk; Zemánek, Pavel; Adam, Vojtěch; Kizek, René; Novotný, Karel; Kaiser, Jozef

    2014-01-01

    Algal biomass, represented mainly by commercially grown algal strains, has recently found many potential applications in various fields of interest. Its utilization has been found advantageous in the fields of bioremediation, biofuel production and the food industry. This paper reviews recent developments in the analysis of algal biomass, with the main focus on Laser-Induced Breakdown Spectroscopy and Raman spectroscopy, and in part on Laser-Ablation Inductively Coupled Plasma techniques. The advantages of the selected laser-based analytical techniques are revealed and their fields of use are discussed in detail. PMID:25251409

  12. A Meta-Analytic Review of School-Based Prevention for Cannabis Use

    ERIC Educational Resources Information Center

    Porath-Waller, Amy J.; Beasley, Erin; Beirness, Douglas J.

    2010-01-01

    This investigation used meta-analytic techniques to evaluate the effectiveness of school-based prevention programming in reducing cannabis use among youth aged 12 to 19. It summarized the results from 15 studies published in peer-reviewed journals since 1999 and identified features that influenced program effectiveness. The results from the set of…

  13. Comparison of Acceleration Techniques for Selected Low-Level Bioinformatics Operations

    PubMed Central

    Langenkämper, Daniel; Jakobi, Tobias; Feld, Dustin; Jelonek, Lukas; Goesmann, Alexander; Nattkemper, Tim W.

    2016-01-01

    In recent years, clock rates of modern processors have stagnated while the demand for computing power has continued to grow. This applies particularly to the fields of life sciences and bioinformatics, where new technologies keep creating rapidly growing piles of raw data at increasing speed. The number of cores per processor has increased in an attempt to compensate for the slight increments of clock rates. This technological shift demands changes in software development, especially in the field of high performance computing, where parallelization techniques are gaining in importance due to the pressing issue of large datasets generated by, e.g., modern genomics. This paper presents an overview of state-of-the-art manual and automatic acceleration techniques and lists some applications employing these in different areas of sequence informatics. Furthermore, we provide examples for automatic acceleration of two use cases to show typical problems and gains of transforming a serial application to a parallel one. The paper should aid the reader in deciding on a technique for the problem at hand. We compare four different state-of-the-art automatic acceleration approaches (OpenMP, PluTo-SICA, PPCG, and OpenACC). Their performance and applicability for selected use cases are discussed. While optimizations targeting the CPU worked better in the complex k-mer use case, optimizers for Graphics Processing Units (GPUs) performed better in the matrix multiplication example. However, performance is only superior beyond a certain problem size, due to data migration overhead. We show that automatic code parallelization is feasible with current compiler software and yields significant increases in execution speed. Automatic optimizers for the CPU are mature and usually no additional manual adjustment is required. In contrast, some automatic parallelizers targeting GPUs still lack maturity and are limited to simple statements and structures. PMID:26904094

  14. Comparison of Acceleration Techniques for Selected Low-Level Bioinformatics Operations.

    PubMed

    Langenkämper, Daniel; Jakobi, Tobias; Feld, Dustin; Jelonek, Lukas; Goesmann, Alexander; Nattkemper, Tim W

    2016-01-01

    In recent years, clock rates of modern processors have stagnated while the demand for computing power has continued to grow. This applies particularly to the fields of life sciences and bioinformatics, where new technologies keep creating rapidly growing piles of raw data at increasing speed. The number of cores per processor has increased in an attempt to compensate for the slight increments of clock rates. This technological shift demands changes in software development, especially in the field of high performance computing, where parallelization techniques are gaining in importance due to the pressing issue of large datasets generated by, e.g., modern genomics. This paper presents an overview of state-of-the-art manual and automatic acceleration techniques and lists some applications employing these in different areas of sequence informatics. Furthermore, we provide examples for automatic acceleration of two use cases to show typical problems and gains of transforming a serial application to a parallel one. The paper should aid the reader in deciding on a technique for the problem at hand. We compare four different state-of-the-art automatic acceleration approaches (OpenMP, PluTo-SICA, PPCG, and OpenACC). Their performance and applicability for selected use cases are discussed. While optimizations targeting the CPU worked better in the complex k-mer use case, optimizers for Graphics Processing Units (GPUs) performed better in the matrix multiplication example. However, performance is only superior beyond a certain problem size, due to data migration overhead. We show that automatic code parallelization is feasible with current compiler software and yields significant increases in execution speed. Automatic optimizers for the CPU are mature and usually no additional manual adjustment is required. In contrast, some automatic parallelizers targeting GPUs still lack maturity and are limited to simple statements and structures.
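
    As a language-agnostic illustration of the manual parallelization idea above (the paper's examples use OpenMP and GPU compilers; this Python stand-in uses process-level parallelism, and all names are our own), a k-mer count can be split into overlapping chunks, counted on worker processes, and merged:

    ```python
    from collections import Counter
    from multiprocessing import Pool

    K = 4  # k-mer length

    def count_kmers(chunk: str) -> Counter:
        """Count all k-mers starting inside the chunk."""
        return Counter(chunk[i:i + K] for i in range(len(chunk) - K + 1))

    def parallel_kmer_count(seq: str, workers: int = 4) -> Counter:
        # chunks overlap by K-1 characters so no k-mer is lost or double-counted
        step = (len(seq) + workers - 1) // workers
        chunks = [seq[p:p + step + K - 1] for p in range(0, len(seq), step)]
        total = Counter()
        with Pool(workers) as pool:
            for part in pool.map(count_kmers, chunks):
                total.update(part)
        return total

    if __name__ == "__main__":  # guard required by multiprocessing on some platforms
        import random
        random.seed(1)
        seq = "".join(random.choice("ACGT") for _ in range(100_000))
        assert parallel_kmer_count(seq) == count_kmers(seq)
        print("parallel and serial counts agree")
    ```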

  15. A technique for accelerating the convergence of restarted GMRES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, A H; Jessup, E R; Manteuffel, T

    2004-03-09

    We have observed that the residual vectors at the end of each restart cycle of restarted GMRES often alternate direction in a cyclic fashion, thereby slowing convergence. We present a new technique for accelerating the convergence of restarted GMRES by disrupting this alternating pattern. The new algorithm resembles a full conjugate gradient method with polynomial preconditioning, and its implementation requires minimal changes to the standard restarted GMRES algorithm.
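
    For context, a baseline restarted GMRES run with SciPy (not the authors' accelerated variant) shows the restart-size sensitivity the technique targets; counting iterations via the callback assumes SciPy's default (legacy) per-iteration callback behavior, and the test matrix is our own toy choice.

    ```python
    import numpy as np
    from scipy.sparse import diags
    from scipy.sparse.linalg import gmres

    # Baseline restarted GMRES on a simple tridiagonal test system: shorter
    # restart cycles typically need more total iterations, the slowdown that
    # the acceleration technique above targets.
    n = 500
    A = diags([-1.0, 2.4, -1.0], [-1, 0, 1], shape=(n, n)).tocsr()
    b = np.ones(n)

    for restart in (10, 30, 100):
        iters = []                                   # one entry per callback call
        x, info = gmres(A, b, restart=restart, maxiter=2000,
                        callback=lambda res: iters.append(res))
        print(restart, len(iters), info)             # info == 0 means converged
    ```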

  16. Quality assessment of internet pharmaceutical products using traditional and non-traditional analytical techniques.

    PubMed

    Westenberger, Benjamin J; Ellison, Christopher D; Fussner, Andrew S; Jenney, Susan; Kolinski, Richard E; Lipe, Terra G; Lyon, Robbe C; Moore, Terry W; Revelle, Larry K; Smith, Anjanette P; Spencer, John A; Story, Kimberly D; Toler, Duckhee Y; Wokovich, Anna M; Buhse, Lucinda F

    2005-12-08

    This work investigated the use of non-traditional analytical methods to evaluate the quality of a variety of pharmaceutical products purchased via internet sites from foreign sources and compared the results with those obtained from conventional quality assurance methods. Traditional analytical techniques employing HPLC for potency, content uniformity, chromatographic purity and drug release profiles were used to evaluate the quality of five selected drug products (fluoxetine hydrochloride, levothyroxine sodium, metformin hydrochloride, phenytoin sodium, and warfarin sodium). Non-traditional techniques, such as near infrared spectroscopy (NIR), NIR imaging and thermogravimetric analysis (TGA), were employed to verify the results and investigate their potential as alternative testing methods. Two of 20 samples failed USP monographs for quality attributes. The additional analytical methods found 11 of 20 samples had different formulations when compared to the U.S. product. Seven of the 20 samples arrived in questionable containers, and 19 of 20 had incomplete labeling. Only 1 of the 20 samples had final packaging similar to the U.S. products. The non-traditional techniques complemented the traditional techniques used and highlighted additional quality issues for the products tested. For example, these methods detected suspect manufacturing issues (such as blending), which were not evident from traditional testing alone.

  17. Fabrication and Operation of Paper-Based Analytical Devices

    NASA Astrophysics Data System (ADS)

    Jiang, Xiao; Fan, Z. Hugh

    2016-06-01

    This review focuses on the fabrication techniques and operational components of microfluidic paper-based analytical devices (μPADs). Being low-cost, user-friendly, fast, and simple, μPADs have seen explosive growth in the literature in the last decade. Many different materials and technologies have been employed to fabricate μPADs for various applications, including those that employ patterning, the creation of physical boundaries, and three-dimensional structures. In addition to fabrication techniques, flow control and other operational components in μPADs are of great interest. These components enable μPADs to control flow rates, direct flow paths via valves, sequentially deliver reagents automatically, and display test results, all of which will make μPADs more suitable for point-of-care applications.

  18. DART-MS: A New Analytical Technique for Forensic Paint Analysis.

    PubMed

    Marić, Mark; Marano, James; Cody, Robert B; Bridge, Candice

    2018-06-05

    Automotive paint evidence is one of the most significant forms of evidence obtained in automotive-related incidents. Therefore, the analysis of automotive paint evidence is imperative in forensic casework. Most analytical schemes for automotive paint characterization involve optical microscopy, followed by infrared spectroscopy and pyrolysis-gas chromatography mass spectrometry (py-GCMS) if required. The main drawback with py-GCMS, aside from its destructive nature, is that this technique is relatively time intensive in comparison to other techniques. Direct analysis in real time-time-of-flight mass spectrometry (DART-TOFMS) may provide an alternative to py-GCMS, as the rapidity of analysis and minimal sample preparation afford a significant advantage. In this study, automotive clear coats from four vehicles were characterized by DART-TOFMS and a standard py-GCMS protocol. Principal component analysis was utilized to interpret the resultant data and suggested the two techniques provided analogous sample discrimination. Moreover, in some instances DART-TOFMS was able to identify components not observed by py-GCMS and vice versa, which indicates that the two techniques may provide complementary information. Additionally, a thermal desorption/pyrolysis DART-TOFMS methodology was also evaluated to characterize the intact paint chips from the vehicles to ascertain if the linear temperature gradient provided additional discriminatory information. All the paint samples could be discriminated based on the distinctive thermal desorption plots afforded by this technique, which may also be utilized for sample discrimination. On the basis of the results, DART-TOFMS may provide an additional tool for the forensic paint examiner.
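
    The PCA step mentioned above can be sketched as follows (synthetic spectra stand in for the DART-TOFMS data; all names and sizes are illustrative):

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    # Sketch of the chemometric step: project replicate spectra (rows = samples,
    # columns = m/z intensity bins) onto the first two principal components and
    # check that replicates of the same vehicle cluster together.
    rng = np.random.default_rng(3)
    base = rng.random((4, 300))                    # 4 hypothetical clear coats
    X = np.vstack([b + 0.02 * rng.standard_normal((5, 300)) for b in base])
    labels = np.repeat(np.arange(4), 5)            # 5 replicates per vehicle

    scores = PCA(n_components=2).fit_transform(X)
    for v in range(4):
        print(v, scores[labels == v].mean(axis=0).round(2))  # separated clusters
    ```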

  19. Standardization of chemical analytical techniques for pyrolysis bio-oil: history, challenges, and current status of methods

    DOE PAGES

    Ferrell, Jack R.; Olarte, Mariefel V.; Christensen, Earl D.; ...

    2016-07-05

    Here, we discuss the standardization of analytical techniques for pyrolysis bio-oils, including the current status of methods, and our opinions on future directions. First, the history of past standardization efforts is summarized, and both successful and unsuccessful validations of analytical techniques are highlighted. The majority of analytical standardization studies to date have tested only physical characterization techniques. In this paper, we present results from an international round robin on the validation of chemical characterization techniques for bio-oils. Techniques tested included acid number, carbonyl titrations using two different methods (one at room temperature and one at 80 °C), 31P NMR for determination of hydroxyl groups, and a quantitative gas chromatography–mass spectrometry (GC-MS) method. Both carbonyl titration and acid number methods have yielded acceptable inter-laboratory variabilities. 31P NMR produced acceptable results for aliphatic and phenolic hydroxyl groups, but not for carboxylic hydroxyl groups. As shown in previous round robins, GC-MS results were more variable. Reliable chemical characterization of bio-oils will enable upgrading research and allow for detailed comparisons of bio-oils produced at different facilities. Reliable analytics are also needed to enable an emerging bioenergy industry, as processing facilities often have different analytical needs and capabilities than research facilities. We feel that correlations in reliable characterizations of bio-oils will help strike a balance between research and industry, and will ultimately help to determine metrics for bio-oil quality. Lastly, the standardization of additional analytical methods is needed, particularly for upgraded bio-oils.

  20. Standardization of chemical analytical techniques for pyrolysis bio-oil: history, challenges, and current status of methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferrell, Jack R.; Olarte, Mariefel V.; Christensen, Earl D.

    Here, we discuss the standardization of analytical techniques for pyrolysis bio-oils, including the current status of methods, and our opinions on future directions. First, the history of past standardization efforts is summarized, and both successful and unsuccessful validations of analytical techniques are highlighted. The majority of analytical standardization studies to date have tested only physical characterization techniques. In this paper, we present results from an international round robin on the validation of chemical characterization techniques for bio-oils. Techniques tested included acid number, carbonyl titrations using two different methods (one at room temperature and one at 80 °C), 31P NMR for determination of hydroxyl groups, and a quantitative gas chromatography–mass spectrometry (GC-MS) method. Both carbonyl titration and acid number methods have yielded acceptable inter-laboratory variabilities. 31P NMR produced acceptable results for aliphatic and phenolic hydroxyl groups, but not for carboxylic hydroxyl groups. As shown in previous round robins, GC-MS results were more variable. Reliable chemical characterization of bio-oils will enable upgrading research and allow for detailed comparisons of bio-oils produced at different facilities. Reliable analytics are also needed to enable an emerging bioenergy industry, as processing facilities often have different analytical needs and capabilities than research facilities. We feel that correlations in reliable characterizations of bio-oils will help strike a balance between research and industry, and will ultimately help to determine metrics for bio-oil quality. Lastly, the standardization of additional analytical methods is needed, particularly for upgraded bio-oils.

  1. Analytic model of a laser-accelerated composite plasma target and its stability

    NASA Astrophysics Data System (ADS)

    Khudik, Vladimir; Shvets, Gennady

    2013-10-01

    A self-consistent analytical model of the monoenergetic acceleration of one- and two-species ultrathin targets irradiated by a circularly polarized laser pulse is developed. In the accelerated reference frame, the bulk plasma in the target is neutral and its parameters are assumed to be stationary. It is found that the structure of the target depends strongly on the temperatures of electrons and ions, which are both strongly influenced by the laser pulse pedestal. When the electron temperature is large, the hot electrons bounce back and forth inside the potential well formed by the ponderomotive and electrostatic potentials, while the heavy and light ions are force-balanced by the electrostatic and non-inertial fields, forming two separate layers. In the opposite limiting case, when the ion temperature is large, the hot ions are trapped in the potential well formed by the ion sheath's electric and non-inertial potentials, while the cold electrons are force-balanced by the electrostatic and ponderomotive fields. Using PIC simulations we have determined which scenario is realized in practice depending on the initial target structure and laser intensity. Target stability with respect to the Rayleigh-Taylor instability will also be discussed. This work is supported by the US DOE grants DE-FG02-04ER41321 and DE-FG02-07ER54945.

  2. A systematic FPGA acceleration design for applications based on convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Dong, Hao; Jiang, Li; Li, Tianjian; Liang, Xiaoyao

    2018-04-01

    Most FPGA accelerators for convolutional neural networks are designed to optimize the inner accelerator and ignore the optimization of the data path between the inner accelerator and the outer system. This can lead to poor performance in applications like real-time video object detection. We propose a brand new systematic FPGA acceleration design to solve this problem. This design takes the data path optimization between the inner accelerator and the outer system into consideration and optimizes the data path using techniques like hardware format transformation and frame compression. It also applies fixed-point arithmetic and a new pipeline technique to optimize the inner accelerator. All these make the final system's performance very good, reaching about 10 times the performance of the original system.
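
    A generic sketch of the fixed-point idea used inside such accelerators (an int8 example of our own; the number format and sizes are illustrative, not the paper's design):

    ```python
    import numpy as np

    # Quantize weights/activations with a power-of-two scale, perform the
    # multiply-accumulates in integers (as FPGA DSP blocks would), then
    # rescale the accumulated result back to real units.
    rng = np.random.default_rng(0)
    w = rng.standard_normal((8, 16)).astype(np.float32)   # layer weights
    x = rng.standard_normal(16).astype(np.float32)        # input activations

    frac_bits = 5                                         # fractional bits
    scale = 2 ** frac_bits
    wq = np.clip(np.round(w * scale), -128, 127).astype(np.int8)
    xq = np.clip(np.round(x * scale), -128, 127).astype(np.int8)

    acc = wq.astype(np.int32) @ xq.astype(np.int32)       # integer MACs
    y_fixed = acc / scale**2                              # back to real units
    print(np.max(np.abs(y_fixed - w @ x)))                # small quantization error
    ```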

  3. An efficient and accurate molecular alignment and docking technique using ab initio quality scoring

    PubMed Central

    Füsti-Molnár, László; Merz, Kenneth M.

    2008-01-01

    An accurate and efficient molecular alignment technique is presented based on first principle electronic structure calculations. This new scheme maximizes quantum similarity matrices in the relative orientation of the molecules and uses Fourier transform techniques for two purposes. First, building up the numerical representation of true ab initio electronic densities and their Coulomb potentials is accelerated by the previously described Fourier transform Coulomb method. Second, the Fourier convolution technique is applied for accelerating optimizations in the translational coordinates. In order to avoid any interpolation error, the necessary analytical formulas are derived for the transformation of the ab initio wavefunctions in rotational coordinates. The results of our first implementation for a small test set are analyzed in detail and compared with published results of the literature. A new way of refinement of existing shape based alignments is also proposed by using Fourier convolutions of ab initio or other approximate electron densities. This new alignment technique is generally applicable for overlap, Coulomb, kinetic energy, etc., quantum similarity measures and can be extended to a genuine docking solution with ab initio scoring. PMID:18624561
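
    The Fourier-convolution trick for the translational search can be sketched in a few lines (a generic FFT cross-correlation on a toy grid, not the authors' ab initio densities): the overlap of two densities at every rigid shift is obtained at once, and the argmax gives the best translation without looping over trial displacements.

    ```python
    import numpy as np

    # FFT cross-correlation: the correlation of densities a and b over all
    # circular shifts equals the inverse FFT of conj(FFT(a)) * FFT(b).
    rng = np.random.default_rng(0)
    a = rng.random((32, 32, 32))                  # reference density on a grid
    shift = (5, 12, 7)
    b = np.roll(a, shift, axis=(0, 1, 2))         # the same density, translated

    corr = np.fft.ifftn(np.conj(np.fft.fftn(a)) * np.fft.fftn(b)).real
    print(np.unravel_index(np.argmax(corr), corr.shape))  # recovers (5, 12, 7)
    ```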

  4. Nuclear and atomic analytical techniques in environmental studies in South America.

    PubMed

    Paschoa, A S

    1990-01-01

    The use of nuclear analytical techniques for environmental studies in South America is selectively reviewed, from the early work of Lattes with cosmic rays to the recent applications of the PIXE (particle-induced X-ray emission) technique to the study of air pollution problems in large cities such as São Paulo and Rio de Janeiro. Studies on natural radioactivity and fallout from nuclear weapons in South America are briefly examined.

  5. An analytical study of reduced-gravity liquid reorientation using a simplified marker and cell technique

    NASA Technical Reports Server (NTRS)

    Betts, W. S., Jr.

    1972-01-01

    A computer program called HOPI was developed to predict reorientation flow dynamics, wherein liquids move from one end of a closed, partially filled, rigid container to the other end under the influence of container acceleration. The program uses the simplified marker and cell numerical technique and, using explicit finite-differencing, solves the Navier-Stokes equations for an incompressible viscous fluid. The effects of turbulence are also simulated in the program. HOPI can consider curved as well as straight walled boundaries. Both free-surface and confined flows can be calculated. The program was used to simulate five liquid reorientation cases. Three of these cases simulated actual NASA LeRC drop tower test conditions while two cases simulated full-scale Centaur tank conditions. It was concluded that while HOPI can be used to analytically determine the fluid motion in a typical settling problem, there is a current need to optimize HOPI. This includes both reducing the computer usage time and also reducing the core storage required for a given size problem.

  6. Acceleration techniques and their impact on arterial input function sampling: Non-accelerated versus view-sharing and compressed sensing sequences.

    PubMed

    Benz, Matthias R; Bongartz, Georg; Froehlich, Johannes M; Winkel, David; Boll, Daniel T; Heye, Tobias

    2018-07-01

    The aim was to investigate the variation of the arterial input function (AIF) within and between various DCE MRI sequences. A dynamic flow phantom and a steady signal reference were scanned on a 3T MRI system using fast low angle shot (FLASH) 2d, FLASH3d (parallel imaging factor (P) = P0, P2, P4), volumetric interpolated breath-hold examination (VIBE) (P = P0, P3, P2 × 2, P2 × 3, P3 × 2), golden-angle radial sparse parallel imaging (GRASP), and time-resolved imaging with stochastic trajectories (TWIST). Signal over time curves were normalized and quantitatively analyzed by full width half maximum (FWHM) measurements to assess variation within and between sequences. The coefficient of variation (CV) for the steady signal reference ranged from 0.07-0.8%. The non-accelerated gradient echo FLASH2d, FLASH3d, and VIBE sequences showed low within-sequence variation with 2.1%, 1.0%, and 1.6%. The maximum FWHM CV was 3.2% for parallel imaging acceleration (VIBE P2 × 3), 2.7% for GRASP and 9.1% for TWIST. The FWHM CV between sequences ranged from 8.5-14.4% for most non-accelerated/accelerated gradient echo sequences except 6.2% for FLASH3d P0 and 0.3% for FLASH3d P2; the GRASP FWHM CV was 9.9% versus 28% for TWIST. MRI acceleration techniques vary in the reproducibility and quantification of the AIF. TWIST, as a representative of view-sharing techniques with incomplete coverage of k-space, showed the highest variation within sequences and might be less suited for reproducible quantification of the AIF. Copyright © 2018 Elsevier B.V. All rights reserved.
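
    For reference, the two metrics used above reduce to a few lines (the signal-time curves below are synthetic, and the interpolation scheme for the half-maximum crossings is our own choice):

    ```python
    import numpy as np

    # FWHM of a bolus-like signal-time curve via linear interpolation of the
    # half-maximum crossings, and the coefficient of variation (CV) across
    # repeat acquisitions.
    def fwhm(t, y):
        half = y.max() / 2.0
        above = np.where(y >= half)[0]
        i, j = above[0], above[-1]               # first/last samples above half max
        t_rise = np.interp(half, [y[i - 1], y[i]], [t[i - 1], t[i]])
        t_fall = np.interp(half, [y[j + 1], y[j]], [t[j + 1], t[j]])
        return t_fall - t_rise

    t = np.linspace(0.0, 60.0, 601)                            # seconds
    widths = [fwhm(t, np.exp(-((t - 25.0) / (8.0 + d)) ** 2))  # repeat "scans"
              for d in (0.0, 0.1, -0.1, 0.2)]
    cv = 100.0 * np.std(widths) / np.mean(widths)
    print(np.round(widths, 2), round(cv, 1))                   # FWHM per run, CV %
    ```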

  7. Analytical estimates of radial segregation in Bridgman growth from low-level steady and periodic accelerations

    NASA Astrophysics Data System (ADS)

    Naumann, Robert J.; Baugher, Charles

    1992-08-01

    Estimates of the convective flows driven by horizontal temperature gradients in the vertical Bridgman configuration are made for dilute systems subject to the low level accelerations typical of the residual accelerations experienced by a spacecraft in low Earth orbit. The estimates are made by solving the Navier-Stokes momentum equation in one dimension. The mass transport equation is then solved in two dimensions using a first-order perturbation method. This approach is valid provided the convective velocities are small compared to the growth velocity which generally requires a reduced gravity environment. If this condition is satisfied, there will be no circulating cells, and hence no convective transport along the vertical axis. However, the variations in the vertical velocity with radius will give rise to radial segregation. The approximate analytical model developed here can predict the degree of radial segregation for a variety of material and processing parameters to an accuracy well within a factor of two as compared against numerical computations of the full set of Navier-Stokes equations for steady accelerations. It has the advantage of providing more insight into the complex interplay of the processing parameters and how they affect the solute distribution in the grown crystal. This could be extremely valuable in the design of low-gravity experiments in which the intent is to control radial segregation. Also, the analysis can be extended to consider transient and periodic accelerations, which is difficult and costly to do numerically. Surprisingly, it was found that the relative radial segregation falls as the inverse cube of the frequency for periodic accelerations whose periods are short compared with the characteristic diffusion time.

  8. Evidence-based perianesthesia care: accelerated postoperative recovery programs.

    PubMed

    Pasero, Chris; Belden, Jan

    2006-06-01

    Prolonged stress response after surgery can cause numerous adverse effects, including gastrointestinal dysfunction, muscle wasting, impaired cognition, and cardiopulmonary, infectious, and thromboembolic complications. These events can delay hospital discharge, extend convalescence, and negatively impact long-term prognosis. Recent advances in perioperative management practices have allowed better control of the stress response and improved outcomes for patients undergoing surgery. At the center of the current focus on improved outcomes are evidence-based fast-track surgical techniques and what is commonly referred to as "accelerated postoperative recovery programs." These programs require a multidisciplinary, coordinated effort, and nurses are essential to their successful implementation.

  9. Accelerated testing of space mechanisms

    NASA Technical Reports Server (NTRS)

    Murray, S. Frank; Heshmat, Hooshang

    1995-01-01

    This report contains a review of various existing life prediction techniques used for a wide range of space mechanisms. Life prediction techniques utilized in other non-space fields, such as turbine engine design, are also reviewed for applicability to many space mechanism issues. The development of new concepts on how various tribological processes are involved in the life of the complex mechanisms used for space applications is examined. A 'roadmap' for the complete implementation of a tribological prediction approach for complex mechanical systems is discussed, including standard procedures for test planning, analytical models for life prediction, and experimental verification of the life prediction and accelerated testing techniques. A plan is presented to demonstrate a method for predicting the life and/or performance of a selected space mechanism mechanical component.

  10. Accelerating simultaneous algebraic reconstruction technique with motion compensation using CUDA-enabled GPU.

    PubMed

    Pang, Wai-Man; Qin, Jing; Lu, Yuqiang; Xie, Yongming; Chui, Chee-Kong; Heng, Pheng-Ann

    2011-03-01

    To accelerate the simultaneous algebraic reconstruction technique (SART) with motion compensation for speedy and quality computed tomography reconstruction by exploiting a CUDA-enabled GPU. Two core techniques are proposed to fit SART into the CUDA architecture: (1) a ray-driven projection along with hardware trilinear interpolation, and (2) a voxel-driven back-projection that can avoid redundant computation by combining CUDA shared memory. We utilize the independence of each ray and voxel on both techniques to design CUDA kernels to represent a ray in the projection and a voxel in the back-projection respectively. Thus, significant parallelization and performance boost can be achieved. For motion compensation, we rectify each ray's direction during the projection and back-projection stages based on a known motion vector field. Extensive experiments demonstrate the proposed techniques can provide faster reconstruction without compromising image quality. The processing rate is nearly 100 projections s−1, which is about 150 times faster than a CPU-based SART. The reconstructed image is compared against ground truth visually and quantitatively by peak signal-to-noise ratio (PSNR) and line profiles. We further evaluate the reconstruction quality using quantitative metrics such as signal-to-noise ratio (SNR) and mean-square-error (MSE). All these reveal that satisfactory results are achieved. The effects of major parameters such as ray sampling interval and relaxation parameter are also investigated by a series of experiments. A simulated dataset is used for testing the effectiveness of our motion compensation technique. The results demonstrate our reconstructed volume can eliminate undesirable artifacts like blurring. Our proposed method has potential to realize instantaneous presentation of 3D CT volume to physicians once the projection data are acquired.
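
    A dense-matrix miniature of the SART iteration may help fix ideas (our own toy sizes and system matrix; the paper's CUDA kernels parallelize the per-ray and per-voxel loops that this vectorized sketch hides):

    ```python
    import numpy as np

    # One SART scheme: x += lam * A^T [(b - A x) / row_sums] / col_sums.
    # The forward product A x is the ray-driven projection; the transposed
    # product A^T r is the voxel-driven back-projection.
    rng = np.random.default_rng(1)
    A = rng.random((120, 64))          # system matrix: 120 rays x 64 voxels
    x_true = rng.random(64)
    b = A @ x_true                     # simulated projection data

    x = np.zeros(64)
    row_sums = A.sum(axis=1)           # per-ray weighting
    col_sums = A.sum(axis=0)           # per-voxel weighting
    lam = 0.9                          # relaxation parameter
    for _ in range(200):
        x += lam * (A.T @ ((b - A @ x) / row_sums)) / col_sums
    print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))  # -> small
    ```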

  11. Lithium target performance evaluation for low-energy accelerator-based in vivo measurements using gamma spectroscopy.

    PubMed

    Aslam; Prestwich, W V; McNeill, F E

    2003-03-01

    The operating conditions at the McMaster KN Van de Graaff accelerator have been optimized to produce neutrons via the ⁷Li(p,n)⁷Be reaction for in vivo neutron activation analysis. In a number of earlier studies (development of an accelerator based system for in vivo neutron activation analysis measurements of manganese in humans, Ph.D. Thesis, McMaster University, Hamilton, ON, Canada; Appl. Radiat. Isot. 53 (2000) 657; in vivo measurement of some trace elements in human bone, Ph.D. Thesis, McMaster University, Hamilton, ON, Canada), a significant discrepancy between the experimental and the calculated neutron doses has been pointed out. The hypotheses formulated in the above references to explain the deviation of the experimental results from analytical calculations have been tested experimentally. The performance of the lithium target for neutron production has been evaluated by measuring the ⁷Be activity produced as a result of the (p,n) interaction with ⁷Li. In contradiction to the formulated hypotheses, lithium target performance was found to be mainly affected by inefficient target cooling and the presence of an oxide layer on the target surface. An appropriate choice of these parameters resulted in neutron yields the same as predicted by analytical calculations.

  12. The analytical representation of viscoelastic material properties using optimization techniques

    NASA Technical Reports Server (NTRS)

    Hill, S. A.

    1993-01-01

    This report presents a technique to model viscoelastic material properties with a function of the form of the Prony series. Generally, the method employed to determine the function constants requires assuming values for the exponential constants of the function and then resolving the remaining constants through linear least-squares techniques. The technique presented here allows all the constants to be determined analytically through optimization techniques. This technique is employed in a computer program named PRONY and makes use of a commercially available optimization tool developed by VMA Engineering, Inc. The PRONY program was utilized to compare the technique against previously determined models for solid rocket motor TP-H1148 propellant and V747-75 Viton fluoroelastomer. In both cases, the optimization technique generated functions that modeled the test data with at least an order of magnitude better correlation. This technique has demonstrated the capability to use small or large data sets and to use data sets that have uniformly or nonuniformly spaced data pairs. The reduction of experimental data to accurate mathematical models is a vital part of most scientific and engineering research. This technique of regression through optimization can be applied to other mathematical models that are difficult to fit to experimental data through traditional regression techniques.
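
    The report's central idea, determining all Prony-series constants by optimization rather than presetting the exponential time constants, can be sketched with a generic nonlinear least-squares call (SciPy standing in for the commercial VMA tool; the data below are synthetic):

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    # Prony-series model: E(t) = e_inf + sum_i e_i * exp(-t / tau_i).
    # All constants, including the time constants tau_i, are free parameters.
    def prony(p, t, n):
        e_inf, e, tau = p[0], np.asarray(p[1:n + 1]), np.asarray(p[n + 1:])
        return e_inf + np.sum(e[:, None] * np.exp(-t[None, :] / tau[:, None]), axis=0)

    t = np.logspace(-2, 3, 60)                        # relaxation test times
    true = np.array([1.0, 5.0, 2.0, 0.1, 10.0])       # e_inf, e1, e2, tau1, tau2
    data = prony(true, t, 2)                          # synthetic "test data"

    fit = least_squares(lambda p: prony(p, t, 2) - data,
                        x0=[0.5, 1.0, 1.0, 0.05, 5.0],
                        bounds=(1e-8, np.inf))        # keep moduli and taus positive
    print(np.round(fit.x, 3))                         # ~ the true constants
    ```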

  13. A new multi-step technique with differential transform method for analytical solution of some nonlinear variable delay differential equations.

    PubMed

    Benhammouda, Brahim; Vazquez-Leal, Hector

    2016-01-01

    This work presents an analytical solution of some nonlinear delay differential equations (DDEs) with variable delays. Such DDEs are difficult to treat numerically and cannot be solved by existing general purpose codes. A new method of steps combined with the differential transform method (DTM) is proposed as a powerful tool to solve these DDEs. This method reduces the DDEs to ordinary differential equations that are then solved by the DTM. Furthermore, we show that the solutions can be improved by the Laplace-Padé resummation method. Two examples are presented to show the efficiency of the proposed technique. The main advantage of this technique is that it possesses a simple procedure based on a few straightforward steps and can be combined with any analytical method, other than the DTM, like the homotopy perturbation method.
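
    The method of steps itself is easy to demonstrate; the sketch below uses a numerical inner solver in place of the paper's differential transform, on the classic test DDE y'(t) = −y(t − τ) with unit history (our own example, not the authors'):

    ```python
    from scipy.integrate import solve_ivp

    # Method of steps: on [k*tau, (k+1)*tau] the delayed argument t - tau falls
    # in the previous interval, where y is already known, so the DDE reduces to
    # an ordinary differential equation on each interval.
    tau, steps = 1.0, 4
    prev = lambda t: 1.0                             # history: y(t) = 1 for t <= 0
    y0 = 1.0
    for k in range(steps):
        sol = solve_ivp(lambda t, y, f=prev: [-f(t - tau)],
                        (k * tau, (k + 1) * tau), [y0],
                        dense_output=True, rtol=1e-10, atol=1e-12)
        prev = lambda t, s=sol: float(s.sol(t)[0])   # freeze this segment
        y0 = sol.y[0, -1]
        print(f"y({(k + 1) * tau:.0f}) = {y0:.6f}")  # exact: y(1) = 0, y(2) = -0.5
    ```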

  14. Quantitative evaluation of analyte transport on microfluidic paper-based analytical devices (μPADs).

    PubMed

    Ota, Riki; Yamada, Kentaro; Suzuki, Koji; Citterio, Daniel

    2018-02-07

    The transport efficiency during capillary flow-driven sample transport on microfluidic paper-based analytical devices (μPADs) made from filter paper has been investigated for a selection of model analytes (Ni²⁺, Zn²⁺, Cu²⁺, PO₄³⁻, bovine serum albumin, sulforhodamine B, amaranth) representing metal cations, complex anions, proteins and anionic molecules. For the first time, the transport of the analytical target compounds, rather than the sample liquid, has been quantitatively evaluated by means of colorimetry and absorption spectrometry-based methods. The experiments have revealed that small paperfluidic channel dimensions, additional user operation steps (e.g. control of sample volume, sample dilution, washing step) as well as the introduction of sample liquid wicking areas make it possible to increase analyte transport efficiency. It is also shown that the interaction of analytes with the negatively charged cellulosic paper substrate surface is strongly influenced by the physico-chemical properties of the model analyte and can in some cases (Cu²⁺) result in nearly complete analyte depletion during sample transport. The quantitative information gained through these experiments is expected to contribute to the development of more sensitive μPADs.

  15. Counterfeit drugs: analytical techniques for their identification.

    PubMed

    Martino, R; Malet-Martino, M; Gilard, V; Balayssac, S

    2010-09-01

    In recent years, the number of counterfeit drugs has increased dramatically, including not only "lifestyle" products but also vital medicines. Besides the threat to public health, the financial and reputational damage to pharmaceutical companies is substantial. The lack of robust information on the prevalence of fake drugs is an obstacle in the fight against drug counterfeiting. It is generally accepted that approximately 10% of drugs worldwide could be counterfeit, but it is also well known that this number covers very different situations depending on the country, the places where the drugs are purchased, and the definition of what constitutes a counterfeit drug. The chemical analysis of drugs suspected to be fake is a crucial step as counterfeiters are becoming increasingly sophisticated, rendering visual inspection insufficient to distinguish the genuine products from the counterfeit ones. This article critically reviews the recent analytical methods employed to control the quality of drug formulations, using as an example artemisinin derivatives, medicines particularly targeted by counterfeiters. Indeed, a broad panel of techniques have been reported for their analysis, ranging from simple and cheap in-field ones (colorimetry and thin-layer chromatography) to more advanced laboratory methods (mass spectrometry, nuclear magnetic resonance, and vibrational spectroscopies) through chromatographic methods, which remain the most widely used. The conclusion section of the article highlights the questions to be posed before selecting the most appropriate analytical approach.

  16. Development of Impurity Profiling Methods Using Modern Analytical Techniques.

    PubMed

    Ramachandra, Bondigalla

    2017-01-02

    This review gives a brief introduction to process- and product-related impurities and emphasizes the development of novel analytical methods for their determination. It describes the application of modern analytical techniques, particularly ultra-performance liquid chromatography (UPLC), liquid chromatography-mass spectrometry (LC-MS), high-resolution mass spectrometry (HRMS), gas chromatography-mass spectrometry (GC-MS) and high-performance thin layer chromatography (HPTLC). In addition, the application of nuclear magnetic resonance (NMR) spectroscopy is also discussed for the characterization of impurities and degradation products. The significance of the quality, efficacy and safety of drug substances/products, including the source of impurities, kinds of impurities, adverse effects caused by the presence of impurities, quality control of impurities, the necessity for the development of impurity profiling methods, identification of impurities and regulatory aspects, has been discussed. Other important aspects that have been discussed are forced degradation studies and the development of stability-indicating assay methods.

  17. Development of design technique for vacuum insulation in large size multi-aperture multi-grid accelerator for nuclear fusion.

    PubMed

    Kojima, A; Hanada, M; Tobari, H; Nishikiori, R; Hiratsuka, J; Kashiwagi, M; Umeda, N; Yoshida, M; Ichikawa, M; Watanabe, K; Yamano, Y; Grisham, L R

    2016-02-01

    Design techniques for the vacuum insulation have been developed in order to realize a reliable voltage holding capability of multi-aperture multi-grid (MAMuG) accelerators for fusion applications. In this method, the nested multi-stage configuration of the MAMuG accelerator can be uniquely designed to satisfy the target voltage within given boundary conditions. The evaluation of the voltage holding capability of each acceleration stage was based on previous experimental results on the area effect and the multi-aperture effect. Since the multi-grid effect was found to be an extension of the area effect by the total facing area, the total voltage holding capability of the multi-stage configuration can be estimated from that of a single stage by assuming the stage with the highest electric field, the total facing area, and the total number of apertures. By applying these considerations, the analysis of the 3-stage MAMuG accelerator for JT-60SA agreed well with past gap-scan experiments to within 10% variation, which demonstrates the high reliability of this method for designing MAMuG accelerators and also multi-stage high-voltage bushings.

  18. Development of design technique for vacuum insulation in large size multi-aperture multi-grid accelerator for nuclear fusion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kojima, A., E-mail: kojima.atsushi@jaea.go.jp; Hanada, M.; Tobari, H.

    Design techniques for the vacuum insulation have been developed in order to realize a reliable voltage holding capability of multi-aperture multi-grid (MAMuG) accelerators for fusion applications. In this method, the nested multi-stage configuration of the MAMuG accelerator can be uniquely designed to satisfy the target voltage within given boundary conditions. The evaluation of the voltage holding capability of each acceleration stage was based on previous experimental results on the area effect and the multi-aperture effect. Since the multi-grid effect was found to be an extension of the area effect by the total facing area, the total voltage holding capability of the multi-stage configuration can be estimated from that of a single stage by assuming the stage with the highest electric field, the total facing area, and the total number of apertures. By applying these considerations, the analysis of the 3-stage MAMuG accelerator for JT-60SA agreed well with past gap-scan experiments to within 10% variation, which demonstrates the high reliability of this method for designing MAMuG accelerators and also multi-stage high-voltage bushings.

  19. SHEAR ACCELERATION IN EXPANDING FLOWS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rieger, F. M.; Duffy, P., E-mail: frank.rieger@mpi-hd.mpg.de, E-mail: peter.duffy@ucd.ie

    Shear flows are naturally expected to occur in astrophysical environments and are potential sites of continuous non-thermal Fermi-type particle acceleration. Here we investigate the efficiency of expanding relativistic outflows in facilitating the acceleration of energetic charged particles to higher energies. To this end, the gradual shear acceleration coefficient is derived based on an analytical treatment. The results are applied to the context of the relativistic jets from active galactic nuclei. The inferred acceleration timescale is investigated for a variety of conical flow profiles (i.e., power law, Gaussian, Fermi-Dirac) and compared to the relevant radiative and non-radiative loss timescales. The results exemplify that relativistic shear flows are capable of boosting cosmic rays to extreme energies. Efficient electron acceleration, on the other hand, requires weak magnetic fields and may thus be accompanied by a delayed onset of particle energization and affect the overall jet appearance (e.g., core, ridge line, and limb-brightening).

  20. Assessing the Value of Structured Analytic Techniques in the U.S. Intelligence Community

    DTIC Science & Technology

    2016-01-01

    Analytic Techniques, and Why Do Analysts Use Them? SATs are methods of organizing and stimulating thinking about intelligence problems. These methods... thinking; and imaginative thinking techniques encourage new perspectives, insights, and alternative scenarios. Among the many SATs in use today, the... more transparent, so that other analysts and customers can better understand how the judgments were reached. SATs also facilitate group involvement

  1. Electrical field-induced extraction and separation techniques: promising trends in analytical chemistry--a review.

    PubMed

    Yamini, Yadollah; Seidi, Shahram; Rezazadeh, Maryam

    2014-03-03

    Sample preparation is an important issue in analytical chemistry, and is often a bottleneck in chemical analysis. So, the major incentive for recent research has been to attain faster, simpler, less expensive, and more environmentally friendly sample preparation methods. The use of auxiliary energies, such as heat, ultrasound, and microwave, is one of the strategies that have been employed in sample preparation to reach the above purposes. The application of an electrical driving force is the current state of the art, which presents new possibilities for simplifying and shortening the sample preparation process as well as enhancing its selectivity. The electrical driving force has scarcely been utilized in comparison with other auxiliary energies. In this review, the different roles of the electrical driving force (as a powerful auxiliary energy) in various extraction techniques, including liquid-, solid-, and membrane-based methods, have been taken into consideration. References are also provided on relevant developments in separation techniques and Lab-on-a-Chip (LOC) systems. All aspects of the electrical driving force in extraction and separation methods are too specific to be treated in this contribution. However, the main aim of this review is to provide a brief knowledge of the different fields of analytical chemistry, with an emphasis on the latest efforts put into electrically assisted membrane-based sample preparation systems. The advantages and disadvantages of these approaches as well as the new achievements in these areas have been discussed, which might be helpful for further progress in the future. Copyright © 2013 Elsevier B.V. All rights reserved.

  2. Observation of acceleration and deceleration in gigaelectron-volt-per-metre gradient dielectric wakefield accelerators

    DOE PAGES

    O’Shea, B. D.; Andonian, G.; Barber, S. K.; ...

    2016-09-14

    There is urgent need to develop new acceleration techniques capable of exceeding gigaelectron-volt-per-metre (GeV m−1) gradients in order to enable future generations of both light sources and high-energy physics experiments. To address this need, short wavelength accelerators based on wakefields, where an intense relativistic electron beam radiates the demanded fields directly into the accelerator structure or medium, are currently under intense investigation. One such wakefield based accelerator, the dielectric wakefield accelerator, uses a dielectric lined-waveguide to support a wakefield used for acceleration. Here we show gradients of 1.347±0.020 GeV m−1 using a dielectric wakefield accelerator of 15 cm length, with sub-millimetre transverse aperture, by measuring changes of the kinetic state of relativistic electron beams. We follow this measurement by demonstrating accelerating gradients of 320±17 MeV m−1. As a result, both measurements improve on previous measurements by an order of magnitude and show promise for dielectric wakefield accelerators as sources of high-energy electrons.

  3. Observation of acceleration and deceleration in gigaelectron-volt-per-metre gradient dielectric wakefield accelerators

    PubMed Central

    O'Shea, B. D.; Andonian, G.; Barber, S. K.; Fitzmorris, K. L.; Hakimi, S.; Harrison, J.; Hoang, P. D.; Hogan, M. J.; Naranjo, B.; Williams, O. B.; Yakimenko, V.; Rosenzweig, J. B.

    2016-01-01

    There is urgent need to develop new acceleration techniques capable of exceeding gigaelectron-volt-per-metre (GeV m−1) gradients in order to enable future generations of both light sources and high-energy physics experiments. To address this need, short wavelength accelerators based on wakefields, where an intense relativistic electron beam radiates the demanded fields directly into the accelerator structure or medium, are currently under intense investigation. One such wakefield based accelerator, the dielectric wakefield accelerator, uses a dielectric lined-waveguide to support a wakefield used for acceleration. Here we show gradients of 1.347±0.020 GeV m−1 using a dielectric wakefield accelerator of 15 cm length, with sub-millimetre transverse aperture, by measuring changes of the kinetic state of relativistic electron beams. We follow this measurement by demonstrating accelerating gradients of 320±17 MeV m−1. Both measurements improve on previous measurements by an order of magnitude and show promise for dielectric wakefield accelerators as sources of high-energy electrons. PMID:27624348

  4. Coarse mesh and one-cell block inversion based diffusion synthetic acceleration

    NASA Astrophysics Data System (ADS)

    Kim, Kang-Seog

    DSA (Diffusion Synthetic Acceleration) has been developed to accelerate the SN transport iteration. We have developed solution techniques for the diffusion equations of FLBLD (Fully Lumped Bilinear Discontinuous), SCB (Simple Corner Balance) and UCB (Upstream Corner Balance) modified 4-step DSA in x-y geometry. Our first multi-level method includes a block Gauss-Seidel iteration for the discontinuous diffusion equation, uses the continuous diffusion equation derived from the asymptotic analysis, and avoids void cell calculation. We implemented this multi-level procedure and performed model problem calculations. The results showed that the FLBLD, SCB and UCB modified 4-step DSA schemes with this multi-level technique are unconditionally stable and rapidly convergent. We suggested a simplified multi-level technique for FLBLD, SCB and UCB modified 4-step DSA. This new procedure does not include iterations on the diffusion calculation or the residual calculation. Fourier analysis results showed that this new procedure was as rapidly convergent as conventional modified 4-step DSA. We developed new DSA procedures coupled with 1-CI (Cell Block Inversion) transport which can be easily parallelized. We showed that 1-CI based DSA schemes preceded by SI (Source Iteration) are efficient and rapidly convergent for LD (Linear Discontinuous) and LLD (Lumped Linear Discontinuous) in slab geometry and for BLD (Bilinear Discontinuous) and FLBLD in x-y geometry. For 1-CI based DSA without SI in slab geometry, the results showed that this procedure is very efficient and effective for all cases. We also showed that 1-CI based DSA in x-y geometry was not effective for thin mesh spacings, but is effective and rapidly convergent for intermediate and thick mesh spacings. We demonstrated that the diffusion equation discretized on a coarse mesh could be employed to accelerate the transport equation. Our results showed that coarse mesh DSA is unconditionally stable and is as rapidly convergent
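
    Since this record is dense, a minimal sketch of the underlying idea may help: source iteration (SI) converges slowly when scattering dominates, and DSA accelerates it by solving a cheap diffusion problem for the iteration error. The sketch below is ours and uses a plain diamond-difference discrete-ordinates sweep with an unmodified diffusion correction in slab geometry, not the FLBLD/SCB/UCB discretizations studied in the record; all parameters are illustrative.

    ```python
    import numpy as np

    # Minimal DSA-accelerated source iteration: 1-D slab, one group.
    nx, width = 100, 10.0
    dx = width / nx
    sig_t, sig_s, q = 1.0, 0.99, 1.0            # scattering ratio c = 0.99
    mu, w = np.polynomial.legendre.leggauss(8)  # S8 quadrature on [-1, 1]

    def transport_sweep(phi):
        """One discrete-ordinates sweep (diamond difference, vacuum BCs)."""
        emission = 0.5 * (sig_s * phi + q)
        phi_new = np.zeros(nx)
        for m, wm in zip(mu, w):
            psi = 0.0                            # incoming angular flux
            cells = range(nx) if m > 0 else range(nx - 1, -1, -1)
            for i in cells:
                psi_out = ((abs(m)/dx - 0.5*sig_t)*psi + emission[i]) \
                          / (abs(m)/dx + 0.5*sig_t)
                phi_new[i] += wm * 0.5 * (psi + psi_out)   # cell-average flux
                psi = psi_out
        return phi_new

    # Diffusion operator for the error equation: -D f'' + sig_a f = sig_s * residual
    sig_a, D = sig_t - sig_s, 1.0 / (3.0 * sig_t)
    A = (np.diag(np.full(nx, 2*D/dx**2 + sig_a))
         + np.diag(np.full(nx - 1, -D/dx**2), 1)
         + np.diag(np.full(nx - 1, -D/dx**2), -1))

    phi = np.zeros(nx)
    for it in range(500):
        phi_half = transport_sweep(phi)                    # SI half-step
        f = np.linalg.solve(A, sig_s * (phi_half - phi))   # DSA correction
        phi_next = phi_half + f
        if np.abs(phi_next - phi).max() < 1e-8:
            break
        phi = phi_next
    print(f"converged in {it} iterations")
    ```

    Dropping the correction step (setting f = 0) recovers plain source iteration, which for c = 0.99 needs hundreds of iterations; the diffusion solve removes the slowly-converging diffusive error modes.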

  5. Using machine learning to accelerate sampling-based inversion

    NASA Astrophysics Data System (ADS)

    Valentine, A. P.; Sambridge, M.

    2017-12-01

    In most cases, a complete solution to a geophysical inverse problem (including robust understanding of the uncertainties associated with the result) requires a sampling-based approach. However, the computational burden is high, and proves intractable for many problems of interest. There is therefore considerable value in developing techniques that can accelerate sampling procedures. The main computational cost lies in evaluation of the forward operator (e.g. calculation of synthetic seismograms) for each candidate model. Modern machine learning techniques, such as Gaussian Processes, offer a route for constructing a computationally cheap approximation to this calculation, which can replace the accurate solution during sampling. Importantly, the accuracy of the approximation can be refined as inversion proceeds, to ensure high-quality results. In this presentation, we describe and demonstrate this approach, which can be seen as an extension of popular current methods, such as the Neighbourhood Algorithm, and bridges the gap between prior- and posterior-sampling frameworks.
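
    A toy rendering of this idea (ours, not the authors' code) using scikit-learn: an expensive forward operator is replaced by a Gaussian-process surrogate inside a Metropolis sampler, and the surrogate is refined with an exact evaluation whenever its predictive uncertainty is too large. The forward function, noise level, and refinement threshold are all invented for illustration.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    rng = np.random.default_rng(0)

    def forward(m):                      # stand-in for an expensive solver
        return np.sin(3.0 * m) + 0.5 * m**2

    sigma_d = 0.05
    d_obs = forward(0.7)                 # synthetic datum; true model m = 0.7

    # Seed the surrogate with a handful of exact evaluations.
    M = rng.uniform(-2, 2, 8).reshape(-1, 1)
    F = forward(M).ravel()
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-6)
    gp.fit(M, F)

    def log_like(pred):
        return -0.5 * ((d_obs - pred) / sigma_d) ** 2

    m, ll = 0.0, log_like(gp.predict(np.array([[0.0]]))[0])
    samples = []
    for _ in range(5000):
        m_prop = m + 0.3 * rng.standard_normal()
        pred, sd = gp.predict(np.array([[m_prop]]), return_std=True)
        if sd[0] > 0.02:                 # surrogate too uncertain: refine it
            y = forward(m_prop)
            M = np.vstack([M, [[m_prop]]])
            F = np.append(F, y)
            gp.fit(M, F)
            pred = np.array([y])
        ll_prop = log_like(pred[0])
        if np.log(rng.random()) < ll_prop - ll:   # Metropolis accept/reject
            m, ll = m_prop, ll_prop
        samples.append(m)
    print("posterior mean:", np.mean(samples[1000:]))
    ```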

  6. Problem-based learning on quantitative analytical chemistry course

    NASA Astrophysics Data System (ADS)

    Fitri, Noor

    2017-12-01

    This research applies a problem-based learning method in a quantitative analytical chemistry course ("Analytical Chemistry II"), especially related to essential oil analysis. The learning outcomes of this course include aspects of understanding the lectures, the skills of applying the course materials, and the ability to identify, formulate and solve chemical analysis problems. The role of study groups is quite important in improving students' learning ability and in completing independent tasks and group tasks. Thus, students are not only aware of the basic concepts of Analytical Chemistry II, but are also able to understand and apply analytical concepts that have been studied to solve given analytical chemistry problems, and have the attitude and ability to work together to solve these problems. Based on the learning outcomes, it can be concluded that the problem-based learning method in the Analytical Chemistry II course has been proven to improve students' knowledge, skill, ability and attitude. Students are not only skilled at solving problems in analytical chemistry, especially in essential oil analysis in accordance with the local wisdom of the Chemistry Department, Universitas Islam Indonesia, but are also skilled in working with computer programs and able to understand materials and problems in English.

  7. Role of Knowledge Management and Analytical CRM in Business: Data Mining Based Framework

    ERIC Educational Resources Information Center

    Ranjan, Jayanthi; Bhatnagar, Vishal

    2011-01-01

    Purpose: The purpose of the paper is to provide a thorough analysis of the concepts of business intelligence (BI), knowledge management (KM) and analytical CRM (aCRM), and to establish a framework for integrating all three with each other. The paper also seeks to establish a KM and aCRM based framework using data mining (DM) techniques, which…

  8. Three-dimensional photoacoustic tomography based on graphics-processing-unit-accelerated finite element method.

    PubMed

    Peng, Kuan; He, Ling; Zhu, Ziqiang; Tang, Jingtian; Xiao, Jiaying

    2013-12-01

    Compared with commonly used analytical reconstruction methods, the frequency-domain finite element method (FEM) based approach has proven to be an accurate and flexible algorithm for photoacoustic tomography. However, the FEM-based algorithm is computationally demanding, especially for three-dimensional cases. To enhance the algorithm's efficiency, in this work a parallel computational strategy is implemented in the framework of the FEM-based reconstruction algorithm using a graphics-processing-unit parallel framework named the "compute unified device architecture" (CUDA). A series of simulation experiments is carried out to test the accuracy and accelerating effect of the improved method. The results obtained indicate that the parallel calculation does not change the accuracy of the reconstruction algorithm, while its computational cost is significantly reduced, by a factor of 38.9 with a GTX 580 graphics card, using the improved method.

  9. Synergia: an accelerator modeling tool with 3-D space charge

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amundson, James F.; Spentzouris, P.; /Fermilab

    2004-07-01

    High precision modeling of space-charge effects, together with accurate treatment of single-particle dynamics, is essential for designing future accelerators as well as optimizing the performance of existing machines. We describe Synergia, a high-fidelity parallel beam dynamics simulation package with fully three dimensional space-charge capabilities and a higher order optics implementation. We describe the computational techniques, the advanced human interface, and the parallel performance obtained using large numbers of macroparticles. We also perform code benchmarks comparing to semi-analytic results and other codes. Finally, we present initial results on particle tune spread, beam halo creation, and emittance growth in the Fermilab booster accelerator.

  10. Present status of Accelerator-Based BNCT

    PubMed Central

    Kreiner, Andres Juan; Bergueiro, Javier; Cartelli, Daniel; Baldo, Matias; Castell, Walter; Asoia, Javier Gomez; Padulo, Javier; Suárez Sandín, Juan Carlos; Igarzabal, Marcelo; Erhardt, Julian; Mercuri, Daniel; Valda, Alejandro A.; Minsky, Daniel M.; Debray, Mario E.; Somacal, Hector R.; Capoulat, María Eugenia; Herrera, María S.; del Grosso, Mariela F.; Gagetti, Leonardo; Anzorena, Manuel Suarez; Canepa, Nicolas; Real, Nicolas; Gun, Marcelo; Tacca, Hernán

    2016-01-01

    Aim: This work aims at giving an updated report of the worldwide status of Accelerator-Based BNCT (AB-BNCT). Background: There is a generalized perception that the availability of accelerators installed in hospitals, as neutron sources, may be crucial for the advancement of BNCT. Accordingly, in recent years a significant effort has started to develop such machines. Materials and methods: A variety of possible charged-particle induced nuclear reactions and the characteristics of the resulting neutron spectra are discussed along with the worldwide activity in suitable accelerator development. Results: Endothermic 7Li(p,n)7Be and 9Be(p,n)9B and exothermic 9Be(d,n)10B are compared. In addition to having much better thermo-mechanical properties than Li, Be as a target leads to stable products. This is a significant advantage for a hospital-based facility. 9Be(p,n)9B needs at least 4–5 MeV bombarding energy to have a sufficient yield, while 9Be(d,n)10B can be utilized at about 1.4 MeV, implying the smallest possible accelerator. This reaction operating with a thin target can produce a sufficiently soft spectrum to be viable for AB-BNCT. The machines considered are electrostatic single-ended or tandem accelerators or radiofrequency quadrupoles plus drift tube Linacs. Conclusions: 7Li(p,n)7Be provides one of the best solutions for the production of epithermal neutron beams for deep-seated tumors. However, a Li-based target poses significant technological challenges. Hence, Be has been considered as an alternative target, both in combination with (p,n) and (d,n) reactions. 9Be(d,n)10B at 1.4 MeV, with a thin target has been shown to be a realistic option for the treatment of deep-seated lesions. PMID:26933390

  11. Present status of Accelerator-Based BNCT.

    PubMed

    Kreiner, Andres Juan; Bergueiro, Javier; Cartelli, Daniel; Baldo, Matias; Castell, Walter; Asoia, Javier Gomez; Padulo, Javier; Suárez Sandín, Juan Carlos; Igarzabal, Marcelo; Erhardt, Julian; Mercuri, Daniel; Valda, Alejandro A; Minsky, Daniel M; Debray, Mario E; Somacal, Hector R; Capoulat, María Eugenia; Herrera, María S; Del Grosso, Mariela F; Gagetti, Leonardo; Anzorena, Manuel Suarez; Canepa, Nicolas; Real, Nicolas; Gun, Marcelo; Tacca, Hernán

    2016-01-01

    This work aims at giving an updated report of the worldwide status of Accelerator-Based BNCT (AB-BNCT). There is a generalized perception that the availability of accelerators installed in hospitals, as neutron sources, may be crucial for the advancement of BNCT. Accordingly, in recent years a significant effort has started to develop such machines. A variety of possible charged-particle induced nuclear reactions and the characteristics of the resulting neutron spectra are discussed along with the worldwide activity in suitable accelerator development. Endothermic (7)Li(p,n)(7)Be and (9)Be(p,n)(9)B and exothermic (9)Be(d,n)(10)B are compared. In addition to having much better thermo-mechanical properties than Li, Be as a target leads to stable products. This is a significant advantage for a hospital-based facility. (9)Be(p,n)(9)B needs at least 4-5 MeV bombarding energy to have a sufficient yield, while (9)Be(d,n)(10)B can be utilized at about 1.4 MeV, implying the smallest possible accelerator. This reaction operating with a thin target can produce a sufficiently soft spectrum to be viable for AB-BNCT. The machines considered are electrostatic single ended or tandem accelerators or radiofrequency quadrupoles plus drift tube Linacs. (7)Li(p,n)(7)Be provides one of the best solutions for the production of epithermal neutron beams for deep-seated tumors. However, a Li-based target poses significant technological challenges. Hence, Be has been considered as an alternative target, both in combination with (p,n) and (d,n) reactions. (9)Be(d,n)(10)B at 1.4 MeV, with a thin target has been shown to be a realistic option for the treatment of deep-seated lesions.

  12. A Fourier-based compressed sensing technique for accelerated CT image reconstruction using first-order methods.

    PubMed

    Choi, Kihwan; Li, Ruijiang; Nam, Haewon; Xing, Lei

    2014-06-21

    As a solution to iterative CT image reconstruction, first-order methods are prominent for their large-scale capability and fast convergence rate of O(1/k²). In practice, the CT system matrix with a large condition number may lead to slow convergence speed despite the theoretically promising upper bound. The aim of this study is to develop a Fourier-based scaling technique to enhance the convergence speed of first-order methods applied to CT image reconstruction. Instead of working in the projection domain, we transform the projection data and construct a data fidelity model in Fourier space. Inspired by the filtered backprojection formalism, the data are appropriately weighted in Fourier space. We formulate an optimization problem based on weighted least-squares in Fourier space and total-variation (TV) regularization in image space for parallel-beam, fan-beam and cone-beam CT geometry. To achieve the maximum computational speed, the optimization problem is solved using a fast iterative shrinkage-thresholding algorithm with backtracking line search and GPU implementation of projection/backprojection. The performance of the proposed algorithm is demonstrated through a series of digital simulation and experimental phantom studies. The results are compared with the existing TV regularized techniques based on statistics-based weighted least-squares as well as the basic algebraic reconstruction technique. The proposed Fourier-based compressed sensing (CS) method significantly improves both the image quality and the convergence rate compared to the existing CS techniques.
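
    For readers unfamiliar with the algorithm's structure, here is a minimal generic FISTA sketch (ours): it solves min_x ½‖Ax − b‖² + λ‖x‖₁ with a fixed step 1/L and a soft-thresholding prox, in place of the backtracking line search, Fourier-space weighting, and TV prox used in the paper.

    ```python
    import numpy as np

    def fista_l1(A, b, lam, n_iter=500):
        """FISTA for 0.5*||Ax - b||^2 + lam*||x||_1."""
        L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of grad
        x = np.zeros(A.shape[1])
        y, t = x.copy(), 1.0
        for _ in range(n_iter):
            z = y - A.T @ (A @ y - b) / L        # gradient step (smooth part)
            x_new = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # prox
            t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
            y = x_new + ((t - 1.0) / t_new) * (x_new - x)  # momentum step
            x, t = x_new, t_new
        return x

    # Tiny demo on a random compressed-sensing problem.
    rng = np.random.default_rng(1)
    A = rng.standard_normal((60, 200))
    x_true = np.zeros(200)
    x_true[[5, 50, 120]] = [1.0, -2.0, 1.5]
    x_hat = fista_l1(A, A @ x_true, lam=0.1)
    print("largest coefficients at:", np.argsort(-np.abs(x_hat))[:3])
    ```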

  13. Model-independent particle accelerator tuning

    DOE PAGES

    Scheinker, Alexander; Pang, Xiaoying; Rybarcyk, Larry

    2013-10-21

    We present a new model-independent dynamic feedback technique, rotation rate tuning, for automatically and simultaneously tuning coupled components of uncertain, complex systems. The main advantages of the method are: 1) it has the ability to handle unknown, time-varying systems, 2) it gives known bounds on parameter update rates, 3) we give an analytic proof of its convergence and its stability, and 4) it has a simple digital implementation through a control system such as the Experimental Physics and Industrial Control System (EPICS). Because this technique is model independent it may be useful as a real-time, in-hardware, feedback-based optimization scheme for uncertain and time-varying systems. In particular, it is robust enough to handle uncertainty due to coupling, thermal cycling, misalignments, and manufacturing imperfections. As a result, it may be used as a fine-tuning supplement for existing accelerator tuning/control schemes. We present multi-particle simulation results demonstrating the scheme's ability to simultaneously adaptively adjust the set points of twenty-two quadrupole magnets and two RF buncher cavities in the Los Alamos Neutron Science Center Linear Accelerator's transport region, while the beam properties and RF phase shift are continuously varying. The tuning is based only on beam current readings, without knowledge of particle dynamics. We also present an outline of how to implement this general scheme in software for optimization, and in hardware for feedback-based control/tuning, for a wide range of systems.
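
    The published form of this law (our toy rendering below; cost function and gains invented) dithers each parameter at a distinct frequency whose phase is modulated by the single measured cost, so the ensemble drifts down the cost surface without any model of the machine:

    ```python
    import numpy as np

    # Toy model-independent tuning: two coupled set points track a slowly
    # drifting optimum using only a scalar cost reading.
    def cost(theta, t):
        target = np.array([1.0 + 0.2 * np.sin(0.05 * t), -0.5])
        return float(np.sum((theta - target) ** 2))

    dt, k, alpha = 1e-3, 4.0, 2.0
    omega = 2 * np.pi * np.array([40.0, 53.0])   # distinct dither frequencies
    theta = np.zeros(2)
    for n in range(400_000):
        t = n * dt
        C = cost(theta, t)                       # the only measurement used
        # rotation-rate update: d(theta)/dt = sqrt(alpha*omega)*cos(omega*t + k*C)
        theta = theta + dt * np.sqrt(alpha * omega) * np.cos(omega * t + k * C)

    print("final set points:", theta)            # tracks the moving target
    ```

    On average the update approximates gradient descent on C, which is what the analytic convergence proof mentioned in the record formalizes.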

  14. Pre-analytic and analytic sources of variations in thiopurine methyltransferase activity measurement in patients prescribed thiopurine-based drugs: A systematic review.

    PubMed

    Loit, Evelin; Tricco, Andrea C; Tsouros, Sophia; Sears, Margaret; Ansari, Mohammed T; Booth, Ronald A

    2011-07-01

    Low thiopurine S-methyltransferase (TPMT) enzyme activity is associated with increased thiopurine drug toxicity, particularly myelotoxicity. Pre-analytic and analytic variables for TPMT genotype and phenotype (enzyme activity) testing were reviewed. A systematic literature review was performed, and diagnostic laboratories were surveyed. Thirty-five studies reported relevant data for pre-analytic variables (patient age, gender, race, hematocrit, co-morbidity, co-administered drugs and specimen stability) and thirty-three for analytic variables (accuracy, reproducibility). TPMT is stable in blood when stored for up to 7 days at room temperature, and 3 months at -30°C. Pre-analytic patient variables do not affect TPMT activity. Fifteen drugs studied to date exerted no clinically significant effects in vivo. Enzymatic assay is the preferred technique. Radiochemical and HPLC techniques had intra- and inter-assay coefficients of variation (CVs) below 10%. TPMT is a stable enzyme, and its assay is not affected by age, gender, race or co-morbidity.

  15. Contrast-enhanced MR Angiography of the Abdomen with Highly Accelerated Acquisition Techniques

    PubMed Central

    Mostardi, Petrice M.; Glockner, James F.; Young, Phillip M.

    2011-01-01

    Purpose: To demonstrate that highly accelerated (net acceleration factor [Rnet] ≥ 10) acquisition techniques can be used to generate three-dimensional (3D) subsecond timing images, as well as diagnostic-quality high-spatial-resolution contrast material–enhanced (CE) renal magnetic resonance (MR) angiograms with a single split dose of contrast material. Materials and Methods: All studies were approved by the institutional review board and were HIPAA compliant; written consent was obtained from all participants. Twenty-two studies were performed in 10 female volunteers (average age, 47 years; range, 27–62 years) and six patients with renovascular disease (three women; average age, 48 years; range, 37–68 years; three men; average age, 60 years; range, 50–67 years; composite average age, 54 years; range, 38–68 years). The two-part protocol consisted of a low-dose (2 mL contrast material) 3D timing image with approximate 1-second frame time, followed by a high-spatial-resolution (1.0–1.6-mm isotropic voxels) breath-hold 3D renal MR angiogram (18 mL) over the full abdominal field of view. Both acquisitions used two-dimensional (2D) sensitivity encoding acceleration factor (R) of eight and 2D homodyne (HD) acceleration (RHD) of 1.4–1.8 for Rnet = R · RHD of 10 or higher. Statistical analysis included determination of mean values and standard deviations of image quality scores performed by two experienced reviewers with use of eight evaluation criteria. Results: The 2-mL 3D time-resolved image successfully portrayed progressive arterial filling in all 22 studies and provided an anatomic overview of the vasculature. Successful timing was also demonstrated in that the renal MR angiogram showed adequate or excellent portrayal of the main renal arteries in 21 of 22 studies. Conclusion: Two-dimensional acceleration techniques with Rnet of 10 or higher can be used in CE MR angiography to acquire (a) a 3D image series with 1-second frame time, allowing accurate
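
    The acceleration-factor bookkeeping in this record is simple enough to state explicitly (our arithmetic from the quoted values):

    ```latex
    R_{\mathrm{net}} = R \cdot R_{\mathrm{HD}} = 8 \times (1.4\text{--}1.8) \approx 11\text{--}14 \;\geq\; 10
    ```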

  16. Neural Networks for Modeling and Control of Particle Accelerators

    NASA Astrophysics Data System (ADS)

    Edelen, A. L.; Biedron, S. G.; Chase, B. E.; Edstrom, D.; Milton, S. V.; Stabile, P.

    2016-04-01

    Particle accelerators are host to myriad nonlinear and complex physical phenomena. They often involve a multitude of interacting systems, are subject to tight performance demands, and should be able to run for extended periods of time with minimal interruptions. Oftentimes, traditional control techniques cannot fully meet these requirements. One promising avenue is to introduce machine learning and sophisticated control techniques inspired by artificial intelligence, particularly in light of recent theoretical and practical advances in these fields. Within machine learning and artificial intelligence, neural networks are particularly well-suited to modeling, control, and diagnostic analysis of complex, nonlinear, and time-varying systems, as well as systems with large parameter spaces. Consequently, the use of neural network-based modeling and control techniques could be of significant benefit to particle accelerators. For the same reasons, particle accelerators are also ideal test-beds for these techniques. Many early attempts to apply neural networks to particle accelerators yielded mixed results due to the relative immaturity of the technology for such tasks. The purpose of this paper is to re-introduce neural networks to the particle accelerator community and report on some work in neural network control that is being conducted as part of a dedicated collaboration between Fermilab and Colorado State University (CSU). We describe some of the challenges of particle accelerator control, highlight recent advances in neural network techniques, discuss some promising avenues for incorporating neural networks into particle accelerator control systems, and describe a neural network-based control system that is being developed for resonance control of an RF electron gun at the Fermilab Accelerator Science and Technology (FAST) facility, including initial experimental results from a benchmark controller.

  17. New modes of particle accelerations techniques and sources. Formal report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parsa, Z.

    1996-12-31

    This Report includes copies of transparencies and notes from the presentations made at the Symposium on New Modes of Particle Accelerations - Techniques and Sources, August 19-23, 1996, at the Institute for Theoretical Physics, University of California, Santa Barbara, California, that were made available by the authors. Editing, reduction and changes to the authors' contributions were made only to fulfill the printing and publication requirements. We would like to take this opportunity to thank the speakers for their informative presentations and for providing copies of their transparencies and notes for inclusion in this Report.

  18. Active matrix-based collection of airborne analytes: an analyte recording chip providing exposure history and finger print.

    PubMed

    Fang, Jun; Park, Se-Chul; Schlag, Leslie; Stauden, Thomas; Pezoldt, Jörg; Jacobs, Heiko O

    2014-12-03

    In the field of sensors that target the detection of airborne analytes, corona/lens-based collection provides a new path to achieving high sensitivity. An active-matrix-based analyte collection approach referred to as an "airborne analyte memory chip/recorder" is demonstrated, which takes and stores airborne analytes in a matrix to provide an exposure history for off-site analysis.

  19. Analytical transmissibility based transfer path analysis for multi-energy-domain systems using four-pole parameter theory

    NASA Astrophysics Data System (ADS)

    Mashayekhi, Mohammad Jalali; Behdinan, Kamran

    2017-10-01

    The increasing demand to minimize undesired vibration and noise levels in several high-tech industries has generated a renewed interest in vibration transfer path analysis. Analyzing vibration transfer paths within a system is of crucial importance in designing an effective vibration isolation strategy. Most of the existing vibration transfer path analysis techniques are empirical and are suitable for diagnosis and troubleshooting purposes. The lack of an analytical transfer path analysis that can be used in the design stage is the main motivation behind this research. In this paper an analytical transfer path analysis based on four-pole parameter theory is proposed for multi-energy-domain systems. The bond graph modeling technique, an effective approach to modeling multi-energy-domain systems, is used to develop the system model. An electro-mechanical system is used as a benchmark example to elucidate the effectiveness of the proposed technique. An algorithm to obtain the equivalent four-pole representation of a dynamical system based on the corresponding bond graph model is also presented in this paper.
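
    To make the four-pole idea concrete, here is a small sketch (ours) in the mechanical domain: each element is a 2×2 transmission matrix relating force and velocity at its two ports, and a transfer path is analyzed by multiplying the matrices of its cascaded elements. Element values are arbitrary.

    ```python
    import numpy as np

    # Four-pole (transmission) matrices: [F1, v1]^T = T @ [F2, v2]^T.
    def mass(m, w):      # rigid mass in the transfer path
        return np.array([[1.0, 1j * w * m], [0.0, 1.0]])

    def spring(k, w):    # massless spring of stiffness k
        return np.array([[1.0, 0.0], [1j * w / k, 1.0]])

    def transmissibility(T):
        # Free output port (F2 = 0): v1 = T[1,1] * v2, so v2/v1 = 1/T[1,1].
        return 1.0 / T[1, 1]

    freqs = np.linspace(1.0, 100.0, 500)                  # rad/s
    tr = []
    for w in freqs:
        T = mass(2.0, w) @ spring(5e3, w) @ mass(1.0, w)  # cascaded path
        tr.append(abs(transmissibility(T)))
    print("peak transmissibility:", max(tr))              # path resonance
    ```

    The bond graph model in the paper plays the role of generating such equivalent four-pole matrices automatically across energy domains.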

  20. Uncovering category specificity of genital sexual arousal in women: The critical role of analytic technique.

    PubMed

    Pulverman, Carey S; Hixon, J Gregory; Meston, Cindy M

    2015-10-01

    Based on analytic techniques that collapse data into a single average value, it has been reported that women lack category specificity and show genital sexual arousal to a large range of sexual stimuli including those that both match and do not match their self-reported sexual interests. These findings may be a methodological artifact of the way in which data are analyzed. This study examined whether using an analytic technique that models data over time would yield different results. Across two studies, heterosexual (N = 19) and lesbian (N = 14) women viewed erotic films featuring heterosexual, lesbian, and gay male couples, respectively, as their physiological sexual arousal was assessed with vaginal photoplethysmography. Data analysis with traditional methods comparing average genital arousal between films failed to detect specificity of genital arousal for either group. When data were analyzed with smoothing regression splines and a within-subjects approach, both heterosexual and lesbian women demonstrated different patterns of genital sexual arousal to the different types of erotic films, suggesting that sophisticated statistical techniques may be necessary to more fully understand women's genital sexual arousal response. Heterosexual women showed category-specific genital sexual arousal. Lesbian women showed higher arousal to the heterosexual film than the other films. However, within subjects, lesbian women showed significantly different arousal responses suggesting that lesbian women's genital arousal discriminates between different categories of stimuli at the individual level. Implications for the future use of vaginal photoplethysmography as a diagnostic tool of sexual preferences in clinical and forensic settings are discussed.

  1. Microwave-Accelerated Method for Ultra-Rapid Extraction of Neisseria gonorrhoeae DNA for Downstream Detection

    PubMed Central

    Melendez, Johan H.; Santaus, Tonya M.; Brinsley, Gregory; Kiang, Daniel; Mali, Buddha; Hardick, Justin; Gaydos, Charlotte A.; Geddes, Chris D.

    2016-01-01

    Nucleic acid-based detection of gonorrhea infections typically requires a two-step process involving isolation of the nucleic acid, followed by the detection of the genomic target, often involving PCR-based approaches. In an effort to improve on current detection approaches, we have developed a unique two-step microwave-accelerated approach for rapid extraction and detection of Neisseria gonorrhoeae (GC) DNA. Our approach is based on the use of highly focused microwave radiation to rapidly lyse bacterial cells, release, and subsequently fragment microbial DNA. The DNA target is then detected by a process known as microwave-accelerated metal-enhanced fluorescence (MAMEF), an ultra-sensitive direct DNA detection analytical technique. In the present study, we show that highly focused microwaves at 2.45 GHz, using 12.3 mm gold film equilateral triangles, are able to rapidly lyse both bacterial cells and fragment DNA in a time- and microwave power-dependent manner. Detection of the extracted DNA can be performed by MAMEF, without the need for DNA amplification, in less than 10 minutes total time, or by other PCR-based approaches. Collectively, the use of a microwave-accelerated method for the release and detection of DNA represents a significant step forward towards the development of a point-of-care (POC) platform for detection of gonorrhea infections. PMID:27325503

  2. Microwave-accelerated method for ultra-rapid extraction of Neisseria gonorrhoeae DNA for downstream detection.

    PubMed

    Melendez, Johan H; Santaus, Tonya M; Brinsley, Gregory; Kiang, Daniel; Mali, Buddha; Hardick, Justin; Gaydos, Charlotte A; Geddes, Chris D

    2016-10-01

    Nucleic acid-based detection of gonorrhea infections typically requires a two-step process involving isolation of the nucleic acid, followed by detection of the genomic target, often involving polymerase chain reaction (PCR)-based approaches. In an effort to improve on current detection approaches, we have developed a unique two-step microwave-accelerated approach for rapid extraction and detection of Neisseria gonorrhoeae (gonorrhea, GC) DNA. Our approach is based on the use of highly focused microwave radiation to rapidly lyse bacterial cells, release, and subsequently fragment microbial DNA. The DNA target is then detected by a process known as microwave-accelerated metal-enhanced fluorescence (MAMEF), an ultra-sensitive direct DNA detection analytical technique. In the current study, we show that highly focused microwaves at 2.45 GHz, using 12.3-mm gold film equilateral triangles, are able to rapidly lyse both bacterial cells and fragment DNA in a time- and microwave power-dependent manner. Detection of the extracted DNA can be performed by MAMEF, without the need for DNA amplification, in less than 10 min total time or by other PCR-based approaches. Collectively, the use of a microwave-accelerated method for the release and detection of DNA represents a significant step forward toward the development of a point-of-care (POC) platform for detection of gonorrhea infections.

  3. FEM Techniques for High Stress Detection in Accelerated Fatigue Simulation

    NASA Astrophysics Data System (ADS)

    Veltri, M.

    2016-09-01

    This work presents the theory and a numerical validation study in support of a novel method for a priori identification of fatigue critical regions, with the aim to accelerate durability design in large FEM problems. The investigation is placed in the context of modern full-body structural durability analysis, where a computationally intensive dynamic solution could be required to identify areas with potential for fatigue damage initiation. The early detection of fatigue critical areas can drive a simplification of the problem size, leading to appreciable improvements in solution time and model handling while allowing processing of the critical areas in higher detail. The proposed technique is applied to a real-life industrial case in a comparative assessment with established practices. Synthetic damage prediction quantification and visualization techniques allow for a quick and efficient comparison between methods, outlining potential application benefits and boundaries.

  4. Analytical techniques and method validation for the measurement of selected semivolatile and nonvolatile organofluorochemicals in air.

    PubMed

    Reagen, William K; Lindstrom, Kent R; Thompson, Kathy L; Flaherty, John M

    2004-09-01

    The widespread use of semi- and nonvolatile organofluorochemicals in industrial facilities, concern about their persistence, and relatively recent advancements in liquid chromatography/mass spectrometry (LC/MS) technology have led to the development of new analytical methods to assess potential worker exposure to airborne organofluorochemicals. Techniques were evaluated for the determination of 19 organofluorochemicals and for total fluorine in ambient air samples. Due to the potential biphasic nature of most of these fluorochemicals when airborne, Occupational Safety and Health Administration (OSHA) versatile sampler (OVS) tubes were used to simultaneously trap fluorochemical particulates and vapors from workplace air. Analytical methods were developed for OVS air samples to quantitatively analyze for total fluorine using oxygen bomb combustion/ion selective electrode and for 17 organofluorochemicals using LC/MS and gas chromatography/mass spectrometry (GC/MS). The experimental design for this validation was based on the National Institute of Occupational Safety and Health (NIOSH) Guidelines for Air Sampling and Analytical Method Development and Evaluation, with some revisions of the experimental design. The study design incorporated experiments to determine analytical recovery and stability, sampler capacity, the effect of some environmental parameters on recoveries, storage stability, limits of detection, precision, and accuracy. Fluorochemical mixtures were spiked onto each OVS tube over a range of 0.06–6 µg for each of 12 compounds analyzed by LC/MS and 0.3–30 µg for 5 compounds analyzed by GC/MS. These ranges allowed reliable quantitation at 0.001–0.1 mg/m³ in general for LC/MS analytes and 0.005–0.5 mg/m³ for GC/MS analytes when 60 L of air are sampled. The organofluorochemical exposure guideline (EG) is currently 0.1 mg/m³ for many analytes, with one exception being ammonium perfluorooctanoate (EG is 0.01 mg/m³). Total fluorine results may be used

  5. MEMS-based, RF-driven, compact accelerators

    NASA Astrophysics Data System (ADS)

    Persaud, A.; Seidl, P. A.; Ji, Q.; Breinyn, I.; Waldron, W. L.; Schenkel, T.; Vinayakumar, K. B.; Ni, D.; Lal, A.

    2017-10-01

    Shrinking existing accelerators in size can reduce their cost by orders of magnitude. Furthermore, by using radio frequency (RF) technology and accelerating ions in several stages, the applied voltages can be kept low paving the way to new ion beam applications. We make use of the concept of a Multiple Electrostatic Quadrupole Array Linear Accelerator (MEQALAC) and have previously shown the implementation of its basic components using printed circuit boards, thereby reducing the size of earlier MEQALACs by an order of magnitude. We now demonstrate the combined integration of these components to form a basic accelerator structure, including an initial beam-matching section. In this presentation, we will discuss the results from the integrated multi-beam ion accelerator and also ion acceleration using RF voltages generated on-board. Furthermore, we will show results from Micro-Electro-Mechanical Systems (MEMS) fabricated focusing wafers, which can shrink the dimension of the system to the sub-mm regime and lead to cheaper fabrication. Based on these proof-of-concept results we outline a scaling path to high beam power for applications in plasma heating in magnetized target fusion and in neutral beam injectors for future Tokamaks. This work was supported by the Office of Science of the US Department of Energy through the ARPA-e ALPHA program under contracts DE-AC02-05CH11231.

  6. Second International Conference on Accelerating Biopharmaceutical Development

    PubMed Central

    2009-01-01

    The Second International Conference on Accelerating Biopharmaceutical Development was held in Coronado, California. The meeting was organized by the Society for Biological Engineering (SBE) and the American Institute of Chemical Engineers (AIChE); SBE is a technological community of the AIChE. Bob Adamson (Wyeth) and Chuck Goochee (Centocor) were co-chairs of the event, which had the theme “Delivering cost-effective, robust processes and methods quickly and efficiently.” The first day focused on emerging disruptive technologies and cutting-edge analytical techniques. Day two featured presentations on accelerated cell culture process development, critical quality attributes, specifications and comparability, and high throughput protein formulation development. The final day was dedicated to discussion of technology options and new analysis methods provided by emerging disruptive technologies; functional interaction, integration and synergy in platform development; and rapid and economic purification process development. PMID:20065637

  7. Neural Networks for Modeling and Control of Particle Accelerators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Edelen, A. L.; Biedron, S. G.; Chase, B. E.

    Particle accelerators are host to myriad nonlinear and complex physical phenomena. They often involve a multitude of interacting systems, are subject to tight performance demands, and should be able to run for extended periods of time with minimal interruptions. Oftentimes, traditional control techniques cannot fully meet these requirements. One promising avenue is to introduce machine learning and sophisticated control techniques inspired by artificial intelligence, particularly in light of recent theoretical and practical advances in these fields. Within machine learning and artificial intelligence, neural networks are particularly well-suited to modeling, control, and diagnostic analysis of complex, nonlinear, and time-varying systems, as well as systems with large parameter spaces. Consequently, the use of neural network-based modeling and control techniques could be of significant benefit to particle accelerators. For the same reasons, particle accelerators are also ideal test-beds for these techniques. Many early attempts to apply neural networks to particle accelerators yielded mixed results due to the relative immaturity of the technology for such tasks. The purpose of this paper is to re-introduce neural networks to the particle accelerator community and report on some work in neural network control that is being conducted as part of a dedicated collaboration between Fermilab and Colorado State University (CSU). We also describe some of the challenges of particle accelerator control, highlight recent advances in neural network techniques, discuss some promising avenues for incorporating neural networks into particle accelerator control systems, and describe a neural network-based control system that is being developed for resonance control of an RF electron gun at the Fermilab Accelerator Science and Technology (FAST) facility, including initial experimental results from a benchmark controller.

  8. Neural Networks for Modeling and Control of Particle Accelerators

    DOE PAGES

    Edelen, A. L.; Biedron, S. G.; Chase, B. E.; ...

    2016-04-01

    Particle accelerators are host to myriad nonlinear and complex physical phenomena. They often involve a multitude of interacting systems, are subject to tight performance demands, and should be able to run for extended periods of time with minimal interruptions. Oftentimes, traditional control techniques cannot fully meet these requirements. One promising avenue is to introduce machine learning and sophisticated control techniques inspired by artificial intelligence, particularly in light of recent theoretical and practical advances in these fields. Within machine learning and artificial intelligence, neural networks are particularly well-suited to modeling, control, and diagnostic analysis of complex, nonlinear, and time-varying systems, as well as systems with large parameter spaces. Consequently, the use of neural network-based modeling and control techniques could be of significant benefit to particle accelerators. For the same reasons, particle accelerators are also ideal test-beds for these techniques. Many early attempts to apply neural networks to particle accelerators yielded mixed results due to the relative immaturity of the technology for such tasks. The purpose of this paper is to re-introduce neural networks to the particle accelerator community and report on some work in neural network control that is being conducted as part of a dedicated collaboration between Fermilab and Colorado State University (CSU). We also describe some of the challenges of particle accelerator control, highlight recent advances in neural network techniques, discuss some promising avenues for incorporating neural networks into particle accelerator control systems, and describe a neural network-based control system that is being developed for resonance control of an RF electron gun at the Fermilab Accelerator Science and Technology (FAST) facility, including initial experimental results from a benchmark controller.

  9. A gas-dynamical approach to radiation pressure acceleration

    NASA Astrophysics Data System (ADS)

    Schmidt, Peter; Boine-Frankenheim, Oliver

    2016-06-01

    The study of high-intensity ion beams driven by high-power pulsed lasers is an active field of research. Of particular interest is radiation pressure acceleration, for which simulations predict narrow-band ion energies up to the GeV level. We derive a laser-piston model by applying techniques from non-relativistic gas dynamics. The model reveals a laser intensity limit, below which sufficient laser-piston acceleration is impossible. The relation between target thickness and piston velocity as a function of the laser pulse length yields an approximation for the permissible target thickness. We performed one-dimensional particle-in-cell simulations to confirm the predictions of the analytical model. These simulations also reveal the importance of electromagnetic energy transport. We find that this energy transport limits the achievable compression and rarefies the plasma.
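
    For orientation (our addition; this is the textbook starting point, not the paper's full model), the drive behind the piston is the light pressure on a perfectly reflecting surface at normal incidence,

    ```latex
    P = \frac{2I}{c},
    ```

    where I is the laser intensity; the gas-dynamical model in the record builds the piston dynamics on top of this momentum flux.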

  10. Analytical learning and term-rewriting systems

    NASA Technical Reports Server (NTRS)

    Laird, Philip; Gamble, Evan

    1990-01-01

    Analytical learning is a set of machine learning techniques for revising the representation of a theory based on a small set of examples of that theory. When the representation of the theory is correct and complete but perhaps inefficient, an important objective of such analysis is to improve the computational efficiency of the representation. Several algorithms with this purpose have been suggested, most of which are closely tied to a first-order logical language and are variants of goal regression, such as the familiar explanation-based generalization (EBG) procedure. But because predicate calculus is a poor representation for some domains, these learning algorithms are extended to apply to other computational models. It is shown that the goal regression technique applies to a large family of programming languages, all based on a kind of term-rewriting system. Included in this family are three language families of importance to artificial intelligence: logic programming, such as Prolog; lambda calculus, such as LISP; and combinator-based languages, such as FP. A new analytical learning algorithm, AL-2, is exhibited that learns from success but is otherwise quite different from EBG. These results suggest that term-rewriting systems are a good framework for analytical learning research in general, and that further research should be directed toward developing new techniques.
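
    A miniature term-rewriting system (ours) may help ground the family of languages the record refers to: terms are nested tuples, rules are pattern/result pairs over variables, and evaluation is exhaustive rewriting to a normal form. This toy implements Peano addition; it is not the AL-2 algorithm itself.

    ```python
    # Terms are nested tuples; variables are strings beginning with '?'.
    RULES = [
        (('add', ('zero',), '?x'), '?x'),
        (('add', ('succ', '?x'), '?y'), ('succ', ('add', '?x', '?y'))),
    ]

    def match(pattern, term, env):
        if isinstance(pattern, str) and pattern.startswith('?'):
            env[pattern] = term          # bind variable
            return True
        if isinstance(pattern, tuple) and isinstance(term, tuple) \
                and len(pattern) == len(term):
            return all(match(p, t, env) for p, t in zip(pattern, term))
        return pattern == term

    def substitute(template, env):
        if isinstance(template, str) and template.startswith('?'):
            return env[template]
        if isinstance(template, tuple):
            return tuple(substitute(t, env) for t in template)
        return template

    def rewrite(term):
        """Apply rules (top level, then subterms) until a normal form."""
        changed = True
        while changed:
            changed = False
            for pattern, result in RULES:
                env = {}
                if match(pattern, term, env):
                    term, changed = substitute(result, env), True
            if not changed and isinstance(term, tuple):
                new = (term[0],) + tuple(rewrite(t) for t in term[1:])
                if new != term:
                    term, changed = new, True
        return term

    two = ('succ', ('succ', ('zero',)))
    print(rewrite(('add', two, two)))    # four 'succ' wrappers around 'zero'
    ```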

  11. Hippocampal-Sparing Whole-Brain Radiotherapy: A 'How-To' Technique Using Helical Tomotherapy and Linear Accelerator-Based Intensity-Modulated Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gondi, Vinai; Tolakanahalli, Ranjini; Mehta, Minesh P.

    2010-11-15

    Purpose: Sparing the hippocampus during cranial irradiation poses important technical challenges with respect to contouring and treatment planning. Herein we report our preliminary experience with whole-brain radiotherapy using hippocampal sparing for patients with brain metastases. Methods and Materials: Five anonymous patients previously treated with whole-brain radiotherapy with hippocampal sparing were reviewed. The hippocampus was contoured, and hippocampal avoidance regions were created using a 5-mm volumetric expansion around the hippocampus. Helical tomotherapy and linear accelerator (LINAC)-based intensity-modulated radiotherapy (IMRT) treatment plans were generated for a prescription dose of 30 Gy in 10 fractions. Results: On average, the hippocampal avoidance volume was 3.3 cm³, occupying 2.1% of the whole-brain planning target volume. Helical tomotherapy spared the hippocampus, with a median dose of 5.5 Gy and maximum dose of 12.8 Gy. LINAC-based IMRT spared the hippocampus, with a median dose of 7.8 Gy and maximum dose of 15.3 Gy. On a per-fraction basis, mean dose to the hippocampus (normalized to 2-Gy fractions) was reduced by 87% to 0.49 Gy₂ using helical tomotherapy and by 81% to 0.73 Gy₂ using LINAC-based IMRT. Target coverage and homogeneity were acceptable with both IMRT modalities, with differences largely attributed to more rapid dose fall-off with helical tomotherapy. Conclusion: Modern IMRT techniques allow for sparing of the hippocampus with acceptable target coverage and homogeneity. Based on compelling preclinical evidence, a Phase II cooperative group trial has been developed to test the postulated neurocognitive benefit.
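
    The Gy₂ values above denote doses normalized to 2-Gy fractions. Under the standard linear-quadratic conversion (our addition; the record does not quote the formula or the α/β ratio it assumed),

    ```latex
    \mathrm{EQD}_2 = D\,\frac{d + \alpha/\beta}{2\,\mathrm{Gy} + \alpha/\beta},
    ```

    where D is the total dose, d the dose per fraction, and α/β the tissue-specific sensitivity ratio (a value near 2 Gy is commonly used for late central-nervous-system effects).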

  12. Evaluation of the marginal fit of metal copings fabricated on three different marginal designs using conventional and accelerated casting techniques: an in vitro study.

    PubMed

    Vaidya, Sharad; Parkash, Hari; Bhargava, Akshay; Gupta, Sharad

    2014-01-01

    Abundant resources and techniques have been used for complete coverage crown fabrication. Conventional investing and casting procedures for phosphate-bonded investments require a 2- to 4-h procedure before completion. Accelerated casting techniques have been used, but may not result in castings with matching marginal accuracy. The study measured the marginal gap and determined the clinical acceptability of single cast copings invested in a phosphate-bonded investment with the use of conventional and accelerated methods. One hundred and twenty cast coping samples were fabricated using conventional and accelerated methods, with three finish lines: chamfer, shoulder and shoulder with bevel. Sixty copings were prepared with each technique. Each coping was examined with a stereomicroscope at four predetermined sites and measurements of marginal gaps were documented for each. A master chart was prepared for all the data and was analyzed using the Statistical Package for the Social Sciences (SPSS). Evidence of marginal gap was then evaluated by t-test. Analysis of variance and post-hoc analysis were used to compare the two groups as well as to make comparisons between the three subgroups. Measurements recorded showed no statistically significant difference between the conventional and accelerated groups. Among the three marginal designs studied, shoulder with bevel showed the best marginal fit with conventional as well as accelerated casting techniques. The accelerated casting technique could be a viable alternative to the time-consuming conventional casting technique. The marginal fit between the two casting techniques showed no statistical difference.

  13. OpenARC: Extensible OpenACC Compiler Framework for Directive-Based Accelerator Programming Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Seyong; Vetter, Jeffrey S

    2014-01-01

    Directive-based, accelerator programming models such as OpenACC have arisen as an alternative solution to program emerging Scalable Heterogeneous Computing (SHC) platforms. However, the increased complexity in the SHC systems incurs several challenges in terms of portability and productivity. This paper presents an open-source OpenACC compiler, called OpenARC, which serves as an extensible research framework to address those issues in directive-based accelerator programming. This paper explains important design strategies and key compiler transformation techniques needed to implement the reference OpenACC compiler. Moreover, this paper demonstrates the efficacy of OpenARC as a research framework for directive-based programming study, by proposing and implementing OpenACC extensions in the OpenARC framework to 1) support hybrid programming of the unified memory and separate memory and 2) exploit architecture-specific features in an abstract manner. Porting thirteen standard OpenACC programs and three extended OpenACC programs to CUDA GPUs shows that OpenARC performs similarly to a commercial OpenACC compiler, while it serves as a high-level research framework.

  14. A generic standard additions based method to determine endogenous analyte concentrations by immunoassays to overcome complex biological matrix interference.

    PubMed

    Pang, Susan; Cowen, Simon

    2017-12-13

    We describe a novel generic method to derive the unknown endogenous concentrations of analyte within complex biological matrices (e.g. serum or plasma) based upon the relationship between the immunoassay signal response of a biological test sample spiked with known analyte concentrations and the log-transformed estimated total concentration. If the estimated total analyte concentration is correct, a portion of the sigmoid on a log-log plot is very close to linear, allowing the unknown endogenous concentration to be estimated using a numerical method. This approach obviates conventional relative quantification using an internal standard curve and the need for calibrant diluent, and takes into account the individual matrix interference on the immunoassay by spiking the test sample itself. The technique is based on the method of standard additions for chemical analytes. Unknown endogenous analyte concentrations within even 2-fold diluted human plasma may be determined reliably using as few as four reaction wells.
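
    For readers unfamiliar with the classical standard-additions scheme that this method generalizes, a minimal sketch (ours, with made-up numbers): spike the sample itself with known amounts, regress the signal against added concentration, and the magnitude of the x-intercept estimates the endogenous level. The paper's contribution is a nonlinear, log-log immunoassay analogue of this extrapolation.

    ```python
    import numpy as np

    # Hypothetical responses for one sample spiked at four levels (ng/mL).
    added = np.array([0.0, 5.0, 10.0, 20.0])
    signal = np.array([0.21, 0.42, 0.63, 1.05])

    slope, intercept = np.polyfit(added, signal, 1)
    c_endogenous = intercept / slope        # |x-intercept| of the fitted line
    print(f"estimated endogenous concentration: {c_endogenous:.2f} ng/mL")
    ```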

  15. New test techniques and analytical procedures for understanding the behavior of advanced propellers

    NASA Technical Reports Server (NTRS)

    Stefko, G. L.; Bober, L. J.; Neumann, H. E.

    1983-01-01

    Analytical procedures and experimental techniques were developed to improve the capability to design advanced high speed propellers. Some results from the propeller lifting line and lifting surface aerodynamic analysis codes are compared with propeller force data, probe data and laser velocimeter data. In general, the code comparisons with data indicate good qualitative agreement. A rotating propeller force balance demonstrated good accuracy and reduced test time by 50 percent. Results from three propeller flow visualization techniques are shown which illustrate some of the physical phenomena occurring on these propellers.

  16. AI based HealthCare Platform for Real Time, Predictive and Prescriptive Analytics using Reactive Programming

    NASA Astrophysics Data System (ADS)

    Kaur, Jagreet; Singh Mann, Kulwinder, Dr.

    2018-01-01

    AI in healthcare is needed to bring real, actionable, individualized insights in real time to patients and doctors to support treatment decisions. We need a patient-centred platform for integrating EHR data, patient data, prescriptions, monitoring, and clinical research data. This paper proposes a generic architecture for enabling an AI-based healthcare analytics platform using open-source technologies: Apache Beam, Apache Flink, Apache Spark, Apache NiFi, Kafka, Tachyon, GlusterFS, and NoSQL stores (Elasticsearch, Cassandra). This paper will show the importance of applying AI-based predictive and prescriptive analytics techniques in the health sector. The system will be able to extract useful knowledge that helps in decision making and medical monitoring in real time through intelligent process analysis and big data processing.

  17. Vision-based system identification technique for building structures using a motion capture system

    NASA Astrophysics Data System (ADS)

    Oh, Byung Kwan; Hwang, Jin Woo; Kim, Yousok; Cho, Tongjun; Park, Hyo Seon

    2015-11-01

    This paper presents a new vision-based system identification (SI) technique for building structures by using a motion capture system (MCS). The MCS, with outstanding capabilities for dynamic response measurement, can provide gage-free measurements of vibrations through the convenient installation of multiple markers. In this technique, from the dynamic displacement responses measured by the MCS, the dynamic characteristics (natural frequency, mode shape, and damping ratio) of building structures are extracted after converting the displacements from the MCS to accelerations and conducting SI by frequency domain decomposition (FDD). A free vibration experiment on a three-story shear frame was conducted to validate the proposed technique. The SI results from the conventional accelerometer-based method were compared with those from the proposed technique and showed good agreement, which confirms the validity and applicability of the proposed vision-based SI technique for building structures. Furthermore, SI applying the MCS-measured displacements directly to FDD was performed and showed results identical to those of the conventional SI method.
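
    A single-channel toy version of the displacement-to-acceleration-to-frequency pipeline (ours; proper FDD operates on the singular values of cross-spectral matrices assembled from many markers) is sketched below with a synthetic free-decay record:

    ```python
    import numpy as np

    fs = 200.0                        # hypothetical marker sampling rate (Hz)
    t = np.arange(0.0, 10.0, 1.0 / fs)
    f_n, zeta = 2.4, 0.05             # assumed first-mode frequency, damping
    disp = np.exp(-zeta * 2*np.pi*f_n * t) * np.sin(2*np.pi*f_n * t)

    # Double numerical differentiation: displacement -> acceleration.
    acc = np.gradient(np.gradient(disp, 1.0 / fs), 1.0 / fs)

    # Windowed FFT; the spectral peak estimates the natural frequency.
    spec = np.abs(np.fft.rfft(acc * np.hanning(acc.size)))
    freqs = np.fft.rfftfreq(acc.size, 1.0 / fs)
    print(f"identified frequency: {freqs[spec.argmax()]:.2f} Hz")
    ```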

  18. Classification of user interfaces for graph-based online analytical processing

    NASA Astrophysics Data System (ADS)

    Michaelis, James R.

    2016-05-01

    In the domain of business intelligence, user-oriented software for conducting multidimensional analysis via Online Analytical Processing (OLAP) is now commonplace. In this setting, datasets commonly have well-defined sets of dimensions and measures around which analysis tasks can be conducted. However, many forms of data used in intelligence operations - deriving from social networks, online communications, and text corpora - will consist of graphs with varying forms of potential dimensional structure. Hence, enabling OLAP over such data collections requires explicit definition and extraction of supporting dimensions and measures. Further, as Graph OLAP remains an emerging technique, limited research has been done on its user interface requirements, namely on the effective pairing of interface designs to different types of graph-derived dimensions and measures. This paper presents a novel technique for pairing user interface designs to Graph OLAP datasets, rooted in Analytic Hierarchy Process (AHP)-driven comparisons. Attributes of the classification strategy are encoded through an AHP ontology, developed in our prior work and extended to support pairwise comparison of interfaces, specifically according to their ability, as perceived by Subject Matter Experts, to support dimensions and measures corresponding to Graph OLAP dataset attributes. To frame this discussion, a survey is provided both on existing variations of Graph OLAP, as well as existing interface designs previously applied in multidimensional analysis settings. Following this, a review of our AHP ontology is provided, along with a listing of corresponding dataset and interface attributes applicable toward SME recommendation structuring. A walkthrough of AHP-based recommendation encoding via the ontology-based approach is then provided. The paper concludes with a short summary of proposed future directions seen as essential for this research area.
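
    At its core, the AHP machinery referenced here reduces to extracting a priority vector from a pairwise-comparison matrix; a minimal sketch (ours, with invented judgments for three candidate interfaces):

    ```python
    import numpy as np

    # A[i, j] = strength of preference for interface i over interface j.
    A = np.array([[1.0, 3.0, 5.0],
                  [1/3, 1.0, 2.0],
                  [1/5, 1/2, 1.0]])

    vals, vecs = np.linalg.eig(A)
    idx = np.argmax(np.real(vals))
    w = np.real(vecs[:, idx])
    w = w / w.sum()                          # principal eigenvector -> weights
    ci = (np.real(vals[idx]) - 3) / (3 - 1)  # consistency index of judgments
    print("priorities:", w, "CI:", ci)
    ```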

  19. A Multivariate Distance-Based Analytic Framework for Connectome-Wide Association Studies

    PubMed Central

    Shehzad, Zarrar; Kelly, Clare; Reiss, Philip T.; Craddock, R. Cameron; Emerson, John W.; McMahon, Katie; Copland, David A.; Castellanos, F. Xavier; Milham, Michael P.

    2014-01-01

    The identification of phenotypic associations in high-dimensional brain connectivity data represents the next frontier in the neuroimaging connectomics era. Exploration of brain-phenotype relationships remains limited by statistical approaches that are computationally intensive, depend on a priori hypotheses, or require stringent correction for multiple comparisons. Here, we propose a computationally efficient, data-driven technique for connectome-wide association studies (CWAS) that provides a comprehensive voxel-wise survey of brain-behavior relationships across the connectome; the approach identifies voxels whose whole-brain connectivity patterns vary significantly with a phenotypic variable. Using resting state fMRI data, we demonstrate the utility of our analytic framework by identifying significant connectivity-phenotype relationships for full-scale IQ and assessing their overlap with existent neuroimaging findings, as synthesized by openly available automated meta-analysis (www.neurosynth.org). The results appeared to be robust to the removal of nuisance covariates (i.e., mean connectivity, global signal, and motion) and varying brain resolution (i.e., voxelwise results are highly similar to results using 800 parcellations). We show that CWAS findings can be used to guide subsequent seed-based correlation analyses. Finally, we demonstrate the applicability of the approach by examining CWAS for three additional datasets, each encompassing a distinct phenotypic variable: neurotypical development, Attention-Deficit/Hyperactivity Disorder diagnostic status, and L-dopa pharmacological manipulation. For each phenotype, our approach to CWAS identified distinct connectome-wide association profiles, not previously attainable in a single study utilizing traditional univariate approaches. As a computationally efficient, extensible, and scalable method, our CWAS framework can accelerate the discovery of brain-behavior relationships in the connectome. PMID:24583255

  20. An analytical technique for approximating unsteady aerodynamics in the time domain

    NASA Technical Reports Server (NTRS)

    Dunn, H. J.

    1980-01-01

    An analytical technique is presented for approximating unsteady aerodynamic forces in the time domain. The order of elements of a matrix Pade approximation was postulated, and the resulting polynomial coefficients were determined through a combination of least squares estimates for the numerator coefficients and a constrained gradient search for the denominator coefficients, which ensures stable approximating functions. The number of differential equations required to represent the aerodynamic forces to a given accuracy tends to be smaller than that employed in certain existing techniques where the denominator coefficients are chosen a priori. Results are shown for an aeroelastic, cantilevered, semispan wing; they indicate that a good fit to the aerodynamic forces for oscillatory motion can be achieved with a matrix Pade approximation having fourth-order numerator and second-order denominator polynomials.
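
    A minimal sketch of the two-stage fitting strategy described above, reduced to a scalar force coefficient: linear least squares for the numerator at a fixed denominator, wrapped in a constrained search over denominator coefficients that rejects unstable poles. The toy transfer function and the Nelder-Mead search are illustrative assumptions, not the paper's exact gradient procedure.

```python
# Rational (Pade-style) fit of frequency-response data.
import numpy as np
from scipy.optimize import minimize

k = np.linspace(0.05, 2.0, 40)           # reduced frequencies
s = 1j * k                               # evaluate on the imaginary axis
Q = 1.0 + 0.5 * s + s / (s + 0.3)        # toy "tabulated" unsteady force data

def fit_numerator(b):
    """Least-squares numerator a0 + a1*s + a2*s^2 at fixed denominator b."""
    den = 1.0 + b[0] * s + b[1] * s**2
    M = np.column_stack([np.ones_like(s), s, s**2]) / den[:, None]
    M_ri = np.vstack([M.real, M.imag])   # stack real/imag parts for real lstsq
    Q_ri = np.concatenate([Q.real, Q.imag])
    a, *_ = np.linalg.lstsq(M_ri, Q_ri, rcond=None)
    resid = M_ri @ a - Q_ri
    return a, resid @ resid

def cost(b):
    # Crude stability constraint: positive coefficients keep both poles of a
    # quadratic denominator in the left half-plane.
    if b[0] <= 0 or b[1] <= 0:
        return 1e6
    return fit_numerator(b)[1]

b_opt = minimize(cost, x0=[1.0, 1.0], method="Nelder-Mead").x
print("denominator:", b_opt, "numerator:", fit_numerator(b_opt)[0])
```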

  1. Heat and mass transfer in combustion - Fundamental concepts and analytical techniques

    NASA Technical Reports Server (NTRS)

    Law, C. K.

    1984-01-01

    Fundamental combustion phenomena and the associated flame structures in laminar gaseous flows are discussed on physical bases within the framework of the three nondimensional parameters of interest to heat and mass transfer in chemically-reacting flows, namely the Damkoehler number, the Lewis number, and the Arrhenius number, which is the ratio of the reaction activation energy to the characteristic thermal energy. The model problems selected for illustration are droplet combustion, boundary layer combustion, and the propagation, flammability, and stability of premixed flames. Fundamental concepts discussed include the flame structures for large activation energy reactions, the S-curve interpretation of the ignition and extinction states, reaction-induced local similarity and non-similarity in boundary layer flows, the origin and removal of the cold-boundary difficulty in modeling flame propagation, and the effects of flame stretch and preferential diffusion on flame extinction and stability. Analytical techniques introduced include the Shvab-Zeldovich formulation, the local Shvab-Zeldovich formulation, the flame-sheet approximation and the associated jump formulation, and large activation energy matched asymptotic analysis. Potentially promising research areas are suggested.

  2. Fifty years of accelerator based physics at Chalk River

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McKay, John W.

    1999-04-26

    The Chalk River Laboratories of Atomic Energy of Canada Ltd. was a major centre for accelerator-based physics for the last fifty years. As early as 1946, nuclear structure studies were started on Cockcroft-Walton accelerators. A series of accelerators followed, including the world's first Tandem and the MP Tandem/Superconducting Cyclotron (TASCC) facility that was opened in 1986. The nuclear physics program was shut down in 1996. This paper will describe some of the highlights of the accelerators and the research of the laboratory.

  3. Functionality of empirical model-based predictive analytics for the early detection of hemodynamic instability.

    PubMed

    Summers, Richard L; Pipke, Matt; Wegerich, Stephan; Conkright, Gary; Isom, Kristen C

    2014-01-01

    Background. Monitoring cardiovascular hemodynamics in the modern clinical setting is a major challenge. Increasing amounts of physiologic data must be analyzed and interpreted in the context of the individual patient’s pathology and inherent biologic variability. Certain data-driven analytical methods are currently being explored for smart monitoring of data streams from patients as a first-tier automated detection system for clinical deterioration. As a prelude to human clinical trials, an empirical multivariate machine learning method called Similarity-Based Modeling (“SBM”) was tested in an in silico experiment using data generated with the aid of a detailed computer simulator of human physiology (Quantitative Circulatory Physiology or “QCP”), which contains complex control systems with realistic integrated feedback loops. Methods. SBM is a kernel-based, multivariate machine learning method that uses monitored clinical information to generate an empirical model of a patient’s physiologic state. This platform allows for the use of predictive analytic techniques to identify early changes in a patient’s condition that are indicative of a state of deterioration or instability. The integrity of the technique was tested through an in silico experiment using QCP in which the output of computer simulations of a slowly evolving cardiac tamponade resulted in a progressive state of cardiovascular decompensation. Simulator outputs for the variables under consideration were generated at a 2-min data rate (0.083 Hz) with the tamponade introduced at a point 420 minutes into the simulation sequence. The functionality of the SBM predictive analytics methodology to identify clinical deterioration was compared to the thresholds used by conventional monitoring methods. Results. The SBM modeling method was found to closely track the normal physiologic variation as simulated by QCP. With the slow development of the tamponade, the SBM model is seen to disagree while the
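
    A hedged sketch of the similarity-based idea: the current multivariate observation is reconstructed as a kernel-weighted blend of stored exemplars of normal physiology, and a growing residual flags deterioration. The Gaussian kernel, signal names, and values are illustrative assumptions; the commercial SBM implementation is not public.

```python
# Minimal kernel-similarity model in the spirit of SBM.
import numpy as np

def sbm_estimate(memory: np.ndarray, x: np.ndarray, h: float = 1.0):
    """memory: (n_exemplars, n_signals) of healthy states; x: current reading."""
    d = np.linalg.norm(memory - x, axis=1)
    w = np.exp(-(d / h) ** 2)             # Gaussian similarity kernel
    w /= w.sum()
    return w @ memory                     # model's reconstruction of x

rng = np.random.default_rng(1)
healthy = rng.normal([80, 120, 98], [3, 5, 1], size=(200, 3))  # HR, SBP, SpO2
reading = np.array([95.0, 100.0, 93.0])   # drifting toward decompensation
residual = reading - sbm_estimate(healthy, reading, h=5.0)
print(np.linalg.norm(residual))           # large residual => possible instability
```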

  4. Review of online coupling of sample preparation techniques with liquid chromatography.

    PubMed

    Pan, Jialiang; Zhang, Chengjiang; Zhang, Zhuomin; Li, Gongke

    2014-03-07

    Sample preparation is still considered the bottleneck of the whole analytical procedure, and efforts have been directed towards automation, improvement of sensitivity and accuracy, and low consumption of organic solvents. Development of online sample preparation (SP) techniques coupled with liquid chromatography (LC) is a promising way to achieve these goals, and it has attracted great attention. This article reviews recent advances in online SP-LC techniques. Various online SP techniques are described and summarized, including solid-phase-based extraction, liquid-phase-based extraction assisted with membranes, microwave-assisted extraction, ultrasonic-assisted extraction, accelerated solvent extraction, and supercritical fluid extraction. Specifically, the coupling approaches of online SP-LC systems and the corresponding interfaces are discussed and reviewed in detail, such as the online injector, autosampler combined with a transport unit, desorption chamber, and column switching. Typical applications of the online SP-LC techniques are summarized. Finally, the problems and expected trends in this field are discussed in order to encourage the further development of online SP-LC techniques. Copyright © 2014 Elsevier B.V. All rights reserved.

  5. Analytical techniques for characterization of cyclodextrin complexes in aqueous solution: a review.

    PubMed

    Mura, Paola

    2014-12-01

    Cyclodextrins are cyclic oligosaccharides endowed with a hydrophilic outer surface and a hydrophobic inner cavity, able to form inclusion complexes with a wide variety of guest molecules, positively affecting their physicochemical properties. In particular, in the pharmaceutical field, cyclodextrin complexation is mainly used to increase the aqueous solubility and dissolution rate of poorly soluble drugs, and to enhance their bioavailability and stability. Analytical characterization of host-guest interactions is of fundamental importance for fully exploiting the potential benefits of complexation, helping in selection of the most appropriate cyclodextrin. The assessment of the actual formation of a drug-cyclodextrin inclusion complex and its full characterization is not a simple task and often requires the use of different analytical methods, whose results have to be combined and examined together. The purpose of the present review is to give, as much as possible, a general overview of the main analytical tools which can be employed for the characterization of drug-cyclodextrin inclusion complexes in solution, with emphasis on their respective potential merits, disadvantages and limits. Further, the applicability of each examined technique is illustrated and discussed by specific examples from literature. Copyright © 2014 Elsevier B.V. All rights reserved.

  6. Vacuum electron acceleration by coherent dipole radiation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Troha, A.L.; Van Meter, J.R.; Landahl, E.C.

    1999-07-01

    The validity of the concept of laser-driven vacuum acceleration has been questioned, based on an extrapolation of the well-known Lawson-Woodward theorem, which stipulates that plane electromagnetic waves cannot accelerate charged particles in vacuum. To formally demonstrate that electrons can indeed be accelerated in vacuum by focusing or diffracting electromagnetic waves, the interaction between a point charge and coherent dipole radiation is studied in detail. The corresponding four-potential exactly satisfies both Maxwell's equations and the Lorentz gauge condition everywhere, and is analytically tractable. It is found that in the far-field region, where the field distribution closely approximates that of a plane wave, we recover the Lawson-Woodward result, while net acceleration is obtained in the near-field region. The scaling of the energy gain with wave-front curvature and wave amplitude is studied systematically. © 1999 The American Physical Society

  7. Split Bregman multicoil accelerated reconstruction technique: A new framework for rapid reconstruction of cardiac perfusion MRI

    PubMed Central

    Kamesh Iyer, Srikant; Tasdizen, Tolga; Likhite, Devavrat; DiBella, Edward

    2016-01-01

    Purpose: Rapid reconstruction of undersampled multicoil MRI data with iterative constrained reconstruction method is a challenge. The authors sought to develop a new substitution based variable splitting algorithm for faster reconstruction of multicoil cardiac perfusion MRI data. Methods: The new method, split Bregman multicoil accelerated reconstruction technique (SMART), uses a combination of split Bregman based variable splitting and iterative reweighting techniques to achieve fast convergence. Total variation constraints are used along the spatial and temporal dimensions. The method is tested on nine ECG-gated dog perfusion datasets, acquired with a 30-ray golden ratio radial sampling pattern and ten ungated human perfusion datasets, acquired with a 24-ray golden ratio radial sampling pattern. Image quality and reconstruction speed are evaluated and compared to a gradient descent (GD) implementation and to multicoil k-t SLR, a reconstruction technique that uses a combination of sparsity and low rank constraints. Results: Comparisons based on blur metric and visual inspection showed that SMART images had lower blur and better texture as compared to the GD implementation. On average, the GD based images had an ∼18% higher blur metric as compared to SMART images. Reconstruction of dynamic contrast enhanced (DCE) cardiac perfusion images using the SMART method was ∼6 times faster than standard gradient descent methods. k-t SLR and SMART produced images with comparable image quality, though SMART was ∼6.8 times faster than k-t SLR. Conclusions: The SMART method is a promising approach to reconstruct good quality multicoil images from undersampled DCE cardiac perfusion data rapidly. PMID:27036592
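
    The full SMART algorithm operates on undersampled multicoil radial k-space data with spatiotemporal total variation constraints; as a hedged, minimal analogue, the sketch below applies split Bregman variable splitting to 1D total-variation denoising, showing the three characteristic updates (linear solve, shrinkage, Bregman update). Parameters and data are illustrative.

```python
# Split Bregman for 1D TV denoising: min_u 0.5*||u-f||^2 + lam*|D u|_1.
import numpy as np

def shrink(x, t):
    """Soft-thresholding (shrinkage) operator."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def tv_denoise_split_bregman(f, lam=0.1, mu=1.0, n_iter=100):
    n = len(f)
    D = np.diff(np.eye(n), axis=0)            # forward-difference operator
    A = np.eye(n) + mu * D.T @ D              # fixed u-subproblem matrix
    u = f.copy()
    d = np.zeros(n - 1)                       # split variable d = D u
    b = np.zeros(n - 1)                       # Bregman variable
    for _ in range(n_iter):
        u = np.linalg.solve(A, f + mu * D.T @ (d - b))  # u-subproblem
        d = shrink(D @ u + b, lam / mu)                 # d-subproblem
        b += D @ u - d                                  # Bregman update
    return u

rng = np.random.default_rng(2)
truth = np.repeat([0.0, 1.0, 0.3], 50)
noisy = truth + 0.1 * rng.standard_normal(truth.size)
print(np.abs(tv_denoise_split_bregman(noisy) - truth).mean())
```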

  8. Can We Practically Bring Physics-based Modeling Into Operational Analytics Tools?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Granderson, Jessica; Bonvini, Marco; Piette, Mary Ann

    Analytics software is increasingly used to improve and maintain operational efficiency in commercial buildings. Energy managers, owners, and operators are using a diversity of commercial offerings often referred to as Energy Information Systems, Fault Detection and Diagnostic (FDD) systems, or more broadly Energy Management and Information Systems, to cost-effectively enable savings on the order of ten to twenty percent. Most of these systems use data from meters and sensors, with rule-based and/or data-driven models to characterize system and building behavior. In contrast, physics-based modeling uses first-principles and engineering models (e.g., efficiency curves) to characterize system and building behavior. Historically, these physics-based approaches have been used in the design phase of the building life cycle or in retrofit analyses. Researchers have begun exploring the benefits of integrating physics-based models with operational data analytics tools, bridging the gap between design and operations. In this paper, we detail the development and operator use of a software tool that uses hybrid data-driven and physics-based approaches to cooling plant FDD and optimization. Specifically, we describe the system architecture, models, and FDD and optimization algorithms; advantages and disadvantages with respect to purely data-driven approaches; and practical implications for scaling and replicating these techniques. Finally, we conclude with an evaluation of the future potential for such tools and future research opportunities.

  9. Applications of nuclear analytical techniques to environmental studies

    NASA Astrophysics Data System (ADS)

    Freitas, M. C.; Pacheco, A. M. G.; Marques, A. P.; Barros, L. I. C.; Reis, M. A.

    2001-07-01

    A few examples of application of nuclear-analytical techniques to biological monitors—natives and transplants—are given herein. Parmelia sulcata Taylor transplants were set up in a heavily industrialized area of Portugal—the Setúbal peninsula, about 50 km south of Lisbon—where indigenous lichens are rare. The whole area was 10×15 km around an oil-fired power station, and a 2.5×2.5 km grid was used. In north-western Portugal, native thalli of the same epiphytes (Parmelia spp., mostly Parmelia sulcata Taylor) and bark from olive trees (Olea europaea) were sampled across an area of 50×50 km, using a 10×10 km grid. This area is densely populated and features a blend of rural, urban-industrial and coastal environments, together with the country's second-largest metro area (Porto). All biomonitors have been analyzed by INAA and PIXE. Results were put through nonparametric tests and factor analysis for trend significance and emission sources, respectively.

  10. Analytical balance-based Faraday magnetometer

    NASA Astrophysics Data System (ADS)

    Riminucci, Alberto; Uhlarz, Marc; De Santis, Roberto; Herrmannsdörfer, Thomas

    2017-03-01

    We introduce a Faraday magnetometer based on an analytical balance in which we were able to apply magnetic fields up to 0.14 T. We calibrated it with a 1 mm Ni sphere previously characterized in a superconducting quantum interference device (SQUID) magnetometer. The proposed magnetometer reached a theoretical sensitivity of 3 × 10⁻⁸ A m². We demonstrated its operation on magnetic composite scaffolds made of poly(ɛ-caprolactone)/iron-doped hydroxyapatite. To confirm the validity of the method, we measured the same scaffold properties in a SQUID magnetometer. The agreement between the two measurements was within 5% at 0.127 T and 12% at 24 mT. With the addition, at small cost, of a permanent magnet and computer-controlled linear translators, we were thus able to assemble a Faraday magnetometer based on an analytical balance, which is a virtually ubiquitous instrument. This will make simple but effective magnetometry easily accessible to most laboratories, particularly those in the life sciences, which are increasingly interested in magnetic materials.
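
    A hedged sketch of the Faraday method's core relation as it applies to such an instrument: the balance reports an apparent mass change, which converts to a force and then to a magnetic moment through the known field gradient, m = F / (dB/dz). All numerical values are invented for illustration.

```python
# Faraday-method arithmetic: balance reading -> force -> magnetic moment.
g = 9.81          # gravitational acceleration, m/s^2
m_app = 2.0e-6    # apparent mass change read by the balance, kg
dBdz = 10.0       # field gradient at the sample position, T/m

force = m_app * g           # force on the sample, N
moment = force / dBdz       # dipole moment, A*m^2  (F_z = m * dB_z/dz)
print(f"{moment:.2e} A*m^2")
```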

  11. Spectrum Evolution of Accelerating or Slowing down Soliton at its Propagation in a Medium with Gold Nanorods

    NASA Astrophysics Data System (ADS)

    Trofimov, Vyacheslav A.; Lysak, Tatiana M.

    2018-04-01

    We investigate both numerically and analytically the spectrum evolution of a novel type of soliton - a nonlinear chirped accelerating or decelerating soliton - during femtosecond pulse propagation in a medium containing noble-metal nanoparticles. In our consideration, we take into account one- or two-photon absorption of laser radiation by nanorods, and the time-dependent change of the nanorod aspect ratio due to melting or reshaping caused by absorption of laser energy. The chirped solitons are formed due to the trapping of laser radiation by the nanorod-reshaping fronts, if a positive or negative phase-amplitude grating is induced by the laser radiation. Accelerating or slowing-down chirped soliton formation is accompanied by a blue or red shift of the soliton spectrum. To support our numerical results, we derived an approximate analytical law for the evolution of the spectrum maximum intensity along the propagation coordinate, based on earlier developed approximate analytical solutions for accelerating and decelerating solitons.

  12. SU-C-17A-07: The Development of An MR Accelerator-Enabled Planning-To-Delivery Technique for Stereotactic Palliative Radiotherapy Treatment of Spinal Metastases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoogcarspel, S J; Kontaxis, C; Velden, J M van der

    2014-06-01

    Purpose: To develop an MR accelerator-enabled online planning-to-delivery technique for stereotactic palliative radiotherapy treatment of spinal metastases. The technical challenges include automated stereotactic treatment planning, online MR-based dose calculation, and MR guidance during treatment. Methods: Using the CT data of 20 patients previously treated at our institution, a class solution for automated treatment planning for spinal bone metastases was created. For accurate dose simulation right before treatment, we fused geometrically correct online MR data with pretreatment CT data of the target volume (TV). For target tracking during treatment, a dynamic T2-weighted TSE MR sequence was developed. An in-house developed GPU-based IMRT optimization and dose calculation algorithm was used for fast treatment planning and simulation. An automatically generated treatment plan developed with this treatment planning system was irradiated on a clinical 6 MV linear accelerator and evaluated using a Delta4 dosimeter. Results: The automated treatment planning method yielded clinically viable plans for all patients. The MR-CT fusion based dose calculation accuracy was within 2% as compared to calculations performed with the original CT data. The dynamic T2-weighted TSE MR sequence was able to provide an update of the anatomical location of the TV every 10 seconds. Dose calculation and optimization of the automatically generated treatment plans using only one GPU took on average 8 minutes. The Delta4 measurement of the irradiated plan agreed with the dose calculation with a 3%/3mm gamma pass rate of 86.4%. Conclusions: The development of an MR accelerator-enabled planning-to-delivery technique for stereotactic palliative radiotherapy treatment of spinal metastases was presented. Future work will involve developing an intrafraction motion adaptation strategy, MR-only dose calculation, radiotherapy quality-assurance in a magnetic field, and streamlining the entire

  13. An Experiential Research-Focused Approach: Implementation in a Nonlaboratory-Based Graduate-Level Analytical Chemistry Course

    ERIC Educational Resources Information Center

    Toh, Chee-Seng

    2007-01-01

    A project is described which incorporates nonlaboratory research skills in a graduate level course on analytical chemistry. This project will help students to grasp the basic principles and concepts of modern analytical techniques and also help them develop relevant research skills in analytical chemistry.

  14. Estimates of effects of residual acceleration on USML-1 experiments

    NASA Technical Reports Server (NTRS)

    Naumann, Robert J.

    1995-01-01

    The purpose of this study effort was to develop analytical models to describe the effects of residual accelerations on the experiments to be carried on the first U.S. Microgravity Lab mission (USML-1) and to test the accuracy of these models by comparing the pre-flight predicted effects with the post-flight measured effects. After surveying the experiments to be performed on USML-1, it became evident that the anticipated residual accelerations during the USML-1 mission were well below the threshold for most of the primary experiments and all of the secondary (Glovebox) experiments, and that the only set of experiments that could provide quantifiable effects, and thus a definitive test of the analytical models, was the three melt growth experiments using the Bridgman-Stockbarger type Crystal Growth Furnace (CGF). This class of experiments is by far the most sensitive to the low-level quasi-steady accelerations that are unavoidable on spacecraft operating in low Earth orbit. Because of this, they have been the drivers for the acceleration requirements imposed on the Space Station. Therefore, it is appropriate that the models on which these requirements are based are tested experimentally. Also, since solidification proceeds directionally over a long period of time, the solidified ingot provides a more or less continuous record of the effects of acceleration disturbances.

  15. Microfluidic paper-based analytical devices for potential use in quantitative and direct detection of disease biomarkers in clinical analysis.

    PubMed

    Lim, Wei Yin; Goh, Boon Tong; Khor, Sook Mei

    2017-08-15

    Clinicians, working in the health-care diagnostic systems of developing countries, currently face the challenges of rising costs, increased number of patient visits, and limited resources. A significant trend is using low-cost substrates to develop microfluidic devices for diagnostic purposes. Various fabrication techniques, materials, and detection methods have been explored to develop these devices. Microfluidic paper-based analytical devices (μPADs) have gained attention for sensing multiplex analytes, confirming diagnostic test results, rapid sample analysis, and reducing the volume of samples and analytical reagents. μPADs, which can provide accurate and reliable direct measurement without sample pretreatment, can reduce patient medical burden and yield rapid test results, aiding physicians in choosing appropriate treatment. The objectives of this review are to provide an overview of the strategies used for developing paper-based sensors with enhanced analytical performances and to discuss the current challenges, limitations, advantages, disadvantages, and future prospects of paper-based microfluidic platforms in clinical diagnostics. μPADs, with validated and justified analytical performances, can potentially improve the quality of life by providing inexpensive, rapid, portable, biodegradable, and reliable diagnostics. Copyright © 2017 Elsevier B.V. All rights reserved.

  16. JAERI R & D on accelerator-based transmutation under OMEGA program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Takizuka, T.; Nishida, T.; Mizumoto, M.

    1995-10-01

    The overview of the Japanese long-term research and development program on nuclide partitioning and transmutation, called "OMEGA," is presented. Under this national program, major R&D activities are being carried out at JAERI, PNC, and CRIEPI. The accelerator-based transmutation study at JAERI is focused on a dedicated transmutor with an actinide-fueled subcritical core coupled to a spallation target driven by a high-intensity proton accelerator. Two types of system concept, a solid system and a molten-salt system, are discussed. The solid system consists of a sodium-cooled tungsten target and metallic actinide fuel. The molten-salt system is fueled with molten actinide chloride, which also acts as a target material. The proposed plant transmutes about 250 kg of minor actinides per year and generates enough electricity to power its own accelerator. JAERI is proposing the development of an intense proton linear accelerator, ETA, with a 1.5 GeV-10 mA beam for engineering tests of accelerator-based transmutation. Recent achievements in the accelerator development are described.

  17. Rapid Detection of Transition Metals in Welding Fumes Using Paper-Based Analytical Devices

    PubMed Central

    Volckens, John

    2014-01-01

    Metals in particulate matter (PM) are considered a driving factor for many pathologies. Despite the hazards associated with particulate metals, personal exposures for at-risk workers are rarely assessed due to the cost and effort associated with monitoring. As a result, routine exposure assessments are performed for only a small fraction of the exposed workforce. The objective of this research was to evaluate a relatively new technology, microfluidic paper-based analytical devices (µPADs), for measuring the metals content in welding fumes. Fumes from three common welding techniques (shielded metal arc, metal inert gas, and tungsten inert gas welding) were sampled in two welding shops. Concentrations of acid-extractable Fe, Cu, Ni, and Cr were measured and independently verified using inductively coupled plasma-optical emission spectroscopy (ICP-OES). Results from the µPAD sensors agreed well with ICP-OES analysis; the two methods gave statistically similar results in >80% of the samples analyzed. Analytical costs for the µPAD technique were ~50 times lower than market-rate costs with ICP-OES. Further, the µPAD method was capable of providing same-day results (as opposed to several weeks for ICP laboratory analysis). Results of this work suggest that µPAD sensors are a viable yet inexpensive alternative to traditional analytic methods for transition metals in welding fume PM. These sensors have the potential to enable substantially higher levels of hazard surveillance for a given resource cost, especially in resource-limited environments. PMID:24515892

  18. Rapid detection of transition metals in welding fumes using paper-based analytical devices.

    PubMed

    Cate, David M; Nanthasurasak, Pavisara; Riwkulkajorn, Pornpak; L'Orange, Christian; Henry, Charles S; Volckens, John

    2014-05-01

    Metals in particulate matter (PM) are considered a driving factor for many pathologies. Despite the hazards associated with particulate metals, personal exposures for at-risk workers are rarely assessed due to the cost and effort associated with monitoring. As a result, routine exposure assessments are performed for only a small fraction of the exposed workforce. The objective of this research was to evaluate a relatively new technology, microfluidic paper-based analytical devices (µPADs), for measuring the metals content in welding fumes. Fumes from three common welding techniques (shielded metal arc, metal inert gas, and tungsten inert gas welding) were sampled in two welding shops. Concentrations of acid-extractable Fe, Cu, Ni, and Cr were measured and independently verified using inductively coupled plasma-optical emission spectroscopy (ICP-OES). Results from the µPAD sensors agreed well with ICP-OES analysis; the two methods gave statistically similar results in >80% of the samples analyzed. Analytical costs for the µPAD technique were ~50 times lower than market-rate costs with ICP-OES. Further, the µPAD method was capable of providing same-day results (as opposed to several weeks for ICP laboratory analysis). Results of this work suggest that µPAD sensors are a viable yet inexpensive alternative to traditional analytic methods for transition metals in welding fume PM. These sensors have the potential to enable substantially higher levels of hazard surveillance for a given resource cost, especially in resource-limited environments.

  19. Analysis of low molecular weight metabolites in tea using mass spectrometry-based analytical methods.

    PubMed

    Fraser, Karl; Harrison, Scott J; Lane, Geoff A; Otter, Don E; Hemar, Yacine; Quek, Siew-Young; Rasmussen, Susanne

    2014-01-01

    Tea is the second most consumed beverage in the world after water, and there are numerous reported health benefits from consuming tea, such as reducing the risk of cardiovascular disease and many types of cancer. Thus, there is much interest in the chemical composition of teas, for example: defining components responsible for contributing to reported health benefits; defining quality characteristics such as product flavor; and monitoring for pesticide residues to comply with food safety import/export requirements. Covered in this review are some of the latest developments in mass spectrometry-based analytical techniques for measuring and characterizing low molecular weight components of tea, in particular primary and secondary metabolites. The methodology, more specifically the chromatography and detection mechanisms used in both targeted and non-targeted studies, and its main advantages and disadvantages are discussed. Finally, we comment on the latest techniques that are likely to have significant benefit to analysts in the future, not merely in the area of tea research, but in the analytical chemistry of low molecular weight compounds in general.

  20. Recent advances in computational-analytical integral transforms for convection-diffusion problems

    NASA Astrophysics Data System (ADS)

    Cotta, R. M.; Naveira-Cotta, C. P.; Knupp, D. C.; Zotin, J. L. Z.; Pontes, P. C.; Almeida, A. P.

    2017-10-01

    A unifying overview of the Generalized Integral Transform Technique (GITT) as a computational-analytical approach for solving convection-diffusion problems is presented. This work is aimed at bringing together some of the most recent developments on both accuracy and convergence improvements of this well-established hybrid numerical-analytical methodology for partial differential equations. Special emphasis is given to novel algorithm implementations, all directly connected to enhancing the eigenfunction expansion basis, such as a single-domain reformulation strategy for handling complex geometries, an integral balance scheme for dealing with multiscale problems, the adoption of convective eigenvalue problems in formulations with significant convection effects, and the direct integral transformation of nonlinear convection-diffusion problems based on nonlinear eigenvalue problems. Then, selected examples are presented that illustrate the improvement achieved in each class of extension, in terms of convergence acceleration and accuracy gain, which are related to conjugated heat transfer in complex or multiscale microchannel-substrate geometries, the multidimensional Burgers equation model, and diffusive metal extraction through polymeric hollow fiber membranes. Numerical results are reported for each application and, where appropriate, critically compared against the traditional GITT scheme without convergence enhancement schemes and against commercial or dedicated purely numerical approaches.
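
    As a hedged, minimal illustration of the eigenfunction-expansion idea that GITT builds on, the sketch below solves 1D transient diffusion with Dirichlet ends by integral-transforming the initial condition and inverting the expansion; GITT generalizes this by turning the transformed PDE into a coupled ODE system. The problem and parameters are illustrative.

```python
# Eigenfunction expansion for u_t = alpha*u_xx, u(0,t) = u(L,t) = 0.
import numpy as np

L, alpha, n_terms = 1.0, 0.01, 50
x = np.linspace(0.0, L, 201)
dx = x[1] - x[0]
f = np.where((x > 0.4) & (x < 0.6), 1.0, 0.0)    # initial temperature profile

def solve(t):
    u = np.zeros_like(x)
    for n in range(1, n_terms + 1):
        phi = np.sin(n * np.pi * x / L)          # eigenfunction of d^2/dx^2
        c_n = (2.0 / L) * np.sum(f * phi) * dx   # integral transform of f
        u += c_n * phi * np.exp(-alpha * (n * np.pi / L) ** 2 * t)  # inversion
    return u

print(solve(5.0).max())    # peak temperature after diffusion has acted
```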

  1. Hardware accelerated high performance neutron transport computation based on AGENT methodology

    NASA Astrophysics Data System (ADS)

    Xiao, Shanjie

    The spatial heterogeneity of next-generation Gen-IV nuclear reactor core designs brings challenges to neutron transport analysis. The Arbitrary Geometry Neutron Transport (AGENT) code is a three-dimensional neutron transport analysis code being developed at the Laboratory for Neutronics and Geometry Computation (NEGE) at Purdue University. It can accurately describe the spatial heterogeneity in a hierarchical structure through the R-function solid modeler. The previous version of AGENT coupled the 2D transport MOC solver and the 1D diffusion NEM solver to solve the three-dimensional Boltzmann transport equation. In this research, the 2D/1D coupling methodology was expanded to couple two transport solvers, the radial 2D MOC solver and the axial 1D MOC solver, for better accuracy. The expansion was benchmarked with the widely applied C5G7 benchmark models and two fast breeder reactor models, and showed good agreement with the reference Monte Carlo results. In practice, accurate neutron transport analysis for a full reactor core is still time-consuming, which limits its application. Therefore, the second focus of this research is the design of specific hardware, based on the reconfigurable computing technique, to accelerate AGENT computations. It is the first time that an application of this type has been used in reactor physics and neutron transport for reactor design. The most time-consuming part of the AGENT algorithm was identified, and the architecture of the AGENT acceleration system was designed based on this analysis. Through parallel computation on the specially designed, highly efficient architecture, the acceleration design on FPGA achieves high performance at a much lower working frequency than CPUs. Whole-design simulations show that the acceleration design would be able to speed up large-scale AGENT computations about 20 times. The high performance AGENT acceleration system will drastically shorten the

  2. Analytical N beam position monitor method

    NASA Astrophysics Data System (ADS)

    Wegscheider, A.; Langner, A.; Tomás, R.; Franchi, A.

    2017-11-01

    Measurement and correction of focusing errors is of great importance for the performance and machine protection of circular accelerators. Furthermore, the LHC needs to provide equal luminosities to the experiments ATLAS and CMS. High demands are also set on the speed of optics commissioning, as the foreseen operation with β*-leveling on luminosity will require many operational optics. A fast measurement of the β-function around a storage ring is usually done by using the measured phase advance between three consecutive beam position monitors (BPMs). A recent extension of this established technique, called the N-BPM method, was successfully applied for optics measurements at CERN, ALBA, and ESRF. We present here an improved algorithm that uses analytical calculations for both random and systematic errors and takes into account the presence of quadrupole, sextupole, and BPM misalignments, in addition to quadrupolar field errors. This new scheme, called the analytical N-BPM method, is much faster, further improves the measurement accuracy, and is applicable to very pushed beam optics where the existing numerical N-BPM method tends to fail.
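
    A hedged sketch of the classic three-BPM relation that the N-BPM family generalizes, as commonly given in the optics-measurement literature: the measured beta at the first BPM follows from measured and model phase advances to two downstream BPMs. The numbers are illustrative, and this omits the error analysis that is the paper's actual contribution.

```python
# Three-BPM beta measurement from phase advances (illustrative only).
import numpy as np

def beta_from_phases(beta1_model, phi12_mdl, phi13_mdl, phi12_meas, phi13_meas):
    """Measured beta at BPM1 from model beta and phase advances (radians)."""
    cot = lambda p: 1.0 / np.tan(p)
    return beta1_model * (cot(phi12_meas) - cot(phi13_meas)) / (
        cot(phi12_mdl) - cot(phi13_mdl))

# Model phase advances and slightly perturbed "measured" ones.
print(beta_from_phases(30.0, 0.8, 2.1, 0.82, 2.08))
```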

  3. An analytical and experimental evaluation of a Fresnel lens solar concentrator

    NASA Technical Reports Server (NTRS)

    Hastings, L. J.; Allums, S. A.; Cosby, R. M.

    1976-01-01

    An analytical and experimental evaluation of line-focusing Fresnel lenses with application potential in the 200 to 370 C range is presented. Analytical techniques were formulated to assess the solar transmission and imaging properties of a grooves-down lens. Experimentation was based on a 56 cm wide, f/1.0 lens. A Sun-tracking heliostat provided a nonmoving solar source. Measured data indicated more spreading at the profile base than analytically predicted, resulting in a peak concentration 18 percent lower than the computed peak of 57. The measured and computed transmittances were 85 and 87 percent, respectively. Preliminary testing with a subsequent lens indicated that modified manufacturing techniques corrected the profile-spreading problem and should enable improved analytical-experimental correlation.

  4. Physics-Based Fragment Acceleration Modeling for Pressurized Tank Burst Risk Assessments

    NASA Technical Reports Server (NTRS)

    Manning, Ted A.; Lawrence, Scott L.

    2014-01-01

    As part of comprehensive efforts to develop physics-based risk assessment techniques for space systems at NASA, coupled computational fluid and rigid body dynamic simulations were carried out to investigate the flow mechanisms that accelerate tank fragments in bursting pressurized vessels. Simulations of several configurations were compared to analyses based on the industry-standard Baker explosion model, and were used to formulate an improved version of the model. The standard model, which neglects an external fluid, was found to agree best with simulation results only in configurations where the internal-to-external pressure ratio is very high and fragment curvature is small. The improved model introduces terms that accommodate an external fluid and better account for variations based on circumferential fragment count. Physics-based analysis was critical in increasing the model's range of applicability. The improved tank burst model can be used to produce more accurate risk assessments of space vehicle failure modes that involve high-speed debris, such as exploding propellant tanks and bursting rocket engines.

  5. Macro elemental analysis of food samples by nuclear analytical technique

    NASA Astrophysics Data System (ADS)

    Syahfitri, W. Y. N.; Kurniawati, S.; Adventini, N.; Damastuti, E.; Lestiani, D. D.

    2017-06-01

    Energy-dispersive X-ray fluorescence (EDXRF) spectrometry is a non-destructive, rapid, multi-elemental, accurate, and environmentally friendly analysis method compared with other detection methods. Thus, EDXRF spectrometry is applicable for food inspection. The macro elements calcium and potassium constitute important nutrients required by the human body for optimal physiological functions. Therefore, the determination of Ca and K content in various foods needs to be done. The aim of this work is to demonstrate the applicability of EDXRF for food analysis. The analytical performance of non-destructive EDXRF was compared with that of other analytical techniques: neutron activation analysis (NAA) and atomic absorption spectrometry (AAS). The comparison of methods was performed to cross-check the analysis results and to overcome the limitations of the three methods. The results showed that the Ca contents found in food using EDXRF and AAS were not significantly different (p-value 0.9687), whereas the p-value for K between EDXRF and NAA was 0.6575. The correlation between those results was also examined: the Pearson correlations for Ca and K were 0.9871 and 0.9558, respectively. Method validation using SRM NIST 1548a Typical Diet was also applied. The results showed good agreement between methods; therefore the EDXRF method can be used as an alternative for the determination of Ca and K in food samples.
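
    A hedged sketch of the kind of cross-method comparison reported above: a paired significance test for bias between two techniques on the same samples, plus a Pearson correlation for agreement in trend. The data are synthetic; the paper's exact statistical procedure is not specified beyond the reported p-values and correlations.

```python
# Method-comparison statistics on synthetic paired measurements.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
ca_edxrf = rng.normal(1200, 150, 20)             # mg/kg, 20 food samples
ca_aas = ca_edxrf + rng.normal(0, 40, 20)        # second method, same samples

t, p = stats.ttest_rel(ca_edxrf, ca_aas)         # paired test for bias
r, _ = stats.pearsonr(ca_edxrf, ca_aas)          # agreement in trend
print(f"p = {p:.4f}, Pearson r = {r:.4f}")       # p >> 0.05 => no sig. difference
```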

  6. Approximate analytical relationships for linear optimal aeroelastic flight control laws

    NASA Astrophysics Data System (ADS)

    Kassem, Ayman Hamdy

    1998-09-01

    This dissertation introduces new methods to uncover functional relationships between design parameters of a contemporary control design technique and the resulting closed-loop properties. Three new methods are developed for generating such relationships through analytical expressions: the Direct Eigen-Based Technique, the Order of Magnitude Technique, and the Cost Function Imbedding Technique. Efforts concentrated on the linear-quadratic state-feedback control-design technique applied to an aeroelastic flight control task. For this specific application, simple and accurate analytical expressions for the closed-loop eigenvalues and zeros in terms of basic parameters such as stability and control derivatives, structural vibration damping and natural frequency, and cost function weights are generated. These expressions explicitly indicate how the weights augment the short period and aeroelastic modes, as well as the closed-loop zeros, and by what physical mechanism. The analytical expressions are used to address topics such as damping, nonminimum phase behavior, stability, and performance with robustness considerations, and design modifications. This type of knowledge is invaluable to the flight control designer and would be more difficult to formulate when obtained from numerical-based sensitivity analysis.
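
    As a hedged numerical companion to the analytical relationships described above, the sketch below computes the closed-loop eigenvalues produced by linear-quadratic state feedback on a toy two-state, short-period-like model; varying Q and R shows how the cost-function weights move the eigenvalues. The model matrices are invented for illustration.

```python
# LQR closed-loop eigenvalues for a toy short-period-like model.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[-0.6, 1.0],
              [-8.0, -1.2]])          # illustrative stability derivatives
B = np.array([[0.0], [-6.0]])         # control effectiveness
Q = np.diag([4.0, 1.0])               # state weights in the cost function
R = np.array([[1.0]])                 # control weight

P = solve_continuous_are(A, B, Q, R)  # algebraic Riccati equation
K = np.linalg.solve(R, B.T @ P)       # optimal state-feedback gain
print(np.linalg.eigvals(A - B @ K))   # closed-loop eigenvalues vs. weights
```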

  7. Speciation and Characterization of E-Waste, Using Analytical Techniques

    NASA Astrophysics Data System (ADS)

    López, C. Cortés; Cruz, V. E. Reyes; Rodríguez, M. A. Veloz; Ávila, J. Hernández; Badillo, J. Flores; Murcia, J. A. Cobos

    Electronic waste (e-waste) has high potential as a source of precious metals, since it can contain metals such as silver, gold, platinum, copper, zinc, nickel, and tin. In this paper some e-waste samples were characterized using several analytical techniques, including Scanning Electron Microscopy (SEM), X-ray diffraction (XRD), and inductively coupled plasma (ICP) analysis, in addition to a thermodynamic study by Pourbaix diagrams of silver (Ag), gold (Au), platinum (Pt), copper (Cu), nickel (Ni), tin (Sn) and zinc (Zn), considering an average low concentration of HNO3 (10% v/v). From the characterization results it was determined that e-waste is an ideal source for the recovery of valuable metals. Similarly, the thermodynamic studies showed that it is possible to obtain all metallic species except Pt in a potential window of 1.45 V to 2.0 V vs. SCE.

  8. Comparison of a two-dimensional adaptive-wall technique with analytical wall interference correction techniques

    NASA Technical Reports Server (NTRS)

    Mineck, Raymond E.

    1992-01-01

    A two-dimensional airfoil model was tested in the adaptive wall test section of the NASA Langley 0.3 meter Transonic Cryogenic Tunnel (TCT) and in the ventilated test section of the National Aeronautical Establishment Two-Dimensional High Reynolds Number Facility (HRNF). The primary goal of the tests was to compare different techniques (adaptive test section walls and classical, analytical corrections) to account for wall interference. Tests were conducted over a Mach number range from 0.3 to 0.8 at chord Reynolds numbers of 10 x 10(exp 6), 15 x 10(exp 6), and 20 x 10(exp 6). The angle of attack was varied from about 12 degrees up to stall. Movement of the top and bottom test section walls was used to account for the wall interference in the HRNF tests. The test results are in good agreement.

  9. Reliability-based structural optimization: A proposed analytical-experimental study

    NASA Technical Reports Server (NTRS)

    Stroud, W. Jefferson; Nikolaidis, Efstratios

    1993-01-01

    An analytical and experimental study for assessing the potential of reliability-based structural optimization is proposed and described. In the study, competing designs obtained by deterministic and reliability-based optimization are compared. The experimental portion of the study is practical because the structure selected is a modular, actively and passively controlled truss that consists of many identical members, and because the competing designs are compared in terms of their dynamic performance and are not destroyed if failure occurs. The analytical portion of this study is illustrated on a 10-bar truss example. In the illustrative example, it is shown that reliability-based optimization can yield a design that is superior to an alternative design obtained by deterministic optimization. These analytical results provide motivation for the proposed study, which is underway.

  10. Eco-analytical Methodology in Environmental Problems Monitoring

    NASA Astrophysics Data System (ADS)

    Agienko, M. I.; Bondareva, E. P.; Chistyakova, G. V.; Zhironkina, O. V.; Kalinina, O. I.

    2017-01-01

    Among the problems common to all mankind, whose solutions influence the prospects of civilization, the monitoring of the ecological situation occupies a very important place. Solving this problem requires a specific methodology based on eco-analytical comprehension of global issues. Eco-analytical methodology should help in searching for the optimum balance between environmental problems and accelerating scientific and technical progress. The fact that governments, corporations, scientists and nations focus on the production and consumption of material goods causes great damage to the environment. As a result, the activity of environmentalists is developing quite spontaneously, as a complement to productive activities. Therefore, the challenge posed by environmental problems for science is the formation of geo-analytical reasoning and the monitoring of global problems common to the whole of humanity. It is thus expected to find the optimal trajectory of industrial development to prevent irreversible problems in the biosphere that could stop the progress of civilization.

  11. Pavement Performance : Approaches Using Predictive Analytics

    DOT National Transportation Integrated Search

    2018-03-23

    Acceptable pavement condition is paramount to road safety. Using predictive analytics techniques, this project attempted to develop models that provide an assessment of pavement condition based on an array of indicators that include pavement distress,...

  12. Identification of Microorganisms by Modern Analytical Techniques.

    PubMed

    Buszewski, Bogusław; Rogowska, Agnieszka; Pomastowski, Paweł; Złoch, Michał; Railean-Plugaru, Viorica

    2017-11-01

    Rapid detection and identification of microorganisms is a challenging and important aspect in a wide range of fields, from medical to industrial, affecting human lives. Unfortunately, classical methods of microorganism identification are based on time-consuming and labor-intensive approaches. Screening techniques require the rapid and cheap grouping of bacterial isolates; however, modern bioanalytics demand comprehensive bacterial studies at a molecular level. Modern approaches for the rapid identification of bacteria use molecular techniques, such as 16S ribosomal RNA gene sequencing based on polymerase chain reaction or electromigration, especially capillary zone electrophoresis and capillary isoelectric focusing. However, there are still several challenges with the analysis of microbial complexes using electromigration technology, such as uncontrolled aggregation and/or adhesion to the capillary surface. Thus, an approach using capillary electrophoresis of microbial aggregates with UV and matrix-assisted laser desorption ionization time-of-flight MS detection is presented.

  13. Big Data Analytics with Datalog Queries on Spark.

    PubMed

    Shkapsky, Alexander; Yang, Mohan; Interlandi, Matteo; Chiu, Hsuan; Condie, Tyson; Zaniolo, Carlo

    2016-01-01

    There is great interest in exploiting the opportunity provided by cloud computing platforms for large-scale analytics. Among these platforms, Apache Spark is growing in popularity for machine learning and graph analytics. Developing efficient complex analytics in Spark requires deep understanding of both the algorithm at hand and the Spark API or subsystem APIs (e.g., Spark SQL, GraphX). Our BigDatalog system addresses the problem by providing concise declarative specification of complex queries amenable to efficient evaluation. Towards this goal, we propose compilation and optimization techniques that tackle the important problem of efficiently supporting recursion in Spark. We perform an experimental comparison with other state-of-the-art large-scale Datalog systems and verify the efficacy of our techniques and effectiveness of Spark in supporting Datalog-based analytics.
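
    A hedged sketch of the recursion that BigDatalog is designed to support, written as semi-naive evaluation of transitive closure in plain Python; a Spark implementation would distribute the same delta-driven loop across partitions. The edge relation is illustrative.

```python
# Semi-naive evaluation of Datalog transitive closure:
#   tc(X, Y) <- edge(X, Y).
#   tc(X, Y) <- tc(X, Z), edge(Z, Y).
edges = {(1, 2), (2, 3), (3, 4), (2, 5)}

tc = set(edges)          # base rule
delta = set(edges)       # newly derived facts from the last round
while delta:
    new = {(x, w) for (x, y) in delta for (z, w) in edges if y == z} - tc
    tc |= new
    delta = new          # only new facts feed the next iteration

print(sorted(tc))
```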

  14. Big Data Analytics with Datalog Queries on Spark

    PubMed Central

    Shkapsky, Alexander; Yang, Mohan; Interlandi, Matteo; Chiu, Hsuan; Condie, Tyson; Zaniolo, Carlo

    2017-01-01

    There is great interest in exploiting the opportunity provided by cloud computing platforms for large-scale analytics. Among these platforms, Apache Spark is growing in popularity for machine learning and graph analytics. Developing efficient complex analytics in Spark requires deep understanding of both the algorithm at hand and the Spark API or subsystem APIs (e.g., Spark SQL, GraphX). Our BigDatalog system addresses the problem by providing concise declarative specification of complex queries amenable to efficient evaluation. Towards this goal, we propose compilation and optimization techniques that tackle the important problem of efficiently supporting recursion in Spark. We perform an experimental comparison with other state-of-the-art large-scale Datalog systems and verify the efficacy of our techniques and effectiveness of Spark in supporting Datalog-based analytics. PMID:28626296

  15. A meta-analytic review of school-based prevention for cannabis use.

    PubMed

    Porath-Waller, Amy J; Beasley, Erin; Beirness, Douglas J

    2010-10-01

    This investigation used meta-analytic techniques to evaluate the effectiveness of school-based prevention programming in reducing cannabis use among youth aged 12 to 19. It summarized the results from 15 studies published in peer-reviewed journals since 1999 and identified features that influenced program effectiveness. The results from the set of 15 studies indicated that these school-based programs had a positive impact on reducing students' cannabis use (d = 0.58, CI: 0.55, 0.62) compared to control conditions. Findings revealed that programs incorporating elements of several prevention models were significantly more effective than were those based on only a social influence model. Programs that were longer in duration (≥15 sessions) and facilitated by individuals other than teachers in an interactive manner also yielded stronger effects. The results also suggested that programs targeting high school students were more effective than were those aimed at middle-school students. Implications for school-based prevention programming are discussed.
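
    A hedged sketch of the pooling arithmetic behind a summary effect such as the d = 0.58 reported above: a fixed-effect, inverse-variance weighted mean of per-study standardized mean differences with a 95% confidence interval. The study values are invented; the review's actual model (fixed vs. random effects) is not stated in the abstract.

```python
# Inverse-variance weighted pooling of per-study effect sizes.
import numpy as np

d = np.array([0.40, 0.65, 0.55, 0.70])      # per-study effect sizes
var = np.array([0.02, 0.03, 0.015, 0.05])   # their sampling variances

w = 1.0 / var                               # inverse-variance weights
d_pooled = np.sum(w * d) / np.sum(w)        # pooled effect
se = np.sqrt(1.0 / np.sum(w))               # standard error of the pool
ci = (d_pooled - 1.96 * se, d_pooled + 1.96 * se)
print(d_pooled, ci)
```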

  16. Advances in analytical chemistry

    NASA Technical Reports Server (NTRS)

    Arendale, W. F.; Congo, Richard T.; Nielsen, Bruce J.

    1991-01-01

    Implementation of computer programs based on multivariate statistical algorithms makes possible obtaining reliable information from long data vectors that contain large amounts of extraneous information, for example, noise and/or analytes that we do not wish to control. Three examples are described. Each of these applications requires the use of techniques characteristic of modern analytical chemistry. The first example, using a quantitative or analytical model, describes the determination of the acid dissociation constant for 2,2'-pyridyl thiophene using archived data. The second example describes an investigation to determine the active biocidal species of iodine in aqueous solutions. The third example is taken from a research program directed toward advanced fiber-optic chemical sensors. The second and third examples require heuristic or empirical models.

  17. Electrostatic design and beam transport for a folded tandem electrostatic quadrupole accelerator facility for accelerator-based boron neutron capture therapy.

    PubMed

    Vento, V Thatar; Bergueiro, J; Cartelli, D; Valda, A A; Kreiner, A J

    2011-12-01

    Within the frame of an ongoing project to develop a folded Tandem-Electrostatic-Quadrupole (TESQ) accelerator facility for Accelerator-Based Boron Neutron Capture Therapy (AB-BNCT), we discuss here the electrostatic design of the machine, including the accelerator tubes with electrostatic quadrupoles and the simulations for the transport and acceleration of a high intensity beam. Copyright © 2011 Elsevier Ltd. All rights reserved.

  18. Vibration-Based Method Developed to Detect Cracks in Rotors During Acceleration Through Resonance

    NASA Technical Reports Server (NTRS)

    Sawicki, Jerzy T.; Baaklini, George Y.; Gyekenyesi, Andrew L.

    2004-01-01

    In recent years, there has been an increasing interest in developing rotating machinery shaft crack-detection methodologies and online techniques. Shaft crack problems present a significant safety and loss hazard in nearly every application of modern turbomachinery. In many cases, the rotors of modern machines are rapidly accelerated from rest to operating speed, to reduce the excessive vibrations at the critical speeds. The vibration monitoring during startup or shutdown has been receiving growing attention (ref. 1), especially for machines such as aircraft engines, which are subjected to frequent starts and stops, as well as high speeds and acceleration rates. It has been recognized that the presence of angular acceleration strongly affects the rotor's maximum response to unbalance and the speed at which it occurs. Unfortunately, conventional nondestructive evaluation (NDE) methods have unacceptable limits in terms of their application for online crack detection. Some of these techniques are time consuming and inconvenient for turbomachinery service testing. Almost all of these techniques require that the vicinity of the damage be known in advance, and they can provide only local information, with no indication of the structural strength at a component or system level. In addition, the effectiveness of these experimental techniques is affected by the high measurement noise levels existing in complex turbomachine structures. Therefore, the use of vibration monitoring along with vibration analysis has been receiving increasing attention.

  19. Ensemble Manifold Rank Preserving for Acceleration-Based Human Activity Recognition.

    PubMed

    Tao, Dapeng; Jin, Lianwen; Yuan, Yuan; Xue, Yang

    2016-06-01

    With the rapid development of mobile devices and pervasive computing technologies, acceleration-based human activity recognition, a difficult yet essential problem in mobile apps, has received intensive attention recently. Different acceleration signals representing different activities, or even the same activity, have different attributes, which causes difficulty in normalizing the signals. We thus cannot directly compare these signals with each other, because they are embedded in a nonmetric space. Therefore, we present a nonmetric scheme that retains discriminative and robust frequency-domain information by developing a novel ensemble manifold rank preserving (EMRP) algorithm. EMRP simultaneously considers three aspects: 1) it encodes the local geometry using the ranking order information of intraclass samples distributed on local patches; 2) it keeps the discriminative information by maximizing the margin between samples of different classes; and 3) it finds the optimal linear combination of the alignment matrices to approximate the intrinsic manifold underlying the data. Experiments are conducted on the South China University of Technology naturalistic 3-D acceleration-based activity dataset and the naturalistic mobile-devices-based human activity dataset to demonstrate the robustness and effectiveness of the new nonmetric scheme for acceleration-based human activity recognition.

  20. Can Accelerators Accelerate Learning?

    NASA Astrophysics Data System (ADS)

    Santos, A. C. F.; Fonseca, P.; Coelho, L. F. S.

    2009-03-01

    The 'Young Talented' education program developed by the Brazilian State Funding Agency (FAPERJ) [1] makes it possible for students from public high schools to perform activities in scientific laboratories. In the Atomic and Molecular Physics Laboratory at the Federal University of Rio de Janeiro (UFRJ), the students are confronted with modern research tools like the 1.7 MV ion accelerator. Being a user-friendly machine, the accelerator is easily manageable by the students, who can perform simple hands-on activities, stimulating interest in physics and bringing them close to modern laboratory techniques.

  1. Accelerator based epithermal neutron source

    NASA Astrophysics Data System (ADS)

    Taskaev, S. Yu.

    2015-11-01

    We review the current status of the development of accelerator sources of epithermal neutrons for boron neutron capture therapy (BNCT), a promising method of malignant tumor treatment. Particular attention is given to the source of epithermal neutrons on the basis of a new type of charged particle accelerator: tandem accelerator with vacuum insulation and lithium neutron-producing target. It is also shown that the accelerator with specialized targets makes it possible to generate fast and monoenergetic neutrons, resonance and monoenergetic gamma-rays, alpha-particles, and positrons.

  2. Comparison of Analytic Hierarchy Process, Catastrophe and Entropy techniques for evaluating groundwater prospect of hard-rock aquifer systems

    NASA Astrophysics Data System (ADS)

    Jenifer, M. Annie; Jha, Madan K.

    2017-05-01

    Groundwater is a treasured underground resource, which plays a central role in sustainable water management. However, it being hidden and dynamic in nature, its sustainable development and management calls for precise quantification of this precious resource at an appropriate scale. This study demonstrates the efficacy of three GIS-based multi-criteria decision analysis (MCDA) techniques, viz., Analytic Hierarchy Process (AHP), Catastrophe and Entropy in evaluating groundwater potential through a case study in hard-rock aquifer systems. Using satellite imagery and relevant field data, eight thematic layers (rainfall, land slope, drainage density, soil, lineament density, geology, proximity to surface water bodies and elevation) of the factors having significant influence on groundwater occurrence were prepared. These thematic layers and their features were assigned suitable weights based on the conceptual frameworks of AHP, Catastrophe and Entropy techniques and then they were integrated in the GIS environment to generate an integrated raster layer depicting groundwater potential index of the study area. The three groundwater prospect maps thus yielded by these MCDA techniques were verified using a novel approach (concept of 'Dynamic Groundwater Potential'). The validation results revealed that the groundwater potential predicted by the AHP technique has a pronounced accuracy of 87% compared to the Catastrophe (46% accuracy) and Entropy techniques (51% accuracy). It is concluded that the AHP technique is the most reliable for the assessment of groundwater resources followed by the Entropy method. The developed groundwater potential maps can serve as a scientific guideline for the cost-effective siting of wells and the effective planning of groundwater development at a catchment or basin scale.
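
    A hedged sketch of the GIS integration step common to all three techniques: thematic layers normalized to a common suitability scale are combined with MCDA-derived weights into a groundwater potential index raster. The layers, weights, and raster size are illustrative assumptions.

```python
# Weighted linear combination of normalized thematic layers.
import numpy as np

rng = np.random.default_rng(4)
shape = (100, 100)
layers = {                                   # already rescaled to [0, 1]
    "rainfall": rng.random(shape),
    "slope": rng.random(shape),
    "lineament_density": rng.random(shape),
}
weights = {"rainfall": 0.5, "slope": 0.2, "lineament_density": 0.3}  # e.g. from AHP

gwpi = sum(weights[k] * layers[k] for k in layers)   # groundwater potential index
print(gwpi.mean(), gwpi.min(), gwpi.max())
```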

  3. Enabling Big Geoscience Data Analytics with a Cloud-Based, MapReduce-Enabled and Service-Oriented Workflow Framework

    PubMed Central

    Li, Zhenlong; Yang, Chaowei; Jin, Baoxuan; Yu, Manzhu; Liu, Kai; Sun, Min; Zhan, Matthew

    2015-01-01

    Geoscience observations and model simulations are generating vast amounts of multi-dimensional data. Effectively analyzing these data is essential for geoscience studies. However, the tasks are challenging for geoscientists because processing the massive amounts of data is both computing- and data-intensive, and the analytics require complex procedures and multiple tools. To tackle these challenges, a scientific workflow framework is proposed for big geoscience data analytics. The framework leverages cloud computing, MapReduce, and Service Oriented Architecture (SOA). Specifically, HBase is adopted for storing and managing big geoscience data across distributed computers; a MapReduce-based algorithm framework is developed to support parallel processing of geoscience data; and a service-oriented workflow architecture is built to support on-demand complex data analytics in the cloud environment. A proof-of-concept prototype tests the performance of the framework. Results show that this framework significantly improves the efficiency of big geoscience data analytics by reducing data processing time as well as simplifying analytical procedures for geoscientists. PMID:25742012
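
    The map-shuffle-reduce pattern the framework builds on can be illustrated in a few lines of plain Python. The records and the per-cell mean aggregation below are hypothetical stand-ins, not the authors' HBase/MapReduce code.

      from collections import defaultdict
      from functools import reduce

      # Toy records: (grid_cell_id, observed_value) pairs.
      records = [("cell_01", 14.2), ("cell_02", 15.1), ("cell_01", 14.8),
                 ("cell_02", 15.5), ("cell_01", 13.9)]

      # Map: emit (key, (partial_sum, count)) for every record.
      mapped = [(cell, (value, 1)) for cell, value in records]

      # Shuffle: group intermediate pairs by key, as the framework would
      # do across distributed workers.
      groups = defaultdict(list)
      for key, pair in mapped:
          groups[key].append(pair)

      # Reduce: merge partial sums into a per-cell mean.
      def merge(a, b):
          return (a[0] + b[0], a[1] + b[1])

      means = {}
      for key, pairs in groups.items():
          total, count = reduce(merge, pairs)
          means[key] = total / count
      print(means)   # approx {'cell_01': 14.3, 'cell_02': 15.3}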

  5. Optimized planning of in-service inspections of local flow-accelerated corrosion of pipeline elements used in the secondary coolant circuit of the VVER-440-based units at the Novovoronezh NPP

    NASA Astrophysics Data System (ADS)

    Tomarov, G. V.; Povarov, V. P.; Shipkov, A. A.; Gromov, A. F.; Budanov, V. A.; Golubeva, T. N.

    2015-03-01

    The efficient use of an information-analytical system for the flow-accelerated corrosion problem in planning in-service inspection of the metal of pipeline elements operating in the secondary coolant circuit of the VVER-440-based power units at the Novovoronezh NPP is considered. The principles used to select samples of pipeline elements when planning ultrasonic thickness measurements, aimed at timely detection of metal thinning due to flow-accelerated corrosion while reducing the total number of measurements in the condensate-feedwater path, are discussed.

  6. Miniature penetrator (MinPen) acceleration recorder development test

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Franco, R.J.; Platzbecker, M.R.

    1998-08-01

    The Telemetry Technology Development Department at Sandia National Laboratories actively develops and tests acceleration recorders for penetrating weapons. This new acceleration recorder (MinPen) utilizes a microprocessor-based architecture for operational flexibility while maintaining electronics and packaging techniques developed over years of penetrator testing. MinPen has been demonstrated to function in shock environments up to 20,000 g. The MinPen instrumentation development has resulted in a rugged, versatile, miniature acceleration recorder that is a valuable tool for penetrator testing in a wide range of applications.

  7. Spiral trajectory design: a flexible numerical algorithm and base analytical equations.

    PubMed

    Pipe, James G; Zwart, Nicholas R

    2014-01-01

    Spiral-based trajectories for magnetic resonance imaging can be advantageous, but are often cumbersome to design or create. This work presents a flexible numerical algorithm for designing trajectories based on an explicit definition of radial undersampling, and also gives several analytical expressions for characterizing the base (critically sampled) class of these trajectories. Expressions for the gradient waveform, based on slew and amplitude limits, are developed such that a desired pitch in the spiral k-space trajectory is followed. The source code for this algorithm, written in C, is publicly available. Analytical expressions approximating the spiral trajectory (ignoring the radial component) are given to characterize measurement time, gradient heating, maximum gradient amplitude, and off-resonance phase for slew-limited and gradient amplitude-limited cases. Several numerically calculated trajectories are illustrated, and base Archimedean spirals are compared with analytically obtained results. Several different waveforms illustrate that the desired slew and amplitude limits are reached, as are the desired undersampling patterns, using the numerical method. For base Archimedean spirals, the results of the numerical and analytical approaches are in good agreement. A versatile numerical algorithm was developed and released as publicly available code. Approximate analytical formulas are given that help characterize spiral trajectories. Copyright © 2013 Wiley Periodicals, Inc.
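
    A slew- and amplitude-limited Archimedean spiral can be sketched under the same approximation the abstract cites (radial component ignored, so |g| is roughly lambda*theta*omega/gammabar). The Python below is not the authors' public C code; all parameter values are illustrative assumptions.

      import numpy as np

      gammabar = 42.58e6          # gyromagnetic ratio / 2*pi, Hz/T (protons)
      fov, res = 0.24, 1e-3       # field of view and resolution, m
      gmax, smax = 40e-3, 150.0   # gradient amplitude (T/m) and slew (T/m/s)
      dt = 4e-6                   # gradient raster time, s

      lam = 1.0 / (2 * np.pi * fov)   # radial pitch for critical sampling
      kmax = 1.0 / (2 * res)          # target k-space extent, 1/m
      theta = 2 * np.pi               # start after one turn; origin is singular
      k = [lam * theta * np.exp(1j * theta)]
      while lam * theta < kmax:
          # angular speed allowed by each limit in the large-theta approximation
          w_slew = np.sqrt(gammabar * smax / (lam * theta))
          w_amp = gammabar * gmax / (lam * theta)
          theta += min(w_slew, w_amp) * dt
          k.append(lam * theta * np.exp(1j * theta))

      k = np.array(k)
      g = np.diff(k) / (gammabar * dt)      # complex gradient waveform, T/m
      print("samples: %d, readout: %.1f ms, max |g|: %.1f mT/m"
            % (k.size, (k.size - 1) * dt * 1e3, 1e3 * np.abs(g).max()))

    This is a single-interleave design, so the readout is long; practical sequences shorten it by splitting the coverage across multiple interleaves.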

  8. Beam by design: Laser manipulation of electrons in modern accelerators

    NASA Astrophysics Data System (ADS)

    Hemsing, Erik; Stupakov, Gennady; Xiang, Dao; Zholents, Alexander

    2014-07-01

    Accelerator-based light sources such as storage rings and free-electron lasers use relativistic electron beams to produce intense radiation over a wide spectral range for fundamental research in physics, chemistry, materials science, biology, and medicine. More than a dozen such sources operate worldwide, and new sources are being built to deliver radiation that meets the ever-increasing sophistication and depth of new research. Even so, conventional accelerator techniques often cannot keep pace with new demands and, thus, new approaches continue to emerge. In this article, a variety of recently developed and promising techniques that rely on lasers to manipulate and rearrange the electron distribution in order to tailor the properties of the radiation are reviewed. Basic theories of electron-laser interactions, techniques to create microstructures and nanostructures in electron beams, and techniques to produce radiation with customizable waveforms are reviewed. An overview of laser-based techniques for the generation of fully coherent x rays, mode-locked x-ray pulse trains, light with orbital angular momentum, and attosecond- or even zeptosecond-long coherent pulses in free-electron lasers is presented. Several methods to generate femtosecond pulses in storage rings are also discussed. Additionally, various schemes designed to enhance the performance of light sources through precision beam preparation, including beam conditioning, laser heating, emittance exchange, and various laser-based diagnostics, are described. Together these techniques represent a new emerging concept of "beam by design" in modern accelerators, which is the primary focus of this article.

  9. Acceleration display system for aircraft zero-gravity research

    NASA Technical Reports Server (NTRS)

    Millis, Marc G.

    1987-01-01

    The features, design, calibration, and testing of Lewis Research Center's acceleration display system for aircraft zero-gravity research are described. Specific circuit schematics and system specifications are included as well as representative data traces from flown trajectories. Other observations learned from developing and using this system are mentioned where appropriate. The system, now a permanent part of the Lewis Learjet zero-gravity program, provides legible, concise, and necessary guidance information enabling pilots to routinely fly accurate zero-gravity trajectories. Regular use of this system resulted in improvements to the Learjet zero-gravity flight techniques, including a technique to minimize lateral accelerations. Lewis Gates Learjet trajectory data show that accelerations can be reliably sustained within 0.01 g for 5 consecutive seconds, within 0.02 g for 7 consecutive seconds, and within 0.04 g for up to 20 seconds. Lewis followed the past practices of acceleration measurement, yet focused on the acceleration displays. Refinements based on flight experience included evolving the ranges, resolutions, and frequency responses to fit the pilot and the Learjet responses.

  10. Bio-analytical applications of microbial fuel cell-based biosensors for onsite water quality monitoring.

    PubMed

    ElMekawy, A; Hegab, H M; Pant, D; Saint, C P

    2018-01-01

    Globally, the sustainable provision of high-quality safe water is a major challenge of the 21st century. Various chemical and biological monitoring techniques are presently utilized to guarantee the availability of high-quality water. However, these techniques still face some challenges, including high costs, complex design and onsite and online limitations. The recent technology of microbial fuel cell (MFC)-based biosensors holds outstanding potential for the rapid and real-time monitoring of water source quality. MFCs have the advantages of simplicity in design and efficiency for onsite sensing. Although some sensing applications of MFCs have been studied previously, e.g. biochemical oxygen demand sensors, numerous research groups around the world have recently presented new practical applications of this technique, combining multidisciplinary scientific knowledge from the materials science, microbiology and electrochemistry fields. This review presents the most up-to-date research on the utilization of MFCs as potential biosensors for monitoring water quality and considers the range of potentially toxic analytes that have so far been detected using this methodology. The advantages of MFCs over established technology are also considered, as well as the future work required to establish their routine use. © 2017 The Society for Applied Microbiology.

  11. Plasma production for electron acceleration by resonant plasma wave

    NASA Astrophysics Data System (ADS)

    Anania, M. P.; Biagioni, A.; Chiadroni, E.; Cianchi, A.; Croia, M.; Curcio, A.; Di Giovenale, D.; Di Pirro, G. P.; Filippi, F.; Ghigo, A.; Lollo, V.; Pella, S.; Pompili, R.; Romeo, S.; Ferrario, M.

    2016-09-01

    Plasma wakefield acceleration is the most promising acceleration technique known nowadays, able to provide very high accelerating fields (10-100 GV/m) and thereby to accelerate electrons to GeV energies in a few centimeters. However, the quality of the electron bunches accelerated with this technique is still not comparable with that of conventional accelerators (large energy spread, low repetition rate, and large emittance); radiofrequency-based accelerators, in fact, are limited in accelerating field (10-100 MV/m), and therefore require hundreds of meters to reach GeV energies, but can provide very bright electron bunches. Combining high-brightness electron bunches from conventional accelerators with the high accelerating fields reachable with plasmas could be a good compromise, allowing high-brightness electron bunches coming from a LINAC to be further accelerated while preserving electron beam quality. Following the idea of resonant excitation of a plasma wave driven by a train of short bunches, we have started to study the plasma requirements for SPARC_LAB (Ferrario et al., 2013 [1]). In particular, here we focus on the hydrogen plasma discharge, and specifically on the theoretical and numerical estimates of the ionization process, which are very useful for designing the discharge circuit and for evaluating the current that must be supplied to the gas in order to achieve full ionization. Finally, the simulated current supplied to the gas is compared with that measured experimentally.

  12. NOTE: Acceleration of Monte Carlo-based scatter compensation for cardiac SPECT

    NASA Astrophysics Data System (ADS)

    Sohlberg, A.; Watabe, H.; Iida, H.

    2008-07-01

    Single photon emission computed tomography (SPECT) images are degraded by photon scatter, making scatter compensation essential for accurate reconstruction. Reconstruction-based scatter compensation with Monte Carlo (MC) modelling of scatter shows promise for accurate scatter correction, but it is normally hampered by long computation times. The aim of this work was to accelerate MC-based scatter compensation using coarse-grid and intermittent scatter modelling. The acceleration methods were compared to an unaccelerated implementation using MC-simulated projection data of the mathematical cardiac torso (MCAT) phantom modelling 99mTc uptake, and clinical myocardial perfusion studies. The results showed that, when combined, the acceleration methods reduced the reconstruction time for 10 ordered-subset expectation maximization (OS-EM) iterations from 56 to 11 min without a significant reduction in image quality, indicating that coarse-grid and intermittent scatter modelling are suitable for MC-based scatter compensation in cardiac SPECT.
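
    A toy version of the intermittent-scatter idea can be written down directly: run OS-EM, but refresh the expensive scatter estimate only every few iterations. Everything below, including the 1D toy geometry and the stand-in "Monte Carlo" scatter model, is an illustrative assumption rather than the authors' implementation.

      import numpy as np

      rng = np.random.default_rng(0)

      # Toy 1D "SPECT": n_pix activity values, n_det projections.
      n_pix, n_det, n_sub = 32, 48, 4
      A = rng.random((n_det, n_pix)) * (rng.random((n_det, n_pix)) > 0.7)
      x_true = np.zeros(n_pix); x_true[10:20] = 1.0
      y = rng.poisson(A @ x_true + 0.2).astype(float)

      def mc_scatter_estimate(x):
          # Placeholder for the expensive Monte Carlo scatter projection;
          # here just a smoothed fraction of the primary projection.
          return 0.2 * np.convolve(A @ x, np.ones(5) / 5, mode="same")

      subsets = np.array_split(np.arange(n_det), n_sub)
      x = np.ones(n_pix)
      s = mc_scatter_estimate(x)
      for it in range(10):
          if it % 3 == 0:                 # intermittent scatter modelling:
              s = mc_scatter_estimate(x)  # refresh MC scatter only sometimes
          for idx in subsets:             # OS-EM update over each subset
              Ai, yi, si = A[idx], y[idx], s[idx]
              ratio = yi / np.maximum(Ai @ x + si, 1e-12)
              x *= (Ai.T @ ratio) / np.maximum(Ai.sum(axis=0), 1e-12)
      print(np.round(x[8:22], 2))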

  13. MS-based analytical methodologies to characterize genetically modified crops.

    PubMed

    García-Cañas, Virginia; Simó, Carolina; León, Carlos; Ibáñez, Elena; Cifuentes, Alejandro

    2011-01-01

    The development of genetically modified crops has had a great impact on the agriculture and food industries. However, the development of any genetically modified organism (GMO) requires the application of analytical procedures to confirm the equivalence of the GMO compared to its isogenic non-transgenic counterpart. Moreover, the use of GMOs in foods and agriculture faces numerous criticisms from consumers and ecological organizations that have led some countries to regulate their production, growth, and commercialization. These regulations have brought about the need for new and more powerful analytical methods to face the complexity of this topic. In this regard, MS-based technologies are increasingly used for GMO analysis to provide very useful information on GMO composition (e.g., metabolites, proteins). This review focuses on the MS-based analytical methodologies used to characterize genetically modified crops (also called transgenic crops). First, an overview of genetically modified crop development is provided, together with the main difficulties of their analysis. Next, the different MS-based analytical approaches applied to characterize GM crops are critically discussed; these include "-omics" approaches and target-based approaches. These methodologies allow the study of intended and unintended effects that result from the genetic transformation. This information is considered to be essential to corroborate (or not) the equivalence of the GM crop with its isogenic non-transgenic counterpart. Copyright © 2010 Wiley Periodicals, Inc.

  14. Epilepsy analytic system with cloud computing.

    PubMed

    Shen, Chia-Ping; Zhou, Weizhi; Lin, Feng-Seng; Sung, Hsiao-Ya; Lam, Yan-Yu; Chen, Wei; Lin, Jeng-Wei; Pan, Ming-Kai; Chiu, Ming-Jang; Lai, Feipei

    2013-01-01

    Biomedical data analytic systems have played an important role in clinical diagnosis for several decades. Analyzing these big data to provide decision support for physicians is now an emerging research area. This paper presents a parallelized web-based tool with a cloud computing service architecture to analyze epilepsy. The system cascades several modern analytic functions: wavelet transform, genetic algorithm (GA), and support vector machine (SVM). To demonstrate the effectiveness of the system, it has been verified on two kinds of electroencephalography (EEG) data: short-term EEG and long-term EEG. The results reveal that our approach achieves a total classification accuracy higher than 90%. In addition, the entire training is accelerated about 4.66-fold, and prediction times also meet real-time requirements.
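
    As a hedged sketch of the wavelet-plus-SVM cascade (the GA feature-selection stage is omitted), the following Python uses the PyWavelets and scikit-learn libraries on synthetic epochs; the features, labels, and parameters are illustrative, not the paper's.

      import numpy as np
      import pywt                                # PyWavelets
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(1)

      # Synthetic 1-second "EEG" epochs (256 Hz): class 1 adds a
      # low-frequency burst, loosely mimicking epileptiform activity.
      def epoch(label):
          x = rng.standard_normal(256)
          if label:
              t = np.arange(256) / 256.0
              x += 2.0 * np.sin(2 * np.pi * 3 * t) * (t > 0.5)
          return x

      y = np.array([0] * 50 + [1] * 50)
      X_raw = [epoch(lbl) for lbl in y]

      # Feature extraction: energy of each sub-band of a 4-level db4 DWT,
      # standing in for the paper's wavelet-transform stage.
      def features(x):
          coeffs = pywt.wavedec(x, "db4", level=4)
          return np.array([np.sum(c ** 2) for c in coeffs])

      X = np.array([features(x) for x in X_raw])

      # SVM classification stage, with feature scaling for stability.
      clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
      print("CV accuracy: %.2f" % cross_val_score(clf, X, y, cv=5).mean())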

  15. Combining Acceleration and Displacement Dependent Modal Frequency Responses Using an MSC/NASTRAN DMAP Alter

    NASA Technical Reports Server (NTRS)

    Barnett, Alan R.; Widrick, Timothy W.; Ludwiczak, Damian R.

    1996-01-01

    Solving for dynamic responses of free-free launch vehicle/spacecraft systems acted upon by buffeting winds is commonly performed throughout the aerospace industry. Due to the unpredictable nature of this wind loading event, these problems are typically solved using frequency response random analysis techniques. To generate dynamic responses for spacecraft with statically-indeterminate interfaces, spacecraft contractors prefer to develop models which have response transformation matrices developed for mode acceleration data recovery. This method transforms spacecraft boundary accelerations and displacements into internal responses. Unfortunately, standard MSC/NASTRAN modal frequency response solution sequences cannot be used to combine the acceleration- and displacement-dependent responses required for spacecraft mode acceleration data recovery. External user-written computer codes can be used with MSC/NASTRAN output to perform such combinations, but these methods can be labor- and computer-resource-intensive. Taking advantage of the analytical and computer resource efficiencies inherent within MSC/NASTRAN, a DMAP Alter has been developed to combine acceleration- and displacement-dependent modal frequency responses for performing spacecraft mode acceleration data recovery. The Alter has been used successfully to efficiently solve a common aerospace buffeting wind analysis.
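
    The combination the Alter performs can be stated compactly: internal responses are the sum of acceleration- and displacement-dependent transformations of the boundary motion. Below is a numpy analogue with random stand-in matrices; it is a sketch of the matrix algebra only, not NASTRAN or DMAP code, and all names are hypothetical.

      import numpy as np

      rng = np.random.default_rng(2)

      # Stand-ins for the spacecraft response transformation matrices.
      n_resp, n_dof, n_freq = 6, 8, 200
      LTM_acc = rng.standard_normal((n_resp, n_dof))    # accel-dependent
      LTM_disp = rng.standard_normal((n_resp, n_dof))   # disp-dependent

      # Boundary displacement frequency response (complex); in steady-state
      # harmonic motion, acceleration = -(2*pi*f)^2 * displacement.
      freqs = np.linspace(1.0, 100.0, n_freq)           # Hz
      U = (rng.standard_normal((n_dof, n_freq))
           + 1j * rng.standard_normal((n_dof, n_freq)))
      A = -(2 * np.pi * freqs) ** 2 * U

      # Internal responses: r(f) = LTM_acc @ a(f) + LTM_disp @ u(f)
      R = LTM_acc @ A + LTM_disp @ U
      print(R.shape)   # (n_resp, n_freq) complex frequency responses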

  16. A new perspective on global mean sea level (GMSL) acceleration

    NASA Astrophysics Data System (ADS)

    Watson, Phil J.

    2016-06-01

    The vast body of contemporary climate change science is largely underpinned by the premise of a measured acceleration from anthropogenic forcings evident in key climate change proxies: greenhouse gas emissions, temperature, and mean sea level. As a result, over recent years, the issue of whether or not there is a measurable acceleration in global mean sea level has been the subject of fierce, widespread professional, social, and political debate. Attempts to measure acceleration in global mean sea level (GMSL) have often used comparatively crude analysis techniques providing little temporal insight into these key questions. This work proposes improved techniques to measure real-time velocity and acceleration, based on five GMSL reconstructions spanning the time frame from 1807 to 2014, with substantially improved temporal resolution. While this analysis highlights key differences between the respective reconstructions, there is now more robust, convincing evidence of recent acceleration in the trend of GMSL.
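
    One simple way to extract time-varying velocity and acceleration of the kind discussed is a sliding-window quadratic fit. The sketch below uses a synthetic series and is only a minimal stand-in for the paper's (more sophisticated) method.

      import numpy as np

      rng = np.random.default_rng(3)
      years = np.arange(1807, 2015)
      t0 = years - years[0]
      # Synthetic GMSL in mm: linear rise + small acceleration + noise.
      gmsl = 1.5 * t0 + 0.005 * t0 ** 2 + rng.standard_normal(years.size) * 5.0

      half = 15   # +/- 15-year window
      vel = np.full(years.size, np.nan)
      acc = np.full(years.size, np.nan)
      for i in range(half, years.size - half):
          t = years[i - half:i + half + 1] - years[i]
          c2, c1, c0 = np.polyfit(t, gmsl[i - half:i + half + 1], 2)
          vel[i] = c1          # mm/yr at the window centre
          acc[i] = 2.0 * c2    # mm/yr^2 at the window centre

      i = np.flatnonzero(years == 1990)[0]
      print("1990: velocity %.2f mm/yr, acceleration %.4f mm/yr^2"
            % (vel[i], acc[i]))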

  17. Modeling magnetic field amplification in nonlinear diffusive shock acceleration

    NASA Astrophysics Data System (ADS)

    Vladimirov, Andrey

    2009-02-01

    This research was motivated by the recent observations indicating very strong magnetic fields at some supernova remnant shocks, which suggests in-situ generation of magnetic turbulence. The dissertation presents a numerical model of collisionless shocks with strong amplification of stochastic magnetic fields, self-consistently coupled to efficient shock acceleration of charged particles. Based on a Monte Carlo simulation of particle transport and acceleration in nonlinear shocks, the model describes magnetic field amplification using the state-of-the-art analytic models of instabilities in magnetized plasmas in the presence of non-thermal particle streaming. The results help one understand the complex nonlinear connections between the thermal plasma, the accelerated particles and the stochastic magnetic fields in strong collisionless shocks. Also, predictions regarding the efficiency of particle acceleration and magnetic field amplification, the impact of magnetic field amplification on the maximum energy of accelerated particles, and the compression and heating of the thermal plasma by the shocks are presented. Particle distribution functions and turbulence spectra derived with this model can be used to calculate the emission of observable nonthermal radiation.

  18. Authentication of Kalix (N.E. Sweden) vendace caviar using inductively coupled plasma-based analytical techniques: evaluation of different approaches.

    PubMed

    Rodushkin, I; Bergman, T; Douglas, G; Engström, E; Sörlin, D; Baxter, D C

    2007-02-05

    Different analytical approaches for differentiating the origin of vendace and whitefish caviars from brackish and fresh waters were tested using inductively coupled plasma double-focusing sector field mass spectrometry (ICP-SFMS) and multi-collector inductively coupled plasma mass spectrometry (MC-ICP-MS). These approaches involve identifying differences in elemental concentrations or sample-specific variations in isotopic composition (Sr and Os). Concentrations of 72 elements were determined by ICP-SFMS following microwave-assisted digestion in vendace and whitefish caviar samples from Sweden (from both brackish and fresh water), Finland and the USA, as well as in unprocessed vendace roe and the salt used in caviar production. This data set allows identification of elements whose contents in caviar can be affected by salt addition as well as by contamination during production and packaging. Long-term method reproducibility was assessed for all analytes based on replicate caviar preparations/analyses, and variations in element concentrations in caviar from different harvests were evaluated. The greatest utility for differentiation was demonstrated for elements with varying concentrations between brackish and fresh waters (e.g. As, Br, Sr). Elemental ratios, specifically Sr/Ca, Sr/Mg and Sr/Ba, are especially useful for authentication of vendace caviar processed from brackish-water roe, owing to the significant differences between caviar from different sources, limited between-harvest variations and relatively high concentrations in samples, allowing precise determination by modern analytical instrumentation. Variation in the 87Sr/86Sr ratio for vendace caviar from different harvests (on the order of 0.05-0.1%) is at least 10-fold less than the differences between caviar processed from brackish and freshwater roe. Hence, Sr isotope ratio measurements (either by ICP-SFMS or by MC-ICP-MS) have great potential for origin differentiation. On the contrary, it was impossible to

  19. Accelerator mass spectrometry of small biological samples.

    PubMed

    Salehpour, Mehran; Forsgard, Niklas; Possnert, Göran

    2008-12-01

    Accelerator mass spectrometry (AMS) is an ultra-sensitive technique for isotopic ratio measurements. In the biomedical field, AMS can be used to measure femtomolar concentrations of labeled drugs in body fluids, with direct applications in early drug development such as microdosing. Likewise, the regenerative properties of cells, which are of fundamental significance in stem-cell research, can be determined with an accuracy of a few years by AMS analysis of human DNA. However, AMS nominally requires about 1 mg of carbon per sample, which is not always available when dealing with specific body substances such as localized, organ-specific DNA samples. Consequently, it is of analytical interest to develop methods for the routine analysis of small samples in the range of a few tens of micrograms. We have used a 5 MV Pelletron tandem accelerator to study small biological samples using AMS. Different methods are presented and compared. A 12C-carrier sample preparation method is described which is potentially more sensitive and less susceptible to contamination than the standard procedures.

  20. Analytical techniques for mechanistic characterization of EUV photoresists

    NASA Astrophysics Data System (ADS)

    Grzeskowiak, Steven; Narasimhan, Amrit; Murphy, Michael; Ackerman, Christian; Kaminsky, Jake; Brainard, Robert L.; Denbeaux, Greg

    2017-03-01

    Extreme ultraviolet (EUV, 13.5 nm) lithography is the prospective technology for high volume manufacturing by the microelectronics industry. Significant strides towards achieving adequate EUV source power and availability have been made recently, but a limited rate of improvement in photoresist performance still delays the implementation of EUV. Many fundamental questions remain to be answered about the exposure mechanisms of even the relatively well understood chemically amplified EUV photoresists. Moreover, several groups around the world are developing revolutionary metal-based resists whose EUV exposure mechanisms are even less understood. Here, we describe several evaluation techniques to help elucidate mechanistic details of EUV exposure mechanisms of chemically amplified and metal-based resists. EUV absorption coefficients are determined experimentally by measuring the transmission through a resist coated on a silicon nitride membrane. Photochemistry can be evaluated by monitoring small outgassing reaction products to provide insight into photoacid generator or metal-based resist reactivity. Spectroscopic techniques such as thin-film Fourier transform infrared (FTIR) spectroscopy can measure the chemical state of a photoresist system pre- and post-EUV exposure. Additionally, electrolysis can be used to study the interaction between photoresist components and low energy electrons. Collectively, these techniques improve our current understanding of photomechanisms for several EUV photoresist systems, which is needed to develop new, better performing materials needed for high volume manufacturing.

  1. Comparison of Acid Titration, Conductivity, Flame Photometry, ICP-MS, and Accelerated Lamellae Formation Techniques in Determining Glass Vial Quality.

    PubMed

    Fujimori, Kiyoshi; Lee, Hans; Sloey, Christopher; Ricci, Margaret S; Wen, Zai-Qing; Phillips, Joseph; Nashed-Samuel, Yasser

    2016-01-01

    Certain types of glass vials used as primary containers for liquid formulations of biopharmaceutical drug products have been observed with delamination that produces small glass-like flakes, termed lamellae, under certain conditions during storage. The cause of this delamination is in part related to glass surface defects, which render the vials susceptible to flaking and which are formed during the high-temperature melting and annealing used for vial fabrication and shaping. The current European Pharmacopoeia method to assess glass vial quality utilizes acid titration of vial extract pools to determine hydrolytic resistance or alkalinity. Four alternative techniques with improved throughput, convenience, and/or comprehensiveness were examined by subjecting seven lots of vials to analysis by all techniques. The first three new techniques, conductivity, flame photometry, and inductively coupled plasma mass spectrometry, measured the same sample pools as acid titration. All three showed good correlation with alkalinity: conductivity (R² = 0.9951), flame photometry sodium (R² = 0.9895), and several elements by inductively coupled plasma mass spectrometry [sodium (R² = 0.9869), boron (R² = 0.9796), silicon (R² = 0.9426), total (R² = 0.9639)]. The fourth technique processed the vials under conditions that promote delamination, termed accelerated lamellae formation, and then inspected those vials visually for lamellae. The visual inspection results, excluding the lot with a different processing condition, correlated well with alkalinity (R² = 0.9474). Because vial processing differences affect alkalinity measurements and delamination propensity differently, the ratio of silicon and sodium measurements from inductively coupled plasma mass spectrometry was the most informative technique for assessing overall vial quality and vial propensity for lamellae formation. The other techniques of conductivity, flame photometry, and accelerated lamellae formation

  2. "Light sail" acceleration reexamined.

    PubMed

    Macchi, Andrea; Veghini, Silvia; Pegoraro, Francesco

    2009-08-21

    The dynamics of the acceleration of ultrathin foil targets by the radiation pressure of superintense, circularly polarized laser pulses is investigated by analytical modeling and particle-in-cell simulations. By addressing self-induced transparency and charge separation effects, it is shown that for "optimal" values of the foil thickness only a thin layer at the rear side is accelerated by radiation pressure. The simple "light sail" model gives a good estimate of the energy per nucleon, but overestimates the conversion efficiency of laser energy into monoenergetic ions.
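
    The "light sail" estimate mentioned here follows from the standard equation of motion for a perfectly reflecting foil of areal mass density sigma pushed by a pulse of intensity I: d(gamma*beta)/dt = (2I/(sigma*c^2))*(1-beta)/(1+beta). A minimal numerical integration, with purely illustrative parameter values (not those of the paper), looks like this:

      import numpy as np

      c = 3.0e8                   # speed of light, m/s
      mp = 1.67e-27               # proton mass, kg
      I = 1.0e23 * 1.0e4          # 1e23 W/cm^2 expressed in W/m^2
      sigma = 2.0e-4              # areal mass density, kg/m^2 (thin foil)

      dt, T = 1e-18, 100e-15      # time step and pulse duration, s
      beta, p, t = 0.0, 0.0, 0.0  # p is the dimensionless gamma*beta
      while t < T:
          # radiation-pressure push, reduced by the Doppler factor
          # (1 - beta)/(1 + beta) for a receding perfect mirror
          p += dt * (2 * I / (sigma * c ** 2)) * (1 - beta) / (1 + beta)
          beta = p / np.sqrt(1 + p ** 2)
          t += dt

      gamma = np.sqrt(1 + p ** 2)
      print("beta = %.3f, energy per nucleon = %.1f MeV"
            % (beta, (gamma - 1) * mp * c ** 2 / 1.602e-13))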

  3. Green analytical chemistry--theory and practice.

    PubMed

    Tobiszewski, Marek; Mechlińska, Agata; Namieśnik, Jacek

    2010-08-01

    This tutorial review summarises the current state of green analytical chemistry with special emphasis on environmentally friendly sample preparation techniques. Green analytical chemistry is a part of the sustainable development concept; its history and origins are described. Miniaturisation of analytical devices and shortening the time elapsing between performing analysis and obtaining reliable analytical results are important aspects of green analytical chemistry. Solventless extraction techniques, the application of alternative solvents and assisted extractions are considered to be the main approaches complying with green analytical chemistry principles.

  4. SERS-based application in food analytics (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Cialla-May, Dana; Radu, Andreea; Jahn, Martin; Weber, Karina; Popp, Jürgen

    2017-02-01

    To establish detection schemes in life science applications, specific and sensitive methods allowing for fast detection times are required. Owing to the interaction of molecules with the strong electromagnetic fields excited at metallic nanostructures, the molecule's fingerprint-specific Raman spectrum is enhanced by several orders of magnitude. This effect is known as surface-enhanced Raman spectroscopy (SERS), and it has become a very powerful analytical tool in many fields of application. Within this presentation, we will introduce innovative bottom-up strategies to prepare SERS-active nanostructures coated with a lipophilic sensor layer. Using these substrates, the food colorant Sudan III, an indirect carcinogen found in chili powder, palm oil and spice mixtures, is detected quantitatively against the background of the competitor riboflavin as well as in paprika powder extracts. The SERS-based detection of azorubine (E122) in commercially available beverages of varying complexity (e.g. sugar content, alcohol concentration) illustrates the strong potential of SERS as a qualitative as well as semiquantitative prescan method in food analytics. Here, good agreement is found between the concentration estimated by SERS and by the gold-standard technique HPLC, a highly laborious method. Finally, SERS is applied to detect vitamins B2 and B12 in cereals and to estimate the ratio of lycopene and β-carotene in tomatoes. Acknowledgement: Funding of the projects "QuantiSERS" and "Jenaer Biochip Initiative 2.0" within the framework "InnoProfile Transfer - Unternehmen Region" by the Federal Ministry of Education and Research, Germany (BMBF) is gratefully acknowledged.

  5. An introduction to the physics of high energy accelerators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Edwards, D.A.; Syphers, M.J.

    1993-01-01

    This book is an outgrowth of a course given by the authors at various universities and particle accelerator schools. It starts from the basic physics principles governing particle motion inside an accelerator, and leads to a full description of the complicated phenomena and analytical tools encountered in the design and operation of a working accelerator. The book covers acceleration and longitudinal beam dynamics, transverse motion and nonlinear perturbations, intensity-dependent effects, emittance preservation methods and synchrotron radiation. These subjects encompass the core concerns of a high energy synchrotron. The authors apparently do not assume the reader has much previous knowledge about accelerator physics. Hence, they take great care to introduce the physical phenomena encountered and the concepts used to describe them. The mathematical formulae and derivations are deliberately kept at a level suitable for beginners. After mastering this course, any interested reader will not find it difficult to follow subjects of more current interest. Useful homework problems are provided at the end of each chapter. Many of the problems are based on actual activities associated with the design and operation of existing accelerators.

  6. Source-to-accelerator quadrupole matching section for a compact linear accelerator

    NASA Astrophysics Data System (ADS)

    Seidl, P. A.; Persaud, A.; Ghiorso, W.; Ji, Q.; Waldron, W. L.; Lal, A.; Vinayakumar, K. B.; Schenkel, T.

    2018-05-01

    Recently, we presented a new approach for a compact radio-frequency (RF) accelerator structure and demonstrated the functionality of the individual components: acceleration units and focusing elements. In this paper, we combine these units to form a working accelerator structure: a matching section between the ion source extraction grids and the RF-acceleration unit and electrostatic focusing quadrupoles between successive acceleration units. The matching section consists of six electrostatic quadrupoles (ESQs) fabricated using 3D-printing techniques. The matching section enables us to capture more beam current and to match the beam envelope to conditions for stable transport in an acceleration lattice. We present data from an integrated accelerator consisting of the source, matching section, and an ESQ doublet sandwiched between two RF-acceleration units.

  7. Muscle activation patterns in acceleration-based phases during reach-to-grasp movement.

    PubMed

    Tokuda, Keisuke; Lee, Bumsuk; Shiihara, Yasufumi; Takahashi, Kazuhiro; Wada, Naoki; Shirakura, Kenji; Watanabe, Hideomi

    2016-11-01

    [Purpose] An earlier study divided reaching activity into characteristic phases based on hand velocity profiles. By synchronizing muscle activities with the acceleration profile, this study attempted a phasing approach for the reaching movement, based on hand acceleration profiles, in order to elucidate the roles of individual muscle activities in the different phases of the acceleration profile during reaching movements. [Subjects and Methods] Ten healthy volunteer subjects participated in this study. The aim was to electromyographically evaluate the muscles around the shoulder, the upper trapezius, the anterior deltoid, the biceps brachii, and the triceps brachii, most of which have been used to evaluate arm motion, as well as the acceleration of the upper limb during a simple reaching movement in the reach-to-grasp task. [Results] Analysis showed that the kinematic trajectory of the acceleration during the simple biphasic profile of the reaching movement could be divided into four phases: increasing acceleration (IA), decreasing acceleration (DA), increasing deceleration (ID), and decreasing deceleration (DD). Muscles around the shoulder showed different activity patterns, which were closely associated with these acceleration phases. [Conclusion] These results suggest the important role of the four phases, derived from the acceleration trajectory, in elucidating the muscular mechanisms which regulate and coordinate the muscles around the shoulder in reaching movements.
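
    The four phases can be delimited directly from a biphasic acceleration trace by three landmarks: the acceleration peak, the zero crossing (peak hand velocity), and the deceleration peak. The sketch below applies this reading of the abstract to a synthetic profile; it is not the authors' analysis code.

      import numpy as np

      t = np.linspace(0.0, 1.0, 1000)                       # time, s
      acc = 10.0 * np.sin(2 * np.pi * t) * np.exp(-2 * t)   # biphasic, m/s^2

      i_peak_acc = np.argmax(acc)                           # end of IA
      i_zero = i_peak_acc + np.argmax(acc[i_peak_acc:] < 0) # end of DA
      i_peak_dec = np.argmin(acc)                           # end of ID

      phases = np.empty(t.size, dtype=object)
      phases[:i_peak_acc] = "IA"
      phases[i_peak_acc:i_zero] = "DA"
      phases[i_zero:i_peak_dec] = "ID"
      phases[i_peak_dec:] = "DD"
      for name in ("IA", "DA", "ID", "DD"):
          sel = phases == name
          print(name, "from %.3f to %.3f s" % (t[sel][0], t[sel][-1]))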

  8. Simultaneous Multislice Echo Planar Imaging With Blipped Controlled Aliasing in Parallel Imaging Results in Higher Acceleration: A Promising Technique for Accelerated Diffusion Tensor Imaging of Skeletal Muscle.

    PubMed

    Filli, Lukas; Piccirelli, Marco; Kenkel, David; Guggenberger, Roman; Andreisek, Gustav; Beck, Thomas; Runge, Val M; Boss, Andreas

    2015-07-01

    The aim of this study was to investigate the feasibility of accelerated diffusion tensor imaging (DTI) of skeletal muscle using echo planar imaging (EPI) applying simultaneous multislice excitation with a blipped controlled aliasing in parallel imaging results in higher acceleration unaliasing technique. After federal ethics board approval, the lower leg muscles of 8 healthy volunteers (mean [SD] age, 29.4 [2.9] years) were examined in a clinical 3-T magnetic resonance scanner using a 15-channel knee coil. The EPI was performed at a b value of 500 s/mm² without slice acceleration (conventional DTI) as well as with 2-fold and 3-fold acceleration. Fractional anisotropy (FA) and mean diffusivity (MD) were measured in all 3 acquisitions. Fiber tracking performance was compared between the acquisitions regarding the number of tracks, average track length, and anatomical precision using multivariate analysis of variance and Mann-Whitney U tests. Acquisition time was 7:24 minutes for conventional DTI, 3:53 minutes for 2-fold acceleration, and 2:38 minutes for 3-fold acceleration. Overall FA and MD values ranged from 0.220 to 0.378 and 1.595 to 1.829 mm²/s, respectively. Two-fold acceleration yielded similar FA and MD values (P ≥ 0.901) and similar fiber tracking performance compared with conventional DTI. Three-fold acceleration resulted in comparable MD (P = 0.199) but higher FA values (P = 0.006) and significantly impaired fiber tracking in the soleus and tibialis anterior muscles (number of tracks, P < 0.001; anatomical precision, P ≤ 0.005). Simultaneous multislice EPI with blipped controlled aliasing in parallel imaging results in higher acceleration can remarkably reduce acquisition time in DTI of skeletal muscle with similar image quality and quantification accuracy of diffusion parameters. This may increase the clinical applicability of muscle anisotropy measurements.

  9. The Los Alamos Laser Acceleration of Particles Workshop and beginning of the advanced accelerator concepts field

    NASA Astrophysics Data System (ADS)

    Joshi, C.

    2012-12-01

    The first Advanced Accelerator Concepts (AAC) Workshop (actually named the Laser Acceleration of Particles Workshop) was held at Los Alamos in January 1982. The workshop lasted a week and divided all the acceleration techniques into four categories: near field, far field, media, and vacuum. Basic theorems of particle acceleration were postulated (later proven) and specific experiments based on the four categories were formulated. This landmark workshop led to the formation of the advanced accelerator R&D program in the HEP office of the DOE that supports advanced accelerator research to this day. Two major new user facilities at Argonne and Brookhaven and several more directed experimental efforts were built to explore the advanced particle acceleration schemes. It is not an exaggeration to say that the intellectual breadth and excitement provided by the many groups who entered this new field provided the needed vitality to the then recently formed APS Division of Physics of Beams and the new online journal Physical Review Special Topics - Accelerators and Beams. On this 30th anniversary of the AAC Workshops, it is worthwhile to look back at the legacy of the first Workshop at Los Alamos and the fine groundwork it laid for the field of advanced accelerator concepts that continues to flourish to this day.

  10. A New Project-Based Lab for Undergraduate Environmental and Analytical Chemistry

    ERIC Educational Resources Information Center

    Adami, Gianpiero

    2006-01-01

    A new project-based lab was developed for third-year undergraduate chemistry students based on real-world applications. The experience suggests that the total analytical procedure (TAP) project offers a stimulating alternative for delivering science skills and developing a greater interest in analytical chemistry and environmental sciences and…

  11. Efficient techniques for wave-based sound propagation in interactive applications

    NASA Astrophysics Data System (ADS)

    Mehra, Ravish

    Sound propagation techniques model the effect of the environment on sound waves and predict their behavior from the point of emission at the source to the final point of arrival at the listener. Sound is a pressure wave produced by mechanical vibration of a surface that propagates through a medium such as air or water, and the problem of sound propagation can be formulated mathematically as a second-order partial differential equation called the wave equation. Accurate techniques based on solving the wave equation, also called wave-based techniques, are computationally expensive and memory-intensive. Therefore, these techniques face many challenges regarding their applicability in interactive applications, including sound propagation in large environments, time-varying source and listener directivity, and the high simulation cost at mid-frequencies. In this dissertation, we propose a set of efficient wave-based sound propagation techniques that address these three challenges and enable the use of wave-based sound propagation in interactive applications. Firstly, we propose a novel equivalent source technique for interactive wave-based sound propagation in large scenes spanning hundreds of meters. It is based on the equivalent source theory used for solving radiation and scattering problems in acoustics and electromagnetics. Instead of using a volumetric or surface-based approach, this technique takes an object-centric approach to sound propagation. The proposed equivalent source technique generates realistic acoustic effects and takes orders of magnitude less runtime memory compared to prior wave-based techniques. Secondly, we present an efficient framework for handling time-varying source and listener directivity for interactive wave-based sound propagation. The source directivity is represented as a linear combination of elementary spherical harmonic sources. This spherical harmonic-based representation of source directivity can support analytical, data

  12. Does leaf chemistry differentially affect breakdown in tropical vs temperate streams? Importance of standardized analytical techniques to measure leaf chemistry

    Treesearch

    Marcelo Ardón; Catherine M. Pringle; Susan L. Eggert

    2009-01-01

    Comparisons of the effects of leaf litter chemistry on leaf breakdown rates in tropical vs temperate streams are hindered by the incompatibility, among studies and across sites, of the analytical methods used to measure leaf chemistry. We used standardized analytical techniques to measure the chemistry and breakdown rate of leaves from common riparian tree species at 2 sites, 1...

  13. Big data analytics for the Future Circular Collider reliability and availability studies

    NASA Astrophysics Data System (ADS)

    Begy, Volodimir; Apollonio, Andrea; Gutleber, Johannes; Martin-Marquez, Manuel; Niemi, Arto; Penttinen, Jussi-Pekka; Rogova, Elena; Romero-Marin, Antonio; Sollander, Peter

    2017-10-01

    Responding to the European Strategy for Particle Physics update 2013, the Future Circular Collider study explores scenarios of circular frontier colliders for the post-LHC era. One branch of the study assesses industrial approaches to model and simulate the reliability and availability of the entire particle collider complex based on the continuous monitoring of CERN’s accelerator complex operation. The modelling is based on an in-depth study of the CERN injector chain and LHC, and is carried out as a cooperative effort with the HL-LHC project. The work so far has revealed that a major challenge is obtaining accelerator monitoring and operational data with sufficient quality, to automate the data quality annotation and calculation of reliability distribution functions for systems, subsystems and components where needed. A flexible data management and analytics environment that permits integrating the heterogeneous data sources, the domain-specific data quality management algorithms and the reliability modelling and simulation suite is a key enabler to complete this accelerator operation study. This paper describes the Big Data infrastructure and analytics ecosystem that has been put in operation at CERN, serving as the foundation on which reliability and availability analysis and simulations can be built. This contribution focuses on data infrastructure and data management aspects and presents case studies chosen for its validation.
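
    As a flavour of the reliability quantities such a pipeline ultimately feeds, the sketch below fits a simple exponential failure model and computes steady-state availability from synthetic monitoring data. The study's models (and its data infrastructure) are far richer; every value here is an assumption for illustration.

      import numpy as np

      rng = np.random.default_rng(5)
      tbf = rng.exponential(120.0, size=200)   # hours between failures
      ttr = rng.exponential(4.0, size=200)     # hours to repair

      mtbf, mttr = tbf.mean(), ttr.mean()
      availability = mtbf / (mtbf + mttr)
      print("MTBF %.1f h, MTTR %.1f h, availability %.3f"
            % (mtbf, mttr, availability))

      # Empirical reliability function R(t) = P(time between failures > t),
      # the kind of distribution estimated per system or subsystem.
      for ti in (50.0, 100.0, 200.0, 400.0):
          print("R(%.0f h) = %.2f" % (ti, (tbf > ti).mean()))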

  14. WarpIV: In situ visualization and analysis of ion accelerator simulations

    DOE PAGES

    Rubel, Oliver; Loring, Burlen; Vay, Jean-Luc; ...

    2016-05-09

    The generation of short pulses of ion beams through the interaction of an intense laser with a plasma sheath offers the possibility of compact and cheaper ion sources for many applications, from fast ignition and radiography of dense targets to hadron therapy and injection into conventional accelerators. To enable the efficient analysis of large-scale, high-fidelity particle accelerator simulations using the Warp simulation suite, the authors introduce the Warp In situ Visualization Toolkit (WarpIV). WarpIV integrates state-of-the-art in situ visualization and analysis using VisIt with Warp, supports management and control of complex in situ visualization and analysis workflows, and implements integrated analytics to facilitate query- and feature-based data analytics and efficient large-scale data analysis. WarpIV enables for the first time distributed parallel, in situ visualization of the full simulation data using high-performance compute resources as the data is being generated by Warp. The authors describe the application of WarpIV to study and compare large 2D and 3D ion accelerator simulations, demonstrating significant differences in the acceleration process in 2D and 3D simulations. WarpIV is available to the public via https://bitbucket.org/berkeleylab/warpiv. Supplemental material at https://extras.computer.org/extra/mcg2016030022s1.pdf provides more details regarding the memory profiling and optimization and the Yee grid recentering optimization results discussed in the main article.

  15. An analytical technique for predicting the characteristics of a flexible wing equipped with an active flutter-suppression system and comparison with wind-tunnel data

    NASA Technical Reports Server (NTRS)

    Abel, I.

    1979-01-01

    An analytical technique for predicting the performance of an active flutter-suppression system is presented. This technique is based on the use of an interpolating function to approximate the unsteady aerodynamics. The resulting equations are formulated in terms of linear, ordinary differential equations with constant coefficients. This technique is then applied to an aeroelastic model wing equipped with an active flutter-suppression system. Comparisons between wind-tunnel data and analysis are presented for the wing both with and without active flutter suppression. Results indicate that the wing flutter characteristics without flutter suppression can be predicted very well but that a more adequate model of wind-tunnel turbulence is required when the active flutter-suppression system is used.

  16. Anomalous acceleration of ions in a plasma accelerator with an anodic layer

    NASA Astrophysics Data System (ADS)

    Bardakov, V. M.; Ivanov, S. D.; Kazantsev, A. V.; Strokin, N. A.; Stupin, A. N.; Jiang, Binhao; Wang, Zhenyu

    2018-03-01

    In a plasma accelerator with an anodic layer (PAAL), we discovered experimentally the effect of ‘super-acceleration’ of the bulk of the ions to energies W exceeding the energy equivalent of the discharge voltage V_d. The E × B discharge was ignited in atomic argon, atomic helium, and molecular nitrogen. Singly charged argon ions were accelerated most effectively at the largest discharge currents and working-gas pressures P. Helium ions with W > eV_d (e being the electron charge) were only recorded at maximum pressures. Molecular nitrogen was not accelerated to energies W > eV_d. Anomalous acceleration is realized in the range of radial magnetic fields at the anode of 2.8 × 10⁻² T ≤ B_rA ≤ 4 × 10⁻² T. It was also found analytically that the cathode of the accelerator can receive anomalously accelerated ions when the potential in the anodic layer becomes higher than the anode potential and the anode current exceeds a critical value. Numerical modeling within the developed theory showed qualitative agreement between the modeling data and the measurements.

  17. Cryogenic parallel, single phase flows: an analytical approach

    NASA Astrophysics Data System (ADS)

    Eichhorn, R.

    2017-02-01

    Managing the cryogenic flows inside a state-of-the-art accelerator cryomodule has become a demanding endeavour: in order to build highly efficient modules, all heat transfers are usually intercepted at various temperatures. For a multi-cavity module operated at 1.8 K, this requires intercepts at 4 K and at 80 K at different locations, with sometimes strongly varying heat loads, which for simplicity are operated in parallel. This contribution describes an analytical approach based on optimization theory.

  18. An Accelerated Analytical Process for the Development of STR Profiles for Casework Samples.

    PubMed

    Laurin, Nancy; Frégeau, Chantal J

    2015-07-01

    Significant efforts are being devoted to the development of methods enabling rapid generation of short tandem repeat (STR) profiles in order to reduce turnaround times for the delivery of human identification results from biological evidence. Some of the proposed solutions are still costly and low-throughput. This study describes the optimization of an analytical process enabling the generation of complete STR profiles (single-source or mixed profiles) for human identification in approximately 5 h. This accelerated process uses currently available reagents and standard laboratory equipment. It includes a 30-min lysis step, a 27-min DNA extraction using the Promega Maxwell® 16 System, DNA quantification in <1 h using the Qiagen Investigator® Quantiplex HYres kit, fast amplification (<26 min) of the loci included in AmpFℓSTR® Identifiler®, and analysis of the profiles on the 3500-series Genetic Analyzer. This combination of fast individual steps produces high-quality profiling results and offers a cost-effective alternative approach to rapid DNA analysis. © 2015 American Academy of Forensic Sciences.

  19. Accelerated Compressed Sensing Based CT Image Reconstruction.

    PubMed

    Hashemi, SayedMasoud; Beheshti, Soosan; Gill, Patrick R; Paul, Narinder S; Cobbold, Richard S C

    2015-01-01

    In X-ray computed tomography (CT) an important objective is to reduce the radiation dose without significantly degrading the image quality. Compressed sensing (CS) enables the radiation dose to be reduced by producing diagnostic images from a limited number of projections. However, conventional CS-based algorithms are computationally intensive and time-consuming. We propose a new algorithm that accelerates the CS-based reconstruction by using a fast pseudopolar Fourier based Radon transform and rebinning the diverging fan beams to parallel beams. The reconstruction process is analyzed using a maximum a posteriori approach, which is transformed into a weighted CS problem. The weights involved in the proposed model are calculated based on the statistical characteristics of the reconstruction process, which is formulated in terms of the measurement noise and rebinning interpolation error. Therefore, the proposed method not only accelerates the reconstruction, but also removes the rebinning and interpolation errors. Simulation results are shown for phantoms and a patient. For example, a 512 × 512 Shepp-Logan phantom when reconstructed from 128 rebinned projections using a conventional CS method had 10% error, whereas with the proposed method the reconstruction error was less than 1%. Moreover, computation times of less than 30 sec were obtained using a standard desktop computer without numerical optimization.
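
    The weighted-CS formulation can be illustrated with a toy ISTA solver minimizing 0.5*||Ax - y||² + ||lam ∘ x||₁, where per-coefficient weights lam play the role of the paper's noise-derived weighting. The random operator below merely stands in for the pseudopolar Fourier-based Radon transform, and all weights and sizes are hypothetical.

      import numpy as np

      rng = np.random.default_rng(4)

      # Sparse ground truth and noisy compressed measurements.
      n, m, k = 256, 96, 8
      A = rng.standard_normal((m, n)) / np.sqrt(m)
      x_true = np.zeros(n)
      x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
      y = A @ x_true + 0.01 * rng.standard_normal(m)

      # Per-coefficient weights; heavier where coefficients are assumed
      # less reliable (a hypothetical noisy band).
      lam = np.full(n, 0.02)
      lam[200:] = 0.05

      L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient
      x = np.zeros(n)
      for _ in range(300):
          grad = A.T @ (A @ x - y)
          z = x - grad / L
          x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold

      err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
      print("relative recovery error: %.3f" % err)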

  1. Integrated signal probe based aptasensor for dual-analyte detection.

    PubMed

    Xiang, Juan; Pi, Xiaomei; Chen, Xiaoqing; Xiang, Lei; Yang, Minghui; Ren, Hao; Shen, Xiaojuan; Qi, Ning; Deng, Chunyan

    2017-10-15

    For multi-analyte detection, although sensitivity commonly meets practical requirements, reliability, reproducibility and stability need further improvement. In this work, two different aptamer probes labeled with redox tags were used as signal probe 1 (sP1) and signal probe 2 (sP2), which were integrated into one unity DNA architecture to develop the integrated signal probe (ISP). Compared with the conventional independent signal probes for simultaneous multi-analyte detection, the proposed ISP was more reproducible and accurate. This is because the ISP, being one DNA structure, ensures identical modification conditions and an equal stoichiometric ratio between sP1 and sP2; furthermore, cross-interference between sP1 and sP2 can be prevented by regulating the complementary positions of sP1 and sP2. The ISP-based assay system represents significant progress for dual-analyte detection. Combining it with gold nanoparticle (AuNP) signal amplification, an ISP/AuNPs-based aptasensor for sensitive dual-analyte detection was explored. Based on DNA structural switching induced by targets binding to their aptamers, simultaneous dual-analyte detection was simply achieved by monitoring the electrochemical responses of methylene blue (MB) and ferrocene (Fc). This proposed detection system possesses such advantages as simplicity in design, easy operation, good reproducibility and accuracy, and high sensitivity and selectivity, which indicates the excellent applicability of this aptasensor in the field of clinical diagnosis or other molecular sensors. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. Parallel SOR methods with a parabolic-diffusion acceleration technique for solving an unstructured-grid Poisson equation on 3D arbitrary geometries

    NASA Astrophysics Data System (ADS)

    Zapata, M. A. Uh; Van Bang, D. Pham; Nguyen, K. D.

    2016-05-01

    This paper presents a parallel algorithm for the finite-volume discretisation of the Poisson equation on three-dimensional arbitrary geometries. The proposed method is formulated by using a 2D horizontal block domain decomposition and interprocessor data communication techniques with message passing interface. The horizontal unstructured-grid cells are reordered according to the neighbouring relations and decomposed into blocks using a load-balanced distribution to give all processors an equal number of elements. In this algorithm, two parallel successive over-relaxation methods are presented: a multi-colour ordering technique for unstructured grids based on distributed memory, and a block method using a reordering index that follows ideas similar to the partitioning of structured grids. In all cases, the parallel algorithms are implemented in combination with an iterative acceleration solver. This solver is based on a parabolic-diffusion equation introduced to obtain faster solutions of the linear systems arising from the discretisation. Numerical results are given to evaluate the performance of the methods, showing speed-ups better than linear.
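
    A minimal sketch of the multi-colour SOR idea on a structured grid follows; the red-black colouring shown here is the structured-grid special case of the paper's unstructured-mesh colouring, and the parabolic-diffusion acceleration is not reproduced.

        import numpy as np

        def sor_red_black(f, h, omega=1.8, n_sweeps=500):
            """Solve -laplace(u) = f on a square grid, u = 0 on the boundary."""
            u = np.zeros_like(f)
            n = f.shape[0]
            for _ in range(n_sweeps):
                for colour in (0, 1):              # same-colour nodes are independent
                    for i in range(1, n - 1):
                        for j in range(1, n - 1):
                            if (i + j) % 2 != colour:
                                continue
                            gs = 0.25 * (u[i-1, j] + u[i+1, j] + u[i, j-1]
                                         + u[i, j+1] + h * h * f[i, j])
                            u[i, j] += omega * (gs - u[i, j])   # over-relaxation
            return u

    Because nodes of one colour do not couple to each other, each colour sweep can be distributed across processors, which is what makes the colouring attractive for parallelisation.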

  3. On-Chip Laser-Power Delivery System for Dielectric Laser Accelerators

    NASA Astrophysics Data System (ADS)

    Hughes, Tyler W.; Tan, Si; Zhao, Zhexin; Sapra, Neil V.; Leedle, Kenneth J.; Deng, Huiyang; Miao, Yu; Black, Dylan S.; Solgaard, Olav; Harris, James S.; Vuckovic, Jelena; Byer, Robert L.; Fan, Shanhui; England, R. Joel; Lee, Yun Jo; Qi, Minghao

    2018-05-01

    We propose an on-chip optical-power delivery system for dielectric laser accelerators based on a fractal "tree-network" dielectric waveguide geometry. This system replaces experimentally demanding free-space manipulations of the driving laser beam with chip-integrated techniques based on precise nanofabrication, enabling access to orders-of-magnitude increases in the interaction length and total energy gain for these miniature accelerators. Based on computational modeling, in the relativistic regime, our laser delivery system is estimated to provide 21 keV of energy gain over an acceleration length of 192 μm with a single laser input, corresponding to a 108-MV/m acceleration gradient. The system may achieve 1 MeV of energy gain over a distance of less than 1 cm by sequentially illuminating 49 identical structures. These findings are verified by detailed numerical simulation and modeling of the subcomponents, and we provide a discussion of the main constraints, challenges, and relevant parameters with regard to on-chip laser coupling for dielectric laser accelerators.
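
    The quoted numbers can be checked with the identity that energy gain equals gradient times interaction length; the short sketch below (illustrative arithmetic only) reproduces the per-stage gain and the sub-centimeter footprint of 49 cascaded stages.

        gradient = 108e6              # V/m, quoted acceleration gradient
        stage_len = 192e-6            # m, quoted interaction length
        gain_ev = gradient * stage_len
        print(f"per stage: {gain_ev / 1e3:.1f} keV")   # ~20.7 keV, i.e. ~21 keV
        total = 49 * gain_ev
        print(f"49 stages: {total / 1e6:.2f} MeV over {49 * stage_len * 100:.2f} cm")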

  4. Accuracy of selected techniques for estimating ice-affected streamflow

    USGS Publications Warehouse

    Walker, John F.

    1991-01-01

    This paper compares the accuracy of selected techniques for estimating streamflow during ice-affected periods. The techniques are classified into two categories - subjective and analytical - depending on the degree of judgment required. Discharge measurements were made at three streamflow-gauging sites in Iowa during the 1987-88 winter and used to establish a baseline streamflow record for each site. Using data based on a simulated six-week field-trip schedule, selected techniques were used to estimate discharge during the ice-affected periods. For the subjective techniques, three hydrographers independently compiled each record. Three measures of performance are used to compare the estimated streamflow records with the baseline streamflow records: the average discharge for the ice-affected period, and the mean and standard deviation of the daily errors. Based on average ranks for the three performance measures and the three sites, the analytical and subjective techniques are essentially comparable. For two of the three sites, Kruskal-Wallis one-way analysis of variance detects significant differences among the three hydrographers for the subjective methods, indicating that the subjective techniques are less consistent than the analytical techniques. The results suggest analytical techniques may be viable tools for estimating discharge during periods of ice effect, and should be developed further and evaluated for sites across the United States.
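
    For readers unfamiliar with the three performance measures, the sketch below computes them for one hypothetical estimated record against a baseline, together with a Kruskal-Wallis comparison of (hypothetical) hydrographer error samples; all discharge values are invented for illustration.

        import numpy as np
        from scipy.stats import kruskal

        baseline = np.array([10.0, 12.0, 9.5, 11.0])   # measured daily discharge
        estimate = np.array([11.0, 11.5, 10.0, 10.0])  # one hydrographer's estimate

        errors = estimate - baseline
        print("average discharge over the period:", estimate.mean())
        print("mean daily error:", errors.mean())
        print("std of daily errors:", errors.std(ddof=1))

        # Test for differences among three hydrographers (hypothetical errors).
        h1, h2, h3 = errors, errors + 0.3, errors - 0.2
        print(kruskal(h1, h2, h3))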

  5. Review of recent advances in analytical techniques for the determination of neurotransmitters

    PubMed Central

    Perry, Maura; Li, Qiang; Kennedy, Robert T.

    2009-01-01

    Methods and advances for monitoring neurotransmitters in vivo or for tissue analysis of neurotransmitters over the last five years are reviewed. The review is organized primarily by neurotransmitter type. Transmitter and related compounds may be monitored by either in vivo sampling coupled to analytical methods or implanted sensors. Sampling is primarily performed using microdialysis, but low-flow push-pull perfusion may offer advantages of spatial resolution while minimizing the tissue disruption associated with higher flow rates. Analytical techniques coupled to these sampling methods include liquid chromatography, capillary electrophoresis, enzyme assays, sensors, and mass spectrometry. Methods for the detection of amino acid, monoamine, neuropeptide, acetylcholine, nucleoside, and soluble gas neurotransmitters have been developed and improved upon. Advances in the speed and sensitivity of these methods have enabled improvements in temporal resolution and increased the number of compounds detectable. Similar advances have enabled improved detection in tissue samples, with a substantial emphasis on single cell and other small samples. Sensors provide excellent temporal and spatial resolution for in vivo monitoring. Advances in application to catecholamines, indoleamines, and amino acids have been prominent. Improvements in stability, sensitivity, and selectivity of the sensors have been of paramount interest. PMID:19800472

  6. On the Use of Accelerated Test Methods for Characterization of Advanced Composite Materials

    NASA Technical Reports Server (NTRS)

    Gates, Thomas S.

    2003-01-01

    A rational approach to the problem of accelerated testing for material characterization of advanced polymer matrix composites is discussed. The experimental and analytical methods provided should be viewed as a set of tools useful in the screening of material systems for long-term engineering properties in aerospace applications. Consideration is given to long-term exposure in extreme environments that include elevated temperature, reduced temperature, moisture, oxygen, and mechanical load. Analytical formulations useful for predictive models that are based on the principles of time-based superposition are presented. The need for reproducible mechanisms, indicator properties, and real-time data are outlined as well as the methodologies for determining specific aging mechanisms.

  7. Microextraction techniques at the analytical laboratory: an efficient way for determining low amounts of residual insecticides in soils

    NASA Astrophysics Data System (ADS)

    Viñas, Pilar; Navarro, Tania; Campillo, Natalia; Fenoll, Jose; Garrido, Isabel; Cava, Juana; Hernandez-Cordoba, Manuel

    2017-04-01

    Microextraction techniques allow sensitive measurements of pollutants to be carried out with instrumentation commonly available in the analytical laboratory. This communication reports our studies focused on the determination of pyrethroid insecticides in polluted soils. These chemicals are synthetic analogues of pyrethrum widely used for pest control in agricultural and household applications. Because of their properties, pyrethroids tend to adsorb strongly to soil particles and organic matter. Although they are considered pesticides of low toxicity to humans, long-term exposure may damage the immune and neurological systems. The procedure studied here is based on dispersive liquid-liquid microextraction (DLLME) and permits the determination of fifteen pyrethroid compounds (allethrin, resmethrin, tetramethrin, bifenthrin, fenpropathrin, cyhalothrin, acrinathrin, permethrin, λ-cyfluthrin, cypermethrin, flucythrinate, fenvalerate, esfenvalerate, τ-fluvalinate, and deltamethrin) in soil samples using gas chromatography with mass spectrometry (GC-MS). The analytes were first extracted from the soil samples (4 g) by treatment with 2 mL of acetonitrile, 2 mL of water and 0.5 g of NaCl. The enriched organic phase (approximately 0.8 mL) was separated by centrifugation, and this solution was used as the dispersant in a DLLME process. The analytes did not need to be derivatized before their injection into the chromatographic system, owing to their volatility and thermal stability. The different pyrethroids were identified from their retention times and mass spectra, considering the m/z values of the different fragments and their relative abundances. The detection limits were in the 0.2-23 ng g⁻¹ range, depending on the analyte and the sample under analysis. The authors are grateful to the Comunidad Autónoma de la Región de Murcia, Spain (Fundación Séneca, 19888/GERM/15) and to the Spanish MINECO (Project

  8. Applications of laser wakefield accelerator-based light sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Albert, Felicie; Thomas, Alec G. R.

    Laser-wakefield accelerators (LWFAs) were proposed more than three decades ago, and while they promise to deliver compact, high energy particle accelerators, they will also provide the scientific community with novel light sources. In a LWFA, where an intense laser pulse focused onto a plasma forms an electromagnetic wave in its wake, electrons can be trapped and are now routinely accelerated to GeV energies. From terahertz radiation to gamma-rays, this article reviews light sources from relativistic electrons produced by LWFAs, and discusses their potential applications. Betatron motion, Compton scattering and undulators respectively produce x-rays or gamma-rays by oscillating relativistic electrons in the wakefield behind the laser pulse, a counter-propagating laser field, or a magnetic undulator. Other LWFA-based light sources include bremsstrahlung and terahertz radiation. Here, we first evaluate the performance of each of these light sources and compare them with more conventional approaches, including radio frequency accelerators or other laser-driven sources. We then identify applications, which we discuss in detail, in a broad range of fields: medical and biological applications; military, defense and industrial applications; and condensed matter and high energy density science.

  9. Applications of laser wakefield accelerator-based light sources

    DOE PAGES

    Albert, Felicie; Thomas, Alec G. R.

    2016-10-01

    Laser-wakefield accelerators (LWFAs) were proposed more than three decades ago, and while they promise to deliver compact, high energy particle accelerators, they will also provide the scientific community with novel light sources. In a LWFA, where an intense laser pulse focused onto a plasma forms an electromagnetic wave in its wake, electrons can be trapped and are now routinely accelerated to GeV energies. From terahertz radiation to gamma-rays, this article reviews light sources from relativistic electrons produced by LWFAs, and discusses their potential applications. Betatron motion, Compton scattering and undulators respectively produce x-rays or gamma-rays by oscillating relativistic electrons in the wakefield behind the laser pulse, a counter-propagating laser field, or a magnetic undulator. Other LWFA-based light sources include bremsstrahlung and terahertz radiation. Here, we first evaluate the performance of each of these light sources and compare them with more conventional approaches, including radio frequency accelerators or other laser-driven sources. We then identify applications, which we discuss in detail, in a broad range of fields: medical and biological applications; military, defense and industrial applications; and condensed matter and high energy density science.

  10. Assessment of analytical techniques for predicting solid propellant exhaust plumes and plume impingement environments

    NASA Technical Reports Server (NTRS)

    Tevepaugh, J. A.; Smith, S. D.; Penny, M. M.

    1977-01-01

    An analysis of experimental nozzle, exhaust plume, and exhaust plume impingement data is presented. The data were obtained for subscale solid propellant motors with propellant Al loadings of 2, 10 and 15% exhausting to simulated altitudes of 50,000, 100,000 and 112,000 ft. Analytical predictions were made using a fully coupled two-phase method of characteristics numerical solution and a technique for defining thermal and pressure environments experienced by bodies immersed in two-phase exhaust plumes.

  11. Laser-ablation-based ion source characterization and manipulation for laser-driven ion acceleration

    NASA Astrophysics Data System (ADS)

    Sommer, P.; Metzkes-Ng, J.; Brack, F.-E.; Cowan, T. E.; Kraft, S. D.; Obst, L.; Rehwald, M.; Schlenvoigt, H.-P.; Schramm, U.; Zeil, K.

    2018-05-01

    For laser-driven ion acceleration from thin foils (∼10 μm–100 nm) in the target normal sheath acceleration regime, the hydrocarbon contaminant layer at the target surface generally serves as the ion source and hence determines the accelerated ion species, i.e. mainly protons, carbon and oxygen ions. The specific characteristics of the source layer—thickness and relevant lateral extent—as well as its manipulation have both been investigated since the first experiments on laser-driven ion acceleration, using a variety of techniques from direct source imaging to knife-edge or mesh imaging. In this publication, we present an experimental study in which laser ablation in two fluence regimes (low: F ∼ 0.6 J cm⁻², high: F ∼ 4 J cm⁻²) was applied to characterize and manipulate the hydrocarbon source layer. The high-fluence ablation in combination with a timed laser pulse for particle acceleration allowed for an estimation of the relevant source-layer thickness for proton acceleration. Moreover, from these data and independently from the low-fluence regime, the lateral extent of the ion source layer became accessible.

  12. Application of quality improvement analytic methodology in emergency medicine research: A comparative evaluation.

    PubMed

    Harries, Bruce; Filiatrault, Lyne; Abu-Laban, Riyad B

    2018-05-30

    Quality improvement (QI) analytic methodology is rarely encountered in the emergency medicine literature. We sought to comparatively apply QI design and analysis techniques to an existing data set, and discuss these techniques as an alternative to standard research methodology for evaluating a change in a process of care. We used data from a previously published randomized controlled trial on triage-nurse-initiated radiography using the Ottawa ankle rules (OAR). QI analytic tools were applied to the data set from this study and evaluated comparatively against the original standard research methodology. The original study concluded that triage-nurse-initiated radiographs led to a statistically significant decrease in mean emergency department length of stay. Using QI analytic methodology, we applied control charts and interpreted the results using established methods that preserved the time sequence of the data. This analysis found a compelling signal of a positive treatment effect that would have been identified after the enrolment of 58% of the original study sample, and in the 6th month of this 11-month study. Our comparative analysis demonstrates some of the potential benefits of QI analytic methodology. We found that had this approach been used in the original study, insights regarding the benefits of nurse-initiated radiography using the OAR would have been achieved earlier, and thus potentially at a lower cost. In situations where the overarching aim is to accelerate implementation of practice improvement to benefit future patients, we believe that increased consideration should be given to the use of QI analytic methodology.
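
    A minimal sketch of the kind of control chart used in such QI analyses is given below: an individuals (XmR) chart whose limits come from the average moving range; the length-of-stay values are hypothetical.

        import numpy as np

        los = np.array([120, 95, 110, 88, 92, 85, 80, 78, 82, 75], float)  # minutes
        mr = np.abs(np.diff(los))               # moving ranges of successive points
        centre = los.mean()
        sigma_hat = mr.mean() / 1.128           # d2 constant for subgroups of 2
        ucl, lcl = centre + 3 * sigma_hat, centre - 3 * sigma_hat
        outside = np.flatnonzero((los > ucl) | (los < lcl))
        print(f"CL = {centre:.1f}, UCL = {ucl:.1f}, LCL = {lcl:.1f}, signals at {outside}")

    In practice, run rules (for example, eight consecutive points on one side of the centre line) are applied as well, which is how a time-ordered signal can emerge before a conventional hypothesis test reaches significance.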

  13. Accelerated test techniques for micro-circuits: Evaluation of high temperature (473 k - 573 K) accelerated life test techniques as effective microcircuit screening methods

    NASA Technical Reports Server (NTRS)

    Johnson, G. M.

    1976-01-01

    The application of high temperature accelerated test techniques was shown to be an effective method of microcircuit defect screening. Comprehensive microcircuit evaluations and a series of high temperature (473 K to 573 K) life tests demonstrated that a freak or early failure population of surface-contaminated devices could be completely screened in thirty-two hours of test at an ambient temperature of 523 K. Equivalent screening at 398 K, as prescribed by current Military and NASA specifications, would have required in excess of 1,500 hours of test. All testing was accomplished with a Texas Instruments 54L10 low-power triple 3-input NAND gate manufactured with a titanium-tungsten (Ti-W), gold (Au) metallization system. A number of design and/or manufacturing anomalies were also noted with the Ti-W, Au metallization system. Further study of the exact nature and cause(s) of these anomalies is recommended prior to the use of microcircuits with Ti-W, Au metallization in long-life/high-reliability applications. Photomicrographs of tested circuits are included.
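
    The quoted screening-time ratio is consistent with standard Arrhenius time-acceleration arithmetic; the sketch below uses a hypothetical activation energy of 0.55 eV (not a value from the report) to show the scale of the effect.

        import math

        K_B = 8.617e-5        # Boltzmann constant, eV/K

        def acceleration_factor(ea_ev, t_use_k, t_stress_k):
            """Arrhenius acceleration factor between a use and a stress temperature."""
            return math.exp(ea_ev / K_B * (1.0 / t_use_k - 1.0 / t_stress_k))

        af = acceleration_factor(0.55, 398.0, 523.0)
        print(f"AF ~ {af:.0f}: 32 h at 523 K ~ {32 * af:.0f} h at 398 K")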

  14. Exploring phlebotomy technique as a pre-analytical factor in proteomic analyses by mass spectrometry.

    PubMed

    Penn, Andrew M; Lu, Linghong; Chambers, Andrew G; Balshaw, Robert F; Morrison, Jaclyn L; Votova, Kristine; Wood, Eileen; Smith, Derek S; Lesperance, Maria; del Zoppo, Gregory J; Borchers, Christoph H

    2015-12-01

    Multiple reaction monitoring mass spectrometry (MRM-MS) is an emerging technology for blood biomarker verification and validation; however, the results may be influenced by pre-analytical factors. This exploratory study was designed to determine if differences in phlebotomy techniques would significantly affect the abundance of plasma proteins in an upcoming biomarker development study. Blood was drawn from 10 healthy participants using four techniques: (1) a 20-gauge IV with vacutainer, (2) a 21-gauge direct vacutainer, (3) an 18-gauge butterfly with vacutainer, and (4) an 18-gauge butterfly with syringe draw. The abundances of a panel of 122 proteins (117 proteins, plus 5 matrix metalloproteinase (MMP) proteins) were targeted by LC/MRM-MS. In addition, complete blood count (CBC) data were also compared across the four techniques. Phlebotomy technique significantly affected 2 of the 11 CBC parameters (red blood cell count, p = 0.010; hemoglobin concentration, p = 0.035) and only 12 of the targeted 117 proteins (p < 0.05). Of the five MMP proteins, only MMP7 was detectable and its concentration was not significantly affected by different techniques. Overall, most proteins in this exploratory study were not significantly influenced by phlebotomy technique; however, a larger study with additional patients will be required for confirmation.

  15. A compact linear accelerator based on a scalable microelectromechanical-system RF-structure

    NASA Astrophysics Data System (ADS)

    Persaud, A.; Ji, Q.; Feinberg, E.; Seidl, P. A.; Waldron, W. L.; Schenkel, T.; Lal, A.; Vinayakumar, K. B.; Ardanuc, S.; Hammer, D. A.

    2017-06-01

    A new approach for a compact radio-frequency (RF) accelerator structure is presented. The new accelerator architecture is based on the Multiple Electrostatic Quadrupole Array Linear Accelerator (MEQALAC) structure that was first developed in the 1980s. The MEQALAC utilized RF resonators producing the accelerating fields and providing for higher beam currents through parallel beamlets focused using arrays of electrostatic quadrupoles (ESQs). While the early work obtained ESQs with lateral dimensions on the order of a few centimeters, using a printed circuit board (PCB) we reduce the characteristic dimension to the millimeter regime, while massively scaling up the potential number of parallel beamlets. Using scalable microelectromechanical systems (MEMS) fabrication approaches, we are working on further reducing the characteristic dimension to the sub-millimeter regime. The technology is based on RF-acceleration components and ESQs implemented in the PCB or silicon wafers, where each beamlet passes through beam apertures in the wafer. The complete accelerator is then assembled by stacking these wafers. This approach has the potential for fast and inexpensive batch fabrication of the components and flexibility in system design for application-specific beam energies and currents. For prototyping the accelerator architecture, the components have been fabricated using the PCB. In this paper, we present proof-of-concept results for the principal components using the PCB: RF acceleration and ESQ focusing. Ongoing developments on implementing components in silicon and scaling of the accelerator technology to high currents and beam energies are discussed.

  16. A compact linear accelerator based on a scalable microelectromechanical-system RF-structure.

    PubMed

    Persaud, A; Ji, Q; Feinberg, E; Seidl, P A; Waldron, W L; Schenkel, T; Lal, A; Vinayakumar, K B; Ardanuc, S; Hammer, D A

    2017-06-01

    A new approach for a compact radio-frequency (RF) accelerator structure is presented. The new accelerator architecture is based on the Multiple Electrostatic Quadrupole Array Linear Accelerator (MEQALAC) structure that was first developed in the 1980s. The MEQALAC utilized RF resonators producing the accelerating fields and providing for higher beam currents through parallel beamlets focused using arrays of electrostatic quadrupoles (ESQs). While the early work obtained ESQs with lateral dimensions on the order of a few centimeters, using a printed circuit board (PCB) we reduce the characteristic dimension to the millimeter regime, while massively scaling up the potential number of parallel beamlets. Using scalable microelectromechanical systems (MEMS) fabrication approaches, we are working on further reducing the characteristic dimension to the sub-millimeter regime. The technology is based on RF-acceleration components and ESQs implemented in the PCB or silicon wafers, where each beamlet passes through beam apertures in the wafer. The complete accelerator is then assembled by stacking these wafers. This approach has the potential for fast and inexpensive batch fabrication of the components and flexibility in system design for application-specific beam energies and currents. For prototyping the accelerator architecture, the components have been fabricated using the PCB. In this paper, we present proof-of-concept results for the principal components using the PCB: RF acceleration and ESQ focusing. Ongoing developments on implementing components in silicon and scaling of the accelerator technology to high currents and beam energies are discussed.

  17. High-speed technique based on a parallel projection correlation procedure for digital image correlation

    NASA Astrophysics Data System (ADS)

    Zaripov, D. I.; Renfu, Li

    2018-05-01

    The implementation of high-efficiency digital image correlation methods based on a zero-normalized cross-correlation (ZNCC) procedure for high-speed, time-resolved measurements using a high-resolution digital camera is associated with big data processing and is often time-consuming. In order to speed up ZNCC computation, a high-speed technique based on a parallel projection correlation procedure is proposed. The proposed technique involves the use of interrogation-window projections instead of the window's two-dimensional field of luminous intensity. This simplification allows acceleration of ZNCC computation by up to 28.8 times compared with ZNCC calculated directly, depending on the size of the interrogation window and the region of interest. The results of three synthetic test cases, namely a one-dimensional uniform flow, a linear shear flow and a turbulent boundary-layer flow, are discussed in terms of accuracy. In the latter case, the proposed technique is implemented together with an iterative window-deformation technique. On the basis of the results of the present work, the proposed technique is recommended for initial velocity field calculation, with further correction using more accurate techniques.
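
    The projection idea can be stated compactly: correlate the row and column sums of the interrogation windows instead of their full two-dimensional intensity fields. The sketch below is illustrative only; the paper's full displacement search and window deformation are omitted.

        import numpy as np

        def zncc_1d(a, b):
            a = a - a.mean()
            b = b - b.mean()
            return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

        def zncc_projection(win_a, win_b):
            """Average ZNCC of the horizontal and vertical projections of two windows."""
            rx = zncc_1d(win_a.sum(axis=0), win_b.sum(axis=0))
            ry = zncc_1d(win_a.sum(axis=1), win_b.sum(axis=1))
            return 0.5 * (rx + ry)

        rng = np.random.default_rng(1)
        w = rng.random((32, 32))
        print(zncc_projection(w, np.roll(w, 2, axis=1)))   # window vs. shifted copy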

  18. A singular value decomposition linear programming (SVDLP) optimization technique for circular cone based robotic radiotherapy.

    PubMed

    Liang, Bin; Li, Yongbao; Wei, Ran; Guo, Bin; Xu, Xuang; Liu, Bo; Li, Jiafeng; Wu, Qiuwen; Zhou, Fugen

    2018-01-05

    With robot-controlled linac positioning, robotic radiotherapy systems such as CyberKnife significantly increase the freedom of radiation beam placement, but also impose more challenges on treatment plan optimization. The resampling mechanism in the vendor-supplied treatment planning system (MultiPlan) cannot fully explore the increased beam direction search space. Besides, a sparse treatment plan (using fewer beams) is desired to improve treatment efficiency. This study proposes a singular value decomposition linear programming (SVDLP) optimization technique for circular-collimator-based robotic radiotherapy. The SVDLP approach initializes the input beams by simulating the process of covering the entire target volume with equivalent beam tapers. The requirements on the dosimetric distribution are modeled as hard and soft constraints, and the sparsity of the treatment plan is achieved by compressive sensing. The proposed linear programming (LP) model optimizes the beam weights by minimizing the deviation from the soft constraints subject to the hard constraints, with a constraint on the ℓ1 norm of the beam weight. A singular value decomposition (SVD) based acceleration technique was developed for the LP model. Based on the degeneracy of the influence matrix, the model is first compressed into a lower dimension for optimization, and then back-projected to reconstruct the beam weight. After beam weight optimization, the number of beams is reduced by removing the beams with low weight, and optimizing the weights of the remaining beams using the same model. This beam reduction technique is further validated by a mixed integer programming (MIP) model. The SVDLP approach was tested on a lung case. The results demonstrate that the SVD acceleration technique speeds up the optimization by a factor of 4.8. Furthermore, the beam reduction achieves a plan quality similar to the globally optimal plan obtained by the MIP model, but is one to two orders of magnitude faster. Furthermore, the SVDLP
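
    The SVD compression step can be sketched as follows: project the dose constraints onto the dominant singular modes of the influence matrix and optimize in that reduced space. The sketch below replaces the paper's LP (hard/soft constraints, ℓ1 sparsity) with a bounded least-squares fit for brevity, so it illustrates only the dimensionality reduction and the weight-threshold beam reduction; all matrices are synthetic.

        import numpy as np
        from scipy.optimize import lsq_linear

        rng = np.random.default_rng(2)
        A = rng.random((400, 60)) @ rng.random((60, 200))  # low-rank influence matrix
        d = rng.random(400)                                # prescribed dose vector

        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        k = int(np.searchsorted(np.cumsum(s) / s.sum(), 0.999)) + 1  # effective rank

        M = s[:k, None] * Vt[:k]                  # k x 200 compressed system
        res = lsq_linear(M, U[:, :k].T @ d, bounds=(0.0, np.inf))
        w = res.x
        w[w < 1e-3 * w.max()] = 0.0               # beam reduction: drop weak beams
        print(f"rank kept: {k}; active beams: {int((w > 0).sum())} of 200")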

  19. A singular value decomposition linear programming (SVDLP) optimization technique for circular cone based robotic radiotherapy

    NASA Astrophysics Data System (ADS)

    Liang, Bin; Li, Yongbao; Wei, Ran; Guo, Bin; Xu, Xuang; Liu, Bo; Li, Jiafeng; Wu, Qiuwen; Zhou, Fugen

    2018-01-01

    With robot-controlled linac positioning, robotic radiotherapy systems such as CyberKnife significantly increase the freedom of radiation beam placement, but also impose more challenges on treatment plan optimization. The resampling mechanism in the vendor-supplied treatment planning system (MultiPlan) cannot fully explore the increased beam direction search space. Besides, a sparse treatment plan (using fewer beams) is desired to improve treatment efficiency. This study proposes a singular value decomposition linear programming (SVDLP) optimization technique for circular-collimator-based robotic radiotherapy. The SVDLP approach initializes the input beams by simulating the process of covering the entire target volume with equivalent beam tapers. The requirements on the dosimetric distribution are modeled as hard and soft constraints, and the sparsity of the treatment plan is achieved by compressive sensing. The proposed linear programming (LP) model optimizes the beam weights by minimizing the deviation from the soft constraints subject to the hard constraints, with a constraint on the ℓ1 norm of the beam weight. A singular value decomposition (SVD) based acceleration technique was developed for the LP model. Based on the degeneracy of the influence matrix, the model is first compressed into a lower dimension for optimization, and then back-projected to reconstruct the beam weight. After beam weight optimization, the number of beams is reduced by removing the beams with low weight, and optimizing the weights of the remaining beams using the same model. This beam reduction technique is further validated by a mixed integer programming (MIP) model. The SVDLP approach was tested on a lung case. The results demonstrate that the SVD acceleration technique speeds up the optimization by a factor of 4.8. Furthermore, the beam reduction achieves a plan quality similar to the globally optimal plan obtained by the MIP model, but is one to two orders of magnitude faster. Furthermore, the SVDLP

  20. Rapid and Automated Analytical Methods for Redox Species Based on Potentiometric Flow Injection Analysis Using Potential Buffers

    PubMed Central

    Ohura, Hiroki; Imato, Toshihiko

    2011-01-01

    Two analytical methods, which prove the utility of a potentiometric flow injection technique for determining various redox species, based on the use of some redox potential buffers, are reviewed. The first is a potentiometric flow injection method in which a redox couple such as Fe(III)-Fe(II), Fe(CN)₆³⁻-Fe(CN)₆⁴⁻, or bromine-bromide and a redox electrode or a combined platinum-bromide ion selective electrode are used. The analytical principle and advantages of the method are discussed, and several examples of its application are reported. Another example is a highly sensitive potentiometric flow injection method, in which a large transient potential change due to bromine or chlorine as an intermediate, generated during the reaction of the oxidative species with an Fe(III)-Fe(II) potential buffer containing bromide or chloride, is utilized. The analytical principle and details of the proposed method are described, and examples of several applications are described. The determination of trace amounts of hydrazine, based on the detection of a transient change in potential caused by the reaction with a Ce(IV)-Ce(III) potential buffer, is also described. PMID:21584280
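
    The transduction principle behind a redox potential buffer is the Nernst equation: a small change in the oxidized/reduced ratio shifts the electrode potential logarithmically. A minimal sketch follows, with hypothetical concentrations and an approximate formal potential for Fe(III)/Fe(II); none of these values are taken from the review.

        import math

        R, F, T = 8.314, 96485.0, 298.15    # J/(mol K), C/mol, K

        def nernst(e0, c_ox, c_red, n=1):
            """Electrode potential of a redox couple, in volts."""
            return e0 + (R * T / (n * F)) * math.log(c_ox / c_red)

        e0_fe = 0.70                        # approximate formal potential, V (assumed)
        before = nernst(e0_fe, 1.0e-3, 1.0e-3)
        after = nernst(e0_fe, 1.1e-3, 0.9e-3)   # analyte oxidizes some Fe(II)
        print(f"potential shift: {(after - before) * 1e3:.1f} mV")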

  1. Accelerator-based Neutrino Physics at Fermilab

    NASA Astrophysics Data System (ADS)

    Dukes, Edmond

    2008-10-01

    The discovery of neutrino mass has excited great interest in elucidating the properties of neutrinos and their role in nature. Experiments around the world take advantage of solar, atmospheric, reactor, and accelerator sources of neutrinos. Accelerator-based sources are particularly convenient since their parameters can be tuned to optimize the measurement in question. At Fermilab an extensive neutrino program includes the MiniBooNE, SciBooNE, and MINOS experiments. Two major new experiments, MINERvA and NOvA, are being constructed, plans for a high-intensity neutrino source to DUSEL are underway, and an R&D effort towards a large liquid argon detector is being pursued. The NOvA experiment intends to search for electron neutrino appearance using a massive surface detector 811 km from Fermilab. In addition to measuring the last unknown mixing angle, θ₁₃, NOvA has the possibility of seeing matter-antimatter asymmetries in neutrinos and resolving the ordering of the neutrino mass states.

  2. Innovative single-shot diagnostics for electrons from laser wakefield acceleration at FLAME

    NASA Astrophysics Data System (ADS)

    Bisesto, F. G.; Anania, M. P.; Cianchi, A.; Chiadroni, E.; Curcio, A.; Ferrario, M.; Pompili, R.; Zigler, A.

    2017-07-01

    Plasma wakefield acceleration is the most promising acceleration technique known today, able to provide very high accelerating fields (>100 GV/m) and thereby enabling the acceleration of electrons to GeV energies in a few centimeters. Here we present all the plasma-related activities currently underway at SPARC_LAB exploiting the high-power laser FLAME. In particular, we give an overview of the single-shot diagnostics employed: electro-optic sampling (EOS) for temporal measurements and optical transition radiation (OTR) for innovative single-shot emittance measurements. In detail, the EOS technique has been employed to measure, for the first time, the longitudinal profile of the electric field of fast electrons escaping from a solid target, which drives ion and proton acceleration, and to study the impact of using different target shapes. Moreover, a novel scheme for single-shot emittance measurements based on OTR, developed and tested at the SPARC_LAB LINAC and used in a still-ongoing experiment on electrons from laser wakefield acceleration, will be shown.

  3. Problem Formulation in Knowledge Discovery via Data Analytics (KDDA) for Environmental Risk Management

    PubMed Central

    Li, Yan; Thomas, Manoj; Osei-Bryson, Kweku-Muata; Levy, Jason

    2016-01-01

    With the growing popularity of data analytics and data science in the field of environmental risk management, a formalized Knowledge Discovery via Data Analytics (KDDA) process that incorporates all applicable analytical techniques for a specific environmental risk management problem is essential. In this emerging field, there is limited research dealing with the use of decision support to elicit environmental risk management (ERM) objectives and identify analytical goals from ERM decision makers. In this paper, we address problem formulation in the ERM understanding phase of the KDDA process. We build a DM3 ontology to capture ERM objectives and to infer analytical goals and associated analytical techniques. A framework to assist decision making in the problem formulation process is developed. It is shown how the ontology-based knowledge system can provide structured guidance to retrieve relevant knowledge during problem formulation. The importance of not only operationalizing the KDDA approach in a real-world environment but also evaluating the effectiveness of the proposed procedure is emphasized. We demonstrate how ontology inferencing may be used to discover analytical goals and techniques by conceptualizing Hazardous Air Pollutants (HAPs) exposure shifts based on a multilevel analysis of the level of urbanization (and related economic activity) and the degree of Socio-Economic Deprivation (SED) at the local neighborhood level. The HAPs case highlights not only the role of complexity in problem formulation but also the need for integrating data from multiple sources and the importance of employing appropriate KDDA modeling techniques. Challenges and opportunities for KDDA are summarized with an emphasis on environmental risk management and HAPs. PMID:27983713

  4. Problem Formulation in Knowledge Discovery via Data Analytics (KDDA) for Environmental Risk Management.

    PubMed

    Li, Yan; Thomas, Manoj; Osei-Bryson, Kweku-Muata; Levy, Jason

    2016-12-15

    With the growing popularity of data analytics and data science in the field of environmental risk management, a formalized Knowledge Discovery via Data Analytics (KDDA) process that incorporates all applicable analytical techniques for a specific environmental risk management problem is essential. In this emerging field, there is limited research dealing with the use of decision support to elicit environmental risk management (ERM) objectives and identify analytical goals from ERM decision makers. In this paper, we address problem formulation in the ERM understanding phase of the KDDA process. We build a DM³ ontology to capture ERM objectives and to infer analytical goals and associated analytical techniques. A framework to assist decision making in the problem formulation process is developed. It is shown how the ontology-based knowledge system can provide structured guidance to retrieve relevant knowledge during problem formulation. The importance of not only operationalizing the KDDA approach in a real-world environment but also evaluating the effectiveness of the proposed procedure is emphasized. We demonstrate how ontology inferencing may be used to discover analytical goals and techniques by conceptualizing Hazardous Air Pollutants (HAPs) exposure shifts based on a multilevel analysis of the level of urbanization (and related economic activity) and the degree of Socio-Economic Deprivation (SED) at the local neighborhood level. The HAPs case highlights not only the role of complexity in problem formulation but also the need for integrating data from multiple sources and the importance of employing appropriate KDDA modeling techniques. Challenges and opportunities for KDDA are summarized with an emphasis on environmental risk management and HAPs.

  5. The Ophidia framework: toward cloud-based data analytics for climate change

    NASA Astrophysics Data System (ADS)

    Fiore, Sandro; D'Anca, Alessandro; Elia, Donatello; Mancini, Marco; Mariello, Andrea; Mirto, Maria; Palazzo, Cosimo; Aloisio, Giovanni

    2015-04-01

    The Ophidia project is a research effort on big data analytics addressing scientific data analysis challenges in the climate change domain. It provides parallel (server-side) data analysis, an internal storage model and a hierarchical data organization to manage large amounts of multidimensional scientific data. The Ophidia analytics platform provides several MPI-based parallel operators to manipulate large datasets (data cubes) and array-based primitives to perform data analysis on large arrays of scientific data. The most relevant data analytics use cases implemented in national and international projects target fire danger prevention (OFIDIA), interactions between climate change and biodiversity (EUBrazilCC), climate indicators and remote data analysis (CLIP-C), sea situational awareness (TESSA), and large-scale data analytics on CMIP5 data in NetCDF format, compliant with the Climate and Forecast (CF) convention (ExArch). Two use cases, from the EU FP7 EUBrazil Cloud Connect and the INTERREG OFIDIA projects, will be presented during the talk. In the former case (EUBrazilCC) the Ophidia framework is being extended to integrate scalable VM-based solutions for the management of large volumes of scientific data (both climate and satellite data) in a cloud-based environment to study how climate change affects biodiversity. In the latter (OFIDIA) the data analytics framework is being exploited to provide operational support for processing chains devoted to fire danger prevention. To tackle the project challenges, data analytics workflows consisting of about 130 operators perform, among other things, parallel data analysis, metadata management, virtual file system tasks, map generation, rolling of datasets, and import/export of datasets in NetCDF format. Finally, the entire Ophidia software stack has been deployed at CMCC on 24 nodes (16 cores/node) of the Athena HPC cluster. Moreover, a cloud-based release tested with OpenNebula is also available and running in the private

  6. Analytics4Action Evaluation Framework: A Review of Evidence-Based Learning Analytics Interventions at the Open University UK

    ERIC Educational Resources Information Center

    Rienties, Bart; Boroowa, Avinash; Cross, Simon; Kubiak, Chris; Mayles, Kevin; Murphy, Sam

    2016-01-01

    There is an urgent need to develop an evidence-based framework for learning analytics whereby stakeholders can manage, evaluate, and make decisions about which types of interventions work well and under which conditions. In this article, we will work towards developing a foundation of an Analytics4Action Evaluation Framework (A4AEF) that is…

  7. Spectroscopic measurements of plasma emission light for plasma-based acceleration experiments

    NASA Astrophysics Data System (ADS)

    Filippi, F.; Anania, M. P.; Biagioni, A.; Chiadroni, E.; Cianchi, A.; Ferrario, M.; Mostacci, A.; Palumbo, L.; Zigler, A.

    2016-09-01

    Advanced particle accelerators are based on the excitation of large-amplitude plasma waves driven by either electron or laser beams. Future experiments scheduled at the SPARC_LAB test facility aim to demonstrate the acceleration of high-brightness electron beams through the so-called resonant plasma wakefield acceleration scheme, in which a train of electron bunches (drivers) resonantly excites wakefields in a preformed hydrogen plasma; the last bunch (witness), injected at the proper accelerating phase, gains energy from the wake. The quality of the accelerated beam depends strongly on the plasma density and its distribution along the acceleration length. Plasma densities of the order of 10¹⁶-10¹⁷ cm⁻³ can be measured spectroscopically from the plasma-emitted light. The density distribution measured in a hydrogen-filled capillary discharge, obtained from both the Balmer alpha and Balmer beta lines, and its shot-to-shot variation are reported here.
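
    Such density measurements typically exploit the Stark broadening of the Balmer lines, for which the electron density scales roughly as the 3/2 power of the line width. The sketch below shows only this scaling; the calibration constant is hypothetical, not a value from the paper.

        # n_e ~ C * FWHM^(3/2); C_CAL is a hypothetical calibration constant.
        C_CAL = 1.0e18                      # cm^-3 per nm^(3/2), assumed

        def electron_density(fwhm_nm):
            return C_CAL * fwhm_nm ** 1.5

        for width in (0.05, 0.1, 0.3):      # line FWHM in nm
            print(f"FWHM {width} nm -> n_e ~ {electron_density(width):.1e} cm^-3")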

  8. Wideband Motion Control by Position and Acceleration Input Based Disturbance Observer

    NASA Astrophysics Data System (ADS)

    Irie, Kouhei; Katsura, Seiichiro; Ohishi, Kiyoshi

    The disturbance observer can observe and suppress disturbance torque within its bandwidth. Motion systems are becoming widespread in society and are increasingly required to make contact with unknown environments. Such haptic motion requires a much wider bandwidth. However, since the conventional disturbance observer obtains the acceleration response by twice differentiating the position response, its bandwidth is limited by derivative noise. This paper proposes a novel structure for a disturbance observer. The proposed disturbance observer uses an acceleration sensor to enlarge the bandwidth. Generally, the bandwidth of an acceleration sensor extends from 1 Hz to more than 1 kHz. To cover the DC range, the conventional position-sensor-based disturbance observer is integrated. Thus, the performance of the proposed position and acceleration input based disturbance observer (PADO) is superior to the conventional one. The PADO is applied to position control (infinite stiffness) and force control (zero stiffness). The numerical and experimental results show the viability of the proposed method.
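
    The frequency-domain idea can be sketched as a complementary filter: low-pass the position-derived acceleration (trustworthy near DC) and high-pass the accelerometer signal (trustworthy at high frequency), then add the two. The sketch below is a first-order illustration with an assumed crossover frequency, not the paper's observer structure.

        import numpy as np

        fs, fc = 1000.0, 5.0                          # sample rate and crossover, Hz
        alpha = 1.0 / (1.0 + fs / (2.0 * np.pi * fc)) # one-pole low-pass coefficient

        def fuse(acc_from_pos, acc_sensor):
            """Complementary fusion: LPF(position path) + HPF(accelerometer path)."""
            out, lp_pos, lp_acc = [], 0.0, 0.0
            for ap, asens in zip(acc_from_pos, acc_sensor):
                lp_pos += alpha * (ap - lp_pos)       # low-pass the position path
                lp_acc += alpha * (asens - lp_acc)
                out.append(lp_pos + (asens - lp_acc)) # add the high-passed sensor path
            return np.array(out)

        # Example: position path sees only the slow content; accelerometer sees all.
        t = np.arange(0.0, 1.0, 1.0 / fs)
        slow = np.sin(2.0 * np.pi * 1.0 * t)
        fast = 0.3 * np.sin(2.0 * np.pi * 80.0 * t)
        fused = fuse(slow, slow + fast)   # fused tracks slow + fast over a wide band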

  9. Acceleration modules in linear induction accelerators

    NASA Astrophysics Data System (ADS)

    Wang, Shao-Heng; Deng, Jian-Jun

    2014-05-01

    The linear induction accelerator (LIA) is a unique type of accelerator capable of accelerating kilo-ampere charged-particle currents to tens of MeV. The recent development of LIAs operating in MHz burst mode and their successful application in a synchrotron have broadened the scope of LIA usage. Although the transformer model is widely used to explain the acceleration mechanism of LIAs, for many modern LIAs it is not appropriate to consider the induction electric field as the field that accelerates the charged particles. We have examined the transition of the magnetic cores' functions during the evolution of LIA acceleration modules, distinguished transformer-type from transmission-line-type LIA acceleration modules, and reconsidered several related issues based on the transmission-line-type LIA acceleration module. This clarified understanding should help in the further development and design of LIA acceleration modules.

  10. Evaluation of available analytical techniques for monitoring the quality of space station potable water

    NASA Technical Reports Server (NTRS)

    Geer, Richard D.

    1989-01-01

    To assure the quality of potable water (PW) on the Space Station (SS), a number of chemical and physical tests must be conducted routinely. After reviewing the requirements for potable water, both direct and indirect analytical methods that could perform the required tests are evaluated for compatibility with Space Station operation. A variety of suggestions are made to improve the analytical techniques for SS operation. The most important recommendations are: (1) the silver/silver chloride electrode (SB) method of removing the I₂/I⁻ biocide from the water, since it may interfere with analytical procedures for PW and also with its end uses; (2) the orbital reactor (OR) method of carrying out chemistry and electrochemistry in microgravity by using a disk-shaped reactor on an orbital table to impart an artificial G force to the contents, allowing solution mixing and separation of gases and liquids; and (3) a simple ultra-low-volume, highly sensitive electrochemical/conductivity detector for use with a capillary zone electrophoresis apparatus. Since several different conductivity and resistance measurements are made during the analysis of PW, it is also recommended that the bipolar pulse measuring circuit be used in all these applications for maximum compatibility and redundancy of equipment.

  11. Accelerators for Cancer Therapy

    DOE R&D Accomplishments Database

    Lennox, Arlene J.

    2000-05-30

    The vast majority of radiation treatments for cancerous tumors are given using electron linacs that provide both electrons and photons at several energies. Design and construction of these linacs are based on mature technology that is rapidly becoming more and more standardized and sophisticated. The use of hadrons such as neutrons, protons, alphas, or carbon, oxygen and neon ions is relatively new. Accelerators for hadron therapy are far from standardized, but the use of hadron therapy as an alternative to conventional radiation has led to significant improvements and refinements in conventional treatment techniques. This paper presents the rationale for radiation therapy, describes the accelerators used in conventional and hadron therapy, and outlines the issues that must still be resolved in the emerging field of hadron therapy.

  12. Human motion planning based on recursive dynamics and optimal control techniques

    NASA Technical Reports Server (NTRS)

    Lo, Janzen; Huang, Gang; Metaxas, Dimitris

    2002-01-01

    This paper presents an efficient optimal control and recursive dynamics-based computer animation system for simulating and controlling the motion of articulated figures. A quasi-Newton nonlinear programming technique (super-linear convergence) is implemented to solve minimum torque-based human motion-planning problems. The explicit analytical gradients needed in the dynamics are derived using a matrix exponential formulation and Lie algebra. Cubic spline functions are used to make the search space for an optimal solution finite. Based on our formulations, our method is well conditioned and robust, in addition to being computationally efficient. To better illustrate the efficiency of our method, we present results of natural looking and physically correct human motions for a variety of human motion tasks involving open and closed loop kinematic chains.

  13. Effects of TEA·HCl hardening accelerator on the workability of cement-based materials

    NASA Astrophysics Data System (ADS)

    Pan, Wenhao; Ding, Zhaoyang; Chen, Yanwen

    2017-03-01

    The aim of this work is to investigate the influence of TEA·HCl on the workability of cement paste and concrete. Based on the features of the new hardening accelerator, an experimental analysis programme was established using different dosages of the hardening accelerator, and the feasibility of such an accelerator for practical engineering was verified. The results show that addition of the hardening accelerator accelerates cement hydration; moreover, at a dosage of 0.04% the setting time was shortest, with initial and final setting times of 130 min and 180 min, respectively. The initial fluidity of cement paste with the accelerator was roughly equivalent to that of the blank. After 30 min, the fluidity loss decreased with increasing dosage, and the fluidity could even increase. Application of the hardening accelerator enhances the early workability of concrete; in particular, the 30-min slump loss improves significantly. The bleeding rate of concrete decreases markedly after adding TEA·HCl. It is concluded that the new hardening accelerator can meet the workability requirements of cement-based materials within the optimum dosage range.

  14. Accelerated and Airy-Bloch oscillations

    NASA Astrophysics Data System (ADS)

    Longhi, Stefano

    2016-09-01

    A quantum particle subjected to a constant force undergoes an accelerated motion following a parabolic path, which differs from the classical motion just because of wave packet spreading (quantum diffusion). However, when a periodic potential is added (such as in a crystal) the particle undergoes Bragg scattering and an oscillatory (rather than accelerated) motion is found, corresponding to the famous Bloch oscillations (BOs). Here, we introduce an exactly solvable quantum Hamiltonian model, corresponding to a generalized Wannier-Stark Hamiltonian Ĥ, in which a quantum particle shows an intermediate dynamical behavior, namely an oscillatory motion superimposed on an accelerated one. Such a novel dynamical behavior is referred to as accelerated BOs. Analytical expressions of the spectrum, improper eigenfunctions and propagator of the generalized Wannier-Stark Hamiltonian Ĥ are derived. Finally, it is shown that acceleration and quantum diffusion in the generalized Wannier-Stark Hamiltonian are prevented for Airy wave packets, which undergo a periodic breathing dynamics that can be referred to as Airy-Bloch oscillations.
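
    For orientation, the textbook semiclassical relations behind ordinary BOs in an assumed tight-binding band E(k) = -2J cos(ka) are recalled below (the paper's generalized Wannier-Stark Hamiltonian adds an accelerated drift on top of this dynamics, which is not reproduced here):

        \hbar \dot{k} = F \;\Rightarrow\; k(t) = k_0 + \frac{F t}{\hbar},
        \qquad T_B = \frac{2\pi\hbar}{F a},
        \qquad x(t) = x_0 + \frac{2J}{F}\left[\cos(k_0 a)
                      - \cos\!\left(k_0 a + \frac{F a t}{\hbar}\right)\right],

    so the particle traces a bounded oscillation of period T_B instead of the free-particle parabola.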

  15. Advanced Accelerators: Particle, Photon and Plasma Wave Interactions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, Ronald L.

    2017-06-29

    The overall objective of this project was to study the acceleration of electrons to very high energies over very short distances, based on trapping slowly moving electrons in the fast-moving potential wells of large-amplitude plasma waves, which have relativistic phase velocities. These relativistic plasma waves, or wakefields, are the basis of table-top accelerators that have been shown to accelerate electrons to the same high energies as kilometer-length linear particle colliders operating with traditional, decades-old acceleration techniques. The accelerating electrostatic fields of relativistic plasma wave accelerators can be as large as gigavolts/meter, and our goal was to study techniques for remotely measuring these large fields by injecting low-energy probe electron beams across the plasma wave and measuring the beam's deflection. Our method of study was computer simulation, and the results suggested that the deflection of the probe electron beam is directly proportional to the amplitude of the plasma wave. This is the basis of a proposed diagnostic technique, and numerous studies were performed to determine the effects of changing the electron beam, plasma wave and laser beam parameters. Further simulation studies included copropagating laser beams with the relativistic plasma waves. New and interesting results came out of these studies, including the prediction that very small-scale electron beam bunching occurs, and that an anomalous line focusing of the electron beam occurs under certain conditions. These studies were summarized in the dissertation of a graduate student who obtained a Ph.D. in physics. This research program has motivated ideas for further work to corroborate these results using particle-in-cell simulation tools, which will help design a test-of-concept experiment in our laboratory and a scaled-up version for testing at a major wakefield accelerator facility.

  16. Advocating for Grade-Based Acceleration

    ERIC Educational Resources Information Center

    Guilbault, Keri M.

    2014-01-01

    Parents often struggle with the decision to accelerate their child and may worry about social and emotional issues, although research indicates positive effects on the social and emotional adjustment of carefully selected accelerants. As children's advocates, parents can work effectively with a school system to secure an appropriate academic…

  17. New analytical technique for carbon dioxide absorption solvents

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pouryousefi, F.; Idem, R.O.

    2008-02-15

    The densities and refractive indices of two binary systems (water + MEA and water + MDEA) and three ternary systems (water + MEA + CO₂, water + MDEA + CO₂, and water + MEA + MDEA) used for carbon dioxide (CO₂) capture were measured over the range of compositions of the aqueous alkanolamine(s) used for CO₂ absorption at temperatures from 295 to 338 K. Experimental densities were modeled empirically, while the experimental refractive indices were modeled using well-established models from the known values of their pure-component densities and refractive indices. The density and Gladstone-Dale refractive index models were then used to obtain the compositions of unknown samples of the binary and ternary systems by simultaneous solution of the density and refractive index equations. The results from this technique have been compared with HPLC (high-performance liquid chromatography) results, while a third independent technique (acid-base titration) was used to verify the results. The results show that the systems' compositions obtained from the simple and easy-to-use refractive index/density technique were very comparable to those from the expensive and laborious HPLC/titration techniques, suggesting that the refractive index/density technique can be used to replace existing methods for analysis of fresh or nondegraded, CO₂-loaded, single and mixed alkanolamine solutions.
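
    The composition retrieval amounts to solving the two measurement equations simultaneously. The sketch below assumes hypothetical linear density and Gladstone-Dale-style refractive index models for a water + MEA + MDEA mixture (all coefficients invented for illustration) and recovers the two amine mass fractions from one (density, refractive index) pair.

        from scipy.optimize import fsolve

        def rho(x, y):            # hypothetical density model, g/mL
            return 0.997 + 0.018 * x + 0.025 * y

        def n(x, y):              # hypothetical Gladstone-Dale style model
            return 1.333 + 0.022 * x + 0.028 * y

        rho_meas, n_meas = 1.0049, 1.3424     # measured values for one sample

        def equations(v):
            x, y = v
            return [rho(x, y) - rho_meas, n(x, y) - n_meas]

        x_mea, y_mdea = fsolve(equations, [0.2, 0.2])
        print(f"MEA fraction ~ {x_mea:.2f}, MDEA fraction ~ {y_mdea:.2f}")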

  18. Analytical methods for determination of mycotoxins: a review.

    PubMed

    Turner, Nicholas W; Subrahmanyam, Sreenath; Piletsky, Sergey A

    2009-01-26

    Mycotoxins are small (MW approximately 700), toxic chemical products formed as secondary metabolites by a few fungal species that readily colonise crops and contaminate them with toxins in the field or after harvest. Ochratoxins and aflatoxins are mycotoxins of major significance, and hence there has been significant research on a broad range of analytical and detection techniques that could be useful and practical. Owing to the variety of structures of these toxins, it is impossible to use one standard technique for analysis and/or detection. Practical requirements for high-sensitivity analysis and the need for a specialist laboratory setting create challenges for routine analysis. Several existing analytical techniques, which offer flexible and broad-based methods of analysis and, in some cases, detection, are discussed in this manuscript. Many of the methods in use are lab-based, but to our knowledge no single technique stands out above the rest, although analytical liquid chromatography, commonly coupled with mass spectrometry, is likely to remain popular. This review discusses (a) sample pre-treatment methods such as liquid-liquid extraction (LLE), supercritical fluid extraction (SFE) and solid-phase extraction (SPE); (b) separation methods such as thin-layer chromatography (TLC), high-performance liquid chromatography (HPLC), gas chromatography (GC) and capillary electrophoresis (CE); and (c) others such as ELISA. Current trends, advantages and disadvantages, and future prospects of these methods are also discussed.

  19. A compact linear accelerator based on a scalable microelectromechanical-system RF-structure

    DOE PAGES

    Persaud, A.; Ji, Q.; Feinberg, E.; ...

    2017-06-08

    Here, a new approach for a compact radio-frequency (RF) accelerator structure is presented. The new accelerator architecture is based on the Multiple Electrostatic Quadrupole Array Linear Accelerator (MEQALAC) structure that was first developed in the 1980s. The MEQALAC utilized RF resonators producing the accelerating fields and providing for higher beam currents through parallel beamlets focused using arrays of electrostatic quadrupoles (ESQs). While the early work obtained ESQs with lateral dimensions on the order of a few centimeters, using a printed circuit board (PCB) we reduce the characteristic dimension to the millimeter regime, while massively scaling up the potential number of parallel beamlets. Using scalable microelectromechanical systems (MEMS) fabrication approaches, we are working on further reducing the characteristic dimension to the sub-millimeter regime. The technology is based on RF-acceleration components and ESQs implemented in the PCB or silicon wafers, where each beamlet passes through beam apertures in the wafer. The complete accelerator is then assembled by stacking these wafers. This approach has the potential for fast and inexpensive batch fabrication of the components and flexibility in system design for application-specific beam energies and currents. For prototyping the accelerator architecture, the components have been fabricated using the PCB. In this paper, we present proof-of-concept results for the principal components using the PCB: RF acceleration and ESQ focusing. Finally, ongoing developments on implementing components in silicon and scaling of the accelerator technology to high currents and beam energies are discussed.

  20. An Undulator-Based Laser Wakefield Accelerator Electron Beam Diagnostic

    NASA Astrophysics Data System (ADS)

    Bakeman, Michael S.

    Currently particle accelerators such as the Large Hadron Collider use RF cavities with a maximum field gradient of 50-100 MV/m to accelerate particles over long distances. A new type of plasma-based accelerator called a Laser Plasma Accelerator (LPA), which can sustain field gradients of 10-100 GV/m, is being investigated by the LOASIS group at Lawrence Berkeley National Laboratory. This new type of accelerator offers the potential to create compact high-energy accelerators and light sources. In order to investigate the feasibility of producing a compact light source, an undulator-based electron beam diagnostic for use on the LOASIS LPA has been built and calibrated. This diagnostic relies on the principle that the spectral analysis of synchrotron radiation from an undulator can reveal properties of the electron beam such as emittance, energy and energy spread. The effects of electron beam energy spread upon the harmonics of undulator-produced synchrotron radiation were derived from the equations of motion of the beam and numerically simulated. The diagnostic consists of quadrupole focusing magnets to collimate the electron beam, a 1.5 m long undulator to produce the synchrotron radiation, and a high-resolution, high-gain XUV spectrometer to analyze the radiation. The undulator was aligned and tuned in order to maximize the flux of synchrotron radiation produced. The spectrometer was calibrated at the Advanced Light Source, with the results showing the ability to measure electron beam energy spreads at resolutions as low as 0.1% rms, a major improvement over conventional magnetic spectrometers. Numerical simulations show the ability to measure energy spreads on realistic LPA-produced electron beams as well as the improvements in measurements made with the quadrupole magnets. Experimentally the quadrupoles were shown to stabilize and focus the electron beams at specific energies for their insertion into the undulator, with the eventual hope of producing an all-optical free-electron laser.

  1. Evaluation of analytical performance based on partial order methodology.

    PubMed

    Carlsen, Lars; Bruggemann, Rainer; Kenessova, Olga; Erzhigitov, Erkin

    2015-01-01

    Classical measurements of performance are typically based on linear scales. However, in analytical chemistry a simple scale may not be sufficient to analyze the analytical performance appropriately. Here partial order methodology can be helpful. Within the context described here, partial order analysis can be seen as an ordinal analysis of data matrices, especially to simplify the relative comparison of objects through their data profiles (the ordered set of values an object has). Hence, partial order methodology offers a unique possibility to evaluate analytical performance. In the present work, data as provided by laboratories through interlaboratory comparisons or proficiency testing are used as an illustrative example. However, the presented scheme is likewise applicable to the comparison of analytical methods, or simply as a tool for the optimization of an analytical method. The methodology can be applied without presumptions or pretreatment of the analytical data provided, in order to evaluate the analytical performance taking into account all indicators simultaneously and thus elucidating a "distance" from the true value. In the present illustrative example it is assumed that the laboratories analyze a given sample several times and subsequently report the mean value, the standard deviation and the skewness, which are used simultaneously for the evaluation of the analytical performance. The analyses lead to information concerning (1) a partial ordering of the laboratories, subsequently, (2) a "distance" to the reference laboratory and (3) a classification due to the concept of "peculiar points". Copyright © 2014 Elsevier B.V. All rights reserved.
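
    A minimal sketch of the dominance comparison that underlies such a partial order, assuming each laboratory is reduced to a hypothetical indicator profile (|mean - true|, standard deviation, |skewness|) in which lower is better on every indicator; the laboratory names and values below are illustrative only:

```python
from itertools import combinations

# Hypothetical indicator profiles per laboratory: (|mean - true|, sd, |skewness|).
# Lower is better on every indicator; all names and values are illustrative.
labs = {
    "Lab A": (0.10, 0.05, 0.2),
    "Lab B": (0.20, 0.04, 0.1),
    "Lab C": (0.25, 0.09, 0.3),
    "Ref":   (0.00, 0.03, 0.0),
}

def dominates(p, q):
    """p precedes q in the product order: at least as good on every indicator."""
    return all(a <= b for a, b in zip(p, q)) and p != q

# Pairs ordered in neither direction are incomparable: the hallmark of a
# partial (rather than linear) order of performance.
for x, y in combinations(labs, 2):
    if dominates(labs[x], labs[y]):
        print(f"{x} outperforms {y}")
    elif dominates(labs[y], labs[x]):
        print(f"{y} outperforms {x}")
    else:
        print(f"{x} and {y} are incomparable")
```

    Here "Lab A" and "Lab B" come out incomparable (each is better on a different indicator), which is exactly the information a single linear score would hide.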

  2. Compact lumped circuit model of discharges in DC accelerator using partial element equivalent circuit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Banerjee, Srutarshi; Rajan, Rehim N.; Singh, Sandeep K.

    2014-07-01

    DC accelerators undergo different types of discharges during operation. A model depicting the discharges has been simulated to study the different transient conditions. The paper presents a physics-based approach to developing a compact circuit model of the DC accelerator using the Partial Element Equivalent Circuit (PEEC) technique. The equivalent RLC model aids in analyzing the transient behavior of the system and predicting anomalies in the system. The electrical discharges and their properties prevailing in the accelerator can be evaluated by this equivalent model. A parallel coupled voltage multiplier structure is simulated in small scale using a few stages of corona guards, and the theoretical and practical results are compared. The PEEC technique leads to a simple model for studying fault conditions in accelerator systems. Compared to finite element techniques, this technique gives a circuital representation. The lumped components of the PEEC are used to obtain the input impedance, and the result is also compared to that of the FEM technique for a frequency range of 0-200 MHz.

  3. Electrostatic accelerators with high energy resolution

    NASA Astrophysics Data System (ADS)

    Uchiyama, T.; Agawa, Y.; Nishihashi, T.; Takagi, K.; Yamakawa, H.; Isoya, A.; Takai, M.; Namba, S.

    1991-05-01

    Several models of electrostatic accelerators based on rotating disks (Disktron) have been manufactured for various ion beam applications like surface analyses and implantation. The high voltage terminal of the Disktron with a terminal voltage of up to 500 kV is open in air, while the generator part is enclosed in FRP (fiber reinforced plastics) or a ceramic vessel filled with SF6 gas. The 1 MV model is completely enclosed in a steel vessel. A compact tandem accelerator of the pellet chain type with a terminal voltage of 1.5 MV has also been manufactured. The good energy stability of these accelerators, typically in the range of 10^-4, has proved to be quite favorable for applications in precise studies of material surfaces, including the use of microbeam techniques.

  4. Recent trends in sorption-based sample preparation and liquid chromatography techniques for food analysis.

    PubMed

    V Soares Maciel, Edvaldo; de Toffoli, Ana Lúcia; Lanças, Fernando Mauro

    2018-04-20

    The accelerating rise of the world's population has increased food consumption, demanding more rigor in the control of residues and contaminants in food-based products marketed for human consumption. In view of the complexity of most food matrices, including fruits, vegetables, different types of meat, beverages, among others, a sample preparation step is important to provide more reliable results when combined with HPLC separations. An adequate sample preparation step before the chromatographic analysis is mandatory for obtaining higher precision and accuracy and for improving the extraction of the target analytes, one of the priorities in analytical chemistry. The recent discovery of new materials such as ionic liquids, graphene-derived materials, molecularly imprinted polymers, restricted access media, magnetic nanoparticles, and carbonaceous nanomaterials has provided high-sensitivity and high-selectivity results in an extensive variety of applications. These materials, as well as their several possible combinations, have been demonstrated to be highly appropriate for the extraction of different analytes in complex samples such as food products. The main characteristics and applications of these new materials in food analysis are presented and discussed in this paper. Another topic discussed in this review covers the main advantages and limitations of sample preparation microtechniques, as well as their off-line and on-line combination with HPLC for food analysis. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. State of practice and emerging application of analytical techniques of nuclear forensic analysis: highlights from the 4th Collaborative Materials Exercise of the Nuclear Forensics International Technical Working Group (ITWG)

    DOE PAGES

    Schwantes, Jon M.; Marsden, Oliva; Pellegrini, Kristi L.

    2016-09-16

    The Nuclear Forensics International Technical Working Group (ITWG) recently completed its fourth Collaborative Materials Exercise (CMX-4) in the 21-year history of the Group. This was also the largest materials exercise to date, with participating laboratories from 16 countries or international organizations. Moreover, exercise samples (including three separate samples of low enriched uranium oxide) were shipped as part of an illicit trafficking scenario, for which each laboratory was asked to conduct nuclear forensic analyses in support of a fictitious criminal investigation. In all, over 30 analytical techniques were applied to characterize exercise materials, ten of which were applied to ITWG exercises for the first time. An objective review of the state of practice and emerging application of analytical techniques of nuclear forensic analysis, based upon the outcome of this most recent exercise, is provided.

  6. State of practice and emerging application of analytical techniques of nuclear forensic analysis: highlights from the 4th Collaborative Materials Exercise of the Nuclear Forensics International Technical Working Group (ITWG)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schwantes, Jon M.; Marsden, Oliva; Pellegrini, Kristi L.

    The Nuclear Forensics International Technical Working Group (ITWG) recently completed its fourth Collaborative Materials Exercise (CMX-4) in the 21-year history of the Group. This was also the largest materials exercise to date, with participating laboratories from 16 countries or international organizations. Moreover, exercise samples (including three separate samples of low enriched uranium oxide) were shipped as part of an illicit trafficking scenario, for which each laboratory was asked to conduct nuclear forensic analyses in support of a fictitious criminal investigation. In all, over 30 analytical techniques were applied to characterize exercise materials, ten of which were applied to ITWG exercises for the first time. An objective review of the state of practice and emerging application of analytical techniques of nuclear forensic analysis, based upon the outcome of this most recent exercise, is provided.

  7. Accelerator mass spectrometry.

    PubMed

    Hellborg, Ragnar; Skog, Göran

    2008-01-01

    In this overview the technique of accelerator mass spectrometry (AMS) and its use are described. AMS is a highly sensitive method of counting atoms. It is used to detect very low concentrations of natural isotopic abundances (typically in the range between 10^-12 and 10^-16) of both radionuclides and stable nuclides. The main advantages of AMS compared to conventional radiometric methods are the use of smaller samples (mg and even sub-mg size) and shorter measuring times (less than 1 hr). The equipment used for AMS is almost exclusively based on the electrostatic tandem accelerator, although some of the newest systems are based on a slightly different principle. Dedicated accelerators as well as older "nuclear physics machines" can be found in the 80 or so AMS laboratories in existence today. The most widely used isotope studied with AMS is 14C. Besides radiocarbon dating, this isotope is used in climate studies, biomedicine applications and many other fields. More than 100,000 14C samples are measured per year. Other isotopes studied include 10Be, 26Al, 36Cl, 41Ca, 59Ni, 129I, U, and Pu. Although these measurements are important, the number of samples of these other isotopes measured each year is estimated to be less than 10% of the number of 14C samples. Copyright 2008 Wiley Periodicals, Inc.

  8. A mass filter based on an accelerating traveling wave.

    PubMed

    Wiedenbeck, Michael; Kasemset, Bodin; Kasper, Manfred

    2008-01-01

    We describe a novel mass filtering concept based on the acceleration of a pulsed ion beam through a stack of electrostatic plates. A precisely controlled traveling wave generated within such an ion guide will induce a mass-selective ion acceleration, with mass separation ultimately accomplished via a simple energy-filtering system. Crucial for successful filtering is that the velocity with which the traveling wave passes through the ion guide must be dynamically controlled in order to accommodate the acceleration of the target ion species. Mass selection is determined by the velocity and acceleration with which the wave traverses the ion guide, whereby the target species will acquire a higher kinetic energy than all other lighter as well as heavier species. Finite element simulations of this design demonstrate that for small masses a mass resolution M/ΔM ≈ 1000 can be achieved within an electrode stack containing as few as 20 plates. Some of the possible advantages and drawbacks which distinguish this concept from established mass spectrometric technologies are discussed.
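
    A toy one-dimensional sketch of the selection principle described above, assuming (hypothetically) a sinusoidal wave whose position is programmed with constant acceleration; only masses that can stay phase-locked follow the wave, and the phase-locked target mass ends up with the highest kinetic energy. All parameter values are illustrative, not taken from the paper:

```python
import numpy as np

# Toy 1-D model of mass-selective acceleration by a traveling wave.
# Field seen by an ion: E(x, t) = E0 * sin(k * (s(t) - x)), where the wave
# position s(t) is programmed to accelerate at a constant rate a_wave.
q, E0, k = 1.0, 1.0, 2 * np.pi
a_wave = 0.5
phi0 = 1.2                                   # synchronous phase (rad), stable for phi0 < pi/2
m_target = q * E0 * np.sin(phi0) / a_wave    # mass that stays phase-locked at phi0

dt, steps = 1e-3, 20000
for m in (0.8 * m_target, m_target, 1.25 * m_target):
    x, v = -phi0 / k, 0.0                    # start at the synchronous phase, at rest
    for n in range(steps):
        s = 0.5 * a_wave * (n * dt) ** 2     # wave position
        v += (q * E0 / m) * np.sin(k * (s - x)) * dt   # semi-implicit Euler
        x += v * dt
    print(f"m/m_target = {m/m_target:.2f} -> final KE = {0.5*m*v*v:7.1f}")
```

    The heavier species cannot satisfy the phase-lock condition (it would need sin(phi) > 1), slips out of the wave and arrives with far less energy, which is what makes the downstream energy filter mass-selective.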

  9. Development and acceleration of unstructured mesh-based cfd solver

    NASA Astrophysics Data System (ADS)

    Emelyanov, V.; Karpenko, A.; Volkov, K.

    2017-06-01

    The study was undertaken as part of a larger effort to establish a common computational fluid dynamics (CFD) code for simulation of internal and external flows and involves some basic validation studies. The governing equations are solved with a finite volume code on unstructured meshes. The computational procedure involves reconstruction of the solution in each control volume and extrapolation of the unknowns to find the flow variables on the faces of the control volume, solution of a Riemann problem for each face of the control volume, and evolution of the solution over the time step. The nonlinear CFD solver works in an explicit time-marching fashion, based on a three-step Runge-Kutta stepping procedure. Convergence to a steady state is accelerated by the use of a geometric technique and by the application of Jacobi preconditioning for high-speed flows, with a separate low Mach number preconditioning method for use with low-speed flows. The CFD code is implemented on graphics processing units (GPUs). Speedup of the solution on GPUs with respect to solution on central processing units (CPUs) is compared for different meshes and different methods of distributing the input data into blocks. The results obtained provide a promising perspective for designing a GPU-based software framework for applications in CFD.
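
    A minimal sketch of the explicit time-marching scheme described above, assuming a 1-D periodic linear advection problem as a stand-in for the full unstructured-mesh solver: finite-volume cells, an upwind interface flux playing the role of the Riemann solution at each face, and a three-step (SSP) Runge-Kutta march. All parameters are illustrative:

```python
import numpy as np

# 1-D periodic finite-volume advection, marched with a three-step
# (SSP) Runge-Kutta scheme; the upwind flux is the exact Riemann
# solution at each cell face for constant advection speed c > 0.
N, L, c, cfl = 200, 1.0, 1.0, 0.4
dx = L / N
dt = cfl * dx / c
x = (np.arange(N) + 0.5) * dx
u0 = np.exp(-200 * (x - 0.3) ** 2)        # initial cell averages
u = u0.copy()

def residual(u):
    flux = c * np.roll(u, 1)              # upwind flux through each cell's left face
    return -(np.roll(flux, -1) - flux) / dx

for _ in range(int(0.4 / dt)):
    u1 = u + dt * residual(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * residual(u1))
    u = u / 3.0 + (2.0 / 3.0) * (u2 + dt * residual(u2))

# A conservative finite-volume scheme preserves total mass to round-off.
print("mass change:", abs(u.sum() - u0.sum()) * dx)
```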

  10. Earth Science Data Analytics: Bridging Tools and Techniques with the Co-Analysis of Large, Heterogeneous Datasets

    NASA Technical Reports Server (NTRS)

    Kempler, Steve; Mathews, Tiffany

    2016-01-01

    The continuum of ever-evolving data management systems affords great opportunities for the enhancement of knowledge and the facilitation of science research. To take advantage of these opportunities, it is essential to understand and develop methods that enable data relationships to be examined and the information to be manipulated. This presentation describes the efforts of the Earth Science Information Partners (ESIP) Federation Earth Science Data Analytics (ESDA) Cluster to understand, define, and facilitate the implementation of ESDA to advance science research. Given the scarcity of published material on Earth science data analytics, the cluster has defined ESDA along with 10 goals to set the framework for a common understanding of the tools and techniques that are available, and those still needed, to support ESDA.

  11. Multi-technique characterisation of commercial alizarin-based lakes

    NASA Astrophysics Data System (ADS)

    Pronti, Lucilla; Mazzitelli, Jean-Baptiste; Bracciale, Maria Paola; Massini Rosati, Lorenzo; Vieillescazes, Cathy; Santarelli, Maria Laura; Felici, Anna Candida

    2018-07-01

    The characterization of ancient and modern alizarin-based lakes is a widely studied topic in the literature. Analytical data on contemporary alizarin-based lakes, however, are still scarce, though of primary importance, since these lakes might be present in contemporary and fake paintings as well as in retouchings. In this work we systematically investigate the chemical composition and the optical features of fifteen alizarin-based lakes, by a multi-analytical approach combining spectroscopic methods (i.e. Energy Dispersive X-ray Fluorescence Spectroscopy, EDXRF; Attenuated Total Reflectance Fourier-Transform Infrared Spectroscopy, ATR-FTIR; X-ray Powder Diffraction, XRD; UV-induced fluorescence and reflectance spectroscopies) and chromatography (i.e. High-Performance Liquid Chromatography coupled with a Photodiode Array Detector, HPLC-PDA). Most of the samples contain typical compounds from the natural roots of madder, as occurring in ancient and modern lakes, but in two samples (23600-Kremer-Pigmente and alizarin crimson-Zecchi) no anthraquinonic structures were identified, leading to the hypothesis that synthetic dyes are present. The detection of lucidin primeveroside and ruberythric acid in some lakes suggests the use of Rubia tinctorum. One sample (23610-Kremer-Pigmente) presents alizarin as the sole compound, revealing it to be a synthetic dye. Moreover, gibbsite, alunite and kaolinite were found to be used as substrates and/or mordants. Visible absorption spectra of the anthraquinonic lakes show two main absorption bands at about 494-511 nm and 537-564 nm, along with a shoulder at about 473-479 nm in the presence of high amounts of purpurin. Finally, the results obtained by UV-induced fluorescence spectroscopy indicate that, although the madder lake is commonly assumed to present an orange-pink fluorescence, the inorganic compounds added to the recipe could induce a quenching phenomenon or an inhibition of the fluorescence.

  12. Retail video analytics: an overview and survey

    NASA Astrophysics Data System (ADS)

    Connell, Jonathan; Fan, Quanfu; Gabbur, Prasad; Haas, Norman; Pankanti, Sharath; Trinh, Hoang

    2013-03-01

    Today retail video analytics has gone beyond the traditional domain of security and loss prevention by providing retailers insightful business intelligence such as store traffic statistics and queue data. Such information allows for enhanced customer experience, optimized store performance, reduced operational costs, and ultimately higher profitability. This paper gives an overview of various camera-based applications in retail as well as the state-of-the-art computer vision techniques behind them. It also presents some of the promising technical directions for exploration in retail video analytics.

  13. GPU-accelerated computational tool for studying the effectiveness of asteroid disruption techniques

    NASA Astrophysics Data System (ADS)

    Zimmerman, Ben J.; Wie, Bong

    2016-10-01

    This paper presents the development of a new Graphics Processing Unit (GPU) accelerated computational tool for asteroid disruption techniques. Numerical simulations are completed using the high-order spectral difference (SD) method. Due to the compact nature of the SD method, it is well suited for implementation on the GPU architecture; hence solutions are generated orders of magnitude faster than with the Central Processing Unit (CPU) counterpart. A multiphase model integrated with the SD method is introduced, and several asteroid disruption simulations are conducted, including kinetic-energy impactors, multi-kinetic-energy impactor systems, and nuclear options. Results illustrate the benefits of using multi-kinetic-energy impactor systems when compared to a single impactor system. In addition, the effectiveness of nuclear options is observed.

  14. Feasibility of Optical Transition Radiation Imaging for Laser-driven Plasma Accelerator Electron-Beam Diagnostics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lumpkin, A. H.; Rule, D. W.; Downer, M. C.

    We report the initial considerations of using linearly polarized optical transition radiation (OTR) to characterize the electron beams of laser plasma accelerators (LPAs) such as those at the University of Texas at Austin. The two LPAs operate at 100 MeV and 2 GeV, and they currently have estimated normalized emittances in the ~1 mm mrad regime, with beam divergences less than 1/γ and beam sizes to be determined at the micron level. Analytical modeling results indicate the feasibility of using these OTR techniques for the LPA applications.

  15. An analytical and experimental evaluation of the plano-cylindrical Fresnel lens solar concentrator

    NASA Technical Reports Server (NTRS)

    Hastings, L. J.; Allums, S. L.; Cosby, R. M.

    1976-01-01

    Plastic Fresnel lenses for solar concentration are attractive because of their potential for low-cost mass production. An analytical and experimental evaluation of line-focusing Fresnel lenses with application potential in the 200 to 370 °C range is reported. Analytical techniques were formulated to assess the solar transmission and imaging properties of a grooves-down lens. Experimentation was based primarily on a 56 cm wide lens with f-number 1.0. A sun-tracking heliostat provided a non-moving solar source. Measured data indicated more spreading at the profile base than analytically predicted. The measured and computed transmittances were 85% and 87%, respectively. Preliminary testing with a second lens (1.85 m) indicated that modified manufacturing techniques corrected the profile-spreading problem.

  16. Analytical Challenges in Biotechnology.

    ERIC Educational Resources Information Center

    Glajch, Joseph L.

    1986-01-01

    Highlights five major analytical areas (electrophoresis, immunoassay, chromatographic separations, protein and DNA sequencing, and molecular structures determination) and discusses how analytical chemistry could further improve these techniques and thereby have a major impact on biotechnology. (JN)

  17. Treatment Planning for Accelerator-Based Boron Neutron Capture Therapy

    NASA Astrophysics Data System (ADS)

    Herrera, María S.; González, Sara J.; Minsky, Daniel M.; Kreiner, Andrés J.

    2010-08-01

    Glioblastoma multiforme and metastatic melanoma are frequent brain tumors in adults and presently still incurable diseases. Boron Neutron Capture Therapy (BNCT) is a promising alternative for these kinds of pathologies. Accelerators have been proposed for BNCT as a way to circumvent the problem of siting reactors in hospitals, and for their relative simplicity and lower cost, among other advantages. Considerable effort is going into the development of accelerator-based BNCT neutron sources in Argentina. Epithermal neutron beams will be produced through appropriate proton-induced nuclear reactions and optimized beam shaping assemblies. Using these sources, computational dose distributions were evaluated in a real patient with diagnosed glioblastoma treated with BNCT. The simulated irradiation was delivered so as to optimize dose to the tumors within the normal tissue constraints. Using Monte Carlo radiation transport calculations, dose distributions were generated for brain, skin and tumor. The dosimetry was also studied by computing cumulative dose-volume histograms for volumes of interest. The results suggest an acceptable average skin dose and a significant dose delivered to the tumor with a low average whole-brain dose for irradiation times of less than 60 minutes, indicating a good performance of an accelerator-based BNCT treatment.
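
    A minimal sketch of how a cumulative dose-volume histogram of the kind mentioned above can be computed from a dose grid and a region-of-interest mask; the dose array, mask geometry and D90 readout below are synthetic placeholders, not data from the study:

```python
import numpy as np

# Cumulative dose-volume histogram from a 3-D dose grid and a binary
# region-of-interest mask; arrays here are synthetic placeholders.
rng = np.random.default_rng(2)
dose = rng.gamma(shape=4.0, scale=3.0, size=(40, 40, 40))   # toy dose values
tumor_mask = np.zeros(dose.shape, dtype=bool)
tumor_mask[15:25, 15:25, 15:25] = True

def cumulative_dvh(dose, mask, n_bins=100):
    d = dose[mask]
    edges = np.linspace(0.0, d.max(), n_bins)
    # fraction of the structure's volume receiving at least each dose level
    volume_fraction = np.array([(d >= e).mean() for e in edges])
    return edges, volume_fraction

edges, vf = cumulative_dvh(dose, tumor_mask)
d90 = edges[np.searchsorted(-vf, -0.9)]     # dose covering 90% of the volume
print(f"D90 = {d90:.1f} (toy dose units)")
```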

  18. Treatment Planning for Accelerator-Based Boron Neutron Capture Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Herrera, Maria S.; Gonzalez, Sara J.; Minsky, Daniel M.

    2010-08-04

    Glioblastoma multiforme and metastatic melanoma are frequent brain tumors in adults and presently still incurable diseases. Boron Neutron Capture Therapy (BNCT) is a promising alternative for these kinds of pathologies. Accelerators have been proposed for BNCT as a way to circumvent the problem of siting reactors in hospitals, and for their relative simplicity and lower cost, among other advantages. Considerable effort is going into the development of accelerator-based BNCT neutron sources in Argentina. Epithermal neutron beams will be produced through appropriate proton-induced nuclear reactions and optimized beam shaping assemblies. Using these sources, computational dose distributions were evaluated in a real patient with diagnosed glioblastoma treated with BNCT. The simulated irradiation was delivered so as to optimize dose to the tumors within the normal tissue constraints. Using Monte Carlo radiation transport calculations, dose distributions were generated for brain, skin and tumor. The dosimetry was also studied by computing cumulative dose-volume histograms for volumes of interest. The results suggest an acceptable average skin dose and a significant dose delivered to the tumor with a low average whole-brain dose for irradiation times of less than 60 minutes, indicating a good performance of an accelerator-based BNCT treatment.

  19. Analysis of secondary particle behavior in multiaperture, multigrid accelerator for the ITER neutral beam injector.

    PubMed

    Mizuno, T; Taniguchi, M; Kashiwagi, M; Umeda, N; Tobari, H; Watanabe, K; Dairaku, M; Sakamoto, K; Inoue, T

    2010-02-01

    Heat load on acceleration grids by secondary particles such as electrons, neutrals, and positive ions is a key issue for long-pulse acceleration of negative ion beams. The complicated behaviors of the secondary particles in the multiaperture, multigrid (MAMuG) accelerator have been analyzed using an electrostatic accelerator Monte Carlo code. The analytical result is compared to the experimental one obtained in a long-pulse operation of a MeV accelerator, of which the second acceleration grid (A2G) was removed for simplification of the structure. The analytical results show a relatively high heat load on the third acceleration grid (A3G), since stripped electrons were deposited mainly on the A3G. This heat load on the A3G can be suppressed by installing the A2G. Thus, the capability of the MAMuG accelerator to suppress the heat load due to secondary particles via the intermediate grids is demonstrated.

  20. A genetic algorithm-based job scheduling model for big data analytics.

    PubMed

    Lu, Qinghua; Li, Shanshan; Zhang, Weishan; Zhang, Lei

    Big data analytics (BDA) applications are a new category of software applications that process large amounts of data using scalable parallel processing infrastructure to obtain hidden value. Hadoop is the most mature open-source big data analytics framework; it implements the MapReduce programming model to process big data with MapReduce jobs. Big data analytics jobs are often continuous and not mutually separated. The existing work mainly focuses on executing jobs in sequence, which is often inefficient and consumes substantial energy. In this paper, we propose a genetic algorithm-based job scheduling model for big data analytics applications to improve the efficiency of big data analytics. To implement the job scheduling model, we leverage an estimation module to predict the performance of clusters when executing analytics jobs. We have evaluated the proposed job scheduling model in terms of feasibility and accuracy.
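
    A toy sketch of genetic-algorithm job scheduling in the spirit described above, assuming a hypothetical two-stage (map/reduce style) cost model as a stand-in for the paper's cluster-performance estimation module; the job data, operators and parameters are all illustrative:

```python
import random

# Toy genetic algorithm searching for a job order that minimizes the
# makespan of a two-stage (map/reduce style) pipeline.
jobs = [(5, 2), (3, 7), (8, 3), (2, 2), (6, 5)]   # (map_time, reduce_time)

def makespan(order):
    t_map = t_reduce = 0.0
    for j in order:
        m, r = jobs[j]
        t_map += m                           # map stages run back to back
        t_reduce = max(t_reduce, t_map) + r  # each reduce waits for its map
    return t_reduce

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + [j for j in b if j not in a[:cut]]

def mutate(order):
    i, j = random.sample(range(len(order)), 2)
    order[i], order[j] = order[j], order[i]

pop = [random.sample(range(len(jobs)), len(jobs)) for _ in range(30)]
for _ in range(100):
    pop.sort(key=makespan)
    survivors = pop[:10]                     # elitist selection
    children = [crossover(random.choice(survivors), random.choice(survivors))
                for _ in range(20)]
    for child in children:
        if random.random() < 0.2:
            mutate(child)
    pop = survivors + children

best = min(pop, key=makespan)
print("best order:", best, "makespan:", makespan(best))
```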

  1. Big data analytics in immunology: a knowledge-based approach.

    PubMed

    Zhang, Guang Lan; Sun, Jing; Chitkushev, Lou; Brusic, Vladimir

    2014-01-01

    With the vast amount of immunological data available, immunology research is entering the big data era. These data vary in granularity, quality, and complexity and are stored in various formats, including publications, technical reports, and databases. The challenge is to make the transition from data to actionable knowledge and wisdom and bridge the knowledge gap and application gap. We report a knowledge-based approach based on a framework called KB-builder that facilitates data mining by enabling fast development and deployment of web-accessible immunological data knowledge warehouses. Immunological knowledge discovery relies heavily on both the availability of accurate, up-to-date, and well-organized data and the proper analytics tools. We propose the use of knowledge-based approaches by developing knowledgebases combining well-annotated data with specialized analytical tools and integrating them into analytical workflow. A set of well-defined workflow types with rich summarization and visualization capacity facilitates the transformation from data to critical information and knowledge. By using KB-builder, we enabled streamlining of normally time-consuming processes of database development. The knowledgebases built using KB-builder will speed up rational vaccine design by providing accurate and well-annotated data coupled with tailored computational analysis tools and workflow.

  2. Dielectrophoretic label-free immunoassay for rare-analyte quantification in biological samples

    NASA Astrophysics Data System (ADS)

    Velmanickam, Logeeshan; Laudenbach, Darrin; Nawarathna, Dharmakeerthi

    2016-10-01

    The current gold standard for detecting or quantifying target analytes from blood samples is the ELISA (enzyme-linked immunosorbent assay). The detection limit of ELISA is about 250 pg/ml. However, quantifying analytes that are related to various stages of tumors, including early detection, requires detecting well below the current limit of the ELISA test. For example, Interleukin 6 (IL-6) levels of early oral cancer patients are <100 pg/ml, and the prostate specific antigen level in the early stage of prostate cancer is about 1 ng/ml. Further, it has been reported that there are significantly less than 1 pg/mL of analytes in the early stage of tumors. Therefore, depending on the tumor type and the stage of the tumor, it is required to quantify various levels of analytes ranging from ng/ml to pg/ml. To accommodate these critical needs in current diagnosis, a technique is needed that has a large dynamic range with an ability to detect extremely low levels of target analytes. Here we present a technique based on dielectrophoresis that is capable of quantifying target analytes down to a few thousand molecules (~zmol).

  3. Research Data Alliance: Understanding Big Data Analytics Applications in Earth Science

    NASA Astrophysics Data System (ADS)

    Riedel, Morris; Ramachandran, Rahul; Baumann, Peter

    2014-05-01

    The Research Data Alliance (RDA) enables data to be shared across barriers through focused working groups and interest groups, formed of experts from around the world - from academia, industry and government. Its Big Data Analytics (BDA) interest group seeks to develop community-based recommendations on feasible data analytics approaches to address scientific community needs for utilizing large quantities of data. BDA seeks to analyze different scientific domain applications (e.g. earth science use cases) and their potential use of various big data analytics techniques. These techniques range from hardware deployment models to various algorithms (e.g. machine learning algorithms such as support vector machines for classification). A systematic classification of feasible combinations of analysis algorithms, analytical tools, data and resource characteristics and scientific queries will be covered in these recommendations. This contribution outlines initial parts of such a classification and recommendations in the specific context of the field of Earth Sciences. The lessons learned and experiences presented are based on a survey of use cases, with detailed insights into a few of them.

  4. Research Data Alliance: Understanding Big Data Analytics Applications in Earth Science

    NASA Technical Reports Server (NTRS)

    Riedel, Morris; Ramachandran, Rahul; Baumann, Peter

    2014-01-01

    The Research Data Alliance (RDA) enables data to be shared across barriers through focused working groups and interest groups, formed of experts from around the world - from academia, industry and government. Its Big Data Analytics (BDA) interest group seeks to develop community-based recommendations on feasible data analytics approaches to address scientific community needs for utilizing large quantities of data. BDA seeks to analyze different scientific domain applications (e.g. earth science use cases) and their potential use of various big data analytics techniques. These techniques range from hardware deployment models to various algorithms (e.g. machine learning algorithms such as support vector machines for classification). A systematic classification of feasible combinations of analysis algorithms, analytical tools, data and resource characteristics and scientific queries will be covered in these recommendations. This contribution outlines initial parts of such a classification and recommendations in the specific context of the field of Earth Sciences. The lessons learned and experiences presented are based on a survey of use cases, with detailed insights into a few of them.

  5. Generating clock signals for a cycle accurate, cycle reproducible FPGA based hardware accelerator

    DOEpatents

    Asaad, Sameth W.; Kapur, Mohit

    2016-01-05

    A method, system and computer program product are disclosed for generating clock signals for a cycle-accurate FPGA-based hardware accelerator used to simulate operations of a device-under-test (DUT). In one embodiment, the DUT includes multiple device clocks generating multiple device clock signals at multiple frequencies and at a defined frequency ratio; and the FPGA hardware accelerator includes multiple accelerator clocks generating multiple accelerator clock signals to operate the FPGA hardware accelerator to simulate the operations of the DUT. In one embodiment, operations of the DUT are mapped to the FPGA hardware accelerator, and the accelerator clock signals are generated at multiple frequencies and at the defined frequency ratio of the frequencies of the multiple device clocks, to maintain cycle accuracy between the DUT and the FPGA hardware accelerator. In an embodiment, the FPGA hardware accelerator may be used to control the frequencies of the multiple device clocks.

  6. Accelerated Testing Methodology in Constant Stress-Rate Testing for Advanced Structural Ceramics: A Preloading Technique

    NASA Technical Reports Server (NTRS)

    Choi, Sung R.; Gyekenyesi, John P.; Huebert, Dean; Bartlett, Allen; Choi, Han-Ho

    2001-01-01

    A preloading technique was used as a means of an accelerated testing methodology in constant stress-rate ('dynamic fatigue') testing of two different brittle materials. The theory developed previously for fatigue strength as a function of preload was further verified through extensive constant stress-rate testing of glass-ceramic and CRT glass in room-temperature distilled water. The preloading technique was also used in this study to identify the prevailing failure mechanisms at elevated temperatures, particularly at lower test rates, in which a series of mechanisms would be associated simultaneously with material failure, resulting in significant strength increase or decrease. Two different advanced ceramics, a SiC whisker-reinforced composite silicon nitride and a 96 wt% alumina, were used at elevated temperatures. It was found that the preloading technique can be used as an additional tool to pinpoint the dominant failure mechanism that is associated with such a phenomenon of considerable strength increase or decrease.

  7. Accelerated Testing Methodology in Constant Stress-Rate Testing for Advanced Structural Ceramics: A Preloading Technique

    NASA Technical Reports Server (NTRS)

    Choi, Sung R.; Gyekenyesi, John P.; Huebert, Dean; Bartlett, Allen; Choi, Han-Ho

    2001-01-01

    A preloading technique was used as a means of an accelerated testing methodology in constant stress-rate (dynamic fatigue) testing of two different brittle materials. The theory developed previously for fatigue strength as a function of preload was further verified through extensive constant stress-rate testing of glass-ceramic and CRT glass in room-temperature distilled water. The preloading technique was also used in this study to identify the prevailing failure mechanisms at elevated temperatures, particularly at lower test rates, in which a series of mechanisms would be associated simultaneously with material failure, resulting in significant strength increase or decrease. Two different advanced ceramics, a SiC whisker-reinforced composite silicon nitride and a 96 wt% alumina, were used at elevated temperatures. It was found that the preloading technique can be used as an additional tool to pinpoint the dominant failure mechanism that is associated with such a phenomenon of considerable strength increase or decrease.

  8. Ultra-High Sensitivity Techniques for the Determination of 3He/4He Abundances in Helium by Accelerator Mass Spectrometry

    NASA Astrophysics Data System (ADS)

    Mumm, H. P.; Huber, M.; Bauder, W.; Abrams, N.; Deibel, C.; Huffer, C.; Huffman, P.; Schelhammer, K.; Janssens, R.; Jiang, C.; Scott, R.; Pardo, R.; Rehm, K.; Vondrasek, R.; Swank, C.; O'Shaughnessy, C.; Paul, M.; Yang, L.

    2017-01-01

    We report the development of an Accelerator Mass Spectrometry technique to measure the 3He/4He isotopic ratio using a radio frequency (RF) discharge source and the ATLAS facility at Argonne National Laboratory. Control over the 3He/4He ratio in helium several orders of magnitude below natural abundance is critical for neutron lifetime and source experiments using liquid helium. Due to low ultimate beam currents, the ATLAS accelerator and beam line were tuned using a succession of species of the same M/q. A unique RF source was developed for the experiment due to large natural 3He backgrounds. Analog H3+ and DH+ molecular ions are eliminated by dissociation via a gold stripper foil near the detector. The stripped ions were dispersed in a magnetic spectrograph and 3He2+ ions counted in the focal plane detector. This technique is sensitive to 3He/4He ratios in the regime of 10^-12, with backgrounds that appear to be below 10^-14. The techniques used to reduce the backgrounds and the remaining outstanding problems are presented, along with results from measurements on high-purity 4He samples.

  9. An integrated approach using orthogonal analytical techniques to characterize heparan sulfate structure.

    PubMed

    Beccati, Daniela; Lech, Miroslaw; Ozug, Jennifer; Gunay, Nur Sibel; Wang, Jing; Sun, Elaine Y; Pradines, Joël R; Farutin, Victor; Shriver, Zachary; Kaundinya, Ganesh V; Capila, Ishan

    2017-02-01

    Heparan sulfate (HS), a glycosaminoglycan present on the surface of cells, has been postulated to have important roles in driving both normal and pathological physiologies. The chemical structure and sulfation pattern (domain structure) of HS is believed to determine its biological function, to vary across tissue types, and to be modified in the context of disease. Characterization of HS requires isolation and purification of cell surface HS as a complex mixture. This process may introduce additional chemical modification of the native residues. In this study, we describe an approach towards thorough characterization of bovine kidney heparan sulfate (BKHS) that utilizes a variety of orthogonal analytical techniques (e.g. NMR, IP-RP-HPLC, LC-MS). These techniques are applied to characterize this mixture at various levels including composition, fragment level, and overall chain properties. The combination of these techniques in many instances provides orthogonal views into the fine structure of HS, and in other instances provides overlapping/confirmatory information from different perspectives. Specifically, this approach enables quantitative determination of natural and modified saccharide residues in the HS chains, and identifies unusual structures. Analysis of partially digested HS chains allows for a better understanding of the domain structures within this mixture, and yields specific insights into the non-reducing end and reducing end structures of the chains. This approach outlines a useful framework that can be applied to elucidate HS structure and thereby provides means to advance understanding of its biological role and potential involvement in disease progression. In addition, the techniques described here can be applied to characterization of heparin from different sources.

  10. Peptidomics: the integrated approach of MS, hyphenated techniques and bioinformatics for neuropeptide analysis.

    PubMed

    Boonen, Kurt; Landuyt, Bart; Baggerman, Geert; Husson, Steven J; Huybrechts, Jurgen; Schoofs, Liliane

    2008-02-01

    MS is currently one of the most important analytical techniques in biological and medical research. ESI and MALDI launched the field of MS into biology. The performance of mass spectrometers increased tremendously over the past decades. Other technological advances increased the analytical power of biological MS even more. First, the advent of the genome projects allowed an automated analysis of mass spectrometric data. Second, improved separation techniques, like nanoscale HPLC, are essential for MS analysis of biomolecules. The recent progress in bioinformatics is the third factor that accelerated the biochemical analysis of macromolecules. The first part of this review will introduce the basics of these techniques. The field that integrates all these techniques to identify endogenous peptides is called peptidomics and will be discussed in the last section. This integrated approach aims at identifying all the present peptides in a cell, organ or organism (the peptidome). Today, peptidomics is used by several fields of research. Special emphasis will be given to the identification of neuropeptides, a class of short proteins that fulfil several important intercellular signalling functions in every animal. MS imaging techniques and biomarker discovery will also be discussed briefly.

  11. GPU-accelerated non-uniform fast Fourier transform-based compressive sensing spectral domain optical coherence tomography.

    PubMed

    Xu, Daguang; Huang, Yong; Kang, Jin U

    2014-06-16

    We implemented graphics processing unit (GPU) accelerated compressive sensing (CS) non-uniform in k-space spectral domain optical coherence tomography (SD OCT). The Kaiser-Bessel (KB) function and the Gaussian function are used independently as the convolution kernel in the gridding-based non-uniform fast Fourier transform (NUFFT) algorithm, with different oversampling ratios and kernel widths. Our implementation is compared with the GPU-accelerated modified non-uniform discrete Fourier transform (MNUDFT) matrix-based CS SD OCT and the GPU-accelerated fast Fourier transform (FFT)-based CS SD OCT. It was found that our implementation has comparable performance to the GPU-accelerated MNUDFT-based CS SD OCT in terms of image quality while providing more than 5 times speed enhancement. When compared to the GPU-accelerated FFT-based CS SD OCT, it shows smaller background noise and fewer side lobes while eliminating the need for the cumbersome k-space grid filling and k-linear calibration procedures. Finally, we demonstrated that by using a conventional desktop computer architecture with three GPUs, real-time B-mode imaging can be obtained in excess of 30 fps for the GPU-accelerated NUFFT-based CS SD OCT with frame size 2048 (axial) × 1000 (lateral).
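
    A minimal 1-D sketch of the gridding-based NUFFT with a Kaiser-Bessel kernel, the core operation named above: non-uniform samples are convolved onto an oversampled uniform grid and then inverse-FFT'd. Density compensation and deapodization are omitted for brevity, and all parameters and data are illustrative:

```python
import numpy as np

# Gridding-based NUFFT in 1-D with a Kaiser-Bessel convolution kernel.
N, os_ratio, W, beta = 256, 2.0, 4, 8.0   # image size, oversampling, kernel width
M = int(N * os_ratio)
rng = np.random.default_rng(0)
k = np.sort(rng.uniform(-0.5, 0.5, 3 * N))       # non-uniform sample locations
data = np.exp(-2j * np.pi * k * 20)              # spectrum of a point at pixel +20

def kaiser_bessel(u):
    out = np.zeros_like(u)
    m = np.abs(u) < W / 2
    out[m] = np.i0(beta * np.sqrt(1 - (2 * u[m] / W) ** 2)) / np.i0(beta)
    return out

grid = np.zeros(M, dtype=complex)
for c, d in zip((k + 0.5) * M, data):            # map k in [-0.5, 0.5) to [0, M)
    idx = np.arange(int(c) - W, int(c) + W + 1)
    grid[idx % M] += d * kaiser_bessel(idx - c)  # spread each sample onto the grid

img = np.fft.fftshift(np.fft.ifft(np.fft.ifftshift(grid)))
img = img[(M - N) // 2:(M + N) // 2]             # crop the oversampled FOV
print("recovered peak at pixel", np.argmax(np.abs(img)) - N // 2)
```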

  12. Particle acceleration in laser-driven magnetic reconnection

    DOE PAGES

    Totorica, S. R.; Abel, T.; Fiuza, F.

    2017-04-03

    Particle acceleration induced by magnetic reconnection is thought to be a promising candidate for producing the nonthermal emissions associated with explosive phenomena such as solar flares, pulsar wind nebulae, and jets from active galactic nuclei. Laboratory experiments can play an important role in the study of the detailed microphysics of magnetic reconnection and the dominant particle acceleration mechanisms. We have used two- and three-dimensional particle-in-cell simulations to study particle acceleration in high Lundquist number reconnection regimes associated with laser-driven plasma experiments. For current experimental conditions, we show that nonthermal electrons can be accelerated to energies more than an order of magnitude larger than the initial thermal energy. The nonthermal electrons gain their energy mainly from the reconnection electric field near the X points, and particle injection into the reconnection layer and escape from the finite system establish a distribution of energies that resembles a power-law spectrum. Energetic electrons can also become trapped inside the plasmoids that form in the current layer and gain additional energy from the electric field arising from the motion of the plasmoid. We compare simulations for finite and infinite periodic systems to demonstrate the importance of particle escape on the shape of the spectrum. Based on our findings, we provide an analytical estimate of the maximum electron energy and a threshold condition for observing suprathermal electron acceleration in terms of experimentally tunable parameters. We also discuss experimental signatures, including the angular distribution of the accelerated particles, and construct synthetic detector spectra. These results open the way for novel experimental studies of particle acceleration induced by reconnection.

  13. Creating analytically divergence-free velocity fields from grid-based data

    NASA Astrophysics Data System (ADS)

    Ravu, Bharath; Rudman, Murray; Metcalfe, Guy; Lester, Daniel R.; Khakhar, Devang V.

    2016-10-01

    We present a method, based on B-splines, to calculate a C2 continuous analytic vector potential from discrete 3D velocity data on a regular grid. A continuous, analytically divergence-free velocity field can then be obtained from the curl of the potential. This field can be used to robustly and accurately integrate particle trajectories in incompressible flow fields. Based on the method of Finn and Chacon (2005) [10], this new method ensures that the analytic velocity field matches the grid values almost everywhere, with errors that are two to four orders of magnitude lower than those of existing methods. We demonstrate its application to three different problems (each in a different coordinate system) and provide details of the specifics required in each case. We show how the additional accuracy of the method results in qualitatively and quantitatively superior trajectories, which in turn yield more accurate identification of Lagrangian coherent structures.
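
    A small symbolic check of the identity the method exploits, namely that the curl of any twice-differentiable vector potential is exactly divergence-free; the potential chosen below is arbitrary and purely illustrative:

```python
import sympy as sp

# For any C2 vector potential A, the velocity field v = curl(A) is
# divergence-free: div(curl(A)) = 0 identically.
x, y, z = sp.symbols("x y z")
A = sp.Matrix([sp.sin(y * z), x**2 * sp.cos(z), sp.exp(x) * y])  # arbitrary smooth A

v = sp.Matrix([
    sp.diff(A[2], y) - sp.diff(A[1], z),   # (curl A)_x
    sp.diff(A[0], z) - sp.diff(A[2], x),   # (curl A)_y
    sp.diff(A[1], x) - sp.diff(A[0], y),   # (curl A)_z
])
div_v = sp.simplify(sp.diff(v[0], x) + sp.diff(v[1], y) + sp.diff(v[2], z))
print(div_v)  # prints 0: divergence-free by construction
```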

  14. Innovative single-shot diagnostics for electrons accelerated through laser-plasma interaction at FLAME

    NASA Astrophysics Data System (ADS)

    Bisesto, F. G.; Anania, M. P.; Chiadroni, E.; Cianchi, A.; Costa, G.; Curcio, A.; Ferrario, M.; Galletti, M.; Pompili, R.; Schleifer, E.; Zigler, A.

    2017-05-01

    Plasma wakefield acceleration is the most promising acceleration technique known nowadays, able to provide very high accelerating fields (>100 GV/m) and enabling acceleration of electrons to GeV energies in a few centimeters. Here we present all the plasma-related activities currently underway at SPARC LAB exploiting the high-power laser FLAME. In particular, we give an overview of the single-shot diagnostics employed: electro-optic sampling (EOS) for temporal measurements and optical transition radiation (OTR) for innovative single-shot emittance measurements. In detail, the EOS technique has been employed to measure for the first time the longitudinal profile of the electric field of fast electrons escaping from a solid target, driving ion and proton acceleration, and to study the impact of using different target shapes. Moreover, a novel scheme for single-shot emittance measurements based on OTR, developed and tested at the SPARC LAB LINAC, is shown.

  15. Multimodal system planning technique : an analytical approach to peak period operation

    DOT National Transportation Integrated Search

    1995-11-01

    The multimodal system planning technique described in this report is an improvement of the methodology used in the Dallas System Planning Study. The technique includes a spreadsheet-based process to identify the costs of congestion, construction, and...

  16. Assessment of ground-based monitoring techniques applied to landslide investigations

    NASA Astrophysics Data System (ADS)

    Uhlemann, S.; Smith, A.; Chambers, J.; Dixon, N.; Dijkstra, T.; Haslam, E.; Meldrum, P.; Merritt, A.; Gunn, D.; Mackay, J.

    2016-01-01

    A landslide complex in the Whitby Mudstone Formation at Hollin Hill, North Yorkshire, UK is periodically re-activated in response to rainfall-induced pore-water pressure fluctuations. This paper compares long-term measurements (i.e., 2009-2014) obtained from a combination of monitoring techniques that have been employed together for the first time on an active landslide. The results highlight the relative performance of the different techniques, and can provide guidance for researchers and practitioners for selecting and installing appropriate monitoring techniques to assess unstable slopes. Particular attention is given to the spatial and temporal resolutions offered by the different approaches, which include: Real Time Kinematic-GPS (RTK-GPS) monitoring of a ground surface marker array, conventional inclinometers, Shape Acceleration Arrays (SAA), tilt meters, active waveguides with Acoustic Emission (AE) monitoring, and piezometers. High spatial resolution information has allowed locating areas of stability and instability across a large slope. This has enabled identification of areas where further monitoring efforts should be focused. High temporal resolution information allowed the capture of 'S'-shaped slope displacement-time behaviour (i.e. phases of slope acceleration, deceleration and stability) in response to elevations in pore-water pressures. This study shows that a well-balanced suite of monitoring techniques that provides high temporal and spatial resolutions at both the measurement and slope scales is necessary to fully understand the failure and movement mechanisms of slopes. In the case of the Hollin Hill landslide it enabled detailed interpretation of the geomorphological processes governing landslide activity. It highlights the benefit of regularly surveying a network of GPS markers to determine areas for installation of movement monitoring techniques that offer higher resolution both temporally and spatially. The small sensitivity of tilt meter measurements

  17. Fisher information of accelerated two-qubit systems

    NASA Astrophysics Data System (ADS)

    Metwally, N.

    2018-02-01

    In this paper, Fisher information for an accelerated system initially prepared in the X-state is discussed. An analytical solution, which consists of three parts (classical, the average over all pure states, and a mixture of pure states), is derived for the general state and for the Werner state. It is shown that the Unruh acceleration has a depleting effect on the Fisher information. This depletion depends on the degree of entanglement of the initial state settings. For the X-state, over some intervals of Unruh acceleration, the Fisher information remains constant, irrespective of the acceleration value. In general, the possibility of estimating the state's parameters decreases as the acceleration increases. However, the precision of estimation can be maximized for certain values of the Unruh acceleration. We also investigate the contribution of the different parts of the Fisher information to the dynamics of the total Fisher information.
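
    For reference, a worked statement of the standard quantum Fisher information on which such analyses are built; the decomposition below into a classical term and a term from the mixture of pure states is the textbook form and is assumed here, not quoted from the paper:

```latex
% Standard quantum Fisher information for a parameter \theta (assumed form).
\[
  F_Q(\theta) = \mathrm{Tr}\!\left[\rho_\theta L_\theta^2\right],
  \qquad
  \partial_\theta \rho_\theta = \tfrac{1}{2}\!\left(L_\theta \rho_\theta + \rho_\theta L_\theta\right).
\]
% With the spectral decomposition $\rho_\theta = \sum_i p_i |i\rangle\langle i|$:
\[
  F_Q(\theta) = \underbrace{\sum_i \frac{(\partial_\theta p_i)^2}{p_i}}_{\text{classical part}}
  \;+\; 2 \sum_{i \neq j} \frac{(p_i - p_j)^2}{p_i + p_j}
  \left|\langle i | \partial_\theta j \rangle\right|^2 .
\]
```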

  18. Speculation and replication in temperature accelerated dynamics

    DOE PAGES

    Zamora, Richard J.; Perez, Danny; Voter, Arthur F.

    2018-02-12

    Accelerated Molecular Dynamics (AMD) is a class of MD-based algorithms for the long-timescale simulation of atomistic systems that are characterized by rare-event transitions. Temperature-Accelerated Dynamics (TAD), a traditional AMD approach, hastens state-to-state transitions by performing MD at an elevated temperature. Recently, Speculatively-Parallel TAD (SpecTAD) was introduced, allowing the TAD procedure to exploit parallel computing systems by concurrently executing in a dynamically generated list of speculative future states. Although speculation can be very powerful, it is not always the most efficient use of parallel resources. In this paper, we compare the performance of speculative parallelism with a replica-based technique similar to the Parallel Replica Dynamics method. A hybrid SpecTAD approach is also presented, in which each speculation process is further accelerated by a local set of replicas. Overall, this work motivates the use of hybrid parallelism whenever possible, as some combination of speculation and replication is typically most efficient.

  19. Speculation and replication in temperature accelerated dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zamora, Richard J.; Perez, Danny; Voter, Arthur F.

    Accelerated Molecular Dynamics (AMD) is a class of MD-based algorithms for the long-timescale simulation of atomistic systems that are characterized by rare-event transitions. Temperature-Accelerated Dynamics (TAD), a traditional AMD approach, hastens state-to-state transitions by performing MD at an elevated temperature. Recently, Speculatively-Parallel TAD (SpecTAD) was introduced, allowing the TAD procedure to exploit parallel computing systems by concurrently executing in a dynamically generated list of speculative future states. Although speculation can be very powerful, it is not always the most efficient use of parallel resources. In this paper, we compare the performance of speculative parallelism with a replica-based technique similar to the Parallel Replica Dynamics method. A hybrid SpecTAD approach is also presented, in which each speculation process is further accelerated by a local set of replicas. Overall, this work motivates the use of hybrid parallelism whenever possible, as some combination of speculation and replication is typically most efficient.

  20. Evaluation of accelerated iterative x-ray CT image reconstruction using floating point graphics hardware.

    PubMed

    Kole, J S; Beekman, F J

    2006-02-21

    Statistical reconstruction methods offer possibilities to improve image quality as compared with analytical methods, but current reconstruction times prohibit routine application in clinical CT and micro-CT. In particular, for cone-beam x-ray CT, the use of graphics hardware has been proposed to accelerate the forward- and back-projection operations in order to reduce reconstruction times. In the past, wide application of this texture-mapping hardware approach was hampered by its limited intrinsic accuracy. Recently, however, floating point precision has become available in the latest generation of commodity graphics cards. In this paper, we utilize this feature to construct a graphics hardware accelerated version of the ordered subset convex reconstruction algorithm. The aims of this paper are (i) to study the impact of graphics hardware acceleration for statistical reconstruction on the reconstructed image accuracy and (ii) to measure the speed increase one can obtain by using graphics hardware acceleration. We compare the unaccelerated algorithm with the graphics hardware accelerated version, and for the latter we consider two different interpolation techniques. A simulation study of a micro-CT scanner with a mathematical phantom shows that, at almost preserved reconstructed image accuracy, speed-ups of a factor of 40 to 222 can be achieved compared with the unaccelerated algorithm, depending on the phantom and detector sizes. Reconstruction from physical phantom data reconfirms the usability of the accelerated algorithm for practical cases.
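
    To illustrate the ordered-subset idea that gives such algorithms their speed (several effective image updates per pass over the projection data), here is a toy sketch on a dense random system; a plain least-squares gradient step stands in for the convex transmission-CT update, and all shapes and values are illustrative:

```python
import numpy as np

# Ordered-subset flavor of iterative reconstruction on a toy dense system:
# the projections are split into subsets and the image is updated after
# each subset, rather than once per full pass over the data.
rng = np.random.default_rng(0)
A = rng.random((120, 40))                 # toy projection (system) matrix
x_true = rng.random(40)
y = A @ x_true                            # noiseless projection data

x = np.zeros(40)
subsets = np.array_split(rng.permutation(120), 6)
for _ in range(20):                       # passes over all data
    for s in subsets:
        As, ys = A[s], y[s]
        grad = As.T @ (As @ x - ys)       # gradient on this subset only
        x -= grad / np.linalg.norm(As, 2) ** 2
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```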

  1. Comprehensive quantification of signal-to-noise ratio and g-factor for image-based and k-space-based parallel imaging reconstructions.

    PubMed

    Robson, Philip M; Grant, Aaron K; Madhuranthakam, Ananth J; Lattanzi, Riccardo; Sodickson, Daniel K; McKenzie, Charles A

    2008-10-01

    Parallel imaging reconstructions result in spatially varying noise amplification characterized by the g-factor, precluding conventional measurements of noise from the final image. A simple Monte Carlo based method is proposed for all linear image reconstruction algorithms, which allows measurement of signal-to-noise ratio and g-factor and is demonstrated for SENSE and GRAPPA reconstructions for accelerated acquisitions that have not previously been amenable to such assessment. Only a simple "prescan" measurement of noise amplitude and correlation in the phased-array receiver, and a single accelerated image acquisition are required, allowing robust assessment of signal-to-noise ratio and g-factor. The "pseudo multiple replica" method has been rigorously validated in phantoms and in vivo, showing excellent agreement with true multiple replica and analytical methods. This method is universally applicable to the parallel imaging reconstruction techniques used in clinical applications and will allow pixel-by-pixel image noise measurements for all parallel imaging strategies, allowing quantitative comparison between arbitrary k-space trajectories, image reconstruction, or noise conditioning techniques. (c) 2008 Wiley-Liss, Inc.
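
    A minimal sketch of the "pseudo multiple replica" idea, assuming a linear reconstruction callable recon (for example a SENSE or GRAPPA pipeline), k-space data shaped (coils, kx, ky), and a prescan coil noise covariance Psi; all three names are placeholders.

    ```python
    import numpy as np

    def pseudo_replica_snr(kspace, recon, Psi, n_rep=100, rng=None):
        """Monte Carlo SNR map from repeated reconstructions of resampled noise."""
        rng = rng or np.random.default_rng(0)
        L = np.linalg.cholesky(Psi)                # coil correlation from the prescan
        ref = recon(kspace)
        reps = []
        for _ in range(n_rep):
            white = (rng.standard_normal(kspace.shape)
                     + 1j * rng.standard_normal(kspace.shape))
            correlated = np.einsum('ij,j...->i...', L, white) / np.sqrt(2)
            reps.append(recon(kspace + correlated))
        noise_sd = np.std(np.stack(reps), axis=0)  # pixelwise noise amplitude
        return np.abs(ref) / np.maximum(noise_sd, 1e-12)
    ```

    The g-factor map then follows pixel by pixel as SNR_full / (SNR_accelerated · sqrt(R)) for acceleration factor R, using two such maps.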

  2. An Experimental Study on the Fabrication of Glass-based Acceleration Sensor Body Using Micro Powder Blasting Method

    PubMed Central

    Park, Dong-Sam; Yun, Dae-Jin; Cho, Myeong-Woo; Shin, Bong-Cheol

    2007-01-01

    This study investigated the feasibility of the micro powder blasting technique for the micro fabrication of sensor structures using Pyrex glass, to replace the existing silicon-based acceleration sensor fabrication processes. As preliminary experiments, the effects of the blasting pressure, the mass flow rate of abrasive and the number of nozzle scanning times on the erosion depth of the Pyrex and soda lime glasses were examined. From the experimental results, optimal blasting conditions were selected for the Pyrex glass machining. The dimensions of the designed glass sensor were 1.7×1.7×0.6 mm for the vibrating mass and 2.9×0.7×0.2 mm for the cantilever beam. The machining results showed that the dimensional errors of the machined glass sensor ranged from 3 μm minimum to 20 μm maximum. These results imply that the micro powder blasting method can be applied to the micromachining of glass-based acceleration sensors to replace the existing method.

  3. Product identification techniques used as training aids for analytical chemists

    NASA Technical Reports Server (NTRS)

    Grillo, J. P.

    1968-01-01

    Laboratory staff assistants are trained to use data and observations of routine product analyses performed by experienced analytical chemists when analyzing compounds for potential toxic hazards. Commercial products are used as examples in teaching the analytical approach to unknowns.

  4. Tandem-ESQ for Accelerator-Based Boron Neutron Capture Therapy (AB-BNCT)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kreiner, A. J.; Escuela de Ciencia y Tecnologia, Universidad de Gral San Martin; CONICET

    2007-02-12

    A folded tandem, with 1.25 MV terminal voltage, combined with an ElectroStatic Quadrupole (ESQ) chain is being proposed as a machine for Accelerator-Based Boron Neutron Capture Therapy (AB-BNCT). The machine is shown to be capable of accelerating a 30 mA proton beam to 2.5 MeV. These are the specifications needed to produce sufficiently intense and clean epithermal neutron beams, based on the 7Li(p,n)7Be reaction, to perform BNCT treatment of deep-seated tumors in less than an hour.

  5. Accelerated construction

    DOT National Transportation Integrated Search

    2004-01-01

    Accelerated Construction Technology Transfer (ACTT) is a strategic process that uses various innovative techniques, strategies, and technologies to minimize actual construction time, while enhancing quality and safety on today's large, complex multip...

  6. Ion beams provided by small accelerators for material synthesis and characterization

    NASA Astrophysics Data System (ADS)

    Mackova, Anna; Havranek, Vladimir

    2017-06-01

    The compact, multipurpose electrostatic tandem accelerators are extensively used for the production of ion beams, with energies in the range from 400 keV to 24 MeV, of almost all elements of the periodic system for trace element analysis by means of nuclear analytical methods. The ion beams produced by small accelerators have broad application, mainly for material characterization (Rutherford Back-Scattering spectrometry, Particle Induced X-ray Emission analysis, Nuclear Reaction Analysis and Ion-Microprobe with 1 μm lateral resolution, among others) and for high-energy implantation. Material research belongs to the traditionally progressive fields of technology. Due to continuous miniaturization, the underlying structures are far beyond the analytical limits of most conventional methods. Ion Beam Analysis (IBA) techniques provide this possibility as they use probes of similar or much smaller dimensions (particles, radiation). Ion beams can be used for the synthesis of new progressive functional nanomaterials for optics, electronics and other applications. Ion beams are extensively used in studies of the fundamental energetic ion interaction with matter, as well as in novel nanostructure synthesis using ion beam irradiation of various amorphous and crystalline materials in order to obtain structures with extraordinary functional properties. IBA methods serve for the investigation of materials coming from material research, industry, micro- and nano-technology, electronics, optics and laser technology, and chemical, biological and environmental investigation in general. Main research directions in laboratories employing small accelerators also include the preparation and characterization of micro- and nano-structured materials, which are of interest for basic and oriented research in material science, and various studies of biological, geological, environmental and cultural heritage artefacts are carried out as well.

  7. Various extraction and analytical techniques for isolation and identification of secondary metabolites from Nigella sativa seeds.

    PubMed

    Liu, X; Abd El-Aty, A M; Shim, J-H

    2011-10-01

    Nigella sativa L. (black cumin), commonly known as black seed, is a member of the Ranunculaceae family. This seed is used as a natural remedy in many Middle Eastern and Far Eastern countries. Extracts prepared from N. sativa have, for centuries, been used for medical purposes. Thus far, the organic compounds in N. sativa, including alkaloids, steroids, carbohydrates, flavonoids, fatty acids, etc., have been fairly well characterized. Herein, we summarize some new extraction techniques, including microwave-assisted extraction (MAE) and supercritical fluid extraction (SFE), in addition to the classical method of hydrodistillation (HD), which have been employed for isolation, as well as the various analytical techniques used for the identification of secondary metabolites in black seed. We believe that some compounds contained in N. sativa remain to be identified, and that high-throughput screening could help to identify new compounds. A study addressing environmentally-friendly techniques that have minimal or no environmental effects is currently underway in our laboratory.

  8. Aerodynamic measurement techniques. [laser based diagnostic techniques

    NASA Technical Reports Server (NTRS)

    Hunter, W. W., Jr.

    1976-01-01

    Laser characteristics of intensity, monochromaticity, spatial coherence, and temporal coherence were exploited to advance laser-based diagnostic techniques for aerodynamics-related research. Two broad categories, visualization and optical measurements, were considered, and three techniques received significant attention: holography, laser velocimetry, and Raman scattering. Examples of quantitative laser velocimeter and Raman scattering measurements of velocity, temperature, and density indicated the potential of these nonintrusive techniques.

  9. On accelerated flow of MHD powell-eyring fluid via homotopy analysis method

    NASA Astrophysics Data System (ADS)

    Salah, Faisal; Viswanathan, K. K.; Aziz, Zainal Abdul

    2017-09-01

    The aim of this article is to obtain the approximate analytical solution for incompressible magnetohydrodynamic (MHD) flow of a Powell-Eyring fluid induced by an accelerated plate. Both constant and variable acceleration cases are investigated. The approximate analytical solution in each case is obtained by using the Homotopy Analysis Method (HAM). The resulting nonlinear analysis is carried out to generate the series solution. Finally, graphical outcomes for different values of the material constant parameters on the velocity flow field are discussed and analyzed.

  10. Tsallis entropy and complexity theory in the understanding of physics of precursory accelerating seismicity.

    NASA Astrophysics Data System (ADS)

    Vallianatos, Filippos; Chatzopoulos, George

    2014-05-01

    Strong observational indications support the hypothesis that many large earthquakes are preceded by accelerating seismic release rates, which are described by a power-law time-to-failure relation. In the present work, a unified theoretical framework is discussed based on the ideas of non-extensive statistical physics along with fundamental principles of physics such as energy conservation in a faulted crustal volume undergoing stress loading. We derive the time-to-failure power law of: a) cumulative number of earthquakes, b) cumulative Benioff strain and c) cumulative energy released in a fault system that obeys a hierarchical distribution law extracted from Tsallis entropy. Considering the analytic conditions near the time of failure, we derive from first principles the time-to-failure power law and show that a common critical exponent m(q) exists, which is a function of the non-extensive entropic parameter q. We conclude that the cumulative precursory parameters are functions of the energy supplied to the system and the size of the precursory volume. In addition, the q-exponential distribution which describes the fault system is a crucial factor in the appearance of power-law acceleration in the seismicity. Our results based on Tsallis entropy and energy conservation give a new view on the empirical laws derived by other researchers. Examples and applications of this technique to observations of accelerating seismicity will also be presented and discussed. This work was implemented through the project IMPACT-ARC in the framework of action "ARCHIMEDES III-Support of Research Teams at TEI of Crete" (MIS380353) of the Operational Program "Education and Lifelong Learning" and is co-financed by the European Union (European Social Fund) and Greek national funds.
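
    For illustration, the time-to-failure relation Ω(t) = A + B(t_f − t)^m derived by the authors can be fit to an observed cumulative series with ordinary nonlinear least squares (synthetic data below; in practice Ω would be the cumulative event count, Benioff strain, or released energy).

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def ttf(t, A, B, tf, m):
        # Power-law time-to-failure; sign/abs keep the model defined when the
        # optimizer's trial tf falls below t during the search.
        return A + B * np.sign(tf - t) * np.abs(tf - t)**m

    rng = np.random.default_rng(1)
    t = np.linspace(0.0, 9.5, 80)
    omega = ttf(t, 10.0, -3.0, 10.0, 0.3) + 0.05 * rng.standard_normal(t.size)

    popt, _ = curve_fit(ttf, t, omega, p0=(8.0, -1.0, 10.5, 0.5), maxfev=20000)
    print("fitted failure time tf = %.2f, exponent m = %.2f" % (popt[2], popt[3]))
    ```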

  11. Second International Conference on Accelerating Biopharmaceutical Development: March 9-12, 2009, Coronado, CA USA.

    PubMed

    Reichert, Janice M; Jacob, Nitya; Amanullah, Ashraf

    2009-01-01

    The Second International Conference on Accelerating Biopharmaceutical Development was held in Coronado, California. The meeting was organized by the Society for Biological Engineering (SBE) and the American Institute of Chemical Engineers (AIChE); SBE is a technological community of the AIChE. Bob Adamson (Wyeth) and Chuck Goochee (Centocor) were co-chairs of the event, which had the theme "Delivering cost-effective, robust processes and methods quickly and efficiently." The first day focused on emerging disruptive technologies and cutting-edge analytical techniques. Day two featured presentations on accelerated cell culture process development, critical quality attributes, specifications and comparability, and high throughput protein formulation development. The final day was dedicated to discussion of technology options and new analysis methods provided by emerging disruptive technologies; functional interaction, integration and synergy in platform development; and rapid and economic purification process development.

  13. A comparison of public datasets for acceleration-based fall detection.

    PubMed

    Igual, Raul; Medrano, Carlos; Plaza, Inmaculada

    2015-09-01

    Falls are one of the leading causes of mortality among the older population, and the rapid detection of a fall is a key factor in mitigating its main adverse health consequences. In this context, several authors have conducted studies on acceleration-based fall detection using external accelerometers or smartphones. The published detection rates are diverse, sometimes close to a perfect detector. This divergence may be explained by the difficulty of comparing different fall detection studies fairly, since each study uses its own dataset obtained under different conditions. In this regard, several datasets have been made publicly available recently. This paper presents a comparison, to the best of our knowledge for the first time, of these public fall detection datasets in order to determine whether they have an influence on the declared performances. Using two different detection algorithms, the study shows that the performances of the fall detection techniques are affected, to a greater or lesser extent, by the specific datasets used to validate them. We have also found large differences in the generalization capability of a fall detector depending on the dataset used for training. In fact, the performance decreases dramatically when the algorithms are tested on a dataset different from the one used for training. Other characteristics of the datasets, like the number of training samples, also have an influence on the performance, while the algorithms seem less sensitive to the sampling frequency or the acceleration range. Copyright © 2015 IPEM. Published by Elsevier Ltd. All rights reserved.
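
    To make concrete what such detectors consume from these datasets, here is a minimal impact-plus-stillness threshold detector (an illustrative stand-in, not one of the two algorithms evaluated in the paper; all thresholds are assumptions).

    ```python
    import numpy as np

    def detect_falls(acc_xyz, fs, impact_g=2.5, still_g=0.15, still_window_s=1.0):
        """Flag impacts (|a| > impact_g) followed by a near-still window."""
        mag = np.linalg.norm(acc_xyz, axis=1) / 9.81   # magnitude in units of g
        win = int(still_window_s * fs)
        falls = []
        for i in np.flatnonzero(mag > impact_g):       # candidate impact samples
            post = mag[i + win : i + 2 * win]          # skip the bounce, then check
            if post.size == win and np.all(np.abs(post - 1.0) < still_g):
                falls.append(i / fs)                   # impact time in seconds
        return falls
    ```

    The training-set sensitivity reported in the paper corresponds here to how such thresholds, tuned on one dataset, fail to transfer to another.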

  14. "Dip-and-read" paper-based analytical devices using distance-based detection with color screening.

    PubMed

    Yamada, Kentaro; Citterio, Daniel; Henry, Charles S

    2018-05-15

    An improved paper-based analytical device (PAD) using color screening to enhance device performance is described. Current detection methods for PADs relying on the distance-based signalling motif can be slow due to the assay time being limited by capillary flow rates that wick fluid through the detection zone. For traditional distance-based detection motifs, analysis can take up to 45 min for a channel length of 5 cm. By using a color screening method, quantification with a distance-based PAD can be achieved in minutes through a "dip-and-read" approach. A colorimetric indicator line deposited onto a paper substrate using inkjet-printing undergoes a concentration-dependent colorimetric response for a given analyte. This color intensity-based response has been converted to a distance-based signal by overlaying a color filter with a continuous color intensity gradient matching the color of the developed indicator line. As a proof-of-concept, Ni quantification in welding fume was performed as a model assay. The results of multiple independent user testing gave mean absolute percentage error and average relative standard deviations of 10.5% and 11.2% respectively, which were an improvement over analysis based on simple visual color comparison with a read guide (12.2%, 14.9%). In addition to the analytical performance comparison, an interference study and a shelf life investigation were performed to further demonstrate practical utility. The developed system demonstrates an alternative detection approach for distance-based PADs enabling fast (∼10 min), quantitative, and straightforward assays.

  15. Towards ion beam therapy based on laser plasma accelerators.

    PubMed

    Karsch, Leonhard; Beyreuther, Elke; Enghardt, Wolfgang; Gotz, Malte; Masood, Umar; Schramm, Ulrich; Zeil, Karl; Pawelke, Jörg

    2017-11-01

    Only a few tens of radiotherapy facilities worldwide provide ion beams, in spite of their physical advantage of better achievable tumor conformity of the dose compared to conventional photon beams. Since mainly the large size and high costs hinder their wider spread, great efforts are ongoing to develop more compact ion therapy facilities. One promising approach for smaller facilities is the acceleration of ions on the micrometre scale by high-intensity lasers. Laser accelerators deliver pulsed beams with a low pulse repetition rate, but a high number of ions per pulse, broad energy spectra and high divergences. Clinical use of a laser-based ion beam facility requires not only a laser accelerator providing beams of therapeutic quality, but also new approaches for beam transport, dosimetric control and the tumor-conformal dose delivery procedure, together with knowledge of the radiobiological effectiveness of laser-driven beams. Over the last decade research was mainly focused on protons, and progress was achieved in all important challenges. Although currently the maximum proton energy is not yet high enough for patient irradiation, suggestions and solutions have been reported for compact beam transport and dose delivery procedures, as well as for precise dosimetric control. Radiobiological in vitro and in vivo studies show no indications of an altered biological effectiveness of laser-driven beams. Laser-based facilities will hardly improve the availability of ion beams for patient treatment in the next decade. Nevertheless, laser-based therapy facilities may well be needed in the future.

  16. Analytical Model-Based Design Optimization of a Transverse Flux Machine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hasan, Iftekhar; Husain, Tausif; Sozer, Yilmaz

    This paper proposes an analytical machine design tool using magnetic equivalent circuit (MEC)-based particle swarm optimization (PSO) for a double-sided, flux-concentrating transverse flux machine (TFM). The magnetic equivalent circuit method is applied to analytically establish the relationship between the design objective and the input variables of prospective TFM designs. This is computationally less intensive and more time efficient than finite element solvers. A PSO algorithm is then used to design a machine with the highest torque density within the specified power range along with some geometric design constraints. The stator pole length, magnet length, and rotor thickness are the variablesmore » that define the optimization search space. Finite element analysis (FEA) was carried out to verify the performance of the MEC-PSO optimized machine. The proposed analytical design tool helps save computation time by at least 50% when compared to commercial FEA-based optimization programs, with results found to be in agreement with less than 5% error.« less
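
    A bare-bones global-best particle swarm loop of the kind that would wrap the MEC evaluation (the objective below is a stand-in; in the paper it would be the torque density computed from the magnetic equivalent circuit over the three listed geometric variables).

    ```python
    import numpy as np

    def pso(objective, bounds, n_particles=30, n_iter=100, w=0.7, c1=1.5, c2=1.5):
        """Maximize `objective` over the box `bounds` with plain PSO."""
        rng = np.random.default_rng(0)
        lo, hi = np.array(bounds, dtype=float).T
        x = rng.uniform(lo, hi, (n_particles, len(bounds)))
        v = np.zeros_like(x)
        pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
        gbest = pbest[pbest_f.argmax()].copy()
        for _ in range(n_iter):
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            x = np.clip(x + v, lo, hi)                 # keep the geometry feasible
            f = np.array([objective(p) for p in x])
            better = f > pbest_f
            pbest[better], pbest_f[better] = x[better], f[better]
            gbest = pbest[pbest_f.argmax()].copy()
        return gbest, pbest_f.max()

    # Stand-in objective over (stator pole length, magnet length, rotor thickness), mm:
    best, _ = pso(lambda p: -((p[0] - 12)**2 + (p[1] - 8)**2 + (p[2] - 20)**2),
                  bounds=[(5, 30), (2, 15), (10, 40)])
    ```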

  17. Research on Acceleration Compensation Strategy of Electric Vehicle Based on Fuzzy Control Theory

    NASA Astrophysics Data System (ADS)

    Zhu, Tianjun; Li, Bin; Zong, Changfu; Wei, Zhicheng

    2017-09-01

    Nowadays, the driving technology of electric vehicles is developing rapidly, and many methods exist for driving performance control. This paper studies the acceleration performance of electric vehicles. Under the premise of energy management, an acceleration power compensation method based on fuzzy control theory and driver intention recognition is proposed, which better matches the driver's subjective expectations. It avoids the fixed one-to-one correspondence between pedal opening and power output found in conventional vehicle acceleration. Simulation tests show that this method can significantly improve acceleration performance and deliver output torque smoothly during non-emergency acceleration, keeping the vehicle comfortable and stable.
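
    A minimal Mamdani-style sketch of the compensation idea (the single pedal-rate input, membership functions, and rule consequents are invented for illustration; the paper's controller recognizes driver intention from pedal opening and its rate of change under an energy-management constraint).

    ```python
    def tri(x, a, b, c):
        # Triangular membership function peaking at b.
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)

    def compensation_torque(pedal_rate):
        """Map pedal rate (fraction of full travel per second) to extra torque."""
        mu = {"gentle": tri(pedal_rate, -0.2, 0.0, 0.4),
              "normal": tri(pedal_rate, 0.2, 0.5, 0.8),
              "urgent": tri(pedal_rate, 0.6, 1.0, 1.4)}
        out = {"gentle": 0.00, "normal": 0.15, "urgent": 0.35}  # rule consequents
        den = sum(mu.values())
        return sum(mu[k] * out[k] for k in mu) / den if den else 0.0

    print(compensation_torque(0.3), compensation_torque(0.9))  # mild vs urgent intent
    ```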

  18. Visualization of TlBr ionic transport mechanism by the Accelerated Device Degradation technique

    NASA Astrophysics Data System (ADS)

    Datta, Amlan; Becla, Piotr; Motakef, Shariar

    2015-06-01

    Thallium Bromide (TlBr) is a promising gamma radiation semiconductor detector material. However, it is an ionic semiconductor and suffers from polarization. As a result, TlBr devices degrade rapidly at room temperature. Polarization is associated with the flow of ionic current in the crystal under electrical bias, leading to the accumulation of charged ions at the device's electrical contacts. We report a fast and reliable direct characterization technique to identify the effects of various growth and post-growth process modifications on the polarization process. The Accelerated Device Degradation (ADD) characterization technique allows direct observation of nucleation and propagation of ionic transport channels within the TlBr crystals under applied bias. These channels are observed to be initiated both directly under the electrode as well as away from it. The propagation direction is always towards the anode indicating that Br- is the mobile diffusing species within the defect channels. The effective migration energy of the Br- ions was calculated to be 0.33±0.03 eV, which is consistent with other theoretical and experimental results.
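
    The reported migration energy follows from Arrhenius behavior of the channel propagation rate; a sketch of the extraction with fabricated rates that obey E_a = 0.33 eV by construction (the temperatures and prefactor are assumptions).

    ```python
    import numpy as np

    kB = 8.617e-5                             # Boltzmann constant, eV/K
    T = np.array([280.0, 300.0, 320.0, 340.0])
    rate = 1e6 * np.exp(-0.33 / (kB * T))     # synthetic propagation rates

    # The slope of ln(rate) vs 1/(kB*T) gives -Ea.
    slope, _ = np.polyfit(1.0 / (kB * T), np.log(rate), 1)
    print(f"effective migration energy: {-slope:.2f} eV")
    ```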

  19. Low-picomolar, label-free procalcitonin analytical detection with an electrolyte-gated organic field-effect transistor based electronic immunosensor.

    PubMed

    Seshadri, Preethi; Manoli, Kyriaki; Schneiderhan-Marra, Nicole; Anthes, Uwe; Wierzchowiec, Piotr; Bonrad, Klaus; Di Franco, Cinzia; Torsi, Luisa

    2018-05-01

    Herein a label-free immunosensor based on an electrolyte-gated organic field-effect transistor (EGOFET) was developed for the detection of procalcitonin (PCT), a sepsis marker. Antibodies specific to PCT were immobilized on the poly-3-hexylthiophene (P3HT) organic semiconductor surface through direct physical adsorption, followed by a post-treatment with bovine serum albumin (BSA) which served as the blocking agent to prevent non-specific adsorption. The antibodies together with BSA (forming the whole biorecognition layer) served to selectively capture the procalcitonin target analyte. The entire immunosensor fabrication process was fast, requiring 45 min overall to be completed before analyte sensing. The EGOFET immunosensor showed excellent electrical properties, comparable to those of the bare P3HT based EGOFET, confirming reliable biosensing with the bio-functional EGOFET immunosensor. The detection limit of the immunosensor was as low as 2.2 pM, within a range of clinical relevance. The relative standard deviation of the individual calibration data points, measured on immunosensors fabricated on different chips (reproducibility error), was below 7%. The developed immunosensor showed high selectivity to the PCT analyte, which was evident through control experiments. This report of PCT detection is the first of its kind among electronic sensors based on EGOFETs. The developed sensor is versatile and compatible with low-cost fabrication techniques. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. Improved Magnetron Stability and Reduced Noise in Efficient Transmitters for Superconducting Accelerators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kazakevich, G.; Johnson, R.; Lebedev, V.

    State of the art high-current superconducting accelerators require efficient RF sources with a fast dynamic phase and power control. This allows for compensation of the phase and amplitude deviations of the accelerating voltage in the Superconducting RF (SRF) cavities caused by microphonics, etc. Efficient magnetron transmitters with fast phase and power control are attractive RF sources for this application. They are more cost effective than traditional RF sources such as klystrons, IOTs and solid-state amplifiers used with large scale accelerator projects. However, unlike traditional RF sources, controlled magnetrons operate as forced oscillators. Study of the impact of the controlling signal on magnetron stability, noise and efficiency is therefore important. This paper discusses experiments with 2.45 GHz, 1 kW tubes and verifies our analytical model which is based on the charge drift approximation.

  1. Analytical and Numerical Solutions of Generalized Fokker-Planck Equations - Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prinja, Anil K.

    The overall goal of this project was to develop advanced theoretical and numerical techniques to quantitatively describe the spreading of a collimated beam of charged particles in space, in angle, and in energy, as a result of small deflection, small energy transfer Coulomb collisions with the target nuclei and electrons. Such beams arise in several applications of great interest in nuclear engineering, and include electron and ion radiotherapy, ion beam modification of materials, accelerator transmutation of waste, and accelerator production of tritium, to name some important candidates. These applications present unique and difficult modeling challenges, but from the outset are amenable to the language of "transport theory", which is very familiar to nuclear engineers and considerably less so to physicists and material scientists. Thus, our approach has been to adopt a fundamental description based on transport equations, but the forward peakedness associated with charged particle interactions precludes a direct application of solution methods developed for neutral particle transport. Unique problem formulations and solution techniques are necessary to describe the transport and interaction of charged particles. In particular, we have developed the Generalized Fokker-Planck (GFP) approach to describe the angular and radial spreading of a collimated beam and a renormalized transport model to describe the energy-loss straggling of an initially monoenergetic distribution. Both analytic and numerical solutions have been investigated and in particular novel finite element numerical methods have been developed. In the first phase of the project, asymptotic methods were used to develop closed form solutions to the GFP equation for different orders of expansion, and was described in a previous progress report. In this final report we present a detailed description of (i) a novel energy straggling model based on a Fokker-Planck approximation but which is adapted
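
    For orientation, the standard Fokker-Planck limit of the steady linear transport equation for forward-peaked scattering is recalled below (background only; the report develops generalized FP operators that extend this form, and the notation here is an assumption).

    ```latex
    % Fokker-Planck limit: the highly forward-peaked scattering integral is
    % replaced by angular diffusion on the unit sphere. \sigma_{tr} is the
    % transport cross section, \sigma_a the absorption cross section, and
    % \nabla^2_{\Omega} the Laplace-Beltrami operator in direction \Omega.
    \[
      \Omega \cdot \nabla \psi(\mathbf{r},\Omega) + \sigma_a\, \psi(\mathbf{r},\Omega)
      = \frac{\sigma_{tr}}{2}\, \nabla^2_{\Omega}\, \psi(\mathbf{r},\Omega)
    \]
    ```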

  2. A Review on Microfluidic Paper-Based Analytical Devices for Glucose Detection

    PubMed Central

    Liu, Shuopeng; Su, Wenqiong; Ding, Xianting

    2016-01-01

    Glucose, as an essential substance directly involved in metabolic processes, is closely related to the occurrence of various diseases such as glucose metabolism disorders and islet cell carcinoma. Therefore, it is crucial to develop sensitive, accurate, rapid, and cost-effective methods for frequent and convenient detection of glucose. Microfluidic Paper-based Analytical Devices (μPADs), which not only satisfy the above requirements but also offer the advantages of portability and minimal sample consumption, have exhibited great potential in the field of glucose detection. This article reviews and summarizes the most recent improvements in glucose detection in the two aspects of colorimetric and electrochemical μPADs. The progressive techniques for fabricating channels on μPADs are also emphasized in this article. With the growth of diabetes and other glucose-indicating diseases in underdeveloped and developing countries, low-cost and reliable commercial μPADs for glucose detection will be in unprecedented demand. PMID:27941634

  3. Multi-scale analytical investigation of fly ash in concrete

    NASA Astrophysics Data System (ADS)

    Aboustait, Mohammed B.

    Much research has been conducted to find an acceptable concrete ingredient that would act as a cement replacement. One promising material is fly ash. Fly ash is a by-product from coal-fired power plants. Throughout this document, work on the characterization of fly ash structure and composition is explored. This effort was conducted through a mixture of cutting-edge, multi-scale analytical X-ray based techniques that use both bulk experimentation and nano/micro analytical techniques. Further, this examination was complemented by performing physical/mechanical ASTM-based testing on fly ash-enrolled concrete to examine the effects of fly ash introduction. The most exotic of the cutting-edge characterization techniques employed in this work uses the Nano-Computed Tomography and the Nano X-ray Fluorescence at Argonne National Laboratory to investigate single fly ash particles. Additional work on individual fly ash particles was completed by laboratory-based Micro-Computed Tomography and Scanning Electron Microscopy. By combining the results of individual particles and bulk property tests, a compiled perspective is introduced and assessed to provide new insights into the reactivity of fly ash within concrete.

  4. Analytical techniques for measuring hydrocarbon emissions from the manufacture of fiberglass-reinforced plastics. Report for June 1995--March 1997

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wright, R.S.; Kong, E.J.; Bahner, M.A.

    The paper discusses several projects to measure hydrocarbon emissions associated with the manufacture of fiberglass-reinforced plastics. The main purpose of the projects was to evaluate pollution prevention techniques to reduce emissions by altering raw materials, application equipment, and operator technique. Analytical techniques were developed to reduce the cost of these emission measurements. Emissions from a small test mold in a temporary total enclosure (TTE) correlated with emissions from full-size production molds in a separate TTE. Gravimetric mass balance measurements inside the TTE generally agreed to within +/-30% with total hydrocarbon (THC) measurements in the TTE exhaust duct.

  5. Student Writing Accepted as High-Quality Responses to Analytic Text-Based Writing Tasks

    ERIC Educational Resources Information Center

    Wang, Elaine; Matsumura, Lindsay Clare; Correnti, Richard

    2018-01-01

    Literacy standards increasingly emphasize the importance of analytic text-based writing. Little consensus exists, however, around what high-quality student responses should look like in this genre. In this study, we investigated fifth-grade students' writing in response to analytic text-based writing tasks (15 teachers, 44 writing tasks, 88 pieces…

  6. Nine-analyte detection using an array-based biosensor

    NASA Technical Reports Server (NTRS)

    Taitt, Chris Rowe; Anderson, George P.; Lingerfelt, Brian M.; Feldstein, Mark J.; Ligler, Frances S.

    2002-01-01

    A fluorescence-based multianalyte immunosensor has been developed for simultaneous analysis of multiple samples. While the standard 6 x 6 format of the array sensor has been used to analyze six samples for six different analytes, this same format has the potential to allow a single sample to be tested for 36 different agents. The method described herein demonstrates proof of principle that the number of analytes detectable using a single array can be increased simply by using complementary mixtures of capture and tracer antibodies. Mixtures were optimized to allow detection of closely related analytes without significant cross-reactivity. Following this facile modification of patterning and assay procedures, the following nine targets could be detected in a single 3 x 3 array: Staphylococcal enterotoxin B, ricin, cholera toxin, Bacillus anthracis Sterne, Bacillus globigii, Francisella tularensis LVS, Yersinia pestis F1 antigen, MS2 coliphage, and Salmonella typhimurium. This work maximizes the efficiency and utility of the described array technology, increasing only reagent usage and cost; production and fabrication costs are not affected.

  7. Ring-oven based preconcentration technique for microanalysis: simultaneous determination of Na, Fe, and Cu in fuel ethanol by laser induced breakdown spectroscopy.

    PubMed

    Cortez, Juliana; Pasquini, Celio

    2013-02-05

    The ring-oven technique, originally applied for classical qualitative analysis from the 1950s to the 1970s, is revisited to be used in a simple though highly efficient and green procedure for analyte preconcentration prior to its determination by the microanalytical techniques presently available. The proposed preconcentration technique is based on the dropwise delivery of a small volume of sample to a filter paper substrate, assisted by a flow-injection-like system. The filter paper is maintained in a small circular heated oven (the ring oven). Drops of the sample solution diffuse by capillarity from the center to a circular area of the paper substrate. After the total sample volume has been delivered, a ring with a sharp (ca. 350 μm) circular contour, of about 2.0 cm diameter, is formed on the paper to contain most of the analytes originally present in the sample volume. Preconcentration coefficients of the analyte can reach 250-fold (on a m/m basis) for a sample volume as small as 600 μL. The proposed system and procedure have been evaluated to concentrate Na, Fe, and Cu in fuel ethanol, followed by simultaneous direct determination of these species in the ring contour, employing the microanalytical technique of laser induced breakdown spectroscopy (LIBS). Detection limits of 0.7, 0.4, and 0.3 μg mL⁻¹ and mean recoveries of (109 ± 13)%, (92 ± 18)%, and (98 ± 12)%, for Na, Fe, and Cu, respectively, were obtained in fuel ethanol. It is possible to anticipate the application of the technique, coupled to modern microanalytical and multianalyte techniques, to several analytical problems requiring analyte preconcentration and/or sample stabilization.

  8. Analytical method for the identification and assay of 12 phthalates in cosmetic products: application of the ISO 12787 international standard "Cosmetics-Analytical methods-Validation criteria for analytical results using chromatographic techniques".

    PubMed

    Gimeno, Pascal; Maggio, Annie-Françoise; Bousquet, Claudine; Quoirez, Audrey; Civade, Corinne; Bonnet, Pierre-Antoine

    2012-08-31

    Esters of phthalic acid, more commonly named phthalates, may be present in cosmetic products as ingredients or contaminants. Their presence as contaminants can be due to the manufacturing process, to the raw materials used, or to the migration of phthalates from packaging when plastic (polyvinyl chloride, PVC) is used. 8 phthalates (DBP, DEHP, BBP, DMEP, DnPP, DiPP, DPP, and DiBP), classified H360 or H361, are forbidden in cosmetics according to the European regulation on cosmetics 1223/2009. A GC/MS method was developed for the assay of 12 phthalates in cosmetics, including the 8 regulated phthalates. Analyses are carried out on a GC/MS system in electron impact ionization mode (EI). The separation of phthalates is obtained on a cross-linked 5%-phenyl/95%-dimethylpolysiloxane capillary column, 30 m × 0.25 mm (i.d.) × 0.25 μm film thickness, using a temperature gradient. Phthalate quantification is performed by external calibration using an internal standard. Validation elements obtained on standard solutions highlight a satisfactory system conformity (resolution>1.5), a common quantification limit at 0.25 ng injected, an acceptable linearity between 0.5 μg mL⁻¹ and 5.0 μg mL⁻¹, as well as a precision and an accuracy in agreement with in-house specifications. Cosmetic samples ready for analytical injection are analyzed after a dilution in ethanol, whereas more complex cosmetic matrices, like milks and creams, are assayed after a liquid/liquid extraction using tert-butyl methyl ether (TBME). Depending on the type of cosmetics analyzed, the common limits of quantification for the 12 phthalates were set at 0.5 or 2.5 μg g⁻¹. All samples were assayed using the analytical approach described in the ISO 12787 international standard "Cosmetics-Analytical methods-Validation criteria for analytical results using chromatographic techniques". This analytical protocol is particularly adapted when it is not possible to make reconstituted sample matrices. Copyright © 2012

  9. Note: Model identification and analysis of bivalent analyte surface plasmon resonance data.

    PubMed

    Tiwari, Purushottam Babu; Üren, Aykut; He, Jin; Darici, Yesim; Wang, Xuewen

    2015-10-01

    Surface plasmon resonance (SPR) is a widely used, affinity based, label-free biophysical technique to investigate biomolecular interactions. The extraction of rate constants requires accurate identification of the particular binding model. The bivalent analyte model involves coupled non-linear differential equations. No clear procedure to identify the bivalent analyte mechanism has been established. In this report, we propose a unique signature for the bivalent analyte model. This signature can be used to distinguish the bivalent analyte model from other biphasic models. The proposed method is demonstrated using experimentally measured SPR sensorgrams.
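
    The coupled non-linear differential equations of the bivalent analyte model take a standard form that can be integrated directly; a sketch with SciPy follows (rate constants, Bmax, and the constant analyte concentration during injection are illustrative assumptions, and statistical factors sometimes attached to the first binding step are omitted).

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    def bivalent(t, y, C, ka1, kd1, ka2, kd2, Bmax):
        # A + B <-> AB (ka1, kd1); AB + B <-> AB2 (ka2, kd2).
        AB, AB2 = y
        B = Bmax - AB - 2.0 * AB2             # free ligand: AB2 occupies two sites
        dAB = ka1 * C * B - kd1 * AB - ka2 * AB * B + kd2 * AB2
        dAB2 = ka2 * AB * B - kd2 * AB2
        return [dAB, dAB2]

    params = (10e-9, 1e5, 1e-3, 1e4, 1e-2, 1.0)   # C, ka1, kd1, ka2, kd2, Bmax
    sol = solve_ivp(bivalent, (0.0, 600.0), [0.0, 0.0], args=params, dense_output=True)
    t = np.linspace(0.0, 600.0, 200)
    AB, AB2 = sol.sol(t)
    response = AB + AB2                           # SPR response ~ total bound analyte
    ```

    A diagnostic signature of the kind the note proposes would then be sought in how `response` departs from single-exponential association, for example by comparing fits across analyte concentrations.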

  10. Beam shaping assembly optimization for (7)Li(p,n)(7)Be accelerator based BNCT.

    PubMed

    Minsky, D M; Kreiner, A J

    2014-06-01

    Within the framework of accelerator-based BNCT, a project to develop a folded Tandem-ElectroStatic-Quadrupole accelerator is under way at the Atomic Energy Commission of Argentina. The proposed accelerator is conceived to deliver a proton beam of 30 mA at about 2.5 MeV. In this work we explore a Beam Shaping Assembly (BSA) design based on the (7)Li(p,n)(7)Be neutron production reaction to obtain neutron beams to treat deep-seated tumors. © 2013 Elsevier Ltd. All rights reserved.

  11. More IMPATIENT: A Gridding-Accelerated Toeplitz-based Strategy for Non-Cartesian High-Resolution 3D MRI on GPUs

    PubMed Central

    Gai, Jiading; Obeid, Nady; Holtrop, Joseph L.; Wu, Xiao-Long; Lam, Fan; Fu, Maojing; Haldar, Justin P.; Hwu, Wen-mei W.; Liang, Zhi-Pei; Sutton, Bradley P.

    2013-01-01

    Several recent methods have been proposed to obtain significant speed-ups in MRI image reconstruction by leveraging the computational power of GPUs. Previously, we implemented a GPU-based image reconstruction technique called the Illinois Massively Parallel Acquisition Toolkit for Image reconstruction with ENhanced Throughput in MRI (IMPATIENT MRI) for reconstructing data collected along arbitrary 3D trajectories. In this paper, we improve IMPATIENT by removing computational bottlenecks by using a gridding approach to accelerate the computation of various data structures needed by the previous routine. Further, we enhance the routine with capabilities for off-resonance correction and multi-sensor parallel imaging reconstruction. Through implementation of optimized gridding into our iterative reconstruction scheme, speed-ups of more than a factor of 200 are provided in the improved GPU implementation compared to the previous accelerated GPU code. PMID:23682203

  12. Experimental, Theoretical and Computational Studies of Plasma-Based Concepts for Future High Energy Accelerators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Joshi, Chan; Mori, W.

    2013-10-21

    This is the final report on the DOE grant number DE-FG02-92ER40727 titled, "Experimental, Theoretical and Computational Studies of Plasma-Based Concepts for Future High Energy Accelerators." During this grant period the UCLA program on Advanced Plasma Based Accelerators, headed by Professor C. Joshi, has made many key scientific advances and trained a generation of students, many of whom have stayed in this research field and even started research programs of their own. In this final report, however, we will focus on the last three years of the grant and report on the scientific progress made in each of the four tasks listed under this grant. The four tasks are: Plasma Wakefield Accelerator Research at FACET, SLAC National Accelerator Laboratory; In-House Research at UCLA’s Neptune and 20 TW Laser Laboratories; Laser-Wakefield Acceleration (LWFA) in the Self-Guided Regime: Experiments at the Callisto Laser at LLNL; and Theory and Simulations. Major scientific results have been obtained in each of the four tasks described in this report. These have led to publications in prestigious scientific journals, graduation and continued training of high-quality Ph.D. level students, and have kept the U.S. at the forefront of the plasma-based accelerator research field.

  13. Analytical Electrochemistry: Theory and Instrumentation of Dynamic Techniques.

    ERIC Educational Resources Information Center

    Johnson, Dennis C.

    1980-01-01

    Emphasizes trends in the development of six topics concerning analytical electrochemistry, including books and reviews (34 references cited), mass transfer (59), charge transfer (25), surface effects (33), homogeneous reactions (21), and instrumentation (31). (CS)

  14. Accelerator based fusion reactor

    NASA Astrophysics Data System (ADS)

    Liu, Keh-Fei; Chao, Alexander Wu

    2017-08-01

    A feasibility study of fusion reactors based on accelerators is carried out. We consider a novel scheme where a beam from the accelerator hits the target plasma on the resonance of the fusion reaction and establish characteristic criteria for a workable reactor. We consider the reactions d + t → n + α, d + ³He → p + α, and p + ¹¹B → 3α in this study. The critical temperature of the plasma is determined from overcoming the stopping power of the beam with the fusion energy gain. The needed plasma lifetime is determined from the width of the resonance, the beam velocity and the plasma density. We estimate the critical beam flux by balancing the energy of fusion production against the plasma thermo-energy and the loss due to stopping power for the case of an inert plasma. The product of critical flux and plasma lifetime is independent of plasma density and has a weak dependence on temperature. Even though the critical temperatures for these reactions are lower than those for the thermonuclear reactors, the critical flux is in the range of 10²²-10²⁴ cm⁻² s⁻¹ for the plasma density ρ_t = 10¹⁵ cm⁻³ in the case of an inert plasma. Several approaches to control the growth of the two-stream instability are discussed. We have also considered several scenarios for practical implementation which will require further studies. Finally, we consider the case where the injected beam at the resonance energy maintains the plasma temperature and prolongs its lifetime to reach a steady state. The equations for power balance and particle number conservation are given for this case.

  15. Plasmon-driven acceleration in a photo-excited nanotube

    DOE PAGES

    Shin, Young -Min

    2017-02-21

    A plasmon-assisted channeling acceleration can be realized with a large channel, possibly at the nanometer scale. Carbon nanotubes (CNTs) are the most typical example of nano-channels that can confine a large number of channeled particles in a photon-plasmon coupling condition. This paper presents a theoretical and numerical study on the concept of high-field charge acceleration driven by photo-excited Luttinger-liquid plasmons in a nanotube. An analytic description of the plasmon-assisted laser acceleration is detailed with practical acceleration parameters, in particular with the specifications of a typical tabletop femtosecond laser system. Lastly, the maximally achievable acceleration gradients and energy gains within dephasing lengths and CNT lengths are discussed with respect to laser-incident angles and CNT-filling ratios.

  16. Project-Based Curriculum for Teaching Analytical Design to Freshman Engineering Students via Reconfigurable Trebuchets

    ERIC Educational Resources Information Center

    Herber, Daniel R.; Deshmukh, Anand P.; Mitchell, Marlon E.; Allison, James T.

    2016-01-01

    This paper presents an effort to revitalize a large introductory engineering course for incoming freshman students that teaches them analytical design through a project-based curriculum. This course was completely transformed from a seminar-based to a project-based course that integrates hands-on experimentation with analytical work. The project…

  17. A reference web architecture and patterns for real-time visual analytics on large streaming data

    NASA Astrophysics Data System (ADS)

    Kandogan, Eser; Soroker, Danny; Rohall, Steven; Bak, Peter; van Ham, Frank; Lu, Jie; Ship, Harold-Jeffrey; Wang, Chun-Fu; Lai, Jennifer

    2013-12-01

    Monitoring and analysis of streaming data, such as social media, sensors, and news feeds, has become increasingly important for business and government. The volume and velocity of incoming data are key challenges. To effectively support monitoring and analysis, statistical and visual analytics techniques need to be seamlessly integrated; analytic techniques for a variety of data types (e.g., text, numerical) and scope (e.g., incremental, rolling-window, global) must be properly accommodated; interaction, collaboration, and coordination among several visualizations must be supported in an efficient manner; and the system should support the use of different analytics techniques in a pluggable manner. Especially in web-based environments, these requirements pose restrictions on the basic visual analytics architecture for streaming data. In this paper we report on our experience of building a reference web architecture for real-time visual analytics of streaming data, identify and discuss architectural patterns that address these challenges, and report on applying the reference architecture for real-time Twitter monitoring and analysis.

  18. GPU accelerated cell-based adaptive mesh refinement on unstructured quadrilateral grid

    NASA Astrophysics Data System (ADS)

    Luo, Xisheng; Wang, Luying; Ran, Wei; Qin, Fenghua

    2016-10-01

    A GPU accelerated inviscid flow solver is developed on an unstructured quadrilateral grid in the present work. For the first time, the cell-based adaptive mesh refinement (AMR) is fully implemented on GPU for the unstructured quadrilateral grid, which greatly reduces the frequency of data exchange between GPU and CPU. Specifically, the AMR is processed with atomic operations to parallelize list operations, and null memory recycling is realized to improve the efficiency of memory utilization. It is found that results obtained by GPUs agree very well with the exact or experimental results in literature. An acceleration ratio of 4 is obtained between the parallel code running on the old GPU GT9800 and the serial code running on E3-1230 V2. With the optimization of configuring a larger L1 cache and adopting Shared Memory based atomic operations on the newer GPU C2050, an acceleration ratio of 20 is achieved. The parallelized cell-based AMR processes have achieved 2x speedup on GT9800 and 18x on Tesla C2050, which demonstrates that parallel running of the cell-based AMR method on GPU is feasible and efficient. Our results also indicate that the new development of GPU architecture benefits the fluid dynamics computing significantly.

  19. Accelerated computer generated holography using sparse bases in the STFT domain.

    PubMed

    Blinder, David; Schelkens, Peter

    2018-01-22

    Computer-generated holography at high resolutions is a computationally intensive task. Efficient algorithms are needed to generate holograms at acceptable speeds, especially for real-time and interactive applications such as holographic displays. We propose a novel technique to generate holograms using a sparse basis representation in the short-time Fourier space combined with a wavefront-recording plane placed in the middle of the 3D object. By computing the point spread functions in the transform domain, we update only a small subset of the precomputed largest-magnitude coefficients to significantly accelerate the algorithm over conventional look-up table methods. We implement the algorithm on a GPU, and report a speedup factor of over 30. We show that this transform is superior to wavelet-based approaches, and show quantitative and qualitative improvements over the state-of-the-art WASABI method; we report accuracy gains of 2 dB PSNR, as well as improved view preservation.

  20. Progressive Visual Analytics: User-Driven Visual Exploration of In-Progress Analytics.

    PubMed

    Stolper, Charles D; Perer, Adam; Gotz, David

    2014-12-01

    As datasets grow and analytic algorithms become more complex, the typical workflow of analysts launching an analytic, waiting for it to complete, inspecting the results, and then re-launching the computation with adjusted parameters is not realistic for many real-world tasks. This paper presents an alternative workflow, progressive visual analytics, which enables an analyst to inspect partial results of an algorithm as they become available and interact with the algorithm to prioritize subspaces of interest. Progressive visual analytics depends on adapting analytical algorithms to produce meaningful partial results and enable analyst intervention without sacrificing computational speed. The paradigm also depends on adapting information visualization techniques to incorporate the constantly refining results without overwhelming analysts and to provide interactions to support an analyst directing the analytic. The contributions of this paper include: a description of the progressive visual analytics paradigm; design goals for both the algorithms and visualizations in progressive visual analytics systems; an example progressive visual analytics system (Progressive Insights) for analyzing common patterns in a collection of event sequences; and an evaluation of Progressive Insights and the progressive visual analytics paradigm by clinical researchers analyzing electronic medical records.
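
    The core pattern is an analytic that emits meaningful partial results and accepts analyst steering between them; in Python this maps naturally onto a generator (a minimal sketch of the paradigm, not the Progressive Insights implementation).

    ```python
    def progressive_mean(stream, report_every=1000):
        """Yield partial estimates of the mean while data is still arriving."""
        total, n = 0.0, 0
        for x in stream:
            total += x
            n += 1
            if n % report_every == 0:
                yield {"n": n, "estimate": total / n}     # meaningful partial result
        if n:
            yield {"n": n, "estimate": total / n, "final": True}

    for partial in progressive_mean(iter(range(10_000))):
        print(partial)                    # a visualization would refresh here
        if partial["estimate"] > 4000:    # analyst-driven steering: stop early
            break
    ```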

  1. A three-dimensional finite-element thermal/mechanical analytical technique for high-performance traveling wave tubes

    NASA Technical Reports Server (NTRS)

    Bartos, Karen F.; Fite, E. Brian; Shalkhauser, Kurt A.; Sharp, G. Richard

    1991-01-01

    Current research in high-efficiency, high-performance traveling wave tubes (TWT's) has led to the development of novel thermal/mechanical computer models for use with helical slow-wave structures. A three-dimensional, finite element computer model and analytical technique were used to study the structural integrity and thermal operation of a high-efficiency, diamond-rod, K-band TWT designed for use in advanced space communications systems. This analysis focused on the slow-wave circuit in the radiofrequency section of the TWT, where an inherent localized heating problem existed and where failures were observed during earlier use of a cold compression, or 'coining', fabrication technique that nevertheless shows great potential for future TWT development efforts. For this analysis, a three-dimensional, finite element model was used along with MARC, a commercially available finite element code, to simulate the fabrication of a diamond-rod TWT. This analysis was conducted by using component and material specifications consistent with actual TWT fabrication and was verified against empirical data. The analysis is nonlinear owing to material plasticity introduced by the forming process and also to geometric nonlinearities presented by the component assembly configuration. The computer model was developed by using the high-efficiency, K-band TWT design but is general enough to permit similar analyses to be performed on a wide variety of TWT designs and styles. The results of the TWT operating condition and structural failure mode analysis, as well as a comparison of analytical results to test data, are presented.

  2. A three-dimensional finite-element thermal/mechanical analytical technique for high-performance traveling wave tubes

    NASA Technical Reports Server (NTRS)

    Shalkhauser, Kurt A.; Bartos, Karen F.; Fite, E. B.; Sharp, G. R.

    1992-01-01

    Current research in high-efficiency, high-performance traveling wave tubes (TWT's) has led to the development of novel thermal/mechanical computer models for use with helical slow-wave structures. A three-dimensional, finite element computer model and analytical technique were used to study the structural integrity and thermal operation of a high-efficiency, diamond-rod, K-band TWT designed for use in advanced space communications systems. This analysis focused on the slow-wave circuit in the radiofrequency section of the TWT, where an inherent localized heating problem existed and where failures were observed during earlier use of a cold compression, or 'coining', fabrication technique that nevertheless shows great potential for future TWT development efforts. For this analysis, a three-dimensional, finite element model was used along with MARC, a commercially available finite element code, to simulate the fabrication of a diamond-rod TWT. This analysis was conducted by using component and material specifications consistent with actual TWT fabrication and was verified against empirical data. The analysis is nonlinear owing to material plasticity introduced by the forming process and also to geometric nonlinearities presented by the component assembly configuration. The computer model was developed by using the high-efficiency, K-band TWT design but is general enough to permit similar analyses to be performed on a wide variety of TWT designs and styles. The results of the TWT operating condition and structural failure mode analysis, as well as a comparison of analytical results to test data, are presented.

  3. Sensor failure detection for jet engines using analytical redundance

    NASA Technical Reports Server (NTRS)

    Merrill, W. C.

    1984-01-01

    Analytical redundant sensor failure detection, isolation and accommodation techniques for gas turbine engines are surveyed. Both the theoretical technology base and demonstrated concepts are discussed. Also included is a discussion of current technology needs and ongoing Government sponsored programs to meet those needs.
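
    In its simplest form, analytical redundancy compares each sensor with a model-based estimate reconstructed from the remaining measurements and flags large residuals; the leave-one-out median model below is a stand-in for a real engine model.

    ```python
    import numpy as np

    def detect_failed_sensors(z, predict, thresholds):
        """z: measured sensor vector; predict(i, z) estimates sensor i from the rest."""
        failed = []
        for i in range(len(z)):
            if abs(z[i] - predict(i, z)) > thresholds[i]:
                failed.append(i)
        return failed

    # Toy redundancy model: each sensor should match the median of the others.
    predict = lambda i, z: float(np.median([z[j] for j in range(len(z)) if j != i]))
    print(detect_failed_sensors(np.array([1.0, 1.05, 0.95, 5.0]), predict, [0.3] * 4))
    # -> [3]: the biased sensor is isolated by its leave-one-out residual
    ```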

  4. Localized Spatio-Temporal Constraints for Accelerated CMR Perfusion

    PubMed Central

    Akçakaya, Mehmet; Basha, Tamer A.; Pflugi, Silvio; Foppa, Murilo; Kissinger, Kraig V.; Hauser, Thomas H.; Nezafat, Reza

    2013-01-01

    Purpose To develop and evaluate an image reconstruction technique for cardiac MRI (CMR) perfusion that utilizes localized spatio-temporal constraints. Methods CMR perfusion plays an important role in detecting myocardial ischemia in patients with coronary artery disease. Breath-hold k-t based image acceleration techniques are typically used in CMR perfusion for superior spatial/temporal resolution and improved coverage. In this study, we propose a novel compressed sensing based image reconstruction technique for CMR perfusion, with applicability to free-breathing examinations. This technique uses local spatio-temporal constraints by regularizing image patches across a small number of dynamics. The technique is compared to conventional dynamic-by-dynamic reconstruction and sparsity regularization using a temporal principal-component (pc) basis, as well as zerofilled data, in multi-slice 2D and 3D CMR perfusion. Qualitative image scores (1=poor, 4=excellent) are used to evaluate the technique in 3D perfusion in 10 patients and 5 healthy subjects. In 4 healthy subjects, the proposed technique was also compared to a breath-hold multi-slice 2D acquisition with parallel imaging in terms of signal intensity curves. Results The proposed technique results in images that are superior in terms of spatial and temporal blurring compared to the other techniques, even in free-breathing datasets. The image scores indicate a significant improvement compared to other techniques in 3D perfusion (2.8±0.5 vs. 2.3±0.5 for x-pc regularization, 1.7±0.5 for dynamic-by-dynamic, 1.1±0.2 for zerofilled). Signal intensity curves indicate similar dynamics of uptake between the proposed method with a 3D acquisition and the breath-hold multi-slice 2D acquisition with parallel imaging. Conclusion The proposed reconstruction utilizes sparsity regularization based on localized information in both spatial and temporal domains for highly-accelerated CMR perfusion, with potential utility in free-breathing examinations.

  5. 1985 Particle Accelerator Conference: Accelerator Engineering and Technology, 11th, Vancouver, Canada, May 13-16, 1985, Proceedings

    NASA Astrophysics Data System (ADS)

    Strathdee, A.

    1985-10-01

    The topics discussed are related to high-energy accelerators and colliders, particle sources and electrostatic accelerators, controls, instrumentation and feedback, beam dynamics, low- and intermediate-energy circular accelerators and rings, RF and other acceleration systems, beam injection, extraction and transport, operations and safety, linear accelerators, applications of accelerators, radiation sources, superconducting supercolliders, new acceleration techniques, superconducting components, cryogenics, and vacuum. Accelerator and storage ring control systems are considered along with linear and nonlinear orbit theory, transverse and longitudinal instabilities and cures, beam cooling, injection and extraction orbit theory, high current dynamics, general beam dynamics, and medical and radioisotope applications. Attention is given to superconducting RF structures, magnet technology, superconducting magnets, and physics opportunities with relativistic heavy ion accelerators.

  6. BIOCONAID System (Bionic Control of Acceleration Induced Dimming). Final Report.

    ERIC Educational Resources Information Center

    Rogers, Dana B.; And Others

    The system described represents a new technique for enhancing the fidelity of flight simulators during high acceleration maneuvers. This technique forces the simulator pilot into active participation and energy expenditure similar to the aircraft pilot undergoing actual accelerations. The Bionic Control of Acceleration Induced Dimming (BIOCONAID)…

  7. Recent trends in analytical methods and separation techniques for drugs of abuse in hair.

    PubMed

    Baciu, T; Borrull, F; Aguilar, C; Calull, M

    2015-01-26

    Hair analysis of drugs of abuse has been a subject of growing interest from a clinical, social and forensic perspective for years because of the broad time detection window after intake in comparison to urine and blood analysis. Over the last few years, hair analysis has gained increasing attention and recognition for the retrospective investigation of drug abuse in a wide variety of contexts, shown by the large number of applications developed. This review aims to provide an overview of the state of the art and the latest trends used in the literature from 2005 to the present in the analysis of drugs of abuse in hair, with a special focus on separation analytical techniques and their hyphenation with mass spectrometry detection. The most recently introduced sample preparation techniques are also addressed in this paper. The main strengths and weaknesses of all of these approaches are critically discussed by means of relevant applications. Copyright © 2014 Elsevier B.V. All rights reserved.

  8. A variable acceleration calibration system

    NASA Astrophysics Data System (ADS)

    Johnson, Thomas H.

    2011-12-01

    A variable acceleration calibration system that applies loads using gravitational and centripetal acceleration serves as an alternative, efficient and cost effective method for calibrating internal wind tunnel force balances. Two proof-of-concept variable acceleration calibration systems are designed, fabricated and tested. The NASA UT-36 force balance served as the test balance for the calibration experiments. The variable acceleration calibration systems are shown to be capable of performing three-component calibration experiments with an approximate applied load error on the order of 1% of the full-scale calibration loads. Sources of error are identified using experimental design methods and a propagation of uncertainty analysis. Three types of uncertainty are identified for the systems and are attributed to prediction error, calibration error and pure error. Angular velocity uncertainty is shown to be the largest identified source of prediction error. The calibration uncertainties using a production variable acceleration based system are shown to be potentially equivalent to those of current methods. The production quality system can be realized using lighter materials and more precise instrumentation. Further research is needed to account for balance deflection, forcing effects due to vibration, and large tare loads. A gyroscope measurement technique is shown to be capable of resolving the balance deflection angle calculation. Long-term research objectives include a demonstration of a six degree of freedom calibration, and a large capacity balance calibration.
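
    Since the applied load is the vector sum of gravitational and centripetal acceleration, the core calculation is short. A minimal sketch, assuming a vertical spin axis so the two components are orthogonal; the mass, arm radius, and spin rate below are illustrative, not values from the paper:

      import math

      def applied_load(m, r, omega, g=9.80665):
          """Net load (N) on a calibration mass m (kg) at radius r (m),
          spinning at omega (rad/s) about a vertical axis: gravity acts
          axially, the centripetal term omega^2*r radially, so the two
          components add orthogonally."""
          return m * math.sqrt(g**2 + (omega**2 * r)**2)

      # Example: 2 kg mass on a 0.5 m arm at 60 rpm
      omega = 60 * 2 * math.pi / 60.0   # rpm -> rad/s
      print(f"applied load: {applied_load(2.0, 0.5, omega):.2f} N")

    The quadratic dependence on omega also makes plain why angular velocity uncertainty dominates the prediction error: a relative error in omega enters the centripetal term doubled.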

  9. The effect of team accelerated instruction on students’ mathematics achievement and learning motivation

    NASA Astrophysics Data System (ADS)

    Sri Purnami, Agustina; Adi Widodo, Sri; Charitas Indra Prahmana, Rully

    2018-01-01

    This study aimed to determine the improvement in achievement and motivation to learn mathematics produced by Team Accelerated Instruction. The research method was a descriptive pre-test/post-test experiment. The population in this study was all students of class VIII junior high school in Jogjakarta. The sample was taken using a cluster random sampling technique. The instruments used in this research were a questionnaire and a test. The data analysis technique used was the Wilcoxon test. It was concluded that there was an increase in motivation and student achievement in class VIII on linear equation system material when using the Team Accelerated Instruction learning model. Based on these results, the Team Accelerated Instruction model can be used as a variation model in learning mathematics.
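
    The Wilcoxon signed-rank test used here is a paired, non-parametric test, appropriate when pre-/post-test score differences cannot be assumed normal. A minimal sketch of that analysis step in Python; the scores are invented for illustration:

      from scipy.stats import wilcoxon

      pre  = [55, 60, 48, 70, 62, 58, 65, 50]   # hypothetical pre-test scores
      post = [68, 72, 55, 78, 70, 66, 75, 61]   # hypothetical post-test scores

      stat, p = wilcoxon(pre, post)             # paired signed-rank test
      print(f"W = {stat}, p = {p:.4f}")
      if p < 0.05:
          print("significant change after Team Accelerated Instruction")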

  10. Incorporating Students' Self-Designed, Research-Based Analytical Chemistry Projects into the Instrumentation Curriculum

    ERIC Educational Resources Information Center

    Gao, Ruomei

    2015-01-01

    In a typical chemistry instrumentation laboratory, students learn analytical techniques through a well-developed procedure. Such an approach, however, does not engage students in a creative endeavor. To foster the intrinsic motivation of students' desire to learn, improve their confidence in self-directed learning activities and enhance their…

  11. Convex Accelerated Maximum Entropy Reconstruction

    PubMed Central

    Worley, Bradley

    2016-01-01

    Maximum entropy (MaxEnt) spectral reconstruction methods provide a powerful framework for spectral estimation of nonuniformly sampled datasets. Many methods exist within this framework, usually defined based on the magnitude of a Lagrange multiplier in the MaxEnt objective function. An algorithm is presented here that utilizes accelerated first-order convex optimization techniques to rapidly and reliably reconstruct nonuniformly sampled NMR datasets using the principle of maximum entropy. This algorithm – called CAMERA for Convex Accelerated Maximum Entropy Reconstruction Algorithm – is a new approach to spectral reconstruction that exhibits fast, tunable convergence in both constant-aim and constant-lambda modes. A high-performance, open source NMR data processing tool is described that implements CAMERA, and brief comparisons to existing reconstruction methods are made on several example spectra. PMID:26894476
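
    CAMERA's exact objective and line-search details are not given in this record, but the accelerated first-order machinery it builds on can be illustrated with a generic Nesterov-type scheme on a smooth convex objective. A sketch under those assumptions; the quadratic test problem, step size, and iteration count are illustrative:

      import numpy as np

      def nesterov_minimize(grad, x0, step, iters=200):
          """Nesterov accelerated gradient descent for a smooth convex
          objective; step should be <= 1/L, with L the Lipschitz
          constant of grad."""
          x, y, t = x0.copy(), x0.copy(), 1.0
          for _ in range(iters):
              x_new = y - step * grad(y)                     # gradient step at lookahead point
              t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
              y = x_new + ((t - 1.0) / t_new) * (x_new - x)  # momentum extrapolation
              x, t = x_new, t_new
          return x

      # Illustrative smooth problem: minimize 0.5*||A x - b||^2
      rng = np.random.default_rng(0)
      A = rng.standard_normal((50, 20))
      b = rng.standard_normal(50)
      L = np.linalg.norm(A, 2) ** 2                          # Lipschitz constant of the gradient
      x = nesterov_minimize(lambda v: A.T @ (A @ v - b), np.zeros(20), 1.0 / L)
      print(np.linalg.norm(A @ x - b))

    The momentum extrapolation is what distinguishes an accelerated method from plain gradient descent: it improves the worst-case convergence rate from O(1/k) to O(1/k^2) on smooth convex problems.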

  12. Recent Applications of Carbon-Based Nanomaterials in Analytical Chemistry: Critical Review

    PubMed Central

    Scida, Karen; Stege, Patricia W.; Haby, Gabrielle; Messina, Germán A.; García, Carlos D.

    2011-01-01

    The objective of this review is to provide a broad overview of the advantages and limitations of carbon-based nanomaterials with respect to analytical chemistry. Aiming to illustrate the impact of nanomaterials on the development of novel analytical applications, developments reported in the 2005–2010 period have been included and divided into sample preparation, separation, and detection. Within each section, fullerenes, carbon nanotubes, graphene, and composite materials will be addressed specifically. Although only briefly discussed, included is a section highlighting nanomaterials with interesting catalytic properties that can be used in the design of future devices for analytical chemistry. PMID:21458626

  13. On Establishing Big Data Wave Breakwaters with Analytics (Invited)

    NASA Astrophysics Data System (ADS)

    Riedel, M.

    2013-12-01

    The Research Data Alliance Big Data Analytics (RDA-BDA) Interest Group seeks to develop community-based recommendations on feasible data analytics approaches to address scientific community needs for utilizing large quantities of data. RDA-BDA seeks to analyze different scientific domain applications and their potential use of various big data analytics techniques. A systematic classification of feasible combinations of analysis algorithms, analytical tools, data and resource characteristics and scientific queries will be covered in these recommendations. These combinations are complex, since a wide variety of different data analysis algorithms exist (e.g. specific algorithms using GPUs for analyzing brain images) that need to work together with multiple analytical tools, reaching from simple (iterative) map-reduce methods (e.g. with Apache Hadoop or Twister) to sophisticated higher-level frameworks that leverage machine learning algorithms (e.g. Apache Mahout). These computational analysis techniques are often augmented with visual analytics techniques (e.g. computational steering on large-scale high performance computing platforms) to put human judgement into the analysis loop, or with new approaches to databases that are designed to support new forms of unstructured or semi-structured data, as opposed to the more traditional structured databases (e.g. relational databases). More recently, data analysis and the underpinning analytics frameworks also have to consider the energy footprints of underlying resources. To sum up, the aim of this talk is to provide pieces of information to understand big data analytics in the context of science and engineering, using the aforementioned classification as the lighthouse and as the frame of reference for a systematic approach. This talk will provide insights about big data analytics methods in the context of science within various communities and offers different views of how approaches to correlation and causality offer complementary methods

  14. Analytic Solution of the Electromagnetic Eigenvalues Problem in a Cylindrical Resonator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Checchin, Mattia; Martinello, Martina

    Resonant accelerating cavities are key components in modern particle accelerating facilities. These take advantage of electromagnetic fields resonating at microwave frequencies to accelerate charged particles. Particles gain finite energy at each passage through a cavity if in phase with the resonating field, reaching energies even of the order of TeV when a cascade of accelerating resonators is present. In order to understand how a resonant accelerating cavity transfers energy to charged particles, it is important to determine how the electromagnetic modes are excited in such resonators. In this paper we present a complete analytical calculation of the resonating fields for a simple cylindrical-shaped cavity.
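
    For reference, the closed-form result that such a calculation arrives at for an ideal cylindrical (pillbox) cavity of radius R and length L is standard: with x_{mn} the n-th zero of the Bessel function J_m and x'_{mn} the n-th zero of its derivative, the resonant frequencies are

      f^{\mathrm{TM}}_{mnp} = \frac{c}{2\pi}\sqrt{\left(\frac{x_{mn}}{R}\right)^2 + \left(\frac{p\pi}{L}\right)^2}, \qquad f^{\mathrm{TE}}_{mnp} = \frac{c}{2\pi}\sqrt{\left(\frac{x'_{mn}}{R}\right)^2 + \left(\frac{p\pi}{L}\right)^2}

    with p = 0, 1, 2, ... for TM modes and p = 1, 2, ... for TE modes. The accelerating mode is TM010, whose frequency f_{010} = c\,x_{01}/(2\pi R), with x_{01} ≈ 2.405, depends only on the cavity radius.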

  15. A Survey of Shape Parameterization Techniques

    NASA Technical Reports Server (NTRS)

    Samareh, Jamshid A.

    1999-01-01

    This paper provides a survey of shape parameterization techniques for multidisciplinary optimization and highlights some emerging ideas. The survey focuses on the suitability of available techniques for complex configurations, with suitability criteria based on the efficiency, effectiveness, ease of implementation, and availability of analytical sensitivities for geometry and grids. The paper also contains a section on field grid regeneration, grid deformation, and sensitivity analysis techniques.

  16. Plug-in module acceleration feedback control for fast steering mirror-based beam stabilization systems

    NASA Astrophysics Data System (ADS)

    Deng, Chao; Ren, Wei; Mao, Yao; Ren, Ge

    2017-08-01

    A plug-in module acceleration feedback control (Plug-In AFC) strategy based on the disturbance observer (DOB) principle is proposed for charge-coupled device (CCD)-based fast steering mirror (FSM) stabilization systems. In classical FSM tracking systems, dual-loop control (DLC), including velocity feedback and position feedback, is usually utilized to enhance the closed-loop performance. Due to the mechanical resonance of the system and CCD time delay, the closed-loop bandwidth is severely restricted. To solve this problem, cascade acceleration feedback control (AFC), which is a kind of high-precision robust control method, is introduced to strengthen the disturbance rejection property. However, in practical applications, it is difficult to realize an integral algorithm in an acceleration controller to compensate for the quadratic differential contained in the FSM acceleration model, resulting in a challenging controller design and a limited improvement. To optimize the acceleration feedback framework in the FSM system, different from the cascade AFC, the accelerometers are used to construct DOB to compensate for the platform vibrations directly. The acceleration nested loop can be plugged into the velocity loop without changing the system stability, and the controller design is quite simple. A series of comparative experimental results demonstrate that the disturbance rejection property of the CCD-based FSM can be effectively improved by the proposed approach.
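
    The record does not spell out the authors' loop equations, but the standard disturbance-observer construction that Plug-In AFC builds on can be written compactly. With G_n(s) the nominal model from control input u to measured acceleration a, and Q(s) a low-pass filter chosen so that G_n^{-1}(s)Q(s) is proper, the lumped disturbance estimate and the compensated control are

      \hat{d}(s) = Q(s)\left[G_n^{-1}(s)\,a(s) - u(s)\right], \qquad u(s) = u_c(s) - \hat{d}(s)

    Within the bandwidth of Q(s) the estimate tracks the true disturbance, so platform vibrations are rejected without altering the nominal loop dynamics; this is consistent with the paper's claim that the acceleration nested loop can be plugged into the velocity loop without changing system stability.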

  17. Techniques for hazard analysis and their use at CERN.

    PubMed

    Nuttall, C; Schönbacher, H

    2001-01-01

    CERN, the European Organisation for Nuclear Research, is situated near Geneva and has its accelerators and experimental facilities astride the Swiss and French frontiers, attracting physicists from all over the world to this unique laboratory. The main accelerator is situated in a 27 km underground ring and the experiments take place in huge underground caverns in order to detect the fragments resulting from the collision of subatomic particles at speeds approaching that of light. These detectors contain many hundreds of tons of flammable materials, mainly plastics in cables and structural components, flammable gases in the detectors themselves, and cryogenic fluids such as helium and argon. The experiments consume high amounts of electrical power, thus the dangers involved have necessitated the use of analytical techniques to identify the hazards and quantify the risks to personnel and the infrastructure. The techniques described in the paper have been developed in the process industries, where they have been shown to be of great value. They have been successfully applied to CERN industrial and experimental installations and, in some cases, have been instrumental in changing the philosophy of the experimentalists and their detectors.

  18. Extraction Techniques for Polycyclic Aromatic Hydrocarbons in Soils

    PubMed Central

    Lau, E. V.; Gan, S.; Ng, H. K.

    2010-01-01

    This paper aims to provide a review of the analytical extraction techniques for polycyclic aromatic hydrocarbons (PAHs) in soils. The extraction technologies described here include Soxhlet extraction, ultrasonic and mechanical agitation, accelerated solvent extraction, supercritical and subcritical fluid extraction, microwave-assisted extraction, solid phase extraction and microextraction, thermal desorption and flash pyrolysis, as well as fluidised-bed extraction. The influencing factors in the extraction of PAHs from soil such as temperature, type of solvent, soil moisture, and other soil characteristics are also discussed. The paper concludes with a review of the models used to describe the kinetics of PAH desorption from soils during solvent extraction. PMID:20396670
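
    The desorption-kinetics models mentioned in the closing sentence commonly take a multi-compartment first-order form; one widely used two-site variant (shown here only as a representative example, since the review surveys several models) is

      \frac{S_t}{S_0} = F\,e^{-k_{\mathrm{fast}}\,t} + (1 - F)\,e^{-k_{\mathrm{slow}}\,t}

    where S_t/S_0 is the fraction of PAH still sorbed at time t, F is the rapidly desorbing fraction, and k_fast and k_slow are the first-order rate constants of the labile and resistant fractions, respectively.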

  19. SU-F-T-201: Acceleration of Dose Optimization Process Using Dual-Loop Optimization Technique for Spot Scanning Proton Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hirayama, S; Fujimoto, R

    Purpose: The purpose was to demonstrate a developed acceleration technique of dose optimization and to investigate its applicability to the optimization process in a treatment planning system (TPS) for proton therapy. Methods: In the developed technique, the dose matrix is divided into two parts, main and halo, based on beam sizes. The boundary of the two parts is varied depending on the beam energy and water equivalent depth by utilizing the beam size as a singular threshold parameter. The optimization is executed with two levels of iterations. In the inner loop, doses from the main part are updated, whereas doses from the halo part remain constant. In the outer loop, the doses from the halo part are recalculated. We implemented this technique in the optimization process of the TPS and investigated the dependence of the speedup effect on the target volume and the technique's applicability to the worst-case optimization (WCO) in benchmarks. Results: We created irradiation plans for various cubic targets and measured the optimization time varying the target volume. The speedup effect improved as the target volume increased, and the calculation speed increased by a factor of six for a 1000 cm3 target. An IMPT plan for the RTOG benchmark phantom was created in consideration of ±3.5% range uncertainties using the WCO. Beams were irradiated at 0, 45, and 315 degrees. The target's prescribed dose and OAR's Dmax were set to 3 Gy and 1.5 Gy, respectively. Using the developed technique, the calculation speed increased by a factor of 1.5. Meanwhile, no significant difference in the calculated DVHs was found before and after incorporating the technique into the WCO. Conclusion: The developed technique could be adapted to the TPS's optimization. The technique was effective particularly for large target cases.
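
    A schematic sketch of the two-level iteration described above. The quadratic objective, gradient update, and the pre-split main/halo dose matrices are placeholders chosen for illustration, not the TPS's actual optimizer:

      import numpy as np

      def dual_loop_optimize(D_main, D_halo, w, target,
                             outer_iters=10, inner_iters=50, lr=1e-3):
          """Two-level spot-weight optimization: the dose matrix is
          pre-split into a dense beam-core ("main") part and a low-dose
          tail ("halo") part (placeholder split; the TPS derives it from
          beam size, energy, and water equivalent depth)."""
          for _ in range(outer_iters):
              halo_dose = D_halo @ w                  # outer loop: refresh halo dose
              for _ in range(inner_iters):            # inner loop: halo held fixed
                  dose = D_main @ w + halo_dose
                  grad = D_main.T @ (dose - target)   # gradient of 0.5*||dose - target||^2
                  w = np.maximum(w - lr * grad, 0.0)  # spot weights stay non-negative
          return w

    The saving comes from the inner loop touching only the main-part matrix; the much larger halo contribution is recomputed once per outer pass.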

  20. OpenFOAM Modeling of Particle Heating and Acceleration in Cold Spraying

    NASA Astrophysics Data System (ADS)

    Leitz, K.-H.; O'Sullivan, M.; Plankensteiner, A.; Kestler, H.; Sigl, L. S.

    2018-01-01

    In cold spraying, a powder material is accelerated and heated in the gas flow of a supersonic nozzle to velocities and temperatures that are sufficient to obtain cohesion of the particles to a substrate. The deposition efficiency of the particles is significantly determined by their velocity and temperature. Particle velocity correlates with the amount of kinetic energy that is converted to plastic deformation and thermal heating. The initial particle temperature significantly influences the mechanical properties of the particle. Velocity and temperature of the particles depend nonlinearly on the pressure and temperature of the gas at the nozzle entrance. In this contribution, a simulation model based on the reactingParcelFoam solver of OpenFOAM is presented and applied for an analysis of particle velocity and temperature in the cold spray nozzle. The model combines a compressible description of the gas flow in the nozzle with Lagrangian particle tracking. The predictions of the simulation model are verified against an analytical description of the gas flow and of the particle acceleration and heating in the nozzle. Based on experimental data, the drag model according to Plessis and Masliyah is identified as best suited for modeling particle heating and acceleration in cold spraying with OpenFOAM.
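
    The analytical verification described above amounts to marching a particle through prescribed gas profiles with a drag law and a convective heating law. A minimal sketch, using a constant sphere drag coefficient as a stand-in for the du Plessis-Masliyah correlation chosen in the paper; all material and heat-transfer parameters below are illustrative:

      import numpy as np

      def particle_history(x, u_gas, T_gas, rho_gas, d_p=25e-6, rho_p=8960.0,
                           c_p=385.0, h=1.0e4, dt=1e-7):
          """March one particle through prescribed gas profiles along the
          nozzle axis x. Drag uses a constant sphere coefficient Cd = 0.44
          (placeholder for the du Plessis-Masliyah correlation); h is a
          lumped convective heat-transfer coefficient."""
          m = rho_p * np.pi * d_p**3 / 6.0        # particle mass
          A = np.pi * d_p**2 / 4.0                # frontal area
          S = np.pi * d_p**2                      # surface area
          v, T, pos = 0.0, 300.0, x[0]
          while pos < x[-1]:
              ug = np.interp(pos, x, u_gas)
              Tg = np.interp(pos, x, T_gas)
              rg = np.interp(pos, x, rho_gas)
              drag = 0.5 * rg * 0.44 * A * (ug - v) * abs(ug - v)
              v += dt * drag / m                      # Newton's second law
              T += dt * h * S * (Tg - T) / (m * c_p)  # lumped-capacitance heating
              pos += dt * max(v, 1e-2)                # avoid stalling at v = 0
          return v, T

      # Uniform gas profiles, just to exercise the function
      x = np.linspace(0.0, 0.1, 50)
      v, T = particle_history(x, np.full(50, 800.0), np.full(50, 600.0), np.full(50, 1.0))
      print(f"exit velocity {v:.0f} m/s, exit temperature {T:.0f} K")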

  1. The beat in laser-accelerated ion beams

    NASA Astrophysics Data System (ADS)

    Schnürer, M.; Andreev, A. A.; Abicht, F.; Bränzel, J.; Koschitzki, Ch.; Platonov, K. Yu.; Priebe, G.; Sandner, W.

    2013-10-01

    Regular modulation in the ion velocity distribution becomes detectable if intense femtosecond laser pulses with very high temporal contrast are used for target normal sheath acceleration of ions. Analytical and numerical analysis of the experimental observation associates the modulation with the half-cycle of the driving laser field period. In processes like ion acceleration, the collective, laser-frequency-determined electron dynamics creates strong fields in the plasma that accelerate the ions. Even the oscillatory motion of electrons and its influence on the acceleration field can dominate over smoothing effects in the plasma if the driving laser pulse has a high temporal contrast. Acceleration parameters can be inferred directly from the experimentally observed modulation period in ion velocity spectra. The appearance of the phenomenon at a temporal contrast of ten orders of magnitude between the intensity of the pulse peak and the amplified spontaneous emission background, as well as remaining intensity wings at the picosecond time-scale, might trigger further parameter studies with even higher contrast.

  2. A Toolbox of Metrology-Based Techniques for Optical System Alignment

    NASA Technical Reports Server (NTRS)

    Coulter, Phillip; Ohl, Raymond G.; Blake, Peter N.; Bos, Brent J.; Casto, Gordon V.; Eichhorn, William L.; Gum, Jeffrey S.; Hadjimichael, Theodore J.; Hagopian, John G.; Hayden, Joseph E.

    2016-01-01

    The NASA Goddard Space Flight Center (GSFC) and its partners have broad experience in the alignment of flight optical instruments and spacecraft structures. Over decades, GSFC developed alignment capabilities and techniques for a variety of optical and aerospace applications. In this paper, we provide an overview of a subset of the capabilities and techniques used on several recent projects in a toolbox format. We discuss a range of applications, from small-scale optical alignment of sensors to mirror and bench examples that make use of various large-volume metrology techniques. We also discuss instruments and analytical tools.

  3. A Toolbox of Metrology-Based Techniques for Optical System Alignment

    NASA Technical Reports Server (NTRS)

    Coulter, Phillip; Ohl, Raymond G.; Blake, Peter N.; Bos, Brent J.; Eichhorn, William L.; Gum, Jeffrey S.; Hadjimichael, Theodore J.; Hagopian, John G.; Hayden, Joseph E.; Hetherington, Samuel E.

    2016-01-01

    The NASA Goddard Space Flight Center (GSFC) and its partners have broad experience in the alignment of flight optical instruments and spacecraft structures. Over decades, GSFC developed alignment capabilities and techniques for a variety of optical and aerospace applications. In this paper, we provide an overview of a subset of the capabilities and techniques used on several recent projects in a "toolbox" format. We discuss a range of applications, from small-scale optical alignment of sensors to mirror and bench examples that make use of various large-volume metrology techniques. We also discuss instruments and analytical tools.

  4. Deriving Earth Science Data Analytics Requirements

    NASA Technical Reports Server (NTRS)

    Kempler, Steven J.

    2015-01-01

    Data analytics applications have made successful strides in the business world, where co-analyzing extremely large sets of independent variables has proven profitable. Today, most data analytics tools and techniques, sometimes applicable to Earth science, have targeted the business industry. In fact, the literature is nearly absent of discussion about Earth science data analytics. Earth science data analytics (ESDA) is the process of examining large amounts of data from a variety of sources to uncover hidden patterns, unknown correlations, and other useful information. ESDA is most often applied to data preparation, data reduction, and data analysis. Co-analysis of an increasing number and volume of Earth science data sets has become more prevalent, ushered in by the plethora of Earth science data sources generated by US programs, international programs, field experiments, ground stations, and citizen scientists. Through work associated with the Earth Science Information Partners (ESIP) Federation, ESDA types have been defined in terms of data analytics end goals, goals which are very different from those in business and require different tools and techniques. A sampling of use cases has been collected and analyzed in terms of data analytics end goal types, volume, specialized processing, and other attributes. The goal of collecting these use cases is to better understand and specify requirements for data analytics tools and techniques yet to be implemented. This presentation will describe the attributes and preliminary findings of ESDA use cases, as well as provide early analysis of data analytics tools/techniques requirements that would support specific ESDA type goals. Representative existing data analytics tools/techniques relevant to ESDA will also be addressed.

  5. [Clinical Application of Analytical and Medical Instruments Mainly Using MS Techniques].

    PubMed

    Tanaka, Koichi

    2016-02-01

    Analytical instruments for clinical use are commonly required to confirm the compounds and forms related to diseases with the highest possible sensitivity, quantitative performance, and specificity and minimal invasiveness, within a short time, easily, and at a low cost. Advances in technical innovation for the mass spectrometer (MS) have led to techniques that meet such requirements. Besides confirming known substances, a purpose and advantage of MS not fully known to the public is its use as a tool to discover unknown phenomena and compounds. An example is clarifying the mechanisms of human diseases. The human body has approximately 100 thousand types of protein, and there may be more than several million types of protein and their metabolites. Most of them have yet to be discovered, and their discovery may give birth to new academic fields and lead to the clarification of diseases, development of new medicines, etc. For example, using the MS system developed under "Contribution to drug discovery and diagnosis by next generation of advanced mass spectrometry system," one of the 30 projects of the "Funding Program for World-Leading Innovative R&D on Science and Technology" (FIRST program), and other individual basic technologies, we succeeded in discovering new disease biomarker candidates for Alzheimer's disease, cancer, etc. Further contribution of MS to clinical medicine can be expected through the development and improvement of new techniques, efforts to verify discoveries, and communication with the medical front.

  6. Advanced induction accelerator designs for ground based and space based FELs

    NASA Astrophysics Data System (ADS)

    Birx, Daniel

    1994-04-01

    The primary goal of this program was to improve the performance of induction accelerators, with particular regard to their being used to drive free electron lasers (FELs). It is hoped that FELs operating at visible wavelengths might someday be used to beam power from earth to extraterrestrial locations. One application of this technology might be strategic theater defense, but this power source might also be used to propel vehicles or supplement solar energized systems. Our path toward achieving this goal was directed first toward optimization of the nonlinear magnetic material used in induction accelerator construction and secondly at the overall design in terms of cost, size and efficiency. We began this research effort with an in-depth study of the properties of various nonlinear magnetic materials. With the data on nonlinear magnetic materials, so important to the optimization of efficiency, in hand, we envisioned a new induction accelerator design where all of the components were packaged together in one container. This induction accelerator module would combine an all-solid-state, nonlinear magnetic driver and the induction accelerator cells in one convenient package. Each accelerator module (denoted SNOMAD-IVB) would produce 1.0 MeV of acceleration, with the exception of the SNOMAD-IV injector module, which would produce 0.5 MeV of acceleration for an electron beam current up to 1000 amperes.

  7. Automatic Beam Path Analysis of Laser Wakefield Particle Acceleration Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rubel, Oliver; Geddes, Cameron G.R.; Cormier-Michel, Estelle

    2009-10-19

    Numerical simulations of laser wakefield particle accelerators play a key role in the understanding of the complex acceleration process and in the design of expensive experimental facilities. As the size and complexity of simulation output grows, an increasingly acute challenge is the practical need for computational techniques that aid in scientific knowledge discovery. To that end, we present a set of data-understanding algorithms that work in concert in a pipeline fashion to automatically locate and analyze high energy particle bunches undergoing acceleration in very large simulation datasets. These techniques work cooperatively by first identifying features of interest in individual timesteps, then integrating features across timesteps, and based on the information derived perform analysis of temporally dynamic features. This combination of techniques supports accurate detection of particle beams, enabling a deeper level of scientific understanding of physical phenomena than has been possible before. By combining efficient data analysis algorithms and state-of-the-art data management we enable high-performance analysis of extremely large particle datasets in 3D. We demonstrate the usefulness of our methods for a variety of 2D and 3D datasets and discuss the performance of our analysis pipeline.
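
    A toy illustration of the per-timestep detection plus temporal integration idea described above; the energy-threshold selection and set intersection are placeholders, since the actual pipeline uses far more sophisticated feature detection and data management:

      import numpy as np

      def find_beam_particles(steps, energy_cut):
          """steps: list of (ids, energies) array pairs, one per timestep.
          Select particles above the energy cut in every timestep, a crude
          stand-in for per-timestep feature detection followed by
          integration of features across timesteps."""
          selected = [set(ids[energies > energy_cut]) for ids, energies in steps]
          return set.intersection(*selected)   # persistently energetic particles

      # Two hypothetical timesteps with particle ids and energies (arbitrary units)
      rng = np.random.default_rng(0)
      steps = [(np.arange(1000), rng.exponential(5.0, 1000)) for _ in range(2)]
      print(len(find_beam_particles(steps, energy_cut=20.0)))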

  8. Eutectic-based wafer-level-packaging technique for piezoresistive MEMS accelerometers and bond characterization using molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    Aono, T.; Kazama, A.; Okada, R.; Iwasaki, T.; Isono, Y.

    2018-03-01

    We developed a eutectic-based wafer-level-packaging (WLP) technique for piezoresistive micro-electromechanical systems (MEMS) accelerometers on the basis of molecular dynamics analyses and shear tests of WLP accelerometers. The bonding conditions were experimentally and analytically determined to realize a high shear strength without solder material atoms diffusing to adhesion layers. Molecular dynamics (MD) simulations and energy dispersive x-ray (EDX) spectrometry done after the shear tests clarified the eutectic reaction of the solder materials used in this research. Energy relaxation calculations in MD showed that the diffusion of solder material atoms into the adhesive layer was promoted at a higher temperature. Tensile creep MD simulations also suggested that the local potential energy in a solder material model determined the fracture points of the model. These numerical results were supported by the shear tests and EDX analyses for WLP accelerometers. Consequently, a bonding load of 9.8 kN and temperature of 300 °C were found to be rational conditions because the shear strength was sufficient to endure the polishing process after the WLP process and there was little diffusion of solder material atoms to the adhesion layer. Also, eutectic-bonding-based WLP was effective for controlling the attenuation of the accelerometers by determining the thickness of electroplated solder materials that played the role of a cavity between the accelerometers and lids. If the gap distance between the two was less than 6.2 µm, the signal gains for x- and z-axis acceleration were less than 20 dB even at the resonance frequency due to air-damping.

  9. Application of thin layer activation technique for surface wear studies in Zr based materials using charged particle induced nuclear reactions

    NASA Astrophysics Data System (ADS)

    Chowdhury, D. P.; Pal, Sujit; Parthasarathy, R.; Mathur, P. K.; Kohli, A. K.; Limaye, P. K.

    1998-09-01

    A thin layer activation (TLA) technique has been developed for Zr based alloy materials, e.g., zircaloy II, using 40 MeV α-particles from the Variable Energy Cyclotron Centre at Calcutta. A brief description of the methodology of the TLA technique for determining surface wear is presented. The sensitivity of the measurement of surface wear in zircaloy material is found to be 0.22±0.05 μm. The surface wear is determined by the TLA technique in zircaloy material used in pressurised heavy water reactors, and the values have been compared with those obtained by a conventional technique for analytical validation of the TLA technique.

  10. Optimized operation of dielectric laser accelerators: Multibunch

    NASA Astrophysics Data System (ADS)

    Hanuka, Adi; Schächter, Levi

    2018-06-01

    We present a self-consistent analysis to determine the optimal charge, gradient, and efficiency for laser driven accelerators operating with a train of microbunches. Specifically, we account for the beam loading reduction on the material occurring at the dielectric-vacuum interface. In the case of a train of microbunches, such a beam loading effect could be detrimental due to energy spread; however, this may be compensated by a tapered laser pulse. We ultimately propose an optimization procedure with an analytical solution for a group velocity equal to half the speed of light. This optimization results in a maximum efficiency 20% lower than in the single bunch case, and a total accelerated charge of 10⁶ electrons in the train. The approach holds promise for improving operations of dielectric laser accelerators and may have an impact on emerging laser accelerators driven by high-power optical lasers.

  11. Beam-driven acceleration in ultra-dense plasma media

    DOE PAGES

    Shin, Young-Min

    2014-09-15

    Accelerating parameters of beam-driven wakefield acceleration in an extremely dense plasma column have been analyzed with the dynamic framed particle-in-cell plasma simulator and compared with analytic calculations. In the model, a witness beam undergoes a TeV/m scale alternating potential gradient excited by a micro-bunched drive beam in 10²⁵ m⁻³ and 1.6 × 10²⁸ m⁻³ plasma columns. The acceleration gradient, energy gain, and transformer ratio have been extensively studied in the quasi-linear, linear, and blowout regimes. The simulation analysis indicated that in the beam-driven acceleration system a hollow plasma channel offers a 20% higher acceleration gradient when the channel radius (r) is enlarged from 0.2 λp to 0.6 λp in the blowout regime. This paper suggests the feasibility of TeV/m scale acceleration with a hollow crystalline structure (e.g. nanotubes) of high electron plasma density.

  12. Computing sensitivity and selectivity in parallel factor analysis and related multiway techniques: the need for further developments in net analyte signal theory.

    PubMed

    Olivieri, Alejandro C

    2005-08-01

    Sensitivity and selectivity are important figures of merit in multiway analysis, regularly employed for comparison of the analytical performance of methods and for experimental design and planning. They are especially interesting in the second-order advantage scenario, where the latter property allows for the analysis of samples with a complex background, permitting analyte determination even in the presence of unsuspected interferences. Since no general theory exists for estimating the multiway sensitivity, Monte Carlo numerical calculations have been developed for estimating variance inflation factors, as a convenient way of assessing both sensitivity and selectivity parameters for the popular parallel factor (PARAFAC) analysis and also for related multiway techniques. When the second-order advantage is achieved, the existing expressions derived from net analyte signal theory are only able to adequately cover cases where a single analyte is calibrated using second-order instrumental data. However, they fail for certain multianalyte cases, or when third-order data are employed, calling for an extension of net analyte theory. The results have strong implications in the planning of multiway analytical experiments.

  13. Recent applications of carbon-based nanomaterials in analytical chemistry: critical review.

    PubMed

    Scida, Karen; Stege, Patricia W; Haby, Gabrielle; Messina, Germán A; García, Carlos D

    2011-04-08

    The objective of this review is to provide a broad overview of the advantages and limitations of carbon-based nanomaterials with respect to analytical chemistry. Aiming to illustrate the impact of nanomaterials on the development of novel analytical applications, developments reported in the 2005-2010 period have been included and divided into sample preparation, separation, and detection. Within each section, fullerenes, carbon nanotubes, graphene, and composite materials will be addressed specifically. Although only briefly discussed, included is a section highlighting nanomaterials with interesting catalytic properties that can be used in the design of future devices for analytical chemistry. Copyright © 2011 Elsevier B.V. All rights reserved.

  14. Accelerator-based epithermal neutron sources for boron neutron capture therapy of brain tumors.

    PubMed

    Blue, Thomas E; Yanch, Jacquelyn C

    2003-01-01

    This paper reviews the development of low-energy light ion accelerator-based neutron sources (ABNSs) for the treatment of brain tumors through an intact scalp and skull using boron neutron capture therapy (BNCT). A major advantage of an ABNS for BNCT over reactor-based neutron sources is the potential for siting within a hospital. Consequently, light-ion accelerators that are injectors to larger machines in high-energy physics facilities are not considered. An ABNS for BNCT is composed of: (1) the accelerator hardware for producing a high current charged particle beam, (2) an appropriate neutron-producing target and target heat removal system (HRS), and (3) a moderator/reflector assembly to render the flux energy spectrum of neutrons produced in the target suitable for patient irradiation. As a consequence of the efforts of researchers throughout the world, progress has been made on the design, manufacture, and testing of these three major components. Although an ABNS facility has not yet been built that has optimally assembled these three components, the feasibility of clinically useful ABNSs has been clearly established. Both electrostatic and radio frequency linear accelerators of reasonable cost (approximately 1.5 M dollars) appear to be capable of producing charged particle beams, with combinations of accelerated particle energy (a few MeV) and beam currents (approximately 10 mA) that are suitable for a hospital-based ABNS for BNCT. The specific accelerator performance requirements depend upon the charged particle reaction by which neutrons are produced in the target and the clinical requirements for neutron field quality and intensity. The accelerator performance requirements are more demanding for beryllium than for lithium as a target. However, beryllium targets are more easily cooled. The accelerator performance requirements are also more demanding for greater neutron field quality and intensity. Target HRSs that are based on submerged-jet impingement and

  15. Accelerator Based Tools of Stockpile Stewardship

    NASA Astrophysics Data System (ADS)

    Seestrom, Susan

    2017-01-01

    The Manhattan Project had to solve difficult challenges in physics and materials science. During the cold war a large nuclear stockpile was developed. In both cases, the approach was largely empirical. Today that stockpile must be certified without nuclear testing, a task that becomes more difficult as the stockpile ages. I will discuss the role of modern accelerator based experiments, such as x-ray radiography, proton radiography, neutron and nuclear physics experiments, in stockpile stewardship. These new tools provide data of exceptional sensitivity and are answering questions about the stockpile, improving our scientific understanding, and providing validation for the computer simulations that are relied upon to certify today's stockpile.

  16. Screening of synthetic PDE-5 inhibitors and their analogues as adulterants: analytical techniques and challenges.

    PubMed

    Patel, Dhavalkumar Narendrabhai; Li, Lin; Kee, Chee-Leong; Ge, Xiaowei; Low, Min-Yong; Koh, Hwee-Ling

    2014-01-01

    The popularity of phosphodiesterase type 5 (PDE-5) enzyme inhibitors for the treatment of erectile dysfunction has led to an increase in the prevalence of illicit sexual performance enhancement products. PDE-5 inhibitors, namely sildenafil, tadalafil and vardenafil, and their unapproved designer analogues are being increasingly used as adulterants in herbal products and health supplements marketed for sexual performance enhancement. To date, more than 50 unapproved analogues of prescription PDE-5 inhibitors have been reported as adulterants in the literature. To avoid detection of such adulteration by standard screening protocols, the perpetrators of such illegal products are investing time and resources to synthesize exotic analogues and devise novel means of adulteration. A comprehensive review of conventional and advanced analytical techniques to detect and characterize the adulterants is presented. The rapid identification and structural elucidation of unknown analogues as adulterants is greatly enhanced by the wide myriad of analytical techniques employed, including high performance liquid chromatography (HPLC), gas chromatography-mass spectrometry (GC-MS), liquid chromatography-mass spectrometry (LC-MS), nuclear magnetic resonance (NMR) spectroscopy, vibrational spectroscopy, liquid chromatography-Fourier transform ion cyclotron resonance-mass spectrometry (LC-FT-ICR-MS), liquid chromatography-hybrid triple quadrupole linear ion trap mass spectrometry with information dependent acquisition, ultra high performance liquid chromatography-time of flight-mass spectrometry (UHPLC-TOF-MS), ion mobility spectroscopy (IMS) and immunoassay methods. The many challenges in detecting and characterizing such adulterants, and the need for concerted effort to curb adulteration in order to safeguard public safety and interest, are discussed. Copyright © 2013 Elsevier B.V. All rights reserved.

  17. An Analytical Technique to Elucidate Field Impurities From Manufacturing Uncertainties of a Double-Pancake-Type HTS Insert for High Field LTS/HTS NMR Magnets

    PubMed Central

    Hahn, Seung-yong; Ahn, Min Cheol; Bobrov, Emanuel Saul; Bascuñán, Juan; Iwasa, Yukikazu

    2010-01-01

    This paper addresses adverse effects of dimensional uncertainties of an HTS insert assembled with double-pancake coils on spatial field homogeneity. Each DP coil was wound with Bi2223 tapes having dimensional tolerances larger than one order of magnitude of those accepted for LTS wires used in conventional NMR magnets. The paper presents: 1) dimensional variations measured in two LTS/HTS NMR magnets, 350 MHz (LH350) and 700 MHz (LH700), both built and operated at the Francis Bitter Magnet Laboratory; and 2) an analytical technique and its application to elucidate the field impurities measured with the two LTS/HTS magnets. Field impurities computed with the analytical model and those measured with the two LTS/HTS magnets agree quite well, demonstrating that this analytical technique is applicable to design a DP-assembled HTS insert with an improved field homogeneity for a high-field LTS/HTS NMR magnet. PMID:20407595

  18. Spatially inhomogeneous acceleration of electrons in solar flares

    NASA Astrophysics Data System (ADS)

    Stackhouse, Duncan J.; Kontar, Eduard P.

    2018-04-01

    The imaging spectroscopy capabilities of the Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI) enable the examination of the accelerated electron distribution throughout a solar flare region. In particular, it has been revealed that the energisation of these particles takes place over a region of finite size, sometimes resolved by RHESSI observations. In this paper, we present, for the first time, a spatially distributed acceleration model and investigate the role of inhomogeneous acceleration on the observed X-ray emission properties. We have modelled transport explicitly, examining scatter-free and diffusive transport within the acceleration region, and compared with the analytic leaky-box solution. The results show the importance of including this spatial variation when modelling electron acceleration in solar flares. The presence of an inhomogeneous, extended acceleration region produces a spectral index that is, in most cases, different from the simple leaky-box prediction. In particular, it results in a generally softer spectral index than predicted by the leaky-box solution, for both scatter-free and diffusive transport, and thus should be taken into account when modelling stochastic acceleration in solar flares.

  19. Analytical Chemistry: A Literary Approach.

    ERIC Educational Resources Information Center

    Lucy, Charles A.

    2000-01-01

    Provides an anthology of references to descriptions of analytical chemistry techniques from history, popular fiction, and film which can be used to capture student interest and frame discussions of chemical techniques. (WRM)

  20. Single-Step Reagentless Laser Scribing Fabrication of Electrochemical Paper-Based Analytical Devices.

    PubMed

    de Araujo, William R; Frasson, Carolina M R; Ameku, Wilson A; Silva, José R; Angnes, Lúcio; Paixão, Thiago R L C

    2017-11-20

    A single-step laser scribing process is used to pattern nanostructured electrodes on paper-based devices. The facile and low-cost technique eliminates the need for chemical reagents or controlled conditions. This process involves the use of a CO₂ laser to pyrolyze the surface of the paperboard, producing a conductive porous non-graphitizing carbon material composed of graphene sheets and composites with aluminosilicate nanoparticles. The new electrode material was extensively characterized, and it exhibits high conductivity and an enhanced active/geometric area ratio; it is thus well-suited for electrochemical purposes. As a proof-of-concept, the devices were successfully employed for different analytical applications in the clinical, pharmaceutical, food, and forensic fields. The scalable and green fabrication method associated with the features of the new material is highly promising for the development of portable electrochemical devices. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  1. Residual acceleration data on IML-1: Development of a data reduction and dissemination plan

    NASA Technical Reports Server (NTRS)

    Rogers, Melissa J. B.; Alexander, J. Iwan D.; Wolf, Randy

    1992-01-01

    The main thrust of our work in the third year of contract NAG8-759 was the development and analysis of various data processing techniques that may be applicable to residual acceleration data. Our goal is the development of a data processing guide that low gravity principal investigators can use to assess their need for accelerometer data and then formulate an acceleration data analysis strategy. The work focused on the flight of the first International Microgravity Laboratory (IML-1) mission. We are also developing a data base management system to handle large quantities of residual acceleration data. This type of system should be an integral tool in the detailed analysis of accelerometer data. The system will manage a large graphics data base in the support of supervised and unsupervised pattern recognition. The goal of the pattern recognition phase is to identify specific classes of accelerations so that these classes can be easily recognized in any data base. The data base management system is being tested on the Spacelab 3 (SL3) residual acceleration data.

  2. A big data geospatial analytics platform - Physical Analytics Integrated Repository and Services (PAIRS)

    NASA Astrophysics Data System (ADS)

    Hamann, H.; Jimenez Marianno, F.; Klein, L.; Albrecht, C.; Freitag, M.; Hinds, N.; Lu, S.

    2015-12-01

    A major challenge in leveraging big geospatial data sets is the ability to quickly integrate multiple data sources into physical and statistical models and run these models in real time. A geospatial data platform called Physical Analytics Information and Services (PAIRS) has been developed on top of an open source hardware and software stack to manage terabytes of data. A new data interpolation and re-gridding scheme is implemented in which any geospatial data layer can be associated with a set of global grids whose resolution doubles for consecutive layers. Each pixel on the PAIRS grid has an index that is a combination of location and time stamp. The indexing allows quick access to data sets that are part of the global data layers, allowing retrieval of only the data of interest. PAIRS takes advantage of a parallel processing framework (Hadoop) in a cloud environment to digest, curate, and analyze the data sets while being very robust and stable. The data are stored in a distributed no-SQL database (HBase) across multiple servers; data upload and retrieval are parallelized, with the original analytics task broken up into smaller areas/volumes, analyzed independently, and then reassembled for the original geographical area. The differentiating aspect of PAIRS is the ability to accelerate model development across large geographical regions and spatial resolutions ranging from 0.1 m up to hundreds of kilometers. System performance is benchmarked on real-time automated data ingestion and retrieval of MODIS and Landsat data layers. The data layers are curated for sensor error, verified for correctness, and analyzed statistically to detect local anomalies. Multi-layer queries enable PAIRS to filter different data
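
    The location-plus-time indexing on a grid whose resolution doubles at each layer is essentially a quadtree key concatenated with a time stamp. A toy illustration in Python; the key layout, level count, and formatting are inventions for this sketch, not PAIRS's actual schema:

      def pairs_like_key(lat, lon, level, timestamp):
          """Quadtree-style key for a nested global grid whose resolution
          doubles at each level, combined with a time stamp: one digit
          (0-3) per level selects a quadrant of the current cell."""
          key = ""
          lat_lo, lat_hi, lon_lo, lon_hi = -90.0, 90.0, -180.0, 180.0
          for _ in range(level):
              lat_mid = (lat_lo + lat_hi) / 2
              lon_mid = (lon_lo + lon_hi) / 2
              key += str((lat >= lat_mid) * 2 + (lon >= lon_mid))
              lat_lo, lat_hi = (lat_mid, lat_hi) if lat >= lat_mid else (lat_lo, lat_mid)
              lon_lo, lon_hi = (lon_mid, lon_hi) if lon >= lon_mid else (lon_lo, lon_mid)
          return f"{key}-{timestamp}"

      print(pairs_like_key(40.9, -73.8, 10, "2015-07-01T00:00Z"))

    Keys built this way share prefixes for nearby pixels, so a range scan over the key space retrieves only the spatial region of interest.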

  3. BUPT_PRIS at TREC 2014 Knowledge Base Acceleration Track

    DTIC Science & Technology

    2014-11-01

    Yuanyuan Qi, Ye Xu, Dongxu Zhang, Weiran Xu (PRIS team, Beijing University of Posts and Telecommunications). Report on the team's participation in the TREC 2014 Knowledge Base Acceleration track.

  4. Theoretical and Observational Analysis of Particle Acceleration Mechanisms at Astrophysical Shocks

    NASA Astrophysics Data System (ADS)

    Lever, Edward Lawrence

    We analytically and numerically investigate the viability of Shock Surfing as a pre-injection mechanism for Diffusive Shock Acceleration, believed to be responsible for the production of Cosmic Rays. We demonstrate mathematically and from computer simulations that four critical conditions must be satisfied for Shock Surfing to function; the shock ramp must be narrow, the shock front must be smooth, the magnetic field angle must be very nearly perpendicular and, finally, these conditions must persist without interruption over substantial time periods and spatial scales. We quantify these necessary conditions, exhibit predictive functions for velocity maxima and accelerated ion fluxes based on observable shock parameters, and show unequivocally from current observational evidence that all of these necessary conditions are violated at shocks within the heliosphere, at the heliospheric Termination Shock, and also at Supernovae.

  5. An in situ accelerator-based diagnostic for plasma-material interactions science on magnetic fusion devices.

    PubMed

    Hartwig, Zachary S; Barnard, Harold S; Lanza, Richard C; Sorbom, Brandon N; Stahle, Peter W; Whyte, Dennis G

    2013-12-01

    This paper presents a novel particle accelerator-based diagnostic that nondestructively measures the evolution of material surface compositions inside magnetic fusion devices. The diagnostic's purpose is to contribute to an integrated understanding of plasma-material interactions in magnetic fusion, which is severely hindered by a dearth of in situ material surface diagnosis. The diagnostic aims to remotely generate isotopic concentration maps on a plasma shot-to-shot timescale that cover a large fraction of the plasma-facing surface inside of a magnetic fusion device without the need for vacuum breaks or physical access to the material surfaces. Our instrument uses a compact (~1 m), high-current (~1 milliamp) radio-frequency quadrupole accelerator to inject 0.9 MeV deuterons into the Alcator C-Mod tokamak at MIT. We control the tokamak magnetic fields, in between plasma shots, to steer the deuterons to material surfaces, where the deuterons cause high-Q nuclear reactions with low-Z isotopes ~5 μm into the material. The induced neutrons and gamma rays are measured with scintillation detectors; energy spectra analysis provides quantitative reconstruction of surface compositions. An overview of the diagnostic technique, known as accelerator-based in situ materials surveillance (AIMS), and of the first AIMS diagnostic on the Alcator C-Mod tokamak is given. Experimental validation is shown to demonstrate that an optimized deuteron beam is injected into the tokamak, that low-Z isotopes such as deuterium and boron can be quantified on the material surfaces, and that magnetic steering provides access to different measurement locations. The first AIMS analysis, which measures the relative change in deuterium at a single surface location at the end of the Alcator C-Mod FY2012 plasma campaign, is also presented.

  6. Rapid acceleration of protons upstream of earthward propagating dipolarization fronts

    PubMed Central

    Ukhorskiy, AY; Sitnov, MI; Merkin, VG; Artemyev, AV

    2013-01-01

    Transport and acceleration of ions in the magnetotail largely occurs in the form of discrete impulsive events associated with a steep increase of the tail magnetic field normal to the neutral plane (Bz), which are referred to as dipolarization fronts. The goal of this paper is to investigate how protons initially located upstream of earthward moving fronts are accelerated at their encounter. According to our analytical analysis and simplified two-dimensional test-particle simulations of equatorially mirroring particles, there are two regimes of proton acceleration: trapping and quasi-trapping, which are realized depending on whether the front is preceded by a negative depletion in Bz. We then use three-dimensional test-particle simulations to investigate how these acceleration processes operate in a realistic magnetotail geometry. For this purpose we construct an analytical model of the front which is superimposed onto the ambient field of the magnetotail. According to our numerical simulations, both trapping and quasi-trapping can produce rapid acceleration of protons by more than an order of magnitude. In the case of trapping, the acceleration levels depend on the amount of time particles stay in phase with the front which is controlled by the magnetic field curvature ahead of the front and the front width. Quasi-trapping does not cause particle scattering out of the equatorial plane. Energization levels in this case are limited by the number of encounters particles have with the front before they get magnetized behind it. PMID:26167430

  7. Applying nonlinear diffusion acceleration to the neutron transport k-Eigenvalue problem with anisotropic scattering

    DOE PAGES

    Willert, Jeffrey; Park, H.; Taitano, William

    2015-11-01

    High-order/low-order (or moment-based acceleration) algorithms have been used to significantly accelerate the solution to the neutron transport k-eigenvalue problem over the past several years. Recently, the nonlinear diffusion acceleration algorithm has been extended to solve fixed-source problems with anisotropic scattering sources. In this paper, we demonstrate that we can extend this algorithm to k-eigenvalue problems in which the scattering source is anisotropic and a significant acceleration can be achieved. Lastly, we demonstrate that the low-order, diffusion-like eigenvalue problem can be solved efficiently using a technique known as nonlinear elimination.

  8. Accelerating Families of Fuzzy K-Means Algorithms for Vector Quantization Codebook Design

    PubMed Central

    Mata, Edson; Bandeira, Silvio; de Mattos Neto, Paulo; Lopes, Waslon; Madeiro, Francisco

    2016-01-01

    The performance of signal processing systems based on vector quantization depends on codebook design. In the image compression scenario, the quality of the reconstructed images depends on the codebooks used. In this paper, alternatives are proposed for accelerating families of fuzzy K-means algorithms for codebook design. The acceleration is obtained by reducing the number of iterations of the algorithms and applying efficient nearest neighbor search techniques. Simulation results concerning image vector quantization have shown that the acceleration obtained so far does not decrease the quality of the reconstructed images. Codebook design time savings up to about 40% are obtained by the accelerated versions with respect to the original versions of the algorithms. PMID:27886061

  9. Accelerating Families of Fuzzy K-Means Algorithms for Vector Quantization Codebook Design.

    PubMed

    Mata, Edson; Bandeira, Silvio; de Mattos Neto, Paulo; Lopes, Waslon; Madeiro, Francisco

    2016-11-23

    The performance of signal processing systems based on vector quantization depends on codebook design. In the image compression scenario, the quality of the reconstructed images depends on the codebooks used. In this paper, alternatives are proposed for accelerating families of fuzzy K-means algorithms for codebook design. The acceleration is obtained by reducing the number of iterations of the algorithms and applying efficient nearest neighbor search techniques. Simulation results concerning image vector quantization have shown that the acceleration obtained so far does not decrease the quality of the reconstructed images. Codebook design time savings up to about 40% are obtained by the accelerated versions with respect to the original versions of the algorithms.

  10. Opportunity and Challenges for Migrating Big Data Analytics in Cloud

    NASA Astrophysics Data System (ADS)

    Amitkumar Manekar, S.; Pradeepini, G., Dr.

    2017-08-01

    Big Data analytics is a major topic nowadays. As data generation capabilities become more demanding and scalable, data acquisition and storage have become crucial issues. Cloud storage is a widely used platform, and the technology is becoming essential to executives handling data powered by analytics. The trend toward "big data-as-a-service" is now discussed everywhere. On one hand, cloud-based big data analytics directly tackles ongoing issues of scale, speed, and cost; on the other, researchers are still working to solve security and other real-time problems of big data migration to cloud platforms. This article focuses on identifying possible ways to migrate big data to the cloud. Technology that supports coherent data migration and enables big data analytics on cloud platforms is in demand for a new era of growth. The article also surveys the available technologies and techniques for migrating big data to the cloud.

  11. Social Data Analytics Using Tensors and Sparse Techniques

    ERIC Educational Resources Information Center

    Zhang, Miao

    2014-01-01

    The development of internet and mobile technologies is driving an earthshaking social media revolution. They bring the internet world a huge amount of social media content, such as images, videos, comments, etc. This massive media content and the complicated social structures around it require analytic expertise to transform the flood of information into…

  12. GPU accelerated manifold correction method for spinning compact binaries

    NASA Astrophysics Data System (ADS)

    Ran, Chong-xi; Liu, Song; Zhong, Shuang-ying

    2018-04-01

    The graphics processing unit (GPU) acceleration of the manifold correction algorithm, based on the compute unified device architecture (CUDA) technology, is designed to simulate the dynamic evolution of the post-Newtonian (PN) Hamiltonian formulation of spinning compact binaries. The feasibility and efficiency of parallel computation on the GPU have been confirmed by various numerical experiments. The numerical comparisons show that the accuracy of the manifold correction method executed on the GPU agrees well with that of the same codes executed solely on the central processing unit (CPU). The acceleration achieved when the codes are implemented on the GPU can be increased enormously through the use of shared memory and register optimization techniques without additional hardware costs: the speedup is nearly 13 times that of the codes executed on the CPU for a phase space scan (including 314 × 314 orbits). In addition, the GPU-accelerated manifold correction method is used to numerically study how the dynamics are affected by the spin-induced quadrupole-monopole interaction for black hole binary systems.
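
    The core of a manifold correction step can be illustrated without any GPU machinery: after each raw integrator step, the state is projected back onto the manifold defined by a conserved quantity. The sketch below uses a toy Kepler Hamiltonian as a stand-in for the post-Newtonian Hamiltonian of the paper and rescales the velocity so the energy integral is restored exactly; in a GPU version the same per-orbit correction would simply run in parallel over many initial conditions.

```python
# Leapfrog integration of a toy Kepler problem with a velocity-scaling
# manifold correction that pins the energy to its initial value.
import numpy as np

def accel(q):                      # Newtonian two-body acceleration
    return -q / np.linalg.norm(q) ** 3

def energy(q, v):
    return 0.5 * v @ v - 1.0 / np.linalg.norm(q)

q, v = np.array([1.0, 0.0]), np.array([0.0, 1.1])   # bound orbit
E0 = energy(q, v)
dt = 1e-3
for _ in range(20000):
    # plain leapfrog step (the "raw" integrator)
    v_half = v + 0.5 * dt * accel(q)
    q = q + dt * v_half
    v = v_half + 0.5 * dt * accel(q)
    # manifold correction: scale v so that energy(q, s*v) == E0 exactly
    s = np.sqrt(2.0 * (E0 + 1.0 / np.linalg.norm(q))) / np.linalg.norm(v)
    v = s * v

print("energy drift:", energy(q, v) - E0)   # ~machine epsilon by construction
```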

  13. Laser-driven three-stage heavy-ion acceleration from relativistic laser-plasma interaction.

    PubMed

    Wang, H Y; Lin, C; Liu, B; Sheng, Z M; Lu, H Y; Ma, W J; Bin, J H; Schreiber, J; He, X T; Chen, J E; Zepf, M; Yan, X Q

    2014-01-01

    A three-stage heavy ion acceleration scheme for generation of high-energy quasimonoenergetic heavy ion beams is investigated using two-dimensional particle-in-cell simulation and analytical modeling. The scheme is based on the interaction of an intense linearly polarized laser pulse with a compound two-layer target (a front heavy ion layer + a second light ion layer). We identify that, under appropriate conditions, the heavy ions preaccelerated by a two-stage acceleration process in the front layer can be injected into the light ion shock wave in the second layer for a further third-stage acceleration. These injected heavy ions are not influenced by the screening effect from the light ions, and an isolated high-energy heavy ion beam with relatively low-energy spread is thus formed. Two-dimensional particle-in-cell simulations show that ∼100 MeV/u quasimonoenergetic Fe²⁴⁺ beams can be obtained by linearly polarized laser pulses at intensities of 1.1×10²¹ W/cm².

  14. A General Accelerated Degradation Model Based on the Wiener Process.

    PubMed

    Liu, Le; Li, Xiaoyang; Sun, Fuqiang; Wang, Ning

    2016-12-06

    Accelerated degradation testing (ADT) is an efficient tool for conducting material service reliability and safety evaluations by analyzing performance degradation data. Traditional stochastic process models are mainly for linear or linearized degradation paths. However, those methods are not applicable in situations where the degradation process cannot be linearized. Hence, in this paper, a general ADT model based on the Wiener process is proposed to solve the problem of accelerated degradation data analysis. The general model can consider the unit-to-unit variation and temporal variation of the degradation process, and is suitable for both linear and nonlinear ADT analyses with single or multiple acceleration variables. The statistical inference is given to estimate the unknown parameters in both constant-stress and step-stress ADT. The simulation example and two real applications demonstrate that the proposed method can yield reliable lifetime evaluation results compared with the existing linear and time-scale transformation Wiener processes in both linear and nonlinear ADT analyses.
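
    A minimal sketch of the underlying model may help: a Wiener degradation process with a nonlinear time scale Λ(t) = t^b (one common choice; the paper's general model also covers unit-to-unit variation and acceleration variables) can be simulated and fitted from its independent Gaussian increments. All parameter values below are illustrative.

```python
# Simulate X(t) = mu*L(t) + sigma*B(L(t)) with L(t) = t**b, then recover
# mu and sigma by maximum likelihood on the Gaussian increments (b known).
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, b = 0.5, 0.2, 1.3          # assumed "true" parameters
t = np.linspace(0, 10, 201)
lam = t ** b                          # transformed (nonlinear) time scale
dlam = np.diff(lam)
x = np.concatenate([[0.0],
                    np.cumsum(mu * dlam + sigma * np.sqrt(dlam) * rng.normal(size=dlam.size))])

# increments dx ~ N(mu*dlam, sigma^2*dlam), independent, so the MLE is closed form
dx = np.diff(x)
mu_hat = x[-1] / lam[-1]
sigma_hat = np.sqrt(np.mean((dx - mu_hat * dlam) ** 2 / dlam))
print(f"mu_hat = {mu_hat:.3f}, sigma_hat = {sigma_hat:.3f}")
```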

  15. A General Accelerated Degradation Model Based on the Wiener Process

    PubMed Central

    Liu, Le; Li, Xiaoyang; Sun, Fuqiang; Wang, Ning

    2016-01-01

    Accelerated degradation testing (ADT) is an efficient tool for conducting material service reliability and safety evaluations by analyzing performance degradation data. Traditional stochastic process models are mainly for linear or linearized degradation paths. However, those methods are not applicable in situations where the degradation process cannot be linearized. Hence, in this paper, a general ADT model based on the Wiener process is proposed to solve the problem of accelerated degradation data analysis. The general model can consider the unit-to-unit variation and temporal variation of the degradation process, and is suitable for both linear and nonlinear ADT analyses with single or multiple acceleration variables. The statistical inference is given to estimate the unknown parameters in both constant-stress and step-stress ADT. The simulation example and two real applications demonstrate that the proposed method can yield reliable lifetime evaluation results compared with the existing linear and time-scale transformation Wiener processes in both linear and nonlinear ADT analyses. PMID:28774107

  16. Damage detection based on acceleration data using artificial immune system

    NASA Astrophysics Data System (ADS)

    Chartier, Sandra; Mita, Akira

    2009-03-01

    Nowadays, Structural Health Monitoring (SHM) is essential in order to prevent the occurrence of damage in civil structures. This is a particularly important issue as the number of aged structures is increasing. Damage detection algorithms are often based on changes in modal properties such as natural frequencies, mode shapes and modal damping. In this paper, damage detection is performed by using Artificial Immune System (AIS) theory directly on acceleration data. Inspired by the biological immune system, AIS comprises several models, among which negative selection has great potential for this study. The negative selection process relies on the fact that T-cells, after their maturation, are sensitive to non-self cells and cannot detect self cells. Acceleration data were provided by the numerical model of a 3-story frame structure. Damage was introduced, at particular times, by reducing a story's stiffness. Based on these acceleration data, undamaged data (equivalent to self data) and damaged data (equivalent to non-self data) can be obtained and represented in the Hamming shape-space with a binary representation. From the undamaged encoded data, detectors (equivalent to T-cells) are derived; they detect damaged encoded data very efficiently by using the r-contiguous-bits matching rule. Indeed, more than 95% detection can be reached when efficient combinations of parameters are used. According to the number of detected data, the location of damage can even be determined by using the differences between stories' relative accelerations. Thus, the difference that presents the highest detection rate, generally up to 89%, is directly linked to the location of damage.
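
    The negative selection step described above is simple to sketch. In the hypothetical fragment below, detectors are random bit strings censored against a "self" set under the r-contiguous-bits matching rule; how real acceleration records are encoded into bit strings is problem-specific and omitted.

```python
# Negative selection with the r-contiguous-bits matching rule: keep only
# detectors that match no self (undamaged) pattern; a detector hit on new
# data then signals non-self, i.e. possible damage.
import numpy as np

rng = np.random.default_rng(0)
L_BITS, R = 16, 7                       # string length and matching threshold

def matches(a, b, r=R):
    run = 0
    for same in (a == b):               # longest run of contiguous agreement
        run = run + 1 if same else 0
        if run >= r:
            return True
    return False

self_set = [rng.integers(0, 2, L_BITS) for _ in range(50)]   # "undamaged" data

detectors = []
while len(detectors) < 100:             # censor candidates that match self
    cand = rng.integers(0, 2, L_BITS)
    if not any(matches(cand, s) for s in self_set):
        detectors.append(cand)

sample = rng.integers(0, 2, L_BITS)     # new (possibly damaged) pattern
print("non-self detected:", any(matches(sample, d) for d in detectors))
```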

  17. Factors Affecting the Location of Road Emergency Bases in Iran Using Analytical Hierarchy Process (AHP).

    PubMed

    Bahadori, Mohammadkarim; Hajebrahimi, Ahmad; Alimohammadzadeh, Khalil; Ravangard, Ramin; Hosseini, Seyed Mojtaba

    2017-10-01

    To identify and prioritize factors affecting the location of road emergency bases in Iran using the Analytical Hierarchy Process (AHP). This was a mixed-method (quantitative-qualitative) study conducted in 2016. The participants included professionals and experts in the field of pre-hospital and road emergency services working in the Health Deputy of the Iran Ministry of Health and Medical Education, selected using the purposive sampling method. First, the factors affecting the location of road emergency bases in Iran were identified through a literature review and interviews with the experts. The identified factors were then scored and prioritized on the basis of the professionals' and experts' viewpoints using the analytic hierarchy process (AHP) technique and its pair-wise comparison questionnaire. The collected data were analyzed using MAXQDA 10.0 software for the answers given to the open question and Expert Choice 10.0 software to determine the weights and priorities of the identified factors. The results showed that eight factors were effective in locating the road emergency bases in Iran from the viewpoints of the studied professionals and experts: distance from the next base, region population, topography and geographical situation of the region, volume of road traffic, existence of amenities such as water, electricity and gas and proximity to a village, accident-prone sites, university ownership of the base site, and proximity to a toll-house. Among these eight factors, "distance from the next base" and "region population" were respectively the most important ones, differing considerably from the other factors.
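
    The AHP weighting machinery itself is compact enough to sketch. The judgment matrix below is hypothetical (it is not the study's data); the priority weights are taken from the principal eigenvector and checked with Saaty's consistency ratio.

```python
# AHP priority weights from a reciprocal pairwise comparison matrix.
import numpy as np

A = np.array([[1,   3,   5 ],
              [1/3, 1,   2 ],
              [1/5, 1/2, 1 ]])            # hypothetical judgments

eigvals, eigvecs = np.linalg.eig(A)
i = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, i].real)
w /= w.sum()                              # priority weights

n = A.shape[0]
ci = (eigvals.real[i] - n) / (n - 1)      # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]       # Saaty's random index
print("weights:", w, "CR:", ci / ri)      # CR < 0.1 is conventionally acceptable
```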

  18. Calculating Nozzle Side Loads using Acceleration Measurements of Test-Based Models

    NASA Technical Reports Server (NTRS)

    Brown, Andrew M.; Ruf, Joe

    2007-01-01

    As part of a NASA/MSFC research program to evaluate the effect of different nozzle contours on the well-known but poorly characterized "side load" phenomenon, we attempt to back out the net force on a sub-scale nozzle during cold-flow testing using acceleration measurements. Because modeling the test facility dynamics is problematic, new techniques for creating a "pseudo-model" of the facility and nozzle directly from modal test results are applied. Extensive verification procedures were undertaken, resulting in a loading scale factor necessary for agreement between test- and model-based frequency response functions. Side loads are then obtained by applying a wide-band random load onto the system model, obtaining nozzle response PSDs, and iterating both the amplitude and frequency of the input until a good comparison is obtained between the response and the measured response PSD for a specific time point. The final calculated loading can be used to compare different nozzle profiles for assessment during rocket engine nozzle development and as a basis for accurate design of the nozzle and engine structure to withstand these loads. The techniques applied within this procedure have extensive applicability to timely and accurate characterization of all test fixtures used for modal testing. A viewgraph presentation on a model/test-based pseudo-model used to calculate side loads on rocket engine nozzles is included. The topics include: 1) Side Loads in Rocket Nozzles; 2) Present Side Loads Research at NASA/MSFC; 3) Structural Dynamic Model Generation; 4) Pseudo-Model Generation; 5) Implementation; 6) Calibration of Pseudo-Model Response; 7) Pseudo-Model Response Verification; 8) Inverse Force Determination; 9) Results; and 10) Recent Work.

  19. Analytical approximations for the oscillators with anti-symmetric quadratic nonlinearity

    NASA Astrophysics Data System (ADS)

    Alal Hosen, Md.; Chowdhury, M. S. H.; Yeakub Ali, Mohammad; Faris Ismail, Ahmad

    2017-12-01

    In a second-order ordinary differential equation involving an anti-symmetric quadratic nonlinearity, the nonlinear term changes sign with the displacement, and the oscillators are assumed to oscillate differently in the positive and negative directions. For this reason, the Harmonic Balance Method (HBM) cannot be applied directly. The main purpose of the present paper is to propose an analytical approximation technique based on the HBM for obtaining approximate angular frequencies and the corresponding periodic solutions of oscillators with an anti-symmetric quadratic nonlinearity. After applying the HBM, a set of complicated nonlinear algebraic equations is found, and an analytical approach is not always fruitful for solving such equations. In this article, two small parameters are found for which the power series solution produces the desired results. Moreover, the amplitude-frequency relationship has also been determined in a novel analytical way. The presented technique gives excellent results as compared with the corresponding numerical results and is better than the existing ones.
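
    A worked example shows the flavor of a first-order harmonic balance calculation. The particular equation used below, x'' + x + ε·x|x| = 0, is an assumption for illustration and not necessarily the paper's equation: substituting x ≈ A·cos(ωt) and using the fact that the first Fourier coefficient of cos(θ)|cos(θ)| is 8/(3π) gives ω² ≈ 1 + 8εA/(3π), which the sketch checks against direct numerical integration.

```python
# First-harmonic balance estimate for x'' + x + eps*x|x| = 0 (assumed form),
# compared with the frequency extracted from a numerical solution.
import numpy as np
from scipy.integrate import solve_ivp

eps, A = 0.5, 1.0
w_hb = np.sqrt(1 + 8 * eps * A / (3 * np.pi))   # harmonic balance prediction

sol = solve_ivp(lambda t, y: [y[1], -y[0] - eps * y[0] * abs(y[0])],
                (0, 200), [A, 0.0], rtol=1e-10, atol=1e-10, dense_output=True)
t = np.linspace(0, 200, 200001)
x = sol.sol(t)[0]
# average period from successive downward zero crossings
crossings = t[:-1][(x[:-1] > 0) & (x[1:] <= 0)]
w_num = 2 * np.pi / np.mean(np.diff(crossings))
print(f"HB: {w_hb:.4f}  numerical: {w_num:.4f}")   # agree to ~1%
```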

  20. Understanding changes over time in workers' compensation claim rates using time series analytical techniques.

    PubMed

    Moore, Ian C; Tompa, Emile

    2011-11-01

    The objective of this study is to better understand the inter-temporal variation in workers' compensation claim rates using time series analytical techniques not commonly used in the occupational health and safety literature. We focus specifically on the role of unemployment rates in explaining claim rate variations. The major components of workers' compensation claim rates are decomposed using data from a Canadian workers' compensation authority for the period 1991-2007. Several techniques are used to undertake the decomposition and assess key factors driving rates: (i) the multitaper spectral estimator, (ii) the harmonic F test, (iii) the Kalman smoother and (iv) ordinary least squares. The largest component of the periodic behaviour in workers' compensation claim rates is seasonal variation. Business cycle fluctuations in workers' compensation claim rates move inversely to unemployment rates. The analysis suggests that workers' compensation claim rates between 1991 and 2008 were driven by (in order of magnitude) a strong negative long term growth trend, periodic seasonal trends and business cycle fluctuations proxied by the Ontario unemployment rate.
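
    The OLS part of such a decomposition is easy to sketch: regress the claim-rate series on a linear trend, seasonal harmonics and the unemployment rate. The synthetic monthly data below are purely illustrative; the paper's multitaper, harmonic F test and Kalman-smoother machinery is not reproduced.

```python
# OLS decomposition of a synthetic monthly claim-rate series into trend,
# seasonal harmonics, and a business-cycle proxy (unemployment rate).
import numpy as np

rng = np.random.default_rng(0)
n = 204                                   # 17 years of monthly data
t = np.arange(n)
unemp = 7 + np.sin(2 * np.pi * t / 96) + 0.3 * rng.normal(size=n)
claims = (10 - 0.02 * t                   # long-term downward trend
          + 0.8 * np.sin(2 * np.pi * t / 12)   # seasonal component
          - 0.4 * unemp                   # inverse business-cycle relation
          + 0.5 * rng.normal(size=n))

X = np.column_stack([np.ones(n), t,
                     np.sin(2 * np.pi * t / 12), np.cos(2 * np.pi * t / 12),
                     unemp])
beta, *_ = np.linalg.lstsq(X, claims, rcond=None)
print("trend, seasonal (sin, cos), unemployment coefficients:", beta[1:])
```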

  1. Geophysical technique for mineral exploration and discrimination based on electromagnetic methods and associated systems

    DOEpatents

    Zhdanov, Michael S. [Salt Lake City, UT]

    2008-01-29

    Mineral exploration needs a reliable method to distinguish between uneconomic mineral deposits and economic mineralization. A method and system includes a geophysical technique for subsurface material characterization, mineral exploration and mineral discrimination. The technique introduced in this invention detects induced polarization effects in electromagnetic data and uses remote geophysical observations to determine the parameters of an effective conductivity relaxation model using a composite analytical multi-phase model of the rock formations. The conductivity relaxation model and analytical model can be used to determine parameters related by analytical expressions to the physical characteristics of the microstructure of the rocks and minerals. These parameters are ultimately used for the discrimination of different components in underground formations, and in this way provide an ability to distinguish between uneconomic mineral deposits and zones of economic mineralization using geophysical remote sensing technology.

  2. From Data to Knowledge – Promising Analytical Tools and Techniques for Capture and Reuse of Corporate Knowledge and to Aid in the State Evaluation Process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Danielson, Gary R.; Augustenborg, Elsa C.; Beck, Andrew E.

    2010-10-29

    The IAEA is challenged with limited availability of human resources for inspection and data analysis while proliferation threats increase. PNNL has a variety of IT solutions and techniques (at varying levels of maturity and development) that take raw data closer to useful knowledge, thereby assisting with and standardizing the analytical processes. This paper highlights some PNNL tools and techniques which are applicable to the international safeguards community, including: • Intelligent in-situ triage of data prior to reliable transmission to an analysis center, resulting in the transmission of smaller and more relevant data sets • Capture of expert knowledge in re-usable search strings tailored to specific mission outcomes • Image-based searching fused with text-based searching • Use of gaming to discover unexpected proliferation scenarios • Process modeling (e.g. Physical Model) as the basis for an information integration portal, which links to data storage locations along with analyst annotations, categorizations, geographic data, search strings and visualization outputs.

  3. Accelerometer Data Analysis and Presentation Techniques

    NASA Technical Reports Server (NTRS)

    Rogers, Melissa J. B.; Hrovat, Kenneth; McPherson, Kevin; Moskowitz, Milton E.; Reckart, Timothy

    1997-01-01

    The NASA Lewis Research Center's Principal Investigator Microgravity Services project analyzes Orbital Acceleration Research Experiment and Space Acceleration Measurement System data for principal investigators of microgravity experiments. Principal investigators need a thorough understanding of data analysis techniques so that they can request appropriate analyses to best interpret accelerometer data. Accelerometer data sampling and filtering are introduced, along with the related topics of resolution and aliasing. Specific information about the Orbital Acceleration Research Experiment and Space Acceleration Measurement System data sampling and filtering is given. Time domain data analysis techniques are discussed and example environment interpretations are made using plots of acceleration versus time, interval average acceleration versus time, interval root-mean-square acceleration versus time, trimmean acceleration versus time, quasi-steady three-dimensional histograms, and prediction of quasi-steady levels at different locations. An introduction to Fourier transform theory and windowing is provided along with specific analysis techniques and data interpretations. The frequency domain analyses discussed are power spectral density versus frequency, cumulative root-mean-square acceleration versus frequency, root-mean-square acceleration versus frequency, one-third octave band root-mean-square acceleration versus frequency, and power spectral density versus frequency versus time (spectrogram). Instructions for accessing NASA Lewis Research Center accelerometer data and related information using the internet are provided.
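
    Two of the analyses named above, interval RMS acceleration versus time and a power spectral density, can be sketched with standard tools. The signal below is synthetic (a 60 Hz tone plus noise standing in for real accelerometer data), and the window and segment lengths are arbitrary choices.

```python
# Interval RMS (1 s blocks) and Welch PSD of a synthetic accelerometer record.
import numpy as np
from scipy.signal import welch

fs = 500.0                                   # sampling rate, Hz
t = np.arange(0, 60, 1 / fs)
a = (1e-3 * np.sin(2 * np.pi * 60 * t)
     + 1e-4 * np.random.default_rng(0).normal(size=t.size))

# interval RMS: one value per 1-second block
win = int(fs)
blocks = a[: a.size // win * win].reshape(-1, win)
rms = np.sqrt(np.mean(blocks ** 2, axis=1))

f, psd = welch(a, fs=fs, nperseg=4096)       # PSD in (units)^2/Hz
print(rms[:5], "peak at", f[np.argmax(psd)], "Hz")   # peak sits near 60 Hz
```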

  4. New calibration technique for KCD-based megavoltage imaging

    NASA Astrophysics Data System (ADS)

    Samant, Sanjiv S.; Zheng, Wei; DiBianca, Frank A.; Zeman, Herbert D.; Laughter, Joseph S.

    1999-05-01

    In megavoltage imaging, current commercial electronic portal imaging devices (EPIDs), despite having the advantage of immediate digital imaging over film, suffer from poor image contrast and spatial resolution. The feasibility of using a kinestatic charge detector (KCD) as an EPID to provide superior image contrast and spatial resolution for portal imaging has already been demonstrated in a previous paper. The KCD system had the additional advantage of requiring an extremely low dose per acquired image, allowing superior images to be reconstructed from a single linac pulse per image pixel. The KCD-based images utilized a dose two orders of magnitude lower than that for EPIDs and film. Compared with current commercial EPIDs and film, the prototype KCD system exhibited promising image quality, despite being handicapped by the use of a relatively simple image calibration technique and by the performance limits of medical linacs on the maximum linac pulse frequency and energy flux per pulse delivered. This image calibration technique fixed relative image pixel values based on a linear interpolation of extrema provided by an air-water calibration, and accounted only for channel-to-channel variations. The counterpart of this for area detectors is the standard flat-fielding method. A comprehensive calibration protocol has been developed. The new technique additionally corrects for geometric distortions due to variations in the scan velocity, and for timing artifacts caused by mis-synchronization between the linear accelerator and the data acquisition system (DAS). The role of variations in energy flux (2 - 3%) on imaging is demonstrated to be insignificant for the images considered. The methodology is presented, and the results are discussed for simulated images. It also allows for significant improvements in the signal-to-noise ratio (SNR) by increasing the dose using multiple images without having to increase the linac pulse frequency or energy flux per pulse. The

  5. Optical trapping for analytical biotechnology.

    PubMed

    Ashok, Praveen C; Dholakia, Kishan

    2012-02-01

    We describe the exciting advances of using optical trapping in the field of analytical biotechnology. This technique has opened up opportunities to manipulate biological particles at the single cell or even at subcellular levels which has allowed an insight into the physical and chemical mechanisms of many biological processes. The ability of this technique to manipulate microparticles and measure pico-Newton forces has found several applications such as understanding the dynamics of biological macromolecules, cell-cell interactions and the micro-rheology of both cells and fluids. Furthermore we may probe and analyse the biological world when combining trapping with analytical techniques such as Raman spectroscopy and imaging. Copyright © 2011 Elsevier Ltd. All rights reserved.

  6. Culture-Sensitive Functional Analytic Psychotherapy

    ERIC Educational Resources Information Center

    Vandenberghe, L.

    2008-01-01

    Functional analytic psychotherapy (FAP) is defined as behavior-analytically conceptualized talk therapy. In contrast to the technique-oriented educational format of cognitive behavior therapy and the use of structural mediational models, FAP depends on the functional analysis of the moment-to-moment stream of interactions between client and…

  7. Assessing the service quality of Iran military hospitals: Joint Commission International standards and Analytic Hierarchy Process (AHP) technique

    PubMed Central

    Bahadori, Mohammadkarim; Ravangard, Ramin; Yaghoubi, Maryam; Alimohammadzadeh, Khalil

    2014-01-01

    Background: Military hospitals are responsible for preserving, restoring and improving the health not only of the armed forces, but also of other people. According to the military organizations' strategy of being a leader and pioneer in all areas, providing quality health services is one of the main goals of military health care organizations. This study aimed to evaluate the service quality of selected military hospitals in Iran based on the Joint Commission International (JCI) standards, comparing these hospitals with each other and ranking them using the analytic hierarchy process (AHP) technique in 2013. Materials and Methods: This was a cross-sectional and descriptive study conducted on five military hospitals, selected using the purposive sampling method, in 2013. Required data were collected using checklists of accreditation standards and the nominal group technique. The AHP technique was used for prioritizing. Furthermore, Expert Choice 11.0 was used to analyze the collected data. Results: Among the JCI standards, the standards of access to care and continuity of care (weight = 0.122), quality improvement and patient safety (weight = 0.121) and leadership and management (weight = 0.117) had the greatest importance, respectively. Furthermore, in the overall ranking, BGT (weight = 0.369), IHM (0.238), SAU (0.202), IHK (weight = 0.125) and SAB (weight = 0.066) ranked first to fifth, respectively. Conclusion: AHP is an appropriate technique for measuring the overall performance of hospitals and their quality of services. It is a holistic approach that takes all hospital processes into consideration. The results of the present study can be used to improve hospitals' performance by identifying areas in need of focus for quality improvement and selecting strategies to improve service quality. PMID:25250364

  8. Diffusive Cosmic-Ray Acceleration at Shock Waves of Arbitrary Speed with Magnetostatic Turbulence. I. General Theory and Correct Nonrelativistic Speed Limit

    NASA Astrophysics Data System (ADS)

    Schlickeiser, R.; Oppotsch, J.

    2017-12-01

    The analytical theory of diffusive acceleration of cosmic rays at parallel stationary shock waves of arbitrary speed with magnetostatic turbulence is developed from first principles. The theory is based on the diffusion approximation to the gyrotropic cosmic-ray particle phase-space distribution functions in the respective rest frames of the up- and downstream medium. We derive the correct cosmic-ray jump conditions for the cosmic-ray current and density, and match the up- and downstream distribution functions at the position of the shock. It is essential to account for the different particle momentum coordinates in the up- and downstream media. Analytical expressions for the momentum spectra of shock-accelerated cosmic rays are calculated. These are valid for arbitrary shock speeds including relativistic shocks. The correctly taken limit for nonrelativistic shock speeds leads to a universal broken power-law momentum spectrum of accelerated particles with velocities well above the injection velocity threshold, where the universal power-law spectral index q ≃ 2 − γ₁⁻⁴ is independent of the flow compression ratio r. For nonrelativistic shock speeds, we calculate for the first time the injection velocity threshold, settling the long-standing injection problem for nonrelativistic shock acceleration.

  9. Integrating Internet Video Conferencing Techniques and Online Delivery Systems with Hybrid Classes to Enhance Student Interaction and Learning in Accelerated Programs

    ERIC Educational Resources Information Center

    Beckwith, E. George; Cunniff, Daniel T.

    2009-01-01

    Online course enrollment has increased dramatically over the past few years. The authors cite the reasons for this rapid growth and the opportunities open for enhancing teaching/learning techniques such as video conferencing and hybrid class combinations. The authors outlined an example of an accelerated learning, eight-class session course…

  10. Experimental and analytical studies on the vibration serviceability of long-span prestressed concrete floor

    NASA Astrophysics Data System (ADS)

    Cao, Liang; Liu, Jiepeng; Li, Jiang; Zhang, Ruizhi

    2018-04-01

    An extensive experimental and theoretical research study was undertaken to study the vibration serviceability of a long-span prestressed concrete floor system to be used in the lounge of a major airport. Specifically, jumping impact tests were carried out to obtain the floor's modal parameters, followed by an analysis of the distribution of peak accelerations. Running tests were also performed to capture the acceleration responses. The prestressed concrete floor was found to have a low fundamental natural frequency (≈ 8.86 Hz) with an average modal damping ratio of ≈ 2.17%. A coefficient β_rp is proposed for convenient calculation of the maximum root-mean-square acceleration for running. In the theoretical analysis, the prestressed concrete floor under running excitation is treated as a two-span continuous anisotropic rectangular plate with simply supported edges. The calculated analytical results (natural frequencies and root-mean-square acceleration) agree well with the experimental ones. The analytical approach is thus validated.
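
    As a small illustration of the response metric involved, the sketch below computes a moving 1 s root-mean-square of a synthetic floor acceleration record; its maximum is the quantity that a coefficient such as the proposed β_rp is meant to predict. The decaying 8.86 Hz signal and the 1 s window are assumptions for illustration.

```python
# Maximum running RMS acceleration over a 1 s window, via a cumulative sum.
import numpy as np

fs = 200.0
t = np.arange(0, 30, 1 / fs)
acc = 0.05 * np.sin(2 * np.pi * 8.86 * t) * np.exp(-0.1 * t)  # decaying response

win = int(fs)                                 # 1-second window
csum = np.concatenate([[0.0], np.cumsum(acc ** 2)])
running_ms = (csum[win:] - csum[:-win]) / win # mean square in each window
print("max 1 s RMS acceleration:", np.sqrt(running_ms.max()))
```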

  11. Continuous Metabolic Monitoring Based on Multi-Analyte Biomarkers to Predict Exhaustion

    PubMed Central

    Kastellorizios, Michail; Burgess, Diane J.

    2015-01-01

    This work introduces the concept of multi-analyte biomarkers for continuous metabolic monitoring. The importance of using more than one marker lies in the ability to obtain a holistic understanding of the metabolism. This is showcased for the detection and prediction of exhaustion during intense physical exercise. The findings presented here indicate that when glucose and lactate changes over time are combined into multi-analyte biomarkers, their monitoring trends are more sensitive in the subcutaneous tissue, an implantation-friendly peripheral tissue, compared to the blood. This unexpected observation was confirmed in normal as well as type 1 diabetic rats. This study was designed to be of direct value to continuous monitoring biosensor research, where single analytes are typically monitored. These findings can be implemented in new multi-analyte continuous monitoring technologies for more accurate insulin dosing, as well as for exhaustion prediction studies based on objective data rather than the subject’s perception. PMID:26028477

  12. Continuous metabolic monitoring based on multi-analyte biomarkers to predict exhaustion.

    PubMed

    Kastellorizios, Michail; Burgess, Diane J

    2015-06-01

    This work introduces the concept of multi-analyte biomarkers for continuous metabolic monitoring. The importance of using more than one marker lies in the ability to obtain a holistic understanding of the metabolism. This is showcased for the detection and prediction of exhaustion during intense physical exercise. The findings presented here indicate that when glucose and lactate changes over time are combined into multi-analyte biomarkers, their monitoring trends are more sensitive in the subcutaneous tissue, an implantation-friendly peripheral tissue, compared to the blood. This unexpected observation was confirmed in normal as well as type 1 diabetic rats. This study was designed to be of direct value to continuous monitoring biosensor research, where single analytes are typically monitored. These findings can be implemented in new multi-analyte continuous monitoring technologies for more accurate insulin dosing, as well as for exhaustion prediction studies based on objective data rather than the subject's perception.

  13. Overview of graduate training program of John Adams Institute for Accelerator Science

    NASA Astrophysics Data System (ADS)

    Seryi, Andrei

    The John Adams Institute for Accelerator Science is a center of excellence in the UK for advanced and novel accelerator technology, providing expertise, research, development and training in accelerator techniques, and promoting advanced accelerator applications in science and society. At the JAI we work on the design of novel light sources (upgrades of third-generation sources and novel FELs), on plasma acceleration and its application to industrial and medical fields, on novel energy-recovery compact linacs and advanced beam diagnostics, and on many other projects. The JAI is based at three universities: the University of Oxford, Imperial College London and Royal Holloway University of London. Every year 6 to 10 accelerator science experts, trained via research on cutting-edge projects, defend their PhD theses at JAI partner universities. In this presentation we will overview the research and, in particular, the highly successful graduate training program of the JAI.

  14. Prognostics of Power Mosfets Under Thermal Stress Accelerated Aging Using Data-Driven and Model-Based Methodologies

    NASA Technical Reports Server (NTRS)

    Celaya, Jose; Saxena, Abhinav; Saha, Sankalita; Goebel, Kai F.

    2011-01-01

    An approach for predicting the remaining useful life of power MOSFET (metal oxide semiconductor field effect transistor) devices has been developed. Power MOSFETs are semiconductor switching devices that are instrumental in electronics equipment such as those used in the operation and control of modern aircraft and spacecraft. The MOSFETs examined here were aged under thermal overstress in a controlled experiment, and continuous performance degradation data were collected from the accelerated aging experiment. Die-attach degradation was determined to be the primary failure mode. The collected run-to-failure data were analyzed, revealing that ON-state resistance increased as the die-attach degraded under high thermal stresses. Results from finite element simulation analysis support the observations from the experimental data. Data-driven and model-based prognostics algorithms were investigated, with ON-state resistance used as the primary precursor-of-failure feature. A Gaussian process regression algorithm was explored as an example of a data-driven technique, and an extended Kalman filter and a particle filter were used as examples of model-based techniques. Both methods were able to provide valid results. Prognostic performance metrics were employed to evaluate and compare the algorithms.

  15. Rayleigh-Taylor mixing with time-dependent acceleration

    NASA Astrophysics Data System (ADS)

    Abarzhi, Snezhana

    2016-10-01

    We extend the momentum model to describe Rayleigh-Taylor (RT) mixing driven by a time-dependent acceleration. The acceleration is a power-law function of time, similarly to astrophysical and plasma fusion applications. In RT flow the dynamics of a fluid parcel is driven by a balance, per unit mass, of the rates of momentum gain and loss. We find analytical solutions in the cases of balanced and imbalanced gains and losses, and identify their dependence on the acceleration exponent. The existence is shown of two typical regimes of self-similar RT mixing: acceleration-driven Rayleigh-Taylor-type and dissipation-driven Richtmyer-Meshkov-type, with the latter being in general non-universal. Possible scenarios are proposed for transitions from the balanced dynamics to the imbalanced self-similar dynamics. Scaling and correlation properties of RT mixing are studied on the basis of dimensional analysis. Departures of RT dynamics with time-dependent acceleration from the canonical cases of homogeneous turbulence, as well as from blast waves with first- and second-kind self-similarity, are outlined. The work is supported by the US National Science Foundation.

  16. Depth-resolved monitoring of analytes diffusion in ocular tissues

    NASA Astrophysics Data System (ADS)

    Larin, Kirill V.; Ghosn, Mohamad G.; Tuchin, Valery V.

    2007-02-01

    Optical coherence tomography (OCT) is a noninvasive imaging technique with high in-depth resolution. We employed the OCT technique for monitoring and quantification of analyte and drug diffusion in the cornea and sclera of rabbit eyes in vitro. Different analytes and drugs such as metronidazole, dexamethasone, ciprofloxacin, mannitol, and glucose solution were studied, and their permeability coefficients were calculated. Drug diffusion monitoring was performed as a function of time and as a function of depth. The obtained results suggest that the OCT technique might be used for analyte diffusion studies in connective and epithelial tissues.

  17. Accelerated Slice Encoding for Metal Artifact Correction

    PubMed Central

    Hargreaves, Brian A.; Chen, Weitian; Lu, Wenmiao; Alley, Marcus T.; Gold, Garry E.; Brau, Anja C. S.; Pauly, John M.; Pauly, Kim Butts

    2010-01-01

    Purpose: To demonstrate accelerated imaging with artifact reduction near metallic implants and different contrast mechanisms. Materials and Methods: Slice-encoding for metal artifact correction (SEMAC) is a modified spin echo sequence that uses view-angle tilting and slice-direction phase encoding to correct both in-plane and through-plane artifacts. Standard spin echo trains and short-TI inversion recovery (STIR) allow efficient PD-weighted imaging with optional fat suppression. A completely linear reconstruction allows incorporation of parallel imaging and partial Fourier imaging. The SNR effects of all reconstructions were quantified in one subject. Ten subjects with different metallic implants were scanned using SEMAC protocols, all with scan times below 11 minutes, as well as with standard spin echo methods. Results: The SNR using standard acceleration techniques is unaffected by the linear SEMAC reconstruction. In all cases with implants, accelerated SEMAC significantly reduced artifacts compared with standard imaging techniques, with no additional artifacts from acceleration techniques. The use of different contrast mechanisms allowed differentiation of fluid from other structures in several subjects. Conclusion: SEMAC imaging can be combined with standard echo-train imaging, parallel imaging, partial-Fourier imaging and inversion recovery techniques to offer flexible image contrast with a dramatic reduction of metal-induced artifacts in scan times under 11 minutes. PMID:20373445

  18. Accelerated slice encoding for metal artifact correction.

    PubMed

    Hargreaves, Brian A; Chen, Weitian; Lu, Wenmiao; Alley, Marcus T; Gold, Garry E; Brau, Anja C S; Pauly, John M; Pauly, Kim Butts

    2010-04-01

    To demonstrate accelerated imaging with both artifact reduction and different contrast mechanisms near metallic implants. Slice-encoding for metal artifact correction (SEMAC) is a modified spin echo sequence that uses view-angle tilting and slice-direction phase encoding to correct both in-plane and through-plane artifacts. Standard spin echo trains and short-TI inversion recovery (STIR) allow efficient PD-weighted imaging with optional fat suppression. A completely linear reconstruction allows incorporation of parallel imaging and partial Fourier imaging. The signal-to-noise ratio (SNR) effects of all reconstructions were quantified in one subject. Ten subjects with different metallic implants were scanned using SEMAC protocols, all with scan times below 11 minutes, as well as with standard spin echo methods. The SNR using standard acceleration techniques is unaffected by the linear SEMAC reconstruction. In all cases with implants, accelerated SEMAC significantly reduced artifacts compared with standard imaging techniques, with no additional artifacts from acceleration techniques. The use of different contrast mechanisms allowed differentiation of fluid from other structures in several subjects. SEMAC imaging can be combined with standard echo-train imaging, parallel imaging, partial-Fourier imaging, and inversion recovery techniques to offer flexible image contrast with a dramatic reduction of metal-induced artifacts in scan times under 11 minutes. (c) 2010 Wiley-Liss, Inc.

  19. Active cleaning technique device

    NASA Technical Reports Server (NTRS)

    Shannon, R. L.; Gillette, R. B.

    1973-01-01

    The objective of this program was to develop a laboratory demonstration model of an active cleaning technique (ACT) device. The principle of this device is based primarily on the technique for removing contaminants from optical surfaces. This active cleaning technique involves exposing contaminated surfaces to a plasma containing atomic oxygen or combinations of other reactive gases. The ACT device laboratory demonstration model incorporates, in addition to plasma cleaning, the means to operate the device as an ion source for sputtering experiments. The overall ACT device includes a plasma generation tube, an ion accelerator, a gas supply system, a RF power supply and a high voltage dc power supply.

  20. Double temporal sparsity based accelerated reconstruction of compressively sensed resting-state fMRI.

    PubMed

    Aggarwal, Priya; Gupta, Anubha

    2017-12-01

    A number of reconstruction methods have been proposed recently for accelerated functional Magnetic Resonance Imaging (fMRI) data collection. However, existing methods suffer from greater artifacts at high acceleration factors. This paper addresses the issue of accelerating fMRI collection via undersampled k-space measurements combined with the proposed method based on l1-l1 norm constraints, wherein we impose the first l1-norm sparsity on the voxel time series (temporal data) in the transformed domain and the second l1-norm sparsity on the successive differences of the same temporal data. Hence, we name the proposed method the Double Temporal Sparsity based Reconstruction (DTSR) method. The robustness of the proposed DTSR method has been thoroughly evaluated both at the subject level and at the group level on real fMRI data. Results are presented at various acceleration factors. Quantitative analysis in terms of Peak Signal-to-Noise Ratio (PSNR) and other metrics, and qualitative analysis in terms of reproducibility of brain Resting State Networks (RSNs), demonstrate that the proposed method is accurate and robust. In addition, the proposed DTSR method preserves brain networks that are important for studying fMRI data. Compared to the existing methods, the DTSR method shows promising potential, with an improvement of 10-12 dB in PSNR at acceleration factors up to 3.5 on resting-state fMRI data. Simulation results on real data demonstrate that the DTSR method can be used to acquire accelerated fMRI with accurate detection of RSNs. Copyright © 2017 Elsevier Ltd. All rights reserved.
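
    The l1-l1 objective can be written down concretely for a single voxel time series. The sketch below is not the paper's solver or sampling operator: it recovers a piecewise-constant series from random undersampled measurements by penalizing both the l1 norm of the signal (identity transform for simplicity) and the l1 norm of its successive differences, using the cvxpy modeling library.

```python
# l1-l1 regularized recovery of a temporal signal from undersampled data:
# min 0.5*||M x - y||^2 + lam1*||x||_1 + lam2*||diff(x)||_1
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, m = 128, 48
x_true = np.repeat(rng.normal(size=8), 16)       # piecewise-constant series
M = rng.normal(size=(m, n)) / np.sqrt(m)         # undersampling stand-in
y = M @ x_true + 0.01 * rng.normal(size=m)

x = cp.Variable(n)
lam1, lam2 = 0.01, 0.1
obj = cp.Minimize(0.5 * cp.sum_squares(M @ x - y)
                  + lam1 * cp.norm1(x) + lam2 * cp.norm1(cp.diff(x)))
cp.Problem(obj).solve()
print("relative error:", np.linalg.norm(x.value - x_true) / np.linalg.norm(x_true))
```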

  1. The role of analytical chemistry in Niger Delta petroleum exploration: a review.

    PubMed

    Akinlua, Akinsehinwa

    2012-06-12

    Petroleum, and the organic matter from which petroleum is derived, is composed of organic compounds with some trace elements. These compounds give an insight into the origin, thermal maturity and paleoenvironmental history of petroleum, which are essential elements in petroleum exploration. The main tools for acquiring such geochemical data are analytical techniques. Due to progress in the development of new analytical techniques, many hitherto intractable petroleum exploration problems have been resolved. Analytical chemistry has played a significant role in the development of the petroleum resources of the Niger Delta. Various analytical techniques that have aided the success of petroleum exploration in the Niger Delta are discussed. The analytical techniques that have helped to understand the petroleum system of the basin are also described. Recent and emerging analytical methodologies, including green analytical methods applicable to petroleum exploration, particularly in the Niger Delta petroleum province, are discussed in this paper. Analytical chemistry is an invaluable tool in finding Niger Delta oils. Copyright © 2011 Elsevier B.V. All rights reserved.

  2. Determination of Ca content of coral skeleton by analyte additive method using the LIBS technique

    NASA Astrophysics Data System (ADS)

    Haider, A. F. M. Y.; Khan, Z. H.

    2012-09-01

    The laser-induced breakdown spectroscopy (LIBS) technique was used to study the elemental profile of coral skeletons. Apart from calcium and carbon, which are the main elemental constituents of coral skeleton, elements like Sr, Na, Mg, Li, Si, Cu, Ti, K, Mn, Zn, Ba, Mo, Br and Fe were detected in the coral skeletons from the Inani Beach and the Saint Martin's island of Bangladesh and the coral from the Philippines. In addition to the qualitative analysis, a quantitative analysis of the main elemental constituent, calcium (Ca), was performed. The result shows the presence of (36.15±1.43)% by weight of Ca in the coral skeleton collected from the Inani Beach, Cox's Bazar, Bangladesh, determined using six calibration curves, drawn for six emission lines of Ca I (428.301 nm, 428.936 nm, 431.865 nm, 443.544 nm, 443.569 nm, and 445.589 nm), by the standard analyte additive method. An AAS measurement of the same coral skeleton sample gave a Ca content of 39.87% by weight, which compares fairly well with the result obtained by the analyte additive method.

  3. GeauxDock: Accelerating Structure-Based Virtual Screening with Heterogeneous Computing

    PubMed Central

    Fang, Ye; Ding, Yun; Feinstein, Wei P.; Koppelman, David M.; Moreno, Juana; Jarrell, Mark; Ramanujam, J.; Brylinski, Michal

    2016-01-01

    Computational modeling of drug binding to proteins is an integral component of direct drug design. Particularly, structure-based virtual screening is often used to perform large-scale modeling of putative associations between small organic molecules and their pharmacologically relevant protein targets. Because of a large number of drug candidates to be evaluated, an accurate and fast docking engine is a critical element of virtual screening. Consequently, highly optimized docking codes are of paramount importance for the effectiveness of virtual screening methods. In this communication, we describe the implementation, tuning and performance characteristics of GeauxDock, a recently developed molecular docking program. GeauxDock is built upon the Monte Carlo algorithm and features a novel scoring function combining physics-based energy terms with statistical and knowledge-based potentials. Developed specifically for heterogeneous computing platforms, the current version of GeauxDock can be deployed on modern, multi-core Central Processing Units (CPUs) as well as massively parallel accelerators, Intel Xeon Phi and NVIDIA Graphics Processing Unit (GPU). First, we carried out a thorough performance tuning of the high-level framework and the docking kernel to produce a fast serial code, which was then ported to shared-memory multi-core CPUs yielding a near-ideal scaling. Further, using Xeon Phi gives 1.9× performance improvement over a dual 10-core Xeon CPU, whereas the best GPU accelerator, GeForce GTX 980, achieves a speedup as high as 3.5×. On that account, GeauxDock can take advantage of modern heterogeneous architectures to considerably accelerate structure-based virtual screening applications. GeauxDock is open-sourced and publicly available at www.brylinski.org/geauxdock and https://figshare.com/articles/geauxdock_tar_gz/3205249. PMID:27420300
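
    At its core, a Monte Carlo docking engine of this kind iterates a propose-score-accept loop. The sketch below is a deliberately toy version: a quadratic "score" stands in for GeauxDock's composite scoring function, and a 6-vector pose stands in for the ligand's translation and rotation; it is the acceptance logic, not the program's actual code, that is being illustrated.

```python
# Metropolis Monte Carlo over a toy ligand pose.
import numpy as np

rng = np.random.default_rng(0)

def score(pose):                       # toy energy with minimum at the origin
    return np.sum(pose ** 2)

pose = rng.normal(size=6)              # x, y, z translation + 3 rotation angles
E, T = score(pose), 1.0
for step in range(5000):
    trial = pose + 0.1 * rng.normal(size=6)   # random rigid-body perturbation
    E_trial = score(trial)
    # Metropolis criterion: always accept downhill, sometimes uphill
    if E_trial < E or rng.random() < np.exp(-(E_trial - E) / T):
        pose, E = trial, E_trial
print("final score:", E)
```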

  4. GeauxDock: Accelerating Structure-Based Virtual Screening with Heterogeneous Computing.

    PubMed

    Fang, Ye; Ding, Yun; Feinstein, Wei P; Koppelman, David M; Moreno, Juana; Jarrell, Mark; Ramanujam, J; Brylinski, Michal

    2016-01-01

    Computational modeling of drug binding to proteins is an integral component of direct drug design. Particularly, structure-based virtual screening is often used to perform large-scale modeling of putative associations between small organic molecules and their pharmacologically relevant protein targets. Because of a large number of drug candidates to be evaluated, an accurate and fast docking engine is a critical element of virtual screening. Consequently, highly optimized docking codes are of paramount importance for the effectiveness of virtual screening methods. In this communication, we describe the implementation, tuning and performance characteristics of GeauxDock, a recently developed molecular docking program. GeauxDock is built upon the Monte Carlo algorithm and features a novel scoring function combining physics-based energy terms with statistical and knowledge-based potentials. Developed specifically for heterogeneous computing platforms, the current version of GeauxDock can be deployed on modern, multi-core Central Processing Units (CPUs) as well as massively parallel accelerators, Intel Xeon Phi and NVIDIA Graphics Processing Unit (GPU). First, we carried out a thorough performance tuning of the high-level framework and the docking kernel to produce a fast serial code, which was then ported to shared-memory multi-core CPUs yielding a near-ideal scaling. Further, using Xeon Phi gives 1.9× performance improvement over a dual 10-core Xeon CPU, whereas the best GPU accelerator, GeForce GTX 980, achieves a speedup as high as 3.5×. On that account, GeauxDock can take advantage of modern heterogeneous architectures to considerably accelerate structure-based virtual screening applications. GeauxDock is open-sourced and publicly available at www.brylinski.org/geauxdock and https://figshare.com/articles/geauxdock_tar_gz/3205249.

  5. Acceleration techniques for dependability simulation. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Barnette, James David

    1995-01-01

    As computer systems increase in complexity, the need to project system performance from the earliest design and development stages increases. We have to employ simulation for detailed dependability studies of large systems. However, as the complexity of the simulation model increases, the time required to obtain statistically significant results also increases. This paper discusses an approach that is application independent and can be readily applied to any process-based simulation model. Topics include background on classical discrete event simulation and techniques for random variate generation and statistics gathering to support simulation.

  6. SFC-MS/MS as an orthogonal technique for improved screening of polar analytes in anti-doping control.

    PubMed

    Parr, Maria Kristina; Wuest, Bernhard; Naegele, Edgar; Joseph, Jan F; Wenzel, Maxi; Schmidt, Alexander H; Stanic, Mijo; de la Torre, Xavier; Botrè, Francesco

    2016-09-01

    HPLC is considered the method of choice for the separation of various classes of drugs. However, some analytes are still challenging, as HPLC shows limited resolution for highly polar analytes, which interact insufficiently with conventional reversed-phase (RP) columns. Especially in combination with mass spectrometric detection, limitations apply to alterations of stationary phases. Some highly polar sympathomimetic drugs and their metabolites showed almost no retention on different RP columns; their retention remains poor even on phenylhexyl phases that show different selectivity due to π-π interactions. Supercritical fluid chromatography (SFC), as a separation technique orthogonal to HPLC, may help to overcome these issues. Selected polar drugs and metabolites were analyzed utilizing SFC separation. All compounds showed sharp peaks and good retention, even the very polar analytes such as sulfoconjugates. Retention times and elution orders in SFC differ from both RP and HILIC separations as a result of this orthogonality. Short cycle times could be realized. As temperature and pressure strongly influence the polarity of supercritical fluids, precise regulation of temperature and backpressure is required for retention time stability. As CO₂ is the main constituent of the mobile phase in SFC, solvent consumption and solvent waste are considerably reduced.

  7. Accelerator on a Chip

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    England, Joel

    2014-06-30

    SLAC's Joel England explains how the same fabrication techniques used for silicon computer microchips allowed their team to create the new laser-driven particle accelerator chips. (SLAC Multimedia Communications)

  8. Accelerator on a Chip

    ScienceCinema

    England, Joel

    2018-01-16

    SLAC's Joel England explains how the same fabrication techniques used for silicon computer microchips allowed their team to create the new laser-driven particle accelerator chips. (SLAC Multimedia Communications)

  9. Ionization oscillations in Hall accelerators

    NASA Astrophysics Data System (ADS)

    Barral, S.; Peradzyński, Z.

    2010-01-01

    The underlying mechanism of low-frequency oscillations in Hall accelerators is investigated theoretically. It is shown that relaxation oscillations arise from a competition between avalanche ionization and the advective transport of the working gas. The model derived recovers the slow progression and fast recession of the ionization front. Analytical approximations of the shape of current pulses and of the oscillation frequency are provided for the case of large amplitude oscillations.

  10. Analytical Applications of NMR: Summer Symposium on Analytical Chemistry.

    ERIC Educational Resources Information Center

    Borman, Stuart A.

    1982-01-01

    Highlights a symposium on analytical applications of nuclear magnetic resonance spectroscopy (NMR), discussing pulse Fourier transformation technique, two-dimensional NMR, solid state NMR, and multinuclear NMR. Includes description of ORACLE, an NMR data processing system at Syracuse University using real-time color graphics, and algorithms for…

  11. Timescale Correlation between Marine Atmospheric Exposure and Accelerated Corrosion Testing - Part 2

    NASA Technical Reports Server (NTRS)

    Montgomery, Eliza L.; Calle, Luz Marina; Curran, Jerome C.; Kolody, Mark R.

    2012-01-01

    Evaluation of metals to predict service life of metal-based structures in corrosive environments has long relied on atmospheric exposure test sites. Traditional accelerated corrosion testing relies on mimicking the exposure conditions, often incorporating salt spray and ultraviolet (UV) radiation, and exposing the metal to continuous or cyclic conditions similar to those of the corrosive environment. Their reliability to correlate to atmospheric exposure test results is often a concern when determining the timescale to which the accelerated tests can be related. Accelerated corrosion testing has yet to be universally accepted as a useful tool in predicting the long-term service life of a metal, despite its ability to rapidly induce corrosion. Although visual and mass loss methods of evaluating corrosion are the standard, and their use is crucial, a method that correlates timescales from accelerated testing to atmospheric exposure would be very valuable. This paper presents work that began with the characterization of the atmospheric environment at the Kennedy Space Center (KSC) Beachside Corrosion Test Site. The chemical changes that occur on low carbon steel, during atmospheric and accelerated corrosion conditions, were investigated using surface chemistry analytical methods. The corrosion rates and behaviors of panels subjected to long-term and accelerated corrosion conditions, involving neutral salt fog and alternating seawater spray, were compared to identify possible timescale correlations between accelerated and long-term corrosion performance. The results, as well as preliminary findings on the correlation investigation, are presented.

  12. Noise-immune cavity-enhanced analytical atomic spectrometry - NICE-AAS - A technique for detection of elements down to zeptogram amounts

    NASA Astrophysics Data System (ADS)

    Axner, Ove; Ehlers, Patrick; Hausmaninger, Thomas; Silander, Isak; Ma, Weiguang

    2014-10-01

    Noise-immune cavity-enhanced optical heterodyne molecular spectroscopy (NICE-OHMS) is a powerful technique for detection of molecular compounds in gas phase that is based on a combination of two important concepts: frequency modulation spectroscopy (FMS), for reduction of noise, and cavity enhancement, for prolongation of the interaction length between the light and the sample. Due to its unique properties, it has demonstrated unparalleled detection sensitivity for molecular constituents in the gas phase. Despite this, it has so far not been used for detection of atoms, i.e. for elemental analysis. The present work presents an assessment of the expected performance of Doppler-broadened (Db) NICE-OHMS for analytical atomic spectrometry, here referred to as noise-immune cavity-enhanced analytical atomic spectrometry (NICE-AAS). After a description of the basic principles of Db-NICE-OHMS, the modulation and detection conditions for optimum performance are identified. Based on a previously demonstrated detection sensitivity of Db-NICE-OHMS of 5 × 10⁻¹² cm⁻¹ Hz⁻¹/² (corresponding to a single-pass absorbance of 7 × 10⁻¹¹ over 10 s), the expected limits of detection (LODs) of Hg and Na by NICE-AAS are estimated. Hg is assumed to be detected in the gas phase directly, while Na is considered to be atomized in a graphite furnace (GF) prior to detection. It is shown that in the absence of spectral interferences, contaminated sample compartments, and optical saturation, it should be feasible to detect Hg down to 10 zg/cm³ (10 fg/m³ or 10⁻⁵ ng/m³), which corresponds to 25 atoms/cm³, and Na down to 0.5 zg (zg = zeptogram = 10⁻²¹ g), representing 50 zg/mL (parts-per-sextillion, pps, 1:10²¹) in liquid solution (assuming a sample of 10 μL) or only 15 atoms injected into the GF, respectively. These LODs are several orders of magnitude lower (better) than those of any laser-based absorption technique previously demonstrated under atmospheric
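
    The order of magnitude of these LODs can be reproduced by dividing the minimum detectable absorption coefficient by the peak absorption cross-section, n_min = α_min/σ. In the sketch below the Hg cross-section is an assumed placeholder for the Doppler-broadened 253.7 nm line, not a value taken from the paper.

    ```python
    # Back-of-the-envelope LOD estimate: noise-equivalent absorption over a
    # 10 s average -> minimum detectable atom number density.
    # sigma is an assumed placeholder cross-section, not from the paper.
    import math

    NEA   = 5e-12    # noise-equivalent absorption [cm^-1 Hz^-1/2] (from the abstract)
    tau   = 10.0     # averaging time [s]
    sigma = 5e-14    # assumed peak absorption cross-section of Hg [cm^2]

    alpha_min = NEA / math.sqrt(tau)          # ~1.6e-12 cm^-1 over 10 s
    n_min = alpha_min / sigma                 # minimum detectable density
    print(f"n_min ~ {n_min:.0f} atoms/cm^3")  # same order as the 25 atoms/cm^3 above
    ```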

  13. Degradation of glass artifacts: application of modern surface analytical techniques.

    PubMed

    Melcher, Michael; Wiesinger, Rita; Schreiner, Manfred

    2010-06-15

    A detailed understanding of the stability of glasses toward liquid or atmospheric attack is of considerable importance for preserving numerous objects of our cultural heritage. Glasses produced in the ancient periods (Egyptian, Greek, or Roman glasses), as well as modern glass, can be classified as soda-lime-silica glasses. In contrast, potash was used as a flux in medieval Northern Europe for the production of window panes for churches and cathedrals. The particular chemical composition of these potash-lime-silica glasses (low in silica and rich in alkali and alkaline earth components), in combination with increased levels of acidifying gases (such as SO₂, CO₂, NOₓ, or O₃) and airborne particulate matter in today's urban or industrial atmospheres, has resulted in severe degradation of important cultural relics, particularly over the last century. Rapid developments in the fields of microelectronics and computer sciences, however, have contributed to the development of a variety of nondestructive, surface analytical techniques for the scientific investigation and material characterization of these unique and valuable objects. These methods include scanning electron microscopy in combination with energy- or wavelength-dispersive spectrometry (SEM/EDX or SEM/WDX), secondary ion mass spectrometry (SIMS), and atomic force microscopy (AFM). In this Account, we address glass analysis and weathering mechanisms, exploring the possibilities (and limitations) of modern analytical techniques. Corrosion by liquid substances is well investigated in the glass literature. In a tremendous number of case studies, the basic reaction between aqueous solutions and the glass surfaces was identified as an ion-exchange reaction between hydrogen-bearing species of the attacking liquid and the alkali and alkaline earth ions in the glass, causing a depletion of the latter in the outermost surface layers. Although mechanistic analogies to liquid corrosion are obvious, atmospheric

  14. Accelerated test design

    NASA Technical Reports Server (NTRS)

    Mcdermott, P. P.

    1980-01-01

    The design of an accelerated life test program for electric batteries is discussed. A number of observations and suggestions on the procedures and objectives for conducting an accelerated life test program are presented. Equations based on nonlinear regression analysis for predicting the accelerated life test parameters are discussed.
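
    The nonlinear-regression step described here is often an Arrhenius life model fitted to lifetimes observed at elevated stress temperatures and extrapolated to service conditions. The sketch below uses invented battery data purely for illustration.

    ```python
    # Arrhenius accelerated-life fit: life(T) = A * exp(Ea / (k_B * T)).
    # Lifetimes below are invented for illustration; they are not from the paper.
    import numpy as np
    from scipy.optimize import curve_fit

    k_B = 8.617e-5  # Boltzmann constant [eV/K]

    def arrhenius_life(T, A, Ea):
        return A * np.exp(Ea / (k_B * T))

    T_stress = np.array([323.0, 338.0, 353.0, 368.0])     # test temperatures [K]
    life_h   = np.array([8200.0, 3600.0, 1700.0, 850.0])  # observed lives [h] (invented)

    (A, Ea), _ = curve_fit(arrhenius_life, T_stress, life_h, p0=(1e-4, 0.5))
    print(f"Ea ~ {Ea:.2f} eV, predicted life at 25 C ~ {arrhenius_life(298.15, A, Ea):.0f} h")
    ```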

  15. Highly accelerated acquisition and homogeneous image reconstruction with rotating RF coil array at 7T-A phantom based study.

    PubMed

    Li, Mingyan; Zuo, Zhentao; Jin, Jin; Xue, Rong; Trakic, Adnan; Weber, Ewald; Liu, Feng; Crozier, Stuart

    2014-03-01

    Parallel imaging (PI) is widely used for imaging acceleration by means of the coil spatial sensitivities associated with phased array coils (PACs). By employing a time-division multiplexing technique, a single-channel rotating radiofrequency coil (RRFC) provides an alternative method to reduce scan time. Strategically combining these two concepts could provide enhanced acceleration and efficiency. In this work, the imaging acceleration ability and homogeneous image reconstruction strategy of a 4-element rotating radiofrequency coil array (RRFCA) were numerically investigated and experimentally validated at 7T with a homogeneous phantom. Each coil of the RRFCA was capable of acquiring a large number of sensitivity profiles, leading to better acceleration performance, illustrated by improved geometry-factor maps with lower maximum values and more uniform distributions compared to 4- and 8-element stationary arrays. A reconstruction algorithm, rotating SENSitivity Encoding (rotating SENSE), was proposed to provide image reconstruction. Additionally, by optimally choosing the angular sampling positions and transmit profiles under the rotating scheme, phantom images could be faithfully reconstructed. The results indicate that the proposed technique is able to provide homogeneous reconstructions with overall higher and more uniform signal-to-noise ratio (SNR) distributions at high reduction factors. It is hoped that, by employing the high imaging acceleration and homogeneous imaging reconstruction ability of the RRFCA, the proposed method will facilitate human imaging for ultra-high-field MRI. Copyright © 2013 Elsevier Inc. All rights reserved.
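
    The unfolding step that rotating SENSE extends is classical Cartesian SENSE: at reduction factor R, each aliased pixel is a sensitivity-weighted sum of R true pixels, recovered by least squares. The toy example below uses invented sensitivities for a 4-element array and also evaluates the geometry factor (g-factor) that the improved maps mentioned above quantify; it is not the authors' rotating-SENSE implementation.

    ```python
    # Toy Cartesian SENSE unfolding for one aliased pixel at R = 2, plus the
    # g-factor that measures noise amplification. Sensitivities are invented.
    import numpy as np

    S = np.array([[0.9, 0.2],      # coil 1 sensitivity at the two fold-over positions
                  [0.3, 1.0],      # coil 2
                  [0.6, 0.7],      # coil 3
                  [0.1, 0.8]])     # coil 4
    x_true = np.array([1.0, 0.5])  # true pixel values at the aliased positions
    y = S @ x_true                 # one aliased measurement per coil

    x_hat, *_ = np.linalg.lstsq(S, y, rcond=None)   # pseudo-inverse unfolding
    SHS = S.T @ S                                   # (identity noise covariance assumed)
    g = np.sqrt(np.diag(np.linalg.inv(SHS)) * np.diag(SHS))  # g-factor per position
    print(x_hat, g)
    ```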

  16. A GPU-accelerated and Monte Carlo-based intensity modulated proton therapy optimization system.

    PubMed

    Ma, Jiasen; Beltran, Chris; Seum Wan Chan Tseung, Hok; Herman, Michael G

    2014-12-01

    Conventional spot scanning intensity modulated proton therapy (IMPT) treatment planning systems (TPSs) optimize proton spot weights based on analytical dose calculations. These analytical dose calculations have been shown to have severe limitations in heterogeneous materials. Monte Carlo (MC) methods do not have these limitations; however, MC-based systems have been of limited clinical use due to the large number of beam spots in IMPT and the extremely long calculation time of traditional MC techniques. In this work, the authors present a clinically applicable IMPT TPS that utilizes a very fast MC calculation. An in-house graphics processing unit (GPU)-based MC dose calculation engine was employed to generate the dose influence map for each proton spot. With the MC-generated influence map, a modified least-squares optimization method was used to achieve the desired dose volume histograms (DVHs). The intrinsic CT image resolution was adopted for voxelization in simulation and optimization to preserve spatial resolution. The optimizations were computed on a multi-GPU framework to mitigate the memory limitation issues for the large dose influence maps that resulted from maintaining the intrinsic CT resolution. The effects of tail cutoff and starting condition were studied and minimized in this work. For relatively large and complex three-field head and neck cases, i.e., >100,000 spots with a target volume of ∼1000 cm³ and multiple surrounding critical structures, the optimization together with the initial MC dose influence map calculation was done in a clinically viable time frame (less than 30 min) on a GPU cluster consisting of 24 Nvidia GeForce GTX Titan cards. The in-house MC TPS plans were comparable to commercial TPS plans based on DVH comparisons. A MC-based treatment planning system was developed. The treatment planning can be performed in a clinically viable time frame on a hardware system costing around 45,000 dollars. The fast calculation and
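
    The modified least-squares step can be pictured as minimizing ||Dw − d||² over nonnegative spot weights w, where D is the dose influence matrix (voxels × spots). A minimal projected-gradient sketch on a random stand-in matrix follows; it illustrates the formulation only and is not the authors' optimizer.

    ```python
    # Minimal influence-matrix spot-weight optimization:
    # minimize ||D w - d_target||^2 subject to w >= 0, by projected gradient
    # descent. D is a tiny random stand-in for a Monte Carlo influence map.
    import numpy as np

    rng = np.random.default_rng(0)
    n_voxels, n_spots = 200, 50
    D = rng.random((n_voxels, n_spots)) * 0.1   # toy dose influence matrix
    d_target = np.full(n_voxels, 2.0)           # toy uniform prescription

    w = np.ones(n_spots)
    step = 1.0 / np.linalg.norm(D, 2) ** 2      # 1/Lipschitz constant of the gradient
    for _ in range(500):
        grad = D.T @ (D @ w - d_target)
        w = np.maximum(w - step * grad, 0.0)    # project onto the feasible set w >= 0

    print("dose residual:", np.linalg.norm(D @ w - d_target))
    ```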

  17. A -100 kV Power Supply for Ion Acceleration in Space-based Mass Spectrometers

    NASA Astrophysics Data System (ADS)

    Gilbert, J. A.; Zurbuchen, T.; Battel, S.

    2017-12-01

    High voltage power supplies are used in many space-based time-of-flight (TOF) mass spectrometer designs to accelerate incoming ions and increase the probability of their measurement and proper identification. Ions are accelerated in proportion to their charge state, so singly charged ions such as pickup ions are accelerated less than their multiply charged solar wind counterparts. This lack of acceleration results in pickup ion measurements with lower resolution and without determination of absolute energy. Acceleration reduces the effects of angular scattering and energy straggling when ions pass through thin membranes such as carbon foils, and it brings ion energies above the detection threshold of traditional solid state detectors. We have developed a power supply capable of operating at -100 kV for ion acceleration while also delivering up to 10 W of power for the operation of a floating TOF system. We also show results of benchtop calibration and ion beam tests to demonstrate the functionality and success of this approach.
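
    The benefit of post-acceleration follows directly from the energy gain E = qV: flight times through the TOF section shrink as 1/√E, tightening timing resolution and lifting ions above detector thresholds. The sketch below uses an assumed flight path and initial energy, not instrument parameters.

    ```python
    # Time of flight after falling through the -100 kV potential:
    # t = L * sqrt(m / (2 * E)), with E = (E0 + q*V) in eV per ion.
    # Flight path L and initial energy E0 are assumed illustrative values.
    import math

    V   = 100e3      # acceleration voltage [V]
    L   = 0.03       # TOF flight path [m] (assumed)
    e   = 1.602e-19  # elementary charge [C]
    amu = 1.661e-27  # atomic mass unit [kg]

    def tof_ns(mass_amu, charge, E0_eV=1000.0):
        m = mass_amu * amu
        E = (E0_eV + charge * V) * e   # initial plus gained kinetic energy [J]
        return L * math.sqrt(m / (2.0 * E)) * 1e9

    print(f"He+ pickup ion : {tof_ns(4, 1):.1f} ns")   # singly charged, slower
    print(f"O6+ solar wind : {tof_ns(16, 6):.1f} ns")  # multiply charged, faster
    ```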

  18. Traumatic stress and accelerated DNA methylation age: A meta-analysis.

    PubMed

    Wolf, Erika J; Maniates, Hannah; Nugent, Nicole; Maihofer, Adam X; Armstrong, Don; Ratanatharathorn, Andrew; Ashley-Koch, Allison E; Garrett, Melanie; Kimbrel, Nathan A; Lori, Adriana; VA Mid-Atlantic MIRECC Workgroup; Aiello, Allison E; Baker, Dewleen G; Beckham, Jean C; Boks, Marco P; Galea, Sandro; Geuze, Elbert; Hauser, Michael A; Kessler, Ronald C; Koenen, Karestan C; Miller, Mark W; Ressler, Kerry J; Risbrough, Victoria; Rutten, Bart P F; Stein, Murray B; Ursano, Robert J; Vermetten, Eric; Vinkers, Christiaan H; Uddin, Monica; Smith, Alicia K; Nievergelt, Caroline M; Logue, Mark W

    2018-06-01

    Recent studies examining the association between posttraumatic stress disorder (PTSD) and accelerated aging, as defined by DNA methylation-based estimates of cellular age that exceed chronological age, have yielded mixed results. We conducted a meta-analysis of trauma exposure and PTSD diagnosis and symptom severity in association with accelerated DNA methylation age using data from 9 cohorts contributing to the Psychiatric Genomics Consortium PTSD Epigenetics Workgroup (combined N = 2186). Associations between demographic and cellular variables and accelerated DNA methylation age were also examined, as was the moderating influence of demographic variables. Meta-analysis of regression coefficients from contributing cohorts revealed that childhood trauma exposure (when measured with the Childhood Trauma Questionnaire) and lifetime PTSD severity evidenced significant, albeit small, meta-analytic associations with accelerated DNA methylation age (ps = 0.028 and 0.016, respectively). Sex, CD4 T cell proportions, and natural killer cell proportions were also significantly associated with accelerated DNA methylation age (all ps < 0.02). PTSD diagnosis and lifetime trauma exposure were not associated with advanced DNA methylation age. There was no evidence of moderation of the trauma or PTSD variables by demographic factors. Results suggest that traumatic stress is associated with advanced epigenetic age and raise the possibility that cells integral to immune system maintenance and responsivity play a role in this. This study highlights the need for additional research into the biological mechanisms linking traumatic stress to accelerated DNA methylation age and the importance of furthering our understanding of the neurobiological and health consequences of PTSD. Published by Elsevier Ltd.
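
    Two computational steps underlie analyses of this kind: within each cohort, "age acceleration" is the residual of DNA methylation age regressed on chronological age; across cohorts, the regression coefficients are combined by inverse-variance weighting. The sketch below uses invented numbers purely to show the mechanics.

    ```python
    # (1) Residualized epigenetic age acceleration within one toy cohort.
    # (2) Fixed-effect (inverse-variance) meta-analysis of per-cohort betas.
    # All values are invented for illustration.
    import numpy as np

    chrono = np.array([25.0, 34.0, 41.0, 52.0, 60.0])   # chronological ages
    dnam   = np.array([27.1, 33.2, 44.0, 53.5, 58.9])   # DNA methylation ages
    slope, intercept = np.polyfit(chrono, dnam, 1)
    acceleration = dnam - (slope * chrono + intercept)  # residual = "acceleration"

    betas = np.array([0.12, 0.05, 0.20])   # toy per-cohort trauma coefficients
    ses   = np.array([0.06, 0.04, 0.10])   # their standard errors
    w = 1.0 / ses**2
    beta_meta = np.sum(w * betas) / np.sum(w)
    se_meta = np.sqrt(1.0 / np.sum(w))
    print(f"meta-analytic beta = {beta_meta:.3f} +/- {se_meta:.3f}")
    ```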

  19. The Role of a Reference Synthetic Data Generator within the Field of Learning Analytics

    ERIC Educational Resources Information Center

    Berg, Alan M.; Mol, Stefan T.; Kismihók, Gábor; Sclater, Niall

    2016-01-01

    This paper details the anticipated impact of synthetic "big" data on learning analytics (LA) infrastructures, with a particular focus on data governance, the acceleration of service development, and the benchmarking of predictive models. By reviewing two cases, one at the sector-wide level (the Jisc learning analytics architecture) and…

  20. Characterisation of an accelerator-based neutron source for BNCT versus beam energy

    NASA Astrophysics Data System (ADS)

    Agosteo, S.; Curzio, G.; d'Errico, F.; Nath, R.; Tinti, R.

    2002-01-01

    Neutron capture in ¹⁰B produces energetic alpha particles that have a high linear energy transfer in tissue. This results in higher cell killing and a higher relative biological effectiveness compared to photons. Using suitably designed boron compounds which preferentially localize in cancerous cells instead of healthy tissues, boron neutron capture therapy (BNCT) has the potential of providing a higher tumor cure rate with minimal toxicity to normal tissues. This clinical approach requires a thermal neutron source, generally a nuclear reactor, with a fluence rate sufficient to deliver tumoricidal doses within a reasonable treatment time (minutes). Thermal neutrons do not penetrate deeply in tissue; therefore, BNCT is limited to lesions which are either superficial or otherwise accessible. In this work, we investigate the feasibility of an accelerator-based thermal neutron source for the BNCT of skin melanomas. The source was designed via MCNP Monte Carlo simulations of the thermalization of a fast neutron beam, generated by 7 MeV deuterons impinging on a thick target of beryllium. The neutron field was characterized at several deuteron energies (3.0-6.5 MeV) in an experimental structure installed at the Van de Graaff accelerator of the Laboratori Nazionali di Legnaro, in Italy. Thermal and epithermal neutron fluences were measured with activation techniques, and fast neutron spectra were determined with superheated drop detectors (SDD). These neutron spectrometry and dosimetry studies indicated that the fast neutron dose is unacceptably high in the current design. Modifications to the current design to overcome this problem are presented.
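
    The quantity such a source must deliver is the ¹⁰B capture rate per unit tissue mass, R = φσN. The thermal capture cross-section of ¹⁰B (~3840 b) is standard; the fluence rate and boron loading below are assumed, illustrative values, not parameters of this design.

    ```python
    # 10B(n,alpha)7Li capture-rate estimate, R = phi * sigma * N.
    # Fluence rate and boron concentration are assumed illustrative values.
    phi   = 1.0e9      # thermal neutron fluence rate [n/(cm^2 s)] (assumed)
    sigma = 3840e-24   # 10B thermal capture cross-section [cm^2] (~3840 barn)
    c_b10 = 30.0e-6    # 10B mass fraction in tumour [g per g of tissue] (assumed)

    N_A = 6.022e23          # Avogadro constant [1/mol]
    N = c_b10 / 10.0 * N_A  # 10B atoms per gram of tissue (M ~ 10 g/mol)
    R = phi * sigma * N     # captures per gram per second
    print(f"~{R:.2e} captures/(g s); each deposits ~2.3 MeV in charged products")
    ```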