Science.gov

Sample records for fast analytical methods

  1. Fast optical proximity correction: analytical method

    NASA Astrophysics Data System (ADS)

    Shioiri, Satomi; Tanabe, Hiroyoshi

    1995-05-01

    In automating optical proximity correction, calculation speed becomes important. In this paper we present a novel method for calculating proximity-corrected features analytically. The calculation takes only a few times as long as computing the intensity at a single point on the wafer, and is therefore far faster than conventional repetitive aerial image calculations. The method is applied to a simple periodic pattern. The simulated results show a large improvement in linearity after correction and demonstrate the effectiveness of this analytical method.

  2. Fast Analytical Methods for Macroscopic Electrostatic Models in Biomolecular Simulations

    PubMed Central

    Xu, Zhenli; Cai, Wei

    2013-01-01

    We review recent developments of fast analytical methods for macroscopic electrostatic calculations in biological applications, including the Poisson–Boltzmann (PB) and the generalized Born models for electrostatic solvation energy. The focus is on analytical approaches for hybrid solvation models, especially the image charge method for a spherical cavity, and also the generalized Born theory as an approximation to the PB model. This review places much emphasis on the mathematical details behind these methods. PMID:23745011
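
    For reference, the generalized Born approximation mentioned above is commonly written in the Still form below; this is the standard textbook expression, not an equation reproduced from the reviewed paper:

      \Delta G_{\mathrm{pol}} \approx -\frac{1}{2}\left(\frac{1}{\epsilon_{\mathrm{in}}}-\frac{1}{\epsilon_{\mathrm{out}}}\right)\sum_{i,j}\frac{q_i q_j}{f_{\mathrm{GB}}(r_{ij})},\qquad
      f_{\mathrm{GB}}(r_{ij}) = \sqrt{r_{ij}^{2} + R_i R_j \exp\!\left(-\frac{r_{ij}^{2}}{4 R_i R_j}\right)},

    where q_i are the atomic partial charges, R_i the effective Born radii, r_ij the interatomic distances, and epsilon_in, epsilon_out the interior and solvent dielectric constants.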

  3. An accelerated photo-magnetic imaging reconstruction algorithm based on an analytical forward solution and a fast Jacobian assembly method

    NASA Astrophysics Data System (ADS)

    Nouizi, F.; Erkol, H.; Luk, A.; Marks, M.; Unlu, M. B.; Gulsen, G.

    2016-10-01

    We previously introduced photo-magnetic imaging (PMI), an imaging technique that illuminates the medium under investigation with near-infrared light and measures the induced temperature increase using magnetic resonance thermometry (MRT). Using a multiphysics solver combining photon migration and heat diffusion, PMI models the spatiotemporal distribution of temperature variation and recovers high-resolution optical absorption images from these temperature maps. In this paper, we present a new fast, non-iterative reconstruction algorithm for PMI. The new algorithm uses analytic methods both to solve the forward problem and to assemble the sensitivity matrix. We validate the analytic-based algorithm against the first-generation finite element method (FEM) based reconstruction algorithm previously developed by our team, first with synthetic data and then with real MRT-measured temperature maps. Our new method accelerates the reconstruction process 30-fold compared to a single iteration of the FEM-based algorithm.
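
    To illustrate the kind of non-iterative, single-step reconstruction described above, the sketch below solves a linearized inverse problem with a precomputed sensitivity (Jacobian) matrix. The array names and the Tikhonov-regularized solve are illustrative assumptions, not the authors' actual PMI implementation:

      import numpy as np

      def reconstruct_absorption(J, delta_T, reg=1e-3):
          """Single-step linearized reconstruction.

          J       : (n_measurements, n_voxels) sensitivity (Jacobian) matrix,
                    assumed assembled analytically beforehand.
          delta_T : measured temperature changes, shape (n_measurements,).
          reg     : Tikhonov regularization weight (illustrative choice).
          Returns the update to the absorption map, shape (n_voxels,).
          """
          # Regularized normal equations: (J^T J + reg*I) x = J^T delta_T
          A = J.T @ J + reg * np.eye(J.shape[1])
          b = J.T @ delta_T
          return np.linalg.solve(A, b)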

  4. Density functional theory for molecular and periodic systems using density fitting and continuous fast multipole method: Analytical gradients.

    PubMed

    Łazarski, Roman; Burow, Asbjörn Manfred; Grajciar, Lukáš; Sierka, Marek

    2016-10-30

    A full implementation of analytical energy gradients for molecular and periodic systems is reported in the TURBOMOLE program package within the framework of Kohn-Sham density functional theory using Gaussian-type orbitals as basis functions. Its key component is a combination of the density fitting (DF) approximation and the continuous fast multipole method (CFMM), which allows for an efficient calculation of the Coulomb energy gradient. For the exchange-correlation part, the hierarchical numerical integration scheme (Burow and Sierka, Journal of Chemical Theory and Computation 2011, 7, 3097) is extended to energy gradients. Computational efficiency and asymptotic O(N) scaling behavior of the implementation are demonstrated for various molecular and periodic model systems, with the largest unit cell of hematite containing 640 atoms and 19,072 basis functions. The overall computational effort of the energy gradient is comparable to that of the Kohn-Sham matrix formation. © 2016 Wiley Periodicals, Inc.
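
    As background for the density fitting (RI-J) approximation named above, the Coulomb matrix is commonly expanded in an auxiliary basis as sketched below; this is the generic textbook form of the approximation, not the TURBOMOLE-specific working equations:

      \rho(\mathbf{r}) \approx \sum_{P} c_P\, \chi_P(\mathbf{r}), \qquad
      \sum_{Q} (P|Q)\, c_Q = \sum_{\mu\nu} D_{\mu\nu}\,(P|\mu\nu), \qquad
      J_{\mu\nu} \approx \sum_{P} (\mu\nu|P)\, c_P,

    where P, Q label auxiliary basis functions, D is the density matrix, and (·|·) denotes two- and three-center Coulomb integrals; the CFMM treats the long-range contributions to these integrals, which is what enables the O(N) scaling quoted in the abstract.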

  5. Validating Analytical Methods

    ERIC Educational Resources Information Center

    Ember, Lois R.

    1977-01-01

    The procedures utilized by the Association of Official Analytical Chemists (AOAC) to develop, evaluate, and validate analytical methods for the analysis of chemical pollutants are detailed. Methods validated by AOAC are used by the EPA and FDA in their enforcement programs and are granted preferential treatment by the courts. (BT)

  6. Analytical solutions of the planar cyclic voltammetry process for two soluble species with equal diffusivities and fast electron transfer using the method of eigenfunction expansions

    SciTech Connect

    Samin, Adib; Lahti, Erik; Zhang, Jinsuo

    2015-08-15

    Cyclic voltammetry is a powerful tool that is used for characterizing electrochemical processes. Models of cyclic voltammetry take into account the mass transport of species and the kinetics at the electrode surface. Analytical solutions of these models are not well-known due to the complexity of the boundary conditions. In this study we present closed form analytical solutions of the planar voltammetry model for two soluble species with fast electron transfer and equal diffusivities using the eigenfunction expansion method. Our solution methodology does not incorporate Laplace transforms and yields good agreement with the numerical solution. This solution method can be extended to cases that are more general and may be useful for benchmarking purposes.
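
    To illustrate the eigenfunction-expansion approach named in the title, the underlying planar diffusion problem and the generic form of its expanded solution are shown below (a standard separation-of-variables template, not the paper's specific boundary-value problem):

      \frac{\partial C}{\partial t} = D\,\frac{\partial^{2} C}{\partial x^{2}}, \qquad
      C(x,t) = \sum_{n} a_n(t)\,\varphi_n(x), \qquad
      \varphi_n'' + \lambda_n \varphi_n = 0,

    where the eigenfunctions φ_n and eigenvalues λ_n are fixed by the boundary conditions at the electrode and in the bulk, and the coefficients a_n(t) absorb the time-dependent Nernstian surface condition imposed by the voltammetric potential sweep.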

  7. Fast and simple procedure for fractionation of zinc in soil using an ultrasound probe and FAAS detection. Validation of the analytical method and evaluation of the uncertainty budget.

    PubMed

    Leśniewska, Barbara; Kisielewska, Katarzyna; Wiater, Józefa; Godlewska-Żyłkiewicz, Beata

    2016-01-01

    A new fast method for determination of mobile zinc fractions in soil is proposed in this work. The three-stage modified BCR procedure used for fractionation of zinc in soil was accelerated using ultrasound. The working parameters of the ultrasound probe (power and sonication time) were optimized so that the analyte content in soil extracts obtained by ultrasound-assisted sequential extraction (USE) was consistent with that obtained by the conventional modified Community Bureau of Reference (BCR) procedure. The zinc content in the extracts was determined by flame atomic absorption spectrometry. The developed USE procedure shortened the total extraction time from 48 h to 27 min compared to the conventional modified BCR procedure. The method was fully validated, and the uncertainty budget was evaluated. The trueness and reproducibility of the developed method were confirmed by analysis of the certified reference material of lake sediment BCR-701. The applicability of the procedure for fast, low-cost, and reliable determination of the mobile zinc fraction in soil, which may be useful for assessing anthropogenic impacts on natural resources and for environmental monitoring, was demonstrated by analysis of different types of soil collected from Podlaskie Province (Poland). PMID:26666658

  9. Analytical method for fast screening and confirmation of multi-class veterinary drug residues in fish and shrimp by LC-MS/MS.

    PubMed

    Kim, Junghyun; Suh, Joon Hyuk; Cho, Hyun-Deok; Kang, Wonjae; Choi, Yong Seok; Han, Sang Beom

    2016-01-01

    A multi-class, multi-residue analytical method based on LC-MS/MS detection was developed for the screening and confirmation of 28 veterinary drug and metabolite residues in flatfish, shrimp and eel. The chosen veterinary drugs are prohibited or unauthorised compounds in Korea, which were categorised into various chemical classes including nitroimidazoles, benzimidazoles, sulfones, quinolones, macrolides, phenothiazines, pyrethroids and others. To achieve fast and simultaneous extraction of various analytes, a simple and generic liquid extraction procedure using EDTA-ammonium acetate buffer and acetonitrile, without further clean-up steps, was applied to sample preparation. The final extracts were analysed by ultra-high-performance liquid chromatography coupled with tandem mass spectrometry (UHPLC-MS/MS). The method was validated for each compound in each matrix at three different concentrations (5, 10 and 20 ng g⁻¹) in accordance with Codex guidelines (CAC/GL 71-2009). For most compounds, the recoveries were in the range of 60-110%, and precision, expressed as the relative standard deviation (RSD), was in the range of 5-15%. The detection capabilities (CCβs) were below or equal to 5 ng g⁻¹, which indicates that the developed method is sufficient to detect illegal fishery products containing the target compounds above the residue limit (10 ng g⁻¹) of the new regulatory system (Positive List System - PLS). PMID:26751111

  10. Analytical model for fast-shock ignition

    NASA Astrophysics Data System (ADS)

    Ghasemi, S. A.; Farahbod, A. H.; Sobhanian, S.

    2014-07-01

    A model and its improvements are introduced for a recently proposed approach to inertial confinement fusion, called fast-shock ignition (FSI). The analysis is based upon the gain models of fast ignition and shock ignition, together with considerations of fast-electron penetration into the pre-compressed fuel, to examine the formation of an effective central hot spot. Calculations of fast-electron penetration into the dense fuel show that if the initial electron kinetic energy is of the order of ~4.5 MeV, the electrons effectively reach the central part of the fuel. To evaluate the performance of the FSI approach more realistically, we have used the quasi-two-temperature electron energy distribution function of Strozzi (2012) and the fast ignitor energy formula of Bellei (2013), which are consistent with 3D PIC simulations, for different values of fast ignitor laser wavelength and coupling efficiency. The overall advantage of fast-shock ignition over shock ignition is estimated to be a factor better than 1.3, and the best results are obtained for a fuel mass of around 1.5 mg, a fast ignitor laser wavelength of ~0.3 micron, and a shock ignitor energy weight factor of about 0.25.

  11. Analytical model for fast-shock ignition

    SciTech Connect

    Ghasemi, S. A.; Farahbod, A. H.; Sobhanian, S.

    2014-07-15

    A model and its improvements are introduced for a recently proposed approach to inertial confinement fusion, called fast-shock ignition (FSI). The analysis is based upon the gain models of fast ignition and shock ignition, together with considerations of fast-electron penetration into the pre-compressed fuel, to examine the formation of an effective central hot spot. Calculations of fast-electron penetration into the dense fuel show that if the initial electron kinetic energy is of the order of ∼4.5 MeV, the electrons effectively reach the central part of the fuel. To evaluate the performance of the FSI approach more realistically, we have used the quasi-two-temperature electron energy distribution function of Strozzi (2012) and the fast ignitor energy formula of Bellei (2013), which are consistent with 3D PIC simulations, for different values of fast ignitor laser wavelength and coupling efficiency. The overall advantage of fast-shock ignition over shock ignition is estimated to be a factor better than 1.3, and the best results are obtained for a fuel mass of around 1.5 mg, a fast ignitor laser wavelength of ∼0.3 micron, and a shock ignitor energy weight factor of about 0.25.

  12. Analytic Methods in Investigative Geometry.

    ERIC Educational Resources Information Center

    Dobbs, David E.

    2001-01-01

    Suggests an alternative proof by analytic methods, which is more accessible than rigorous proof based on Euclid's Elements, in which students need only apply standard methods of trigonometry to the data without introducing new points or lines. (KHR)

  13. Assessment of a fast generated analytical matrix for rotating slat collimation iterative reconstruction: a possible method to optimize the collimation profile

    NASA Astrophysics Data System (ADS)

    Boisson, F.; Bekaert, V.; Reilhac, A.; Wurtz, J.; Brasse, D.

    2015-03-01

    In SPECT imaging, improvement or deterioration of performance is mostly due to collimator design. Classical SPECT systems mainly use parallel-hole or pinhole collimators. Rotating slat collimators (RSC) can be an interesting alternative to optimize the trade-off between detection efficiency and spatial resolution. The present study was conducted using an RSC system for small animal imaging called CLiR, used here in planar mode only. In a previous study, planar 2D projections were reconstructed using the well-known filtered backprojection (FBP) algorithm. In this paper, we investigated the use of the statistical maximum likelihood expectation maximization (MLEM) reconstruction algorithm to reconstruct 2D images with the CLiR system, with a probability matrix calculated using an analytic approach. The primary objective was to propose a method to quickly generate a light system matrix, which facilitates its handling and storage, while providing accurate and reliable performance. Two other matrices were calculated using GATE Monte Carlo simulations to assess the performance obtained with the analytically calculated matrix. The first GATE matrix took all the physics processes into account, whereas the second did not account for scattering, since the analytical matrix does not include this physics process either. 2D images were reconstructed using FBP and MLEM with the three different probability matrices, using both simulated and experimental data. A comparative study of these images was conducted using different metrics: the modulation transfer function, the signal-to-noise ratio and quantification measurements. All the results demonstrated the suitability of using a probability matrix calculated analytically. It provided similar results in terms of spatial resolution (about 0.6 mm with differences <5%), signal-to-noise ratio (differences <10%), and image quality.
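
    For readers unfamiliar with MLEM, the iteration referred to above has the standard multiplicative update sketched below; this is a generic dense-matrix implementation for context, not the CLiR-specific code:

      import numpy as np

      def mlem(A, y, n_iter=50):
          """Generic MLEM iteration for counts y ~ Poisson(A @ x).

          A : (n_bins, n_pixels) system (probability) matrix, e.g. the
              analytically computed matrix discussed in the abstract.
          y : measured projection counts, shape (n_bins,).
          """
          x = np.ones(A.shape[1])            # flat initial image
          sens = A.sum(axis=0)               # sensitivity image, sum_i a_ij
          for _ in range(n_iter):
              proj = A @ x                   # forward projection
              ratio = y / np.maximum(proj, 1e-12)
              x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
          return x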

  14. Assessment of a fast generated analytical matrix for rotating slat collimation iterative reconstruction: a possible method to optimize the collimation profile.

    PubMed

    Boisson, F; Bekaert, V; Reilhac, A; Wurtz, J; Brasse, D

    2015-03-21

    In SPECT imaging, improvement or deterioration of performance is mostly due to collimator design. Classical SPECT systems mainly use parallel-hole or pinhole collimators. Rotating slat collimators (RSC) can be an interesting alternative to optimize the trade-off between detection efficiency and spatial resolution. The present study was conducted using an RSC system for small animal imaging called CLiR, used here in planar mode only. In a previous study, planar 2D projections were reconstructed using the well-known filtered backprojection (FBP) algorithm. In this paper, we investigated the use of the statistical maximum likelihood expectation maximization (MLEM) reconstruction algorithm to reconstruct 2D images with the CLiR system, with a probability matrix calculated using an analytic approach. The primary objective was to propose a method to quickly generate a light system matrix, which facilitates its handling and storage, while providing accurate and reliable performance. Two other matrices were calculated using GATE Monte Carlo simulations to assess the performance obtained with the analytically calculated matrix. The first GATE matrix took all the physics processes into account, whereas the second did not account for scattering, since the analytical matrix does not include this physics process either. 2D images were reconstructed using FBP and MLEM with the three different probability matrices, using both simulated and experimental data. A comparative study of these images was conducted using different metrics: the modulation transfer function, the signal-to-noise ratio and quantification measurements. All the results demonstrated the suitability of using a probability matrix calculated analytically. It provided similar results in terms of spatial resolution (about 0.6 mm with differences <5%), signal-to-noise ratio (differences <10%), and image quality. PMID:25716556

  15. Fast quench reactor method

    SciTech Connect

    Detering, Brent A.; Donaldson, Alan D.; Fincke, James R.; Kong, Peter C.; Berry, Ray A.

    1999-01-01

    A fast quench reactor includes a reactor chamber having a high-temperature heating means, such as a plasma torch, at its inlet and a means of rapidly expanding a reactant stream, such as a restrictive convergent-divergent nozzle, at its outlet end. Metal halide reactants are injected into the reactor chamber. Reducing gas is added at different stages in the process to form a desired end product and prevent back reactions. The resulting heated gaseous stream is then rapidly cooled by expansion of the gaseous stream.

  16. Fast quench reactor method

    DOEpatents

    Detering, B.A.; Donaldson, A.D.; Fincke, J.R.; Kong, P.C.; Berry, R.A.

    1999-08-10

    A fast quench reactor includes a reactor chamber having a high-temperature heating means, such as a plasma torch, at its inlet and a means of rapidly expanding a reactant stream, such as a restrictive convergent-divergent nozzle, at its outlet end. Metal halide reactants are injected into the reactor chamber. Reducing gas is added at different stages in the process to form a desired end product and prevent back reactions. The resulting heated gaseous stream is then rapidly cooled by expansion of the gaseous stream. 8 figs.

  17. Analytical Methods for Online Searching.

    ERIC Educational Resources Information Center

    Vigil, Peter J.

    1983-01-01

    Analytical methods for facilitating comparison of multiple sets during online searching are illustrated by description of specific searching methods that eliminate duplicate citations and a factoring procedure based on syntactic relationships that establishes ranked sets. Searches executed in National Center for Mental Health database on…

  18. Analytical methods under emergency conditions

    SciTech Connect

    Sedlet, J.

    1983-01-01

    This lecture discusses methods for the radiochemical determination of internal contamination of the body under emergency conditions, here defined as a situation in which results on internal radioactive contamination are needed quickly. The purpose of speed is to determine the necessity for medical treatment to increase the natural elimination rate. Analytical methods discussed include whole-body counting, organ counting, wound monitoring, and excreta analysis. 12 references. (ACR)

  19. Capture and evolution of dust in planetary mean-motion resonances: a fast, semi-analytic method for generating resonantly trapped disc images

    NASA Astrophysics Data System (ADS)

    Shannon, Andrew; Mustill, Alexander J.; Wyatt, Mark

    2015-03-01

    Dust grains migrating under Poynting-Robertson drag may be trapped in mean-motion resonances with planets. Such resonantly trapped grains are observed in the Solar system. In extrasolar systems, the exozodiacal light produced by dust grains is expected to be a major obstacle to future missions attempting to directly image terrestrial planets. The patterns made by resonantly trapped dust, however, can be used to infer the presence of planets, and the properties of those planets, if the capture and evolution of the grains can be modelled. This has been done with N-body methods, but such methods are computationally expensive, limiting their usefulness when considering large, slowly evolving grains, and for extrasolar systems with unknown planets and parent bodies, where the possible parameter space for investigation is large. In this work, we present a semi-analytic method for calculating the capture and evolution of dust grains in resonance, which can be orders of magnitude faster than N-body methods. We calibrate the model against N-body simulations, finding excellent agreement for Earth to Neptune mass planets, for a variety of grain sizes, initial eccentricities, and initial semimajor axes. We then apply the model to observations of dust resonantly trapped by the Earth. We find that resonantly trapped, asteroidally produced grains naturally produce the `trailing blob' structure in the zodiacal cloud, while to match the intensity of the blob, most of the cloud must be composed of cometary grains, which owing to their high eccentricity are not captured, but produce a smooth disc.

  20. Waste minimization in analytical methods

    SciTech Connect

    Green, D.W.; Smith, L.L.; Crain, J.S.; Boparai, A.S.; Kiely, J.T.; Yaeger, J.S.; Schilling, J.B.

    1995-05-01

    The US Department of Energy (DOE) will require a large number of waste characterizations over a multi-year period to accomplish the Department's goals in environmental restoration and waste management. Estimates vary, but two million analyses annually are expected. The waste generated by the analytical procedures used for characterizations is a significant source of new DOE waste. Success in reducing the volume of secondary waste and the costs of handling this waste would significantly decrease the overall cost of this DOE program. Selection of appropriate analytical methods depends on the intended use of the resultant data. It is not always necessary to use a high-powered analytical method, typically at higher cost, to obtain data needed to make decisions about waste management. Indeed, for samples taken from some heterogeneous systems, the meaning of high accuracy becomes clouded if the data generated are intended to measure a property of this system. Among the factors to be considered in selecting the analytical method are the lower limit of detection, accuracy, turnaround time, cost, reproducibility (precision), interferences, and simplicity. Occasionally, there must be tradeoffs among these factors to achieve the multiple goals of a characterization program. The purpose of the work described here is to add waste minimization to the list of characteristics to be considered. In this paper the authors present results of modifying analytical methods for waste characterization to reduce both the cost of analysis and volume of secondary wastes. Although tradeoffs may be required to minimize waste while still generating data of acceptable quality for the decision-making process, they have data demonstrating that wastes can be reduced in some cases without sacrificing accuracy or precision.

  1. SU-E-I-01: A Fast, Analytical Pencil Beam Based Method for First Order X-Ray Scatter Estimation of Kilovoltage Cone Beam X-Rays

    SciTech Connect

    Liu, J; Bourland, J

    2014-06-01

    Purpose: To analytically estimate first-order x-ray scatter for kV cone beam x-ray imaging with high computational efficiency. Methods: In calculating first-order scatter using the Klein-Nishina formula, we found that by integrating the point-to-point scatter along an interaction line, a "pencil-beam" scatter kernel (BSK) can be approximated by a quartic expression when the imaging field is small. This BSK model for monoenergetic, 100 keV x-rays has been verified on homogeneous cube and cylinder water phantoms by comparison with the exact implementation of the KN formula. For a heterogeneous medium, the water-equivalent length of a BSK was acquired with an improved Siddon ray-tracing algorithm, which was also used in calculating pre- and post-scattering attenuation. To include the electron binding effect for scattering of low-kV photons, the mean corresponding scattering angle is determined from the effective point of scattered photons of a BSK. The behavior of polyenergetic x-rays was also investigated for 120 kV x-rays incident on a sandwiched infinite heterogeneous slab phantom, with the electron binding effect incorporated. Exact computations and Monte Carlo simulations were performed for comparison, using the EGSnrc code package. Results: By reducing the 3D volumetric target (O(n³)) to 2D pencil beams (O(n²)), the computational expense is generally lowered by a factor of n, which our experience verifies. The scatter distribution on a flat detector shows close agreement between the analytic BSK model and exact calculations. The pixel-to-pixel differences are within (-2%, 2%) for the homogeneous cube and cylinder phantoms and within (0, 6%) for the heterogeneous slab phantom. However, the Monte Carlo simulation shows increased deviation of the BSK model toward the detector periphery. Conclusion: The proposed BSK model, accommodating polyenergetic x-rays and the electron binding effect at low kV, shows great potential for efficiently estimating first-order scatter.
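
    For reference, the Klein-Nishina differential cross section that the pencil-beam scatter kernel integrates along the interaction line is the standard expression (quoted here for context, not taken from the abstract itself):

      \frac{d\sigma}{d\Omega} = \frac{r_e^{2}}{2}\left(\frac{E'}{E}\right)^{2}\left(\frac{E'}{E} + \frac{E}{E'} - \sin^{2}\theta\right), \qquad
      \frac{E'}{E} = \frac{1}{1 + (E/m_e c^{2})(1 - \cos\theta)},

    where E and E' are the incident and scattered photon energies, θ the scattering angle, and r_e the classical electron radius.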

  2. A fast neighbor joining method.

    PubMed

    Li, J F

    2015-01-01

    With the rapid development of sequencing technologies, an increasing number of sequences are available for evolutionary tree reconstruction. Although neighbor joining is regarded as the most popular and fastest evolutionary tree reconstruction method [its time complexity is O(n³), where n is the number of sequences], it is not sufficiently fast to infer evolutionary trees containing more than a few hundred sequences. To increase the speed of neighbor joining, we herein propose FastNJ, a fast implementation of neighbor joining, which was motivated by RNJ and FastJoin, two improved versions of conventional neighbor joining. The main difference between FastNJ and conventional neighbor joining is that, in the former, many pairs of nodes selected by the rule used in RNJ are joined in each iteration. In theory, the time complexity of FastNJ can reach O(n²) in the best cases. Experimental results show that FastNJ yields a significant increase in speed compared to RNJ and conventional neighbor joining with a minimal loss of accuracy. PMID:26345805
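
    As background for the pair-selection step that FastNJ accelerates, the classical neighbor-joining criterion chooses, at each iteration, the pair (i, j) minimizing the Q value computed below; this sketch shows the standard O(n²)-per-iteration selection, not the FastNJ variant itself:

      import numpy as np

      def nj_select_pair(D):
          """Return the pair (i, j) minimizing the neighbor-joining Q criterion.

          D : (n, n) symmetric distance matrix with zero diagonal.
          Q(i, j) = (n - 2) * D[i, j] - sum_k D[i, k] - sum_k D[j, k]
          """
          n = D.shape[0]
          r = D.sum(axis=1)                          # row sums
          Q = (n - 2) * D - r[:, None] - r[None, :]  # Q matrix
          np.fill_diagonal(Q, np.inf)                # exclude i == j
          i, j = np.unravel_index(np.argmin(Q), Q.shape)
          return i, j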

  3. Analytic heuristics for a fast DSC-MRI

    NASA Astrophysics Data System (ADS)

    Virgulin, M.; Castellaro, M.; Marcuzzi, F.; Grisan, E.

    2014-03-01

    Hemodynamics of the human brain may be studied with Dynamic Susceptibility Contrast MRI (DSC-MRI) imaging. The sequence of volumes obtained exhibits a strong spatiotemporal correlation that can be exploited to predict which measurements will carry most of the new information contained in the next frames. In general, sampling speed is an important issue in many applications of MRI, so much current research focuses on methods to reduce the number of measurement samples needed for each frame without degrading image quality. For DSC-MRI, frequency under-sampling of a single frame can be exploited to make more frequent spatial or temporal acquisitions, thus increasing the time resolution and allowing the analysis of fast dynamics not yet observed. Generally (and also for MRI), the recovery of sparse signals has been achieved by Compressed Sensing (CS) techniques, which are based on statistical rather than deterministic properties. By studying analytically the compound Fourier+wavelet transform involved in the reconstruction and sparsification of MR images, we propose a deterministic technique for rapid MRI, exploiting the relations between the sparse wavelet representation of the recovered image and the frequency samples. We give results on real images and on artificial phantoms with added noise, showing the superiority of the method with respect both to classical Iterative Hard Thresholding (IHT) and to Location Constraint Approximate Message Passing (LCAMP) reconstruction algorithms.
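
    As background for the Iterative Hard Thresholding baseline mentioned above, a minimal generic IHT sketch is given below; the dense measurement matrix Phi is an illustrative stand-in for the subsampled Fourier+wavelet operator, and the step size and sparsity level are assumptions, not values from the paper:

      import numpy as np

      def iht(Phi, y, s, n_iter=100):
          """Iterative Hard Thresholding: recover an (approximately) s-sparse
          coefficient vector x from measurements y = Phi @ x.
          Assumes the spectral norm of Phi is <= 1 for convergence."""
          x = np.zeros(Phi.shape[1])
          for _ in range(n_iter):
              x = x + Phi.T @ (y - Phi @ x)        # gradient step toward data consistency
              small = np.argsort(np.abs(x))[:-s]   # indices of all but the s largest entries
              x[small] = 0.0                       # hard thresholding
          return x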

  4. Fast variation method for elastic strip calculation.

    PubMed

    Biryukov, Sergey V

    2002-05-01

    A new fast variation method (FVM) for determining the response of an elastic strip to stresses arbitrarily distributed on the flat side of the strip is proposed. The remaining surface of the strip may have an arbitrary form and is free of stresses. The FVM, like the well-known finite element method (FEM), starts from the variational principle; unlike FEM, however, it does not require meshing of the strip. A comparison of FVM results with the exact analytical solution in the special case of shear stresses and a rectangular strip demonstrates excellent agreement.

  5. 7 CFR 94.303 - Analytical methods.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 3 2011-01-01 2011-01-01 false Analytical methods. 94.303 Section 94.303 Agriculture... POULTRY AND EGG PRODUCTS Processed Poultry Products § 94.303 Analytical methods. The analytical methods... latest edition of the Official Methods of Analysis of AOAC INTERNATIONAL, Suite 500, 481 North...

  6. 7 CFR 94.303 - Analytical methods.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 3 2010-01-01 2010-01-01 false Analytical methods. 94.303 Section 94.303 Agriculture... POULTRY AND EGG PRODUCTS Processed Poultry Products § 94.303 Analytical methods. The analytical methods... latest edition of the Official Methods of Analysis of AOAC INTERNATIONAL, Suite 500, 481 North...

  7. 7 CFR 93.4 - Analytical methods.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... PROCESSED FRUITS AND VEGETABLES Citrus Juices and Certain Citrus Products § 93.4 Analytical methods. (a) The majority of analytical methods for citrus products are found in the Official Methods of Analysis of AOAC...-2417. (b) Other analytical methods for citrus products may be used as approved by the AMS...

  8. 7 CFR 93.4 - Analytical methods.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... PROCESSED FRUITS AND VEGETABLES Citrus Juices and Certain Citrus Products § 93.4 Analytical methods. (a) The majority of analytical methods for citrus products are found in the Official Methods of Analysis of AOAC...-2417. (b) Other analytical methods for citrus products may be used as approved by the AMS...

  9. 7 CFR 93.4 - Analytical methods.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... PROCESSED FRUITS AND VEGETABLES Citrus Juices and Certain Citrus Products § 93.4 Analytical methods. (a) The majority of analytical methods for citrus products are found in the Official Methods of Analysis of AOAC...-2417. (b) Other analytical methods for citrus products may be used as approved by the AMS...

  10. 7 CFR 93.4 - Analytical methods.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... PROCESSED FRUITS AND VEGETABLES Citrus Juices and Certain Citrus Products § 93.4 Analytical methods. (a) The majority of analytical methods for citrus products are found in the Official Methods of Analysis of AOAC...-2417. (b) Other analytical methods for citrus products may be used as approved by the AMS...

  11. 7 CFR 93.4 - Analytical methods.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... PROCESSED FRUITS AND VEGETABLES Citrus Juices and Certain Citrus Products § 93.4 Analytical methods. (a) The majority of analytical methods for citrus products are found in the Official Methods of Analysis of AOAC...-2417. (b) Other analytical methods for citrus products may be used as approved by the AMS...

  12. Advanced epidemiologic and analytical methods.

    PubMed

    Albanese, E

    2016-01-01

    Observational studies are indispensable for etiologic research, and are key to test life-course hypotheses and improve our understanding of neurologic diseases that have long induction and latency periods. In recent years a plethora of advanced design and analytic techniques have been developed to strengthen the robustness and ultimately the validity of the results of observational studies, and to address their inherent proneness to bias. It is the responsibility of clinicians and researchers to critically appraise and appropriately contextualize the findings of the exponentially expanding scientific literature. This critical appraisal should be rooted in a thorough understanding of advanced epidemiologic methods and techniques commonly used to formulate and test relevant hypotheses and to keep bias at bay. PMID:27637951

  13. A simple and fast physics-based analytical method to calculate therapeutic and stray doses from external beam, megavoltage x-ray therapy.

    PubMed

    Jagetic, Lydia J; Newhauser, Wayne D

    2015-06-21

    State-of-the-art radiotherapy treatment planning systems provide reliable estimates of the therapeutic radiation but are known to underestimate or neglect the stray radiation exposures. Most commonly, stray radiation exposures are reconstructed using empirical formulas or lookup tables. The purpose of this study was to develop the basic physics of a model capable of calculating the total absorbed dose both inside and outside of the therapeutic radiation beam for external beam photon therapy. The model was developed using measurements of total absorbed dose in a water-box phantom from a 6 MV medical linear accelerator to calculate dose profiles in both the in-plane and cross-plane direction for a variety of square field sizes and depths in water. The water-box phantom facilitated development of the basic physical aspects of the model. RMS discrepancies between measured and calculated total absorbed dose values in water were less than 9.3% for all fields studied. Computation times for 10 million dose points within a homogeneous phantom were approximately 4 min. These results suggest that the basic physics of the model are sufficiently simple, fast, and accurate to serve as a foundation for a variety of clinical and research applications, some of which may require that the model be extended or simplified based on the needs of the user. A potentially important advantage of a physics-based approach is that the model is more readily adaptable to a wide variety of treatment units and treatment techniques than with empirical models.

  14. Validation of an analytical method based on the high-resolution continuum source flame atomic absorption spectrometry for the fast-sequential determination of several hazardous/priority hazardous metals in soil

    PubMed Central

    2013-01-01

    Background: The aim of this paper was the validation of a new analytical method based on high-resolution continuum source flame atomic absorption spectrometry for the fast-sequential determination of several hazardous/priority hazardous metals (Ag, Cd, Co, Cr, Cu, Ni, Pb and Zn) in soil after microwave-assisted digestion in aqua regia. Determinations were performed on the ContrAA 300 (Analytik Jena) air-acetylene flame spectrometer equipped with a xenon short-arc lamp as a continuum radiation source for all elements, a double monochromator consisting of a prism pre-monochromator and an echelle grating monochromator, and a charge coupled device as detector. For validation, a method-performance study was conducted, establishing the analytical performance of the new method (limits of detection and quantification, precision and accuracy). Moreover, the Bland and Altman statistical method was used to analyze the agreement between the proposed assay and inductively coupled plasma optical emission spectrometry as the standardized method for multielemental determination in soil. Results: The limits of detection in soil samples (3σ criterion) for the high-resolution continuum source flame atomic absorption spectrometry method were (mg/kg): 0.18 (Ag), 0.14 (Cd), 0.36 (Co), 0.25 (Cr), 0.09 (Cu), 1.0 (Ni), 1.4 (Pb) and 0.18 (Zn), close to those in inductively coupled plasma optical emission spectrometry: 0.12 (Ag), 0.05 (Cd), 0.15 (Co), 1.4 (Cr), 0.15 (Cu), 2.5 (Ni), 2.5 (Pb) and 0.04 (Zn). Accuracy was checked by analyzing 4 certified reference materials, and good agreement at the 95% confidence level was found for both methods, with recoveries in the range of 94–106% in atomic absorption and 97–103% in optical emission. Repeatability, found by analyzing real soil samples, was in the range 1.6–5.2% in atomic absorption, similar to that of 1.9–6.1% in optical emission spectrometry. The Bland and Altman method showed no statistically significant difference.
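
    As a reminder of how the Bland and Altman agreement analysis mentioned above works, the bias and 95% limits of agreement are computed from the paired differences between the two methods; this is a generic sketch, not the paper's statistical code:

      import numpy as np

      def bland_altman(x, y):
          """Bland-Altman limits of agreement for two paired measurement sets.

          x, y : arrays of the same analyte concentration measured by the two
                 methods (e.g. HR-CS FAAS vs. ICP-OES) on the same samples.
          Returns the mean bias and the 95% limits of agreement.
          """
          x, y = np.asarray(x, float), np.asarray(y, float)
          diff = x - y
          bias = diff.mean()
          spread = 1.96 * diff.std(ddof=1)
          return bias, (bias - spread, bias + spread)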

  15. A simple and fast physics-based analytical method to calculate therapeutic and stray doses from external beam, megavoltage x-ray therapy

    PubMed Central

    Wilson, Lydia J; Newhauser, Wayne D

    2015-01-01

    State-of-the-art radiotherapy treatment planning systems provide reliable estimates of the therapeutic radiation but are known to underestimate or neglect the stray radiation exposures. Most commonly, stray radiation exposures are reconstructed using empirical formulas or lookup tables. The purpose of this study was to develop the basic physics of a model capable of calculating the total absorbed dose both inside and outside of the therapeutic radiation beam for external beam photon therapy. The model was developed using measurements of total absorbed dose in a water-box phantom from a 6 MV medical linear accelerator to calculate dose profiles in both the in-plane and cross-plane direction for a variety of square field sizes and depths in water. The water-box phantom facilitated development of the basic physical aspects of the model. RMS discrepancies between measured and calculated total absorbed dose values in water were less than 9.3% for all fields studied. Computation times for 10 million dose points within a homogeneous phantom were approximately 4 minutes. These results suggest that the basic physics of the model are sufficiently simple, fast, and accurate to serve as a foundation for a variety of clinical and research applications, some of which may require that the model be extended or simplified based on the needs of the user. A potentially important advantage of a physics-based approach is that the model is more readily adaptable to a wide variety of treatment units and treatment techniques than with empirical models. PMID:26040833

  16. A simple and fast physics-based analytical method to calculate therapeutic and stray doses from external beam, megavoltage x-ray therapy

    NASA Astrophysics Data System (ADS)

    Jagetic, Lydia J.; Newhauser, Wayne D.

    2015-06-01

    State-of-the-art radiotherapy treatment planning systems provide reliable estimates of the therapeutic radiation but are known to underestimate or neglect the stray radiation exposures. Most commonly, stray radiation exposures are reconstructed using empirical formulas or lookup tables. The purpose of this study was to develop the basic physics of a model capable of calculating the total absorbed dose both inside and outside of the therapeutic radiation beam for external beam photon therapy. The model was developed using measurements of total absorbed dose in a water-box phantom from a 6 MV medical linear accelerator to calculate dose profiles in both the in-plane and cross-plane direction for a variety of square field sizes and depths in water. The water-box phantom facilitated development of the basic physical aspects of the model. RMS discrepancies between measured and calculated total absorbed dose values in water were less than 9.3% for all fields studied. Computation times for 10 million dose points within a homogeneous phantom were approximately 4 min. These results suggest that the basic physics of the model are sufficiently simple, fast, and accurate to serve as a foundation for a variety of clinical and research applications, some of which may require that the model be extended or simplified based on the needs of the user. A potentially important advantage of a physics-based approach is that the model is more readily adaptable to a wide variety of treatment units and treatment techniques than with empirical models.

  17. 7 CFR 94.103 - Analytical methods.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 3 2011-01-01 2011-01-01 false Analytical methods. 94.103 Section 94.103 Agriculture... POULTRY AND EGG PRODUCTS Voluntary Analyses of Egg Products § 94.103 Analytical methods. The analytical methods used by the Science and Technology Division laboratories to perform voluntary analyses for...

  18. 7 CFR 94.103 - Analytical methods.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 3 2010-01-01 2010-01-01 false Analytical methods. 94.103 Section 94.103 Agriculture... POULTRY AND EGG PRODUCTS Voluntary Analyses of Egg Products § 94.103 Analytical methods. The analytical methods used by the Science and Technology Division laboratories to perform voluntary analyses for...

  19. Development of a fast analytical method for the determination of sudan dyes in chili- and curry-containing foodstuffs by high-performance liquid chromatography-photodiode array detection.

    PubMed

    Cornet, Vanessa; Govaert, Yasmine; Moens, Goedele; Van Loco, Joris; Degroodt, Jean-Marie

    2006-02-01

    A simple and fast analytical method for the determination of sudans I, II, III, and IV in chili- and curry-containing foodstuffs is described. These dyes are extracted from the samples with acetonitrile and analyzed by high-performance liquid chromatography coupled to a photodiode array detector. The chromatographic separation is carried out on a reverse-phase C18 column in isocratic mode using a mixture of acetonitrile and water. An in-house validation was carried out for chili- and curry-based sauces and powdered spices. Depending on the dye, limits of detection range from 0.2 to 0.5 mg/kg in sauces and from 1.5 to 2 mg/kg in spices. Limits of quantification are between 0.4 and 1 mg/kg in sauces and between 3 and 4 mg/kg in spices. Validation data show good repeatability and within-lab reproducibility, with relative standard deviations < 15%. The overall recoveries are in the range of 51-86% in sauces and 89-100% in powdered spices, depending on the dye involved. Calibration curves are linear in the 0-5 mg/kg range for sauces and in the 0-20 mg/kg range for spices. The proposed method is specific and selective, allowing the analysis of over 20 samples per working day.

  20. 40 CFR 141.89 - Analytical methods.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 22 2010-07-01 2010-07-01 false Analytical methods. 141.89 Section 141...) NATIONAL PRIMARY DRINKING WATER REGULATIONS Control of Lead and Copper § 141.89 Analytical methods. (a... shall be conducted with the methods in § 141.23(k)(1). (1) Analyses for alkalinity,...

  1. 40 CFR 141.89 - Analytical methods.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 23 2011-07-01 2011-07-01 false Analytical methods. 141.89 Section 141...) NATIONAL PRIMARY DRINKING WATER REGULATIONS Control of Lead and Copper § 141.89 Analytical methods. (a... shall be conducted with the methods in § 141.23(k)(1). (1) Analyses for alkalinity,...

  2. Fast neutron imaging device and method

    DOEpatents

    Popov, Vladimir; Degtiarenko, Pavel; Musatov, Igor V.

    2014-02-11

    A fast neutron imaging apparatus and method of constructing fast neutron radiography images, the apparatus including a neutron source and a detector that provides event-by-event acquisition of position and energy deposition, and optionally timing and pulse shape for each individual neutron event detected by the detector. The method for constructing fast neutron radiography images utilizes the apparatus of the invention.

  3. Continuous shading and its fast update in fully analytic triangular-mesh-based computer generated hologram.

    PubMed

    Park, Jae-Hyeung; Kim, Seong-Bok; Yeom, Han-Ju; Kim, Hee-Jae; Zhang, HuiJun; Li, BoNi; Ji, Yeong-Min; Kim, Sang-Hoo; Ko, Seok-Bum

    2015-12-28

    A fully analytic mesh-based computer-generated hologram enables efficient and precise representation of a three-dimensional scene. The conventional method assigns a uniform amplitude inside each mesh, resulting in reconstruction of the three-dimensional scene with flat shading. In this paper, we report an extension of the conventional method to achieve continuous shading, where the amplitude in each mesh varies continuously. The proposed method enables continuous shading while maintaining the fully analytic framework of the conventional method without any sacrifice in precision. The proposed method can also be extended to enable fast update of the shading for different illumination directions and ambient-diffuse reflection ratios based on the Phong reflection model. The feasibility of the proposed method is confirmed by numerical and optical reconstruction of the generated holograms.
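
    For context, the Phong reflection model referenced above expresses the shaded intensity at a surface point in the standard form below (textbook form, not the paper's hologram-domain formulation):

      I = k_a\, i_a + k_d\, (\hat{L}\cdot\hat{N})\, i_d + k_s\, (\hat{R}\cdot\hat{V})^{\alpha}\, i_s,

    where k_a, k_d and k_s are the ambient, diffuse and specular reflection coefficients, \hat{N} is the surface normal, \hat{L} the illumination direction, \hat{R} the mirror reflection of \hat{L} about \hat{N}, \hat{V} the viewing direction, and α the shininess exponent; the fast-update scheme mentioned above concerns the illumination direction and the ambient-diffuse terms.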

  4. 7 CFR 91.23 - Analytical methods.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    .... Environmental Protection Agency (EPA) Chemical Exposure Research Branch, EPA Office of Research and Development... Evaluating Solid Waste Physical/Chemical Methods, Environmental Protection Agency, Office of Solid Waste, SW... and Engineering Center's Military Specifications, approved analytical test methods noted therein,...

  5. Method of identifying analyte-binding peptides

    DOEpatents

    Kauvar, Lawrence M.

    1990-01-01

    A method for affinity chromatography or adsorption of a designated analyte utilizes a paralog as the affinity partner. The immobilized paralog can be used in purification or analysis of the analyte; the paralog can also be used as a substitute for antibody in an immunoassay. The paralog is identified by screening candidate peptide sequences of 4-20 amino acids for specific affinity to the analyte.

  6. Method of identifying analyte-binding peptides

    DOEpatents

    Kauvar, L.M.

    1990-10-16

    A method for affinity chromatography or adsorption of a designated analyte utilizes a paralog as the affinity partner. The immobilized paralog can be used in purification or analysis of the analyte; the paralog can also be used as a substitute for antibody in an immunoassay. The paralog is identified by screening candidate peptide sequences of 4-20 amino acids for specific affinity to the analyte. 5 figs.

  7. Matrix Methods to Analytic Geometry.

    ERIC Educational Resources Information Center

    Bandy, C.

    1982-01-01

    The use of basis matrix methods to rotate axes is detailed. It is felt that persons who have need to rotate axes often will find that the matrix method saves considerable work. One drawback is that most students first learning to rotate axes will not yet have studied linear algebra. (MP)

  8. Method and apparatus for detecting an analyte

    DOEpatents

    Allendorf, Mark D.; Hesketh, Peter J.

    2011-11-29

    We describe the use of coordination polymers (CP) as coatings on microcantilevers for the detection of chemical analytes. CP exhibit changes in unit cell parameters upon adsorption of analytes, which will induce a stress in a static microcantilever upon which a CP layer is deposited. We also describe fabrication methods for depositing CP layers on surfaces.

  9. Safer staining method for acid fast bacilli.

    PubMed Central

    Ellis, R C; Zabrowarny, L A

    1993-01-01

    To develop a method for staining acid fast bacilli which excludes highly toxic phenol from the staining solution, a lipophilic agent, a liquid organic detergent (LOC High Suds, distributed by Amway), was substituted. The acid fast bacilli stained red; nuclei, cytoplasm, and cytoplasmic elements stained blue on a clear background. These results compare very favourably with acid fast bacilli stained by the traditional method. Detergents are efficient lipophilic agents and safer to handle than phenol. The method described here stains acid fast bacilli as efficiently as traditional carbol fuchsin methods. LOC High Suds is considerably cheaper than phenol. PMID:7687254

  10. Safer staining method for acid fast bacilli.

    PubMed

    Ellis, R C; Zabrowarny, L A

    1993-06-01

    To develop a method for staining acid fast bacilli which excludes highly toxic phenol from the staining solution, a lipophilic agent, a liquid organic detergent (LOC High Suds, distributed by Amway), was substituted. The acid fast bacilli stained red; nuclei, cytoplasm, and cytoplasmic elements stained blue on a clear background. These results compare very favourably with acid fast bacilli stained by the traditional method. Detergents are efficient lipophilic agents and safer to handle than phenol. The method described here stains acid fast bacilli as efficiently as traditional carbol fuchsin methods. LOC High Suds is considerably cheaper than phenol.

  11. 7 CFR 94.4 - Analytical methods.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ..., Gaithersburg, MD 20877-2417. (d) Manual of Analytical Methods for the Analysis of Pesticide Residues in Human...), Volumes I and II, Food and Drug Administration, Center for Food Safety and Applied Nutrition (CFSAN),...

  12. Fast quench reactor and method

    DOEpatents

    Detering, Brent A.; Donaldson, Alan D.; Fincke, James R.; Kong, Peter C.

    2002-01-01

    A fast quench reactor includes a reactor chamber having a high temperature heating means such as a plasma torch at its inlet and a restrictive convergent-divergent nozzle at its outlet end. Reactants are injected into the reactor chamber. The resulting heated gaseous stream is then rapidly cooled by passage through the nozzle. This "freezes" the desired end product(s) in the heated equilibrium reaction stage.

  13. Fast quench reactor and method

    DOEpatents

    Detering, Brent A.; Donaldson, Alan D.; Fincke, James R.; Kong, Peter C.

    2002-09-24

    A fast quench reactor includes a reactor chamber having a high temperature heating means such as a plasma torch at its inlet and a restrictive convergent-divergent nozzle at its outlet end. Reactants are injected into the reactor chamber. The resulting heated gaseous stream is then rapidly cooled by passage through the nozzle. This "freezes" the desired end product(s) in the heated equilibrium reaction stage.

  14. Fast quench reactor and method

    DOEpatents

    Detering, Brent A.; Donaldson, Alan D.; Fincke, James R.; Kong, Peter C.

    1998-01-01

    A fast quench reactor includes a reactor chamber having a high temperature heating means such as a plasma torch at its inlet and a restrictive convergent-divergent nozzle at its outlet end. Reactants are injected into the reactor chamber. The resulting heated gaseous stream is then rapidly cooled by passage through the nozzle. This "freezes" the desired end product(s) in the heated equilibrium reaction stage.

  15. Fast quench reactor and method

    DOEpatents

    Detering, B.A.; Donaldson, A.D.; Fincke, J.R.; Kong, P.C.

    1998-05-12

    A fast quench reactor includes a reactor chamber having a high temperature heating means such as a plasma torch at its inlet and a restrictive convergent-divergent nozzle at its outlet end. Reactants are injected into the reactor chamber. The resulting heated gaseous stream is then rapidly cooled by passage through the nozzle. This "freezes" the desired end product(s) in the heated equilibrium reaction stage. 7 figs.

  16. Analytical Methods for Trace Metals. Training Manual.

    ERIC Educational Resources Information Center

    Office of Water Program Operations (EPA), Cincinnati, OH. National Training and Operational Technology Center.

    This training manual presents material on the theoretical concepts involved in the methods listed in the Federal Register as approved for determination of trace metals. Emphasis is on laboratory operations. This course is intended for chemists and technicians with little or no experience in analytical methods for trace metals. Students should have…

  17. 7 CFR 93.13 - Analytical methods.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... PROCESSED FRUITS AND VEGETABLES Peanuts, Tree Nuts, Corn and Other Oilseeds § 93.13 Analytical methods. Official analyses for peanuts, nuts, corn, oilseeds, and related vegetable oils are found in the following... Recommended Practices of the American Oil Chemists' Society (AOCS), American Oil Chemists' Society, P.O....

  18. 7 CFR 93.13 - Analytical methods.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... PROCESSED FRUITS AND VEGETABLES Peanuts, Tree Nuts, Corn and Other Oilseeds § 93.13 Analytical methods. Official analyses for peanuts, nuts, corn, oilseeds, and related vegetable oils are found in the following... Recommended Practices of the American Oil Chemists' Society (AOCS), American Oil Chemists' Society, P.O....

  19. 7 CFR 93.13 - Analytical methods.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... PROCESSED FRUITS AND VEGETABLES Peanuts, Tree Nuts, Corn and Other Oilseeds § 93.13 Analytical methods. Official analyses for peanuts, nuts, corn, oilseeds, and related vegetable oils are found in the following... Recommended Practices of the American Oil Chemists' Society (AOCS), American Oil Chemists' Society, P.O....

  20. 7 CFR 93.13 - Analytical methods.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... PROCESSED FRUITS AND VEGETABLES Peanuts, Tree Nuts, Corn and Other Oilseeds § 93.13 Analytical methods. Official analyses for peanuts, nuts, corn, oilseeds, and related vegetable oils are found in the following... Recommended Practices of the American Oil Chemists' Society (AOCS), American Oil Chemists' Society, P.O....

  1. 7 CFR 93.13 - Analytical methods.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... PROCESSED FRUITS AND VEGETABLES Peanuts, Tree Nuts, Corn and Other Oilseeds § 93.13 Analytical methods. Official analyses for peanuts, nuts, corn, oilseeds, and related vegetable oils are found in the following... Recommended Practices of the American Oil Chemists' Society (AOCS), American Oil Chemists' Society, P.O....

  2. Analytical Methods for Characterizing Magnetic Resonance Probes

    PubMed Central

    Manus, Lisa M.; Strauch, Renee C.; Hung, Andy H.; Eckermann, Amanda L.; Meade, Thomas J.

    2012-01-01

    The efficiency of Gd(III) contrast agents in magnetic resonance image enhancement is governed by a set of tunable structural parameters. Understanding and measuring these parameters requires specific analytical techniques. This Feature describes strategies to optimize each of the critical Gd(III) relaxation parameters for molecular imaging applications and the methods employed for their evaluation. PMID:22624599

  3. FAST TRACK COMMUNICATION: Uniqueness of static black holes without analyticity

    NASA Astrophysics Data System (ADS)

    Chruściel, Piotr T.; Galloway, Gregory J.

    2010-08-01

    We show that the hypothesis of analyticity in the uniqueness theory of vacuum, or electrovacuum, static black holes is not needed. More generally, we show that prehorizons covering a closed set cannot occur in well-behaved domains of outer communications.

  4. Prioritizing pesticide compounds for analytical methods development

    USGS Publications Warehouse

    Norman, Julia E.; Kuivila, Kathryn M.; Nowell, Lisa H.

    2012-01-01

    The U.S. Geological Survey (USGS) has a periodic need to re-evaluate pesticide compounds in terms of priorities for inclusion in monitoring and studies and, thus, must also assess the current analytical capabilities for pesticide detection. To meet this need, a strategy has been developed to prioritize pesticides and degradates for analytical methods development. Screening procedures were developed to separately prioritize pesticide compounds in water and sediment. The procedures evaluate pesticide compounds in existing USGS analytical methods for water and sediment and compounds for which recent agricultural-use information was available. Measured occurrence (detection frequency and concentrations) in water and sediment, predicted concentrations in water and predicted likelihood of occurrence in sediment, potential toxicity to aquatic life or humans, and priorities of other agencies or organizations, regulatory or otherwise, were considered. Several existing strategies for prioritizing chemicals for various purposes were reviewed, including those that identify and prioritize persistent, bioaccumulative, and toxic compounds, and those that determine candidates for future regulation of drinking-water contaminants. The systematic procedures developed and used in this study rely on concepts common to many previously established strategies. The evaluation of pesticide compounds resulted in the classification of compounds into three groups: Tier 1 for high priority compounds, Tier 2 for moderate priority compounds, and Tier 3 for low priority compounds. For water, a total of 247 pesticide compounds were classified as Tier 1 and, thus, are high priority for inclusion in analytical methods for monitoring and studies. Of these, about three-quarters are included in some USGS analytical method; however, many of these compounds are included on research methods that are expensive and for which there are few data on environmental samples. The remaining quarter of Tier 1

  5. Fast and accurate analytical model to solve inverse problem in SHM using Lamb wave propagation

    NASA Astrophysics Data System (ADS)

    Poddar, Banibrata; Giurgiutiu, Victor

    2016-04-01

    Lamb wave propagation is central to structural health monitoring (SHM) of thin-walled structures because Lamb wave modes are natural modes of wave propagation in these structures, traveling long distances with little attenuation. This raises the prospect of monitoring large structures with few sensors/actuators. However, damage detection and identification is an "inverse problem" in which the exact mathematical model of the system is not known, and it is made more challenging by the confounding factors of statistical variation in material and geometric properties; the problem may also be ill posed. Because of these complexities, a direct solution of the damage detection and identification problem in SHM is not feasible, so an indirect approach based on repeated solution of the "forward problem" is commonly used. This requires a fast forward-problem solver. Because of the complexity of the forward problem of Lamb wave scattering from damage, researchers rely primarily on numerical techniques such as FEM and BEM, but these methods are too slow to be practical for structural health monitoring. We have developed a fast and accurate analytical forward-problem solver for this purpose. This solver, CMEP (complex modes expansion and vector projection), can simulate the scattering of Lamb waves from all types of damage in thin-walled structures quickly and accurately to assist the inverse-problem solver.

  6. Analytic sequential methods for detecting network intrusions

    NASA Astrophysics Data System (ADS)

    Chen, Xinjia; Walker, Ernest

    2014-05-01

    In this paper, we propose an analytic sequential method for detecting port-scan attackers, which routinely perform random "portscans" of IP addresses to find vulnerable servers to compromise. In addition to rigorously controlling the probability of falsely implicating benign remote hosts as malicious, our method performs significantly faster than other current solutions. We have developed explicit formulae for quick determination of the parameters of the new detection algorithm.
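
    The record above does not reproduce the algorithm itself. As illustrative background only, sequential port-scan detectors of this kind are commonly built on a Wald sequential probability ratio test over the stream of connection outcomes from a remote host; the Python sketch below shows that generic test, with assumed success probabilities and error rates rather than parameters from the paper.

        import math

        def sprt_portscan(outcomes, p_benign=0.8, p_scanner=0.2,
                          alpha=0.01, beta=0.01):
            """Generic Wald SPRT over connection outcomes from one remote host.

            outcomes: iterable of 1 (connection succeeded) or 0 (failed).
            alpha/beta: target false-positive / false-negative probabilities.
            Returns "malicious", "benign", or "undecided".
            """
            upper = math.log((1.0 - beta) / alpha)   # declare scanner
            lower = math.log(beta / (1.0 - alpha))   # declare benign
            llr = 0.0
            for y in outcomes:
                p1 = p_scanner if y else 1.0 - p_scanner
                p0 = p_benign if y else 1.0 - p_benign
                llr += math.log(p1 / p0)
                if llr >= upper:
                    return "malicious"
                if llr <= lower:
                    return "benign"
            return "undecided"

        # Example: a host whose connection attempts mostly fail
        print(sprt_portscan([0, 0, 1, 0, 0, 0, 0]))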

  7. An Analytical Method of Estimating Turbine Performance

    NASA Technical Reports Server (NTRS)

    Kochendorfer, Fred D; Nettles, J Cary

    1948-01-01

    A method is developed by which the performance of a turbine over a range of operating conditions can be analytically estimated from the blade angles and flow areas. In order to use the method, certain coefficients that determine the weight flow and friction losses must be approximated. The method is used to calculate the performance of the single-stage turbine of a commercial aircraft gas-turbine engine and the calculated performance is compared with the performance indicated by experimental data. For the turbine of the typical example, the assumed pressure losses and turning angles give a calculated performance that represents the trends of the experimental performance with reasonable accuracy. The exact agreement between analytical performance and experimental performance is contingent upon the proper selection of the blading-loss parameter. A variation of blading-loss parameter from 0.3 to 0.5 includes most of the experimental data from the turbine investigated.

  8. An analytical method of estimating turbine performance

    NASA Technical Reports Server (NTRS)

    Kochendorfer, Fred D; Nettles, J Cary

    1949-01-01

    A method is developed by which the performance of a turbine over a range of operating conditions can be analytically estimated from the blade angles and flow areas. In order to use the method, certain coefficients that determine the weight flow and the friction losses must be approximated. The method is used to calculate the performance of the single-stage turbine of a commercial aircraft gas-turbine engine and the calculated performance is compared with the performance indicated by experimental data. For the turbine of the typical example, the assumed pressure losses and the turning angles give a calculated performance that represents the trends of the experimental performance with reasonable accuracy. The exact agreement between analytical performance and experimental performance is contingent upon the proper selection of a blading-loss parameter.

  9. Algorithmic and analytical methods in network biology.

    PubMed

    Koyutürk, Mehmet

    2010-01-01

    During the genomic revolution, algorithmic and analytical methods for organizing, integrating, analyzing, and querying biological sequence data proved invaluable. Today, increasing availability of high-throughput data pertaining to functional states of biomolecules, as well as their interactions, enables genome-scale studies of the cell from a systems perspective. The past decade witnessed significant efforts on the development of computational infrastructure for large-scale modeling and analysis of biological systems, commonly using network models. Such efforts lead to novel insights into the complexity of living systems, through development of sophisticated abstractions, algorithms, and analytical techniques that address a broad range of problems, including the following: (1) inference and reconstruction of complex cellular networks; (2) identification of common and coherent patterns in cellular networks, with a view to understanding the organizing principles and building blocks of cellular signaling, regulation, and metabolism; and (3) characterization of cellular mechanisms that underlie the differences between living systems, in terms of evolutionary diversity, development and differentiation, and complex phenotypes, including human disease. These problems pose significant algorithmic and analytical challenges because of the inherent complexity of the systems being studied; limitations of data in terms of availability, scope, and scale; intractability of resulting computational problems; and limitations of reference models for reliable statistical inference. This article provides a broad overview of existing algorithmic and analytical approaches to these problems, highlights key biological insights provided by these approaches, and outlines emerging opportunities and challenges in computational systems biology.

  10. Algorithmic and analytical methods in network biology

    PubMed Central

    Koyutürk, Mehmet

    2011-01-01

    During the genomic revolution, algorithmic and analytical methods for organizing, integrating, analyzing, and querying biological sequence data proved invaluable. Today, increasing availability of high-throughput data pertaining to functional states of biomolecules, as well as their interactions, enables genome-scale studies of the cell from a systems perspective. The past decade witnessed significant efforts on the development of computational infrastructure for large-scale modeling and analysis of biological systems, commonly using network models. Such efforts lead to novel insights into the complexity of living systems, through development of sophisticated abstractions, algorithms, and analytical techniques that address a broad range of problems, including the following: (1) inference and reconstruction of complex cellular networks; (2) identification of common and coherent patterns in cellular networks, with a view to understanding the organizing principles and building blocks of cellular signaling, regulation, and metabolism; and (3) characterization of cellular mechanisms that underlie the differences between living systems, in terms of evolutionary diversity, development and differentiation, and complex phenotypes, including human disease. These problems pose significant algorithmic and analytical challenges because of the inherent complexity of the systems being studied; limitations of data in terms of availability, scope, and scale; intractability of resulting computational problems; and limitations of reference models for reliable statistical inference. This article provides a broad overview of existing algorithmic and analytical approaches to these problems, highlights key biological insights provided by these approaches, and outlines emerging opportunities and challenges in computational systems biology. PMID:20836029

  11. Secondary waste minimization in analytical methods

    SciTech Connect

    Green, D.W.; Smith, L.L.; Crain, J.S.; Boparai, A.S.; Kiely, J.T.; Yaeger, J.S.; Schilling, J.B.

    1995-07-01

    The characterization phase of site remediation is an important and costly part of the process. Because toxic solvents and other hazardous materials are used in common analytical methods, characterization is also a source of new waste, including mixed waste. Alternative analytical methods can reduce the volume or form of hazardous waste produced either in the sample preparation step or in the measurement step. The authors are examining alternative methods in the areas of inorganic, radiological, and organic analysis. For determining inorganic constituents, alternative methods were studied for sample introduction into inductively coupled plasma spectrometers. Figures of merit for the alternative methods, as well as their associated waste volumes, were compared with the conventional approaches. In the radiological area, the authors are comparing conventional methods for gross α/β measurements of soil samples to an alternative method that uses high-pressure microwave dissolution. For determination of organic constituents, microwave-assisted extraction was studied for RCRA regulated semivolatile organics in a variety of solid matrices, including spiked samples in blank soil; polynuclear aromatic hydrocarbons in soils, sludges, and sediments; and semivolatile organics in soil. Extraction efficiencies were determined under varying conditions of time, temperature, microwave power, moisture content, and extraction solvent. Solvent usage was cut from the 300 mL used in conventional extraction methods to about 30 mL. Extraction results varied from one matrix to another. In most cases, the microwave-assisted extraction technique was as efficient as the more common Soxhlet or sonication extraction techniques.

  12. A pragmatic overview of fast multipole methods

    SciTech Connect

    Strickland, J.H.; Baty, R.S.

    1995-12-01

    A number of physics problems can be modeled by a set of N elements which have pair-wise interactions with one another. A direct solution technique requires computational effort which is O(N²). Fast multipole methods (FMM) have been widely used in recent years to obtain solutions to these problems requiring a computational effort of only O(N ln N) or O(N). In this paper we present an overview of several variations of the fast multipole method along with examples of its use in solving a variety of physical problems.
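
    To make the scaling contrast concrete, here is a minimal Python sketch of the direct O(N²) pair-wise evaluation that fast multipole methods accelerate; the 1/r kernel and the random element values are illustrative assumptions, not taken from the report.

        import numpy as np

        def direct_pairwise_potential(positions, charges):
            """Direct O(N^2) evaluation of a 1/r pair-wise interaction.

            An FMM replaces this double loop with hierarchical multipole
            expansions, reducing the cost to O(N log N) or O(N).
            """
            n = len(positions)
            potential = np.zeros(n)
            for i in range(n):
                for j in range(n):
                    if i != j:
                        r = np.linalg.norm(positions[i] - positions[j])
                        potential[i] += charges[j] / r
            return potential

        rng = np.random.default_rng(0)
        pos = rng.random((200, 3))      # 200 randomly placed elements (assumed)
        q = rng.random(200)
        phi = direct_pairwise_potential(pos, q)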

  13. Fast analytical modeling of SEM images at a high level of accuracy

    NASA Astrophysics Data System (ADS)

    Babin, S.; Borisov, S. S.; Trifonenkov, V. P.

    2015-03-01

    Simulating SEM images is important in order to optimize SEM subsystems and the setup of the SEM for specific tasks, such as new devices and fabrication methods, as well as to complete simulation flows in lithography and nanofabrication. Monte Carlo simulators have been used for these purposes, but their disadvantage is the low speed of simulation. A fast analytic simulator of SEM images, ASEM, is presented in this paper, which takes into account the most important factors in SEM: electron scattering in 3D samples composed of various materials, electrical fields, the properties and geometry of detectors, and charging. This allows for a simulation accuracy approaching that of Monte Carlo, while the simulation time is on the scale of one minute. Examples of simulations and their comparison to actual experiments are presented with various detectors, samples, electrical fields and charging, including the contrast reversal effect due to charging. Simulations of SEM images using resist profiles exported from a lithography simulator are also presented.

  14. SXR Continuum Radiation Transmitted Through Metallic Filters: An Analytical Approach To Fast Electron Temperature Measurements

    SciTech Connect

    Delgado-Aparicio, L.; Tritz, K.; Kramer, T.; Stutman, D.; Finkenthal, M.; Hill, K.; Bitter, M.

    2010-08-26

    A new set of analytic formulae describes the transmission of soft X-ray (SXR) continuum radiation through a metallic foil for its application to fast electron temperature measurements in fusion plasmas. This novel approach shows good agreement with numerical calculations over a wide range of plasma temperatures, in contrast with the solutions obtained when using a transmission approximated by a single-Heaviside function [S. von Goeler, Rev. Sci. Instrum., 20, 599, (1999)]. The new analytic formulae can improve the interpretation of the experimental results and thus contribute to obtaining fast temperature measurements in between intermittent Thomson Scattering data.
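
    For orientation, the filtered-continuum technique this record refers to can be sketched in generic notation (the symbols below are assumptions for illustration, not the paper's). The signal recorded behind a foil of transmission T(E), viewing bremsstrahlung continuum from a plasma of electron temperature T_e, scales as

        \[ S(T_e) \;\propto\; \int_0^{\infty} \frac{T(E)}{\sqrt{T_e}}\, e^{-E/T_e}\, \mathrm{d}E , \]

    and the single-Heaviside approximation T(E) ≈ H(E - E_c) reduces this to S ∝ √(T_e)·e^{-E_c/T_e}, so that the ratio of signals behind two different foils depends on T_e alone. The contribution summarized above is a more accurate analytic form of this integral than the Heaviside shortcut.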

  15. Analytical methods for toxic gases from thermal degradation of polymers

    NASA Technical Reports Server (NTRS)

    Hsu, M.-T. S.

    1977-01-01

    Toxic gases evolved from the thermal oxidative degradation of synthetic or natural polymers in small laboratory chambers or in large scale fire tests are measured by several different analytical methods. Gas detector tubes are used for fast on-site detection of suspect toxic gases. The infrared spectroscopic method is an excellent qualitative and quantitative analysis for some toxic gases. Permanent gases such as carbon monoxide, carbon dioxide, methane and ethylene, can be quantitatively determined by gas chromatography. Highly toxic and corrosive gases such as nitrogen oxides, hydrogen cyanide, hydrogen fluoride, hydrogen chloride and sulfur dioxide should be passed into a scrubbing solution for subsequent analysis by either specific ion electrodes or spectrophotometric methods. Low-concentration toxic organic vapors can be concentrated in a cold trap and then analyzed by gas chromatography and mass spectrometry. The limitations of different methods are discussed.

  16. Analytical Methods for Secondary Metabolite Detection.

    PubMed

    Taibon, Judith; Strasser, Hermann

    2016-01-01

    The entomopathogenic fungi Metarhizium brunneum, Beauveria bassiana, and B. brongniartii are widely applied as biological pest control agents in OECD countries. Consequently, their use has to be accompanied by a risk management approach, which includes the need to monitor the fate of their relevant toxic metabolites. Data gaps identified by regulatory authorities remain concerning the identification and quantification of relevant toxins and secondary metabolites. In this chapter, analytical methods are presented that allow the qualitative and quantitative analysis of the relevant toxic B. brongniartii metabolite oosporein and the three relevant M. brunneum destruxin (dtx) derivatives dtx A, dtx B, and dtx E. PMID:27565501

  17. Analytical chromatography. Methods, instrumentation and applications

    NASA Astrophysics Data System (ADS)

    Yashin, Ya I.; Yashin, A. Ya

    2006-04-01

    The state-of-the-art and the prospects in the development of main methods of analytical chromatography, viz., gas, high performance liquid and ion chromatographic techniques, are characterised. Achievements of the past 10-15 years in the theory and general methodology of chromatography and also in the development of new sorbents, columns and chromatographic instruments are outlined. The use of chromatography in the environmental control, biology, medicine, pharmaceutics, and also for monitoring the quality of foodstuffs and products of chemical, petrochemical and gas industries, etc. is considered.

  19. The greening of PCB analytical methods

    SciTech Connect

    Erickson, M.D.; Alvarado, J.S.; Aldstadt, J.H.

    1995-12-01

    Green chemistry incorporates waste minimization, pollution prevention and solvent substitution. The primary focus of green chemistry over the past decade has been within the chemical industry; adoption by routine environmental laboratories has been slow because regulatory standard methods must be followed. A related paradigm, microscale chemistry has gained acceptance in undergraduate teaching laboratories, but has not been broadly applied to routine environmental analytical chemistry. We are developing green and microscale techniques for routine polychlorinated biphenyl (PCB) analyses as an example of the overall potential within the environmental analytical community. Initial work has focused on adaptation of commonly used routine EPA methods for soils and oils. Results of our method development and validation demonstrate that: (1) Solvent substitution can achieve comparable results and eliminate environmentally less-desirable solvents, (2) Microscale extractions can cut the scale of the analysis by at least a factor of ten, (3) We can better match the amount of sample used with the amount needed for the GC determination step, (4) The volume of waste generated can be cut by at least a factor of ten, and (5) Costs are reduced significantly in apparatus, reagent consumption, and labor.

  20. The use of the spectral method within the fast adaptive composite grid method

    SciTech Connect

    McKay, S.M.

    1994-12-31

    The use of efficient algorithms for the solution of partial differential equations has been sought for many years. The fast adaptive composite grid (FAC) method combines an efficient algorithm with high accuracy to obtain low-cost solutions to partial differential equations. The FAC method achieves a fast solution by combining solutions on different grids with varying discretizations and using multigrid-like techniques. Recently, the continuous FAC (CFAC) method has been developed, which utilizes an analytic solution within a subdomain to iterate to a solution of the problem. This has been shown to achieve excellent results when the analytic solution can be found. The CFAC method will be extended to allow solvers which construct a function for the solution, e.g., spectral and finite element methods. In this discussion, the spectral methods will be used to provide a fast, accurate solution to the partial differential equation. As spectral methods are more accurate than finite difference methods, the ensuing accuracy from this hybrid method outside of the subdomain will be investigated.

  1. Numerical methods: Analytical benchmarking in transport theory

    SciTech Connect

    Ganapol, B.D.

    1988-01-01

    Numerical methods applied to reactor technology have reached a high degree of maturity. Certainly one- and two-dimensional neutron transport calculations have become routine, with several programs available on personal computers and the most widely used programs adapted to workstation and minicomputer computational environments. With the introduction of massive parallelism and as experience with multitasking increases, even more improvement in the development of transport algorithms can be expected. Benchmarking an algorithm is usually not a very pleasant experience for the code developer. Proper algorithmic verification by benchmarking involves the following considerations: (1) conservation of particles, (2) confirmation of intuitive physical behavior, and (3) reproduction of analytical benchmark results. By using today's computational advantages, new basic numerical methods have been developed that allow a wider class of benchmark problems to be considered.

  2. An overview of fast multipole methods

    SciTech Connect

    Strickland, J.H.; Baty, R.S.

    1995-11-01

    A number of physics problems may be cast in terms of Hilbert-Schmidt integral equations. In many cases, the integrals tend to be zero over a large portion of the domain of interest. All of the information is contained in compact regions of the domain which renders their use very attractive from the standpoint of efficient numerical computation. Discrete representation of these integrals leads to a system of N elements which have pair-wise interactions with one another. A direct solution technique requires computational effort which is O(N²). Fast multipole methods (FMM) have been widely used in recent years to obtain solutions to these problems requiring a computational effort of only O(N ln N) or O(N). In this paper we present an overview of several variations of the fast multipole method along with examples of its use in solving a variety of physical problems.

  3. Magnetostatic solution by hybrid technique and fast multipole method

    NASA Astrophysics Data System (ADS)

    Gruosso, G.; Repetto, M.

    2008-02-01

    The use of the fast multipole method (FMM) in the solution of a magnetostatic problem is presented. The magnetostatic solution strategy is based on a finite formulation of the electromagnetic field coupled with an integral formulation for the definition of boundary conditions on the external surface of the unstructured mesh. Under the hypotheses of the micromagnetic problem, the resulting matrix structure is sparse and the integral terms appear only on the right-hand side. Magnetic surface charge is used as the source of these integral terms and is localized on the faces between tetrahedra. The computation of the integral terms can be performed with analytical formulas for the near-field contributions and with the FMM for the far-field ones.

  4. Analytic Method for Computing Instrument Pointing Jitter

    NASA Technical Reports Server (NTRS)

    Bayard, David

    2003-01-01

    A new method of calculating the root-mean-square (rms) pointing jitter of a scientific instrument (e.g., a camera, radar antenna, or telescope) is introduced based on a state-space concept. In comparison with the prior method of calculating the rms pointing jitter, the present method involves significantly less computation. The rms pointing jitter of an instrument (the square root of the jitter variance shown in the figure) is an important physical quantity which impacts the design of the instrument, its actuators, controls, sensory components, and sensor- output-sampling circuitry. Using the Sirlin, San Martin, and Lucke definition of pointing jitter, the prior method of computing the rms pointing jitter involves a frequency-domain integral of a rational polynomial multiplied by a transcendental weighting function, necessitating the use of numerical-integration techniques. In practice, numerical integration complicates the problem of calculating the rms pointing error. In contrast, the state-space method provides exact analytic expressions that can be evaluated without numerical integration.
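
    As an illustration of why a state-space route avoids numerical frequency integration, the steady-state covariance of a linear pointing model driven by white noise can be obtained from a Lyapunov equation. The Python sketch below is generic and assumed for illustration (a lightly damped second-order mode with assumed parameters); it computes the plain steady-state angle variance, not the windowed jitter metric of Sirlin, San Martin, and Lucke used in the record above.

        import numpy as np
        from scipy.linalg import solve_continuous_lyapunov

        # Illustrative pointing dynamics: state x = [angle, angular rate]
        omega = 2.0 * np.pi * 0.5          # 0.5 Hz structural mode (assumed)
        zeta = 0.02                        # damping ratio (assumed)
        A = np.array([[0.0, 1.0],
                      [-omega**2, -2.0 * zeta * omega]])
        B = np.array([[0.0], [1.0]])       # white-noise torque input
        q = 1.0e-6                         # noise intensity (assumed)

        # Steady-state covariance P solves A P + P A^T + B q B^T = 0
        P = solve_continuous_lyapunov(A, -q * (B @ B.T))
        rms_angle = np.sqrt(P[0, 0])       # exact rms of the angle state
        print(rms_angle)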

  5. Analytical methods for optical remote sensing

    SciTech Connect

    Spellicy, R.L.

    1997-12-31

    Optical monitoring systems are very powerful because of their ability to see many compounds simultaneously as well as their ability to report results in real time. However, these strengths also present unique problems for analysis of the resulting data and validation of observed results. Today, many FTIR and UV-DOAS systems are in use. Some of these are manned systems supporting short-term tests, while others are totally unmanned systems expected to operate without intervention for weeks or months at a time. Developing analytical methods that support both the diversity of compounds and the diversity of applications is challenging. In this paper, the fundamental concepts of spectral analysis for IR/UV systems are presented, followed by examples of specific field data from both short-term measurement programs looking at unique sources and long-term unmanned monitoring systems looking at ambient air.

  6. Recent advances in analytical methods for mycotoxins.

    PubMed

    Gilbert, J

    1993-01-01

    Recent advances in analytical methods are reviewed using the examples of aflatoxins and trichothecene mycotoxins. The most dramatic advances are seen as being those based on immunological principles utilized for aflatoxins to produce simple screening methods and for rapid specific clean-up. The possibilities of automation using immunoaffinity columns is described. In contrast for the trichothecenes immunological methods have not had the same general impact. Post-column derivatization using bromine or iodine to enhance fluorescence for HPLC detection of aflatoxins has become widely employed and there are similar possibilities for improved HPLC detection for trichothecenes using electrochemical or trichothecene-specific post-column reactions. There have been improvements in the use of more rapid and specific clean-up methods for trichothecenes, whilst HPLC and GC remain equally favoured for the end-determination. More sophisticated instrumental techniques such as mass spectrometry (LC/MS, MS/MS) and supercritical fluid chromatography (SFC/MS) have been demonstrated to have potential for application to mycotoxin analysis, but have not as yet made much general impact.

  7. Pyrroloquinoline quinone: Metabolism and analytical methods

    SciTech Connect

    Smidt, C.R.

    1990-01-01

    Pyrroloquinoline quinone (PQQ) functions as a cofactor for bacterial oxidoreductases. Whether or not PQQ serves as a cofactor in higher plants and animals remains controversial. Nevertheless, strong evidence exists that PQQ has nutritional importance. In highly purified, chemically defined diets, PQQ stimulates animal growth. Further, PQQ deprivation impairs connective tissue maturation, particularly when initiated in utero and throughout perinatal development. The study addresses two main objectives: (1) to elucidate basic aspects of the metabolism of PQQ in animals, and (2) to develop and improve existing analytical methods for PQQ. To study intestinal absorption of PQQ, ten mice were administered [14C]-PQQ per os. PQQ was readily absorbed (62%) in the lower intestine and was excreted by the kidney within 24 hours. Significant amounts of labeled PQQ were retained only by skin and kidney. Three approaches were taken to answer the question of whether or not PQQ is synthesized by the intestinal microflora of mice. First, dietary antibiotics had no effect on fecal PQQ excretion. Then, no bacterial isolates could be identified that are known to synthesize PQQ. Last, cecal contents were incubated anaerobically with radiolabeled PQQ precursors, with no label appearing in isolated PQQ. Thus, intestinal PQQ synthesis is unlikely. Analysis of PQQ in biological samples is problematic since PQQ forms adducts with nucleophilic compounds and binds to the protein fraction. Existing analytical methods are reviewed and a new approach is introduced that allows for detection of PQQ in animal tissue and foods. PQQ is freed from proteins by ion exchange chromatography, purified on activated silica cartridges, detected by a colorimetric redox-cycling assay, and identified by mass spectrometry. That compounds with the properties of PQQ may be nutritionally important offers interesting areas for future investigation.

  8. 40 CFR 161.180 - Enforcement analytical method.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 23 2010-07-01 2010-07-01 false Enforcement analytical method. 161.180... DATA REQUIREMENTS FOR REGISTRATION OF ANTIMICROBIAL PESTICIDES Product Chemistry Data Requirements § 161.180 Enforcement analytical method. An analytical method suitable for enforcement purposes must...

  9. 7 CFR 98.4 - Analytical methods.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... MEALS, READY-TO-EAT (MRE's), MEATS, AND MEAT PRODUCTS MRE's, Meats, and Related Meat Food Products § 98.... Army Individual Protection Directorate's Military Specifications, approved analytical test...

  10. Paper analytical devices for fast field screening of beta lactam antibiotics and anti-tuberculosis pharmaceuticals

    PubMed Central

    Weaver, Abigail A.; Reiser, Hannah; Barstis, Toni; Benvenuti, Michael; Ghosh, Debarati; Hunckler, Michael; Joy, Brittney; Koenig, Leah; Raddell, Kellie; Lieberman, Marya

    2013-01-01

    Reports of low quality pharmaceuticals have been on the rise in the last decade with the greatest prevalence of substandard medicines in developing countries, where lapses in manufacturing quality control or breaches in the supply chain allow substandard medicines to reach the marketplace. Here, we describe inexpensive test cards for fast field screening of pharmaceutical dosage forms containing beta lactam antibiotics or combinations of the four first-line antituberculosis (TB) drugs. The devices detect the active pharmaceutical ingredients (APIs) ampicillin, amoxicillin, rifampicin, isoniazid, ethambutol, and pyrazinamide, and also screen for substitute pharmaceuticals such as acetaminophen and chloroquine that may be found in counterfeit pharmaceuticals. The tests can detect binders and fillers like chalk, talc, and starch not revealed by traditional chromatographic methods. These paper devices contain twelve lanes, separated by hydrophobic barriers, with different reagents deposited in the lanes. The user rubs some of the solid pharmaceutical across the lanes and dips the edge of the paper into water. As water climbs up the lanes by capillary action, it triggers a library of different chemical tests and a timer to indicate when the tests are completed. The reactions in each lane generate colors to form a “color bar code” which can be analyzed visually by comparison to standard outcomes. While quantification of the APIs is poor compared to conventional analytical methods, the sensitivity and selectivity for the analytes is high enough to pick out suspicious formulations containing no API or a substitute API, as well as formulations containing APIs that have been “cut” with inactive ingredients. PMID:23725012

  11. Analytical estimates of electron quasi-linear diffusion by fast magnetosonic waves

    NASA Astrophysics Data System (ADS)

    Mourenas, D.; Artemyev, A. V.; Agapitov, O. V.; Krasnoselskikh, V.

    2013-06-01

    Quantifying the loss of relativistic electrons from the Earth's radiation belts requires estimating the effects of many kinds of observed waves, ranging from ULF to VLF. Analytical estimates of electron quasi-linear diffusion coefficients for whistler-mode chorus and hiss waves of arbitrary obliquity have been recently derived, allowing useful analytical approximations for lifetimes. We examine here the influence of much lower frequency and highly oblique, fast magnetosonic waves (also called ELF equatorial noise) by means of both approximate analytical formulations of the corresponding diffusion coefficients and full numerical simulations. Further analytical developments allow us to identify the most critical wave and plasma parameters necessary for a strong impact of fast magnetosonic waves on electron lifetimes and acceleration in the simultaneous presence of chorus, hiss, or lightning-generated waves, both inside and outside the plasmasphere. In this respect, a relatively small frequency-to-ion-gyrofrequency ratio appears more favorable, and other propitious circumstances are characterized. This study should be useful for a comprehensive appraisal of the potential effect of fast magnetosonic waves throughout the magnetosphere.

  12. Analytical solution and computer program (FAST) to estimate fluid fluxes from subsurface temperature profiles

    NASA Astrophysics Data System (ADS)

    Kurylyk, Barret L.; Irvine, Dylan J.

    2016-02-01

    This study details the derivation and application of a new analytical solution to the one-dimensional, transient conduction-advection equation that is applied to trace vertical subsurface fluid fluxes. The solution employs a flexible initial condition that allows for nonlinear temperature-depth profiles, providing a key improvement over most previous solutions. The boundary condition is composed of any number of superimposed step changes in surface temperature, and thus it accommodates intermittent warming and cooling periods due to long-term changes in climate or land cover. The solution is verified using an established numerical model of coupled groundwater flow and heat transport. A new computer program FAST (Flexible Analytical Solution using Temperature) is also presented to facilitate the inversion of this analytical solution to estimate vertical groundwater flow. The program requires surface temperature history (which can be estimated from historic climate data), subsurface thermal properties, a present-day temperature-depth profile, and reasonable initial conditions. FAST is written in the Python computing language and can be run using a free graphical user interface. Herein, we demonstrate the utility of the analytical solution and FAST using measured subsurface temperature and climate data from the Sendai Plain, Japan. Results from these illustrative examples highlight the influence of the chosen initial and boundary conditions on estimated vertical flow rates.
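
    For reference, the single-step building block that such a superposition-of-steps solution rests on is the classical one-dimensional conduction-advection response to a step change ΔT0 in surface temperature. A minimal Python sketch of this standard (Ogata-Banks-type) step response is given below; the parameter values are placeholders for illustration, not site data or the exact formulation of the paper.

        import numpy as np
        from scipy.special import erfc

        def step_response(z, t, dT0, v, D):
            """Temperature change at depth z (m) and time t (s) after a
            surface step of size dT0, for thermal front velocity v (m/s)
            and bulk thermal diffusivity D (m^2/s).

            Solves dT/dt + v dT/dz = D d2T/dz2 with T(z,0)=0, T(0,t)=dT0.
            """
            a = 2.0 * np.sqrt(D * t)
            return 0.5 * dT0 * (erfc((z - v * t) / a)
                                + np.exp(v * z / D) * erfc((z + v * t) / a))

        # Illustrative (assumed) values: 1 degC warming 30 years ago,
        # slow downward groundwater flow
        z = np.linspace(0.0, 50.0, 101)
        T = step_response(z, t=30 * 3.156e7, dT0=1.0, v=2.0e-9, D=1.0e-6)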

  13. [Pharmacokinetics, metabolism, and analytical methods of ethanol].

    PubMed

    Goullé, J-P; Guerbet, M

    2015-09-01

    Alcohol is a licit substance whose significant consumption is responsible for a major public health problem. Every year, a large number of deaths are related to its consumption. It is also involved in various accidents, on the road and at work, as well as in acts of violence. Ethanol absorption and its fate are detailed. Ethanol is mainly absorbed in the small intestine. It follows the movement of body water, so it diffuses uniformly into all tissues with the exception of bone and fat. The major route of ethanol detoxification is the liver. Detoxification is a saturable two-step oxidation: in the first stage ethanol is oxidized to acetaldehyde by alcohol dehydrogenase, and in the second stage acetaldehyde is oxidized to acetate. Genetic factors and certain drugs can disturb the absorption and metabolism of ethanol. The analytical methods for the quantification of alcohol in man include analysis of exhaled air and of blood. Screening and quantification of ethanol for road safety are performed in exhaled air. In hospitals, blood ethanol determination is routinely performed by an enzymatic method, but the rule for forensic samples is gas chromatography.
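
    The saturable elimination mentioned above is commonly written as Michaelis-Menten kinetics for the blood concentration C(t) (a standard textbook form, added here for illustration rather than quoted from the review):

        \[ \frac{\mathrm{d}C}{\mathrm{d}t} \;=\; -\,\frac{V_{\max}\, C}{K_m + C} , \]

    which, because K_m is small compared with the blood concentrations reached after typical drinking, reduces to the familiar near-constant (zero-order) decline used in forensic back-calculations.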

  14. 40 CFR 425.03 - Sulfide analytical methods and applicability.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... Provisions § 425.03 Sulfide analytical methods and applicability. (a) The potassium ferricyanide titration... the potassium ferricyanide titration method for the determination of sulfide in wastewaters...

  15. 40 CFR 425.03 - Sulfide analytical methods and applicability.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Provisions § 425.03 Sulfide analytical methods and applicability. (a) The potassium ferricyanide titration... the potassium ferricyanide titration method for the determination of sulfide in wastewaters...

  16. 40 CFR 425.03 - Sulfide analytical methods and applicability.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Provisions § 425.03 Sulfide analytical methods and applicability. (a) The potassium ferricyanide titration... the potassium ferricyanide titration method for the determination of sulfide in wastewaters...

  17. Green analytical method development for statin analysis.

    PubMed

    Assassi, Amira Louiza; Roy, Claude-Eric; Perovitch, Philippe; Auzerie, Jack; Hamon, Tiphaine; Gaudin, Karen

    2015-02-01

    A green analytical chemistry method was developed for pravastatin, fluvastatin and atorvastatin analysis. An HPLC/DAD method using an ethanol-based mobile phase with octadecyl-grafted silica of various grafting and related column parameters, such as particle size, core-shell and monolith formats, was studied. Retention, efficiency and detector linearity were optimized. Even for columns with particle sizes under 2 μm, the benefit of maintaining efficiency over a large range of flow rates was not obtained with the ethanol-based mobile phase compared to an acetonitrile-based one. Therefore, the strategy of shortening the analysis by increasing the flow rate led to a decrease in efficiency with the ethanol-based mobile phase. An ODS-AQ YMC column, 50 mm × 4.6 mm, 3 μm, was selected, which showed the best compromise between analysis time, statin separation, and efficiency. HPLC conditions were a flow rate of 1 mL/min, ethanol/formic acid (pH 2.5, 25 mM) (50:50, v/v), thermostated at 40°C. To reduce solvent consumption for sample preparation, a concentration of 0.5 mg/mL of each statin was found to be the highest that respected detector linearity. These conditions were validated for each statin for content determination in highly concentrated hydro-alcoholic solutions. Solubility higher than 100 mg/mL was found for pravastatin and fluvastatin, whereas for atorvastatin calcium salt the maximum concentration was 2 mg/mL for hydro-alcoholic binary mixtures between 35% and 55% of ethanol in water. Using atorvastatin instead of its calcium salt, solubility was improved. Highly concentrated solutions of statins offer a potential fluid for per Buccal Per-Mucous(®) administration, with the advantages of rapid and easy passage of drugs. PMID:25582487

  18. 40 CFR 136.6 - Method modifications and analytical requirements.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... person or laboratory using a test procedure (analytical method) in this Part. (2) Chemistry of the method... (analytical method) provided that the chemistry of the method or the determinative technique is not changed... prevent efficient recovery of organic pollutants and prevent the method from meeting QC requirements,...

  19. An analytic method to compute star cluster luminosity statistics

    NASA Astrophysics Data System (ADS)

    da Silva, Robert L.; Krumholz, Mark R.; Fumagalli, Michele; Fall, S. Michael

    2014-03-01

    The luminosity distribution of the brightest star clusters in a population of galaxies encodes critical pieces of information about how clusters form, evolve and disperse, and whether and how these processes depend on the large-scale galactic environment. However, extracting constraints on models from these data is challenging, in part because comparisons between theory and observation have traditionally required computationally intensive Monte Carlo methods to generate mock data that can be compared to observations. We introduce a new method that circumvents this limitation by allowing analytic computation of cluster order statistics, i.e. the luminosity distribution of the Nth most luminous cluster in a population. Our method is flexible and requires few assumptions, allowing for parametrized variations in the initial cluster mass function and its upper and lower cutoffs, variations in the cluster age distribution, stellar evolution and dust extinction, as well as observational uncertainties in both the properties of star clusters and their underlying host galaxies. The method is fast enough to make it feasible for the first time to use Markov chain Monte Carlo methods to search parameter space to find best-fitting values for the parameters describing cluster formation and disruption, and to obtain rigorous confidence intervals on the inferred values. We implement our method in a software package called the Cluster Luminosity Order-Statistic Code, which we have made publicly available.
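
    The kind of order-statistic expression such a method evaluates analytically can be sketched as follows, assuming (purely for illustration; the paper's own assumptions are more general) that the number of clusters brighter than luminosity ℓ in a galaxy is Poisson-distributed with mean Λ(>ℓ):

        \[ P\!\left(L_{(N)} \le \ell\right) \;=\; \sum_{k=0}^{N-1} \frac{\Lambda(>\ell)^{k}}{k!}\, e^{-\Lambda(>\ell)} , \]

    i.e. the Nth most luminous cluster is fainter than ℓ exactly when fewer than N clusters exceed ℓ, so the full distribution follows directly from the expected luminosity function without Monte Carlo sampling.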

  20. Metalaxyl: persistence, degradation, metabolism, and analytical methods.

    PubMed

    Sukul, P; Spiteller, M

    2000-01-01

    Metalaxyl is a systemic fungicide used to control plant diseases caused by Oomycete fungi. Its formulations include granules, wettable powders, dusts, and emulsifiable concentrates. Application may be by foliar or soil incorporation, surface spraying (broadcast or band), drenching, and seed treatment. Metalaxyl registered products either contain metalaxyl as the sole active ingredient or are combined with other active ingredients (e.g., captan, mancozeb, copper compounds, carboxin). Due to its broad-spectrum activity, metalaxyl is used world-wide on a variety of fruit and vegetable crops. Its effectiveness results from inhibition of uridine incorporation into RNA and specific inhibition of RNA polymerase-1. Metalaxyl has both curative and systemic properties. Its mammalian toxicity is classified as EPA toxicity class III and it is also relatively non-toxic to most nontarget arthropod and vertebrate species. Adequate analytical methods of TLC, GLC, HPLC, MS, and other techniques are available for identification and determination of metalaxyl residues and its metabolites. Available laboratory and field studies indicate that metalaxyl is stable to hydrolysis under normal environmental pH values. It is also photolytically stable in water and soil when exposed to natural sunlight. Its tolerance to a wide range of pH, light, and temperature leads to its continued use in agriculture. Metalaxyl is photodecomposed in UV light, and photoproducts are formed by rearrangement of the N-acyl group to the aromatic ring, demethoxylation, N-deacylation, and elimination of the methoxycarbonyl group from the molecule. Photosensitizers such as humic acid, TiO2, H2O2, acetone, and riboflavin accelerate its photodecomposition. Information is provided on the fate of metalaxyl in plant, soil, water, and animals. Major metabolic routes include hydrolysis of the methyl ester and methyl ether oxidation of the ring-methyl groups. The latter are precursors of conjugates in plants and animals

  1. An analytical method for computing atomic contact areas in biomolecules.

    PubMed

    Mach, Paul; Koehl, Patrice

    2013-01-15

    We propose a new analytical method for detecting and computing contacts between atoms in biomolecules. It is based on the alpha shape theory and proceeds in three steps. First, we compute the weighted Delaunay triangulation of the union of spheres representing the molecule. In the second step, the Delaunay complex is filtered to derive the dual complex. Finally, contacts between spheres are collected. In this approach, two atoms i and j are defined to be in contact if their centers are connected by an edge in the dual complex. The contact areas between atom i and its neighbors are computed based on the caps formed by these neighbors on the surface of i; the total area of all these caps is partitioned according to their spherical Laguerre Voronoi diagram on the surface of i. This method is analytical and its implementation in a new program BallContact is fast and robust. We have used BallContact to study contacts in a database of 1551 high resolution protein structures. We show that with this new definition of atomic contacts, we generate realistic representations of the environments of atoms and residues within a protein. In particular, we establish the importance of nonpolar contact areas that complement the information represented by the accessible surface areas. This new method bears similarity to the tessellation methods used to quantify atomic volumes and contacts, with the advantage that it does not require the presence of explicit solvent molecules if the surface of the protein is to be considered. © 2012 Wiley Periodicals, Inc.
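
    As a small geometric illustration of the cap construction described above (generic two-sphere geometry, not the BallContact implementation, which further partitions overlapping caps with a spherical Laguerre Voronoi diagram): the cap that a neighboring sphere j cuts on the surface of sphere i is bounded by the plane of intersection of the two spheres, and its area follows from the cap height.

        import math

        def cap_area_on_i(r_i, r_j, d):
            """Area of the cap cut on sphere i by an intersecting sphere j.

            r_i, r_j: sphere radii; d: center-to-center distance.
            """
            if d >= r_i + r_j:            # spheres are disjoint
                return 0.0
            if d + r_i <= r_j:            # sphere i is engulfed by j
                return 4.0 * math.pi * r_i ** 2
            if d + r_j <= r_i:            # j lies inside i: no cap on i's surface
                return 0.0
            # Distance from the center of i to the plane of intersection
            x = (d * d + r_i * r_i - r_j * r_j) / (2.0 * d)
            h = r_i - x                   # cap height on sphere i
            return 2.0 * math.pi * r_i * h

        # Example: two unit-radius atoms whose centers are 1.5 radii apart
        print(cap_area_on_i(1.0, 1.0, 1.5))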

  2. 40 CFR 425.03 - Sulfide analytical methods and applicability.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... § 425.03 Sulfide analytical methods and applicability. (a) The potassium ferricyanide titration method... ferricyanide titration method for the determination of sulfide in wastewaters discharged by plants operating...

  3. 40 CFR 425.03 - Sulfide analytical methods and applicability.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... § 425.03 Sulfide analytical methods and applicability. (a) The potassium ferricyanide titration method... ferricyanide titration method for the determination of sulfide in wastewaters discharged by plants operating...

  4. Methods and Instruments for Fast Neutron Detection

    SciTech Connect

    Jordan, David V.; Reeder, Paul L.; Cooper, Matthew W.; McCormick, Kathleen R.; Peurrung, Anthony J.; Warren, Glen A.

    2005-05-01

    Pacific Northwest National Laboratory evaluated the performance of a large-area (~0.7 m2) plastic scintillator time-of-flight (TOF) sensor for direct detection of fast neutrons. This type of sensor is a readily area-scalable technology that provides broad-area geometrical coverage at a reasonably low cost. It can yield intrinsic detection efficiencies that compare favorably with moderator-based detection methods. The timing resolution achievable should permit substantially more precise time windowing of return neutron flux than would otherwise be possible with moderated detectors. The energy-deposition threshold imposed on each scintillator contributing to the event-definition trigger in a TOF system can be set to blind the sensor to direct emission from the neutron generator. The primary technical challenge addressed in the project was to understand the capabilities of a neutron TOF sensor in the limit of large scintillator area and small scintillator separation, a size regime in which the neutral particle’s flight path between the two scintillators is not tightly constrained.
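
    The kinematics behind such a time-of-flight measurement are worth stating explicitly (standard nonrelativistic relations, added here for context): for a flight path L between the two scintillators and a measured flight time Δt,

        \[ E_n \;=\; \tfrac{1}{2}\, m_n \left( \frac{L}{\Delta t} \right)^{2} , \qquad \frac{\delta E_n}{E_n} \;=\; 2\sqrt{ \left( \frac{\delta L}{L} \right)^{2} + \left( \frac{\delta t}{\Delta t} \right)^{2} } , \]

    which makes explicit why the closing remark above matters: with large, closely spaced scintillators the spread δL in possible flight paths, rather than the timing resolution δt, tends to dominate the neutron energy resolution.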

  5. An analytic reconstruction method for PET based on cubic splines

    NASA Astrophysics Data System (ADS)

    Kastis, George A.; Kyriakopoulou, Dimitra; Fokas, Athanasios S.

    2014-03-01

    PET imaging is an important nuclear medicine modality that measures in vivo distribution of imaging agents labeled with positron-emitting radionuclides. Image reconstruction is an essential component in tomographic medical imaging. In this study, we present the mathematical formulation and an improved numerical implementation of an analytic, 2D, reconstruction method called SRT, Spline Reconstruction Technique. This technique is based on the numerical evaluation of the Hilbert transform of the sinogram via an approximation in terms of 'custom made' cubic splines. It also imposes sinogram thresholding which restricts reconstruction only within object pixels. Furthermore, by utilizing certain symmetries it achieves a reconstruction time similar to that of FBP. We have implemented SRT in the software library called STIR and have evaluated this method using simulated PET data. We present reconstructed images from several phantoms. Sinograms have been generated at various Poisson noise levels and 20 realizations of noise have been created at each level. In addition to visual comparisons of the reconstructed images, the contrast has been determined as a function of noise level. Further analysis includes the creation of line profiles when necessary, to determine resolution. Numerical simulations suggest that the SRT algorithm produces fast and accurate reconstructions at realistic noise levels. The contrast is over 95% in all phantoms examined and is independent of noise level.
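
    The central quantity the SRT evaluates can be written, in generic notation assumed here for illustration, as the Hilbert transform of the sinogram g(s, θ) along the detector coordinate,

        \[ (\mathcal{H} g)(s, \theta) \;=\; \frac{1}{\pi}\, \mathrm{p.v.}\!\int_{-\infty}^{\infty} \frac{g(s', \theta)}{s - s'}\, \mathrm{d}s' , \]

    and the 'custom made' cubic splines mentioned above approximate g(s', θ) piecewise by polynomials, so that this principal-value integral has a closed-form expression on each spline segment instead of requiring numerical quadrature near the singularity.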

  6. Fast Single Image Super-Resolution Using a New Analytical Solution for l2 - l2 Problems.

    PubMed

    Zhao, Ningning; Wei, Qi; Basarab, Adrian; Dobigeon, Nicolas; Kouame, Denis; Tourneret, Jean-Yves

    2016-08-01

    This paper addresses the problem of single image super-resolution (SR), which consists of recovering a high-resolution image from its blurred, decimated, and noisy version. The existing algorithms for single image SR use different strategies to handle the decimation and blurring operators. In addition to the traditional first-order gradient methods, recent techniques investigate splitting-based methods dividing the SR problem into up-sampling and deconvolution steps that can be easily solved. Instead of following this splitting strategy, we propose to deal with the decimation and blurring operators simultaneously by taking advantage of their particular properties in the frequency domain, leading to a new fast SR approach. Specifically, an analytical solution is derived and implemented efficiently for the Gaussian prior or any other regularization that can be formulated into an l2 -regularized quadratic model, i.e., an l2 - l2 optimization problem. The flexibility of the proposed SR scheme is shown through the use of various priors/regularizations, ranging from generic image priors to learning-based approaches. In the case of non-Gaussian priors, we show how the analytical solution derived from the Gaussian case can be embedded into traditional splitting frameworks, allowing the computation cost of existing algorithms to be decreased significantly. Simulation results conducted on several images with different priors illustrate the effectiveness of our fast SR approach compared with existing techniques. PMID:27187960
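
    In generic notation (assumed here for illustration, not quoted from the paper), the l2-l2 problem referred to above and its closed-form minimizer are

        \[ \hat{\mathbf{x}} \;=\; \arg\min_{\mathbf{x}} \; \|\mathbf{y} - \mathbf{S}\mathbf{H}\mathbf{x}\|_2^2 + \lambda \|\mathbf{x} - \bar{\mathbf{x}}\|_2^2 \;=\; \left( \mathbf{H}^{H}\mathbf{S}^{H}\mathbf{S}\mathbf{H} + \lambda \mathbf{I} \right)^{-1} \left( \mathbf{H}^{H}\mathbf{S}^{H}\mathbf{y} + \lambda \bar{\mathbf{x}} \right) , \]

    where H is the circulant blurring operator, S the decimation, and x̄ the prior mean; the contribution summarized above is that this inverse can be applied exactly and cheaply in the Fourier domain rather than approximated by iterative first-order or splitting schemes.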

  8. 40 CFR 158.355 - Enforcement analytical method.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 24 2014-07-01 2014-07-01 false Enforcement analytical method. 158.355 Section 158.355 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) PESTICIDE PROGRAMS DATA REQUIREMENTS FOR PESTICIDES Product Chemistry § 158.355 Enforcement analytical method....

  9. 40 CFR 158.355 - Enforcement analytical method.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 23 2010-07-01 2010-07-01 false Enforcement analytical method. 158.355 Section 158.355 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) PESTICIDE PROGRAMS DATA REQUIREMENTS FOR PESTICIDES Product Chemistry § 158.355 Enforcement analytical method....

  10. 40 CFR 158.355 - Enforcement analytical method.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 25 2013-07-01 2013-07-01 false Enforcement analytical method. 158.355 Section 158.355 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) PESTICIDE PROGRAMS DATA REQUIREMENTS FOR PESTICIDES Product Chemistry § 158.355 Enforcement analytical method....

  11. 40 CFR 158.355 - Enforcement analytical method.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 25 2012-07-01 2012-07-01 false Enforcement analytical method. 158.355 Section 158.355 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) PESTICIDE PROGRAMS DATA REQUIREMENTS FOR PESTICIDES Product Chemistry § 158.355 Enforcement analytical method....

  12. 40 CFR 158.355 - Enforcement analytical method.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 24 2011-07-01 2011-07-01 false Enforcement analytical method. 158.355 Section 158.355 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) PESTICIDE PROGRAMS DATA REQUIREMENTS FOR PESTICIDES Product Chemistry § 158.355 Enforcement analytical method....

  13. Thiram: degradation, applications and analytical methods.

    PubMed

    Sharma, Vaneet Kumar; Aulakh, J S; Malik, Ashok Kumar

    2003-10-01

    In this review, a brief introduction to the pesticide thiram (tetramethylthiuram disulfide; TMTD) is given, along with its other applications. All the important methods available are systematically arranged and listed under the various techniques. Some of these methods have been applied to the determination of thiram in commercial formulations, synthetic mixtures, grains, vegetables and fruits. A comparison of the different methods is the salient feature of this review.

  14. Learner Language Analytic Methods and Pedagogical Implications

    ERIC Educational Resources Information Center

    Dyson, Bronwen

    2010-01-01

    Methods for analysing interlanguage have long aimed to capture learner language in its own right. By surveying the cognitive methods of Error Analysis, Obligatory Occasion Analysis and Frequency Analysis, this paper traces reformulations to attain this goal. The paper then focuses on Emergence Analysis, which fine-tunes learner language analysis…

  15. 7 CFR 98.4 - Analytical methods.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... MEALS, READY-TO-EAT (MRE's), MEATS, AND MEAT PRODUCTS MRE's, Meats, and Related Meat Food Products § 98... perform analyses of meat, meat food products and MRE's are listed as follows: (1) Official Methods...

  16. 7 CFR 98.4 - Analytical methods.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... MEALS, READY-TO-EAT (MREs), MEATS, AND MEAT PRODUCTS MREs, Meats, and Related Meat Food Products § 98.4... of meat, meat food products and MREs are listed as follows: (1) Official Methods of Analysis of...

  17. 7 CFR 98.4 - Analytical methods.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... MEALS, READY-TO-EAT (MRE's), MEATS, AND MEAT PRODUCTS MRE's, Meats, and Related Meat Food Products § 98... perform analyses of meat, meat food products and MRE's are listed as follows: (1) Official Methods...

  18. 7 CFR 98.4 - Analytical methods.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... MEALS, READY-TO-EAT (MRE's), MEATS, AND MEAT PRODUCTS MRE's, Meats, and Related Meat Food Products § 98... perform analyses of meat, meat food products and MRE's are listed as follows: (1) Official Methods...

  19. 7 CFR 91.23 - Analytical methods.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... are used. The manuals of standard methods most often used by the Science and Technology laboratories... Practices of the American Oil Chemists' Society (AOCS), American Oil Chemists' Society, P.O. Box 3489,...

  20. 7 CFR 91.23 - Analytical methods.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... are used. The manuals of standard methods most often used by the Science and Technology laboratories... Practices of the American Oil Chemists' Society (AOCS), American Oil Chemists' Society, P.O. Box 3489,...

  1. 7 CFR 91.23 - Analytical methods.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Examination of Foods, Carl Vanderzant and Don Splittstoesser (Editors), American Public Health Association.... Environmental Protection Agency (EPA) Chemical Exposure Research Branch, EPA Office of Research and Development... Methods for the Examination of Dairy Products, American Public Health Association, 1015 Fifteenth...

  2. 40 CFR 141.704 - Analytical methods.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Federal Register approves this incorporation by reference in accordance with 5 U.S.C. 552(a) and 1 CFR.... (b) E. coli. System must use methods for enumeration of E. coli in source water approved in § 136.3(a... of an E. coli sample for up to 48 hours between sample collection and initiation of analysis if...

  3. 40 CFR 141.704 - Analytical methods.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Federal Register approves this incorporation by reference in accordance with 5 U.S.C. 552(a) and 1 CFR.... (b) E. coli. System must use methods for enumeration of E. coli in source water approved in § 136.3(a... of an E. coli sample for up to 48 hours between sample collection and initiation of analysis if...

  4. 40 CFR 141.704 - Analytical methods.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Federal Register approves this incorporation by reference in accordance with 5 U.S.C. 552(a) and 1 CFR.... (b) E. coli. System must use methods for enumeration of E. coli in source water approved in § 136.3(a... of an E. coli sample for up to 48 hours between sample collection and initiation of analysis if...

  5. 40 CFR 141.704 - Analytical methods.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... Federal Register approves this incorporation by reference in accordance with 5 U.S.C. 552(a) and 1 CFR.... (b) E. coli. System must use methods for enumeration of E. coli in source water approved in § 136.3(a... of an E. coli sample for up to 48 hours between sample collection and initiation of analysis if...

  6. 40 CFR 141.704 - Analytical methods.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Federal Register approves this incorporation by reference in accordance with 5 U.S.C. 552(a) and 1 CFR.... (b) E. coli. System must use methods for enumeration of E. coli in source water approved in § 136.3(a... of an E. coli sample for up to 48 hours between sample collection and initiation of analysis if...

  7. 7 CFR 94.4 - Analytical methods.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... follows: (a) Compendium Methods for the Microbiological Examination of Foods, Carl Vanderzant and Don Splittstoesser (Editors), American Public Health Association, 1015 Fifteenth Street, NW, Washington, DC 20005. (b... Examination of Dairy Products, American Public Health Association, 1015 Fifteenth Street, NW, Washington,...

  8. 7 CFR 94.4 - Analytical methods.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... follows: (a) Compendium Methods for the Microbiological Examination of Foods, Carl Vanderzant and Don Splittstoesser (Editors), American Public Health Association, 1015 Fifteenth Street, NW, Washington, DC 20005. (b... Examination of Dairy Products, American Public Health Association, 1015 Fifteenth Street, NW, Washington,...

  9. 7 CFR 94.4 - Analytical methods.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... follows: (a) Compendium Methods for the Microbiological Examination of Foods, Carl Vanderzant and Don Splittstoesser (Editors), American Public Health Association, 1015 Fifteenth Street, NW, Washington, DC 20005. (b... Examination of Dairy Products, American Public Health Association, 1015 Fifteenth Street, NW, Washington,...

  10. 7 CFR 94.4 - Analytical methods.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ...), Volumes I and II, Food and Drug Administration, Center for Food Safety and Applied Nutrition (CFSAN), 200... follows: (a) Compendium Methods for the Microbiological Examination of Foods, Carl Vanderzant and Don Splittstoesser (Editors), American Public Health Association, 1015 Fifteenth Street, NW, Washington, DC 20005....

  11. Rotary fast tool servo system and methods

    DOEpatents

    Montesanti, Richard C.; Trumper, David L.

    2007-10-02

    A high bandwidth rotary fast tool servo provides tool motion in a direction nominally parallel to the surface-normal of a workpiece at the point of contact between the cutting tool and workpiece. Three or more flexure blades having all ends fixed are used to form an axis of rotation for a swing arm that carries a cutting tool at a set radius from the axis of rotation. An actuator rotates a swing arm assembly such that a cutting tool is moved in and away from the lathe-mounted, rotating workpiece in a rapid and controlled manner in order to machine the workpiece. A pair of position sensors provides rotation and position information for a swing arm to a control system. A control system commands and coordinates motion of the fast tool servo with the motion of a spindle, rotating table, cross-feed slide, and in-feed slide of a precision lathe.

  12. FIELD ANALYTICAL SCREENING PROGRAM: PCB METHOD - INNOVATIVE TECHNOLOGY REPORT

    EPA Science Inventory

    This innovative technology evaluation report (ITER) presents information on the demonstration of the U.S. Environmental Protection Agency (EPA) Region 7 Superfund Field Analytical Screening Program (FASP) method for determining polychlorinated biphenyl (PCB) contamination in soil...

  13. Methods for Estimating Uncertainty in Factor Analytic Solutions

    EPA Science Inventory

    The EPA PMF (Environmental Protection Agency positive matrix factorization) version 5.0 and the underlying multilinear engine-executable ME-2 contain three methods for estimating uncertainty in factor analytic models: classical bootstrap (BS), displacement of factor elements (DI...

  14. Analytical chemistry methods for mixed oxide fuel, March 1985

    SciTech Connect

    Not Available

    1985-03-01

    This standard provides analytical chemistry methods for the analysis of materials used to produce mixed oxide fuel. These materials are ceramic fuel and insulator pellets and the plutonium and uranium oxides and nitrates used to fabricate these pellets.

  15. 40 CFR 136.6 - Method modifications and analytical requirements.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... person or laboratory using a test procedure (analytical method) in this part. (2) Chemistry of the method... oxygen demand. (6) QC means “quality control.” (b) Method modifications. (1) If the underlying chemistry... notification should be of the form “Method xxx has been modified within the flexibility allowed in 40 CFR...

  16. 40 CFR 136.6 - Method modifications and analytical requirements.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... person or laboratory using a test procedure (analytical method) in this part. (2) Chemistry of the method... oxygen demand. (6) QC means “quality control.” (b) Method modifications. (1) If the underlying chemistry... notification should be of the form “Method xxx has been modified within the flexibility allowed in 40 CFR...

  17. 40 CFR 136.6 - Method modifications and analytical requirements.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... person or laboratory using a test procedure (analytical method) in this part. (2) Chemistry of the method... oxygen demand. (6) QC means “quality control.” (b) Method modifications. (1) If the underlying chemistry... notification should be of the form “Method xxx has been modified within the flexibility allowed in 40 CFR...

  18. 40 CFR 766.16 - Developing the analytical test method.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... analytical test method. Because of the matrix differences of the chemicals listed for testing, no one method... to separate the HDDs/HDFs from the sample matrix. Methods are reviewed in the Guidelines under § 766... meet the requirements of the chemical matrix. (d) Analysis. The method of choice is High Resolution...

  19. Analytical techniques for instrument design - matrix methods

    SciTech Connect

    Robinson, R.A.

    1997-09-01

    We take the traditional Cooper-Nathans approach, as has been applied for many years for steady-state triple-axis spectrometers, and consider its generalisation to other inelastic scattering spectrometers. This involves a number of simple manipulations of exponentials of quadratic forms. In particular, we discuss a toolbox of matrix manipulations that can be performed on the 6-dimensional Cooper-Nathans matrix: diagonalisation (Moller-Nielsen method), coordinate changes (e.g. from (Δk_I, Δk_F) to (ΔE, ΔQ) and 2 dummy variables), integration over one or more variables (e.g. over such dummy variables), integration subject to linear constraints (e.g. Bragg's law for analysers), inversion to give the variance-covariance matrix, and so on. We show how these tools can be combined to solve a number of important problems, within the narrow-band limit and the gaussian approximation. We will argue that a generalised program that can handle multiple different spectrometers could (and should) be written in parallel to the Monte-Carlo packages that are becoming available. We will also discuss the complementarity between detailed Monte-Carlo calculations and the approach presented here. In particular, Monte-Carlo methods traditionally simulate the real experiment as performed in practice, given a model scattering law, while the Cooper-Nathans method asks the inverse question: given that a neutron turns up in a particular spectrometer configuration (e.g. angle and time of flight), what is the probability distribution of possible scattering events at the sample? The Monte-Carlo approach could be applied in the same spirit to this question.
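
    As a concrete illustration of one of these toolbox operations, the sketch below integrates a Gaussian exponential of a quadratic form over a subset of its variables, which amounts to taking a Schur complement of the Cooper-Nathans-type matrix; it is a generic numpy example, not code from the paper, and the names are illustrative.

      import numpy as np

      def integrate_out(M, keep):
          # Integrate exp(-0.5 * x^T M x) over the variables not listed in `keep`.
          # For positive-definite M the effective matrix for the kept variables is
          # the Schur complement  M_kk - M_kd M_dd^{-1} M_dk; its inverse is the
          # variance-covariance (resolution) matrix of the kept variables.
          keep = np.asarray(keep)
          drop = np.setdiff1d(np.arange(M.shape[0]), keep)
          Mkk = M[np.ix_(keep, keep)]
          Mkd = M[np.ix_(keep, drop)]
          Mdd = M[np.ix_(drop, drop)]
          return Mkk - Mkd @ np.linalg.solve(Mdd, Mkd.T)

      # covariance of the kept variables:
      # cov = np.linalg.inv(integrate_out(M, keep))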

  20. Handbook of Analytical Methods for Textile Composites

    NASA Technical Reports Server (NTRS)

    Cox, Brian N.; Flanagan, Gerry

    1997-01-01

    The purpose of this handbook is to introduce models and computer codes for predicting the properties of textile composites. The handbook includes several models for predicting the stress-strain response all the way to ultimate failure; methods for assessing work of fracture and notch sensitivity; and design rules for avoiding certain critical mechanisms of failure, such as delamination, by proper textile design. The following textiles received some treatment: 2D woven, braided, and knitted/stitched laminates and 3D interlock weaves, and braids.

  1. Analytical and numerical methods; advanced computer concepts

    SciTech Connect

    Lax, P D

    1991-03-01

    This past year, two projects have been completed and a new one is under way. First, in joint work with R. Kohn, we developed a numerical algorithm to study the blowup of solutions to equations with certain similarity transformations. In the second project, the adaptive mesh refinement code of Berger and Colella for shock hydrodynamic calculations has been parallelized and numerical studies using two different shared memory machines have been done. My current effort is towards the development of Cartesian mesh methods to solve pdes with complicated geometries. Most of the coming year will be spent on this project, which is joint work with Prof. Randy Leveque at the University of Washington in Seattle.

  2. Analytical techniques for instrument design -- Matrix methods

    SciTech Connect

    Robinson, R.A.

    1997-12-31

    The authors take the traditional Cooper-Nathans approach, as has been applied for many years for steady-state triple-axis spectrometers, and consider its generalization to other inelastic scattering spectrometers. This involves a number of simple manipulations of exponentials of quadratic forms. In particular, they discuss a toolbox of matrix manipulations that can be performed on the 6-dimensional Cooper-Nathans matrix. They show how these tools can be combined to solve a number of important problems, within the narrow-band limit and the gaussian approximation. They will argue that a generalized program that can handle multiple different spectrometers could (and should) be written in parallel to the Monte-Carlo packages that are becoming available. They also discuss the complementarity between detailed Monte-Carlo calculations and the approach presented here. In particular, Monte-Carlo methods traditionally simulate the real experiment as performed in practice, given a model scattering law, while the Cooper-Nathans method asks the inverse question: given that a neutron turns up in a particular spectrometer configuration (e.g. angle and time of flight), what is the probability distribution of possible scattering events at the sample? The Monte-Carlo approach could be applied in the same spirit to this question.

  3. Relativistic mirrors in laser plasmas (analytical methods)

    NASA Astrophysics Data System (ADS)

    Bulanov, S. V.; Esirkepov, T. Zh; Kando, M.; Koga, J.

    2016-10-01

    Relativistic flying mirrors in plasmas are realized as thin dense electron (or electron-ion) layers accelerated by high-intensity electromagnetic waves to velocities close to the speed of light in vacuum. The reflection of an electromagnetic wave from the relativistic mirror results in its energy and frequency changing. In a counter-propagation configuration, the frequency of the reflected wave is multiplied by a factor proportional to the square of the Lorentz factor. This scientific area promises the development of sources of ultrashort x-ray pulses in the attosecond range. The expected intensity will reach the level at which the effects predicted by nonlinear quantum electrodynamics start to play a key role. We present an overview of theoretical methods used to describe relativistic flying, accelerating, oscillating mirrors emerging in intense laser-plasma interactions.
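
    For reference, the frequency upshift mentioned above follows from the relativistic double Doppler effect for reflection from a counter-propagating mirror; in standard notation (not specific to this review),

      \omega_r \;=\; \omega_0\,\frac{1+\beta}{1-\beta} \;\simeq\; 4\gamma^2 \omega_0,
      \qquad \beta = v/c, \quad \gamma = (1-\beta^2)^{-1/2} \gg 1 ,

    where ω_0 is the incident frequency and v the mirror velocity.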

  4. Fracture mechanics life analytical methods verification testing

    NASA Technical Reports Server (NTRS)

    Favenesi, J. A.; Clemons, T. G.; Riddell, W. T.; Ingraffea, A. R.; Wawrzynek, P. A.

    1994-01-01

    The objective was to evaluate NASCRAC (trademark) version 2.0, a second-generation fracture analysis code, for verification and validation. NASCRAC was evaluated using a combination of comparisons to the literature, closed-form solutions, numerical analyses, and tests. Several limitations and minor errors were detected. Additionally, a number of major flaws were discovered. These major flaws were generally due to the application of a specific method or theory, not to programming logic. Results are presented for the following program capabilities: K versus a, J versus a, crack opening area, life calculation due to fatigue crack growth, tolerable crack size, proof test logic, tearing instability, creep crack growth, crack transitioning, crack retardation due to overloads, and elastic-plastic stress redistribution. It is concluded that the code is an acceptable fracture tool for K solutions of simplified geometries, for a limited number of J and crack opening area solutions, and for fatigue crack propagation with the Paris equation and constant amplitude loads when the Paris equation is applicable.
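
    As an illustration of the constant-amplitude fatigue-crack-growth calculation referred to above (the Paris equation), here is a minimal numerical integration; the material constants, geometry factor and crack sizes are placeholder values, not taken from NASCRAC.

      import numpy as np

      def fatigue_cycles(a0, af, dsigma, C=1e-11, m=3.0, Y=1.12, steps=10000):
          # Integrate the Paris law da/dN = C*(dK)^m with dK = Y*dsigma*sqrt(pi*a)
          # from crack size a0 to af.  a is in metres and dsigma in MPa, so dK is
          # in MPa*sqrt(m); C, m and Y are illustrative placeholder constants.
          a = np.linspace(a0, af, steps)
          dK = Y * dsigma * np.sqrt(np.pi * a)
          dN_da = 1.0 / (C * dK ** m)        # cycles per metre of crack growth
          return np.trapz(dN_da, a)

      # example: growth from a 1 mm to a 10 mm crack under a 100 MPa stress range
      # print(fatigue_cycles(1e-3, 10e-3, 100.0))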

  5. Analytical instruments, ionization sources, and ionization methods

    DOEpatents

    Atkinson, David A.; Mottishaw, Paul

    2006-04-11

    Methods and apparatus for simultaneous vaporization and ionization of a sample in a spectrometer prior to introducing the sample into the drift tube of the analyzer are disclosed. The apparatus includes a vaporization/ionization source having an electrically conductive conduit configured to receive sample particulate which is conveyed to a discharge end of the conduit. Positioned proximate to the discharge end of the conduit is an electrically conductive reference device. The conduit and the reference device act as electrodes and have an electrical potential maintained between them sufficient to cause a corona effect, which will cause at least partial simultaneous ionization and vaporization of the sample particulate. The electrical potential can be maintained to establish a continuous corona, or can be held slightly below the breakdown potential such that arrival of particulate at the point of proximity of the electrodes disrupts the potential, causing arcing and the corona effect. The electrical potential can also be varied to cause periodic arcing between the electrodes such that particulate passing through the arc is simultaneously vaporized and ionized. The invention further includes a spectrometer containing the source. The invention is particularly useful for ion mobility spectrometers and atmospheric pressure ionization mass spectrometers.

  6. Analytic Methods for Simulated Light Transport

    NASA Astrophysics Data System (ADS)

    Arvo, James Richard

    1995-01-01

    This thesis presents new mathematical and computational tools for the simulation of light transport in realistic image synthesis. New algorithms are presented for exact computation of direct illumination effects related to light emission, shadowing, and first-order scattering from surfaces. New theoretical results are presented for the analysis of global illumination algorithms, which account for all interreflections of light among surfaces of an environment. First, a closed-form expression is derived for the irradiance Jacobian, which is the derivative of a vector field representing radiant energy flux. The expression holds for diffuse polygonal scenes and correctly accounts for shadowing, or partial occlusion. Three applications of the irradiance Jacobian are demonstrated: locating local irradiance extrema, direct computation of isolux contours, and surface mesh generation. Next, the concept of irradiance is generalized to tensors of arbitrary order. A recurrence relation for irradiance tensors is derived that extends a widely used formula published by Lambert in 1760. Several formulas with applications in computer graphics are derived from this recurrence relation and are independently verified using a new Monte Carlo method for sampling spherical triangles. The formulas extend the range of non-diffuse effects that can be computed in closed form to include illumination from directional area light sources and reflections from and transmissions through glossy surfaces. Finally, new analysis for global illumination is presented, which includes both direct illumination and indirect illumination due to multiple interreflections of light. A novel operator equation is proposed that clarifies existing deterministic algorithms for simulating global illumination and facilitates error analysis. Basic properties of the operators and solutions are identified which are not evident from previous formulations. A taxonomy of errors that arise in simulating global illumination is

  7. Internal R and D task summary report: analytical methods development

    SciTech Connect

    Schweighardt, F.K.

    1983-07-01

    International Coal Refining Company (ICRC) conducted two research programs to develop analytical procedures for characterizing the feed, intermediates,and products of the proposed SRC-I Demonstration Plant. The major conclusion is that standard analytical methods must be defined and assigned statistical error limits of precision and reproducibility early in development. Comparing all SRC-I data or data from different processes is complex and expensive if common data correlation procedures are not followed. ICRC recommends that processes be audited analytically and statistical analyses generated as quickly as possible, in order to quantify process-dependent and -independent variables. 16 references, 10 figures, 20 tables.

  8. Statistically qualified neuro-analytic failure detection method and system

    DOEpatents

    Vilim, Richard B.; Garcia, Humberto E.; Chen, Frederick W.

    2002-03-02

    An apparatus and method for monitoring a process involve development and application of a statistically qualified neuro-analytic (SQNA) model to accurately and reliably identify process change. The development of the SQNA model is accomplished in two stages: deterministic model adaptation and stochastic modification of the adapted deterministic model. Deterministic model adaptation involves formulating an analytic model of the process representing known process characteristics, augmenting the analytic model with a neural network that captures unknown process characteristics, and training the resulting neuro-analytic model by adjusting the neural network weights according to a unique scaled equation error minimization technique. Stochastic model modification involves qualifying any remaining uncertainty in the trained neuro-analytic model by formulating a likelihood function, given an error propagation equation, for computing the probability that the neuro-analytic model generates measured process output. Preferably, the developed SQNA model is validated using known sequential probability ratio tests and applied to the process as an on-line monitoring system. Illustrative of the method and apparatus, the method is applied to a peristaltic pump system.

  9. Study of an analytical method for hexavalent chromium.

    PubMed

    Bhargava, O P; Bumsted, H E; Grunder, F I; Hunt, B L; Manning, G E; Riemann, R A; Samuels, J K; Tatone, V; Waldschmidt, S J; Hernandez, P

    1983-06-01

    The diphenylcarbazide colorimetric method was evaluated by analyzing spiked PVC filters prepared by an AIHA-accredited consultant laboratory for chromium (VI). All seven participating laboratories received the samples and performed the analyses at the same time. Three laboratories simultaneously tested three alternative analytical procedures. Reduced amounts of chromium (VI) were found by both the consultant and participating laboratories when using the test procedure and one of the alternative methods. Two of the alternative analytical methods, both of which involve an alkaline extraction procedure, provided higher recoveries and more precise values for the test filters. It appears that the alkaline extraction procedure may be more appropriate for occupational health samples taken in steel industry environments which may include several interferents. Suggestions are made for further studies to determine the most appropriate analytical method.

  10. Analytical methods for quantitation of prenylated flavonoids from hops.

    PubMed

    Nikolić, Dejan; van Breemen, Richard B

    2013-01-01

    The female flowers of hops (Humulus lupulus L.) are used as a flavoring agent in the brewing industry. There is growing interest in possible health benefits of hops, particularly as estrogenic and chemopreventive agents. Among the possible active constituents, most of the attention has focused on prenylated flavonoids, which can chemically be classified as prenylated chalcones and prenylated flavanones. Among chalcones, xanthohumol (XN) and desmethylxanthohumol (DMX) have been the most studied, while among flavanones, 8-prenylnaringenin (8-PN) and 6-prenylnaringenin (6-PN) have received the most attention. Because of the interest in medicinal properties of prenylated flavonoids, there is demand for accurate, reproducible and sensitive analytical methods to quantify these compounds in various matrices. Such methods are needed, for example, for quality control and standardization of hop extracts, measurement of the content of prenylated flavonoids in beer, and to determine pharmacokinetic properties of prenylated flavonoids in animals and humans. This review summarizes currently available analytical methods for quantitative analysis of the major prenylated flavonoids, with an emphasis on the LC-MS and LC-MS-MS methods and their recent applications to biomedical research on hops. This review covers all methods in which prenylated flavonoids have been measured, either as the primary analytes or as a part of a larger group of analytes. The review also discusses methodological issues relating to the quantitative analysis of these compounds regardless of the chosen analytical approach. PMID:24077106

  11. Analytical methods for quantitation of prenylated flavonoids from hops

    PubMed Central

    Nikolić, Dejan; van Breemen, Richard B.

    2013-01-01

    The female flowers of hops (Humulus lupulus L.) are used as a flavoring agent in the brewing industry. There is growing interest in possible health benefits of hops, particularly as estrogenic and chemopreventive agents. Among the possible active constituents, most of the attention has focused on prenylated flavonoids, which can chemically be classified as prenylated chalcones and prenylated flavanones. Among chalcones, xanthohumol (XN) and desmethylxanthohumol (DMX) have been the most studied, while among flavanones, 8-prenylnaringenin (8-PN) and 6-prenylnaringenin (6-PN) have received the most attention. Because of the interest in medicinal properties of prenylated flavonoids, there is demand for accurate, reproducible and sensitive analytical methods to quantify these compounds in various matrices. Such methods are needed, for example, for quality control and standardization of hop extracts, measurement of the content of prenylated flavonoids in beer, and to determine pharmacokinetic properties of prenylated flavonoids in animals and humans. This review summarizes currently available analytical methods for quantitative analysis of the major prenylated flavonoids, with an emphasis on the LC-MS and LC-MS-MS methods and their recent applications to biomedical research on hops. This review covers all methods in which prenylated flavonoids have been measured, either as the primary analytes or as a part of a larger group of analytes. The review also discusses methodological issues relating to the quantitative analysis of these compounds regardless of the chosen analytical approach. PMID:24077106

  12. Computer Subroutines for Analytic Rotation by Two Gradient Methods.

    ERIC Educational Resources Information Center

    van Thillo, Marielle

    Two computer subroutine packages for the analytic rotation of a factor matrix, A(p x m), are described. The first program uses the Fletcher (1970) gradient method, and the second uses the Polak-Ribiere (Polak, 1971) gradient method. The calculations in both programs involve the optimization of a function of free parameters. The result is a…
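
    For readers unfamiliar with the second of these algorithms, the following is a generic Polak-Ribiere conjugate-gradient minimizer with a simple backtracking line search; it is an illustrative sketch, not the subroutine described in the record, and the objective f stands in for the rotation criterion.

      import numpy as np

      def polak_ribiere(f, grad, x0, iters=200, tol=1e-8):
          # Minimize f with Polak-Ribiere conjugate gradients (PR+ variant)
          # using a backtracking (Armijo) line search.
          x = np.asarray(x0, dtype=float)
          g = grad(x)
          d = -g
          for _ in range(iters):
              t = 1.0
              while f(x + t * d) > f(x) + 1e-4 * t * np.dot(g, d) and t > 1e-12:
                  t *= 0.5
              x_new = x + t * d
              g_new = grad(x_new)
              if np.linalg.norm(g_new) < tol:
                  return x_new
              beta = np.dot(g_new, g_new - g) / np.dot(g, g)   # Polak-Ribiere beta
              d = -g_new + max(beta, 0.0) * d
              x, g = x_new, g_new
          return x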

  13. An analytical method for designing low noise helicopter transmissions

    NASA Technical Reports Server (NTRS)

    Bossler, R. B., Jr.; Bowes, M. A.; Royal, A. C.

    1978-01-01

    The development and experimental validation of a method for analytically modeling the noise mechanism in the helicopter geared power transmission systems is described. This method can be used within the design process to predict interior noise levels and to investigate the noise reducing potential of alternative transmission design details. Examples are discussed.

  14. FIELD ANALYTICAL SCREENING PROGRAM: PCP METHOD - INNOVATIVE TECHNOLOGY EVALUATION REPORT

    EPA Science Inventory

    The Field Analytical Screening Program (FASP) pentachlorophenol (PCP) method uses a gas chromatograph (GC) equipped with a megabore capillary column and flame ionization detector (FID) and electron capture detector (ECD) to identify and quantify PCP. The FASP PCP method is design...

  15. A Fast SVD-Hidden-nodes based Extreme Learning Machine for Large-Scale Data Analytics.

    PubMed

    Deng, Wan-Yu; Bai, Zuo; Huang, Guang-Bin; Zheng, Qing-Hua

    2016-05-01

    Big dimensional data is a growing trend that is emerging in many real world contexts, extending from web mining, gene expression analysis, protein-protein interaction to high-frequency financial data. Nowadays, there is a growing consensus that the increasing dimensionality poses impeding effects on the performances of classifiers, which is termed the "peaking phenomenon" in the field of machine intelligence. To address the issue, dimensionality reduction is commonly employed as a preprocessing step on the Big dimensional data before building the classifiers. In this paper, we propose an Extreme Learning Machine (ELM) approach for large-scale data analytics. In contrast to existing approaches, we embed hidden nodes that are designed using singular value decomposition (SVD) into the classical ELM. These SVD nodes in the hidden layer are shown to capture the underlying characteristics of the Big dimensional data well, exhibiting excellent generalization performances. The drawback of using SVD on the entire dataset, however, is the high computational complexity involved. To address this, a fast divide and conquer approximation scheme is introduced to maintain computational tractability on high volume data. The resultant algorithm is labeled here as the Fast Singular Value Decomposition-Hidden-nodes based Extreme Learning Machine, or FSVD-H-ELM in short. In FSVD-H-ELM, instead of identifying the SVD hidden nodes directly from the entire dataset, SVD hidden nodes are derived from multiple random subsets of data sampled from the original dataset. Comprehensive experiments and comparisons are conducted to assess the FSVD-H-ELM against other state-of-the-art algorithms. The results obtained demonstrate the superior generalization performance and efficiency of the FSVD-H-ELM. PMID:26907860
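
    A minimal sketch of the idea outlined above, with hidden-node weights taken from SVDs of random data subsets and output weights solved by least squares, is given below; it is a schematic reconstruction under stated assumptions, not the authors' FSVD-H-ELM implementation, and all names are illustrative.

      import numpy as np

      def fit_svd_elm(X, T, n_hidden=100, n_subsets=5, rng=None):
          # ELM whose hidden-layer weights are the leading right singular vectors
          # of random subsets of X (sketch of the SVD-hidden-node idea).
          rng = np.random.default_rng(rng)
          per = n_hidden // n_subsets
          W = []
          for _ in range(n_subsets):
              idx = rng.choice(X.shape[0], size=min(1000, X.shape[0]), replace=False)
              _, _, Vt = np.linalg.svd(X[idx], full_matrices=False)
              W.append(Vt[:per])                  # top right singular vectors
          W = np.vstack(W)                        # (n_hidden, n_features)
          H = np.tanh(X @ W.T)                    # hidden-layer activations
          beta = np.linalg.pinv(H) @ T            # least-squares output weights
          return W, beta

      def predict(X, W, beta):
          return np.tanh(X @ W.T) @ beta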

  16. A Fast SVD-Hidden-nodes based Extreme Learning Machine for Large-Scale Data Analytics.

    PubMed

    Deng, Wan-Yu; Bai, Zuo; Huang, Guang-Bin; Zheng, Qing-Hua

    2016-05-01

    Big dimensional data is a growing trend that is emerging in many real world contexts, extending from web mining, gene expression analysis, protein-protein interaction to high-frequency financial data. Nowadays, there is a growing consensus that the increasing dimensionality poses impeding effects on the performances of classifiers, which is termed the "peaking phenomenon" in the field of machine intelligence. To address the issue, dimensionality reduction is commonly employed as a preprocessing step on the Big dimensional data before building the classifiers. In this paper, we propose an Extreme Learning Machine (ELM) approach for large-scale data analytics. In contrast to existing approaches, we embed hidden nodes that are designed using singular value decomposition (SVD) into the classical ELM. These SVD nodes in the hidden layer are shown to capture the underlying characteristics of the Big dimensional data well, exhibiting excellent generalization performances. The drawback of using SVD on the entire dataset, however, is the high computational complexity involved. To address this, a fast divide and conquer approximation scheme is introduced to maintain computational tractability on high volume data. The resultant algorithm is labeled here as the Fast Singular Value Decomposition-Hidden-nodes based Extreme Learning Machine, or FSVD-H-ELM in short. In FSVD-H-ELM, instead of identifying the SVD hidden nodes directly from the entire dataset, SVD hidden nodes are derived from multiple random subsets of data sampled from the original dataset. Comprehensive experiments and comparisons are conducted to assess the FSVD-H-ELM against other state-of-the-art algorithms. The results obtained demonstrate the superior generalization performance and efficiency of the FSVD-H-ELM.

  17. Beamforming and holography image formation methods: an analytic study.

    PubMed

    Solimene, Raffaele; Cuccaro, Antonio; Ruvio, Giuseppe; Tapia, Daniel Flores; O'Halloran, Martin

    2016-04-18

    Beamforming and holographic imaging procedures are widely used in many applications such as radar sensing, sonar, and in the area of microwave medical imaging. Nevertheless, an analytical comparison of the two methods has not previously been made. In this paper, the Point Spread Functions pertaining to the two methods are analytically determined. This allows a formal comparison of the two techniques and makes it easy to highlight how the performance depends on the configuration parameters, including frequency range, number of scatterers, and data discretization. It is demonstrated that beamforming and holography achieve essentially the same resolution, but beamforming requires a cheaper configuration with fewer sensors. PMID:27137336
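
    The sketch below generates the kind of point spread function analysed in the paper using a plain monochromatic delay-and-sum beamformer for a single point scatterer; the geometry, frequency and array layout are arbitrary illustrative values, not those of the study.

      import numpy as np

      def das_psf(array_x, scatterer, grid_x, grid_z, freq=2.0e9, c=3.0e8):
          # Monochromatic delay-and-sum image of a single point scatterer:
          # the resulting amplitude map is (proportional to) the beamformer's PSF.
          k = 2 * np.pi * freq / c
          xs, zs = scatterer
          d_rx = np.hypot(array_x - xs, zs)          # element-to-scatterer ranges
          signals = np.exp(-1j * 2 * k * d_rx)       # two-way phase, unit amplitude
          img = np.zeros((len(grid_z), len(grid_x)), dtype=complex)
          for i, z in enumerate(grid_z):
              for j, x in enumerate(grid_x):
                  d = np.hypot(array_x - x, z)
                  img[i, j] = np.sum(signals * np.exp(1j * 2 * k * d))  # focus here
          return np.abs(img)

      # example geometry (metres):
      # array_x = np.linspace(-0.05, 0.05, 16)
      # psf = das_psf(array_x, (0.0, 0.1),
      #               np.linspace(-0.03, 0.03, 61), np.linspace(0.07, 0.13, 61))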

  18. Beamforming and holography image formation methods: an analytic study.

    PubMed

    Solimene, Raffaele; Cuccaro, Antonio; Ruvio, Giuseppe; Tapia, Daniel Flores; O'Halloran, Martin

    2016-04-18

    Beamforming and holographic imaging procedures are widely used in many applications such as radar sensing, sonar, and in the area of microwave medical imaging. Nevertheless, an analytical comparison of the two methods has not previously been made. In this paper, the Point Spread Functions pertaining to the two methods are analytically determined. This allows a formal comparison of the two techniques and makes it easy to highlight how the performance depends on the configuration parameters, including frequency range, number of scatterers, and data discretization. It is demonstrated that beamforming and holography achieve essentially the same resolution, but beamforming requires a cheaper configuration with fewer sensors.

  19. Uncertainty profiles for the validation of analytical methods.

    PubMed

    Saffaj, T; Ihssane, B

    2011-09-15

    This article aims to expose a new global strategy for the validation of analytical methods and the estimation of measurement uncertainty. Our purpose is to give researchers in the field of analytical chemistry access to a powerful tool for the evaluation of quantitative analytical procedures. Indeed, the proposed strategy facilitates analytical validation by providing a decision tool based on the uncertainty profile and the β-content tolerance interval. Equally important, this approach allows a good estimate of measurement uncertainty by using the validation data, without recourse to additional experiments. In the example below, we confirmed the applicability of this new strategy for the validation of a chromatographic bioanalytical method and a good estimate of the measurement uncertainty without any extra effort or additional experiments. A comparative study with the SFSTP approach showed that both strategies selected the same calibration functions. The holistic character of the measurement uncertainty compared to the total error was influenced by our choice of uncertainty profile. Nevertheless, we think that the adoption of the uncertainty approach in the validation stage controls the risk of using the analytical method in the routine phase.

  20. Fast Particle Methods for Multiscale Phenomena Simulations

    NASA Technical Reports Server (NTRS)

    Koumoutsakos, P.; Wray, A.; Shariff, K.; Pohorille, Andrew

    2000-01-01

    We are developing particle methods oriented at improving computational modeling capabilities of multiscale physical phenomena in: (i) high Reynolds number unsteady vortical flows, (ii) particle laden and interfacial flows, (iii) molecular dynamics studies of nanoscale droplets and studies of the structure, functions, and evolution of the earliest living cell. The unifying computational approach involves particle methods implemented in parallel computer architectures. The inherent adaptivity, robustness and efficiency of particle methods makes them a multidisciplinary computational tool capable of bridging the gap between micro-scale and continuum flow simulations. Using efficient tree data structures, multipole expansion algorithms, and improved particle-grid interpolation, particle methods allow for simulations using millions of computational elements, making possible the resolution of a wide range of length and time scales of these important physical phenomena. The current challenges in these simulations are in: (i) the proper formulation of particle methods at the molecular and continuum levels for the discretization of the governing equations, (ii) the resolution of the wide range of time and length scales governing the phenomena under investigation, (iii) the minimization of numerical artifacts that may interfere with the physics of the systems under consideration, and (iv) the parallelization of processes such as tree traversal and grid-particle interpolations. We are conducting simulations using vortex methods, molecular dynamics and smooth particle hydrodynamics, exploiting their unifying concepts such as: the solution of the N-body problem on parallel computers, highly accurate particle-particle and grid-particle interpolations, parallel FFTs and the formulation of processes such as diffusion in the context of particle methods. This approach enables us to work across seemingly unrelated areas of research.
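
    As a small example of the particle-grid interpolation mentioned above, the following is a generic one-dimensional cloud-in-cell (linear) deposition routine with periodic boundaries; it is an illustrative sketch, not code from the project, and the names are placeholders.

      import numpy as np

      def particles_to_grid(x, q, n_cells, length):
          # 1-D cloud-in-cell particle-to-grid interpolation: each particle's
          # strength q is shared between the two nearest grid nodes in
          # proportion to its distance from them (periodic domain of size `length`).
          dx = length / n_cells
          grid = np.zeros(n_cells)
          s = x / dx
          i = np.floor(s).astype(int) % n_cells
          w = s - np.floor(s)
          np.add.at(grid, i, q * (1.0 - w))
          np.add.at(grid, (i + 1) % n_cells, q * w)
          return grid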

  1. A fast and environmental friendly analytical procedure for determination of melamine in milk exploiting fluorescence quenching.

    PubMed

    Nascimento, Carina F; Rocha, Diogo L; Rocha, Fábio R P

    2015-02-15

    An environmentally friendly procedure was developed for fast determination of melamine as an adulterant of protein content in milk. Triton X-114 was used for sample clean-up and as a fluorophore, whose fluorescence was quenched by the analyte. A linear response was observed from 1.0 to 6.0 mg L(-1) melamine, described by the Stern-Volmer equation I0/I = (0.999±0.002) + (0.0165±0.004) C_MEL (r = 0.999). The detection limit was estimated at 0.8 mg L(-1) (95% confidence level), which allows detecting as little as 320 μg melamine in 100 g of milk. Coefficients of variation (n=8) were estimated at 0.4% and 1.4% with and without melamine, respectively. Recoveries of melamine spiked into milk samples ranged from 95% to 101%, and the similar slopes of calibration graphs obtained with and without milk indicated the absence of matrix effects. Results for different milk samples agreed with those obtained by high performance liquid chromatography at the 95% confidence level.
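
    Reading a concentration off the Stern-Volmer calibration quoted above is a one-line inversion; the measured fluorescence ratio used below is an illustrative value, not data from the paper.

      # Stern-Volmer calibration from the abstract: I0/I = 0.999 + 0.0165 * C_MEL
      intercept, slope = 0.999, 0.0165          # slope in L mg^-1

      def melamine_conc(ratio_I0_I):
          # Invert the calibration to obtain C_MEL in mg L^-1.
          return (ratio_I0_I - intercept) / slope

      # e.g. a quenching ratio of 1.050 corresponds to about 3.1 mg L^-1
      print(melamine_conc(1.050))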

  2. Fast and Sensitive Method for Determination of Domoic Acid in Mussel Tissue.

    PubMed

    Barbaro, Elena; Zangrando, Roberta; Barbante, Carlo; Gambaro, Andrea

    2016-01-01

    Domoic acid (DA), a neurotoxic amino acid produced by diatoms, is the main cause of amnesic shellfish poisoning (ASP). In this work, we propose a very simple and fast analytical method to determine DA in mussel tissue. The method consists of two consecutive extractions and requires no purification steps, owing to the reduced extraction of interfering species and the application of a very sensitive and selective HILIC-MS/MS method. The procedure was validated through estimation of the trueness, extraction yield, precision, and detection and quantification limits of the analytical method. The sample preparation was also evaluated through qualitative and quantitative assessment of the matrix effect. These evaluations were conducted both on a DA-free matrix spiked with known DA concentrations and on a reference certified material (RCM). We developed a very selective LC-MS/MS method with a very low method detection limit (9 ng g(-1)) without cleanup steps. PMID:26904720

  3. Analytical Methods for Detonation Residues of Insensitive Munitions

    NASA Astrophysics Data System (ADS)

    Walsh, Marianne E.

    2016-01-01

    Analytical methods are described for the analysis of post-detonation residues from insensitive munitions. Standard methods were verified or modified to obtain the mass of residues deposited per round. In addition, a rapid chromatographic separation was developed and used to measure the mass of NTO (3-nitro-1,2,4-triazol-5-one), NQ (nitroguanidine) and DNAN (2,4-dinitroanisole). The HILIC (hydrophilic-interaction chromatography) separation described here uses a trifunctionally-bonded amide phase to retain the polar analytes. The eluent is 75/25 v/v acetonitrile/water acidified with acetic acid, which is also suitable for LC/MS applications. Analytical runtime was three minutes. Solid phase extraction and LC/MS conditions are also described.

  4. Fast linear method of illumination classification

    NASA Astrophysics Data System (ADS)

    Cooper, Ted J.; Baqai, Farhan A.

    2003-01-01

    We present a simple method for estimating the scene illuminant for images obtained by a Digital Still Camera (DSC). The proposed method utilizes basis vectors obtained from known memory color reflectances to identify the memory color objects in the image. Once the memory color pixels are identified, we use the red/green and blue/green ratios to determine the most likely illuminant in the image. The critical part of the method is to estimate the smallest set of basis vectors that closely represents the memory color reflectances. Basis vectors obtained from both Principal Component Analysis (PCA) and Independent Component Analysis (ICA) are used. We show that only two ICA basis vectors are needed to get an acceptable estimate.
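
    The outline below illustrates the two ingredients described above, a low-dimensional basis fitted to memory-colour reflectance spectra and a red/green, blue/green ratio test against candidate illuminants; it is a generic sketch with placeholder data, not the authors' algorithm.

      import numpy as np

      def reflectance_basis(R, n_basis=2):
          # PCA basis (via SVD) for a matrix of memory-colour reflectance
          # spectra R (n_samples x n_wavelengths).
          Rc = R - R.mean(axis=0)
          _, _, Vt = np.linalg.svd(Rc, full_matrices=False)
          return Vt[:n_basis]

      def classify_illuminant(rgb_pixels, illuminant_ratios):
          # Pick the candidate illuminant whose reference (R/G, B/G) ratios are
          # closest to the mean ratios of the detected memory-colour pixels.
          rg = rgb_pixels[:, 0] / rgb_pixels[:, 1]
          bg = rgb_pixels[:, 2] / rgb_pixels[:, 1]
          mean_ratio = np.array([rg.mean(), bg.mean()])
          names = list(illuminant_ratios)
          dists = [np.linalg.norm(mean_ratio - np.asarray(illuminant_ratios[n]))
                   for n in names]
          return names[int(np.argmin(dists))]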

  5. A New Analytic Alignment Method for a SINS.

    PubMed

    Tan, Caiming; Zhu, Xinhua; Su, Yan; Wang, Yu; Wu, Zhiqiang; Gu, Dongbing

    2015-01-01

    Analytic alignment is a type of self-alignment for a Strapdown inertial navigation system (SINS) that is based solely on two non-collinear vectors, which are the gravity and rotational velocity vectors of the Earth at a stationary base on the ground. The attitude of the SINS with respect to the Earth can be obtained directly using the TRIAD algorithm given two vector measurements. For a traditional analytic coarse alignment, all six outputs from the inertial measurement unit (IMU) are used to compute the attitude. In this study, a novel analytic alignment method called selective alignment is presented. This method uses only three outputs of the IMU and a few properties from the remaining outputs such as the sign and the approximate value to calculate the attitude. Simulations and experimental results demonstrate the validity of this method, and the precision of yaw is improved using the selective alignment method compared to the traditional analytic coarse alignment method in the vehicle experiment. The selective alignment principle provides an accurate relationship between the outputs and the attitude of the SINS relative to the Earth for a stationary base, and it is an extension of the TRIAD algorithm. The selective alignment approach has potential uses in applications such as self-alignment, fault detection, and self-calibration.
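
    The TRIAD construction referred to above can be written in a few lines; the sketch below returns the attitude (rotation) matrix from two vector pairs, e.g. gravity and the Earth rotation rate measured in the body frame and known in the reference frame. It is a generic illustration of the algorithm, not the selective-alignment method of the paper.

      import numpy as np

      def triad(v1_b, v2_b, v1_r, v2_r):
          # TRIAD attitude determination: returns the rotation matrix C such
          # that v_r = C @ v_b, given two non-collinear vectors measured in the
          # body frame (v*_b) and known in the reference frame (v*_r).
          def triad_frame(v1, v2):
              t1 = v1 / np.linalg.norm(v1)
              t2 = np.cross(v1, v2)
              t2 = t2 / np.linalg.norm(t2)
              t3 = np.cross(t1, t2)
              return np.column_stack((t1, t2, t3))
          Tb = triad_frame(np.asarray(v1_b, float), np.asarray(v2_b, float))
          Tr = triad_frame(np.asarray(v1_r, float), np.asarray(v2_r, float))
          return Tr @ Tb.T

      # for a stationary SINS, v1 could be the measured gravity (accelerometers)
      # and v2 the Earth rotation rate (gyroscopes).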

  6. A New Analytic Alignment Method for a SINS.

    PubMed

    Tan, Caiming; Zhu, Xinhua; Su, Yan; Wang, Yu; Wu, Zhiqiang; Gu, Dongbing

    2015-01-01

    Analytic alignment is a type of self-alignment for a Strapdown inertial navigation system (SINS) that is based solely on two non-collinear vectors, which are the gravity and rotational velocity vectors of the Earth at a stationary base on the ground. The attitude of the SINS with respect to the Earth can be obtained directly using the TRIAD algorithm given two vector measurements. For a traditional analytic coarse alignment, all six outputs from the inertial measurement unit (IMU) are used to compute the attitude. In this study, a novel analytic alignment method called selective alignment is presented. This method uses only three outputs of the IMU and a few properties from the remaining outputs such as the sign and the approximate value to calculate the attitude. Simulations and experimental results demonstrate the validity of this method, and the precision of yaw is improved using the selective alignment method compared to the traditional analytic coarse alignment method in the vehicle experiment. The selective alignment principle provides an accurate relationship between the outputs and the attitude of the SINS relative to the Earth for a stationary base, and it is an extension of the TRIAD algorithm. The selective alignment approach has potential uses in applications such as self-alignment, fault detection, and self-calibration. PMID:26556353

  7. A New Analytic Alignment Method for a SINS

    PubMed Central

    Tan, Caiming; Zhu, Xinhua; Su, Yan; Wang, Yu; Wu, Zhiqiang; Gu, Dongbing

    2015-01-01

    Analytic alignment is a type of self-alignment for a Strapdown inertial navigation system (SINS) that is based solely on two non-collinear vectors, which are the gravity and rotational velocity vectors of the Earth at a stationary base on the ground. The attitude of the SINS with respect to the Earth can be obtained directly using the TRIAD algorithm given two vector measurements. For a traditional analytic coarse alignment, all six outputs from the inertial measurement unit (IMU) are used to compute the attitude. In this study, a novel analytic alignment method called selective alignment is presented. This method uses only three outputs of the IMU and a few properties from the remaining outputs such as the sign and the approximate value to calculate the attitude. Simulations and experimental results demonstrate the validity of this method, and the precision of yaw is improved using the selective alignment method compared to the traditional analytic coarse alignment method in the vehicle experiment. The selective alignment principle provides an accurate relationship between the outputs and the attitude of the SINS relative to the Earth for a stationary base, and it is an extension of the TRIAD algorithm. The selective alignment approach has potential uses in applications such as self-alignment, fault detection, and self-calibration. PMID:26556353

  8. Laser: a Tool for Optimization and Enhancement of Analytical Methods

    SciTech Connect

    Preisler, Jan

    1997-01-01

    In this work, we use lasers to enhance possibilities of laser desorption methods and to optimize coating procedure for capillary electrophoresis (CE). We use several different instrumental arrangements to characterize matrix-assisted laser desorption (MALD) at atmospheric pressure and in vacuum. In imaging mode, 488-nm argon-ion laser beam is deflected by two acousto-optic deflectors to scan plumes desorbed at atmospheric pressure via absorption. All absorbing species, including neutral molecules, are monitored. Interesting features, e.g. differences between the initial plume and subsequent plumes desorbed from the same spot, or the formation of two plumes from one laser shot are observed. Total plume absorbance can be correlated with the acoustic signal generated by the desorption event. A model equation for the plume velocity as a function of time is proposed. Alternatively, the use of a static laser beam for observation enables reliable determination of plume velocities even when they are very high. Static scattering detection reveals negative influence of particle spallation on MS signal. Ion formation during MALD was monitored using 193-nm light to photodissociate a portion of insulin ion plume. These results define the optimal conditions for desorbing analytes from matrices, as opposed to achieving a compromise between efficient desorption and efficient ionization as is practiced in mass spectrometry. In CE experiment, we examined changes in a poly(ethylene oxide) (PEO) coating by continuously monitoring the electroosmotic flow (EOF) in a fused-silica capillary during electrophoresis. An imaging CCD camera was used to follow the motion of a fluorescent neutral marker zone along the length of the capillary excited by 488-nm Ar-ion laser. The PEO coating was shown to reduce the velocity of EOF by more than an order of magnitude compared to a bare capillary at pH 7.0. The coating protocol was important, especially at an intermediate pH of 7.7. The increase of p

  9. Fast tomographic methods for the tokamak ISTTOK

    SciTech Connect

    Carvalho, P. J.; Coelho, R.; Neto, A.; Pereira, T.; Silva, C.; Fernandes, H.; Gori, S.; Toussaint, U. v.

    2008-04-07

    The achievement of long duration, alternating current discharges on the tokamak ISTTOK requires a real-time plasma position control system. The plasma position determination based on the magnetic probe system has been found to be inadequate during the current inversion due to the reduced plasma current. A tomography diagnostic has therefore been installed to supply the required feedback to the control system. Several tomographic methods are available for soft X-ray or bolometric tomography, among which the Cormack and neural network methods stand out due to their inherent speed of up to 1000 reconstructions per second with currently available technology. This paper discusses the application of these algorithms on fusion devices while comparing the performance and reliability of the results. It has been found that although the Cormack-based inversion proved to be faster, the neural network reconstruction has fewer artifacts and is more accurate.

  10. A Simple Spectrophotometric Method for the Determination of Thiobarbituric Acid Reactive Substances in Fried Fast Foods

    PubMed Central

    Zeb, Alam; Ullah, Fareed

    2016-01-01

    A simple and highly sensitive spectrophotometric method was developed for the determination of thiobarbituric acid reactive substances (TBARS) as a marker for lipid peroxidation in fried fast foods. The method uses the reaction of malondialdehyde (MDA) and TBA in a glacial acetic acid medium. The method was precise, sensitive, and highly reproducible for the quantitative determination of TBARS. The precision of the extractions and analytical procedure was very high compared to the reported methods. The method was used to determine the TBARS contents of fried fast foods such as Shami kebab, samosa, fried bread, and potato chips. Shami kebab, samosa, and potato chips showed higher amounts of TBARS with the glacial acetic acid-water extraction system than with pure glacial acetic acid, whereas the reverse was observed for fried bread samples. The method can successfully be used for the determination of TBARS in other food matrices, especially in the quality control of food industries. PMID:27123360

  11. A Simple Spectrophotometric Method for the Determination of Thiobarbituric Acid Reactive Substances in Fried Fast Foods.

    PubMed

    Zeb, Alam; Ullah, Fareed

    2016-01-01

    A simple and highly sensitive spectrophotometric method was developed for the determination of thiobarbituric acid reactive substances (TBARS) as a marker for lipid peroxidation in fried fast foods. The method uses the reaction of malondialdehyde (MDA) and TBA in a glacial acetic acid medium. The method was precise, sensitive, and highly reproducible for the quantitative determination of TBARS. The precision of the extractions and analytical procedure was very high compared to the reported methods. The method was used to determine the TBARS contents of fried fast foods such as Shami kebab, samosa, fried bread, and potato chips. Shami kebab, samosa, and potato chips showed higher amounts of TBARS with the glacial acetic acid-water extraction system than with pure glacial acetic acid, whereas the reverse was observed for fried bread samples. The method can successfully be used for the determination of TBARS in other food matrices, especially in the quality control of food industries. PMID:27123360

  12. ANALYTICAL METHOD READINESS FOR THE CONTAMINANT CANDIDATE LIST

    EPA Science Inventory

    The Contaminant Candidate List (CCL), which was promulgated in March 1998, includes 50 chemical and 10 microbiological contaminants/contaminant groups. At the time of promulgation, analytical methods were available for 6 inorganic and 28 organic contaminants. Since then, 4 anal...

  13. Analytical chemistry methods for metallic core components: Revision March 1985

    SciTech Connect

    Not Available

    1985-03-01

    This standard provides analytical chemistry methods for the analysis of alloys used to fabricate core components. These alloys are 302, 308, 316, 316-Ti, and 321 stainless steels and 600 and 718 Inconels and they may include other 300-series stainless steels.

  14. 40 CFR 161.180 - Enforcement analytical method.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 25 2012-07-01 2012-07-01 false Enforcement analytical method. 161.180 Section 161.180 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) PESTICIDE PROGRAMS DATA REQUIREMENTS FOR REGISTRATION OF ANTIMICROBIAL PESTICIDES Product Chemistry Data...

  15. 40 CFR 766.16 - Developing the analytical test method.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 32 2012-07-01 2012-07-01 false Developing the analytical test method. 766.16 Section 766.16 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) TOXIC SUBSTANCES CONTROL ACT DIBENZO-PARA-DIOXINS/DIBENZOFURANS General Provisions § 766.16 Developing...

  16. 40 CFR 766.16 - Developing the analytical test method.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 31 2014-07-01 2014-07-01 false Developing the analytical test method. 766.16 Section 766.16 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) TOXIC SUBSTANCES CONTROL ACT DIBENZO-PARA-DIOXINS/DIBENZOFURANS General Provisions § 766.16 Developing...

  17. 40 CFR 766.16 - Developing the analytical test method.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 31 2011-07-01 2011-07-01 false Developing the analytical test method. 766.16 Section 766.16 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) TOXIC SUBSTANCES CONTROL ACT DIBENZO-PARA-DIOXINS/DIBENZOFURANS General Provisions § 766.16 Developing...

  18. 40 CFR 766.16 - Developing the analytical test method.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 32 2013-07-01 2013-07-01 false Developing the analytical test method. 766.16 Section 766.16 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) TOXIC SUBSTANCES CONTROL ACT DIBENZO-PARA-DIOXINS/DIBENZOFURANS General Provisions § 766.16 Developing...

  19. 40 CFR 161.180 - Enforcement analytical method.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 24 2011-07-01 2011-07-01 false Enforcement analytical method. 161.180 Section 161.180 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) PESTICIDE PROGRAMS DATA REQUIREMENTS FOR REGISTRATION OF ANTIMICROBIAL PESTICIDES Product Chemistry Data...

  20. 40 CFR 161.180 - Enforcement analytical method.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 25 2013-07-01 2013-07-01 false Enforcement analytical method. 161.180 Section 161.180 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) PESTICIDE PROGRAMS DATA REQUIREMENTS FOR REGISTRATION OF ANTIMICROBIAL PESTICIDES Product Chemistry Data...

  1. A New Splitting Method for Both Analytical and Preparative LC/MS

    NASA Astrophysics Data System (ADS)

    Cai, Yi; Adams, Daniel; Chen, Hao

    2013-11-01

    This paper presents a novel splitting method for liquid chromatography/mass spectrometry (LC/MS) that allows fast MS detection of LC-separated analytes and subsequent online analyte collection. In this approach, a PEEK capillary tube with a micro-orifice drilled in the tube side wall is connected to the LC column. A small portion of the LC eluent emerging from the orifice can be ionized directly by desorption electrospray ionization (DESI) with negligible time delay (6-10 ms), while the remaining analytes exiting the tube outlet can be collected. DESI-MS analysis of the eluted compounds shows narrow peaks and high sensitivity because of the extremely small dead volume of the orifice used for eluent splitting (as low as 4 nL) and the freedom to choose a favorable DESI spray solvent. In addition, online derivatization using reactive DESI is possible for supercharging proteins and enhancing their signals without introducing extra dead volume. Unlike the UV detectors used in traditional preparative LC experiments, this method is applicable to compounds without chromophores (e.g., saccharides) owing to the use of an MS detector. Furthermore, the splitting method is well suited to monolithic-column-based ultra-fast LC separation at a high elution flow rate of 4 mL/min.

  2. Fast timing methods for semiconductor detectors. Revision

    SciTech Connect

    Spieler, H.

    1984-10-01

    This tutorial paper discusses the basic parameters which determine the accuracy of timing measurements and their effect in a practical application, specifically timing with thin-surface barrier detectors. The discussion focusses on properties of the detector, low-noise amplifiers, trigger circuits and time converters. New material presented in this paper includes bipolar transistor input stages with noise performance superior to currently available FETs, noiseless input terminations in sub-nanosecond preamplifiers and methods using transmission lines to couple the detector to remotely mounted preamplifiers. Trigger circuits are characterized in terms of effective rise time, equivalent input noise and residual jitter.

  3. Fast-timing methods for semiconductor detectors

    SciTech Connect

    Spieler, H.

    1982-03-01

    The basic parameters are discussed which determine the accuracy of timing measurements and their effect in a practical application, specifically timing with thin-surface barrier detectors. The discussion focusses on properties of the detector, low-noise amplifiers, trigger circuits and time converters. New material presented in this paper includes bipolar transistor input stages with noise performance superior to currently available FETs, noiseless input terminations in sub-nanosecond preamplifiers and methods using transmission lines to couple the detector to remotely mounted preamplifiers. Trigger circuits are characterized in terms of effective rise time, equivalent input noise and residual jitter.

  4. A fast full constraints unmixing method

    NASA Astrophysics Data System (ADS)

    Ye, Zhang; Wei, Ran; Wang, Qing Yan

    2012-10-01

    Mixed pixels are inevitable because of the low spatial resolution of hyperspectral images (HSI). The linear spectral mixture model (LSMM) is a classical mathematical model that relates the spectrum of a mixed pixel to the spectra of its individual constituents. Solving the LSMM, namely unmixing, is essentially a constrained linear optimization problem, usually carried out by iterating along a descent direction until a stopping criterion terminates the algorithm. This criterion must be set properly to balance the accuracy and speed of the solution; in existing algorithms it is often too strict, which slows convergence. In this paper, by relaxing the constraints in unmixing, a new stopping rule is proposed that accelerates convergence. Experiments on both runtime and iteration counts show that the proposed method speeds up convergence at the cost of only a small decrease in result quality.
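
    A minimal Python sketch (assuming NumPy) of fully constrained LSMM unmixing by projected gradient descent is given below to make the setting concrete. The adjustable stopping tolerance illustrates the accuracy/speed tradeoff discussed above; the projection step and stopping rule are generic textbook choices, not the authors' algorithm.

    import numpy as np

    def unmix_fcls(E, y, tol=1e-4, max_iter=500):
        """Estimate abundances a minimizing ||E a - y||^2 subject to
        a >= 0 and sum(a) = 1 (fully constrained LSMM).

        E : (bands, endmembers) endmember spectra; y : (bands,) pixel spectrum.
        tol sets the stopping criterion; loosening it trades a small loss in
        accuracy for fewer iterations.
        """
        m = E.shape[1]
        a = np.full(m, 1.0 / m)                  # start from uniform abundances
        step = 1.0 / np.linalg.norm(E.T @ E, 2)  # safe gradient step size
        for _ in range(max_iter):
            a_new = a - step * (E.T @ (E @ a - y))
            a_new = np.clip(a_new, 0.0, None)    # non-negativity
            if a_new.sum() > 0:
                a_new /= a_new.sum()             # sum-to-one (simple renormalization,
                                                 # not an exact simplex projection)
            if np.linalg.norm(a_new - a) < tol:  # stopping rule
                return a_new
            a = a_new
        return a

    # Toy example: a 3-band pixel mixed from two endmembers
    E = np.array([[0.2, 0.8], [0.5, 0.4], [0.9, 0.1]])
    y = 0.3 * E[:, 0] + 0.7 * E[:, 1]
    print(unmix_fcls(E, y))   # approximately [0.3, 0.7]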

  5. Analytic energy gradient for the projected Hartree-Fock method

    NASA Astrophysics Data System (ADS)

    Schutski, Roman; Jiménez-Hoyos, Carlos A.; Scuseria, Gustavo E.

    2014-05-01

    We derive and implement the analytic energy gradient for the symmetry Projected Hartree-Fock (PHF) method avoiding the solution of coupled-perturbed HF-like equations, as in the regular unprojected method. Our formalism therefore has mean-field computational scaling and cost, despite the elaborate multi-reference character of the PHF wave function. As benchmark examples, we here apply our gradient implementation to the ortho-, meta-, and para-benzyne biradicals, and discuss their equilibrium geometries and vibrational frequencies.

  6. Recent developments in detection methods for microfabricated analytical devices.

    PubMed

    Schwarz, M A; Hauser, P C

    2001-09-01

    Sensitive detection in microfluidic analytical devices is a challenge because of the extremely small detection volumes available. Considerable efforts have been made lately to further address this aspect and to investigate techniques other than fluorescence. Among the newly introduced techniques are the optical methods of chemiluminescence, refraction and thermooptics, as well as the electrochemical methods of amperometry, conductimetry and potentiometry. Developments are also in progress to create miniaturized plasma-emission spectrometers and sensitive detectors for gas-chromatographic separations.

  7. Current analytical methods for plant auxin quantification--A review.

    PubMed

    Porfírio, Sara; Gomes da Silva, Marco D R; Peixe, Augusto; Cabrita, Maria J; Azadi, Parastoo

    2016-01-01

    Plant hormones, and especially auxins, are low-molecular-weight compounds highly involved in the control of plant growth and development. Auxins are also broadly used in horticulture, as part of vegetative plant propagation protocols, allowing the cloning of genotypes of interest. Over the years, large efforts have been put into the development of more sensitive and precise methods for the analysis and quantification of plant hormone levels in plant tissues. Although analytical techniques have evolved and new methods have been implemented, sample preparation is still the limiting step of auxin analysis. In this review, the current methods of auxin analysis are discussed. Sample preparation procedures, including extraction, purification and derivatization, are reviewed and compared. The different analytical techniques, ranging from chromatographic and mass spectrometry methods to immunoassays and electrokinetic methods, as well as other types of detection, are also discussed. Considering that auxin analysis mirrors the evolution of analytical chemistry, the number of publications describing new and/or improved methods is continually increasing, and we considered it appropriate to update the available information. For that reason, this article reviews the current advances in auxin analysis, and thus only reports from the past 15 years are covered.

  8. Fast Erase Method and Apparatus For Digital Media

    NASA Technical Reports Server (NTRS)

    Oakely, Ernest C. (Inventor)

    2006-01-01

    A non-contact fast erase method for erasing information stored on a magnetic or optical media. The magnetic media element includes a magnetic surface affixed to a toroidal conductor and stores information in a magnetic polarization pattern. The fast erase method includes applying an alternating current to a planar inductive element positioned near the toroidal conductor, inducing an alternating current in the toroidal conductor, and heating the magnetic surface to a temperature that exceeds the Curie-point so that information stored on the magnetic media element is permanently erased. The optical disc element stores information in a plurality of locations being defined by pits and lands in a toroidal conductive layer. The fast erase method includes similarly inducing a plurality of currents in the optical media element conductive layer and melting a predetermined portion of the conductive layer so that the information stored on the optical medium is destroyed.

  9. Dosimetry Methods of Fast Neutron Using the Semiconductor Diodes

    NASA Astrophysics Data System (ADS)

    Zaki Dizaji, H.; Kakavand, T.; Abbasi Davani, F.

    2014-01-01

    Semiconductor detectors based on a silicon PIN diode are frequently used in the detection of different nuclear radiations. For the detection and dosimetry of fast neutrons, these silicon detectors are coupled with a fast neutron converter: incident neutrons interact with the converter and produce charged particles that deposit their energy in the detector and produce a signal. In this study, three methods are introduced for fast neutron dosimetry using silicon detectors: recoil proton spectroscopy, similarity of the detector response function to the conversion function, and a discriminator layer. Monte Carlo simulation is used to calculate the response of dosimetry systems based on these methods, and the dosimetry responses are evaluated for different doses from a 241Am-Be neutron source. The error of the measured data for dosimetry by these methods is in the range of 15-25%, and fairly good agreement is found for the 241Am-Be neutron source.

  10. Analytical methods for physicochemical characterization of antibody drug conjugates

    PubMed Central

    Wakankar, Aditya; Chen, Yan; Gokarn, Yatin

    2011-01-01

    Antibody-drug conjugates (ADCs), produced through the chemical linkage of a potent small molecule cytotoxin (drug) to a monoclonal antibody, have more complex and heterogeneous structures than the corresponding antibodies. This review describes the analytical methods that have been used in their physicochemical characterization. The selection of the most appropriate methods for a specific ADC is heavily dependent on the properties of the linker, the drug and the choice of attachment sites (lysines, inter-chain cysteines, Fc glycans). Improvements in analytical techniques such as protein mass spectrometry and capillary electrophoresis have significantly increased the quality of information that can be obtained for use in product and process characterization and for routine lot release and stability testing. PMID:21441786

  11. Analytical method for determination of benzenearsonic acids

    SciTech Connect

    Mitchell, G.L.; Bayse, G.S.

    1988-01-01

    A sensitive analytical method has been modified for use in the determination of several benzenearsonic acids, including arsanilic acid (p-aminobenzenearsonic acid), Roxarsone (3-nitro-4-hydroxybenzenearsonic acid), and p-ureidobenzenearsonic acid. Controlled acid hydrolysis of these compounds produces a quantitative yield of arsenate, which is measured colorimetrically as the molybdenum blue complex at 865 nm. The method obeys Beer's law over the micromolar concentration range. These benzenearsonic acids are routinely used as feed additives for poultry and swine. The method should be useful in assessing tissue levels of the arsenicals in appropriate extracts.

  12. Customizing computational methods for visual analytics with big data.

    PubMed

    Choo, Jaegul; Park, Haesun

    2013-01-01

    The volume of available data has been growing exponentially, increasing the complexity and obscurity of data problems. In response, visual analytics (VA) has gained attention, yet its solutions haven't scaled well for big data. Computational methods can improve VA's scalability by giving users compact, meaningful information about the input data. However, the significant computation time these methods require hinders real-time interactive visualization of big data. By addressing crucial discrepancies between these methods and VA regarding precision and convergence, researchers have proposed ways to customize them for VA. These approaches, which include low-precision computation and iteration-level interactive visualization, ensure real-time interactive VA for big data.

  13. Active controls: A look at analytical methods and associated tools

    NASA Technical Reports Server (NTRS)

    Newsom, J. R.; Adams, W. M., Jr.; Mukhopadhyay, V.; Tiffany, S. H.; Abel, I.

    1984-01-01

    A review of analytical methods and associated tools for active controls analysis and design problems is presented. Approaches employed to develop mathematical models suitable for control system analysis and/or design are discussed. Significant efforts have been expended to develop tools to generate the models from the standpoint of control system designers' needs and develop the tools necessary to analyze and design active control systems. Representative examples of these tools are discussed. Examples where results from the methods and tools have been compared with experimental data are also presented. Finally, a perspective on future trends in analysis and design methods is presented.

  14. Analytical Methods for Measuring Mercury in Water, Sediment and Biota

    SciTech Connect

    Lasorsa, Brenda K.; Gill, Gary A.; Horvat, Milena

    2012-06-07

    Mercury (Hg) exists in a large number of physical and chemical forms with a wide range of properties. Conversion between these different forms provides the basis for mercury's complex distribution pattern in local and global cycles and for its biological enrichment and effects. Since the 1960s, the growing awareness of environmental mercury pollution has stimulated the development of more accurate, precise and efficient methods of determining mercury and its compounds in a wide variety of matrices. During recent years new analytical techniques have become available that have contributed significantly to the understanding of mercury chemistry in natural systems. In particular, these include ultrasensitive and specific analytical equipment and contamination-free methodologies. These improvements allow the determination of total mercury as well as the major species of mercury in water, sediments and soils, and biota. Analytical methods are selected depending on the nature of the sample, the concentration levels of mercury, and what species or fraction is to be quantified. The terms “speciation” and “fractionation” in analytical chemistry were addressed by the International Union of Pure and Applied Chemistry (IUPAC), which published guidelines (Templeton et al., 2000) or recommendations for the definition of speciation analysis. "Speciation analysis is the analytical activity of identifying and/or measuring the quantities of one or more individual chemical species in a sample. The chemical species are specific forms of an element defined as to isotopic composition, electronic or oxidation state, and/or complex or molecular structure. The speciation of an element is the distribution of an element amongst defined chemical species in a system. In cases where it is not possible to determine the concentration of the different individual chemical species that sum up the total concentration of an element in a given matrix, meaning it is impossible to

  15. Analytical Methods for Biomass Characterization during Pretreatment and Bioconversion

    SciTech Connect

    Pu, Yunqiao; Meng, Xianzhi; Yoo, Chang Geun; Li, Mi; Ragauskas, Arthur J

    2016-01-01

    Lignocellulosic biomass has been introduced as a promising resource for alternative fuels and chemicals because of its abundance and its role as a complement to petroleum resources. Biomass is a complex biopolymer, and its compositional and structural characteristics vary widely depending on species as well as growth environment. Because of the complexity and variety of biomass, understanding its physicochemical characteristics is key to effective biomass utilization. Characterization of biomass not only provides critical information during pretreatment and bioconversion, but also gives valuable insights into how to utilize the biomass. For a better understanding of biomass characteristics, a good grasp and proper selection of analytical methods are necessary. This chapter introduces existing analytical approaches that are widely employed for biomass characterization during pretreatment and conversion processes. Diverse analytical methods using Fourier transform infrared (FTIR) spectroscopy, gel permeation chromatography (GPC), and nuclear magnetic resonance (NMR) spectroscopy for biomass characterization are reviewed. In addition, methods for assessing biomass accessibility by analyzing the surface properties of biomass are also summarized in this chapter.

  16. Analytical analysis of slow and fast pressure waves in a two-dimensional cellular solid with fluid-filled cells.

    PubMed

    Dorodnitsyn, Vladimir; Van Damme, Bart

    2016-06-01

    Wave propagation in cellular and porous media is widely studied because of its abundance in nature and in industrial applications. Biot's theory for open-cell media predicts the existence of two simultaneous pressure waves, distinguished by their velocities. A fast wave travels through the solid matrix, whereas a much slower wave is carried by fluid channels. In closed-cell materials, the slow wave disappears because of the lack of a continuous fluid path. However, recent finite element (FE) simulations by the authors of this paper also predict the presence of slow pressure waves in saturated closed-cell materials, and the nature of this slow wave is not clear. In this paper, an equivalent unit cell of a medium with square cells is proposed to permit an analytical description of the dynamics of such a material. A simplified FE model suggests that the fluid-structure interaction can be fully captured using a wavenumber-dependent spring support of the vibrating cell walls. Using this approach, the pressure wave behavior can be calculated with high accuracy but with less numerical effort. Finally, Rayleigh's energy method is used to investigate the coexistence of two waves with different velocities. PMID:27369159

  17. A new simple multidomain fast multipole boundary element method

    NASA Astrophysics Data System (ADS)

    Huang, S.; Liu, Y. J.

    2016-09-01

    A simple multidomain fast multipole boundary element method (BEM) for solving potential problems is presented in this paper, which can be applied to solve a true multidomain problem or a large-scale single-domain problem using the domain decomposition technique. In this multidomain BEM, the coefficient matrix is formed simply by assembling the coefficient matrices of each subdomain and the interface conditions between subdomains, without eliminating any unknown variables on the interfaces. Compared with other conventional multidomain BEM approaches, this new approach is more efficient with the fast multipole method, regardless of how the subdomains are connected. Instead of solving the linear system of equations directly, the entire coefficient matrix is partitioned and decomposed using a Schur complement in this new approach. Numerical results show that the new multidomain fast multipole BEM uses fewer iterations in most cases with the iterative equation solver and less CPU time than the traditional fast multipole BEM in solving large-scale BEM models. A large-scale fuel cell model with more than 6 million elements was solved successfully on a cluster within 3 h using the new multidomain fast multipole BEM.
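
    To make the Schur-complement partitioning concrete, the Python sketch below eliminates the interior unknowns of two subdomains and solves a reduced system for the interface unknowns. It uses small dense NumPy blocks as stand-ins for the BEM operators, so it illustrates only the linear-algebra idea; in the fast multipole BEM the dense solves would be replaced by iterative, FMM-accelerated matrix-vector products.

    import numpy as np

    def solve_two_domain_schur(A1, B1, C1, A2, B2, C2, S, f1, f2, g):
        """Solve the 2-subdomain block system
            [A1  0   B1] [x1]   [f1]
            [0   A2  B2] [x2] = [f2]
            [C1  C2  S ] [xi]   [g ]
        by forming the Schur complement on the interface unknowns xi.
        All blocks are small dense stand-ins for BEM coefficient blocks.
        """
        A1_B1, A1_f1 = np.linalg.solve(A1, B1), np.linalg.solve(A1, f1)
        A2_B2, A2_f2 = np.linalg.solve(A2, B2), np.linalg.solve(A2, f2)
        schur = S - C1 @ A1_B1 - C2 @ A2_B2            # interface Schur complement
        xi = np.linalg.solve(schur, g - C1 @ A1_f1 - C2 @ A2_f2)
        x1 = A1_f1 - A1_B1 @ xi                        # back-substitute interior unknowns
        x2 = A2_f2 - A2_B2 @ xi
        return x1, x2, xi

    # Quick consistency check with random, well-conditioned blocks
    rng = np.random.default_rng(0)
    A1, A2 = rng.random((4, 4)) + 4 * np.eye(4), rng.random((4, 4)) + 4 * np.eye(4)
    B1, B2 = rng.random((4, 2)), rng.random((4, 2))
    C1, C2 = rng.random((2, 4)), rng.random((2, 4))
    S = rng.random((2, 2)) + 4 * np.eye(2)
    f1, f2, g = rng.random(4), rng.random(4), rng.random(2)
    x1, x2, xi = solve_two_domain_schur(A1, B1, C1, A2, B2, C2, S, f1, f2, g)
    print(np.allclose(A1 @ x1 + B1 @ xi, f1), np.allclose(C1 @ x1 + C2 @ x2 + S @ xi, g))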

  18. Methods for quantifying uncertainty in fast reactor analyses.

    SciTech Connect

    Fanning, T. H.; Fischer, P. F.

    2008-04-07

    Liquid-metal-cooled fast reactors in the form of sodium-cooled fast reactors have been successfully built and tested in the U.S. and throughout the world. However, no fast reactor has operated in the U.S. for nearly fourteen years. More importantly, the U.S. has not constructed a fast reactor in nearly 30 years. In addition to reestablishing the necessary industrial infrastructure, the development, testing, and licensing of a new, advanced fast reactor concept will likely require a significant base technology program that will rely more heavily on modeling and simulation than has been done in the past. The ability to quantify uncertainty in modeling and simulations will be an important part of any experimental program and can provide added confidence that established design limits and safety margins are appropriate. In addition, there is an increasing demand from the nuclear industry for best-estimate analysis methods to provide confidence bounds along with their results. The ability to quantify uncertainty will be an important component of modeling that is used to support design, testing, and experimental programs. Three avenues of UQ investigation are proposed. Two relatively new approaches are described which can be directly coupled to simulation codes currently being developed under the Advanced Simulation and Modeling program within the Reactor Campaign. A third approach, based on robust Monte Carlo methods, can be used in conjunction with existing reactor analysis codes as a means of verification and validation of the more detailed approaches.

  19. A fast multipole boundary element method for solving two-dimensional thermoelasticity problems

    NASA Astrophysics Data System (ADS)

    Liu, Y. J.; Li, Y. X.; Huang, S.

    2014-09-01

    A fast multipole boundary element method (BEM) for solving general uncoupled steady-state thermoelasticity problems in two dimensions is presented in this paper. The fast multipole BEM is developed to handle the thermal term in the thermoelasticity boundary integral equation involving temperature and heat flux distributions on the boundary of the problem domain. Fast multipole expansions, local expansions and related translations for the thermal term are derived using complex variables. Several numerical examples are presented to show the accuracy and effectiveness of the developed fast multipole BEM in calculating the displacement and stress fields for 2-D elastic bodies under various thermal loads, including thin structure domains that are difficult to mesh using the finite element method (FEM). The BEM results using constant elements are found to be accurate compared with the analytical solutions, and the accuracy of the BEM results is found to be comparable to that of the FEM with linear elements. In addition, the BEM offers the ease of use in generating the mesh for a thin structure domain or a domain with complicated geometry, such as a perforated plate with randomly distributed holes for which the FEM fails to provide an adequate mesh. These results clearly demonstrate the potential of the developed fast multipole BEM for solving 2-D thermoelasticity problems.

  20. Fast semi-analytical approach to approximate plumes of dissolved redox-reactive pollutants in heterogeneous aquifers. 2: Chlorinated ethenes

    NASA Astrophysics Data System (ADS)

    Atteia, O.; Höhener, P.

    2012-09-01

    The aim of this work was to extend and validate the flux tube-mixed instantaneous and kinetics superposition sequence approach (FT-MIKSS) to reaction chains of degrading species. Existing analytical solutions for the reactive transport of chains of decaying solutes were embedded in the flux-tube approach to construct a semi-analytical model that allows fast parameter fitting. The model was applied to chloroethenes undergoing reductive dechlorination and oxidation in homogeneous and heterogeneous aquifers with sorption. The results from the semi-analytical model were compared with results from three numerical models (RT3D, PHT3D, PHAST). All models were validated in a homogeneous domain against an existing analytical solution. In heterogeneous domains, significant differences were found between the four models; FT-MIKSS gave intermediate results for all modelled cases. Its results were obtained almost instantaneously, whereas the other models had calculation times of up to several hours. Chloroethene plumes and redox conditions at the Plattsburgh field site were realistically modelled by FT-MIKSS, although the results differed somewhat from those of PHT3D and PHAST. It is concluded that obtaining correct modelling results in heterogeneous media with degradation chain reactions may be tedious and that comparing two different models may be useful. FT-MIKSS is a valuable tool for fast parameter fitting at field sites and should be used in the preparation of longer model runs with other numerical models.
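
    The analytical building block embedded in such flux-tube approaches is, in its simplest form, a Bateman-type solution for a sequential first-order decay chain. The Python sketch below evaluates that closed-form solution for a two-member chain (e.g., a parent chloroethene and its daughter) under steady one-dimensional plug flow; it is a simplified stand-in rather than the FT-MIKSS model itself, and the velocity, rate constants, and source concentration are assumed values.

    import numpy as np

    def chain_concentrations(x, v, c1_source, k1, k2):
        """Steady-state concentrations of a parent (1) and its daughter (2)
        undergoing sequential first-order decay in 1-D plug flow.

        x : distance downgradient [m]; v : seepage velocity [m/d]
        c1_source : parent concentration at the source; k1, k2 : decay rates [1/d]
        (assumes k1 != k2 and no daughter present at the source)
        """
        t = x / v                                   # travel time along the flux tube
        c1 = c1_source * np.exp(-k1 * t)
        c2 = c1_source * k1 / (k2 - k1) * (np.exp(-k1 * t) - np.exp(-k2 * t))
        return c1, c2

    # Concentrations at a few observation distances (all parameter values assumed)
    x = np.linspace(0.0, 200.0, 5)
    print(chain_concentrations(x, v=0.1, c1_source=1.0, k1=0.01, k2=0.004))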

  1. Analytical and Numerical Studies of the Complex Interaction of a Fast Ion Beam Pulse with a Background Plasma

    SciTech Connect

    Igor D. Kaganovich; Edward A. Startsev; Ronald C. Davidson

    2003-11-25

    Plasma neutralization of an intense ion beam pulse is of interest for many applications, including plasma lenses, heavy ion fusion, and high energy physics. Comprehensive analytical, numerical, and experimental studies are underway to investigate the complex interaction of a fast ion beam with a background plasma. The positively charged ion beam attracts plasma electrons, and as a result the plasma electrons tend to neutralize the beam charge and current. A suite of particle-in-cell codes has been developed to study the propagation of an ion beam pulse through the background plasma. For quasi-steady-state propagation of the ion beam pulse, an analytical theory has been developed using the assumption of long charge bunches and conservation of generalized vorticity. The analytical results agree well with the results of the numerical simulations. Visualization of the data obtained in the numerical simulations shows complex collective phenomena during beam entry into and exit from the plasma.

  2. Analytical method for distribution of metallic gasket contact stress

    NASA Astrophysics Data System (ADS)

    Feng, Xiu; Gu, Boqing; Wei, Long; Sun, Jianjun

    2008-11-01

    Metallic gasket seals have been widely used in chemical and petrochemical plants. The failure of a sealing system can lead to large financial losses, serious environmental pollution and personal injury. Such failures are mostly caused not by insufficient strength of the flanges or bolts but by leakage of the connections. The leakage behavior of bolted flanged connections is related to the gasket contact stress; in particular, the non-uniform radial distribution of this stress caused by the rotational flexibility of the flange has a major influence on the tightness of bolted flanged connections. In this paper, based on the Waters method and taking the operating pressure into account, the deformation of the flanges is analyzed theoretically and a formula for the flange rotation angle is derived; from this result and the mechanical properties of the gasket material, a method for calculating the gasket contact stresses is put forward. The maximum stress at the outer flank of the gasket calculated by the analytical method is lower than that obtained by numerical simulation, but the mean stresses calculated by the two methods are nearly the same. The analytical method presented in this paper can be used as an engineering method for designing metallic gasket connections.

  3. Analytical Method for Measuring Cosmogenic (35)S in Natural Waters.

    PubMed

    Urióstegui, Stephanie H; Bibby, Richard K; Esser, Bradley K; Clark, Jordan F

    2015-06-16

    Cosmogenic sulfur-35 in water as dissolved sulfate ((35)SO4) has successfully been used as an intrinsic hydrologic tracer in low-SO4, high-elevation basins. Its application in environmental waters containing high SO4 concentrations has been limited because only small amounts of SO4 can be analyzed using current liquid scintillation counting (LSC) techniques. We present a new analytical method for analyzing large amounts of BaSO4 for (35)S. We quantify the efficiency gains from suspending the BaSO4 precipitate in Insta-Gel Plus cocktail, purify the BaSO4 precipitate to remove dissolved organic matter, mitigate interference from radium-226 and its daughter products by selecting high-purity barium chloride, and optimize the LSC counting parameters for (35)S determination in larger masses of BaSO4. Using this improved procedure, we achieved counting efficiencies comparable to published LSC techniques despite a 10-fold increase in the SO4 sample load. (35)SO4 was successfully measured in high-SO4 surface waters and groundwaters containing low ratios of (35)S activity to SO4 mass, demonstrating that this new analytical method expands the analytical range of (35)SO4 and broadens its utility as an intrinsic tracer in hydrologic settings. PMID:25981756

  4. Development of A High Throughput Method Incorporating Traditional Analytical Devices

    PubMed Central

    White, C. C.; Embree, E.; Byrd, W. E; Patel, A. R.

    2004-01-01

    A high-throughput system and a companion informatics system have been developed and implemented. High throughput is defined here as the ability to autonomously evaluate large numbers of samples, while the informatics system provides the software control of the physical devices in addition to the organization and storage of the generated electronic data. The high-throughput system includes both an ultraviolet-visible spectrometer (UV-Vis) and a Fourier transform infrared spectrometer (FTIR) integrated with a multi-sample positioning table. The method is designed to quantify changes in polymeric materials resulting from controlled temperature, humidity and high-flux UV exposures. The integration of the software control of these analytical instruments within a single computer system is presented, and challenges in extending the system to include additional analytical devices are discussed. PMID:27366626

  5. An analytical method for regional dental manpower training.

    PubMed

    Mulvey, P J; Foley, W J; Schneider, D P

    1978-07-01

    This paper presents an analytical method for dental manpower planning for use by Health Systems Agencies. The planning methods discard geopolitical boundaries in favor of Dental Service Areas (DSA). A method for defining DSAs by aggregating Minor Civil Divisions based on current population mobility and current distribution of dentists is presented. The Dental Manpower Balance Model (DMBM) is presented to calculate shortages (or surpluses) of dentists. This model uses sociodemographic data to calculate the demand for dental services and age adjusted productivity measures to calculate the effective supply of dentists. A case study for the HSA region in Northeastern New York is presented. The case study demonstrates that, although the planning methods are quite simple, they are more flexible and produce more sensitive results than the normative ratio method of manpower planning. PMID:10308627

  6. Fast and stable numerical method for neuronal modelling

    NASA Astrophysics Data System (ADS)

    Hashemi, Soheil; Abdolali, Ali

    2016-11-01

    Excitable cell modelling is of prime interest for predicting and targeting neural activity. Two main limits in solving the related equations are the speed and stability of the numerical method. Since there is a tradeoff between accuracy and speed, most previously presented methods for solving the partial differential equations (PDEs) involved focus on one side of it; yet greater speed enables more accurate simulations and therefore better device design. By evaluating the variables of the finite-difference equations at the proper time levels and computing the unknowns in a specific sequence, a fast, stable and accurate method is introduced in this paper for solving neural partial differential equations. Propagation of the action potential in the giant axon is studied with the proposed method and with traditional methods, and the speed, consistency and stability of the methods are compared and discussed. The proposed method is as fast as forward methods, which are known as the fastest methods, and as stable as backward methods, which are stable under any circumstances. Complex structures can therefore be simulated with the proposed method owing to its speed and stability.
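
    The speed/stability tradeoff referred to above can be seen in the simplest setting of the passive-cable (diffusion) part of a neuron model. The Python sketch below contrasts a forward-Euler step, which is cheap but conditionally stable, with a backward-Euler step, which requires a linear solve but is stable for any step size; this generic pair is shown only for illustration and is not the scheme proposed in the paper, and membrane gating kinetics are omitted.

    import numpy as np

    def diffusion_operator(n, r):
        """Tridiagonal 1-D diffusion operator scaled by r = D*dt/dx**2,
        with zero-flux (reflecting) ends."""
        L = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
             + np.diag(np.ones(n - 1), -1))
        L[0, 0] = L[-1, -1] = -1.0
        return r * L

    def step_explicit(v, r):
        """Forward-Euler step: no linear solve, but stable only for r <= 0.5."""
        return v + diffusion_operator(v.size, r) @ v

    def step_implicit(v, r):
        """Backward-Euler step: one linear solve per step, stable for any r."""
        n = v.size
        return np.linalg.solve(np.eye(n) - diffusion_operator(n, r), v)

    v0 = np.zeros(50)
    v0[25] = 1.0                                  # localized depolarization
    print(step_explicit(v0, r=0.4)[23:28])        # within the explicit stability limit
    print(step_implicit(v0, r=5.0)[23:28])        # far beyond it, still well-behaved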

  7. Comparison of analytical methods for calculation of wind loads

    NASA Technical Reports Server (NTRS)

    Minderman, Donald J.; Schultz, Larry L.

    1989-01-01

    The following analysis is a comparison of analytical methods for the calculation of wind load pressures. The analytical methods specified in ASCE Paper No. 3269, ANSI A58.1-1982, the Standard Building Code, and the Uniform Building Code were analyzed using various hurricane speeds to determine the differences in the calculated results. The winds used for the analysis ranged from 100 mph to 125 mph and applied inland from the shoreline of a large open body of water (i.e., an enormous lake or the ocean) a distance of 1500 feet or ten times the height of the building or structure considered. For a building or structure less than or equal to 250 feet in height acted upon by a wind greater than or equal to 115 mph, it was determined that the method specified in ANSI A58.1-1982 calculates a larger wind load pressure than the other methods. For a building or structure between 250 feet and 500 feet tall acted upon by a wind ranging from 100 mph to 110 mph, there is no clear choice of which method to use; for these cases, the factors that must be considered are the steady-state or peak wind velocity, the geographic location, the distance from a large open body of water, and the expected design life and its risk factor.

  8. A new analytical method for groundwater recharge and discharge estimation

    NASA Astrophysics Data System (ADS)

    Liang, Xiuyu; Zhang, You-Kuan

    2012-07-01

    A new analytical method is proposed for groundwater recharge and discharge estimation in an unconfined aquifer. The method is based on an analytical solution to the Boussinesq equation linearized in terms of h2, where h is the water table elevation, with a time-dependent source term. The solution derived was validated with numerical simulation and shown to be a better approximation than an existing solution to the Boussinesq equation linearized in terms of h. By calibrating against the observed water levels in a monitoring well over a period of 100 days, we showed that the proposed method can be used to estimate daily recharge (R) and evapotranspiration (ET) as well as the lateral drainage. The total R was reasonably estimated with a water-table fluctuation (WTF) method if water table measurements away from a fixed-head boundary were used, but the total ET was overestimated and the total net recharge was underestimated because the WTF method neglects lateral drainage and aquifer storage.
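
    For comparison, the classical water-table fluctuation (WTF) estimate mentioned above is simple enough to sketch directly: recharge over an interval is taken as the specific yield times the water-table rise, with declines attributed to drainage and ET. The Python snippet below is a minimal illustration of that baseline, not the authors' analytical solution of the linearized Boussinesq equation, and the specific yield and head series are invented.

    import numpy as np

    def wtf_recharge(head, specific_yield):
        """Recharge per interval (same length unit as head) by the
        water-table fluctuation method: R = Sy * rise in head,
        counting only intervals in which the water table rises."""
        rises = np.diff(head)
        rises[rises < 0.0] = 0.0      # declines attributed to drainage/ET, not recharge
        return specific_yield * rises

    head = np.array([10.00, 10.02, 10.05, 10.04, 10.10])   # hypothetical daily heads [m]
    print(wtf_recharge(head, specific_yield=0.15).sum())    # total recharge over the record [m]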

  9. Selectivity in analytical chemistry: two interpretations for univariate methods.

    PubMed

    Dorkó, Zsanett; Verbić, Tatjana; Horvai, George

    2015-01-01

    Selectivity is extremely important in analytical chemistry but its definition is elusive despite continued efforts by professional organizations and individual scientists. This paper shows that the existing selectivity concepts for univariate analytical methods broadly fall in two classes: selectivity concepts based on measurement error and concepts based on response surfaces (the response surface being the 3D plot of the univariate signal as a function of analyte and interferent concentration, respectively). The strengths and weaknesses of the different definitions are analyzed and contradictions between them unveiled. The error based selectivity is very general and very safe but its application to a range of samples (as opposed to a single sample) requires the knowledge of some constraint about the possible sample compositions. The selectivity concepts based on the response surface are easily applied to linear response surfaces but may lead to difficulties and counterintuitive results when applied to nonlinear response surfaces. A particular advantage of this class of selectivity is that with linear response surfaces it can provide a concentration independent measure of selectivity. In contrast, the error based selectivity concept allows only yes/no type decision about selectivity.

  10. Gaussian Analytic Centroiding method of star image of star tracker

    NASA Astrophysics Data System (ADS)

    Wang, Haiyong; Xu, Ershuai; Li, Zhifeng; Li, Jingjin; Qin, Tianmu

    2015-11-01

    The energy distribution of an actual star image follows the Gaussian law statistically in most cases, so an optimized star-image centroiding algorithm should also be constructed following the Gaussian law. For a star image spot covering a certain number of pixels, the marginal distributions of the gray-level accumulation over rows and columns are presented and analyzed, from which the formulas of the Gaussian Analytic Centroiding (GAC) method are deduced; robustness is also improved owing to the inherent filtering effect of the gray-level accumulation. Ideal reference star images are simulated with a point spread function (PSF) in integral form. Precision and speed tests of the Gaussian analytic formulas are conducted for three Gaussian radii (0.5, 0.671, and 0.8 pixel). The simulation results show that the precision of the GAC method is better than that of the other algorithms considered when the window is no larger than 5 × 5 pixels, a widely used size, and the algorithm that consumes the least time is still the novel GAC method. The GAC method thus helps to improve the overall performance of attitude determination with a star tracker.
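
    To illustrate the kind of closed-form estimate involved, the Python sketch below sums a star-image window into row and column marginals and applies the standard three-point log-Gaussian interpolation around each marginal's peak (exact for ideally Gaussian samples). This generic analytic estimator is a stand-in for the approach described above and is not claimed to reproduce the paper's exact GAC formulas.

    import numpy as np

    def gaussian_peak_1d(m):
        """Sub-pixel peak location of a sampled Gaussian-like marginal m,
        using the three-point log-parabola (exact for ideal Gaussian samples)."""
        k = int(np.argmax(m))
        k = min(max(k, 1), m.size - 2)                 # keep the 3-point stencil inside
        a, b, c = np.log(m[k - 1]), np.log(m[k]), np.log(m[k + 1])
        return k + 0.5 * (a - c) / (a - 2.0 * b + c)

    def centroid(window):
        """Centroid (row, col) of a star-image window from its marginals."""
        w = window.astype(float) - window.min()        # crude background removal
        w += 1e-12                                     # avoid log(0) away from the spot
        return gaussian_peak_1d(w.sum(axis=1)), gaussian_peak_1d(w.sum(axis=0))

    # Synthetic 9x9 Gaussian star spot centred at (4.3, 3.7) with 0.8-pixel radius
    yy, xx = np.mgrid[0:9, 0:9]
    spot = np.exp(-((yy - 4.3) ** 2 + (xx - 3.7) ** 2) / (2 * 0.8 ** 2))
    print(centroid(spot))                              # close to (4.3, 3.7)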

  11. Organic analysis and analytical methods development: FY 1995 progress report

    SciTech Connect

    Clauss, S.A.; Hoopes, V.; Rau, J.

    1995-09-01

    This report describes the status of organic analyses and of developing analytical methods to account for the organic components in Hanford waste tanks, with particular emphasis on tanks assigned to the Flammable Gas Watch List. The methods that have been developed are illustrated by their application to samples obtained from Tank 241-SY-103 (Tank 103-SY); the analytical data serve as an example of the status of methods development and application. Samples of the convective and nonconvective layers from Tank 103-SY were analyzed for total organic carbon (TOC). The TOC value obtained for the nonconvective layer using the hot persulfate method was 10,500 µg C/g; the TOC value obtained from samples of Tank 101-SY was 11,000 µg C/g, and the average TOC value for the convective layer was 6400 µg C/g. Chelators and chelator fragments in Tank 103-SY samples were identified using derivatization gas chromatography/mass spectrometry (GC/MS), and organic components were quantified using GC/flame ionization detection. Major components in both the convective- and nonconvective-layer samples include ethylenediaminetetraacetic acid (EDTA), nitrilotriacetic acid (NTA), succinic acid, nitrosoiminodiacetic acid (NIDA), citric acid, and ethylenediaminetriacetic acid (ED3A). Preliminary results also indicate the presence of C16 and C18 carboxylic acids in the nonconvective-layer sample. Oxalic acid was one of the major components in the nonconvective layer as determined by derivatization GC/flame ionization detection.

  12. Analytical methods for kinetic studies of biological interactions: A review.

    PubMed

    Zheng, Xiwei; Bi, Cong; Li, Zhao; Podariu, Maria; Hage, David S

    2015-09-10

    The rates at which biological interactions occur can provide important information concerning the mechanism and behavior of these processes in living systems. This review discusses several analytical methods that can be used to examine the kinetics of biological interactions. These techniques include common or traditional methods such as stopped-flow analysis and surface plasmon resonance spectroscopy, as well as alternative methods based on affinity chromatography and capillary electrophoresis. The general principles and theory behind these approaches are examined, and it is shown how each technique can be utilized to provide information on the kinetics of biological interactions. Examples of applications are also given for each method. In addition, a discussion is provided on the relative advantages or potential limitations of each technique regarding its use in kinetic studies.

  13. Evolution of microbiological analytical methods for dairy industry needs

    PubMed Central

    Sohier, Danièle; Pavan, Sonia; Riou, Armelle; Combrisson, Jérôme; Postollec, Florence

    2014-01-01

    Traditionally, culture-based methods have been used to enumerate microbial populations in dairy products. Recent developments in molecular methods now enable faster and more sensitive analyses than classical microbiology procedures. These molecular tools allow a detailed characterization of cell physiological states and bacterial fitness and thus, offer new perspectives to integration of microbial physiology monitoring to improve industrial processes. This review summarizes the methods described to enumerate and characterize physiological states of technological microbiota in dairy products, and discusses the current deficiencies in relation to the industry’s needs. Recent studies show that Polymerase chain reaction-based methods can successfully be applied to quantify fermenting microbes and probiotics in dairy products. Flow cytometry and omics technologies also show interesting analytical potentialities. However, they still suffer from a lack of validation and standardization for quality control analyses, as reflected by the absence of performance studies and official international standards. PMID:24570675

  14. Analytical methods for kinetic studies of biological interactions: A review.

    PubMed

    Zheng, Xiwei; Bi, Cong; Li, Zhao; Podariu, Maria; Hage, David S

    2015-09-10

    The rates at which biological interactions occur can provide important information concerning the mechanism and behavior of these processes in living systems. This review discusses several analytical methods that can be used to examine the kinetics of biological interactions. These techniques include common or traditional methods such as stopped-flow analysis and surface plasmon resonance spectroscopy, as well as alternative methods based on affinity chromatography and capillary electrophoresis. The general principles and theory behind these approaches are examined, and it is shown how each technique can be utilized to provide information on the kinetics of biological interactions. Examples of applications are also given for each method. In addition, a discussion is provided on the relative advantages or potential limitations of each technique regarding its use in kinetic studies. PMID:25700721

  15. ANALYTICAL METHODS FOR KINETIC STUDIES OF BIOLOGICAL INTERACTIONS: A REVIEW

    PubMed Central

    Zheng, Xiwei; Bi, Cong; Li, Zhao; Podariu, Maria; Hage, David S.

    2015-01-01

    The rates at which biological interactions occur can provide important information concerning the mechanism and behavior of these processes in living systems. This review discusses several analytical methods that can be used to examine the kinetics of biological interactions. These techniques include common or traditional methods such as stopped-flow analysis and surface plasmon resonance spectroscopy, as well as alternative methods based on affinity chromatography and capillary electrophoresis. The general principles and theory behind these approaches are examined, and it is shown how each technique can be utilized to provide information on the kinetics of biological interactions. Examples of applications are also given for each method. In addition, a discussion is provided on the relative advantages or potential limitations of each technique regarding its use in kinetic studies. PMID:25700721

  16. Evolution of microbiological analytical methods for dairy industry needs.

    PubMed

    Sohier, Danièle; Pavan, Sonia; Riou, Armelle; Combrisson, Jérôme; Postollec, Florence

    2014-01-01

    Traditionally, culture-based methods have been used to enumerate microbial populations in dairy products. Recent developments in molecular methods now enable faster and more sensitive analyses than classical microbiology procedures. These molecular tools allow a detailed characterization of cell physiological states and bacterial fitness and thus, offer new perspectives to integration of microbial physiology monitoring to improve industrial processes. This review summarizes the methods described to enumerate and characterize physiological states of technological microbiota in dairy products, and discusses the current deficiencies in relation to the industry's needs. Recent studies show that Polymerase chain reaction-based methods can successfully be applied to quantify fermenting microbes and probiotics in dairy products. Flow cytometry and omics technologies also show interesting analytical potentialities. However, they still suffer from a lack of validation and standardization for quality control analyses, as reflected by the absence of performance studies and official international standards.

  17. A novel fast gas chromatography method for higher time resolution measurements of speciated monoterpenes in air

    NASA Astrophysics Data System (ADS)

    Jones, C. E.; Kato, S.; Nakashima, Y.; Kajii, Y.

    2014-05-01

    Biogenic emissions supply the largest fraction of non-methane volatile organic compounds (VOC) from the biosphere to the atmospheric boundary layer, and typically comprise a complex mixture of reactive terpenes. Due to this chemical complexity, achieving comprehensive measurements of biogenic VOC (BVOC) in air within a satisfactory time resolution is analytically challenging. To address this, we have developed a novel, fully automated Fast Gas Chromatography (Fast-GC) based technique to provide higher time resolution monitoring of monoterpenes (and selected other C9-C15 terpenes) during plant emission studies and in ambient air. To our knowledge, this is the first study to apply a Fast-GC based separation technique to achieve quantification of terpenes in ambient air. Three chromatography methods have been developed for atmospheric terpene analysis under different sampling scenarios. Each method facilitates chromatographic separation of selected BVOC within a significantly reduced analysis time compared to conventional GC methods, whilst maintaining the ability to quantify individual monoterpene structural isomers. Using this approach, the C9-C15 BVOC composition of single plant emissions may be characterised within a 14.5 min analysis time. Moreover, in-situ quantification of 12 monoterpenes in unpolluted ambient air may be achieved within an 11.7 min chromatographic separation time (increasing to 19.7 min when simultaneous quantification of multiple oxygenated C9-C10 terpenoids is required, and/or when concentrations of anthropogenic VOC are significant). These analysis times potentially allow for a twofold to fivefold increase in measurement frequency compared to conventional GC methods. Here we outline the technical details and analytical capability of this chromatographic approach, and present the first in-situ Fast-GC observations of 6 monoterpenes and the oxygenated BVOC (OBVOC) linalool in ambient air. During this field deployment within a suburban forest

  18. Analytical methods for human biomonitoring of pesticides. A review.

    PubMed

    Yusa, Vicent; Millet, Maurice; Coscolla, Clara; Roca, Marta

    2015-09-01

    Biomonitoring of both currently-used and banned-persistent pesticides is a very useful tool for assessing human exposure to these chemicals. In this review, we present current approaches and recent advances in the analytical methods for determining the biomarkers of exposure to pesticides in the most commonly used specimens, such as blood, urine, and breast milk, and in emerging non-invasive matrices such as hair and meconium. We critically discuss the main applications for sample treatment, and the instrumental techniques currently used to determine the most relevant pesticide biomarkers. We finally look at the future trends in this field.

  19. Flue gas desulfurization (FGD) chemistry and analytical methods handbook

    SciTech Connect

    Noblett, J.G.; Burke, J.M.

    1990-08-01

    The purpose of this handbook is to provide a comprehensive guide to sampling, analytical, and physical test methods essential to the operation, maintenance, and understanding of flue gas desulfurization (FGD) system chemistry. EPRI sponsored the first edition of this three-volume report in response to the needs of electric utility personnel responsible for establishing and operating commercial FGD analytical laboratories. The second, revised editions of Volumes 1 and 2 were prompted by the results of research into various non-standard aspects of FGD system chemistry. Volume 1 of the handbook explains FGD system chemistry in the detail necessary to understand how the processes operate and how process performance indicators can be used to optimize system operation. Volume 2 includes 63 physical-testing and chemical-analysis methods for reagents, slurries, and solids, and information on the applicability of individual methods to specific FGD systems. Volume 3 contains instructions for the FGD solution chemistry computer program designated by EPRI as FGDLIQEQ. Executable on IBM-compatible personal computers, this program calculates the concentrations (activities) of chemical species (ions) in scrubber liquor and can calculate driving forces for important chemical reactions such as SO2 absorption and calcium sulfite and sulfate precipitation. This program and selected chemical analyses will help an FGD system operator optimize system performance, prevent many potential process problems, and define solutions to existing problems. 22 refs., 17 figs., 28 tabs.

  20. Performance of analytical methods for tomographic gamma scanning

    SciTech Connect

    Prettyman, T.H.; Mercer, D.J.

    1997-06-01

    The use of gamma-ray computerized tomography for nondestructive assay of radioactive materials has led to the development of specialized analytical methods. Over the past few years, Los Alamos has developed and implemented a computer code, called ARC-TGS, for the analysis of data obtained by tomographic gamma scanning (TGS). ARC-TGS reduces TGS transmission and emission tomographic data, providing the user with images of the sample contents, the activity or mass of selected radionuclides, and an estimate of the uncertainty in the measured quantities. The results provided by ARC-TGS can be corrected for self-attenuation when the isotope of interest emits more than one gamma-ray. In addition, ARC-TGS provides information needed to estimate TGS quantification limits and to estimate the scan time needed to screen for small amounts of radioactivity. In this report, an overview of the analytical methods used by ARC-TGS is presented along with an assessment of the performance of these methods for TGS.

  1. Fast nonlinear regression method for CT brain perfusion analysis.

    PubMed

    Bennink, Edwin; Oosterbroek, Jaap; Kudo, Kohsuke; Viergever, Max A; Velthuis, Birgitta K; de Jong, Hugo W A M

    2016-04-01

    Although computed tomography (CT) perfusion (CTP) imaging enables rapid diagnosis and prognosis of ischemic stroke, current CTP analysis methods have several shortcomings. We propose a fast nonlinear regression method with a box-shaped model (boxNLR) that has important advantages over the current state-of-the-art method, block-circulant singular value decomposition (bSVD). These advantages include improved robustness to attenuation curve truncation, extensibility, and unified estimation of perfusion parameters. The method is compared with bSVD and with a commercial SVD-based method. The three methods were quantitatively evaluated by means of the digital perfusion phantom described by Kudo et al. and qualitatively with the aid of 50 clinical CTP scans. All three methods yielded high Pearson correlation coefficients with the ground truth in the phantom. The boxNLR perfusion maps of the clinical scans showed higher correlation with bSVD than the perfusion maps from the commercial method. Furthermore, it was shown that boxNLR estimates are robust to noise, truncation, and tracer delay. The proposed method provides a fast and reliable way of estimating perfusion parameters from CTP scans, suggesting it could be a viable alternative to current commercial and academic methods. PMID:27413770
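
    As a rough illustration of what nonlinear regression with a box-shaped model involves, the Python sketch below fits cerebral blood flow (CBF), tracer delay, and mean transit time (MTT) by least squares, modelling the tissue curve as the arterial input function convolved with a scaled rectangular impulse-response. It is a simplified sketch under assumed units and parameter bounds, not the authors' boxNLR implementation.

    import numpy as np
    from scipy.optimize import least_squares

    def box_residue(t, dt, delay, mtt):
        """Fraction of each sampling interval [t, t+dt) covered by the box
        [delay, delay+mtt); piecewise linear in delay and MTT so the fit
        has usable finite-difference gradients."""
        lo = np.maximum(t, delay)
        hi = np.minimum(t + dt, delay + mtt)
        return np.clip(hi - lo, 0.0, None) / dt

    def tissue_model(params, t, aif):
        """Tissue curve = CBF * (AIF convolved with a box-shaped residue function)."""
        cbf, delay, mtt = params
        dt = t[1] - t[0]
        return cbf * np.convolve(aif, box_residue(t, dt, delay, mtt))[: t.size] * dt

    def fit_perfusion(t, aif, tissue, p0=(0.5, 1.0, 4.0)):
        """Least-squares estimate of (CBF, delay, MTT); p0 and bounds are illustrative."""
        fit = least_squares(lambda p: tissue_model(p, t, aif) - tissue, p0,
                            bounds=([0.0, 0.0, 0.5], [5.0, 20.0, 30.0]))
        return fit.x

    # Noiseless synthetic test: the fit should roughly recover (0.8, 3.0, 6.0)
    t = np.arange(0.0, 60.0, 1.0)
    aif = np.maximum(t - 5.0, 0.0) ** 3 * np.exp(-np.maximum(t - 5.0, 0.0) / 1.5)
    tissue = tissue_model((0.8, 3.0, 6.0), t, aif)
    print(fit_perfusion(t, aif, tissue))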

  2. Experimental validation of an analytical method of calculating photon distributions

    SciTech Connect

    Wells, R.G.; Celler, A.; Harrop, R.

    1996-12-31

    We have developed a method for analytically calculating photon distributions in SPECT projections. This method models primary photon distributions as well as first and second order Compton scattering and Rayleigh scattering. It uses no free fitting parameters and so the projections produced are completely determined by the characteristics of the SPECT camera system, the energy of the isotope, an estimate of the source distribution and an attenuation map of the scattering object. The method was previously validated by comparison with Monte Carlo simulations and we are now verifying its accuracy with respect to phantom experiments. We have performed experiments using a Siemens MS3 SPECT camera system for a point source (2 mm in diameter) within a homogeneous water bath and a small spherical source (1 cm in diameter) within both a homogeneous water cylinder and a non-homogeneous medium consisting of air and water. Our technique reproduces well the distribution of photons in the experimentally acquired projections.

  3. Method of Analytic Evolution of Flat Distribution Amplitudes in QCD

    NASA Astrophysics Data System (ADS)

    Tandogan, Asli; Radyushkin, Anatoly V.

    A new analytical method of performing ERBL evolution is described. The main goal is to develop an approach that works for distribution amplitudes (DAs) that do not vanish at the end points, for which the standard method of expansion in Gegenbauer polynomials is inefficient. Two cases of the initial DA are considered: a purely flat DA, given by the same constant for all x, and an antisymmetric DA, given by opposite constants for x < 1/2 and x > 1/2. For a purely flat DA, the evolution is governed by an overall (x(1-x))^t dependence on the evolution parameter t times a factor that was calculated as an expansion in t. For an antisymmetric flat DA, an extra overall factor |1 - 2x|^(2t) appears due to the jump at x = 1/2. Good convergence was observed in the t ≲ 1/2 region. For larger t, one can use the standard method of the Gegenbauer expansion.

  4. The evolution of analytical chemistry methods in foodomics.

    PubMed

    Gallo, Monica; Ferranti, Pasquale

    2016-01-01

    The methodologies of food analysis have greatly evolved over the past 100 years, from basic assays based on solution chemistry to those relying on the modern instrumental platforms. Today, the development and optimization of integrated analytical approaches based on different techniques to study at molecular level the chemical composition of a food may allow to define a 'food fingerprint', valuable to assess nutritional value, safety and quality, authenticity and security of foods. This comprehensive strategy, defined foodomics, includes emerging work areas such as food chemistry, phytochemistry, advanced analytical techniques, biosensors and bioinformatics. Integrated approaches can help to elucidate some critical issues in food analysis, but also to face the new challenges of a globalized world: security, sustainability and food productions in response to environmental world-wide changes. They include the development of powerful analytical methods to ensure the origin and quality of food, as well as the discovery of biomarkers to identify potential food safety problems. In the area of nutrition, the future challenge is to identify, through specific biomarkers, individual peculiarities that allow early diagnosis and then a personalized prognosis and diet for patients with food-related disorders. Far from the aim of an exhaustive review of the abundant literature dedicated to the applications of omic sciences in food analysis, we will explore how classical approaches, such as those used in chemistry and biochemistry, have evolved to intersect with the new omics technologies to produce a progress in our understanding of the complexity of foods. Perhaps most importantly, a key objective of the review will be to explore the development of simple and robust methods for a fully applied use of omics data in food science. PMID:26363946

  6. Using analytic network process for evaluating mobile text entry methods.

    PubMed

    Ocampo, Lanndon A; Seva, Rosemary R

    2016-01-01

    This paper highlights a preference evaluation methodology for text entry methods on a touch-keyboard smartphone using the analytic network process (ANP). Evaluation of text entry methods in the literature mainly considers speed and accuracy. This study presents an alternative means of selecting a text entry method that considers user preference. A case study was carried out with a group of experts who were asked to develop a selection decision model for five text entry methods. The decision problem is flexible enough to reflect the interdependencies of decision elements that are necessary for describing real-life conditions. Results showed that the QWERTY method is preferred over the other text entry methods, while the arrangement of keys is the most important criterion in characterizing a sound method. Sensitivity analysis, using simulation of normally distributed random numbers under fairly large perturbations, showed the foregoing results to be reliable enough to reflect robust judgment. The main contribution of this paper is the introduction of a multi-criteria decision approach to the preference evaluation of text entry methods.

  7. The methods of decrease operating pressure of fast neutrals source

    NASA Astrophysics Data System (ADS)

    Barchenko, V. T.; Komlev, A. E.; Babinov, N. A.; Vinogradov, M. L.

    2015-11-01

    Fast neutral particle sources are increasingly used in surface processing and coating deposition technologies, especially for the processing of dielectric surfaces. However, to substantially expand the scope of their applications, it is necessary to decrease the vacuum-chamber pressure at which they can operate. This article describes methods to reduce the operating pressure of a fast neutral particle source with combined ion acceleration and neutralization regions. This combination ensures a total absence of high-energy ions in the particle beam. The main methods discussed are the creation of a pressure drop between the internal and external volumes of the source, and preionization of the working gas provided by an auxiliary gas discharge.

  8. New fast spectral analysis method for solid materials

    NASA Astrophysics Data System (ADS)

    Bel'Kov, M. V.; Burakov, V. S.; Kiris, V. V.; Raikov, S. N.

    2007-05-01

    We propose a new fast method for direct spectral analysis of solid materials based on laser ablation of the sample in deionized water and real-time transport of the aqueous suspension of nanoparticles into the inductively coupled plasma of an emission spectrometer. As a result, we have all the instrumental and methodological advantages of standard equipment, along with calibration of the spectrometer using standard aqueous solutions.

  9. Differential correction method applied to measurement of the FAST reflector

    NASA Astrophysics Data System (ADS)

    Li, Xin-Yi; Zhu, Li-Chun; Hu, Jin-Wen; Li, Zhi-Heng

    2016-08-01

    The Five-hundred-meter Aperture Spherical radio Telescope (FAST) adopts an active deformable main reflector which is composed of 4450 triangular panels. During an observation, the illuminated area of the reflector is deformed into a 300-m diameter paraboloid and directed toward a source. To achieve accurate control of the reflector shape, positions of 2226 nodes distributed around the entire reflector must be measured with sufficient precision within a limited time, which is a challenging task because of the large scale. Measurement of the FAST reflector makes use of total stations and node targets. However, in this case the effect of the atmosphere on measurement accuracy is a significant issue. This paper investigates a differential correction method for total-station measurement of the FAST reflector. A multi-benchmark differential correction method, including a scheme for benchmark selection and weight assignment, is proposed. On-site evaluation experiments show an improvement of 70%-80% in measurement accuracy compared with the uncorrected measurement, verifying the effectiveness of the proposed method.
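    The abstract does not give the correction formula; as a hedged illustration only, a multi-benchmark differential correction of the kind described could combine benchmark residuals (surveyed minus measured positions) with weights and apply the result to each node measurement. All names and the weighting scheme below are assumptions, not the authors' method.

        import numpy as np

        def correct_node(measured_node, bench_measured, bench_known, weights):
            # measured_node: (3,) total-station measurement of one reflector node
            # bench_measured, bench_known: (N, 3) benchmark positions, measured vs. surveyed
            # weights: (N,) weights from a benchmark selection/weight-assignment scheme
            residuals = np.asarray(bench_known) - np.asarray(bench_measured)
            w = np.asarray(weights, dtype=float)
            correction = (w[:, None] * residuals).sum(axis=0) / w.sum()
            return np.asarray(measured_node) + correction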

  11. Friedmann-Lemaitre cosmologies via roulettes and other analytic methods

    NASA Astrophysics Data System (ADS)

    Chen, Shouxin; Gibbons, Gary W.; Yang, Yisong

    2015-10-01

    In this work a series of methods are developed for understanding the Friedmann equation when it is beyond the reach of the Chebyshev theorem. First it is demonstrated that every solution of the Friedmann equation admits a representation as a roulette, such that information on the latter may be used to obtain information on the former. Next the Friedmann equation is integrated for a quadratic equation of state and for the Randall-Sundrum II universe, yielding a rich collection of new and interesting phenomena. Finally an analytic method is used to isolate the asymptotic behavior of the solutions of the Friedmann equation, when the equation of state is of an extended form which renders the integration impossible, and to establish a universal exponential growth law.
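    For reference (standard background, not taken from the paper), the Friedmann equation discussed above can be written in LaTeX as

        \left(\frac{\dot a}{a}\right)^{2} = \frac{8\pi G}{3}\,\rho - \frac{k}{a^{2}} + \frac{\Lambda}{3},
        \qquad
        \dot\rho + 3\,\frac{\dot a}{a}\,(\rho + p) = 0,

    which is closed once an equation of state p = p(\rho), such as the quadratic one mentioned above, is specified.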

  12. A novel unified coding analytical method for Internet of Things

    NASA Astrophysics Data System (ADS)

    Sun, Hong; Zhang, JianHong

    2013-08-01

    This paper presents a novel unified coding analytical method for the Internet of Things, which abstracts out 'displacement goods' and 'physical objects' and expounds the relationship between them. It details the item coding principles, establishes a one-to-one relationship between the three-dimensional spatial coordinates of points and global manufacturers, can be expanded indefinitely, solves the problem of unified coding in the production and circulation phases with a novel unified coding method, and further explains how to update the item information corresponding to the coding in the sale and use stages, so as to meet the requirement that the Internet of Things can carry out real-time monitoring and intelligent management of each item.

  13. Fast Quantitation of Target Analytes in Small Volumes of Complex Samples by Matrix-Compatible Solid-Phase Microextraction Devices.

    PubMed

    Piri-Moghadam, Hamed; Ahmadi, Fardin; Gómez-Ríos, German Augusto; Boyacı, Ezel; Reyes-Garcés, Nathaly; Aghakhani, Ali; Bojko, Barbara; Pawliszyn, Janusz

    2016-06-20

    Herein we report the development of solid-phase microextraction (SPME) devices designed to perform fast extraction/enrichment of target analytes present in small volumes of complex matrices (i.e., V ≤ 10 μL). Micro-sampling was performed with the use of etched metal tips coated with a thin layer of biocompatible nano-structured polypyrrole (PPy), or by using coated blade spray (CBS) devices. These devices can be coupled either to liquid chromatography (LC), or directly to mass spectrometry (MS) via dedicated interfaces. The reported results demonstrate that the whole analytical procedure can be carried out within a few minutes with high sensitivity and quantitation precision, and can be used to sample from various biological matrices such as blood, urine, or Allium cepa L. single cells. PMID:27158909

  14. GenoSets: Visual Analytic Methods for Comparative Genomics

    PubMed Central

    Cain, Aurora A.; Kosara, Robert; Gibas, Cynthia J.

    2012-01-01

    Many important questions in biology are, fundamentally, comparative, and this extends to our analysis of a growing number of sequenced genomes. Existing genomic analysis tools are often organized around literal views of genomes as linear strings. Even when information is highly condensed, these views grow cumbersome as larger numbers of genomes are added. Data aggregation and summarization methods from the field of visual analytics can provide abstracted comparative views, suitable for sifting large multi-genome datasets to identify critical similarities and differences. We introduce a software system for visual analysis of comparative genomics data. The system automates the process of data integration, and provides the analysis platform to identify and explore features of interest within these large datasets. GenoSets borrows techniques from business intelligence and visual analytics to provide a rich interface of interactive visualizations supported by a multi-dimensional data warehouse. In GenoSets, visual analytic approaches are used to enable querying based on orthology, functional assignment, and taxonomic or user-defined groupings of genomes. GenoSets links this information together with coordinated, interactive visualizations for both detailed and high-level categorical analysis of summarized data. GenoSets has been designed to simplify the exploration of multiple genome datasets and to facilitate reasoning about genomic comparisons. Case examples are included showing the use of this system in the analysis of 12 Brucella genomes. GenoSets software and the case study dataset are freely available at http://genosets.uncc.edu. We demonstrate that the integration of genomic data using a coordinated multiple view approach can simplify the exploration of large comparative genomic data sets, and facilitate reasoning about comparisons and features of interest. PMID:23056299

  15. GenoSets: visual analytic methods for comparative genomics.

    PubMed

    Cain, Aurora A; Kosara, Robert; Gibas, Cynthia J

    2012-01-01

    Many important questions in biology are, fundamentally, comparative, and this extends to our analysis of a growing number of sequenced genomes. Existing genomic analysis tools are often organized around literal views of genomes as linear strings. Even when information is highly condensed, these views grow cumbersome as larger numbers of genomes are added. Data aggregation and summarization methods from the field of visual analytics can provide abstracted comparative views, suitable for sifting large multi-genome datasets to identify critical similarities and differences. We introduce a software system for visual analysis of comparative genomics data. The system automates the process of data integration, and provides the analysis platform to identify and explore features of interest within these large datasets. GenoSets borrows techniques from business intelligence and visual analytics to provide a rich interface of interactive visualizations supported by a multi-dimensional data warehouse. In GenoSets, visual analytic approaches are used to enable querying based on orthology, functional assignment, and taxonomic or user-defined groupings of genomes. GenoSets links this information together with coordinated, interactive visualizations for both detailed and high-level categorical analysis of summarized data. GenoSets has been designed to simplify the exploration of multiple genome datasets and to facilitate reasoning about genomic comparisons. Case examples are included showing the use of this system in the analysis of 12 Brucella genomes. GenoSets software and the case study dataset are freely available at http://genosets.uncc.edu. We demonstrate that the integration of genomic data using a coordinated multiple view approach can simplify the exploration of large comparative genomic data sets, and facilitate reasoning about comparisons and features of interest.

  16. Fast 2D fluid-analytical simulation of ion energy distributions and electromagnetic effects in multi-frequency capacitive discharges

    NASA Astrophysics Data System (ADS)

    Kawamura, E.; Lieberman, M. A.; Graves, D. B.

    2014-12-01

    A fast 2D axisymmetric fluid-analytical plasma reactor model using the finite elements simulation tool COMSOL is interfaced with a 1D particle-in-cell (PIC) code to study ion energy distributions (IEDs) in multi-frequency capacitive argon discharges. A bulk fluid plasma model, which solves the time-dependent plasma fluid equations for the ion continuity and electron energy balance, is coupled with an analytical sheath model, which solves for the sheath parameters. The time-independent Helmholtz equation is used to solve for the fields and a gas flow model solves for the steady-state pressure, temperature and velocity of the neutrals. The results of the fluid-analytical model are used as inputs to a PIC simulation of the sheath region of the discharge to obtain the IEDs at the target electrode. Each 2D fluid-analytical-PIC simulation on a moderate 2.2 GHz CPU workstation with 8 GB of memory took about 15-20 min. The multi-frequency 2D fluid-analytical model was compared to 1D PIC simulations of a symmetric parallel-plate discharge, showing good agreement. We also conducted fluid-analytical simulations of a multi-frequency argon capacitively coupled plasma (CCP) with a typical asymmetric reactor geometry at 2/60/162 MHz. The low frequency 2 MHz power controlled the sheath width and sheath voltage while the high frequencies controlled the plasma production. A standing wave was observable at the highest frequency of 162 MHz. We noticed that adding 2 MHz power to a 60 MHz discharge or 162 MHz to a dual frequency 2 MHz/60 MHz discharge can enhance the plasma uniformity. We found that multiple frequencies were not only useful for controlling IEDs but also plasma uniformity in CCP reactors.

  17. Comparison of four ⁹⁰Sr groundwater analytical methods

    SciTech Connect

    Scarpitta, S.; Odin-McCabe, J.; Gaschott, R.; Meier, A.; Klug, E. (Analytical Services Lab.)

    1999-06-01

    Data are presented for 45 Long Island groundwater samples, each measured for ⁹⁰Sr using four different analytical methods. ⁹⁰Sr levels were first established by two New York State certified laboratories, one of which used the US Environmental Protection Agency Radioactive Strontium in Drinking Water Method 905.0. Three of the ⁹⁰Sr methods evaluated at Brookhaven National Laboratory can reduce analysis time by more than 50%. They were (a) an Environmental Measurements Laboratory Cerenkov technique and (b) two commercially available products that utilize strontium-specific crown-ethers supported on either a resin or a membrane disk. Method-independent inter-laboratory bias was < 12% based on ⁹⁰Sr results obtained using both US Department of Energy/Environmental Measurements Laboratory and US EPA/National Environmental Radiation Laboratory samples of known activity concentration. Brookhaven National Laboratory prepared a National Institute of Standards and Technology traceable ⁹⁰Sr tap-water sample used to quantify test method biases. With gas proportional or liquid scintillation counting, minimum detectable levels (MDLs) of 37 Bq m⁻³ (1 pCi L⁻¹) were achievable for both crown-ether methods using a 1-L processed sample beta counted for 1 h.

  18. [Comparison of intestinal bacteria composition identified by various analytical methods].

    PubMed

    Fujisawa, Tomohiko

    2014-01-01

    Many different kinds of bacteria are normally found in the intestines of healthy humans and animals. To study the ecology and function of these intestinal bacteria, the culture method was fundamental until recent years, and suitable agar plates such as non-selective agar plates and several selective agar plates have been developed. Furthermore, the roll-tube, glove box, and plate-in-bottle methods have also been developed for the cultivation of fastidious anaerobes that predominantly colonize the intestine. Until recently, the evaluation of functional foods such as pre- and probiotics was mainly done using culture methods, and many valuable data were produced. On the other hand, genomic analysis such as the fluorescence in situ hybridization (FISH), quantitative PCR (qPCR), clone-library, denaturing gradient gel electrophoresis (DGGE), temperature gradient gel electrophoresis (TGGE), terminal-restriction fragment length polymorphism (T-RFLP) methods, and metagenome analysis have been used for the investigation of intestinal microbiota in recent years. The identification of bacteria is done by investigation of the phenotypic characteristics in culture methods, while rRNA genes are used as targets in genomic analysis. Here, I compare the fecal bacteria identified by various analytical methods.

  19. Analytical solutions for radiation-driven winds in massive stars. I. The fast regime

    SciTech Connect

    Araya, I.; Curé, M.; Cidale, L. S.

    2014-11-01

    Accurate mass-loss rate estimates are crucial in the study of wind properties of massive stars and for testing different evolutionary scenarios. From a theoretical point of view, this implies solving a complex set of differential equations in which the radiation field and the hydrodynamics are strongly coupled. The use of an analytical expression to represent the radiation force and the solution of the equation of motion has many advantages over numerical integrations. Therefore, in this work, we present an analytical expression as a solution of the equation of motion for radiation-driven winds in terms of the force multiplier parameters. This analytical expression is obtained by employing the line acceleration expression given by Villata and the methodology proposed by Müller and Vink. In addition, we find useful relationships to determine the parameters of the line acceleration given by Müller and Vink in terms of the force multiplier parameters.

  20. SU-C-204-01: A Fast Analytical Approach for Prompt Gamma and PET Predictions in a TPS for Proton Range Verification

    SciTech Connect

    Kroniger, K; Herzog, M; Landry, G; Dedes, G; Parodi, K; Traneus, E

    2015-06-15

    Purpose: We describe and demonstrate a fast analytical tool for prompt-gamma emission prediction based on filter functions applied to the depth dose profile. We present the implementation in a treatment planning system (TPS) of the same algorithm for positron emitter distributions. Methods: The prediction of the desired observable is based on the convolution of filter functions with the depth dose profile. For both prompt-gammas and positron emitters, the results of Monte Carlo simulations (MC) are compared with those of the analytical tool. For prompt-gamma emission from inelastic proton-induced reactions, homogeneous and inhomogeneous phantoms along with patient data are used as irradiation targets for mono-energetic proton pencil beams. The accuracy of the tool is assessed in terms of the shape of the analytically calculated depth profiles and their absolute yields, compared to MC. For the positron emitters, the method is implemented in a research RayStation TPS and compared to MC predictions. Digital phantoms and patient data are used and positron emitter spatial density distributions are analyzed. Results: Calculated prompt-gamma profiles agree with MC within 3% in terms of absolute yield and reproduce the correct shape. Based on an arbitrary reference material and by means of 6 filter functions (one per chemical element), profiles in any other material composed of those elements can be predicted. The TPS-implemented algorithm is accurate enough to enable, via the analytically calculated positron emitter profiles, detection of range differences between the TPS and MC with errors of the order of 1-2 mm. Conclusion: The proposed analytical method predicts prompt-gamma and positron emitter profiles which generally agree with the distributions obtained by a full MC. The implementation of the tool in a TPS shows that reliable profiles can be obtained directly from the dose calculated by the TPS, without the need for a full MC simulation.
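    A minimal sketch of the filter-function idea described above (our illustration; the actual filter shapes and material weights are not given in the abstract):

        import numpy as np

        def prompt_gamma_profile(depth_dose, element_filters, element_weights):
            # depth_dose: 1D depth-dose profile of a mono-energetic proton pencil beam
            # element_filters: dict element -> 1D filter function (one per chemical element)
            # element_weights: dict element -> elemental contribution of the target material
            profile = np.zeros_like(depth_dose, dtype=float)
            for elem, filt in element_filters.items():
                profile += element_weights[elem] * np.convolve(depth_dose, filt, mode="same")
            return profile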

  1. MICROORGANISMS IN BIOSOLIDS: ANALYTICAL METHODS DEVELOPMENT, STANDARDIZATION, AND VALIDATION

    EPA Science Inventory

    The objective of this presentation is to discuss pathogens of concern in biosolids, the analytical techniques used to evaluate microorganisms in biosolids, and to discuss standardization and validation of analytical protocols for microbes within such a complex matrix. Implicatio...

  2. Application of Fast Multipole Methods to the NASA Fast Scattering Code

    NASA Technical Reports Server (NTRS)

    Dunn, Mark H.; Tinetti, Ana F.

    2008-01-01

    The NASA Fast Scattering Code (FSC) is a versatile noise prediction program designed to conduct aeroacoustic noise reduction studies. The equivalent source method is used to solve an exterior Helmholtz boundary value problem with an impedance type boundary condition. The solution process in FSC v2.0 requires direct manipulation of a large, dense system of linear equations, limiting the applicability of the code to small scales and/or moderate excitation frequencies. Recent advances in the use of Fast Multipole Methods (FMM) for solving scattering problems, coupled with sparse linear algebra techniques, suggest that a substantial reduction in computer resource utilization over conventional solution approaches can be obtained. Implementation of the single level FMM (SLFMM) and a variant of the Conjugate Gradient Method (CGM) into the FSC is discussed in this paper. The culmination of this effort, FSC v3.0, was used to generate solutions for three configurations of interest. Benchmarking against previously obtained simulations indicate that a twenty-fold reduction in computational memory and up to a four-fold reduction in computer time have been achieved on a single processor.

  3. Basal buoyancy and fast-moving glaciers: in defense of analytic force balance

    NASA Astrophysics Data System (ADS)

    van der Veen, C. J.

    2016-06-01

    The geometric approach to force balance advocated by T. Hughes in a series of publications has challenged the analytic approach by implying that the latter does not adequately account for basal buoyancy on ice streams, thereby neglecting the contribution to the gravitational driving force associated with this basal buoyancy. Application of the geometric approach to Byrd Glacier, Antarctica, yields physically unrealistic results, and it is argued that this is because of a key limiting assumption in the geometric approach. A more traditional analytic treatment of force balance shows that basal buoyancy does not affect the balance of forces on ice streams, except locally perhaps, through bridging effects.

  4. Fast gridding projectors for analytical and iterative tomographic reconstruction of differential phase contrast data.

    PubMed

    Arcadu, Filippo; Stampanoni, Marco; Marone, Federica

    2016-06-27

    This paper introduces new gridding projectors designed to efficiently perform analytical and iterative tomographic reconstruction when the forward model is represented by the derivative of the Radon transform. This inverse problem is tightly connected with an emerging X-ray tube- and synchrotron-based imaging technique: differential phase contrast based on a grating interferometer. This study shows that the proposed projectors, compared to space-based implementations of the same operators, yield high quality analytical and iterative reconstructions, while improving the computational efficiency by a few orders of magnitude. PMID:27410628

  5. A fast phase space method for computing creeping rays

    SciTech Connect

    Motamed, Mohammad . E-mail: mohamad@nada.kth.se; Runborg, Olof . E-mail: olofr@nada.kth.se

    2006-11-20

    Creeping rays can give an important contribution to the solution of medium to high frequency scattering problems. They are generated at the shadow lines of the illuminated scatterer by grazing incident rays and propagate along geodesics on the scatterer surface, continuously shedding diffracted rays in their tangential direction. In this paper, we show how the ray propagation problem can be formulated as a partial differential equation (PDE) in a three-dimensional phase space. To solve the PDE we use a fast marching method. The PDE solution contains information about all possible creeping rays. This information includes the phase and amplitude of the field, which are extracted by a fast post-processing. Computationally, the cost of solving the PDE is less than tracing all rays individually by solving a system of ordinary differential equations. We consider an application to mono-static radar cross section problems where creeping rays from all illumination angles must be computed. The numerical results of the fast phase space method and a comparison with the results of ray tracing are presented.

  6. Gaussian and finite-element Coulomb method for the fast evaluation of Coulomb integrals

    NASA Astrophysics Data System (ADS)

    Kurashige, Yuki; Nakajima, Takahito; Hirao, Kimihiko

    2007-04-01

    The authors propose a new linear-scaling method for the fast evaluation of Coulomb integrals with Gaussian basis functions called the Gaussian and finite-element Coulomb (GFC) method. In this method, the Coulomb potential is expanded in a basis of mixed Gaussian and finite-element auxiliary functions that express the core and smooth Coulomb potentials, respectively. Coulomb integrals can be evaluated by three-center one-electron overlap integrals among two Gaussian basis functions and one mixed auxiliary function. Thus, the computational cost and scaling for large molecules are drastically reduced. Several applications to molecular systems show that the GFC method is more efficient than the analytical integration approach that requires four-center two-electron repulsion integrals. The GFC method realizes a near linear scaling for both one-dimensional alanine α-helix chains and three-dimensional diamond pieces.
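    In LaTeX form, the expansion described above can be written (our notation, not the paper's) as

        V_{\mathrm C}(\mathbf r) \approx \sum_{k} c_{k}\,\chi_{k}(\mathbf r),
        \qquad
        J_{\mu\nu} \approx \sum_{k} c_{k} \int \phi_{\mu}(\mathbf r)\,\chi_{k}(\mathbf r)\,\phi_{\nu}(\mathbf r)\,d\mathbf r,

    where the \chi_{k} are the mixed Gaussian/finite-element auxiliary functions and the remaining integrals are the three-center one-electron overlaps mentioned above.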

  7. 21 CFR 530.24 - Procedure for announcing analytical methods for drug residue quantification.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 6 2010-04-01 2010-04-01 false Procedure for announcing analytical methods for...-Producing Animals § 530.24 Procedure for announcing analytical methods for drug residue quantification. (a) FDA may issue an order announcing a specific analytical method or methods for the quantification...

  8. Analytical methods in untargeted metabolomics: state of the art in 2015.

    PubMed

    Alonso, Arnald; Marsal, Sara; Julià, Antonio

    2015-01-01

    Metabolomics comprises the methods and techniques that are used to measure the small molecule composition of biofluids and tissues, and is currently one of the most rapidly evolving research fields. The determination of the metabolomic profile - the metabolome - has multiple applications in many biological sciences, including the development of new diagnostic tools in medicine. Recent technological advances in nuclear magnetic resonance and mass spectrometry are significantly improving our capacity to obtain more data from each biological sample. Consequently, there is a need for fast and accurate statistical and bioinformatic tools that can deal with the complexity and volume of the data generated in metabolomic studies. In this review, we provide an update of the most commonly used analytical methods in metabolomics, starting from raw data processing and ending with pathway analysis and biomarker identification. Finally, the integration of metabolomic profiles with molecular data from other high-throughput biotechnologies is also reviewed.

  9. Analytical Methods in Untargeted Metabolomics: State of the Art in 2015

    PubMed Central

    Alonso, Arnald; Marsal, Sara; Julià, Antonio

    2015-01-01

    Metabolomics comprises the methods and techniques that are used to measure the small molecule composition of biofluids and tissues, and is currently one of the most rapidly evolving research fields. The determination of the metabolomic profile – the metabolome – has multiple applications in many biological sciences, including the development of new diagnostic tools in medicine. Recent technological advances in nuclear magnetic resonance and mass spectrometry are significantly improving our capacity to obtain more data from each biological sample. Consequently, there is a need for fast and accurate statistical and bioinformatic tools that can deal with the complexity and volume of the data generated in metabolomic studies. In this review, we provide an update of the most commonly used analytical methods in metabolomics, starting from raw data processing and ending with pathway analysis and biomarker identification. Finally, the integration of metabolomic profiles with molecular data from other high-throughput biotechnologies is also reviewed. PMID:25798438

  10. Polymeric vehicles for topical delivery and related analytical methods.

    PubMed

    Cho, Heui Kyoung; Cho, Jin Hun; Jeong, Seong Hoon; Cho, Dong Chul; Yeum, Jeong Hyun; Cheong, In Woo

    2014-04-01

    Recently a variety of polymeric vehicles, such as micelles, nanoparticles, and polymersomes, have been explored, and some of them are used clinically to deliver therapeutic drugs through the skin. In topical delivery, polymeric vehicles used as drug carriers should guarantee non-toxicity, long-term stability, and permeation efficacy for drugs. For the development of a successful topical delivery system, it is important to develop polymeric vehicles with well-defined intrinsic properties, such as molecular weight, HLB, chemical composition, topology, and specific ligand conjugation, and to investigate the effects of these properties on drug permeation behavior. In addition, the role of polymeric vehicles must be elucidated in in vitro and in vivo analyses. This article describes some important features of polymeric vehicles and the corresponding analytical methods in topical delivery, even though the span of polymer applications in the pharmaceutical field is truly broad.

  11. Application of analytical methods in authentication and adulteration of honey.

    PubMed

    Siddiqui, Amna Jabbar; Musharraf, Syed Ghulam; Choudhary, M Iqbal; Rahman, Atta-Ur-

    2017-02-15

    Honey is synthesized from flower nectar and has been famous for its tremendous therapeutic potential since ancient times. Many factors influence the basic properties of honey, including the nectar-providing plant species, bee species, geographic area, and harvesting conditions. The quality and composition of honey are also affected by many other factors, such as overfeeding of bees with sucrose, harvesting prior to maturity, and adulteration with sugar syrups. Due to the complex nature of honey, it is often challenging to authenticate its purity and quality using common methods such as physicochemical parameters, and more specialized procedures need to be developed. This article reviews the literature (between 2000 and 2016) on the use of analytical techniques, mainly NMR spectroscopy, for the authentication of honey, its botanical and geographical origin, and adulteration by sugar syrups. NMR is a powerful technique and can be used as a fingerprinting technique to compare various samples. PMID:27664687

  12. Analytical Failure Prediction Method Developed for Woven and Braided Composites

    NASA Technical Reports Server (NTRS)

    Min, James B.

    2003-01-01

    Historically, advances in aerospace engine performance and durability have been linked to improvements in materials. Recent developments in ceramic matrix composites (CMCs) have led to increased interest in CMCs to achieve revolutionary gains in engine performance. The use of CMCs promises many advantages for advanced turbomachinery engine development and may be especially beneficial for aerospace engines. The most beneficial aspects of CMC materials may be their ability to maintain strength at temperatures above 2500 °F, their internal material damping, and their relatively low density. Ceramic matrix composites reinforced with two-dimensional woven and braided fabric preforms are being considered for NASA's next-generation reusable rocket turbomachinery applications. However, the architecture of a textile composite is complex, and therefore the parameters controlling its strength properties are numerous. This necessitates the development of engineering approaches that combine analytical methods with limited testing to provide effective, validated design analyses for the development of textile composite structures.

  13. The Finite Analytic Method for steady and unsteady heat transfer problems

    NASA Technical Reports Server (NTRS)

    Chen, C.-J.; Li, P.

    1980-01-01

    A new numerical method, called the finite analytic method, for solving partial differential equations is introduced. The basic idea of the finite analytic method is the incorporation of local analytic solutions in obtaining the numerical solution of the problem. The finite analytic method first divides the total region of the problem into small subregions in which local analytic solutions are obtained. Then an algebraic equation is derived from the local analytic solution for each subregion, relating an interior nodal value at a point P in the subregion to its neighboring nodal values. The assembly of all the local analytic solutions thus provides the finite analytic numerical solution of the problem. In this paper the finite analytic method is illustrated by solving steady and unsteady heat transfer problems.
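    As a toy illustration of this idea (ours, not from the paper), consider steady one-dimensional heat conduction: on each small subregion the local analytic solution of u'' = 0 is linear, which relates an interior node to its neighbours as u_P = (u_W + u_E)/2; assembling and iterating these relations gives the numerical solution.

        import numpy as np

        def finite_analytic_1d(u_left=0.0, u_right=1.0, n=11, iters=5000):
            # Fixed boundary temperatures, n nodes, simple Jacobi-style iteration
            u = np.zeros(n)
            u[0], u[-1] = u_left, u_right
            for _ in range(iters):
                u[1:-1] = 0.5 * (u[:-2] + u[2:])   # local analytic relation at interior nodes
            return u   # converges to the linear temperature profile between the boundaries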

  14. [Analytical methods for control of foodstuffs made from bioengineered plants].

    PubMed

    Chernysheva, O N; Sorokina, E Iu

    2013-01-01

    Foodstuffs made by modern biotechnology require special control. The analytical methods used for these purposes are being constantly perfected. When choosing a strategy for the analysis, several factors have to be assessed: specificity, sensitivity, practicality of the method and time efficiency. To date, GMO testing methods are mainly based on the inserted DNA sequences and the newly produced proteins in GMOs. Protein detection methods are based mainly on ELISA. The specific detection of a novel protein synthesized by a gene introduced during transformation constitutes an alternative approach for the identification of GMOs. The genetic modification is not always specifically directed at the production of a novel protein and does not always result in protein expression levels sufficient for detection purposes. In addition, some proteins may be expressed only in specific parts of the plant or expressed at different levels in distinct parts of the plant. As DNA is a rather stable molecule relative to proteins, it is the preferred target for any kind of sample. These methods are more sensitive and specific than protein detection methods. PCR-based tests can be categorized into several levels of specificity. The least specific methods are commonly called "screening methods" and relate to target DNA elements, such as promoters and terminators, that are present in many different GMOs. For routine screening purposes, the regulatory elements 35S promoter, derived from the Cauliflower Mosaic Virus, and the NOS terminator, derived from the nopaline synthase gene of Agrobacterium tumefaciens, are used as target sequences. The second level is "gene-specific methods". These methods target a part of the DNA harbouring the active gene associated with the specific genetic modification. The highest specificity is seen when the target is the unique junction found at the integration locus between the inserted DNA and the recipient genome. These are called "event-specific methods". For a

  16. Fast myocardial strain estimation from 3D ultrasound through elastic image registration with analytic regularization

    NASA Astrophysics Data System (ADS)

    Chakraborty, Bidisha; Heyde, Brecht; Alessandrini, Martino; D'hooge, Jan

    2016-04-01

    Image registration techniques using free-form deformation models have shown promising results for 3D myocardial strain estimation from ultrasound. However, the use of this technique has mostly been limited to research institutes due to the high computational demand, which is primarily due to the computational load of the regularization term ensuring spatially smooth cardiac strain estimates. Indeed, this term typically requires evaluating derivatives of the transformation field numerically in each voxel of the image during every iteration of the optimization process. In this paper, we replace this time-consuming step with a closed-form solution directly associated with the transformation field, resulting in a speed-up factor of ~10-60,000 for a typical 3D B-mode image of 250³ and 500³ voxels, depending upon the size and the parametrization of the transformation field. The performance of the numeric and the analytic solutions was contrasted by computing tracking and strain accuracy on two realistic synthetic 3D cardiac ultrasound sequences, mimicking two ischemic motion patterns. Mean and standard deviation of the displacement errors over the cardiac cycle for the numeric and analytic solutions were 0.68 ± 0.40 mm and 0.75 ± 0.43 mm, respectively. Correlations for the radial, longitudinal and circumferential strain components at end-systole were 0.89, 0.83 and 0.95 versus 0.90, 0.88 and 0.92 for the numeric and analytic regularization, respectively. The analytic solution matched the performance of the numeric solution, as no statistically significant differences (p > 0.05) were found when expressed in terms of bias or limits of agreement.

  17. Analytic characterization of linear accelerator radiosurgery dose distributions for fast optimization.

    PubMed

    Meeks, S L; Bova, F J; Buatti, J M; Friedman, W A; Eyster, B; Kendrick, L A

    1999-11-01

    Linear accelerator (linac) radiosurgery utilizes non-coplanar arc therapy delivered through circular collimators. Generally, spherically symmetric arc sets are used, resulting in nominally spherical dose distributions. Various treatment planning parameters may be manipulated to provide dose conformation to irregular lesions. Iterative manipulation of these variables can be a difficult and time-consuming task, because (a) understanding the effect of these parameters is complicated and (b) three-dimensional (3D) dose calculations are computationally expensive. This manipulation can be simplified, however, because the prescription isodose surface for all single isocentre distributions can be approximated by conic sections. In this study, the effects of treatment planning parameter manipulation on the dimensions of the treatment isodose surface were determined empirically. These dimensions were then fitted to analytic functions, assuming that the dose distributions were characterized as conic sections. These analytic functions allowed real-time approximation of the 3D isodose surface. Iterative plan optimization, either manual or automated, is achieved more efficiently using this real time approximation of the dose matrix. Subsequent to iterative plan optimization, the analytic function is related back to the appropriate plan parameters, and the dose distribution is determined using conventional dosimetry calculations. This provides a pseudo-inverse approach to radiosurgery optimization, based solely on geometric considerations.

  18. Method of Analytic Evolution of Flat Distribution Amplitudes in QCD

    SciTech Connect

    Asli Tandogan, Anatoly V. Radyushkin

    2011-11-01

    A new analytical method of performing ERBL evolution is described. The main goal is to develop an approach that works for distribution amplitudes that do not vanish at the end points, for which the standard method of expansion in Gegenbauer polynomials is inefficient. Two cases of the initial DA are considered: a purely flat DA, given by the same constant for all x, and an antisymmetric DA given by opposite constants for x < 1/2 or x > 1/2. For a purely flat DA, the evolution is governed by an overall (x(1-x))^t dependence on the evolution parameter t times a factor that was calculated as an expansion in t. For an antisymmetric flat DA, an extra overall factor |1-2x|^(2t) appears due to a jump at x = 1/2. A good convergence was observed in the t ≲ 1/2 region. For larger t, one can use the standard method of the Gegenbauer expansion.

  19. The between-day reproducibility of fasting, satiety-related analytes in 8- to 11-year-old boys.

    PubMed

    Allsop, Susan; Rumbold, Penny L S; Green, Benjamin P

    2016-10-01

    The aim of the present study was to establish the between-day reproducibility of fasting plasma GLP-1(7-36), glucagon, leptin, insulin and glucose in lean and overweight/obese 8-11-year-old boys. A within-group study design was utilised wherein the boys attended two study days, separated by 1 week, where a fasting fingertip capillary blood sample was obtained. Deming regression, mean difference, Bland-Altman limits of agreement (LOA) and typical imprecision as a percentage coefficient of variation (CV%) were utilised to assess reproducibility between days. On a group level, Deming regression detected no evidence of systematic or proportional bias between days for all of the satiety-related analytes; however, only glucose and plasma GLP-1(7-36) displayed low typical and random imprecision. When analysed according to body composition, good reproducibility was maintained for glucose in the overweight/obese boys and for plasma GLP-1(7-36) in those with lean body mass. The present findings demonstrate that the measurement of glucose and plasma GLP-1(7-36) by fingertip capillary sampling is, on a group level, reproducible between days in 8-11-year-old boys. Comparison of blood glucose obtained by fingertip capillary sampling can be made between lean and overweight/obese 8-11-year-old boys. Presently, the comparison of fasting plasma GLP-1(7-36) according to body weight is inappropriate due to the high imprecision observed in lean boys between days. The use of fingertip capillary sampling in the measurement of satiety-related analytes has the potential to provide a better understanding of the mechanisms that affect appetite and feeding behaviour in children. PMID:27265877
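    A generic sketch (not the authors' exact computation) of the agreement statistics named above, for one analyte measured on the two study days in the same boys:

        import numpy as np

        def between_day_agreement(day1, day2):
            # day1, day2: paired fasting values from the two study days
            d1, d2 = np.asarray(day1, float), np.asarray(day2, float)
            d = d1 - d2
            bias = d.mean()                                   # mean difference
            sd = d.std(ddof=1)
            loa = (bias - 1.96 * sd, bias + 1.96 * sd)        # Bland-Altman 95% limits of agreement
            within_sd = np.sqrt(np.mean(d ** 2) / 2.0)        # within-subject SD from duplicate measures
            cv_pct = 100.0 * within_sd / np.mean((d1 + d2) / 2.0)   # typical imprecision as CV%
            return bias, loa, cv_pct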

  1. 21 CFR 530.22 - Safe levels and analytical methods for food-producing animals.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... analytical method; or (3) Establish a safe level based on other appropriate scientific, technical, or... 21 Food and Drugs 6 2014-04-01 2014-04-01 false Safe levels and analytical methods for food... § 530.22 Safe levels and analytical methods for food-producing animals. (a) FDA may establish a...

  2. 21 CFR 530.22 - Safe levels and analytical methods for food-producing animals.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... analytical method; or (3) Establish a safe level based on other appropriate scientific, technical, or regulatory criteria. (b) FDA may require the development of an acceptable analytical method for the... such an acceptable analytical method, the agency will publish notice of that requirement in the...

  3. 21 CFR 530.22 - Safe levels and analytical methods for food-producing animals.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... analytical method; or (3) Establish a safe level based on other appropriate scientific, technical, or regulatory criteria. (b) FDA may require the development of an acceptable analytical method for the... such an acceptable analytical method, the agency will publish notice of that requirement in the...

  4. 21 CFR 530.22 - Safe levels and analytical methods for food-producing animals.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... analytical method; or (3) Establish a safe level based on other appropriate scientific, technical, or regulatory criteria. (b) FDA may require the development of an acceptable analytical method for the... such an acceptable analytical method, the agency will publish notice of that requirement in the...

  5. 21 CFR 530.40 - Safe levels and availability of analytical methods.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 6 2010-04-01 2010-04-01 false Safe levels and availability of analytical methods... Safe levels and availability of analytical methods. (a) In accordance with § 530.22, the following safe... accordance with § 530.22, the following analytical methods have been accepted by FDA:...

  6. 21 CFR 530.22 - Safe levels and analytical methods for food-producing animals.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 6 2010-04-01 2010-04-01 false Safe levels and analytical methods for food... § 530.22 Safe levels and analytical methods for food-producing animals. (a) FDA may establish a safe... analytical method; or (3) Establish a safe level based on other appropriate scientific, technical,...

  7. Segmentation of hand radiographs using fast marching methods

    NASA Astrophysics Data System (ADS)

    Chen, Hong; Novak, Carol L.

    2006-03-01

    Rheumatoid Arthritis is one of the most common chronic diseases. Joint space width in hand radiographs is evaluated to assess joint damage in order to monitor progression of disease and response to treatment. Manual measurement of joint space width is time-consuming and highly prone to inter- and intra-observer variation. We propose a method for automatic extraction of finger bone boundaries using fast marching methods for quantitative evaluation of joint space width. The proposed algorithm includes two stages: location of hand joints followed by extraction of bone boundaries. By setting the propagation speed of the wave front as a function of image intensity values, the fast marching algorithm extracts the skeleton of the hands, in which each branch corresponds to a finger. The finger joint locations are then determined by using the image gradients along the skeletal branches. In order to extract bone boundaries at joints, the gradient magnitudes are utilized for setting the propagation speed, and the gradient phases are used for discriminating the boundaries of adjacent bones. The bone boundaries are detected by searching for the fastest paths from one side of each joint to the other side. Finally, joint space width is computed based on the extracted upper and lower bone boundaries. The algorithm was evaluated on a test set of 8 two-hand radiographs, including images from healthy patients and from patients suffering from arthritis, gout and psoriasis. Using our method, 97% of 208 joints were accurately located and 89% of 416 bone boundaries were correctly extracted.
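    A hedged sketch of the front-propagation step described above, using the scikit-fmm package (assumed available) to compute arrival times from a seed pixel with an intensity-dependent speed; this only illustrates the general fast marching mechanism, not the authors' full segmentation pipeline.

        import numpy as np
        import skfmm  # scikit-fmm, assumed available

        def arrival_times(image, seed_rc, eps=1e-6):
            # image: 2D radiograph as a float array; seed_rc: (row, col) of the starting point
            phi = np.ones_like(image, dtype=float)
            phi[seed_rc] = -1.0                          # zero level set surrounds the seed pixel
            speed = image / (image.max() + eps) + eps    # brighter pixels -> faster propagation
            return skfmm.travel_time(phi, speed, dx=1.0)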

  8. How to assess the quality of your analytical method?

    PubMed

    Topic, Elizabeta; Nikolac, Nora; Panteghini, Mauro; Theodorsson, Elvar; Salvagno, Gian Luca; Miler, Marijana; Simundic, Ana-Maria; Infusino, Ilenia; Nordin, Gunnar; Westgard, Sten

    2015-10-01

    Laboratory medicine is amongst the fastest growing fields in medicine, crucial in diagnosis, in the support of prevention and the monitoring of disease for individual patients, and in the evaluation of treatment for populations of patients. Therefore, high quality and safety in laboratory testing have a prominent role in high-quality healthcare. The applied knowledge and competencies of professionals in laboratory medicine increase the clinical value of laboratory results by decreasing laboratory errors, increasing the appropriate utilization of tests, and increasing cost effectiveness. This collective paper provides insights into how to validate laboratory assays and assess the quality of methods. It is a synopsis of the lectures at the 15th European Federation of Clinical Chemistry and Laboratory Medicine (EFLM) Continuing Postgraduate Course in Clinical Chemistry and Laboratory Medicine entitled "How to assess the quality of your method?" (Zagreb, Croatia, 24-25 October 2015). The leading topics discussed include who should do what, and when, in the validation/verification of methods; verification of imprecision and bias; verification of reference intervals; verification of qualitative test procedures; verification of blood collection systems; comparability of results among methods and analytical systems; limit of detection, limit of quantification and limit of decision; how to assess measurement uncertainty; the optimal use of Internal Quality Control and External Quality Assessment data; Six Sigma metrics; performance specifications; as well as biological variation. This article, which continues the annual tradition of collective papers from the EFLM continuing postgraduate courses in clinical chemistry and laboratory medicine, aims to provide further contributions by discussing the quality of laboratory methods and measurements and, at the same time, to offer continuing professional development to the attendees.
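    One of the listed topics, the Six Sigma metric, is commonly computed with the standard Westgard formula sketched below (a generic illustration, not specific to the course material):

        def sigma_metric(tea_pct, bias_pct, cv_pct):
            # sigma = (allowable total error - |bias|) / imprecision, all in percent
            return (tea_pct - abs(bias_pct)) / cv_pct

        # Example: TEa = 10%, bias = 1.5%, CV = 2%  ->  sigma = 4.25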

  9. Analytical resource assessment method for continuous (unconventional) oil and gas accumulations - The "ACCESS" Method

    USGS Publications Warehouse

    Crovelli, Robert A.; revised by Charpentier, Ronald R.

    2012-01-01

    The U.S. Geological Survey (USGS) periodically assesses petroleum resources of areas within the United States and the world. The purpose of this report is to explain the development of an analytic probabilistic method and spreadsheet software system called Analytic Cell-Based Continuous Energy Spreadsheet System (ACCESS). The ACCESS method is based upon mathematical equations derived from probability theory. The ACCESS spreadsheet can be used to calculate estimates of the undeveloped oil, gas, and NGL (natural gas liquids) resources in a continuous-type assessment unit. An assessment unit is a mappable volume of rock in a total petroleum system. In this report, the geologic assessment model is defined first, the analytic probabilistic method is described second, and the spreadsheet ACCESS is described third. In this revised version of Open-File Report 00-044, the text has been updated to reflect modifications that were made to the ACCESS program. Two versions of the program are added as appendixes.

  10. An analytical method for predicting postwildfire peak discharges

    USGS Publications Warehouse

    Moody, John A.

    2012-01-01

    An analytical method presented here that predicts postwildfire peak discharge was developed from analysis of paired rainfall and runoff measurements collected from selected burned basins. Data were collected from 19 mountainous basins burned by eight wildfires in different hydroclimatic regimes in the western United States (California, Colorado, Nevada, New Mexico, and South Dakota). Most of the data were collected for the year of the wildfire and for 3 to 4 years after the wildfire. These data provide some estimate of the changes with time of postwildfire peak discharges, which are known to be transient but have received little documentation. The only required inputs for the analytical method are the burned area and a quantitative measure of soil burn severity (change in the normalized burn ratio), which is derived from Landsat reflectance data and is available from either the U.S. Department of Agriculture Forest Service or the U.S. Geological Survey. The method predicts the postwildfire peak discharge per unit burned area for the year of a wildfire, the first year after a wildfire, and the second year after a wildfire. It can be used at three levels of information depending on the data available to the user; each subsequent level requires either more data or more processing of the data. Level 1 requires only the burned area. Level 2 requires the burned area and the basin average value of the change in the normalized burn ratio. Level 3 requires the burned area and the calculation of the hydraulic functional connectivity, which is a variable that incorporates the sequence of soil burn severity along hillslope flow paths within the burned basin. Measurements indicate that the unit peak discharge response increases abruptly when the 30-minute maximum rainfall intensity is greater than about 5 millimeters per hour (0.2 inches per hour). This threshold may relate to a change in runoff generation from saturated-excess to infiltration-excess overland flow. The

  11. A fast target location method for the photogrammetry system

    NASA Astrophysics Data System (ADS)

    Wang, Jun; Dong, Mingli; Liang, Bo

    2010-12-01

    In close-range photogrammetry and vision metrology, several images taken at different stations are required for high accuracy. Before camera calibration and 3D reconstruction, the targets in the images must first be recognized and located with high accuracy. Furthermore, monitoring surface deformation requires a real-time, on-line photogrammetry system, in which high speed is necessary. The image processing method therefore affects both the accuracy and the speed of the photogrammetry system. This paper describes a fast target location method for the photogrammetry system. Experimental results show that the target edge pixels preserve the geometric information needed for subpixel centroid estimation, which can reach accuracies of 2-3% of the pixel size. The processing time for an image of 3008x2000 pixels is about 0.1 s, much faster than other similar methods.
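
    The subpixel centroid idea mentioned above can be illustrated with a few lines of NumPy: weighting pixel coordinates by background-subtracted intensity pulls the estimated centre between pixel centres. The synthetic Gaussian target and the simple minimum-value background removal below are assumptions for the demonstration, not details from the paper.

```python
import numpy as np

def subpixel_centroid(patch):
    """Intensity-weighted centroid of a target patch, in (x, y) pixel units."""
    ys, xs = np.indices(patch.shape)
    w = patch.astype(float) - patch.min()        # crude background removal
    return (xs * w).sum() / w.sum(), (ys * w).sum() / w.sum()

# Synthetic circular target whose true centre lies between pixel centres.
yy, xx = np.mgrid[0:21, 0:21]
target = np.exp(-((xx - 10.3) ** 2 + (yy - 9.7) ** 2) / 8.0)
print(subpixel_centroid(target))                 # approximately (10.3, 9.7)
```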

  12. A fast target location method for the photogrammetry system

    NASA Astrophysics Data System (ADS)

    Wang, Jun; Dong, Mingli; Liang, Bo

    2011-05-01

    In close-range photogrammetry and vision metrology, several images taken at different stations are required for high accuracy. Before camera calibration and 3D reconstruction, the targets in the images must first be recognized and located with high accuracy. Furthermore, monitoring surface deformation requires a real-time, on-line photogrammetry system, in which high speed is necessary. The image processing method therefore affects both the accuracy and the speed of the photogrammetry system. This paper describes a fast target location method for the photogrammetry system. Experimental results show that the target edge pixels preserve the geometric information needed for subpixel centroid estimation, which can reach accuracies of 2-3% of the pixel size. The processing time for an image of 3008x2000 pixels is about 0.1 s, much faster than other similar methods.

  13. Improved Fermi operator expansion methods for fast electronic structure calculations

    NASA Astrophysics Data System (ADS)

    Liang, WanZhen; Saravanan, Chandra; Shao, Yihan; Baer, Roi; Bell, Alexis T.; Head-Gordon, Martin

    2003-08-01

    Linear scaling algorithms based on Fermi operator expansions (FOE) have been considered significantly slower than other alternative approaches in evaluating the density matrix in Kohn-Sham density functional theory, despite their attractive simplicity. In this work, two new improvements to the FOE method are introduced. First, novel fast summation methods are employed to evaluate a matrix polynomial or Chebyshev matrix polynomial with matrix multiplications totalling roughly twice the square root of the degree of the polynomial. Second, six different representations of the Fermi operators are compared to assess the smallest possible degree of polynomial expansion for a given target precision. The optimal choice appears to be the complementary error function. Together, these advances make the FOE method competitive with the best existing alternatives.
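
    A minimal sketch of a Fermi operator expansion is given below (Python/NumPy): the Fermi-Dirac function is expanded in Chebyshev polynomials over the spectral range of a small model Hamiltonian, and the matrix polynomial is summed with the Clenshaw recurrence. The fast summation described in the abstract would reduce the number of matrix multiplications to roughly twice the square root of the polynomial degree; the plain recurrence used here needs one multiplication per term, and the model Hamiltonian and parameters are illustrative assumptions.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def fermi_density_matrix(H, mu, beta, emin, emax, degree=200):
    """Chebyshev (Fermi operator) expansion of f(H) = 1/(1 + exp(beta*(H - mu))).

    H is scaled so its spectrum maps into [-1, 1], the Fermi function is fitted
    at Chebyshev points, and the matrix polynomial is evaluated by Clenshaw's
    recurrence (one matrix multiplication per term in this simple version)."""
    a, b = 0.5 * (emax - emin), 0.5 * (emax + emin)
    x = np.cos(np.pi * (np.arange(degree + 1) + 0.5) / (degree + 1))  # Chebyshev nodes
    f = 1.0 / (1.0 + np.exp(beta * (a * x + b - mu)))
    coef = C.chebfit(x, f, degree)

    n = len(H)
    I = np.eye(n)
    Hs = (H - b * I) / a                       # spectrum mapped into [-1, 1]
    bk1 = np.zeros_like(H)
    bk2 = np.zeros_like(H)
    for c in coef[:0:-1]:                      # Clenshaw: c_N down to c_1
        bk1, bk2 = 2.0 * Hs @ bk1 - bk2 + c * I, bk1
    return Hs @ bk1 - bk2 + coef[0] * I

# Tight-binding-like test Hamiltonian (hypothetical parameters).
n = 8
H = np.diag(np.linspace(-2, 2, n)) + 0.1 * (np.eye(n, k=1) + np.eye(n, k=-1))
P = fermi_density_matrix(H, mu=0.0, beta=20.0, emin=-3.0, emax=3.0)
print(np.trace(P))    # approximate electron count at this chemical potential
```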

  14. A Fast Estimation Method of Railway Passengers' Flow

    NASA Astrophysics Data System (ADS)

    Nagasaki, Yusaku; Asuka, Masashi; Komaya, Kiyotoshi

    To evaluate a train schedule from the viewpoint of passengers' convenience, it is important to know each passenger's choice of trains and transfer stations on the way to his/her destination. Because such behavior is difficult to measure directly, methods for estimating railway passenger flow have been proposed to support this kind of evaluation. However, a train schedule planning system equipped with those methods is impractical because the estimation takes too long to complete. In this article, the authors propose a fast passenger flow estimation method that exploits features of the passenger flow graph, using a preparatory search based on each train's arrival time at each station. The authors also show the results of passenger flow estimation applied to a railway in an urban area.

  15. Pesticides in honey: A review on chromatographic analytical methods.

    PubMed

    Tette, Patrícia Amaral Souza; Rocha Guidi, Letícia; Glória, Maria Beatriz de Abreu; Fernandes, Christian

    2016-01-01

    Honey is a widely consumed product owing to its nutritional and antimicrobial properties. However, residues of pesticides, used to treat pests in the hive or in nearby crop fields, can compromise its quality. Therefore, determination of these contaminants in honey is essential, since the use of pesticides has increased significantly in recent decades because of the growing demand for food production. Furthermore, pesticides in honey can be an indicator of environmental contamination. As the concentration of these compounds in honey is usually at trace levels and several pesticides can be found simultaneously, highly sensitive and selective techniques are required. In this context, miniaturized sample preparation approaches and liquid or gas chromatography coupled to mass spectrometry have become the most important analytical techniques. In this review we present and discuss recent studies dealing with pesticide determination in honey, focusing on sample preparation and separation/detection methods as well as application of the developed methods worldwide. Furthermore, trends and future perspectives are presented. PMID:26717823

  16. Pesticides in honey: A review on chromatographic analytical methods.

    PubMed

    Tette, Patrícia Amaral Souza; Rocha Guidi, Letícia; Glória, Maria Beatriz de Abreu; Fernandes, Christian

    2016-01-01

    Honey is a widely consumed product owing to its nutritional and antimicrobial properties. However, residues of pesticides, used to treat pests in the hive or in nearby crop fields, can compromise its quality. Therefore, determination of these contaminants in honey is essential, since the use of pesticides has increased significantly in recent decades because of the growing demand for food production. Furthermore, pesticides in honey can be an indicator of environmental contamination. As the concentration of these compounds in honey is usually at trace levels and several pesticides can be found simultaneously, highly sensitive and selective techniques are required. In this context, miniaturized sample preparation approaches and liquid or gas chromatography coupled to mass spectrometry have become the most important analytical techniques. In this review we present and discuss recent studies dealing with pesticide determination in honey, focusing on sample preparation and separation/detection methods as well as application of the developed methods worldwide. Furthermore, trends and future perspectives are presented.

  17. A method of fast mosaic for massive UAV images

    NASA Astrophysics Data System (ADS)

    Xiang, Ren; Sun, Min; Jiang, Cheng; Liu, Lei; Zheng, Hui; Li, Xiaodong

    2014-11-01

    With the development of UAV technology, UAVs are used widely in fields such as agriculture, forest protection, mineral exploration, natural disaster management and surveillance of public security events. In contrast to traditional manned aerial remote sensing platforms, UAVs are cheaper and more flexible to use. Users can therefore obtain massive image data with UAVs, but processing that data takes a long time; for example, Pix4UAV needs approximately 10 hours to process 1000 images on a high-performance PC. Disaster management and many other fields, however, require a quick response, which is hard to achieve with massive image data. To address the high time consumption and the need for manual interaction, this article proposes a solution for fast UAV image stitching. GPS and POS data are used to pre-process the original UAV images: flight belts, and the relation between belts and images, are recognized automatically by the program, and useless images are discarded at the same time. This accelerates the search for matching points between images. The Levenberg-Marquardt algorithm is modified so that parallel computing can be applied, shortening the time for global optimization considerably. Besides the traditional mosaic result, the system can also generate a superoverlay result for Google Earth, which provides a fast and easy way to display the output. To verify the feasibility of this method, a fast mosaic system for massive UAV images was developed; it is fully automated, and no manual interaction is needed after the original images and GPS data are provided. A test using 800 images of the Kelan River in Xinjiang Province shows that this system reduces time consumption by 35%-50% compared with traditional methods and greatly increases the response speed of UAV image processing.
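
    As a small illustration of the GPS-based pre-processing step described above, the sketch below groups an image sequence into flight belts by detecting sharp heading changes between consecutive exposures. The 60-degree turn threshold and the synthetic lawn-mower track are assumptions; a real system would also use POS attitude data and overlap checks.

```python
import numpy as np

def group_into_belts(lons, lats, turn_threshold_deg=60.0):
    """Split a UAV exposure sequence into flight belts wherever the heading
    between consecutive exposures changes by more than the threshold."""
    heading = np.degrees(np.arctan2(np.diff(lats), np.diff(lons)))
    belts, current = [], [0]
    for i in range(1, len(lons)):
        if i >= 2:
            turn = abs((heading[i - 1] - heading[i - 2] + 180) % 360 - 180)
            if turn > turn_threshold_deg:
                belts.append(current)
                current = []
        current.append(i)
    belts.append(current)
    return belts

# Synthetic lawn-mower track: east along one line, back west along the next.
lons = np.r_[np.linspace(0, 1, 6), np.linspace(1, 0, 6)]
lats = np.r_[np.zeros(6), np.full(6, 0.001)]
# Two strips are recovered; the single exposure at the turn forms its own group.
print(group_into_belts(lons, lats))
```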

  18. Pyrrolizidine alkaloids in honey: comparison of analytical methods.

    PubMed

    Kempf, M; Wittig, M; Reinhard, A; von der Ohe, K; Blacquière, T; Raezke, K-P; Michel, R; Schreier, P; Beuerle, T

    2011-03-01

    Pyrrolizidine alkaloids (PAs) are a structurally diverse group of toxicologically relevant secondary plant metabolites. Currently, two analytical methods are used to determine PA content in honey. To achieve reasonably high sensitivity and selectivity, mass spectrometry detection is required. One method is an HPLC-ESI-MS-MS approach, the other a sum parameter method utilising HRGC-EI-MS operated in the selected ion monitoring mode (SIM). To date, no fully validated or standardised method exists to measure the PA content in honey. To establish an LC-MS method, several hundred standard pollen analysis results of raw honey were analysed. Possible PA plants were identified, along with typical commercially available marker PA-N-oxides (PANOs). Three distinct honey sets were analysed with both methods. Set A consisted of pure Echium honey (61-80% Echium pollen). Echium is an attractive bee plant. It is quite common in all temperate zones worldwide and is one of the major reasons for PA contamination in honey. Although only echimidine/echimidine-N-oxide were available as reference for the LC-MS target approach, the results for both analytical techniques matched very well (n = 8; PA content ranging from 311 to 520 µg kg(-1)). The second batch (B) consisted of a set of randomly picked raw honeys, mostly originating from Eupatorium spp. (0-15%), another common PA plant, usually characterised by the occurrence of lycopsamine-type PA. Again, the results showed good consistency in terms of PA-positive samples and quantification results (n = 8; ranging from 0 to 625 µg kg(-1) retronecine equivalents). The last set (C) was obtained by deliberately placing beehives in areas with a high abundance of Jacobaea vulgaris (ragwort) in the Veluwe region (the Netherlands). J. vulgaris increasingly invades countrysides in Central Europe, especially areas with reduced farming or sites with natural restorations. Honey from two seasons (2007 and 2008) was sampled. While only trace amounts of

  19. DEMONSTRATION OF THE ANALYTIC ELEMENT METHOD FOR WELLHEAD PROTECTION

    EPA Science Inventory

    A new computer program has been developed to determine time-of-travel capture zones in relatively simple geohydrological settings. The WhAEM package contains an analytic element model that uses superposition of (many) closed form analytical solutions to generate a ground-water fl...

  20. A square-wave adsorptive stripping voltammetric method for determination of fast green dye.

    PubMed

    Al-Ghamdi, Ali F

    2009-01-01

    Square-wave adsorptive stripping voltammetric (SW-AdSV) determinations of trace concentrations of the coloring agent fast green were described. The analytical methodology used was based on the adsorptive preconcentration of the dye on the hanging mercury drop electrode, and then a negative sweep was initiated. In pH 10 carbonate supporting electrolyte, fast green gave a well-defined and sensitive SW-AdSV peak at -1220 mV. The electroanalytical determination of this dye was found to be optimized in carbonate buffer (pH 10) with the following experimental conditions: accumulation time (120 s); accumulation potential (-0.8 V); scan rate (800 mV/s); pulse amplitude (90 mV); frequency (90 Hz); surface area of the working electrode (0.6 mm2); and the convection rate (2000 rpm). Under these optimized conditions, the AdSV peak current was proportional to concentration over the range 2 x 10(-8) to 6 x 10(-7) M (r = 0.999), with an LOD of 1.63 x 10(-10) M (0.132 ppb). This analytical approach offered greater sensitivity than conventional chromatography or spectrophotometry, and was simple and quick. The precision of the method in terms of RSD was 0.17%, whereas the accuracy was evaluated via a mean recovery of 99.6%. Possible interferences by several substances usually present as food additive azo dyes (E110, E102, E123, and E129), natural and artificial sweeteners, and antioxidants were also investigated. Applicability of the developed electroanalysis method was illustrated via the determination of fast green in ice cream and soft drink samples.
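
    The linear range and detection-limit calculations quoted above can be reproduced in form (not in the actual numbers) with a short calibration script; the peak currents below are hypothetical values, and the 3s/slope rule is a common convention rather than necessarily the study's exact procedure.

```python
import numpy as np
from scipy import stats

# Hypothetical SW-AdSV calibration: peak current (nA) vs. fast green
# concentration (M); values are illustrative, not from the cited study.
conc = np.array([2e-8, 5e-8, 1e-7, 2e-7, 4e-7, 6e-7])
peak = np.array([1.1, 2.6, 5.3, 10.4, 21.0, 31.2])

fit = stats.linregress(conc, peak)
residual_sd = np.std(peak - (fit.intercept + fit.slope * conc), ddof=2)

# Common 3*s/slope estimate of the limit of detection.
lod = 3 * residual_sd / fit.slope
print(f"r = {fit.rvalue:.4f}, LOD = {lod:.2e} M")
```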

  1. Temperature-controlled micro-TLC: a versatile green chemistry and fast analytical tool for separation and preliminary screening of steroids fraction from biological and environmental samples.

    PubMed

    Zarzycki, Paweł K; Slączka, Magdalena M; Zarzycka, Magdalena B; Bartoszuk, Małgorzata A; Włodarczyk, Elżbieta; Baran, Michał J

    2011-11-01

    This paper is a continuation of our previous research focusing on the development of micro-TLC methodology under temperature-controlled conditions. The main goal of the present paper is to demonstrate the separation and detection capability of the micro-TLC technique using simple analytical protocols without multi-step sample pre-purification. One of the advantages of planar chromatography over its column counterpart is that each TLC run can be performed on a previously unused stationary phase. Therefore, it is possible to fractionate or separate complex samples characterized by heavy biological matrix loading. In the present studies, components of interest, mainly steroids, were isolated from biological samples such as fish bile using single pre-treatment steps involving direct organic liquid extraction and/or deproteinization by freeze-drying. Low-molecular-mass compounds with polarity ranging from estetrol to progesterone derived from environmental samples (lake water, untreated and treated sewage waters) were concentrated using optimized solid-phase extraction (SPE). Specific band patterns for samples derived from surface water of Middle Pomerania in the northern part of Poland can easily be observed on the obtained micro-TLC chromatograms. This approach can be useful as a simple and inexpensive complementary method for fast control and screening of treated sewage water discharged by municipal wastewater treatment plants. Moreover, our experimental results show the potential of micro-TLC as an efficient tool for retention measurements of a wide range of steroids under reversed-phase (RP) chromatographic conditions. These data can be used for further optimization of SPE or HPLC systems working under RP conditions. Furthermore, we also demonstrated that the micro-TLC based analytical approach can be applied as an effective method for the internal standard (IS) substance search. Generally, the described methodology can be applied for fast fractionation or screening of the

  2. Temperature-controlled micro-TLC: a versatile green chemistry and fast analytical tool for separation and preliminary screening of steroids fraction from biological and environmental samples.

    PubMed

    Zarzycki, Paweł K; Slączka, Magdalena M; Zarzycka, Magdalena B; Bartoszuk, Małgorzata A; Włodarczyk, Elżbieta; Baran, Michał J

    2011-11-01

    This paper is a continuation of our previous research focusing on the development of micro-TLC methodology under temperature-controlled conditions. The main goal of the present paper is to demonstrate the separation and detection capability of the micro-TLC technique using simple analytical protocols without multi-step sample pre-purification. One of the advantages of planar chromatography over its column counterpart is that each TLC run can be performed on a previously unused stationary phase. Therefore, it is possible to fractionate or separate complex samples characterized by heavy biological matrix loading. In the present studies, components of interest, mainly steroids, were isolated from biological samples such as fish bile using single pre-treatment steps involving direct organic liquid extraction and/or deproteinization by freeze-drying. Low-molecular-mass compounds with polarity ranging from estetrol to progesterone derived from environmental samples (lake water, untreated and treated sewage waters) were concentrated using optimized solid-phase extraction (SPE). Specific band patterns for samples derived from surface water of Middle Pomerania in the northern part of Poland can easily be observed on the obtained micro-TLC chromatograms. This approach can be useful as a simple and inexpensive complementary method for fast control and screening of treated sewage water discharged by municipal wastewater treatment plants. Moreover, our experimental results show the potential of micro-TLC as an efficient tool for retention measurements of a wide range of steroids under reversed-phase (RP) chromatographic conditions. These data can be used for further optimization of SPE or HPLC systems working under RP conditions. Furthermore, we also demonstrated that the micro-TLC based analytical approach can be applied as an effective method for the internal standard (IS) substance search. Generally, the described methodology can be applied for fast fractionation or screening of the

  3. An analytical filter design method for guided wave phased arrays

    NASA Astrophysics Data System (ADS)

    Kwon, Hyu-Sang; Kim, Jin-Yeon

    2016-12-01

    This paper presents an analytical method for designing a spatial filter that processes the data from an array of two-dimensional guided wave transducers. An inverse problem is defined where the spatial filter coefficients are determined in such a way that a prescribed beam shape, i.e., a desired array output is best approximated in the least-squares sense. Taking advantage of the 2π-periodicity of the generated wave field, Fourier-series representation is used to derive closed-form expressions for the constituting matrix elements. Special cases in which the desired array output is an ideal delta function and a gate function are considered in a more explicit way. Numerical simulations are performed to examine the performance of the filters designed by the proposed method. It is shown that the proposed filters can significantly improve the beam quality in general. Most notable is that the proposed method does not compromise between the main lobe width and the sidelobe levels; i.e. a narrow main lobe and low sidelobes are simultaneously achieved. It is also shown that the proposed filter can compensate the effects of nonuniform directivity and sensitivity of array elements by explicitly taking these into account in the formulation. From an example of detecting two separate targets, how much the angular resolution can be improved as compared to the conventional delay-and-sum filter is quantitatively illustrated. Lamb wave based imaging of localized defects in an elastic plate using a circular array is also presented as an example of practical applications.
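
    The least-squares character of the design problem can be sketched numerically: build the array response matrix over steering angles and solve for weights that best match a prescribed beam in the least-squares sense. The circular-array geometry, the plane-wave response model and the Gaussian target beam below are assumptions for illustration; the paper derives closed-form coefficients via Fourier series rather than solving this discrete system.

```python
import numpy as np

n_elem, k_r = 16, 6.0                              # elements, dimensionless k*radius
elem_ang = 2 * np.pi * np.arange(n_elem) / n_elem
theta = np.linspace(-np.pi, np.pi, 361)            # evaluation (steering) angles

# Response matrix: phase of a plane wave from direction theta at element n.
A = np.exp(1j * k_r * np.cos(theta[:, None] - elem_ang[None, :]))

# Prescribed output: narrow main lobe at 0 rad, (near) zero elsewhere.
d = np.exp(-(theta / 0.3) ** 2)

w, *_ = np.linalg.lstsq(A, d.astype(complex), rcond=None)
beam = np.abs(A @ w)
main = beam[np.argmin(np.abs(theta))]
sidelobe = beam[np.abs(theta) > 0.8].max()
print(f"main lobe {main:.2f}, worst sidelobe outside +/-0.8 rad {sidelobe:.2f}")
```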

  4. An effective loading method of americium targets in fast reactors

    SciTech Connect

    Ohki, Shigeo; Sato, Isamu; Mizuno, Tomoyasu; Hayashi, Hideyuki; Tanaka, Kenya

    2007-07-01

    Recently, the development of target fuel with high americium (Am) content has been launched for the reduction of the overall fuel fabrication cost of the minor actinide (MA) recycling. In the framework of the development, this study proposes an effective loading method of Am targets in fast reactors. As a result of parametric survey calculations, we have found the ring-shaped target loading pattern between inner and outer core regions. This loading method is satisfactory both in core characteristics and in MA transmutation property. It should be noted that the Am targets can contribute to the suppression of the core power distribution change due to burnup. The major drawback of Am target is the production of helium gas. A target design modification by increasing the cladding thickness is found to be the most feasible measure to cope with the helium production. (authors)

  5. Fast inverse scattering solutions using the distorted Born iterative method and the multilevel fast multipole algorithm

    PubMed Central

    Hesford, Andrew J.; Chew, Weng C.

    2010-01-01

    The distorted Born iterative method (DBIM) computes iterative solutions to nonlinear inverse scattering problems through successive linear approximations. By decomposing the scattered field into a superposition of scattering by an inhomogeneous background and by a material perturbation, large or high-contrast variations in medium properties can be imaged through iterations that are each subject to the distorted Born approximation. However, the need to repeatedly compute forward solutions still imposes a very heavy computational burden. To ameliorate this problem, the multilevel fast multipole algorithm (MLFMA) has been applied as a forward solver within the DBIM. The MLFMA computes forward solutions in linear time for volumetric scatterers. The typically regular distribution and shape of scattering elements in the inverse scattering problem allow the method to take advantage of data redundancy and reduce the computational demands of the normally expensive MLFMA setup. Additional benefits are gained by employing Kaczmarz-like iterations, where partial measurements are used to accelerate convergence. Numerical results demonstrate both the efficiency of the forward solver and the successful application of the inverse method to imaging problems with dimensions in the neighborhood of ten wavelengths. PMID:20707438

  6. Analytical method to estimate resin cement diffusion into dentin

    NASA Astrophysics Data System (ADS)

    de Oliveira Ferraz, Larissa Cristina; Ubaldini, Adriana Lemos Mori; de Oliveira, Bruna Medeiros Bertol; Neto, Antonio Medina; Sato, Fracielle; Baesso, Mauro Luciano; Pascotto, Renata Corrêa

    2016-05-01

    This study analyzed the diffusion of two resin luting agents (resin cements) into dentin, with the aim of presenting an analytical method for estimating the thickness of the diffusion zone. Class V cavities were prepared in the buccal and lingual surfaces of molars (n=9). Indirect composite inlays were luted into the cavities with either a self-adhesive or a self-etch resin cement. The teeth were sectioned bucco-lingually and the cement-dentin interface was analyzed by using micro-Raman spectroscopy (MRS) and scanning electron microscopy. Evolution of peak intensities of the Raman bands, collected from the functional groups corresponding to the resin monomer (C–O–C, 1113 cm-1) present in the cements, and the mineral content (P–O, 961 cm-1) in dentin were sigmoid shaped functions. A Boltzmann function (BF) was then fitted to the peaks encountered at 1113 cm-1 to estimate the resin cement diffusion into dentin. The BF identified a resin cement-dentin diffusion zone of 1.8±0.4 μm for the self-adhesive cement and 2.5±0.3 μm for the self-etch cement. This analysis allowed the authors to estimate the diffusion of the resin cements into the dentin. Fitting the MRS data to the BF contributed to and is relevant for future studies of the adhesive interface.
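
    Fitting a Boltzmann sigmoid to a Raman line profile is a one-call job with SciPy; the sketch below uses simulated intensities for the 1113 cm-1 band. The positions, noise level and the 10-90% width convention are assumptions for illustration, not the study's actual data or exact criterion.

```python
import numpy as np
from scipy.optimize import curve_fit

def boltzmann(x, a1, a2, x0, dx):
    """Boltzmann sigmoid dropping from a1 (cement side) to a2 (dentin side)."""
    return a2 + (a1 - a2) / (1.0 + np.exp((x - x0) / dx))

# Hypothetical micro-Raman line scan across the interface: position (um) vs.
# intensity of the 1113 cm-1 C-O-C band (arbitrary units, illustrative only).
pos = np.linspace(-5, 5, 41)
intensity = boltzmann(pos, 100, 10, 0.0, 0.6) + np.random.normal(0, 2, pos.size)

p, _ = curve_fit(boltzmann, pos, intensity, p0=[90, 15, 0.0, 1.0])
a1, a2, x0, dx = p
# One common convention: take the diffusion-zone width as the 10-90% span of
# the fitted sigmoid (about 4.4*dx); the study's exact criterion may differ.
print(f"estimated diffusion zone width = {4.4 * abs(dx):.2f} um")
```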

  7. Analytical methods for waste minimisation in the convenience food industry.

    PubMed

    Darlington, R; Staikos, T; Rahimifard, S

    2009-04-01

    Waste creation in some sectors of the food industry is substantial, and while much of the used material is non-hazardous and biodegradable, it is often poorly dealt with and simply sent to landfill mixed with other types of waste. In this context, overproduction wastes were found in a number of cases to account for 20-40% of the material wastes generated by convenience food manufacturers (such as ready-meals and sandwiches), often simply just to meet the challenging demands placed on the manufacturer due to the short order reaction time provided by the supermarkets. Identifying specific classes of waste helps to minimise their creation, through consideration of what the materials constitute and why they were generated. This paper aims to provide means by which food industry wastes can be identified, and demonstrate these mechanisms through a practical example. The research reported in this paper investigated the various categories of waste and generated three analytical methods for the support of waste minimisation activities by food manufacturers. The waste classifications and analyses are intended to complement existing waste minimisation approaches and are described through consideration of a case study convenience food manufacturer that realised significant financial savings through waste measurement, analysis and reduction.

  8. Analytical method to estimate resin cement diffusion into dentin

    NASA Astrophysics Data System (ADS)

    de Oliveira Ferraz, Larissa Cristina; Ubaldini, Adriana Lemos Mori; de Oliveira, Bruna Medeiros Bertol; Neto, Antonio Medina; Sato, Fracielle; Baesso, Mauro Luciano; Pascotto, Renata Corrêa

    2016-05-01

    This study analyzed the diffusion of two resin luting agents (resin cements) into dentin, with the aim of presenting an analytical method for estimating the thickness of the diffusion zone. Class V cavities were prepared in the buccal and lingual surfaces of molars (n=9). Indirect composite inlays were luted into the cavities with either a self-adhesive or a self-etch resin cement. The teeth were sectioned bucco-lingually and the cement-dentin interface was analyzed by using micro-Raman spectroscopy (MRS) and scanning electron microscopy. Evolution of peak intensities of the Raman bands, collected from the functional groups corresponding to the resin monomer (C-O-C, 1113 cm-1) present in the cements, and the mineral content (P-O, 961 cm-1) in dentin were sigmoid shaped functions. A Boltzmann function (BF) was then fitted to the peaks encountered at 1113 cm-1 to estimate the resin cement diffusion into dentin. The BF identified a resin cement-dentin diffusion zone of 1.8±0.4 μm for the self-adhesive cement and 2.5±0.3 μm for the self-etch cement. This analysis allowed the authors to estimate the diffusion of the resin cements into the dentin. Fitting the MRS data to the BF contributed to and is relevant for future studies of the adhesive interface.

  9. NIOSH Manual of Analytical Methods (third edition). Fourth supplement

    SciTech Connect

    Not Available

    1990-08-15

    The NIOSH Manual of Analytical Methods, 3rd edition, was updated for the following chemicals: allyl-glycidyl-ether, 2-aminopyridine, aspartame, bromine, chlorine, n-butylamine, n-butyl-glycidyl-ether, carbon-dioxide, carbon-monoxide, chlorinated-camphene, chloroacetaldehyde, p-chlorophenol, crotonaldehyde, 1,1-dimethylhydrazine, dinitro-o-cresol, ethyl-acetate, ethyl-formate, ethylenimine, sodium-fluoride, hydrogen-fluoride, cryolite, sodium-hexafluoroaluminate, formic-acid, hexachlorobutadiene, hydrogen-cyanide, hydrogen-sulfide, isopropyl-acetate, isopropyl-ether, isopropyl-glycidyl-ether, lead, lead-oxide, maleic-anhydride, methyl-acetate, methyl-acrylate, methyl-tert-butyl ether, methyl-cellosolve-acetate, methylcyclohexanol, 4,4'-methylenedianiline, monomethylaniline, monomethylhydrazine, nitric-oxide, p-nitroaniline, phenyl-ether, phenyl-ether-biphenyl mixture, phenyl-glycidyl-ether, phenylhydrazine, phosphine, ronnel, sulfuryl-fluoride, talc, tributyl-phosphate, 1,1,2-trichloro-1,2,2-trifluoroethane, trimellitic-anhydride, triorthocresyl-phosphate, triphenyl-phosphate, and vinyl-acetate.

  10. Fast mouse PK (Fast PK): a rapid screening method to increase pharmacokinetic throughput in pre-clinical drug discovery.

    PubMed

    Reddy, Jitendar; Madishetti, Sreedhar; Vachaspati, Prakash R

    2012-09-29

    We describe a rapid screening methodology for performing pharmacokinetic (PK) studies in mice called Fast PK. In this Fast PK method, two mice were used per compound and four blood samples were collected from each mouse. The sampling times were staggered (sparse sampling) between the two mice, thus yielding complete PK profile in singlicate across eight time points. The plasma PK parameters from Fast PK were comparable to that obtained from conventional PK methods. This method has been used to rapidly screen compounds in the early stages of drug discovery and about 600 compounds have been profiled in the last 3 years, which has resulted in reduction in the usage of mice by 800 per year in compliance with the 3R principles of animal ethics. In addition, this Fast PK method can also help in evaluating the PK parameters from the same set of animals used in safety/toxicology/efficacy studies without the need for satellite groups. PMID:22789493
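
    The pooled profile from such a staggered design can be reduced to standard non-compartmental parameters with a few lines of NumPy; the concentrations below are hypothetical, and taking the terminal slope from the last three points is a common but not universal convention.

```python
import numpy as np

# Sparse-sampling ('Fast PK'-style) design: two mice, four staggered samples
# each, giving one composite profile across eight time points (values made up).
time = np.array([0.25, 0.5, 1, 2, 4, 6, 8, 24])                # h
conc = np.array([1.8, 2.4, 2.1, 1.5, 0.9, 0.55, 0.35, 0.04])   # ug/mL

cmax, tmax = conc.max(), time[conc.argmax()]
auc_last = np.trapz(conc, time)                  # linear trapezoidal AUC(0-last)

# Terminal elimination rate from the last three points, for half-life.
kel = -np.polyfit(time[-3:], np.log(conc[-3:]), 1)[0]
print(f"Cmax={cmax} ug/mL at {tmax} h, AUC(0-last)={auc_last:.2f} ug*h/mL, "
      f"t1/2={np.log(2) / kel:.1f} h")
```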

  11. Fast detection of air contaminants using immunobiological methods

    NASA Astrophysics Data System (ADS)

    Schmitt, Katrin; Bolwien, Carsten; Sulz, Gerd; Koch, Wolfgang; Dunkhorst, Wilhelm; Lödding, Hubert; Schwarz, Katharina; Holländer, Andreas; Klockenbring, Torsten; Barth, Stefan; Seidel, Björn; Hofbauer, Wolfgang; Rennebarth, Torsten; Renzl, Anna

    2009-05-01

    The fast and direct identification of possibly pathogenic microorganisms in air is gaining increasing interest due to their threat for public health, e.g. in clinical environments or in clean rooms of food or pharmaceutical industries. We present a new detection method allowing the direct recognition of relevant germs or bacteria via fluorescence-labeled antibodies within less than one hour. In detail, an air-sampling unit passes particles in the relevant size range to a substrate which contains antibodies with fluorescence labels for the detection of a specific microorganism. After the removal of the excess antibodies the optical detection unit comprising reflected-light and epifluorescence microscopy can identify the microorganisms by fast image processing on a single-particle level. First measurements with the system to identify various test particles as well as interfering influences have been performed, in particular with respect to autofluorescence of dust particles. Specific antibodies for the detection of Aspergillus fumigatus spores have been established. The biological test system consists of protein A-coated polymer particles which are detected by a fluorescence-labeled IgG. Furthermore the influence of interfering particles such as dust or debris is discussed.

  12. Fast numerical treatment of nonlinear wave equations by spectral methods

    SciTech Connect

    Skjaeraasen, Olaf; Robinson, P. A.; Newman, D. L.

    2011-02-15

    A method is presented that accelerates spectral methods for numerical solution of a broad class of nonlinear partial differential wave equations that are first order in time and that arise in plasma wave theory. The approach involves exact analytical treatment of the linear part of the wave evolution including growth and damping as well as dispersion. After introducing the method for general scalar and vector equations, we discuss and illustrate it in more detail in the context of the coupling of high- and low-frequency plasma wave modes, as modeled by the electrostatic and electromagnetic Zakharov equations in multiple dimensions. For computational efficiency, the method uses eigenvector decomposition, which is particularly advantageous when the wave damping is mode-dependent and anisotropic in wavenumber space. In this context, it is shown that the method can significantly speed up numerical integration relative to standard spectral or finite difference methods by allowing much longer time steps, especially in the limit in which the nonlinear Schroedinger equation applies.
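
    The key idea, advancing the linear (dispersive, growing or damped) part of the wave equation exactly while stepping the nonlinearity separately, can be sketched with a split-step Fourier loop for a 1D nonlinear Schroedinger equation. The specific equation, grid and step size below are illustrative assumptions, not the coupled Zakharov system treated in the paper.

```python
import numpy as np

# Split-step Fourier sketch for i u_t + u_xx + |u|^2 u = 0: the linear
# (dispersive) part is advanced exactly in Fourier space; growth/damping
# terms would simply multiply the same exponential propagator.
n, L, dt, steps = 256, 40.0, 1e-3, 2000
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
u = 1.0 / np.cosh(x)                       # smooth, soliton-like initial condition

lin = np.exp(-1j * k ** 2 * dt)            # exact one-step linear propagator
for _ in range(steps):
    u = np.fft.ifft(lin * np.fft.fft(u))   # dispersion, handled analytically
    u *= np.exp(1j * np.abs(u) ** 2 * dt)  # nonlinear phase rotation
print("mass drift:",
      abs(np.trapz(np.abs(u) ** 2, x) - np.trapz(1 / np.cosh(x) ** 2, x)))
```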

  13. Introducing GAMER: A fast and accurate method for ray-tracing galaxies using procedural noise

    SciTech Connect

    Groeneboom, N. E.; Dahle, H.

    2014-03-10

    We developed a novel approach for fast and accurate ray-tracing of galaxies using procedural noise fields. Our method allows for efficient and realistic rendering of synthetic galaxy morphologies, where individual components such as the bulge, disk, stars, and dust can be synthesized in different wavelengths. These components follow empirically motivated overall intensity profiles but contain an additional procedural noise component that gives rise to complex natural patterns that mimic interstellar dust and star-forming regions. These patterns produce more realistic-looking galaxy images than using analytical expressions alone. The method is fully parallelized and creates accurate high- and low- resolution images that can be used, for example, in codes simulating strong and weak gravitational lensing. In addition to having a user-friendly graphical user interface, the C++ software package GAMER is easy to implement into an existing code.

  14. Estimating Fuel Cycle Externalities: Analytical Methods and Issues, Report 2

    SciTech Connect

    Barnthouse, L.W.; Cada, G.F.; Cheng, M.-D.; Easterly, C.E.; Kroodsma, R.L.; Lee, R.; Shriner, D.S.; Tolbert, V.R.; Turner, R.S.

    1994-07-01

    that also have not been fully addressed. This document contains two types of papers that seek to fill part of this void. Some of the papers describe analytical methods that can be applied to one of the five steps of the damage function approach. The other papers discuss some of the complex issues that arise in trying to estimate externalities. This report, the second in a series of eight reports, is part of a joint study by the U.S. Department of Energy (DOE) and the Commission of the European Communities (EC)* on the externalities of fuel cycles. Most of the papers in this report were originally written as working papers during the initial phases of this study. The papers provide descriptions of the (non-radiological) atmospheric dispersion modeling that the study uses; reviews much of the relevant literature on ecological and health effects, and on the economic valuation of those impacts; contains several papers on some of the more complex and contentious issues in estimating externalities; and describes a method for depicting the quality of scientific information that a study uses. The analytical methods and issues that this report discusses generally pertain to more than one of the fuel cycles, though not necessarily to all of them. The report is divided into six parts, each one focusing on a different subject area.

  15. Simultaneous sampling of volatile and non-volatile analytes in beer for fast fingerprinting by extractive electrospray ionization mass spectrometry.

    PubMed

    Zhu, Liang; Hu, Zhong; Gamez, Gerardo; Law, Wai Siang; Chen, HuanWen; Yang, ShuiPing; Chingin, Konstantin; Balabin, Roman M; Wang, Rui; Zhang, TingTing; Zenobi, Renato

    2010-09-01

    By gently bubbling nitrogen gas through beer, an effervescent beverage, both volatile and non-volatile compounds can be simultaneously sampled in the form of aerosol. This allows for fast (within seconds) fingerprinting by extractive electrospray ionization mass spectrometry (EESI-MS) in both negative and positive ion mode, without the need for any sample pre-treatment such as degassing and dilution. Trace analytes such as volatile esters (e.g., ethyl acetate and isoamyl acetate), free fatty acids (e.g., caproic acid, caprylic acid, and capric acid), semi/non-volatile organic/inorganic acids (e.g., lactic acid), and various amino acids, commonly present in beer at the low parts per million or at sub-ppm levels, were detected and identified based on tandem MS data. Furthermore, the appearance of solvent cluster ions in the mass spectra gives insight into the sampling and ionization mechanisms: aerosol droplets containing semi/non-volatile substances are thought to be generated via bubble bursting at the surface of the liquid; these neutral aerosol droplets then collide with the charged primary electrospray ionization droplets, followed by analyte extraction, desolvation, ionization, and MS detection. With principal component analysis, several beer samples were successfully differentiated. Therefore, the present study successfully extends the applicability of EESI-MS to the direct analysis of complex liquid samples with high gas content.
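
    The PCA-based differentiation step can be mimicked on synthetic fingerprints: rows are samples, columns are binned m/z intensities, and a two-component projection separates groups that differ in a few marker channels. All data below are synthetic assumptions; scikit-learn is assumed to be available.

```python
import numpy as np
from sklearn.decomposition import PCA

# Toy stand-in for EESI-MS fingerprints: two 'brands' of beer differing in a
# few marker m/z channels (entirely synthetic data).
rng = np.random.default_rng(0)
brand_a = rng.normal(0, 1, (6, 200)); brand_a[:, 20] += 5; brand_a[:, 75] += 3
brand_b = rng.normal(0, 1, (6, 200)); brand_b[:, 130] += 5
spectra = np.vstack([brand_a, brand_b])

scores = PCA(n_components=2).fit_transform(spectra)
print(np.round(scores, 2))   # the two groups separate in the PC1/PC2 plane
```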

  16. Methods for fast, reliable growth of Sn whiskers

    NASA Astrophysics Data System (ADS)

    Bozack, M. J.; Snipes, S. K.; Flowers, G. N.

    2016-10-01

    We report several methods to reliably grow dense fields of high-aspect-ratio tin whiskers for research purposes in a period of days to weeks. The techniques offer marked improvements over previous means to grow whiskers, which have struggled against the highly variable incubation period of tin whiskers and slow growth rate. Control of the film stress is the key to fast-growing whiskers, owing to the fact that whisker incubation and growth are fundamentally a stress-relief phenomenon. The ability to grow high-density fields of whiskers (10^3-10^6/cm^2) in a reasonable period of time (days, weeks) has accelerated progress in whisker growth and aided in development of whisker mitigation strategies.

  17. A Massively Parallel Adaptive Fast Multipole Method on Heterogeneous Architectures

    SciTech Connect

    Lashuk, Ilya; Chandramowlishwaran, Aparna; Langston, Harper; Nguyen, Tuan-Anh; Sampath, Rahul S; Shringarpure, Aashay; Vuduc, Richard; Ying, Lexing; Zorin, Denis; Biros, George

    2012-01-01

    We describe a parallel fast multipole method (FMM) for highly nonuniform distributions of particles. We employ both distributed memory parallelism (via MPI) and shared memory parallelism (via OpenMP and GPU acceleration) to rapidly evaluate two-body nonoscillatory potentials in three dimensions on heterogeneous high performance computing architectures. We have performed scalability tests with up to 30 billion particles on 196,608 cores on the AMD/CRAY-based Jaguar system at ORNL. On a GPU-enabled system (NSF's Keeneland at Georgia Tech/ORNL), we observed 30x speedup over a single core CPU and 7x speedup over a multicore CPU implementation. By combining GPUs with MPI, we achieve less than 10 ns/particle and six digits of accuracy for a run with 48 million nonuniformly distributed particles on 192 GPUs.

  18. Fast Second Degree Total Variation Method for Image Compressive Sensing

    PubMed Central

    Liu, Pengfei; Xiao, Liang; Zhang, Jun

    2015-01-01

    This paper presents a computationally efficient algorithm for image compressive sensing reconstruction using a second degree total variation (HDTV2) regularization. Firstly, a preferably equivalent formulation of the HDTV2 functional is derived, which can be formulated as a weighted L1-L2 mixed norm of second degree image derivatives under the spectral decomposition framework. Secondly, using the equivalent formulation of HDTV2, we introduce an efficient forward-backward splitting (FBS) scheme to solve the HDTV2-based image reconstruction model. Furthermore, from the averaged non-expansive operator point of view, we make a detailed analysis on the convergence of the proposed FBS algorithm. Experiments on medical images demonstrate that the proposed method outperforms several fast algorithms of the TV and HDTV2 reconstruction models in terms of peak signal to noise ratio (PSNR), structural similarity index (SSIM) and convergence speed. PMID:26361008
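
    The forward-backward splitting structure used by the reconstruction algorithm can be sketched on a simpler but related problem, an L1-regularized least-squares recovery, where the backward step reduces to soft thresholding. The HDTV2 functional's proximal step is more involved; the data and parameters below are illustrative assumptions.

```python
import numpy as np

# Forward-backward splitting (proximal gradient) for
#   min_x 0.5*||A x - b||^2 + lam*||x||_1
rng = np.random.default_rng(1)
A = rng.normal(size=(80, 200))
x_true = np.zeros(200)
x_true[rng.choice(200, 10, replace=False)] = 3.0 * rng.normal(size=10)
b = A @ x_true + 0.01 * rng.normal(size=80)

lam = 0.1
step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1/Lipschitz constant of the gradient
x = np.zeros(200)
for _ in range(500):
    grad = A.T @ (A @ x - b)                                    # forward (gradient) step
    z = x - step * grad
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)    # backward (prox) step

recovered = np.sum((np.abs(x) > 0.1) & (np.abs(x_true) > 0.1))
print("support recovered:", recovered, "of 10")
```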

  19. Fast Second Degree Total Variation Method for Image Compressive Sensing.

    PubMed

    Liu, Pengfei; Xiao, Liang; Zhang, Jun

    2015-01-01

    This paper presents a computationally efficient algorithm for image compressive sensing reconstruction using a second degree total variation (HDTV2) regularization. Firstly, a preferably equivalent formulation of the HDTV2 functional is derived, which can be formulated as a weighted L1-L2 mixed norm of second degree image derivatives under the spectral decomposition framework. Secondly, using the equivalent formulation of HDTV2, we introduce an efficient forward-backward splitting (FBS) scheme to solve the HDTV2-based image reconstruction model. Furthermore, from the averaged non-expansive operator point of view, we make a detailed analysis on the convergence of the proposed FBS algorithm. Experiments on medical images demonstrate that the proposed method outperforms several fast algorithms of the TV and HDTV2 reconstruction models in terms of peak signal to noise ratio (PSNR), structural similarity index (SSIM) and convergence speed.

  20. Current Methods in Sedimentation Velocity and Sedimentation Equilibrium Analytical Ultracentrifugation

    PubMed Central

    Zhao, Huaying; Brautigam, Chad A.; Ghirlando, Rodolfo; Schuck, Peter

    2013-01-01

    Significant progress in the interpretation of analytical ultracentrifugation (AUC) data in the last decade has led to profound changes in the practice of AUC, both for sedimentation velocity (SV) and sedimentation equilibrium (SE). Modern computational strategies have allowed for the direct modeling of the sedimentation process of heterogeneous mixtures, resulting in SV size-distribution analyses with significantly improved detection limits and strongly enhanced resolution. These advances have transformed the practice of SV, rendering it the primary method of choice for most existing applications of AUC, such as the study of protein self- and hetero-association, the study of membrane proteins, and applications in biotechnology. New global multi-signal modeling and mass conservation approaches in SV and SE, in conjunction with the effective-particle framework for interpreting the sedimentation boundary structure of interacting systems, as well as tools for explicit modeling of the reaction/diffusion/sedimentation equations to experimental data, have led to more robust and more powerful strategies for the study of reversible protein interactions and multi-protein complexes. Furthermore, modern mathematical modeling capabilities have allowed for a detailed description of many experimental aspects of the acquired data, thus enabling novel experimental opportunities, with important implications for both sample preparation and data acquisition. The goal of the current commentary is to supplement previous AUC protocols, Current Protocols in Protein Science 20.3 (1999) and 20.7 (2003), and 7.12 (2008), and provide an update describing the current tools for the study of soluble proteins, detergent-solubilized membrane proteins and their interactions by SV and SE. PMID:23377850

  1. Analytic Method to Estimate Particle Acceleration in Flux Ropes

    NASA Technical Reports Server (NTRS)

    Guidoni, S. E.; Karpen, J. T.; DeVore, C. R.

    2015-01-01

    The mechanism that accelerates particles to the energies required to produce the observed high-energy emission in solar flares is not well understood. Drake et al. (2006) proposed a kinetic mechanism for accelerating electrons in contracting magnetic islands formed by reconnection. In this model, particles that gyrate around magnetic field lines transit from island to island, increasing their energy by Fermi acceleration in those islands that are contracting. Based on these ideas, we present an analytic model to estimate the energy gain of particles orbiting around field lines inside a flux rope (2.5D magnetic island). We calculate the change in the velocity of the particles as the flux rope evolves in time. The method assumes a simple profile for the magnetic field of the evolving island; it can be applied to any case where flux ropes are formed. In our case, the flux-rope evolution is obtained from our recent high-resolution, compressible 2.5D MHD simulations of breakout eruptive flares. The simulations allow us to resolve in detail the generation and evolution of large-scale flux ropes as a result of sporadic and patchy reconnection in the flare current sheet. Our results show that the initial energy of particles can be increased by 2-5 times in a typical contracting island, before the island reconnects with the underlying arcade. Therefore, particles need to transit only from 3-7 islands to increase their energies by two orders of magnitude. These macroscopic regions, filled with a large number of particles, may explain the large observed rates of energetic electron production in flares. We conclude that this mechanism is a promising candidate for electron acceleration in flares, but further research is needed to extend our results to 3D flare conditions.
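
    A back-of-the-envelope version of the energy-gain estimate: if the parallel action v*L of a particle bouncing in a contracting island is approximately conserved while the field-line length L shrinks, the parallel energy grows as (L0/L)^2. This simple invariant argument and the contraction ratios below are assumptions used only to show that modest contractions give the 2-5x gains quoted above; they are not the paper's full calculation.

```python
# Back-of-the-envelope Fermi-acceleration estimate for a contracting island:
# conserving the parallel action v_par * L gives E'/E = (L0/L)**2 (assumption).
def energy_gain(length_ratio):
    """Energy amplification E'/E when the field line contracts to L = length_ratio * L0."""
    return length_ratio ** -2

for ratio in (0.8, 0.6, 0.45):
    print(f"L/L0 = {ratio:.2f} -> energy gain = {energy_gain(ratio):.1f}x")

# Gains of 2-5x per island imply only a handful of island transits for a
# two-order-of-magnitude boost, e.g. four transits at 3.5x each:
print(f"3.5**4 = {3.5 ** 4:.0f}x")
```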

  2. Method modification of the Legipid® Legionella fast detection test kit.

    PubMed

    Albalat, Guillermo Rodríguez; Broch, Begoña Bedrina; Bono, Marisa Jiménez

    2014-01-01

    Legipid(®) Legionella Fast Detection is a test based on combined magnetic immunocapture and enzyme-immunoassay (CEIA) for the detection of Legionella in water. The test is based on the use of anti-Legionella antibodies immobilized on magnetic microspheres. Target microorganism is preconcentrated by filtration. Immunomagnetic analysis is applied on these preconcentrated water samples in a final test portion of 9 mL. The test kit was certified by the AOAC Research Institute as Performance Tested Method(SM) (PTM) No. 111101 in a PTM validation which certifies the performance claims of the test method in comparison to the ISO reference method 11731-1998 and the revision 11731-2004 "Water Quality: Detection and Enumeration of Legionella pneumophila" in potable water, industrial water, and waste water. The modification of this test kit has been approved. The modification includes increasing the target analyte from L. pneumophila to Legionella species and adding an optical reader to the test method. In this study, 71 strains of Legionella spp. other than L. pneumophila were tested to determine its reactivity with the kit based on CEIA. All the strains of Legionella spp. tested by the CEIA test were confirmed positive by reference standard method ISO 11731. This test (PTM 111101) has been modified to include a final optical reading. A methods comparison study was conducted to demonstrate the equivalence of this modification to the reference culture method. Two water matrixes were analyzed. Results show no statistically detectable difference between the test method and the reference culture method for the enumeration of Legionella spp. The relative level of detection was 93 CFU/volume examined (LOD50). For optical reading, the LOD was 40 CFU/volume examined and the LOQ was 60 CFU/volume examined. Results showed that the test Legipid Legionella Fast Detection is equivalent to the reference culture method for the enumeration of Legionella spp.

  3. Fast electronic structure methods for strongly correlated molecular systems

    NASA Astrophysics Data System (ADS)

    Head-Gordon, Martin; Beran, Gregory J. O.; Sodt, Alex; Jung, Yousung

    2005-01-01

    A short review is given of newly developed fast electronic structure methods that are designed to treat molecular systems with strong electron correlations, such as diradicaloid molecules, for which standard electronic structure methods such as density functional theory are inadequate. These new local correlation methods are based on coupled cluster theory within a perfect pairing active space, containing either a linear or quadratic number of pair correlation amplitudes, to yield the perfect pairing (PP) and imperfect pairing (IP) models. This reduces the scaling of the coupled cluster iterations to no worse than cubic, relative to the sixth power dependence of the usual (untruncated) coupled cluster doubles model. A second order perturbation correction, PP(2), to treat the neglected (weaker) correlations is formulated for the PP model. To ensure minimal prefactors, in addition to favorable size-scaling, highly efficient implementations of PP, IP and PP(2) have been completed, using auxiliary basis expansions. This yields speedups of almost an order of magnitude over the best alternatives using 4-center 2-electron integrals. A short discussion of the scope of accessible chemical applications is given.

  4. Fast integral methods for integrated optical systems simulations: a review

    NASA Astrophysics Data System (ADS)

    Kleemann, Bernd H.

    2015-09-01

    -functional profiles, very deep ones, very large ones compared to wavelength, or simple smooth profiles. This integral method with either trigonometric or spline collocation, iterative solver with O(N^2) complexity, named IESMP, was significantly improved by an efficient mesh refinement, matrix preconditioning, Ewald summation method, and an exponentially convergent quadrature in 2006 by G. Schmidt and A. Rathsfeld from Weierstrass-Institute (WIAS) Berlin. The so-called modified integral method (MIM) is a modification of the IEM of D. Maystre and has been introduced by L. Goray in 1995. It has been improved for weak convergence problems in 2001 and it was the only commercially available integral method for a long time, known as PCGRATE. All integral methods referenced so far are for in-plane diffraction only; conical diffraction was not possible. The first integral method for gratings in conical mounting was developed and proven under very weak conditions by G. Schmidt (WIAS) in 2010. It works for separated interfaces and for inclusions as well as for interpenetrating interfaces and for a large number of thin and thick layers in the same stable way. This very fast method was then implemented for parallel processing under Unix and Windows operating systems. This work gives an overview of the most important BIMs for grating diffraction. It starts by presenting the historical evolution of the methods, highlights their advantages and differences, and gives insight into new approaches and their achievements. It addresses future open challenges at the end.

  5. 40 CFR 136.6 - Method modifications and analytical requirements.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... parameter that may be comprised of a number of substances. Examples of such analytes include temperature... requirements of paragraph (b)(2) of this section are met. (ii) If the characteristics of a wastewater matrix... must be dechlorinated prior to the addition of such salts. (iii) If the......

  6. An analytical method for the measurement of nonviable bioaerosols.

    PubMed

    Menetrez, M Y; Foarde, K K; Ensor, D S

    2001-10-01

    Exposures from indoor environments are a major issue for evaluating total long-term personal exposures to the fine fraction (<2.5 microm in aerodynamic diameter) of particulate matter (PM). It is widely accepted in the indoor air quality (IAQ) research community that biocontamination is one of the important indoor air pollutants. Major indoor air biocontaminants include mold, bacteria, dust mites, and other antigens. Once the biocontaminants or their metabolites become airborne, IAQ could be significantly deteriorated. The airborne biocontaminants or their metabolites can induce irritational, allergic, infectious, and chemical responses in exposed individuals. Biocontaminants, such as some mold spores or pollen grains, because of their size and mass, settle rapidly within the indoor environment. Over time they may become nonviable and fragmented by the process of desiccation. Desiccated nonviable fragments of organisms are common and can be toxic or allergenic, depending upon the specific organism or organism component. Once these smaller and lighter fragments of biological PM become suspended in air, they have a greater tendency to stay suspended. Although some bioaerosols have been identified, few have been quantitatively studied for their prevalence within the total indoor PM with time, or for their affinity to penetrate indoors. This paper describes a preliminary research effort to develop a methodology for the measurement of nonviable biologically based PM, analyzing for mold and ragweed antigens and endotoxins. The research objectives include the development of a set of analytical methods and the comparison of impactor media and sample size, and the quantification of the relationship between outdoor and indoor levels of bioaerosols. Indoor and outdoor air samples were passed through an Andersen nonviable cascade impactor in which particles from 0.2 to 9.0 microm were collected and analyzed. The presence of mold, ragweed, and endotoxin was found in all eight

  7. A new and consistent parameter for measuring the quality of multivariate analytical methods: Generalized analytical sensitivity.

    PubMed

    Fragoso, Wallace; Allegrini, Franco; Olivieri, Alejandro C

    2016-08-24

    Generalized analytical sensitivity (γ) is proposed as a new figure of merit, which can be estimated from a multivariate calibration data set. It can be confidently applied to compare different calibration methodologies, and helps to solve literature inconsistencies on the relationship between classical sensitivity and prediction error. In contrast to the classical plain sensitivity, γ incorporates the noise properties in its definition, and its inverse is well correlated with root mean square errors of prediction in the presence of general noise structures. The proposal is supported by studying simulated and experimental first-order multivariate calibration systems with various models, namely multiple linear regression, principal component regression (PCR) and maximum likelihood PCR (MLPCR). The simulations included instrumental noise of different types: independently and identically distributed (iid), correlated (pink) and proportional noise, while the experimental data carried noise which is clearly non-iid. PMID:27496995
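
    For contrast with the proposed figure of merit, the sketch below computes the classical multivariate sensitivity (the inverse norm of the regression vector) and a prediction error for a simulated first-order calibration; the generalized sensitivity additionally folds the noise covariance into the definition and is not reproduced here. Spectra, concentrations and noise levels are synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
channels = np.linspace(0, 1, 100)
s_analyte = np.exp(-((channels - 0.40) / 0.08) ** 2)     # analyte pure spectrum
s_interf = np.exp(-((channels - 0.55) / 0.10) ** 2)      # interferent spectrum

# Calibration set: mixtures of analyte and interferent plus iid noise.
c_an = rng.uniform(0.2, 1.0, 30)
c_in = rng.uniform(0.2, 1.0, 30)
X = np.outer(c_an, s_analyte) + np.outer(c_in, s_interf) + rng.normal(0, 0.01, (30, 100))

b, *_ = np.linalg.lstsq(X, c_an, rcond=None)             # inverse-calibration regression vector
sensitivity = 1.0 / np.linalg.norm(b)                    # classical SEN = 1/||b||

X_test = (np.outer([0.5, 0.8], s_analyte) + np.outer([0.6, 0.3], s_interf)
          + rng.normal(0, 0.01, (2, 100)))
rmsep = np.sqrt(np.mean((X_test @ b - np.array([0.5, 0.8])) ** 2))
print(f"classical SEN = {sensitivity:.3f}, RMSEP = {rmsep:.3f}")
```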

  9. A Domain Decomposition Parallelization of the Fast Marching Method

    NASA Technical Reports Server (NTRS)

    Herrmann, M.

    2003-01-01

    In this paper, the first domain decomposition parallelization of the Fast Marching Method for level sets has been presented. Parallel speedup has been demonstrated in both the optimal and non-optimal domain decomposition cases. The parallel performance of the proposed method depends strongly on separately load balancing the number of nodes on each side of the interface. A load imbalance of nodes on either side of the domain leads to an increase in communication and rollback operations. Furthermore, the amount of inter-domain communication can be reduced by aligning the inter-domain boundaries with the interface normal vectors. In the case of optimal load balancing and aligned inter-domain boundaries, the proposed parallel FMM algorithm is highly efficient, reaching efficiency factors of up to 0.98. Future work will focus on the extension of the proposed parallel algorithm to higher order accuracy. Also, to further enhance parallel performance, the coupling of the domain decomposition parallelization to the G0-based parallelization will be investigated.
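
    The record describes parallelizing the Fast Marching Method (FMM) by domain decomposition. As background for what is being parallelized, the following Python sketch implements the serial FMM kernel on a 2D grid: a heap of trial nodes and an upwind quadratic update that solves the discretized eikonal equation |grad T|·F = 1. The function names and grid conventions are illustrative, not taken from the paper.

    import heapq
    import numpy as np

    def fast_marching_2d(speed, sources, h=1.0):
        """Serial fast marching: arrival times T for a front moving with
        speed F (2D array) from the given source cells."""
        ny, nx = speed.shape
        T = np.full((ny, nx), np.inf)
        accepted = np.zeros((ny, nx), dtype=bool)
        heap = []
        for i, j in sources:
            T[i, j] = 0.0
            heapq.heappush(heap, (0.0, i, j))

        def local_update(i, j):
            # Upwind update from the smallest neighbour values in x and y.
            tx = min(T[i, j - 1] if j > 0 else np.inf,
                     T[i, j + 1] if j < nx - 1 else np.inf)
            ty = min(T[i - 1, j] if i > 0 else np.inf,
                     T[i + 1, j] if i < ny - 1 else np.inf)
            a, b = sorted((tx, ty))
            f = h / speed[i, j]
            if b - a >= f:                     # one-sided update
                return a + f
            # two-sided update: solve (t - a)^2 + (t - b)^2 = f^2
            return 0.5 * (a + b + np.sqrt(2.0 * f * f - (a - b) ** 2))

        while heap:
            _, i, j = heapq.heappop(heap)
            if accepted[i, j]:
                continue
            accepted[i, j] = True
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < ny and 0 <= nj < nx and not accepted[ni, nj]:
                    t_new = local_update(ni, nj)
                    if t_new < T[ni, nj]:
                        T[ni, nj] = t_new
                        heapq.heappush(heap, (t_new, ni, nj))
        return T

    In a domain decomposition setting, each subdomain runs this loop on its own nodes and exchanges updated boundary values with its neighbours; when a neighbour supplies a smaller arrival time than one already accepted, the affected nodes must be rolled back and recomputed, which is the communication and rollback cost the abstract discusses.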

  10. Fast color contrast enhancement method for color night vision

    NASA Astrophysics Data System (ADS)

    Qian, Xiaoyan; Wang, Yujin; Wang, Bangfeng

    2012-01-01

    The increasing availability and deployment of imaging sensors operating in multiple spectral bands has led to a large research effort in color image fusion, resulting in a plethora of pixel-level image fusion algorithms. In this study a simple and fast fusion approach for color night vision is presented. The contrast of the infrared and visible images is adjusted by local histogram equalization. The two enhanced images are then fused into the three components of a Lab image using a simple linear fusion strategy. To obtain false color images with a natural day-time color appearance, this paper adopts an approach that transfers color from a reference image to the fused images in a simplified Lab space. To enhance the contrast between target and background, a stretch factor is introduced into the transfer equation for the b channel. Experimental results based on three different data sets show that hot targets are rendered in intense colors while background details retain a natural color appearance. Target detection experiments also show that the presented method outperforms earlier methods in terms of target recognition area, detection rate, color distance, and running time.
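
    As a concrete illustration of the pipeline the abstract outlines (local contrast enhancement, linear fusion into the three components of a Lab image, statistics-based color transfer with an extra stretch on the b channel), here is a hedged Python/OpenCV sketch. The specific channel assignments, fusion weights, and the use of CLAHE for local histogram equalization are illustrative choices, not the paper's exact parameters.

    import cv2
    import numpy as np

    def fuse_ir_visible(ir_gray, vis_gray, reference_bgr, stretch=1.2):
        """ir_gray, vis_gray: 8-bit single-channel images of the same scene;
        reference_bgr: 8-bit daytime color image supplying target statistics."""
        clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
        ir = clahe.apply(ir_gray).astype(np.float32)
        vis = clahe.apply(vis_gray).astype(np.float32)

        # Simple linear fusion into Lab-like components.
        L = 0.5 * (vis + ir)
        a = vis - ir
        b = ir - vis
        fused = np.dstack([L, a, b])

        ref = cv2.cvtColor(reference_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)

        # Reinhard-style color transfer: match each channel's mean/std to the
        # reference; the stretch factor on the b channel emphasises hot targets.
        for c, gain in enumerate((1.0, 1.0, stretch)):
            f, r = fused[:, :, c], ref[:, :, c]
            f = (f - f.mean()) / (f.std() + 1e-6)
            fused[:, :, c] = gain * f * r.std() + r.mean()

        fused = np.clip(fused, 0, 255).astype(np.uint8)
        return cv2.cvtColor(fused, cv2.COLOR_LAB2BGR)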

  11. Perinatal periods of risk: analytic preparation and phase 1 analytic methods for investigating feto-infant mortality.

    PubMed

    Sappenfield, William M; Peck, Magda G; Gilbert, Carol S; Haynatzka, Vera R; Bryant, Thomas

    2010-11-01

    The Perinatal Periods of Risk (PPOR) methods provide the necessary framework and tools for large urban communities to investigate feto-infant mortality problems. Adapted from the Periods of Risk model developed by Dr. Brian McCarthy, the six-stage PPOR approach includes epidemiologic methods to be used in conjunction with community planning processes. Stage 2 of the PPOR approach has three major analytic parts: Analytic Preparation, which involves acquiring, preparing, and assessing vital records files; Phase 1 Analysis, which identifies local opportunity gaps; and Phase 2 Analyses, which investigate the opportunity gaps to determine likely causes of feto-infant mortality and to suggest appropriate actions. This article describes the first two analytic parts of PPOR, including methods, innovative aspects, rationale, limitations, and a community example. In Analytic Preparation, study files are acquired and prepared and data quality is assessed. In Phase 1 Analysis, feto-infant mortality is estimated for four distinct perinatal risk periods defined by both birthweight and age at death. These mutually exclusive risk periods are labeled Maternal Health and Prematurity, Maternal Care, Newborn Care, and Infant Health to suggest primary areas of prevention. Disparities within the study community are identified by comparing geographic areas, subpopulations, and time periods. Excess mortality numbers and rates are estimated by comparing the study population to an optimal reference population. This excess mortality is described as the opportunity gap because it indicates where communities have the potential to make improvement.
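
    The Phase 1 arithmetic described above can be summarised in a few lines of Python. The sketch below assigns each feto-infant death to one of the four risk periods using the conventional 1500 g birthweight and 28-day age cutoffs, and computes the "opportunity gap" as observed deaths minus the deaths expected if the reference population's period-specific rates applied to the study population; the field names and cutoffs are stated assumptions, and local PPOR implementations may differ in detail.

    def ppor_period(birthweight_g, age_at_death_days=None, fetal=False):
        """Map a feto-infant death (>= 500 g) to a PPOR risk period."""
        if 500 <= birthweight_g < 1500:
            return "Maternal Health/Prematurity"
        if fetal:                       # fetal death at >= 1500 g
            return "Maternal Care"
        if age_at_death_days < 28:      # neonatal death at >= 1500 g
            return "Newborn Care"
        return "Infant Health"          # postneonatal death at >= 1500 g

    def opportunity_gaps(study_deaths, study_births, ref_deaths, ref_births):
        """Excess deaths per period relative to a reference population.
        Birth counts are the PPOR denominator (live births plus fetal deaths)."""
        gaps = {}
        for period, observed in study_deaths.items():
            expected = ref_deaths[period] / ref_births * study_births
            gaps[period] = observed - expected
        return gaps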

  12. A Simple Transmission Electron Microscopy Method for Fast Thickness Characterization of Suspended Graphene and Graphite Flakes.

    PubMed

    Rubino, Stefano; Akhtar, Sultan; Leifer, Klaus

    2016-02-01

    We present a simple, fast method for thickness characterization of suspended graphene/graphite flakes that is based on transmission electron microscopy (TEM). We derive an analytical expression for the intensity of the transmitted electron beam I0(t) as a function of the specimen thickness t (t<λ, where λ is the absorption constant for graphite). We show that in thin graphite crystals the transmitted intensity is a linear function of t. Furthermore, high-resolution (HR) TEM simulations are performed to obtain λ for a 001 zone axis orientation, in a two-beam case and in a low symmetry orientation. Subsequently, HR images (used to determine t) and bright-field images (to measure I0(0) and I0(t)) were acquired to experimentally determine λ. The experimental value measured in the low symmetry orientation matches the calculated value (i.e., λ=225±9 nm). The simulations also show that the linear approximation is valid up to a sample thickness of 3-4 nm regardless of the orientation, and up to several tens of nanometers for a low symmetry orientation. When compared with standard techniques for thickness determination of graphene/graphite, the method we propose has the advantage of being simple and fast, requiring only the acquisition of bright-field images. PMID:26915000
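
    Using the absorption constant reported above (λ = 225 nm), the thickness of a flake can be read directly off a bright-field measurement. The sketch below assumes the exponential attenuation I(t) = I0·exp(-t/λ), whose thin-crystal expansion gives the linear relation I(t)/I0 ≈ 1 - t/λ used in the paper; the function names are illustrative.

    import numpy as np

    LAMBDA_NM = 225.0   # absorption constant for graphite from the abstract

    def thickness_exponential(I_t, I_0, lam=LAMBDA_NM):
        """Thickness from transmitted intensity, assuming I(t) = I0*exp(-t/lambda)."""
        return -lam * np.log(I_t / I_0)

    def thickness_linear(I_t, I_0, lam=LAMBDA_NM):
        """Thin-crystal approximation I(t)/I0 ≈ 1 - t/lambda, valid for t << lambda."""
        return lam * (1.0 - I_t / I_0)

    # A flake transmitting 98% of the beam:
    print(thickness_exponential(0.98, 1.0))   # ≈ 4.5 nm
    print(thickness_linear(0.98, 1.0))        # ≈ 4.5 nm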

  14. Hanford environmental analytical methods: Methods as of March 1990. Volume 3, Appendix A2-I

    SciTech Connect

    Goheen, S.C.; McCulloch, M.; Daniel, J.L.

    1993-05-01

    This paper from the analytical laboratories at Hanford describes the method used to measure pH of single-shell tank core samples. Sludge or solid samples are mixed with deionized water. The pH electrode used combines both a sensor and reference electrode in one unit. The meter amplifies the input signal from the electrode and displays the pH visually.

  15. Fast tablet tensile strength prediction based on non-invasive analytics.

    PubMed

    Halenius, Anna; Lakio, Satu; Antikainen, Osmo; Hatara, Juha; Yliruusi, Jouko

    2014-06-01

    In this paper, linkages between tablet surface roughness, tablet compression forces, material properties, and the tensile strength of tablets were studied. Pure sodium halides (NaF, NaBr, NaCl, and NaI) were chosen as model substances because of their simple and similar structure. Based on the data available in the literature and our own measurements, various models were built to predict the tensile strength of the tablets. It appeared that only three parameters (surface roughness, upper punch force, and the true density of the material) were needed to predict the tensile strength of a tablet. Rather surprisingly, surface roughness alone was capable of making the prediction. The new 3D imaging method used (Flash sizer) was roughly a thousand times faster in determining tablet surface roughness than the traditionally used laser profilometer. Both methods gave practically equivalent results. Finally, it is suggested that rapid 3D imaging is a potential in-line PAT tool for predicting the mechanical properties of tablets in production.
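
    The three-parameter prediction mentioned above (surface roughness, upper punch force, true density) can be reproduced in outline with an ordinary least-squares fit. A plain linear model is assumed here purely for illustration; the abstract does not state the functional form used in the study.

    import numpy as np

    def fit_tensile_model(roughness, punch_force, true_density, tensile_strength):
        """Least-squares fit of tensile strength on the three predictors."""
        r, f, d, y = (np.asarray(v, dtype=float)
                      for v in (roughness, punch_force, true_density, tensile_strength))
        X = np.column_stack([r, f, d, np.ones_like(r)])
        coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
        return coeffs                  # three slopes followed by the intercept

    def predict_tensile(coeffs, roughness, punch_force, true_density):
        return (coeffs[0] * roughness + coeffs[1] * punch_force
                + coeffs[2] * true_density + coeffs[3])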

  16. A fast screening MALDI method for the detection of cocaine and its metabolites in hair.

    PubMed

    Vogliardi, Susanna; Favretto, Donata; Frison, Giampietro; Ferrara, Santo Davide; Seraglia, Roberta; Traldi, Pietro

    2009-01-01

    Matrix-assisted laser desorption/ionisation (MALDI) mass spectrometry was used for the rapid detection of cocaine, benzoylecgonine and cocaethylene in hair. Different MALDI sample preparation procedures were tested, and the use of a multi-layer 'graphite-sample-electrosprayed alpha-cyano-4-hydroxycinnamic acid (HCCA)' preparation yielded the best results for standard solutions of the target analytes. The same approach was subsequently applied to hair samples known to contain cocaine, benzoylecgonine and cocaethylene, as determined by a classical GC-MS method. It was, however, necessary to extract the hair samples by incubating them in methanol/trifluoroacetic acid for a short time (15 min) at 45 degrees C; 1 microl of the obtained supernatant was deposited on a metal surface treated with graphite, and HCCA was electrosprayed onto it. This procedure successfully suppressed matrix peaks and was effective in detecting all the target analytes as their protonated species. The results give further confirmation of the effectiveness of MALDI for detecting drugs and their metabolites in complex biological matrices. The method can be useful as a fast screening procedure to detect the presence of cocaine and its metabolites in hair samples. PMID:18698561

  17. Analytical methods for PCBs and organochlorine pesticides in environmental monitoring and surveillance: a critical appraisal

    PubMed Central

    Sverko, Ed

    2006-01-01

    Analytical methods for the analysis of polychlorinated biphenyls (PCBs) and organochlorine pesticides (OCPs) are widely available and are the result of a vast amount of environmental analytical method development and research on persistent organic pollutants (POPs) over the past 30–40 years. This review summarizes procedures and examines new approaches for extraction, isolation, identification and quantification of individual congeners/isomers of the PCBs and OCPs. Critical to the successful application of this methodology is the collection, preparation, and storage of samples, as well as specific quality control and reporting criteria, and therefore these are also discussed. With the signing of the Stockholm convention on POPs and the development of global monitoring programs, there is an increased need for laboratories in developing countries to determine PCBs and OCPs. Thus, while this review attempts to summarize the current best practices for analysis of PCBs and OCPs, a major focus is the need for low-cost methods that can be easily implemented in developing countries. A “performance based” process is described whereby individual laboratories can adapt methods best suited to their situations. Access to modern capillary gas chromatography (GC) equipment with either electron capture or low-resolution mass spectrometry (MS) detection to separate and quantify OCP/PCBs is essential. However, screening of samples, especially in areas of known use of OCPs or PCBs, could be accomplished with bioanalytical methods such as specific commercially available enzyme-linked immunosorbent assays, and thus this topic is also reviewed. New analytical techniques such as two-dimensional GC (2D-GC) and “fast GC” using GC–ECD may be well-suited for broader use in routine PCB/OCP analysis in the near future given their relatively low costs and ability to provide high-resolution separations of PCB/OCPs. Procedures with low environmental impact (SPME, microscale, low

  18. Sonoluminescence Spectroscopy as a Promising New Analytical Method

    NASA Astrophysics Data System (ADS)

    Yurchenko, O. I.; Kalinenko, O. S.; Baklanov, A. N.; Belov, E. A.; Baklanova, L. V.

    2016-03-01

    The sonoluminescence intensity of Cs, Ru, K, Na, Li, Sr, In, Ga, Ca, Th, Cr, Pb, Mn, Ag, and Mg salts in aqueous solutions of various concentrations was investigated as a function of ultrasound frequency and intensity. Techniques for the determination of these elements in solutions of table salt and their own salts were developed. It was shown that the proposed analytical technique gave results at high concentrations with better metrological characteristics than atomic-absorption spectroscopy because the samples were not diluted.

  19. A simplified analytical method for a phenotyping cocktail of major CYP450 biotransformation routes.

    PubMed

    Jerdi, Mallorie Clement; Daali, Youssef; Oestreicher, Mitsuko Kondo; Cherkaoui, Samir; Dayer, Pierre

    2004-09-01

    An efficient, fast and reliable analytical method was developed for the simultaneous evaluation of the activities of five major human drug-metabolising cytochrome P450 enzymes (1A2, 2C9, 2C19, 2D6 and 3A4) using a cocktail approach including five probe substances, namely caffeine, flurbiprofen, omeprazole, dextromethorphan and midazolam. All substances were administered simultaneously and a single plasma sample was obtained 2 h after administration. Plasma samples were handled by liquid-liquid extraction and analysed by gradient high performance liquid chromatography (HPLC) coupled to UV and fluorescence detectors. The chromatographic separation was achieved using a Discovery semi-micro HS C18 HPLC column (5 microm particle size, 150 mm x 2.1 mm i.d.) protected by a guard column (5 microm particle size, 20 mm x 2.1 mm i.d.). The mobile phase consisted of a mixture of methanol, acetonitrile and 20 mM ammonium acetate (pH 4.5) with 0.1% triethylamine, delivered at a flow rate of 0.3 mL min(-1). All substances were separated in a single run lasting less than 22 min. The HPLC method was formally validated and showed good performance in terms of linearity, sensitivity, precision and accuracy. Finally, the method was found suitable for the screening of these compounds in plasma samples.

  20. Method for Operating a Sensor to Differentiate Between Analytes in a Sample

    DOEpatents

    Kunt, Tekin; Cavicchi, Richard E; Semancik, Stephen; McAvoy, Thomas J

    1998-07-28

    Disclosed is a method for operating a sensor to differentiate between first and second analytes in a sample. The method comprises the steps of determining an input (temperature) profile for the sensor that will enhance the difference between the output profiles of the sensor for the first analyte and the second analyte; determining a first analyte output profile as observed when the input profile is applied to the sensor; determining a second analyte output profile as observed when the temperature profile is applied to the sensor; introducing the sensor to the sample while applying the temperature profile to the sensor, thereby obtaining a sample output profile; and evaluating the sample output profile against the first and second analyte output profiles to determine which of the analytes is present in the sample.
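
    The final evaluation step of the claimed method amounts to comparing the sample output profile with the two stored analyte output profiles and selecting the closer match. A minimal sketch of that comparison is given below; the Euclidean distance is only one possible similarity measure and is not specified in the patent abstract.

    import numpy as np

    def identify_analyte(sample_profile, profile_first, profile_second):
        """Return which stored analyte profile the sample profile resembles.
        Each profile is the sensor output recorded while the chosen input
        (temperature) profile is applied."""
        s = np.asarray(sample_profile, dtype=float)
        d1 = np.linalg.norm(s - np.asarray(profile_first, dtype=float))
        d2 = np.linalg.norm(s - np.asarray(profile_second, dtype=float))
        return "first analyte" if d1 <= d2 else "second analyte"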

  1. Progress in the GEOROC Database - Fast and Simple Access to Analytical Data by Precompilation

    NASA Astrophysics Data System (ADS)

    Sarbas, B.

    2001-12-01

    sample, these are compiled according to specific rules. These rules consider the method of analysis as well as the year of publication.

  2. A fast numerical solution of scattering by a cylinder: Spectral method for the boundary integral equations

    NASA Technical Reports Server (NTRS)

    Hu, Fang Q.

    1994-01-01

    It is known that the exact analytic solutions of wave scattering by a circular cylinder, when they exist, are not in a closed form but in infinite series which converges slowly for high frequency waves. In this paper, we present a fast numerical solution for the scattering problem in which the boundary integral equations, reformulated from the Helmholtz equation, are solved using a Fourier spectral method. It is shown that the special geometry considered here allows the implementation of the spectral method to be simple and very efficient. The present method differs from previous approaches in that the singularities of the integral kernels are removed and dealt with accurately. The proposed method preserves the spectral accuracy and is shown to have an exponential rate of convergence. Aspects of efficient implementation using FFT are discussed. Moreover, the boundary integral equations of combined single and double-layer representation are used in the present paper. This ensures the uniqueness of the numerical solution for the scattering problem at all frequencies. Although a strongly singular kernel is encountered for the Neumann boundary conditions, we show that the hypersingularity can be handled easily in the spectral method. Numerical examples that demonstrate the validity of the method are also presented.

  3. Fast high-throughput method for the determination of acidity constants by capillary electrophoresis: I. Monoprotic weak acids and bases.

    PubMed

    Fuguet, Elisabet; Ràfols, Clara; Bosch, Elisabeth; Rosés, Martí

    2009-04-24

    A new and fast method to determine acidity constants of monoprotic weak acids and bases by capillary zone electrophoresis based on the use of an internal standard (compound of similar nature and acidity constant as the analyte) has been developed. This method requires only two electrophoretic runs for the determination of an acidity constant: a first one at a pH where both analyte and internal standard are totally ionized, and a second one at another pH where both are partially ionized. Furthermore, the method is not pH dependent, so an accurate measure of the pH of the buffer solutions is not needed. The acidity constants of several phenols and amines have been measured using internal standards of known pK(a), obtaining a mean deviation of 0.05 pH units compared to the literature values. PMID:19168179
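
    The two-run scheme can be turned into a pKa value with a few lines of arithmetic. For a monoprotic weak acid, the degree of ionization is the ratio of the effective mobility in the partially ionized run to that in the fully ionized run, and pKa = pH + log10((1 - α)/α); subtracting the same expression written for the internal standard cancels the unknown buffer pH. This is a standard derivation, not a formula quoted from the paper, and is sketched below in Python.

    import numpy as np

    def pka_from_internal_standard(mu_x_partial, mu_x_full,
                                   mu_is_partial, mu_is_full, pka_is):
        """pKa of a monoprotic weak acid X referenced to an internal standard
        IS of known pKa, both measured in the same two CE runs: one at a pH
        where both are fully ionized (mu_*_full) and one at a pH where both
        are partially ionized (mu_*_partial)."""
        alpha_x = mu_x_partial / mu_x_full
        alpha_is = mu_is_partial / mu_is_full
        return (pka_is
                + np.log10((1 - alpha_x) / alpha_x)
                - np.log10((1 - alpha_is) / alpha_is))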

  5. 40 CFR 141.852 - Analytical methods and laboratory certification.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    Table excerpt listing approved coliform analytical methods, including Standard Methods (and Standard Methods Online) 9221 B, the Presence-Absence (P-A) Coliform Test (Standard Methods 9221 D), EPA Method 1604 (MI medium), the m-ColiBlue24® Test, Chromocult, and Colilert® and other enzyme substrate methods.

  6. PESTICIDE ANALYTICAL METHODS TO SUPPORT DUPLICATE-DIET HUMAN EXPOSURE MEASUREMENTS

    EPA Science Inventory

    Historically, analytical methods for determination of pesticides in foods have been developed in support of regulatory programs and are specific to food items or food groups. Most of the available methods have been developed, tested and validated for relatively few analytes an...

  7. 40 CFR 141.852 - Analytical methods and laboratory certification.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    Table excerpt listing approved coliform analytical methods, including Standard Methods (and Standard Methods Online) 9221 B, the Presence-Absence (P-A) Coliform Test (Standard Methods 9221 D), EPA Method 1604, the m-ColiBlue24® Test, Chromocult, and enzyme substrate methods such as Colilert® (Standard Methods 9223 B).

  8. FAST TRACK COMMUNICATION: An analytical approximation scheme to two-point boundary value problems of ordinary differential equations

    NASA Astrophysics Data System (ADS)

    Boisseau, Bruno; Forgács, Péter; Giacomini, Hector

    2007-03-01

    A new (algebraic) approximation scheme to find global solutions of two-point boundary value problems of ordinary differential equations (ODEs) is presented. The method is applicable for both linear and nonlinear (coupled) ODEs whose solutions are analytic near one of the boundary points. It is based on replacing the original ODEs by a sequence of auxiliary first-order polynomial ODEs with constant coefficients. The coefficients in the auxiliary ODEs are uniquely determined from the local behaviour of the solution in the neighbourhood of one of the boundary points. The problem of obtaining the parameters of the global (connecting) solutions, analytic at one of the boundary points, reduces to finding the appropriate zeros of algebraic equations. The power of the method is illustrated by computing the approximate values of the 'connecting parameters' for a number of nonlinear ODEs arising in various problems in field theory. We treat in particular the static and rotationally symmetric global vortex, the skyrmion, the Abrikosov-Nielsen-Olesen vortex, as well as the 't Hooft-Polyakov magnetic monopole. The total energy of the skyrmion and of the monopole is also computed by the new method. We also consider some ODEs coming from the exact renormalization group. The ground-state energy level of the anharmonic oscillator is also computed for arbitrary coupling strengths with good precision.

  9. Validation of analytical methods involved in dissolution assays: acceptance limits and decision methodologies.

    PubMed

    Rozet, E; Ziemons, E; Marini, R D; Boulanger, B; Hubert, Ph

    2012-11-01

    Dissolution tests are key elements to ensure continuing product quality and performance. The ultimate goal of these tests is to assure consistent product quality within a defined set of specification criteria. Validation of an analytical method aimed at assessing the dissolution profile of products or at verifying pharmacopoeial compliance should demonstrate that this analytical method is able to correctly declare two dissolution profiles as similar or drug products as compliant with respect to their specifications. It is essential to ensure that these analytical methods are fit for their purpose. Method validation is aimed at providing this guarantee. However, even in the ICH Q2 guideline there is no information explaining how to decide whether the method under validation is valid for its final purpose or not. Are all the validation criteria needed to ensure that a Quality Control (QC) analytical method for a dissolution test is valid? What acceptance limits should be set on these criteria? How should a method's validity be decided? These are the questions that this work aims to answer. The focus is on complying with the current implementation of the Quality by Design (QbD) principles in the pharmaceutical industry, in order to correctly define the Analytical Target Profile (ATP) of analytical methods involved in dissolution tests. Analytical method validation is then the natural demonstration that the developed methods are fit for their intended purpose, and no longer the inconsiderate checklist validation approach still generally performed to complete the filing required to obtain product marketing authorization. PMID:23084050

  10. Methods for performing fast discrete curvelet transforms of data

    DOEpatents

    Candes, Emmanuel; Donoho, David; Demanet, Laurent

    2010-11-23

    Fast digital implementations of the second generation curvelet transform for use in data processing are disclosed. One such digital transformation is based on unequally-spaced fast Fourier transforms (USFFT) while another is based on the wrapping of specially selected Fourier samples. Both digital transformations return a table of digital curvelet coefficients indexed by a scale parameter, an orientation parameter, and a spatial location parameter. Both implementations are fast in the sense that they run in about O(n^2 log n) flops for n by n Cartesian arrays or about O(N log N) flops for Cartesian arrays of size N = n^3; in addition, they are also invertible, with rapid inversion algorithms of about the same complexity.

  11. Development of a fast voltage control method for electrostatic accelerators

    NASA Astrophysics Data System (ADS)

    Lobanov, Nikolai R.; Linardakis, Peter; Tsifakis, Dimitrios

    2014-12-01

    The concept of a novel fast voltage control loop for tandem electrostatic accelerators is described. This control loop utilises high-frequency components of the ion beam current intercepted by the image slits to generate a correction voltage that is applied to the first few gaps of the low- and high-energy acceleration tubes adjoining the high voltage terminal. New techniques for the direct measurement of the transfer function of an ultra-high impedance structure, such as an electrostatic accelerator, have been developed. For the first time, the transfer function for the fast feedback loop has been measured directly. Slow voltage variations are stabilised with the common corona control loop, and the relationship between the transfer functions for the slow and new fast control loops required for optimum operation is discussed. The main source of terminal voltage instabilities, variation of the charging current caused by mechanical oscillations of the charging chains, has been analysed.

  12. DEMONSTRATION BULLETIN: FIELD ANALYTICAL SCREENING PROGRAM: PCB METHOD - U.S. ENVIRONMENTAL PROTECTION AGENCY

    EPA Science Inventory

    The field analytical screening program (FASP) polychlorinated biphenyl (PCB) method uses a temperature-programmable gas chromatograph (GC) equipped with an electron capture detector (ECD) to identify and quantify PCBs. Gas chromatography is an EPA-approved method for determi...

  13. Fast gradient separation by very high pressure liquid chromatography: reproducibility of analytical data and influence of delay between successive runs.

    PubMed

    Stankovich, Joseph J; Gritti, Fabrice; Beaver, Lois Ann; Stevenson, Paul G; Guiochon, Georges

    2013-11-29

    Five methods were used to implement fast gradient separations: constant flow rate, constant column-wall temperature, constant inlet pressure at moderate and at high pressures (controlled by a pressure controller), and programmed-flow constant pressure. For programmed-flow constant pressure, the flow rates and gradient compositions are controlled through the method input instead of by the pressure controller. Minor fluctuations in the inlet pressure do not affect the mobile phase flow rate in programmed flow. The reproducibilities of the retention times, the response factors, and the eluted band widths of six successive separations of the same sample (9 components) were measured with equilibration times between 0 and 15 min. The influence of the length of the equilibration time on these reproducibilities is discussed. The results show that the average column temperature may increase from one separation to the next and that this contributes to fluctuation of the results.

  14. Analytical methods for the determination of carbon tetrachloride in soils.

    SciTech Connect

    Alvarado, J. S.; Spokas, K.; Taylor, J.

    1999-06-01

    Improved methods for the determination of carbon tetrachloride are described. These methods incorporate purge-and-trap concentration of heated dry samples, an improved methanol extraction procedure, and headspace sampling. The methods minimize sample pretreatment, accomplish solvent substitution, and save time. The methanol extraction and headspace sampling procedures improved the method detection limits and yielded better sensitivity, good recoveries, and good performance. Optimization parameters are shown. Results obtained with these techniques are compared for soil samples from contaminated sites.

  15. Team mental models: techniques, methods, and analytic approaches.

    PubMed

    Langan-Fox, J; Code, S; Langfield-Smith, K

    2000-01-01

    Effective team functioning requires the existence of a shared or team mental model among members of a team. However, the best method for measuring team mental models is unclear. Methods reported vary in terms of how mental model content is elicited and analyzed or represented. We review the strengths and weaknesses of various methods that have been used to elicit, represent, and analyze individual and team mental models and provide recommendations for method selection and development. We describe the nature of mental models and review techniques that have been used to elicit and represent them. We focus on a case study on selecting a method to examine team mental models in industry. The processes involved in the selection and development of an appropriate method for eliciting, representing, and analyzing team mental models are described. The criteria for method selection were (a) applicability to the problem under investigation; (b) practical considerations - suitability for collecting data from the targeted research sample; and (c) theoretical rationale - the assumption that associative networks in memory are a basis for the development of mental models. We provide an evaluation of the method matched to the research problem and make recommendations for future research. The practical applications of this research include the provision of a technique for analyzing team mental models in organizations, the development of methods and processes for eliciting a mental model from research participants in their normal work environment, and a survey of available methodologies for mental model research.

  16. Application of an analytical method for solution of thermal hydraulic conservation equations

    SciTech Connect

    Fakory, M.R.

    1995-09-01

    An analytical method has been developed and applied for the solution of two-phase flow conservation equations. The test results for application of the model to simulation of BWR transients are presented and compared with the results obtained from application of the explicit method for integration of the conservation equations. The test results show that with application of the analytical method for integration of the conservation equations, the Courant limitation associated with the explicit Euler method of integration was eliminated. The results obtained from application of the analytical method (with large time steps) agreed well with the results obtained from application of the explicit method of integration (with time steps smaller than the size imposed by the Courant limitation). The results demonstrate that application of the analytical approach significantly improves the numerical stability and computational efficiency.
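
    The point about the Courant limitation can be illustrated on a single upwind-discretized transport node, du_i/dt = (u_{i-1} - u_i)/tau. An explicit Euler step is stable only while dt/tau <= 1, whereas integrating the same nodal equation analytically over the step (holding the upstream value frozen) remains bounded for any step size. The toy Python example below only illustrates that numerical idea; it is not the report's two-phase flow solver.

    import numpy as np

    def step_explicit(u, courant):
        """Explicit Euler step for du_i/dt = (u_{i-1} - u_i)/tau,
        with courant = dt/tau; stable only for courant <= 1."""
        return u + courant * (np.roll(u, 1) - u)

    def step_analytic(u, courant):
        """Exact integration of the same nodal ODE over the step:
        u_i(t+dt) = u_{i-1} + (u_i - u_{i-1}) * exp(-dt/tau)."""
        upstream = np.roll(u, 1)
        return upstream + (u - upstream) * np.exp(-courant)

    u0 = np.zeros(50)
    u0[:10] = 1.0
    big_step = 5.0                                      # five times the explicit limit
    print(np.abs(step_explicit(u0, big_step)).max())    # 5.0: overshoot, unstable
    print(np.abs(step_analytic(u0, big_step)).max())    # ~1.0: bounded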

  17. Tank 48H Waste Composition and Results of Investigation of Analytical Methods

    SciTech Connect

    Walker , D.D.

    1997-04-02

    This report serves two purposes. First, it documents the analytical results of Tank 48H samples taken between April and August 1996. Second, it describes investigations of the precision of the sampling and analytical methods used on the Tank 48H samples.

  18. Statistically Qualified Neuro-Analytic system and Method for Process Monitoring

    SciTech Connect

    Vilim, Richard B.; Garcia, Humberto E.; Chen, Frederick W.

    1998-11-04

    An apparatus and method for monitoring a process involves development and application of a statistically qualified neuro-analytic (SQNA) model to accurately and reliably identify process change. The development of the SQNA model is accomplished in two steps: deterministic model adaptation and stochastic model adaptation. Deterministic model adaptation involves formulating an analytic model of the process representing known process characteristics, augmenting the analytic model with a neural network that captures unknown process characteristics, and training the resulting neuro-analytic model by adjusting the neural network weights according to a unique scaled equation error minimization technique. Stochastic model adaptation involves qualifying any remaining uncertainty in the trained neuro-analytic model by formulating a likelihood function, given an error propagation equation, for computing the probability that the neuro-analytic model generates measured process output. Preferably, the developed SQNA model is validated using known sequential probability ratio tests and applied to the process as an on-line monitoring system.

  19. Manual of analytical methods for the Industrial Hygiene Chemistry Laboratory

    SciTech Connect

    Greulich, K.A.; Gray, C.E.

    1991-08-01

    This Manual is compiled from techniques used in the Industrial Hygiene Chemistry Laboratory of Sandia National Laboratories in Albuquerque, New Mexico. The procedures are similar to those used in other laboratories devoted to industrial hygiene practices. Some of the methods are standard; some, modified to suit our needs; and still others, developed at Sandia. The authors have attempted to present all methods in a simple and concise manner but in sufficient detail to make them readily usable. It is not to be inferred that these methods are universal for any type of sample, but they have been found very reliable for the types of samples mentioned.

  20. Analytic method for calculating properties of random walks on networks

    NASA Technical Reports Server (NTRS)

    Goldhirsch, I.; Gefen, Y.

    1986-01-01

    A method for calculating the properties of discrete random walks on networks is presented. The method divides complex networks into simpler units whose contribution to the mean first-passage time is calculated. The simplified network is then further iterated. The method is demonstrated by calculating mean first-passage times on a segment, a segment with a single dangling bond, a segment with many dangling bonds, and a looplike structure. The results are analyzed and related to the applicability of the Einstein relation between conductance and diffusion.
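
    For small networks, the quantities produced by this kind of iterative reduction can be checked directly by solving the standard linear system for mean first-passage times, m(v) = 1 + (1/deg v) Σ_{u~v} m(u) with m(target) = 0. The Python sketch below does exactly that; it is a brute-force check under the stated conventions, not the paper's reduction method. For a four-node path 0-1-2-3 with the target at node 3, it gives mean first-passage times of 9, 8 and 5 steps from nodes 0, 1 and 2.

    import numpy as np

    def mean_first_passage_times(adjacency, target):
        """MFPT to `target` for an unbiased random walk on an undirected graph."""
        A = np.asarray(adjacency, dtype=float)
        n = A.shape[0]
        deg = A.sum(axis=1)
        nodes = [v for v in range(n) if v != target]
        M = np.eye(len(nodes))
        for i, v in enumerate(nodes):
            for j, u in enumerate(nodes):
                M[i, j] -= A[v, u] / deg[v]
        m = np.linalg.solve(M, np.ones(len(nodes)))
        out = np.zeros(n)
        out[nodes] = m
        return out

    path = np.diag(np.ones(3), 1) + np.diag(np.ones(3), -1)   # path 0-1-2-3
    print(mean_first_passage_times(path, target=3))           # [9. 8. 5. 0.]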

  1. Downstream processing and chromatography based analytical methods for production of vaccines, gene therapy vectors, and bacteriophages

    PubMed Central

    Kramberger, Petra; Urbas, Lidija; Štrancar, Aleš

    2015-01-01

    Downstream processing of nanoplexes (viruses, virus-like particles, bacteriophages) is characterized by complexity of the starting material, number of purification methods to choose from, regulations that are setting the frame for the final product and analytical methods for upstream and downstream monitoring. This review gives an overview on the nanoplex downstream challenges and chromatography based analytical methods for efficient monitoring of the nanoplex production. PMID:25751122

  2. Fast separation and quantification method for nitroguanidine and 2,4-dinitroanisole by ultrafast liquid chromatography-tandem mass spectrometry.

    PubMed

    Mu, Ruipu; Shi, Honglan; Yuan, Yuan; Karnjanapiboonwong, Adcharee; Burken, Joel G; Ma, Yinfa

    2012-04-01

    Explosives are now persistent environmental pollutants that are targets of remediation and monitoring in a wide array of environmental media. Nitroguanidine (NG) and 2,4-dinitroanisole (DNAN) are two insensitive energetic compounds recently used as munitions explosives. To protect the environment and human health, the levels of these compounds in soils and waters need to be monitored. However, no sensitive analytical methods, such as liquid chromatography-tandem mass spectrometry (LC-MS/MS), had been developed for detecting these new compounds at trace levels while concurrently monitoring the common explosives. In general, the concentrations of explosives in either soil or water samples are very low and widely distributed. Therefore, a fast and sensitive method is required to monitor these compounds and increase our ability to find and address the threats they pose to human health and ecological receptors. In this study, a fast and sensitive analytical method was developed to quantitatively determine NG and DNAN in soil, tap water, and river water by ultrafast LC-MS/MS. To make the method a comprehensive analytical technique for other explosives as well, commonly used explosives were included in the method development, such as octahydro-1,3,5,7-tetranitro-1,3,5,7-tetrazocine (HMX), 1,3,5-trinitroperhydro-1,3,5-triazine (RDX), 2,4,6-trinitrotoluene (TNT), 2-amino-4,6-dinitrotoluene (ADNT), and pentaerythritol tetranitrate (PETN). The method detection limits (MDLs) of these compounds in soil ranged from 0.2 to 5 ppb, and good linearity was obtained over a concentration range of 0.5-200 ppb. The recoveries of some compounds are equal to or better than those of the current EPA methods, but with much higher sensitivities.

  3. Base flow separation: A comparison of analytical and mass balance methods

    NASA Astrophysics Data System (ADS)

    Lott, Darline A.; Stewart, Mark T.

    2016-04-01

    Base flow is the ground water contribution to stream flow. Many activities, such as water resource management, calibrating hydrological and climate models, and studies of basin hydrology, require good estimates of base flow. The base flow component of stream flow is usually determined by separating a stream hydrograph into two components, base flow and runoff. Analytical methods, mathematical functions or algorithms used to calculate base flow directly from discharge, are the most widely used base flow separation methods and are often used without calibration to basin or gage-specific parameters other than basin area. In this study, six analytical methods are compared to a mass balance method, the conductivity mass-balance (CMB) method. The base flow index (BFI) values for 35 stream gages are obtained from each of the seven methods with each gage having at least two consecutive years of specific conductance data and 30 years of continuous discharge data. BFI is cumulative base flow divided by cumulative total discharge over the period of record of analysis. The BFI value is dimensionless, and always varies from 0 to 1. Areas of basins used in this study range from 27 km^2 to 68,117 km^2. BFI was first determined for the uncalibrated analytical methods. The parameters of each analytical method were then calibrated to produce BFI values as close to the CMB derived BFI values as possible. One of the methods, the power function (aQ^b + cQ) method, is inherently calibrated and was not recalibrated. The uncalibrated analytical methods have an average correlation coefficient of 0.43 when compared to CMB-derived values, and an average correlation coefficient of 0.93 when calibrated with the CMB method. Once calibrated, the analytical methods can closely reproduce the base flow values of a mass balance method. Therefore, it is recommended that analytical methods be calibrated against tracer or mass balance methods.
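
    The conductivity mass-balance separation and the BFI defined above reduce to a few lines of arithmetic: streamflow is treated as a two-component mixture of base flow (high specific conductance) and runoff (low specific conductance), and the mixing equation is solved for the base flow portion at each time step. This is the standard CMB mixing form, sketched here for illustration; the end-member conductances are user-supplied calibration inputs, and the exact implementation details of the study are not reproduced.

    import numpy as np

    def cmb_baseflow(discharge, spec_cond, sc_baseflow, sc_runoff):
        """Conductivity mass balance: Qb = Q * (SC - SC_ro) / (SC_bf - SC_ro),
        clipped to the physically meaningful range 0 <= Qb <= Q."""
        q = np.asarray(discharge, dtype=float)
        sc = np.asarray(spec_cond, dtype=float)
        qb = q * (sc - sc_runoff) / (sc_baseflow - sc_runoff)
        return np.clip(qb, 0.0, q)

    def base_flow_index(discharge, baseflow):
        """BFI: cumulative base flow divided by cumulative total discharge."""
        return float(np.sum(baseflow) / np.sum(discharge))

    An analytical filter can then be calibrated by adjusting its parameters until the BFI it produces matches the BFI obtained from the CMB separation, which is the calibration strategy the study recommends.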

  4. EVALUATION OF ANALYTICAL METHODS FOR DETERMINING PESTICIDES IN BABY FOOD

    EPA Science Inventory

    Three extraction methods and two detection techniques for determining pesticides in baby food were evaluated. The extraction techniques examined were supercritical fluid extraction (SFE), enhanced solvent extraction (ESE), and solid phase extraction (SPE). The detection techni...

  5. Analytical method for promoting process capability of shock absorption steel.

    PubMed

    Sung, Wen-Pei; Shih, Ming-Hsiang; Chen, Kuen-Suan

    2003-01-01

    Mechanical properties and low cycle fatigue are two factors that must be considered in developing new types of steel for shock absorption. Process capability and process control are significant factors in achieving the purpose of research and development programs. Often-used evaluation methods fail to measure process yield and process centering, so this paper uses the Taguchi loss function as the basis for an evaluation method, and for the steps for assessing the quality of mechanical properties and process control of an iron and steel manufacturer. The establishment of this method can serve the research and development and manufacturing industries and lays a foundation for enhancing process control ability, allowing the selection of manufacturing processes that are more reliable than those chosen by other commonly used evaluation methods.
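
    The evaluation index is built on the Taguchi loss function, whose expected value penalises off-centering and spread simultaneously, unlike yield-only indices. A minimal sketch of that expected loss is shown below; the constant k and the way the paper combines the loss into its final capability index are not reproduced here.

    def expected_taguchi_loss(mean, std, target, k=1.0):
        """Expected quadratic loss E[k*(Y - target)^2] = k*(std^2 + (mean - target)^2):
        grows both when the process drifts off target and when it spreads."""
        return k * (std ** 2 + (mean - target) ** 2)

    # Same spread, different centering:
    print(expected_taguchi_loss(10.0, 0.5, target=10.0))   # 0.25
    print(expected_taguchi_loss(10.8, 0.5, target=10.0))   # 0.89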

  6. General adaptive guidance using nonlinear programming constraint solving methods (FAST)

    NASA Astrophysics Data System (ADS)

    Skalecki, Lisa; Martin, Marc

    An adaptive, general purpose, constraint solving guidance algorithm called FAST (Flight Algorithm to Solve Trajectories) has been developed by the authors in response to the requirements for the Advanced Launch System (ALS). The FAST algorithm can be used for all mission phases for a wide range of Space Transportation Vehicles without code modification because of the general formulation of the nonlinear programming (NLP) problem, and the general trajectory simulation used to predict constraint values. The approach allows on-board re-targeting for severe weather and changes in payload or mission parameters, increasing flight reliability and dependability while reducing the amount of pre-flight analysis that must be performed. The algorithm is described in general in this paper. Three degree of freedom simulation results are presented for application of the algorithm to ascent and reentry phases of an ALS mission, and Mars aerobraking. Flight processor CPU requirement data are also shown.

  7. Computational Neutronics Methods and Transmutation Performance Analyses for Fast Reactors

    SciTech Connect

    R. Ferrer; M. Asgari; S. Bays; B. Forget

    2007-03-01

    The once-through fuel cycle strategy in the United States for the past six decades has resulted in an accumulation of Light Water Reactor (LWR) Spent Nuclear Fuel (SNF). This SNF contains considerable amounts of transuranic (TRU) elements that limit the volumetric capacity of the current planned repository strategy. A possible way of maximizing the volumetric utilization of the repository is to separate the TRU from the LWR SNF through a process such as UREX+1a, and convert it into fuel for a fast-spectrum Advanced Burner Reactor (ABR). The key advantage in this scenario is the assumption that recycling of TRU in the ABR (through pyroprocessing or some other approach), along with a low capture-to-fission probability in the fast reactor’s high-energy neutron spectrum, can effectively decrease the decay heat and toxicity of the waste being sent to the repository. The decay heat and toxicity reduction can thus minimize the need for multiple repositories. This report summarizes the work performed by the fuel cycle analysis group at the Idaho National Laboratory (INL) to establish the specific technical capability for performing fast reactor fuel cycle analysis and its application to a high-priority ABR concept. The high-priority ABR conceptual design selected is a metallic-fueled, 1000 MWth SuperPRISM (S-PRISM)-based ABR with a conversion ratio of 0.5. Results from the analysis showed excellent agreement with reference values. The independent model was subsequently used to study the effects of excluding curium from the transuranic (TRU) external feed coming from the LWR SNF and recycling the curium produced by the fast reactor itself through pyroprocessing. Current studies to be published this year focus on analyzing the effects of different separation strategies as well as heterogeneous TRU target systems.

  8. Optimal analytic method for the nonlinear Hasegawa-Mima equation

    NASA Astrophysics Data System (ADS)

    Baxter, Mathew; Van Gorder, Robert A.; Vajravelu, Kuppalapalle

    2014-05-01

    The Hasegawa-Mima equation is a nonlinear partial differential equation that describes the electric potential due to a drift wave in a plasma. In the present paper, we apply the method of homotopy analysis to a slightly more general Hasegawa-Mima equation, which accounts for hyper-viscous damping or viscous dissipation. First, we outline the method for the general initial/boundary value problem over a compact rectangular spatial domain. We use a two-stage method, where both the convergence control parameter and the auxiliary linear operator are optimally selected to minimize the residual error due to the approximation. To do the latter, we consider a family of operators parameterized by a constant which gives the decay rate of the solutions. After outlining the general method, we consider a number of concrete examples in order to demonstrate the utility of this approach. The results enable us to study properties of the initial/boundary value problem for the generalized Hasegawa-Mima equation. In several cases considered, we are able to obtain solutions with extremely small residual errors after relatively few iterations are computed (residual errors on the order of 10^-15 are found in multiple cases after only three iterations). The results demonstrate that selecting a parameterized auxiliary linear operator can be extremely useful for minimizing residual errors when used concurrently with the optimal homotopy analysis method, suggesting that this approach can prove useful for a number of nonlinear partial differential equations arising in physics and nonlinear mechanics.

  9. A new analytic expression for fast calculation of the transient near and far field of a rectangular baffled piston.

    PubMed

    Ortega, Alejandra; Tong, Ling; D'hooge, Jan

    2014-04-01

    Essential to (cardiac) 3D ultrasound are 2D matrix array transducer technology and the associated (two-stage) beam forming. Given the large number of degrees of freedom and the complexity of this problem, simulation tools play an important role. To this end, the impulse response (IR) method is commonly used. Unfortunately, given the large element count of 2D array transducers, simulation times become significant, jeopardizing the efficacy of the design process. The aim of this study was therefore to derive a new analytical expression to calculate the IR more efficiently and thus speed up the calculation process. To compare accuracy and computation time, the reference and the proposed methods were implemented in MATLAB and contrasted. For all points of observation tested, the IRs obtained with both methods were identical. The mean calculation time, however, was reduced on average by a factor of 3.93±0.03. The proposed IR method therefore speeds up the calculation of the IR of an individual transducer element while remaining perfectly accurate. This new expression will be particularly relevant for 2D matrix transducer design, where computation times currently remain a bottleneck in the design process. PMID:24447860

  10. An analytical method to predict efficiency of aircraft gearboxes

    NASA Technical Reports Server (NTRS)

    Anderson, N. E.; Loewenthal, S. H.; Black, J. D.

    1984-01-01

    A spur gear efficiency prediction method previously developed by the authors was extended to include power loss of planetary gearsets. A friction coefficient model was developed for MIL-L-7808 oil based on disc machine data. This combined with the recent capability of predicting losses in spur gears of nonstandard proportions allows the calculation of power loss for complete aircraft gearboxes that utilize spur gears. The method was applied to the T56/501 turboprop gearbox and compared with measured test data. Bearing losses were calculated with large scale computer programs. Breakdowns of the gearbox losses point out areas for possible improvement.

  11. Teaching Analytical Method Development in an Undergraduate Instrumental Analysis Course

    ERIC Educational Resources Information Center

    Lanigan, Katherine C.

    2008-01-01

    Method development and assessment, central components of carrying out chemical research, require problem-solving skills. This article describes a pedagogical approach for teaching these skills through the adaptation of published experiments and application of group-meeting style discussions to the curriculum of an undergraduate instrumental…

  12. Method and apparatus for automated processing and aliquoting of whole blood samples for analysis in a centrifugal fast analyzer

    DOEpatents

    Burtis, Carl A.; Johnson, Wayne F.; Walker, William A.

    1988-01-01

    A rotor and disc assembly for use in a centrifugal fast analyzer. The assembly is designed to process multiple samples of whole blood followed by aliquoting of the resultant serum into precisely measured samples for subsequent chemical analysis. The assembly requires minimal operator involvement with no mechanical pipetting. The system comprises (1) a whole blood sample disc, (2) a serum sample disc, (3) a sample preparation rotor, and (4) an analytical rotor. The blood sample disc and serum sample disc are designed with a plurality of precision bore capillary tubes arranged in a spoked array. Samples of blood are loaded into the blood sample disc in capillary tubes filled by capillary action and centrifugally discharged into cavities of the sample preparation rotor where separation of serum and solids is accomplished. The serum is loaded into the capillaries of the serum sample disc by capillary action and subsequently centrifugally expelled into cuvettes of the analytical rotor for analysis by conventional methods.

  13. Hyperspectral imaging based method for fast characterization of kidney stone types.

    PubMed

    Blanco, Francisco; López-Mesas, Montserrat; Serranti, Silvia; Bonifazi, Giuseppe; Havel, Josef; Valiente, Manuel

    2012-07-01

    The formation of kidney stones is a common and highly studied disease, which causes intense pain and presents a high recidivism. In order to find the causes of this problem, the characterization of the main compounds is of great importance. In this sense, the analysis of the composition and structure of the stone can give key information about the urine parameters during crystal growth. However, the usual methods employed are slow, analyst dependent, and yield limited information. In the present work, the near infrared (NIR)-hyperspectral imaging technique was used for the analysis of 215 samples of kidney stones, including the main types usually found and their mixtures. The NIR reflectance spectra of the analyzed stones showed significant differences that were used for their classification. To do so, a classification method was created using artificial neural networks, which showed a probability higher than 90% of correct classification of the stones. The promising results, robust methodology, and fast analytical process, without the need for expert assistance, allow easy implementation in clinical laboratories, offering the urologist a rapid diagnosis that should contribute to minimizing urolithiasis recidivism. PMID:22894510

  15. Advanced and In Situ Analytical Methods for Solar Fuel Materials.

    PubMed

    Chan, Candace K; Tüysüz, Harun; Braun, Artur; Ranjan, Chinmoy; La Mantia, Fabio; Miller, Benjamin K; Zhang, Liuxian; Crozier, Peter A; Haber, Joel A; Gregoire, John M; Park, Hyun S; Batchellor, Adam S; Trotochaud, Lena; Boettcher, Shannon W

    2016-01-01

    In situ and operando techniques can play important roles in the development of better performing photoelectrodes, photocatalysts, and electrocatalysts by helping to elucidate crucial intermediates and mechanistic steps. The development of high throughput screening methods has also accelerated the evaluation of relevant photoelectrochemical and electrochemical properties for new solar fuel materials. In this chapter, several in situ and high throughput characterization tools are discussed in detail along with their impact on our understanding of solar fuel materials.

  16. EVALUATION OF ANALYTICAL REPORTING ERRORS GENERATED AS DESCRIBED IN SW-846 METHOD 8261A

    EPA Science Inventory

    SW-846 Method 8261A incorporates the vacuum distillation of analytes from samples, and their recoveries are characterized by internal standards. The internal standards measure recoveries with confidence intervals as functions of physical properties. The frequency the calculate...

  17. DEMONSTRATION BULLETIN: FIELD ANALYTICAL SCREENING PROGRAM: PCP METHOD - U.S. ENVIRONMENTAL PROTECTION AGENCY

    EPA Science Inventory

    The Superfund Innovative Technology Evaluation (SITE) Program evaluates new technologies to assess their effectiveness. This bulletin summarizes results from the 1993 SITE demonstration of the Field Analytical Screening Program (FASP) Pentachlorophenol (PCP) Method to determine P...

  18. [Progress in sample preparation and analytical methods for trace polar small molecules in complex samples].

    PubMed

    Zhang, Qianchun; Luo, Xialin; Li, Gongke; Xiao, Xiaohua

    2015-09-01

    Small polar molecules such as nucleosides, amines, amino acids are important analytes in biological, food, environmental, and other fields. It is necessary to develop efficient sample preparation and sensitive analytical methods for rapid analysis of these polar small molecules in complex matrices. Some typical materials in sample preparation, including silica, polymer, carbon, boric acid and so on, are introduced in this paper. Meanwhile, the applications and developments of analytical methods of polar small molecules, such as reversed-phase liquid chromatography, hydrophilic interaction chromatography, etc., are also reviewed. PMID:26753274

  19. A Vocal-Based Analytical Method for Goose Behaviour Recognition

    PubMed Central

    Steen, Kim Arild; Therkildsen, Ole Roland; Karstoft, Henrik; Green, Ole

    2012-01-01

    Since human-wildlife conflicts are increasing, the development of cost-effective methods for reducing damage or conflict levels is important in wildlife management. A wide range of devices to detect and deter animals causing conflict are used for this purpose, although their effectiveness is often highly variable, due to habituation to disruptive or disturbing stimuli. Automated recognition of behaviours could form a critical component of a system capable of altering the disruptive stimuli to avoid this. In this paper we present a novel method to automatically recognise goose behaviour based on vocalisations from flocks of free-living barnacle geese (Branta leucopsis). The geese were observed and recorded in a natural environment, using a shielded shotgun microphone. The classification used Support Vector Machines (SVMs), which had been trained with labeled data. Greenwood Function Cepstral Coefficients (GFCC) were used as features for the pattern recognition algorithm, as they can be adjusted to the hearing capabilities of different species. Three behaviours are classified based on this approach, and the method achieves good recognition of foraging behaviour (86–97% sensitivity, 89–98% precision) and reasonable recognition of flushing (79–86%, 66–80%) and landing behaviour (73–91%, 79–92%). The Support Vector Machine has proven to be a robust classifier for this kind of classification, as generality and non-linear capabilities are important. We conclude that vocalisations can be used to automatically detect the behaviour of conflict wildlife species and, as such, may be used as an integrated part of a wildlife management system. PMID:22737037

  20. A vocal-based analytical method for goose behaviour recognition.

    PubMed

    Steen, Kim Arild; Therkildsen, Ole Roland; Karstoft, Henrik; Green, Ole

    2012-01-01

    Since human-wildlife conflicts are increasing, the development of cost-effective methods for reducing damage or conflict levels is important in wildlife management. A wide range of devices to detect and deter animals causing conflict are used for this purpose, although their effectiveness is often highly variable, due to habituation to disruptive or disturbing stimuli. Automated recognition of behaviours could form a critical component of a system capable of altering the disruptive stimuli to avoid this. In this paper we present a novel method to automatically recognise goose behaviour based on vocalisations from flocks of free-living barnacle geese (Branta leucopsis). The geese were observed and recorded in a natural environment, using a shielded shotgun microphone. The classification used Support Vector Machines (SVMs), which had been trained with labeled data. Greenwood Function Cepstral Coefficients (GFCC) were used as features for the pattern recognition algorithm, as they can be adjusted to the hearing capabilities of different species. Three behaviours are classified based on this approach, and the method achieves good recognition of foraging behaviour (86-97% sensitivity, 89-98% precision) and reasonable recognition of flushing (79-86%, 66-80%) and landing behaviour (73-91%, 79-92%). The Support Vector Machine has proven to be a robust classifier for this kind of classification, as generality and non-linear capabilities are important. We conclude that vocalisations can be used to automatically detect the behaviour of conflict wildlife species and, as such, may be used as an integrated part of a wildlife management system.
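
    The classification stage described in the two records above (an SVM trained on labelled cepstral feature vectors) can be sketched as below. The GFCC extraction itself is not reproduced; `features` stands in for per-call GFCC vectors and `behaviour` for the labels (e.g. foraging, flushing, landing). All names and data are illustrative assumptions, not the authors' implementation.

    ```python
    # Minimal sketch of SVM classification on precomputed cepstral features.
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(1)
    features = rng.normal(size=(300, 13))        # placeholder GFCC feature vectors
    behaviour = rng.integers(0, 3, size=300)     # placeholder behaviour labels

    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
    scores = cross_val_score(clf, features, behaviour, cv=5)
    print("cross-validated accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))
    ```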

  1. Method for using fast fluidized bed dry bottom coal gasification

    DOEpatents

    Snell, George J.; Kydd, Paul H.

    1983-01-01

    Carbonaceous solid material such as coal is gasified in a fast fluidized bed gasification system utilizing dual fluidized beds of hot char. The coal in particulate form is introduced along with oxygen-containing gas and steam into the fast fluidized bed gasification zone of a gasifier assembly, wherein the upward superficial gas velocity exceeds about 5.0 ft/sec and the temperature is 1500°-1850°F. The resulting effluent gas and substantial char are passed through a primary cyclone separator, from which char solids are returned to the fluidized bed. Gas from the primary cyclone separator is passed to a secondary cyclone separator, from which remaining fine char solids are returned through an injection nozzle, together with additional steam and oxygen-containing gas, to an oxidation zone located at the bottom of the gasifier, wherein the upward gas velocity ranges from about 3-15 ft/sec and the temperature is maintained at 1600°-2000°F. This gasification arrangement provides for increased utilization of the secondary char material to produce higher overall carbon conversion and product yields in the process.

  2. Using an analytical geometry method to improve tiltmeter data presentation

    USGS Publications Warehouse

    Su, W.-J.

    2000-01-01

    The tiltmeter is a useful tool for geologic and geotechnical applications. To obtain full benefit from the tiltmeter, easy and accurate data presentations should be used. Unfortunately, the most commonly used method for tilt data reduction now may yield inaccurate and low-resolution results. This article describes a simple, accurate, and high-resolution approach developed at the Illinois State Geological Survey for data reduction and presentation. The orientation of tiltplates is determined first by using a trigonometric relationship, followed by a matrix transformation, to obtain the true amount of rotation change of the tiltplate at any given time. The mathematical derivations used for the determination and transformation are then coded into an integrated PC application by adapting the capabilities of commercial spreadsheet, database, and graphics software. Examples of data presentation from tiltmeter applications in studies of landfill covers, characterizations of mine subsidence, and investigations of slope stability are also discussed.
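
    The record above mentions a trigonometric orientation determination followed by a matrix transformation to recover the true rotation of the tiltplate. As a loose, hedged illustration of that kind of step (the article's actual formulas are not given here), the sketch below rotates the two orthogonal tilt components measured in the plate's own axes into geographic north/east components once the plate azimuth is known; the function name, convention, and numbers are assumptions.

    ```python
    # Minimal sketch: rotate tilt components from tiltplate axes to geographic axes.
    import numpy as np

    def plate_to_geographic(tilt_x, tilt_y, azimuth_deg):
        """x is along the plate axis at the given azimuth (clockwise from north),
        y is 90 degrees clockwise from x; returns (north, east) components."""
        a = np.deg2rad(azimuth_deg)
        rot = np.array([[np.cos(a), -np.sin(a)],
                        [np.sin(a),  np.cos(a)]])
        north, east = rot @ np.array([tilt_x, tilt_y])
        return north, east

    # Example: 12 and -3 microradians measured on a plate oriented 35 deg from north.
    print(plate_to_geographic(12.0, -3.0, 35.0))
    ```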

  3. Analytical methods of laser spectroscopy for biomedical applications

    NASA Astrophysics Data System (ADS)

    Martyshkin, Dmitri V.

    Different aspects of the application of laser spectroscopy in biomedical research have been considered. A growing demand for molecular sensing techniques in biomedical and environmental research has led to the introduction of existing spectroscopic techniques as well as to the development of new methods. The applications of laser-induced fluorescence, Raman scattering, cavity ring-down spectroscopy, and laser-induced breakdown spectroscopy to the monitoring of superoxide dismutase (SOD) and hemoglobin levels, the study of the characteristics of light-curing dental restorative materials, and the environmental monitoring of toxic metal ion levels are presented. The development of new solid-state tunable laser sources based on color center crystals for these applications is presented as well.

  4. A simple parallel analytical method of prenatal screening.

    PubMed

    Li, Ding; Yang, Hao; Zhang, Wen-Hong; Pan, Hao; Wen, Dong-Qing; Han, Feng-Chan; Guo, Hui-Fang; Wang, Xiao-Ming; Yan, Xiao-Jun

    2006-01-01

    Protein microarray technology has progressed rapidly in the past few years, but it is still hard to popularize in many developing countries or small hospitals owing to the technical expertise required in practice. We developed a cheap and easy-to-use protein microarray based on a dot immunogold filtration assay for parallel analysis of ToRCH-related antibodies against Toxoplasma gondii, rubella virus, cytomegalovirus, and herpes simplex virus types 1 and 2 in sera of pregnant women. It does not require any expensive instruments, and the assay results can be clearly recognized by the naked eye. We analyzed 186 random sera of outpatients at the gynecological department with our microarray and a commercial ELISA kit, and the results showed no significant difference between the two detection methods. Validated by clinical application, the microarray is easy to use and has a unique advantage in cost and time. It is more suitable for mass prenatal screening or epidemiological screening than the ELISA format.

  5. Analytical Chemistry Laboratory (ACL) procedure compendium. Volume 4, Organic methods

    SciTech Connect

    Not Available

    1993-08-01

    This interim notice covers the following: extractable organic halides in solids, total organic halides, analysis by gas chromatography/Fourier transform-infrared spectroscopy, hexadecane extracts for volatile organic compounds, GC/MS analysis of VOCs, GC/MS analysis of methanol extracts of cryogenic vapor samples, screening of semivolatile organic extracts, GPC cleanup for semivolatiles, sample preparation for GC/MS for semi-VOCs, analysis for pesticides/PCBs by GC with electron capture detection, sample preparation for pesticides/PCBs in water and soil sediment, report preparation, Florisil column cleanup for pesticides/PCBs, silica gel and acid-base partition cleanup of samples for semi-VOCs, concentrate acid wash cleanup, carbon determination in solids using Coulometrics' CO2 coulometer, determination of total carbon/total organic carbon/total inorganic carbon in radioactive liquids/soils/sludges by the hot persulfate method, analysis of solids for carbonates using Coulometrics' Model 5011 coulometer, and Soxhlet extraction.

  6. Fast repetition rate (FRR) fluorometer and method for measuring fluorescence and photosynthetic parameters

    DOEpatents

    Kolber, Z.; Falkowski, P.

    1995-06-20

    A fast repetition rate fluorometer device and method for measuring in vivo fluorescence of phytoplankton or higher plants chlorophyll and photosynthetic parameters of phytoplankton or higher plants is revealed. The phytoplankton or higher plants are illuminated with a series of fast repetition rate excitation flashes effective to bring about and measure resultant changes in fluorescence yield of their Photosystem II. The series of fast repetition rate excitation flashes has a predetermined energy per flash and a rate greater than 10,000 Hz. Also, disclosed is a flasher circuit for producing the series of fast repetition rate flashes. 14 figs.

  7. Fast repetition rate (FRR) fluorometer and method for measuring fluorescence and photosynthetic parameters

    DOEpatents

    Kolber, Zbigniew; Falkowski, Paul

    1995-06-20

    A fast repetition rate fluorometer device and method for measuring in vivo fluorescence of phytoplankton or higher plants chlorophyll and photosynthetic parameters of phytoplankton or higher plants by illuminating the phytoplankton or higher plants with a series of fast repetition rate excitation flashes effective to bring about and measure resultant changes in fluorescence yield of their Photosystem II. The series of fast repetition rate excitation flashes has a predetermined energy per flash and a rate greater than 10,000 Hz. Also, disclosed is a flasher circuit for producing the series of fast repetition rate flashes.

  8. A Study of Instructional Methods Used in Fast-Paced Classes

    ERIC Educational Resources Information Center

    Lee, Seon-Young; Olszewski-Kubilius, Paula

    2006-01-01

    This study involved 15 secondary-level teachers who taught fast-paced classes at a university-based summer program and similar regularly paced classes in their local schools, in order to examine how teachers differentiate or modify instructional methods and content selections for fast-paced classes. Interviews were conducted with the teachers…

  9. An introduction to clinical microeconomic analysis: purposes and analytic methods.

    PubMed

    Weintraub, W S; Mauldin, P D; Becker, E R

    1994-06-01

    The recent concern with health care economics has fostered the development of a new discipline that is generally called clinical microeconomics. This is a discipline in which microeconomic methods are used to study the economics of specific medical therapies. It is possible to perform stand-alone cost analyses, but more profound insight into the medical decision-making process may be gained by combining cost studies with measures of outcome. This is most often accomplished with cost-effectiveness or cost-utility studies. In cost-effectiveness studies there is one measure of outcome, often death. In cost-utility studies there are multiple measures of outcome, which must be grouped together to give an overall picture of outcome, or utility. There are theoretical limitations to the determination of utility that must be accepted to perform this type of analysis. A summary measure of outcome is quality-adjusted life years (QALYs), which is utility multiplied by socially discounted survival. Discounting is used because people value a year of future life less than a year of present life. Costs are made up of in-hospital direct, professional, follow-up direct, and follow-up indirect costs. Direct costs are for medical services. Indirect costs reflect opportunity costs such as lost time at work. Cost estimates are often based on marginal costs, or the cost for one additional procedure of the same type. Finally, an overall statistic may be generated as cost per unit increase in effectiveness, such as dollars per QALY. (ABSTRACT TRUNCATED AT 250 WORDS) PMID:10151059
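
    The quantities described above (discounted QALYs and a cost per QALY) can be illustrated with a small worked example. All numbers below, including the 3% discount rate and the two hypothetical therapies, are invented for illustration and are not from the article.

    ```python
    # Worked example: discounted QALYs and an incremental cost per QALY gained.

    def discounted_qalys(utility_per_year, years, rate=0.03):
        """Sum utility-weighted life-years, discounting each future year."""
        return sum(utility_per_year / (1.0 + rate) ** t for t in range(years))

    # Hypothetical comparison of a new therapy against usual care (invented numbers).
    qaly_new = discounted_qalys(utility_per_year=0.80, years=10)
    qaly_old = discounted_qalys(utility_per_year=0.70, years=8)
    cost_new, cost_old = 52_000.0, 30_000.0   # direct + indirect costs, invented

    icer = (cost_new - cost_old) / (qaly_new - qaly_old)
    print(f"incremental cost-effectiveness ratio: ${icer:,.0f} per QALY gained")
    ```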

  10. A Novel and Fast Purification Method for Nucleoside Transporters.

    PubMed

    Hao, Zhenyu; Thomsen, Maren; Postis, Vincent L G; Lesiuk, Amelia; Sharples, David; Wang, Yingying; Bartlam, Mark; Goldman, Adrian

    2016-01-01

    Nucleoside transporters (NTs) play critical biological roles in humans, and to understand the molecular mechanism of nucleoside transport requires high-resolution structural information. However, the main bottleneck for structural analysis of NTs is the production of pure, stable, and high quality native protein for crystallization trials. Here we report a novel membrane protein expression and purification strategy, including construction of a high-yield membrane protein expression vector, and a new and fast purification protocol for NTs. The advantages of this strategy are the improved time efficiency, leading to high quality, active, stable membrane proteins, and the efficient use of reagents and consumables. Our strategy might serve as a useful point of reference for investigating NTs and other membrane proteins by clarifying the technical points of vector construction and improvements of membrane protein expression and purification. PMID:27376071

  11. A Novel and Fast Purification Method for Nucleoside Transporters

    PubMed Central

    Hao, Zhenyu; Thomsen, Maren; Postis, Vincent L. G.; Lesiuk, Amelia; Sharples, David; Wang, Yingying; Bartlam, Mark; Goldman, Adrian

    2016-01-01

    Nucleoside transporters (NTs) play critical biological roles in humans, and to understand the molecular mechanism of nucleoside transport requires high-resolution structural information. However, the main bottleneck for structural analysis of NTs is the production of pure, stable, and high quality native protein for crystallization trials. Here we report a novel membrane protein expression and purification strategy, including construction of a high-yield membrane protein expression vector, and a new and fast purification protocol for NTs. The advantages of this strategy are the improved time efficiency, leading to high quality, active, stable membrane proteins, and the efficient use of reagents and consumables. Our strategy might serve as a useful point of reference for investigating NTs and other membrane proteins by clarifying the technical points of vector construction and improvements of membrane protein expression and purification. PMID:27376071

  12. Analytical recovery of protozoan enumeration methods: have drinking water QMRA models corrected or created bias?

    PubMed

    Schmidt, P J; Emelko, M B; Thompson, M E

    2013-05-01

    Quantitative microbial risk assessment (QMRA) is a tool to evaluate the potential implications of pathogens in a water supply or other media and is of increasing interest to regulators. In the case of potentially pathogenic protozoa (e.g. Cryptosporidium oocysts and Giardia cysts), it is well known that the methods used to enumerate (oo)cysts in samples of water and other media can have low and highly variable analytical recovery. In these applications, QMRA has evolved from ignoring analytical recovery to addressing it in point-estimates of risk, and then to addressing variation of analytical recovery in Monte Carlo risk assessments. Often, variation of analytical recovery is addressed in exposure assessment by dividing concentration values that were obtained without consideration of analytical recovery by random beta-distributed recovery values. A simple mathematical proof is provided to demonstrate that this conventional approach to address non-constant analytical recovery in drinking water QMRA will lead to overestimation of mean pathogen concentrations. The bias, which can exceed an order of magnitude, is greatest when low analytical recovery values are common. A simulated dataset is analyzed using a diverse set of approaches to obtain distributions representing temporal variation in the oocyst concentration, and mean annual risk is then computed from each concentration distribution using a simple risk model. This illustrative example demonstrates that the bias associated with mishandling non-constant analytical recovery and non-detect samples can cause drinking water systems to be erroneously classified as surpassing risk thresholds.
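
    The bias described above can be reproduced with a tiny simulation: dividing observed concentrations by independent beta-distributed recovery draws inflates the estimated mean, because E[1/R] > 1/E[R]. The distribution parameters below are arbitrary illustrations, not values from the paper.

    ```python
    # Minimal simulation of the mean-concentration bias from naive recovery correction.
    import numpy as np

    rng = np.random.default_rng(42)
    n = 100_000
    true_conc = rng.gamma(shape=2.0, scale=5.0, size=n)       # (oo)cysts per unit volume
    recovery = rng.beta(a=2.0, b=5.0, size=n)                 # mean recovery ~ 0.29

    observed = true_conc * recovery                           # what low-recovery methods report
    naive_corrected = observed / rng.beta(2.0, 5.0, size=n)   # conventional adjustment

    print("true mean:          %.2f" % true_conc.mean())
    print("naively corrected:  %.2f" % naive_corrected.mean())  # noticeably larger
    ```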

  13. A capture-gated fast neutron detection method

    NASA Astrophysics Data System (ADS)

    Liu, Yi; Yang, Yi-Gang; Tai, Yang; Zhang, Zhi

    2016-07-01

    To address the shortage of neutron detectors used in radiation portal monitors (RPMs), caused by the ³He supply crisis, research on a cadmium-based capture-gated fast neutron detector is presented in this paper. The detector is composed of many 1 cm × 1 cm × 20 cm plastic scintillator cuboids, each covered by a 0.1 mm thick film of cadmium. The detector uses cadmium to absorb thermal neutrons and produce capture γ-rays that indicate the detection of neutrons, and uses the plastic scintillator to moderate neutrons and register γ-rays. This design removes the volume-competition constraint of traditional ³He counter-based fast neutron detectors, which hinders enhancement of the neutron detection efficiency. A detection efficiency of 21.66% ± 1.22% has been achieved with a 40.4 cm × 40.4 cm × 20 cm overall detector volume. This detector can measure both neutrons and γ-rays simultaneously. A small detector (20.2 cm × 20.2 cm × 20 cm) demonstrated a 3.3% false alarm rate for a ²⁵²Cf source with a neutron yield of 1841 n/s placed 50 cm away, within a 15 s measurement time. It also demonstrated a very low (<0.06%) false alarm rate for a 3.21×10⁵ Bq ¹³⁷Cs source. This detector offers a potential single-detector replacement for both the neutron and γ-ray detectors in RPM systems. Supported by National Natural Science Foundation of China (11175098, 11375095)

  14. Approximate Analytical Solutions of the Regularized Long Wave Equation Using the Optimal Homotopy Perturbation Method

    PubMed Central

    Căruntu, Bogdan

    2014-01-01

    The paper presents the optimal homotopy perturbation method, which is a new method to find approximate analytical solutions for nonlinear partial differential equations. Based on the well-known homotopy perturbation method, the optimal homotopy perturbation method presents an accelerated convergence compared to the regular homotopy perturbation method. The applications presented emphasize the high accuracy of the method by means of a comparison with previous results. PMID:25003150

  15. Catalytic fast co-pyrolysis of biomass and food waste to produce aromatics: Analytical Py-GC/MS study.

    PubMed

    Zhang, Bo; Zhong, Zhaoping; Min, Min; Ding, Kuan; Xie, Qinglong; Ruan, Roger

    2015-01-01

    In this study, catalytic fast co-pyrolysis (co-CFP) of corn stalk and food waste (FW) was carried out to produce aromatics using quantitative pyrolysis-gas chromatography/mass spectrometry (Py-GC/MS), and ZSM-5 zeolite in the hydrogen form was employed as the catalyst. Co-CFP temperature and a parameter called hydrogen to carbon effective ratio (H/C(eff) ratio) were examined for their effects on the relative content of aromatics. Experimental results showed that co-CFP temperature of 600 °C was optimal for the formation of aromatics and other organic pyrolysis products. Besides, H/C(eff) ratio had an important influence on product distribution. The yield of total organic pyrolysis products and relative content of aromatics increased non-linearly with increasing H/C(eff) ratio. There was an apparent synergistic effect between corn stalk and FW during co-CFP process, which promoted the production of aromatics significantly. Co-CFP of biomass and FW was an effective method to produce aromatics and other petrochemicals.

  16. Catalytic fast co-pyrolysis of biomass and food waste to produce aromatics: Analytical Py-GC/MS study.

    PubMed

    Zhang, Bo; Zhong, Zhaoping; Min, Min; Ding, Kuan; Xie, Qinglong; Ruan, Roger

    2015-01-01

    In this study, catalytic fast co-pyrolysis (co-CFP) of corn stalk and food waste (FW) was carried out to produce aromatics using quantitative pyrolysis-gas chromatography/mass spectrometry (Py-GC/MS), and ZSM-5 zeolite in the hydrogen form was employed as the catalyst. Co-CFP temperature and a parameter called hydrogen to carbon effective ratio (H/C(eff) ratio) were examined for their effects on the relative content of aromatics. Experimental results showed that co-CFP temperature of 600 °C was optimal for the formation of aromatics and other organic pyrolysis products. Besides, H/C(eff) ratio had an important influence on product distribution. The yield of total organic pyrolysis products and relative content of aromatics increased non-linearly with increasing H/C(eff) ratio. There was an apparent synergistic effect between corn stalk and FW during co-CFP process, which promoted the production of aromatics significantly. Co-CFP of biomass and FW was an effective method to produce aromatics and other petrochemicals. PMID:25864028

  17. An efficient and fast analytical procedure for the bromine determination in waste electrical and electronic equipment plastics.

    PubMed

    Taurino, R; Cannio, M; Mafredini, T; Pozzi, P

    2014-01-01

    In this study, X-ray fluorescence (XRF) spectroscopy was used in combination with micro-Raman spectroscopy for a fast determination of the bromine concentration, and hence of brominated flame retardant (BFR) compounds, in waste electrical and electronic equipment. Different samples from different recycling industries were characterized to evaluate the sorting performance of the treatment companies. This investigation is of prime research interest, since the impact of BFRs on the environment and their potential risk to human health is a current concern. Indeed, the new European Restriction of Hazardous Substances Directive (RoHS 2011/65/EU) demands that plastics with BFR concentrations above 0.1%, being potential health hazards, are identified and eliminated from the recycling process. Our results show the capability and potential of Raman spectroscopy, together with XRF analysis, as effective tools for the rapid detection of BFRs in plastic materials. In particular, the combined use of these two techniques can be considered a promising method suitable for quality-control applications in the recycling industry.

  18. A Comparative Evaluation of Analytical Methods to Allocate Individual Marks from a Team Mark

    ERIC Educational Resources Information Center

    Nepal, Kali

    2012-01-01

    This study presents a comparative evaluation of analytical methods to allocate individual marks from a team mark. Only the methods that use or can be converted into some form of mathematical equations are analysed. Some of these methods focus primarily on the assessment of the quality of teamwork product (product assessment) while the others put…

  19. Integrative Mixed Methods Data Analytic Strategies in Research on School Success in Challenging Circumstances

    ERIC Educational Resources Information Center

    Jang, Eunice E.; McDougall, Douglas E.; Pollon, Dawn; Herbert, Monique; Russell, Pia

    2008-01-01

    There are both conceptual and practical challenges in dealing with data from mixed methods research studies. There is a need for discussion about various integrative strategies for mixed methods data analyses. This article illustrates integrative analytic strategies for a mixed methods study focusing on improving urban schools facing challenging…

  20. 75 FR 49930 - Stakeholder Meeting Regarding Re-Evaluation of Currently Approved Total Coliform Analytical Methods

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-16

    ... Methods AGENCY: Environmental Protection Agency (EPA). ACTION: Notice. SUMMARY: The Environmental...) analytical methods. At these meetings, stakeholders will be given an opportunity to discuss potential elements of a method re-evaluation study, such as developing a reference coliform/non-coliform library...

  1. Waste Tank Organic Safety Program: Analytical methods development. Progress report, FY 1994

    SciTech Connect

    Campbell, J.A.; Clauss, S.A.; Grant, K.E.

    1994-09-01

    The objectives of this task are to develop and document extraction and analysis methods for organics in waste tanks, and to extend these methods to the analysis of actual core samples in support of the Waste Tank Organic Safety Program. This report documents progress at Pacific Northwest Laboratory during FY 1994 on methods development, the analysis of waste from Tank 241-C-103 (Tank C-103) and Tank T-111, and the transfer of the documented, developed analytical methods to personnel in the Analytical Chemistry Laboratory (ACL) and the 222-S laboratory. This report is intended as an annual report, not a completed work.

  2. Analytical volcano deformation modelling: A new and fast generalized point-source approach with application to the 2015 Calbuco eruption

    NASA Astrophysics Data System (ADS)

    Nikkhoo, M.; Walter, T. R.; Lundgren, P.; Prats-Iraola, P.

    2015-12-01

    Ground deformation at active volcanoes is one of the key precursors of volcanic unrest, monitored by InSAR and GPS techniques at high spatial and temporal resolution, respectively. Modelling of the observed displacements establishes the link between them and the underlying subsurface processes and volume change. The so-called Mogi model and the rectangular dislocation are two commonly applied analytical solutions that allow quick interpretations based on the location, depth and volume change of pressurized spherical cavities and planar intrusions, respectively. Geological observations worldwide, however, suggest elongated, tabular or other non-equidimensional geometries for magma chambers. How can these be modelled? Generalized models, such as Davis's point ellipsoidal cavity or the rectangular dislocation solution, are geometrically limited and do little to improve the interpretation of the data. We develop a new artefact-free analytical solution for a rectangular dislocation, which also possesses full rotational degrees of freedom. We construct a kinematic model in terms of three pairwise-perpendicular rectangular dislocations with a prescribed opening only. This model represents a generalized point source in the far field and also performs as a finite dislocation model for planar intrusions in the near field. We show that, by calculating Eshelby's shape tensor, the far-field displacements and stresses of any arbitrary triaxial ellipsoidal cavity can be reproduced using this model. Regardless of its aspect ratios, the volume change of this model is simply the sum of the volume changes of the individual dislocations. Our model can be integrated in any inversion scheme as simply as the Mogi model, profiting at the same time from the advantages of a generalized point source. After evaluating our model against a boundary element method code, we apply it to ground displacements of the 2015 Calbuco eruption, Chile, observed by the Sentinel-1

  3. Long-stroke fast tool servo and a tool setting method for freeform optics fabrication

    NASA Astrophysics Data System (ADS)

    Liu, Qiang; Zhou, Xiaoqin; Liu, Zhiwei; Lin, Chao; Ma, Long

    2014-09-01

    Diamond turning assisted by a fast tool servo is a highly efficient way to fabricate freeform optics. This paper describes a long-stroke fast tool servo designed to obtain large-amplitude tool motion. It has the advantages of low cost and higher stiffness and natural frequency than other flexure-based long-stroke fast tool servo systems. The fast tool servo is actuated by a voice coil motor and guided by a flexure-hinge structure. Open-loop and closed-loop control tests are conducted on the testing platform. Because the fast tool servo adds a motion axis to a diamond turning machine, a tool-center adjustment method is described for determining the tool center position in the machine tool coordinate system once the fast tool servo is mounted on the machine. Finally, a sinusoidal surface is machined; the results demonstrate that the tool adjustment method is efficient and precise for a flexure-based fast tool servo system, and that the fast tool servo works well for the fabrication of freeform optics.

  4. Comparison of Analytical Methods: Direct Emission versus First-Derivative Fluorometric Methods for Quinine Determination in Tonic Waters

    NASA Astrophysics Data System (ADS)

    Pandey, Siddharth; Borders, Tammie L.; Hernández, Carmen E.; Roy, Lindsay E.; Reddy, Gaddum D.; Martinez, Geo L.; Jackson, Autumn; Brown, Guenevere; Acree, William E., Jr.

    1999-01-01

    An undergraduate laboratory experiment is designed for the quantitative determination of quinine in tonic water samples. It is based upon direct fluorescence emission and first-derivative spectroscopic methods. Unlike other published laboratory experiments, our method exposes students to the general method of derivative spectroscopy, an important, often-used analytical technique for eliminating sample matrix and background absorbance effects and for treating overlapped spectral bands. The statistical treatment allows students to compare concentrations directly calculated from the measured fluorescence emission intensity with values obtained from the first-derivative emission spectra, to ascertain whether there is a difference between the two analytical methods. Method selection and validation are important items routinely encountered by practicing analytical chemists.
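
    The first-derivative idea described above can be sketched numerically: differentiate the emission spectrum with respect to wavelength and read off a derivative feature (here the peak-to-trough amplitude), which is insensitive to a constant background offset. The synthetic Gaussian band and all numbers below are illustrative assumptions, not data from the experiment.

    ```python
    # Minimal sketch of first-derivative fluorometry on a synthetic emission band.
    import numpy as np

    wavelength = np.linspace(400.0, 550.0, 601)                        # nm
    background = 5.0                                                   # flat offset
    emission = 100.0 * np.exp(-0.5 * ((wavelength - 450.0) / 15.0) ** 2) + background

    derivative = np.gradient(emission, wavelength)                     # dI/dlambda
    signal = derivative.max() - derivative.min()   # background cancels in the derivative
    print("first-derivative peak-to-trough amplitude: %.2f" % signal)
    ```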

  5. A fast and accurate method for echocardiography strain rate imaging

    NASA Astrophysics Data System (ADS)

    Tavakoli, Vahid; Sahba, Nima; Hajebi, Nima; Nambakhsh, Mohammad Saleh

    2009-02-01

    Strain and strain rate imaging have recently proved superior to classical motion estimation methods in myocardial evaluation, as a novel technique for quantitative analysis of myocardial function. In this paper, we propose a novel strain rate imaging algorithm using a new optical flow technique that is faster and more accurate than previous correlation-based methods. The new method presumes spatiotemporal constancy of the intensity and magnitude of the image, and makes use of spline moments in a multiresolution approach. The cardiac central point is obtained using a combination of the center of mass and endocardial tracking. It is shown that the proposed method helps overcome the intensity variations of ultrasound texture while preserving the ability of the motion estimation technique to handle different motions and orientations. Evaluation is performed on simulated, phantom (a contractile rubber balloon) and real sequences, and shows that this technique is more accurate and faster than the previous methods.

  6. Selenium contaminated waters: An overview of analytical methods, treatment options and recent advances in sorption methods.

    PubMed

    Santos, Sílvia; Ungureanu, Gabriela; Boaventura, Rui; Botelho, Cidália

    2015-07-15

    Selenium is an essential trace element for many organisms, including humans, but it is bioaccumulative and toxic at higher than homeostatic levels. Both selenium deficiency and toxicity are problems around the world. Mines, coal-fired power plants, oil refineries and agriculture are important examples of anthropogenic sources, generating contaminated waters and wastewaters. For reasons of human health and ecotoxicity, selenium concentration has to be controlled in drinking-water and in wastewater, as it is a potential pollutant of water bodies. This review article provides firstly a general overview about selenium distribution, sources, chemistry, toxicity and environmental impact. Analytical techniques used for Se determination and speciation and water and wastewater treatment options are reviewed. In particular, published works on adsorption as a treatment method for Se removal from aqueous solutions are critically analyzed. Recent published literature has given particular attention to the development and search for effective adsorbents, including low-cost alternative materials. Published works mostly consist in exploratory findings and laboratory-scale experiments. Binary metal oxides and LDHs (layered double hydroxides) have presented excellent adsorption capacities for selenium species. Unconventional sorbents (algae, agricultural wastes and other biomaterials), in raw or modified forms, have also led to very interesting results with the advantage of their availability and low-cost. Some directions to be considered in future works are also suggested. PMID:25847169

  7. Selenium contaminated waters: An overview of analytical methods, treatment options and recent advances in sorption methods.

    PubMed

    Santos, Sílvia; Ungureanu, Gabriela; Boaventura, Rui; Botelho, Cidália

    2015-07-15

    Selenium is an essential trace element for many organisms, including humans, but it is bioaccumulative and toxic at higher than homeostatic levels. Both selenium deficiency and toxicity are problems around the world. Mines, coal-fired power plants, oil refineries and agriculture are important examples of anthropogenic sources, generating contaminated waters and wastewaters. For reasons of human health and ecotoxicity, selenium concentration has to be controlled in drinking-water and in wastewater, as it is a potential pollutant of water bodies. This review article provides firstly a general overview about selenium distribution, sources, chemistry, toxicity and environmental impact. Analytical techniques used for Se determination and speciation and water and wastewater treatment options are reviewed. In particular, published works on adsorption as a treatment method for Se removal from aqueous solutions are critically analyzed. Recent published literature has given particular attention to the development and search for effective adsorbents, including low-cost alternative materials. Published works mostly consist in exploratory findings and laboratory-scale experiments. Binary metal oxides and LDHs (layered double hydroxides) have presented excellent adsorption capacities for selenium species. Unconventional sorbents (algae, agricultural wastes and other biomaterials), in raw or modified forms, have also led to very interesting results with the advantage of their availability and low-cost. Some directions to be considered in future works are also suggested.

  8. Synergistic effect of combining two nondestructive analytical methods for multielemental analysis.

    PubMed

    Toh, Yosuke; Ebihara, Mitsuru; Kimura, Atsushi; Nakamura, Shoji; Harada, Hideo; Hara, Kaoru Y; Koizumi, Mitsuo; Kitatani, Fumito; Furutaka, Kazuyoshi

    2014-12-16

    We developed a new analytical technique that combines prompt gamma-ray analysis (PGA) and time-of-flight elemental analysis (TOF) using an intense pulsed neutron beam at the Japan Proton Accelerator Research Complex. It allows us to obtain results from both methods at the same time. Moreover, it can be used to quantify elemental concentrations in samples to which neither of these methods can be applied independently, if the new combined analytical spectrum (TOF-PGA) is used. To assess the effectiveness of the developed method, a mixed sample of Ag, Au, Cd, Co, and Ta, as well as the Gibeon meteorite, were analyzed. The analytical capabilities were compared on the basis of gamma-ray peak selectivity and signal-to-noise ratios. The TOF-PGA method showed clear merits, although its capability may differ depending on the target and coexisting elements. PMID:25371049

  9. A method based on stochastic resonance for the detection of weak analytical signal.

    PubMed

    Wu, Xiaojing; Guo, Weiming; Cai, Wensheng; Shao, Xueguang; Pan, Zhongxiao

    2003-12-23

    An effective method for the detection of weak analytical signals against a strong noise background is proposed, based on the theory of stochastic resonance (SR). Compared with conventional SR-based algorithms, the proposed algorithm is simplified: only one parameter needs to be changed to realize weak signal detection. Simulation studies revealed that the method performs well in detecting analytical signals at very high background noise levels and is suitable for detecting signals with different noise levels by changing that parameter. Applications of the method to experimental weak signals from X-ray diffraction and Raman spectra are also investigated. It is found that reliable results can be obtained.
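
    The abstract gives no equations, so the sketch below is only a generic illustration of stochastic resonance, not the authors' algorithm: a weak periodic signal buried in noise drives an overdamped bistable system dx/dt = a·x − b·x³ + s(t) + noise, and sweeping a single tunable parameter (here the noise intensity) changes how strongly the output spectrum picks up the weak component. All parameter values are assumptions.

    ```python
    # Generic stochastic-resonance illustration with an overdamped bistable system.
    import numpy as np

    def bistable_response(signal, noise_sigma, a=1.0, b=1.0, dt=1e-3, seed=0):
        """Euler-Maruyama integration of dx = (a*x - b*x**3 + s)*dt + sigma*sqrt(dt)*dW."""
        rng = np.random.default_rng(seed)
        x, out = 0.0, np.empty_like(signal)
        for i, s in enumerate(signal):
            x += (a * x - b * x ** 3 + s) * dt + noise_sigma * np.sqrt(dt) * rng.standard_normal()
            out[i] = x
        return out

    dt = 1e-3
    t = np.arange(0.0, 50.0, dt)
    weak_signal = 0.1 * np.sin(2 * np.pi * 0.5 * t)      # weak deterministic component

    freq = np.fft.rfftfreq(t.size, d=dt)
    k = np.argmin(np.abs(freq - 0.5))
    for sigma in (0.1, 0.5, 1.0, 2.0):                   # sweep the tunable parameter
        spectrum = np.abs(np.fft.rfft(bistable_response(weak_signal, sigma)))
        print("noise sigma %.1f -> output power at 0.5 Hz: %.3g" % (sigma, spectrum[k]))
    ```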

  10. Conventional, Bayesian, and Modified Prony's methods for characterizing fast and slow waves in equine cancellous bone

    PubMed Central

    Groopman, Amber M.; Katz, Jonathan I.; Holland, Mark R.; Fujita, Fuminori; Matsukawa, Mami; Mizuno, Katsunori; Wear, Keith A.; Miller, James G.

    2015-01-01

    Conventional, Bayesian, and the modified least-squares Prony's plus curve-fitting (MLSP + CF) methods were applied to data acquired using 1 MHz center frequency, broadband transducers on a single equine cancellous bone specimen that was systematically shortened from 11.8 mm down to 0.5 mm for a total of 24 sample thicknesses. Due to overlapping fast and slow waves, conventional analysis methods were restricted to data from sample thicknesses ranging from 11.8 mm to 6.0 mm. In contrast, Bayesian and MLSP + CF methods successfully separated fast and slow waves and provided reliable estimates of the ultrasonic properties of fast and slow waves for sample thicknesses ranging from 11.8 mm down to 3.5 mm. Comparisons of the three methods were carried out for phase velocity at the center frequency and the slope of the attenuation coefficient for the fast and slow waves. Good agreement among the three methods was also observed for average signal loss at the center frequency. The Bayesian and MLSP + CF approaches were able to separate the fast and slow waves and provide good estimates of the fast and slow wave properties even when the two wave modes overlapped in both time and frequency domains making conventional analysis methods unreliable. PMID:26328678

  11. Conventional, Bayesian, and Modified Prony's methods for characterizing fast and slow waves in equine cancellous bone.

    PubMed

    Groopman, Amber M; Katz, Jonathan I; Holland, Mark R; Fujita, Fuminori; Matsukawa, Mami; Mizuno, Katsunori; Wear, Keith A; Miller, James G

    2015-08-01

    Conventional, Bayesian, and the modified least-squares Prony's plus curve-fitting (MLSP + CF) methods were applied to data acquired using 1 MHz center frequency, broadband transducers on a single equine cancellous bone specimen that was systematically shortened from 11.8 mm down to 0.5 mm for a total of 24 sample thicknesses. Due to overlapping fast and slow waves, conventional analysis methods were restricted to data from sample thicknesses ranging from 11.8 mm to 6.0 mm. In contrast, Bayesian and MLSP + CF methods successfully separated fast and slow waves and provided reliable estimates of the ultrasonic properties of fast and slow waves for sample thicknesses ranging from 11.8 mm down to 3.5 mm. Comparisons of the three methods were carried out for phase velocity at the center frequency and the slope of the attenuation coefficient for the fast and slow waves. Good agreement among the three methods was also observed for average signal loss at the center frequency. The Bayesian and MLSP + CF approaches were able to separate the fast and slow waves and provide good estimates of the fast and slow wave properties even when the two wave modes overlapped in both time and frequency domains making conventional analysis methods unreliable.
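
    The two records above concern separating overlapping fast and slow wave modes. As a generic stand-in for that idea (not the Bayesian or MLSP + CF implementation used in the study), the sketch below applies a least-squares Prony fit to a synthetic two-mode signal of damped sinusoids and recovers each mode's frequency and damping; the sampling rate and mode parameters are invented.

    ```python
    # Minimal least-squares Prony sketch: recover poles of a sum of damped sinusoids.
    import numpy as np

    def prony_poles(x, p):
        """Return the p complex poles of the model x[n] ~ sum_k A_k * z_k**n."""
        N = len(x)
        rows = np.array([[x[n - k] for k in range(1, p + 1)] for n in range(p, N)])
        rhs = x[p:]
        a, *_ = np.linalg.lstsq(rows, -rhs, rcond=None)    # linear-prediction coefficients
        return np.roots(np.concatenate(([1.0], a)))

    dt = 1e-7                                              # 10 MHz sampling (assumed)
    t = np.arange(0, 20e-6, dt)
    fast = np.exp(-2.0e5 * t) * np.cos(2 * np.pi * 1.4e6 * t)   # "fast" mode, invented
    slow = np.exp(-1.0e5 * t) * np.cos(2 * np.pi * 0.9e6 * t)   # "slow" mode, invented
    signal = fast + 0.7 * slow

    for z in prony_poles(signal, p=4):
        if np.imag(z) > 0:                                 # report one pole per mode
            freq = np.angle(z) / (2 * np.pi * dt)
            damping = -np.log(np.abs(z)) / dt
            print("mode: %.2f MHz, damping %.2e 1/s" % (freq / 1e6, damping))
    ```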

  12. Contextual and Analytic Qualities of Research Methods Exemplified in Research on Teaching

    ERIC Educational Resources Information Center

    Svensson, Lennart; Doumas, Kyriaki

    2013-01-01

    The aim of the present article is to discuss contextual and analytic qualities of research methods. The arguments are specified in relation to research on teaching. A specific investigation is used as an example to illustrate the general methodological approach. It is argued that research methods should be carefully grounded in an understanding of…

  13. Flammable gas safety program. Analytical methods development: FY 1994 progress report

    SciTech Connect

    Campbell, J.A.; Clauss, S.; Grant, K.; Hoopes, V.; Lerner, B.; Lucke, R.; Mong, G.; Rau, J.; Wahl, K.; Steele, R.

    1994-09-01

    This report describes the status of developing analytical methods to account for the organic components in Hanford waste tanks, with particular focus on tanks assigned to the Flammable Gas Watch List. The methods that have been developed are illustrated by their application to samples obtained from Tank 241-SY-101 (Tank 101-SY).

  14. Study on Two Methods for Nonlinear Force-Free Extrapolation Based on Semi-Analytical Field

    NASA Astrophysics Data System (ADS)

    Liu, S.; Zhang, H. Q.; Su, J. T.; Song, M. T.

    2011-03-01

    In this paper, two semi-analytical solutions of force-free fields (Low and Lou, Astrophys. J. 352, 343, 1990) have been used to test two nonlinear force-free extrapolation methods. One is the boundary integral equation (BIE) method developed by Yan and Sakurai (Solar Phys. 195, 89, 2000), and the other is the approximate vertical integration (AVI) method developed by Song et al. (Astrophys. J. 649, 1084, 2006). Some improvements have been made to the AVI method to avoid singular points in the calculation. It is found that the correlation coefficients between the first semi-analytical field and the field extrapolated with the BIE method, and also with the improved AVI method, are greater than 90% below a height of 10 grid points above the 64×64 lower boundary. For the second semi-analytical field, these correlation coefficients are greater than 80% below the same relative height. Although differences between the semi-analytical solutions and the extrapolated fields exist for both the BIE and AVI methods, these two methods can give reliable results for heights of about 15% of the extent of the lower boundary.

  15. EVALUATION OF ANALYTICAL METHODS FOR DETERMINING PESTICIDES IN BABY FOOD AND ADULT DUPLICATE-DIET SAMPLES

    EPA Science Inventory

    Determinations of pesticides in food are often complicated by the presence of fats and require multiple cleanup steps before analysis. Cost-effective analytical methods are needed for conducting large-scale exposure studies. We examined two extraction methods, supercritical flu...

  16. System and Method for Providing a Climate Data Analytic Services Application Programming Interface Distribution Package

    NASA Technical Reports Server (NTRS)

    Schnase, John L. (Inventor); Duffy, Daniel Q. (Inventor); Tamkin, Glenn S. (Inventor)

    2016-01-01

    A system, method and computer-readable storage devices for providing a climate data analytic services application programming interface distribution package. The example system can provide various components. The system provides a climate data analytic services application programming interface library that enables software applications running on a client device to invoke the capabilities of a climate data analytic service. The system provides a command-line interface that provides a means of interacting with a climate data analytic service by issuing commands directly to the system's server interface. The system provides sample programs that call on the capabilities of the application programming interface library and can be used as templates for the construction of new client applications. The system can also provide test utilities, build utilities, service integration utilities, and documentation.

  17. Biological Matrix Effects in Quantitative Tandem Mass Spectrometry-Based Analytical Methods: Advancing Biomonitoring

    PubMed Central

    Panuwet, Parinya; Hunter, Ronald E.; D’Souza, Priya E.; Chen, Xianyu; Radford, Samantha A.; Cohen, Jordan R.; Marder, M. Elizabeth; Kartavenka, Kostya; Ryan, P. Barry; Barr, Dana Boyd

    2015-01-01

    The ability to quantify levels of target analytes in biological samples accurately and precisely, in biomonitoring, involves the use of highly sensitive and selective instrumentation such as tandem mass spectrometers and a thorough understanding of highly variable matrix effects. Typically, matrix effects are caused by co-eluting matrix components that alter the ionization of target analytes as well as the chromatographic response of target analytes, leading to reduced or increased sensitivity of the analysis. Thus, before the desired accuracy and precision standards of laboratory data are achieved, these effects must be characterized and controlled. Here we present our review and observations of matrix effects encountered during the validation and implementation of tandem mass spectrometry-based analytical methods. We also provide systematic, comprehensive laboratory strategies needed to control challenges posed by matrix effects in order to ensure delivery of the most accurate data for biomonitoring studies assessing exposure to environmental toxicants. PMID:25562585

  18. Fast frequency domain method to detect skew in a document image

    NASA Astrophysics Data System (ADS)

    Mehta, Sunita; Walia, Ekta; Dutta, Maitreyee

    2015-12-01

    In this paper, a new fast frequency-domain method based on the Discrete Wavelet Transform and the Fast Fourier Transform is presented for determining the skew angle of a document image. First, the image size is reduced using a two-dimensional Discrete Wavelet Transform, and then the skew angle is computed using the Fast Fourier Transform. The skew angle error is almost negligible. The proposed method was tested on a large number of documents with skew between -90° and +90°, and the results are compared with the Moments with Discrete Wavelet Transform method and other commonly used existing methods. The method is found to work more efficiently than the existing methods. It also works with typed and picture documents having different fonts and resolutions, overcoming the drawback of the recently proposed Moments with Discrete Wavelet Transform method, which does not work with picture documents.
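
    The two-stage idea above (DWT size reduction, then a frequency-domain orientation search) can be sketched as follows. This is only a rough illustration of the general approach, not the authors' algorithm: it assumes PyWavelets is available, that `page` is a grayscale image array, and that the dominant energy ridge in the FFT magnitude spectrum tilts with the text-line skew.

    ```python
    # Rough sketch: DWT approximation for size reduction, then FFT orientation search.
    import numpy as np
    import pywt  # assumption: PyWavelets is installed

    def estimate_skew(page, angles=np.arange(-45.0, 45.0, 0.25)):
        approx, _ = pywt.dwt2(page.astype(float), "haar")     # 2-D DWT size reduction
        spec = np.abs(np.fft.fftshift(np.fft.fft2(approx)))   # magnitude spectrum
        cy, cx = spec.shape[0] // 2, spec.shape[1] // 2
        radii = np.arange(1, min(cy, cx) - 1)
        best_angle, best_energy = 0.0, -np.inf
        for a in angles:                                      # angle measured from the
            th = np.deg2rad(a)                                # vertical spectrum axis
            ys = (cy - radii * np.cos(th)).astype(int)
            xs = (cx + radii * np.sin(th)).astype(int)
            energy = spec[ys, xs].sum()
            if energy > best_energy:
                best_angle, best_energy = a, energy
        return best_angle                                     # ~ document skew angle

    # Toy check: horizontal "text lines" (unskewed page) should give roughly 0 degrees.
    pattern = np.tile(np.r_[np.ones(4), np.zeros(4)], 64)     # 512 values
    page = np.tile(pattern[:, None], (1, 512))
    print("estimated skew of an unskewed page: %.2f deg" % estimate_skew(page))
    ```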

  19. An Analytical Investigation of Three General Methods of Calculating Chemical-Equilibrium Compositions

    NASA Technical Reports Server (NTRS)

    Zeleznik, Frank J.; Gordon, Sanford

    1960-01-01

    The Brinkley, Huff, and White methods for chemical-equilibrium calculations were modified and extended in order to permit an analytical comparison. The extended forms of these methods permit condensed species as reaction products, include temperature as a variable in the iteration, and permit arbitrary estimates for the variables. It is analytically shown that the three extended methods can be placed in a form that is independent of components. In this form the Brinkley iteration is computationally identical to the White method, while the modified Huff method differs only slightly from these two. The convergence rates of the modified Brinkley and White methods are identical; further, all three methods are guaranteed to converge and ultimately converge quadratically. It is concluded that none of the three methods offers any significant computational advantage over the other two.

  20. Quartz Crystal Microbalance (QCM): An Alternative Analytical Method for Investigation in Real-Time of Liquid Properties

    NASA Astrophysics Data System (ADS)

    Cimpoca, Gh. V.; Radulescu, C.; Popescu, I. V.; Dulama, I. D.; Ionita, I.; Cimpoca, M.; Cernica, I.; Gavrila, R.

    2010-01-01

    In this paper we study the possibility of developing an alternative analytical method for real-time investigation of liquid properties, together with the layout and operation of Quartz Crystal Microbalance (QCM) systems. The quartz crystal microbalance (QCM) is accepted as a powerful technique to monitor adsorption and desorption processes at interfaces in different chemical and biological areas. In our work, the QCM is used to monitor in real time polymer adsorption followed by azoic dye adsorption and then copolymer adsorption, as well as to optimize the interaction processes and determine solution effects on the analytical signal. Solutions of azoic dye (5×10⁻⁴ g/L, 5×10⁻⁵ g/L and 5×10⁻⁶ g/L in DMF) are adsorbed at the gold electrodes of the QCM, and the sensor responses are estimated through the decrease and increase of the QCM frequency. The response of the sensor to a maleic anhydride (MA)-styrene (St) copolymer (MA-St copolymer solution concentrations: 5×10⁻⁴ g/L, 5×10⁻⁵ g/L and 5×10⁻⁶ g/L in DMF) is also fast, large, and reversible. The detailed investigation showed that the quartz crystal microbalance is a modern method for studying a wide range of physical and chemical properties related to the surface and interfacial processes of the synthesized copolymer, leading to higher reliability of the research results.

  1. Review of Properties and Analytical Methods for the Determination of Norfloxacin.

    PubMed

    Chierentin, Lucas; Salgado, Hérida Regina Nunes

    2016-01-01

    The first-generation quinolones have their greatest potency against Gram-negative bacteria, but newly developed molecules exhibit increased potency against Gram-positive bacteria, and existing agents are available with additional activity against anaerobic microorganisms. Norfloxacin is a broad-spectrum fluoroquinolone antimicrobial used against Gram-positive and Gram-negative (aerobic) organisms. Different analytical methods are available to determine norfloxacin in the quality control of this medicine, in order to ensure its effectiveness and safety. The authors present an overview of the fourth generation of quinolones, followed by the properties, applications, and analytical methods of norfloxacin. The review shows that several existing analytical techniques provide flexible and broadly applicable methods of analysis in different matrices. This article focuses on bioanalytical and pharmaceutical quality-control applications such as thin-layer chromatography, microbiological assay, spectrophotometry, capillary electrophoresis (CE), and high-performance liquid chromatography (HPLC).

  2. Polyphenolic characterization and chromatographic methods for fast assessment of culinary Salvia species from South East Europe.

    PubMed

    Cvetkovikj, I; Stefkov, G; Acevska, J; Stanoeva, J Petreska; Karapandzova, M; Stefova, M; Dimitrovska, A; Kulevanova, S

    2013-03-22

    Although the knowledge and use of several Salvia species (Salvia officinalis, Salvia fruticosa, and Salvia pomifera) date back to the Greek era, and these species have a long history of culinary and effective medicinal use, there is still remarkable interest in their chemistry and especially their polyphenolic composition. Despite the demand in the food and pharmaceutical industries for methods for fast quality assessment of herbs and spices, current regulations contain no official requirements for a minimum polyphenol content in sage, neither in the European Pharmacopoeia monographs nor in the ISO 11165 standard. In this work a rapid analytical method for the extraction, characterization and quantification of the major polyphenolic constituents in sage was developed. Various extractions (infusion, IE; ultrasound-assisted extraction, USE; and microwave-assisted extraction, MWE) were performed and evaluated for their effectiveness. Along with the optimization of the mass-detector and chromatographic parameters, the applicability of three different reversed-phase C18 stationary phases (extra-density bonded, core-shell technology and monolith column) for polyphenolic characterization was evaluated. A comprehensive overview of the highly variable polyphenolic composition of 118 plant samples from 68 populations of wild-growing culinary Salvia species (S. officinalis: 101; S. fruticosa: 15; S. pomifera: 2) collected from South East Europe (SEE) was obtained using HPLC-DAD-ESI-MS(n), and more than 50 different compounds were identified and quantified. This work expands the knowledge of the polyphenols of culinary sage and thereby opens the possibility of gaining insight into the chemodiversity of culinary Salvia species in South East Europe.

  3. FAST TRACK COMMUNICATION: A stable toolkit method in quantum control

    NASA Astrophysics Data System (ADS)

    Belhadj, M.; Salomon, J.; Turinici, G.

    2008-09-01

    Recently the 'toolkit' discretization introduced to accelerate the numerical resolution of the time-dependent Schrödinger equation arising in quantum optimal control problems demonstrated good results on a large range of models. However, when coupling this class of methods with the so-called monotonically convergent algorithms, numerical instabilities affect the convergence of the discretized scheme. We present an adaptation of the 'toolkit' method which preserves the monotonicity of the procedure. The theoretical properties of the new algorithm are illustrated by numerical simulations.

  4. A Fast and Reliable Method for Surface Wave Tomography

    NASA Astrophysics Data System (ADS)

    Barmin, M. P.; Ritzwoller, M. H.; Levshin, A. L.

    We describe a method to invert regional- or global-scale surface-wave group- or phase-velocity measurements to estimate 2-D models of the distribution and strength of isotropic and azimuthally anisotropic velocity variations. Such maps have at least two purposes in monitoring the Comprehensive Nuclear-Test-Ban Treaty (CTBT): (1) they can be used as data to estimate the shear velocity of the crust and uppermost mantle and topography on internal interfaces, which are important in event location, and (2) they can be used to estimate surface-wave travel-time correction surfaces for phase-matched filters designed to extract low signal-to-noise surface-wave packets. The purpose of this paper is to describe one useful path through the large number of options available in an inversion of surface-wave data. Our method appears to provide robust and reliable dispersion maps on both global and regional scales. The technique we describe has a number of features that have motivated its development and commend its use: (1) it is developed in a spherical geometry; (2) the region of inference is defined by an arbitrary simple closed curve, so that the method works equally well on local, regional, or global scales; (3) spatial smoothness and model amplitude constraints can be applied simultaneously; (4) the selection of the model regularization and smoothing parameters is highly flexible, which allows the effect of variations in these parameters to be assessed; (5) the method allows for the simultaneous estimation of spatial resolution and amplitude bias of the images; and (6) the method optionally allows for the estimation of azimuthal anisotropy. We present examples of the application of this technique to observed surface-wave group and phase velocities globally and regionally across Eurasia and Antarctica.

  5. Advantages of Analytical Transformations in Monte Carlo Methods for Radiation Transport

    SciTech Connect

    McKinley, M S; Brooks III, E D; Daffin, F

    2004-12-13

    Monte Carlo methods for radiation transport typically attempt to solve an integral by directly sampling analog or weighted particles, which are treated as physical entities. Improvements to the methods involve better sampling, probability games or physical intuition about the problem. We show that significant improvements can be achieved by recasting the equations with an analytical transform to solve for new, non-physical entities or fields. This paper looks at one such transform, the difference formulation for thermal photon transport, showing a significant advantage for Monte Carlo solution of the equations for time dependent transport. Other related areas are discussed that may also realize significant benefits from similar analytical transformations.

  6. Possibilities of Utilizing the Method of Analytical Hierarchy Process Within the Strategy of Corporate Social Business

    NASA Astrophysics Data System (ADS)

    Drieniková, Katarína; Hrdinová, Gabriela; Naňo, Tomáš; Sakál, Peter

    2010-01-01

    The paper deals with the analysis of the theory of corporate social responsibility, of risk management, and of the exact analytic hierarchy process method used in decision-making. Chapters 2 and 3 present experience with the application of the method in formulating stakeholders' strategic goals within Corporate Social Responsibility (CSR) and, simultaneously, its use in minimizing environmental risks. The major benefit of this paper is the application of the Analytical Hierarchy Process (AHP).
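
    For readers unfamiliar with AHP, its core computation derives priority weights from a pairwise comparison matrix via the principal eigenvector. The sketch below uses a hypothetical 3x3 comparison matrix on Saaty's 1-9 scale; it only illustrates the generic AHP step, not the stakeholder hierarchy analyzed in the paper.

        # AHP priority weights and consistency ratio for a hypothetical
        # 3-criterion pairwise comparison matrix (Saaty 1-9 scale).
        import numpy as np

        A = np.array([[1.0, 3.0, 5.0],
                      [1/3, 1.0, 2.0],
                      [1/5, 1/2, 1.0]])

        eigvals, eigvecs = np.linalg.eig(A)
        k = np.argmax(eigvals.real)                  # principal eigenvalue
        w = np.abs(eigvecs[:, k].real)
        w /= w.sum()                                 # priority vector

        n = A.shape[0]
        ci = (eigvals.real[k] - n) / (n - 1)         # consistency index
        cr = ci / 0.58                               # random index for n = 3
        print("weights:", np.round(w, 3), "consistency ratio:", round(float(cr), 3))

    A consistency ratio below about 0.1 is usually taken to mean that the pairwise judgments are acceptably consistent.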

  7. Verification of an Analytical Method for Measuring Crystal Nucleation Rates in Glasses from DTA Data

    NASA Technical Reports Server (NTRS)

    Ranasinghe, K. S.; Wei, P. F.; Kelton, K. F.; Ray, C. S.; Day, D. E.

    2004-01-01

    A recently proposed analytical (DTA) method for estimating the nucleation rates in glasses has been evaluated by comparing experimental data with numerically computed nucleation rates for a model lithium disilicate glass. The time and temperature dependent nucleation rates were predicted using the model and compared with those values from an analysis of numerically calculated DTA curves. The validity of the numerical approach was demonstrated earlier by a comparison with experimental data. The excellent agreement between the nucleation rates from the model calculations and from the computer generated DTA data demonstrates the validity of the proposed analytical DTA method.

  8. A Systematic Meta-Analytic Review of Evidence for the Effectiveness of the "Fast ForWord" Language Intervention Program

    ERIC Educational Resources Information Center

    Strong, Gemma K.; Torgerson, Carole J.; Torgerson, David; Hulme, Charles

    2011-01-01

    Background: Fast ForWord is a suite of computer-based language intervention programs designed to improve children's reading and oral language skills. The programs are based on the hypothesis that oral language difficulties often arise from a rapid auditory temporal processing deficit that compromises the development of phonological…

  9. Guidance for characterizing explosives contaminated soils: Sampling and selecting on-site analytical methods

    SciTech Connect

    Crockett, A.B.; Craig, H.D.; Jenkins, T.F.; Sisk, W.E.

    1996-09-01

    A large number of defense-related sites are contaminated with elevated levels of secondary explosives. Levels of contamination range from barely detectable to levels above 10% that need special handling due to the detonation potential. Characterization of explosives-contaminated sites is particularly difficult due to the very heterogeneous distribution of contamination in the environment and within samples. To improve site characterization, several options exist including collecting more samples, providing on-site analytical data to help direct the investigation, compositing samples, improving homogenization of samples, and extracting larger samples. On-site analytical methods are essential to more economical and improved characterization. On-site methods might suffer in terms of precision and accuracy, but this is more than offset by the increased number of samples that can be run. While verification using a standard analytical procedure should be part of any quality assurance program, reducing the number of samples analyzed by the more expensive methods can result in significantly reduced costs. Often 70 to 90% of the soil samples analyzed during an explosives site investigation do not contain detectable levels of contamination. Two basic types of on-site analytical methods are in wide use for explosives in soil, colorimetric and immunoassay. Colorimetric methods generally detect broad classes of compounds such as nitroaromatics or nitramines, while immunoassay methods are more compound specific. Since TNT or RDX is usually present in explosive-contaminated soils, the use of procedures designed to detect only these or similar compounds can be very effective.

  10. A method for fast selecting feature wavelengths from the spectral information of crop nitrogen

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A method for rapidly selecting feature wavelengths from nitrogen spectral information is needed, since such wavelengths can be used to determine the nitrogen content of crops. Based on the uniformity of uniform design, this paper proposed an improved particle swarm optimization (PSO) method. The method can ch...
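
    Since the abstract is truncated, the following is only a generic illustration of how PSO can drive wavelength selection: each particle carries a score per wavelength, the top-scoring bands feed a simple regression, and the swarm minimizes the fit error. The data, swarm settings, and scoring scheme are hypothetical; the paper's improved uniform-design PSO is not reproduced here.

        # Generic PSO for selecting k feature wavelengths that best predict a
        # response with least squares; synthetic data, illustrative settings.
        import numpy as np

        rng = np.random.default_rng(1)
        n_samples, n_bands, k = 60, 40, 3
        X = rng.standard_normal((n_samples, n_bands))                  # synthetic "spectra"
        y = 2*X[:, 5] - X[:, 17] + 0.5*X[:, 30] + 0.1*rng.standard_normal(n_samples)

        def fitness(scores):
            idx = np.argsort(scores)[-k:]                              # top-k bands
            Xk = np.c_[np.ones(n_samples), X[:, idx]]
            beta, *_ = np.linalg.lstsq(Xk, y, rcond=None)
            return float(np.mean((Xk @ beta - y) ** 2))

        n_particles, n_iter = 30, 100
        pos = rng.random((n_particles, n_bands))
        vel = np.zeros_like(pos)
        pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
        gbest = pbest[np.argmin(pbest_f)].copy()

        for _ in range(n_iter):
            r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
            vel = 0.7*vel + 1.5*r1*(pbest - pos) + 1.5*r2*(gbest - pos)
            pos += vel
            f = np.array([fitness(p) for p in pos])
            improved = f < pbest_f
            pbest[improved], pbest_f[improved] = pos[improved], f[improved]
            gbest = pbest[np.argmin(pbest_f)].copy()

        print("selected bands:", sorted(np.argsort(gbest)[-k:]))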

  11. New simple method for fast and accurate measurement of volumes

    NASA Astrophysics Data System (ADS)

    Frattolillo, Antonio

    2006-04-01

    A new simple method is presented, which allows us to measure in just a few minutes, but with reasonable accuracy (less than 1%), the volume confined inside a generic enclosure, regardless of the complexity of its shape. The technique proposed also allows us to measure the volume of any portion of a complex manifold, including, for instance, pipes and pipe fittings, valves, gauge heads, and so on, without disassembling the manifold at all. To this purpose an airtight variable volume is used, whose volume adjustment can be precisely measured; it has an overall capacity larger than that of the unknown volume. Such a variable volume is initially filled with a suitable test gas (for instance, air) at a known pressure, as carefully measured by means of a high precision capacitive gauge. By opening a valve, the test gas is allowed to expand into the previously evacuated unknown volume. A feedback control loop reacts to the resulting finite pressure drop, contracting the variable volume until the pressure exactly recovers its initial value. The overall reduction of the variable volume achieved at the end of this process gives a direct measurement of the unknown volume, and definitively eliminates the problem of dead spaces. The method proposed does not actually require the test gas to be rigorously held at a constant temperature, which is a huge simplification compared to the complex arrangements commonly used in metrology (gas expansion method), which can provide extremely accurate measurements but require rather expensive equipment and time-consuming procedures, and are therefore impractical in most applications. A simple theoretical analysis of the thermodynamic cycle and the results of experimental tests are described, which demonstrate that, in spite of its simplicity, the method provides a measurement accuracy within 0.5%. The system requires just a few minutes to complete a single measurement, and is ready immediately at the end of the process.
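
    The principle behind this compensated-expansion measurement can be summarized by an idealized, isothermal mass balance. The sketch below assumes ideal-gas behaviour and constant temperature over the cycle, which is only the textbook limit; as the abstract notes, the actual method does not require strict temperature control.

        % Idealized isothermal cycle: P_0 initial pressure, V_0 initial setting of the
        % variable volume, V_x unknown volume, \Delta V contraction needed to recover P_0.
        \begin{align*}
          \text{before expansion:}\quad & n = \frac{P_0 V_0}{R T},\\[2pt]
          \text{after expansion and re-compression to } P_0:\quad
            & n = \frac{P_0\,(V_0 - \Delta V + V_x)}{R T},\\[2pt]
          \Rightarrow\quad & V_x = \Delta V .
        \end{align*}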

  12. Fast calculation method of computer-generated cylindrical hologram using wave-front recording surface.

    PubMed

    Zhao, Yu; Piao, Mei-lan; Li, Gang; Kim, Nam

    2015-07-01

    A fast calculation method for a computer-generated cylindrical hologram (CGCH) is proposed. The method consists of two steps: the first step is the calculation of a virtual wave-front recording surface (WRS), which is located between the 3D object and the CGCH. In the second step, in order to obtain the CGCH, we execute the diffraction calculation based on the fast Fourier transform (FFT) from the WRS to the CGCH, which are in the same concentric arrangement. The computational complexity is dramatically reduced in comparison with the direct integration method. The simulation results confirm that our proposed method is able to improve the computational speed of CGCH generation.
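
    The generic building block of such hologram computations is an FFT-based diffraction step between two surfaces. The sketch below shows the standard angular-spectrum propagation between two parallel planes, with hypothetical parameters; the paper's calculation runs between concentric cylindrical surfaces, so this only illustrates the kind of FFT diffraction kernel involved, not the authors' geometry.

        # Angular-spectrum propagation of a complex field between two parallel
        # planes (generic FFT diffraction step; parameters are illustrative).
        import numpy as np

        def angular_spectrum(u0, wavelength, pitch, z):
            n = u0.shape[0]
            fx = np.fft.fftfreq(n, d=pitch)
            FX, FY = np.meshgrid(fx, fx)
            arg = 1.0 / wavelength**2 - FX**2 - FY**2
            kz = 2*np.pi*np.sqrt(np.maximum(arg, 0.0))
            H = np.exp(1j * kz * z) * (arg > 0)          # drop evanescent components
            return np.fft.ifft2(np.fft.fft2(u0) * H)

        u0 = np.zeros((256, 256), dtype=complex)
        u0[128, 128] = 1.0                               # a single point emitter
        u1 = angular_spectrum(u0, wavelength=532e-9, pitch=8e-6, z=0.05)
        print("peak amplitude after propagation:", round(float(np.abs(u1).max()), 4))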

  13. Fast Numerical Methods for the Design of Layered Photonic Structures with Rough Interfaces

    NASA Technical Reports Server (NTRS)

    Komarevskiy, Nikolay; Braginsky, Leonid; Shklover, Valery; Hafner, Christian; Lawson, John

    2011-01-01

    Modified boundary conditions (MBC) and a multilayer approach (MA) are proposed as fast and efficient numerical methods for the design of 1D photonic structures with rough interfaces. These methods are applicable to structures composed of materials with an arbitrary permittivity tensor. MBC and MA are numerically validated for different types of interface roughness and permittivities of the constituent materials. The proposed methods can be combined with the 4x4 scattering matrix method as a field solver and an evolutionary strategy as an optimizer. The resulting optimization procedure is fast, accurate, numerically stable and can be used to design structures for various applications.

  14. Evaluation of sampling and analytical methods for the determination of chlorodifluoromethane in air.

    PubMed

    Seymour, M J; Lucas, M F

    1993-05-01

    In January 1989, the Occupational Safety and Health Administration (OSHA) published revised permissible exposure limits (PELs) for 212 compounds and established PELs for 164 additional compounds. In cases where regulated compounds did not have specific sampling and analytical methods, methods were suggested by OSHA. The National Institute for Occupational Safety and Health (NIOSH) Manual of Analytical Methods (NMAM) Method 1020, which was developed for 1,1,2-trichloro-1,2,2-trifluoroethane, was suggested by OSHA for the determination of chlorodifluoromethane in workplace air. Because this method was developed for a liquid and chlorodifluoromethane is a gas, the ability of NMAM Method 1020 to adequately sample and quantitate chlorodifluoromethane was questioned and tested by researchers at NIOSH. The evaluation of NMAM Method 1020 for chlorodifluoromethane showed that the capacity of the 100/50-mg charcoal sorbent bed was limited, the standard preparation procedure was incorrect for a gas analyte, and the analyte had low solubility in carbon disulfide. NMAM Method 1018 for dichlorodifluoromethane uses two coconut-shell charcoal tubes in series, a 400/200-mg tube followed by a 100/50-mg tube, which are desorbed with methylene chloride. This method was evaluated for chlorodifluoromethane. Test atmospheres with chlorodifluoromethane concentrations from 0.5 to 2 times the PEL were generated. Modifications of NMAM Method 1018 included changes in the standard preparation procedure, and the gas chromatograph was equipped with a capillary column. These revisions to NMAM 1018 resulted in a 96.5% recovery and a total precision for the method of 7.1% for chlorodifluoromethane. No significant bias in the method was found. Results indicate that the revised NMAM Method 1018 is suitable for the determination of chlorodifluoromethane in workplace air.

  15. Experimental design and multiple response optimization. Using the desirability function in analytical methods development.

    PubMed

    Candioti, Luciana Vera; De Zan, María M; Cámara, María S; Goicoechea, Héctor C

    2014-06-01

    A review of the application of response surface methodology (RSM) when several responses have to be simultaneously optimized in the field of analytical methods development is presented. Several critical issues like response transformation, multiple response optimization and modeling with least squares and artificial neural networks are discussed. Most recent analytical applications are presented in the context of analytical methods development, especially in multiple response optimization procedures using the desirability function.
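
    As a concrete illustration of the desirability approach discussed in the review, the sketch below evaluates Derringer-type individual desirabilities and their geometric mean at one hypothetical point of a design space; the response names, targets, and weights are invented for the example.

        # Derringer-type desirability functions and overall desirability D
        # (geometric mean); responses and limits are hypothetical.
        import numpy as np

        def d_maximize(y, lo, hi, s=1.0):
            """Desirability of a response to be maximized (0 below lo, 1 above hi)."""
            return float(np.clip((y - lo) / (hi - lo), 0.0, 1.0) ** s)

        def d_minimize(y, lo, hi, s=1.0):
            """Desirability of a response to be minimized (1 below lo, 0 above hi)."""
            return float(np.clip((hi - y) / (hi - lo), 0.0, 1.0) ** s)

        resolution, run_time = 2.1, 8.5                  # predicted responses at one design point
        d1 = d_maximize(resolution, lo=1.5, hi=2.5)      # want high chromatographic resolution
        d2 = d_minimize(run_time, lo=5.0, hi=15.0)       # want short run time (minutes)
        D = (d1 * d2) ** 0.5                             # overall desirability
        print(round(d1, 2), round(d2, 2), round(D, 2))

    In a full RSM workflow the responses would come from fitted models, and D would be maximized over the experimental domain rather than evaluated at a single point.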

  16. Analytical calculation of spectral phase of grism pairs by the geometrical ray tracing method

    NASA Astrophysics Data System (ADS)

    Rahimi, L.; Askari, A. A.; Saghafifar, H.

    2016-07-01

    The optimal operation of a grism pair is practically achievable when an analytical expression of its spectral phase is at hand. In this paper, we have employed the accurate geometrical ray tracing method to calculate the analytical phase shift of a grism pair in transmission and reflection configurations. As shown by the results, for a great variety of complicated configurations, the spectral phase of a grism pair has the same form as that of a prism pair. The only exception is when the light enters into and exits from different facets of a reflection grism. The analytical result has been used to calculate the second-order dispersion of several examples of grism pairs in various possible configurations. All results are in complete agreement with those from the ray tracing method. The result of this work can be very helpful in the optimal design and application of grism pairs in various configurations.

  17. Comparative analysis of methods for real-time analytical control of chemotherapies preparations.

    PubMed

    Bazin, Christophe; Cassard, Bruno; Caudron, Eric; Prognon, Patrice; Havard, Laurent

    2015-10-15

    Control of chemotherapy preparations is now an obligation in France, although analytical control is not compulsory. Several methods are available and none of them can be considered ideal. We wanted to compare them so as to determine which one could be the best choice. We compared non-analytical (visual and video-assisted, gravimetric) and analytical (HPLC/FIA, UV/FT-IR, UV/Raman, Raman) methods on the basis of our experience and a SWOT analysis. The results of the analysis show great differences between the techniques but, as expected, none of them is without defects. However, they can probably be used in synergy. Overall, for the pharmacist willing to get involved, the implementation of control for chemotherapy preparations must be anticipated well in advance, with every parameter listed, and remains, in our view, an analyst's job. PMID:26299761

  18. Design and structural verification of locomotive bogies using combined analytical and experimental methods

    NASA Astrophysics Data System (ADS)

    Manea, I.; Popa, G.; Girnita, I.; Prenta, G.

    2015-11-01

    The paper presents a practical methodology for the design and structural verification of locomotive bogie frames using a modern software package for design, structural verification and validation through combined analytical and experimental methods. In the initial stage, the bogie geometry is imported from a CAD program into a finite element analysis program, such as Ansys. The analytical model is validated by experimental modal analysis carried out on a finished bogie frame. The natural frequencies and mode shapes of the bogie frame are determined by both experimental and analytical methods, and a correlation analysis of the two types of models is performed. If the results are unsatisfactory, structural optimization should be performed. If the results are satisfactory, the qualification procedure follows, with static and fatigue tests carried out in a laboratory with international accreditation in the field. This paper presents an application to bogie frames for the 6000 kW LEMA electric locomotive.

  19. Transfer-matrix-based method for an analytical description of velocity-map-imaging spectrometers

    NASA Astrophysics Data System (ADS)

    Harb, M. M.; Cohen, S.; Papalazarou, E.; Lépine, F.; Bordas, C.

    2010-12-01

    We propose a simple and general analytical model describing the operation of a velocity-map-imaging spectrometer. We show that such a spectrometer, possibly equipped with a magnifying lens, can be efficiently modeled by combining analytical expressions for the axial potential distributions along with a transfer matrix method. The model leads transparently to the prediction of the instrument's operating conditions as well as to its resolution. A photoelectron velocity-map-imaging spectrometer with a magnifying lens, built and operated along the lines suggested by the model, has been successfully employed for recording images at threshold photoionization of atomic lithium. The model's reliability is demonstrated by the fairly good agreement between experimental results and calculations. Finally, the limitations of the analytical method along with possible generalizations, extensions, and potential applications are also discussed. The model may serve as a guide for users interested in building and operating such spectrometers as well as a tutorial tool.

  1. Fast algorithms for glassy materials: methods and explorations

    NASA Astrophysics Data System (ADS)

    Middleton, A. Alan

    2014-03-01

    Glassy materials with frozen disorder, including random magnets such as spin glasses and interfaces in disordered materials, exhibit striking non-equilibrium behavior such as the ability to store a history of external parameters (memory). Precisely due to their glassy nature, direct simulation of models of these materials is very slow. In some fortunate cases, however, algorithms exist that exactly compute thermodynamic quantities. Such cases include spin glasses in two dimensions and interfaces and random field magnets in arbitrary dimensions at zero temperature. Using algorithms built using ideas developed by computer scientists and mathematicians, one can even directly sample equilibrium configurations in very large systems, as if one picked the configurations out of a "hat" of all configurations weighted by their Boltzmann factors. This talk will provide some of the background for these methods and discuss the connections between physics and computer science, as used by a number of groups. Recent applications of these methods to investigating phase transitions in glassy materials and to answering qualitative questions about the free energy landscape and memory effects will be discussed. This work was supported in part by NSF grant DMR-1006731. Creighton Thomas and David Huse also contributed to much of the work to be presented.
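
    One concrete instance of the exact zero-temperature algorithms mentioned above is the mapping of the random-field Ising model ground state onto an s-t minimum cut. The sketch below applies that standard mapping on a small hypothetical 2-D lattice using networkx; it illustrates the general technique, not any specific calculation from the talk.

        # Ground state of a 2-D random-field Ising model via s-t minimum cut.
        # Minimizing E = -J sum_<ij> s_i s_j - sum_i h_i s_i is equivalent to a
        # min-cut on the graph built below (spins on the source side are +1).
        import numpy as np
        import networkx as nx

        rng = np.random.default_rng(2)
        L, J = 8, 1.0
        h = rng.normal(0.0, 1.0, size=(L, L))            # random fields (hypothetical)

        G = nx.DiGraph()
        for i in range(L):
            for j in range(L):
                if h[i, j] > 0:                          # field favours spin up
                    G.add_edge('s', (i, j), capacity=2*h[i, j])
                else:                                    # field favours spin down
                    G.add_edge((i, j), 't', capacity=-2*h[i, j])
                for di, dj in ((1, 0), (0, 1)):          # ferromagnetic bonds, open boundaries
                    if i + di < L and j + dj < L:
                        G.add_edge((i, j), (i+di, j+dj), capacity=2*J)
                        G.add_edge((i+di, j+dj), (i, j), capacity=2*J)

        cut_value, (up_side, down_side) = nx.minimum_cut(G, 's', 't')
        spins = np.array([[1 if (i, j) in up_side else -1 for j in range(L)] for i in range(L)])
        energy = (-J*(np.sum(spins[:-1]*spins[1:]) + np.sum(spins[:, :-1]*spins[:, 1:]))
                  - np.sum(h*spins))
        print("ground-state energy:", round(float(energy), 3))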

  2. An Overview of Conventional and Emerging Analytical Methods for the Determination of Mycotoxins

    PubMed Central

    Cigić, Irena Kralj; Prosen, Helena

    2009-01-01

    Mycotoxins are a group of compounds produced by various fungi and excreted into the matrices on which they grow, often food intended for human consumption or animal feed. The high toxicity and carcinogenicity of these compounds and their ability to cause various pathological conditions has led to widespread screening of foods and feeds potentially polluted with them. Maximum permissible levels in different matrices have also been established for some toxins. As these are quite low, analytical methods for determination of mycotoxins have to be both sensitive and specific. In addition, an appropriate sample preparation and pre-concentration method is needed to isolate analytes from rather complicated samples. In this article, an overview of methods for analysis and sample preparation published in the last ten years is given for the most often encountered mycotoxins in different samples, mainly in food. Special emphasis is on liquid chromatography with fluorescence and mass spectrometric detection, while in the field of sample preparation various solid-phase extraction approaches are discussed. However, an overview of other analytical and sample preparation methods less often used is also given. Finally, different matrices where mycotoxins have to be determined are discussed with the emphasis on their specific characteristics important for the analysis (human food and beverages, animal feed, biological samples, environmental samples). Various issues important for accurate qualitative and quantitative analyses are critically discussed: sampling and choice of representative sample, sample preparation and possible bias associated with it, specificity of the analytical method and critical evaluation of results. PMID:19333436

  3. Construction of quasi-periodic solutions of state-dependent delay differential equations by the parameterization method II: Analytic case

    NASA Astrophysics Data System (ADS)

    He, Xiaolong; de la Llave, Rafael

    2016-08-01

    We construct analytic quasi-periodic solutions of a state-dependent delay differential equation with quasi-periodic forcing. We show that if we consider a family of problems that depends on one-dimensional parameters (with some non-degeneracy conditions), there is a positive measure set Π of parameters for which the system admits analytic quasi-periodic solutions. The main difficulty to be overcome is the appearance of small divisors, and this is the reason why we need to exclude parameters. Our main result is proved by a Nash-Moser fast convergent method and is formulated in the a-posteriori format of numerical analysis. That is, given an approximate solution of a functional equation which satisfies some non-degeneracy conditions, we can find a true solution close to it. This is in sharp contrast with the finite regularity theory developed in [18]. We conjecture that the exclusion of parameters is a real phenomenon and not a technical difficulty. More precisely, for generic families of perturbations, the quasi-periodic solutions are only finitely differentiable in open sets in the complement of the parameter set Π.

  4. A fast-convergence POCS seismic denoising and reconstruction method

    NASA Astrophysics Data System (ADS)

    Ge, Zi-Jian; Li, Jing-Ye; Pan, Shu-Lin; Chen, Xiao-Hong

    2015-06-01

    The efficiency, precision, and denoising capabilities of reconstruction algorithms are critical to seismic data processing. Based on the Fourier-domain projection onto convex sets (POCS) algorithm, we propose an inversely proportional threshold model that defines the optimum threshold, in which the descent rate is larger than in the exponential threshold in the large-coefficient section and slower than in the exponential threshold in the small-coefficient section. Thus, the computation efficiency of the POCS seismic reconstruction greatly improves without affecting the reconstructed precision of weak reflections. To improve the flexibility of the inversely proportional threshold, we obtain the optimal threshold by using an adjustable dependent variable in the denominator of the inversely proportional threshold model. For random noise attenuation by completing the missing traces in seismic data reconstruction, we present a weighted reinsertion strategy based on the data-driven model that can be obtained by using the percentage of the data-driven threshold in each iteration in the threshold section. We apply the proposed POCS reconstruction method to 3D synthetic and field data. The results suggest that the inversely proportional threshold model improves the computational efficiency and precision compared with the traditional threshold models; furthermore, the proposed reinserting weight strategy increases the SNR of the reconstructed data.
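
    Stripped of the paper's specific threshold model and reinsertion weights, the Fourier-domain POCS iteration alternates thresholding in the transform domain with reinsertion of the observed traces. The sketch below uses a plain linearly decaying threshold and synthetic data, so it only illustrates the basic loop that the proposed inversely proportional threshold and weighted reinsertion refine.

        # Basic Fourier-domain POCS reconstruction of decimated data with a
        # linearly decaying threshold (synthetic toy example, not the paper's
        # inversely proportional threshold or weighted reinsertion strategy).
        import numpy as np

        rng = np.random.default_rng(3)
        nt, nx = 128, 64
        t = np.arange(nt)[:, None]
        full = np.sin(2*np.pi*0.05*t + 0.3*np.arange(nx))      # toy dipping event
        mask = (rng.random(nx) > 0.4).astype(float)            # ~60% of traces observed
        observed = full * mask

        x = observed.copy()
        n_iter = 50
        for k in range(n_iter):
            spec = np.fft.fft2(x)
            tau = np.abs(spec).max() * (1.0 - k / n_iter)      # decaying threshold
            spec[np.abs(spec) < tau] = 0.0
            x = np.real(np.fft.ifft2(spec))
            x = observed + x * (1.0 - mask)                    # reinsert observed traces

        err = np.linalg.norm(x - full) / np.linalg.norm(full)
        print("relative reconstruction error:", round(float(err), 3))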

  5. Fast synthesize ZnO quantum dots via ultrasonic method.

    PubMed

    Yang, Weimin; Zhang, Bing; Ding, Nan; Ding, Wenhao; Wang, Lixi; Yu, Mingxun; Zhang, Qitu

    2016-05-01

    Green-emission ZnO quantum dots were synthesized by an ultrasonic sol-gel method at various ultrasonic temperatures and times. The photoluminescence properties of these ZnO quantum dots were measured. Time-resolved photoluminescence decay spectra were also taken to follow the change in the amount of defects during the reaction. Both ultrasonic temperature and time affect the type and amount of defects in ZnO quantum dots. The total defects of ZnO quantum dots decreased with increasing ultrasonic temperature and time. The dangling-bond defects disappeared faster than the optical defects. The types of optical defects first changed from oxygen interstitial defects to oxygen vacancy and zinc interstitial defects, and then transformed back to oxygen interstitial defects. The sizes of the ZnO quantum dots could also be controlled by both ultrasonic temperature and time: with increasing ultrasonic temperature and time, the sizes first decreased and then increased. Moreover, a concentrated raw material solution gave larger sizes and more optical defects in the ZnO quantum dots.

  6. Calibration coefficient of reference brachytherapy ionization chamber using analytical and Monte Carlo methods.

    PubMed

    Kumar, Sudhir; Srinivasan, P; Sharma, S D

    2010-06-01

    A cylindrical graphite ionization chamber of sensitive volume 1002.4 cm(3) was designed and fabricated at Bhabha Atomic Research Centre (BARC) for use as a reference dosimeter to measure the strength of high dose rate (HDR) (192)Ir brachytherapy sources. The air kerma calibration coefficient (N(K)) of this ionization chamber was estimated analytically using Burlin general cavity theory and by the Monte Carlo method. In the analytical method, calibration coefficients were calculated for each spectral line of an HDR (192)Ir source and the weighted mean was taken as N(K). In the Monte Carlo method, the geometry of the measurement setup and physics related input data of the HDR (192)Ir source and the surrounding material were simulated using the Monte Carlo N-particle code. The total photon energy fluence was used to arrive at the reference air kerma rate (RAKR) using mass energy absorption coefficients. The energy deposition rates were used to simulate the value of charge rate in the ionization chamber and N(K) was determined. The Monte Carlo calculated N(K) agreed within 1.77 % of that obtained using the analytical method. The experimentally determined RAKR of HDR (192)Ir sources, using this reference ionization chamber by applying the analytically estimated N(K), was found to be in agreement with the vendor quoted RAKR within 1.43%.

  7. 40 CFR 260.21 - Petitions for equivalent testing or analytical methods.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 26 2014-07-01 2014-07-01 false Petitions for equivalent testing or analytical methods. 260.21 Section 260.21 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SOLID WASTES (CONTINUED) HAZARDOUS WASTE MANAGEMENT SYSTEM: GENERAL Rulemaking Petitions §...

  8. Knowledge, Skills, and Abilities for Entry-Level Business Analytics Positions: A Multi-Method Study

    ERIC Educational Resources Information Center

    Cegielski, Casey G.; Jones-Farmer, L. Allison

    2016-01-01

    It is impossible to deny the significant impact from the emergence of big data and business analytics on the fields of Information Technology, Quantitative Methods, and the Decision Sciences. Both industry and academia seek to hire talent in these areas with the hope of developing organizational competencies. This article describes a multi-method…

  9. COMPARISON OF ANALYTICAL METHODS FOR THE MEASUREMENT OF NON-VIABLE BIOLOGICAL PM

    EPA Science Inventory

    The paper describes a preliminary research effort to develop a methodology for the measurement of non-viable biologically based particulate matter (PM), analyzing for mold, dust mite, and ragweed antigens and endotoxins. Using a comparison of analytical methods, the research obj...

  10. Meta-Analytic Structural Equation Modeling (MASEM): Comparison of the Multivariate Methods

    ERIC Educational Resources Information Center

    Zhang, Ying

    2011-01-01

    Meta-analytic Structural Equation Modeling (MASEM) has drawn interest from many researchers recently. In doing MASEM, researchers usually first synthesize correlation matrices across studies using meta-analysis techniques and then analyze the pooled correlation matrix using structural equation modeling techniques. Several multivariate methods of…

  11. Simple and fast cosine approximation method for computer-generated hologram calculation.

    PubMed

    Nishitsuji, Takashi; Shimobaba, Tomoyoshi; Kakue, Takashi; Arai, Daisuke; Ito, Tomoyoshi

    2015-12-14

    The cosine function is a computationally heavy operation in computer-generated hologram (CGH) calculation; therefore, it is commonly replaced by substitution methods such as a look-up table. However, the computational load and required memory space of such methods are still large. In this study, we propose a simple and fast cosine function approximation method for CGH calculation. As a result, we succeeded in creating CGHs of sufficient quality and made the calculation up to 1.6 times faster than using a look-up table of the cosine function in a CPU implementation.
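
    The paper's particular approximation is not detailed in the abstract, so the sketch below only illustrates the general idea with a common low-cost alternative: a sign-folded quadratic approximation of the cosine after range reduction. The function name and accuracy figure are illustrative assumptions, not the authors' method.

        # Illustrative low-cost cosine substitute: range reduction to [-pi, pi)
        # followed by a folded quadratic; worst-case error is a few percent.
        import numpy as np

        def cos_approx(x):
            x = (x + np.pi) % (2*np.pi) - np.pi                  # wrap into [-pi, pi)
            inner = np.abs(x) <= np.pi/2
            y = np.where(inner, x, np.pi - np.abs(x))            # fold onto [-pi/2, pi/2]
            s = np.where(inner, 1.0, -1.0)                       # sign of the true cosine
            return s * (1.0 - (4.0/np.pi**2) * y*y)              # parabola through 0 and +/- pi/2

        x = np.linspace(-10.0, 10.0, 100001)
        print("max abs error:", round(float(np.max(np.abs(cos_approx(x) - np.cos(x)))), 3))

    Whether such a crude substitute is acceptable depends on the hologram quality required, which is exactly the trade-off the paper evaluates for its own approximation.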

  12. Fast high-throughput method for the determination of acidity constants by capillary electrophoresis. II. Acidic internal standards.

    PubMed

    Cabot, Joan Marc; Fuguet, Elisabet; Ràfols, Clara; Rosés, Martí

    2010-12-24

    A fast method for the determination of acidity constants by CZE has recently been developed. This method is based on the use of an internal standard with a pK(a) similar to that of the analyte. In this paper we establish the reference pK(a) values of a set of 24 monoprotic neutral acids of varied structure that we propose as internal standards. These compounds cover the most usual working pH range in CZE and facilitate the selection of adequate internal standards for a given determination. The reference pK(a) values of the acids have been established by the internal standard method itself, i.e. from the mobility differences between different acids of similar pK(a) in the same pH buffers. The determined pK(a) values have been compared with literature pK(a) values and confirmed by determining the pK(a) of some acids of the set by the classical CE method. Some systematic mobility deviations were observed in the NaOH buffer relative to the other buffers used, which precludes the use of NaOH in the classical CE method. However, the deviations affect the test compounds and the internal standards to a similar degree, thus allowing the NaOH buffer to be used in the internal standard method. This demonstrates the better performance of the internal standard method over the classical method in correcting mobility deviations, which, together with its speed, makes it an interesting method for the routine determination of accurate pK(a) values of new pharmaceutical drugs and drug precursors.
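
    For context, the classical CE determination for a monoprotic neutral acid HA rests on the textbook relation between the measured effective mobility and pH, sketched below; the internal standard variant exploits the same relation but works from mobility differences between analyte and standard measured in the same buffer, so that buffer-related mobility shifts cancel. The expressions are the standard general relations, not the paper's specific working equations.

        % mu_eff: measured effective mobility; mu_{A^-}: mobility of the fully
        % ionized species (monoprotic neutral acid HA).
        \begin{equation*}
          \mu_{\mathrm{eff}} = \frac{\mu_{A^-}}{1 + 10^{\,\mathrm{p}K_a - \mathrm{pH}}}
          \qquad\Longrightarrow\qquad
          \mathrm{p}K_a = \mathrm{pH} + \log_{10}\frac{\mu_{A^-} - \mu_{\mathrm{eff}}}{\mu_{\mathrm{eff}}} .
        \end{equation*}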

  13. A minireview of analytical methods for the geographical origin analysis of teas (Camellia sinensis).

    PubMed

    Ye, N S

    2012-01-01

    The chemical composition of tea leaves is influenced by the growing environment, and the content of these components is related to tea quality. Determining the concentrations of these chemical components can help rank teas and indicate their geographical origin. This overview concerns the analytical methods that are being used for the determination of the geographical origin of tea. The analytical approaches have been subdivided into three groups: spectroscopic techniques, chromatographic techniques, and other techniques. The advantages, drawbacks, and reported applications concerning geographical authenticity are discussed.

  14. Reconceptualizing vulnerability: deconstruction and reconstruction as a postmodern feminist analytical research method.

    PubMed

    Glass, Nel; Davis, Kierrynn

    2004-01-01

    Nursing research informed by postmodern feminist perspectives has prompted many debates in recent times. While this is so, nurse researchers who have been tempted to break new ground have had few examples of appropriate analytical methods for a research design informed by the above perspectives. This article presents a deconstructive/reconstructive secondary analysis of a postmodern feminist ethnography in order to provide an analytical exemplar. In doing so, previous notions of vulnerability as a negative state have been challenged and reconstructed. PMID:15206680

  15. Synthetic cathinone pharmacokinetics, analytical methods, and toxicological findings from human performance and postmortem cases.

    PubMed

    Ellefsen, Kayla N; Concheiro, Marta; Huestis, Marilyn A

    2016-05-01

    Synthetic cathinones are commonly abused novel psychoactive substances (NPS). We present a comprehensive systematic review addressing in vitro and in vivo synthetic cathinone pharmacokinetics, analytical methods for detection and quantification in biological matrices, and toxicological findings from human performance and postmortem toxicology cases. Few preclinical administration studies examined synthetic cathinone pharmacokinetic profiles (absorption, distribution, metabolism, and excretion), and only one investigated metabolite pharmacokinetics. Synthetic cathinone metabolic profiling studies, primarily with human liver microsomes, elucidated metabolite structures and identified suitable biomarkers to extend detection windows beyond those provided by parent compounds. Generally, cathinone derivatives underwent ketone reduction, carbonylation of the pyrrolidine ring, and oxidative reactions, with phase II metabolites also detected. Reliable analytical methods are necessary for cathinone identification in biological matrices to document intake and link adverse events to specific compounds and concentrations. NPS analytical methods are constrained in their ability to detect new emerging synthetic cathinones due to limited commercially available reference standards and continuous development of new analogs. Immunoassay screening methods are especially affected, but also gas-chromatography and liquid-chromatography mass spectrometry confirmation methods. Non-targeted high-resolution-mass spectrometry screening methods are advantageous, as they allow for retrospective data analysis and easier addition of new synthetic cathinones to existing methods. Lack of controlled administration studies in humans complicate interpretation of synthetic cathinones in biological matrices, as dosing information is typically unknown. Furthermore, antemortem and postmortem concentrations often overlap and the presence of other psychoactive substances are typically found in combination

  16. A simple analytical method for heterogeneity corrections in low dose rate prostate brachytherapy

    NASA Astrophysics Data System (ADS)

    Hueso-González, Fernando; Vijande, Javier; Ballester, Facundo; Perez-Calatayud, Jose; Siebert, Frank-André

    2015-07-01

    In low energy brachytherapy, the presence of tissue heterogeneities contributes significantly to the discrepancies observed between treatment plan and delivered dose. In this work, we present a simplified analytical dose calculation algorithm for heterogeneous tissue. We compare it with Monte Carlo computations and assess its suitability for integration in clinical treatment planning systems. The algorithm, named RayStretch, is based on the classic equivalent path length method and TG-43 reference data. Analytical and Monte Carlo dose calculations using Penelope2008 are compared for a benchmark case: a prostate patient with calcifications. The results show a remarkable agreement between simulation and algorithm, the latter having, in addition, a high calculation speed. The proposed analytical model is compatible with clinical real-time treatment planning systems based on TG-43 consensus datasets for improving dose calculation and treatment quality in heterogeneous tissue. Moreover, the algorithm is applicable to any type of heterogeneity.
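
    The classic equivalent path length idea on which the algorithm builds can be summarized as follows: the radial distance used in the TG-43 dose evaluation is replaced by a density-scaled, water-equivalent distance accumulated along the ray from the source. This is only a sketch of the generic concept; the exact scaling used by RayStretch is described in the paper itself.

        % Generic equivalent-path-length scaling (sketch of the classic concept):
        % \Delta r_i are ray segments through voxels of density \rho_i between the
        % source and the calculation point.
        \begin{equation*}
          r_{\mathrm{eff}} = \sum_i \frac{\rho_i}{\rho_{\mathrm{water}}}\,\Delta r_i ,
          \qquad
          \dot D(r) \approx \dot D_{\mathrm{TG43}}\!\left(r_{\mathrm{eff}}\right).
        \end{equation*}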

  17. Homotopy Perturbation Method-Based Analytical Solution for Tide-Induced Groundwater Fluctuations.

    PubMed

    Munusamy, Selva Balaji; Dhar, Anirban

    2016-05-01

    Groundwater variations in unconfined aquifers are governed by the nonlinear Boussinesq equation. Analytical solutions for groundwater fluctuations in coastal aquifers under tidal forcing can be obtained using perturbation methods. However, the perturbation parameters should be properly selected and predefined for traditional perturbation methods. In this study, a new dimensional, higher-order analytical solution for groundwater fluctuations is proposed by using the homotopy perturbation method with a virtual perturbation parameter. The parameter-expansion method is used to remove the secular terms generated during the solution process. The solution does not require any predefined perturbation parameter and is valid for higher values of the amplitude parameter A/D, where A is the amplitude of the tide and D is the aquifer thickness.

  18. Analytical method for analyzing c-channel stiffener made of laminate composite

    NASA Astrophysics Data System (ADS)

    Kumton, Tattchapong

    Composite materials play an important role in the aviation industry, where conventional materials such as aluminum have been replaced by composites in main structures. This study focuses on the development of an analytical method to analyze laminated composite structures with a C-channel cross-section. A lamination-theory-based closed-form solution was developed to analyze ply stresses in the C-channel cross-section. The developed method accounts for coupling effects due to asymmetry at both the laminate and the structural configuration level. The method also includes expressions for the sectional properties, such as the centroid and the axial and bending stiffnesses of the cross-section. The results obtained from the analytical method show excellent agreement with finite element results.

  19. Dynamic buckling analysis of delaminated composite plates using semi-analytical finite strip method

    NASA Astrophysics Data System (ADS)

    Ovesy, H. R.; Totounferoush, A.; Ghannadpour, S. A. M.

    2015-05-01

    Delamination can be of paramount importance in the design of composite plates. In the current study, the effect of a through-the-width delamination on the dynamic buckling behavior of a composite plate is studied by implementing the semi-analytical finite strip method. In this method, the energy and work integrals are computed analytically owing to the use of trigonometric functions. Moreover, the method can lead to converged results with a comparatively small number of degrees of freedom. These features make the method quite efficient. To account for delamination effects, the displacement field is enriched by adding appropriate terms. Also, penetration of the delamination surfaces is prevented by incorporating an appropriate contact scheme into the time response analysis. Some selected results are validated against those available in the literature.

  20. Analytical Method for Reduction of Residual Stress Using Low Frequency and Ultrasonic Vibrations

    NASA Astrophysics Data System (ADS)

    Aoki, Shigeru; Kurita, Katsumi; Koshimizu, Shigeomi; Nishimura, Tadashi; Hiroi, Tetsumaro; Hirai, Seiji

    Welding is widely used in the construction of many structures. It is well known that residual stress is generated near the bead because of the locally applied heat. Tensile residual stress on the surface degrades fatigue strength. On the other hand, welding is used for the repair of molds and dies; in this case, reduction of residual stress is required to prevent cracking of the welded part. In this paper, a new method for reducing the residual stress of welded joints is proposed for repair welding of molds and dies. In this method, low-frequency and ultrasonic vibrations are applied during welding. Thick plates are used as specimens representing molds and dies. Residual stresses are reduced when low-frequency and ultrasonic vibrations are used during welding. The experimental results are examined by a simulation method using an analytical model; a single-mass model that accounts for plastic deformation is used. The experimental results are reproduced by the simulation.

  1. A fast level set method for synthetic aperture radar ocean image segmentation.

    PubMed

    Huang, Xiaoxia; Huang, Bo; Li, Hongga

    2009-01-01

    Segmentation of high noise imagery like Synthetic Aperture Radar (SAR) images is still one of the most challenging tasks in image processing. While level set, a novel approach based on the analysis of the motion of an interface, can be used to address this challenge, the cell-based iterations may make the process of image segmentation remarkably slow, especially for large-size images. For this reason fast level set algorithms such as narrow band and fast marching have been attempted. Built upon these, this paper presents an improved fast level set method for SAR ocean image segmentation. This competent method is dependent on both the intensity driven speed and curvature flow that result in a stable and smooth boundary. Notably, it is optimized to track moving interfaces for keeping up with the point-wise boundary propagation using a single list and a method of fast up-wind scheme iteration. The list facilitates efficient insertion and deletion of pixels on the propagation front. Meanwhile, the local up-wind scheme is used to update the motion of the curvature front instead of solving partial differential equations. Experiments have been carried out on extraction of surface slick features from ERS-2 SAR images to substantiate the efficacy of the proposed fast level set method.

  2. Method and apparatus for automated processing and aliquoting of whole blood samples for analysis in a centrifugal fast analyzer

    DOEpatents

    Burtis, C.A.; Johnson, W.F.; Walker, W.A.

    1985-08-05

    A rotor and disc assembly for use in a centrifugal fast analyzer. The assembly is designed to process multiple samples of whole blood followed by aliquoting of the resultant serum into precisely measured samples for subsequent chemical analysis. The assembly requires minimal operator involvement with no mechanical pipetting. The system comprises: (1) a whole blood sample disc; (2) a serum sample disc; (3) a sample preparation rotor; and (4) an analytical rotor. The blood sample disc and serum sample disc are designed with a plurality of precision bore capillary tubes arranged in a spoked array. Samples of blood are loaded into the blood sample disc by capillary action and centrifugally discharged into cavities of the sample preparation rotor where separation of serum and solids is accomplished. The serum is loaded into the capillaries of the serum sample disc by capillary action and subsequently centrifugally expelled into cuvettes of the analytical rotor for analysis by conventional methods. 5 figs.

  3. HPLC-HRMS method for fast phytochelatins determination in plants. Application to analysis of Clinopodium vulgare L.

    PubMed

    Bardarov, Krum; Naydenov, Mladen; Djingova, Rumyana

    2015-09-01

    An optimized analytical method based on C8 core-shell reversed-phase chromatographic separation and high resolution mass spectral (HRMS) detection was developed for the fast analysis of unbound phytochelatins (PCs) in plants. Its application to the analysis of Clinopodium vulgare L. is demonstrated, with appropriate PC-liberating and preservation conditions achieved by using dithiothreitol in the extraction step. A baseline separation of glutathione (GSH) and phytochelatins from 2 to 5 (PC2-PC5) within 3 min was achieved at conventional HPLC backpressure, with detection limits from 3 ppt (for GSH) to 2.5 ppb (for PC5). It is shown that the use of HRMS with tandem mass spectral (MS/MS) capabilities permits additional wide-range screening for iso-phytochelatins and PC-like compounds, based on exact mass and fragment spectra, in a post-acquisition manner. PMID:26003687

  5. Status report on analytical methods to support the disinfectant/disinfection by-products regulation

    SciTech Connect

    Not Available

    1992-08-01

    The U.S. EPA is developing national regulations to control disinfectants and disinfection by-products in public drinking water supplies. Twelve disinfectants and disinfection by-products are identified for possible regulation under this rule. The document summarizes the analytical methods that EPA intends to propose as compliance monitoring methods. A discussion of surrogate measurements that are being considered for inclusion in the regulation is also provided.

  6. Flight and Analytical Methods for Determining the Coupled Vibration Response of Tandem Helicopters

    NASA Technical Reports Server (NTRS)

    Yeates, John E., Jr.; Brooks, George W.; Houbolt, John C.

    1957-01-01

    Chapter one presents a discussion of flight-test and analysis methods for some selected helicopter vibration studies. The use of a mechanical shaker in flight to determine the structural response is reported. A method for the analytical determination of the natural coupled frequencies and mode shapes of vibrations in the vertical plane of tandem helicopters is presented in Chapter two. The coupled mode shapes and frequencies are then used to calculate the response of the helicopter to applied oscillating forces.

  7. Characterization of rice starch and protein obtained by a fast alkaline extraction method.

    PubMed

    Souza, Daiana de; Sbardelotto, Arthur Francisco; Ziegler, Denize Righetto; Marczak, Ligia Damasceno Ferreira; Tessaro, Isabel Cristina

    2016-01-15

    This study evaluated the characteristics of rice starch and protein obtained by a fast alkaline extraction method applied to rice flour (RF) derived from broken rice. The extraction was conducted using 0.18% NaOH at 30°C for 30 min, followed by centrifugation to separate the starch-rich and protein-rich fractions. This fast extraction method made it possible to obtain an isoelectric precipitation protein concentrate (IPPC) with 79% protein and a starchy product with low protein content. The amino acid content of the IPPC was practically unchanged compared with the protein in RF. The proteins of the IPPC underwent denaturation during extraction and some of the starch underwent cold gelatinization, due to the alkaline treatment. With some modifications, the fast method can be interesting from a technological point of view, as it reduces process costs and yields ingredients useful to the food and chemical industries. PMID:26258699

  8. Hybrid massively parallel fast sweeping method for static Hamilton-Jacobi equations

    NASA Astrophysics Data System (ADS)

    Detrixhe, Miles; Gibou, Frédéric

    2016-10-01

    The fast sweeping method is a popular algorithm for solving a variety of static Hamilton-Jacobi equations. Fast sweeping algorithms for parallel computing have been developed, but are severely limited. In this work, we present a multilevel, hybrid parallel algorithm that combines the desirable traits of two distinct parallel methods. The fine and coarse grained components of the algorithm take advantage of heterogeneous computer architecture common in high performance computing facilities. We present the algorithm and demonstrate its effectiveness on a set of example problems including optimal control, dynamic games, and seismic wave propagation. We give results for convergence, parallel scaling, and show state-of-the-art speedup values for the fast sweeping method.
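
    For readers unfamiliar with the underlying kernel, the serial fast sweeping iteration for the 2-D eikonal equation alternates four sweep orderings with a local Godunov update at each grid point. The sketch below is that basic serial algorithm on a toy constant-speed problem with a point source; the paper's contribution is the hybrid fine/coarse-grained parallelization built on top of iterations like this, which is not reproduced here.

        # Serial fast sweeping for |grad T| = s on a uniform 2-D grid
        # (constant slowness s = 1, point source at the center; toy settings).
        import numpy as np

        n, h, s = 101, 0.01, 1.0
        BIG = 1e10
        T = np.full((n, n), BIG)
        T[50, 50] = 0.0                                          # point source

        def godunov(a, b):
            """Local update from the smallest neighbor values a (x) and b (y)."""
            if abs(a - b) >= s*h:
                return min(a, b) + s*h
            return 0.5*(a + b + np.sqrt(2*s*s*h*h - (a - b)**2))

        for _ in range(2):                                       # a couple of passes suffice here
            for di, dj in ((1, 1), (-1, 1), (1, -1), (-1, -1)):  # four sweep orderings
                irange = range(n) if di > 0 else range(n-1, -1, -1)
                jrange = range(n) if dj > 0 else range(n-1, -1, -1)
                for i in irange:
                    for j in jrange:
                        a = min(T[i-1, j] if i > 0 else BIG, T[i+1, j] if i < n-1 else BIG)
                        b = min(T[i, j-1] if j > 0 else BIG, T[i, j+1] if j < n-1 else BIG)
                        T[i, j] = min(T[i, j], godunov(a, b))

        xs = (np.arange(n) - 50) * h
        exact = np.hypot(xs[:, None], xs[None, :])               # exact travel time for s = 1
        print("max |T - exact|:", round(float(np.max(np.abs(T - exact))), 3))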

  9. Development of a neutronics calculation method for designing commercial type Japanese sodium-cooled fast reactor

    SciTech Connect

    Takeda, T.; Shimazu, Y.; Hibi, K.; Fujimura, K.

    2012-07-01

    Under the R and D project to improve the modeling accuracy for the design of fast breeder reactors, the authors are developing a neutronics calculation method for designing a large commercial-type sodium-cooled fast reactor. The calculation method is established by taking into account the special features of the reactor, such as the use of annular fuel pellets, an inner duct tube in large fuel assemblies, and a large core. The Verification and Validation, and Uncertainty Quantification (V and V and UQ) of the calculation method is being performed using measured data from the prototype FBR Monju. The results of this project will be used in the design and analysis of the commercial-type demonstration FBR, known as the Japan Sodium-cooled Fast Reactor (JSFR). (authors)

  10. Effect-Based Screening Methods for Water Quality Characterization Will Augment Conventional Analyte-by-Analyte Chemical Methods in Research As Well As Regulatory Monitoring

    EPA Science Inventory

    Conventional approaches to water quality characterization can provide data on individual chemical components of each water sample. This analyte-by-analyte approach currently serves many useful research and compliance monitoring needs. However these approaches, which require a ...

  11. Analytical studies on an extended car following model for mixed traffic flow with slow and fast vehicles

    NASA Astrophysics Data System (ADS)

    Li, Zhipeng; Xu, Xun; Xu, Shangzhi; Qian, Yeqing; Xu, Juan

    2016-07-01

    The car-following model is extended to take into account the characteristics of mixed traffic flow containing fast and slow vehicles. We conduct a linear stability analysis of the extended model and find that the traffic flow can be stabilized by increasing the percentage of slow vehicles. It can also be concluded that the stabilization of the traffic flow depends closely not only on the average of the two maximum velocities characterizing the two vehicle types, but also on the standard deviation of the maximum velocities among all vehicles when the percentage of slow vehicles equals that of fast ones. With an increase of the average maximum velocity, the traffic flow becomes more and more unstable, while an increase of the standard deviation has a negative effect on stabilizing the traffic system. The direct numerical results are in good agreement with those of the theoretical analysis. Moreover, the relation between flux and traffic density is investigated to simulate the effects of the percentage of slow vehicles on traffic flux over the whole density region.
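
    As a point of reference for the stability analysis mentioned above, the classic single-species optimal velocity model has a well-known linear stability condition, sketched below; the mixed-traffic extension studied in the paper modifies this threshold through the mean and spread of the maximum velocities, as summarized in the abstract. The form below is the standard textbook result, not the paper's extended criterion.

        % Optimal velocity model (Bando et al.): a is the sensitivity, V(h) the
        % optimal velocity function of headway h, h^{*} the uniform-flow headway.
        \begin{equation*}
          \frac{\mathrm{d}v_n}{\mathrm{d}t} = a\left[V(h_n) - v_n\right],
          \qquad
          \text{uniform flow is linearly stable if}\quad V'(h^{*}) < \frac{a}{2} .
        \end{equation*}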

  12. Adaptive integral method with fast Gaussian gridding for solving combined field integral equations

    NASA Astrophysics Data System (ADS)

    Bakır, O.; Bağcı, H.; Michielssen, E.

    Fast Gaussian gridding (FGG), a recently proposed nonuniform fast Fourier transform algorithm, is used to reduce the memory requirements of the adaptive integral method (AIM) for accelerating the method of moments-based solution of combined field integral equations pertinent to the analysis of scattering from three-dimensional perfect electrically conducting surfaces. Numerical results that demonstrate the efficiency and accuracy of the AIM-FGG hybrid in comparison to an AIM-accelerated solver, which uses moment matching to project surface sources onto an auxiliary grid, are presented.

  13. An analytical method of ultra-trace tellurium for samples of sea- and environmental-water.

    PubMed

    Jingru, A; Qing, Z

    1983-01-01

    This paper presents a method for the concentration of tellurium on sulfhydryl cotton fiber. The mechanism of the Te-Re catalytic polarographic behaviour has been studied. The optimal conditions of the system are proposed. An analytical procedure of preconcentration with sulfhydryl cotton fiber and catalytic polarographic determination of ultra-trace tellurium is presented. This method exhibits good selectivity and is simple and easy. It is also one of the most sensitive analytical methods for tellurium at present. The procedure is demonstrated successfully for the determination of background levels of tellurium in a variety of natural waters. This is the first reported determination of tellurium in sea water, filling a gap in the literature of oceanic geochemistry. It was found that the content of tellurium in South China Sea water is 8 X 10(-10) g/l, and that in East China Sea water is 4-7 X 10(-10) g/l.

  14. Trigonometric and hyperbolic functions method for constructing analytic solutions to nonlinear plane magnetohydrodynamics equilibrium equations

    SciTech Connect

    Moawad, S. M.

    2015-02-15

    In this paper, we present a method for constructing exact analytic solutions to the magnetohydrodynamics (MHD) equations. The method is built from the trigonometric and hyperbolic functions and is applied to MHD equilibria with mass flow. Applications to the solar system, concerning the properties of coronal mass ejections that affect the heliosphere, are presented. Some examples of the constructed solutions, which describe magnetic structures of solar eruptions, are investigated. Moreover, the constructed method can be applied to a variety of classes of elliptic partial differential equations which arise in plasma physics.

  15. Semi analytical solution of second order fuzzy Riccati equation by homotopy perturbation method

    NASA Astrophysics Data System (ADS)

    Jameel, A. F.; Ismail, Ahmad Izani Md

    2014-07-01

    In this work, the Homotopy Perturbation Method (HPM) is formulated to find a semi-analytical solution of the Fuzzy Initial Value Problem (FIVP) involving nonlinear second order Riccati equation. This method is based upon homotopy perturbation theory. This method allows for the solution of the differential equation to be calculated in the form of an infinite series in which the components can be easily calculated. The effectiveness of the algorithm is demonstrated by solving nonlinear second order fuzzy Riccati equation. The results indicate that the method is very effective and simple to apply.

  16. Analytical methods for the determination of personal care products in human samples: an overview.

    PubMed

    Jiménez-Díaz, I; Zafra-Gómez, A; Ballesteros, O; Navalón, A

    2014-11-01

    Personal care products (PCPs) are organic chemicals widely used in everyday human life. Nowadays, preservatives, UV filters, antimicrobials and musk fragrances are widely used PCPs. Different studies have shown that some of these compounds can cause adverse health effects, such as genotoxicity, which could even lead to mutagenic or carcinogenic effects, or estrogenicity because of their endocrine disruption activity. Due to the absence of official monitoring protocols, there is an increasing demand for analytical methods that allow the determination of these compounds in human samples in order to obtain more information regarding their behavior and fate in the human body. The complexity of the biological matrices and the low concentration levels of these compounds make it necessary to use advanced sample treatment procedures that afford both sample clean-up, to remove potentially interfering matrix components, and concentration of the analytes. In the present work, a review of the more recent analytical methods published in the scientific literature for the determination of PCPs in human fluids and tissue samples is presented. The work focuses on sample preparation and the analytical techniques employed.

  17. Two Approaches in the Lunar Libration Theory: Analytical vs. Numerical Methods

    NASA Astrophysics Data System (ADS)

    Petrova, Natalia; Zagidullin, Arthur; Nefediev, Yurii; Kosulin, Valerii

    2016-10-01

    Observation of the physical libration of the Moon and other celestial bodies is one of the astronomical methods for remotely evaluating the internal structure of a celestial body without using expensive space experiments. A review of the results obtained from physical libration studies is presented in the report. The main emphasis is placed on the description of successful lunar laser ranging for libration determination and on methods of simulating the physical libration. As a result, the viscoelastic and dissipative properties of the lunar body and the parameters of the lunar core have been estimated. The core's existence was confirmed by the recent reprocessing of seismic data from the Apollo missions. Attention is paid to the physical interpretation of the phenomenon of free libration and the methods of its determination. A significant part of the report is devoted to the practical application of the most accurate analytical tables of lunar libration to date, built by comprehensive analytical processing of the residual differences obtained when comparing long-term series of laser observations with the numerical ephemeris DE421 [1]. In general, the basic outline of the report reflects the effectiveness of two approaches in libration theory - the numerical and the analytical solution. It is shown that the two approaches complement each other in studying the Moon in different aspects: the numerical approach provides the high accuracy of the theory necessary for adequate treatment of modern high-accuracy observations, while the analytical approach reveals the nature of the various manifestations in the lunar rotation and allows new effects in observations of physical libration to be predicted and interpreted [2]. [1] Rambaux, N., J. G. Williams, 2011, The Moon's physical librations and determination of their free modes, Celest. Mech. Dyn. Astron., 109, 85–100. [2] Petrova N., A. Zagidullin, Yu. Nefediev. Analysis of long-periodic variations of lunar libration parameters on the basis

  18. Simultaneous determination of antazoline and naphazoline by the net analyte signal standard addition method and spectrophotometric technique.

    PubMed

    Asadpour-Zeynali, Karim; Ghavami, Raoof; Esfandiari, Roghayeh; Soheili-Azad, Payam

    2010-01-01

    A novel net analyte signal standard addition method (NASSAM) was used for the simultaneous determination of the drugs antazoline and naphazoline. NASSAM can be applied to the determination of analytes in the presence of known interferents. The proposed method eliminates the calibration and prediction steps of multivariate calibration methods; the determination is carried out in a single step for each analyte. The accuracy of the predictions, assessed against the H-point standard addition method, is independent of the shape of the analyte and interferent spectra. The net analyte signal concept was also used to calculate multivariate analytical figures of merit, such as LOD, selectivity, and sensitivity. The method was successfully applied to the simultaneous determination of antazoline and naphazoline in a commercial eye drop sample.

  19. Development of a fast DNA extraction method for sea food and marine species identification.

    PubMed

    Tagliavia, Marcello; Nicosia, Aldo; Salamone, Monica; Biondo, Girolama; Bennici, Carmelo Daniele; Mazzola, Salvatore; Cuttitta, Angela

    2016-07-15

    The authentication of food components is one of the key issues in food safety. Similarly, taxonomy, population and conservation genetics, as well as food web structure analysis, also rely on genetic analyses, including DNA barcoding technology. In this scenario, we developed a fast DNA extraction method, requiring no purification step, for fresh and processed seafood that is suitable for any PCR analysis. The protocol allows fast DNA amplification from any sample, including fresh, stored and processed seafood and any waste of industrial fish processing, independently of the sample storage method. Therefore, this procedure is particularly suitable for the fast processing of samples and for carrying out investigations on the authentication of seafood by means of DNA analysis. PMID:26948627

  1. Comparison of segmentation using fast marching and geodesic active contours methods for bone

    NASA Astrophysics Data System (ADS)

    Bilqis, A.; Widita, R.

    2016-03-01

    Image processing is important in diagnosing diseases or damage of human organs, and one of its important stages is segmentation, the separation of the image into regions with certain similar characteristics, which simplifies the image and makes analysis easier. The case considered in this study is image segmentation of bones. Bone image segmentation is a way to obtain bone dimensions, which are needed to make the prostheses used to treat broken or cracked bones. The segmentation methods chosen in this study are fast marching and geodesic active contours, implemented with the ITK (Insight Segmentation and Registration Toolkit) software. The success of the segmentation was then assessed by calculating its accuracy, sensitivity, and specificity. Based on the results, the geodesic active contours method has slightly higher accuracy and sensitivity values than the fast marching method, whereas fast marching produced three image results with higher specificity values than those of geodesic active contours. The results also indicate that both methods succeeded in performing bone image segmentation. Overall, the geodesic active contours method performs somewhat better than fast marching in segmenting bone images.
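
    The abstract states that accuracy, sensitivity and specificity were used to score the segmentations. As a reminder of how those metrics follow from a binary segmentation and a ground-truth mask, here is a small sketch; the masks below are synthetic placeholders, not the study's bone images.

```python
import numpy as np

def segmentation_scores(pred, truth):
    """Accuracy, sensitivity and specificity of a binary segmentation vs. a reference mask."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)
    tn = np.sum(~pred & ~truth)
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)          # true-positive rate
    specificity = tn / (tn + fp)          # true-negative rate
    return accuracy, sensitivity, specificity

# Synthetic placeholder masks standing in for a segmented image and its reference.
truth = np.zeros((64, 64), dtype=bool); truth[20:45, 20:45] = True
pred = np.zeros((64, 64), dtype=bool);  pred[22:47, 21:44] = True
print(segmentation_scores(pred, truth))
```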

  2. Analytical Method for the Identification and Assay of Kojic Acid, Methylparaben, and Propylparaben in Cosmetic Products Using UPLC: Application of ISO 12787:2011 Standard.

    PubMed

    Qadir, Muhammad Abdul; Ahmed, Mahmood; Shafiq, Muhammad Imtiaz; Ali, Amir; Sadiq, Asma

    2016-09-01

    A straightforward, fast UPLC method is developed for the identification and quantification of kojic acid (KA), methylparaben (MP), and propylparaben (PP) in 15 cosmetic products (skin whitening creams and lotions). Chromatographic separation of KA, MP, and PP was obtained in 3.5 min on an Acquity BEH-C18 column (100 × 2.1 mm, 1.7 μm particle size) as the stationary phase at 260 nm (diode-array detector), with the mobile phase comprising a mixture of 0.01 M dibasic potassium phosphate and methanol-acetonitrile (50 + 50). Validation studies were performed according to in-house established criteria. The response was a linear function of concentration over the range 0.4-1.6 μg/mL for KA, MP, and PP. The LOQ for all components was 0.2 μg/mL using the S/N method. Good separation of analytes was observed, with acceptable values of resolution and tailing. The analytical approach defined in the ISO 12787:2011 standard ("Cosmetics-Analytical methods-Validation criteria for analytical results using chromatographic techniques") was used for the assay of cosmetic samples. PMID:27329740
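
    As a generic illustration of the linearity check and back-calculation that sit behind such a calibration (the peak areas below are hypothetical, not the paper's data), a least-squares fit over the stated 0.4-1.6 μg/mL range might look like this.

```python
import numpy as np

# Hypothetical calibration standards: concentration (ug/mL) vs. detector peak area.
conc = np.array([0.4, 0.7, 1.0, 1.3, 1.6])
area = np.array([8.1, 14.2, 20.3, 26.1, 32.4])        # made-up responses

slope, intercept = np.polyfit(conc, area, 1)          # area = slope*conc + intercept
pred = slope * conc + intercept
r2 = 1 - np.sum((area - pred) ** 2) / np.sum((area - area.mean()) ** 2)
print(f"slope={slope:.3f}, intercept={intercept:.3f}, R^2={r2:.4f}")

# Back-calculate an unknown sample concentration from its measured peak area.
sample_area = 18.0
print("sample concentration (ug/mL):", (sample_area - intercept) / slope)
```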

  3. Application of quality by design to the development of analytical separation methods.

    PubMed

    Orlandini, Serena; Pinzauti, Sergio; Furlanetto, Sandra

    2013-01-01

    Recent pharmaceutical regulatory documents have stressed the critical importance of applying quality by design (QbD) principles for in-depth process understanding to ensure that product quality is built in by design. This article outlines the application of QbD concepts to the development of analytical separation methods, for example chromatography and capillary electrophoresis. QbD tools, for example risk assessment and design of experiments, enable enhanced quality to be integrated into the analytical method, enabling earlier understanding and identification of variables affecting method performance. A QbD guide is described, from identification of quality target product profile to definition of control strategy, emphasizing the main differences from the traditional quality by testing (QbT) approach. The different ways several authors have treated single QbD steps of method development are reviewed and compared. In a final section on outlook, attention is focused on general issues which have arisen from the surveyed literature, and on the need to change the researcher's mindset from the QbT to QbD approach as an important analytical trend for the near future.

  4. An analytical sensitivity method for use in integrated aeroservoelastic aircraft design

    NASA Technical Reports Server (NTRS)

    Gilbert, Michael G.

    1989-01-01

    Interdisciplinary analysis capabilities have been developed for aeroservoelastic aircraft and large flexible spacecraft, but the requisite integrated design methods are only beginning to be developed. One integrated design method which has received attention is based on hierarchal problem decompositions, optimization, and design sensitivity analyses. This paper highlights a design sensitivity analysis method for Linear Quadratic Gaussian (LQG) optimal control laws, enabling the use of LQG techniques in the hierarchal design methodology. The LQG sensitivity analysis method calculates the change in the optimal control law and resulting controlled system responses due to changes in fixed design integration parameters using analytical sensitivity equations. Numerical results of a LQG design sensitivity analysis for a realistic aeroservoelastic aircraft example are presented. In this example, the sensitivity of the optimal control law and aircraft response for various parameters such as wing bending natural frequency is determined. The sensitivity results computed from the analytical expressions are used to estimate changes in response resulting from changes in the parameters. Comparisons of the estimates with exact calculated responses show they are reasonably accurate for ±15 percent changes in the parameters. Evaluation of the analytical expressions is computationally faster than equivalent finite difference calculations.
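
    The analytical LQG sensitivity equations are not given in the abstract. As a point of reference, the finite-difference baseline that such analytical sensitivities are compared against can be sketched for a plain LQR problem; the two-state oscillator, weights and the parameter varied below are illustrative assumptions, not the aeroservoelastic model of the paper.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def lqr_gain(wn):
    """Optimal LQR gain for a lightly damped oscillator with natural frequency wn (rad/s)."""
    zeta = 0.02
    A = np.array([[0.0, 1.0], [-wn**2, -2.0 * zeta * wn]])
    B = np.array([[0.0], [1.0]])
    Q, R = np.eye(2), np.array([[1.0]])
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ P)                # K = R^-1 B^T P

wn0, dw = 10.0, 1e-4                                  # nominal frequency (illustrative)
# Central finite-difference sensitivity of the optimal gain with respect to wn,
# i.e. the quantity an analytical sensitivity method delivers without differencing.
dK_dwn = (lqr_gain(wn0 + dw) - lqr_gain(wn0 - dw)) / (2 * dw)
print("K(wn0)      =", lqr_gain(wn0))
print("dK/dwn (FD) =", dK_dwn)
# First-order estimate of the gain after a +15% change in the parameter:
print("K estimate at 1.15*wn0 =", lqr_gain(wn0) + dK_dwn * 0.15 * wn0)
```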

  5. Comparison of five analytical methods for the determination of peroxide value in oxidized ghee.

    PubMed

    Mehta, Bhavbhuti M; Darji, V B; Aparnathi, K D

    2015-10-15

    In the present study, a comparison of five peroxide analytical methods was performed using oxidized ghee. The methods included three iodometric titrations, viz. the Bureau of Indian Standards (BIS), Association of Analytical Communities (AOAC) and American Oil Chemists' Society (AOCS) methods, and two colorimetric methods based on the oxidation of iron, the ferrous xylenol orange (FOX) and ferric thiocyanate (International Dairy Federation, IDF) methods. Six ghee samples were stored at 80 °C to accelerate deterioration and sampled periodically (every 48 h) for peroxides. Results from the five analytical methods were compared with each other and with a flavor score (9-point hedonic scale). The correlation coefficients obtained using the different methods were in the order: FOX (-0.836) > IDF (-0.821) > AOCS (-0.798) > AOAC (-0.795) > BIS (-0.754). Thus, among the five methods used for determination of the peroxide value of ghee during storage, the highest coefficient of correlation was obtained for the FOX method. The high correlation between the FOX and flavor data indicated that FOX was the most suitable method tested to determine peroxide value in oxidized ghee. PMID:25952892
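
    A minimal illustration of the comparison reported above, correlating peroxide values from two methods with a flavor score; the arrays below are invented placeholders, not the study's measurements.

```python
import numpy as np

# Hypothetical peroxide values for six sampling times and the matching flavor scores.
pv_fox = np.array([0.5, 1.2, 2.4, 3.9, 5.6, 7.1])     # placeholder FOX results
pv_idf = np.array([0.4, 1.0, 2.6, 3.5, 6.0, 7.5])     # placeholder IDF results
flavor = np.array([8.8, 8.1, 7.0, 5.9, 4.6, 3.5])     # 9-point hedonic scale

def pearson_r(x, y):
    x, y = x - x.mean(), y - y.mean()
    return float(np.sum(x * y) / np.sqrt(np.sum(x**2) * np.sum(y**2)))

# Negative correlations are expected: flavor deteriorates as the peroxide value rises.
print("FOX vs flavor:", pearson_r(pv_fox, flavor))
print("IDF vs flavor:", pearson_r(pv_idf, flavor))
```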

  6. Environmental equity research: review with focus on outdoor air pollution research methods and analytic tools.

    PubMed

    Miao, Qun; Chen, Dongmei; Buzzelli, Michael; Aronson, Kristan J

    2015-01-01

    The objective of this study was to review environmental equity research on outdoor air pollution and, specifically, methods and tools used in research, published in English, with the aim of recommending the best methods and analytic tools. English language publications from 2000 to 2012 were identified in Google Scholar, Ovid MEDLINE, and PubMed. Research methodologies and results were reviewed and potential deficiencies and knowledge gaps identified. The publications show that exposure to outdoor air pollution differs by social factors, but findings are inconsistent in Canada. In terms of study designs, most were small and ecological and therefore prone to the ecological fallacy. Newer tools such as geographic information systems, modeling, and biomarkers offer improved precision in exposure measurement. Higher-quality research using large, individual-based samples and more precise analytic tools are needed to provide better evidence for policy-making to reduce environmental inequities.

  7. An analytical solution of temperature response in multilayered materials for transient methods

    SciTech Connect

    Araki, N.; Makino, A.; Ishiguro, T.; Mihara, J.

    1992-05-01

    Transient methods, such as those with pulse- or stepwise heating, have often been used to measure thermal diffusivities of various materials including layered materials. The objective of the present study is to derive an analytical solution of the temperature rise in a multilayered material, the front surface of which is subjected to pulse- or stepwise heating. The Laplace transformation has been used to obtain the analytical solution. This solution will enable one to establish the appropriate measurement method for thermophysical properties of the multilayered material. It is also shown that the present solution can be extended to functionally gradient materials (FGM), in which thermophysical properties as well as compositions change continuously. 4 refs., 3 figs.

  8. Analytical approximate solution of the cooling problem by Adomian decomposition method

    NASA Astrophysics Data System (ADS)

    Alizadeh, Ebrahim; Sedighi, Kurosh; Farhadi, Mousa; Ebrahimi-Kebria, H. R.

    2009-02-01

    The Adomian decomposition method (ADM) can provide analytical approximations to a rather wide class of nonlinear (and stochastic) equations without linearization, perturbation, closure approximations, or discretization methods. In the present work, ADM is employed to solve the momentum and energy equations for laminar boundary layer flow over a flat plate at zero incidence, neglecting frictional heating. A trial and error strategy has been used to obtain the constant coefficient in the approximated solution. ADM provides an analytical solution in the form of an infinite power series. The effect of the number of Adomian polynomial terms is considered, and the accuracy of the results increases as more terms are included. The velocity and thermal profiles in the boundary layer are calculated, and the effect of the Prandtl number on the thermal boundary layer is obtained. Results show that ADM can solve the nonlinear differential equations with negligible error compared to the exact solution.

  9. A fast operator perturbation method for the solution of the special relativistic equation of radiative transfer in spherical symmetry

    NASA Technical Reports Server (NTRS)

    Hauschildt, P. H.

    1992-01-01

    A fast method for the solution of the radiative transfer equation in rapidly moving spherical media, based on an approximate Lambda-operator iteration, is described. The method uses the short characteristic method and a tridiagonal approximate Lambda-operator to achieve fast convergence. The convergence properties and the CPU time requirements of the method are discussed for the test problem of a two-level atom with background continuum absorption and Thomson scattering. Details of the actual implementation for fast vector and parallel computers are given. The method is accurate and fast enough to be incorporated in radiation-hydrodynamic calculations.
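
    The abstract only names the ingredients (short characteristics plus a tridiagonal approximate Lambda-operator). The toy sketch below shows, for a made-up Lambda matrix and a two-level-atom-like source function equation S = (1 - eps) * Lambda(S) + eps * B, how preconditioning the iteration with an approximate operator (here simply the diagonal of Lambda) converges far faster than plain Lambda iteration. It is a schematic illustration of the acceleration idea, not the paper's transfer solver.

```python
import numpy as np

n, eps = 60, 1e-3                                  # eps: photon destruction probability
B = np.ones(n)                                     # thermal source term

# A made-up, sharply peaked "Lambda" matrix with row sums a little below one.
z = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
Lam = np.exp(-2.0 * z)
Lam /= Lam.sum(axis=1, keepdims=True) * 1.02

S_exact = np.linalg.solve(np.eye(n) - (1 - eps) * Lam, eps * B)

def iterate(approx_op, n_iter=200):
    """Solve S = (1-eps)*Lam@S + eps*B by iteration preconditioned with an approximate operator."""
    S = eps * B.copy()
    M = np.eye(n) - (1 - eps) * approx_op          # I - (1-eps)*Lambda_star
    for _ in range(n_iter):
        residual = (1 - eps) * Lam @ S + eps * B - S
        S = S + np.linalg.solve(M, residual)
    return np.max(np.abs(S - S_exact))

plain = iterate(np.zeros((n, n)))                  # plain Lambda iteration
accel = iterate(np.diag(np.diag(Lam)))             # diagonal approximate Lambda-operator
print("error after 200 sweeps: plain = %.2e, accelerated = %.2e" % (plain, accel))
```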

  10. SRC-I demonstration plant analytical laboratory methods manual. Final technical report

    SciTech Connect

    Klusaritz, M.L.; Tewari, K.C.; Tiedge, W.F.; Skinner, R.W.; Znaimer, S.

    1983-03-01

    This manual is a compilation of analytical procedures required for operation of a Solvent-Refined Coal (SRC-I) demonstration or commercial plant. Each method reproduced in full includes a detailed procedure, a list of equipment and reagents, safety precautions, and, where possible, a precision statement. Procedures for the laboratory's environmental and industrial hygiene modules are not included. Required American Society for Testing and Materials (ASTM) methods are cited, and ICRC's suggested modifications to these methods for handling coal-derived products are provided.

  11. Analytic method for three-center nuclear attraction integrals: a generalization of the Gegenbauer addition theorem

    SciTech Connect

    Weatherford, C.A.

    1988-01-01

    A completely analytic method for evaluating three-center nuclear-attraction integrals for STOs is presented. The method exploits a separation of the STO into an evenly loaded solid harmonic and a 0s STO. The harmonics are translated to the molecular center of mass in closed finite terms. The 0s STO is translated using the Gegenbauer addition theorem; 1s STOs are translated using a single parametric differentiation of the 0s formula. Explicit formulas for the integrals are presented for arbitrarily located atoms. A numerical example is given to illustrate the method.

  12. Analytical solutions for determining residual stresses in two-dimensional domains using the contour method

    PubMed Central

    Kartal, Mehmet E.

    2013-01-01

    The contour method is one of the most prevalent destructive techniques for residual stress measurement. Up to now, the method has involved the use of the finite-element (FE) method to determine the residual stresses from the experimental measurements. This paper presents analytical solutions, obtained for a semi-infinite strip and a finite rectangle, which can be used to calculate the residual stresses directly from the measured data; thereby, eliminating the need for an FE approach. The technique is then used to determine the residual stresses in a variable-polarity plasma-arc welded plate and the results show good agreement with independent neutron diffraction measurements. PMID:24204187

  13. Analytical method for determining quantum well exciton properties in a magnetic field

    NASA Astrophysics Data System (ADS)

    Stépnicki, Piotr; Piétka, Barbara; Morier-Genoud, François; Deveaud, Benoît; Matuszewski, Michał

    2015-05-01

    We develop an analytical approximate method for determining the Bohr radii of Wannier-Mott excitons in thin quantum wells under the influence of a magnetic field perpendicular to the quantum well plane. Our hybrid variational-perturbative method allows us to obtain simple closed formulas for exciton binding energies and optical transition rates. We confirm the reliability of our method through exciton-polariton experiments realized in a GaAs/AlAs microcavity with an 8 nm InxGa1-xAs quantum well and magnetic field strengths as high as 14 T.

  14. A new validated analytical method for the quality control of red ginseng products

    PubMed Central

    Kim, Il-Woung; Cha, Kyu-Min; Wee, Jae Joon; Ye, Michael B.; Kim, Si-Kwan

    2013-01-01

    The main active components of Panax ginseng are ginsenosides. Ginsenosides Rb1 and Rg1 are accepted as marker substances for quality control worldwide. The analytical methods currently used to detect these two compounds unfairly penalize steamed and dried (red) P. ginseng preparations, because they have a lower content of those ginsenosides than white ginseng. To manufacture red ginseng products from fresh ginseng, the ginseng roots are exposed to high temperatures for many hours. This heating process converts the naturally occurring ginsenosides Rb1 and Rg1 into artifact ginsenosides such as ginsenoside Rg3, Rg5, Rh1, and Rh2, among others. This study highlights the absurdity of the current analytical practice by investigating the time-dependent changes in the crude saponin and the major natural and artifact ginsenoside contents during simmering. The results lead us to recommend (20S)- and (20R)-ginsenoside Rg3 as new reference materials to complement the current P. ginseng preparation reference materials ginsenoside Rb1 and Rg1. An attempt has also been made to establish validated qualitative and quantitative analytical procedures for these four compounds that meet International Conference on Harmonization (ICH) guidelines for specificity, linearity, range, accuracy, precision, detection limit, quantitation limit, robustness and system suitability. Based on these results, we suggest a validated analytical procedure which conforms to ICH guidelines and equally values the contents of ginsenosides in white and red ginseng preparations. PMID:24235862

  16. A Fast and Robust Ellipse-Detection Method Based on Sorted Merging

    PubMed Central

    Ren, Guanghui; Zhao, Yaqin; Jiang, Lihui

    2014-01-01

    A fast and robust ellipse-detection method based on sorted merging is proposed in this paper. This method first represents the edge bitmap approximately with a set of line segments and then gradually merges the line segments into elliptical arcs and ellipses. To achieve high accuracy, a sorted merging strategy is proposed: the merging degrees of line segments/elliptical arcs are estimated, and line segments/elliptical arcs are merged in descending order of the merging degrees, which significantly improves the merging accuracy. During the merging process, multiple properties of ellipses are utilized to filter line segment/elliptical arc pairs, making the method very efficient. In addition, an ellipse-fitting method is proposed that restricts the maximum ratio of the semimajor axis and the semiminor axis, further improving the merging accuracy. Experimental results indicate that the proposed method is robust to outliers, noise, and partial occlusion and is fast enough for real-time applications. PMID:24782661

  17. Analytical methods in bioassay-directed investigations of mutagenicity of air particulate material.

    PubMed

    Marvin, Christopher H; Hewitt, L Mark

    2007-01-01

    The combination of short-term bioassays and analytical chemical techniques has been successfully used in the identification of a variety of mutagenic compounds in complex mixtures. Much of the early work in the field of bioassay-directed fractionation resulted from the development of a short-term bacterial assay employing Salmonella typhimurium; this assay is commonly known as the Ames assay. Ideally, analytical methods for assessment of mutagenicity of any environmental matrix should exhibit characteristics including high capacity, good selectivity, good analytical resolution, non-destructiveness, and reproducibility. A variety of extraction solvents have been employed in investigations of mutagenicity of air particulate; sequential combination of dichloromethane followed by methanol is most popular. Soxhlet extraction has been the most common extraction method, followed by sonication. Attempts at initial fractionation using different extraction solvents have met with limited success and highlight the need for fractionation schemes applicable to moderately polar and polar mutagenic compounds. Fractionation methods reported in the literature are reviewed according to three general schemas: (i) acid/base/neutral partitioning followed by fractionation using open-column chromatography and/or HPLC; (ii) fractionation based on normal-phase (NP) HPLC using a cyanopropyl or chemically similar stationary phase; and (iii) fractionation by open-column chromatography followed by NP-HPLC. The HPLC methods may be preparative, semi-preparative, or analytical scale. Variations based on acid/base/neutral partitioning followed by a chromatographic separation have also been employed. Other lesser-used approaches involve fractionation based on ion-exchange and thin-layer chromatographies. Although some of the methodologies used in contemporary studies of mutagenicity of air particulate do not represent significant advances in technology over the past 30 years, their simplicity, low

  18. Method and apparatus for continuous fluid leak monitoring and detection in analytical instruments and instrument systems

    DOEpatents

    Weitz, Karl K.; Moore, Ronald J.

    2010-07-13

    A method and device are disclosed that provide for detection of fluid leaks in analytical instruments and instrument systems. The leak detection device includes a collection tube, a fluid absorbing material, and a circuit that electrically couples to an indicator device. When assembled, the leak detection device detects and monitors for fluid leaks, providing a preselected response in conjunction with the indicator device when contacted by a fluid.

  19. A comparative evaluation of analytical methods to allocate individual marks from a team mark

    NASA Astrophysics Data System (ADS)

    Nepal, Kali

    2012-08-01

    This study presents a comparative evaluation of analytical methods to allocate individual marks from a team mark. Only the methods that use or can be converted into some form of mathematical equations are analysed. Some of these methods focus primarily on the assessment of the quality of teamwork product (product assessment) while the others put greater emphasis on the assessment of teamwork performance (process assessment). The remaining methods try to strike a balance between product assessment and process assessment. To discuss the characteristics of these methods, graphical plots generated by the mathematical equations that collectively cover all possible team learning scenarios are discussed. Finally, a typical teamwork example is used to simplify the discussions. Although each of the methods discussed has its own merits for a particular application scenario, recent methods are relatively better in terms of a number of evaluation criteria.
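
    The individual formulas compared in the study are not reproduced in the abstract; the snippet below only illustrates one widely used family of such equations, an individual weighting factor derived from peer ratings with an optional square-root moderation, and the ratings are hypothetical.

```python
def individual_marks(team_mark, peer_ratings, moderate=True):
    """Allocate individual marks from a team mark via a peer-assessment weighting factor.

    Each student's factor is their mean peer rating divided by the team's mean rating;
    an optional square root moderates the spread (one common variant, not the only one).
    """
    team_mean = sum(peer_ratings.values()) / len(peer_ratings)
    marks = {}
    for student, rating in peer_ratings.items():
        factor = rating / team_mean
        if moderate:
            factor = factor ** 0.5
        marks[student] = round(min(100.0, team_mark * factor), 1)   # cap at full marks
    return marks

# Hypothetical team of four with a team mark of 72 and averaged peer ratings out of 5.
print(individual_marks(72.0, {"A": 4.5, "B": 3.9, "C": 3.0, "D": 4.6}))
```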

  1. Determination of proline in honey: comparison between official methods, optimization and validation of the analytical methodology.

    PubMed

    Truzzi, Cristina; Annibaldi, Anna; Illuminati, Silvia; Finale, Carolina; Scarponi, Giuseppe

    2014-05-01

    The study compares official spectrophotometric methods for the determination of proline content in honey - those of the International Honey Commission (IHC) and the Association of Official Analytical Chemists (AOAC) - with the original Ough method. Results show that the extra time-consuming treatment stages added by the IHC method with respect to the Ough method are pointless. We demonstrate that the AOAC method proves to be the best in terms of accuracy and time saving. The optimized waiting time for the absorbance recording is set at 35 min from the removal of the reaction tubes from the boiling bath used in the sample treatment. The optimized method was validated in the matrix: linearity up to 1800 mg L(-1), limit of detection 20 mg L(-1), limit of quantification 61 mg L(-1). The method was applied to 43 unifloral honey samples from the Marche region, Italy. PMID:24360478

  2. A novel analytical approximation technique for highly nonlinear oscillators based on the energy balance method

    NASA Astrophysics Data System (ADS)

    Hosen, Md. Alal; Chowdhury, M. S. H.; Ali, Mohammad Yeakub; Ismail, Ahmad Faris

    In the present paper, a novel analytical approximation technique based on the energy balance method (EBM) is proposed to obtain approximate periodic solutions for generalized highly nonlinear oscillators. The expressions of the natural frequency-amplitude relationship are obtained in a novel analytical way. The accuracy of the proposed method is investigated on three benchmark oscillatory problems, namely, the simple relativistic oscillator, the stretched elastic wire oscillator (with a mass attached to its midpoint) and the Duffing-relativistic oscillator. For an initial oscillation amplitude A0 = 100, the maximal relative errors of the natural frequency found for the three oscillators are 2.1637%, 0.0001% and 1.201%, respectively, which are much lower than the errors found using existing methods. Remarkably, the approximate natural frequency is highly accurate over the whole range of large oscillation amplitudes when compared with the exact values. The very simple solution procedure and the high accuracy found in the three benchmark problems reveal the novelty, reliability and wide applicability of the proposed analytical approximation technique.
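
    The specific oscillators and error figures above are the paper's. As a self-contained illustration of the energy balance idea, the classical first-order EBM estimate for the undamped Duffing oscillator u'' + u + eps*u^3 = 0 with amplitude A is omega ≈ sqrt(1 + 3*eps*A^2/4); the sketch below checks that estimate against a numerically integrated period (this oscillator is not one of the paper's three benchmarks).

```python
import numpy as np
from scipy.integrate import solve_ivp

def ebm_frequency(A, eps):
    """First-order energy-balance estimate for u'' + u + eps*u**3 = 0, u(0)=A, u'(0)=0."""
    return np.sqrt(1.0 + 0.75 * eps * A**2)

def numerical_frequency(A, eps):
    """Frequency from the first zero of u, which is a quarter period for this oscillator."""
    rhs = lambda t, y: [y[1], -y[0] - eps * y[0]**3]
    crossing = lambda t, y: y[0]
    crossing.terminal, crossing.direction = True, -1
    sol = solve_ivp(rhs, [0.0, 100.0], [A, 0.0], events=crossing, rtol=1e-10, atol=1e-12)
    return 2.0 * np.pi / (4.0 * sol.t_events[0][0])

for A in (0.5, 1.0, 5.0):
    w_ebm, w_num = ebm_frequency(A, 1.0), numerical_frequency(A, 1.0)
    print(f"A={A}: EBM {w_ebm:.4f}  numerical {w_num:.4f}  "
          f"relative error {abs(w_ebm - w_num) / w_num:.2%}")
```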

  3. Instrumental characterization of odour: a combination of olfactory and analytical methods.

    PubMed

    Zarra, T; Naddeo, V; Belgiorno, V; Reiser, M; Kranert, M

    2009-01-01

    Odour emissions are a major environmental issue in wastewater treatment plants and are considered to be the main cause of disturbance noticed by the exposed population. Odour measurement is carried out using analytical or sensorial methods. Sensorial analysis, being assigned to the "human sensor", introduces considerable uncertainty. In this study, a correlation between analytical and sensorial methods was investigated. A novel tool was used both to define odour indexes and to characterise the odour sources and the volatile substances that cause annoyance in a wastewater treatment plant, with the aim of removing the subjective component in the measurement of odours and defining the induced impact. The sources and the main chemical substances responsible for the olfactory annoyance were identified. Around 36 different substances were detected, with more than half being smell-relevant components. Dimethyl disulphide was identified as the key compound. The results highlight the high correlation between analytical and sensorial methods and its applicability in odour emission monitoring.

  4. Analytical perturbation methods for studying a transversely isotropic medium in multipole acoustic logging

    NASA Astrophysics Data System (ADS)

    Zhang, Li; Liu, Xi-Feng; Yuan, Wen; Wang, Xiao-Tian

    2014-06-01

    A new analytical perturbation method is developed in this study to investigate the general reflection coefficients, in the frequency-wavenumber domain, of the acoustic field in a fluid-filled borehole surrounded by a transversely isotropic medium (TIM). A transversely isotropic medium with its symmetry axis parallel to the borehole axis, usually called a VTI medium, was adopted because exact solutions exist for it, and a corresponding isotropic medium was adopted as the reference state of the perturbation solution. The general reflection coefficients were calculated using the perturbation method and were compared with the analytical solutions. The zero-, first- and second-order perturbation solutions for the general reflection coefficients excited by monopole, dipole and quadrupole sources were investigated for a transversely isotropic elastic solid. The results showed that the general reflection coefficients obtained using the perturbation solutions and the analytical solutions were similar for all three sources. In summary, our study demonstrated that the perturbation method is valid and effective in acoustic logging. This work provides a theoretical foundation for extending perturbation analyses to complicated anisotropic acoustic logging applications.

  5. Fast Marching Tree: a Fast Marching Sampling-Based Method for Optimal Motion Planning in Many Dimensions*

    PubMed Central

    Janson, Lucas; Schmerling, Edward; Clark, Ashley; Pavone, Marco

    2015-01-01

    In this paper we present a novel probabilistic sampling-based motion planning algorithm called the Fast Marching Tree algorithm (FMT*). The algorithm is specifically aimed at solving complex motion planning problems in high-dimensional configuration spaces. This algorithm is proven to be asymptotically optimal and is shown to converge to an optimal solution faster than its state-of-the-art counterparts, chiefly PRM* and RRT*. The FMT* algorithm performs a “lazy” dynamic programming recursion on a predetermined number of probabilistically-drawn samples to grow a tree of paths, which moves steadily outward in cost-to-arrive space. As such, this algorithm combines features of both single-query algorithms (chiefly RRT) and multiple-query algorithms (chiefly PRM), and is reminiscent of the Fast Marching Method for the solution of Eikonal equations. As a departure from previous analysis approaches that are based on the notion of almost sure convergence, the FMT* algorithm is analyzed under the notion of convergence in probability: the extra mathematical flexibility of this approach allows for convergence rate bounds—the first in the field of optimal sampling-based motion planning. Specifically, for a certain selection of tuning parameters and configuration spaces, we obtain a convergence rate bound of order O(n^(-1/d+ρ)), where n is the number of sampled points, d is the dimension of the configuration space, and ρ is an arbitrarily small constant. We go on to demonstrate asymptotic optimality for a number of variations on FMT*, namely when the configuration space is sampled non-uniformly, when the cost is not arc length, and when connections are made based on the number of nearest neighbors instead of a fixed connection radius. Numerical experiments over a range of dimensions and obstacle configurations confirm our theoretical and heuristic arguments by showing that FMT*, for a given execution time, returns substantially better solutions than either PRM* or RRT

  6. Analytical methods of the U.S. Geological Survey's New York District Water-Analysis Laboratory

    USGS Publications Warehouse

    Lawrence, Gregory B.; Lincoln, Tricia A.; Horan-Ross, Debra A.; Olson, Mark L.; Waldron, Laura A.

    1995-01-01

    The New York District of the U.S. Geological Survey (USGS) in Troy, N.Y., operates a water-analysis laboratory for USGS watershed-research projects in the Northeast that require analyses of precipitation and of dilute surface water and soil water for major ions; it also provides analyses of certain chemical constituents in soils and soil gas samples. This report presents the methods for chemical analyses of water samples, soil-water samples, and soil-gas samples collected in watershed-research projects. The introduction describes the general materials and techniques for each method and explains the USGS quality-assurance program and data-management procedures; it also explains the use of cross references to the three most commonly used methods manuals for analysis of dilute waters. The body of the report describes the analytical procedures for (1) solution analysis, (2) soil analysis, and (3) soil-gas analysis. The methods are presented in alphabetical order by constituent. The method for each constituent is preceded by (1) reference codes for pertinent sections of the three manuals mentioned above, (2) a list of the method's applications, and (3) a summary of the procedure. The methods section for each constituent contains the following categories: instrumentation and equipment, sample preservation and storage, reagents and standards, analytical procedures, quality control, maintenance, interferences, safety considerations, and references. Sufficient information is presented for each method to allow the resulting data to be appropriately used in environmental samples.

  7. The expansion in Gegenbauer polynomials: A simple method for the fast computation of the Gegenbauer coefficients

    NASA Astrophysics Data System (ADS)

    De Micheli, Enrico; Viano, Giovanni Alberto

    2013-04-01

    We present a simple and fast algorithm for the computation of the Gegenbauer transform, which is known to be very useful in the development of spectral methods for the numerical solution of ordinary and partial differential equations of physical interest. We prove that the coefficients of the expansion of a function f(x) in Gegenbauer (also known as ultraspherical) polynomials coincide with the Fourier coefficients of a suitable integral transform of the function f(x). This allows N Gegenbauer coefficients to be computed in O(N log₂ N) operations by means of a single Fast Fourier Transform of the integral transform of f(x). We also show that the inverse Gegenbauer transform is expressible as the Abel-type transform of a suitable Fourier series. This fact produces a novel algorithm for the fast evaluation of Gegenbauer expansions.
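
    The FFT-based fast algorithm itself is not spelled out in the abstract. For orientation, here is the direct way of obtaining Gegenbauer (ultraspherical) coefficients by Gauss-Gegenbauer quadrature, an O(N^2) reference computation of the kind the proposed fast method accelerates; the test function and parameters are arbitrary.

```python
import numpy as np
from scipy.special import roots_gegenbauer, eval_gegenbauer

def gegenbauer_coefficients(f, n_coeffs, lam, n_quad=200):
    """Coefficients c_n of f(x) ~ sum_n c_n C_n^lam(x), by Gauss-Gegenbauer quadrature.

    Direct O(N^2) reference computation; the normalization integral is evaluated with
    the same quadrature rather than with its closed-form Gamma-function expression."""
    x, w = roots_gegenbauer(n_quad, lam)          # nodes/weights for weight (1-x^2)^(lam-1/2)
    fx = f(x)
    coeffs = np.empty(n_coeffs)
    for n in range(n_coeffs):
        Cn = eval_gegenbauer(n, lam, x)
        coeffs[n] = np.sum(w * fx * Cn) / np.sum(w * Cn * Cn)
    return coeffs

f = lambda x: np.exp(x) * np.cos(3 * x)
lam, N = 1.5, 20
c = gegenbauer_coefficients(f, N, lam)

# Reconstruct f from the truncated expansion and check the error at a few test points.
xt = np.linspace(-0.9, 0.9, 7)
approx = sum(c[n] * eval_gegenbauer(n, lam, xt) for n in range(N))
print("max reconstruction error:", np.max(np.abs(approx - f(xt))))
```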

  8. Fast Monte Carlo Electron-Photon Transport Method and Application in Accurate Radiotherapy

    NASA Astrophysics Data System (ADS)

    Hao, Lijuan; Sun, Guangyao; Zheng, Huaqing; Song, Jing; Chen, Zhenping; Li, Gui

    2014-06-01

    The Monte Carlo (MC) method is the most accurate computational method for dose calculation, but its wide application in clinical accurate radiotherapy is hindered by its slow convergence and long computation time. In MC dose calculation research, the main task is to speed up the computation while maintaining high precision. The purpose of this paper is to enhance the calculation speed of the MC method for electron-photon transport with high precision and ultimately to reduce the accurate radiotherapy dose calculation time on an ordinary computer to the level of several hours, which meets the requirement of clinical dose verification. Based on the existing Super Monte Carlo Simulation Program (SuperMC), developed by the FDS Team, a fast MC method for coupled electron-photon transport was presented with a focus on two aspects: firstly, by simplifying and optimizing the physical model of electron-photon transport, the calculation speed was increased with only a slight reduction in calculation accuracy; secondly, a variety of MC acceleration methods were used, for example, making use of information obtained in previous calculations to avoid repeated simulation of particles with identical histories, and applying proper variance reduction techniques to accelerate the convergence rate of the MC method. The fast MC method was tested on many simple physical models and clinical cases, including nasopharyngeal carcinoma, peripheral lung tumor and cervical carcinoma. The results show that the fast MC method for electron-photon transport is fast enough to meet the requirement of clinical accurate radiotherapy dose verification. Later, the method will be applied to the Accurate/Advanced Radiation Therapy System ARTS as an MC dose verification module.

  9. Pesticides in near-surface aquifers: An assessment using highly sensitive analytical methods and tritium

    SciTech Connect

    Kolpin, D.W.; Goolsby, D.A.; Thurman, E.M.

    1995-11-01

    In 1992, the U.S. Geological Survey (USGS) determined the distribution of pesticides in near-surface aquifers of the Midwestern USA to be much more widespread than originally determined during a 1991 USGS study. The frequency of pesticide detection increased from 28.4% during the 1991 study to 59.0% during the 1992 study. This increase in pesticide detection was primarily the result of a more sensitive analytical method that used reporting limits as much as 20 times lower than previously available and a threefold increase in the number of pesticide metabolites analyzed. No pesticide concentrations exceeded the U.S. Environmental Protection Agency's (USEPA's) maximum contaminant levels or health advisory levels for drinking water. However, five of the six most frequently detected compounds during 1992 were pesticide metabolites that currently do not have drinking water standards determined. The frequent presence of pesticide metabolites for this study documents the importance of obtaining information on these compounds to understand the fate and transport of pesticides in the hydrologic system. It appears that the 56 parent compounds analyzed follow similar pathways through the hydrologic system as atrazine. When atrazine was detected by routine or sensitive analytical methods, there was an increased likelihood of detecting additional parent compounds. As expected, the frequency of pesticide detection was highly dependent on the analytical reporting limit. The number of atrazine detections more than doubled as the reporting limit decreased from 0.10 to 0.01 μg/L. The 1992 data provided no indication that the frequency of pesticide detection would level off as improved analytical methods provide concentrations below 0.003 μg/L. A relation was determined between groundwater age and the frequency of pesticide detection. 30 refs., 4 figs., 3 tabs.

  10. Analytical method for the identification and assay of 12 phthalates in cosmetic products: application of the ISO 12787 international standard "Cosmetics-Analytical methods-Validation criteria for analytical results using chromatographic techniques".

    PubMed

    Gimeno, Pascal; Maggio, Annie-Françoise; Bousquet, Claudine; Quoirez, Audrey; Civade, Corinne; Bonnet, Pierre-Antoine

    2012-08-31

    Esters of phthalic acid, more commonly named phthalates, may be present in cosmetic products as ingredients or contaminants. Their presence as contaminants can be due to the manufacturing process, to raw materials used or to the migration of phthalates from packaging when plastic (polyvinyl chloride--PVC) is used. Eight phthalates (DBP, DEHP, BBP, DMEP, DnPP, DiPP, DPP, and DiBP), classified H360 or H361, are forbidden in cosmetics according to the European regulation on cosmetics 1223/2009. A GC/MS method was developed for the assay of 12 phthalates in cosmetics, including the 8 regulated phthalates. Analyses are carried out on a GC/MS system with electron impact ionization mode (EI). The separation of phthalates is obtained on a cross-linked 5%-phenyl/95%-dimethylpolysiloxane capillary column 30 m × 0.25 mm (i.d.) × 0.25 μm film thickness using a temperature gradient. Phthalate quantification is performed by external calibration using an internal standard. Validation elements obtained on standard solutions highlight a satisfactory system conformity (resolution>1.5), a common quantification limit at 0.25 ng injected, an acceptable linearity between 0.5 μg mL⁻¹ and 5.0 μg mL⁻¹ as well as a precision and an accuracy in agreement with in-house specifications. Cosmetic samples ready for analytical injection are analyzed after a dilution in ethanol whereas more complex cosmetic matrices, like milks and creams, are assayed after a liquid/liquid extraction using tert-butyl methyl ether (TBME). Depending on the type of cosmetics analyzed, the common limits of quantification for the 12 phthalates were set at 0.5 or 2.5 μg g⁻¹. All samples were assayed using the analytical approach described in the ISO 12787 international standard "Cosmetics-Analytical methods-Validation criteria for analytical results using chromatographic techniques". This analytical protocol is particularly adapted when it is not possible to make reconstituted sample matrices.

  11. Fast multiscale Gaussian beam methods for wave equations in bounded convex domains

    SciTech Connect

    Bao, Gang; Lai, Jun; Qian, Jianliang

    2014-03-15

    Motivated by fast multiscale Gaussian wavepacket transforms and multiscale Gaussian beam methods which were originally designed for pure initial-value problems of wave equations, we develop fast multiscale Gaussian beam methods for initial boundary value problems of wave equations in bounded convex domains in the high frequency regime. To compute the wave propagation in bounded convex domains, we have to take into account reflecting multiscale Gaussian beams, which are accomplished by enforcing reflecting boundary conditions during beam propagation and carrying out suitable reflecting beam summation. To propagate multiscale beams efficiently, we prove that the ratio of the squared magnitude of beam amplitude and the beam width is roughly conserved, and accordingly we propose an effective indicator to identify significant beams. We also prove that the resulting multiscale Gaussian beam methods converge asymptotically. Numerical examples demonstrate the accuracy and efficiency of the method.

  12. Fast calculation of spherical computer generated hologram using spherical wave spectrum method.

    PubMed

    Jackin, Boaz Jessie; Yatagai, Toyohiko

    2013-01-14

    A fast calculation method for the computer generation of spherical holograms is proposed. This method is based on wave propagation defined in the spectral domain and in spherical coordinates. The spherical wave spectrum and transfer function were derived from boundary value solutions to the scalar wave equation. The result is a spectral propagation formula analogous to the angular spectrum formula in Cartesian coordinates. A numerical method to evaluate the derived formula is suggested, which uses only O(N(log N)²) operations for calculations on N sampling points. Simulation results are presented to verify the correctness of the proposed method. A spherical hologram for a spherical object was generated and reconstructed successfully using the proposed method.

  13. Control rod reactivity measurement by rod-drop method at a fast critical assembly

    SciTech Connect

    Song, L.; Yin, Y.; Lian, X.; Zheng, C.

    2012-07-01

    Rod-drop experiments were carried out to estimate the reactivity of the control rod of a fast critical assembly operated by CAEP. Two power monitor systems were used to obtain the power level, and the integration method was used to process the data. Three experiments were performed. The reactivity values obtained from the two power monitor systems were consistent and showed a reasonable range of reactivity compared to results from the positive period method. (authors)

  14. Phonon dispersion on Ag (100) surface: A modified analytic embedded atom method study

    NASA Astrophysics Data System (ADS)

    Xiao-Jun, Zhang; Chang-Le, Chen

    2016-01-01

    Within the harmonic approximation, the analytic expression of the dynamical matrix is derived based on the modified analytic embedded atom method (MAEAM) and the dynamics theory of the surface lattice. The surface phonon dispersions along the three major symmetry directions Γ̄X̄, Γ̄M̄ and X̄M̄ are calculated for the clean Ag (100) surface by using our derived formulas. We then discuss the polarization and localization of surface modes at the points X̄ and M̄ by plotting the squared polarization vectors as a function of the layer index. The phonon frequencies of the surface modes calculated by MAEAM are compared with the available experimental and other theoretical data. It is found that the present results are generally in agreement with the referenced experimental or theoretical results, with a maximum deviation of 10.4%. The agreement shows that the modified analytic embedded atom method is a reasonable many-body potential model for quickly describing the surface lattice vibration, and it lays a significant foundation for studying the surface lattice vibration in other metals. Project supported by the National Natural Science Foundation of China (Grant Nos. 61471301 and 61078057), the Scientific Research Program Funded by Shaanxi Provincial Education Department, China (Grant No. 14JK1301), and the Specialized Research Fund for the Doctoral Program of Higher Education, China (Grant No. 20126102110045).

  15. An intercomparison study of analytical methods used for quantification of levoglucosan in ambient aerosol filter samples

    NASA Astrophysics Data System (ADS)

    Yttri, K. E.; Schnelle-Kreis, J.; Maenhaut, W.; Abbaszade, G.; Alves, C.; Bjerke, A.; Bonnier, N.; Bossi, R.; Claeys, M.; Dye, C.; Evtyugina, M.; García-Gacio, D.; Hillamo, R.; Hoffer, A.; Hyder, M.; Iinuma, Y.; Jaffrezo, J.-L.; Kasper-Giebl, A.; Kiss, G.; López-Mahia, P. L.; Pio, C.; Piot, C.; Ramirez-Santa-Cruz, C.; Sciare, J.; Teinilä, K.; Vermeylen, R.; Vicente, A.; Zimmermann, R.

    2015-01-01

    The monosaccharide anhydrides (MAs) levoglucosan, galactosan and mannosan are products of incomplete combustion and pyrolysis of cellulose and hemicelluloses, and are found to be major constituents of biomass burning (BB) aerosol particles. Hence, ambient aerosol particle concentrations of levoglucosan are commonly used to study the influence of residential wood burning, agricultural waste burning and wildfire emissions on ambient air quality. A European-wide intercomparison on the analysis of the three monosaccharide anhydrides was conducted based on ambient aerosol quartz fiber filter samples collected at a Norwegian urban background site during winter. Thus, the samples' content of MAs is representative for BB particles originating from residential wood burning. The purpose of the intercomparison was to examine the comparability of the great diversity of analytical methods used for analysis of levoglucosan, mannosan and galactosan in ambient aerosol filter samples. Thirteen laboratories participated, of which three applied high-performance anion-exchange chromatography (HPAEC), four used high-performance liquid chromatography (HPLC) or ultra-performance liquid chromatography (UPLC) and six resorted to gas chromatography (GC). The analytical methods used were of such diversity that they should be considered as thirteen different analytical methods. All of the thirteen laboratories reported levels of levoglucosan, whereas nine reported data for mannosan and/or galactosan. Eight of the thirteen laboratories reported levels for all three isomers. The accuracy for levoglucosan, presented as the mean percentage error (PE) for each participating laboratory, varied from -63 to 20%; however, for 62% of the laboratories the mean PE was within ±10%, and for 85% the mean PE was within ±20%. For mannosan, the corresponding range was -60 to 69%, but as for levoglucosan, the range was substantially smaller for a subselection of the laboratories; i.e. for 33% of the

  16. An intercomparison study of analytical methods used for quantification of levoglucosan in ambient aerosol filter samples

    NASA Astrophysics Data System (ADS)

    Yttri, K. E.; Schnelle-Kreiss, J.; Maenhaut, W.; Alves, C.; Bossi, R.; Bjerke, A.; Claeys, M.; Dye, C.; Evtyugina, M.; García-Gacio, D.; Gülcin, A.; Hillamo, R.; Hoffer, A.; Hyder, M.; Iinuma, Y.; Jaffrezo, J.-L.; Kasper-Giebl, A.; Kiss, G.; López-Mahia, P. L.; Pio, C.; Piot, C.; Ramirez-Santa-Cruz, C.; Sciare, J.; Teinilä, K.; Vermeylen, R.; Vicente, A.; Zimmermann, R.

    2014-07-01

    The monosaccharide anhydrides (MAs) levoglucosan, galactosan and mannosan are products of incomplete combustion and pyrolysis of cellulose and hemicelluloses, and are found to be major constituents of biomass burning aerosol particles. Hence, ambient aerosol particle concentrations of levoglucosan are commonly used to study the influence of residential wood burning, agricultural waste burning and wild fire emissions on ambient air quality. A European-wide intercomparison on the analysis of the three monosaccharide anhydrides was conducted based on ambient aerosol quartz fiber filter samples collected at a Norwegian urban background site during winter. Thus, the samples' content of MAs is representative for biomass burning particles originating from residential wood burning. The purpose of the intercomparison was to examine the comparability of the great diversity of analytical methods used for analysis of levoglucosan, mannosan and galactosan in ambient aerosol filter samples. Thirteen laboratories participated, of which three applied High-Performance Anion-Exchange Chromatography (HPAEC), four used High-Performance Liquid Chromatography (HPLC) or Ultra-Performance Liquid Chromatography (UPLC), and six resorted to Gas Chromatography (GC). The analytical methods used were of such diversity that they should be considered as thirteen different analytical methods. All of the thirteen laboratories reported levels of levoglucosan, whereas nine reported data for mannosan and/or galactosan. Eight of the thirteen laboratories reported levels for all three isomers. The accuracy for levoglucosan, presented as the mean percentage error (PE) for each participating laboratory, varied from -63 to 23%; however, for 62% of the laboratories the mean PE was within ±10%, and for 85% the mean PE was within ±20%. For mannosan, the corresponding range was -60 to 69%, but as for levoglucosan, the range was substantially smaller for a subselection of the laboratories; i.e., for 33% of

  17. Lead-210 in animal and human bone: A new analytical method

    SciTech Connect

    Fisenne, I. M.

    1994-01-01

    Lead-210 delivers the highest radiation dose to the skeleton of any naturally occurring radionuclide. A robust analytical method for the accurate determination of its concentration in bone was developed which minimizes the use of hazardous chemicals. Dry-ashing experiments showed that no substantial loss of ²¹⁰Pb occurred at ≤700 °C. Additional experiments showed that no loss of ²²²Rn occurred from dry-ashed bone. Ashed human-bone samples from three US regional areas were analyzed for ²¹⁰Pb and ²²⁶Ra using the new method. 9 refs., 3 figs., 1 tab.

  18. Interpolation method for accurate affinity ranking of arrayed ligand-analyte interactions.

    PubMed

    Schasfoort, Richard B M; Andree, Kiki C; van der Velde, Niels; van der Kooi, Alex; Stojanović, Ivan; Terstappen, Leon W M M

    2016-05-01

    The values of the affinity constants (kd, ka, and KD) that are determined by label-free interaction analysis methods are affected by the ligand density. This article outlines a surface plasmon resonance (SPR) imaging method that yields high-throughput, globally fitted affinity ranking values using a 96-plex array. A kinetic titration experiment without a regeneration step was applied to various coupled antibodies binding to a single antigen. Globally fitted rate (kd and ka) and dissociation equilibrium (KD) constants for various ligand densities and analyte concentrations are exponentially interpolated to the KD at the Rmax = 100 RU response level (KD(R100)).
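
    The abstract states that globally fitted KD values obtained at different ligand densities are exponentially interpolated to the response level Rmax = 100 RU. A minimal sketch of one plausible reading of that step is given below, assuming a single-exponential model KD(Rmax) = a·exp(b·Rmax) fitted log-linearly; the model form, function names and numbers are assumptions for illustration, not the authors' implementation.

      # Minimal sketch (assumed model): interpolate KD values measured at several
      # ligand densities (Rmax levels) to Rmax = 100 RU, i.e. KD(R100).
      # Assumption: KD(Rmax) ~ a * exp(b * Rmax), fitted by least squares on ln(KD).
      import math

      def kd_at_r100(rmax_values, kd_values, target_rmax=100.0):
          """Log-linear least-squares fit of ln(KD) vs. Rmax, evaluated at target_rmax."""
          n = len(rmax_values)
          x, y = rmax_values, [math.log(kd) for kd in kd_values]
          x_mean, y_mean = sum(x) / n, sum(y) / n
          slope = (sum((xi - x_mean) * (yi - y_mean) for xi, yi in zip(x, y))
                   / sum((xi - x_mean) ** 2 for xi in x))
          intercept = y_mean - slope * x_mean
          return math.exp(intercept + slope * target_rmax)

      # Hypothetical data: KD (in M) determined at three ligand densities (Rmax in RU).
      rmax = [60.0, 150.0, 320.0]
      kd = [2.1e-9, 3.4e-9, 7.8e-9]
      print(f"KD(R100) ~ {kd_at_r100(rmax, kd):.2e} M")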

  19. Field sampling and selecting on-site analytical methods for explosives in soil

    SciTech Connect

    Crockett, A.B.; Craig, H.D.; Jenkins, T.F.; Sisk, W.E.

    1996-12-01

    A large number of defense-related sites are contaminated with elevated levels of secondary explosives. Contamination ranges from barely detectable levels to concentrations above 10% that require special handling because of their detonation potential. Characterization of explosives-contaminated sites is particularly difficult because of the very heterogeneous distribution of contamination in the environment and within samples. Several options exist to improve site characterization, including collecting more samples, providing on-site analytical data to help direct the investigation, compositing samples, improving homogenization of the samples, and extracting larger samples. This publication is intended to provide Remedial Project Managers with guidance on field sampling and on-site analytical methods for detecting and quantifying secondary explosive compounds in soils; it does not discuss the safety issues associated with sites contaminated with explosive residues.
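
    The option of compositing samples mentioned above addresses the heterogeneity problem statistically: averaging several increments into one composite shrinks the spread of the resulting concentration estimates. The toy simulation below only illustrates that general effect and is not taken from the cited guidance; the distribution, seed and sample counts are arbitrary assumptions.

      # Illustrative sketch (assumptions throughout): effect of compositing on the
      # spread of concentration estimates from a highly heterogeneous (skewed) site.
      # Averaging n independent increments reduces the estimate's variance roughly as 1/n.
      import random
      import statistics

      random.seed(0)

      def composite_concentration(n_increments):
          """Mean concentration of a composite built from n increments drawn from a
          skewed lognormal 'site' distribution (arbitrary units)."""
          return statistics.mean(random.lognormvariate(0.0, 1.5) for _ in range(n_increments))

      for n in (1, 5, 20):
          estimates = [composite_concentration(n) for _ in range(2000)]
          print(f"{n:>2} increments per composite: "
                f"mean = {statistics.mean(estimates):.2f}, sd = {statistics.stdev(estimates):.2f}")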

  20. Analytic second derivatives of the energy in the fragment molecular orbital method.

    PubMed

    Nakata, Hiroya; Nagata, Takeshi; Fedorov, Dmitri G; Yokojima, Satoshi; Kitaura, Kazuo; Nakamura, Shinichiro

    2013-04-28

    We developed the analytic second derivatives of the energy for the fragment molecular orbital (FMO) method. First we derived the analytic expressions and then introduced some approximations related to the first- and second-order coupled perturbed Hartree-Fock equations. We developed a parallel program for the FMO Hessian with approximations in GAMESS and used it to calculate infrared (IR) spectra and Gibbs free energies and to locate the transition states in SN2 reactions. The accuracy of the Hessian is demonstrated in comparison to ab initio results for polypeptides and a water cluster. Using the division of two residues per fragment, we achieved an accuracy of 3 cm⁻¹ in the reduced mean square deviation of vibrational frequencies from ab initio values for all three polyalanine isomers, while the error in the zero-point energy did not exceed 0.3 kcal/mol. The role of the secondary structure in IR spectra, zero-point energies, and Gibbs free energies is discussed.
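
    For context on how a Hessian such as the one computed here translates into the IR quantities mentioned above, the sketch below shows the standard mass-weighting and diagonalization step that converts a Cartesian Hessian into harmonic vibrational wavenumbers. This is the generic textbook procedure written in SI units, not the FMO/GAMESS implementation; all names and the toy numbers are assumptions.

      # Generic sketch: harmonic vibrational wavenumbers (cm^-1) from a Cartesian
      # Hessian -- the quantity the FMO analytic second derivatives provide.
      # Not the FMO/GAMESS code; plain mass-weighting + diagonalization in SI units.
      import numpy as np

      C_LIGHT_CM_S = 2.99792458e10  # speed of light in cm/s

      def harmonic_wavenumbers(hessian, coord_masses):
          """hessian: (M, M) Cartesian Hessian in J/m^2 (N/m); coord_masses: length-M
          masses in kg, one entry per Cartesian coordinate (atom mass repeated 3x).
          Returns wavenumbers in cm^-1; imaginary modes come out negative.  For a real
          molecule, the near-zero translation/rotation modes remain in the output."""
          mw = hessian / np.sqrt(np.outer(coord_masses, coord_masses))  # units: s^-2
          eigvals = np.linalg.eigvalsh(mw)
          omega = np.sign(eigvals) * np.sqrt(np.abs(eigvals))           # rad/s
          return omega / (2.0 * np.pi * C_LIGHT_CM_S)

      # Toy check: one harmonic degree of freedom, k = 500 N/m, m = 1 amu,
      # which should give roughly 2900 cm^-1 (a typical X-H stretch).
      k, m = 500.0, 1.66053906660e-27
      print(harmonic_wavenumbers(np.array([[k]]), np.array([m])))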