Fast Analytical Methods for Macroscopic Electrostatic Models in Biomolecular Simulations*
Xu, Zhenli; Cai, Wei
2013-01-01
We review recent developments of fast analytical methods for macroscopic electrostatic calculations in biological applications, including the Poisson–Boltzmann (PB) and the generalized Born models for electrostatic solvation energy. The focus is on analytical approaches for hybrid solvation models, especially the image charge method for a spherical cavity, and also the generalized Born theory as an approximation to the PB model. This review places much emphasis on the mathematical details behind these methods. PMID:23745011
A semi-analytical numerical method for fast metamaterial absorber design
NASA Astrophysics Data System (ADS)
Song, Y. C.; Ding, J.; Guo, C. J.
2015-09-01
In this paper, a semi-analytical numerical approach utilizing a novel non-grounded model and an interpolation technique is introduced to design frequency selective surface (FSS) based metamaterial absorbers (MAs) with dramatically reduced time consumption. Unlike the commonly used trial-and-error approach, our method mainly utilizes the numerically computed FSS layer impedance, which varies slowly in the vicinity of the operating frequency. The introduced non-grounded model establishes a quantitative relationship, with reasonable accuracy, between geometry parameters and the equivalent lumped circuit components of the conventional transmission line (TL) model. The interpolation technique, on the other hand, permits a relatively sparse parameter sweep. The detailed design flow, as well as an analytical explanation with carefully derived expressions, is presented. To validate the proposed method and analytical models, an MA with slotted patches is designed through both the semi-analytical numerical approach and the trial-and-error method, and a more than 2300-fold acceleration is observed. Additionally, results from the analytical computation and full-wave simulation agree well with each other.
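The interpolation idea can be illustrated with a toy sketch (the `expensive_impedance` function below is a hypothetical stand-in for a full-wave solver, not the paper's model): because the impedance varies slowly near the operating frequency, a sparse sweep plus interpolation can replace a dense simulation.

```python
import numpy as np

def expensive_impedance(freq_ghz):
    """Stand-in for a full-wave solver: a smooth, slowly varying reactance."""
    return 50.0 + 4.0 * np.sin(0.3 * freq_ghz)

# Sparse sweep: only a handful of "simulated" points...
coarse_f = np.linspace(8.0, 12.0, 6)
coarse_z = expensive_impedance(coarse_f)

# ...then interpolate onto the dense grid a designer actually needs.
dense_f = np.linspace(8.0, 12.0, 201)
dense_z = np.interp(dense_f, coarse_f, coarse_z)

# For a slowly varying curve the interpolation error stays small.
max_err = np.max(np.abs(dense_z - expensive_impedance(dense_f)))
```

Six "solver calls" stand in for 201, which is the source of the large speed-up the abstract reports for the real design flow.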
Fast and "green" method for the analytical monitoring of haloketones in treated water.
Serrano, María; Silva, Manuel; Gallego, Mercedes
2014-09-01
Several groups of organic compounds have emerged as particularly relevant environmental pollutants, including disinfection by-products (DBPs). Haloketones (HKs), which belong to the unregulated volatile fraction of DBPs, have become a priority because of their occurrence in drinking water at concentrations below 1 μg/L. The absence of a comprehensive method for HKs has led to the development of the first method for determining fourteen of these species. In an effort to miniaturise, this study develops a micro liquid-liquid extraction (MLLE) method adapted from EPA Method 551.1. In this method, practically the whole extract (50 μL) was injected into a programmed temperature vaporiser-gas chromatograph-mass spectrometer in order to improve sensitivity. The method was validated by comparison with EPA Method 551.1 and showed relevant advantages, such as a lower sample pH (1.5), a higher aqueous/organic volume ratio (60), lower solvent consumption (200 μL), and fast, cost-saving operation. The MLLE method achieved detection limits ranging from 6 to 60 ng/L (except for 1,1,3-tribromo-3-chloroacetone, 120 ng/L) with satisfactory precision (RSD, ∼6%) and high recoveries (95-99%). The influence of various dechlorinating agents, as well as of the sample pH, on the stability of the fourteen HKs in treated water was evaluated. To ensure the integrity of the HKs for at least 1 week during storage at 4 °C, the samples were acidified to pH ∼1.5, which coincides with the sample pH required for MLLE. The green method was applied to the speciation of fourteen HKs in tap and swimming pool waters, where one and seven chlorinated species, respectively, were found. The concentration of 1,1-dichloroacetone in swimming pool water was ∼25 times higher than in tap water. PMID:25042440
Enantioselective Liquid-Solid Extraction (ELSE)--An Unexplored, Fast, and Precise Analytical Method.
Ulatowski, Filip; Hamankiewicz, Paulina; Jurczak, Janusz
2015-09-14
A novel method of evaluating the enantioselectivity of chiral receptors is investigated. It involves extraction of an ionic guest in racemic form from an ion-exchange resin into an organic solvent, where it is bound by a chiral receptor. The enantioselectivity of the examined receptor is determined simply by measuring the enantiomeric excess of the extracted guest. We show that the concept is viable for neutral receptors binding chiral organic anions extracted into acetonitrile. This method was determined to be more accurate and far less time-consuming than classical titrations. Multiple racemic guests can be applied to a resin in a single experiment, giving the method very high throughput. PMID:26263300
ERIC Educational Resources Information Center
Ember, Lois R.
1977-01-01
The procedures utilized by the Association of Official Analytical Chemists (AOAC) to develop, evaluate, and validate analytical methods for the analysis of chemical pollutants are detailed. Methods validated by AOAC are used by the EPA and FDA in their enforcement programs and are granted preferential treatment by the courts. (BT)
NASA Astrophysics Data System (ADS)
Samin, Adib; Lahti, Erik; Zhang, Jinsuo
2015-08-01
Cyclic voltammetry is a powerful tool that is used for characterizing electrochemical processes. Models of cyclic voltammetry take into account the mass transport of species and the kinetics at the electrode surface. Analytical solutions of these models are not well-known due to the complexity of the boundary conditions. In this study we present closed form analytical solutions of the planar voltammetry model for two soluble species with fast electron transfer and equal diffusivities using the eigenfunction expansion method. Our solution methodology does not incorporate Laplace transforms and yields good agreement with the numerical solution. This solution method can be extended to cases that are more general and may be useful for benchmarking purposes.
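The eigenfunction-expansion technique the authors use can be illustrated on a simpler problem. The sketch below solves the 1D diffusion equation with Dirichlet boundaries, a generic analogue of the mass-transport part of such models, not the paper's voltammetry solution: the initial profile is expanded in sine eigenfunctions and each mode decays with its own exponential rate.

```python
import numpy as np

def diffusion_series(u0, x, L, D, t, n_modes=50):
    """Eigenfunction-expansion solution of u_t = D u_xx, u(0,t) = u(L,t) = 0."""
    dx = x[1] - x[0]
    u = np.zeros_like(x)
    for n in range(1, n_modes + 1):
        phi = np.sin(n * np.pi * x / L)        # n-th eigenfunction
        b_n = (2.0 / L) * np.sum(u0 * phi) * dx  # Fourier sine coefficient
        u += b_n * np.exp(-D * (n * np.pi / L) ** 2 * t) * phi
    return u

L, D, t = 1.0, 0.1, 0.5
x = np.linspace(0.0, L, 401)
u0 = np.sin(np.pi * x / L)                     # single-mode initial condition
u = diffusion_series(u0, x, L, D, t)

# For a single-mode start, the exact solution is one decaying sine mode.
exact = np.exp(-D * np.pi ** 2 * t) * np.sin(np.pi * x / L)
```

Because the boundary conditions are built into the eigenfunctions, no Laplace transform is needed, which is the same structural advantage the abstract highlights.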
Leśniewska, Barbara; Kisielewska, Katarzyna; Wiater, Józefa; Godlewska-Żyłkiewicz, Beata
2016-01-01
A new fast method for the determination of mobile zinc fractions in soil is proposed in this work. The three-stage modified BCR procedure used for fractionation of zinc in soil was accelerated using ultrasound. The working parameters of an ultrasound probe, the power and the time of sonication, were optimized so that the analyte content in soil extracts obtained by ultrasound-assisted sequential extraction (USE) was consistent with that obtained by the conventional modified Community Bureau of Reference (BCR) procedure. The zinc content in the extracts was determined by flame atomic absorption spectrometry. The developed USE procedure shortened the total extraction time from 48 h to 27 min compared with the conventional modified BCR procedure. The method was fully validated, and the uncertainty budget was evaluated. The trueness and reproducibility of the developed method were confirmed by analysis of the certified reference material of lake sediment BCR-701. The applicability of the procedure for fast, low-cost and reliable determination of mobile zinc fractions in soil, useful for assessing anthropogenic impacts on natural resources and for environmental monitoring, was demonstrated by analysis of different types of soil collected from Podlaskie Province (Poland). PMID:26666658
Kim, Junghyun; Suh, Joon Hyuk; Cho, Hyun-Deok; Kang, Wonjae; Choi, Yong Seok; Han, Sang Beom
2016-01-01
A multi-class, multi-residue analytical method based on LC-MS/MS detection was developed for the screening and confirmation of 28 veterinary drug and metabolite residues in flatfish, shrimp and eel. The chosen veterinary drugs are prohibited or unauthorised compounds in Korea, which were categorised into various chemical classes including nitroimidazoles, benzimidazoles, sulfones, quinolones, macrolides, phenothiazines, pyrethroids and others. To achieve fast and simultaneous extraction of various analytes, a simple and generic liquid extraction procedure using EDTA-ammonium acetate buffer and acetonitrile, without further clean-up steps, was applied to sample preparation. The final extracts were analysed by ultra-high-performance liquid chromatography coupled with tandem mass spectrometry (UHPLC-MS/MS). The method was validated for each compound in each matrix at three different concentrations (5, 10 and 20 ng g(-1)) in accordance with Codex guidelines (CAC/GL 71-2009). For most compounds, the recoveries were in the range of 60-110%, and precision, expressed as the relative standard deviation (RSD), was in the range of 5-15%. The detection capabilities (CCβs) were below or equal to 5 ng g(-1), which indicates that the developed method is sufficient to detect illegal fishery products containing the target compounds above the residue limit (10 ng g(-1)) of the new regulatory system (Positive List System - PLS). PMID:26751111
Fast micromagnetic simulations using an analytic mathematical model
NASA Astrophysics Data System (ADS)
Tsiantos, Vassilios; Miles, Jim
2006-02-01
In this paper an analytic mathematical model is presented for fast micromagnetic simulations. In dynamic micromagnetic simulations the Landau-Lifshitz-Gilbert (LLG) equation is solved to observe the magnetisation reversal mechanisms. In stiff micromagnetic simulations the large system of ordinary differential equations has to be solved with an appropriate method, such as the Backward Differentiation Formulas (BDF) method, which leads to the solution of a large linear system. The latter is solved efficiently by employing matrix-free techniques, such as preconditioned Krylov methods. The Krylov framework involves a matrix-vector product, which is usually approximated by directional differences. This paper provides an analytic mathematical model to compute this product efficiently, leading to more accurate calculations and consequently faster micromagnetic simulations due to better convergence properties.
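The directional-difference approximation mentioned above is simple to state: in a matrix-free Krylov solver, the Jacobian-vector product J(x)v is approximated as (F(x + εv) − F(x))/ε without ever assembling J. The sketch below uses a toy nonlinear F (not the LLG system) to compare the approximation against the analytic product.

```python
import numpy as np

def F(x):
    """Toy nonlinear residual function standing in for the discretised ODE system."""
    return np.array([x[0] ** 2 + x[1], np.sin(x[0]) + x[1] ** 3])

def jac_vec_fd(F, x, v, eps=1e-7):
    """Directional-difference approximation to J(x) @ v (matrix-free)."""
    return (F(x + eps * v) - F(x)) / eps

def jac_vec_exact(x, v):
    """Analytic Jacobian-vector product for the toy F, for comparison."""
    J = np.array([[2.0 * x[0], 1.0],
                  [np.cos(x[0]), 3.0 * x[1] ** 2]])
    return J @ v

x = np.array([0.7, -0.4])
v = np.array([1.0, 2.0])
approx = jac_vec_fd(F, x, v)
exact = jac_vec_exact(x, v)
```

Replacing the difference quotient with an exact analytic product, as the paper proposes for its model, removes the truncation and round-off error inherent in the finite-difference version.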
Analytical model for fast-shock ignition
Ghasemi, S. A.; Farahbod, A. H.; Sobhanian, S.
2014-07-15
A model and its improvements are introduced for a recently proposed approach to inertial confinement fusion, called fast-shock ignition (FSI). The analysis is based upon the gain models of fast ignition and shock ignition, together with considerations of fast-electron penetration into the pre-compressed fuel, to examine the formation of an effective central hot spot. Calculations of fast-electron penetration into the dense fuel show that if the initial electron kinetic energy is of the order of ∼4.5 MeV, the electrons effectively reach the central part of the fuel. To evaluate the performance of the FSI approach more realistically, we have used the quasi-two-temperature electron energy distribution function of Strozzi (2012) and the fast ignitor energy formula of Bellei (2013), which are consistent with 3D PIC simulations, for different values of the fast ignitor laser wavelength and coupling efficiency. The overall advantage of fast-shock ignition over shock ignition is estimated to exceed a factor of 1.3, and the best results are obtained for a fuel mass of around 1.5 mg, a fast ignitor laser wavelength of ∼0.3 micron and a shock ignitor energy weight factor of about 0.25.
Analytic Methods in Investigative Geometry.
ERIC Educational Resources Information Center
Dobbs, David E.
2001-01-01
Suggests an alternative proof by analytic methods, which is more accessible than rigorous proof based on Euclid's Elements, in which students need only apply standard methods of trigonometry to the data without introducing new points or lines. (KHR)
Fast profiling of food by analytical pyrolysis.
Halket, J M; Schulten, H R
1988-03-01
The analytical application of direct pyrolysis-field ionization mass spectrometry (Py-FIMS) and Curie-point pyrolysis gas chromatography-mass spectrometry (Py-GC/MS) to various whole foodstuffs is described for the first time. The former technique yields highly differentiated information from the sample in typically 15 min, namely the molecular weight distribution of released volatiles and pyrolysis products in a single spectrum which, owing to the good reproducibility and high significance of the resulting data, has previously been shown to be suitable for the application of chemometric methods. The mass spectral peaks are further characterized and assigned by high-resolution mass measurement and/or by electron ionization after Curie-point pyrolysis and gas chromatographic separation of the components. In this first report, typical results are presented for ground roasted coffee, rosehip tea, wheatmeal biscuit, chocolate drink powder and milk chocolate. The FI mass spectrum obtained from the latter sample is compared with those obtained using the complementary soft ionization techniques of chemical ionization (CI) and direct chemical ionization (DCI). PMID:3369241
Boisson, F; Bekaert, V; Reilhac, A; Wurtz, J; Brasse, D
2015-03-21
In SPECT imaging, improvement or deterioration of performance is mostly due to collimator design. Classical SPECT systems mainly use parallel-hole or pinhole collimators. Rotating slat collimators (RSC) can be an interesting alternative for optimizing the tradeoff between detection efficiency and spatial resolution. The present study was conducted using an RSC system for small-animal imaging called CLiR, used in planar mode only. In a previous study, planar 2D projections were reconstructed using the well-known filtered backprojection algorithm (FBP). In this paper, we investigated the use of the statistical reconstruction algorithm maximum likelihood expectation maximization (MLEM) to reconstruct 2D images with the CLiR system, using a probability matrix calculated with an analytic approach. The primary objective was to propose a method to quickly generate a light system matrix, which facilitates its handling and storage while providing accurate and reliable performance. Two other matrices were calculated using GATE Monte Carlo simulations to assess the performance obtained with the analytically calculated matrix. The first GATE matrix took all physics processes into account, while the second did not account for scattering, since the analytical matrix does not include this physics process either. 2D images were reconstructed using FBP and MLEM with the three different probability matrices, using both simulated and experimental data. A comparative study of these images was conducted using different metrics: the modulation transfer function, the signal-to-noise ratio and quantification measurements. All the results demonstrated the suitability of using an analytically calculated probability matrix. It provided similar results in terms of spatial resolution (about 0.6 mm, with differences <5%), signal-to-noise ratio (differences <10%) and image quality. PMID:25716556
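The MLEM iteration itself is compact. The sketch below applies the standard multiplicative update, x ← x / (Aᵀ1) · Aᵀ(y / Ax), to a toy random system matrix (not the CLiR probability matrix) with noise-free projections.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((40, 10))            # toy system (probability) matrix
x_true = rng.random(10) + 0.5       # ground-truth "image"
y = A @ x_true                      # noise-free projection data

x = np.ones(10)                     # uniform initial estimate
sens = A.T @ np.ones(A.shape[0])    # sensitivity image, A^T 1
for _ in range(500):
    ratio = y / (A @ x)             # measured / estimated projections
    x = x / sens * (A.T @ ratio)    # multiplicative MLEM update

residual = np.linalg.norm(A @ x - y) / np.linalg.norm(y)
```

The update preserves positivity by construction, which is one reason MLEM is preferred over FBP for low-count data; the quality of the reconstruction then hinges on how accurately A models the system, which is exactly what the analytic-versus-Monte-Carlo comparison above probes.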
Peruga, Aranzazu; Hidalgo, Carmen; Sancho, Juan V; Hernández, Félix
2013-09-13
Pyrethrins are natural insecticides derived from chrysanthemum flowers containing a mixture of six components: pyrethrin I, cinerin I, jasmolin I, pyrethrin II, cinerin II, and jasmolin II. In this work, a rapid and sensitive LC-(ESI)-MS/MS method has been developed for the individual quantification and confirmation of pyrethrin residues in fruit and vegetable samples by monitoring two specific transitions for each pyrethrin component under Selected Reaction Monitoring (SRM) mode. Samples were extracted with acetone/water or acetone, depending on the sample type, and raw extracts were directly injected in the LC-MS/MS system. Method validation was carried out evaluating linearity, accuracy, precision, specificity, limit of quantification (LOQ) and limit of detection (LOD) in eight types of fruit and vegetable samples at 0.05mg/kg and 0.5mg/kg (referred to the sum of all pyrethrins). The method based on acetone/water (70:30) extraction led to satisfactory recoveries (70-110%) and good precision (below 14%) for all pyrethrin components in lettuce, pepper, strawberry and potato. The method based on acetone extraction allowed satisfactory recoveries for lettuce, cucumber, tomato and rice samples with recoveries between 71 and 107% and RSDs below 15%. For pistachio samples, satisfactory results were obtained only for some analytes and extracts were also injected using APCI interface, but the lower sensitivity achieved allowed only the validation at 0.5mg/kg. The analytical methodology developed was applied to the analysis of fruit and vegetable samples. PMID:23938081
Detering, Brent A.; Donaldson, Alan D.; Fincke, James R.; Kong, Peter C.; Berry, Ray A.
1999-01-01
A fast quench reactor includes a reactor chamber having a high-temperature heating means, such as a plasma torch, at its inlet and a means of rapidly expanding a reactant stream, such as a restrictive convergent-divergent nozzle, at its outlet end. Metal halide reactants are injected into the reactor chamber. Reducing gas is added at different stages in the process to form a desired end product and prevent back reactions. The resulting heated gaseous stream is then rapidly cooled by expansion.
ANALYTICAL METHOD DEVELOPMENT FOR PHENOLS
This project focused on the development of an analytical method for the analysis of phenols in drinking water. The need for this project is associated with the recently published Contaminant Candidate List (CCL). The following phenolic compounds are listed on the current CCL, a...
Analytical methods under emergency conditions
Sedlet, J.
1983-01-01
This lecture discusses methods for the radiochemical determination of internal contamination of the body under emergency conditions, here defined as a situation in which results on internal radioactive contamination are needed quickly. The purpose of speed is to determine the necessity for medical treatment to increase the natural elimination rate. Analytical methods discussed include whole-body counting, organ counting, wound monitoring, and excreta analysis. 12 references. (ACR)
Waste minimization in analytical methods
Green, D.W.; Smith, L.L.; Crain, J.S.; Boparai, A.S.; Kiely, J.T.; Yaeger, J.S.; Schilling, J.B.
1995-05-01
The US Department of Energy (DOE) will require a large number of waste characterizations over a multi-year period to accomplish the Department's goals in environmental restoration and waste management. Estimates vary, but two million analyses annually are expected. The waste generated by the analytical procedures used for characterizations is a significant source of new DOE waste. Success in reducing the volume of secondary waste and the costs of handling this waste would significantly decrease the overall cost of this DOE program. Selection of appropriate analytical methods depends on the intended use of the resultant data. It is not always necessary to use a high-powered analytical method, typically at higher cost, to obtain data needed to make decisions about waste management. Indeed, for samples taken from some heterogeneous systems, the meaning of high accuracy becomes clouded if the data generated are intended to measure a property of this system. Among the factors to be considered in selecting the analytical method are the lower limit of detection, accuracy, turnaround time, cost, reproducibility (precision), interferences, and simplicity. Occasionally, there must be tradeoffs among these factors to achieve the multiple goals of a characterization program. The purpose of the work described here is to add waste minimization to the list of characteristics to be considered. In this paper the authors present results of modifying analytical methods for waste characterization to reduce both the cost of analysis and volume of secondary wastes. Although tradeoffs may be required to minimize waste while still generating data of acceptable quality for the decision-making process, they have data demonstrating that wastes can be reduced in some cases without sacrificing accuracy or precision.
NASA Astrophysics Data System (ADS)
Shannon, Andrew; Mustill, Alexander J.; Wyatt, Mark
2015-03-01
Dust grains migrating under Poynting-Robertson drag may be trapped in mean-motion resonances with planets. Such resonantly trapped grains are observed in the Solar system. In extrasolar systems, the exozodiacal light produced by dust grains is expected to be a major obstacle to future missions attempting to directly image terrestrial planets. The patterns made by resonantly trapped dust, however, can be used to infer the presence of planets, and the properties of those planets, if the capture and evolution of the grains can be modelled. This has been done with N-body methods, but such methods are computationally expensive, limiting their usefulness when considering large, slowly evolving grains, and for extrasolar systems with unknown planets and parent bodies, where the possible parameter space for investigation is large. In this work, we present a semi-analytic method for calculating the capture and evolution of dust grains in resonance, which can be orders of magnitude faster than N-body methods. We calibrate the model against N-body simulations, finding excellent agreement for Earth to Neptune mass planets, for a variety of grain sizes, initial eccentricities, and initial semimajor axes. We then apply the model to observations of dust resonantly trapped by the Earth. We find that resonantly trapped, asteroidally produced grains naturally produce the 'trailing blob' structure in the zodiacal cloud, while to match the intensity of the blob, most of the cloud must be composed of cometary grains, which owing to their high eccentricity are not captured, but produce a smooth disc.
Liu, J; Bourland, J
2014-06-01
Purpose: To analytically estimate first-order x-ray scatter for kV cone-beam x-ray imaging with high computational efficiency. Methods: In calculating first-order scatter using the Klein-Nishina formula, we found that by integrating the point-to-point scatter along an interaction line, a “pencil-beam” scatter kernel (BSK) can be approximated by a quartic expression when the imaging field is small. This BSK model for monoenergetic, 100 keV x-rays has been verified on homogeneous cube and cylinder water phantoms by comparison with the exact implementation of the KN formula. For heterogeneous media, the water-equivalent length of a BSK was acquired with an improved Siddon ray-tracing algorithm, which was also used in calculating pre- and post-scattering attenuation. To include the electron binding effect for scattering of low-kV photons, the mean corresponding scattering angle is determined from the effective point of scattered photons of a BSK. The behavior of polyenergetic x-rays was also investigated for 120 kV x-rays incident on a sandwiched infinite heterogeneous slab phantom, with the electron binding effect incorporated. Exact computation and Monte Carlo simulations were performed for comparison, using the EGSnrc code package. Results: By reducing the 3D volumetric target (O(n^3)) to 2D pencil beams (O(n^2)), the computational expense is generally lowered by a factor of n, which our experience verifies. The scatter distribution on a flat detector shows high agreement between the analytic BSK model and exact calculations. The pixel-to-pixel differences are within (-2%, 2%) for the homogeneous cube and cylinder phantoms and within (0, 6%) for the heterogeneous slab phantom. However, the Monte Carlo simulation shows increased deviation of the BSK model toward the detector periphery. Conclusion: The proposed BSK model, accommodating polyenergetic x-rays and the electron binding effect at low kV, shows great potential for efficiently estimating first-order scatter.
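The Klein-Nishina differential cross-section at the heart of such scatter models is a standard closed-form expression (independent of the paper's pencil-beam kernel): dσ/dΩ = (r_e²/2) P² (P + 1/P − sin²θ), with P = 1 / (1 + (E / m_e c²)(1 − cos θ)). A minimal implementation:

```python
import math

R_E = 2.8179403262e-15        # classical electron radius, m
MEC2_KEV = 510.99895          # electron rest energy, keV

def klein_nishina(energy_kev, theta):
    """Klein-Nishina differential cross-section dsigma/dOmega in m^2/sr."""
    p = 1.0 / (1.0 + (energy_kev / MEC2_KEV) * (1.0 - math.cos(theta)))
    return 0.5 * R_E ** 2 * p ** 2 * (p + 1.0 / p - math.sin(theta) ** 2)

# Forward scattering (theta = 0) is energy independent and equals r_e^2;
# backscatter at 100 keV is suppressed relative to forward scatter.
forward = klein_nishina(100.0, 0.0)
back = klein_nishina(100.0, math.pi)
```

Integrating this cross-section point-to-point over the scattering volume is the O(n³) cost the BSK approximation collapses to O(n²).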
A fast neighbor joining method.
Li, J F
2015-01-01
With the rapid development of sequencing technologies, an increasing number of sequences are available for evolutionary tree reconstruction. Although neighbor joining is regarded as the most popular and fastest evolutionary tree reconstruction method [its time complexity is O(n^3), where n is the number of sequences], it is not sufficiently fast to infer evolutionary trees containing more than a few hundred sequences. To increase the speed of neighbor joining, we herein propose FastNJ, a fast implementation of neighbor joining, which was motivated by RNJ and FastJoin, two improved versions of conventional neighbor joining. The main difference between FastNJ and conventional neighbor joining is that, in the former, many pairs of nodes selected by the rule used in RNJ are joined in each iteration. In theory, the time complexity of FastNJ can reach O(n^2) in the best cases. Experimental results show that FastNJ yields a significant increase in speed compared to RNJ and conventional neighbor joining with a minimal loss of accuracy. PMID:26345805
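The pair-selection rule that FastNJ and RNJ build on is the standard neighbor-joining Q-criterion: join the pair (i, j) minimising Q(i, j) = (n − 2) d(i, j) − Σ_k d(i, k) − Σ_k d(j, k). The sketch below performs one selection step on a textbook 5-taxon additive distance matrix (illustrative data, not from the paper).

```python
import numpy as np

def nj_pick_pair(d):
    """Return the pair of leaves neighbor joining would merge first."""
    n = d.shape[0]
    r = d.sum(axis=1)                        # row sums of the distance matrix
    q = (n - 2) * d - r[:, None] - r[None, :]
    np.fill_diagonal(q, np.inf)              # never join a leaf with itself
    i, j = divmod(int(np.argmin(q)), n)      # position of the minimum Q value
    return min(i, j), max(i, j)

# Textbook additive distances for taxa a..e; (a, b) is the first cherry.
d = np.array([[0, 5, 9, 9, 8],
              [5, 0, 10, 10, 9],
              [9, 10, 0, 8, 7],
              [9, 10, 8, 0, 3],
              [8, 9, 7, 3, 0]], dtype=float)
pair = nj_pick_pair(d)
```

Conventional NJ rebuilds Q and scans all pairs after every merge, which is the source of the O(n^3) cost; joining many qualifying pairs per iteration, as FastNJ does, amortises that scan.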
7 CFR 94.303 - Analytical methods.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 3 2011-01-01 2011-01-01 false Analytical methods. 94.303 Section 94.303 Agriculture... POULTRY AND EGG PRODUCTS Processed Poultry Products § 94.303 Analytical methods. The analytical methods... latest edition of the Official Methods of Analysis of AOAC INTERNATIONAL, Suite 500, 481 North...
7 CFR 93.4 - Analytical methods.
Code of Federal Regulations, 2013 CFR
2013-01-01
... PROCESSED FRUITS AND VEGETABLES Citrus Juices and Certain Citrus Products § 93.4 Analytical methods. (a) The majority of analytical methods for citrus products are found in the Official Methods of Analysis of AOAC...-2417. (b) Other analytical methods for citrus products may be used as approved by the AMS...
7 CFR 93.4 - Analytical methods.
Code of Federal Regulations, 2011 CFR
2011-01-01
... PROCESSED FRUITS AND VEGETABLES Citrus Juices and Certain Citrus Products § 93.4 Analytical methods. (a) The majority of analytical methods for citrus products are found in the Official Methods of Analysis of AOAC...-2417. (b) Other analytical methods for citrus products may be used as approved by the AMS...
7 CFR 93.4 - Analytical methods.
Code of Federal Regulations, 2014 CFR
2014-01-01
... PROCESSED FRUITS AND VEGETABLES Citrus Juices and Certain Citrus Products § 93.4 Analytical methods. (a) The majority of analytical methods for citrus products are found in the Official Methods of Analysis of AOAC...-2417. (b) Other analytical methods for citrus products may be used as approved by the AMS...
7 CFR 93.4 - Analytical methods.
Code of Federal Regulations, 2012 CFR
2012-01-01
... PROCESSED FRUITS AND VEGETABLES Citrus Juices and Certain Citrus Products § 93.4 Analytical methods. (a) The majority of analytical methods for citrus products are found in the Official Methods of Analysis of AOAC...-2417. (b) Other analytical methods for citrus products may be used as approved by the AMS...
7 CFR 98.4 - Analytical methods.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 3 2010-01-01 2010-01-01 false Analytical methods. 98.4 Section 98.4 Agriculture....4 Analytical methods. (a) The majority of analytical methods used by the USDA laboratories to perform analyses of meat, meat food products and MRE's are listed as follows: (1) Official Methods...
7 CFR 94.303 - Analytical methods.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 3 2010-01-01 2010-01-01 false Analytical methods. 94.303 Section 94.303 Agriculture... POULTRY AND EGG PRODUCTS Processed Poultry Products § 94.303 Analytical methods. The analytical methods... latest edition of the Official Methods of Analysis of AOAC INTERNATIONAL, Suite 500, 481 North...
7 CFR 93.4 - Analytical methods.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 3 2010-01-01 2010-01-01 false Analytical methods. 93.4 Section 93.4 Agriculture... PROCESSED FRUITS AND VEGETABLES Citrus Juices and Certain Citrus Products § 93.4 Analytical methods. (a) The majority of analytical methods for citrus products are found in the Official Methods of Analysis of...
7 CFR 94.303 - Analytical methods.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 3 2013-01-01 2013-01-01 false Analytical methods. 94.303 Section 94.303 Agriculture... POULTRY AND EGG PRODUCTS Processed Poultry Products § 94.303 Analytical methods. The analytical methods... latest edition of the Official Methods of Analysis of AOAC INTERNATIONAL, Suite 500, 481 North...
7 CFR 94.303 - Analytical methods.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 3 2012-01-01 2012-01-01 false Analytical methods. 94.303 Section 94.303 Agriculture... POULTRY AND EGG PRODUCTS Processed Poultry Products § 94.303 Analytical methods. The analytical methods... latest edition of the Official Methods of Analysis of AOAC INTERNATIONAL, Suite 500, 481 North...
7 CFR 94.103 - Analytical methods.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 3 2011-01-01 2011-01-01 false Analytical methods. 94.103 Section 94.103 Agriculture... POULTRY AND EGG PRODUCTS Voluntary Analyses of Egg Products § 94.103 Analytical methods. The analytical methods used by the Science and Technology Division laboratories to perform voluntary analyses for...
7 CFR 94.103 - Analytical methods.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 3 2010-01-01 2010-01-01 false Analytical methods. 94.103 Section 94.103 Agriculture... POULTRY AND EGG PRODUCTS Voluntary Analyses of Egg Products § 94.103 Analytical methods. The analytical methods used by the Science and Technology Division laboratories to perform voluntary analyses for...
7 CFR 94.103 - Analytical methods.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 3 2013-01-01 2013-01-01 false Analytical methods. 94.103 Section 94.103 Agriculture... POULTRY AND EGG PRODUCTS Voluntary Analyses of Egg Products § 94.103 Analytical methods. The analytical methods used by the Science and Technology Division laboratories to perform voluntary analyses for...
7 CFR 94.103 - Analytical methods.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 3 2012-01-01 2012-01-01 false Analytical methods. 94.103 Section 94.103 Agriculture... POULTRY AND EGG PRODUCTS Voluntary Analyses of Egg Products § 94.103 Analytical methods. The analytical methods used by the Science and Technology Division laboratories to perform voluntary analyses for...
Fast gas chromatography for pesticide residues analysis using analyte protectants.
Kirchner, Michal; Húsková, Renáta; Matisová, Eva; Mocák, Ján
2008-04-01
Fast GC-MS with narrow-bore columns, combined with an effective sample preparation technique (the QuEChERS method), was used to evaluate various calibration approaches in pesticide residue analysis. To compare the performance of analyte protectants (APs) with matrix-matched standards, calibration curves of selected pesticides were examined in terms of linearity of response, repeatability of measurements, and achieved limits of quantification, using the following calibration standards in the concentration range 1-500 ng mL(-1) (equivalent sample concentration 1-500 microg kg(-1)): standards in neat solvent (acetonitrile) with/without addition of APs, and matrix-matched standards with/without addition of APs. For APs, results are in good agreement with matrix-matched standards. To evaluate errors in the determination of concentration, synthetic samples with pesticides at a concentration level of 50 ng mL(-1) (50 microg kg(-1)) were analyzed and quantified using the above standards. For less troublesome pesticides, very good estimation of concentration was obtained using APs, while for more troublesome pesticides such as methidathion, malathion, phosalone and deltamethrin, significant overestimation reaching up to 80% occurred. According to the presented results, APs can be advantageously used for the determination of "easy" pesticides. For "difficult" pesticides, an alternative calibration approach is required for samples potentially violating MRLs. An example of a real sample measurement is shown. The use of internal standards (triphenylphosphate (TPP) and heptachlor (HEPT)) for peak-area normalization is also discussed in terms of repeatability of measurements and the quantitative data obtained. TPP normalization provided slightly better results than the use of absolute peak areas, in contrast to HEPT. PMID:17920613
2013-01-01
Background The aim of this paper was the validation of a new analytical method based on high-resolution continuum source flame atomic absorption spectrometry for the fast-sequential determination of several hazardous/priority hazardous metals (Ag, Cd, Co, Cr, Cu, Ni, Pb and Zn) in soil after microwave-assisted digestion in aqua regia. Determinations were performed on the ContrAA 300 (Analytik Jena) air-acetylene flame spectrometer equipped with a xenon short-arc lamp as a continuum radiation source for all elements, a double monochromator consisting of a prism pre-monochromator and an echelle grating monochromator, and a charge-coupled device as detector. For validation, a method-performance study was conducted involving the establishment of the analytical performance of the new method (limits of detection and quantification, precision and accuracy). Moreover, the Bland and Altman statistical method was used to analyze the agreement between the proposed assay and inductively coupled plasma optical emission spectrometry as the standardized method for multielemental determination in soil. Results The limits of detection in soil samples (3σ criterion) in the high-resolution continuum source flame atomic absorption spectrometry method were (mg/kg): 0.18 (Ag), 0.14 (Cd), 0.36 (Co), 0.25 (Cr), 0.09 (Cu), 1.0 (Ni), 1.4 (Pb) and 0.18 (Zn), close to those in inductively coupled plasma optical emission spectrometry: 0.12 (Ag), 0.05 (Cd), 0.15 (Co), 1.4 (Cr), 0.15 (Cu), 2.5 (Ni), 2.5 (Pb) and 0.04 (Zn). Accuracy was checked by analyzing 4 certified reference materials, and good agreement at the 95% confidence interval was found for both methods, with recoveries in the range of 94–106% in atomic absorption and 97–103% in optical emission. Repeatability found by analyzing real soil samples was in the range 1.6–5.2% in atomic absorption, similar to that of 1.9–6.1% in optical emission spectrometry. The Bland and Altman method showed no statistically significant difference
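The Bland and Altman analysis referenced in the record above can be sketched in a few lines; the paired concentrations below are hypothetical illustrations, not the paper's data.

```python
# Bland-Altman limits-of-agreement sketch: bias is the mean of the paired
# differences, and the 95% limits of agreement are bias +/- 1.96 * SD.
def bland_altman(a, b):
    """Return the bias (mean difference) and 95% limits of agreement."""
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = (sum((d - bias) ** 2 for d in diffs) / (n - 1)) ** 0.5
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical Cu results (mg/kg) for the same soils by the two methods.
faas = [12.1, 25.3, 8.7, 40.2, 18.9]
icp = [12.4, 24.9, 8.9, 39.8, 19.3]
bias, (low, high) = bland_altman(faas, icp)
```

If most paired differences fall within the limits of agreement and the bias is near zero, the two methods may be considered interchangeable, which is the sense in which the paper reports "no statistically significant difference".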
Wilson, Lydia J; Newhauser, Wayne D
2015-01-01
State-of-the-art radiotherapy treatment planning systems provide reliable estimates of the therapeutic radiation but are known to underestimate or neglect the stray radiation exposures. Most commonly, stray radiation exposures are reconstructed using empirical formulas or lookup tables. The purpose of this study was to develop the basic physics of a model capable of calculating the total absorbed dose both inside and outside of the therapeutic radiation beam for external beam photon therapy. The model was developed using measurements of total absorbed dose in a water-box phantom from a 6 MV medical linear accelerator to calculate dose profiles in both the in-plane and cross-plane direction for a variety of square field sizes and depths in water. The water-box phantom facilitated development of the basic physical aspects of the model. RMS discrepancies between measured and calculated total absorbed dose values in water were less than 9.3% for all fields studied. Computation times for 10 million dose points within a homogeneous phantom were approximately 4 minutes. These results suggest that the basic physics of the model are sufficiently simple, fast, and accurate to serve as a foundation for a variety of clinical and research applications, some of which may require that the model be extended or simplified based on the needs of the user. A potentially important advantage of a physics-based approach is that the model is more readily adaptable to a wide variety of treatment units and treatment techniques than with empirical models. PMID:26040833
NASA Astrophysics Data System (ADS)
Jagetic, Lydia J.; Newhauser, Wayne D.
2015-06-01
State-of-the-art radiotherapy treatment planning systems provide reliable estimates of the therapeutic radiation but are known to underestimate or neglect the stray radiation exposures. Most commonly, stray radiation exposures are reconstructed using empirical formulas or lookup tables. The purpose of this study was to develop the basic physics of a model capable of calculating the total absorbed dose both inside and outside of the therapeutic radiation beam for external beam photon therapy. The model was developed using measurements of total absorbed dose in a water-box phantom from a 6 MV medical linear accelerator to calculate dose profiles in both the in-plane and cross-plane direction for a variety of square field sizes and depths in water. The water-box phantom facilitated development of the basic physical aspects of the model. RMS discrepancies between measured and calculated total absorbed dose values in water were less than 9.3% for all fields studied. Computation times for 10 million dose points within a homogeneous phantom were approximately 4 min. These results suggest that the basic physics of the model are sufficiently simple, fast, and accurate to serve as a foundation for a variety of clinical and research applications, some of which may require that the model be extended or simplified based on the needs of the user. A potentially important advantage of a physics-based approach is that the model is more readily adaptable to a wide variety of treatment units and treatment techniques than with empirical models.
Jagetic, Lydia J; Newhauser, Wayne D
2015-06-21
State-of-the-art radiotherapy treatment planning systems provide reliable estimates of the therapeutic radiation but are known to underestimate or neglect the stray radiation exposures. Most commonly, stray radiation exposures are reconstructed using empirical formulas or lookup tables. The purpose of this study was to develop the basic physics of a model capable of calculating the total absorbed dose both inside and outside of the therapeutic radiation beam for external beam photon therapy. The model was developed using measurements of total absorbed dose in a water-box phantom from a 6 MV medical linear accelerator to calculate dose profiles in both the in-plane and cross-plane direction for a variety of square field sizes and depths in water. The water-box phantom facilitated development of the basic physical aspects of the model. RMS discrepancies between measured and calculated total absorbed dose values in water were less than 9.3% for all fields studied. Computation times for 10 million dose points within a homogeneous phantom were approximately 4 min. These results suggest that the basic physics of the model are sufficiently simple, fast, and accurate to serve as a foundation for a variety of clinical and research applications, some of which may require that the model be extended or simplified based on the needs of the user. A potentially important advantage of a physics-based approach is that the model is more readily adaptable to a wide variety of treatment units and treatment techniques than with empirical models. PMID:26040833
7 CFR 91.23 - Analytical methods.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 3 2010-01-01 2010-01-01 false Analytical methods. 91.23 Section 91.23 Agriculture... SERVICES AND GENERAL INFORMATION Method Manuals § 91.23 Analytical methods. Most analyses are performed according to approved procedures described in manuals of standardized methodology. These standard...
40 CFR 141.89 - Analytical methods.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 22 2010-07-01 2010-07-01 false Analytical methods. 141.89 Section 141...) NATIONAL PRIMARY DRINKING WATER REGULATIONS Control of Lead and Copper § 141.89 Analytical methods. (a... shall be conducted with the methods in § 141.23(k)(1). (1) Analyses for alkalinity,...
7 CFR 93.13 - Analytical methods.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 3 2010-01-01 2010-01-01 false Analytical methods. 93.13 Section 93.13 Agriculture... PROCESSED FRUITS AND VEGETABLES Peanuts, Tree Nuts, Corn and Other Oilseeds § 93.13 Analytical methods... manuals: (a) Approved Methods of the American Association of Cereal Chemists (AACC), American...
40 CFR 141.704 - Analytical methods.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Federal Register approves this incorporation by reference in accordance with 5 U.S.C. 552(a) and 1 CFR... 40 Protection of Environment 22 2010-07-01 2010-07-01 false Analytical methods. 141.704 Section... Monitoring Requirements § 141.704 Analytical methods. (a) Cryptosporidium. Systems must analyze...
HTGR analytical methods and design verification
Neylan, A.J.; Northup, T.E.
1982-05-01
Analytical methods for the high-temperature gas-cooled reactor (HTGR) include development, update, verification, documentation, and maintenance of all computer codes for HTGR design and analysis. This paper presents selected nuclear, structural mechanics, seismic, and systems analytical methods related to the HTGR core. This paper also reviews design verification tests in the reactor core, reactor internals, steam generator, and thermal barrier.
Fast neutron imaging device and method
Popov, Vladimir; Degtiarenko, Pavel; Musatov, Igor V.
2014-02-11
A fast neutron imaging apparatus and method of constructing fast neutron radiography images, the apparatus including a neutron source and a detector that provides event-by-event acquisition of position and energy deposition, and optionally timing and pulse shape for each individual neutron event detected by the detector. The method for constructing fast neutron radiography images utilizes the apparatus of the invention.
Analytical methods for solving the Boltzmann equation
NASA Astrophysics Data System (ADS)
Struminskii, V. V.
The principal analytical methods for solving the Boltzmann equation are reviewed, and a very general solution is proposed. The method makes it possible to obtain a solution to the Cauchy problem for the nonlinear Boltzmann equation and thus determine the applicability regions for the various analytical methods. The method proposed here also makes it possible to demonstrate that Hilbert's theorem of macroscopic causality does not apply and Hilbert's paradox does not exist.
Method of identifying analyte-binding peptides
Kauvar, Lawrence M.
1990-01-01
A method for affinity chromatography or adsorption of a designated analyte utilizes a paralog as the affinity partner. The immobilized paralog can be used in purification or analysis of the analyte; the paralog can also be used as a substitute for antibody in an immunoassay. The paralog is identified by screening candidate peptide sequences of 4-20 amino acids for specific affinity to the analyte.
Method of identifying analyte-binding peptides
Kauvar, L.M.
1990-10-16
A method for affinity chromatography or adsorption of a designated analyte utilizes a paralog as the affinity partner. The immobilized paralog can be used in purification or analysis of the analyte; the paralog can also be used as a substitute for antibody in an immunoassay. The paralog is identified by screening candidate peptide sequences of 4-20 amino acids for specific affinity to the analyte. 5 figs.
Method and apparatus for detecting an analyte
Allendorf, Mark D.; Hesketh, Peter J.
2011-11-29
We describe the use of coordination polymers (CP) as coatings on microcantilevers for the detection of chemical analytes. CP exhibit changes in unit cell parameters upon adsorption of analytes, which will induce a stress in a static microcantilever upon which a CP layer is deposited. We also describe fabrication methods for depositing CP layers on surfaces.
Analytical Methods in Mesoscopic Systems
NASA Astrophysics Data System (ADS)
Mason, Douglas Joseph
The prospect of designing technologies around the quantum behavior of mesoscopic devices is enticing. This thesis presents several tools to facilitate the process of calculating and analyzing the quantum properties of such devices; resonance, boundary conditions, and the quantum-classical correspondence are major themes that we study with these tools. In Chapter 1, we begin by laying the groundwork for the tools that follow by defining the Hamiltonian, the Green's function, the scattering matrix, and the Landauer formalism for ballistic conduction. In Chapter 2, we present an efficient and easy-to-implement algorithm called the Outward Wave Algorithm, which calculates the conductance function and scattering density matrix when a system is coupled to an environment in a variety of geometries and contexts beyond the simple two-lead schematic. In Chapter 3, we present a unique geometry and numerical method called the Boundary Reflection Matrix that allows us to calculate the full scattering matrix from arbitrary boundaries of a lattice system, and introduce the phenomenon of internal Bragg diffraction. In Chapter 4, we present a new method for visualizing wavefunctions called the Husimi map, which uses measurement by coherent states to form a bridge between the quantum flux operator and semiclassics. We extend the formalism from Chapter 4 to lattice systems in Chapter 5, and comment on our results in Chapter 3 and other work in the literature. These three tools - the Outward Wave Algorithm, the Boundary Reflection Matrix, and the Husimi map - work together to throw light on our interpretation of resonance and scattering in quantum systems, effectively codifying the expertise developed in semiclassics over the past few decades in an efficient and robust package. The data and images that they make available promise to help design better technologies based on quantum scattering.
Fast quench reactor and method
Detering, Brent A.; Donaldson, Alan D.; Fincke, James R.; Kong, Peter C.
2002-01-01
A fast quench reactor includes a reactor chamber having a high temperature heating means such as a plasma torch at its inlet and a restrictive convergent-divergent nozzle at its outlet end. Reactants are injected into the reactor chamber. The resulting heated gaseous stream is then rapidly cooled by passage through the nozzle. This "freezes" the desired end product(s) in the heated equilibrium reaction stage.
Fast quench reactor and method
Detering, Brent A.; Donaldson, Alan D.; Fincke, James R.; Kong, Peter C.
1998-01-01
A fast quench reactor includes a reactor chamber having a high temperature heating means such as a plasma torch at its inlet and a restrictive convergent-divergent nozzle at its outlet end. Reactants are injected into the reactor chamber. The resulting heated gaseous stream is then rapidly cooled by passage through the nozzle. This "freezes" the desired end product(s) in the heated equilibrium reaction stage.
Fast quench reactor and method
Detering, Brent A.; Donaldson, Alan D.; Fincke, James R.; Kong, Peter C.
2002-09-24
A fast quench reactor includes a reactor chamber having a high temperature heating means such as a plasma torch at its inlet and a restrictive convergent-divergent nozzle at its outlet end. Reactants are injected into the reactor chamber. The resulting heated gaseous stream is then rapidly cooled by passage through the nozzle. This "freezes" the desired end product(s) in the heated equilibrium reaction stage.
Fast quench reactor and method
Detering, B.A.; Donaldson, A.D.; Fincke, J.R.; Kong, P.C.
1998-05-12
A fast quench reactor includes a reactor chamber having a high temperature heating means such as a plasma torch at its inlet and a restrictive convergent-divergent nozzle at its outlet end. Reactants are injected into the reactor chamber. The resulting heated gaseous stream is then rapidly cooled by passage through the nozzle. This "freezes" the desired end product(s) in the heated equilibrium reaction stage. 7 figs.
Analytical Methods for Trace Metals. Training Manual.
ERIC Educational Resources Information Center
Office of Water Program Operations (EPA), Cincinnati, OH. National Training and Operational Technology Center.
This training manual presents material on the theoretical concepts involved in the methods listed in the Federal Register as approved for determination of trace metals. Emphasis is on laboratory operations. This course is intended for chemists and technicians with little or no experience in analytical methods for trace metals. Students should have…
Methods of Analyte Concentration in a Capillary
NASA Astrophysics Data System (ADS)
Kubalczyk, Paweł; Bald, Edward
Online sample concentration techniques in capillary electrophoresis separations have rapidly grown in popularity over the past few years. During the concentration process, diluted analytes in a long injected sample zone are concentrated into a short zone, and the analytes are then separated and detected. A large number of contributions have been published on this subject, proposing many names for procedures utilizing the same concentration principles. This chapter brings a unified view of concentration, describes the basic principles utilized, and lists the recognized current operational procedures. Several online concentration methods based on velocity-gradient techniques are described, in which the electrophoretic velocities of the analyte molecules are manipulated by field amplification, sweeping and isotachophoretic migration, resulting in the online concentration of the analyte.
40 CFR 141.89 - Analytical methods.
Code of Federal Regulations, 2013 CFR
2013-07-01
....89 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS Control of Lead and Copper § 141.89 Analytical methods. (a... 136 of this title. This need only be accomplished if the laboratory will be processing source...
7 CFR 91.23 - Analytical methods.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 3 2013-01-01 2013-01-01 false Analytical methods. 91.23 Section 91.23 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Standards... Public Health Association (APHA), the American Water Works Association (AWWA) and the Water...
7 CFR 91.23 - Analytical methods.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 3 2014-01-01 2014-01-01 false Analytical methods. 91.23 Section 91.23 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Standards... Public Health Association (APHA), the American Water Works Association (AWWA) and the Water...
7 CFR 94.4 - Analytical methods.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 3 2013-01-01 2013-01-01 false Analytical methods. 94.4 Section 94.4 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Standards... Association (APHA), the American Water Works Association (AWWA) and the Water Pollution Control...
7 CFR 91.23 - Analytical methods.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 3 2012-01-01 2012-01-01 false Analytical methods. 91.23 Section 91.23 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE (CONTINUED) COMMODITY LABORATORY TESTING PROGRAMS SERVICES AND GENERAL INFORMATION...
7 CFR 94.4 - Analytical methods.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 3 2012-01-01 2012-01-01 false Analytical methods. 94.4 Section 94.4 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE (CONTINUED) COMMODITY LABORATORY TESTING PROGRAMS POULTRY AND EGG PRODUCTS Mandatory...
7 CFR 94.4 - Analytical methods.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 3 2011-01-01 2011-01-01 false Analytical methods. 94.4 Section 94.4 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Standards... Association (APHA), the American Water Works Association (AWWA) and the Water Pollution Control...
7 CFR 91.23 - Analytical methods.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 3 2011-01-01 2011-01-01 false Analytical methods. 91.23 Section 91.23 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Standards... Public Health Association (APHA), the American Water Works Association (AWWA) and the Water...
7 CFR 94.4 - Analytical methods.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 3 2014-01-01 2014-01-01 false Analytical methods. 94.4 Section 94.4 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Standards... Association (APHA), the American Water Works Association (AWWA) and the Water Pollution Control...
7 CFR 93.13 - Analytical methods.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 3 2013-01-01 2013-01-01 false Analytical methods. 93.13 Section 93.13 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE (CONTINUED) COMMODITY LABORATORY TESTING PROGRAMS PROCESSED FRUITS AND VEGETABLES...
Surface Analytical Methods Applied to Magnesium Corrosion.
Dauphin-Ducharme, Philippe; Mauzeroll, Janine
2015-08-01
Understanding magnesium alloy corrosion is of primary concern, and scanning probe techniques are becoming key analytical characterization methods for that purpose. This Feature presents recent trends in this field as the progressive substitution of steel and aluminum car components by magnesium alloys to reduce the overall weight of vehicles is an irreversible trend. PMID:25826577
7 CFR 94.4 - Analytical methods.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 3 2010-01-01 2010-01-01 false Analytical methods. 94.4 Section 94.4 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE (CONTINUED) COMMODITY LABORATORY TESTING PROGRAMS POULTRY AND EGG PRODUCTS Mandatory...
Transcutaneous Analyte Measuring Methods (TAMM), phase 2
NASA Astrophysics Data System (ADS)
Schlager, Kenneth J.
1991-11-01
The primary objectives of the first quarter of Phase 2 TAMM were the following: the design of a near infrared (NIR)-800 photodiode array spectrometer, two of which would be used in clinical testing during 1992; the development of advanced pattern recognition software for analyzing the data collected with the spectrometer; and the establishment of an ongoing, internal test program with the B1-102 infrared analyzer. The major effort during the first three months of the project was in developing the analytical software NETGEN. NETGEN is a set of analytical programs that combine the best features of neural networks and genetic algorithms. Artificial neural networks (ANNs) are a form of distributed parallel processing of information that attempts to simulate the human brain. For application in TAMM, ANNs are an alternative to previous pattern recognition methods used for predicting blood analyte concentrations from NIR spectra.
Analytic sequential methods for detecting network intrusions
NASA Astrophysics Data System (ADS)
Chen, Xinjia; Walker, Ernest
2014-05-01
In this paper, we propose an analytic sequential method for detecting port-scan attackers, which routinely perform random "portscans" of IP addresses to find vulnerable servers to compromise. In addition to rigorously controlling the probability of falsely implicating benign remote hosts as malicious, our method performs significantly faster than other current solutions. We have developed explicit formulae for quick determination of the parameters of the new detection algorithm.
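Sequential port-scan detectors of this kind are commonly built on Wald's sequential probability ratio test over per-connection outcomes; the sketch below illustrates that general idea, not the paper's specific formulae, and the success probabilities and error rates are assumed values.

```python
import math

def sprt(outcomes, p0=0.8, p1=0.2, alpha=0.01, beta=0.01):
    """Sequential probability ratio test over connection outcomes
    (1 = connection succeeded, 0 = failed). H0: benign host with
    success probability p0; H1: scanner with success probability p1.
    Thresholds use Wald's approximations for false-positive rate
    alpha and false-negative rate beta."""
    upper = math.log((1 - beta) / alpha)   # cross -> declare scanner
    lower = math.log(beta / (1 - alpha))   # cross -> declare benign
    llr = 0.0                              # running log-likelihood ratio
    for y in outcomes:
        if y:
            llr += math.log(p1 / p0)
        else:
            llr += math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "scanner"
        if llr <= lower:
            return "benign"
    return "undecided"
```

A host whose probes mostly fail accumulates evidence toward "scanner" within a handful of observations, which is why sequential tests decide quickly while keeping both error probabilities controlled.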
An analytical method of estimating turbine performance
NASA Technical Reports Server (NTRS)
Kochendorfer, Fred D; Nettles, J Cary
1949-01-01
A method is developed by which the performance of a turbine over a range of operating conditions can be analytically estimated from the blade angles and flow areas. In order to use the method, certain coefficients that determine the weight flow and the friction losses must be approximated. The method is used to calculate the performance of the single-stage turbine of a commercial aircraft gas-turbine engine and the calculated performance is compared with the performance indicated by experimental data. For the turbine of the typical example, the assumed pressure losses and turning angles give a calculated performance that represents the trends of the experimental performance with reasonable accuracy. The exact agreement between analytical performance and experimental performance is contingent upon the proper selection of a blading-loss parameter.
An Analytical Method of Estimating Turbine Performance
NASA Technical Reports Server (NTRS)
Kochendorfer, Fred D; Nettles, J Cary
1948-01-01
A method is developed by which the performance of a turbine over a range of operating conditions can be analytically estimated from the blade angles and flow areas. In order to use the method, certain coefficients that determine the weight flow and friction losses must be approximated. The method is used to calculate the performance of the single-stage turbine of a commercial aircraft gas-turbine engine and the calculated performance is compared with the performance indicated by experimental data. For the turbine of the typical example, the assumed pressure losses and turning angles give a calculated performance that represents the trends of the experimental performance with reasonable accuracy. The exact agreement between analytical performance and experimental performance is contingent upon the proper selection of the blading-loss parameter. A variation of blading-loss parameter from 0.3 to 0.5 includes most of the experimental data from the turbine investigated.
The multigrid method: Fast relaxation
NASA Technical Reports Server (NTRS)
South, J. C., Jr.; Brandt, A.
1976-01-01
A multi-level grid method was studied as a possible means of accelerating convergence in relaxation calculations for transonic flows. The method employs a hierarchy of grids, ranging from very coarse (e.g. 4 x 2 mesh cells) to fine (e.g. 64 x 32); the coarser grids are used to diminish the magnitude of the smooth part of the residuals, hopefully with far less total work than would be required with optimal iterations on the finest grid. To date, the method has been applied quite successfully to the solution of the transonic small-disturbance equation for the velocity potential in conservation form. Nonlifting transonic flow past a parabolic-arc airfoil is the example studied, with meshes of both constant and variable step size.
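The coarse-grid idea described above can be sketched with a minimal two-grid cycle; the model problem here is the 1-D Poisson equation -u'' = f with a weighted-Jacobi smoother, chosen for brevity (the paper itself treats the transonic small-disturbance equation), and all grid sizes and sweep counts are illustrative.

```python
import math

def jacobi(u, f, h, sweeps, w=2.0 / 3.0):
    """Weighted-Jacobi smoothing for -u'' = f with zero Dirichlet BCs."""
    n = len(u)
    for _ in range(sweeps):
        new = u[:]
        for i in range(1, n - 1):
            new[i] = (1 - w) * u[i] + w * 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])
        u = new
    return u

def residual(u, f, h):
    n = len(u)
    r = [0.0] * n
    for i in range(1, n - 1):
        r[i] = f[i] - (2 * u[i] - u[i - 1] - u[i + 1]) / (h * h)
    return r

def two_grid(u, f, h):
    """One two-grid correction cycle: smooth, restrict the residual,
    approximately solve the coarse error equation, interpolate back."""
    u = jacobi(u, f, h, sweeps=3)                 # pre-smooth rough error
    r = residual(u, f, h)
    N = len(u) - 1                                # fine intervals (even)
    M = N // 2
    rc = [0.0] * (M + 1)                          # full-weighting restriction
    for j in range(1, M):
        rc[j] = 0.25 * r[2 * j - 1] + 0.5 * r[2 * j] + 0.25 * r[2 * j + 1]
    ec = jacobi([0.0] * (M + 1), rc, 2 * h, sweeps=200)  # coarse "solve"
    for j in range(M):                            # linear interpolation back
        u[2 * j] += ec[j]
        u[2 * j + 1] += 0.5 * (ec[j] + ec[j + 1])
    u[N] += ec[M]
    return jacobi(u, f, h, sweeps=3)              # post-smooth

# Demo: -u'' = pi^2 sin(pi x) on [0,1], exact solution u = sin(pi x).
N = 32
h = 1.0 / N
f = [math.pi ** 2 * math.sin(math.pi * i * h) for i in range(N + 1)]
u = [0.0] * (N + 1)
for _ in range(5):
    u = two_grid(u, f, h)
```

The smoother quickly kills oscillatory error components, while the coarse grid removes the smooth components that plain relaxation handles slowly; a full multigrid method applies this correction recursively over the whole grid hierarchy.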
FAST TRACK COMMUNICATION: Uniqueness of static black holes without analyticity
NASA Astrophysics Data System (ADS)
Chruściel, Piotr T.; Galloway, Gregory J.
2010-08-01
We show that the hypothesis of analyticity in the uniqueness theory of vacuum, or electrovacuum, static black holes is not needed. More generally, we show that prehorizons covering a closed set cannot occur in well-behaved domains of outer communications.
Fast and accurate analytical model to solve inverse problem in SHM using Lamb wave propagation
NASA Astrophysics Data System (ADS)
Poddar, Banibrata; Giurgiutiu, Victor
2016-04-01
Lamb wave propagation is at the center of attention of researchers for structural health monitoring of thin-walled structures. This is because Lamb wave modes are natural modes of wave propagation in these structures, traveling long distances without much attenuation, which brings the prospect of monitoring a large structure with few sensors/actuators. However, damage detection and identification is an "inverse problem" in which we do not have the luxury of knowing the exact mathematical model of the system. The problem is made more challenging by the confounding factors of statistical variation in the material and geometric properties, and it may also be ill posed. Due to all these complexities, a direct solution of the damage detection and identification problem in SHM is impossible, so an indirect method using the solution of the "forward problem" is popular for solving the "inverse problem". This requires a fast forward-problem solver. Because of the complexities involved in the forward problem of scattering of Lamb waves from damage, researchers rely primarily on numerical techniques such as FEM, BEM, etc., but these methods are slow and impractical for structural health monitoring. We have developed a fast and accurate analytical forward-problem solver for this purpose. This solver, CMEP (complex modes expansion and vector projection), can simulate scattering of Lamb waves from all types of damage in thin-walled structures quickly and accurately to assist the inverse-problem solver.
Secondary waste minimization in analytical methods
Green, D.W.; Smith, L.L.; Crain, J.S.; Boparai, A.S.; Kiely, J.T.; Yaeger, J.S.; Schilling, J.B.
1995-07-01
The characterization phase of site remediation is an important and costly part of the process. Because toxic solvents and other hazardous materials are used in common analytical methods, characterization is also a source of new waste, including mixed waste. Alternative analytical methods can reduce the volume or form of hazardous waste produced either in the sample preparation step or in the measurement step. The authors are examining alternative methods in the areas of inorganic, radiological, and organic analysis. For determining inorganic constituents, alternative methods were studied for sample introduction into inductively coupled plasma spectrometers. Figures of merit for the alternative methods, as well as their associated waste volumes, were compared with the conventional approaches. In the radiological area, the authors are comparing conventional methods for gross α/β measurements of soil samples to an alternative method that uses high-pressure microwave dissolution. For determination of organic constituents, microwave-assisted extraction was studied for RCRA-regulated semivolatile organics in a variety of solid matrices, including spiked samples in blank soil; polynuclear aromatic hydrocarbons in soils, sludges, and sediments; and semivolatile organics in soil. Extraction efficiencies were determined under varying conditions of time, temperature, microwave power, moisture content, and extraction solvent. Solvent usage was cut from the 300 mL used in conventional extraction methods to about 30 mL. Extraction results varied from one matrix to another. In most cases, the microwave-assisted extraction technique was as efficient as the more common Soxhlet or sonication extraction techniques.
Directory of Analytical Methods, Department 1820
Whan, R.E.
1986-01-01
The Materials Characterization Department performs chemical, physical, and thermophysical analyses in support of programs throughout the Laboratories. The department has a wide variety of techniques and instruments staffed by experienced personnel available for these analyses, and we strive to maintain near state-of-the-art technology through continued updates. We have prepared this Directory of Analytical Methods in order to acquaint you with our capabilities and to help you identify personnel who can assist with your analytical needs. The descriptions of the various capabilities are requester-oriented and have been limited in length and detail. Emphasis has been placed on applications and limitations, with notations of estimated analysis time and alternative or related techniques. A short, simplified discussion of underlying principles is also presented, along with references if more detail is desired. The contents of this document have been organized in the order: bulk analysis, microanalysis, surface analysis, optical and thermal property measurements.
A pragmatic overview of fast multipole methods
Strickland, J.H.; Baty, R.S.
1995-12-01
A number of physics problems can be modeled by a set of N elements which have pair-wise interactions with one another. A direct solution technique requires computational effort which is O(N^2). Fast multipole methods (FMM) have been widely used in recent years to obtain solutions to these problems requiring a computational effort of only O(N ln N) or O(N). In this paper we present an overview of several variations of the fast multipole method along with examples of its use in solving a variety of physical problems.
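The baseline that FMM accelerates is the direct pair-wise sum; a minimal Python sketch (illustrative only, not code from the paper) makes the O(N^2) cost explicit:

```python
import numpy as np

def direct_potentials(positions, charges):
    """Direct pair-wise Coulomb potentials: the O(N^2) double loop that
    fast multipole methods reduce to O(N ln N) or O(N)."""
    n = len(charges)
    phi = np.zeros(n)
    for i in range(n):
        for j in range(n):
            if i != j:
                r = np.linalg.norm(positions[i] - positions[j])
                phi[i] += charges[j] / r
    return phi

# Two unit charges separated by distance 2: each sees a potential of 1/2.
pos = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
q = np.array([1.0, 1.0])
print(direct_potentials(pos, q))  # -> [0.5 0.5]
```

FMM replaces the inner loop over distant particles with truncated multipole expansions evaluated per group, which is where the asymptotic savings come from.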
Analytical methods for toxic gases from thermal degradation of polymers
NASA Technical Reports Server (NTRS)
Hsu, M.-T. S.
1977-01-01
Toxic gases evolved from the thermal oxidative degradation of synthetic or natural polymers in small laboratory chambers or in large-scale fire tests are measured by several different analytical methods. Gas detector tubes are used for fast on-site detection of suspect toxic gases. The infrared spectroscopic method is an excellent qualitative and quantitative analysis for some toxic gases. Permanent gases such as carbon monoxide, carbon dioxide, methane, and ethylene can be quantitatively determined by gas chromatography. Highly toxic and corrosive gases such as nitrogen oxides, hydrogen cyanide, hydrogen fluoride, hydrogen chloride, and sulfur dioxide should be passed into a scrubbing solution for subsequent analysis by either specific ion electrodes or spectrophotometric methods. Low-concentration toxic organic vapors can be concentrated in a cold trap and then analyzed by gas chromatography and mass spectrometry. The limitations of the different methods are discussed.
The greening of PCB analytical methods
Erickson, M.D.; Alvarado, J.S.; Aldstadt, J.H.
1995-12-01
Green chemistry incorporates waste minimization, pollution prevention, and solvent substitution. The primary focus of green chemistry over the past decade has been within the chemical industry; adoption by routine environmental laboratories has been slow because regulatory standard methods must be followed. A related paradigm, microscale chemistry, has gained acceptance in undergraduate teaching laboratories but has not been broadly applied to routine environmental analytical chemistry. We are developing green and microscale techniques for routine polychlorinated biphenyl (PCB) analyses as an example of the overall potential within the environmental analytical community. Initial work has focused on adaptation of commonly used routine EPA methods for soils and oils. Results of our method development and validation demonstrate that: (1) solvent substitution can achieve comparable results and eliminate environmentally less desirable solvents; (2) microscale extractions can cut the scale of the analysis by at least a factor of ten; (3) we can better match the amount of sample used with the amount needed for the GC determination step; (4) the volume of waste generated can be cut by at least a factor of ten; and (5) costs are reduced significantly in apparatus, reagent consumption, and labor.
Delgado-Aparicio, L.; Tritz, K.; Kramer, T.; Stutman, D.; Finkenthal, M.; Hill, K.; Bitter, M.
2010-08-26
A new set of analytic formulae describes the transmission of soft X-ray (SXR) continuum radiation through a metallic foil for application to fast electron temperature measurements in fusion plasmas. This novel approach shows good agreement with numerical calculations over a wide range of plasma temperatures, in contrast with the solutions obtained when using a transmission approximated by a single Heaviside function [S. von Goeler, Rev. Sci. Instrum., 20, 599, (1999)]. The new analytic formulae can improve the interpretation of experimental results and thus contribute to obtaining fast temperature measurements in between intermittent Thomson scattering data.
The use of the spectral method within the fast adaptive composite grid method
McKay, S.M.
1994-12-31
The use of efficient algorithms for the solution of partial differential equations has been sought for many years. The fast adaptive composite grid (FAC) method combines an efficient algorithm with high accuracy to obtain low-cost solutions to partial differential equations. The FAC method achieves fast solution by combining solutions on grids with different discretizations, using multigrid-like techniques. Recently, the continuous FAC (CFAC) method has been developed, which utilizes an analytic solution within a subdomain to iterate to a solution of the problem. This has been shown to achieve excellent results when the analytic solution can be found. The CFAC method will be extended to allow solvers which construct a function for the solution, e.g., spectral and finite element methods. In this discussion, spectral methods will be used to provide a fast, accurate solution to the partial differential equation. As spectral methods are more accurate than finite difference methods, the resulting accuracy of this hybrid method outside of the subdomain will be investigated.
Analytical methods to assess nanoparticle toxicity.
Marquis, Bryce J; Love, Sara A; Braun, Katherine L; Haynes, Christy L
2009-03-01
During the past 20 years, improvements in nanoscale materials synthesis and characterization have given scientists great control over the fabrication of materials with features between 1 and 100 nm, unlocking many unique size-dependent properties and, thus, promising many new and/or improved technologies. Recent years have found the integration of such materials into commercial goods; a current estimate suggests there are over 800 nanoparticle-containing consumer products (The Project on Emerging Nanotechnologies Consumer Products Inventory, , accessed Oct. 2008), accounting for 147 billion USD in products in 2007 (Nanomaterials state of the market Q3 2008: stealth success, broad impact, Lux Research Inc., New York, NY, 2008). Despite this increase in the prevalence of engineered nanomaterials, there is little known about their potential impacts on environmental health and safety. The field of nanotoxicology has formed in response to this lack of information and resulted in a flurry of research studies. Nanotoxicology relies on many analytical methods for the characterization of nanomaterials as well as their impacts on in vitro and in vivo function. This review provides a critical overview of these techniques from the perspective of an analytical chemist, and is intended to be used as a reference for scientists interested in conducting nanotoxicological research as well as those interested in nanotoxicological assay development. PMID:19238274
Analytic Method for Computing Instrument Pointing Jitter
NASA Technical Reports Server (NTRS)
Bayard, David
2003-01-01
A new method of calculating the root-mean-square (rms) pointing jitter of a scientific instrument (e.g., a camera, radar antenna, or telescope) is introduced based on a state-space concept. In comparison with the prior method of calculating the rms pointing jitter, the present method involves significantly less computation. The rms pointing jitter of an instrument (the square root of the jitter variance shown in the figure) is an important physical quantity which impacts the design of the instrument, its actuators, controls, sensory components, and sensor-output-sampling circuitry. Using the Sirlin, San Martin, and Lucke definition of pointing jitter, the prior method of computing the rms pointing jitter involves a frequency-domain integral of a rational polynomial multiplied by a transcendental weighting function, necessitating the use of numerical-integration techniques. In practice, numerical integration complicates the problem of calculating the rms pointing error. In contrast, the state-space method provides exact analytic expressions that can be evaluated without numerical integration.
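As a rough illustration of the state-space idea (a generic textbook construction, not the authors' formulas, and it uses the plain steady-state variance rather than the windowed Sirlin-San Martin-Lucke jitter definition): for a stable model x' = Ax + w driven by white noise of intensity Q, the steady-state covariance P solves the Lyapunov equation A P + P A^T + Q = 0, and the rms of the output y = Cx follows without any frequency-domain integration.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Scalar example: x' = -a*x + w with a = 2 and noise intensity q = 4.
# The steady-state variance is q / (2a) = 1, so the rms output is 1.
A = np.array([[-2.0]])
Q = np.array([[4.0]])
C = np.array([[1.0]])

# solve_continuous_lyapunov(A, -Q) solves A P + P A^T = -Q.
P = solve_continuous_lyapunov(A, -Q)
rms = float(np.sqrt(C @ P @ C.T))
print(rms)  # -> 1.0
```

The appeal mirrors the abstract's point: an algebraic (here, Lyapunov) solve replaces a numerically delicate frequency-domain integral.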
Analytical Methods for Immunogenetic Population Data
Mack, Steven J.; Gourraud, Pierre-Antoine; Single, Richard M.; Thomson, Glenys; Hollenbach, Jill A.
2014-01-01
In this chapter, we describe analyses commonly applied to immunogenetic population data, along with software tools that are currently available to perform those analyses. Where possible, we focus on tools that have been developed specifically for the analysis of highly polymorphic immunogenetic data. These analytical methods serve both as a means to examine the appropriateness of a dataset for testing a specific hypothesis and as a means of testing hypotheses. Rather than treat this chapter as a protocol for analyzing any population dataset, each researcher and analyst should first consider their data, the possible analyses, and any available tools in light of the hypothesis being tested. The extent to which the data and analyses are appropriate to each other should be determined before any analyses are performed. PMID:22665237
Analytical methods for optical remote sensing
Spellicy, R.L.
1997-12-31
Optical monitoring systems are very powerful because of their ability to see many compounds simultaneously and to report results in real time. However, these strengths also present unique problems for analysis of the resulting data and validation of observed results. Today, many FTIR and UV-DOAS systems are in use. Some of these are manned systems supporting short-term tests, while others are totally unmanned systems which are expected to operate without intervention for weeks or months at a time. Developing analytical methods that support both the diversity of compounds and the diversity of applications is challenging. In this paper, the fundamental concepts of spectral analysis for IR/UV systems are presented. This is followed by examples of specific field data from both short-term measurement programs looking at unique sources and long-term unmanned monitoring systems looking at ambient air.
Pyrroloquinoline quinone: Metabolism and analytical methods
Smidt, C.R.
1990-01-01
Pyrroloquinoline quinone (PQQ) functions as a cofactor for bacterial oxidoreductases. Whether or not PQQ serves as a cofactor in higher plants and animals remains controversial. Nevertheless, strong evidence exists that PQQ has nutritional importance. In highly purified, chemically defined diets, PQQ stimulates animal growth. Further, PQQ deprivation impairs connective tissue maturation, particularly when initiated in utero and throughout perinatal development. The study addresses two main objectives: (1) to elucidate basic aspects of the metabolism of PQQ in animals, and (2) to develop and improve existing analytical methods for PQQ. To study intestinal absorption of PQQ, ten mice were administered [14C]-PQQ per os. PQQ was readily absorbed (62%) in the lower intestine and was excreted by the kidney within 24 hours. Significant amounts of labeled PQQ were retained only by skin and kidney. Three approaches were taken to answer the question of whether PQQ is synthesized by the intestinal microflora of mice. First, dietary antibiotics had no effect on fecal PQQ excretion. Second, no bacterial isolates could be identified that are known to synthesize PQQ. Last, cecal contents were incubated anaerobically with radiolabeled PQQ precursors, with no label appearing in isolated PQQ. Thus, intestinal PQQ synthesis is unlikely. Analysis of PQQ in biological samples is problematic since PQQ forms adducts with nucleophilic compounds and binds to the protein fraction. Existing analytical methods are reviewed and a new approach is introduced that allows for detection of PQQ in animal tissue and foods. PQQ is freed from proteins by ion exchange chromatography, purified on activated silica cartridges, detected by a colorimetric redox-cycling assay, and identified by mass spectrometry. That compounds with the properties of PQQ may be nutritionally important offers interesting areas for future investigation.
Fast analytic simulation toolkit for generation of 4D PET-MR data from real dynamic MR acquisitions
NASA Astrophysics Data System (ADS)
Tsoumpas, C.; Buerger, C.; Mollet, P.; Marsden, P. K.
2011-09-01
This work introduces and evaluates a fast analytic simulation toolkit (FAST) for simulating dynamic PET-MR data from real MR acquisitions. Realistic radiotracer values are assigned to segmented MR images. PET data are generated using analytic forward-projections (including attenuation and Poisson statistics) with the reconstruction software STIR, which is also used to produce the PET images that are spatially and temporally correlated with the real MR images. The simulation is compared with the GATE Monte Carlo package, which has more accurate physical modelling but is 150 times slower than FAST for ten respiratory positions, and 7000 times slower when the simulation is repeated. The region-of-interest mean values and coefficients of variation obtained with FAST and GATE, from 65 million and 104 million coincidences, respectively, were compared. Agreement between the two simulation methods is good. In particular, the percentage differences of the mean values are 10% for the liver and 19% for the myocardium and a warm lesion. The utility of FAST is demonstrated with the simulation of multiple volunteers with different breathing patterns. The package will be used for studying the performance of reconstruction, motion correction and attenuation correction algorithms for dynamic simultaneous PET-MR data.
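The core recipe (assign activity values, forward-project analytically, apply attenuation, add Poisson counting noise) can be sketched in a toy form. This is a drastic simplification: FAST uses STIR's full projectors and mu-map attenuation modelling, whereas here the projection angles are limited to 0 and 90 degrees and the attenuation factor is invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy activity image: a bright square on a zero background.
img = np.zeros((32, 32))
img[10:20, 10:20] = 50.0

# Line integrals at 0 and 90 degrees reduce to column and row sums.
proj0 = img.sum(axis=0)
proj90 = img.sum(axis=1)
sino = np.stack([proj0, proj90])

# Crude stand-in for attenuation (a single scale factor), then Poisson
# counting noise on the expected counts.
attenuated = 0.7 * sino
noisy = rng.poisson(attenuated).astype(float)
print(noisy.shape)  # -> (2, 32)
```

Both projections integrate the same total activity, which is a quick sanity check on any forward projector.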
An overview of fast multipole methods
Strickland, J.H.; Baty, R.S.
1995-11-01
A number of physics problems may be cast in terms of Hilbert-Schmidt integral equations. In many cases, the integrals tend to be zero over a large portion of the domain of interest. All of the information is contained in compact regions of the domain, which renders their use very attractive from the standpoint of efficient numerical computation. Discrete representation of these integrals leads to a system of N elements which have pair-wise interactions with one another. A direct solution technique requires computational effort which is O(N^2). Fast multipole methods (FMM) have been widely used in recent years to obtain solutions to these problems requiring a computational effort of only O(N ln N) or O(N). In this paper we present an overview of several variations of the fast multipole method along with examples of its use in solving a variety of physical problems.
40 CFR 161.180 - Enforcement analytical method.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 23 2010-07-01 2010-07-01 false Enforcement analytical method. 161.180 Section 161.180 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) PESTICIDE PROGRAMS... § 161.180 Enforcement analytical method. An analytical method suitable for enforcement purposes must...
Fractional tiers in fast multipole method calculations
NASA Astrophysics Data System (ADS)
White, Christopher A.; Head-Gordon, Martin
1996-08-01
One defining characteristic of the fast multipole calculation is the number of tiers (depth of tree) used to group the particles. For three dimensions, the standard boxing scheme restricts the number of lowest-level boxes to be a power of eight. We present a method which, through a simple scaling of the particle coordinates, allows an arbitrary number of lowest-level boxes. Consequently, one can better balance the near-field and far-field work by minimizing the deviation of the number of particles per lowest-level box from its optimal value. Test calculations show systems where this method gives a speedup approaching a factor of two.
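The load-balancing idea can be illustrated with simple arithmetic (hypothetical particle counts, not values from the paper): the standard octree snaps the lowest-level box count to a power of eight, while coordinate scaling permits any count near the optimum.

```python
import math

def boxes_standard(n_particles, target_per_box):
    """Standard 3-D octree: the lowest-level box count must be a power of 8."""
    ideal = n_particles / target_per_box
    tier = max(0, round(math.log(ideal, 8)))
    return 8 ** tier

def boxes_fractional(n_particles, target_per_box):
    """Fractional tiers (coordinate scaling): any integer box count allowed."""
    return max(1, round(n_particles / target_per_box))

n, target = 10000, 40              # ideal count: 250 boxes
print(boxes_standard(n, target))    # snaps to 8^3 = 512 -> ~20 particles/box
print(boxes_fractional(n, target))  # 250 -> ~40 particles/box, as requested
```

With 512 boxes the near-field work is underloaded and the far-field work inflated; hitting the optimal occupancy is exactly the balance the fractional-tier scheme targets.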
Fast multipole methods for particle dynamics
Kurzak, J.; Pettitt, B. M.
2008-01-01
The growth of simulations of particle systems has been aided by advances in computer speed and algorithms. The adoption of O(N) algorithms to solve N-body simulation problems has been less rapid due to the fact that such scaling was only competitive for relatively large N. Our work seeks to find algorithmic modifications and practical implementations for intermediate values of N in typical use for molecular simulations. This article reviews fast multipole techniques for calculation of electrostatic interactions in molecular systems. The basic mathematics behind fast summations applied to long ranged forces is presented along with advanced techniques for accelerating the solution, including our most recent developments. The computational efficiency of the new methods facilitates both simulations of large systems as well as longer and therefore more realistic simulations of smaller systems. PMID:19194526
Novel applications of fast neutron interrogation methods
NASA Astrophysics Data System (ADS)
Gozani, Tsahi
1994-12-01
The development of non-intrusive inspection methods for contraband consisting primarily of carbon, nitrogen, oxygen, and hydrogen requires the use of fast neutrons. While most elements can be sufficiently well detected by the thermal neutron capture process, some important ones, e.g., carbon and in particular oxygen, cannot be detected by this process. Fortunately, fast neutrons, with energies above the threshold for inelastic scattering, stimulate relatively strong and specific gamma-ray lines from these elements. The main lines are 6.13 MeV for O, 4.43 MeV for C, and 5.11, 2.31, and 1.64 MeV for N. Accelerator-generated neutrons in the energy range of 7 to 15 MeV are being considered as interrogating radiations in a variety of non-intrusive inspection systems for contraband, from explosives to drugs and from coal to smuggled, dutiable goods. In some applications, mostly for inspection of small items such as luggage, the decision process involves a rudimentary imaging, akin to emission tomography, to obtain the localized concentration of various elements. This technique is called FNA (Fast Neutron Analysis). While this approach offers improvements over TNA (Thermal Neutron Analysis), it is not applicable to large objects such as shipping containers and trucks. For these challenging applications, a collimated beam of neutrons is rastered along the height of the moving object. In addition, the neutrons are generated in very narrow nanosecond pulses. The point of their interaction inside the object is determined by the time-of-flight (TOF) method, that is, by measuring the time elapsed from neutron generation to detection of the stimulated gamma rays. This technique, called PFNA (Pulsed Fast Neutron Analysis), thus directly provides the elemental, and by inference the chemical, composition of the material at every volume element (voxel) of the object. The various neutron-based techniques are briefly described below.
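The TOF localization reduces to distance = speed x time. A hedged sketch, using the classical kinetic-energy relation for the neutron speed (adequate to a few percent at these energies; the helper names are invented for illustration):

```python
import math

M_N_C2_MEV = 939.565   # neutron rest energy, MeV
C_M_PER_S = 2.998e8    # speed of light, m/s

def neutron_speed(e_mev):
    """Classical speed from kinetic energy: v = c * sqrt(2E / (m c^2)).
    At ~10 MeV this ignores a relativistic correction of a few percent."""
    return C_M_PER_S * math.sqrt(2.0 * e_mev / M_N_C2_MEV)

def interaction_depth(e_mev, tof_s):
    """Depth along the beam reached after the measured time of flight."""
    return neutron_speed(e_mev) * tof_s

v = neutron_speed(8.0)
print(round(v / 1e7, 2))                        # -> 3.91 (i.e., ~3.9e7 m/s)
print(round(interaction_depth(8.0, 25e-9), 2))  # -> 0.98 (~1 m in 25 ns)
```

The nanosecond pulse width matters because an 8 MeV neutron covers only a few centimeters per nanosecond, which sets the voxel resolution along the beam.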
40 CFR 141.25 - Analytical methods for radioactivity.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 23 2011-07-01 2011-07-01 false Analytical methods for radioactivity. 141.25 Section 141.25 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS Monitoring and Analytical Requirements § 141.25 Analytical methods for radioactivity....
40 CFR 425.03 - Sulfide analytical methods and applicability.
Code of Federal Regulations, 2013 CFR
2013-07-01
... Provisions § 425.03 Sulfide analytical methods and applicability. (a) The potassium ferricyanide titration... the potassium ferricyanide titration method for the determination of sulfide in wastewaters...
Weaver, Abigail A; Reiser, Hannah; Barstis, Toni; Benvenuti, Michael; Ghosh, Debarati; Hunckler, Michael; Joy, Brittney; Koenig, Leah; Raddell, Kellie; Lieberman, Marya
2013-07-01
Reports of low-quality pharmaceuticals have been on the rise in the past decade, with the greatest prevalence of substandard medicines in developing countries, where lapses in manufacturing quality control or breaches in the supply chain allow substandard medicines to reach the marketplace. Here, we describe inexpensive test cards for fast field screening of pharmaceutical dosage forms containing beta lactam antibiotics or combinations of the four first-line antituberculosis (TB) drugs. The devices detect the active pharmaceutical ingredients (APIs) ampicillin, amoxicillin, rifampicin, isoniazid, ethambutol, and pyrazinamide and also screen for substitute pharmaceuticals, such as acetaminophen and chloroquine that may be found in counterfeit pharmaceuticals. The tests can detect binders and fillers such as chalk, talc, and starch not revealed by traditional chromatographic methods. These paper devices contain 12 lanes, separated by hydrophobic barriers, with different reagents deposited in the lanes. The user rubs some of the solid pharmaceutical across the lanes and dips the edge of the paper into water. As water climbs up the lanes by capillary action, it triggers a library of different chemical tests and a timer to indicate when the tests are completed. The reactions in each lane generate colors to form a "color bar code" which can be analyzed visually by comparison with standard outcomes. Although quantification of the APIs is poor compared with conventional analytical methods, the sensitivity and selectivity for the analytes is high enough to pick out suspicious formulations containing no API or a substitute API as well as formulations containing APIs that have been "cut" with inactive ingredients. PMID:23725012
Analytical estimates of electron quasi-linear diffusion by fast magnetosonic waves
NASA Astrophysics Data System (ADS)
Mourenas, D.; Artemyev, A. V.; Agapitov, O. V.; Krasnoselskikh, V.
2013-06-01
Quantifying the loss of relativistic electrons from the Earth's radiation belts requires estimating the effects of many kinds of observed waves, ranging from ULF to VLF. Analytical estimates of electron quasi-linear diffusion coefficients for whistler-mode chorus and hiss waves of arbitrary obliquity have recently been derived, allowing useful analytical approximations for lifetimes. We examine here the influence of much lower-frequency and highly oblique fast magnetosonic waves (also called ELF equatorial noise) by means of both approximate analytical formulations of the corresponding diffusion coefficients and full numerical simulations. Further analytical developments allow us to identify the wave and plasma parameters most critical to a strong impact of fast magnetosonic waves on electron lifetimes and acceleration in the simultaneous presence of chorus, hiss, or lightning-generated waves, both inside and outside the plasmasphere. In this respect, a relatively small ratio of wave frequency to ion gyrofrequency appears more favorable, and other propitious circumstances are characterized. This study should be useful for a comprehensive appraisal of the potential effect of fast magnetosonic waves throughout the magnetosphere.
NASA Astrophysics Data System (ADS)
Kurylyk, Barret L.; Irvine, Dylan J.
2016-02-01
This study details the derivation and application of a new analytical solution to the one-dimensional, transient conduction-advection equation that is applied to trace vertical subsurface fluid fluxes. The solution employs a flexible initial condition that allows for nonlinear temperature-depth profiles, providing a key improvement over most previous solutions. The boundary condition is composed of any number of superimposed step changes in surface temperature, and thus it accommodates intermittent warming and cooling periods due to long-term changes in climate or land cover. The solution is verified using an established numerical model of coupled groundwater flow and heat transport. A new computer program FAST (Flexible Analytical Solution using Temperature) is also presented to facilitate the inversion of this analytical solution to estimate vertical groundwater flow. The program requires a surface temperature history (which can be estimated from historic climate data), subsurface thermal properties, a present-day temperature-depth profile, and reasonable initial conditions. FAST is written in the Python computing language and can be run using a free graphical user interface. Herein, we demonstrate the utility of the analytical solution and FAST using measured subsurface temperature and climate data from the Sendai Plain, Japan. Results from these illustrative examples highlight the influence of the chosen initial and boundary conditions on estimated vertical flow rates.
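The superposition of surface step changes can be sketched as follows, assuming pure conduction (the standard unit-step erfc response) rather than the paper's full conduction-advection solution; all parameter values below are invented for illustration:

```python
import math

def step_response(dT, z, t, alpha):
    """Pure-conduction response to a surface step dT applied at t = 0:
    T(z, t) = dT * erfc(z / (2 sqrt(alpha t))). No advection term here."""
    if t <= 0:
        return 0.0
    return dT * math.erfc(z / (2.0 * math.sqrt(alpha * t)))

def temperature(z, t, steps, alpha):
    """Superpose any number of surface step changes, each given as a
    (start_time, magnitude) pair, exploiting linearity of the heat equation."""
    return sum(step_response(dT, z, t - t0, alpha) for t0, dT in steps)

alpha = 1e-6 * 3.15e7               # ~1e-6 m^2/s expressed in m^2/yr
steps = [(0.0, 0.5), (30.0, 0.7)]   # two warming steps (yr, deg C), invented
print(round(temperature(0.0, 50.0, steps, alpha), 2))  # surface -> 1.2
```

At the surface the steps simply add (0.5 + 0.7 = 1.2 deg C), while at depth each step's contribution is delayed and attenuated; the vertical fluid flux the paper inverts for would shift how quickly the signal penetrates.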
Green analytical method development for statin analysis.
Assassi, Amira Louiza; Roy, Claude-Eric; Perovitch, Philippe; Auzerie, Jack; Hamon, Tiphaine; Gaudin, Karen
2015-02-01
A green analytical chemistry method was developed for pravastatin, fluvastatin and atorvastatin analysis. An HPLC/DAD method using an ethanol-based mobile phase with octadecyl-grafted silica with various graftings and related column parameters such as particle size, core-shell and monolith was studied. Retention, efficiency and detector linearity were optimized. Even for columns with particle sizes under 2 μm, the benefit of maintaining efficiency over a large range of flow rates was not obtained with the ethanol-based mobile phase, in contrast to an acetonitrile-based one. Therefore, the strategy of shortening analysis by increasing the flow rate decreased efficiency with the ethanol-based mobile phase. An ODS-AQ YMC column, 50 mm × 4.6 mm, 3 μm, was selected, which showed the best compromise between analysis time, statin separation, and efficiency. HPLC conditions were 1 mL/min, ethanol/formic acid (pH 2.5, 25 mM) (50:50, v/v), thermostated at 40°C. To reduce solvent consumption for sample preparation, a 0.5 mg/mL concentration of each statin was found to be the highest that respected detector linearity. These conditions were validated for each statin for content determination in highly concentrated hydro-alcoholic solutions. Solubility higher than 100 mg/mL was found for pravastatin and fluvastatin, whereas for atorvastatin calcium salt the maximum concentration was 2 mg/mL for hydro-alcoholic binary mixtures between 35% and 55% ethanol in water. Using atorvastatin instead of its calcium salt, solubility was improved. Highly concentrated solutions of statins offer a potential fluid for per Buccal Per-Mucous(®) administration, with the advantages of rapid and easy passage of drugs. PMID:25582487
An analytical method for computing atomic contact areas in biomolecules.
Mach, Paul; Koehl, Patrice
2013-01-15
We propose a new analytical method for detecting and computing contacts between atoms in biomolecules. It is based on the alpha shape theory and proceeds in three steps. First, we compute the weighted Delaunay triangulation of the union of spheres representing the molecule. In the second step, the Delaunay complex is filtered to derive the dual complex. Finally, contacts between spheres are collected. In this approach, two atoms i and j are defined to be in contact if their centers are connected by an edge in the dual complex. The contact areas between atom i and its neighbors are computed based on the caps formed by these neighbors on the surface of i; the total area of all these caps is partitioned according to their spherical Laguerre Voronoi diagram on the surface of i. This method is analytical, and its implementation in a new program, BallContact, is fast and robust. We have used BallContact to study contacts in a database of 1551 high-resolution protein structures. We show that with this new definition of atomic contacts, we generate realistic representations of the environments of atoms and residues within a protein. In particular, we establish the importance of nonpolar contact areas that complement the information represented by the accessible surface areas. This new method bears similarity to the tessellation methods used to quantify atomic volumes and contacts, with the advantage that it does not require the presence of explicit solvent molecules if the surface of the protein is to be considered. © 2012 Wiley Periodicals, Inc. PMID:22965816
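A simplified sketch of this edge-based contact definition, substituting SciPy's unweighted Delaunay triangulation and a sphere-overlap filter for the weighted (regular) triangulation and dual-complex filtering that BallContact actually uses:

```python
import numpy as np
from scipy.spatial import Delaunay

def delaunay_contacts(centers, radii):
    """Candidate contacts: pairs of spheres whose centers are joined by a
    Delaunay edge AND which geometrically overlap. (A simplification of the
    alpha-shape dual-complex criterion described in the abstract.)"""
    tri = Delaunay(centers)
    edges = set()
    for simplex in tri.simplices:
        for i in range(len(simplex)):
            for j in range(i + 1, len(simplex)):
                a, b = sorted((int(simplex[i]), int(simplex[j])))
                edges.add((a, b))
    return sorted(
        (a, b) for a, b in edges
        if np.linalg.norm(centers[a] - centers[b]) < radii[a] + radii[b]
    )

# Toy "molecule": sphere 0 overlaps spheres 1-3; sphere 4 is far away.
centers = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0],
                    [0.0, 1.5, 0.0], [0.0, 0.0, 1.5], [5.0, 5.0, 5.0]])
radii = np.array([1.0, 1.0, 1.0, 1.0, 1.0])
print(delaunay_contacts(centers, radii))  # -> [(0, 1), (0, 2), (0, 3)]
```

Unlike BallContact this sketch stops at contact detection; the analytical contact areas would come from partitioning the spherical caps, as the abstract describes.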
Fast Multipole Methods for Particle Dynamics.
Kurzak, Jakub; Pettitt, Bernard M.
2006-08-30
The research described in this product was performed in part in the Environmental Molecular Sciences Laboratory, a national scientific user facility sponsored by the Department of Energy's Office of Biological and Environmental Research and located at Pacific Northwest National Laboratory. The growth of simulations of particle systems has been aided by advances in computer speed and algorithms. The adoption of O(N) algorithms to solve N-body simulation problems has been less rapid due to the fact that such scaling was only competitive for relatively large N. Our work seeks to find algorithmic modifications and practical implementations for intermediate values of N in typical use for molecular simulations. This article reviews fast multipole techniques for calculation of electrostatic interactions in molecular systems. The basic mathematics behind fast summations applied to long ranged forces is presented along with advanced techniques for accelerating the solution, including our most recent developments. The computational efficiency of the new methods facilitates both simulations of large systems as well as longer and therefore more realistic simulations of smaller systems.
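The fast summation idea reviewed above can be illustrated in miniature: the potential of a distant cluster of charges is well approximated by a few multipole moments about the cluster center, which is what lets FMM replace many pairwise interactions with one expansion. A minimal numerical check (generic 1/r potential, not the reviewed codes):

```python
# Far-field multipole approximation vs. direct pairwise summation.
import numpy as np

rng = np.random.default_rng(0)
charges = rng.uniform(0.5, 1.5, size=50)
positions = rng.uniform(-0.5, 0.5, size=(50, 3))  # cluster near the origin
target = np.array([20.0, 0.0, 0.0])               # far-away evaluation point

# Direct sum of Coulomb-like potentials 1/r (units dropped).
direct = np.sum(charges / np.linalg.norm(target - positions, axis=1))

# Monopole + dipole terms of the expansion about the charge centroid.
center = positions.mean(axis=0)
q_total = charges.sum()
dipole = np.sum(charges[:, None] * (positions - center), axis=0)
r_vec = target - center
r = np.linalg.norm(r_vec)
approx = q_total / r + dipole @ r_vec / r**3

rel_err = abs(approx - direct) / abs(direct)
```

The truncation error falls off as (cluster radius / distance)^2, which is why multipole acceptance criteria in FMM are distance-based.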
SINGLE-LABORATORY EVALUATION OF OSMIUM ANALYTICAL METHODS
The results of a single-laboratory study of osmium analytical methods are described. The methods studied include direct-aspiration atomic absorption spectroscopy (EPA Method 7550), furnace atomic absorption spectroscopy and inductively coupled plasma atomic emission spectroscopy ...
An analytic reconstruction method for PET based on cubic splines
NASA Astrophysics Data System (ADS)
Kastis, George A.; Kyriakopoulou, Dimitra; Fokas, Athanasios S.
2014-03-01
PET imaging is an important nuclear medicine modality that measures the in vivo distribution of imaging agents labeled with positron-emitting radionuclides. Image reconstruction is an essential component in tomographic medical imaging. In this study, we present the mathematical formulation and an improved numerical implementation of an analytic 2D reconstruction method called the Spline Reconstruction Technique (SRT). This technique is based on the numerical evaluation of the Hilbert transform of the sinogram via an approximation in terms of 'custom made' cubic splines. It also imposes sinogram thresholding, which restricts reconstruction to object pixels only. Furthermore, by utilizing certain symmetries it achieves a reconstruction time similar to that of FBP. We have implemented SRT in the software library STIR and have evaluated this method using simulated PET data. We present reconstructed images from several phantoms. Sinograms have been generated at various Poisson noise levels, and 20 noise realizations have been created at each level. In addition to visual comparisons of the reconstructed images, the contrast has been determined as a function of noise level. Further analysis includes the creation of line profiles, when necessary, to determine resolution. Numerical simulations suggest that the SRT algorithm produces fast and accurate reconstructions at realistic noise levels. The contrast is over 95% in all phantoms examined and is independent of noise level.
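The numerical Hilbert transform at the heart of SRT can be sketched independently of the spline machinery; for a band-limited periodic signal an FFT-based discrete Hilbert transform is essentially exact. A minimal sketch (not the STIR implementation) using scipy's analytic-signal routine:

```python
# FFT-based discrete Hilbert transform check: H{cos(3t)} = sin(3t).
import numpy as np
from scipy.signal import hilbert

t = np.linspace(0, 2 * np.pi, 1024, endpoint=False)
x = np.cos(3 * t)             # band-limited, exactly periodic on the grid
analytic = hilbert(x)         # returns x + i * H{x}
hx = np.imag(analytic)        # the Hilbert transform of x
err = np.max(np.abs(hx - np.sin(3 * t)))
```

SRT applies the same transform along each row of the sinogram, with cubic splines supplying the needed off-grid evaluations.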
40 CFR 158.355 - Enforcement analytical method.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 23 2010-07-01 2010-07-01 false Enforcement analytical method. 158.355 Section 158.355 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) PESTICIDE PROGRAMS DATA REQUIREMENTS FOR PESTICIDES Product Chemistry § 158.355 Enforcement analytical method....
40 CFR 158.355 - Enforcement analytical method.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 24 2011-07-01 2011-07-01 false Enforcement analytical method. 158.355 Section 158.355 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) PESTICIDE PROGRAMS DATA REQUIREMENTS FOR PESTICIDES Product Chemistry § 158.355 Enforcement analytical method....
40 CFR 158.355 - Enforcement analytical method.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 25 2013-07-01 2013-07-01 false Enforcement analytical method. 158.355 Section 158.355 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) PESTICIDE PROGRAMS DATA REQUIREMENTS FOR PESTICIDES Product Chemistry § 158.355 Enforcement analytical method....
40 CFR 158.355 - Enforcement analytical method.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 25 2012-07-01 2012-07-01 false Enforcement analytical method. 158.355 Section 158.355 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) PESTICIDE PROGRAMS DATA REQUIREMENTS FOR PESTICIDES Product Chemistry § 158.355 Enforcement analytical method....
77 FR 56176 - Analytical Methods Used in Periodic Reporting
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-12
... From the Federal Register Online via the Government Publishing Office POSTAL REGULATORY COMMISSION 39 CFR Part 3001 Analytical Methods Used in Periodic Reporting AGENCY: Postal Regulatory Commission... consider changes in the analytical methods approved for use in periodic reporting. Petition of...
40 CFR 158.355 - Enforcement analytical method.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 24 2014-07-01 2014-07-01 false Enforcement analytical method. 158.355 Section 158.355 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) PESTICIDE PROGRAMS DATA REQUIREMENTS FOR PESTICIDES Product Chemistry § 158.355 Enforcement analytical method....
Methods and Instruments for Fast Neutron Detection
Jordan, David V.; Reeder, Paul L.; Cooper, Matthew W.; McCormick, Kathleen R.; Peurrung, Anthony J.; Warren, Glen A.
2005-05-01
Pacific Northwest National Laboratory evaluated the performance of a large-area (~0.7 m2) plastic scintillator time-of-flight (TOF) sensor for direct detection of fast neutrons. This type of sensor is a readily area-scalable technology that provides broad-area geometrical coverage at a reasonably low cost. It can yield intrinsic detection efficiencies that compare favorably with moderator-based detection methods. The timing resolution achievable should permit substantially more precise time windowing of return neutron flux than would otherwise be possible with moderated detectors. The energy-deposition threshold imposed on each scintillator contributing to the event-definition trigger in a TOF system can be set to blind the sensor to direct emission from the neutron generator. The primary technical challenge addressed in the project was to understand the capabilities of a neutron TOF sensor in the limit of large scintillator area and small scintillator separation, a size regime in which the neutral particle’s flight path between the two scintillators is not tightly constrained.
Fast Single Image Super-Resolution Using a New Analytical Solution for l2 - l2 Problems.
Zhao, Ningning; Wei, Qi; Basarab, Adrian; Dobigeon, Nicolas; Kouame, Denis; Tourneret, Jean-Yves
2016-08-01
This paper addresses the problem of single image super-resolution (SR), which consists of recovering a high-resolution image from its blurred, decimated, and noisy version. The existing algorithms for single image SR use different strategies to handle the decimation and blurring operators. In addition to the traditional first-order gradient methods, recent techniques investigate splitting-based methods dividing the SR problem into up-sampling and deconvolution steps that can be easily solved. Instead of following this splitting strategy, we propose to deal with the decimation and blurring operators simultaneously by taking advantage of their particular properties in the frequency domain, leading to a new fast SR approach. Specifically, an analytical solution is derived and implemented efficiently for the Gaussian prior or any other regularization that can be formulated into an l2 -regularized quadratic model, i.e., an l2 - l2 optimization problem. The flexibility of the proposed SR scheme is shown through the use of various priors/regularizations, ranging from generic image priors to learning-based approaches. In the case of non-Gaussian priors, we show how the analytical solution derived from the Gaussian case can be embedded into traditional splitting frameworks, allowing the computation cost of existing algorithms to be decreased significantly. Simulation results conducted on several images with different priors illustrate the effectiveness of our fast SR approach compared with existing techniques. PMID:27187960
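The frequency-domain closed form that the paper exploits reduces, in the simplest case (blurring only, no decimation), to per-bin division: argmin_x ||y - h*x||^2 + lam*||x||^2 is solved bin-by-bin in the Fourier domain. A 1D sketch of that reduced case (illustrative, not the paper's full SR solution, which also handles the decimation operator):

```python
# Closed-form l2-l2 deconvolution in the Fourier domain (circular blur, 1D).
import numpy as np

rng = np.random.default_rng(1)
x_true = rng.standard_normal(256)
h = np.zeros(256)
h[0], h[1], h[-1] = 0.6, 0.2, 0.2   # kernel with DFT bounded away from zero
y = np.real(np.fft.ifft(np.fft.fft(h) * np.fft.fft(x_true)))  # blurred signal

lam = 1e-8                          # tiny ridge term (noiseless data)
Hf, Yf = np.fft.fft(h), np.fft.fft(y)
x_hat = np.real(np.fft.ifft(np.conj(Hf) * Yf / (np.abs(Hf) ** 2 + lam)))

rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
```

In the paper the decimation operator adds a block structure in the frequency domain, but the solution remains analytical, which is the source of the speedup over splitting-based iterations.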
Learner Language Analytic Methods and Pedagogical Implications
ERIC Educational Resources Information Center
Dyson, Bronwen
2010-01-01
Methods for analysing interlanguage have long aimed to capture learner language in its own right. By surveying the cognitive methods of Error Analysis, Obligatory Occasion Analysis and Frequency Analysis, this paper traces reformulations to attain this goal. The paper then focuses on Emergence Analysis, which fine-tunes learner language analysis…
Analytical methods used in a study of coke oven effluent.
Schulte, K A; Larsen, D J; Hornung, R W; Crable, J V
1975-02-01
In a coke oven study conducted by NIOSH, selected chemical analyses of airborne particulates, vapors, and metals in the emissions from five coke ovens were done. Eight sampling procedures and seven analytical techniques were used to analyze samples collected for the study. Six of the analytical methods used are discussed. PMID:1146677
Optimization of reversed-phase chromatography methods for peptide analytics.
Khalaf, Rushd; Baur, Daniel; Pfister, David
2015-12-18
The analytical description and quantification of peptide solutions are an essential part of quality control in peptide production processes and in peptide mapping techniques. Traditionally, an important tool is analytical reversed-phase liquid chromatography. In this work, we develop a model-based tool to find optimal analytical conditions in a clear, efficient and robust manner. The model, based on the Van't Hoff equation, the linear solvent strength correlation, and an analytical solution of the mass balance on a chromatographic column describing peptide retention under gradient conditions, is used to optimize the analytical-scale separation between components in a peptide mixture. The proposed tool is then applied to the design of analytical reversed-phase liquid chromatography methods for five different peptide mixtures. PMID:26620597
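The linear-solvent-strength (LSS) retention model named above admits a closed-form gradient retention time, which is what makes such optimization tools fast. A sketch with illustrative constants (not the authors' data or code), cross-checking the closed form against direct integration of the migration equation:

```python
# LSS model: ln k = ln k0 - S*phi, linear gradient phi(t) = phi0 + (dphi/tG)*t.
import numpy as np
from scipy.integrate import quad

t0, k0, S = 1.0, 50.0, 10.0        # dead time (min), k at phi=0, LSS slope
phi0, dphi, tG = 0.2, 0.6, 20.0    # gradient from 20% to 80% B over 20 min

k_start = k0 * np.exp(-S * phi0)   # retention factor at the gradient start
b = S * dphi * t0 / tG             # dimensionless gradient steepness

# Closed-form LSS gradient retention time: t_R = (t0/b)*ln(1 + b*k_start) + t0
tR_analytic = (t0 / b) * np.log(1.0 + b * k_start) + t0

# Numerical check: solve integral_0^{tR - t0} dt / (t0 * k(t)) = 1 by bisection.
def k_of_t(t):
    return k_start * np.exp(-b * t / t0)

def migrated(tr_prime):
    return quad(lambda t: 1.0 / (t0 * k_of_t(t)), 0.0, tr_prime)[0]

lo, hi = 0.0, 100.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if migrated(mid) < 1.0 else (lo, mid)
tR_numeric = 0.5 * (lo + hi) + t0
```

With such a closed form, scanning gradient slopes and temperatures for the best critical-pair resolution costs almost nothing compared with running the experiments.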
7 CFR 98.4 - Analytical methods.
Code of Federal Regulations, 2013 CFR
2013-01-01
... MEALS, READY-TO-EAT (MRE's), MEATS, AND MEAT PRODUCTS MRE's, Meats, and Related Meat Food Products § 98... perform analyses of meat, meat food products and MRE's are listed as follows: (1) Official Methods...
7 CFR 98.4 - Analytical methods.
Code of Federal Regulations, 2014 CFR
2014-01-01
... MEALS, READY-TO-EAT (MREs), MEATS, AND MEAT PRODUCTS MREs, Meats, and Related Meat Food Products § 98.4... of meat, meat food products and MREs are listed as follows: (1) Official Methods of Analysis of...
7 CFR 98.4 - Analytical methods.
Code of Federal Regulations, 2011 CFR
2011-01-01
... MEALS, READY-TO-EAT (MRE's), MEATS, AND MEAT PRODUCTS MRE's, Meats, and Related Meat Food Products § 98... perform analyses of meat, meat food products and MRE's are listed as follows: (1) Official Methods...
7 CFR 98.4 - Analytical methods.
Code of Federal Regulations, 2012 CFR
2012-01-01
... MEALS, READY-TO-EAT (MRE's), MEATS, AND MEAT PRODUCTS MRE's, Meats, and Related Meat Food Products § 98... perform analyses of meat, meat food products and MRE's are listed as follows: (1) Official Methods...
40 CFR 141.704 - Analytical methods.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Federal Register approves this incorporation by reference in accordance with 5 U.S.C. 552(a) and 1 CFR.... (b) E. coli. System must use methods for enumeration of E. coli in source water approved in § 136.3(a... of an E. coli sample for up to 48 hours between sample collection and initiation of analysis if...
FIELD ANALYTICAL SCREENING PROGRAM: PCP METHOD - INNOVATIVE TECHNOLOGY EVALUATION REPORT
This innovative technology evaluation report (ITER) presents information on the demonstration of the U.S. Environmental Protection Agency (EPA) Region 7 Superfund Field Analytical Screening Program (FASP) method for determining pentachlorophenol (PCP) contamination in soil and wa...
FIELD ANALYTICAL SCREENING PROGRAM PCB METHOD: INNOVATIVE TECHNOLOGY EVALUATION REPORT
This innovative technology evaluation report (ITER) presents information on the demonstration of the U.S. Environmental Protection Agency (EPA) Region 7 Superfund Field Analytical Screening Program (FASP) method for determining polychlorinated biphenyl (PCB) contamination in soil...
Analytical chemistry methods for mixed oxide fuel, March 1985
Not Available
1985-03-01
This standard provides analytical chemistry methods for the analysis of materials used to produce mixed oxide fuel. These materials are ceramic fuel and insulator pellets and the plutonium and uranium oxides and nitrates used to fabricate these pellets.
FIELD ANALYTICAL SCREENING PROGRAM: PCB METHOD - INNOVATIVE TECHNOLOGY REPORT
This innovative technology evaluation report (ITER) presents information on the demonstration of the U.S. Environmental Protection Agency (EPA) Region 7 Superfund Field Analytical Screening Program (FASP) method for determining polychlorinated biphenyl (PCB) contamination in soil...
Methods for Estimating Uncertainty in Factor Analytic Solutions
The EPA PMF (Environmental Protection Agency positive matrix factorization) version 5.0 and the underlying multilinear engine-executable ME-2 contain three methods for estimating uncertainty in factor analytic models: classical bootstrap (BS), displacement of factor elements (DI...
Analytical techniques for instrument design - matrix methods
Robinson, R.A.
1997-09-01
We take the traditional Cooper-Nathans approach, as has been applied for many years to steady-state triple-axis spectrometers, and consider its generalisation to other inelastic scattering spectrometers. This involves a number of simple manipulations of exponentials of quadratic forms. In particular, we discuss a toolbox of matrix manipulations that can be performed on the 6-dimensional Cooper-Nathans matrix: diagonalisation (Moller-Nielsen method), coordinate changes (e.g. from (Δk_I, Δk_F) to (ΔE, ΔQ) plus 2 dummy variables), integration of one or more variables (e.g. over such dummy variables), integration subject to linear constraints (e.g. Bragg's Law for analysers), inversion to give the variance-covariance matrix, and so on. We show how these tools can be combined to solve a number of important problems, within the narrow-band limit and the gaussian approximation. We will argue that a generalised program that can handle multiple different spectrometers could (and should) be written in parallel to the Monte-Carlo packages that are becoming available. We will also discuss the complementarity between detailed Monte-Carlo calculations and the approach presented here. In particular, Monte-Carlo methods traditionally simulate the real experiment as performed in practice, given a model scattering law, while the Cooper-Nathans method asks the inverse question: given that a neutron turns up in a particular spectrometer configuration (e.g. angle and time of flight), what is the probability distribution of possible scattering events at the sample? The Monte-Carlo approach could be applied in the same spirit to this question.
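The "integration of one or more variables" step in this toolbox is Gaussian marginalization: integrating variables out of exp(-x^T M x / 2) leaves a quadratic form given by the Schur complement of M. A small numerical sketch with a generic symmetric positive-definite 6x6 matrix (not a real spectrometer's resolution matrix):

```python
# Marginalizing a Gaussian quadratic form = Schur complement of its matrix.
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((6, 6))
M = A @ A.T + 6 * np.eye(6)          # symmetric positive-definite 6x6 matrix

keep, drop = [0, 1, 2, 3], [4, 5]    # integrate out the last two variables
Mkk = M[np.ix_(keep, keep)]
Mkd = M[np.ix_(keep, drop)]
Mdd = M[np.ix_(drop, drop)]
schur = Mkk - Mkd @ np.linalg.inv(Mdd) @ Mkd.T

# Equivalent route: invert to the variance-covariance matrix, take the
# sub-block for the kept variables, and invert back.
cov = np.linalg.inv(M)
expected = np.linalg.inv(cov[np.ix_(keep, keep)])
```

The same identity underlies the "inversion to give the variance-covariance matrix" step: dropping rows and columns of the covariance matrix is the marginalization dual of the Schur complement on the quadratic-form side.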
Handbook of Analytical Methods for Textile Composites
NASA Technical Reports Server (NTRS)
Cox, Brian N.; Flanagan, Gerry
1997-01-01
The purpose of this handbook is to introduce models and computer codes for predicting the properties of textile composites. The handbook includes several models for predicting the stress-strain response all the way to ultimate failure; methods for assessing work of fracture and notch sensitivity; and design rules for avoiding certain critical mechanisms of failure, such as delamination, by proper textile design. The following textiles received some treatment: 2D woven, braided, and knitted/stitched laminates and 3D interlock weaves, and braids.
Analytical techniques for instrument design -- Matrix methods
Robinson, R.A.
1997-12-31
The authors take the traditional Cooper-Nathans approach, as has been applied for many years for steady-state triple-axis spectrometers, and consider its generalization to other inelastic scattering spectrometers. This involves a number of simple manipulations of exponentials of quadratic forms. In particular, they discuss a toolbox of matrix manipulations that can be performed on the 6-dimensional Cooper-Nathans matrix. They show how these tools can be combined to solve a number of important problems, within the narrow-band limit and the gaussian approximation. They will argue that a generalized program that can handle multiple different spectrometers could (and should) be written in parallel to the Monte-Carlo packages that are becoming available. They also discuss the complementarity between detailed Monte-Carlo calculations and the approach presented here. In particular, Monte-Carlo methods traditionally simulate the real experiment as performed in practice, given a model scattering law, while the Cooper-Nathans method asks the inverse question: given that a neutron turns up in a particular spectrometer configuration (e.g. angle and time of flight), what is the probability distribution of possible scattering events at the sample? The Monte-Carlo approach could be applied in the same spirit to this question.
Comparison of finite-difference and analytic microwave calculation methods
Friedlander, F.I.; Jackson, H.W.; Barmatz, M.; Wagner, P.
1996-12-31
Normal modes and power absorption distributions in microwave cavities containing lossy dielectric samples were calculated for problems of interest in materials processing. The calculations were performed both using a commercially available finite-difference electromagnetic solver and by numerical evaluation of exact analytic expressions. Results obtained by the two methods applied to identical physical situations were compared. The studies validate the accuracy of the finite-difference electromagnetic solver. Relative advantages of the analytic and finite-difference methods are discussed.
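For an empty rectangular cavity the "exact analytic expressions" include the textbook mode-frequency formula f_mnp = (c/2)·sqrt((m/a)^2 + (n/b)^2 + (p/d)^2); lossy dielectric samples are what force either perturbation theory or the finite-difference solver. A minimal sketch of the empty-cavity formula (illustrative, not the paper's code):

```python
# Resonant frequencies of an ideal air-filled rectangular cavity.
import math

C = 299792458.0  # speed of light in vacuum, m/s

def cavity_mode_freq(m, n, p, a, b, d):
    """Resonant frequency (Hz) of mode (m, n, p) in an a x b x d cavity (m)."""
    return (C / 2.0) * math.sqrt((m / a) ** 2 + (n / b) ** 2 + (p / d) ** 2)

# The TE101 mode of a 10 cm cubic cavity resonates near 2.12 GHz.
f101 = cavity_mode_freq(1, 0, 1, 0.1, 0.1, 0.1)
```

Comparing such closed forms against the finite-difference solver's eigenfrequencies is exactly the kind of cross-validation the study performs.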
Analytical instruments, ionization sources, and ionization methods
Atkinson, David A.; Mottishaw, Paul
2006-04-11
Methods and apparatus for simultaneous vaporization and ionization of a sample in a spectrometer prior to introducing the sample into the drift tube of the analyzer are disclosed. The apparatus includes a vaporization/ionization source having an electrically conductive conduit configured to receive sample particulate which is conveyed to a discharge end of the conduit. Positioned proximate to the discharge end of the conduit is an electrically conductive reference device. The conduit and the reference device act as electrodes and have an electrical potential maintained between them sufficient to cause a corona effect, which will cause at least partial simultaneous ionization and vaporization of the sample particulate. The electrical potential can be maintained to establish a continuous corona, or can be held slightly below the breakdown potential such that arrival of particulate at the point of proximity of the electrodes disrupts the potential, causing arcing and the corona effect. The electrical potential can also be varied to cause periodic arcing between the electrodes such that particulate passing through the arc is simultaneously vaporized and ionized. The invention further includes a spectrometer containing the source. The invention is particularly useful for ion mobility spectrometers and atmospheric pressure ionization mass spectrometers.
Fracture mechanics life analytical methods verification testing
NASA Astrophysics Data System (ADS)
Favenesi, J. A.; Clemons, T. G.; Riddell, W. T.; Ingraffea, A. R.; Wawrzynek, P. A.
1994-09-01
The objective was to evaluate NASCRAC (trademark) version 2.0, a second generation fracture analysis code, for verification and validity. NASCRAC was evaluated using a combination of comparisons to the literature, closed-form solutions, numerical analyses, and tests. Several limitations and minor errors were detected. Additionally, a number of major flaws were discovered. These major flaws were generally due to application of a specific method or theory, not due to programming logic. Results are presented for the following program capabilities: K versus a, J versus a, crack opening area, life calculation due to fatigue crack growth, tolerable crack size, proof test logic, tearing instability, creep crack growth, crack transitioning, crack retardation due to overloads, and elastic-plastic stress redistribution. It is concluded that the code is an acceptable fracture tool for K solutions of simplified geometries, for a limited number of J and crack opening area solutions, and for fatigue crack propagation with the Paris equation and constant amplitude loads when the Paris equation is applicable.
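The fatigue crack growth capability evaluated above rests on the Paris equation da/dN = C·(ΔK)^m with ΔK = Y·Δσ·sqrt(π·a). For constant-amplitude loading and m ≠ 2 the life integral has a closed form, which a numerical integration should reproduce; a sketch with illustrative constants (not NASCRAC's implementation):

```python
# Paris-law life: closed-form integral vs. trapezoidal cycle counting.
import math

C, m = 1e-11, 3.0            # Paris constants (illustrative units)
Y, dS = 1.0, 100.0           # geometry factor and stress range
a0, ac = 1e-3, 1e-2          # initial and critical crack lengths (m)

def dadN(a):
    return C * (Y * dS * math.sqrt(math.pi * a)) ** m

# Closed form: N = (ac^(1-m/2) - a0^(1-m/2)) / (C*(Y*dS*sqrt(pi))^m * (1-m/2))
coeff = C * (Y * dS * math.sqrt(math.pi)) ** m * (1.0 - m / 2.0)
N_analytic = (ac ** (1.0 - m / 2.0) - a0 ** (1.0 - m / 2.0)) / coeff

# Numerical check: integrate dN = da / (da/dN) with the trapezoidal rule.
steps = 20000
da = (ac - a0) / steps
N_numeric, a = 0.0, a0
for _ in range(steps):
    N_numeric += 0.5 * da * (1.0 / dadN(a) + 1.0 / dadN(a + da))
    a += da
```

Verification exercises like the one reported above amount to exactly this kind of comparison, plus handbook K solutions for the geometry factor Y.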
Fracture mechanics life analytical methods verification testing
NASA Technical Reports Server (NTRS)
Favenesi, J. A.; Clemmons, T. G.; Lambert, T. J.
1994-01-01
Verification and validation of the basic information capabilities in NASCRAC has been completed. The basic information includes computation of K versus a, J versus a, and crack opening area versus a. These quantities represent building blocks which NASCRAC uses in its other computations such as fatigue crack life and tearing instability. Several methods were used to verify and validate the basic information capabilities. The simple configurations such as the compact tension specimen and a crack in a finite plate were verified and validated versus handbook solutions for simple loads. For general loads using weight functions, offline integration using standard FORTRAN routines was performed. For more complicated configurations such as corner cracks and semielliptical cracks, NASCRAC solutions were verified and validated versus published results and finite element analyses. A few minor problems were identified in the basic information capabilities of the simple configurations. In the more complicated configurations, significant differences between NASCRAC and reference solutions were observed because NASCRAC calculates its solutions as averaged values across the entire crack front whereas the reference solutions were computed for a single point.
Fracture mechanics life analytical methods verification testing
NASA Technical Reports Server (NTRS)
Favenesi, J. A.; Clemons, T. G.; Riddell, W. T.; Ingraffea, A. R.; Wawrzynek, P. A.
1994-01-01
The objective was to evaluate NASCRAC (trademark) version 2.0, a second generation fracture analysis code, for verification and validity. NASCRAC was evaluated using a combination of comparisons to the literature, closed-form solutions, numerical analyses, and tests. Several limitations and minor errors were detected. Additionally, a number of major flaws were discovered. These major flaws were generally due to application of a specific method or theory, not due to programming logic. Results are presented for the following program capabilities: K versus a, J versus a, crack opening area, life calculation due to fatigue crack growth, tolerable crack size, proof test logic, tearing instability, creep crack growth, crack transitioning, crack retardation due to overloads, and elastic-plastic stress redistribution. It is concluded that the code is an acceptable fracture tool for K solutions of simplified geometries, for a limited number of J and crack opening area solutions, and for fatigue crack propagation with the Paris equation and constant amplitude loads when the Paris equation is applicable.
Rotary fast tool servo system and methods
Montesanti, Richard C.; Trumper, David L.
2007-10-02
A high bandwidth rotary fast tool servo provides tool motion in a direction nominally parallel to the surface-normal of a workpiece at the point of contact between the cutting tool and workpiece. Three or more flexure blades having all ends fixed are used to form an axis of rotation for a swing arm that carries a cutting tool at a set radius from the axis of rotation. An actuator rotates a swing arm assembly such that a cutting tool is moved in and away from the lathe-mounted, rotating workpiece in a rapid and controlled manner in order to machine the workpiece. A pair of position sensors provides rotation and position information for a swing arm to a control system. A control system commands and coordinates motion of the fast tool servo with the motion of a spindle, rotating table, cross-feed slide, and in-feed slide of a precision lathe.
40 CFR 766.16 - Developing the analytical test method.
Code of Federal Regulations, 2014 CFR
2014-07-01
... meet the requirements of the chemical matrix. (d) Analysis. The method of choice is High Resolution Gas... analytical test method. Because of the matrix differences of the chemicals listed for testing, no one method for sample selection, preparation, extraction and clean up is prescribed. For analysis,...
40 CFR 766.16 - Developing the analytical test method.
Code of Federal Regulations, 2013 CFR
2013-07-01
... meet the requirements of the chemical matrix. (d) Analysis. The method of choice is High Resolution Gas... analytical test method. Because of the matrix differences of the chemicals listed for testing, no one method for sample selection, preparation, extraction and clean up is prescribed. For analysis,...
40 CFR 766.16 - Developing the analytical test method.
Code of Federal Regulations, 2012 CFR
2012-07-01
... meet the requirements of the chemical matrix. (d) Analysis. The method of choice is High Resolution Gas... analytical test method. Because of the matrix differences of the chemicals listed for testing, no one method for sample selection, preparation, extraction and clean up is prescribed. For analysis,...
Internal R and D task summary report: analytical methods development
Schweighardt, F.K.
1983-07-01
International Coal Refining Company (ICRC) conducted two research programs to develop analytical procedures for characterizing the feed, intermediates, and products of the proposed SRC-I Demonstration Plant. The major conclusion is that standard analytical methods must be defined and assigned statistical error limits of precision and reproducibility early in development. Comparing all SRC-I data, or data from different processes, is complex and expensive if common data correlation procedures are not followed. ICRC recommends that processes be audited analytically and statistical analyses generated as quickly as possible, in order to quantify process-dependent and -independent variables. 16 references, 10 figures, 20 tables.
Analytical method transfer: new descriptive approach for acceptance criteria definition.
de Fontenay, Gérald
2008-01-01
Within the pharmaceutical industry, method transfers are now commonplace during the life cycle of an analytical method. Setting acceptance criteria for analytical transfers is, however, much more difficult than usually described. Criteria which are too wide may lead to the acceptance of a laboratory providing non-equivalent results, resulting in bad release/reject decisions for pharmaceutical products (a consumer risk). On the contrary, criteria which are too tight may lead to the rejection of an equivalent laboratory, resulting in time costs and delay in the transfer process (an industrial risk). The consumer risk has to be controlled first. But the risk does depend on the method capability (tolerance to method precision ratio). Analytical transfers were simulated for different scenarios (different method capabilities and transfer designs, 10,000 simulations per test). The results of the simulations showed that the method capability has a strong influence on the probability of success of its transfer. For the transfer design, the number of independent analytical runs to be performed on a same batch has much more influence than the number of replicates per run, especially when the inter-day variability of the method is high. A classic descriptive approach for analytical method transfer does not take into account the variability of the method, and therefore, no risks are controlled. Tools for designing analytical transfers and defining a new descriptive acceptance criterion, which take into account the intra- and inter-day variability of the method, are provided for a better risk evaluation by non-statisticians. PMID:17961955
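The simulation approach described above, drawing transfer outcomes from the method's intra- and inter-day variance components, can be sketched in a few lines. The variance components, acceptance criterion, and design below are hypothetical, chosen only to show why adding independent runs tightens the transferred mean more than adding replicates when inter-day variability dominates:

```python
# Monte Carlo estimate of the probability that an equivalent receiving lab
# passes a fixed mean-difference acceptance criterion (illustrative numbers).
import numpy as np

def pass_probability(n_runs, n_reps, sd_day, sd_rep, limit,
                     n_sim=20000, seed=3):
    """P(|mean deviation from true value| <= limit) for an unbiased lab."""
    rng = np.random.default_rng(seed)
    day_effects = rng.normal(0.0, sd_day, size=(n_sim, n_runs))
    rep_noise = rng.normal(0.0, sd_rep, size=(n_sim, n_runs, n_reps))
    results = day_effects[:, :, None] + rep_noise   # deviations from truth
    mean_dev = results.mean(axis=(1, 2))
    return float(np.mean(np.abs(mean_dev) <= limit))

# With dominant inter-day variability, doubling the runs helps most.
p3 = pass_probability(n_runs=3, n_reps=2, sd_day=1.0, sd_rep=0.5, limit=1.5)
p6 = pass_probability(n_runs=6, n_reps=2, sd_day=1.0, sd_rep=0.5, limit=1.5)
```

The variance of the lab mean is sd_day^2/n_runs + sd_rep^2/(n_runs*n_reps): replicates only shrink the second term, which is why the paper finds the number of independent runs to be the stronger design lever.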
Statistically qualified neuro-analytic failure detection method and system
Vilim, Richard B.; Garcia, Humberto E.; Chen, Frederick W.
2002-03-02
An apparatus and method for monitoring a process involve the development and application of a statistically qualified neuro-analytic (SQNA) model to accurately and reliably identify process change. The development of the SQNA model is accomplished in two stages: deterministic model adaptation, followed by stochastic modification of the adapted deterministic model. Deterministic model adaptation involves formulating an analytic model of the process representing known process characteristics, augmenting the analytic model with a neural network that captures unknown process characteristics, and training the resulting neuro-analytic model by adjusting the neural network weights according to a unique scaled equation-error minimization technique. Stochastic model modification involves qualifying any remaining uncertainty in the trained neuro-analytic model by formulating a likelihood function, given an error propagation equation, for computing the probability that the neuro-analytic model generates the measured process output. Preferably, the developed SQNA model is validated using known sequential probability ratio tests and applied to the process as an on-line monitoring system. Illustrative of the method and apparatus, the method is applied to a peristaltic pump system.
Analytical methods for quantitation of prenylated flavonoids from hops
Nikolić, Dejan; van Breemen, Richard B.
2013-01-01
The female flowers of hops (Humulus lupulus L.) are used as a flavoring agent in the brewing industry. There is growing interest in possible health benefits of hops, particularly as estrogenic and chemopreventive agents. Among the possible active constituents, most of the attention has focused on prenylated flavonoids, which can chemically be classified as prenylated chalcones and prenylated flavanones. Among chalcones, xanthohumol (XN) and desmethylxanthohumol (DMX) have been the most studied, while among flavanones, 8-prenylnaringenin (8-PN) and 6-prenylnaringenin (6-PN) have received the most attention. Because of the interest in medicinal properties of prenylated flavonoids, there is demand for accurate, reproducible and sensitive analytical methods to quantify these compounds in various matrices. Such methods are needed, for example, for quality control and standardization of hop extracts, measurement of the content of prenylated flavonoids in beer, and to determine pharmacokinetic properties of prenylated flavonoids in animals and humans. This review summarizes currently available analytical methods for quantitative analysis of the major prenylated flavonoids, with an emphasis on the LC-MS and LC-MS-MS methods and their recent applications to biomedical research on hops. This review covers all methods in which prenylated flavonoids have been measured, either as the primary analytes or as a part of a larger group of analytes. The review also discusses methodological issues relating to the quantitative analysis of these compounds regardless of the chosen analytical approach. PMID:24077106
Fast Method of Detection of Periodical Radio Sources
NASA Astrophysics Data System (ADS)
Rodin, A. E.; Samodourov, V. A.; Oreshko, V. V.
2015-11-01
A fast method for searching periodical radio sources based on the Fast Fourier Transform at the radio telescope LPA LPI (the Large Phased Array of the Lebedev Physical Institute) is described. Examples of detection of already known pulsars and a list of new periodical radio sources with coordinates, period, and dispersion measure are presented.
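The core of such an FFT-based periodicity search can be illustrated with a minimal sketch: compute the power spectrum of a dedispersed time series and flag bins well above the mean spectral power. The threshold, sampling parameters, and toy pulse train below are illustrative assumptions, not the LPA LPI pipeline.

```python
import numpy as np

def find_periodic_candidates(signal, dt, threshold=8.0):
    """Flag Fourier bins whose power exceeds `threshold` times the mean
    spectral power -- a toy stand-in for an FFT periodicity search."""
    power = np.abs(np.fft.rfft(signal)) ** 2
    power[0] = 0.0                              # discard the DC term
    freqs = np.fft.rfftfreq(len(signal), d=dt)
    hits = np.where(power > threshold * power[1:].mean())[0]
    return [(freqs[i], power[i]) for i in hits]

rng = np.random.default_rng(0)
n, dt = 8000, 0.01                              # 80 s sampled at 100 Hz
pulses = np.zeros(n)
pulses[::80] = 1.0                              # one pulse every 0.8 s -> 1.25 Hz
data = pulses + 0.1 * rng.standard_normal(n)
candidates = find_periodic_candidates(data, dt)
print(round(candidates[0][0], 2))               # 1.25 (the fundamental)
```

A real search would also sweep trial dispersion measures and sum harmonics before thresholding; this sketch shows only the spectral-detection step.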
Development of quality-by-design analytical methods.
Vogt, Frederick G; Kord, Alireza S
2011-03-01
Quality-by-design (QbD) is a systematic approach to drug development, which begins with predefined objectives, and uses science and risk management approaches to gain product and process understanding and ultimately process control. The concept of QbD can be extended to analytical methods. QbD mandates the definition of a goal for the method, and emphasizes thorough evaluation and scouting of alternative methods in a systematic way to obtain optimal method performance. Candidate methods are then carefully assessed in a structured manner for risks, and are challenged to determine if robustness and ruggedness criteria are satisfied. As a result of these studies, the method performance can be understood and improved if necessary, and a control strategy can be defined to manage risk and ensure the method performs as desired when validated and deployed. In this review, the current state of analytical QbD in the industry is detailed with examples of the application of analytical QbD principles to a range of analytical methods, including high-performance liquid chromatography, Karl Fischer titration for moisture content, vibrational spectroscopy for chemical identification, quantitative color measurement, and trace analysis for genotoxic impurities. PMID:21280050
An analytical method for designing low noise helicopter transmissions
NASA Technical Reports Server (NTRS)
Bossler, R. B., Jr.; Bowes, M. A.; Royal, A. C.
1978-01-01
The development and experimental validation of a method for analytically modeling the noise mechanism in the helicopter geared power transmission systems is described. This method can be used within the design process to predict interior noise levels and to investigate the noise reducing potential of alternative transmission design details. Examples are discussed.
FIELD ANALYTICAL SCREENING PROGRAM: PCP METHOD - INNOVATIVE TECHNOLOGY EVALUATION REPORT
The Field Analytical Screening Program (FASP) pentachlorophenol (PCP) method uses a gas chromatograph (GC) equipped with a megabore capillary column and flame ionization detector (FID) and electron capture detector (ECD) to identify and quantify PCP. The FASP PCP method is design...
Comparison of scalable fast methods for long-range interactions.
Arnold, Axel; Fahrenberger, Florian; Holm, Christian; Lenz, Olaf; Bolten, Matthias; Dachsel, Holger; Halver, Rene; Kabadshow, Ivo; Gähler, Franz; Heber, Frederik; Iseringhausen, Julian; Hofmann, Michael; Pippig, Michael; Potts, Daniel; Sutmann, Godehard
2013-12-01
Based on a parallel scalable library for Coulomb interactions in particle systems, a comparison between the fast multipole method (FMM), multigrid-based methods, fast Fourier transform (FFT)-based methods, and a Maxwell solver is provided for the case of three-dimensional periodic boundary conditions. These methods are directly compared with respect to complexity, scalability, performance, and accuracy. To ensure comparable conditions for all methods and to cover typical applications, we tested all methods on the same set of computers using identical benchmark systems. Our findings suggest that, depending on system size and desired accuracy, the FMM- and FFT-based methods are most efficient in performance and stability. PMID:24483585
Beamforming and holography image formation methods: an analytic study.
Solimene, Raffaele; Cuccaro, Antonio; Ruvio, Giuseppe; Tapia, Daniel Flores; O'Halloran, Martin
2016-04-18
Beamforming and holographic imaging procedures are widely used in many applications such as radar sensing, sonar, and microwave medical imaging. Nevertheless, an analytical comparison of the methods has not been done. In this paper, the point spread functions pertaining to the two methods are analytically determined. This allows a formal comparison of the two techniques and makes it easy to see how performance depends on the configuration parameters, including frequency range, number of scatterers, and data discretization. It is demonstrated that beamforming and holography achieve essentially the same resolution, but beamforming requires a cheaper (fewer sensors) configuration. PMID:27137336
A New Analytic Alignment Method for a SINS
Tan, Caiming; Zhu, Xinhua; Su, Yan; Wang, Yu; Wu, Zhiqiang; Gu, Dongbing
2015-01-01
Analytic alignment is a type of self-alignment for a strapdown inertial navigation system (SINS) that is based solely on two non-collinear vectors: the gravity and rotational velocity vectors of the Earth at a stationary base on the ground. The attitude of the SINS with respect to the Earth can be obtained directly using the TRIAD algorithm given two vector measurements. For a traditional analytic coarse alignment, all six outputs from the inertial measurement unit (IMU) are used to compute the attitude. In this study, a novel analytic alignment method called selective alignment is presented. This method uses only three outputs of the IMU plus a few properties of the remaining outputs, such as the sign and the approximate value, to calculate the attitude. Simulations and experimental results demonstrate the validity of this method; in the vehicle experiment, the precision of yaw is improved using selective alignment compared to the traditional analytic coarse alignment. The selective alignment principle provides an accurate relationship between the outputs and the attitude of the SINS relative to the Earth for a stationary base, and it is an extension of the TRIAD algorithm. The selective alignment approach has potential uses in applications such as self-alignment, fault detection, and self-calibration. PMID:26556353
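The TRIAD step the abstract builds on can be sketched directly: given two non-collinear vectors measured in the body frame and known in the navigation frame, construct an orthonormal triad in each frame and combine them into the attitude matrix. The frame conventions and test vectors below are illustrative assumptions.

```python
import numpy as np

def triad(v1_b, v2_b, v1_n, v2_n):
    """TRIAD attitude determination: from two vector pairs measured in
    the body frame and known in the navigation frame, return the
    rotation matrix taking body-frame vectors into the navigation frame."""
    def frame(v1, v2):
        t1 = v1 / np.linalg.norm(v1)
        t2 = np.cross(v1, v2)
        t2 = t2 / np.linalg.norm(t2)
        return np.column_stack((t1, t2, np.cross(t1, t2)))
    return frame(v1_n, v2_n) @ frame(v1_b, v2_b).T

# Recover a known 30-degree yaw rotation from synthetic measurements:
th = np.deg2rad(30.0)
R_true = np.array([[np.cos(th), -np.sin(th), 0.0],
                   [np.sin(th),  np.cos(th), 0.0],
                   [0.0,         0.0,        1.0]])
g_n = np.array([0.0, 0.0, -1.0])     # gravity direction, navigation frame
w_n = np.array([0.6, 0.0, 0.8])      # Earth-rate direction (illustrative)
R_est = triad(R_true.T @ g_n, R_true.T @ w_n, g_n, w_n)
print(np.allclose(R_est, R_true))    # True
```

Note that TRIAD weights the first vector exactly and the second only through the cross product, which is why the choice of which measurement to trust more matters in practice.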
A Fast SVD-Hidden-nodes based Extreme Learning Machine for Large-Scale Data Analytics.
Deng, Wan-Yu; Bai, Zuo; Huang, Guang-Bin; Zheng, Qing-Hua
2016-05-01
Big dimensional data is a growing trend that is emerging in many real world contexts, extending from web mining, gene expression analysis, and protein-protein interaction to high-frequency financial data. Nowadays, there is a growing consensus that increasing dimensionality has impeding effects on the performance of classifiers, which is termed the "peaking phenomenon" in the field of machine intelligence. To address the issue, dimensionality reduction is commonly employed as a preprocessing step on the Big dimensional data before building the classifiers. In this paper, we propose an Extreme Learning Machine (ELM) approach for large-scale data analytics. In contrast to existing approaches, we embed hidden nodes that are designed using singular value decomposition (SVD) into the classical ELM. These SVD nodes in the hidden layer are shown to capture the underlying characteristics of the Big dimensional data well, exhibiting excellent generalization performance. The drawback of using SVD on the entire dataset, however, is the high computational complexity involved. To address this, a fast divide-and-conquer approximation scheme is introduced to maintain computational tractability on high volume data. The resultant algorithm is labeled here as Fast Singular Value Decomposition-Hidden-nodes based Extreme Learning Machine, or FSVD-H-ELM in short. In FSVD-H-ELM, instead of identifying the SVD hidden nodes directly from the entire dataset, SVD hidden nodes are derived from multiple random subsets of data sampled from the original dataset. Comprehensive experiments and comparisons are conducted to assess the FSVD-H-ELM against other state-of-the-art algorithms. The results obtained demonstrate the superior generalization performance and efficiency of the FSVD-H-ELM. PMID:26907860
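A minimal sketch of the SVD-hidden-nodes idea (not the authors' FSVD-H-ELM implementation): derive the input weights from the top right singular vectors of random data subsets, then solve the ELM output weights by least squares. All names, subset sizes, and the tanh activation are assumptions for illustration.

```python
import numpy as np

def fsvd_elm_fit(X, y, n_hidden=20, n_subsets=4, subset_size=200, seed=0):
    """Sketch of SVD-derived hidden nodes for an ELM: input weights come
    from the top right singular vectors of random data subsets (rather
    than being drawn at random, as in a classical ELM); output weights
    are then solved by least squares."""
    rng = np.random.default_rng(seed)
    per = n_hidden // n_subsets
    blocks = []
    for _ in range(n_subsets):
        idx = rng.choice(len(X), size=min(len(X), subset_size), replace=False)
        _, _, vt = np.linalg.svd(X[idx], full_matrices=False)
        blocks.append(vt[:per])                # top singular directions
    W = np.vstack(blocks)                      # (n_hidden, n_features)
    H = np.tanh(X @ W.T)                       # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    return W, beta

def fsvd_elm_predict(X, W, beta):
    return np.tanh(X @ W.T) @ beta

rng = np.random.default_rng(1)
X = 0.1 * rng.standard_normal((300, 10))       # small inputs keep tanh near-linear
y = X @ rng.standard_normal(10)
W, beta = fsvd_elm_fit(X, y)
print(W.shape)                                 # (20, 10)
```

Deriving weights from subsets rather than the full SVD is what keeps the preprocessing cost bounded as the dataset grows, at the price of noisier singular directions.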
Analytical methods for water disinfection byproducts in foods and beverages.
Raymer, J H; Pellizzari, E; Childs, B; Briggs, K; Shoemaker, J A
2000-01-01
The determination of exposure to drinking water disinfection byproducts (DBPs) requires an understanding of how drinking water comes into contact with humans through multiple pathways. In order to facilitate the investigation of human exposure to DBPs via foods and beverages, analytical method development efforts were initiated for haloacetonitriles, haloketones, chloropicrin, and the haloacetic acids (HAAs) in these matrices. The recoveries of the target analytes were investigated from composite foods and beverages. Individual foods and beverages used to investigate the general applicability of the developed methods were selected for testing based on their water content and frequency of consumption. The haloacetonitriles, the haloketones, and chloral hydrate were generally well recovered (70-130%), except for bromochloroacetonitrile (64%) and dibromoacetonitrile (55%), from foods spiked after homogenization and following extraction with methyl-t-butyl ether (MTBE); the addition of acetone was found to be necessary to improve recoveries from beverages. The process of homogenization resulted in decreased recoveries for the more volatile analytes despite the presence of dry ice. The HAAs were generally well recovered (70-130%), except for trichloroacetic acid (58%) and tribromoacetic acid (132%), from foods, but low recoveries and emulsion formation were experienced with some beverages. With both groups of analytes, certain matrices were more problematic (as measured by volatility losses and emulsion formation) than others with regard to processing and analyte recovery. PMID:11138673
Laser: a Tool for Optimization and Enhancement of Analytical Methods
Preisler, Jan
1997-01-01
In this work, we use lasers to enhance possibilities of laser desorption methods and to optimize coating procedure for capillary electrophoresis (CE). We use several different instrumental arrangements to characterize matrix-assisted laser desorption (MALD) at atmospheric pressure and in vacuum. In imaging mode, 488-nm argon-ion laser beam is deflected by two acousto-optic deflectors to scan plumes desorbed at atmospheric pressure via absorption. All absorbing species, including neutral molecules, are monitored. Interesting features, e.g. differences between the initial plume and subsequent plumes desorbed from the same spot, or the formation of two plumes from one laser shot are observed. Total plume absorbance can be correlated with the acoustic signal generated by the desorption event. A model equation for the plume velocity as a function of time is proposed. Alternatively, the use of a static laser beam for observation enables reliable determination of plume velocities even when they are very high. Static scattering detection reveals negative influence of particle spallation on MS signal. Ion formation during MALD was monitored using 193-nm light to photodissociate a portion of insulin ion plume. These results define the optimal conditions for desorbing analytes from matrices, as opposed to achieving a compromise between efficient desorption and efficient ionization as is practiced in mass spectrometry. In CE experiment, we examined changes in a poly(ethylene oxide) (PEO) coating by continuously monitoring the electroosmotic flow (EOF) in a fused-silica capillary during electrophoresis. An imaging CCD camera was used to follow the motion of a fluorescent neutral marker zone along the length of the capillary excited by 488-nm Ar-ion laser. The PEO coating was shown to reduce the velocity of EOF by more than an order of magnitude compared to a bare capillary at pH 7.0. The coating protocol was important, especially at an intermediate pH of 7.7. The increase of p
ANALYTICAL METHOD READINESS FOR THE CONTAMINANT CANDIDATE LIST
The Contaminant Candidate List (CCL), which was promulgated in March 1998, includes 50 chemical and 10 microbiological contaminants/contaminant groups. At the time of promulgation, analytical methods were available for 6 inorganic and 28 organic contaminants. Since then, 4 anal...
Analytical chemistry methods for metallic core components: Revision March 1985
Not Available
1985-03-01
This standard provides analytical chemistry methods for the analysis of alloys used to fabricate core components. These alloys are 302, 308, 316, 316-Ti, and 321 stainless steels and 600 and 718 Inconels and they may include other 300-series stainless steels.
40 CFR 161.180 - Enforcement analytical method.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 24 2011-07-01 2011-07-01 false Enforcement analytical method. 161.180 Section 161.180 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) PESTICIDE PROGRAMS DATA REQUIREMENTS FOR REGISTRATION OF ANTIMICROBIAL PESTICIDES Product Chemistry Data...
40 CFR 161.180 - Enforcement analytical method.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 25 2013-07-01 2013-07-01 false Enforcement analytical method. 161.180 Section 161.180 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) PESTICIDE PROGRAMS DATA REQUIREMENTS FOR REGISTRATION OF ANTIMICROBIAL PESTICIDES Product Chemistry Data...
40 CFR 161.180 - Enforcement analytical method.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 25 2012-07-01 2012-07-01 false Enforcement analytical method. 161.180 Section 161.180 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) PESTICIDE PROGRAMS DATA REQUIREMENTS FOR REGISTRATION OF ANTIMICROBIAL PESTICIDES Product Chemistry Data...
Fast and Sensitive Method for Determination of Domoic Acid in Mussel Tissue.
Barbaro, Elena; Zangrando, Roberta; Barbante, Carlo; Gambaro, Andrea
2016-01-01
Domoic acid (DA), a neurotoxic amino acid produced by diatoms, is the main cause of amnesic shellfish poisoning (ASP). In this work, we propose a very simple and fast analytical method to determine DA in mussel tissue. The method consists of two consecutive extractions and requires no purification steps, due to a reduction of the extraction of the interfering species and the application of a very sensitive and selective HILIC-MS/MS method. The procedural method was validated through the estimation of trueness, extraction yield, precision, and the detection and quantification limits of the analytical method. The sample preparation was also evaluated through qualitative and quantitative evaluations of the matrix effect. These evaluations were conducted both on the DA-free matrix spiked with known DA concentrations and on the reference certified material (RCM). We developed a very selective LC-MS/MS method with a very low method detection limit (9 ng g⁻¹) without cleanup steps. PMID:26904720
Fast total focusing method for ultrasonic imaging
NASA Astrophysics Data System (ADS)
Carcreff, Ewen; Dao, Gavin; Braconnier, Dominique
2016-02-01
Synthetic aperture focusing technique (SAFT) and total focusing method (TFM) have become popular tools in the field of ultrasonic non-destructive testing. In particular, they are employed for detection and characterization of flaws. From data acquired with a transducer array, those techniques aim at reconstructing an image of the inspected object from coherent summations. In this paper, we make a comparison between the standard technique and a migration approach. Using experimental data, we show that the developed approach is faster and offers a better signal-to-noise ratio than the standard total focusing method. Moreover, the migration is particularly effective for near-surface imaging, where standard methods typically fail. On the other hand, the migration approach is only adapted to layered objects, whereas the standard technique can fit complex geometries. The methods are tested on homogeneous pieces containing artificial flaws such as side-drilled holes.
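A bare-bones delay-and-sum TFM on full matrix capture (FMC) data can be sketched as follows. The array geometry, wave speed, and point-scatterer test are illustrative assumptions, and the faster migration variant the paper develops is not reproduced here.

```python
import numpy as np

def tfm_image(fmc, elem_x, grid_x, grid_z, c, fs):
    """Delay-and-sum TFM on full matrix capture data fmc[tx, rx, t]:
    for each image pixel, sum the samples at the transmit+receive
    times of flight over every transmitter-receiver pair."""
    n = len(elem_x)
    rx = np.arange(n)
    img = np.zeros((len(grid_z), len(grid_x)))
    for iz, z in enumerate(grid_z):
        for ix, x in enumerate(grid_x):
            tof = np.hypot(elem_x - x, z) / c          # one-way times of flight
            for tx in range(n):
                s = np.rint((tof[tx] + tof) * fs).astype(int)
                ok = s < fmc.shape[2]
                img[iz, ix] += fmc[tx, rx[ok], s[ok]].sum()
    return np.abs(img)

# Synthetic FMC data for a single point scatterer at (0, 10 mm):
elem_x = np.linspace(-5e-3, 5e-3, 8)                   # 8-element linear array
c, fs, nt = 6000.0, 100e6, 1024                        # steel-like speed, 100 MHz
tof = np.hypot(elem_x - 0.0, 10e-3) / c
fmc = np.zeros((8, 8, nt))
for tx in range(8):
    for r in range(8):
        fmc[tx, r, int(np.rint((tof[tx] + tof[r]) * fs))] = 1.0
img = tfm_image(fmc, elem_x,
                np.linspace(-2e-3, 2e-3, 5), np.linspace(8e-3, 12e-3, 5), c, fs)
iz, ix = np.unravel_index(img.argmax(), img.shape)
print(iz, ix)                                          # 2 2 -> the scatterer pixel
```

The nested pixel loop makes the O(pixels × n²) cost of standard TFM explicit, which is exactly what migration-style reconstructions avoid.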
A New Splitting Method for Both Analytical and Preparative LC/MS
NASA Astrophysics Data System (ADS)
Cai, Yi; Adams, Daniel; Chen, Hao
2013-11-01
This paper presents a novel splitting method for liquid chromatography/mass spectrometry (LC/MS) applications, which allows fast MS detection of LC-separated analytes and subsequent online analyte collection. In this approach, a PEEK capillary tube with a micro-orifice drilled on the tube side wall is used to connect with the LC column. A small portion of the LC eluent emerging from the orifice can be directly ionized by desorption electrospray ionization (DESI) with negligible time delay (6~10 ms), while the remaining analytes exiting the tube outlet can be collected. The DESI-MS analysis of eluted compounds shows narrow peaks and high sensitivity because of the extremely small dead volume of the orifice used for LC eluent splitting (as low as 4 nL) and the freedom to choose a favorable DESI spray solvent. In addition, online derivatization using reactive DESI is possible for supercharging proteins and enhancing their signals without introducing extra dead volume. Unlike the UV detector used in traditional preparative LC experiments, this method is applicable to compounds without chromophores (e.g., saccharides) due to the use of an MS detector. Furthermore, this splitting method is well suited to monolithic column-based ultra-fast LC separation at a high elution flow rate of 4 mL/min.
Fast Particle Methods for Multiscale Phenomena Simulations
NASA Technical Reports Server (NTRS)
Koumoutsakos, P.; Wray, A.; Shariff, K.; Pohorille, Andrew
2000-01-01
We are developing particle methods oriented toward improving the computational modeling of multiscale physical phenomena in: (i) high Reynolds number unsteady vortical flows, (ii) particle-laden and interfacial flows, and (iii) molecular dynamics studies of nanoscale droplets and studies of the structure, functions, and evolution of the earliest living cell. The unifying computational approach involves particle methods implemented on parallel computer architectures. The inherent adaptivity, robustness, and efficiency of particle methods make them a multidisciplinary computational tool capable of bridging the gap between micro-scale and continuum flow simulations. Using efficient tree data structures, multipole expansion algorithms, and improved particle-grid interpolation, particle methods allow for simulations using millions of computational elements, making possible the resolution of a wide range of length and time scales of these important physical phenomena. The current challenges in these simulations are: (i) the proper formulation of particle methods at the molecular and continuum levels for the discretization of the governing equations; (ii) the resolution of the wide range of time and length scales governing the phenomena under investigation; (iii) the minimization of numerical artifacts that may interfere with the physics of the systems under consideration; and (iv) the parallelization of processes such as tree traversal and grid-particle interpolations. We are conducting simulations using vortex methods, molecular dynamics, and smoothed particle hydrodynamics, exploiting their unifying concepts such as: the solution of the N-body problem on parallel computers, highly accurate particle-particle and grid-particle interpolations, parallel FFTs, and the formulation of processes such as diffusion in the context of particle methods. This approach enables us to transcend seemingly unrelated areas of research.
Use of scientometrics to assess nuclear and other analytical methods
Lyon, W.S.
1986-01-01
Scientometrics involves the use of quantitative methods to investigate science viewed as an information process. Scientometric studies can be useful in ascertaining which methods have been most employed for various analytical determinations, as well as for predicting which methods will continue to be used in the immediate future and which appear to be losing favor with the analytical community. Published papers in the technical literature are the primary source materials for scientometric studies; statistical methods and computer techniques are the tools. Recent studies have included growth and trends in prompt nuclear analysis; the impact of research published in a technical journal; and institutional and national representation, speakers, and topics at several IAEA conferences, at Modern Trends in Activation Analysis conferences, and at other, non-nuclear-oriented conferences. Attempts have also been made to predict the future growth of various topics and techniques. 13 refs., 4 figs., 17 tabs.
Fast linear method of illumination classification
NASA Astrophysics Data System (ADS)
Cooper, Ted J.; Baqai, Farhan A.
2003-01-01
We present a simple method for estimating the scene illuminant for images obtained by a Digital Still Camera (DSC). The proposed method utilizes basis vectors obtained from known memory color reflectance to identify the memory color objects in the image. Once the memory color pixels are identified, we use the ratios of the red/green and blue/green to determine the most likely illuminant in the image. The critical part of the method is to estimate the smallest set of basis vectors that closely represent the memory color reflectances. Basis vectors obtained from both Principal Component Analysis (PCA) and Independent Component Analysis (ICA) are used. We will show that only two ICA basis vectors are needed to get an acceptable estimate.
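The ratio step of the abstract's scheme can be sketched as a nearest-neighbour match in (R/G, B/G) space; the candidate-illuminant table and pixel values below are illustrative assumptions, and the PCA/ICA basis-vector detection of memory-color pixels is omitted.

```python
import numpy as np

def classify_illuminant(rgb_pixels, candidates):
    """Match the mean (R/G, B/G) ratios of detected memory-color pixels
    against a table of per-illuminant reference ratios and return the
    closest candidate."""
    rgb = np.asarray(rgb_pixels, dtype=float)
    obs = np.array([(rgb[:, 0] / rgb[:, 1]).mean(),
                    (rgb[:, 2] / rgb[:, 1]).mean()])
    names = list(candidates)
    dists = [np.linalg.norm(obs - np.asarray(candidates[k])) for k in names]
    return names[int(np.argmin(dists))]

table = {"daylight": (1.0, 1.0), "tungsten": (1.5, 0.6)}   # illustrative ratios
pixels = [[150, 100, 62], [148, 101, 60], [152, 99, 61]]   # detected memory-color pixels
print(classify_illuminant(pixels, table))                  # tungsten
```

Because only chromatic ratios are compared, the step is linear-time in the number of detected pixels, which is what makes the overall method fast.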
Nascimento, Carina F; Rocha, Diogo L; Rocha, Fábio R P
2015-02-15
An environmentally friendly procedure was developed for fast determination of melamine as an adulterant of protein content in milk. Triton X-114 was used for sample clean-up and as a fluorophore, whose fluorescence was quenched by the analyte. A linear response was observed from 1.0 to 6.0 mg L⁻¹ melamine, described by the Stern-Volmer equation I₀/I = (0.999±0.002) + (0.0165±0.004)·C_MEL (r = 0.999). The detection limit was estimated at 0.8 mg L⁻¹ (95% confidence level), which allows detecting as little as 320 μg of melamine in 100 g of milk. Coefficients of variation (n = 8) were estimated at 0.4% and 1.4% with and without melamine, respectively. Recoveries of melamine spiked into milk samples ranged from 95% to 101%, and similar slopes of the calibration graphs obtained with and without milk indicated the absence of matrix effects. Results for different milk samples agreed with those obtained by high-performance liquid chromatography at the 95% confidence level. PMID:25236232
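Inverting the reported Stern-Volmer calibration gives the melamine concentration directly from the fluorescence quenching ratio. A one-line sketch using the coefficients quoted in the abstract, valid only over the reported 1.0-6.0 mg L⁻¹ linear range:

```python
def melamine_conc(i0, i, intercept=0.999, slope=0.0165):
    """Estimate melamine concentration (mg/L) from the quenching ratio
    I0/I via the Stern-Volmer calibration quoted in the abstract."""
    return (i0 / i - intercept) / slope

# A sample whose fluorescence drops from 1000 to 950 counts:
print(round(melamine_conc(1000.0, 950.0), 2))   # 3.25 (mg/L)
```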
Analytic methods and free-space dyadic Green's functions
NASA Astrophysics Data System (ADS)
Weiglhofer, Werner S.
1993-09-01
A number of mathematical techniques are presented which have proven successful in obtaining analytic solutions to the differential equations for the dyadic Green's functions of electromagnetic theory. The emphasis is on infinite-medium (or free-space) time-harmonic solutions throughout, thus putting the focus on the physical medium in which the electromagnetic process takes place. The medium's properties enter Maxwell's equations through the constitutive relations, and a comprehensive listing of dyadic Green's functions for which closed-form solutions exist, is given. Presently, the list of media contains (achiral) isotropic, biisotropic (including chiral), generally uniaxial, electrically (or magnetically) gyrotropic, diffusive and moving media as well as certain plasmas. A critical evaluation of the achievements, successes, limits, and failures of the analytic techniques is provided, and a prognosis is put forward about the future place of analytic methods within the general context of the search for solutions to electromagnetic field problems.
Analytical Methods of Decoupling the Automotive Engine Torque Roll Axis
NASA Astrophysics Data System (ADS)
JEONG, TAESEOK; SINGH, RAJENDRA
2000-06-01
This paper analytically examines the multi-dimensional mounting schemes of an automotive engine-gearbox system when excited by oscillating torques. In particular, the issue of torque roll axis decoupling is analyzed in significant detail since it is poorly understood. New dynamic decoupling axioms are presented and compared with the conventional elastic axis mounting and focalization methods. A linear time-invariant system assumption is made in addition to a proportionally damped system. Only rigid-body modes of the powertrain are considered and the chassis elements are assumed to be rigid. Several simplified physical systems are considered and new closed-form solutions for symmetric and asymmetric engine-mounting systems are developed. These clearly explain the design concepts for the 4-point mounting scheme. Our analytical solutions match the existing design formulations that are only applicable to symmetric geometries. Spectra for all six rigid-body motions are predicted using the alternate decoupling methods and the closed-form solutions are verified. Also, our method is validated by comparing modal solutions with prior experimental and analytical studies. Parametric design studies are carried out to illustrate the methodology. Chief contributions of this research include the development of new or refined analytical models and closed-form solutions along with improved design strategies for torque roll axis decoupling.
Zeb, Alam; Ullah, Fareed
2016-01-01
A simple and highly sensitive spectrophotometric method was developed for the determination of thiobarbituric acid reactive substances (TBARS) as a marker for lipid peroxidation in fried fast foods. The method uses the reaction of malondialdehyde (MDA) and TBA in a glacial acetic acid medium. The method was precise, sensitive, and highly reproducible for quantitative determination of TBARS. The precision of the extractions and the analytical procedure was very high compared to reported methods. The method was used to determine the TBARS content of fried fast foods such as Shami kebab, samosa, fried bread, and potato chips. Shami kebab, samosa, and potato chips had a higher amount of TBARS in the glacial acetic acid-water extraction system than in pure glacial acetic acid, whereas the reverse held for the fried bread samples. The method can successfully be used for the determination of TBARS in other food matrices, especially in quality control in the food industry. PMID:27123360
Fast tomographic methods for the tokamak ISTTOK
Carvalho, P. J.; Coelho, R.; Neto, A.; Pereira, T.; Silva, C.; Fernandes, H.; Gori, S.; Toussaint, U. v.
2008-04-07
The achievement of long-duration, alternating current discharges on the tokamak ISTTOK requires a real-time plasma position control system. Plasma position determination based on the magnetic probe system has been found to be inadequate during the current inversion due to the reduced plasma current. A tomography diagnostic has therefore been installed to supply the required feedback to the control system. Several tomographic methods are available for soft X-ray or bolometric tomography, among which the Cormack and neural network methods stand out due to their inherent speed of up to 1000 reconstructions per second with currently available technology. This paper discusses the application of these algorithms on fusion devices while comparing the performance and reliability of the results. It has been found that although the Cormack-based inversion proved to be faster, the neural network reconstruction has fewer artifacts and is more accurate.
Fast timing methods for semiconductor detectors. Revision
Spieler, H.
1984-10-01
This tutorial paper discusses the basic parameters which determine the accuracy of timing measurements and their effect in a practical application, specifically timing with thin surface-barrier detectors. The discussion focuses on properties of the detector, low-noise amplifiers, trigger circuits and time converters. New material presented in this paper includes bipolar transistor input stages with noise performance superior to currently available FETs, noiseless input terminations in sub-nanosecond preamplifiers, and methods using transmission lines to couple the detector to remotely mounted preamplifiers. Trigger circuits are characterized in terms of effective rise time, equivalent input noise and residual jitter.
A fast full constraints unmixing method
NASA Astrophysics Data System (ADS)
Ye, Zhang; Wei, Ran; Wang, Qing Yan
2012-10-01
Mixed pixels are inevitable due to the low spatial resolution of hyperspectral images (HSI). The linear spectral mixture model (LSMM) is a classical mathematical model relating the spectrum of a mixed pixel to the spectra of its individual constituent materials. Solving the LSMM, namely unmixing, is essentially a linearly constrained optimization problem, usually implemented as an iteration along a descent direction together with a stopping criterion that terminates the algorithm. This criterion must be set properly to balance the accuracy and speed of the solution. However, the criterion in existing algorithms is too strict, which may reduce the convergence rate. In this paper, by relaxing the constraints in unmixing, a new stopping rule is proposed that accelerates convergence. Experimental results, in terms of both runtime and iteration counts, show that our method speeds up the convergence process at the cost of only a slight decrease in the quality of the result.
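As a sketch of the constrained problem this abstract describes, the following illustrates fully constrained linear unmixing (non-negativity and sum-to-one constraints) by projected gradient descent with an adjustable stopping tolerance; the endmember matrix, step size, and `tol` value are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex
    {a : a >= 0, sum(a) = 1} (standard sort-based algorithm)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - 1.0))[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def fcls_unmix(E, y, tol=1e-6, max_iter=5000):
    """Fully constrained unmixing of pixel spectrum y against the
    endmember matrix E (bands x p) by projected gradient descent.
    A looser `tol` stops earlier: faster, slightly less accurate."""
    p = E.shape[1]
    a = np.full(p, 1.0 / p)                    # feasible start
    step = 1.0 / np.linalg.norm(E.T @ E, 2)    # 1/L step for convergence
    for it in range(max_iter):
        grad = E.T @ (E @ a - y)
        a_new = project_simplex(a - step * grad)
        if np.linalg.norm(a_new - a) < tol:    # the stopping rule at issue
            return a_new, it
        a = a_new
    return a, max_iter
```

Loosening `tol` is exactly the kind of trade the abstract discusses: fewer iterations in exchange for a small loss in abundance accuracy.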
Analytical method for determination of benzenearsonic acids
Mitchell, G.L.; Bayse, G.S.
1988-01-01
A sensitive analytical method has been modified for use in the determination of several benzenearsonic acids, including arsanilic acid (p-aminobenzenearsonic acid), Roxarsone (3-nitro-4-hydroxybenzenearsonic acid), and p-ureidobenzenearsonic acid. Controlled acid hydrolysis of these compounds produces a quantitative yield of arsenate, which is measured colorimetrically as the molybdenum blue complex at 865 nm. The method obeys Beer's law over the micromolar concentration range. These benzenearsonic acids are routinely used as feed additives in poultry and swine. This method should be useful in assessing tissue levels of the arsenicals in appropriate extracts.
Active controls: A look at analytical methods and associated tools
NASA Technical Reports Server (NTRS)
Newsom, J. R.; Adams, W. M., Jr.; Mukhopadhyay, V.; Tiffany, S. H.; Abel, I.
1984-01-01
A review of analytical methods and associated tools for active controls analysis and design problems is presented. Approaches employed to develop mathematical models suitable for control system analysis and/or design are discussed. Significant efforts have been expended to develop tools to generate the models from the standpoint of control system designers' needs and develop the tools necessary to analyze and design active control systems. Representative examples of these tools are discussed. Examples where results from the methods and tools have been compared with experimental data are also presented. Finally, a perspective on future trends in analysis and design methods is presented.
Analytical Methods for Measuring Mercury in Water, Sediment and Biota
Lasorsa, Brenda K.; Gill, Gary A.; Horvat, Milena
2012-06-07
Mercury (Hg) exists in a large number of physical and chemical forms with a wide range of properties. Conversion between these different forms provides the basis for mercury's complex distribution pattern in local and global cycles and for its biological enrichment and effects. Since the 1960s, the growing awareness of environmental mercury pollution has stimulated the development of more accurate, precise and efficient methods of determining mercury and its compounds in a wide variety of matrices. During recent years new analytical techniques have become available that have contributed significantly to the understanding of mercury chemistry in natural systems. In particular, these include ultrasensitive and specific analytical equipment and contamination-free methodologies. These improvements allow the determination of total mercury as well as major species of mercury to be made in water, sediments and soils, and biota. Analytical methods are selected depending on the nature of the sample, the concentration levels of mercury, and what species or fraction is to be quantified. The terms "speciation" and "fractionation" in analytical chemistry were addressed by the International Union of Pure and Applied Chemistry (IUPAC), which published guidelines (Templeton et al., 2000) or recommendations for the definition of speciation analysis. "Speciation analysis is the analytical activity of identifying and/or measuring the quantities of one or more individual chemical species in a sample. The chemical species are specific forms of an element defined as to isotopic composition, electronic or oxidation state, and/or complex or molecular structure. The speciation of an element is the distribution of an element amongst defined chemical species in a system." In cases where it is not possible to determine the concentration of the different individual chemical species that sum up the total concentration of an element in a given matrix, meaning it is impossible to
Fast and accurate determination of the Wigner rotation matrices in the fast multipole method.
Dachsel, Holger
2006-04-14
In the rotation based fast multipole method the accurate determination of the Wigner rotation matrices is essential. The combination of two recurrence relations and the control of the error accumulations allow a very precise determination of the Wigner rotation matrices. The recurrence formulas are simple, efficient, and numerically stable. The advantages over other recursions are documented. PMID:16626188
Fast Erase Method and Apparatus For Digital Media
NASA Technical Reports Server (NTRS)
Oakely, Ernest C. (Inventor)
2006-01-01
A non-contact fast erase method for erasing information stored on a magnetic or optical media. The magnetic media element includes a magnetic surface affixed to a toroidal conductor and stores information in a magnetic polarization pattern. The fast erase method includes applying an alternating current to a planar inductive element positioned near the toroidal conductor, inducing an alternating current in the toroidal conductor, and heating the magnetic surface to a temperature that exceeds the Curie-point so that information stored on the magnetic media element is permanently erased. The optical disc element stores information in a plurality of locations being defined by pits and lands in a toroidal conductive layer. The fast erase method includes similarly inducing a plurality of currents in the optical media element conductive layer and melting a predetermined portion of the conductive layer so that the information stored on the optical medium is destroyed.
Control of irradiated food: Recent developments in analytical detection methods.
NASA Astrophysics Data System (ADS)
Delincée, H.
1993-07-01
An overview of recent international efforts, i.e. the programmes of "ADMIT" (FAO/IAEA) and of BCR (EC), towards the development of analytical detection methods for radiation-processed foods will be given. Some larger collaborative studies have already taken place, e.g. ESR of bones from chicken, pork, beef, frog legs and fish, thermoluminescence of insoluble minerals isolated from herbs and spices, GC analysis of long-chain hydrocarbons derived from the lipid fraction of chicken and other meats, and the microbiological APC/DEFT procedure for spices. These methods could soon be implemented in international standard protocols.
A new analytical method for groundwater recharge and discharge estimation
NASA Astrophysics Data System (ADS)
Liang, Xiuyu; Zhang, You-Kuan
2012-07-01
A new analytical method was proposed for groundwater recharge and discharge estimation in an unconfined aquifer. The method is based on an analytical solution to the Boussinesq equation linearized in terms of h², where h is the water table elevation, with a time-dependent source term. The derived solution was validated with numerical simulation and was shown to be a better approximation than an existing solution to the Boussinesq equation linearized in terms of h. By calibrating against the observed water levels in a monitoring well during a period of 100 days, we showed that the method proposed in this study can be used to estimate daily recharge (R) and evapotranspiration (ET) as well as the lateral drainage. It was shown that the total R was reasonably estimated with a water-table fluctuation (WTF) method if the water table measurements away from a fixed-head boundary were used, but the total ET was overestimated and the total net recharge was underestimated because of the lack of consideration of lateral drainage and aquifer storage in the WTF method.
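The classical water-table fluctuation (WTF) estimate that the abstract compares against can be sketched in a few lines; the water-level record and the specific yield Sy = 0.15 below are invented illustrative values, not the paper's data.

```python
import numpy as np

def wtf_recharge(h, sy):
    """Classic water-table fluctuation (WTF) estimate of recharge:
    R = Sy * (sum of water-table rises).  It ignores lateral drainage
    and ET, which is exactly the bias the paper's method corrects."""
    dh = np.diff(h)
    return sy * dh[dh > 0].sum()

# hypothetical water-table record (m): two rise events, slow recession
h = np.array([10.00, 10.05, 10.12, 10.10, 10.08, 10.20, 10.18])
R = wtf_recharge(h, sy=0.15)   # Sy = 0.15 is an assumed specific yield
```

Rises total 0.24 m here, so R = 0.036 m over the record; any drainage-driven decline between rises is silently attributed to discharge rather than subtracted from recharge.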
An analytical method for Mathieu oscillator based on method of variation of parameter
NASA Astrophysics Data System (ADS)
Li, Xianghong; Hou, Jingyu; Chen, Jufeng
2016-08-01
A simple but very accurate analytical method for the forced Mathieu oscillator is proposed, based on the idea of the method of variation of parameters. Assuming that the time-varying parameter in the Mathieu oscillator is constant, one can easily obtain an accurate analytical solution. An approximate analytical solution for the Mathieu oscillator is then established by substituting the periodic time-varying parameter for the constant one in the obtained solution. In order to verify the correctness and precision of the proposed analytical method, the first-order and ninth-order approximate solutions by the harmonic balance method (HBM) are also presented. Comparisons show that the results of the proposed analytical method agree very well with those of numerical simulation. Moreover, the precision of the proposed method is not only higher than that of the first-order HBM approximation, but also better than that of the ninth-order HBM approximation over large ranges of the system parameters.
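The numerical benchmark such comparisons rely on can be reproduced with a plain fixed-step integrator. The sketch below integrates the unforced Mathieu equation with classical RK4; the parameter names `delta`, `eps` and the test values are assumptions for illustration, not the paper's settings.

```python
import numpy as np

def mathieu_rhs(t, y, delta, eps):
    """Unforced Mathieu equation  x'' + (delta + eps*cos t) x = 0,
    written as a first-order system."""
    x, v = y
    return np.array([v, -(delta + eps * np.cos(t)) * x])

def rk4(f, y0, t0, t1, n, *args):
    """Fixed-step classical fourth-order Runge-Kutta integrator."""
    h = (t1 - t0) / n
    t, y = t0, np.asarray(y0, float)
    for _ in range(n):
        k1 = f(t, y, *args)
        k2 = f(t + h / 2, y + h / 2 * k1, *args)
        k3 = f(t + h / 2, y + h / 2 * k2, *args)
        k4 = f(t + h, y + h * k3, *args)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

# eps = 0 reduces to x'' + delta*x = 0 with known solution cos(sqrt(delta)*t):
# this constant-parameter case is the "frozen-parameter" starting point of
# the variation-of-parameter idea described above
y = rk4(mathieu_rhs, [1.0, 0.0], 0.0, 2 * np.pi, 2000, 1.0, 0.0)
```

With delta = 1 and eps = 0 the exact solution is cos t, so after one full period the state should return to (1, 0), which makes a convenient sanity check before turning on the periodic term.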
Methods for quantifying uncertainty in fast reactor analyses.
Fanning, T. H.; Fischer, P. F.
2008-04-07
Liquid-metal-cooled fast reactors in the form of sodium-cooled fast reactors have been successfully built and tested in the U.S. and throughout the world. However, no fast reactor has operated in the U.S. for nearly fourteen years. More importantly, the U.S. has not constructed a fast reactor in nearly 30 years. In addition to reestablishing the necessary industrial infrastructure, the development, testing, and licensing of a new, advanced fast reactor concept will likely require a significant base technology program that will rely more heavily on modeling and simulation than has been done in the past. The ability to quantify uncertainty in modeling and simulations will be an important part of any experimental program and can provide added confidence that established design limits and safety margins are appropriate. In addition, there is an increasing demand from the nuclear industry for best-estimate analysis methods to provide confidence bounds along with their results. The ability to quantify uncertainty will be an important component of modeling that is used to support design, testing, and experimental programs. Three avenues of uncertainty quantification (UQ) investigation are proposed. Two relatively new approaches are described which can be directly coupled to simulation codes currently being developed under the Advanced Simulation and Modeling program within the Reactor Campaign. A third approach, based on robust Monte Carlo methods, can be used in conjunction with existing reactor analysis codes as a means of verification and validation of the more detailed approaches.
A new simple multidomain fast multipole boundary element method
NASA Astrophysics Data System (ADS)
Huang, S.; Liu, Y. J.
2016-09-01
A simple multidomain fast multipole boundary element method (BEM) for solving potential problems is presented in this paper, which can be applied to solve a true multidomain problem or a large-scale single domain problem using the domain decomposition technique. In this multidomain BEM, the coefficient matrix is formed simply by assembling the coefficient matrices of each subdomain and the interface conditions between subdomains without eliminating any unknown variables on the interfaces. Compared with other conventional multidomain BEM approaches, this new approach is more efficient with the fast multipole method, regardless of how the subdomains are connected. Instead of solving the linear system of equations directly, the entire coefficient matrix is partitioned and decomposed using the Schur complement in this new approach. Numerical results show that the new multidomain fast multipole BEM uses fewer iterations in most cases with the iterative equation solver and less CPU time than the traditional fast multipole BEM in solving large-scale BEM models. A large-scale fuel cell model with more than 6 million elements was solved successfully on a cluster within 3 h using the new multidomain fast multipole BEM.
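The Schur-complement partitioning described above can be sketched on a dense two-block system, with interior unknowns eliminated in favor of the interface unknowns; the block layout and sizes are illustrative, and a real fast multipole BEM would apply this matrix-free within an iterative solver rather than with dense factorizations.

```python
import numpy as np

def schur_solve(A, b, n_i):
    """Solve A x = b via a 2x2 block partition: the first n_i unknowns
    are subdomain interiors, the rest are interface unknowns.  The
    interface system uses the Schur complement S = A22 - A21 A11^{-1} A12."""
    A11, A12 = A[:n_i, :n_i], A[:n_i, n_i:]
    A21, A22 = A[n_i:, :n_i], A[n_i:, n_i:]
    b1, b2 = b[:n_i], b[n_i:]
    A11_inv_A12 = np.linalg.solve(A11, A12)
    A11_inv_b1 = np.linalg.solve(A11, b1)
    S = A22 - A21 @ A11_inv_A12                       # Schur complement
    x2 = np.linalg.solve(S, b2 - A21 @ A11_inv_b1)    # interface solve first
    x1 = A11_inv_b1 - A11_inv_A12 @ x2                # back-substitute interiors
    return np.concatenate([x1, x2])
```

The attraction for domain decomposition is that the interior solves involving A11 decouple by subdomain, so only the (much smaller) interface system couples the pieces.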
A fast multipole boundary element method for solving two-dimensional thermoelasticity problems
NASA Astrophysics Data System (ADS)
Liu, Y. J.; Li, Y. X.; Huang, S.
2014-09-01
A fast multipole boundary element method (BEM) for solving general uncoupled steady-state thermoelasticity problems in two dimensions is presented in this paper. The fast multipole BEM is developed to handle the thermal term in the thermoelasticity boundary integral equation involving temperature and heat flux distributions on the boundary of the problem domain. Fast multipole expansions, local expansions and related translations for the thermal term are derived using complex variables. Several numerical examples are presented to show the accuracy and effectiveness of the developed fast multipole BEM in calculating the displacement and stress fields for 2-D elastic bodies under various thermal loads, including thin structure domains that are difficult to mesh using the finite element method (FEM). The BEM results using constant elements are found to be accurate compared with the analytical solutions, and the accuracy of the BEM results is found to be comparable to that of the FEM with linear elements. In addition, the BEM offers the ease of use in generating the mesh for a thin structure domain or a domain with complicated geometry, such as a perforated plate with randomly distributed holes for which the FEM fails to provide an adequate mesh. These results clearly demonstrate the potential of the developed fast multipole BEM for solving 2-D thermoelasticity problems.
Aurigemma, Christine; Farrell, William
2010-09-24
Medicinal chemists often depend on analytical instrumentation for reaction monitoring and product confirmation at all stages of pharmaceutical discovery and development. To obtain pure compounds for biological assays, the removal of side products and final compounds through purification is often necessary. Prior to purification, chemists often utilize open-access analytical LC/MS instruments because mass confirmation is fast and reliable, and the chromatographic separation of most sample constituents is sufficient. Supercritical fluid chromatography (SFC) is often used as an orthogonal technique to HPLC or when isolation of the free base of a compound is desired. In laboratories where SFC is the predominant technique for analysis and purification of compounds, a reasonable approach for quickly determining suitable purification conditions is to screen the sample against different columns. This can be a bottleneck to the purification process. To commission SFC for open-access use, a walk-up analytical SFC/MS screening system was implemented in the medicinal chemistry laboratory. Each sample is automatically screened through six column/method conditions, and on-demand data processing occurs for the chromatographers after each screening method is complete. This paper highlights the "FastTrack" approach to expediting samples through purification. PMID:20728893
A fast multipole hybrid boundary node method for composite materials
NASA Astrophysics Data System (ADS)
Wang, Qiao; Miao, Yu; Zhu, Hongping
2013-06-01
This article presents a multi-domain fast multipole hybrid boundary node method for composite materials in 3D elasticity. The hybrid boundary node method (hybrid BNM) is a meshless method which only requires nodes constructed on the surface of a domain. The method is applied to 3D simulation of composite materials by a multi-domain solver and accelerated by the fast multipole method (FMM) in this paper. The preconditioned GMRES is employed to solve the final system equation and preconditioning techniques are discussed. The matrix-vector multiplication in each iteration is divided into smaller-scale ones at the sub-domain level and then accelerated by FMM within individual sub-domains. The computed matrix-vector products at the sub-domain level are then combined according to the continuity conditions on the interfaces. The algorithm is implemented in a computer code written in C++. Numerical results show that the technique is accurate and efficient.
Organic analysis and analytical methods development: FY 1995 progress report
Clauss, S.A.; Hoopes, V.; Rau, J.
1995-09-01
This report describes the status of organic analyses and developing analytical methods to account for the organic components in Hanford waste tanks, with particular emphasis on tanks assigned to the Flammable Gas Watch List. The methods that have been developed are illustrated by their application to samples obtained from Tank 241-SY-103 (Tank 103-SY). The analytical data are to serve as an example of the status of methods development and application. Samples of the convective and nonconvective layers from Tank 103-SY were analyzed for total organic carbon (TOC). The TOC value obtained for the nonconvective layer using the hot persulfate method was 10,500 µg C/g. The TOC value obtained from samples of Tank 101-SY was 11,000 µg C/g. The average value for the TOC of the convective layer was 6400 µg C/g. Chelator and chelator fragments in Tank 103-SY samples were identified using derivatization gas chromatography/mass spectrometry (GC/MS). Organic components were quantified using GC/flame ionization detection. Major components in both the convective and nonconvective-layer samples include ethylenediaminetetraacetic acid (EDTA), nitrilotriacetic acid (NTA), succinic acid, nitrosoiminodiacetic acid (NIDA), citric acid, and ethylenediaminetriacetic acid (ED3A). Preliminary results also indicate the presence of C16 and C18 carboxylic acids in the nonconvective-layer sample. Oxalic acid was one of the major components in the nonconvective layer as determined by derivatization GC/flame ionization detection.
ANALYTICAL METHODS FOR KINETIC STUDIES OF BIOLOGICAL INTERACTIONS: A REVIEW
Zheng, Xiwei; Bi, Cong; Li, Zhao; Podariu, Maria; Hage, David S.
2015-01-01
The rates at which biological interactions occur can provide important information concerning the mechanism and behavior of these processes in living systems. This review discusses several analytical methods that can be used to examine the kinetics of biological interactions. These techniques include common or traditional methods such as stopped-flow analysis and surface plasmon resonance spectroscopy, as well as alternative methods based on affinity chromatography and capillary electrophoresis. The general principles and theory behind these approaches are examined, and it is shown how each technique can be utilized to provide information on the kinetics of biological interactions. Examples of applications are also given for each method. In addition, a discussion is provided on the relative advantages or potential limitations of each technique regarding its use in kinetic studies. PMID:25700721
Evolution of microbiological analytical methods for dairy industry needs.
Sohier, Danièle; Pavan, Sonia; Riou, Armelle; Combrisson, Jérôme; Postollec, Florence
2014-01-01
Traditionally, culture-based methods have been used to enumerate microbial populations in dairy products. Recent developments in molecular methods now enable faster and more sensitive analyses than classical microbiology procedures. These molecular tools allow a detailed characterization of cell physiological states and bacterial fitness and thus, offer new perspectives to integration of microbial physiology monitoring to improve industrial processes. This review summarizes the methods described to enumerate and characterize physiological states of technological microbiota in dairy products, and discusses the current deficiencies in relation to the industry's needs. Recent studies show that Polymerase chain reaction-based methods can successfully be applied to quantify fermenting microbes and probiotics in dairy products. Flow cytometry and omics technologies also show interesting analytical potentialities. However, they still suffer from a lack of validation and standardization for quality control analyses, as reflected by the absence of performance studies and official international standards. PMID:24570675
NASA Astrophysics Data System (ADS)
Atteia, O.; Höhener, P.
2012-09-01
The aim of this work was to extend and to validate the flux tube-mixed instantaneous and kinetics superposition sequence approach (FT-MIKSS) to reaction chains of degrading species. Existing analytical solutions for the reactive transport of chains of decaying solutes were embedded in the flux-tube approach in order to conceive a semi-analytical model that allows fast parameter fitting. The model was applied to chloroethenes undergoing reductive dechlorination and oxidation in homogeneous and heterogeneous aquifers with sorption. The results from the semi-analytical model were compared to results from three numerical models (RT3D, PHT3D, PHAST). All models were validated in a homogeneous domain with an existing analytical solution. In heterogeneous domains, we found significant differences between the four models. FT-MIKSS gave intermediate results for all modelled cases. Results were obtained almost instantaneously, whereas other models had calculation times of up to several hours. Chloroethene plumes and redox conditions at the Plattsburgh field site were realistically modelled by FT-MIKSS, although results differed somewhat from those of PHT3D and PHAST. It is concluded that it may be tedious to obtain correct modelling results in heterogeneous media with degradation chain reactions and that the comparison of two different models may be useful. FT-MIKSS is a valuable tool for fast parameter fitting at field sites and should be used in the preparation of longer model runs with other numerical models.
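The degradation chains above (e.g. reductive dechlorination PCE → TCE → cis-DCE → VC) follow sequential first-order kinetics, whose batch solution is the classical Bateman formula; the rate constants below are hypothetical, and the actual FT-MIKSS model couples such chains to transport along flux tubes rather than solving them in batch.

```python
import numpy as np

def bateman(c0, k, t):
    """Concentrations of a first-order decay chain A1 -> A2 -> ... -> An
    at time t (Bateman solution), starting from concentration c0 in the
    first species only.  Requires distinct rate constants k."""
    n = len(k)
    c = np.zeros(n)
    for i in range(n):
        ks = k[:i + 1]
        s = 0.0
        for j in range(i + 1):
            denom = np.prod([ks[m] - ks[j] for m in range(i + 1) if m != j])
            s += np.exp(-ks[j] * t) / denom
        c[i] = c0 * np.prod(ks[:i]) * s     # product of upstream rates
    return c

# hypothetical first-order rates (1/d) for a PCE -> TCE -> cis-DCE -> VC chain
k = np.array([0.05, 0.03, 0.02, 0.01])
c = bateman(1.0, k, t=30.0)
```

Embedding such closed-form chain solutions in each flux tube is what lets a semi-analytical model like FT-MIKSS return results almost instantaneously where a gridded reactive-transport code must time-step.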
Igor D. Kaganovich; Edward A. Startsev; Ronald C. Davidson
2003-11-25
Plasma neutralization of an intense ion beam pulse is of interest for many applications, including plasma lenses, heavy ion fusion, high energy physics, etc. Comprehensive analytical, numerical, and experimental studies are underway to investigate the complex interaction of a fast ion beam with a background plasma. The positively charged ion beam attracts plasma electrons, and as a result the plasma electrons have a tendency to neutralize the beam charge and current. A suite of particle-in-cell codes has been developed to study the propagation of an ion beam pulse through the background plasma. For quasi-steady-state propagation of the ion beam pulse, an analytical theory has been developed using the assumption of long charge bunches and conservation of generalized vorticity. The analytical results agree well with the results of the numerical simulations. The visualization of the data obtained in the numerical simulations shows complex collective phenomena during beam entry into and exit from the plasma.
Fast and stable numerical method for neuronal modelling
NASA Astrophysics Data System (ADS)
Hashemi, Soheil; Abdolali, Ali
2016-11-01
Excitable cell modelling is of prime interest for predicting and targeting neural activity. Two main limits in solving the related equations are the speed and stability of the numerical method. Since there is a trade-off between accuracy and speed, most previously presented methods for solving the partial differential equations (PDEs) involved favour one side; greater speed permits more accurate simulations and therefore better device design. By evaluating the variables of the finite-difference equations at the proper times and calculating the unknowns in a specific sequence, a fast, stable and accurate method is introduced in this paper for solving neural partial differential equations. Propagation of the action potential in a giant axon is studied with the proposed method and with traditional methods. The speed, consistency and stability of the methods are compared and discussed. The proposed method is as fast as forward methods and as stable as backward methods: forward methods are known as the fastest methods, while backward methods are stable under any circumstances. Complex structures can be simulated by the proposed method thanks to its speed and stability.
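The forward (explicit) update whose speed/stability trade-off the abstract discusses can be sketched for a passive cable equation; the geometry, parameters and no-flux boundary treatment below are illustrative assumptions, not the paper's model.

```python
import numpy as np

def explicit_cable_step(V, D, tau, dx, dt):
    """One forward-Euler step of the passive cable equation
    V_t = D V_xx - V/tau  with sealed (no-flux) ends: the kind of
    explicit update whose step size is limited by stability,
    dt <= dx^2 / (2 D)."""
    lap = np.empty_like(V)
    lap[1:-1] = (V[2:] - 2 * V[1:-1] + V[:-2]) / dx**2
    lap[0] = 2 * (V[1] - V[0]) / dx**2       # no-flux left boundary
    lap[-1] = 2 * (V[-2] - V[-1]) / dx**2    # no-flux right boundary
    return V + dt * (D * lap - V / tau)

dx, D, tau = 0.1, 1.0, 5.0
dt = 0.4 * dx**2 / (2 * D)        # safely inside the stability bound
V = np.zeros(51)
V[25] = 1.0                        # initial point depolarization
for _ in range(200):
    V = explicit_cable_step(V, D, tau, dx, dt)
```

Exceeding the bound dt <= dx²/(2D) makes this forward scheme blow up, while a backward (implicit) scheme would remain stable at any dt at the cost of a linear solve per step, which is the trade-off the proposed method aims to sidestep.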
Analytical methods for human biomonitoring of pesticides. A review.
Yusa, Vicent; Millet, Maurice; Coscolla, Clara; Roca, Marta
2015-09-01
Biomonitoring of both currently-used and banned-persistent pesticides is a very useful tool for assessing human exposure to these chemicals. In this review, we present current approaches and recent advances in the analytical methods for determining the biomarkers of exposure to pesticides in the most commonly used specimens, such as blood, urine, and breast milk, and in emerging non-invasive matrices such as hair and meconium. We critically discuss the main applications for sample treatment, and the instrumental techniques currently used to determine the most relevant pesticide biomarkers. We finally look at the future trends in this field. PMID:26388361
Performance of analytical methods for tomographic gamma scanning
Prettyman, T.H.; Mercer, D.J.
1997-06-01
The use of gamma-ray computerized tomography for nondestructive assay of radioactive materials has led to the development of specialized analytical methods. Over the past few years, Los Alamos has developed and implemented a computer code, called ARC-TGS, for the analysis of data obtained by tomographic gamma scanning (TGS). ARC-TGS reduces TGS transmission and emission tomographic data, providing the user with images of the sample contents, the activity or mass of selected radionuclides, and an estimate of the uncertainty in the measured quantities. The results provided by ARC-TGS can be corrected for self-attenuation when the isotope of interest emits more than one gamma-ray. In addition, ARC-TGS provides information needed to estimate TGS quantification limits and to estimate the scan time needed to screen for small amounts of radioactivity. In this report, an overview of the analytical methods used by ARC-TGS is presented along with an assessment of the performance of these methods for TGS.
The Augmented Fast Marching Method for Level Set Reinitialization
NASA Astrophysics Data System (ADS)
Salac, David
2011-11-01
The modeling of multiphase fluid flows typically requires accurate descriptions of the interface and curvature of the interface. Here a new reinitialization technique based on the fast marching method for gradient-augmented level sets is presented. The method is explained and results in both 2D and 3D are presented. Overall the method is more accurate than reinitialization methods based on similar stencils and the resulting curvature fields are much smoother. The method will also be demonstrated in a sample application investigating the dynamic behavior of vesicles in general fluid flows. Support provided by University at Buffalo - SUNY.
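For readers unfamiliar with the underlying machinery, a minimal heap-based fast-marching solver in one dimension (the classic Sethian scheme only, without the gradient augmentation described in the abstract) can be sketched as follows:

```python
import heapq

def fast_march_1d(n, seeds, h=1.0):
    """Minimal 1-D fast marching: solve |dT/dx| = 1 outward from seed
    indices, i.e. compute grid distance-to-seed. Nodes are frozen in
    increasing order of T, so each node is finalized exactly once."""
    INF = float("inf")
    T = [INF] * n
    frozen = [False] * n
    heap = []
    for s in seeds:
        T[s] = 0.0
        heapq.heappush(heap, (0.0, s))
    while heap:
        t, i = heapq.heappop(heap)
        if frozen[i]:
            continue                     # stale heap entry
        frozen[i] = True
        for j in (i - 1, i + 1):         # upwind update of neighbours
            if 0 <= j < n and not frozen[j]:
                cand = t + h
                if cand < T[j]:
                    T[j] = cand
                    heapq.heappush(heap, (cand, j))
    return T

print(fast_march_1d(7, [2]))  # → [2.0, 1.0, 0.0, 1.0, 2.0, 3.0, 4.0]
```

Reinitialization of a level set function amounts to running exactly this kind of marching outward from the interface; the gradient-augmented variant in the abstract additionally transports derivative information, which this sketch omits.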
Fast adaptive composite grid methods on distributed parallel architectures
NASA Technical Reports Server (NTRS)
Lemke, Max; Quinlan, Daniel
1992-01-01
The fast adaptive composite (FAC) grid method is compared with the asynchronous fast adaptive composite method (AFAC) under a variety of conditions, including vectorization and parallelization. Results are given for distributed-memory multiprocessor architectures (SUPRENUM, Intel iPSC/2 and iPSC/860). It is shown that the good performance of AFAC, and its superiority over FAC in a parallel environment, is a property of the algorithm and not dependent on peculiarities of any machine.
Using analytic network process for evaluating mobile text entry methods.
Ocampo, Lanndon A; Seva, Rosemary R
2016-01-01
This paper highlights a preference-evaluation methodology for text entry methods on a touch-keyboard smartphone using the analytic network process (ANP). Evaluations of text entry methods in the literature mainly consider speed and accuracy; this study presents an alternative means of selecting a text entry method that considers user preference. A case study was carried out with a group of experts who were asked to develop a selection decision model for five text entry methods. The decision model is flexible enough to reflect the interdependencies of decision elements that are necessary for describing real-life conditions. Results showed that the QWERTY method is preferred over the other text entry methods, while the arrangement of keys is the most important criterion in characterizing a sound method. Sensitivity analysis, using simulation of normally distributed random numbers under fairly large perturbations, showed the foregoing results to be reliable enough to reflect robust judgment. The main contribution of this paper is the introduction of a multi-criteria decision approach to the preference evaluation of text entry methods. PMID:26360215
The evolution of analytical chemistry methods in foodomics.
Gallo, Monica; Ferranti, Pasquale
2016-01-01
The methodologies of food analysis have greatly evolved over the past 100 years, from basic assays based on solution chemistry to those relying on modern instrumental platforms. Today, the development and optimization of integrated analytical approaches, based on different techniques for studying the chemical composition of a food at the molecular level, make it possible to define a 'food fingerprint' that is valuable for assessing the nutritional value, safety, quality, authenticity and security of foods. This comprehensive strategy, termed foodomics, includes emerging work areas such as food chemistry, phytochemistry, advanced analytical techniques, biosensors and bioinformatics. Integrated approaches can help to elucidate some critical issues in food analysis, but also to face the new challenges of a globalized world: security, sustainability and food production in response to world-wide environmental changes. They include the development of powerful analytical methods to ensure the origin and quality of food, as well as the discovery of biomarkers to identify potential food-safety problems. In the area of nutrition, the future challenge is to identify, through specific biomarkers, individual peculiarities that allow early diagnosis and then a personalized prognosis and diet for patients with food-related disorders. Far from aiming at an exhaustive review of the abundant literature dedicated to the applications of omic sciences in food analysis, we explore how classical approaches, such as those used in chemistry and biochemistry, have evolved to intersect with the new omics technologies and so advance our understanding of the complexity of foods. Perhaps most importantly, a key objective of the review is to explore the development of simple and robust methods for a fully applied use of omics data in food science. PMID:26363946
Efficient Displacement Discontinuity Method Using Fast Multipole Techniques
Morris, J.P.; Blair, S.C.
2000-02-18
The Displacement Discontinuity method has been widely used in geomechanics because it accurately captures the behavior of fractures within a rock mass by explicitly accounting for discontinuities. Unfortunately, boundary element techniques require the interactions between all pairs of elements to be evaluated and traditional approaches to the Displacement Discontinuity method are computationally expensive for large problem sizes. Approximate summation techniques, such as the Fast Multipole Method (FMM), calculate the interactions between N entities in time proportional to N. We have implemented a modified Fast Multipole approach which performs the necessary calculations in optimal time and with reduced memory usage. Furthermore, the FMM introduces parameters which can be selected to give the desired trade-off between efficiency and accuracy. The FMM approach permits much larger problems to be solved using desktop computers, opening up a range of applications. We present results demonstrating the speed of the code and several test cases involving rock fracture in compression.
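The essence of such approximate summation can be shown with the lowest-order (monopole) far-field approximation: a far-away cluster of sources is replaced by a single equivalent source at its centre of charge. This toy 1-D sketch conveys only the idea behind FMM-style acceleration, not the authors' implementation; all positions and charges are illustrative:

```python
# Idea behind fast summation (not the authors' FMM code): replacing a
# distant cluster by one monopole trades a small, controllable error for
# a drastic reduction in pairwise work.

def direct_potential(x_targets, x_sources, q):
    """O(N*M) exact pairwise 1/r sums."""
    return [sum(qj / abs(xt - xs) for xs, qj in zip(x_sources, q))
            for xt in x_targets]

def monopole_potential(x_targets, x_sources, q):
    """O(N + M): one equivalent charge at the centre of charge."""
    Q = sum(q)
    xc = sum(xs * qj for xs, qj in zip(x_sources, q)) / Q
    return [Q / abs(xt - xc) for xt in x_targets]

sources = [0.0, 0.1, 0.2, 0.3]          # tight source cluster
charges = [1.0, 1.0, 1.0, 1.0]
targets = [10.0, 20.0]                  # well-separated evaluation points
exact = direct_potential(targets, sources, charges)
approx = monopole_potential(targets, sources, charges)
err = max(abs(a - e) / e for a, e in zip(approx, exact))
print(exact, approx, err)               # relative error shrinks with distance
```

A full FMM adds higher multipole terms and a hierarchical tree of clusters, which is what gives the tunable accuracy/efficiency trade-off mentioned in the abstract.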
A linear analytical boundary element method (BEM) for 2D homogeneous potential problems
NASA Astrophysics Data System (ADS)
Friedrich, Jürgen
2002-06-01
The solution of potential problems is not only fundamental for the geosciences, but also an essential part of related subjects such as electro- and fluid mechanics. In all fields, solution algorithms are needed that are as accurate as possible, robust, simple to program, easy to use, fast and small in computer memory. An ideal technique for fulfilling these criteria is the boundary element method (BEM), which applies Green's identities to transform volume integrals into boundary integrals. This work describes a linear analytical BEM for 2D homogeneous potential problems that is more robust and precise than numerical methods because it avoids numerical schemes and coordinate transformations. After deriving the solution algorithm, the introduced approach is tested against different benchmarks. Finally, the resulting method was incorporated into an existing software program described previously in this journal by the same author.
NASA Astrophysics Data System (ADS)
Theis, L. S.; Motzoi, F.; Wilhelm, F. K.
2016-01-01
We present a few-parameter ansatz for pulses to implement a broad set of simultaneous single-qubit rotations in frequency-crowded multilevel systems. Specifically, we consider a system of two qutrits whose working and leakage transitions suffer from spectral crowding (detuned by δ). In order to achieve precise controllability, we make use of two driving fields (each having two quadratures) at two different tones to simultaneously apply arbitrary combinations of rotations about axes in the X-Y plane to both qubits. Expanding the waveforms in terms of Hanning windows, we show how analytic pulses containing smooth and composite-pulse features can easily achieve gate errors less than 10^-4 and considerably outperform known adiabatic techniques. Moreover, we find a generalization of the WAHWAH (Weak AnHarmonicity With Average Hamiltonian) method by Schutjens et al. [R. Schutjens, F. A. Dagga, D. J. Egger, and F. K. Wilhelm, Phys. Rev. A 88, 052330 (2013)], 10.1103/PhysRevA.88.052330 that allows precise separate single-qubit rotations for all gate times beyond a quantum speed limit. We find in all cases a quantum speed limit slightly below 2π/δ for the gate time and show that our pulses are robust against variations in system parameters and filtering due to transfer functions, making them suitable for experimental implementations.
NASA Astrophysics Data System (ADS)
Jones, C. E.; Kato, S.; Nakashima, Y.; Kajii, Y.
2014-05-01
Biogenic emissions supply the largest fraction of non-methane volatile organic compounds (VOC) from the biosphere to the atmospheric boundary layer, and typically comprise a complex mixture of reactive terpenes. Due to this chemical complexity, achieving comprehensive measurements of biogenic VOC (BVOC) in air within a satisfactory time resolution is analytically challenging. To address this, we have developed a novel, fully automated Fast Gas Chromatography (Fast-GC) based technique to provide higher time resolution monitoring of monoterpenes (and selected other C9-C15 terpenes) during plant emission studies and in ambient air. To our knowledge, this is the first study to apply a Fast-GC based separation technique to achieve quantification of terpenes in ambient air. Three chromatography methods have been developed for atmospheric terpene analysis under different sampling scenarios. Each method facilitates chromatographic separation of selected BVOC within a significantly reduced analysis time compared to conventional GC methods, whilst maintaining the ability to quantify individual monoterpene structural isomers. Using this approach, the C9-C15 BVOC composition of single plant emissions may be characterised within a 14.5 min analysis time. Moreover, in-situ quantification of 12 monoterpenes in unpolluted ambient air may be achieved within an 11.7 min chromatographic separation time (increasing to 19.7 min when simultaneous quantification of multiple oxygenated C9-C10 terpenoids is required, and/or when concentrations of anthropogenic VOC are significant). These analysis times potentially allow for a twofold to fivefold increase in measurement frequency compared to conventional GC methods. Here we outline the technical details and analytical capability of this chromatographic approach, and present the first in-situ Fast-GC observations of 6 monoterpenes and the oxygenated BVOC (OBVOC) linalool in ambient air. During this field deployment within a suburban forest
A novel unified coding analytical method for Internet of Things
NASA Astrophysics Data System (ADS)
Sun, Hong; Zhang, JianHong
2013-08-01
This paper presents a novel unified coding analytical method for the Internet of Things. It abstracts the notions of 'displacement goods' and 'physical objects' and expounds the relationship between them. The paper details the item-coding principles and establishes a one-to-one relationship between the three-dimensional spatial coordinates of points and global manufacturers, a scheme that can be expanded indefinitely. This novel unified coding method solves the problem of unified coding across the production and circulation phases, and the paper further explains how to update the item information corresponding to the coding during the sale and use stages, so that the Internet of Things can carry out real-time monitoring and intelligent management of each item.
Validation of Analytical Methods for Biomarkers Employed in Drug Development
Chau, Cindy H.; Rixe, Olivier; McLeod, Howard; Figg, William D.
2008-01-01
The role of biomarkers in drug discovery and development has gained precedence over the years. As biomarkers become integrated into drug development and clinical trials, quality assurance and in particular assay validation become essential, with the need to establish standardized guidelines for the analytical methods used in biomarker measurements. New biomarkers can revolutionize both the development and use of therapeutics, but this is contingent upon the establishment of a concrete validation process that addresses technology integration and method validation as well as regulatory pathways for efficient biomarker development. This perspective focuses on the general principles of the biomarker validation process, with an emphasis on assay validation and the collaborative efforts undertaken by various sectors to promote the standardization of this procedure for efficient biomarker development. PMID:18829475
Nanita, Sergio C; Stry, James J; Pentz, Anne M; McClory, Joseph P; May, John H
2011-07-27
A prototype multiresidue method based on fast extraction and dilution of samples followed by flow injection mass spectrometric analysis is proposed here for high-throughput chemical screening in complex matrices. The method was tested for sulfonylurea herbicides (triflusulfuron methyl, azimsulfuron, chlorimuron ethyl, sulfometuron methyl, chlorsulfuron, and flupyrsulfuron methyl), carbamate insecticides (oxamyl and methomyl), pyrimidine carboxylic acid herbicides (aminocyclopyrachlor and aminocyclopyrachlor methyl), and anthranilic diamide insecticides (chlorantraniliprole and cyantraniliprole). Lemon and pecan were used as representative high-water and low-water content matrices, respectively, and a sample extraction procedure was designed for each commodity type. Matrix-matched external standards were used for calibration, yielding linear responses with correlation coefficients (r) consistently >0.99. The limits of detection (LOD) were estimated to be between 0.01 and 0.03 mg/kg for all analytes, allowing execution of recovery tests with samples fortified at ≥0.05 mg/kg. Average analyte recoveries obtained during method validation for lemon and pecan ranged from 75 to 118% with standard deviations between 3 and 21%. Representative food processed fractions were also tested, that is, soybean oil and corn meal, yielding individual analyte average recoveries ranging from 62 to 114% with standard deviations between 4 and 18%. An intralaboratory blind test was also performed; the method excelled with 0 false positives and 0 false negatives in 240 residue measurements (20 samples × 12 analytes). The daily throughput of the fast extraction and dilution (FED) procedure is estimated at 72 samples/chemist, whereas the flow injection mass spectrometry (FI-MS) throughput could be as high as 4.3 sample injections/min, making very efficient use of mass spectrometers with negligible instrumental analysis time compared to the sample homogenization, preparation, and data
GenoSets: Visual Analytic Methods for Comparative Genomics
Cain, Aurora A.; Kosara, Robert; Gibas, Cynthia J.
2012-01-01
Many important questions in biology are, fundamentally, comparative, and this extends to our analysis of a growing number of sequenced genomes. Existing genomic analysis tools are often organized around literal views of genomes as linear strings. Even when information is highly condensed, these views grow cumbersome as larger numbers of genomes are added. Data aggregation and summarization methods from the field of visual analytics can provide abstracted comparative views, suitable for sifting large multi-genome datasets to identify critical similarities and differences. We introduce a software system for visual analysis of comparative genomics data. The system automates the process of data integration, and provides the analysis platform to identify and explore features of interest within these large datasets. GenoSets borrows techniques from business intelligence and visual analytics to provide a rich interface of interactive visualizations supported by a multi-dimensional data warehouse. In GenoSets, visual analytic approaches are used to enable querying based on orthology, functional assignment, and taxonomic or user-defined groupings of genomes. GenoSets links this information together with coordinated, interactive visualizations for both detailed and high-level categorical analysis of summarized data. GenoSets has been designed to simplify the exploration of multiple genome datasets and to facilitate reasoning about genomic comparisons. Case examples are included showing the use of this system in the analysis of 12 Brucella genomes. GenoSets software and the case study dataset are freely available at http://genosets.uncc.edu. We demonstrate that the integration of genomic data using a coordinated multiple view approach can simplify the exploration of large comparative genomic data sets, and facilitate reasoning about comparisons and features of interest. PMID:23056299
MICROORGANISMS IN BIOSOLIDS: ANALYTICAL METHODS DEVELOPMENT, STANDARDIZATION, AND VALIDATION
The objective of this presentation is to discuss pathogens of concern in biosolids, the analytical techniques used to evaluate microorganisms in biosolids, and to discuss standardization and validation of analytical protocols for microbes within such a complex matrix. Implicatio...
Algebraic and analytic reconstruction methods for dynamic tomography.
Desbat, L; Rit, S; Clackdoyle, R; Mennessier, C; Promayon, E; Ntalampeki, S
2007-01-01
In this work, we discuss algebraic and analytic approaches for dynamic tomography. We present a framework of dynamic tomography for both algebraic and analytic approaches. We finally present numerical experiments. PMID:18002059
Methods for decreasing the operating pressure of a fast neutral particle source
NASA Astrophysics Data System (ADS)
Barchenko, V. T.; Komlev, A. E.; Babinov, N. A.; Vinogradov, M. L.
2015-11-01
Fast neutral particle sources are increasingly widely used in technologies for surface processing and coating deposition, especially for the processing of dielectric surfaces. However, for a substantial expansion of the scope of these sources' applications it is necessary to decrease the pressure in the vacuum chamber at which they can operate. This article describes methods to reduce the operating pressure of a fast neutral particle source with combined ion acceleration and neutralization regions; this combination provides a total absence of high-energy ions in the particle beam. The main methods discussed are the creation of a pressure drop between the internal and external volumes of the source, and preionization of the working gas by an auxiliary gas discharge.
New analytical methods for determining trace elements in coal
Dale, L.S.; Riley, K.W.
1996-12-31
New and improved analytical methods, based on modern spectroscopic techniques, have been developed to provide more reliable data on the levels of environmentally significant elements in Australian bituminous thermal coals. Arsenic, selenium and antimony are determined using hydride-generation atomic absorption or fluorescence spectrometry, applied to an Eschka fusion of the raw coal. Boron is determined on the same digest using inductively coupled plasma atomic emission spectrometry (ICPAES). ICPAES is also used to determine beryllium, chromium, cobalt, copper, manganese, molybdenum, nickel, lead and zinc, after fusion of a low-temperature ash with lithium borate. Other elements of concern, including cadmium, uranium and thorium, are analyzed by inductively coupled plasma mass spectrometry on a mixed-acid digest of a low-temperature ash; this technique is also suitable for determining the elements analyzed by ICPAES. Improved methods for chlorine and fluorine have also been developed. Details of the methods are given, and the results of validation trials are discussed for some of the methods, which are anticipated to be designated Australian standard methods.
Application of surface analytical methods in thin film analysis
NASA Astrophysics Data System (ADS)
Wen, Xingu
Self-assembly and the sol-gel process are two promising methods for the preparation of novel materials and thin films. In this research, these two methods were utilized to prepare two types of thin films: self-assembled monolayers of peptides on gold and SiO2 sol-gel thin films modified with Ru(II) complexes. The properties of the resulting thin films were investigated by several analytical techniques in order to explore their potential applications in biomaterials, chemical sensors, nonlinear optics and catalysis. Among the analytical techniques employed in the study, surface analytical techniques, such as X-ray photoelectron spectroscopy (XPS) and grazing angle reflection absorption Fourier transform infrared spectroscopy (RA-FTIR), are particularly useful in providing information regarding the compositions and structures of the thin films. In the preparation of peptide thin films, monodisperse peptides were self-assembled on gold substrate via the N-terminus-coupled lipoic acid. The film compositions were investigated by XPS and agreed well with the theoretical values. XPS results also revealed that the surface coverage of the self-assembled films was significantly larger than that of the physisorbed films and that the chemisorption between the peptides and gold surface was stable in solvent. Studies by angle dependent XPS (ADXPS) and grazing angle RA-FTIR indicated that the peptides were on average oriented at a small angle from the surface normal. By using a model of orientation distribution function, both the peptide tilt angle and film thickness can be well calculated. Ru(II) complex doped SiO2 sol-gel thin films were prepared by low temperature sol-gel process. The ability of XPS coupled with Ar + ion sputtering to provide both chemical and compositional depth profile information of these sol-gel films was evaluated. This technique, together with UV-VIS and electrochemical measurements, was used to investigate the stability of Ru complexes in the composite
21 CFR 530.24 - Procedure for announcing analytical methods for drug residue quantification.
Code of Federal Regulations, 2010 CFR
2010-04-01
21 Food and Drugs, Part 530 — Extralabel Drug Use in Food-Producing Animals. § 530.24 Procedure for announcing analytical methods for drug residue quantification. (a) FDA may issue an order announcing a specific analytical method or methods for the quantification...
Differential correction method applied to measurement of the FAST reflector
NASA Astrophysics Data System (ADS)
Li, Xin-Yi; Zhu, Li-Chun; Hu, Jin-Wen; Li, Zhi-Heng
2016-08-01
The Five-hundred-meter Aperture Spherical radio Telescope (FAST) adopts an active deformable main reflector which is composed of 4450 triangular panels. During an observation, the illuminated area of the reflector is deformed into a 300-m diameter paraboloid directed toward a source. To achieve accurate control of the reflector shape, the positions of 2226 nodes distributed around the entire reflector must be measured with sufficient precision within a limited time, which is a challenging task because of the large scale. Measurement of the FAST reflector makes use of measuring stations and node targets; however, in this case the effect of the atmosphere on measurement accuracy is a significant issue. This paper investigates a differential correction method for total station measurement of the FAST reflector. A multi-benchmark differential correction method, including a scheme for benchmark selection and weight assignment, is proposed. On-site evaluation experiments show an improvement of 70%–80% in measurement accuracy compared with the uncorrected measurement, verifying the effectiveness of the proposed method.
Analytical methods for volatile compounds in wheat bread.
Pico, Joana; Gómez, Manuel; Bernal, José; Bernal, José Luis
2016-01-01
Bread aroma is one of the main requirements for its acceptance by consumers, since it is one of the first attributes perceived. Sensory analysis, crucial to be correlated with human perception, presents limitations and needs to be complemented with instrumental analysis. Gas chromatography coupled to mass spectrometry is usually selected as the technique to determine bread volatile compounds, although proton-transfer reaction mass spectrometry begins also to be used to monitor aroma processes. Solvent extraction, supercritical fluid extraction and headspace analysis are the main options for the sample treatment. The present review focuses on the different sample treatments and instrumental alternatives reported in the literature to analyse volatile compounds in wheat bread, providing advantages and limitations. Usual parameters employed in these analytical methods are also described. PMID:26452307
Analytical Failure Prediction Method Developed for Woven and Braided Composites
NASA Technical Reports Server (NTRS)
Min, James B.
2003-01-01
Historically, advances in aerospace engine performance and durability have been linked to improvements in materials. Recent developments in ceramic matrix composites (CMCs) have led to increased interest in CMCs to achieve revolutionary gains in engine performance. The use of CMCs promises many advantages for advanced turbomachinery engine development and may be especially beneficial for aerospace engines. The most beneficial aspects of CMC material may be its ability to maintain its strength above 2500 °F, its internal material damping, and its relatively low density. Ceramic matrix composites reinforced with two-dimensional woven and braided fabric preforms are being considered for NASA's next-generation reusable rocket turbomachinery applications (for example, see the preceding figure). However, the architecture of a textile composite is complex, and therefore the parameters controlling its strength properties are numerous. This necessitates the development of engineering approaches that combine analytical methods with limited testing to provide effective, validated design analyses for the development of textile composite structures.
Analytical Methods in Untargeted Metabolomics: State of the Art in 2015
Alonso, Arnald; Marsal, Sara; Julià, Antonio
2015-01-01
Metabolomics comprises the methods and techniques used to measure the small-molecule composition of biofluids and tissues, and is currently one of the most rapidly evolving research fields. The determination of the metabolomic profile – the metabolome – has multiple applications in many biological sciences, including the development of new diagnostic tools in medicine. Recent technological advances in nuclear magnetic resonance and mass spectrometry are significantly improving our capacity to obtain more data from each biological sample. Consequently, there is a need for fast and accurate statistical and bioinformatic tools that can deal with the complexity and volume of the data generated in metabolomic studies. In this review, we provide an update of the most commonly used analytical methods in metabolomics, starting from raw data processing and ending with pathway analysis and biomarker identification. Finally, the integration of metabolomic profiles with molecular data from other high-throughput biotechnologies is also reviewed. PMID:25798438
A two-dimensional, semi-analytic expansion method for nodal calculations
Palmtag, S.P.
1995-08-01
Most modern nodal methods used today are based upon the transverse integration procedure, in which the multi-dimensional flux shape is integrated over the transverse directions in order to produce a set of coupled one-dimensional flux shapes. The one-dimensional flux shapes are then solved either analytically or by representing the flux shape by a finite polynomial expansion. While these methods have been verified for most light-water reactor applications, they have been found to have difficulty predicting the large thermal-flux gradients near the interfaces of highly enriched MOX fuel assemblies. A new method is presented here in which the neutron flux is represented by a non-separable, two-dimensional, semi-analytic flux expansion. The main features of this method are: (1) the leakage terms from the node are modeled explicitly, and therefore the transverse integration procedure is not used; (2) the corner-point flux values for each node are directly edited from the solution method, and a corner-point interpolation is not needed in the flux reconstruction; (3) the thermal flux expansion contains hyperbolic terms representing analytic solutions of the thermal-flux diffusion equation; and (4) the thermal flux expansion contains a thermal-to-fast flux ratio term, which reduces the number of polynomial expansion functions needed to represent the thermal flux. This new nodal method has been incorporated into the computer code COLOR2G and has been used to solve a two-dimensional, two-group colorset problem containing uranium and highly enriched MOX fuel assemblies. The results from this calculation are compared to those found using a code based on the traditional transverse integration procedure.
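The hyperbolic terms in the thermal expansion rest on a standard fact of diffusion theory, which can be checked directly. This is a generic numerical verification (illustrative coefficients, not the COLOR2G code): cosh(kx) with k = sqrt(Σₐ/D) solves the homogeneous thermal diffusion equation D φ'' = Σₐ φ, which is why hyperbolic terms capture steep thermal-flux gradients that polynomials represent poorly.

```python
import math

# Check that phi(x) = cosh(k*x), k = sqrt(Sigma_a / D), satisfies
#   D * phi''(x) - Sigma_a * phi(x) = 0
# using a centred finite-difference estimate of phi''.
D, sigma_a = 1.2, 0.3              # illustrative diffusion/absorption values
k = math.sqrt(sigma_a / D)
h = 1e-3                           # finite-difference step
resids = []
for x in (0.0, 0.5, 1.3):
    phi = math.cosh(k * x)
    lap = (math.cosh(k*(x+h)) - 2.0*phi + math.cosh(k*(x-h))) / h**2
    resids.append(D * lap - sigma_a * phi)   # ≈ 0 up to O(h^2) error
print(resids)
```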
[Analytical methods for control of foodstuffs made from bioengineered plants].
Chernysheva, O N; Sorokina, E Iu
2013-01-01
Foodstuffs produced by modern biotechnology require special control. The analytical methods used for these purposes are being constantly improved. When choosing a strategy for the analysis, several factors have to be assessed: the specificity, sensitivity and practicality of the method, and its time efficiency. To date, GMO testing methods are mainly based on the inserted DNA sequences and the newly produced proteins in GMOs. Protein detection methods are based mainly on ELISA. The specific detection of a novel protein synthesized by a gene introduced during transformation constitutes an alternative approach to the identification of GMOs. However, the genetic modification is not always specifically directed at the production of a novel protein, and does not always result in protein expression levels sufficient for detection purposes. In addition, some proteins may be expressed only in specific parts of the plant, or expressed at different levels in distinct parts of the plant. As DNA is a rather stable molecule relative to proteins, it is the preferred target for any kind of sample, and DNA-based methods are more sensitive and specific than protein detection methods. PCR-based tests can be categorized into several levels of specificity. The least specific methods are commonly called "screening methods" and relate to target DNA elements, such as promoters and terminators, that are present in many different GMOs. For routine screening purposes, the regulatory elements 35S promoter, derived from the Cauliflower Mosaic Virus, and the NOS terminator, derived from the nopaline synthase gene of Agrobacterium tumefaciens, are used as target sequences. The second level is "gene-specific methods"; these methods target a part of the DNA harbouring the active gene associated with the specific genetic modification. The highest specificity is achieved when the target is the unique junction, found at the integration locus, between the inserted DNA and the recipient genome; these are called "event-specific methods". For a
Piri-Moghadam, Hamed; Ahmadi, Fardin; Gómez-Ríos, German Augusto; Boyacı, Ezel; Reyes-Garcés, Nathaly; Aghakhani, Ali; Bojko, Barbara; Pawliszyn, Janusz
2016-06-20
Herein we report the development of solid-phase microextraction (SPME) devices designed to perform fast extraction/enrichment of target analytes present in small volumes of complex matrices (i.e. V ≤ 10 μL). Micro-sampling was performed with the use of etched metal tips coated with a thin layer of biocompatible nano-structured polypyrrole (PPy), or by using coated blade spray (CBS) devices. These devices can be coupled either to liquid chromatography (LC), or directly to mass spectrometry (MS) via dedicated interfaces. The reported results demonstrate that the whole analytical procedure can be carried out within a few minutes with high sensitivity and quantitation precision, and can be used to sample various biological matrices such as blood, urine, or Allium cepa L. single cells. PMID:27158909
Kroniger, K; Herzog, M; Landry, G; Dedes, G; Parodi, K; Traneus, E
2015-06-15
Purpose: We describe and demonstrate a fast analytical tool for prompt-gamma emission prediction based on filter functions applied on the depth dose profile. We present the implementation in a treatment planning system (TPS) of the same algorithm for positron emitter distributions. Methods: The prediction of the desired observable is based on the convolution of filter functions with the depth dose profile. For both prompt-gammas and positron emitters, the results of Monte Carlo simulations (MC) are compared with those of the analytical tool. For prompt-gamma emission from inelastic proton-induced reactions, homogeneous and inhomogeneous phantoms alongside with patient data are used as irradiation targets of mono-energetic proton pencil beams. The accuracy of the tool is assessed in terms of the shape of the analytically calculated depth profiles and their absolute yields, compared to MC. For the positron emitters, the method is implemented in a research RayStation TPS and compared to MC predictions. Digital phantoms and patient data are used and positron emitter spatial density distributions are analyzed. Results: Calculated prompt-gamma profiles agree with MC within 3 % in terms of absolute yield and reproduce the correct shape. Based on an arbitrary reference material and by means of 6 filter functions (one per chemical element), profiles in any other material composed of those elements can be predicted. The TPS implemented algorithm is accurate enough to enable, via the analytically calculated positron emitters profiles, detection of range differences between the TPS and MC with errors of the order of 1–2 mm. Conclusion: The proposed analytical method predicts prompt-gamma and positron emitter profiles which generally agree with the distributions obtained by a full MC. The implementation of the tool in a TPS shows that reliable profiles can be obtained directly from the dose calculated by the TPS, without the need of full MC simulation.
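The core step of the described tool is a convolution of the depth-dose profile with a filter function. The sketch below shows only that generic operation; the depth-dose curve and the smoothing kernel are purely illustrative toy data, not the authors' fitted per-element filter functions:

```python
# Hedged sketch of the filtering idea: predict a secondary-emission depth
# profile by convolving the depth-dose curve with a filter kernel
# ('same'-mode discrete convolution, truncated at the grid edges).

def convolve_same(dose, kernel):
    n, m = len(dose), len(kernel)
    half = m // 2
    out = []
    for i in range(n):
        s = 0.0
        for k in range(m):
            j = i + half - k
            if 0 <= j < n:
                s += kernel[k] * dose[j]
        out.append(s)
    return out

# Toy depth-dose curve with a Bragg-peak-like maximum near the end of range.
dose = [1.0]*8 + [1.5, 2.5, 4.0, 1.0, 0.1, 0.0]
# Illustrative normalized kernel standing in for one element's filter function.
kernel = [0.2, 0.5, 0.3]
pg = convolve_same(dose, kernel)
print(len(pg), max(pg))
```

The predicted profile lives on the same depth grid as the dose, with its maximum smoothed and located near the Bragg peak; in the actual method one such kernel per chemical element is fitted against Monte Carlo reference data.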
Analytical solutions for radiation-driven winds in massive stars. I. The fast regime
Araya, I.; Curé, M.; Cidale, L. S.
2014-11-01
Accurate mass-loss rate estimates are crucial in the study of wind properties of massive stars and for testing different evolutionary scenarios. From a theoretical point of view, this implies solving a complex set of differential equations in which the radiation field and the hydrodynamics are strongly coupled. The use of an analytical expression to represent the radiation force and the solution of the equation of motion has many advantages over numerical integrations. Therefore, in this work, we present an analytical expression as a solution of the equation of motion for radiation-driven winds in terms of the force multiplier parameters. This analytical expression is obtained by employing the line acceleration expression given by Villata and the methodology proposed by Müller and Vink. In addition, we find useful relationships to determine the parameters for the line acceleration given by Müller and Vink in terms of the force multiplier parameters.
Gaussian and finite-element Coulomb method for the fast evaluation of Coulomb integrals
NASA Astrophysics Data System (ADS)
Kurashige, Yuki; Nakajima, Takahito; Hirao, Kimihiko
2007-04-01
The authors propose a new linear-scaling method for the fast evaluation of Coulomb integrals with Gaussian basis functions called the Gaussian and finite-element Coulomb (GFC) method. In this method, the Coulomb potential is expanded in a basis of mixed Gaussian and finite-element auxiliary functions that express the core and smooth Coulomb potentials, respectively. Coulomb integrals can be evaluated by three-center one-electron overlap integrals among two Gaussian basis functions and one mixed auxiliary function. Thus, the computational cost and scaling for large molecules are drastically reduced. Several applications to molecular systems show that the GFC method is more efficient than the analytical integration approach that requires four-center two-electron repulsion integrals. The GFC method realizes a near linear scaling for both one-dimensional alanine α-helix chains and three-dimensional diamond pieces.
Application of Fast Multipole Methods to the NASA Fast Scattering Code
NASA Technical Reports Server (NTRS)
Dunn, Mark H.; Tinetti, Ana F.
2008-01-01
The NASA Fast Scattering Code (FSC) is a versatile noise prediction program designed to conduct aeroacoustic noise reduction studies. The equivalent source method is used to solve an exterior Helmholtz boundary value problem with an impedance type boundary condition. The solution process in FSC v2.0 requires direct manipulation of a large, dense system of linear equations, limiting the applicability of the code to small scales and/or moderate excitation frequencies. Recent advances in the use of Fast Multipole Methods (FMM) for solving scattering problems, coupled with sparse linear algebra techniques, suggest that a substantial reduction in computer resource utilization over conventional solution approaches can be obtained. Implementation of the single level FMM (SLFMM) and a variant of the Conjugate Gradient Method (CGM) into the FSC is discussed in this paper. The culmination of this effort, FSC v3.0, was used to generate solutions for three configurations of interest. Benchmarking against previously obtained simulations indicates that a twenty-fold reduction in computational memory and up to a four-fold reduction in computer time have been achieved on a single processor.
A PDE-Based Fast Local Level Set Method
NASA Astrophysics Data System (ADS)
Peng, Danping; Merriman, Barry; Osher, Stanley; Zhao, Hongkai; Kang, Myungjoo
1999-11-01
We develop a fast method to localize the level set method of Osher and Sethian (1988, J. Comput. Phys. 79, 12) and address two important issues that are intrinsic to the level set method: (a) how to extend a quantity that is given only on the interface to a neighborhood of the interface; (b) how to reset the level set function to be a signed distance function to the interface efficiently without appreciably moving the interface. This fast local level set method reduces the computational effort by one order of magnitude, works in as much generality as the original one, and is conceptually simple and easy to implement. Our approach differs from previous related works in that we extract all the information needed from the level set function (or functions in multiphase flow) and do not need to find explicitly the location of the interface in the space domain. The complexity of our method to do tasks such as extension and distance reinitialization is O(N), where N is the number of points in space, not O(N log N) as in works by Sethian (1996, Proc. Nat. Acad. Sci. 93, 1591) and Helmsen and co-workers (1996, SPIE Microlithography IX, p. 253). This complexity estimation is also valid for quite general geometrically based front motion for our localized method.
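The localization idea above — updating the level set function only in a narrow tube around the zero level set, so cost scales with the interface rather than the grid — can be sketched as follows. This is a first-order upwind toy in 2D under the assumption of outward motion, without the paper's re-initialization or tube-rebuilding machinery:

```python
import numpy as np

def narrow_band_step(phi, speed, dt, width=3.0):
    """One localized update of phi_t + F|grad phi| = 0: only grid points
    in the tube |phi| < width around the interface are touched, so the
    per-step cost is proportional to the interface length, not the grid."""
    band = np.abs(phi) < width
    # One-sided differences (grid spacing h = 1).
    dxm = phi - np.roll(phi, 1, axis=0)
    dxp = np.roll(phi, -1, axis=0) - phi
    dym = phi - np.roll(phi, 1, axis=1)
    dyp = np.roll(phi, -1, axis=1) - phi
    # Godunov upwind gradient magnitude for outward motion (F > 0).
    grad = np.sqrt(np.maximum(dxm, 0.0) ** 2 + np.minimum(dxp, 0.0) ** 2
                   + np.maximum(dym, 0.0) ** 2 + np.minimum(dyp, 0.0) ** 2)
    out = phi.copy()
    out[band] = phi[band] - dt * speed * grad[band]
    return out

# Expand a circle of radius 20 with unit normal speed for total time 2,
# short enough that the front stays inside the tube in this toy example.
x, y = np.meshgrid(np.arange(64.0), np.arange(64.0), indexing="ij")
phi = np.sqrt((x - 32.0) ** 2 + (y - 32.0) ** 2) - 20.0
for _ in range(4):
    phi = narrow_band_step(phi, speed=1.0, dt=0.5)
```

After total time 2 the zero level set sits near radius 22, while points far from the interface (e.g. the circle's center) are never visited — the localization the abstract describes.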
Reverse radiance: a fast accurate method for determining luminance
NASA Astrophysics Data System (ADS)
Moore, Kenneth E.; Rykowski, Ronald F.; Gangadhara, Sanjay
2012-10-01
Reverse ray tracing from a region of interest backward to the source has long been proposed as an efficient method of determining luminous flux. The idea is to trace rays only from where the final flux needs to be known back to the source, rather than tracing in the forward direction from the source outward to see where the light goes. Once the reverse ray reaches the source, the radiance the equivalent forward ray would have represented is determined and the resulting flux computed. Although reverse ray tracing is conceptually simple, the method critically depends upon an accurate source model in both the near and far field. An overly simplified source model, such as an ideal Lambertian surface, substantially detracts from the accuracy and thus benefit of the method. This paper will introduce an improved method of reverse ray tracing that we call Reverse Radiance that avoids assumptions about the source properties. The new method uses measured data from a Source Imaging Goniometer (SIG) that simultaneously measures near and far field luminous data. Incorporating this data into a fast reverse ray tracing integration method yields fast, accurate data for a wide variety of illumination problems.
Arcadu, Filippo; Stampanoni, Marco; Marone, Federica
2016-06-27
This paper introduces new gridding projectors designed to efficiently perform analytical and iterative tomographic reconstruction when the forward model is represented by the derivative of the Radon transform. This inverse problem is tightly connected with an emerging X-ray tube- and synchrotron-based imaging technique: differential phase contrast based on a grating interferometer. This study shows that the proposed projectors, compared to space-based implementations of the same operators, yield high-quality analytical and iterative reconstructions, while improving the computational efficiency by a few orders of magnitude. PMID:27410628
Basal buoyancy and fast-moving glaciers: in defense of analytic force balance
NASA Astrophysics Data System (ADS)
van der Veen, C. J.
2016-06-01
The geometric approach to force balance advocated by T. Hughes in a series of publications has challenged the analytic approach by implying that the latter does not adequately account for basal buoyancy on ice streams, thereby neglecting the contribution to the gravitational driving force associated with this basal buoyancy. Application of the geometric approach to Byrd Glacier, Antarctica, yields physically unrealistic results, and it is argued that this is because of a key limiting assumption in the geometric approach. A more traditional analytic treatment of force balance shows that basal buoyancy does not affect the balance of forces on ice streams, except locally perhaps, through bridging effects.
Parabolic approximation method for fast magnetosonic wave propagation in tokamaks
Phillips, C.K.; Perkins, F.W.; Hwang, D.Q.
1985-07-01
Fast magnetosonic wave propagation in a cylindrical tokamak model is studied using a parabolic approximation method in which poloidal variations of the wave field are considered weak in comparison to the radial variations. Diffraction effects, which are ignored by ray tracing methods, are included self-consistently using the parabolic method since continuous representations for the wave electromagnetic fields are computed directly. Numerical results are presented which illustrate the cylindrical convergence of the launched waves into a diffraction-limited focal spot on the cyclotron absorption layer near the magnetic axis for a wide range of plasma confinement parameters.
Electrical impedance tomography and the fast multipole method
NASA Astrophysics Data System (ADS)
Bikowski, Jutta; Mueller, Jennifer L.
2004-10-01
A 3-D linearization-based reconstruction algorithm for Electrical Impedance Tomography suitable for breast cancer detection using data collected on a rectangular array was introduced by Mueller et al. [IEEE Biomed. Eng., 46(11), 1999]. By considering the scenario as an electrostatic problem, it is possible to model the electrodes with various charges, facilitating the use of the Fast Multipole Method (FMM) for calculating particle interactions and also supporting the use of different electrode models. In this paper the use of FMM is explained, and results in the form of reconstructed images from experimental data show that this method is an improvement.
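The FMM's central approximation — replacing many well-separated charges by a short multipole expansion about their center — can be illustrated in 2D, where the electrostatic potential is the real part of a complex function. This is a sketch of the expansion alone, not the full hierarchical algorithm or the electrode modeling of the paper:

```python
import numpy as np

def multipole_coeffs(charges, positions, center, p=8):
    """Coefficients of the 2D multipole expansion
    phi(z) = Q log(z - c) + sum_{k=1..p} a_k / (z - c)^k,
    with a_k = -sum_i q_i (z_i - c)^k / k (Greengard-Rokhlin form)."""
    z = positions - center
    Q = charges.sum()
    a = np.array([-(charges * z ** k).sum() / k for k in range(1, p + 1)])
    return Q, a

def eval_multipole(Q, a, center, targets):
    """Evaluate the truncated expansion at well-separated target points."""
    z = targets - center
    phi = Q * np.log(z)
    for k, ak in enumerate(a, start=1):
        phi = phi + ak / z ** k
    return phi.real

rng = np.random.default_rng(0)
pos = rng.uniform(-1, 1, 50) + 1j * rng.uniform(-1, 1, 50)   # sources in a box
q = rng.uniform(-1, 1, 50)
far = 10.0 + 2.0j                                            # well-separated point

direct = (q * np.log(np.abs(far - pos))).sum()               # O(N) direct sum
Q, a = multipole_coeffs(q, pos, 0.0, p=10)
approx = eval_multipole(Q, a, 0.0, np.array([far]))[0]
```

The truncation error decays like (r/R)^p for source radius r and target distance R, which is why a handful of coefficients suffices once sources and targets are well separated.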
Method of Analytic Evolution of Flat Distribution Amplitudes in QCD
Asli Tandogan, Anatoly V. Radyushkin
2011-11-01
A new analytical method of performing ERBL evolution is described. The main goal is to develop an approach that works for distribution amplitudes that do not vanish at the end points, for which the standard method of expansion in Gegenbauer polynomials is inefficient. Two cases of the initial DA are considered: a purely flat DA, given by the same constant for all x, and an antisymmetric DA given by opposite constants for x < 1/2 or x > 1/2. For a purely flat DA, the evolution is governed by an overall (x(1-x))^t dependence on the evolution parameter t times a factor that was calculated as an expansion in t. For an antisymmetric flat DA, an extra overall factor |1-2x|^{2t} appears due to a jump at x = 1/2. A good convergence was observed in the t ≲ 1/2 region. For larger t, one can use the standard method of the Gegenbauer expansion.
NASA Astrophysics Data System (ADS)
Gemayel, R.; Temime-Roussel, B.; Hellebust, S.; Gligorovski, S.; Wortham, H.
2014-12-01
A comprehensive understanding of the chemical composition of atmospheric particles is of paramount importance in order to understand their impact on health and climate. Hence, there is an imperative need for the development of appropriate analytical methods for on-line, time-resolved measurements of atmospheric particles. Laser Ablation Aerosol Particle Time of Flight Mass Spectrometry (LAAP-TOF-MS) allows real-time qualitative analysis of nanoparticles of differing composition and size. LAAP-TOF-MS is aimed at on-line and continuous measurements of atmospheric particles with a fast time resolution on the order of milliseconds. This system uses a 193 nm excimer laser for particle ablation/ionization and a 403 nm scattering laser for sizing (and single-particle detection/triggering). The charged ions are then extracted into a bi-polar time-of-flight mass spectrometer. Here we present an analytical methodology for quantitative determination of the composition and size distribution of the particles by the LAAP-TOF instrument. We developed and validated an analytical methodology for this high-time-resolution instrument by comparison with conventional analysis systems with lower time resolution (electron microscopy, optical counters…), with the final aim of rendering the methodology quantitative. This was performed with the aid of other instruments for on-line and off-line measurement, such as a Scanning Mobility Particle Sizer, electron microscopy… Validation of the analytical method was performed under laboratory conditions by detection and identification of the main targeted particle types, such as SiO2, CeO2, and TiO2.
The Analytical Methods Manual for the Western Lake Survey - Phase I is a supplement to the Analytical Methods Manual for the Eastern Lake Survey Phase I. The supplement provides a general description of the analytical methods that are used by the field laboratories and by the ana...
21 CFR 530.22 - Safe levels and analytical methods for food-producing animals.
Code of Federal Regulations, 2010 CFR
2010-04-01
... analytical method; or (3) Establish a safe level based on other appropriate scientific, technical, or regulatory criteria. (b) FDA may require the development of an acceptable analytical method for the... such an acceptable analytical method, the agency will publish notice of that requirement in the...
21 CFR 530.22 - Safe levels and analytical methods for food-producing animals.
Code of Federal Regulations, 2011 CFR
2011-04-01
... analytical method; or (3) Establish a safe level based on other appropriate scientific, technical, or regulatory criteria. (b) FDA may require the development of an acceptable analytical method for the... such an acceptable analytical method, the agency will publish notice of that requirement in the...
21 CFR 320.29 - Analytical methods for an in vivo bioavailability or bioequivalence study.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 21 Food and Drugs 5 2010-04-01 2010-04-01 false Analytical methods for an in vivo bioavailability... Analytical methods for an in vivo bioavailability or bioequivalence study. (a) The analytical method used in... ingredient or therapeutic moiety, or its active metabolite(s), achieved in the body. (b) When the...
21 CFR 530.40 - Safe levels and availability of analytical methods.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 21 Food and Drugs 6 2010-04-01 2010-04-01 false Safe levels and availability of analytical methods... Safe levels and availability of analytical methods. (a) In accordance with § 530.22, the following safe... accordance with § 530.22, the following analytical methods have been accepted by FDA:...
Fast Market Splitting Matching Method for Spot Electric Power Market
NASA Astrophysics Data System (ADS)
Sawa, Toshiyuki; Nakata, Yuji; Tsurugai, Mitsuo; Sugiyama, Shigenari
We have developed a fast, innovative matching method for the spot power market that considers network constraints. In this method, buy and sell order bids are each aggregated into the volumes of several price bands. The aggregated volume and the center of each price band are then used to calculate a band clearing price, which contains the real clearing price. The dividing and calculating process is iterated until the band width is less than the tick size of the bidding price. We applied this method to a real problem in the Japanese power market with 9 areas, 10 area-connecting lines, and 9000 orders (volume/price pairs). Our simulation results show that the new method is ten times faster than conventional linear programming. This demonstrates the effectiveness of the developed method.
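The band-splitting idea can be sketched as follows: aggregate demand and supply, locate the price band where supply first meets demand, and recurse into that band until it is narrower than the tick size. This single-area toy omits the network constraints and the exact band-aggregation bookkeeping of the paper; the order data are made up for illustration:

```python
def band_clearing_price(buys, sells, lo, hi, tick=0.01, nbands=10):
    """Iteratively narrow the price band containing the clearing price.
    buys/sells: lists of (price, volume). Demand and supply are step
    functions of price; the clearing price is where supply meets demand."""
    def demand(p):          # buyers willing to pay at least p
        return sum(v for price, v in buys if price >= p)

    def supply(p):          # sellers willing to accept at most p
        return sum(v for price, v in sells if price <= p)

    while hi - lo > tick:
        width = (hi - lo) / nbands
        for i in range(nbands):
            centre = lo + (i + 0.5) * width
            if supply(centre) >= demand(centre):   # crossing is in this band
                lo, hi = lo + i * width, lo + (i + 1) * width
                break
        else:
            lo = hi - width                        # crossing is in the top band
    return 0.5 * (lo + hi)

# Toy order book: supply first meets demand at a price of 45.
buys = [(50.0, 100.0), (40.0, 50.0)]
sells = [(30.0, 80.0), (45.0, 60.0)]
price = band_clearing_price(buys, sells, 0.0, 100.0)
```

Each pass shrinks the search interval by a factor of nbands, so only a logarithmic number of coarse evaluations is needed, which is the source of the speed-up the abstract reports over a full linear program.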
How to assess the quality of your analytical method?
Topic, Elizabeta; Nikolac, Nora; Panteghini, Mauro; Theodorsson, Elvar; Salvagno, Gian Luca; Miler, Marijana; Simundic, Ana-Maria; Infusino, Ilenia; Nordin, Gunnar; Westgard, Sten
2015-10-01
Laboratory medicine is amongst the fastest growing fields in medicine, crucial in diagnosis, support of prevention and in the monitoring of disease for individual patients and for the evaluation of treatment for populations of patients. Therefore, high quality and safety in laboratory testing has a prominent role in high-quality healthcare. Applied knowledge and competencies of professionals in laboratory medicine increase the clinical value of laboratory results by decreasing laboratory errors, increasing appropriate utilization of tests, and increasing cost effectiveness. This collective paper provides insights into how to validate laboratory assays and assess the quality of methods. It is a synopsis of the lectures at the 15th European Federation of Clinical Chemistry and Laboratory Medicine (EFLM) Continuing Postgraduate Course in Clinical Chemistry and Laboratory Medicine entitled "How to assess the quality of your method?" (Zagreb, Croatia, 24-25 October 2015). The leading topics include the who, what, and when of method validation/verification, verification of imprecision and bias, verification of reference intervals, verification of qualitative test procedures, verification of blood collection systems, comparability of results among methods and analytical systems, limit of detection, limit of quantification and limit of decision, how to assess the measurement uncertainty, the optimal use of Internal Quality Control and External Quality Assessment data, Six Sigma metrics, performance specifications, as well as biological variation. This article, which continues the annual tradition of collective papers from the EFLM continuing postgraduate courses in clinical chemistry and laboratory medicine, aims to provide further contributions by discussing the quality of laboratory methods and measurements and, at the same time, to offer continuing professional development to the attendees. PMID:26408611
Crovelli, Robert A.; revised by Charpentier, Ronald R.
2012-01-01
The U.S. Geological Survey (USGS) periodically assesses petroleum resources of areas within the United States and the world. The purpose of this report is to explain the development of an analytic probabilistic method and spreadsheet software system called Analytic Cell-Based Continuous Energy Spreadsheet System (ACCESS). The ACCESS method is based upon mathematical equations derived from probability theory. The ACCESS spreadsheet can be used to calculate estimates of the undeveloped oil, gas, and NGL (natural gas liquids) resources in a continuous-type assessment unit. An assessment unit is a mappable volume of rock in a total petroleum system. In this report, the geologic assessment model is defined first, the analytic probabilistic method is described second, and the spreadsheet ACCESS is described third. In this revised version of Open-File Report 00-044, the text has been updated to reflect modifications that were made to the ACCESS program. Two versions of the program are added as appendixes.
An analytical method for predicting postwildfire peak discharges
Moody, John A.
2012-01-01
An analytical method presented here that predicts postwildfire peak discharge was developed from analysis of paired rainfall and runoff measurements collected from selected burned basins. Data were collected from 19 mountainous basins burned by eight wildfires in different hydroclimatic regimes in the western United States (California, Colorado, Nevada, New Mexico, and South Dakota). Most of the data were collected for the year of the wildfire and for 3 to 4 years after the wildfire. These data provide some estimate of the changes with time of postwildfire peak discharges, which are known to be transient but have received little documentation. The only required inputs for the analytical method are the burned area and a quantitative measure of soil burn severity (change in the normalized burn ratio), which is derived from Landsat reflectance data and is available from either the U.S. Department of Agriculture Forest Service or the U.S. Geological Survey. The method predicts the postwildfire peak discharge per unit burned area for the year of a wildfire, the first year after a wildfire, and the second year after a wildfire. It can be used at three levels of information depending on the data available to the user; each subsequent level requires either more data or more processing of the data. Level 1 requires only the burned area. Level 2 requires the burned area and the basin average value of the change in the normalized burn ratio. Level 3 requires the burned area and the calculation of the hydraulic functional connectivity, which is a variable that incorporates the sequence of soil burn severity along hillslope flow paths within the burned basin. Measurements indicate that the unit peak discharge response increases abruptly when the 30-minute maximum rainfall intensity is greater than about 5 millimeters per hour (0.2 inches per hour). This threshold may relate to a change in runoff generation from saturated-excess to infiltration-excess overland flow. The
Quality control and analytical methods for baculovirus-based products.
Roldão, António; Vicente, Tiago; Peixoto, Cristina; Carrondo, Manuel J T; Alves, Paula M
2011-07-01
Recombinant baculoviruses (rBac) are used for many different applications, ranging from bio-insecticides to the production of heterologous proteins, high-throughput screening of gene functions, drug delivery, in vitro assembly studies, design of antiviral drugs, bio-weapons, building blocks for electronics, biosensors and chemistry, and recently as a delivery system in gene therapy. Independent of the application, the quality, quantity and purity of rBac-based products are pre-requisites demanded by regulatory authorities for product licensing. To maximize utility, it is necessary to delineate optimized production schemes either using trial-and-error experimental setups ("brute force" approach) or rational design of experiments by aid of in silico mathematical models (Systems Biology approach). For that, one must define all of the main steps in the overall process, identify the main bioengineering issues affecting each individual step and implement, if required, accurate analytical methods for product characterization. In this review, current challenges for quality control (QC) technologies for up- and down-stream processing of rBac-based products are addressed. In addition, a collection of QC methods for monitoring/control of the production of rBac-derived products is presented, as well as innovative technologies for faster process optimization and more detailed product characterization. PMID:21784235
Feature extraction from mammographic images using fast marching methods
NASA Astrophysics Data System (ADS)
Bottigli, U.; Golosio, B.
2002-07-01
Feature extraction from medical images represents a fundamental step for shape recognition and diagnostic support. The present work faces the problem of the detection of large features, such as massive lesions and organ contours, from mammographic images. The regions of interest are often characterized by an average grayness intensity that is different from the surrounding. In most cases, however, the desired features cannot be extracted by simple gray level thresholding, because of image noise and non-uniform density of the surrounding tissue. In this work, edge detection is achieved through the fast marching method (Level Set Methods and Fast Marching Methods, Cambridge University Press, Cambridge, 1999), which is based on the theory of interface evolution. Starting from a seed point in the shape of interest, a front is generated which evolves according to an appropriate speed function. This function is expressed in terms of geometric properties of the evolving interface and of image properties, and should become zero when the front reaches the desired boundary. Some examples of application of such a method to mammographic images from the CALMA database (Nucl. Instr. and Meth. A 460 (2001) 107) are presented here and discussed.
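The front-evolution idea can be sketched with a Dijkstra-style approximation of the Eikonal arrival time |grad T| = 1/F, where the speed F is chosen to collapse near strong image gradients so the expanding front stalls at boundaries. The true fast marching method solves a quadratic per-point update; this 4-neighbour shortest-path version, on a synthetic step-edge image, is only illustrative:

```python
import heapq
import numpy as np

def arrival_times(speed, seed):
    """Dijkstra-style approximation of fast-marching arrival times on a
    4-connected grid: each move into a cell costs 1/F of that cell."""
    T = np.full(speed.shape, np.inf)
    T[seed] = 0.0
    heap = [(0.0, seed)]
    while heap:
        t, (i, j) = heapq.heappop(heap)
        if t > T[i, j]:
            continue                      # stale heap entry
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < speed.shape[0] and 0 <= nj < speed.shape[1]:
                nt = t + 1.0 / speed[ni, nj]
                if nt < T[ni, nj]:
                    T[ni, nj] = nt
                    heapq.heappush(heap, (nt, (ni, nj)))
    return T

# Synthetic image with a vertical step edge at column 16: the speed is
# high in uniform regions and drops sharply where the gradient is large,
# so the front seeded on the left stalls at the edge.
img = np.zeros((32, 32))
img[:, 16:] = 1.0
gy = np.abs(np.gradient(img, axis=1))
F = 1.0 / (1.0 + 50.0 * gy)
T = arrival_times(F, (16, 4))
```

Arrival times grow slowly inside the uniform region and jump across the edge, which is exactly the behaviour used to stop the front at a lesion or organ boundary.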
[Fast Implementation Method of Protein Spots Detection Based on CUDA].
Xiong, Bangshu; Ye, Yijia; Ou, Qiaofeng; Zhang, Haodong
2016-02-01
In order to improve the efficiency of protein spot detection, a fast detection method based on CUDA was proposed. Firstly, parallel algorithms for the three most time-consuming parts of the protein spot detection algorithm were studied: image preprocessing, coarse protein spot detection and overlapping spot segmentation. Then, following the single-instruction multiple-thread execution model of CUDA, a data-space strategy of separating two-dimensional (2D) images into blocks was adopted, along with optimizing measures such as shared memory and 2D texture memory. The results show that the computational efficiency of this method is obviously improved compared to CPU calculation. As the image size increases, the method yields greater gains in efficiency; for example, for an image of size 2,048 x 2,048, the CPU method needs 52,641 ms while the GPU needs only 4,384 ms. PMID:27382745
DEMONSTRATION OF THE ANALYTIC ELEMENT METHOD FOR WELLHEAD PROTECTION
A new computer program has been developed to determine time-of-travel capture zones in relatively simple geohydrological settings. The WhAEM package contains an analytic element model that uses superposition of (many) closed form analytical solutions to generate a ground-water fl...
An analytically enriched finite element method for cohesive crack modeling.
Cox, James V.
2010-04-01
Meaningful computational investigations of many solid mechanics problems require accurate characterization of material behavior through failure. A recent approach to fracture modeling has combined the partition of unity finite element method (PUFEM) with cohesive zone models. Extension of the PUFEM to address crack propagation is often referred to as the extended finite element method (XFEM). In the PUFEM, the displacement field is enriched to improve the local approximation. Most XFEM studies have used simplified enrichment functions (e.g., generalized Heaviside functions) to represent the strong discontinuity but have lacked an analytical basis to represent the displacement gradients in the vicinity of the cohesive crack. As such, the mesh had to be sufficiently fine for the FEM basis functions to capture these gradients. In this study enrichment functions based upon two analytical investigations of the cohesive crack problem are examined. These functions have the potential of representing displacement gradients in the vicinity of the cohesive crack with a relatively coarse mesh and allow the crack to incrementally advance across each element. Key aspects of the corresponding numerical formulation are summarized. Analysis results for simple model problems are presented to evaluate if quasi-static crack propagation can be accurately followed with the proposed formulation. A standard finite element solution with interface elements is used to provide the accurate reference solution, so the model problems are limited to a straight, mode I crack in plane stress. Except for the cohesive zone, the material model for the problems is homogenous, isotropic linear elasticity. The effects of mesh refinement, mesh orientation, and enrichment schemes that enrich a larger region around the cohesive crack are considered in the study. Propagation of the cohesive zone tip and crack tip, time variation of the cohesive zone length, and crack profiles are presented. The analysis
NASA Astrophysics Data System (ADS)
Chakraborty, Bidisha; Heyde, Brecht; Alessandrini, Martino; D'hooge, Jan
2016-04-01
Image registration techniques using free-form deformation models have shown promising results for 3D myocardial strain estimation from ultrasound. However, the use of this technique has mostly been limited to research institutes due to the high computational demand, which is primarily due to the computational load of the regularization term ensuring spatially smooth cardiac strain estimates. Indeed, this term typically requires evaluating derivatives of the transformation field numerically in each voxel of the image during every iteration of the optimization process. In this paper, we replace this time-consuming step with a closed-form solution directly associated with the transformation field, resulting in a speed-up factor of ~10–60,000 for a typical 3D B-mode image of 250³ and 500³ voxels, depending upon the size and the parametrization of the transformation field. The performance of the numeric and the analytic solutions was contrasted by computing tracking and strain accuracy on two realistic synthetic 3D cardiac ultrasound sequences, mimicking two ischemic motion patterns. Mean and standard deviation of the displacement errors over the cardiac cycle for the numeric and analytic solutions were 0.68±0.40 mm and 0.75±0.43 mm respectively. Correlations for the radial, longitudinal and circumferential strain components at end-systole were 0.89, 0.83 and 0.95 versus 0.90, 0.88 and 0.92 for the numeric and analytic regularization respectively. The analytic solution matched the performance of the numeric solution as no statistically significant differences (p>0.05) were found when expressed in terms of bias or limits-of-agreement.
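The speed-up described above comes from replacing per-voxel numeric differentiation of the smoothness penalty with a closed-form quadratic in the transformation parameters. A simplified 1D analogue (not the authors' B-spline derivation): the discrete bending penalty equals c^T(D^T D)c for a difference matrix D that is built once, so each optimization iteration needs only a matrix-vector product:

```python
import numpy as np

def numeric_penalty(c):
    """Smoothness penalty via per-point numeric second differences
    (the expensive route, re-evaluated at every iteration)."""
    d2 = c[:-2] - 2.0 * c[1:-1] + c[2:]
    return float((d2 ** 2).sum())

def analytic_penalty_matrix(n):
    """Closed form: the same penalty is c^T (D^T D) c, with the second-
    difference operator D assembled once from the parametrization."""
    D = np.zeros((n - 2, n))
    for i in range(n - 2):
        D[i, i:i + 3] = [1.0, -2.0, 1.0]
    return D.T @ D

c = np.random.default_rng(1).normal(size=50)   # toy 1D transformation params
Q = analytic_penalty_matrix(50)                # precomputed once
num = numeric_penalty(c)
ana = float(c @ Q @ c)
```

The two routes give identical values; the gain is that Q (and, in the full method, the gradient 2Qc) is reused across all iterations instead of re-differencing the field voxel by voxel.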
NASA Astrophysics Data System (ADS)
Meeks, Sanford L.; Bova, Frank J.; Buatti, John M.; Friedman, William A.; Eyster, Brian; Kendrick, Lance A.
1999-11-01
Linear accelerator (linac) radiosurgery utilizes non-coplanar arc therapy delivered through circular collimators. Generally, spherically symmetric arc sets are used, resulting in nominally spherical dose distributions. Various treatment planning parameters may be manipulated to provide dose conformation to irregular lesions. Iterative manipulation of these variables can be a difficult and time-consuming task, because (a) understanding the effect of these parameters is complicated and (b) three-dimensional (3D) dose calculations are computationally expensive. This manipulation can be simplified, however, because the prescription isodose surface for all single isocentre distributions can be approximated by conic sections. In this study, the effects of treatment planning parameter manipulation on the dimensions of the treatment isodose surface were determined empirically. These dimensions were then fitted to analytic functions, assuming that the dose distributions were characterized as conic sections. These analytic functions allowed real-time approximation of the 3D isodose surface. Iterative plan optimization, either manual or automated, is achieved more efficiently using this real time approximation of the dose matrix. Subsequent to iterative plan optimization, the analytic function is related back to the appropriate plan parameters, and the dose distribution is determined using conventional dosimetry calculations. This provides a pseudo-inverse approach to radiosurgery optimization, based solely on geometric considerations.
Robust, Scalable, and Fast Bootstrap Method for Analyzing Large Scale Data
NASA Astrophysics Data System (ADS)
Basiri, Shahab; Ollila, Esa; Koivunen, Visa
2016-02-01
In this paper we address the problem of performing statistical inference for large scale data sets, i.e., Big Data. The volume and dimensionality of the data may be so high that it cannot be processed or stored in a single computing node. We propose a scalable, statistically robust and computationally efficient bootstrap method, compatible with distributed processing and storage systems. Bootstrap resamples are constructed with a smaller number of distinct data points on multiple disjoint subsets of data, similarly to the bag of little bootstraps method (BLB) [1]. Then significant savings in computation are achieved by avoiding the re-computation of the estimator for each bootstrap sample. Instead, a computationally efficient fixed-point estimation equation is analytically solved via a smart approximation following the Fast and Robust Bootstrap method (FRB) [2]. Our proposed bootstrap method facilitates the use of highly robust statistical methods in analyzing large scale data sets. The favorable statistical properties of the method are established analytically. Numerical examples demonstrate scalability, low complexity and robust statistical performance of the method in analyzing large data sets.
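The BLB resampling trick referenced above — drawing full-size multinomial weights over a small set of distinct points instead of materializing n-point resamples — can be sketched for the standard error of the mean. The FRB fixed-point correction step of the paper is omitted, and all parameter values here are illustrative:

```python
import numpy as np

def blb_std_error(data, n_subsets=5, subset_size=200, n_boot=50, rng=None):
    """Bag-of-little-bootstraps sketch: on each small disjoint-style subset,
    draw multinomial weights summing to the FULL sample size n, so each
    weighted statistic mimics an n-point bootstrap resample while touching
    only subset_size distinct points."""
    if rng is None:
        rng = np.random.default_rng(0)
    n = len(data)
    errs = []
    for _ in range(n_subsets):
        sub = rng.choice(data, size=subset_size, replace=False)
        stats = []
        for _ in range(n_boot):
            w = rng.multinomial(n, np.ones(subset_size) / subset_size)
            stats.append((w * sub).sum() / n)   # weighted mean of an n-point resample
        errs.append(np.std(stats))
    return float(np.mean(errs))

data = np.random.default_rng(42).normal(loc=3.0, scale=2.0, size=100_000)
se = blb_std_error(data)   # should approximate sigma / sqrt(n) ~= 0.0063
```

Each inner statistic costs O(subset_size) rather than O(n), which is what makes the scheme compatible with distributed storage of the subsets.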
The between-day reproducibility of fasting, satiety-related analytes in 8- to 11-year-old boys.
Allsop, Susan; Rumbold, Penny L S; Green, Benjamin P
2016-10-01
The aim of the present study was to establish the between-day reproducibility of fasting plasma GLP-1(7-36), glucagon, leptin, insulin and glucose in lean and overweight/obese 8- to 11-year-old boys. A within-group study design was utilised wherein the boys attended two study days, separated by 1 week, on each of which a fasting fingertip capillary blood sample was obtained. Deming regression, mean difference, Bland-Altman limits of agreement (LOA) and typical imprecision as a percentage coefficient of variation (CV%) were used to assess between-day reproducibility. On a group level, Deming regression detected no evidence of systematic or proportional bias between days for any of the satiety-related analytes; however, only glucose and plasma GLP-1(7-36) displayed low typical and random imprecision. When analysed according to body composition, good reproducibility was maintained for glucose in the overweight/obese boys and for plasma GLP-1(7-36) in those with lean body mass. The present findings demonstrate that the measurement of glucose and plasma GLP-1(7-36) by fingertip capillary sampling is, on a group level, reproducible between days in 8- to 11-year-old boys. Comparisons of blood glucose obtained by fingertip capillary sampling can be made between lean and overweight/obese 8- to 11-year-old boys. At present, the comparison of fasting plasma GLP-1(7-36) according to body weight is inappropriate owing to the high between-day imprecision observed in lean boys. The use of fingertip capillary sampling in the measurement of satiety-related analytes has the potential to provide a better understanding of the mechanisms that affect appetite and feeding behaviour in children. PMID:27265877
Segmentation of hand radiographs using fast marching methods
NASA Astrophysics Data System (ADS)
Chen, Hong; Novak, Carol L.
2006-03-01
Rheumatoid Arthritis is one of the most common chronic diseases. Joint space width in hand radiographs is evaluated to assess joint damage in order to monitor progression of disease and response to treatment. Manual measurement of joint space width is time-consuming and highly prone to inter- and intra-observer variation. We propose a method for automatic extraction of finger bone boundaries using fast marching methods for quantitative evaluation of joint space width. The proposed algorithm includes two stages: location of hand joints followed by extraction of bone boundaries. By setting the propagation speed of the wave front as a function of image intensity values, the fast marching algorithm extracts the skeleton of the hands, in which each branch corresponds to a finger. The finger joint locations are then determined by using the image gradients along the skeletal branches. In order to extract bone boundaries at joints, the gradient magnitudes are utilized for setting the propagation speed, and the gradient phases are used for discriminating the boundaries of adjacent bones. The bone boundaries are detected by searching for the fastest paths from one side of each joint to the other side. Finally, joint space width is computed based on the extracted upper and lower bone boundaries. The algorithm was evaluated on a test set of 8 two-hand radiographs, including images from healthy patients and from patients suffering from arthritis, gout and psoriasis. Using our method, 97% of 208 joints were accurately located and 89% of 416 bone boundaries were correctly extracted.
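The front-propagation idea behind the segmentation above can be illustrated with a discrete stand-in: a Dijkstra-style arrival-time computation on a grid whose local propagation speed depends on image intensity. True fast marching solves the eikonal equation with upwind finite differences; the grid and speed values below are invented for illustration:

```python
import heapq

def fastest_arrival_time(speed, start, goal):
    """Propagate a front from 'start' over a grid: arrival time at each cell
    grows with 1/speed, so the front races along high-speed (bright) regions.
    This Dijkstra variant is a discrete analogue of fast marching."""
    rows, cols = len(speed), len(speed[0])
    INF = float("inf")
    dist = [[INF] * cols for _ in range(rows)]
    dist[start[0]][start[1]] = 0.0
    heap = [(0.0, start)]
    while heap:
        t, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            return t                         # first pop of goal is optimal
        if t > dist[r][c]:
            continue                         # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nt = t + 1.0 / speed[nr][nc]   # low speed -> late arrival
                if nt < dist[nr][nc]:
                    dist[nr][nc] = nt
                    heapq.heappush(heap, (nt, (nr, nc)))
    return INF

# A bright band (speed 10) through a dark background (speed 1): the fastest
# path follows the band, analogous to tracking a bone boundary by intensity.
grid = [[10.0 if r == 1 else 1.0 for _ in range(5)] for r in range(3)]
t = fastest_arrival_time(grid, (1, 0), (1, 4))
print(round(t, 1))   # 4 steps along the bright row: 4 * (1/10) = 0.4
```

Setting the speed from gradient magnitude instead of raw intensity, as the abstract describes, only changes how `speed` is built; the propagation itself is unchanged.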
A Fast Estimation Method of Railway Passengers' Flow
NASA Astrophysics Data System (ADS)
Nagasaki, Yusaku; Asuka, Masashi; Komaya, Kiyotoshi
To evaluate a train schedule from the viewpoint of passengers' convenience, it is important to know each passenger's choice of trains and transfer stations en route to his/her destination. Because such passenger behavior is difficult to measure directly, estimation methods for railway passenger flow have been proposed to support this kind of evaluation. However, a train schedule planning system equipped with those methods is impractical because the estimation takes too much time to complete. In this article, the authors propose a fast passenger-flow estimation method that exploits features of the passenger-flow graph, using a preparatory search based on each train's arrival time at each station. The authors also show the results of applying the method to a railway in an urban area.
Communications overlapping in fast multipole particle dynamics methods
Kurzak, Jakub; Pettitt, B. Montgomery (E-mail: pettitt@uh.edu)
2005-03-01
In molecular dynamics, the fast multipole method (FMM) is an attractive alternative to Ewald summation for calculating electrostatic interactions because of its operation count. However, when applied to small particle systems and taken to many processors, it places a high demand on interprocessor communication. In a distributed memory environment this demand severely limits the applicability of the FMM to systems of O(10K) atoms. We present an algorithm that allows for fine-grained overlap of communication and computation, while not sacrificing synchronization and determinism in the equations of motion. The method avoids contention in the communication subsystem, making it feasible to use the FMM for smaller systems on larger numbers of processors. Our algorithm also facilitates the application of multiple time stepping techniques within the FMM. We present scaling at a reasonably high level of accuracy compared with optimized Ewald methods.
Analytical method to estimate resin cement diffusion into dentin
NASA Astrophysics Data System (ADS)
de Oliveira Ferraz, Larissa Cristina; Ubaldini, Adriana Lemos Mori; de Oliveira, Bruna Medeiros Bertol; Neto, Antonio Medina; Sato, Fracielle; Baesso, Mauro Luciano; Pascotto, Renata Corrêa
2016-05-01
This study analyzed the diffusion of two resin luting agents (resin cements) into dentin, with the aim of presenting an analytical method for estimating the thickness of the diffusion zone. Class V cavities were prepared in the buccal and lingual surfaces of molars (n=9). Indirect composite inlays were luted into the cavities with either a self-adhesive or a self-etch resin cement. The teeth were sectioned bucco-lingually and the cement-dentin interface was analyzed by using micro-Raman spectroscopy (MRS) and scanning electron microscopy. The evolution of the peak intensities of the Raman bands collected from the functional groups corresponding to the resin monomer (C–O–C, 1113 cm⁻¹) present in the cements and to the mineral content (P–O, 961 cm⁻¹) in dentin followed sigmoid-shaped functions. A Boltzmann function (BF) was then fitted to the peaks encountered at 1113 cm⁻¹ to estimate the resin cement diffusion into dentin. The BF identified a resin cement-dentin diffusion zone of 1.8±0.4 μm for the self-adhesive cement and 2.5±0.3 μm for the self-etch cement. This analysis allowed the authors to estimate the diffusion of the resin cements into the dentin. Fitting the MRS data to the BF contributed to and is relevant for future studies of the adhesive interface.
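A minimal sketch of fitting a Boltzmann function to a sigmoid-shaped intensity profile, assuming the common four-parameter form y(x) = A2 + (A1 - A2)/(1 + exp((x - x0)/dx)). The synthetic data, the grid-search fit, and the 4·dx width convention are illustrative, not the paper's procedure:

```python
import math

def boltzmann(x, a1, a2, x0, dx):
    """Boltzmann sigmoid: steps from a1 (x << x0) to a2 (x >> x0), with the
    transition centred at x0 and characteristic width dx."""
    return a2 + (a1 - a2) / (1.0 + math.exp((x - x0) / dx))

# Synthetic "Raman peak intensity vs. position" profile across an interface
# (positions in micrometres; parameters are invented for illustration).
xs = [0.25 * i for i in range(41)]                      # 0 .. 10 um
ys = [boltzmann(x, 1.0, 0.0, 5.0, 0.6) for x in xs]

def fit_boltzmann(xs, ys):
    """Least-squares fit by coarse grid search; a1/a2 are pinned to the
    profile endpoints so only (x0, dx) are searched."""
    a1, a2 = ys[0], ys[-1]
    best = None
    for i in range(1, 100):
        x0 = xs[0] + (xs[-1] - xs[0]) * i / 100.0
        for j in range(1, 60):
            dx = 0.05 * j
            err = sum((y - boltzmann(x, a1, a2, x0, dx)) ** 2
                      for x, y in zip(xs, ys))
            if best is None or err < best[0]:
                best = (err, x0, dx)
    return best[1], best[2]

x0, dx = fit_boltzmann(xs, ys)
width = 4.0 * dx   # x0 +/- 2*dx spans ~76% of the step, one width convention
print(round(x0, 2), round(dx, 2), round(width, 2))
```

In practice one would use a nonlinear least-squares routine rather than a grid search; the point here is only that the transition width of the fitted sigmoid is what yields the diffusion-zone thickness.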
Station-Keeping For COMS Satellite by Analytic Methods
NASA Astrophysics Data System (ADS)
Kim, Young-Rok; Kim, Hae-Yeon; Park, Sang-Young; Lee, Byoung-Sun; Park, Jae Woo; Choi, Kyu-Hong
2006-09-01
In this paper, an automated algorithm for analyzing and scheduling station-keeping maneuvers is presented for the Communication, Ocean and Meteorological Satellite (COMS). The perturbation analysis for keeping the position of the geostationary satellite is performed by analytic methods. The east/west and north/south station-keeping maneuvers are simulated for COMS. Weekly east/west and biweekly north/south station-keeping maneuvers are investigated for a period of one year, and various station-keeping orbital parameters are analyzed. As the position of COMS has not yet been decided between 128.2°E and 116.0°E, both cases are simulated. For the case of 128.2°E, east/west station-keeping requires a ΔV of 3.50 m/s and north/south station-keeping requires a ΔV of 52.71 m/s for the year 2009. For the case of 116.0°E, ΔV values of 3.86 m/s and 52.71 m/s are required for east/west and north/south station-keeping, respectively. The results show that the station-keeping maneuver of COMS is more effective at 128.2°E.
NIOSH Manual of Analytical Methods (third edition). Fourth supplement
Not Available
1990-08-15
The NIOSH Manual of Analytical Methods, 3rd edition, was updated for the following chemicals: allyl-glycidyl-ether, 2-aminopyridine, aspartame, bromine, chlorine, n-butylamine, n-butyl-glycidyl-ether, carbon-dioxide, carbon-monoxide, chlorinated-camphene, chloroacetaldehyde, p-chlorophenol, crotonaldehyde, 1,1-dimethylhydrazine, dinitro-o-cresol, ethyl-acetate, ethyl-formate, ethylenimine, sodium-fluoride, hydrogen-fluoride, cryolite, sodium-hexafluoroaluminate, formic-acid, hexachlorobutadiene, hydrogen-cyanide, hydrogen-sulfide, isopropyl-acetate, isopropyl-ether, isopropyl-glycidyl-ether, lead, lead-oxide, maleic-anhydride, methyl-acetate, methyl-acrylate, methyl-tert-butyl ether, methyl-cellosolve-acetate, methylcyclohexanol, 4,4'-methylenedianiline, monomethylaniline, monomethylhydrazine, nitric-oxide, p-nitroaniline, phenyl-ether, phenyl-ether-biphenyl mixture, phenyl-glycidyl-ether, phenylhydrazine, phosphine, ronnel, sulfuryl-fluoride, talc, tributyl-phosphate, 1,1,2-trichloro-1,2,2-trifluoroethane, trimellitic-anhydride, triorthocresyl-phosphate, triphenyl-phosphate, and vinyl-acetate.
Analytical Methods for Assessing Chondroitin Sulfate in Human Plasma.
Mantovani, Veronica; Galeotti, Fabio; Maccari, Francesca; Volpi, Nicola
2016-03-01
Chondroitin sulfate (CS) is a linear heteropolysaccharide of repeating disaccharide units bearing sulfate groups in various positions, commonly at C4 and/or C6 of galactosamine. CS plays important roles in various (patho)physiological processes and also performs intriguing biological and therapeutic activities. Plasmatic CS is mainly composed of nonsulfated and 4-sulfated disaccharides. To obtain samples for the determination of CS amount and composition in blood/plasma, dried blood spots (DBS) could be used. DBSs have many advantages over other laboratory methods, allowing for large-scale population screening. Many analytical techniques may be used for the determination of CS. In particular, CE has proved to be a very attractive alternative separation technique for complex polysaccharide characterization. In this work, we compared CS levels between plasma and DBS samples, using CE equipped with a highly sensitive laser-induced fluorescence detector. CS from DBS differs from plasma CS owing to its higher content of disaccharides sulfated at C4 and C6. This is due to the presence of the more sulfated CS derived from the blood cellular fraction, in particular leukocytes. The identification and quantification of CS in blood plasma could be a useful prognostic and diagnostic tool in pathological conditions and for pharmacological applications. PMID:26961813
A method of fast mosaic for massive UAV images
NASA Astrophysics Data System (ADS)
Xiang, Ren; Sun, Min; Jiang, Cheng; Liu, Lei; Zheng, Hui; Li, Xiaodong
2014-11-01
With the development of UAV technology, UAVs are widely used in multiple fields such as agriculture, forest protection, mineral exploration, natural disaster management and surveillance of public security events. In contrast to traditional manned aerial remote sensing platforms, UAVs are cheaper and more flexible to use, so users can obtain massive image data with them. Processing that image data, however, requires a great deal of time; for example, Pix4UAV needs approximately 10 hours to process 1000 images on a high-performance PC. Disaster management and many other fields require a quick response, which is hard to achieve with massive image data. To address the high time consumption and manual interaction of existing approaches, this article presents a solution for fast UAV image stitching. GPS and POS data are used to pre-process the original UAV images; flight belts and the relations between belts and images are recognized automatically by the program, and useless images are discarded at the same time. This accelerates the search for match points between images. The Levenberg-Marquardt algorithm is improved so that parallel computing can be applied, which shortens the time of global optimization notably. Besides the traditional mosaic result, the system can also generate a superoverlay result for Google Earth, providing a fast and easy way to display the result data. To verify the feasibility of this method, a fast mosaic system for massive UAV images was developed; it is fully automated, and no manual interaction is needed once the original images and GPS data are provided. A test using 800 images of the Kelan River in Xinjiang Province shows that this system reduces time consumption by 35%-50% compared with traditional methods, markedly increasing the speed of UAV image processing.
Fast triangulated vortex methods for the 2D Euler equations
NASA Astrophysics Data System (ADS)
Russo, Giovanni; Strain, John A.
1994-04-01
Vortex methods for inviscid incompressible two-dimensional fluid flow are usually based on blob approximations. This paper presents a vortex method in which the vorticity is approximated by a piecewise polynomial interpolant on a Delaunay triangulation of the vortices. An efficient reconstruction of the Delaunay triangulation at each step makes the method accurate for long times. The vertices of the triangulation move with the fluid velocity, which is reconstructed from the vorticity via a simplified fast multipole method for the Biot-Savart law with a continuous source distribution. The initial distribution of vortices is constructed from the initial vorticity field by an adaptive approximation method which produces good accuracy even for discontinuous initial data. Numerical results show that the method is highly accurate over long time intervals. Experiments with single and multiple circular and elliptical rotating patches of both piecewise constant and smooth vorticity indicate that the method produces much smaller errors than blob methods with the same number of degrees of freedom, at little additional cost. Generalizations to domains with boundaries, viscous flow, and three space dimensions are discussed.
Evaluating protocols and analytical methods for peptide adsorption experiments.
Fears, Kenan P; Petrovykh, Dmitri Y; Clark, Thomas D
2013-12-01
This paper evaluates analytical techniques that are relevant for performing reliable quantitative analysis of peptide adsorption on surfaces. Two salient problems are addressed: determining the solution concentrations of model GG-X-GG, X5, and X10 oligopeptides (G = glycine, X = a natural amino acid), and quantitative analysis of these peptides following adsorption on surfaces. To establish a uniform methodology for measuring peptide concentrations in water across the entire GG-X-GG and Xn series, three methods were assessed: UV spectroscopy of peptides having a C-terminal tyrosine, the bicinchoninic acid (BCA) protein assay, and amino acid (AA) analysis. Due to shortcomings or caveats associated with each of the different methods, none were effective at measuring concentrations across the entire range of representative model peptides. In general, reliable measurements were within 30% of the nominal concentration based on the weight of as-received lyophilized peptide. In quantitative analysis of model peptides adsorbed on surfaces, X-ray photoelectron spectroscopy (XPS) data for a series of lysine-based peptides (GGKGG, K5, and K10) on Au substrates, and for controls incubated in buffer in the absence of peptides, suggested a significant presence of aliphatic carbon species. Detailed analysis indicated that this carbonaceous contamination adsorbed from the atmosphere after the peptide deposition. The inferred adventitious nature of the observed aliphatic carbon was supported by control experiments in which substrates were sputter-cleaned by Ar(+) ions under ultra-high vacuum (UHV) then re-exposed to ambient air. In contrast to carbon contamination, no adventitious nitrogen species were detected on the controls; therefore, the relative surface densities of irreversibly-adsorbed peptides were calculated by normalizing the N/Au ratios by the average number of nitrogen atoms per residue. PMID:24706133
Fast state-space methods for inferring dendritic synaptic connectivity.
Pakman, Ari; Huggins, Jonathan; Smith, Carl; Paninski, Liam
2014-06-01
We present fast methods for filtering voltage measurements and performing optimal inference of the location and strength of synaptic connections in large dendritic trees. Given noisy, subsampled voltage observations we develop fast l1-penalized regression methods for Kalman state-space models of the neuron voltage dynamics. The value of the l1-penalty parameter is chosen using cross-validation or, for low signal-to-noise ratio, a Mallows' Cp-like criterion. Using low-rank approximations, we reduce the inference runtime from cubic to linear in the number of dendritic compartments. We also present an alternative, fully Bayesian approach to the inference problem using a spike-and-slab prior. We illustrate our results with simulations on toy and real neuronal geometries. We consider observation schemes that either scan the dendritic geometry uniformly or measure linear combinations of voltages across several locations with random coefficients. For the latter, we show how to choose the coefficients to offset the correlation between successive measurements imposed by the neuron dynamics. This results in a "compressed sensing" observation scheme, with an important reduction in the number of measurements required to infer the synaptic weights. PMID:24077932
A square-wave adsorptive stripping voltammetric method for determination of fast green dye.
Al-Ghamdi, Ali F
2009-01-01
Square-wave adsorptive stripping voltammetric (SW-AdSV) determinations of trace concentrations of the coloring agent fast green were described. The analytical methodology used was based on the adsorptive preconcentration of the dye on the hanging mercury drop electrode, and then a negative sweep was initiated. In pH 10 carbonate supporting electrolyte, fast green gave a well-defined and sensitive SW-AdSV peak at -1220 mV. The electroanalytical determination of this dye was found to be optimized in carbonate buffer (pH 10) with the following experimental conditions: accumulation time (120 s); accumulation potential (-0.8 V); scan rate (800 mV/s); pulse amplitude (90 mV); frequency (90 Hz); surface area of the working electrode (0.6 mm²); and the convection rate (2000 rpm). Under these optimized conditions, the AdSV peak current was proportional over the concentration range 2 × 10⁻⁸ to 6 × 10⁻⁷ M (r = 0.999), with an LOD of 1.63 × 10⁻¹⁰ M (0.132 ppb). This analytical approach possessed more enhanced sensitivity than conventional chromatography or spectrophotometry, and was simple and quick. The precision of the method in terms of RSD was 0.17%, whereas the accuracy was evaluated via the mean recovery of 99.6%. Possible interferences by several substances usually present as food additive azo dyes (E110, E102, E123, and E129), natural and artificial sweeteners, and antioxidants were also investigated. Applicability of the developed electroanalysis method was illustrated via the determination of fast green in ice cream and soft drink samples. PMID:20166589
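A calibration line of peak current versus concentration, as used above, can be fitted with ordinary least squares and an LOD derived from it. The readings and the blank noise level below are hypothetical, and the 3-sigma LOD criterion is a common convention rather than the paper's stated procedure:

```python
import math

# Illustrative calibration data (not the paper's raw values): peak current
# (nA) versus fast green concentration (M), assumed linear over the range.
conc = [2e-8, 1e-7, 2e-7, 3e-7, 4e-7, 5e-7, 6e-7]
current = [0.9, 4.1, 8.0, 12.1, 15.9, 20.2, 24.0]     # hypothetical readings

def linear_fit(xs, ys):
    """Ordinary least-squares line y = slope*x + intercept, plus Pearson r."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    syy = sum((y - my) ** 2 for y in ys)
    slope = sxy / sxx
    intercept = my - slope * mx
    r = sxy / math.sqrt(sxx * syy)
    return slope, intercept, r

slope, intercept, r = linear_fit(conc, current)
# 3-sigma LOD convention: 3 * (standard deviation of blank-level noise)
# divided by the calibration slope; 0.05 nA blank noise is assumed here.
lod = 3 * 0.05 / slope
print(round(r, 4))
```

With real voltammetric data the same two numbers, r for linearity and the slope for sensitivity, are what justify quoting a working range and an LOD.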
Estimating Fuel Cycle Externalities: Analytical Methods and Issues, Report 2
Barnthouse, L.W.; Cada, G.F.; Cheng, M.-D.; Easterly, C.E.; Kroodsma, R.L.; Lee, R.; Shriner, D.S.; Tolbert, V.R.; Turner, R.S.
1994-07-01
that also have not been fully addressed. This document contains two types of papers that seek to fill part of this void. Some of the papers describe analytical methods that can be applied to one of the five steps of the damage function approach. The other papers discuss some of the complex issues that arise in trying to estimate externalities. This report, the second in a series of eight reports, is part of a joint study by the U.S. Department of Energy (DOE) and the Commission of the European Communities (EC)* on the externalities of fuel cycles. Most of the papers in this report were originally written as working papers during the initial phases of this study. The report describes the (non-radiological) atmospheric dispersion modeling that the study uses; reviews much of the relevant literature on ecological and health effects, and on the economic valuation of those impacts; contains several papers on some of the more complex and contentious issues in estimating externalities; and describes a method for depicting the quality of scientific information that a study uses. The analytical methods and issues that this report discusses generally pertain to more than one of the fuel cycles, though not necessarily to all of them. The report is divided into six parts, each one focusing on a different subject area.
Analytic Method to Estimate Particle Acceleration in Flux Ropes
NASA Technical Reports Server (NTRS)
Guidoni, S. E.; Karpen, J. T.; DeVore, C. R.
2015-01-01
The mechanism that accelerates particles to the energies required to produce the observed high-energy emission in solar flares is not well understood. Drake et al. (2006) proposed a kinetic mechanism for accelerating electrons in contracting magnetic islands formed by reconnection. In this model, particles that gyrate around magnetic field lines transit from island to island, increasing their energy by Fermi acceleration in those islands that are contracting. Based on these ideas, we present an analytic model to estimate the energy gain of particles orbiting around field lines inside a flux rope (2.5D magnetic island). We calculate the change in the velocity of the particles as the flux rope evolves in time. The method assumes a simple profile for the magnetic field of the evolving island; it can be applied to any case where flux ropes are formed. In our case, the flux-rope evolution is obtained from our recent high-resolution, compressible 2.5D MHD simulations of breakout eruptive flares. The simulations allow us to resolve in detail the generation and evolution of large-scale flux ropes as a result of sporadic and patchy reconnection in the flare current sheet. Our results show that the initial energy of particles can be increased by 2-5 times in a typical contracting island, before the island reconnects with the underlying arcade. Therefore, particles need to transit only from 3-7 islands to increase their energies by two orders of magnitude. These macroscopic regions, filled with a large number of particles, may explain the large observed rates of energetic electron production in flares. We conclude that this mechanism is a promising candidate for electron acceleration in flares, but further research is needed to extend our results to 3D flare conditions.
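The closing estimate above (2-5x energy gain per island, hence 3-7 transits for two orders of magnitude) is simple multiplicative arithmetic, sketched here:

```python
import math

def islands_needed(gain_per_island, target_factor=100.0):
    """Number of contracting-island transits needed to multiply a particle's
    energy by target_factor, given a fixed multiplicative gain per island."""
    return math.ceil(math.log(target_factor) / math.log(gain_per_island))

# Per-island gains of 2x-5x (the range quoted in the abstract) imply that
# two orders of magnitude in energy takes only a handful of transits.
print(islands_needed(5.0), islands_needed(2.0))   # → 3 7
```

The interesting physics is of course in justifying the per-island gain; once that number is in hand, the transit count follows directly from the logarithm.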
Zhao, Huaying; Brautigam, Chad A.; Ghirlando, Rodolfo; Schuck, Peter
2013-01-01
Significant progress in the interpretation of analytical ultracentrifugation (AUC) data in the last decade has led to profound changes in the practice of AUC, both for sedimentation velocity (SV) and sedimentation equilibrium (SE). Modern computational strategies have allowed for the direct modeling of the sedimentation process of heterogeneous mixtures, resulting in SV size-distribution analyses with significantly improved detection limits and strongly enhanced resolution. These advances have transformed the practice of SV, rendering it the primary method of choice for most existing applications of AUC, such as the study of protein self- and hetero-association, the study of membrane proteins, and applications in biotechnology. New global multi-signal modeling and mass conservation approaches in SV and SE, in conjunction with the effective-particle framework for interpreting the sedimentation boundary structure of interacting systems, as well as tools for explicit modeling of the reaction/diffusion/sedimentation equations to experimental data, have led to more robust and more powerful strategies for the study of reversible protein interactions and multi-protein complexes. Furthermore, modern mathematical modeling capabilities have allowed for a detailed description of many experimental aspects of the acquired data, thus enabling novel experimental opportunities, with important implications for both sample preparation and data acquisition. The goal of the current commentary is to supplement the previous AUC protocols in Current Protocols in Protein Science 20.3 (1999), 20.7 (2003), and 7.12 (2008), and to provide an update describing the current tools for the study of soluble proteins, detergent-solubilized membrane proteins and their interactions by SV and SE. PMID:23377850
Fast radiative transfer of dust reprocessing in semi-analytic models with artificial neural networks
NASA Astrophysics Data System (ADS)
Silva, Laura; Fontanot, Fabio; Granato, Gian Luigi
2012-06-01
A serious concern for semi-analytical galaxy formation models, aiming to simulate multiwavelength surveys and to thoroughly explore the model parameter space, is the extremely time-consuming numerical solution of the radiative transfer of stellar radiation through dusty media. To overcome this problem, we have implemented an artificial neural network (ANN) algorithm in the radiative transfer code GRASIL, in order to significantly speed up the computation of the infrared (IR) spectral energy distribution (SED). The ANN we have implemented is of general use, in that its input neurons are defined as those quantities effectively determining the shape of the IR SED. Therefore, the training of the ANN can be performed with any model and then applied to other models. We made a blind test to check the algorithm, by applying a net trained with a standard chemical evolution model (i.e. CHE_EVO) to a mock catalogue extracted from the semi-analytic model MORGANA, and compared galaxy counts and evolution of the luminosity functions in several near-IR to sub-millimetre (sub-mm) bands, as well as the spectral differences for a large subset of randomly extracted models. The ANN approximates the full computation excellently, with a gain in CPU time of ˜2 orders of magnitude. It is advisable, however, that the training cover reasonably well the range of values of the input neurons encountered in the application. Indeed, in the sub-mm at high redshift, a tiny fraction of models whose input-neuron values fall outside the range of the trained net receive wrong answers from the ANN. These are extreme starbursting models with high optical depths, favourably selected by sub-mm observations, and are difficult to predict a priori.
A Method for Fast Computation of FTLE Fields
NASA Astrophysics Data System (ADS)
Brunton, Steven; Rowley, Clarence
2008-11-01
An efficient method for computing finite time Lyapunov exponent (FTLE) fields is investigated. FTLE fields, which measure the stretching between nearby particles, are important in determining transport mechanisms in unsteady flows. Ridges of the FTLE field are Lagrangian Coherent Structures (LCS) and provide an unsteady analogue of invariant manifolds from dynamical systems theory. FTLE field computations are expensive because of the large number of particle trajectories which must be integrated. However, when computing a time series of fields, it is possible to use the integrated trajectories at a previous time to compute an approximation of the integrated trajectories initialized at a later time, resulting in significant computational savings. This work provides analytic estimates for accumulated error and computation time as well as simulations comparing exact results with the approximate method for a number of interesting flows.
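A toy version of the FTLE computation can be checked against a flow with a known answer. The sketch below uses a linear saddle field dx/dt = x, dy/dt = -y, for which the FTLE equals 1 for any integration time; the explicit-Euler integrator and central-difference stencil are illustrative choices, not the paper's method:

```python
import math

def flow_map(x, y, T, dt=0.01):
    """Integrate dx/dt = x, dy/dt = -y (a linear saddle) with explicit Euler,
    returning the particle's position after time T."""
    for _ in range(round(T / dt)):
        x, y = x + dt * x, y - dt * y
    return x, y

def ftle(x, y, T, eps=1e-4):
    """FTLE at (x, y): (1/T) * log of the largest stretching factor of the
    flow map, estimated by central finite differences of nearby particles."""
    xr, _ = flow_map(x + eps, y, T)
    xl, _ = flow_map(x - eps, y, T)
    _, yu = flow_map(x, y + eps, T)
    _, yd = flow_map(x, y - eps, T)
    # deformation-gradient entries (diagonal because this flow is decoupled)
    a = (xr - xl) / (2 * eps)
    d = (yu - yd) / (2 * eps)
    sigma_max = max(abs(a), abs(d))      # largest singular value
    return math.log(sigma_max) / T

val = ftle(0.3, 0.2, T=2.0)
print(round(val, 2))   # close to the analytic value 1 for this saddle flow
```

The cost structure the abstract exploits is visible even here: each FTLE sample needs several trajectory integrations, so reusing previously integrated trajectories across a time series of fields is where the savings come from.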
Nanocrystalline hydroxyapatite coatings on titanium: a new fast biomimetic method.
Bigi, Adriana; Boanini, Elisa; Bracci, Barbara; Facchini, Alessandro; Panzavolta, Silvia; Segatti, Francesco; Sturba, Luigina
2005-07-01
We obtained a fast biomimetic deposition of hydroxyapatite (HA) coatings on Ti6Al4V substrates using a slightly supersaturated Ca/P solution, with an ionic composition simpler than that of simulated body fluid (SBF). At variance with other fast deposition methods, which produce amorphous calcium phosphate coatings, the new proposed composition allows one to obtain nanocrystalline HA. Soaking in supersaturated Ca/P solution results in the deposition of a uniform coating in a few hours, whereas SBF, or even 1.5SBF, requires 14 days to deposit a homogeneous coating on the same substrates. The coating consists of HA globular aggregates, which exhibit a finer lamellar structure than those deposited from SBF. The extent of deposition increases on increasing the immersion time. Transmission electron microscope (TEM) images recorded on the material detached from the coating show that the deposition is constituted of thin nanocrystals. Electron diffraction (ED) patterns recorded from most of the crystals exhibit the presence of rings, which can be indexed as reflections characteristic of HA. Furthermore, several HA single-crystal spot ED images were obtained from individual crystals. PMID:15664635
Fast detection of air contaminants using immunobiological methods
NASA Astrophysics Data System (ADS)
Schmitt, Katrin; Bolwien, Carsten; Sulz, Gerd; Koch, Wolfgang; Dunkhorst, Wilhelm; Lödding, Hubert; Schwarz, Katharina; Holländer, Andreas; Klockenbring, Torsten; Barth, Stefan; Seidel, Björn; Hofbauer, Wolfgang; Rennebarth, Torsten; Renzl, Anna
2009-05-01
The fast and direct identification of possibly pathogenic microorganisms in air is gaining increasing interest due to their threat for public health, e.g. in clinical environments or in clean rooms of food or pharmaceutical industries. We present a new detection method allowing the direct recognition of relevant germs or bacteria via fluorescence-labeled antibodies within less than one hour. In detail, an air-sampling unit passes particles in the relevant size range to a substrate which contains antibodies with fluorescence labels for the detection of a specific microorganism. After the removal of the excess antibodies the optical detection unit comprising reflected-light and epifluorescence microscopy can identify the microorganisms by fast image processing on a single-particle level. First measurements with the system to identify various test particles as well as interfering influences have been performed, in particular with respect to autofluorescence of dust particles. Specific antibodies for the detection of Aspergillus fumigatus spores have been established. The biological test system consists of protein A-coated polymer particles which are detected by a fluorescence-labeled IgG. Furthermore the influence of interfering particles such as dust or debris is discussed.
Introducing GAMER: A fast and accurate method for ray-tracing galaxies using procedural noise
Groeneboom, N. E.; Dahle, H.
2014-03-10
We developed a novel approach for fast and accurate ray-tracing of galaxies using procedural noise fields. Our method allows for efficient and realistic rendering of synthetic galaxy morphologies, where individual components such as the bulge, disk, stars, and dust can be synthesized at different wavelengths. These components follow empirically motivated overall intensity profiles but contain an additional procedural noise component that gives rise to complex natural patterns mimicking interstellar dust and star-forming regions. These patterns produce more realistic-looking galaxy images than analytical expressions alone. The method is fully parallelized and creates accurate high- and low-resolution images that can be used, for example, in codes simulating strong and weak gravitational lensing. In addition to having a user-friendly graphical user interface, the C++ software package GAMER is easy to integrate into existing code.
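The core idea of the abstract, an analytic intensity profile modulated by a procedural noise field, can be sketched in a few lines. The following is a minimal stand-in, not GAMER's implementation: an exponential-disk profile multiplied by power-law-filtered Gaussian noise (all parameters illustrative).

```python
import numpy as np

def power_law_noise(n, beta, seed=0):
    """Procedural 2D noise with an approximately k^-beta power spectrum,
    built by filtering white Gaussian noise in Fourier space."""
    rng = np.random.default_rng(seed)
    white = rng.standard_normal((n, n))
    kx = np.fft.fftfreq(n)[:, None]
    ky = np.fft.fftfreq(n)[None, :]
    k = np.sqrt(kx**2 + ky**2)
    k[0, 0] = 1.0                      # avoid division by zero at k = 0
    field = np.fft.ifft2(np.fft.fft2(white) / k**(beta / 2)).real
    return (field - field.mean()) / field.std()   # standardize

n = 128
y, x = np.indices((n, n)) - n // 2
r = np.hypot(x, y)
disk = np.exp(-r / 20.0)               # analytic exponential-disk profile
noise = power_law_noise(n, beta=3.0)   # correlated, "dusty" structure
# Modulate the smooth profile by the noise; clip so intensity stays >= 0.
galaxy = disk * np.clip(1.0 + 0.4 * noise, 0.0, None)
```

The scale length (20 px), noise spectral index (beta = 3) and modulation depth (0.4) are made-up numbers chosen only to make the structure visible.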
A novel fast and accurate pseudo-analytical simulation approach for MOAO
NASA Astrophysics Data System (ADS)
Gendron, É.; Charara, A.; Abdelfattah, A.; Gratadour, D.; Keyes, D.; Ltaief, H.; Morel, C.; Vidal, F.; Sevin, A.; Rousset, G.
2014-08-01
Multi-object adaptive optics (MOAO) is a novel adaptive optics (AO) technique for wide-field multi-object spectrographs (MOS). MOAO aims at applying dedicated wavefront corrections to numerous separated tiny patches spread over a large field of view (FOV), limited only by that of the telescope. The control of each deformable mirror (DM) is done individually using a tomographic reconstruction of the phase based on measurements from a number of wavefront sensors (WFS) pointing at natural and artificial guide stars in the field. We have developed a novel hybrid, pseudo-analytical simulation scheme, somewhere in between the end-to-end and purely analytical approaches, that allows us to simulate the tomographic problem in detail, as well as noise and aliasing with high fidelity, including fitting and bandwidth errors thanks to a Fourier-based code. Our tomographic approach is based on the computation of the minimum mean square error (MMSE) reconstructor, from which we derive numerically the covariance matrix of the tomographic error, including aliasing and propagated noise. We are then able to simulate the point-spread function (PSF) associated with this covariance matrix of the residuals, as in PSF reconstruction algorithms. The advantage of our approach is that we compute the same tomographic reconstructor that would be computed when operating the real instrument, so our developments open the way for a future on-sky implementation of the tomographic control, plus joint PSF and performance estimation. The main challenge resides in the computation of the tomographic reconstructor, which involves the inversion of a large matrix (typically 40 000 × 40 000 elements). To perform this computation efficiently, we chose an optimized approach based on the use of GPUs as accelerators and an optimized linear algebra library, MORSE, providing a significant speedup over standard CPU-oriented libraries such as Intel MKL. Because the covariance matrix is
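The MMSE reconstructor and the covariance of the tomographic error described above have a standard closed form: W = C_xs C_ss^-1 and C_err = C_xx - W C_xs^T (the Schur complement). A minimal sketch with small synthetic covariances, assuming nothing about the real WFS geometry:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: n_phi phase unknowns, n_s WFS measurements.
n_phi, n_s = 8, 12

# Synthetic joint covariance of [phase; measurements] (stand-in for the
# turbulence priors; symmetric positive definite by construction).
A = rng.standard_normal((n_phi + n_s, n_phi + n_s))
C = A @ A.T
C_xx = C[:n_phi, :n_phi]   # phase covariance
C_xs = C[:n_phi, n_phi:]   # phase/measurement cross-covariance
C_ss = C[n_phi:, n_phi:]   # measurement covariance (incl. noise)

# MMSE reconstructor W = C_xs C_ss^{-1}, computed via a solve rather
# than an explicit inverse.
W = np.linalg.solve(C_ss.T, C_xs.T).T

# Covariance of the tomographic error: the Schur complement of C_ss.
C_err = C_xx - W @ C_xs.T
```

In the real instrument the solve is the 40 000 × 40 000 operation the abstract offloads to GPUs; the PSF is then derived from C_err as in PSF reconstruction.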
A Massively Parallel Adaptive Fast Multipole Method on Heterogeneous Architectures
Lashuk, Ilya; Chandramowlishwaran, Aparna; Langston, Harper; Nguyen, Tuan-Anh; Sampath, Rahul S; Shringarpure, Aashay; Vuduc, Richard; Ying, Lexing; Zorin, Denis; Biros, George
2012-01-01
We describe a parallel fast multipole method (FMM) for highly nonuniform distributions of particles. We employ both distributed memory parallelism (via MPI) and shared memory parallelism (via OpenMP and GPU acceleration) to rapidly evaluate two-body nonoscillatory potentials in three dimensions on heterogeneous high performance computing architectures. We have performed scalability tests with up to 30 billion particles on 196,608 cores on the AMD/CRAY-based Jaguar system at ORNL. On a GPU-enabled system (NSF's Keeneland at Georgia Tech/ORNL), we observed 30x speedup over a single core CPU and 7x speedup over a multicore CPU implementation. By combining GPUs with MPI, we achieve less than 10 ns/particle and six digits of accuracy for a run with 48 million nonuniformly distributed particles on 192 GPUs.
Fast Second Degree Total Variation Method for Image Compressive Sensing
Liu, Pengfei; Xiao, Liang; Zhang, Jun
2015-01-01
This paper presents a computationally efficient algorithm for image compressive sensing reconstruction using a second degree total variation (HDTV2) regularization. Firstly, an equivalent formulation of the HDTV2 functional is derived, which can be expressed as a weighted L1-L2 mixed norm of second degree image derivatives under the spectral decomposition framework. Secondly, using this equivalent formulation of HDTV2, we introduce an efficient forward-backward splitting (FBS) scheme to solve the HDTV2-based image reconstruction model. Furthermore, from the averaged non-expansive operator point of view, we give a detailed analysis of the convergence of the proposed FBS algorithm. Experiments on medical images demonstrate that the proposed method outperforms several fast algorithms for the TV and HDTV2 reconstruction models in terms of peak signal to noise ratio (PSNR), structural similarity index (SSIM) and convergence speed. PMID:26361008
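The forward-backward splitting template alternates a gradient step on the smooth data term with a proximal step on the regularizer. The HDTV2 prox is involved; as a hedged stand-in, the sketch below runs the same splitting on min ½||Ax-b||² + λ||x||₁, whose prox is simple soft thresholding (this is ISTA, not the paper's algorithm).

```python
import numpy as np

def fbs_l1(A, b, lam, n_iter=500):
    """Forward-backward splitting for min 0.5*||Ax-b||^2 + lam*||x||_1.

    The L1 prox (soft thresholding) stands in for the HDTV2 prox to
    illustrate the splitting structure only.
    """
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    step = 1.0 / L
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)         # forward (explicit gradient) step
        z = x - step * grad
        # backward (implicit prox) step: soft thresholding
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 80))       # toy compressive measurement matrix
x_true = np.zeros(80)
x_true[:5] = 3.0                        # sparse ground truth
b = A @ x_true
x_hat = fbs_l1(A, b, lam=0.1)
```

With step size 1/L the iteration decreases the objective monotonically, which is the averaged-non-expansive-operator property the paper's convergence analysis builds on.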
Mössbauer concentratometry as a new analytical method
NASA Astrophysics Data System (ADS)
Kholmetskii, A. L.; Misevich, O. V.; Abramchuk, N. M.; Leshkov, S. M.
1994-12-01
The physical basis of Mössbauer concentratometry with resonance detection of backscattered radiation has been considered. A basic analytical equation has been obtained and some of its consequences for tin dioxide measurements discussed. A Mössbauer concentratometer is briefly described.
Analytic methods in assessment of optic nerve cupping.
Jindra, L F; Kuběna, T; Gaudino, R N
2014-06-01
The intent of this paper is to provide a systems-based analysis of the methods used to evaluate optic nerve cupping, identify potential flaws in these systems, and propose alternatives to better assess this anatomic quantity. Estimation of optic nerve cupping requires an analytic understanding of both the psychophysical and the mathematical bases inherent in this measure. When the (decimal-based) cup-to-disc ratio is used to quantitate optic nerve cupping, a one-dimensional, linear estimate is produced, which in turn is derived from two- or three-dimensional, non-linear physical quantities of area or volume, respectively. When extrapolating from volume, to area, to linear measures, due to the psychophysical constraints which limit this task, such a data-compressed estimate of optic nerve cupping may neither accurately reflect, nor correctly represent, the true amount of cupping actually present in the optic nerve head. This type of one-dimensional metric (when comparing calculations from two- or three-dimensional measures over a range of optic nerve cupping) appears to introduce errors which, while most pronounced early in the disease progression, often overestimate the amount of relative cupping (percent cupping) present in a pathological process like glaucoma. The same systematic errors can also lead to overestimation of the progression of cupping, especially in optic nerves with low cup-to-disc values. To provide clinically meaningful estimates of optic nerve cupping, the practitioner needs to be aware of the psychophysical and mathematical limitations inherent in using a linear cup-to-disc ratio to estimate the amount of cupping observed in a physical structure like the optic disc. The resultant flaws introduced by observer extrapolation from three, to two, to one dimensions (volume, area, and linear); transposition from non-linear to linear quantities; and optical illusions, caused by factors like disc topology, morphology, and ametropia, can all
Fragoso, Wallace; Allegrini, Franco; Olivieri, Alejandro C
2016-08-24
Generalized analytical sensitivity (γ) is proposed as a new figure of merit, which can be estimated from a multivariate calibration data set. It can be confidently applied to compare different calibration methodologies, and helps to solve literature inconsistencies on the relationship between classical sensitivity and prediction error. In contrast to the classical plain sensitivity, γ incorporates the noise properties in its definition, and its inverse is well correlated with root mean square errors of prediction in the presence of general noise structures. The proposal is supported by studying simulated and experimental first-order multivariate calibration systems with various models, namely multiple linear regression, principal component regression (PCR) and maximum likelihood PCR (MLPCR). The simulations included instrumental noise of different types: independently and identically distributed (iid), correlated (pink) and proportional noise, while the experimental data carried noise which is clearly non-iid. PMID:27496995
Hanford environmental analytical methods: Methods as of March 1990. Volume 3, Appendix A2-I
Goheen, S.C.; McCulloch, M.; Daniel, J.L.
1993-05-01
This paper from the analytical laboratories at Hanford describes the method used to measure pH of single-shell tank core samples. Sludge or solid samples are mixed with deionized water. The pH electrode used combines both a sensor and reference electrode in one unit. The meter amplifies the input signal from the electrode and displays the pH visually.
Communications Overlapping in Fast Multipole Particle Dynamics Methods
Kurzak, Jakub; Pettitt, Bernard M.
2005-03-01
The research described in this product was performed in part in the Environmental Molecular Sciences Laboratory, a national scientific user facility sponsored by the Department of Energy's Office of Biological and Environmental Research and located at Pacific Northwest National Laboratory. In molecular dynamics, the fast multipole method (FMM) is an attractive alternative to Ewald summation for calculating electrostatic interactions because of its favorable operation count. However, when applied to small particle systems and taken to many processors it has a high demand for interprocessor communication. In a distributed memory environment this demand severely limits the applicability of the FMM to systems with O(10K) atoms. We present an algorithm that allows for fine-grained overlap of communication and computation, while not sacrificing synchronization and determinism in the equations of motion. The method avoids contention in the communication subsystem, making it feasible to use the FMM for smaller systems on larger numbers of processors. Our algorithm also facilitates the application of multiple time stepping techniques within the FMM. We present scaling at a reasonably high level of accuracy compared with optimized Ewald methods.
Sverko, Ed
2006-01-01
Analytical methods for the analysis of polychlorinated biphenyls (PCBs) and organochlorine pesticides (OCPs) are widely available and are the result of a vast amount of environmental analytical method development and research on persistent organic pollutants (POPs) over the past 30–40 years. This review summarizes procedures and examines new approaches for the extraction, isolation, identification and quantification of individual congeners/isomers of the PCBs and OCPs. Critical to the successful application of this methodology is the collection, preparation, and storage of samples, as well as specific quality control and reporting criteria, and therefore these are also discussed. With the signing of the Stockholm Convention on POPs and the development of global monitoring programs, there is an increased need for laboratories in developing countries to determine PCBs and OCPs. Thus, while this review attempts to summarize the current best practices for analysis of PCBs and OCPs, a major focus is the need for low-cost methods that can be easily implemented in developing countries. A "performance based" process is described whereby individual laboratories can adapt the methods best suited to their situations. Access to modern capillary gas chromatography (GC) equipment with either electron capture or low-resolution mass spectrometry (MS) detection to separate and quantify OCPs/PCBs is essential. However, screening of samples, especially in areas of known use of OCPs or PCBs, could be accomplished with bioanalytical methods such as specific commercially available enzyme-linked immunosorbent assays, and thus this topic is also reviewed. New analytical techniques such as two-dimensional GC (2D-GC) and "fast GC" using GC–ECD may be well suited for broader use in routine PCB/OCP analysis in the near future given their relatively low costs and ability to provide high-resolution separations of PCBs/OCPs. Procedures with low environmental impact (SPME, microscale, low
Fast integral methods for integrated optical systems simulations: a review
NASA Astrophysics Data System (ADS)
Kleemann, Bernd H.
2015-09-01
-functional profiles, very deep ones, very large ones compared to the wavelength, or simple smooth profiles. This integral method with either trigonometric or spline collocation, an iterative solver with O(N²) complexity, named IESMP, was significantly improved by an efficient mesh refinement, matrix preconditioning, an Ewald summation method, and an exponentially convergent quadrature in 2006 by G. Schmidt and A. Rathsfeld from the Weierstrass Institute (WIAS) Berlin. The so-called modified integral method (MIM) is a modification of the IEM of D. Maystre and was introduced by L. Goray in 1995. It was improved for weak convergence problems in 2001 and was for a long time the only commercially available integral method, known as PCGRATE. All integral methods referenced so far are for in-plane diffraction only; no conical diffraction was possible. The first integral method for gratings in conical mounting was developed and proven under very weak conditions by G. Schmidt (WIAS) in 2010. It works for separated interfaces and for inclusions, as well as for interpenetrating interfaces and for a large number of thin and thick layers, in the same stable way. This very fast method has since been implemented for parallel processing under Unix and Windows operating systems. This work gives an overview of the most important BIMs for grating diffraction. It starts by presenting the historical evolution of the methods, highlights their advantages and differences, and gives insight into new approaches and their achievements. It addresses future open challenges at the end.
Sonoluminescence Spectroscopy as a Promising New Analytical Method
NASA Astrophysics Data System (ADS)
Yurchenko, O. I.; Kalinenko, O. S.; Baklanov, A. N.; Belov, E. A.; Baklanova, L. V.
2016-03-01
The sonoluminescence intensity of Cs, Ru, K, Na, Li, Sr, In, Ga, Ca, Th, Cr, Pb, Mn, Ag, and Mg salts in aqueous solutions of various concentrations was investigated as a function of ultrasound frequency and intensity. Techniques for the determination of these elements in solutions of table salt and their own salts were developed. It was shown that the proposed analytical technique gave results at high concentrations with better metrological characteristics than atomic-absorption spectroscopy because the samples were not diluted.
Method for Operating a Sensor to Differentiate Between Analytes in a Sample
Kunt, Tekin; Cavicchi, Richard E; Semancik, Stephen; McAvoy, Thomas J
1998-07-28
Disclosed is a method for operating a sensor to differentiate between first and second analytes in a sample. The method comprises the steps of determining an input (temperature) profile for the sensor which will enhance the difference in the output profiles of the sensor as between the first analyte and the second analyte; determining a first analyte output profile as observed when the temperature profile is applied to the sensor; determining a second analyte output profile as observed when the temperature profile is applied to the sensor; introducing the sensor to the sample while applying the temperature profile to the sensor, thereby obtaining a sample output profile; and evaluating the sample output profile against the first and second analyte output profiles to thereby determine which of the analytes is present in the sample.
A fast inversion method for interpreting borehole electromagnetic data
NASA Astrophysics Data System (ADS)
Kim, H. J.; Lee, K. H.; Wilt, M.
2003-05-01
A fast and stable inversion scheme has been developed using the localized nonlinear (LN) approximation to analyze electromagnetic fields obtained in a borehole. The medium is assumed to be cylindrically symmetric about the borehole, and to maintain the symmetry a vertical magnetic dipole is used as a source. The efficiency and robustness of an inversion scheme depend strongly on the proper use of the Lagrange multiplier, which is often chosen manually to achieve the desired convergence. We utilize an automatic Lagrange multiplier selection scheme, which enhances the utility of the inversion scheme in handling field data. For this selection scheme, the integral equation (IE) method is quite attractive in terms of speed because the Green's functions, the most time-consuming part of IE methods, are reusable throughout the selection procedure. The inversion scheme using the LN approximation has been tested to show its stability and efficiency using synthetic and field data. The inverted result from the field data compares successfully with induction logging data measured in the same borehole.
Fast and sensitive method for detecting volatile species in liquids.
Trimarco, Daniel B; Pedersen, Thomas; Hansen, Ole; Chorkendorff, Ib; Vesborg, Peter C K
2015-07-01
This paper presents a novel apparatus for extracting volatile species from liquids using a "sniffer-chip." By ultrafast transfer of the volatile species through a perforated and hydrophobic membrane into an inert carrier gas stream, the sniffer-chip is able to transport the species directly to a mass spectrometer through a narrow capillary without the use of differential pumping. This method inherits features from differential electrochemical mass spectrometry (DEMS) and membrane inlet mass spectrometry (MIMS), but brings the best of both worlds, i.e., the fast time-response of a DEMS system and the high sensitivity of a MIMS system. In this paper, the concept of the sniffer-chip is thoroughly explained and it is shown how it can be used to quantify hydrogen and oxygen evolution on a polycrystalline platinum thin film in situ at absolute faradaic currents down to ∼30 nA. To benchmark the capabilities of this method, a CO-stripping experiment is performed on a polycrystalline platinum thin film, illustrating how the sniffer-chip system is capable of making a quantitative in situ measurement of <1% of a monolayer of surface adsorbed CO being electrochemically stripped off an electrode at a potential scan-rate of 50 mV s(-1). PMID:26233407
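The quantification of hydrogen and oxygen evolution at ~30 nA faradaic current rests on Faraday's law: a current I with z electrons transferred per molecule corresponds to a molar generation rate I/(zF). A minimal arithmetic sketch (the chip geometry and collection efficiency are ignored here):

```python
# Faraday's law: molar generation rate n_dot = I / (z * F).
F = 96485.332  # Faraday constant, C/mol

def molar_rate(current_A, z):
    """Moles of product generated per second by a faradaic current."""
    return current_A / (z * F)

# ~30 nA of hydrogen evolution (z = 2 electrons per H2 molecule):
r_h2 = molar_rate(30e-9, 2)   # mol/s
# ~30 nA of oxygen evolution (z = 4 electrons per O2 molecule):
r_o2 = molar_rate(30e-9, 4)   # mol/s
```

At 30 nA this is a sub-picomole-per-second flux, which is why the direct capillary coupling to the mass spectrometer (no differential pumping losses) matters for sensitivity.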
A Domain Decomposition Parallelization of the Fast Marching Method
NASA Technical Reports Server (NTRS)
Herrmann, M.
2003-01-01
In this paper, the first domain decomposition parallelization of the Fast Marching Method for level sets is presented. Parallel speedup has been demonstrated in both the optimal and non-optimal domain decomposition cases. The parallel performance of the proposed method depends strongly on separately load-balancing the number of nodes on each side of the interface. A load imbalance of nodes on either side of the domain leads to an increase in communication and rollback operations. Furthermore, the amount of inter-domain communication can be reduced by aligning the inter-domain boundaries with the interface normal vectors. In the case of optimal load balancing and aligned inter-domain boundaries, the proposed parallel FMM algorithm is highly efficient, reaching efficiency factors of up to 0.98. Future work will focus on extending the proposed parallel algorithm to higher-order accuracy. Also, to further enhance parallel performance, the coupling of the domain decomposition parallelization to the G(sub 0)-based parallelization will be investigated.
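For reference, the serial algorithm being parallelized is a Dijkstra-like sweep that accepts grid nodes in order of increasing arrival time, using an upwind quadratic update. A minimal first-order sketch on a unit-spaced 2D grid (a reference version only; the paper's contribution is the domain-decomposed variant with rollback):

```python
import heapq
import numpy as np

def fast_marching(speed, src):
    """First-order Fast Marching Method on a unit-spaced 2D grid.

    Solves the eikonal equation |grad T| = 1/speed with T(src) = 0 by
    accepting nodes in order of increasing T.
    """
    T = np.full(speed.shape, np.inf)
    accepted = np.zeros(speed.shape, dtype=bool)
    T[src] = 0.0
    heap = [(0.0, src)]
    while heap:
        t, (i, j) = heapq.heappop(heap)
        if accepted[i, j]:
            continue                      # stale heap entry
        accepted[i, j] = True
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if not (0 <= ni < speed.shape[0] and 0 <= nj < speed.shape[1]):
                continue
            if accepted[ni, nj]:
                continue
            # Upwind values along each axis for the quadratic update.
            a = min(T[ni - 1, nj] if ni > 0 else np.inf,
                    T[ni + 1, nj] if ni < speed.shape[0] - 1 else np.inf)
            b = min(T[ni, nj - 1] if nj > 0 else np.inf,
                    T[ni, nj + 1] if nj < speed.shape[1] - 1 else np.inf)
            h = 1.0 / speed[ni, nj]
            if abs(a - b) < h:            # both axes contribute
                t_new = 0.5 * (a + b + np.sqrt(2 * h * h - (a - b) ** 2))
            else:                         # one-sided update
                t_new = min(a, b) + h
            if t_new < T[ni, nj]:
                T[ni, nj] = t_new
                heapq.heappush(heap, (t_new, (ni, nj)))
    return T

T = fast_marching(np.ones((10, 10)), (0, 0))
```

The strict accept-in-order property is exactly what a domain decomposition breaks at inter-domain boundaries, which is why the parallel version in the abstract needs rollback operations.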
Rubino, Stefano; Akhtar, Sultan; Leifer, Klaus
2016-02-01
We present a simple, fast method for thickness characterization of suspended graphene/graphite flakes that is based on transmission electron microscopy (TEM). We derive an analytical expression for the intensity of the transmitted electron beam I0(t) as a function of the specimen thickness t (t<λ, where λ is the absorption constant for graphite). We show that in thin graphite crystals the transmitted intensity is a linear function of t. Furthermore, high-resolution (HR) TEM simulations are performed to obtain λ for a 001 zone-axis orientation, in a two-beam case, and in a low-symmetry orientation. Subsequently, HR images (used to determine t) and bright-field images (to measure I0(0) and I0(t)) were acquired to determine λ experimentally. The experimental value measured in the low-symmetry orientation matches the calculated value (i.e., λ=225±9 nm). The simulations also show that the linear approximation is valid up to a sample thickness of 3-4 nm regardless of orientation, and up to several tens of nanometers for a low-symmetry orientation. When compared with standard techniques for thickness determination of graphene/graphite, the method we propose has the advantage of being simple and fast, requiring only the acquisition of bright-field images. PMID:26915000
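Assuming a simple exponential attenuation law I(t) = I(0) exp(-t/λ), which linearizes to I(0)(1 - t/λ) for t << λ (the regime the abstract exploits), thickness follows directly from a bright-field intensity ratio. The intensity values below are illustrative, not the paper's data; λ = 225 nm is the value measured there for a low-symmetry orientation.

```python
import numpy as np

lam = 225.0  # nm, absorption constant for graphite (low-symmetry orientation)

def thickness_linear(I0, It):
    """Thickness from the linear approximation I(t) ~ I0 * (1 - t/lam)."""
    return lam * (1.0 - It / I0)

def thickness_exact(I0, It):
    """Thickness from the full exponential attenuation law."""
    return -lam * np.log(It / I0)

# Illustrative few-nm flake: transmitted intensity down by ~1.3%.
t_lin = thickness_linear(1.0, 0.9867)   # nm
t_exp = thickness_exact(1.0, 0.9867)    # nm
```

For a ~3 nm flake the two estimates differ by only a fraction of an angstrom, consistent with the abstract's claim that the linear regime holds up to 3-4 nm regardless of orientation.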
Maria Rizelio, Viviane; Gonzaga, Luciano Valdemiro; Borges, Graciele da Silva Campelo; Maltez, Heloisa França; Costa, Ana Carolina Oliveira; Fett, Roseane
2012-09-15
This study reports the development and validation of a fast capillary electrophoresis method for cation determination in honey samples and the classification of honey by geographical origin using principal components analysis (PCA). The background electrolyte (BGE) was optimized using the Peakmaster(®) software, which evaluates the tendency of the analytes to undergo electromigration dispersion as well as the BGE buffer capacity and conductivity. The final BGE composition was defined as 30 mmol L(-1) imidazole, 300 mmol L(-1) acetic acid and 140 mmol L(-1) lactic acid, at pH 3.0, and the separation of K(+), Na(+), Ca(2+), Mg(2+) and Mn(2+), using Ba(2+) as the internal standard, was achieved in less than 2 min. The method showed satisfactory results in terms of linearity (R(2)>0.999); the detection limits ranged from 0.27-3.17 mg L(-1) and the quantification limits from 0.91-10.55 mg L(-1). Precision within 0.55 and 4.64% RSD was achieved, and recovery values for the analytes in the honey samples ranged from 93.6%-108.6%. Forty honey samples were analyzed to test the proposed method. These samples were dissolved in deionized water and filtered before injection. The reliability of the CE-UV cation analysis in real samples was compared statistically with an ICP-MS methodology; no significant differences were found at the 95% confidence level. The PCA showed that the cumulative variance of the first two principal components explains more than 85% of the variability in the data. The analytical data suggest a significant influence of geographical origin on mineral composition. PMID:22967578
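The PCA step, summarizing a samples-by-cations table into a couple of components whose cumulative variance is checked, can be sketched with an SVD. The data below are synthetic stand-ins for two "origins" with distinct mean cation contents, not the paper's measurements.

```python
import numpy as np

def pca_explained_variance(X, n_components=2):
    """SVD-based PCA: fraction of total variance captured by the
    leading principal components of the column-centered data."""
    Xc = X - X.mean(axis=0)                  # column-center
    s = np.linalg.svd(Xc, compute_uv=False)  # singular values
    var = s ** 2
    return var[:n_components].sum() / var.sum()

rng = np.random.default_rng(3)
# Two synthetic origins, 20 samples each, 5 cation concentrations
# (illustrative K, Na, Ca, Mg, Mn levels in mg/L).
origin_a = rng.normal([50, 10, 5, 3, 0.5], 1.0, size=(20, 5))
origin_b = rng.normal([20, 30, 8, 6, 1.5], 1.0, size=(20, 5))
X = np.vstack([origin_a, origin_b])
frac = pca_explained_variance(X, 2)
```

When group separation dominates within-group scatter, as here, the first two components account for most of the variance, mirroring the >85% figure reported in the abstract.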
Hardware architecture design of a fast global motion estimation method
NASA Astrophysics Data System (ADS)
Liang, Chaobing; Sang, Hongshi; Shen, Xubang
2015-12-01
VLSI implementation of gradient-based global motion estimation (GME) faces two main challenges: irregular data access and a high off-chip memory bandwidth requirement. We previously proposed a fast GME method that reduces computational complexity by choosing a certain number of small patches containing corners and using them in a gradient-based framework. A hardware architecture is designed to implement this method and further reduce the off-chip memory bandwidth requirement. On-chip memories are used to store the coordinates of the corners and the template patches, while the Gaussian pyramids of both the template and reference frame are stored in off-chip SDRAMs. By performing the geometric transform only on the coordinates of the center pixel of a 3-by-3 patch in the template image, a 5-by-5 area containing the warped 3-by-3 patch in the reference image is extracted from the SDRAMs by burst read. Patch-based, burst-mode data access helps to keep the off-chip memory bandwidth requirement at a minimum. Although patch size varies at different pyramid levels, all patches are processed in terms of 3x3 patches, so the utilization of the patch-processing circuit reaches 100%. FPGA implementation results show that the design uses 24,080 bits of on-chip memory, and for a sequence with a resolution of 352x288 at 60 Hz the off-chip bandwidth requirement is only 3.96 Mbyte/s, compared with 243.84 Mbyte/s for the original gradient-based GME method. This design can be used in applications such as video codecs, video stabilization, and super-resolution, where real-time GME is a necessity and a minimal memory bandwidth requirement is appreciated.
Analytical methods for the evaluation of melamine contamination.
Cantor, Stuart L; Gupta, Abhay; Khan, Mansoor A
2014-02-01
There is an urgent need for the analysis of melamine in the global pharmaceutical supply chain to detect economically motivated adulteration or unintentional contamination using a simple, nondestructive analytical technique that confirms the extent of adulteration in a short time. In this work, different analytical techniques (thermal analysis, X-ray diffraction, Fourier transform infrared (FT-IR), FT-Raman, and near-infrared (NIR) spectroscopy) were evaluated for their ability to detect a range of melamine levels in gelatin. While FT-IR and FT-Raman provided qualitative assessment of melamine contamination or adulteration, powder X-ray diffraction and NIR were able to detect and quantify the presence of melamine at levels as low as 1.0% w/w. Multivariate analysis of the NIR data yielded the most accurate model when three principal components were used. Data were pretreated using the standard normal variate transformation to remove multiplicative interferences of scatter and particle size. The model had a root-mean-square error of calibration of 2.4 (R(2) = 0.99) and a root-mean-square error of prediction of 2.5 (R(2) = 0.96). The value of the paired t test for actual and predicted samples (1%-50% w/w) was 0.448 (p < 0.05), further indicating the robustness of the model. PMID:24327168
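The standard normal variate (SNV) pretreatment mentioned above is simply a per-spectrum centering and scaling, which removes multiplicative scatter and additive baseline effects before the PCA/regression model is fit. A minimal sketch with an illustrative five-point spectrum:

```python
import numpy as np

def snv(spectra):
    """Standard normal variate: scale each spectrum (row) to zero mean
    and unit standard deviation, removing multiplicative scatter and
    additive offset effects before multivariate modeling."""
    spectra = np.asarray(spectra, dtype=float)
    mean = spectra.mean(axis=1, keepdims=True)
    std = spectra.std(axis=1, keepdims=True)
    return (spectra - mean) / std

# A spectrum and a scatter-distorted copy (gain * x + offset) collapse
# onto the same SNV-corrected shape:
x = np.array([[0.2, 0.5, 0.9, 0.4, 0.1]])
y = 1.7 * x + 0.3   # multiplicative gain plus additive offset
```

Because SNV is invariant under any per-spectrum affine distortion, snv(x) and snv(y) are identical, which is exactly the particle-size/scatter robustness the abstract relies on.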
PESTICIDE ANALYTICAL METHODS TO SUPPORT DUPLICATE-DIET HUMAN EXPOSURE MEASUREMENTS
Historically, analytical methods for determination of pesticides in foods have been developed in support of regulatory programs and are specific to food items or food groups. Most of the available methods have been developed, tested and validated for relatively few analytes an...
Antibodies covalently immobilized on actin filaments for fast myosin driven analyte transport.
Kumar, Saroj; ten Siethoff, Lasse; Persson, Malin; Lard, Mercy; te Kronnie, Geertruy; Linke, Heiner; Månsson, Alf
2012-01-01
Biosensors would benefit from further miniaturization, increased detection rate and independence from external pumps and other bulky equipment. Whereas transportation systems built around molecular motors and cytoskeletal filaments hold significant promise in the latter regard, recent proof-of-principle devices based on the microtubule-kinesin motor system have not matched the speed of existing methods. An attractive solution to overcome this limitation would be the use of myosin driven propulsion of actin filaments which offers motility one order of magnitude faster than the kinesin-microtubule system. Here, we realized a necessary requirement for the use of the actomyosin system in biosensing devices, namely covalent attachment of antibodies to actin filaments using heterobifunctional cross-linkers. We also demonstrated consistent and rapid myosin II driven transport where velocity and the fraction of motile actin filaments was negligibly affected by the presence of antibody-antigen complexes at rather high density (>20 µm(-1)). The results, however, also demonstrated that it was challenging to consistently achieve high density of functional antibodies along the actin filament, and optimization of the covalent coupling procedure to increase labeling density should be a major focus for future work. Despite the remaining challenges, the reported advances are important steps towards considerably faster nanoseparation than shown for previous molecular motor based devices, and enhanced miniaturization because of high bending flexibility of actin filaments. PMID:23056279
Analysis methods for fast impurity ion dynamics data
Den Hartog, D.J.; Almagri, A.F.; Prager, S.C.; Fonck, R.J.
1994-08-01
A high resolution spectrometer has been developed and used on the MST reversed-field pinch (RFP) for passive measurement of impurity ion temperatures and flow velocities with 10 µs temporal resolution. Such measurements of MHD-scale fluctuations are particularly relevant in the RFP because the flow velocity fluctuation induced transport of current (the "MHD dynamo") may produce the magnetic field reversal characteristic of an RFP. This instrument will also be used to measure rapid changes in the equilibrium flow velocity, such as occur during locking and H-mode transitions. The precision of measurements made to date is <0.6 km/s. The authors are developing accurate analysis techniques appropriate to the reduction of these fast ion dynamics data. Moment analysis and curve-fitting routines have been evaluated for noise sensitivity and robustness. Also presented is an analysis method which correctly separates the flux-surface average of the correlated fluctuations in u and B from the fluctuations due to rigid shifts of the plasma column.
NASA Technical Reports Server (NTRS)
Hu, Fang Q.
1994-01-01
It is known that the exact analytic solutions of wave scattering by a circular cylinder, when they exist, are not in a closed form but in infinite series which converge slowly for high frequency waves. In this paper, we present a fast numerical solution for the scattering problem in which the boundary integral equations, reformulated from the Helmholtz equation, are solved using a Fourier spectral method. It is shown that the special geometry considered here allows the implementation of the spectral method to be simple and very efficient. The present method differs from previous approaches in that the singularities of the integral kernels are removed and dealt with accurately. The proposed method preserves the spectral accuracy and is shown to have an exponential rate of convergence. Aspects of efficient implementation using FFT are discussed. Moreover, the boundary integral equations of combined single and double-layer representation are used in the present paper. This ensures the uniqueness of the numerical solution for the scattering problem at all frequencies. Although a strongly singular kernel is encountered for the Neumann boundary conditions, we show that the hypersingularity can be handled easily in the spectral method. Numerical examples that demonstrate the validity of the method are also presented.
Progress in the GEOROC Database - Fast and Simple Access to Analytical Data by Precompilation
NASA Astrophysics Data System (ADS)
Sarbas, B.
2001-12-01
sample, these are compiled according to specific rules. These rules consider the method of analysis as well as the year of publication.
Sterigmatocystin: occurrence in foodstuffs and analytical methods--an overview.
Versilovskis, Aleksandrs; De Saeger, Sarah
2010-01-01
Sterigmatocystin (STC) is a mycotoxin produced by fungi of many different Aspergillus species. Other species such as Bipolaris, Chaetomium and Emericella are also able to produce STC. STC-producing fungi have frequently been isolated from different foodstuffs, while STC has regularly been detected in grains, corn, bread, cheese, spices, coffee beans, soybeans, pistachio nuts, animal feed and silage. STC shows various toxicological, mutagenic and carcinogenic effects in animals and has been recognized as a group 2B carcinogen (possible human carcinogen) by the International Agency for Research on Cancer. There are more than 775 publications available in Scopus (and more than 505 in PubMed) mentioning STC, but no summary information is available about STC occurrence and analysis in food. This review presents an overview of the worldwide information on the occurrence of STC in different foodstuffs during the last 40 years, and describes the progress made in analytical methodology for the determination of STC in food. PMID:19998385
Rozet, E; Ziemons, E; Marini, R D; Boulanger, B; Hubert, Ph
2012-11-01
Dissolution tests are key elements to ensure continuing product quality and performance. The ultimate goal of these tests is to assure consistent product quality within a defined set of specification criteria. Validation of an analytical method aimed at assessing the dissolution profile of products, or at verifying pharmacopoeial compliance, should demonstrate that the method is able to correctly declare two dissolution profiles as similar, or drug products as compliant with their specifications. It is essential to ensure that these analytical methods are fit for their purpose, and method validation is aimed at providing this guarantee. However, even the ICH Q2 guideline gives no information on how to decide whether the method under validation is valid for its final purpose. Are all the validation criteria needed to ensure that a Quality Control (QC) analytical method for a dissolution test is valid? What acceptance limits should be set on these criteria? How should a method's validity be decided? These are the questions this work aims to answer. The focus is on complying with the current implementation of Quality by Design (QbD) principles in the pharmaceutical industry, so that the Analytical Target Profile (ATP) of analytical methods involved in dissolution tests can be correctly defined. Analytical method validation then becomes the natural demonstration that the developed methods are fit for their intended purpose, rather than the thoughtless checklist approach still generally performed to complete the filing required to obtain product marketing authorization. PMID:23084050
The field analytical screening program (FASP) polychlorinated biphenyl (PCB) method uses a temperature-programmable gas chromatograph (GC) equipped with an electron capture detector (ECD) to identify and quantify PCBs. Gas chromatography is an EPA-approved method for determi...
DEVELOPMENT OF A RAPID ANALYTICAL METHOD FOR DETERMINING ASBESTOS IN WATER
The development of a rapid analytical method for determining chrysotile asbestos in water that requires substantially less time per analysis than electron microscopy methods is described. Based on the proposition that separation of chrysotile from other waterborne particulate wou...
Survey of Technetium Analytical Production Methods Supporting Hanford Nuclear Materials Processing
TROYER, G.L.
1999-11-03
This document provides a historical survey of analytical methods used for measuring ⁹⁹Tc in nuclear fuel reprocessing materials and wastes at Hanford. Method challenges, including special sludge matrices tested, are discussed. Special problems and recommendations are presented.
Analytical methods for the determination of carbon tetrachloride in soils.
Alvarado, J. S.; Spokas, K.; Taylor, J.
1999-06-01
Improved methods for the determination of carbon tetrachloride are described. These methods incorporate purge-and-trap concentration of heated dry samples, an improved methanol extraction procedure, and headspace sampling. The methods minimize sample pretreatment, accomplish solvent substitution, and save time. The methanol extraction and headspace sampling procedures improved the method detection limits and yielded better sensitivity, good recoveries, and good performance. Optimization parameters are shown. Results obtained with these techniques are compared for soil samples from contaminated sites.
40 CFR 136.6 - Method modifications and analytical requirements.
Code of Federal Regulations, 2011 CFR
2011-07-01
... PROGRAMS (CONTINUED) GUIDELINES ESTABLISHING TEST PROCEDURES FOR THE ANALYSIS OF POLLUTANTS § 136.6 Method... evidenced by the analysis of method blanks, laboratory control samples, and spiked samples that also contain... into the sample (as evidenced by the analysis of method blanks, laboratory control samples, and...
40 CFR 136.6 - Method modifications and analytical requirements.
Code of Federal Regulations, 2010 CFR
2010-07-01
... PROGRAMS (CONTINUED) GUIDELINES ESTABLISHING TEST PROCEDURES FOR THE ANALYSIS OF POLLUTANTS § 136.6 Method... evidenced by the analysis of method blanks, laboratory control samples, and spiked samples that also contain... into the sample (as evidenced by the analysis of method blanks, laboratory control samples, and...
40 CFR 136.6 - Method modifications and analytical requirements.
Code of Federal Regulations, 2014 CFR
2014-07-01
... notification should be of the form “Method xxx has been modified within the flexibility allowed in 40 CFR 136.6... monitoring wavelength of a colorimeter or the reaction time and temperature as needed to achieve the chemical..., preservation, or holding time requirements of an approved method. Such modifications to sample...
40 CFR 136.6 - Method modifications and analytical requirements.
Code of Federal Regulations, 2013 CFR
2013-07-01
... notification should be of the form “Method xxx has been modified within the flexibility allowed in 40 CFR 136.6... monitoring wavelength of a colorimeter or the reaction time and temperature as needed to achieve the chemical..., preservation, or holding time requirements of an approved method. Such modifications to sample...
40 CFR 136.6 - Method modifications and analytical requirements.
Code of Federal Regulations, 2012 CFR
2012-07-01
... notification should be of the form “Method xxx has been modified within the flexibility allowed in 40 CFR 136.6... monitoring wavelength of a colorimeter or the reaction time and temperature as needed to achieve the chemical..., preservation, or holding time requirements of an approved method. Such modifications to sample...
Methods for performing fast discrete curvelet transforms of data
Candes, Emmanuel; Donoho, David; Demanet, Laurent
2010-11-23
Fast digital implementations of the second generation curvelet transform for use in data processing are disclosed. One such digital transformation is based on unequally-spaced fast Fourier transforms (USFFT) while another is based on the wrapping of specially selected Fourier samples. Both digital transformations return a table of digital curvelet coefficients indexed by a scale parameter, an orientation parameter, and a spatial location parameter. Both implementations are fast in the sense that they run in about O(n² log n) flops for n by n Cartesian arrays or about O(N log N) flops for Cartesian arrays of size N = n³; in addition, they are also invertible, with rapid inversion algorithms of about the same complexity.
Application of an analytical method for solution of thermal hydraulic conservation equations
Fakory, M.R.
1995-09-01
An analytical method has been developed and applied to the solution of two-phase flow conservation equations. Test results for application of the model to the simulation of BWR transients are presented and compared with results obtained with the explicit method of integrating the conservation equations. The tests show that with analytical integration of the conservation equations, the Courant limitation associated with the explicit Euler method of integration is eliminated. Results obtained with the analytical method using large time steps agreed well with results from the explicit method using time steps smaller than the size imposed by the Courant limitation. The results demonstrate that the analytical approach significantly improves numerical stability and computational efficiency.
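The stability argument above can be illustrated on a toy problem. The sketch below is an illustrative analogue, not the paper's BWR model: it integrates the linear decay equation dy/dt = -λy, where the explicit Euler update blows up once the step exceeds its stability limit, while the exact, analytically integrated update stays stable for any step size, which is the behaviour the abstract reports for the conservation equations.

```python
import math

LAM = 10.0   # decay rate in dy/dt = -LAM * y (toy stand-in for a stiff conservation law)
Y0 = 1.0
T_END = 1.0

def explicit_euler(dt):
    """March y' = -LAM*y with the explicit Euler update y += dt*(-LAM*y)."""
    y, t = Y0, 0.0
    while t < T_END - 1e-12:
        y += dt * (-LAM * y)
        t += dt
    return y

def analytical_step(dt):
    """March with the exact (analytically integrated) update y *= exp(-LAM*dt)."""
    y, t = Y0, 0.0
    while t < T_END - 1e-12:
        y *= math.exp(-LAM * dt)
        t += dt
    return y

# Explicit Euler is stable only for dt < 2/LAM = 0.2; the exact update always is.
big_dt = 0.5
print(abs(explicit_euler(big_dt)))   # grows: amplification factor |1 - LAM*dt| = 4 > 1
print(analytical_step(big_dt))       # decays toward exp(-LAM*T_END)
```

With the large step the Euler iterate grows each step, while the analytically integrated march reproduces the exact decay regardless of step size.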
Analytical methods and a quality assurance plan have been developed to determine the concentration of a select group of bioaccumulatable chemicals in fish tissue. he analytes include PCBs and 21 pesticides and industrial chemicals. he methodology has been used to conduct a survey...
Manual of analytical methods for the Industrial Hygiene Chemistry Laboratory
Greulich, K.A.; Gray, C.E.
1991-08-01
This Manual is compiled from techniques used in the Industrial Hygiene Chemistry Laboratory of Sandia National Laboratories in Albuquerque, New Mexico. The procedures are similar to those used in other laboratories devoted to industrial hygiene practices. Some of the methods are standard; some, modified to suit our needs; and still others, developed at Sandia. The authors have attempted to present all methods in a simple and concise manner but in sufficient detail to make them readily usable. It is not to be inferred that these methods are universal for any type of sample, but they have been found very reliable for the types of samples mentioned.
An analytic method for the inverse problem of MREPT
NASA Astrophysics Data System (ADS)
Palamodov, V.
2016-03-01
Magnetic resonance electric properties tomography (MREPT) is a medical imaging modality for visualizing the electrical tissue properties of the human body using radio-frequency magnetic fields. This method consists of reconstructing the admittivity distribution from the positive rotating component of the magnetic field. In a recent paper, Ammari et al (2015 Inverse Problems 31 105001) proposed an approximate method for the reconstruction of variable admittivity. In this paper a method for exact reconstruction of the admittivity from data of the positive rotating component of the field is given.
IRIS: Towards an Accurate and Fast Stage Weight Prediction Method
NASA Astrophysics Data System (ADS)
Taponier, V.; Balu, A.
2002-01-01
The knowledge of the structural mass fraction (or the mass ratio) of a given stage, which affects the performance of a rocket, is essential for the analysis of new or upgraded launchers or stages, whose need is increased by the quick evolution of space programs and the necessity of adapting them to market needs. The availability of this highly scattered variable, ranging between 0.05 and 0.15, is of primary importance at the early steps of preliminary design studies. At the start of the staging and performance studies, the lack of frozen weight data (to be obtained later from propulsion, trajectory and sizing studies) forces reliance on rough estimates, generally derived from printed sources and adapted. When needed, these can be consolidated through a specific analysis activity involving several techniques and implying additional effort and time. This empirical approach thus yields approximate values (i.e. not necessarily accurate or consistent), inducing some inaccuracy in the results and, consequently, difficulties in ranking the performance of multiple options, as well as an increase in processing time. This is a familiar weakness of preliminary design system studies, insufficiently discussed to date. It appears therefore highly desirable to have, for all evaluation activities, a reliable, fast and easy-to-use weight or mass fraction prediction method. Additionally, the latter should allow a pre-selection of alternative preliminary configurations, making a global system approach possible. For that purpose, an attempt at modeling has been undertaken, whose objective was the determination of a parametric formulation of the mass fraction, expressed from a limited number of parameters available at the early steps of the project. It is based on the innovative use of a statistical method applicable to a variable as a function of several independent parameters. A specific polynomial generator
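Why the structural mass fraction matters so much can be seen from the ideal (Tsiolkovsky) rocket equation, Δv = Isp·g₀·ln(m₀/m_f). This is standard theory, not the IRIS parametric model itself, and all numbers below are illustrative:

```python
import math

G0 = 9.80665  # standard gravity, m/s^2

def stage_dv(isp_s, payload_kg, stage_mass_kg, struct_fraction):
    """Ideal delta-v of one stage via the Tsiolkovsky rocket equation.

    struct_fraction = structural mass / (structural + propellant mass),
    i.e. the 0.05-0.15 quantity discussed in the abstract.
    """
    m_struct = struct_fraction * stage_mass_kg
    m0 = payload_kg + stage_mass_kg           # ignition mass
    mf = payload_kg + m_struct                # burnout mass
    return isp_s * G0 * math.log(m0 / mf)

# Same stage, structural fraction at the two ends of the quoted range.
lo = stage_dv(isp_s=300.0, payload_kg=1000.0, stage_mass_kg=20000.0, struct_fraction=0.05)
hi = stage_dv(isp_s=300.0, payload_kg=1000.0, stage_mass_kg=20000.0, struct_fraction=0.15)
print(round(lo), round(hi))  # the heavier structure costs well over 1 km/s here
```

Moving the structural fraction from 0.05 to 0.15 on this illustrative stage removes on the order of 2 km/s of ideal Δv, which is why an early, reliable estimate of this variable is so valuable.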
Emulation: A fast stochastic Bayesian method to eliminate model space
NASA Astrophysics Data System (ADS)
Roberts, Alan; Hobbs, Richard; Goldstein, Michael
2010-05-01
Joint inversion of large 3D datasets has been a goal of geophysicists ever since such datasets first started to be produced. There are two broad approaches to this kind of problem: traditional deterministic inversion schemes and more recently developed Bayesian search methods, such as MCMC (Markov Chain Monte Carlo). However, both kinds of scheme have proved prohibitively expensive, in both computing power and time, due to the normally very large model space which must be searched using forward model simulators that take considerable time to run. At the heart of strategies aimed at accomplishing this kind of inversion is the question of how to reliably and practicably reduce the size of the model space in which the inversion is to be carried out. Here we present a practical Bayesian method, known as emulation, which can address this issue. Emulation is a Bayesian technique used with considerable success in a number of technical fields, such as astronomy, where the evolution of the universe has been modelled using this technique, and the petroleum industry, where it supports history matching of hydrocarbon reservoirs. The method of emulation involves building a fast-to-compute, uncertainty-calibrated approximation to a forward model simulator. We do this by modelling the output data from a number of forward simulator runs with a computationally cheap function, and then fitting the coefficients defining this function to the model parameters. By calibrating the error of the emulator output with respect to the full simulator output, we can use this to screen out large areas of model space which contain only implausible models. For example, starting with what may be considered a geologically reasonable prior model space of 10000 models, using the emulator we can quickly show that only models which lie within 10% of that model space actually produce output data which is plausibly similar in character to an observed dataset. We can thus much
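The emulation workflow described above (run the full simulator a few times, fit a cheap surrogate, calibrate its error, screen out implausible models) can be sketched in miniature. Everything below is a toy stand-in: the "simulator" is a cheap analytic function, the emulator is a quadratic least-squares fit, and the 3-sigma implausibility cut is a common but illustrative choice, not the paper's calibration procedure.

```python
import math

def simulator(theta):
    """Stand-in for an expensive forward model (hypothetical, for illustration)."""
    return math.sin(3.0 * theta) + 0.5 * theta

# 1) A few full simulator runs at design points spanning the model space [0, 1].
design = [i / 10.0 for i in range(11)]
runs = [simulator(t) for t in design]

# 2) Cheap quadratic emulator a + b*t + c*t^2, fitted by least squares
#    via the 3x3 normal equations (pure-Python Gaussian elimination).
def fit_quadratic(ts, ys):
    basis = [[1.0, t, t * t] for t in ts]
    A = [[sum(r[i] * r[j] for r in basis) for j in range(3)] for i in range(3)]
    b = [sum(r[i] * y for r, y in zip(basis, ys)) for i in range(3)]
    for col in range(3):                      # elimination with partial pivoting
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            for c in range(col, 3):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coef = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):                       # back substitution
        coef[r] = (b[r] - sum(A[r][c] * coef[c] for c in range(r + 1, 3))) / A[r][r]
    return coef

a, b_, c = fit_quadratic(design, runs)
emulate = lambda t: a + b_ * t + c * t * t

# 3) Calibrate the emulator error against the full runs, then screen: keep only
#    candidate models whose emulated output is plausibly close to the observation.
sigma = max(abs(emulate(t) - y) for t, y in zip(design, runs))
z_obs = simulator(0.42)                       # pretend this is the field datum
candidates = [i / 9999.0 for i in range(10000)]
kept = [t for t in candidates if abs(emulate(t) - z_obs) <= 3.0 * sigma]
print(len(kept) / len(candidates))            # only part of model space survives
```

The point of the sketch is the economics: 11 expensive runs calibrate a surrogate that then evaluates 10000 candidate models essentially for free, discarding the implausible ones before any further full simulation.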
Statistically Qualified Neuro-Analytic system and Method for Process Monitoring
Vilim, Richard B.; Garcia, Humberto E.; Chen, Frederick W.
1998-11-04
An apparatus and method for monitoring a process involves development and application of a statistically qualified neuro-analytic (SQNA) model to accurately and reliably identify process change. The development of the SQNA model is accomplished in two steps: deterministic model adaptation and stochastic model adaptation. Deterministic model adaptation involves formulating an analytic model of the process representing known process characteristics, augmenting the analytic model with a neural network that captures unknown process characteristics, and training the resulting neuro-analytic model by adjusting the neural network weights according to a unique scaled equation error minimization technique. Stochastic model adaptation involves qualifying any remaining uncertainty in the trained neuro-analytic model by formulating a likelihood function, given an error propagation equation, for computing the probability that the neuro-analytic model generates the measured process output. Preferably, the developed SQNA model is validated using known sequential probability ratio tests and applied to the process as an on-line monitoring system.
Kramberger, Petra; Urbas, Lidija; Štrancar, Aleš
2015-01-01
Downstream processing of nanoplexes (viruses, virus-like particles, bacteriophages) is characterized by complexity of the starting material, number of purification methods to choose from, regulations that are setting the frame for the final product and analytical methods for upstream and downstream monitoring. This review gives an overview on the nanoplex downstream challenges and chromatography based analytical methods for efficient monitoring of the nanoplex production. PMID:25751122
EVALUATION OF ANALYTICAL METHODS FOR DETERMINING PESTICIDES IN BABY FOOD
Three extraction methods and two detection techniques for determining pesticides in baby food were evaluated. The extraction techniques examined were supercritical fluid extraction (SFE), enhanced solvent extraction (ESE), and solid phase extraction (SPE). The detection techni...
Base flow separation: A comparison of analytical and mass balance methods
NASA Astrophysics Data System (ADS)
Lott, Darline A.; Stewart, Mark T.
2016-04-01
Base flow is the ground water contribution to stream flow. Many activities, such as water resource management, calibrating hydrological and climate models, and studies of basin hydrology, require good estimates of base flow. The base flow component of stream flow is usually determined by separating a stream hydrograph into two components, base flow and runoff. Analytical methods, mathematical functions or algorithms used to calculate base flow directly from discharge, are the most widely used base flow separation methods and are often used without calibration to basin or gage-specific parameters other than basin area. In this study, six analytical methods are compared to a mass balance method, the conductivity mass-balance (CMB) method. The base flow index (BFI) values for 35 stream gages are obtained from each of the seven methods with each gage having at least two consecutive years of specific conductance data and 30 years of continuous discharge data. BFI is cumulative base flow divided by cumulative total discharge over the period of record of analysis. The BFI value is dimensionless, and always varies from 0 to 1. Areas of basins used in this study range from 27 km² to 68,117 km². BFI was first determined for the uncalibrated analytical methods. The parameters of each analytical method were then calibrated to produce BFI values as close to the CMB derived BFI values as possible. One of the methods, the power function (aQ^b + cQ) method, is inherently calibrated and was not recalibrated. The uncalibrated analytical methods have an average correlation coefficient of 0.43 when compared to CMB-derived values, and an average correlation coefficient of 0.93 when calibrated with the CMB method. Once calibrated, the analytical methods can closely reproduce the base flow values of a mass balance method. Therefore, it is recommended that analytical methods be calibrated against tracer or mass balance methods.
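The CMB method referenced above is a two-component mixing model: high-conductance base flow dilutes toward a low-conductance runoff end member during storms. A minimal sketch (the end-member conductances and the toy discharge record below are illustrative, not values from the study) computes base flow and the BFI exactly as defined in the abstract:

```python
def cmb_baseflow(q, sc, sc_bf, sc_ro):
    """Conductivity mass-balance separation (two-component mixing model).

    q     : total discharge for one time step
    sc    : measured stream specific conductance
    sc_bf : end-member conductance of base flow (high)
    sc_ro : end-member conductance of runoff (low)
    """
    frac = (sc - sc_ro) / (sc_bf - sc_ro)
    return q * min(1.0, max(0.0, frac))      # clamp so 0 <= Q_bf <= Q

def bfi(discharge, conductance, sc_bf, sc_ro):
    """Base flow index: cumulative base flow over cumulative discharge (0 to 1)."""
    qb = sum(cmb_baseflow(q, sc, sc_bf, sc_ro)
             for q, sc in zip(discharge, conductance))
    return qb / sum(discharge)

# Toy record: a storm pulse dilutes conductance toward the runoff end member.
Q  = [10.0, 12.0, 40.0, 25.0, 12.0, 10.0]
SC = [300.0, 290.0, 120.0, 180.0, 280.0, 300.0]
print(round(bfi(Q, SC, sc_bf=300.0, sc_ro=50.0), 3))
```

In practice the end-member conductances are themselves estimated from the record (e.g. from low-flow and storm-peak conditions), which is the gage-specific calibration the analytical methods in the study lack.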
Optimal analytic method for the nonlinear Hasegawa-Mima equation
NASA Astrophysics Data System (ADS)
Baxter, Mathew; Van Gorder, Robert A.; Vajravelu, Kuppalapalle
2014-05-01
The Hasegawa-Mima equation is a nonlinear partial differential equation that describes the electric potential due to a drift wave in a plasma. In the present paper, we apply the method of homotopy analysis to a slightly more general Hasegawa-Mima equation, which accounts for hyper-viscous damping or viscous dissipation. First, we outline the method for the general initial/boundary value problem over a compact rectangular spatial domain. We use a two-stage method, where both the convergence control parameter and the auxiliary linear operator are optimally selected to minimize the residual error due to the approximation. To do the latter, we consider a family of operators parameterized by a constant which gives the decay rate of the solutions. After outlining the general method, we consider a number of concrete examples in order to demonstrate the utility of this approach. The results enable us to study properties of the initial/boundary value problem for the generalized Hasegawa-Mima equation. In several cases considered, we are able to obtain solutions with extremely small residual errors after relatively few iterations are computed (residual errors on the order of 10⁻¹⁵ are found in multiple cases after only three iterations). The results demonstrate that selecting a parameterized auxiliary linear operator can be extremely useful for minimizing residual errors when used concurrently with the optimal homotopy analysis method, suggesting that this approach can prove useful for a number of nonlinear partial differential equations arising in physics and nonlinear mechanics.
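For readers unfamiliar with homotopy analysis, the standard zeroth-order deformation equation underlying the method reads as follows (this is the general HAM form; the paper's specific parameterized operator family is not reproduced here):

```latex
% Zeroth-order deformation: q is the embedding parameter, c_0 the
% convergence-control parameter, L the auxiliary linear operator,
% N the nonlinear operator of the generalized Hasegawa-Mima equation.
(1-q)\,\mathcal{L}\bigl[\phi(x,t;q)-u_0(x,t)\bigr]
  \;=\; q\,c_0\,\mathcal{N}\bigl[\phi(x,t;q)\bigr], \qquad q\in[0,1].
% At q=0 the solution is the initial guess u_0; at q=1 it solves N[u]=0.
% Expanding phi in powers of q gives the homotopy series
u(x,t) \;=\; u_0(x,t) + \sum_{m=1}^{\infty} u_m(x,t),
% and the "optimal" variant chooses c_0 (and the operator family defining L)
% to minimize the squared residual of the truncated series:
E_m(c_0) \;=\; \int_\Omega \Bigl(\mathcal{N}\Bigl[\textstyle\sum_{k=0}^{m} u_k\Bigr]\Bigr)^2 \, d\Omega .
```

The two-stage optimization mentioned in the abstract corresponds to minimizing this residual over both the convergence-control parameter c₀ and the decay-rate constant parameterizing the family of auxiliary linear operators.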
Technological and Analytical Methods for Arabinoxylan Quantification from Cereals.
Döring, Clemens; Jekle, Mario; Becker, Thomas
2016-04-25
Arabinoxylan (AX) is the major nonstarch polysaccharide contained in various types of grains. AX consists of a backbone of β-(1,4)-D-xylopyranosyl residues with randomly linked α-L-arabinofuranosyl units. Once isolated and included as a food additive, AX affects foodstuff attributes and has positive effects on human health. AX can be classified into water-extractable and water-unextractable AX. For isolating AX from its natural matrix, a range of methods has been developed, adapted, and improved. This review presents a survey of the commonly used extraction methods for AX and the influence of the different techniques. It also provides a brief overview of the structural and technological impact of AX as a dough additive. A concluding section summarizes different detection methods for analyzing and quantifying AX. PMID:25629383
An analytical method to predict efficiency of aircraft gearboxes
NASA Technical Reports Server (NTRS)
Anderson, N. E.; Loewenthal, S. H.; Black, J. D.
1984-01-01
A spur gear efficiency prediction method previously developed by the authors was extended to include power loss of planetary gearsets. A friction coefficient model was developed for MIL-L-7808 oil based on disc machine data. This combined with the recent capability of predicting losses in spur gears of nonstandard proportions allows the calculation of power loss for complete aircraft gearboxes that utilize spur gears. The method was applied to the T56/501 turboprop gearbox and compared with measured test data. Bearing losses were calculated with large scale computer programs. Breakdowns of the gearbox losses point out areas for possible improvement.
NASA Astrophysics Data System (ADS)
Atteia, O.; Höhener, P.
2012-09-01
Various numerical reactive transport models were developed in the last decade to simulate plumes of pollutants in heterogeneous aquifers. However, these models remain difficult to use for the non-specialist, and the computation times are often long. Users who need to fit several model parameters to match predictions with field data in heterogeneous aquifers may be discouraged by the time needed to run the simulations. The objective of this paper is to provide a set of approximations that allow almost instantaneous calculations for the transport of redox-reactive pollutants, the most common examples being benzene, toluene, ethylbenzene and xylenes (BTEX). The approach relies on two major tools: (i) the use of flux tubes (FT), a variant of stream tubes that includes dispersion, and (ii) sequential superposition of the reactions (Mixed Instantaneous and Kinetics Superposition Sequence, MIKSS). The calculation of transport is uncoupled from the calculation of reactions. The superposition principle has been used previously for the analytical solution of a bimolecular reaction of an electron donor with an acceptor, and is here extended to more than one dissolved electron acceptor reacting with more than one donor. The approach is further improved by limiting the kinetic reactions according to the availability of the reactants and by combining kinetic and instantaneous reactions. The results computed with this approach are compared to three well-known numerical models (RT3D, PHT3D, PHAST) for various test cases including uniform, slightly diverted or highly irregular flow fields and several reaction schemes for BTEX. The FT-MIKSS solution gives nearly the same results as the other models and proved to be very flexible. Its major advantage is computation times that are generally 100 to 1000 times faster than those of the other numerical models. This approach might be a useful tool during the long procedure of fitting field data.
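The sequential instantaneous-reaction superposition named above can be illustrated with a minimal sketch: a conservative (non-reacted) donor concentration is corrected by letting each background electron acceptor consume donor at a fixed stoichiometric mass ratio, in sequence, until one of the two is exhausted. This is an illustrative stand-in for one ingredient of MIKSS, not the full FT-MIKSS scheme, and the ratios and concentrations below are invented for the example.

```python
def react_instantaneous(c_donor, acceptors, stoich):
    """Sequential instantaneous-reaction superposition at a single point.

    c_donor   : conservative donor (e.g. BTEX) concentration, mg/L
    acceptors : background acceptor concentrations, mg/L, in reaction order
    stoich    : mass of acceptor consumed per mass of donor, same order
    Each acceptor destroys donor until either the acceptor or the donor
    is exhausted, then the next acceptor reacts with what remains.
    """
    remaining = c_donor
    for c_a, f in zip(acceptors, stoich):
        consumed = min(remaining, c_a / f)   # donor mass destroyed by this acceptor
        remaining -= consumed
    return remaining

# Illustrative example: conservative plume value 6 mg/L, O2- and NO3-like
# backgrounds; the stoichiometric ratios are placeholders, not paper values.
print(react_instantaneous(6.0, acceptors=[3.15, 4.9], stoich=[3.15, 4.9]))
```

Because the reaction step only post-processes the conservatively transported concentrations, it can be applied independently along each flux tube, which is what uncouples transport from reactions and makes the approach fast.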
Analytical methods of electrode design for a relativistic electron gun
Caporaso, G.J.; Cole, A.G.; Boyd, J.K.
1985-05-09
The standard paraxial ray equation method for the design of electrodes for an electrostatically focused gun is extended to include relativistic effects and the effects of the beam's azimuthal magnetic field. Solutions for parallel and converging beams are obtained and the predicted currents are compared against those measured on the High Brightness Test Stand. 4 refs., 2 figs.
Teaching Analytical Method Development in an Undergraduate Instrumental Analysis Course
ERIC Educational Resources Information Center
Lanigan, Katherine C.
2008-01-01
Method development and assessment, central components of carrying out chemical research, require problem-solving skills. This article describes a pedagogical approach for teaching these skills through the adaptation of published experiments and application of group-meeting style discussions to the curriculum of an undergraduate instrumental…
Advanced and In Situ Analytical Methods for Solar Fuel Materials.
Chan, Candace K; Tüysüz, Harun; Braun, Artur; Ranjan, Chinmoy; La Mantia, Fabio; Miller, Benjamin K; Zhang, Liuxian; Crozier, Peter A; Haber, Joel A; Gregoire, John M; Park, Hyun S; Batchellor, Adam S; Trotochaud, Lena; Boettcher, Shannon W
2016-01-01
In situ and operando techniques can play important roles in the development of better performing photoelectrodes, photocatalysts, and electrocatalysts by helping to elucidate crucial intermediates and mechanistic steps. The development of high throughput screening methods has also accelerated the evaluation of relevant photoelectrochemical and electrochemical properties for new solar fuel materials. In this chapter, several in situ and high throughput characterization tools are discussed in detail along with their impact on our understanding of solar fuel materials. PMID:26267386
Analytical methods applied to diverse types of Brazilian propolis
2011-01-01
Propolis is a bee product, composed mainly of plant resins and beeswax; its chemical composition therefore varies with the geographic and plant origins of these resins, as well as with the species of bee. Brazil is an important supplier of propolis on the world market and, although the green propolis from the southeast is the best known and most studied, several other types of propolis from Apis mellifera and native stingless bees (also called cerumen) can be found. Propolis is usually consumed as an extract, so the type of solvent and the extractive procedures employed further affect its composition. Methods used for extraction; analysis of the percentages of resins, wax and insoluble material in crude propolis; and determination of phenolic, flavonoid, amino acid and heavy metal contents are reviewed herein. Different chromatographic methods applied to the separation, identification and quantification of Brazilian propolis components, and their relative strengths, are discussed, as well as direct-insertion mass spectrometry fingerprinting. Propolis has been used as a popular remedy for several centuries for a wide array of ailments. Its antimicrobial properties, present in propolis from different origins, have been extensively studied. More recently, the anti-parasitic, anti-viral/immune-stimulating, healing, anti-tumor, anti-inflammatory, antioxidant and analgesic activities of diverse types of Brazilian propolis have been evaluated. The most common methods employed and overviews of their relative results are presented. PMID:21631940
EVALUATION OF BENTHIC MACROINVERTEBRATE BIOMASS METHODOLOGY. PART 1. LABORATORY ANALYTICAL METHODS
Evaluation of analytical methods employed for wet weight (live or preserved samples) of benthic macroinvertebrates reveals that centrifugation at 140 x gravity for one minute yields constant biomass estimates. Duration of specimen exposure in ethanol, formalin, and formol (formal...
EVALUATION OF ANALYTICAL REPORTING ERRORS GENERATED AS DESCRIBED IN SW-846 METHOD 8261A
SW-846 Method 8261A incorporates the vacuum distillation of analytes from samples, and their recoveries are characterized by internal standards. The internal standards measure recoveries with confidence intervals as functions of physical properties. The frequency the calculate...
The Superfund Innovative Technology Evaluation (SITE) Program evaluates new technologies to assess their effectiveness. This bulletin summarizes results from the 1993 SITE demonstration of the Field Analytical Screening Program (FASP) Pentachlorophenol (PCP) Method to determine P...
A vocal-based analytical method for goose behaviour recognition.
Steen, Kim Arild; Therkildsen, Ole Roland; Karstoft, Henrik; Green, Ole
2012-01-01
Since human-wildlife conflicts are increasing, the development of cost-effective methods for reducing damage or conflict levels is important in wildlife management. A wide range of devices to detect and deter animals causing conflict are used for this purpose, although their effectiveness is often highly variable due to habituation to disruptive or disturbing stimuli. Automated recognition of behaviours could form a critical component of a system capable of altering the disruptive stimuli to avoid this. In this paper we present a novel method to automatically recognise goose behaviour based on vocalisations from flocks of free-living barnacle geese (Branta leucopsis). The geese were observed and recorded in a natural environment using a shielded shotgun microphone. The classification used Support Vector Machines (SVMs), which had been trained with labeled data. Greenwood Function Cepstral Coefficients (GFCC) were used as features for the pattern recognition algorithm, as they can be adjusted to the hearing capabilities of different species. Three behaviours were classified using this approach, and the method achieves good recognition of foraging behaviour (86-97% sensitivity, 89-98% precision) and reasonable recognition of flushing (79-86%, 66-80%) and landing behaviour (73-91%, 79-92%). The Support Vector Machine proved to be a robust classifier for this task, where generality and non-linear capabilities are important. We conclude that vocalisations can be used to automatically detect the behaviour of wildlife species causing conflict and, as such, may be used as an integrated part of a wildlife management system. PMID:22737037
Zhang, Qianchun; Luo, Xialin; Li, Gongke; Xiao, Xiaohua
2015-09-01
Small polar molecules such as nucleosides, amines, and amino acids are important analytes in biological, food, environmental, and other fields. It is necessary to develop efficient sample preparation and sensitive analytical methods for the rapid analysis of these polar small molecules in complex matrices. Some typical materials used in sample preparation, including silica, polymer, carbon, and boric acid, are introduced in this paper. Meanwhile, the applications and development of analytical methods for polar small molecules, such as reversed-phase liquid chromatography, hydrophilic interaction chromatography, etc., are also reviewed. PMID:26753274
Sulfathiazole: analytical methods for quantification in seawater and macroalgae.
Leston, Sara; Nebot, Carolina; Nunes, Margarida; Cepeda, Alberto; Pardal, Miguel Ângelo; Ramos, Fernando
2015-01-01
The awareness of the interconnection between pharmaceutical residues, human health, and aquaculture has highlighted concern over the potential harmful effects these residues can induce. To better understand the consequences, more research is needed, which in turn requires new methodologies for the detection and quantification of pharmaceuticals. Antibiotics are a major class of drugs included in the designation of emerging contaminants, representing a high risk to natural ecosystems. Among the most prescribed are sulfonamides, with sulfathiazole being the compound selected for investigation in this study. In the environment, macroalgae are an important group of producers, continuously exposed to contaminants, with a significant role in the trophic web. Because of these characteristics, they are already being considered for possible use as bioindicators. The present study describes two new methodologies based on liquid chromatography for the determination of sulfathiazole in seawater and in the green macroalga Ulva lactuca. Both methods were validated according to international standards, with MS/MS detection showing greater sensitivity, as expected, with LODs of 2.79 ng/g and 1.40 ng/mL for algae and seawater, respectively. For UV detection the corresponding values were 2.83 μg/g and 2.88 μg/mL, making it more suitable for samples originating from more contaminated sites. The methods were also successfully applied to experimental samples, with results showing that macroalgae have potential use as indicators of contamination. PMID:25473819
Computational Neutronics Methods and Transmutation Performance Analyses for Fast Reactors
R. Ferrer; M. Asgari; S. Bays; B. Forget
2007-03-01
The once-through fuel cycle strategy in the United States for the past six decades has resulted in an accumulation of Light Water Reactor (LWR) Spent Nuclear Fuel (SNF). This SNF contains considerable amounts of transuranic (TRU) elements that limit the volumetric capacity of the current planned repository strategy. A possible way of maximizing the volumetric utilization of the repository is to separate the TRU from the LWR SNF through a process such as UREX+1a, and convert it into fuel for a fast-spectrum Advanced Burner Reactor (ABR). The key advantage in this scenario is the assumption that recycling of TRU in the ABR (through pyroprocessing or some other approach), along with a low capture-to-fission probability in the fast reactor’s high-energy neutron spectrum, can effectively decrease the decay heat and toxicity of the waste being sent to the repository. The decay heat and toxicity reduction can thus minimize the need for multiple repositories. This report summarizes the work performed by the fuel cycle analysis group at the Idaho National Laboratory (INL) to establish the specific technical capability for performing fast reactor fuel cycle analysis and its application to a high-priority ABR concept. The high-priority ABR conceptual design selected is a metallic-fueled, 1000 MWth SuperPRISM (S-PRISM)-based ABR with a conversion ratio of 0.5. Results from the analysis showed excellent agreement with reference values. The independent model was subsequently used to study the effects of excluding curium from the transuranic (TRU) external feed coming from the LWR SNF and recycling the curium produced by the fast reactor itself through pyroprocessing. Current studies to be published this year focus on analyzing the effects of different separation strategies as well as heterogeneous TRU target systems.
Using an analytical geometry method to improve tiltmeter data presentation
Su, W.-J.
2000-01-01
The tiltmeter is a useful tool for geologic and geotechnical applications. To obtain full benefit from the tiltmeter, easy and accurate data presentations should be used. Unfortunately, the most commonly used method for tilt data reduction now may yield inaccurate and low-resolution results. This article describes a simple, accurate, and high-resolution approach developed at the Illinois State Geological Survey for data reduction and presentation. The orientation of tiltplates is determined first by using a trigonometric relationship, followed by a matrix transformation, to obtain the true amount of rotation change of the tiltplate at any given time. The mathematical derivations used for the determination and transformation are then coded into an integrated PC application by adapting the capabilities of commercial spreadsheet, database, and graphics software. Examples of data presentation from tiltmeter applications in studies of landfill covers, characterizations of mine subsidence, and investigations of slope stability are also discussed.
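As a toy illustration of the trigonometric/matrix step described above, the sketch below rotates two orthogonal plate-frame tilt components into site coordinates. The azimuth convention and the readings are hypothetical, not values from the article.

```python
import math

def to_site_coords(tilt_x, tilt_y, plate_azimuth_deg):
    """Rotate plate-frame tilt components into site (north, east) axes.
    Azimuth is taken as the clockwise angle from site north to plate +x."""
    a = math.radians(plate_azimuth_deg)
    north = tilt_x * math.cos(a) - tilt_y * math.sin(a)
    east = tilt_x * math.sin(a) + tilt_y * math.cos(a)
    return north, east

# A plate whose +x axis points due east (azimuth 90 deg): a pure +x tilt
# should map entirely onto the site's east component.
n, e = to_site_coords(1.0, 0.0, 90.0)
print(n, e)
```

In practice this transformation would be applied to each time sample after subtracting the baseline reading, which is the step the spreadsheet/database application automates.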
Analytical Chemistry Laboratory (ACL) procedure compendium. Volume 4, Organic methods
Not Available
1993-08-01
This interim notice covers the following: extractable organic halides in solids, total organic halides, analysis by gas chromatography/Fourier transform-infrared spectroscopy, hexadecane extracts for volatile organic compounds, GC/MS analysis of VOCs, GC/MS analysis of methanol extracts of cryogenic vapor samples, screening of semivolatile organic extracts, GPC cleanup for semivolatiles, sample preparation for GC/MS for semi-VOCs, analysis for pesticides/PCBs by GC with electron capture detection, sample preparation for pesticides/PCBs in water and soil sediment, report preparation, Florisil column cleanup for pesticides/PCBs, silica gel and acid-base partition cleanup of samples for semi-VOCs, concentrated acid wash cleanup, carbon determination in solids using a Coulometrics CO2 coulometer, determination of total carbon/total organic carbon/total inorganic carbon in radioactive liquids/soils/sludges by the hot persulfate method, analysis of solids for carbonates using a Coulometrics Model 5011 coulometer, and Soxhlet extraction.
Analytical methods for abused drugs in hair and their applications.
Wada, Mitsuhiro; Ikeda, Rie; Kuroda, Naotaka; Nakashima, Kenichiro
2010-06-01
Hair has been focused on for its usability as an alternative biological specimen to blood and urine for determining drugs of abuse in fields such as forensic and toxicological sciences because hair can be used to elucidate the long intake history of abused drugs compared with blood and urine. Hair analysis consists of several pretreatment steps, such as washing out contaminates from hair, extraction of target compounds from hair, and cleanup for instrumental analysis. Each step includes characteristic and independent features for the class of drugs, e.g., stimulants, narcotics, cannabis, and other medicaments. In this review, recently developed methods to determine drugs of abuse are summarized, and the pretreatment steps as well as the sensitivity and applicability are critically discussed. PMID:20232061
NASA Astrophysics Data System (ADS)
Wailliez, Sébastien E.
2014-03-01
In the two-body model, the time of flight between two positions can be expressed as a single-variable function, and a variety of formulations exist. Lambert's problem can be solved by inverting such a function. In this article, a method which inverts Lagrange's flight time equation and supports the problematic 180° transfer is proposed. The method relies on a Householder algorithm of variable order. Unlike other iterative methods, however, it is semi-analytical in the sense that the flight time function's derivatives are computed analytically to second order rather than approximated with first-order finite differences. The author investigated the profile of Lagrange's elliptic flight time equation and its derivatives, with a special focus on their significance for the behaviour of the proposed method and the stated goal of guaranteed convergence. Possible numerical deficiencies were identified and dealt with. As a test, 28 scenarios of variable difficulty were designed to cover a wide variety of geometries. The context of this research being the orbit determination of artificial satellites and debris, the scenarios are representative of typical such objects in Low-Earth, Geostationary and Geostationary Transfer Orbits. An analysis of the computational impact of the quality of the initial guess vs. that of the order of the method was also performed, providing clues for further research and optimisations (e.g. asteroids, long-period comets, multi-revolution cases). The results indicate fast to very fast convergence in all test cases; they validate the numerical safeguards and also give a quantitative assessment of the importance of the initial guess.
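At order 2, the variable-order Householder iteration described above reduces to Halley's method, which uses the analytic first and second derivatives. A generic sketch follows, applied to a simple stand-in function rather than Lagrange's flight time equation (whose derivatives are derived in the article).

```python
def halley(f, df, d2f, x0, tol=1e-12, max_iter=50):
    """Householder iteration of order 2 (Halley's method) with analytic
    first and second derivatives."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        dfx, d2fx = df(x), d2f(x)
        # Halley update: x <- x - 2 f f' / (2 f'^2 - f f'')
        x -= 2.0 * fx * dfx / (2.0 * dfx ** 2 - fx * d2fx)
    return x

# Stand-in inversion problem: solve x**3 - 2 = 0 starting from x0 = 1.
root = halley(lambda x: x ** 3 - 2.0,
              lambda x: 3.0 * x ** 2,
              lambda x: 6.0 * x,
              x0=1.0)
print(root)
```

The cubic convergence of this update is what makes analytic second derivatives pay off over first-order finite differences when the derivative evaluations are cheap.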
Burtis, Carl A.; Johnson, Wayne F.; Walker, William A.
1988-01-01
A rotor and disc assembly for use in a centrifugal fast analyzer. The assembly is designed to process multiple samples of whole blood followed by aliquoting of the resultant serum into precisely measured samples for subsequent chemical analysis. The assembly requires minimal operator involvement with no mechanical pipetting. The system comprises (1) a whole blood sample disc, (2) a serum sample disc, (3) a sample preparation rotor, and (4) an analytical rotor. The blood sample disc and serum sample disc are designed with a plurality of precision bore capillary tubes arranged in a spoked array. Samples of blood are loaded into the blood sample disc in capillary tubes filled by capillary action and centrifugally discharged into cavities of the sample preparation rotor where separation of serum and solids is accomplished. The serum is loaded into the capillaries of the serum sample disc by capillary action and subsequently centrifugally expelled into cuvettes of the analytical rotor for analysis by conventional methods.
Analytic study of the Tadoma method: background and preliminary results.
Norton, S J; Schultz, M C; Reed, C M; Braida, L D; Durlach, N I; Rabinowitz, W M; Chomsky, C
1977-09-01
Certain deaf-blind persons have been taught, through the Tadoma method of speechreading, to use vibrotactile cues from the face and neck to understand speech. This paper reports the results of preliminary tests of the speechreading ability of one adult Tadoma user. The tests were of four major types: (1) discrimination of speech stimuli; (2) recognition of words in isolation and in sentences; (3) interpretation of prosodic and syntactic features in sentences; and (4) comprehension of written (Braille) and oral speech. Words in highly contextual environments were much better perceived than were words in low-context environments. Many of the word errors involved phonemic substitutions which shared articulatory features with the target phonemes, with a higher error rate for vowels than consonants. Relative to performance on word-recognition tests, performance on some of the discrimination tests was worse than expected. Perception of sentences appeared to be mildly sensitive to rate of talking and to speaker differences. Results of the tests on perception of prosodic and syntactic features, while inconclusive, indicate that many of the features tested were not used in interpreting sentences. On an English comprehension test, a higher score was obtained for items administered in Braille than through oral presentation. PMID:904318
Hyperspectral imaging based method for fast characterization of kidney stone types
NASA Astrophysics Data System (ADS)
Blanco, Francisco; López-Mesas, Montserrat; Serranti, Silvia; Bonifazi, Giuseppe; Havel, Josef; Valiente, Manuel
2012-07-01
The formation of kidney stones is a common and highly studied disease, which causes intense pain and shows a high recurrence rate. In order to find the causes of this problem, characterization of the main compounds is of great importance. In this sense, analysis of the composition and structure of a stone can give key information about the urine parameters during crystal growth. However, the methods usually employed are slow, analyst-dependent, and yield limited information. In the present work, the near-infrared (NIR) hyperspectral imaging technique was used for the analysis of 215 kidney stone samples, including the main types usually found and their mixtures. The NIR reflectance spectra of the analyzed stones showed significant differences that were used for their classification. To do so, a classification method was built using artificial neural networks, which achieved a probability higher than 90% of correctly classifying the stones. The promising results, robust methodology, and fast analytical process, without the need for expert assistance, allow easy implementation in clinical laboratories, offering the urologist a rapid diagnosis that should contribute to minimizing urolithiasis recurrence.
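A schematic stand-in for the neural-network classification step above, assuming scikit-learn: synthetic "spectra" with a Gaussian absorption band at one of two positions replace the real NIR hyperspectral data and stone types.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
wavelengths = np.linspace(0.0, 1.0, 64)

def fake_spectrum(band_center):
    """Reflectance with one Gaussian absorption band plus noise."""
    band = np.exp(-((wavelengths - band_center) / 0.05) ** 2)
    return 1.0 - band + 0.02 * rng.standard_normal(wavelengths.size)

# Two synthetic "stone types" distinguished only by band position.
X = [fake_spectrum(c) for c in [0.3] * 50 + [0.7] * 50]
y = [0] * 50 + [1] * 50

net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                    random_state=0).fit(X, y)
print(net.score(X, y))
```

The real task is per-pixel classification over a full hyperspectral cube with many mineral classes and mixtures; this sketch only shows the feature-vector-to-network shape of the problem.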
An introduction to clinical microeconomic analysis: purposes and analytic methods.
Weintraub, W S; Mauldin, P D; Becker, E R
1994-06-01
The recent concern with health care economics has fostered the development of a new discipline generally called clinical microeconomics, in which microeconomic methods are used to study the economics of specific medical therapies. It is possible to perform stand-alone cost analyses, but more profound insight into the medical decision-making process may be gained by combining cost studies with measures of outcome. This is most often accomplished with cost-effectiveness or cost-utility studies. In cost-effectiveness studies there is one measure of outcome, often death. In cost-utility studies there are multiple measures of outcome, which must be grouped together to give an overall picture of outcome, or utility. There are theoretical limitations to the determination of utility that must be accepted to perform this type of analysis. A summary statement of outcome is quality-adjusted life years (QALYs), which is utility times socially discounted survival. Discounting is used because people value a year of future life less than a year of present life. Costs are made up of in-hospital direct, professional, follow-up direct, and follow-up indirect costs. Direct costs are for medical services. Indirect costs reflect opportunity costs such as lost time at work. Cost estimates are often based on marginal costs, or the cost for one additional procedure of the same type. Finally, an overall statistic may be generated as cost per unit increase in effectiveness, such as dollars per QALY. (ABSTRACT TRUNCATED AT 250 WORDS) PMID:10151059
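The QALY and cost-per-QALY arithmetic described above can be made concrete with a small sketch; the utilities, incremental cost, and 3% discount rate are illustrative assumptions, not values from the abstract.

```python
def discounted_qalys(utilities, rate=0.03):
    """Sum of per-year utilities applied to socially discounted survival."""
    return sum(u / (1.0 + rate) ** t for t, u in enumerate(utilities))

# Hypothetical therapy: five years of life at utility 0.8 vs 0.6 without it.
qaly_new = discounted_qalys([0.8] * 5)
qaly_old = discounted_qalys([0.6] * 5)

incremental_cost = 20000.0  # extra direct plus indirect cost (hypothetical)
icer = incremental_cost / (qaly_new - qaly_old)  # dollars per QALY gained
print(round(qaly_new - qaly_old, 4), round(icer))
```

The ratio computed last is the incremental cost-effectiveness statistic ("dollars per QALY") the abstract ends with.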
A sample preparation method for recovering suppressed analyte ions in MALDI TOF MS.
Lou, Xianwen; de Waal, Bas F M; Milroy, Lech-Gustav; van Dongen, Joost L J
2015-05-01
In matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI TOF MS), analyte signals can be substantially suppressed by other compounds in the sample. In this technical note, we describe a modified thin-layer sample preparation method that significantly reduces the analyte suppression effect (ASE). In our method, analytes are deposited on top of the surface of matrix preloaded on the MALDI plate. To prevent embedding of the analyte into the matrix crystals, the sample solutions were prepared without matrix and care was taken not to re-dissolve the preloaded matrix. The results with model mixtures of peptides, synthetic polymers and lipids show that detection of analyte ions that were completely suppressed using the conventional dried-droplet method could be effectively recovered by using our method. Our findings suggest that the incorporation of analytes into the matrix crystals has an important contributory effect on ASE. By reducing ASE, our method should be useful for the direct MALDI MS analysis of multicomponent mixtures. PMID:26259660
Căruntu, Bogdan
2014-01-01
The paper presents the optimal homotopy perturbation method, which is a new method to find approximate analytical solutions for nonlinear partial differential equations. Based on the well-known homotopy perturbation method, the optimal homotopy perturbation method presents an accelerated convergence compared to the regular homotopy perturbation method. The applications presented emphasize the high accuracy of the method by means of a comparison with previous results. PMID:25003150
ERIC Educational Resources Information Center
Jang, Eunice E.; McDougall, Douglas E.; Pollon, Dawn; Herbert, Monique; Russell, Pia
2008-01-01
There are both conceptual and practical challenges in dealing with data from mixed methods research studies. There is a need for discussion about various integrative strategies for mixed methods data analyses. This article illustrates integrative analytic strategies for a mixed methods study focusing on improving urban schools facing challenging…
A Comparative Evaluation of Analytical Methods to Allocate Individual Marks from a Team Mark
ERIC Educational Resources Information Center
Nepal, Kali
2012-01-01
This study presents a comparative evaluation of analytical methods to allocate individual marks from a team mark. Only the methods that use or can be converted into some form of mathematical equations are analysed. Some of these methods focus primarily on the assessment of the quality of teamwork product (product assessment) while the others put…
NASA Astrophysics Data System (ADS)
Afanas'ev, A. P.; Dzyuba, S. M.
2015-10-01
A method for constructing approximate analytic solutions of systems of ordinary differential equations with a polynomial right-hand side is proposed. The implementation of the method is based on the Picard method of successive approximations and a procedure of continuation of local solutions. As an application, the problem of constructing the minimal sets of the Lorenz system is considered.
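The Picard iteration underlying the method above can be sketched numerically: each pass replaces the candidate solution with y0 plus the running integral of the right-hand side. The test problem y' = y is a simple stand-in for the polynomial systems the paper targets, and the trapezoidal quadrature replaces the paper's analytic (series) integration.

```python
import numpy as np

def picard(f, y0, t, iterations=10):
    """Successive approximation: y_{k+1}(t) = y0 + integral_0^t f(s, y_k(s)) ds."""
    y = np.full_like(t, y0, dtype=float)
    for _ in range(iterations):
        integrand = f(t, y)
        # cumulative trapezoidal integral from t[0] to each grid point
        steps = 0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t)
        y = y0 + np.concatenate(([0.0], np.cumsum(steps)))
    return y

t = np.linspace(0.0, 1.0, 201)
y = picard(lambda s, v: v, 1.0, t)  # y' = y, y(0) = 1, exact solution exp(t)
print(y[-1])
```

Ten passes reproduce the first ten terms of the exponential series, which is why the endpoint value lands very close to e; the continuation of local solutions mentioned above restarts this construction from the end of each converged interval.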
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-16
... Methods AGENCY: Environmental Protection Agency (EPA). ACTION: Notice. SUMMARY: The Environmental...) analytical methods. At these meetings, stakeholders will be given an opportunity to discuss potential elements of a method re-evaluation study, such as developing a reference coliform/non-coliform library...
FASTLens (FAst STatistics for weak Lensing): Fast Method for Weak Lensing Statistics and Map Making
NASA Astrophysics Data System (ADS)
Pires, S.; Starck, J.-L.; Amara, A.; Teyssier, R.; Refregier, A.; Fadili, J.
2010-10-01
The analysis of weak lensing data requires accounting for missing data, such as the masking out of bright stars. To date, the majority of lensing analyses use the two-point statistics of the cosmic shear field. These can be studied either directly, using the two-point correlation function, or in Fourier space, using the power spectrum. The two-point correlation function is unbiased by missing data, but its direct calculation will soon become a burden with the exponential growth of astronomical data sets. The power spectrum is fast to estimate, but a correction for the mask must be estimated. Other statistics can be used, but these are strongly sensitive to missing data. The solution proposed by FASTLens is to properly fill in the gaps in only O(N log N) operations, leading to a complete weak lensing mass map from which one can straightforwardly compute, with very good accuracy, any statistic such as the power spectrum or bispectrum.
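A small numeric illustration of the bias FASTLens corrects: masking pixels suppresses the naive FFT power estimate roughly in proportion to the unmasked fraction. The Gaussian map and 20% mask are synthetic, and the sparse O(N log N) inpainting itself is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 128
field = rng.standard_normal((n, n))   # toy Gaussian "mass map"
mask = rng.random((n, n)) > 0.2       # True = observed (~80% sky coverage)

def mean_power(m):
    """Total FFT power per pixel (normalisation cancels in the ratio)."""
    return np.sum(np.abs(np.fft.fft2(m)) ** 2) / m.size ** 2

ratio = mean_power(field * mask) / mean_power(field)
print(ratio)  # close to the unmasked fraction, ~0.8
```

For white noise the suppression is a simple multiplicative factor, but for a correlated shear field the mask also mixes Fourier modes, which is why a proper mask correction or gap filling is needed rather than a single rescaling.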
Method for using fast fluidized bed dry bottom coal gasification
Snell, George J.; Kydd, Paul H.
1983-01-01
Carbonaceous solid material such as coal is gasified in a fast fluidized bed gasification system utilizing dual fluidized beds of hot char. The coal in particulate form is introduced along with oxygen-containing gas and steam into the fast fluidized bed gasification zone of a gasifier assembly, wherein the upward superficial gas velocity exceeds about 5.0 ft/sec and the temperature is 1500°-1850° F. The resulting effluent gas and substantial char are passed through a primary cyclone separator, from which char solids are returned to the fluidized bed. Gas from the primary cyclone separator is passed to a secondary cyclone separator, from which remaining fine char solids are returned through an injection nozzle, together with additional steam and oxygen-containing gas, to an oxidation zone located at the bottom of the gasifier, wherein the upward gas velocity ranges from about 3-15 ft/sec and the temperature is maintained at 1600°-200° F. This gasification arrangement provides for increased utilization of the secondary char material to produce higher overall carbon conversion and product yields in the process.
Waste Tank Organic Safety Program: Analytical methods development. Progress report, FY 1994
Campbell, J.A.; Clauss, S.A.; Grant, K.E.
1994-09-01
The objectives of this task are to develop and document extraction and analysis methods for organics in waste tanks, and to extend these methods to the analysis of actual core samples in support of the Waste Tank Organic Safety Program. This report documents progress at Pacific Northwest Laboratory during FY 1994 on methods development, the analysis of waste from Tank 241-C-103 (Tank C-103) and Tank T-111, and the transfer of the documented, developed analytical methods to personnel in the Analytical Chemistry Laboratory (ACL) and the 222-S laboratory. This report is intended as an annual report, not a completed work.
Kolber, Z.; Falkowski, P.
1995-06-20
A fast repetition rate fluorometer device and method for measuring the in vivo fluorescence of phytoplankton or higher-plant chlorophyll and the photosynthetic parameters of phytoplankton or higher plants are disclosed. The phytoplankton or higher plants are illuminated with a series of fast repetition rate excitation flashes effective to bring about and measure resultant changes in the fluorescence yield of their Photosystem II. The series of fast repetition rate excitation flashes has a predetermined energy per flash and a rate greater than 10,000 Hz. Also disclosed is a flasher circuit for producing the series of fast repetition rate flashes. 14 figs.
Watson, A.P.; Kistner, S.
1995-06-01
This first technical conference promoted the standardization of analytical protocols to reliably detect chemical warfare agents and their degradation products in soil, water, and other complex environmental media. This supports the various chemical weapons disposal and emergency preparedness programs, Chemical Weapons Convention treaty compliance, and installation restoration and base closure decisions. Five major topics were addressed: implementation for treaty compliance, installation restoration, and stockpile disposal decisions; existing analytical methods; practical applications of existing analytical techniques; immunoassay technologies; and the environmental and biological fate of agents and their degradation products. Selected papers have been indexed separately for inclusion in the Energy Science and Technology Database.
Shayan, Mohsen; Kiani, Abolfazl
2015-08-12
This work presents a new, extremely low-cost and easy method for the fabrication of a bipolar electrode (BPE) for rapid and simultaneous screening of potential candidates for electrocatalytic reactions and for sensing applications. Our method takes advantage of the silver reflective layer deposited on the readily available recordable digital versatile disc (DVD-R) polycarbonate substrate, which acts as the BPE. Oxidation of the reflective layer of the DVD-R at the anodic pole of the BPE results in a permanent and visually measurable dissolution length. Therefore, one can correlate the electrocatalytic activity of the catalyst at the cathodic pole of the BPE, as well as the concentration of analyte in the solution, with the dissolution length of the BPE. To illustrate the promising applications of this new substrate as a BPE, p-benzoquinone (BQ) and hydrogen peroxide were tested as model targets for the sensing application. Moreover, in order to show the feasibility of using DVD BPEs for screening applications, the electrocatalytic activities of Pt, Pd, Au, and the pristine DVD substrate toward the hydrogen evolution reaction (HER) were compared using an array of BPEs prepared on a DVD substrate. PMID:26320958
A semi-analytical method for heat sweep calculations in fractured reservoirs
Pruess, K.; Wu, Y.S.
1988-01-01
An analytical approximation is developed for purely conductive heat transfer from impermeable blocks of rock to fluids sweeping past the rocks in fractures. The method was incorporated into a multi-phase fluid and heat flow simulator. Comparison with exact analytical solutions and with simulations using a multiple interacting continua approach shows very good accuracy, with no increase in computing time compared to porous medium simulations. 14 refs., 3 figs., 5 tabs.
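The conduction-only approximation described above can be illustrated with the classical semi-infinite-solid result for the flux from hot rock into fluid sweeping a fracture face after a step temperature change. The property values below are illustrative, not values from the report.

```python
import math

def conductive_flux(k, alpha, delta_T, t):
    """Surface heat flux (W/m^2) out of a semi-infinite solid after a
    step temperature change at the face: q(t) = k*dT / sqrt(pi*alpha*t)."""
    return k * delta_T / math.sqrt(math.pi * alpha * t)

# Illustrative granite-like rock, 150 K rock/fluid contrast, after one day.
k = 2.5         # thermal conductivity, W/(m K)
alpha = 1.0e-6  # thermal diffusivity, m^2/s
q = conductive_flux(k, alpha, delta_T=150.0, t=86400.0)
print(q)
```

The 1/sqrt(t) decay of this flux is what a fluid-and-heat-flow simulator must couple, fracture by fracture, to the advancing cold-water front; the report's contribution is doing that coupling cheaply inside a multi-phase simulator.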
NASA Astrophysics Data System (ADS)
Vizireanu, D. N.; Halunga, S. V.
2012-04-01
A simple, fast and accurate amplitude estimation algorithm for sinusoidal signals in DSP-based instrumentation is proposed. It is shown that eight samples, used in two steps, are sufficient. A practical analytical formula for amplitude estimation is obtained. Numerical results are presented. Simulations have been performed for cases where the sampled signal is affected by white Gaussian noise and where the samples are quantized to a given number of bits.
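The paper's closed-form eight-sample formula is not reproduced here; as a generic point of comparison, the sketch below estimates a sinusoid's amplitude from eight samples by least-squares projection onto in-phase and quadrature components, with the frequency assumed known.

```python
import math

def amplitude_ls(samples, freq, fs):
    """Least-squares amplitude of A*sin(2*pi*freq*t + phi), freq known.
    Exact when the samples span an integer number of periods."""
    n = len(samples)
    s = sum(x * math.sin(2.0 * math.pi * freq * k / fs)
            for k, x in enumerate(samples))
    c = sum(x * math.cos(2.0 * math.pi * freq * k / fs)
            for k, x in enumerate(samples))
    return 2.0 * math.hypot(s, c) / n

# Eight samples spanning exactly one period: 1 kHz tone sampled at 8 kHz.
fs, freq, amp, phase = 8000.0, 1000.0, 1.5, 0.7
samples = [amp * math.sin(2.0 * math.pi * freq * k / fs + phase)
           for k in range(8)]
est = amplitude_ls(samples, freq, fs)
print(est)
```

With noisy or quantized samples, as in the paper's simulations, the same projection becomes an unbiased estimator whose variance shrinks with the number of samples used.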
NASA Astrophysics Data System (ADS)
Ceccaroni, Marta; Biscani, Francesco; Biggs, James
2014-01-01
This article provides a method for finding initial conditions for perturbed frozen orbits around inhomogeneous fast-rotating asteroids. These orbits can be used as reference trajectories in missions that require close inspection of any rigid body. The generalized perturbative procedure followed exploits the analytical methods of relegation of the argument of node and Delaunay normalisation to arbitrary order. These analytical methods are extremely powerful but computationally demanding. The gravitational potential of the heterogeneous body is first stated in polar-nodal coordinates, taking into account the coefficients of the spherical harmonics up to an arbitrary order. Through relegation of the argument of node and the Delaunay normalisation, a series of canonical transformations of coordinates is found, which reduces the Hamiltonian describing the system to an integrable, two-degree-of-freedom Hamiltonian plus a truncated remainder of higher order. Setting the eccentricity, argument of pericenter and inclination of the orbit of the truncated system to be constant, initial conditions are found which evolve into frozen orbits for the truncated system. Using the same initial conditions yields perturbed frozen orbits for the full system, whose perturbation decreases as further homologic equations are included in the relegation and normalisation procedures. The procedure can be automated, for the first homologic equation, up to the consideration of any arbitrary number of spherical harmonic coefficients. The project has been developed in collaboration with the European Space Agency (ESA).
A Novel and Fast Purification Method for Nucleoside Transporters.
Hao, Zhenyu; Thomsen, Maren; Postis, Vincent L G; Lesiuk, Amelia; Sharples, David; Wang, Yingying; Bartlam, Mark; Goldman, Adrian
2016-01-01
Nucleoside transporters (NTs) play critical biological roles in humans, and to understand the molecular mechanism of nucleoside transport requires high-resolution structural information. However, the main bottleneck for structural analysis of NTs is the production of pure, stable, and high quality native protein for crystallization trials. Here we report a novel membrane protein expression and purification strategy, including construction of a high-yield membrane protein expression vector, and a new and fast purification protocol for NTs. The advantages of this strategy are the improved time efficiency, leading to high quality, active, stable membrane proteins, and the efficient use of reagents and consumables. Our strategy might serve as a useful point of reference for investigating NTs and other membrane proteins by clarifying the technical points of vector construction and improvements of membrane protein expression and purification. PMID:27376071
METHOD AND APPARATUS FOR IMPROVING PERFORMANCE OF A FAST REACTOR
Koch, L.J.
1959-01-20
A specific arrangement of the fertile material and fissionable material in the active portion of a fast reactor to achieve improvement in performance and to effectively lower the operating temperatures in the center of the reactor is described. According to this invention, a group of fuel elements containing fissionable material is assembled to form a hollow fuel core. Elements containing a fertile material, such as depleted uranium, are inserted into the interior of the fuel core to form a central blanket. Additional elements of fertile material are arranged about the fuel core to form outer blankets, which in turn are surrounded by a reflector. This arrangement of fuel core and blankets results in substantial flattening of the flux pattern.
A capture-gated fast neutron detection method
NASA Astrophysics Data System (ADS)
Liu, Yi; Yang, Yi-Gang; Tai, Yang; Zhang, Zhi
2016-07-01
To address the problem of the shortage of neutron detectors used in radiation portal monitors (RPMs), caused by the 3He supply crisis, research on a cadmium-based capture-gated fast neutron detector is presented in this paper. The detector is composed of many 1 cm × 1 cm × 20 cm plastic scintillator cuboids covered by 0.1 mm thick film of cadmium. The detector uses cadmium to absorb thermal neutrons and produce capture γ-rays to indicate the detection of neutrons, and uses plastic scintillator to moderate neutrons and register γ-rays. This design removes the volume-competing relationship in traditional 3He counter-based fast neutron detectors, which hinders enhancement of the neutron detection efficiency. A detection efficiency of 21.66% ± 1.22% has been achieved with a 40.4 cm × 40.4 cm × 20 cm overall detector volume. This detector can measure both neutrons and γ-rays simultaneously. A small detector (20.2 cm × 20.2 cm × 20 cm) demonstrated a 3.3% false-alarm rate for a 252Cf source with a neutron yield of 1841 n/s from 50 cm away within a 15 s measurement time. It also demonstrated a very low (<0.06%) false-alarm rate for a 3.21×105 Bq 137Cs source. This detector offers a potential single-detector replacement for both the neutron and γ-ray detectors in RPM systems. Supported by National Natural Science Foundation of China (11175098, 11375095)
Santos, Sílvia; Ungureanu, Gabriela; Boaventura, Rui; Botelho, Cidália
2015-07-15
Selenium is an essential trace element for many organisms, including humans, but it is bioaccumulative and toxic at higher than homeostatic levels. Both selenium deficiency and toxicity are problems around the world. Mines, coal-fired power plants, oil refineries and agriculture are important examples of anthropogenic sources, generating contaminated waters and wastewaters. For reasons of human health and ecotoxicity, selenium concentration has to be controlled in drinking-water and in wastewater, as it is a potential pollutant of water bodies. This review article first provides a general overview of selenium distribution, sources, chemistry, toxicity and environmental impact. Analytical techniques used for Se determination and speciation, and water and wastewater treatment options, are reviewed. In particular, published works on adsorption as a treatment method for Se removal from aqueous solutions are critically analyzed. Recently published literature has given particular attention to the development of and search for effective adsorbents, including low-cost alternative materials. The published works consist mostly of exploratory findings and laboratory-scale experiments. Binary metal oxides and LDHs (layered double hydroxides) have presented excellent adsorption capacities for selenium species. Unconventional sorbents (algae, agricultural wastes and other biomaterials), in raw or modified forms, have also led to very interesting results, with the advantage of their availability and low cost. Some directions to be considered in future works are also suggested. PMID:25847169
Shelley, Jacob T; Hieftje, Gary M
2010-04-01
The recent development of ambient desorption/ionization mass spectrometry (ADI-MS) has enabled fast, simple analysis of many different sample types. The ADI-MS sources have numerous advantages, including little or no required sample pre-treatment, simple mass spectra, and direct analysis of solids and liquids. However, problems of competitive ionization and limited fragmentation require sample-constituent separation, high mass accuracy, and/or tandem mass spectrometry (MS/MS) to detect, identify, and quantify unknown analytes. To maintain the inherent high throughput of ADI-MS, it is essential for the ion source/mass analyzer combination to measure fast transient signals and provide structural information. In the current study, the flowing atmospheric-pressure afterglow (FAPA) ionization source is coupled with a time-of-flight mass spectrometer (TOF-MS) to analyze fast transient signals (<500 ms FWHM). It was found that gas chromatography (GC) coupled with the FAPA source resulted in a reproducible (<5% RSD) and sensitive (detection limits of <6 fmol for a mixture of herbicides) system with analysis times of ca. 5 min. Introducing analytes to the FAPA as a transient was also shown to significantly reduce matrix effects caused by competitive ionization, by minimizing the number and amount of constituents introduced into the ionization source. Additionally, MS/MS with FAPA-TOF-MS, enabling analyte identification, was performed via first-stage collision-induced dissociation (CID). Lastly, molecular and structural information was obtained across a fast transient peak by modulating the conditions that caused the first-stage CID. PMID:20349535
Synergistic effect of combining two nondestructive analytical methods for multielemental analysis.
Toh, Yosuke; Ebihara, Mitsuru; Kimura, Atsushi; Nakamura, Shoji; Harada, Hideo; Hara, Kaoru Y; Koizumi, Mitsuo; Kitatani, Fumito; Furutaka, Kazuyoshi
2014-12-16
We developed a new analytical technique that combines prompt gamma-ray analysis (PGA) and time-of-flight elemental analysis (TOF) by using an intense pulsed neutron beam at the Japan Proton Accelerator Research Complex. It allows us to obtain the results from both methods at the same time. Moreover, it can be used to quantify elemental concentrations in samples to which neither of these methods can be applied independently, if a new analytical spectrum (TOF-PGA) is used. To assess the effectiveness of the developed method, a mixed sample of Ag, Au, Cd, Co, and Ta, and the Gibeon meteorite, were analyzed. The analytical capabilities were compared based on gamma-ray peak selectivity and signal-to-noise ratios. The TOF-PGA method showed clear merits, although its capability may differ depending on the target and coexisting elements. PMID:25371049
NASA Astrophysics Data System (ADS)
Zhang, Gang; Zhou, Di; Mortari, Daniele
2012-12-01
A new approximate analytical method for the two-body impulsive orbit rendezvous problem with short range is presented. The classical analytical approach derives the initial relative velocity from the state transition matrix of linear relative motion equations. This paper proposes a different analytical approach based on the relative Lambert solutions. An approximate expression for the transfer time is obtained as a function of the difference between the chaser's and target's semi-major axes. This results in first- and second-order estimates of the chaser's semi-major axis. Singularity points of rendezvous time for the classical and proposed new methods are both analyzed. Compared with the classical method, the new solution is simpler, more accurate, and has fewer singularity points. Moreover, the proposed method can easily be expanded to higher-order solutions. A numerical example quantifies the accuracy gain for multiple-revolution cases.
Zhang, Bo; Zhong, Zhaoping; Min, Min; Ding, Kuan; Xie, Qinglong; Ruan, Roger
2015-01-01
In this study, catalytic fast co-pyrolysis (co-CFP) of corn stalk and food waste (FW) was carried out to produce aromatics using quantitative pyrolysis-gas chromatography/mass spectrometry (Py-GC/MS), and ZSM-5 zeolite in the hydrogen form was employed as the catalyst. Co-CFP temperature and a parameter called hydrogen to carbon effective ratio (H/C(eff) ratio) were examined for their effects on the relative content of aromatics. Experimental results showed that co-CFP temperature of 600 °C was optimal for the formation of aromatics and other organic pyrolysis products. Besides, H/C(eff) ratio had an important influence on product distribution. The yield of total organic pyrolysis products and relative content of aromatics increased non-linearly with increasing H/C(eff) ratio. There was an apparent synergistic effect between corn stalk and FW during co-CFP process, which promoted the production of aromatics significantly. Co-CFP of biomass and FW was an effective method to produce aromatics and other petrochemicals. PMID:25864028
Taurino, R; Cannio, M; Mafredini, T; Pozzi, P
2014-01-01
In this study, X-ray fluorescence (XRF) spectroscopy was used, in combination with micro-Raman spectroscopy, for a fast determination of bromine concentration and hence of brominated flame retardant (BFR) compounds in waste electrical and electronic equipment. Different samples from different recycling industries were characterized to evaluate the sorting performance of treatment companies. This investigation is of prime research interest, since the impact of BFRs on the environment and their potential risk to human health is a pressing concern. Indeed, the new European Restriction of Hazardous Substances Directive (RoHS 2011/65/EU) demands that plastics with a BFR concentration above 0.1%, being potential health hazards, be identified and eliminated from the recycling process. Our results show the capability and potential of Raman spectroscopy, together with XRF analysis, as effective tools for the rapid detection of BFRs in plastic materials. In particular, the use of these two techniques in combination can be considered a promising method suitable for quality control applications in the recycling industry. PMID:25244143
Analytical methods to determine phosphonic and amino acid group-containing pesticides.
Stalikas, C D; Konidari, C N
2001-01-12
A comprehensive view on the possibilities of the most recently developed chromatographic methods and emerging techniques in the analysis of pesticides glyphosate, glufosinate, bialaphos and their metabolites is presented. The state-of-the-art of the individual pre-treatment steps (extraction, pre-concentration, clean-up, separation, quantification) of the employed analytical methods for this group of chemicals is reviewed. The advantages and drawbacks of the described analytical methods are discussed and the present status and future trends are outlined. PMID:11217016
Contextual and Analytic Qualities of Research Methods Exemplified in Research on Teaching
ERIC Educational Resources Information Center
Svensson, Lennart; Doumas, Kyriaki
2013-01-01
The aim of the present article is to discuss contextual and analytic qualities of research methods. The arguments are specified in relation to research on teaching. A specific investigation is used as an example to illustrate the general methodological approach. It is argued that research methods should be carefully grounded in an understanding of…
Determinations of pesticides in food are often complicated by the presence of fats and require multiple cleanup steps before analysis. Cost-effective analytical methods are needed for conducting large-scale exposure studies. We examined two extraction methods, supercritical flu...
Flammable gas safety program. Analytical methods development: FY 1994 progress report
Campbell, J.A.; Clauss, S.; Grant, K.; Hoopes, V.; Lerner, B.; Lucke, R.; Mong, G.; Rau, J.; Wahl, K.; Steele, R.
1994-09-01
This report describes the status of developing analytical methods to account for the organic components in Hanford waste tanks, with particular focus on tanks assigned to the Flammable Gas Watch List. The methods that have been developed are illustrated by their application to samples obtained from Tank 241-SY-101 (Tank 101-SY).
Study on Two Methods for Nonlinear Force-Free Extrapolation Based on Semi-Analytical Field
NASA Astrophysics Data System (ADS)
Liu, S.; Zhang, H. Q.; Su, J. T.; Song, M. T.
2011-03-01
In this paper, two semi-analytical solutions of force-free fields (Low and Lou, Astrophys. J. 352, 343, 1990) have been used to test two nonlinear force-free extrapolation methods. One is the boundary integral equation (BIE) method developed by Yan and Sakurai (Solar Phys. 195, 89, 2000), and the other is the approximate vertical integration (AVI) method developed by Song et al. (Astrophys. J. 649, 1084, 2006). Some improvements have been made to the AVI method to avoid singular points in the course of the calculation. It is found that the correlation coefficients between the first semi-analytical field and the field extrapolated with the BIE method, and also that obtained with the improved AVI method, are greater than 90% below a height of 10 grid units above the 64×64 lower boundary. For the second semi-analytical field, these correlation coefficients are greater than 80% below the same relative height. Although differences between the semi-analytical solutions and the extrapolated fields exist for both the BIE and AVI methods, these two methods can give reliable results for heights of about 15% of the extent of the lower boundary.
NASA Technical Reports Server (NTRS)
Zeleznik, Frank J.; Gordon, Sanford
1960-01-01
The Brinkley, Huff, and White methods for chemical-equilibrium calculations were modified and extended in order to permit an analytical comparison. The extended forms of these methods permit condensed species as reaction products, include temperature as a variable in the iteration, and permit arbitrary estimates for the variables. It is analytically shown that the three extended methods can be placed in a form that is independent of components. In this form the Brinkley iteration is identical computationally to the White method, while the modified Huff method differs only slightly from these two. The convergence rates of the modified Brinkley and White methods are identical; and, further, all three methods are guaranteed to converge and will ultimately converge quadratically. It is concluded that no one of the three methods offers any significant computational advantages over the other two.
Panuwet, Parinya; Hunter, Ronald E.; D’Souza, Priya E.; Chen, Xianyu; Radford, Samantha A.; Cohen, Jordan R.; Marder, M. Elizabeth; Kartavenka, Kostya; Ryan, P. Barry; Barr, Dana Boyd
2015-01-01
The ability to quantify levels of target analytes in biological samples accurately and precisely, in biomonitoring, involves the use of highly sensitive and selective instrumentation such as tandem mass spectrometers and a thorough understanding of highly variable matrix effects. Typically, matrix effects are caused by co-eluting matrix components that alter the ionization of target analytes as well as the chromatographic response of target analytes, leading to reduced or increased sensitivity of the analysis. Thus, before the desired accuracy and precision standards of laboratory data are achieved, these effects must be characterized and controlled. Here we present our review and observations of matrix effects encountered during the validation and implementation of tandem mass spectrometry-based analytical methods. We also provide systematic, comprehensive laboratory strategies needed to control challenges posed by matrix effects in order to ensure delivery of the most accurate data for biomonitoring studies assessing exposure to environmental toxicants. PMID:25562585
NASA Technical Reports Server (NTRS)
Schnase, John L. (Inventor); Duffy, Daniel Q. (Inventor); Tamkin, Glenn S. (Inventor)
2016-01-01
A system, method and computer-readable storage devices for providing a climate data analytic services application programming interface distribution package. The example system can provide various components. The system provides a climate data analytic services application programming interface library that enables software applications running on a client device to invoke the capabilities of a climate data analytic service. The system provides a command-line interface that provides a means of interacting with a climate data analytic service by issuing commands directly to the system's server interface. The system provides sample programs that call on the capabilities of the application programming interface library and can be used as templates for the construction of new client applications. The system can also provide test utilities, build utilities, service integration utilities, and documentation.
Using decision analytic methods to assess the utility of family history tools.
Tyagi, Anupam; Morris, Jill
2003-02-01
Family history may be a useful tool for identifying people at increased risk of disease and for developing targeted interventions for individuals at higher-than-average risk. This article addresses the issue of how to examine the utility of a family history tool for public health and preventive medicine. We propose the use of a decision analytic framework for the assessment of a family history tool and outline the major elements of a decision analytic approach, including analytic perspective, costs, outcome measurements, and data needed to assess the value of a family history tool. We describe the use of sensitivity analysis to address uncertainty in parameter values and imperfect information. To illustrate the use of decision analytic methods to assess the value of family history, we present an example analysis based on using family history of colorectal cancer to improve rates of colorectal cancer screening. PMID:12568827
The alias method: A fast, efficient Monte Carlo sampling technique
Rathkopf, J.A.; Edwards, A.L.; Smidt, R.K.
1990-11-16
The alias method is a Monte Carlo sampling technique that offers significant advantages over more traditional methods. It equals the accuracy of table lookup and the speed of equally probable bins. The original formulation of this method sampled from discrete distributions and was easily extended to histogram distributions. We have extended the method further to applications more germane to Monte Carlo particle transport codes: continuous distributions. This paper presents the alias method as originally derived and our extensions to simple continuous distributions represented by piecewise linear functions. We also present a method to interpolate accurately between distributions tabulated at points other than the point of interest. We present timing studies that demonstrate the method's increased efficiency over table lookup and show further speedup achieved through vectorization. 6 refs., 2 figs., 1 tab.
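The discrete core of the alias method described above can be sketched concretely. Below is a minimal Python version of the classic Walker alias-table construction (the report's extensions to histogram and piecewise-linear continuous distributions build on this core); the function names and structure are illustrative, not the report's code.

```python
import random

def build_alias_table(probs):
    """Walker alias table: O(n) setup, then O(1) work per sample.

    Returns (prob, alias): a draw first picks a column i uniformly,
    keeps i with probability prob[i], and otherwise returns alias[i].
    """
    n = len(probs)
    scaled = [p * n for p in probs]
    prob, alias = [0.0] * n, [0] * n
    small = [i for i, s in enumerate(scaled) if s < 1.0]
    large = [i for i, s in enumerate(scaled) if s >= 1.0]
    while small and large:
        s, l = small.pop(), large.pop()
        prob[s], alias[s] = scaled[s], l
        scaled[l] -= 1.0 - scaled[s]          # column l donates mass to fill column s
        (small if scaled[l] < 1.0 else large).append(l)
    for i in small + large:                   # numerical leftovers get probability 1
        prob[i] = 1.0
    return prob, alias

def alias_sample(prob, alias, rng=random):
    i = rng.randrange(len(prob))              # equal-probable-bin lookup ...
    return i if rng.random() < prob[i] else alias[i]   # ... plus one coin flip
```

Each sample costs one uniform index and one comparison regardless of the distribution's shape, which is what gives the method table-lookup accuracy at equal-probable-bin speed.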
Long-stroke fast tool servo and a tool setting method for freeform optics fabrication
NASA Astrophysics Data System (ADS)
Liu, Qiang; Zhou, Xiaoqin; Liu, Zhiwei; Lin, Chao; Ma, Long
2014-09-01
Diamond turning assisted by a fast tool servo is highly efficient for the fabrication of freeform optics. This paper describes a long-stroke fast tool servo designed to obtain large-amplitude tool motion. It has the advantages of low cost and higher stiffness and natural frequency than other flexure-based long-stroke fast tool servo systems. The fast tool servo is actuated by a voice coil motor and guided by a flexure-hinge structure. Open-loop and closed-loop control tests are conducted on the testing platform. Because the fast tool servo adds a motion axis to a diamond turning machine, a tool-center adjustment method is described to determine the tool center position in the machine tool coordinate system once the fast tool servo is mounted on the machine. Finally, a sinusoidal surface is machined, and the results demonstrate that the tool adjustment method is efficient and precise for a flexure-based fast tool servo system and that the fast tool servo works well for the fabrication of freeform optics.
NASA Astrophysics Data System (ADS)
Cimpoca, Gh. V.; Radulescu, C.; Popescu, I. V.; Dulama, I. D.; Ionita, I.; Cimpoca, M.; Cernica, I.; Gavrila, R.
2010-01-01
In this paper we study the possibility of developing an alternative analytical method for real-time investigation of liquid properties, together with the layout and operation of Quartz Crystal Microbalance (QCM) systems. The quartz crystal microbalance is a powerful technique for monitoring adsorption and desorption processes at interfaces in different chemical and biological areas. In our paper, the QCM is used to monitor in real time polymer adsorption followed by azoic dye adsorption and then copolymer adsorption, as well as to optimize the interaction processes and determine solution effects on the analytical signal. Solutions of azoic dye (5×10⁻⁴ g/L, 5×10⁻⁵ g/L and 5×10⁻⁶ g/L in DMF) are adsorbed at the gold electrodes of the QCM, and the sensor responses are estimated through the decrease and increase of the QCM frequency. The response of the sensor to maleic anhydride (MA) copolymer with styrene (St) (MA-St copolymer solution concentrations: 5×10⁻⁴ g/L, 5×10⁻⁵ g/L and 5×10⁻⁶ g/L in DMF) is also fast, large, and reversible. The detailed investigation showed that the quartz crystal microbalance is a modern method for studying a wide range of physical and chemical properties related to the surface and interfacial processes of the synthesized copolymer, leading to higher reliability of the research results.
A fast finite volume method for conservative space-fractional diffusion equations in convex domains
NASA Astrophysics Data System (ADS)
Jia, Jinhong; Wang, Hong
2016-04-01
We develop a fast finite volume method for variable-coefficient, conservative space-fractional diffusion equations in convex domains via a volume-penalization approach. The method has optimal storage and almost linear computational complexity, and it retains second-order accuracy without requiring a Richardson extrapolation. Numerical results are presented to show the utility of the method.
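The "almost linear computational complexity" claimed for fast solvers of this kind typically rests on the stiffness matrix of the fractional-diffusion discretization being Toeplitz, so each matrix-vector product can be carried out with FFTs instead of dense algebra. A generic numpy sketch of that building block follows (illustrative of the standard circulant-embedding trick, not the authors' implementation):

```python
import numpy as np

def toeplitz_matvec(first_col, first_row, x):
    """Compute T @ x in O(N log N) by embedding the N x N Toeplitz matrix T
    into a 2N x 2N circulant matrix, which the FFT diagonalizes.
    Requires first_row[0] == first_col[0]."""
    n = len(x)
    # First column of the circulant embedding:
    # [c0 .. c_{n-1}, 0, r_{n-1} .. r_1]
    c = np.concatenate([first_col, [0.0], first_row[:0:-1]])
    xp = np.concatenate([x, np.zeros(n)])        # zero-pad the vector to 2N
    y = np.fft.ifft(np.fft.fft(c) * np.fft.fft(xp)).real
    return y[:n]                                 # top block recovers T @ x
```

Inside a Krylov iteration this replaces the O(N^2) dense product, giving almost-linear cost per iteration with O(N) storage, since only the first row and column of T need be kept.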
FAst STatistics for weak Lensing (FASTLens): fast method for weak lensing statistics and map making
NASA Astrophysics Data System (ADS)
Pires, S.; Starck, J.-L.; Amara, A.; Teyssier, R.; Réfrégier, A.; Fadili, J.
2009-05-01
With increasingly large data sets, weak lensing measurements are able to measure cosmological parameters with ever-greater precision. However, this increased accuracy also places greater demands on the statistical tools used to extract the available information. To date, the majority of lensing analyses use the two-point statistics of the cosmic shear field. These can be studied either directly, using the two-point correlation function, or in Fourier space, using the power spectrum. But analysing weak lensing data inevitably involves the masking out of regions, for example to remove bright stars from the field. Masking out the stars is common practice, but the gaps in the data need proper handling. In this paper, we show how an inpainting technique allows us to properly fill in these gaps with only O(N log N) operations, leading to a new image from which we can compute both the power spectrum and the bispectrum straightforwardly and with very good accuracy. We then propose a new method to compute the bispectrum with a polar FFT algorithm, which has the main advantage of avoiding any interpolation in the Fourier domain. Finally, we propose a new method for dark matter mass map reconstruction from shear observations, which integrates this new inpainting concept. A range of examples based on 3D N-body simulations illustrates the results.
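Why the gaps need "proper handling" can be seen with a toy example: zeroing out masked pixels removes power from a naively FFT-estimated spectrum, which is precisely the bias the inpainting step is designed to correct. A minimal numpy illustration follows (not the FASTLens code; the random map and the mask geometry are made up):

```python
import numpy as np

def radial_power_spectrum(image):
    """Azimuthally averaged 2D power spectrum via the FFT."""
    power = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
    ny, nx = image.shape
    y, x = np.indices((ny, nx))
    r = np.hypot(x - nx // 2, y - ny // 2).astype(int)   # ring index per pixel
    counts = np.bincount(r.ravel())
    sums = np.bincount(r.ravel(), weights=power.ravel())
    return sums / np.maximum(counts, 1)                  # mean power per ring

rng = np.random.default_rng(1)
field = rng.standard_normal((64, 64))   # stand-in for a shear or convergence map
mask = np.ones((64, 64))
mask[20:30, 20:30] = 0.0                # e.g. a bright star cut out of the field
p_full = radial_power_spectrum(field)
p_masked = radial_power_spectrum(field * mask)
# By Parseval's theorem, zeroing pixels lowers the total power,
# so the naive masked estimate is biased relative to p_full.
```

Filling the hole with statistically consistent values before the FFT, as the inpainting method does, restores an unbiased spectrum without the O(N^2) cost of correlation-function estimators.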
Groopman, Amber M.; Katz, Jonathan I.; Holland, Mark R.; Fujita, Fuminori; Matsukawa, Mami; Mizuno, Katsunori; Wear, Keith A.; Miller, James G.
2015-01-01
Conventional, Bayesian, and the modified least-squares Prony's plus curve-fitting (MLSP + CF) methods were applied to data acquired using 1 MHz center frequency, broadband transducers on a single equine cancellous bone specimen that was systematically shortened from 11.8 mm down to 0.5 mm for a total of 24 sample thicknesses. Due to overlapping fast and slow waves, conventional analysis methods were restricted to data from sample thicknesses ranging from 11.8 mm to 6.0 mm. In contrast, Bayesian and MLSP + CF methods successfully separated fast and slow waves and provided reliable estimates of the ultrasonic properties of fast and slow waves for sample thicknesses ranging from 11.8 mm down to 3.5 mm. Comparisons of the three methods were carried out for phase velocity at the center frequency and the slope of the attenuation coefficient for the fast and slow waves. Good agreement among the three methods was also observed for average signal loss at the center frequency. The Bayesian and MLSP + CF approaches were able to separate the fast and slow waves and provide good estimates of the fast and slow wave properties even when the two wave modes overlapped in both time and frequency domains making conventional analysis methods unreliable. PMID:26328678
Schmidt, U
1997-01-01
This paper describes the current state of behavioural, cognitive-behavioural and cognitive-analytical treatments of anorexia nervosa and the underlying theoretical models. Purely behavioural treatment methods have been evaluated in a number of single case studies. Although effective in terms of increasing body weight, these methods are obsolete in view of their unpleasant side-effects. Cognitive-behavioural and cognitive-analytical therapies are much more appropriate for these patients given their complex symptomatology and frequently ambivalent attitude to treatment. However, so far evaluations of these treatments are rare. The reasons for this are discussed. PMID:9411461
NASA Astrophysics Data System (ADS)
Drieniková, Katarína; Hrdinová, Gabriela; Naňo, Tomáš; Sakál, Peter
2010-01-01
The paper analyses the theory of corporate social responsibility, risk management, and the exact method of the analytic hierarchy process used in decision-making processes. Chapters 2 and 3 present experience with applying the method to formulating the stakeholders' strategic goals within Corporate Social Responsibility (CSR) and, simultaneously, to minimizing environmental risks. The major benefit of this paper is the application of the Analytic Hierarchy Process (AHP).
Verification of an Analytical Method for Measuring Crystal Nucleation Rates in Glasses from DTA Data
NASA Technical Reports Server (NTRS)
Ranasinghe, K. S.; Wei, P. F.; Kelton, K. F.; Ray, C. S.; Day, D. E.
2004-01-01
A recently proposed analytical (DTA) method for estimating nucleation rates in glasses has been evaluated by comparing experimental data with numerically computed nucleation rates for a model lithium disilicate glass. The time- and temperature-dependent nucleation rates were predicted using the model and compared with values from an analysis of numerically calculated DTA curves. The validity of the numerical approach was demonstrated earlier by a comparison with experimental data. The excellent agreement between the nucleation rates from the model calculations and from the computer-generated DTA data demonstrates the validity of the proposed analytical DTA method.
Advantages of Analytical Transformations in Monte Carlo Methods for Radiation Transport
McKinley, M S; Brooks III, E D; Daffin, F
2004-12-13
Monte Carlo methods for radiation transport typically attempt to solve an integral by directly sampling analog or weighted particles, which are treated as physical entities. Improvements to the methods involve better sampling, probability games or physical intuition about the problem. We show that significant improvements can be achieved by recasting the equations with an analytical transform to solve for new, non-physical entities or fields. This paper looks at one such transform, the difference formulation for thermal photon transport, showing a significant advantage for Monte Carlo solution of the equations for time dependent transport. Other related areas are discussed that may also realize significant benefits from similar analytical transformations.
Crockett, A.B.; Craig, H.D.; Jenkins, T.F.; Sisk, W.E.
1996-09-01
A large number of defense-related sites are contaminated with elevated levels of secondary explosives. Levels of contamination range from barely detectable to levels above 10% that need special handling due to the detonation potential. Characterization of explosives-contaminated sites is particularly difficult due to the very heterogeneous distribution of contamination in the environment and within samples. To improve site characterization, several options exist, including collecting more samples, providing on-site analytical data to help direct the investigation, compositing samples, improving homogenization of samples, and extracting larger samples. On-site analytical methods are essential to more economical and improved characterization. On-site methods might suffer in terms of precision and accuracy, but this is more than offset by the increased number of samples that can be run. While verification using a standard analytical procedure should be part of any quality assurance program, reducing the number of samples analyzed by the more expensive methods can result in significantly reduced costs. Often 70 to 90% of the soil samples analyzed during an explosives site investigation do not contain detectable levels of contamination. Two basic types of on-site analytical methods are in wide use for explosives in soil: colorimetric and immunoassay. Colorimetric methods generally detect broad classes of compounds such as nitroaromatics or nitramines, while immunoassay methods are more compound specific. Since TNT or RDX is usually present in explosives-contaminated soils, the use of procedures designed to detect only these or similar compounds can be very effective.
Evaluation of sampling and analytical methods for the determination of chlorodifluoromethane in air.
Seymour, M J; Lucas, M F
1993-05-01
In January 1989, the Occupational Safety and Health Administration (OSHA) published revised permissible exposure limits (PELs) for 212 compounds and established PELs for 164 additional compounds. In cases where regulated compounds did not have specific sampling and analytical methods, methods were suggested by OSHA. The National Institute for Occupational Safety and Health (NIOSH) Manual of Analytical Methods (NMAM) Method 1020, which was developed for 1,1,2-trichloro-1,2,2-trifluoroethane, was suggested by OSHA for the determination of chlorodifluoromethane in workplace air. Because this method was developed for a liquid and chlorodifluoromethane is a gas, the ability of NMAM Method 1020 to adequately sample and quantitate chlorodifluoromethane was questioned and tested by researchers at NIOSH. The evaluation of NMAM Method 1020 for chlorodifluoromethane showed that the capacity of the 100/50-mg charcoal sorbent bed was limited, the standard preparation procedure was incorrect for a gas analyte, and the analyte had low solubility in carbon disulfide. NMAM Method 1018 for dichlorodifluoromethane uses two coconut-shell charcoal tubes in series, a 400/200-mg tube followed by a 100/50-mg tube, which are desorbed with methylene chloride. This method was evaluated for chlorodifluoromethane. Test atmospheres, with chlorodifluoromethane concentrations from 0.5-2 times the PEL were generated. Modifications of NMAM Method 1018 included changes in the standard preparation procedure, and the gas chromatograph was equipped with a capillary column. These revisions to NMAM 1018 resulted in a 96.5% recovery and a total precision for the method of 7.1% for chlorodifluoromethane. No significant bias in the method was found. Results indicate that the revised NMAM Method 1018 is suitable for the determination of chlorodifluoromethane in workplace air. PMID:8498360
Cvetkovikj, I; Stefkov, G; Acevska, J; Stanoeva, J Petreska; Karapandzova, M; Stefova, M; Dimitrovska, A; Kulevanova, S
2013-03-22
Although the knowledge and use of several Salvia species (Salvia officinalis, Salvia fruticosa, and Salvia pomifera) date back to the Greek era, and the plants have a long history of culinary and effective medicinal use, there is still remarkable interest in their chemistry, especially their polyphenolic composition. Despite the demand in the food and pharmaceutical industries for methods for fast quality assessment of herbs and spices, there are still no official requirements for a minimum content of polyphenols in sage in current regulations, neither in the European Pharmacopoeia monographs nor in the ISO 11165 standard. In this work a rapid analytical method for extraction, characterization and quantification of the major polyphenolic constituents in sage was developed. Various extractions (infusion, IE; ultrasound-assisted extraction, USE; and microwave-assisted extraction, MWE) were performed and evaluated for their effectiveness. Along with the optimization of the mass-detector and chromatographic parameters, the applicability of three different reversed-phase C18 stationary phases (extra-density bonded, core-shell technology, and monolith column) for polyphenolic characterization was evaluated. A comprehensive overview of the highly variable polyphenolic composition of 118 different plant samples from 68 populations of wild-growing culinary Salvia species (S. officinalis: 101; S. fruticosa: 15; S. pomifera: 2) collected from South East Europe (SEE) was performed using HPLC-DAD-ESI-MS(n), and more than 50 different compounds were identified and quantified. This work expands the knowledge of the polyphenols of culinary sage and thereby opens the way to an insight into the chemodiversity of culinary Salvia species in South East Europe. PMID:23415138
Analytical calculation of spectral phase of grism pairs by the geometrical ray tracing method
NASA Astrophysics Data System (ADS)
Rahimi, L.; Askari, A. A.; Saghafifar, H.
2016-07-01
Optimal operation of a grism pair is practically attainable when an analytical expression for its spectral phase is at hand. In this paper, we have employed the accurate geometrical ray tracing method to calculate the analytical phase shift of a grism pair in transmission and reflection configurations. As shown by the results, for a great variety of complicated configurations, the spectral phase of a grism pair has the same form as that of a prism pair. The only exception is when the light enters into and exits from different facets of a reflection grism. The analytical result has been used to calculate the second-order dispersion of several examples of grism pairs in various possible configurations. All results are in complete agreement with those from the ray tracing method. The results of this work can be very helpful in the optimal design and application of grism pairs in various configurations.
NASA Astrophysics Data System (ADS)
Manea, I.; Popa, G.; Girnita, I.; Prenta, G.
2015-11-01
The paper presents a practical methodology for the design and structural verification of locomotive bogie frames using a modern software package for design, structural verification, and validation through combined analytical and experimental methods. In the initial stage, the bogie geometry is imported from a CAD program into a finite element analysis program such as Ansys. The analytical model is validated by experimental modal analysis carried out on a finished bogie frame. The natural frequencies and mode shapes of the bogie frame are determined by both experimental and analytical methods, and a correlation analysis of the two types of models is performed. If the results are unsatisfactory, structural optimization is performed. If the results are satisfactory, qualification proceeds with static and fatigue tests carried out in a laboratory with international accreditation in the field. This paper presents an application to bogie frames for the LEMA electric locomotive of 6000 kW.
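The correlation analysis between the experimental and analytical modal models described above is typically quantified with the Modal Assurance Criterion (MAC). A minimal sketch; the mode-shape vectors below are hypothetical, since the paper's data are not given:

```python
# Modal Assurance Criterion (MAC): correlation between an analytical (FEA)
# mode shape and an experimentally measured one. Mode shapes are hypothetical.

def mac(phi_a, phi_e):
    """MAC = |phi_a . phi_e|^2 / ((phi_a . phi_a) * (phi_e . phi_e))."""
    dot_ae = sum(a * e for a, e in zip(phi_a, phi_e))
    dot_aa = sum(a * a for a in phi_a)
    dot_ee = sum(e * e for e in phi_e)
    return dot_ae ** 2 / (dot_aa * dot_ee)

# A well-correlated analytical/experimental mode pair: MAC close to 1.
fea_mode = [0.0, 0.31, 0.59, 0.81, 0.95, 1.00]
test_mode = [0.0, 0.30, 0.61, 0.79, 0.96, 0.99]
print(mac(fea_mode, test_mode) > 0.99)   # True

# Two orthogonal shapes: MAC is 0.
print(mac([1.0, 0.0, -1.0], [1.0, 0.0, 1.0]))  # 0.0
```

Mode pairs with MAC values near 1 on the diagonal of the MAC matrix are usually accepted as matching; low diagonal values are the typical trigger for the structural optimization step the paper describes.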
Candioti, Luciana Vera; De Zan, María M; Cámara, María S; Goicoechea, Héctor C
2014-06-01
A review about the application of response surface methodology (RSM) when several responses have to be simultaneously optimized in the field of analytical methods development is presented. Several critical issues like response transformation, multiple response optimization and modeling with least squares and artificial neural networks are discussed. Most recent analytical applications are presented in the context of analytical methods development, especially in multiple response optimization procedures using the desirability function. PMID:24767454
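The desirability function mentioned above (in the Derringer-Suich style) maps each response onto a [0, 1] scale and combines the individual desirabilities by a geometric mean. A minimal sketch; the responses and target ranges below are hypothetical, for illustration only:

```python
# Desirability-function approach to multiple response optimization.
# Response names, target ranges, and values are hypothetical.

def d_maximize(y, low, high, weight=1.0):
    """Desirability of a response that should be maximized."""
    if y <= low:
        return 0.0
    if y >= high:
        return 1.0
    return ((y - low) / (high - low)) ** weight

def overall(desirabilities):
    """Overall desirability D: geometric mean of the individual d_i."""
    prod = 1.0
    for d in desirabilities:
        prod *= d
    return prod ** (1.0 / len(desirabilities))

# Two hypothetical responses: chromatographic resolution (acceptable
# between 1.0 and 2.0) and recovery in percent (acceptable 80 to 100).
d1 = d_maximize(1.8, 1.0, 2.0)      # 0.8
d2 = d_maximize(95.0, 80.0, 100.0)  # 0.75
print(round(overall([d1, d2]), 3))  # 0.775
```

The RSM model is then used to maximize the single scalar D over the experimental factor space, which is how several responses are optimized simultaneously.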
Comparative analysis of methods for real-time analytical control of chemotherapies preparations.
Bazin, Christophe; Cassard, Bruno; Caudron, Eric; Prognon, Patrice; Havard, Laurent
2015-10-15
Control of chemotherapy preparations is now an obligation in France, though analytical control is not compulsory. Several methods are available, and none of them can be presumed ideal. We wanted to compare them so as to determine which one could be the best choice. We compared non-analytical (visual and video-assisted, gravimetric) and analytical (HPLC/FIA, UV/FT-IR, UV/Raman, Raman) methods on the basis of our experience and a SWOT analysis. The results of the analysis show great differences between the techniques, but as expected none of them is without defects. However, they can probably be used in synergy. Overall, for the pharmacist willing to get involved, the implementation of controls for chemotherapy preparations must be anticipated well in advance, with every parameter listed, and remains, in our view, an analyst's job. PMID:26299761
An Overview of Conventional and Emerging Analytical Methods for the Determination of Mycotoxins
Cigić, Irena Kralj; Prosen, Helena
2009-01-01
Mycotoxins are a group of compounds produced by various fungi and excreted into the matrices on which they grow, often food intended for human consumption or animal feed. The high toxicity and carcinogenicity of these compounds and their ability to cause various pathological conditions has led to widespread screening of foods and feeds potentially polluted with them. Maximum permissible levels in different matrices have also been established for some toxins. As these are quite low, analytical methods for determination of mycotoxins have to be both sensitive and specific. In addition, an appropriate sample preparation and pre-concentration method is needed to isolate analytes from rather complicated samples. In this article, an overview of methods for analysis and sample preparation published in the last ten years is given for the most often encountered mycotoxins in different samples, mainly in food. Special emphasis is on liquid chromatography with fluorescence and mass spectrometric detection, while in the field of sample preparation various solid-phase extraction approaches are discussed. However, an overview of other analytical and sample preparation methods less often used is also given. Finally, different matrices where mycotoxins have to be determined are discussed with the emphasis on their specific characteristics important for the analysis (human food and beverages, animal feed, biological samples, environmental samples). Various issues important for accurate qualitative and quantitative analyses are critically discussed: sampling and choice of representative sample, sample preparation and possible bias associated with it, specificity of the analytical method and critical evaluation of results. PMID:19333436
Cháfer-Pericás, C; Torres-Cuevas, I; Sanchez-Illana, A; Escobar, J; Kuligowski, J; Solberg, R; Garberg, H T; Huun, M U; Saugstad, O D; Vento, M
2016-06-01
This paper describes a reliable analytical method based on ultra-performance liquid chromatography coupled to tandem mass spectrometry to determine F2-isoprostanes and other total byproducts (isoprostanes, isofurans, neuroprostanes and neurofurans) as lipid peroxidation biomarkers in newborn plasma samples. The proposed procedure is characterized by a simple sample treatment employing a reduced sample volume (100 µL). It also shows high throughput and high selectivity for determining different isoprostane isomers simultaneously in a large number of samples. The reliability of the described method was demonstrated by analysis of spiked plasma samples, with recoveries between 70% and 130% for most of the analytes. With further clinical studies in mind, the sensitivity of the method was demonstrated by analyzing a small number of human newborn plasma samples. In addition, newborn piglet plasma samples (n=80) were analyzed, confirming that the developed method was suitable for determining the analyte levels present in this kind of sample. This analytical method could therefore be applied in further clinical research on the establishment of reliable lipid peroxidation biomarkers employing this experimental model. PMID:27130102
Charisiadis, Pantelis; Makris, Konstantinos C
2014-02-01
Because of the plethora of sources and routes through which humans are exposed to trihalomethanes (THM), the limitation of their short half-lives could be overcome if a highly sensitive method were available to quantify urinary THM concentrations at sub-ppb levels. The objective of this study was to develop a fast and reliable method for the determination of the four THM analytes in human urine. A sensitive methodology was developed for THM in urine samples using gas chromatography coupled with triple quadrupole mass spectrometry (GC-QqQ-MS/MS), promoting its use in epidemiological and biomonitoring studies. The proposed methodology offers limits of detection similar to those reported in the literature (11-80 ng L(-1)) and the advantages of small initial urine volumes (15 mL) and fast analysis per sample (12 min) when compared with other methods. This is the first report using GC-QqQ-MS/MS for the determination of THM in urine samples. Because of its simplicity and less time-consuming nature, the proposed method could be incorporated into detailed (hundreds of participants' urine samples) exposure assessment protocols, providing valuable insight into the dose-response relationship of THM and cancer or pregnancy anomalies. PMID:24370554
A method for fast selecting feature wavelengths from the spectral information of crop nitrogen
Technology Transfer Automated Retrieval System (TEKTRAN)
Research on a method for fast selecting feature wavelengths from the nitrogen spectral information is necessary, which can determine the nitrogen content of crops. Based on the uniformity of uniform design, this paper proposed an improved particle swarm optimization (PSO) method. The method can ch...
NASA Astrophysics Data System (ADS)
He, Xiaolong; de la Llave, Rafael
2016-08-01
We construct analytic quasi-periodic solutions of a state-dependent delay differential equation with quasi-periodic forcing. We show that if we consider a family of problems that depends on one-dimensional parameters (with some non-degeneracy conditions), there is a positive measure set Π of parameters for which the system admits analytic quasi-periodic solutions. The main difficulty to be overcome is the appearance of small divisors, and this is the reason why we need to exclude parameters. Our main result is proved by a Nash-Moser fast convergent method and is formulated in the a-posteriori format of numerical analysis. That is, given an approximate solution of a functional equation which satisfies some non-degeneracy conditions, we can find a true solution close to it. This is in sharp contrast with the finite regularity theory developed in [18]. We conjecture that the exclusion of parameters is a real phenomenon and not a technical difficulty. More precisely, for generic families of perturbations, the quasi-periodic solutions are only finitely differentiable in open sets in the complement of the parameter set Π.
Fast Numerical Methods for the Design of Layered Photonic Structures with Rough Interfaces
NASA Technical Reports Server (NTRS)
Komarevskiy, Nikolay; Braginsky, Leonid; Shklover, Valery; Hafner, Christian; Lawson, John
2011-01-01
Modified boundary conditions (MBC) and a multilayer approach (MA) are proposed as fast and efficient numerical methods for the design of 1D photonic structures with rough interfaces. These methods are applicable for the structures, composed of materials with arbitrary permittivity tensor. MBC and MA are numerically validated on different types of interface roughness and permittivities of the constituent materials. The proposed methods can be combined with the 4x4 scattering matrix method as a field solver and an evolutionary strategy as an optimizer. The resulted optimization procedure is fast, accurate, numerically stable and can be used to design structures for various applications.
ANALYTICAL METHODS NECESSARY TO IMPLEMENT RISK-BASED CRITERIA FOR CHEMICALS IN MUNICIPAL SLUDGE
The Ambient Water Quality Criteria that were promulgated by the U.S. Environmental Protection Agency in 1980 included water concentration levels which, for many pollutants, were so low as to be unmeasurable by standard analytical methods. Criteria for controlling toxics in munici...
ANALYTICAL METHODS AND QUALITY ASSURANCE CRITERIA FOR LC/ES/MS DETERMINATION OF PFOS IN FISH
PFOS, perfluorooctanesulfonate, has recently received much attention from environmental researchers. Previous analytical methods were based upon complexing with a strong ion-pairing reagent and extraction into MTBE. Detection was done on a concentrate using negative ion LC/ES/MS/...
COMPARISON OF ANALYTICAL METHODS FOR THE MEASUREMENT OF NON-VIABLE BIOLOGICAL PM
The paper describes a preliminary research effort to develop a methodology for the measurement of non-viable biologically based particulate matter (PM), analyzing for mold, dust mite, and ragweed antigens and endotoxins. Using a comparison of analytical methods, the research obj...
Accurate analytical method for the extraction of solar cell model parameters
NASA Astrophysics Data System (ADS)
Phang, J. C. H.; Chan, D. S. H.; Phillips, J. R.
1984-05-01
Single-diode solar cell model parameters are rapidly extracted from experimental data by means of the analytical expressions derived here. The parameter values obtained have less than 5 percent error for most solar cells, as demonstrated by extracting the model parameters for two cells of differing quality and comparing them with parameters extracted by means of the iterative method.
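For context, the single-diode model whose parameters are being extracted is I = Iph - I0·[exp((V + I·Rs)/(n·Vt)) - 1] - (V + I·Rs)/Rsh. The sketch below only evaluates this implicit equation numerically for hypothetical parameter values; it illustrates the model itself, not the paper's analytical extraction formulas:

```python
import math

# Single-diode model:
#   I = Iph - I0*(exp((V + I*Rs)/(n*Vt)) - 1) - (V + I*Rs)/Rsh
# Parameter values are hypothetical, chosen only to exercise the model.

def cell_current(V, Iph, I0, n, Rs, Rsh, Vt=0.02585):
    """Solve the implicit equation for I with a damped fixed-point iteration."""
    I = Iph  # initial guess: photocurrent
    for _ in range(200):
        f = Iph - I0 * math.expm1((V + I * Rs) / (n * Vt)) - (V + I * Rs) / Rsh
        if abs(f - I) < 1e-12:
            return f
        I = 0.5 * (I + f)  # damping keeps the iteration stable at higher V
    return I

params = dict(Iph=3.0, I0=1e-9, n=1.3, Rs=0.02, Rsh=50.0)
isc = cell_current(0.0, **params)  # short-circuit current, just below Iph
print(round(isc, 4))               # 2.9988
```

Analytical extraction methods such as the one reviewed here work in the opposite direction: they recover Iph, I0, n, Rs, and Rsh from measured quantities (short-circuit current, open-circuit voltage, maximum power point, and the slopes of the I-V curve) without iterating.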
40 CFR 141.402 - Ground water source microbial monitoring and analytical methods.
Code of Federal Regulations, 2010 CFR
2010-07-01
Title 40 (Protection of Environment), Volume 22, revised as of 2010-07-01: Ground water source microbial monitoring and analytical methods. Section 141.402, ENVIRONMENTAL PROTECTION AGENCY (CONTINUED), WATER PROGRAMS (CONTINUED), NATIONAL PRIMARY DRINKING WATER REGULATIONS, Ground Water Rule, § 141.402 Ground water source...
T-2 toxin, a trichothecene mycotoxin: Review of toxicity, metabolism, and analytical methods
Technology Transfer Automated Retrieval System (TEKTRAN)
This review focuses on the toxicity and metabolism of T-2 toxin and the analytical methods used for the determination of T-2 toxin. Among the naturally occurring trichothecenes in food and feed, T-2 toxin is a cytotoxic fungal secondary metabolite produced by various species of Fusarium. Following...
Knowledge, Skills, and Abilities for Entry-Level Business Analytics Positions: A Multi-Method Study
ERIC Educational Resources Information Center
Cegielski, Casey G.; Jones-Farmer, L. Allison
2016-01-01
It is impossible to deny the significant impact from the emergence of big data and business analytics on the fields of Information Technology, Quantitative Methods, and the Decision Sciences. Both industry and academia seek to hire talent in these areas with the hope of developing organizational competencies. This article describes a multi-method…
A joint EPA/state/industry working group has developed several multi-analyte methods to analyze soils for low ppb (parts per billion) levels of herbicides (such as sulfonylureas, imidazolinones, and sulfonamides) that are acetolactate synthase (ALS) inhibitors and may cause phyto...
40 CFR 260.21 - Petitions for equivalent testing or analytical methods.
Code of Federal Regulations, 2014 CFR
2014-07-01
Title 40 (Protection of Environment), Volume 26, revised as of 2014-07-01: Petitions for equivalent testing or analytical methods. Section 260.21, ENVIRONMENTAL PROTECTION AGENCY (CONTINUED), SOLID WASTES (CONTINUED), HAZARDOUS WASTE MANAGEMENT SYSTEM: GENERAL, Rulemaking Petitions, § 260.21 Petitions for equivalent testing...
Meta-Analytic Structural Equation Modeling (MASEM): Comparison of the Multivariate Methods
ERIC Educational Resources Information Center
Zhang, Ying
2011-01-01
Meta-analytic Structural Equation Modeling (MASEM) has drawn interest from many researchers recently. In doing MASEM, researchers usually first synthesize correlation matrices across studies using meta-analysis techniques and then analyze the pooled correlation matrix using structural equation modeling techniques. Several multivariate methods of…
40 CFR 141.402 - Ground water source microbial monitoring and analytical methods.
Code of Federal Regulations, 2014 CFR
2014-07-01
Title 40 (Protection of Environment), Volume 23, revised as of 2014-07-01: Ground water source microbial monitoring and analytical methods. Section 141.402, ENVIRONMENTAL PROTECTION AGENCY (CONTINUED), WATER PROGRAMS (CONTINUED), NATIONAL PRIMARY DRINKING WATER REGULATIONS, Ground Water Rule, § 141.402 Ground water source...
A novel second order fast decoupled load flow method in polar coordinates
Nanda, J.; Kothari, D.P.; Srivastava, S.C. )
1988-01-01
This paper presents a novel and effective second-order fast decoupled load flow model in polar coordinates, employing a totally different approach from those used in existing second-order methods in polar coordinates. This work eliminates the need for storing and repeatedly computing the second-order terms by prudently injecting the elements of the Hessian matrix into the Jacobian. This results in a memory requirement on par with the usual fast decoupled load flow method. Investigations reveal that for well-behaved systems, the new method and the fast decoupled load flow method have practically the same convergence properties, whereas for certain ill-conditioned systems, the new method shows distinctly better convergence properties.
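The basic fast decoupled iteration the paper builds on alternates a P-θ half-step with a Q-V half-step using constant susceptance matrices B' and B''. A minimal sketch on a hypothetical two-bus system (a slack bus feeding one PQ load bus over a lossless line); this is the standard first-order scheme, not the paper's second-order variant:

```python
import math

# Two-bus toy system: bus 1 is the slack (V = 1.0 pu, angle 0), bus 2 is a
# PQ load bus fed through a lossless line of reactance x. All values assumed.
x = 0.1               # line reactance (pu)
b = 1.0 / x           # series susceptance magnitude; here also B' and B''
P2, Q2 = -0.5, -0.2   # scheduled injections at bus 2 (a load)
V1 = 1.0
V2, th2 = 1.0, 0.0    # flat start

for _ in range(100):
    # injected powers at bus 2 for a purely reactive branch
    P_calc = V1 * V2 * b * math.sin(th2)
    Q_calc = b * V2 * V2 - b * V1 * V2 * math.cos(th2)
    dP, dQ = P2 - P_calc, Q2 - Q_calc
    if max(abs(dP), abs(dQ)) < 1e-10:
        break
    th2 += dP / (V2 * b)  # B' half-iteration: angles from P mismatches
    V2 += dQ / (V2 * b)   # B'' half-iteration: magnitudes from Q mismatches

print(round(V2, 4), round(math.degrees(th2), 2))  # 0.9782 -2.93
```

In a full implementation B' and B'' are factorized once and reused every iteration, which is the source of the method's speed; the paper's contribution is folding the Hessian's second-order terms into these constant matrices.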
Effective Permeability of Fractured Rocks by Analytical Methods: A 3D Computational Study
NASA Astrophysics Data System (ADS)
Sævik, P. N.; Berre, I.; Jakobsen, M.; Lien, M.
2013-12-01
Analytical upscaling methods have been proposed in the literature to predict the effective hydraulic permeability of a fractured rock from its micro-scale parameters (fracture aperture, fracture orientation, fracture content, etc.). In this presentation, we put special emphasis on three effective medium methods (the symmetric and asymmetric self-consistent methods, and the differential method), and evaluate their accuracy for a wide range of parameter values. The analytical predictions are computed using our recently developed effective medium formulations, which are specifically adapted for fractured media. Compared to previous formulations, the new expressions have improved numerical stability properties, and require fewer input parameters. To assess their accuracy, the analytical predictions have been compared with 3D finite element simulations. Specifically, we generated realizations of several different fracture geometries, each consisting of 102 fractures within a unit cube. We applied a unit potential difference on two opposing sides, and no-flux conditions on the remaining sides. A commercial finite-element solver was used to calculate the mean flux, from which the effective conductivity was found. This process was repeated for fracture densities up to ɛ = 1.0. Also, a wide range of fracture permeabilities was considered, from completely blocking to infinitely permeable fractures. The results were used to determine the range of applicability of each analytical method; the methods excel in different regions of the parameter space. For blocking fractures, the differential method is very accurate throughout the investigated parameter range. The symmetric self-consistent method also agrees well with the numerical results on sealed fractures, while the asymmetric self-consistent method is more unreliable. For permeable fractures, the performance of the methods depends on the dimensionless quantity λ = (Kfrac a)/(r Kmat ), describing the contrast between fracture and
Glass, Nel; Davis, Kierrynn
2004-01-01
Nursing research informed by postmodern feminist perspectives has prompted many debates in recent times. While this is so, nurse researchers who have been tempted to break new ground have had few examples of appropriate analytical methods for a research design informed by the above perspectives. This article presents a deconstructive/reconstructive secondary analysis of a postmodern feminist ethnography in order to provide an analytical exemplar. In doing so, previous notions of vulnerability as a negative state have been challenged and reconstructed. PMID:15206680
Method enabling fast partial sequencing of cDNA clones.
Nordström, T; Gharizadeh, B; Pourmand, N; Nyren, P; Ronaghi, M
2001-05-15
Pyrosequencing is a nonelectrophoretic single-tube DNA sequencing method that takes advantage of cooperativity between four enzymes to monitor DNA synthesis. To investigate the feasibility of the recently developed technique for tag sequencing, 64 colonies of a selected human cDNA library were sequenced by both pyrosequencing and Sanger DNA sequencing. To determine the length needed to find a unique DNA sequence, 100 human sequence tags were retrieved from the database and different lengths from each sequence were randomly analyzed. A homology search based on 20 and 30 nucleotides produced 97 and 98% unique hits, respectively. A homology search based on 100 nucleotides could identify all searched genes. Pyrosequencing was employed to produce sequence data for 30 nucleotides. A similar search using BLAST revealed 16 different genes. Forty-six percent of the sequences shared homology with one gene at different positions. Two of the 64 clones had unique sequences. The search results from pyrosequencing were in 100% agreement with conventional DNA sequencing methods. The possibility of using a fully automated pyrosequencer machine for future high-throughput tag sequencing is discussed. PMID:11355860
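The tag-length question above (why 20-30 nucleotides usually suffice for a unique hit) can be illustrated with a back-of-the-envelope simulation. The reference sequence here is uniformly random, not real cDNA, so the numbers only sketch the combinatorial argument:

```python
import random

# Synthetic reference: a uniformly random DNA sequence (not real cDNA data).
random.seed(1)
reference = "".join(random.choice("ACGT") for _ in range(100_000))

def fraction_unique(tag_length, trials=200):
    """Fraction of tags drawn from the reference that occur exactly once in it."""
    unique = 0
    for _ in range(trials):
        start = random.randrange(len(reference) - tag_length)
        tag = reference[start:start + tag_length]
        if reference.count(tag) == 1:
            unique += 1
    return unique / trials

# 8-mers collide often (4**8 = 65,536 possibilities vs ~100,000 positions),
# while 20-mers (4**20 ~ 1.1e12 possibilities) are essentially always unique.
print(fraction_unique(8), fraction_unique(20))
```

Under this uniform model a drawn 8-mer has roughly exp(-100000/4**8) ≈ 0.22 probability of having no second occurrence, while for 20-mers the collision probability is negligible; real genomes have repeats, which is why the paper's observed uniqueness at 20-30 nucleotides is slightly below 100%.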
Fast algorithms for glassy materials: methods and explorations
NASA Astrophysics Data System (ADS)
Middleton, A. Alan
2014-03-01
Glassy materials with frozen disorder, including random magnets such as spin glasses and interfaces in disordered materials, exhibit striking non-equilibrium behavior such as the ability to store a history of external parameters (memory). Precisely due to their glassy nature, direct simulation of models of these materials is very slow. In some fortunate cases, however, algorithms exist that exactly compute thermodynamic quantities. Such cases include spin glasses in two dimensions and interfaces and random field magnets in arbitrary dimensions at zero temperature. Using algorithms built using ideas developed by computer scientists and mathematicians, one can even directly sample equilibrium configurations in very large systems, as if one picked the configurations out of a "hat" of all configurations weighted by their Boltzmann factors. This talk will provide some of the background for these methods and discuss the connections between physics and computer science, as used by a number of groups. Recent applications of these methods to investigating phase transitions in glassy materials and to answering qualitative questions about the free energy landscape and memory effects will be discussed. This work was supported in part by NSF grant DMR-1006731. Creighton Thomas and David Huse also contributed to much of the work to be presented.
Homotopy Perturbation Method-Based Analytical Solution for Tide-Induced Groundwater Fluctuations.
Munusamy, Selva Balaji; Dhar, Anirban
2016-05-01
The groundwater variations in unconfined aquifers are governed by the nonlinear Boussinesq equation. Analytical solutions for groundwater fluctuations in coastal aquifers under tidal forcing can be obtained using perturbation methods. However, the perturbation parameters should be properly selected and predefined for traditional perturbation methods. In this study, a new dimensional, higher-order analytical solution for groundwater fluctuations is proposed by using the homotopy perturbation method with a virtual perturbation parameter. The parameter-expansion method is used to remove the secular terms generated during the solution process. The solution does not require any predefined perturbation parameter and is valid for higher values of the amplitude parameter A/D, where A is the amplitude of the tide and D is the aquifer thickness. PMID:26340338
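The expansion idea behind such perturbation solutions can be illustrated on a toy nonlinear ODE, u' = -u + εu² with u(0) = 1, which has a closed-form solution to compare against. This is a plain two-term regular perturbation series, not the paper's homotopy formulation of the Boussinesq equation:

```python
import math

eps = 0.1  # small parameter (hypothetical value)

def exact(t):
    """Closed-form solution of u' = -u + eps*u**2, u(0) = 1 (Bernoulli ODE)."""
    return 1.0 / ((1.0 - eps) * math.exp(t) + eps)

def series(t):
    """Two-term expansion u0 + eps*u1 with u0 = e^-t and u1 = e^-t - e^-2t."""
    u0 = math.exp(-t)
    u1 = math.exp(-t) - math.exp(-2.0 * t)
    return u0 + eps * u1

# The truncation error is O(eps^2), far smaller than the O(eps) error
# of the zeroth-order solution alone.
t = 1.0
print(abs(exact(t) - series(t)) < eps ** 2)                       # True
print(abs(exact(t) - series(t)) < abs(exact(t) - math.exp(-t)))   # True
```

The homotopy perturbation method plays the same game with an artificial embedding parameter instead of a physically small quantity, which is why, as the abstract notes, no predefined perturbation parameter is needed; the parameter-expansion step then removes the secular terms.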
Analytical method for analyzing c-channel stiffener made of laminate composite
NASA Astrophysics Data System (ADS)
Kumton, Tattchapong
Composite materials play an important role in the aviation industry. Conventional materials such as aluminum have been replaced by composite materials in primary structures. The objective of this study is the development of an analytical method to analyze laminated composite structures with a C-channel cross-section. A closed-form solution based on lamination theory was developed to analyze ply stresses in the C-channel cross-section. The developed method captures the coupling effects due to asymmetry at both the laminate and structural configuration levels. The present method also includes expressions for the sectional properties, such as the centroid and the axial and bending stiffnesses of the cross-section. The results obtained from the analytical method showed excellent agreement with finite element results.
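The sectional properties mentioned above (centroid, axial and bending stiffness) can be sketched for an idealized thin-walled C-channel built from three rectangles. The dimensions and modulus below are hypothetical, and the sketch is isotropic; it omits the laminate coupling effects that are the study's main contribution:

```python
# Modulus-weighted sectional properties of a thin-walled C-channel idealized
# as three rectangles (two flanges + web). Dimensions/modulus are hypothetical.

h, b, t, E = 100.0, 40.0, 2.0, 70e3  # web height, flange width, thickness (mm); modulus (MPa)

# (area, centroid height z, own second moment of area about its centroid)
segments = [
    (b * t, t / 2.0,           b * t ** 3 / 12.0),  # bottom flange
    (t * h, t + h / 2.0,       t * h ** 3 / 12.0),  # web
    (b * t, t + h + t / 2.0,   b * t ** 3 / 12.0),  # top flange
]

EA = sum(E * A for A, z, I in segments)                            # axial stiffness
z_bar = sum(E * A * z for A, z, I in segments) / EA                # modulus-weighted centroid
EI = sum(E * (I + A * (z - z_bar) ** 2) for A, z, I in segments)   # parallel-axis theorem
print(round(z_bar, 1), round(EI / 1e9, 2))  # 52.0 40.8
```

For a laminate, each wall segment would carry its own A, B, and D stiffness matrices instead of a single scalar E, which is where the coupling between extension, bending, and twist enters the closed-form solution.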
Tena, Noelia; Wang, Selina C; Aparicio-Ruiz, Ramón; García-González, Diego L; Aparicio, Ramón
2015-05-13
This paper evaluates the performance of the current analytical methods (standard and widely used otherwise) that are used in olive oil for determining fatty acids, triacylglycerols, mono- and diacylglycerols, waxes, sterols, alkyl esters, erythrodiol and uvaol, tocopherols, pigments, volatiles, and phenols. Other indices that are commonly used, such as free acidity and peroxide value, are also discussed in relation to their actual utility in assessing quality and safety and their possible alternatives. The methods have been grouped on the basis of their applications: (i) purity and authenticity; (ii) sensory quality control; and (iii) unifying methods for different applications. The speed of the analysis, advantages and disadvantages, and multiple quality parameters are assessed. Sample pretreatment, physicochemical and data analysis, and evaluation of the results have been taken into consideration. Solutions based on new chromatographic methods or spectroscopic analysis and their analytical characteristics are also presented. PMID:25891853
Dynamic buckling analysis of delaminated composite plates using semi-analytical finite strip method
NASA Astrophysics Data System (ADS)
Ovesy, H. R.; Totounferoush, A.; Ghannadpour, S. A. M.
2015-05-01
The delamination phenomenon can become of paramount importance when the design of composite plates is concerned. In the current study, the effect of through-the-width delamination on the dynamic buckling behavior of a composite plate is studied by implementing a semi-analytical finite strip method. In this method, the energy and work integrations are computed analytically due to the use of trigonometric functions. Moreover, the method can reach converged results with a comparatively small number of degrees of freedom. These features make the method quite efficient. To account for delamination effects, the displacement field is enriched by adding appropriate terms. Also, penetration of the delamination surfaces is prevented by incorporating an appropriate contact scheme into the time response analysis. Some selected results are validated against those available in the literature.
A simple analytical method for heterogeneity corrections in low dose rate prostate brachytherapy
NASA Astrophysics Data System (ADS)
Hueso-González, Fernando; Vijande, Javier; Ballester, Facundo; Perez-Calatayud, Jose; Siebert, Frank-André
2015-07-01
In low energy brachytherapy, the presence of tissue heterogeneities contributes significantly to the discrepancies observed between the treatment plan and the delivered dose. In this work, we present a simplified analytical dose calculation algorithm for heterogeneous tissue. We compare it with Monte Carlo computations and assess its suitability for integration in clinical treatment planning systems. The algorithm, named RayStretch, is based on the classic equivalent path length method and TG-43 reference data. Analytical and Monte Carlo dose calculations using Penelope2008 are compared for a benchmark case: a prostate patient with calcifications. The results show a remarkable agreement between simulation and algorithm, the latter having, in addition, a high calculation speed. The proposed analytical model is compatible with clinical real-time treatment planning systems based on TG-43 consensus datasets for improving dose calculation and treatment quality in heterogeneous tissue. Moreover, the algorithm is applicable to any type of heterogeneity.
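The equivalent path length idea underlying RayStretch can be sketched in one dimension: stretch the radial distance by the water-equivalent thickness of each material crossed, then apply attenuation at the stretched distance while keeping the geometric inverse-square factor. The attenuation coefficient, the scaling factor for calcification, and the simple exponential dose model below are all assumed for illustration, not taken from TG-43 data:

```python
import math

# One ray from the source to the calculation point, split into material
# segments. All numeric values are illustrative assumptions; a real
# implementation would use TG-43 consensus data for the radial falloff.

MU_WATER = 0.2  # effective attenuation coefficient in water (1/cm), assumed

def equivalent_radius(segments):
    """segments: (length_cm, water_equivalence_scaling) pairs along the ray."""
    return sum(length * scale for length, scale in segments)

def relative_dose(segments):
    r_geo = sum(length for length, _ in segments)  # geometric distance
    r_eff = equivalent_radius(segments)            # stretched distance
    # inverse square follows geometry; attenuation follows the stretched path
    return math.exp(-MU_WATER * r_eff) / r_geo ** 2

homogeneous = [(3.0, 1.0)]            # 3 cm of water
with_calc = [(2.0, 1.0), (1.0, 1.6)]  # 2 cm water + 1 cm calcification (assumed 1.6x)
print(round(relative_dose(with_calc) / relative_dose(homogeneous), 3))  # 0.887
```

The appeal of the approach for real-time planning is visible even in this sketch: the heterogeneity correction reduces to one extra multiply-accumulate per segment along each ray.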
Analytical method for calculation of navigational data for the position of a satellite
NASA Technical Reports Server (NTRS)
Lala, P.
1975-01-01
A method is described for calculating the position of a satellite at the instants when measurements are made on board. The initial conditions used were the mean orbital elements of the satellite and their time derivatives in one orbit. The results of the calculation are compared with those obtained by numerical integration, and it is found that results are identical at the beginning of an orbit, but change as the orbit progresses. The advantages and disadvantages of the analytical method are presented.
Status report on analytical methods to support the disinfectant/disinfection by-products regulation
Not Available
1992-08-01
The U.S. EPA is developing national regulations to control disinfectants and disinfection by-products in public drinking water supplies. Twelve disinfectants and disinfection by-products are identified for possible regulation under this rule. The document summarizes the analytical methods that EPA intends to propose as compliance monitoring methods. A discussion of surrogate measurements that are being considered for inclusion in the regulation is also provided.
Flight and Analytical Methods for Determining the Coupled Vibration Response of Tandem Helicopters
NASA Technical Reports Server (NTRS)
Yeates, John E , Jr; Brooks, George W; Houbolt, John C
1957-01-01
Chapter one presents a discussion of flight-test and analysis methods for some selected helicopter vibration studies. The use of a mechanical shaker in flight to determine the structural response is reported. A method for the analytical determination of the natural coupled frequencies and mode shapes of vibrations in the vertical plane of tandem helicopters is presented in Chapter two. The coupled mode shapes and frequencies are then used to calculate the response of the helicopter to applied oscillating forces.
NASA Astrophysics Data System (ADS)
Worley, Christopher G.; Havrilla, George J.
2000-07-01
Accurately determining the concentration of certain elements in plutonium is of vital importance in manufacturing nuclear weapons. X-ray fluorescence (XRF) provides a means of obtaining this type of elemental information accurately, quickly, with high precision, and often with little sample preparation. In the present work, a novel method was developed to analyze the gallium concentration in plutonium samples using wavelength-dispersive XRF. The analytical method is described in detail.
Strong, Gemma K; Torgerson, Carole J; Torgerson, David; Hulme, Charles
2011-01-01
Background Fast ForWord is a suite of computer-based language intervention programs designed to improve children's reading and oral language skills. The programs are based on the hypothesis that oral language difficulties often arise from a rapid auditory temporal processing deficit that compromises the development of phonological representations. Methods A systematic review was designed, undertaken and reported using items from the PRISMA statement. A literature search was conducted using the terms ‘Fast ForWord’ ‘Fast For Word’ ‘Fastforword’ with no restriction on dates of publication. Following screening of (a) titles and abstracts and (b) full papers, using pre-established inclusion and exclusion criteria, six papers were identified as meeting the criteria for inclusion (randomised controlled trial (RCT) or matched group comparison studies with baseline equivalence published in refereed journals). Data extraction and analyses were carried out on reading and language outcome measures comparing the Fast ForWord intervention groups to both active and untreated control groups. Results Meta-analyses indicated that there was no significant effect of Fast ForWord on any outcome measure in comparison to active or untreated control groups. Conclusions There is no evidence from the analysis carried out that Fast ForWord is effective as a treatment for children's oral language or reading difficulties. PMID:20950285
NASA Astrophysics Data System (ADS)
Steward, David R.; Allen, Andrew J.
2013-10-01
Groundwater studies face computational limitations when providing local detail (such as well drawdown) within regional models. We adapt the Analytic Element Method (AEM) to extend separation of variables solutions for a rectangle to domains composed of multiple interconnected rectangular elements. Each rectangle contains a series solution that satisfies the governing equations, and coefficients are adjusted to match boundary conditions at the edge of the domain and continuity conditions across adjacent rectangles. A complete mathematical implementation is presented, including matrices to solve boundary and continuity conditions. This approach gathers the mathematical functions associated with head and velocity within a small set of functions for each rectangle, enabling fast computation of these variables. Benchmark studies verify that conservation of mass and energy conditions are accurately satisfied using a method of images solution, and also develop a solution for heterogeneous hydraulic conductivity with log-normal distribution. A case study illustrates that the methods are capable of modeling local detail within a large-scale regional model of the High Plains Aquifer in the central USA and reports the numerical costs associated with increasing resolution, where use is made of GIS datasets for thousands of rectangular elements, each with unique geologic and hydrologic properties. The methods are applicable to interconnected rectangular domains in other fields of study such as heat conduction, electrical conduction, and unsaturated groundwater flow.
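The single-rectangle building block of this approach is a classical separation-of-variables series for Laplace's equation. The sketch below shows that building block only; the coupling of many rectangles through edge continuity conditions, which is the paper's contribution, is omitted, and the boundary data are an illustrative assumption.

```python
import math

def laplace_rectangle(x, y, a, b, sine_coeffs):
    """Series solution of Laplace's equation on [0,a]x[0,b] with phi = 0 on
    the sides x=0, x=a, y=0 and phi(x, b) = sum_n c_n sin(n*pi*x/a) on y=b.
    Each term satisfies the governing equation exactly; the coefficients are
    chosen to match boundary data, as in the Analytic Element Method."""
    phi = 0.0
    for n, c in enumerate(sine_coeffs, start=1):
        k = n * math.pi / a
        phi += c * math.sin(k * x) * math.sinh(k * y) / math.sinh(k * b)
    return phi
```

Because every term is harmonic, evaluating head or velocity anywhere in the rectangle reduces to summing a short series, which is what makes these variables cheap to compute.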
Fast synthesis of ZnO quantum dots via an ultrasonic method.
Yang, Weimin; Zhang, Bing; Ding, Nan; Ding, Wenhao; Wang, Lixi; Yu, Mingxun; Zhang, Qitu
2016-05-01
Green-emission ZnO quantum dots were synthesized by an ultrasonic sol-gel method at various ultrasonic temperatures and times. Photoluminescence properties of these ZnO quantum dots were measured, and time-resolved photoluminescence decay spectra were also taken to follow the change in the amount of defects during the reaction. Both ultrasonic temperature and time affect the type and amount of defects in ZnO quantum dots: total defects decreased with increasing ultrasonic temperature and time, and the dangling-bond defects disappeared faster than the optical defects. The optical defects first changed from oxygen interstitial defects to oxygen vacancy and zinc interstitial defects, then transformed back to oxygen interstitial defects. The sizes of the ZnO quantum dots could be controlled by ultrasonic temperature and time as well: with increasing ultrasonic temperature and time, the sizes first decreased and then increased. Moreover, a more concentrated raw-material solution produced ZnO quantum dots with larger sizes and more optical defects. PMID:26611814
A fast-convergence POCS seismic denoising and reconstruction method
NASA Astrophysics Data System (ADS)
Ge, Zi-Jian; Li, Jing-Ye; Pan, Shu-Lin; Chen, Xiao-Hong
2015-06-01
The efficiency, precision, and denoising capabilities of reconstruction algorithms are critical to seismic data processing. Based on the Fourier-domain projection onto convex sets (POCS) algorithm, we propose an inversely proportional threshold model that defines the optimum threshold, in which the threshold descends faster than the exponential threshold in the large-coefficient section and slower in the small-coefficient section. Thus, the computational efficiency of POCS seismic reconstruction improves greatly without affecting the reconstruction precision of weak reflections. To improve the flexibility of the inversely proportional threshold, we obtain the optimal threshold by using an adjustable dependent variable in the denominator of the threshold model. For random noise attenuation while completing the missing traces in seismic data reconstruction, we present a weighted reinsertion strategy based on a data-driven model obtained from the percentage of the data-driven threshold in each iteration. We apply the proposed POCS reconstruction method to 3D synthetic and field data. The results suggest that the inversely proportional threshold model improves the computational efficiency and precision compared with traditional threshold models; furthermore, the proposed reinsertion weight strategy increases the SNR of the reconstructed data.
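A minimal NumPy sketch of Fourier-domain POCS with an inversely proportional threshold schedule is given below. The schedule form `t_max / (1 + q*(k-1))` and the plain (unweighted) reinsertion are illustrative assumptions; the paper's optimized schedule and weighted reinsertion strategy are not reproduced.

```python
import numpy as np

def pocs_reconstruct(observed, mask, n_iter=100, q=1.0):
    """Fourier-domain POCS for missing samples (1D toy version).
    observed: signal with zeros at missing samples; mask: 1 where observed.
    The hard threshold decays inversely with the iteration number; q plays
    the role of the adjustable variable in the denominator that controls
    the descent rate."""
    x = observed.copy()
    t_max = np.abs(np.fft.fft(x)).max()
    for k in range(1, n_iter + 1):
        spec = np.fft.fft(x)
        tau = t_max / (1.0 + q * (k - 1))      # inversely proportional threshold
        spec[np.abs(spec) < tau] = 0.0         # keep only strong coefficients
        estimate = np.real(np.fft.ifft(spec))
        x = mask * observed + (1 - mask) * estimate   # reinsert observed data
    return x
```

On a signal that is sparse in the Fourier domain (e.g. a few plane-wave events), the missing samples are recovered as the threshold descends through the significant coefficients.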
Conventional approaches to water quality characterization can provide data on individual chemical components of each water sample. This analyte-by-analyte approach currently serves many useful research and compliance monitoring needs. However these approaches, which require a ...
Not Available
1989-05-15
This supplement contains 34 methods of analysis for 69 toxic chemical compounds and serves as an update to the NIOSH manual of analytical methods. Methods were selected on the basis of their use, input from clients and NIOSH chemists on the need for change, and the health implications of the compounds. Methods were included for acetaldehyde, acetic-acid, acrylonitrile, aldehydes, aliphatic amines, aminoethanol compounds, asbestos bulk and fibers, 1-butanethiol, chlordane, hexavalent chromium compounds, cyanuric-acid, ethyleneamines, endrin, fibers, formaldehyde, furfuryl-alcohol, glutaraldehyde, hydrogen-cyanide, isocyanates, ketones, mercury, methyl-methacrylate, nitrosamines, pentachlorophenol, quartz in coal mine dust, ribavirin, respirable crystalline silica, sulfur-dioxide, toluene diamines, and valeraldehyde.
NASA Astrophysics Data System (ADS)
Moawad, S. M.
2015-02-01
In this paper, we present a solution method for constructing exact analytic solutions to the magnetohydrodynamics (MHD) equations. The method is constructed from trigonometric and hyperbolic functions and is applied to MHD equilibria with mass flow. Applications to the solar system, concerning the properties of coronal mass ejections that affect the heliosphere, are presented, and some examples of the constructed solutions which describe magnetic structures of solar eruptions are investigated. Moreover, the constructed method can be applied to a variety of classes of elliptic partial differential equations which arise in plasma physics.
Analytical methods for the determination of personal care products in human samples: an overview.
Jiménez-Díaz, I; Zafra-Gómez, A; Ballesteros, O; Navalón, A
2014-11-01
Personal care products (PCPs) are organic chemicals widely used in everyday human life. Nowadays, preservatives, UV-filters, antimicrobials and musk fragrances are widely used PCPs. Different studies have shown that some of these compounds can cause adverse health effects, such as genotoxicity, which could even lead to mutagenic or carcinogenic effects, or estrogenicity because of their endocrine disruption activity. Due to the absence of official monitoring protocols, there is an increasing demand for analytical methods that allow the determination of these compounds in human samples in order to obtain more information regarding their behavior and fate in the human body. The complexity of the biological matrices and the low concentration levels of these compounds necessitate advanced sample treatment procedures that afford both sample clean-up, to remove potentially interfering matrix components, and the concentration of analytes. In the present work, a review of the more recent analytical methods published in the scientific literature for the determination of PCPs in human fluids and tissue samples is presented. The work focuses on sample preparation and the analytical techniques employed. PMID:25127618
Towards in-vivo K-edge imaging using a new semi-analytical calibration method
NASA Astrophysics Data System (ADS)
Schirra, Carsten; Thran, Axel; Daerr, Heiner; Roessl, Ewald; Proksa, Roland
2014-03-01
Flat field calibration methods are commonly used in computed tomography (CT) to correct for system imperfections. Unfortunately, they cannot be applied in energy-resolving CT when using bow-tie filters owing to spectral distortions imprinted by the filter. This work presents a novel semi-analytical calibration method for photon-counting spectral CT systems, which is applicable with a bow-tie filter in place and efficiently compensates for pile-up effects at fourfold increased photon flux compared to a previously published method, without degradation of image quality. The achieved reduction of the scan time enabled the first K-edge imaging in-vivo. The method employs a calibration measurement with a set of flat sheets of only a single absorber material and utilizes an analytical model to predict the expected photon counts, taking into account factors such as x-ray spectrum and detector response. From the ratios of the measured x-ray intensities and the corresponding simulated photon counts, a look-up table is generated. By use of this look-up table, measured photon counts can be corrected, yielding data in line with the analytical model. The corrected data show low pixel-to-pixel variations and pile-up effects are mitigated. Consequently, operations like material decomposition based on the same analytical model yield accurate results. The method was validated on an experimental spectral CT system equipped with a bow-tie filter in a phantom experiment and an in-vivo animal study. The level of artifacts in the resulting images is considerably lower than in images generated with a previously published method. First in-vivo K-edge images of a rabbit selectively depict vessel occlusion by an ytterbium-based thermoresponsive polymer.
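The look-up-table correction can be illustrated with a one-dimensional toy: for each calibration sheet there is a measured count and a model-predicted count, and the LUT maps measured counts onto model-consistent ones. The numbers and the use of linear interpolation below are assumptions for illustration, not the paper's scheme.

```python
import numpy as np

def build_lut(measured, predicted):
    """Pair measured counts (pile-up distorted) with analytically predicted
    counts for a set of calibration absorber thicknesses."""
    order = np.argsort(measured)
    return measured[order], predicted[order]

def correct_counts(counts, lut):
    """Map measured counts to model-consistent counts by interpolation,
    so that later material decomposition can use the analytical model."""
    meas_grid, pred_grid = lut
    return np.interp(counts, meas_grid, pred_grid)

# Toy calibration: three sheets, counts depressed by pile-up at high flux.
lut = build_lut(np.array([10.0, 50.0, 90.0]), np.array([12.0, 60.0, 120.0]))
```

Because the corrected counts agree with the analytical forward model by construction, downstream steps that invert that model (such as material decomposition) remain consistent.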
Bardarov, Krum; Naydenov, Mladen; Djingova, Rumyana
2015-09-01
An optimized analytical method based on C8 core-shell reversed-phase chromatographic separation and high resolution mass spectral (HRMS) detection is developed for the fast analysis of unbound phytochelatins (PCs) in plants. Its application to the analysis of Clinopodium vulgare L. is demonstrated, where proper PC-liberating and preservation conditions were employed using dithiothreitol in the extraction step. Baseline separation of glutathione (GSH) and phytochelatins PC2 to PC5 within 3 min was achieved at conventional HPLC backpressure, with detection limits from 3 ppt (for GSH) to 2.5 ppb (for PC5). It is shown that the use of HRMS with tandem mass spectral (MS/MS) capabilities permits additional wide-range screening for iso-phytochelatins and PC-like compounds, based on exact mass and fragment spectra in a post-acquisition manner. PMID:26003687
Burtis, C.A.; Johnson, W.F.; Walker, W.A.
1985-08-05
A rotor and disc assembly for use in a centrifugal fast analyzer. The assembly is designed to process multiple samples of whole blood, followed by aliquoting of the resultant serum into precisely measured samples for subsequent chemical analysis. The assembly requires minimal operator involvement with no mechanical pipetting. The system comprises: (1) a whole blood sample disc; (2) a serum sample disc; (3) a sample preparation rotor; and (4) an analytical rotor. The blood sample disc and serum sample disc are designed with a plurality of precision-bore capillary tubes arranged in a spoked array. Samples of blood are loaded into the blood sample disc by capillary action and centrifugally discharged into cavities of the sample preparation rotor, where separation of serum and solids is accomplished. The serum is loaded into the capillaries of the serum sample disc by capillary action and subsequently centrifugally expelled into cuvettes of the analytical rotor for analysis by conventional methods. 5 figs.
GPU-accelerated indirect boundary element method for voxel model analyses with fast multipole method
NASA Astrophysics Data System (ADS)
Hamada, Shoji
2011-05-01
An indirect boundary element method (BEM) that uses the fast multipole method (FMM) was accelerated using graphics processing units (GPUs) to reduce the time required to calculate a three-dimensional electrostatic field. The BEM is designed to handle cubic voxel models and is specialized to consider square voxel walls as boundary surface elements. The FMM handles the interactions among the surface charge elements and directly outputs surface integrals of the fields over each individual element. The CPU code was originally developed for field analysis in human voxel models derived from anatomical images. FMM processes are programmed using the NVIDIA Compute Unified Device Architecture (CUDA) with double-precision floating-point arithmetic on the basis of a shared pseudocode template. The electric field induced by DC-current application between two electrodes is calculated for two models with 499,629 (model 1) and 1,458,813 (model 2) surface elements. The calculation times were measured with a four-GPU configuration (two NVIDIA GTX295 cards) with four CPU cores (an Intel Core i7-975 processor). The times required by a linear system solver are 31 s and 186 s for models 1 and 2, respectively. The speed-up ratios of the FMM range from 5.9 to 8.2 for model 1 and from 5.0 to 5.6 for model 2. The calculation speed for element-interaction in this BEM analysis was comparable to that of particle-interaction using FMM on a GPU.
Characterization of rice starch and protein obtained by a fast alkaline extraction method.
Souza, Daiana de; Sbardelotto, Arthur Francisco; Ziegler, Denize Righetto; Marczak, Ligia Damasceno Ferreira; Tessaro, Isabel Cristina
2016-01-15
This study evaluated the characteristics of rice starch and protein obtained by a fast alkaline extraction method applied to rice flour (RF) derived from broken rice. The extraction was conducted using 0.18% NaOH at 30°C for 30 min, followed by centrifugation to separate the starch-rich and protein-rich fractions. This fast extraction method yielded an isoelectric precipitation protein concentrate (IPPC) with 79% protein and a starchy product with low protein content. The amino acid content of the IPPC was practically unchanged compared to the protein in RF. The proteins of the IPPC underwent denaturation during extraction, and some of the starch underwent cold gelatinization due to the alkaline treatment. With some modifications, the fast method is attractive from a technological point of view, as it reduces process costs and yields useful ingredients for the food and chemical industries. PMID:26258699
Takeda, T.; Shimazu, Y.; Hibi, K.; Fujimura, K.
2012-07-01
Under the R and D project to improve the modeling accuracy for the design of fast breeder reactors, the authors are developing a neutronics calculation method for designing a large commercial-type sodium-cooled fast reactor. The calculation method is established by taking into account the special features of the reactor, such as the use of annular fuel pellets, inner duct tubes in large fuel assemblies, and a large core. The Verification and Validation, and Uncertainty Quantification (V and V and UQ) of the calculation method is being performed by using measured data from the prototype FBR Monju. The results of this project will be used in the design and analysis of the commercial-type demonstration FBR, known as the Japanese Sodium Fast Reactor (JSFR). (authors)
Hexavalent chromium (CrVI) field analytical method for bioenvironmental engineers. Final report
Carlton, G.N.; Chaloux, L.; Reichert, J.M.; England, E.C.; Greebon, K.
1999-04-01
The Industrial Hygiene Branch, in a collaborative effort with the National Institute for Occupational Safety and Health (NIOSH), developed a field analytical method to measure hexavalent chromium (CrVI, chromate) levels in air. The method uses ultrasonic extraction of sampling filters, solid-phase extraction of chromates from the extracted solution, and determination of chromate concentrations by spectrophotometry. It is an alternative to NIOSH methods 7300 and 7600 and overcomes some of the disadvantages of these methods. The chromate field method is relatively easy to use, is specific for CrVI, has a lower detection limit than NIOSH 7600, and allows analysis before there is a chance for significant sample degradation. The method is intended for use by Bioenvironmental Engineers. Although all Bioenvironmental Engineering shops will benefit from use of the method, those shops that take many chromate samples or have significant chromate exposure problems will derive the most benefit.
Comparison of five analytical methods for the determination of peroxide value in oxidized ghee.
Mehta, Bhavbhuti M; Darji, V B; Aparnathi, K D
2015-10-15
In the present study, a comparison of five peroxide analytical methods was performed using oxidized ghee. The methods included three iodometric titration methods, viz. those of the Bureau of Indian Standards (BIS), the Association of Analytical Communities (AOAC) and the American Oil Chemists' Society (AOCS), and two colorimetric methods based on the oxidation of iron, the ferrous xylenol orange (FOX) and ferric thiocyanate (International Dairy Federation, IDF) methods. Six ghee samples were stored at 80 °C to accelerate deterioration and sampled periodically (every 48 h) for peroxides. Results from the five analytical methods were compared with each other and with a flavor score (9-point hedonic scale). The correlation coefficients obtained using the different methods were in the order: FOX (-0.836) > IDF (-0.821) > AOCS (-0.798) > AOAC (-0.795) > BIS (-0.754). Thus, among the five methods used for determination of the peroxide value of ghee during storage, the highest coefficient of correlation was obtained for the FOX method. The high correlation between the FOX and flavor data indicated that FOX was the most suitable method tested to determine peroxide value in oxidized ghee. PMID:25952892
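The statistic behind such a comparison is a plain correlation between each method's peroxide values and the flavor scores at the same storage times; a negative coefficient means flavor deteriorates as peroxides accumulate. The numbers below are invented for illustration and are not the study's data.

```python
import numpy as np

# Invented example data over six storage time points (not the study's data):
flavor_score = np.array([8.5, 7.9, 7.0, 6.1, 5.2, 4.0])     # 9-point hedonic scale
peroxide_value = np.array([1.2, 2.5, 4.1, 6.0, 8.2, 11.0])  # one method's readings

# Pearson correlation between the method's readings and sensory flavor:
r = np.corrcoef(peroxide_value, flavor_score)[0, 1]
```

Repeating this for each of the five methods and ranking the coefficients by magnitude reproduces the kind of ordering reported above.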
An analytical sensitivity method for use in integrated aeroservoelastic aircraft design
NASA Technical Reports Server (NTRS)
Gilbert, Michael G.
1989-01-01
Interdisciplinary analysis capabilities have been developed for aeroservoelastic aircraft and large flexible spacecraft, but the requisite integrated design methods are only beginning to be developed. One integrated design method which has received attention is based on hierarchal problem decompositions, optimization, and design sensitivity analyses. This paper highlights a design sensitivity analysis method for Linear Quadratic Gaussian (LQG) optimal control laws, enabling the use of LQG techniques in the hierarchal design methodology. The LQG sensitivity analysis method calculates the change in the optimal control law and resulting controlled system responses due to changes in fixed design integration parameters using analytical sensitivity equations. Numerical results of an LQG design sensitivity analysis for a realistic aeroservoelastic aircraft example are presented. In this example, the sensitivity of the optimal control law and aircraft response for various parameters such as wing bending natural frequency is determined. The sensitivity results computed from the analytical expressions are used to estimate changes in response resulting from changes in the parameters. Comparisons of the estimates with exact calculated responses show they are reasonably accurate for + or - 15 percent changes in the parameters. Evaluation of the analytical expressions is computationally faster than equivalent finite difference calculations.
NASA Astrophysics Data System (ADS)
Jia, Jinhong; Wang, Hong
2015-07-01
Numerical methods for space-fractional diffusion equations often generate dense or even full stiffness matrices. Traditionally, these methods were solved via Gaussian-type direct solvers, which require O(N^3) computational work per time step and O(N^2) memory, where N is the number of spatial grid points in the discretization. In this paper we develop a preconditioned fast Krylov subspace iterative method for the efficient and faithful solution of finite difference discretizations of (both steady-state and time-dependent) space-fractional diffusion equations with fractional derivative boundary conditions in one space dimension. The method requires O(N) memory and O(N log N) operations per iteration. Due to the application of effective preconditioners, significantly reduced numbers of iterations were achieved, which further reduces the computational cost of the fast method. Numerical results are presented to show the utility of the method.
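The O(N log N) cost per iteration typically rests on the fact that such finite difference stiffness matrices are Toeplitz-like, so matrix-vector products can be done by embedding the Toeplitz matrix in a circulant and using the FFT. The sketch below shows this standard fast matvec (a generic technique; the paper's specific preconditioner is not reproduced).

```python
import numpy as np

def toeplitz_matvec(col, row, x):
    """Multiply the Toeplitz matrix with first column `col` and first row
    `row` (col[0] == row[0]) by x in O(N log N), by embedding it in a
    2N-point circulant whose action is diagonalized by the FFT."""
    n = len(x)
    # First column of the circulant embedding: [col, 0, reversed row tail].
    c = np.concatenate([col, [0.0], row[:0:-1]])
    xp = np.concatenate([x, np.zeros(n)])
    y = np.fft.ifft(np.fft.fft(c) * np.fft.fft(xp)).real
    return y[:n]
```

Inside a Krylov solver (CG, GMRES), this matvec replaces the O(N^2) dense product, and only `col` and `row` need to be stored, which is where the O(N) memory figure comes from.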
Three-dimensional nonplanar lithography simulation using a periodic fast multipole method
NASA Astrophysics Data System (ADS)
Yeung, Michael S.; Barouch, Eytan
1997-07-01
This paper discusses an extension of the fast multipole method to electromagnetic scattering from doubly periodic, multilayer wafer topography. The novelty of our approach lies in the use of a pseudo-periodic translation operator which can be computed efficiently using the fast Fourier transform. Results obtained using the rigorous boundary conditions for dielectric surfaces are compared with those obtained using the approximate impedance boundary condition (IBC). The latter is shown to give good results for the type of topography usually encountered in lithography simulation. Results of reflective-notching simulation using the IBC method are presented.
Selection of analytical methods for mixed waste analysis at the Hanford Site
Morant, P.M.
1994-09-01
This document describes the process that the US Department of Energy (DOE), Richland Operations Office (RL) and contractor laboratories use to select appropriate, or develop new or modified, analytical methods. These methods are needed to provide reliable mixed waste characterization data that meet project-specific quality assurance (QA) requirements while also meeting health and safety standards for handling radioactive materials. This process will provide the technical basis for DOE's analysis of mixed waste and support requests for regulatory approval of these new methods when they are used to satisfy the regulatory requirements of the Hanford Federal Facility Agreement and Consent Order (Tri-Party Agreement) (Ecology et al. 1992).
SRC-I demonstration plant analytical laboratory methods manual. Final technical report
Klusaritz, M.L.; Tewari, K.C.; Tiedge, W.F.; Skinner, R.W.; Znaimer, S.
1983-03-01
This manual is a compilation of analytical procedures required for operation of a Solvent-Refined Coal (SRC-I) demonstration or commercial plant. Each method reproduced in full includes a detailed procedure, a list of equipment and reagents, safety precautions, and, where possible, a precision statement. Procedures for the laboratory's environmental and industrial hygiene modules are not included. Required American Society for Testing and Materials (ASTM) methods are cited, and ICRC's suggested modifications to these methods for handling coal-derived products are provided.
Kartal, Mehmet E.
2013-01-01
The contour method is one of the most prevalent destructive techniques for residual stress measurement. Up to now, the method has involved the use of the finite-element (FE) method to determine the residual stresses from the experimental measurements. This paper presents analytical solutions, obtained for a semi-infinite strip and a finite rectangle, which can be used to calculate the residual stresses directly from the measured data; thereby, eliminating the need for an FE approach. The technique is then used to determine the residual stresses in a variable-polarity plasma-arc welded plate and the results show good agreement with independent neutron diffraction measurements. PMID:24204187
NASA Astrophysics Data System (ADS)
Li, Zhipeng; Xu, Xun; Xu, Shangzhi; Qian, Yeqing; Xu, Juan
2016-07-01
The car-following model is extended to take into account the characteristics of mixed traffic flow containing fast and slow vehicles. We conduct a linear stability analysis of the extended model, finding that the traffic flow can be stabilized by increasing the percentage of slow vehicles. It can also be concluded that the stabilization of the traffic flow depends closely not only on the average of the two maximum velocities characterizing the two vehicle types, but also on the standard deviation of the maximum velocities among all vehicles when the percentage of slow vehicles equals that of fast ones. With an increase of the average maximum velocity, the traffic flow becomes more and more unstable, while an increase of the standard deviation has a destabilizing effect on the traffic system. The direct numerical results are in good agreement with those of the theoretical analysis. Moreover, the relation between the flux and the traffic density is investigated to simulate the effects of the percentage of slow vehicles on traffic flux across the whole density region.
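The flavor of such a linear stability analysis can be seen in the classical optimal velocity (OV) car-following model, where uniform flow is stable when the driver sensitivity exceeds twice the slope of the optimal velocity function. The OV function, parameters, and criterion below are the standard single-class result, shown as an illustrative assumption rather than the paper's extended mixed-traffic model.

```python
import math

def ov_slope(h, vmax, hc=4.0):
    """Slope V'(h) of the optimal velocity function
    V(h) = vmax/2 * (tanh(h - hc) + tanh(hc)) at headway h."""
    return vmax / 2.0 * (1.0 - math.tanh(h - hc) ** 2)

def uniform_flow_stable(h, vmax, a=1.0):
    """Classical OV-model criterion: uniform flow is linearly stable
    iff V'(h) < a/2, where a is the driver sensitivity."""
    return ov_slope(h, vmax) < a / 2.0

# A lower maximum velocity flattens V(h), so slow vehicles stabilize flow:
fast_stable = uniform_flow_stable(h=4.0, vmax=2.0)   # high vmax
slow_stable = uniform_flow_stable(h=4.0, vmax=0.8)   # low vmax
```

Because V'(h) scales with the maximum velocity, reducing vmax (or raising the share of slow vehicles) pushes the flow toward the stable side of the criterion, consistent with the abstract's conclusion.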
General analytic methods for solving coupled transport equations: From cosmology to beyond
NASA Astrophysics Data System (ADS)
White, G. A.
2016-02-01
We propose a general method to analytically solve transport equations during a phase transition without making approximations based on the assumption that any transport coefficient is large. Using a cosmic phase transition in the minimal supersymmetric standard model as a pedagogical example, we derive the solutions to a set of 3 transport equations derived under the assumption of supergauge equilibrium and the diffusion approximation. The result is then rederived efficiently using a technique we present involving a parametrized ansatz which turns the process of deriving a solution into an almost elementary problem. We then show how both the derivation and the parametrized ansatz technique can be generalized to solve an arbitrary number of transport equations. Finally we derive a perturbative series that relaxes the usual approximation that inactivates vacuum-expectation-value dependent relaxation and CP-violating source terms at the bubble wall and through the symmetric phase. Our analytical methods are able to reproduce a numerical calculation in the literature.
Analytical potential curves of some hydride molecules using algebraic and energy-consistent method
NASA Astrophysics Data System (ADS)
Fan, Qunchao; Sun, Weiguo; Feng, Hao; Zhang, Yi; Wang, Qi
2014-01-01
Based on the algebraic method (AM) and the energy consistent method (ECM), an AM-ECM protocol for analytical potential energy curves of stable diatomic electronic states is proposed as functions of the internuclear distance. Applications of the AM-ECM to the 6 hydride electronic states of HF-X1Σ+, DF-X1Σ+, D35Cl-X1Σ+, 6LiH-X1Σ+, 7LiH-X1Σ+, and 7LiD-X1Σ+ show that the AM-ECM potentials are in excellent agreement with the experimental RKR data and the full AM-RKR data, and that the AM-ECM can obtain reliable analytical potential energies in the molecular asymptotic and dissociation region for these molecular electronic states.
Truzzi, Cristina; Annibaldi, Anna; Illuminati, Silvia; Finale, Carolina; Scarponi, Giuseppe
2014-05-01
The study compares official spectrophotometric methods for the determination of proline content in honey - those of the International Honey Commission (IHC) and the Association of Official Analytical Chemists (AOAC) - with the original Ough method. Results show that the extra time-consuming treatment stages added by the IHC method with respect to the Ough method are pointless. We demonstrate that the AOAC method proves to be the best in terms of accuracy and time saving. The optimized waiting time for the absorbance recording is set at 35 min from the removal of the reaction tubes from the boiling bath used in the sample treatment. The optimized method was validated in the matrix: linearity up to 1800 mg L(-1), limit of detection 20 mg L(-1), limit of quantification 61 mg L(-1). The method was applied to 43 unifloral honey samples from the Marche region, Italy. PMID:24360478
A comparative evaluation of analytical methods to allocate individual marks from a team mark
NASA Astrophysics Data System (ADS)
Nepal, Kali
2012-08-01
This study presents a comparative evaluation of analytical methods to allocate individual marks from a team mark. Only the methods that use or can be converted into some form of mathematical equations are analysed. Some of these methods focus primarily on the assessment of the quality of teamwork product (product assessment) while the others put greater emphasis on the assessment of teamwork performance (process assessment). The remaining methods try to strike a balance between product assessment and process assessment. To discuss the characteristics of these methods, graphical plots generated by the mathematical equations that collectively cover all possible team learning scenarios are discussed. Finally, a typical teamwork example is used to simplify the discussions. Although each of the methods discussed has its own merits for a particular application scenario, recent methods are relatively better in terms of a number of evaluation criteria.
A new validated analytical method for the quality control of red ginseng products
Kim, Il-Woung; Cha, Kyu-Min; Wee, Jae Joon; Ye, Michael B.; Kim, Si-Kwan
2013-01-01
The main active components of Panax ginseng are ginsenosides. Ginsenosides Rb1 and Rg1 are accepted as marker substances for quality control worldwide. The analytical methods currently used to detect these two compounds unfairly penalize steamed and dried (red) P. ginseng preparations, because red ginseng has a lower content of those ginsenosides than white ginseng. To manufacture red ginseng products from fresh ginseng, the ginseng roots are exposed to high temperatures for many hours. This heating process converts the naturally occurring ginsenosides Rb1 and Rg1 into artifact ginsenosides such as ginsenoside Rg3, Rg5, Rh1, and Rh2, among others. This study highlights the absurdity of the current analytical practice by investigating the time-dependent changes in the crude saponin and the major natural and artifact ginsenoside contents during simmering. The results lead us to recommend (20S)- and (20R)-ginsenoside Rg3 as new reference materials to complement the current P. ginseng reference materials, ginsenosides Rb1 and Rg1. An attempt has also been made to establish validated qualitative and quantitative analytical procedures for these four compounds that meet International Conference on Harmonisation (ICH) guidelines for specificity, linearity, range, accuracy, precision, detection limit, quantitation limit, robustness and system suitability. Based on these results, we suggest a validated analytical procedure which conforms to ICH guidelines and values the ginsenoside contents of white and red ginseng preparations equally. PMID:24235862
Weitz, Karl K.; Moore, Ronald J.
2010-07-13
A method and device are disclosed that provide for detection of fluid leaks in analytical instruments and instrument systems. The leak detection device includes a collection tube, a fluid absorbing material, and a circuit that electrically couples to an indicator device. When assembled, the leak detection device detects and monitors for fluid leaks, providing a preselected response in conjunction with the indicator device when contacted by a fluid.
Development of a fast DNA extraction method for sea food and marine species identification.
Tagliavia, Marcello; Nicosia, Aldo; Salamone, Monica; Biondo, Girolama; Bennici, Carmelo Daniele; Mazzola, Salvatore; Cuttitta, Angela
2016-07-15
The authentication of food components is one of the key issues in food safety. Taxonomy, population and conservation genetics, and food-web structure analysis likewise rely on genetic analyses, including DNA barcoding technology. In this scenario, we developed a fast DNA extraction method, without any purification step, for fresh and processed seafood, suitable for any PCR analysis. The protocol allows fast DNA amplification from any sample, including fresh, stored and processed seafood and any waste of industrial fish processing, independently of the sample storage method. This procedure is therefore particularly suitable for fast sample processing and for investigations into the authentication of seafood by means of DNA analysis. PMID:26948627
Analytical methods of the U.S. Geological Survey's New York District Water-Analysis Laboratory
Lawrence, Gregory B.; Lincoln, Tricia A.; Horan-Ross, Debra A.; Olson, Mark L.; Waldron, Laura A.
1995-01-01
The New York District of the U.S. Geological Survey (USGS) in Troy, N.Y., operates a water-analysis laboratory for USGS watershed-research projects in the Northeast that require analyses of precipitation and of dilute surface water and soil water for major ions; it also provides analyses of certain chemical constituents in soils and soil-gas samples. This report presents the methods for chemical analyses of water samples, soil-water samples, and soil-gas samples collected in watershed-research projects. The introduction describes the general materials and techniques for each method and explains the USGS quality-assurance program and data-management procedures; it also explains the use of cross-references to the three most commonly used methods manuals for analysis of dilute waters. The body of the report describes the analytical procedures for (1) solution analysis, (2) soil analysis, and (3) soil-gas analysis. The methods are presented in alphabetical order by constituent. The method for each constituent is preceded by (1) reference codes for pertinent sections of the three manuals mentioned above, (2) a list of the method's applications, and (3) a summary of the procedure. The methods section for each constituent contains the following categories: instrumentation and equipment, sample preservation and storage, reagents and standards, analytical procedures, quality control, maintenance, interferences, safety considerations, and references. Sufficient information is presented for each method to allow the resulting data to be used appropriately for environmental samples.
Comparison of segmentation using fast marching and geodesic active contours methods for bone
NASA Astrophysics Data System (ADS)
Bilqis, A.; Widita, R.
2016-03-01
Image processing is important in diagnosing diseases or damage of human organs, and one of its important stages is segmentation: the separation of an image into regions with similar characteristics, which simplifies the image and makes analysis easier. The case considered in this study is image segmentation of bones. Bone image segmentation is a way to obtain bone dimensions, which are needed to make prostheses used to treat broken or cracked bones. The segmentation methods chosen in this study are fast marching and geodesic active contours, implemented with the ITK (Insight Segmentation and Registration Toolkit) software. The success of the segmentation was then determined by calculating its accuracy, sensitivity, and specificity. Based on the results, the geodesic active contours method has slightly higher accuracy and sensitivity values than the fast marching method. As for specificity, fast marching produced three image results with higher specificity values than those of geodesic active contours. The results also indicate that both methods succeed in segmenting bone images. Overall, the geodesic active contours method is somewhat better than fast marching for bone image segmentation.
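The accuracy, sensitivity, and specificity used above to score a segmentation follow directly from the confusion matrix of a predicted binary mask against a ground-truth mask. A minimal sketch (the toy masks are hypothetical, not the study's data):

```python
import numpy as np

def segmentation_scores(pred, truth):
    """Accuracy, sensitivity and specificity of a binary segmentation mask."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)    # bone pixels correctly labeled
    tn = np.sum(~pred & ~truth)  # background correctly labeled
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    accuracy = (tp + tn) / pred.size
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return accuracy, sensitivity, specificity

truth = np.array([[1, 1, 0], [1, 0, 0]])
pred = np.array([[1, 0, 0], [1, 0, 1]])
acc, sens, spec = segmentation_scores(pred, truth)
```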
Kolpin, D.W.; Goolsby, D.A.; Thurman, E.M.
1995-11-01
In 1992, the U.S. Geological Survey (USGS) determined the distribution of pesticides in near-surface aquifers of the Midwestern USA to be much more widespread than originally determined during a 1991 USGS study. The frequency of pesticide detection increased from 28.4% during the 1991 study to 59.0% during the 1992 study. This increase was primarily the result of a more sensitive analytical method, with reporting limits as much as 20 times lower than previously available, and a threefold increase in the number of pesticide metabolites analyzed. No pesticide concentrations exceeded the U.S. Environmental Protection Agency's (USEPA's) maximum contaminant levels or health advisory levels for drinking water. However, five of the six most frequently detected compounds during 1992 were pesticide metabolites for which no drinking water standards have been determined. The frequent presence of pesticide metabolites in this study documents the importance of obtaining information on these compounds to understand the fate and transport of pesticides in the hydrologic system. The 56 parent compounds analyzed appear to follow pathways through the hydrologic system similar to that of atrazine: when atrazine was detected by routine or sensitive analytical methods, there was an increased likelihood of detecting additional parent compounds. As expected, the frequency of pesticide detection was highly dependent on the analytical reporting limit. The number of atrazine detections more than doubled as the reporting limit decreased from 0.10 to 0.01 µg/L. The 1992 data provided no indication that the frequency of pesticide detection would level off as improved analytical methods provide concentrations below 0.003 µg/L. A relation was also determined between groundwater age and the frequency of pesticide detection.
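The dependence of detection frequency on the analytical reporting limit is easy to reproduce on synthetic data. A sketch with hypothetical concentration values (not the USGS data), where lowering the reporting limit tenfold doubles the detection frequency:

```python
import numpy as np

def detection_frequency(concentrations, reporting_limit):
    """Percent of samples at or above the reporting limit."""
    c = np.asarray(concentrations, dtype=float)
    return 100.0 * np.mean(c >= reporting_limit)

# Hypothetical atrazine concentrations (µg/L); zeros are non-detects
conc = [0.0, 0.004, 0.02, 0.05, 0.12, 0.30, 0.008, 0.0]
f_high = detection_frequency(conc, 0.10)  # coarse reporting limit
f_low = detection_frequency(conc, 0.01)   # 10x more sensitive method
```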
Radiological sampling and analytical methods for National Primary Drinking Water Regulations.
Blanchard, R L; Hahne, R M; Kahn, B; McCurdy, D; Mellor, R A; Moore, W S; Sedlet, J; Whittaker, E
1985-05-01
Radiological sampling and analysis performed under the National Interim Primary Drinking Water Regulations were evaluated for the U.S. Environmental Protection Agency (EPA) Office of Drinking Water to consider whether any changes should be recommended. The authors reviewed the analytical screening scheme; sample collection, storage and analysis procedures; selection of analytical methods; reliability of results; and possible future needs. The main problem in the program has been dependence on a screening scheme of gross alpha-particle activity measurement and 226Ra analysis for predicting elevated 228Ra levels to determine compliance with the maximum contaminant level (MCL) for Ra. In some aquifers, 228Ra levels have been found to be unrelated to 226Ra levels. Several alternatives are discussed to eliminate this problem. A secondary problem is that the measurement for assuring compliance with the MCL for gross alpha-particle activity minus Ra, Rn and U uses chemical U analysis and assumes equilibrium of 238U and 234U. Because some ground waters are known to be at disequilibrium, radiometric U analysis is needed for those gross alpha-particle activities and chemical U values that could result in an erroneous conclusion relative to the MCL. In addition, studies were recommended for determining analytical uncertainties and assuring reliable sampling and sample maintenance; improvements in the system for accepting methods were suggested; and methods were identified for several radionuclides not currently in the analytical program that may be needed to assure absence of elevated radiation doses and could be useful for identifying trace contaminants. PMID:3988523
NASA Technical Reports Server (NTRS)
Hauschildt, P. H.
1992-01-01
A fast method for the solution of the radiative transfer equation in rapidly moving spherical media, based on an approximate Lambda-operator iteration, is described. The method uses the short characteristic method and a tridiagonal approximate Lambda-operator to achieve fast convergence. The convergence properties and the CPU time requirements of the method are discussed for the test problem of a two-level atom with background continuum absorption and Thomson scattering. Details of the actual implementation for fast vector and parallel computers are given. The method is accurate and fast enough to be incorporated in radiation-hydrodynamic calculations.
Algebraic filter approach for fast approximation of nonlinear tomographic reconstruction methods
NASA Astrophysics Data System (ADS)
Plantagie, Linda; Batenburg, Kees Joost
2015-01-01
We present a computational approach for fast approximation of nonlinear tomographic reconstruction methods by filtered backprojection (FBP) methods. Algebraic reconstruction algorithms are the methods of choice in a wide range of tomographic applications, yet they require significant computation time, restricting their usefulness. We build upon recent work on the approximation of linear algebraic reconstruction methods and extend the approach to the approximation of nonlinear reconstruction methods which are common in practice. We demonstrate that if a blueprint image is available that is sufficiently similar to the scanned object, our approach can compute reconstructions that approximate iterative nonlinear methods, yet have the same speed as FBP.
Lee, C.; Yang, W. S.
2013-07-01
An improved resonance self-shielding method has been developed to accurately estimate the effective multigroup cross sections for heterogeneous fast reactor assembly and core calculations. In the method, the heterogeneity effect is considered by the use of isotopic escape cross sections while the resonance interference effect is accounted for through the narrow resonance approximation or slowing-down calculations for specific compositions. The isotopic escape cross sections are calculated by solving fixed-source transport equations with the method of characteristics for the whole problem domain. This method requires no pre-calculated resonance integral tables or parameters that are typically necessary in the subgroup method. Preliminary results for multi pin-cell fast reactor problems show that the escape cross sections estimated from the explicit-geometry fixed source calculations produce more accurate eigenvalue and self-shielded effective cross sections than those from conventional one-dimensional geometry models. (authors)
A Fast and Robust Ellipse-Detection Method Based on Sorted Merging
Ren, Guanghui; Zhao, Yaqin; Jiang, Lihui
2014-01-01
A fast and robust ellipse-detection method based on sorted merging is proposed in this paper. This method first represents the edge bitmap approximately with a set of line segments and then gradually merges the line segments into elliptical arcs and ellipses. To achieve high accuracy, a sorted merging strategy is proposed: the merging degrees of line segments/elliptical arcs are estimated, and line segments/elliptical arcs are merged in descending order of the merging degrees, which significantly improves the merging accuracy. During the merging process, multiple properties of ellipses are utilized to filter line segment/elliptical arc pairs, making the method very efficient. In addition, an ellipse-fitting method is proposed that restricts the maximum ratio of the semimajor axis and the semiminor axis, further improving the merging accuracy. Experimental results indicate that the proposed method is robust to outliers, noise, and partial occlusion and is fast enough for real-time applications. PMID:24782661
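The paper's merging-degree criteria for elliptical arcs are not given in the abstract, but the sorted-merging control flow it describes — score all candidate pairs, always merge the best-scoring pair first, and stop when no pair exceeds a threshold — can be sketched on toy 1-D segments with a hypothetical scoring function standing in for the ellipse-specific one:

```python
def merge_degree(a, b):
    """Hypothetical merging degree: higher when two 1-D segments nearly
    touch (a stand-in for the paper's elliptical-arc criteria)."""
    gap = max(b[0] - a[1], a[0] - b[1], 0)
    return 1.0 / (1.0 + gap)

def sorted_merge(segments, threshold=0.5):
    """Greedily merge the best-scoring pair first (sorted merging)."""
    segs = [tuple(s) for s in segments]
    while True:
        best, pair = threshold, None
        for i in range(len(segs)):
            for j in range(i + 1, len(segs)):
                d = merge_degree(segs[i], segs[j])
                if d > best:
                    best, pair = d, (i, j)
        if pair is None:          # no pair left above the threshold
            return sorted(segs)
        i, j = pair
        merged = (min(segs[i][0], segs[j][0]), max(segs[i][1], segs[j][1]))
        segs = [s for k, s in enumerate(segs) if k not in pair] + [merged]

result = sorted_merge([(0, 1), (1.2, 2), (5, 6)])
```

Merging in descending order of the score, rather than in scan order, is what the paper credits for the improved merging accuracy.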
Robinson, D.G.
1998-06-01
This report provides an introduction to the various probabilistic methods developed roughly between 1956 and 1985 for performing reliability or probabilistic uncertainty analysis on complex systems. This exposition does not include the traditional reliability methods (e.g., parallel-series systems) that might be found in the many reliability texts and reference materials. Rather, the report centers on the relatively new, and certainly less well known across the engineering community, analytical techniques. Discussion of the analytical methods has been broken into two reports; this particular report is limited to those methods developed between 1956 and 1985. While a bit dated, methods described in the later portions of this report still dominate the literature and provide a necessary technical foundation for more current research. A second report (Analytical Techniques 2) addresses methods developed since 1985. The flow of this report roughly follows the historical development of the various methods, so each new technique builds on the discussion of the strengths and weaknesses of previous techniques. To facilitate understanding of the various methods discussed, a simple two-dimensional problem is used throughout the report. The problem is used for discussion purposes only; conclusions regarding the applicability and efficiency of particular methods are based on secondary analyses and a number of years of experience by the author. This document should be considered a living document in the sense that, as new methods or variations of existing methods are developed, the document and references will be updated to reflect the current state of the literature as much as possible. For those scientists and engineers already familiar with these methods, the discussion will at times seem rather obvious. However, the goal of this effort is to provide a common basis for future discussions and, as such, will hopefully be useful to those more intimate with
A nearly analytic exponential time difference method for solving 2D seismic wave equations
NASA Astrophysics Data System (ADS)
Zhang, Xiao; Yang, Dinghui; Song, Guojie
2014-02-01
In this paper, we propose a nearly analytic exponential time difference (NETD) method for solving the 2D acoustic and elastic wave equations. In this method, we use the nearly analytic discrete operator to approximate the high-order spatial differential operators and transform the seismic wave equations into semi-discrete ordinary differential equations (ODEs). Then, the converted ODE system is solved by the exponential time difference (ETD) method. We investigate the properties of NETD in detail, including the stability condition for 1-D and 2-D cases, the theoretical and relative errors, the numerical dispersion relation for the 2-D acoustic case, and the computational efficiency. In order to further validate the method, we apply it to simulating acoustic/elastic wave propagation in multilayer models which have strong contrasts and complex heterogeneous media, e.g., the SEG model and the Marmousi model. From our theoretical analyses and numerical results, the NETD can suppress numerical dispersion effectively by using the displacement and gradient to approximate the high-order spatial derivatives. In addition, because NETD is based on the structure of the Lie group method which preserves the quantitative properties of differential equations, it can achieve more accurate results than the classical methods.
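The exponential time difference idea — integrate the linear part of the semi-discrete ODE system exactly via the exponential and approximate only the remaining term — can be illustrated with a first-order (ETD1) step on a scalar test equation. This is a generic ETD sketch, not the paper's NETD scheme:

```python
import math

def etd1_step(u, h, c, N):
    """One ETD1 step for u' = c*u + N(u):
    u_{n+1} = e^{ch} u_n + (e^{ch} - 1)/c * N(u_n)."""
    e = math.exp(c * h)
    return e * u + (e - 1.0) / c * N(u)

# Demo: u' = -u + 1 with u(0) = 0; ETD1 is exact when N is constant
c, h, u = -1.0, 0.1, 0.0
for _ in range(10):
    u = etd1_step(u, h, c, lambda v: 1.0)
exact = 1.0 - math.exp(-1.0)  # analytical solution at t = 1
```

In the NETD method the scalar `c` becomes the spatial-discretization matrix produced by the nearly analytic discrete operator, and the exponential becomes a matrix exponential.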
Phonon dispersion on Ag (100) surface: A modified analytic embedded atom method study
NASA Astrophysics Data System (ADS)
Xiao-Jun, Zhang; Chang-Le, Chen
2016-01-01
Within the harmonic approximation, the analytic expression of the dynamical matrix is derived based on the modified analytic embedded atom method (MAEAM) and the dynamics theory of surface lattices. The surface phonon dispersions along the three major symmetry directions Γ¯X¯, Γ¯M¯ and X¯M¯ are calculated for the clean Ag (100) surface using the derived formulas. We then discuss the polarization and localization of surface modes at the points X¯ and M¯ by plotting the squared polarization vectors as a function of the layer index. The phonon frequencies of the surface modes calculated by MAEAM are compared with the available experimental and other theoretical data. The present results are generally in agreement with the referenced experimental or theoretical results, with a maximum deviation of 10.4%. This agreement shows that the modified analytic embedded atom method is a reasonable many-body potential model for quickly describing surface lattice vibrations, and it lays a foundation for studying surface lattice vibrations in other metals. Project supported by the National Natural Science Foundation of China (Grant Nos. 61471301 and 61078057), the Scientific Research Program Funded by Shaanxi Provincial Education Department, China (Grant No. 14JK1301), and the Specialized Research Fund for the Doctoral Program of Higher Education, China (Grant No. 20126102110045).
NASA Astrophysics Data System (ADS)
Yttri, K. E.; Schnelle-Kreiss, J.; Maenhaut, W.; Alves, C.; Bossi, R.; Bjerke, A.; Claeys, M.; Dye, C.; Evtyugina, M.; García-Gacio, D.; Gülcin, A.; Hillamo, R.; Hoffer, A.; Hyder, M.; Iinuma, Y.; Jaffrezo, J.-L.; Kasper-Giebl, A.; Kiss, G.; López-Mahia, P. L.; Pio, C.; Piot, C.; Ramirez-Santa-Cruz, C.; Sciare, J.; Teinilä, K.; Vermeylen, R.; Vicente, A.; Zimmermann, R.
2014-07-01
The monosaccharide anhydrides (MAs) levoglucosan, galactosan and mannosan are products of incomplete combustion and pyrolysis of cellulose and hemicelluloses, and are found to be major constituents of biomass burning aerosol particles. Hence, ambient aerosol particle concentrations of levoglucosan are commonly used to study the influence of residential wood burning, agricultural waste burning and wild fire emissions on ambient air quality. A European-wide intercomparison on the analysis of the three monosaccharide anhydrides was conducted based on ambient aerosol quartz fiber filter samples collected at a Norwegian urban background site during winter. Thus, the samples' content of MAs is representative for biomass burning particles originating from residential wood burning. The purpose of the intercomparison was to examine the comparability of the great diversity of analytical methods used for analysis of levoglucosan, mannosan and galactosan in ambient aerosol filter samples. Thirteen laboratories participated, of which three applied High-Performance Anion-Exchange Chromatography (HPAEC), four used High-Performance Liquid Chromatography (HPLC) or Ultra-Performance Liquid Chromatography (UPLC), and six resorted to Gas Chromatography (GC). The analytical methods used were of such diversity that they should be considered as thirteen different analytical methods. All of the thirteen laboratories reported levels of levoglucosan, whereas nine reported data for mannosan and/or galactosan. Eight of the thirteen laboratories reported levels for all three isomers. The accuracy for levoglucosan, presented as the mean percentage error (PE) for each participating laboratory, varied from -63 to 23%; however, for 62% of the laboratories the mean PE was within ±10%, and for 85% the mean PE was within ±20%. For mannosan, the corresponding range was -60 to 69%, but as for levoglucosan, the range was substantially smaller for a subselection of the laboratories; i.e., for 33% of
NASA Astrophysics Data System (ADS)
Yttri, K. E.; Schnelle-Kreis, J.; Maenhaut, W.; Abbaszade, G.; Alves, C.; Bjerke, A.; Bonnier, N.; Bossi, R.; Claeys, M.; Dye, C.; Evtyugina, M.; García-Gacio, D.; Hillamo, R.; Hoffer, A.; Hyder, M.; Iinuma, Y.; Jaffrezo, J.-L.; Kasper-Giebl, A.; Kiss, G.; López-Mahia, P. L.; Pio, C.; Piot, C.; Ramirez-Santa-Cruz, C.; Sciare, J.; Teinilä, K.; Vermeylen, R.; Vicente, A.; Zimmermann, R.
2015-01-01
The monosaccharide anhydrides (MAs) levoglucosan, galactosan and mannosan are products of incomplete combustion and pyrolysis of cellulose and hemicelluloses, and are found to be major constituents of biomass burning (BB) aerosol particles. Hence, ambient aerosol particle concentrations of levoglucosan are commonly used to study the influence of residential wood burning, agricultural waste burning and wildfire emissions on ambient air quality. A European-wide intercomparison on the analysis of the three monosaccharide anhydrides was conducted based on ambient aerosol quartz fiber filter samples collected at a Norwegian urban background site during winter. Thus, the samples' content of MAs is representative for BB particles originating from residential wood burning. The purpose of the intercomparison was to examine the comparability of the great diversity of analytical methods used for analysis of levoglucosan, mannosan and galactosan in ambient aerosol filter samples. Thirteen laboratories participated, of which three applied high-performance anion-exchange chromatography (HPAEC), four used high-performance liquid chromatography (HPLC) or ultra-performance liquid chromatography (UPLC) and six resorted to gas chromatography (GC). The analytical methods used were of such diversity that they should be considered as thirteen different analytical methods. All of the thirteen laboratories reported levels of levoglucosan, whereas nine reported data for mannosan and/or galactosan. Eight of the thirteen laboratories reported levels for all three isomers. The accuracy for levoglucosan, presented as the mean percentage error (PE) for each participating laboratory, varied from -63 to 20%; however, for 62% of the laboratories the mean PE was within ±10%, and for 85% the mean PE was within ±20%. For mannosan, the corresponding range was -60 to 69%, but as for levoglucosan, the range was substantially smaller for a subselection of the laboratories; i.e. for 33% of the
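The mean percentage error (PE) statistic reported in the intercomparison can be computed as the mean of a laboratory's per-sample percentage deviations from the reference values. A sketch with hypothetical numbers, not the intercomparison data:

```python
def mean_percentage_error(reported, reference):
    """Mean percentage error of one laboratory's reported levels
    relative to the comparison reference values."""
    pes = [100.0 * (r - ref) / ref for r, ref in zip(reported, reference)]
    return sum(pes) / len(pes)

# Hypothetical levoglucosan levels (ng/m3) for one lab vs. the reference
reference = [100.0, 250.0, 400.0]
reported = [110.0, 250.0, 380.0]
mpe = mean_percentage_error(reported, reference)
```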
An analytical method of free vibration for laminated plates including various boundary conditions
NASA Astrophysics Data System (ADS)
Xia, Chuanyou; Wen, Lizhou
1991-10-01
It is shown that, by introducing the displacement function Φ(x, y, t), the system of differential equations developed by Whitney and Pagano (1970) for the first-order shear deformation theory of symmetric cross-ply laminated plates can be transformed into a single differential equation in the displacement function. On the basis of this differential equation, an exact solution is given to the problem of free vibration of symmetric cross-ply laminated plates under various boundary conditions. The natural frequencies obtained by the present analytical method are lower than those obtained using approximate methods.
A review of analytical methods for the treatment of flows with detached shocks
NASA Technical Reports Server (NTRS)
Busemann, Adolf
1949-01-01
The transonic flow theory has been considerably improved in recent years. The problems at subsonic speeds of a moving body chiefly concern the drag, and the problems at supersonic speeds, the detached and attached shock waves. Inasmuch as the literature contains some information that is valuable and some that is misleading, the purpose of this paper is to discuss those analytical methods, and their applications, which are regarded as reliable in the transonic range. After these methods are reviewed, a short discussion without details and proofs follows to round out the picture. (author)
Field sampling and selecting on-site analytical methods for explosives in soil
Crockett, A.B.; Craig, H.D.; Jenkins, T.F.; Sisk, W.E.
1996-12-01
A large number of defense-related sites are contaminated with elevated levels of secondary explosives. Levels of contamination range from barely detectable to levels above 10% that need special handling because of the detonation potential. Characterization of explosives-contaminated sites is particularly difficult because of the very heterogeneous distribution of contamination in the environment and within samples. To improve site characterization, several options exist including collecting more samples, providing on-site analytical data to help direct the investigation, compositing samples, improving homogenization of the samples, and extracting larger samples. This publication is intended to provide guidance to Remedial Project Managers regarding field sampling and on-site analytical methods for detecting and quantifying secondary explosive compounds in soils, and is not intended to include discussions of the safety issues associated with sites contaminated with explosive residues.
NASA Astrophysics Data System (ADS)
Schlager, Kenneth J.; Ruchti, Timothy L.
1995-04-01
TAMM (Transcutaneous Analyte Measuring Method) is a near-infrared spectroscopic technique for the noninvasive measurement of human blood chemistry. A near-infrared indium gallium arsenide (InGaAs) photodiode array spectrometer has been developed and tested on over 1,000 patients as part of an SBIR program sponsored by the Naval Medical Research and Development Command. Nine blood analytes have been measured and evaluated during pre-clinical testing: sodium, chloride, calcium, potassium, bicarbonate, BUN, glucose, hematocrit and hemoglobin. A reflective rather than a transmissive approach to measurement has been taken to avoid variations resulting from skin color and sensor positioning. The current status of the instrumentation, neural network pattern recognition algorithms and test results will be discussed.
NASA Astrophysics Data System (ADS)
Bultinck, E.; Mahieu, S.; Depla, D.; Bogaerts, A.
2010-07-01
'Bohm diffusion' causes electrons to diffuse perpendicularly to the magnetic field lines, but its origin is not yet completely understood: both low- and high-frequency electric field fluctuations have been proposed as its cause. The importance of including this process in a Monte Carlo (MC) model is demonstrated by comparing calculated ionization rates with particle-in-cell/Monte Carlo collisions (PIC/MCC) simulations. Good agreement is found with a Bohm diffusion parameter of 0.05, which corresponds well to experiments. Since the PIC/MCC method accounts for fast electric field fluctuations, we conclude that Bohm diffusion is caused by fast electric field phenomena.
NASA Astrophysics Data System (ADS)
Jeyasankari, S.; Jeslin Drusila Nesamalar, J.; Charles Raja, S.; Venkatesh, P.
2014-04-01
Transmission cost allocation is one of the major challenges in transmission open access faced by the electric power sector. The purpose of this work is to provide an analytical method for allocating transmission transaction costs in a deregulated market. It presents a usage-based transaction cost allocation method built on a line-flow impact factor (LIF), which relates the power flow in each line to the transacted power for a given transaction. The method quantifies the impact of a transaction on line flows without running an iterative power flow solution and is therefore well suited for real-time applications. The proposed method is compared with the Newton-Raphson (NR) method of cost allocation on a sample six-bus system and a practical Indian utility 69-bus system, considering multilateral transactions.
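The LIF formulation itself is not given in the abstract. A closely related non-iterative sensitivity is the DC power-transfer distribution factor (PTDF), which likewise maps a transaction onto line flows by solving a single linear system instead of an iterative power flow. A sketch on a hypothetical three-bus network with unit reactances (not the paper's test systems):

```python
import numpy as np

# Hypothetical 3-bus network, all line reactances x = 1 p.u., bus 2 = slack
lines = [(0, 1), (1, 2), (0, 2)]  # (from, to), 0-indexed buses
x = np.ones(len(lines))
n_bus = 3

# DC susceptance (B') matrix, then reduce by removing the slack bus
B = np.zeros((n_bus, n_bus))
for (f, t), xl in zip(lines, x):
    b = 1.0 / xl
    B[f, f] += b
    B[t, t] += b
    B[f, t] -= b
    B[t, f] -= b
Bred = B[:2, :2]  # drop slack row/column

# Line-flow sensitivities for a 1 p.u. transaction: inject at bus 0,
# withdraw at the slack; flows follow from the bus angle solution
inj = np.array([1.0, 0.0])
theta = np.zeros(n_bus)
theta[:2] = np.linalg.solve(Bred, inj)
ptdf = np.array([(theta[f] - theta[t]) / xl for (f, t), xl in zip(lines, x)])
```

For this symmetric triangle the transaction splits 2/3 on the direct line and 1/3 on each leg of the indirect path, the classic DC load-flow result.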
Risk-based analytical method transfer: application to large multi-product transfers.
Raska, Christina S; Bennett, Tony S; Goodberlet, Scott A
2010-07-15
As pharmaceutical companies adapt their business models, a new approach to analytical method transfer is needed to efficiently handle transfers of multiple products, associated with situations such as site consolidations/closures. Using the principles of risk management, a risk-based method transfer approach is described, which defines appropriate transfer activities based on a risk assessment of the methods and experience of the receiving unit. A key step in the process is detailed knowledge transfer from the transferring unit to the receiving unit. The amount of transfer testing required can be streamlined or eliminated on the basis of a number of factors, including method capability, receiving unit familiarity, and method past performance. PMID:20557030
Snee, Lawrence W.
2002-01-01
40Ar/39Ar geochronology is an experimentally robust and versatile method for constraining time and temperature in geologic processes. The argon method is the most broadly applied in mineral-deposit studies. Standard analytical methods and formulations exist, making the fundamentals of the method well defined. A variety of graphical representations exist for evaluating argon data. A broad range of minerals found in mineral deposits, alteration zones, and host rocks commonly is analyzed to provide age, temporal duration, and thermal conditions for mineralization events and processes. All are discussed in this report. The usefulness of and evolution of the applicability of the method are demonstrated in studies of the Panasqueira, Portugal, tin-tungsten deposit; the Cornubian batholith and associated mineral deposits, southwest England; the Red Mountain intrusive system and associated Urad-Henderson molybdenum deposits; and the Eastern Goldfields Province, Western Australia.
Anantharam, Poojya; Shao, Dahai; Imerman, Paula M; Burrough, Eric; Schrunk, Dwayne; Sedkhuu, Tsevelmaa; Tang, Shusheng; Rumbeiha, Wilson
2016-01-01
Orellanine (OR) toxin is produced by mushrooms of the genus Cortinarius, which grow in North America and in Europe. OR poisoning is characterized by severe oliguric acute renal failure, with a mortality rate of 10%-30%. Diagnosis of OR poisoning currently hinges on a history of ingestion of Cortinarius mushrooms and on histopathology of renal biopsies. A key step in the diagnostic approach is analysis of tissues for OR. Currently, tissue-based analytical methods for OR are nonspecific and lack sensitivity. The objectives of this study were (1) to develop definitive HPLC and LC-MS/MS tissue-based analytical methods for OR and (2) to investigate the toxicological effects of OR in mice. The HPLC limit of quantitation was 10 µg/g. For fortification levels of 15 µg/g to 50 µg/g OR in kidney, the relative standard deviation was between 1.3% and 9.8%, and accuracy was within 1.5% to 7.1%. A matrix-matched calibration curve was reproduced in this range with a correlation coefficient (r) of 0.97-0.99. The limit of detection was 20 ng/g for LC-MS/MS. In OR-injected mice, kidney OR concentrations were 97 ± 51 µg/g on Day 0 and 17 ± 1 µg/g at termination on Day 3. Splenic and liver injuries were novel findings in this mouse model. The new tissue-based analytical tests will improve diagnosis of OR poisoning, while the mouse model has yielded new data advancing knowledge of OR-induced pathology. PMID:27213453
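The matrix-matched calibration figures reported above come from an ordinary linear least-squares fit of detector response against fortified concentration, with the correlation coefficient r as the fit-quality metric. A generic sketch (function names and data are illustrative, not the study's):

```python
def linear_calibration(conc, response):
    """Least-squares line response = slope * conc + intercept, plus Pearson r."""
    n = len(conc)
    mx, my = sum(conc) / n, sum(response) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(conc, response))
    sxx = sum((x - mx) ** 2 for x in conc)
    syy = sum((y - my) ** 2 for y in response)
    slope = sxy / sxx
    intercept = my - slope * mx
    r = sxy / (sxx * syy) ** 0.5      # correlation coefficient of the fit
    return slope, intercept, r

def quantify(signal, slope, intercept):
    """Back-calculate the concentration of an unknown from its signal."""
    return (signal - intercept) / slope
```

Preparing the calibration standards in blank tissue extract (matrix-matched) rather than pure solvent is what corrects for the ion suppression and recovery effects of the kidney matrix.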
Laboratory Techniques in Geology: Embedding Analytical Methods into the Undergraduate Curriculum
NASA Astrophysics Data System (ADS)
Baedke, S. J.; Johnson, E. A.; Kearns, L. E.; Mazza, S. E.; Gazel, E.
2014-12-01
Paid summer REU experiences successfully engage undergraduate students in research and encourage them to continue on to graduate school and scientific careers. However, these programs accommodate only a limited number of students because of funding constraints, faculty time commitments, and limited access to needed instrumentation. At JMU, the Department of Geology and Environmental Science has instead embedded undergraduate research into the curriculum. Each student completing a BS in Geology or a BA in Earth Science fulfills 3 credits of research, including a 1-credit course on scientific communication and 2 credits of research or internship, followed by a presentation of that research. The department has successfully acquired many analytical instruments and now has an XRD, SEM/EDS, FTIR, handheld Raman, AA, ion chromatograph, and an IRMS. To give as many students as possible an overview of the scientific uses and operation of these instruments, we revived a laboratory methods course that includes theory and practical use of the instrumentation at JMU, plus XRF sample preparation and analysis training at Virginia Tech during a 1-day field trip. In addition to practical training, projects covered analytical concepts such as distinguishing analytical from natural uncertainty, determining the error on multiple measurements, signal-to-noise ratio, and evaluating data quality. State funding through the 4-VA program helped pay for analytical supplies and supported students completing research projects over the summer or during the next academic year using instrumentation from the course. This course exemplifies an alternative path to broadening participation in undergraduate research and to creating stronger partnerships between primarily undergraduate institutions (PUIs) and research universities.
Simplex and duplex event-specific analytical methods for functional biotech maize.
Lee, Seong-Hun; Kim, Su-Jeong; Yi, Bu-Young
2009-08-26
Analytical methods are very important in the control of genetically modified organism (GMO) labeling systems and in living modified organism (LMO) management for biotech crops. Event-specific primers and probes were developed for qualitative and quantitative analysis of biotech maize events 3272 and LY 038 on the basis of their 3' flanking regions. The qualitative primers demonstrated specificity, yielding a single PCR product, with a sensitivity of 0.05% as the limit of detection (LOD). Simplex and duplex quantitative methods were also developed using TaqMan real-time PCR. One synthetic plasmid was constructed from two taxon-specific DNA sequences of maize and the two event-specific 3' flanking DNA sequences of events 3272 and LY 038 as a reference molecule. In-house validation of the quantitative methods was performed using six levels of mixed samples, from 0.1 to 10.0%. The biases from the true value and the relative deviations were all within the range of +/-30%. The limits of quantitation (LOQs) were 0.1% for the simplex real-time PCRs of events 3272 and LY 038 and 0.5% for the duplex real-time PCR of LY 038. This study shows that event-specific analytical methods are applicable to qualitative and quantitative analysis of biotech maize events 3272 and LY 038. PMID:19650633
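Quantitation with TaqMan real-time PCR of the kind described is typically done against a standard curve of threshold cycle (Ct) versus log10 of GMO content, with amplification efficiency derived from the slope. A generic sketch of that calculation, with illustrative names and data, not a reconstruction of the paper's validation procedure:

```python
import math

def fit_standard_curve(percents, ct_values):
    """Fit Ct = slope * log10(percent) + intercept (qPCR standard curve).

    Efficiency follows from the slope: 100% efficiency corresponds to a
    slope of about -3.32 (= -1 / log10(2), one doubling per cycle).
    """
    xs = [math.log10(p) for p in percents]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ct_values) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ct_values))
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sxy / sxx
    intercept = my - slope * mx
    efficiency = 10.0 ** (-1.0 / slope) - 1.0   # 1.0 means 100%
    return slope, intercept, efficiency

def gmo_percent(ct, slope, intercept):
    """Invert the curve to estimate the GMO content of an unknown sample."""
    return 10.0 ** ((ct - intercept) / slope)
```

A duplex assay runs the event-specific and taxon-specific reactions in one tube with differently labeled probes; the GMO percentage is then the ratio of the two back-calculated amounts.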
Fast Monte Carlo Electron-Photon Transport Method and Application in Accurate Radiotherapy
NASA Astrophysics Data System (ADS)
Hao, Lijuan; Sun, Guangyao; Zheng, Huaqing; Song, Jing; Chen, Zhenping; Li, Gui
2014-06-01
The Monte Carlo (MC) method is the most accurate computational method for dose calculation, but its wide application in clinical accurate radiotherapy is hindered by its slow convergence and long computation times. The main task in MC dose-calculation research is to speed up computation while maintaining high precision. The purpose of this paper is to increase the calculation speed of the MC method for electron-photon transport with high precision and ultimately to reduce the accurate-radiotherapy dose calculation time on an ordinary computer to the level of several hours, which meets the requirement of clinical dose verification. Based on the existing Super Monte Carlo Simulation Program (SuperMC), developed by the FDS Team, a fast MC method for coupled electron-photon transport is presented, with two emphases: first, by simplifying and optimizing the physical model of electron-photon transport, the calculation speed is increased with only a slight reduction in accuracy; second, a variety of MC acceleration techniques are employed, for example, reusing information obtained in previous calculations to avoid re-simulating particles with identical histories, and applying suitable variance reduction techniques to accelerate the convergence rate of the MC method. The fast MC method was tested on many simple physical models and on clinical cases including nasopharyngeal carcinoma, peripheral lung tumor, and cervical carcinoma. The results show that the fast MC method for electron-photon transport is fast enough to meet the requirement of clinical accurate-radiotherapy dose verification. The method will later be applied to the Accurate/Advanced Radiation Therapy System (ARTS) as an MC dose verification module.
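Implicit capture (replacing analog absorption with weight reduction, cleaned up by Russian roulette) is one of the classic variance-reduction techniques alluded to above. A toy 1-D "rod" transport model, far simpler than SuperMC's coupled electron-photon physics, illustrates the mechanics; all parameters and names are invented for illustration:

```python
import math
import random

def rod_transmission(mu_t, scatter_ratio, length, n_histories,
                     implicit_capture=True, w_cut=0.05, seed=1):
    """Estimate transmission through a 1-D absorbing/scattering rod.

    mu_t          -- total interaction coefficient (1/cm)
    scatter_ratio -- probability that a collision scatters, not absorbs
    Analog mode kills the particle on absorption; implicit capture
    instead multiplies the statistical weight by scatter_ratio and
    plays Russian roulette once the weight drops below w_cut.
    """
    rng = random.Random(seed)
    tally = 0.0
    for _ in range(n_histories):
        x, direction, w = 0.0, 1, 1.0
        while True:
            # sample distance to next collision from Exp(mu_t)
            x += direction * (-math.log(1.0 - rng.random()) / mu_t)
            if x >= length:              # escaped through the far face
                tally += w
                break
            if x < 0.0:                  # escaped backwards
                break
            if implicit_capture:
                w *= scatter_ratio       # survive every collision, lighter
                if w < w_cut:            # Russian roulette: unbiased cleanup
                    if rng.random() < 0.5:
                        break
                    w *= 2.0
                if w == 0.0:
                    break
            elif rng.random() > scatter_ratio:
                break                    # analog absorption
            direction = rng.choice((1, -1))   # isotropic re-emission
    return tally / n_histories
```

Both modes estimate the same mean; implicit capture keeps histories alive longer so more of them contribute to the tally, which is what lowers the variance per unit of computing time in deep-penetration problems.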
Analytical method for space-fractional telegraph equation by homotopy perturbation transform method
NASA Astrophysics Data System (ADS)
Prakash, Amit
2016-06-01
The object of the present article is to study the space-fractional telegraph equation by the fractional homotopy perturbation transform method (FHPTM). The homotopy perturbation transform method is an innovative combination of the homotopy perturbation method with the Laplace transform algorithm. Three test examples are presented to show the efficiency of the proposed technique.
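The abstract does not reproduce the equation; one standard form of the space-fractional telegraph equation studied with homotopy perturbation transform methods uses a Caputo fractional derivative in space,

```latex
\frac{\partial^{\alpha} u}{\partial x^{\alpha}}
  = \frac{\partial^{2} u}{\partial t^{2}}
  + \frac{\partial u}{\partial t}
  + u(x,t),
  \qquad 1 < \alpha \le 2 ,
```

and FHPTM-type schemes apply the Laplace transform in one variable, then expand the solution as a homotopy perturbation series $u = \sum_{n \ge 0} p^{n} u_{n}$ and recover the solution at $p = 1$. This is a generic statement of the setup, not necessarily the exact equation treated in the article.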
Thermal analysis of 3D composites by a new fast multipole hybrid boundary node method
NASA Astrophysics Data System (ADS)
Miao, Yu; Wang, Qiao; Zhu, Hongping; Li, Yinping
2014-01-01
This paper applies the hybrid boundary node method (Hybrid BNM) to the thermal analysis of 3D composites. A new formulation is derived for inclusion-based composites. In the new formulation, the unknowns on the interfaces are assembled only once in the final system of equations, which can reduce the number of degrees of freedom (DOFs) by nearly one half compared with a conventional multi-domain solver when there are many inclusions. A new version of the fast multipole method (FMM) is also coupled with the new formulation, and the technique is applied to the thermal analysis of composites with many inclusions. In the new fast multipole hybrid boundary node method (FM-HBNM), a diagonal form of the translation operators is used, and the method can handle more than 1,000,000 DOFs on a personal computer. Numerical examples are presented to analyze the thermal behavior of composites with many inclusions.
Fast multiscale Gaussian beam methods for wave equations in bounded convex domains
Bao, Gang; Lai, Jun; Qian, Jianliang
2014-03-15
Motivated by fast multiscale Gaussian wavepacket transforms and multiscale Gaussian beam methods, which were originally designed for pure initial-value problems of wave equations, we develop fast multiscale Gaussian beam methods for initial-boundary value problems of wave equations in bounded convex domains in the high-frequency regime. To compute wave propagation in bounded convex domains, we must take into account reflected multiscale Gaussian beams, which is accomplished by enforcing reflecting boundary conditions during beam propagation and carrying out a suitable reflected-beam summation. To propagate multiscale beams efficiently, we prove that the ratio of the squared magnitude of the beam amplitude to the beam width is roughly conserved, and accordingly we propose an effective indicator to identify significant beams. We also prove that the resulting multiscale Gaussian beam methods converge asymptotically. Numerical examples demonstrate the accuracy and efficiency of the method.
Janson, Lucas; Schmerling, Edward; Clark, Ashley; Pavone, Marco
2015-01-01
In this paper we present a novel probabilistic sampling-based motion planning algorithm called the Fast Marching Tree algorithm (FMT*). The algorithm is specifically aimed at solving complex motion planning problems in high-dimensional configuration spaces. It is proven to be asymptotically optimal and is shown to converge to an optimal solution faster than its state-of-the-art counterparts, chiefly PRM* and RRT*. The FMT* algorithm performs a “lazy” dynamic programming recursion on a predetermined number of probabilistically drawn samples to grow a tree of paths, which moves steadily outward in cost-to-arrive space. As such, the algorithm combines features of both single-query algorithms (chiefly RRT) and multiple-query algorithms (chiefly PRM), and is reminiscent of the Fast Marching Method for the solution of eikonal equations. As a departure from previous analysis approaches based on the notion of almost-sure convergence, the FMT* algorithm is analyzed under the notion of convergence in probability: the extra mathematical flexibility of this approach allows for convergence rate bounds, the first in the field of optimal sampling-based motion planning. Specifically, for a certain selection of tuning parameters and configuration spaces, we obtain a convergence rate bound of order O(n^(−1/d+ρ)), where n is the number of sampled points, d is the dimension of the configuration space, and ρ is an arbitrarily small constant. We go on to demonstrate asymptotic optimality for a number of variations on FMT*, namely when the configuration space is sampled non-uniformly, when the cost is not arc length, and when connections are made based on the number of nearest neighbors instead of a fixed connection radius. Numerical experiments over a range of dimensions and obstacle configurations confirm our theoretical and heuristic arguments by showing that FMT*, for a given execution time, returns substantially better solutions than either PRM* or RRT*.
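The lazy dynamic-programming recursion described above can be sketched compactly. This toy version assumes free space unless a `collision_free` predicate is supplied and omits the careful connection-radius scaling of the real algorithm; it is an illustration of the expansion order, not the published FMT*:

```python
import math

def fmt_sketch(samples, start, goal, r, collision_free=lambda a, b: True):
    """Grow a tree of paths outward in cost-to-arrive order (FMT*-style).

    samples -- list of points (tuples); start, goal -- indices into it;
    r -- connection radius. The lazy step: a new sample x is connected
    through the locally optimal parent among currently open nodes, and
    only that single connection is collision-checked.
    """
    cost = {start: 0.0}
    parent = {}
    open_set = {start}
    unvisited = set(range(len(samples))) - {start}
    dist = lambda i, j: math.dist(samples[i], samples[j])
    while open_set:
        z = min(open_set, key=cost.get)            # lowest cost-to-arrive
        for x in [i for i in unvisited if dist(z, i) <= r]:
            near_open = [y for y in open_set if dist(y, x) <= r]
            y = min(near_open, key=lambda y: cost[y] + dist(y, x))
            if collision_free(samples[y], samples[x]):
                cost[x] = cost[y] + dist(y, x)     # accept lazy connection
                parent[x] = y
                open_set.add(x)
                unvisited.discard(x)
        open_set.discard(z)
        if z == goal:
            break
    return cost, parent
```

Expanding always from the lowest-cost open node is what makes the tree sweep outward like a fast-marching front, rather than extending one branch at a time as RRT does.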
Nigg, D.W.; Wemple, C.A.; Hartwell, J.K.; Harker, Y.D.; Venhuizen, J.R.; Risler, R.
1997-12-01
A closed-form direct method for unfolding neutron spectra from foil activation data is presented. The method is applied to measurements of the free-field neutron spectrum produced by the proton-cyclotron-based fast-neutron radiotherapy facility at the University of Washington (UW) School of Medicine. The results compare favorably with theoretical expectations based on an a priori calculational model of the target and neutron beamline configuration of the UW facility.
Khalsa, Siri Sahib; Siegel, Nathan Phillip; Ho, Clifford Kuofei
2010-04-01
This paper introduces a new analytical 'stretch' function that accurately predicts the flux distribution from on-axis point-focus collectors. Different dish sizes and slope errors can be assessed using this analytical function with a ratio of the focal length to collector diameter fixed at 0.6 to yield the maximum concentration ratio. Results are compared to data, and the stretch function is shown to provide more accurate flux distributions than other analytical methods employing cone optics.
Evaluation of FTIR-based analytical methods for the analysis of simulated wastes
Rebagay, T.V.; Cash, R.J.; Dodd, D.A.; Lockrem, L.L.; Meacham, J.E.; Winkelman, W.D.
1994-09-30
Three FTIR-based analytical methods with the potential to characterize simulated waste tank materials have been evaluated: (1) fiber optics, (2) modular transfer optic using light guides equipped with non-contact sampling peripherals, and (3) photoacoustic spectroscopy. Pertinent instrumentation and experimental procedures for each method are described. The results show that the near-infrared (NIR) region of the infrared spectrum is the region of choice for the measurement of moisture in waste simulants. Differentiation of the NIR spectrum, as a preprocessing step, improves the analytical result. Preliminary data indicate that prominent combination bands of water and the first overtone band of the ferrocyanide stretching vibration may be utilized to measure water and ferrocyanide species simultaneously. Both near-infrared and mid-infrared spectra must be collected, however, to measure ferrocyanide species unambiguously and accurately. For ease of sample handling and the potential for field or waste tank deployment, the FTIR-fiber optic method is preferred over the other two. Modular transfer optic using light guides and photoacoustic spectroscopy may be used as backup systems and for validation of the fiber-optic data.
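Spectral differentiation as a preprocessing step works because a first derivative removes a constant baseline offset and reduces linear baseline drift to a constant, leaving the band shapes that carry the quantitative information. A minimal central-difference sketch (illustrative, not the instrument software used in the study):

```python
def first_derivative(spectrum, wavenumbers):
    """Central-difference first derivative dA/dnu of a spectrum.

    A flat baseline differentiates to zero and a linear baseline to a
    constant, which is why derivative preprocessing sharpens NIR water
    and ferrocyanide bands before quantitation.
    """
    deriv = []
    for i in range(1, len(spectrum) - 1):
        d_abs = spectrum[i + 1] - spectrum[i - 1]
        d_wav = wavenumbers[i + 1] - wavenumbers[i - 1]
        deriv.append(d_abs / d_wav)
    return deriv
```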
Dynamical analysis of the avian-human influenza epidemic model using the semi-analytical method
NASA Astrophysics Data System (ADS)
Jabbari, Azizeh; Kheiri, Hossein; Bekir, Ahmet
2015-03-01
In this work, we present the dynamic behavior of the avian-human influenza epidemic model using an efficient computational algorithm, namely the multistage differential transform method (MsDTM). The MsDTM is used here as an algorithm for approximating the solutions of the avian-human influenza epidemic model over a sequence of time intervals. To show the efficiency of the method, the numerical results obtained are compared with fourth-order Runge-Kutta method (RK4M) and differential transform method (DTM) solutions. It is shown that the MsDTM has the advantage of giving an analytical form of the solution within each time interval, which is not possible with purely numerical techniques such as RK4M.
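The flavor of the MsDTM, re-expanding a differential-transform series on each subinterval with the previous endpoint as the new initial condition, can be shown on the scalar test problem y' = y rather than the epidemic model itself; the function name and parameters are illustrative:

```python
import math

def msdtm_exponential(y0, t_end, n_steps=10, order=8):
    """Multistage DTM sketch for the scalar test problem y' = y.

    The differential transform of y' = y gives the recurrence
    Y[k+1] = Y[k] / (k + 1), i.e. the local Taylor coefficients of the
    solution. MsDTM re-expands this series at the start of each
    subinterval, taking the previous endpoint value as the new
    initial condition.
    """
    h = t_end / n_steps
    y = y0
    for _ in range(n_steps):
        Y = [y]
        for k in range(order):
            Y.append(Y[k] / (k + 1))                     # DTM recurrence
        y = sum(Y[k] * h ** k for k in range(order + 1)) # evaluate at t = h
    return y
```

Within each subinterval the truncated series is an explicit polynomial in t, which is the "analytical form of the solution" that purely numerical steppers like RK4M do not provide.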
Analytical and Biological Methods for Probing the Blood-Brain Barrier
NASA Astrophysics Data System (ADS)
Kuhnline, Sloan; Courtney, D.; Nandi, Pradyot; Linz, Thomas H.; Aldrich, Jane V.; Audus, Kenneth L.; Lunte, Susan M.
2012-07-01
The blood-brain barrier (BBB) is an important interface between the peripheral and central nervous systems. It protects the brain against the infiltration of harmful substances and regulates the permeation of beneficial endogenous substances from the blood into the extracellular fluid of the brain. It can also present a major obstacle in the development of drugs that are targeted for the central nervous system. Several methods have been developed to investigate the transport and metabolism of drugs, peptides, and endogenous compounds at the BBB. In vivo methods include intravenous injection, brain perfusion, positron emission tomography, and microdialysis sampling. Researchers have also developed in vitro cell-culture models that can be employed to investigate transport and metabolism at the BBB without the complication of systemic involvement. All these methods require sensitive and selective analytical methods to monitor the transport and metabolism of the compounds of interest at the BBB.
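In vitro transport experiments of the kind described are usually summarized by an apparent permeability coefficient, P_app = (dQ/dt) / (A * C0), where dQ/dt is the steady-state appearance rate of the compound in the receiver compartment, A the monolayer area, and C0 the initial donor concentration. A minimal sketch with illustrative values (the formula is standard, but the names and numbers are not from the review):

```python
def fitted_slope(times, values):
    """Least-squares slope of values versus times (steady-state dQ/dt)."""
    n = len(times)
    mt, mv = sum(times) / n, sum(values) / n
    num = sum((t - mt) * (v - mv) for t, v in zip(times, values))
    den = sum((t - mt) ** 2 for t in times)
    return num / den

def apparent_permeability(times_s, cumulative_q, area_cm2, c0):
    """P_app = (dQ/dt) / (A * C0); units of cm/s when Q and C0 share
    the same mass unit, times are in seconds, and C0 is per cm^3."""
    return fitted_slope(times_s, cumulative_q) / (area_cm2 * c0)
```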