Fast Analytical Methods for Macroscopic Electrostatic Models in Biomolecular Simulations*
Xu, Zhenli; Cai, Wei
2013-01-01
We review recent developments of fast analytical methods for macroscopic electrostatic calculations in biological applications, including the Poisson–Boltzmann (PB) and the generalized Born models for electrostatic solvation energy. The focus is on analytical approaches for hybrid solvation models, especially the image charge method for a spherical cavity, and also the generalized Born theory as an approximation to the PB model. This review places much emphasis on the mathematical details behind these methods. PMID:23745011
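As background for the image-charge material this review emphasizes, the construction can be illustrated by the classical Kelvin image for a grounded conducting sphere; the dielectric-cavity image charges used in hybrid solvation models generalize this idea. A minimal sketch (illustrative values only):

```python
import math

def potential(q, src, pt):
    """Coulomb potential at pt due to point charge q at src (units with 1/(4*pi*eps0) = 1)."""
    return q / math.dist(src, pt)

def kelvin_image(q, d, a):
    """Classical Kelvin image of a charge q at distance d from the center of a
    grounded conducting sphere of radius a: charge -q*a/d placed at distance a**2/d."""
    return -q * a / d, a * a / d

# Charge q = 1 at d = 2 outside a unit sphere (a = 1)
q, d, a = 1.0, 2.0, 1.0
q_im, d_im = kelvin_image(q, d, a)

# The combined potential of the source and its image vanishes on the sphere surface
for theta in [0.0, 0.7, 1.3, 2.1, math.pi]:
    pt = (a * math.cos(theta), a * math.sin(theta), 0.0)
    v = potential(q, (d, 0.0, 0.0), pt) + potential(q_im, (d_im, 0.0, 0.0), pt)
    assert abs(v) < 1e-12
```

The same image-point geometry (inverse point at a²/d) underlies the spherical-cavity image approximations for dielectric interfaces discussed in the review, though there the image strength depends on the dielectric contrast.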
A semi-analytical numerical method for fast metamaterial absorber design
NASA Astrophysics Data System (ADS)
Song, Y. C.; Ding, J.; Guo, C. J.
2015-09-01
In this paper, a semi-analytical numerical approach utilizing a novel non-grounded model and an interpolation technique is introduced to design frequency selective surface (FSS) based metamaterial absorbers (MAs) with dramatically reduced time consumption. Unlike the commonly used trial-and-error approach, our method mainly utilizes the numerically computed FSS layer impedance, which varies slowly in the vicinity of the operating frequency. The introduced non-grounded model establishes, with reasonable accuracy, a quantitative relationship between geometry parameters and the equivalent lumped circuit components of the conventional transmission line (TL) model. The interpolation technique, on the other hand, permits a relatively sparse parameter sweep. The detailed design flow as well as an analytical explanation with carefully derived expressions is presented. To validate the proposed method and analytical models, an MA with slotted patches is designed through both the semi-analytical numerical approach and the trial-and-error method, where an over 2300-fold acceleration is observed. Additionally, results from the analytical computation and full-wave simulation agree well with each other.
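The transmission-line picture the abstract builds on can be sketched generically: a resonant sheet impedance in parallel with a grounded dielectric slab, matched against free space. The series-RLC values below are purely illustrative assumptions, not fitted to any real FSS geometry:

```python
import cmath, math

ETA0 = 376.73   # free-space wave impedance (ohm)
C0 = 2.998e8    # speed of light (m/s)

def absorption(f, R, L, Cap, eps_r, d):
    """Absorptivity of a generic FSS absorber in the standard TL model:
    a series-RLC sheet (assumed fitted values R, L, Cap) in parallel with
    a grounded dielectric slab of permittivity eps_r and thickness d."""
    w = 2.0 * math.pi * f
    z_fss = R + 1j * w * L + 1.0 / (1j * w * Cap)   # FSS sheet impedance
    z_c = ETA0 / math.sqrt(eps_r)                    # slab characteristic impedance
    beta = w * math.sqrt(eps_r) / C0                 # phase constant in the slab
    z_slab = 1j * z_c * math.tan(beta * d)           # shorted (grounded) line section
    z_in = z_fss * z_slab / (z_fss + z_slab)         # parallel combination at the surface
    gamma = (z_in - ETA0) / (z_in + ETA0)            # reflection coefficient
    return 1.0 - abs(gamma) ** 2                     # ground plane blocks transmission

# Sweep 2-18 GHz for one entirely illustrative parameter set
freqs = [f * 1e9 for f in range(2, 19)]
spec = [absorption(f, 300.0, 2e-9, 0.1e-12, 4.4, 1e-3) for f in freqs]
assert all(0.0 <= a <= 1.0 for a in spec)
```

Because every circuit element here evaluates in microseconds, sweeping geometry-mapped (R, L, C) values over frequency is what makes this class of design loop so much faster than repeated full-wave simulation.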
Fast and "green" method for the analytical monitoring of haloketones in treated water.
Serrano, María; Silva, Manuel; Gallego, Mercedes
2014-09-01
Several groups of organic compounds have emerged as being particularly relevant as environmental pollutants, including disinfection by-products (DBPs). Haloketones (HKs), which belong to the unregulated volatile fraction of DBPs, have become a priority because of their occurrence in drinking water at concentrations below 1μg/L. The absence of a comprehensive method for HKs has led to the development of the first method for determining fourteen of these species. In an effort to miniaturise, this study develops a micro liquid-liquid extraction (MLLE) method adapted from EPA Method 551.1. In this method, practically the whole extract (50μL) was injected into a programmed temperature vaporiser-gas chromatograph-mass spectrometer in order to improve sensitivity. The method was validated by comparison with EPA Method 551.1 and showed relevant advantages such as lower sample pH (1.5), a higher aqueous/organic volume ratio (60), lower solvent consumption (200μL) and fast, cost-saving operation. The MLLE method achieved detection limits ranging from 6 to 60ng/L (except for 1,1,3-tribromo-3-chloroacetone, 120ng/L) with satisfactory precision (RSD, ∼6%) and high recoveries (95-99%). The influence of various dechlorinating agents and of the sample pH on the stability of the fourteen HKs in treated water was also evaluated. To ensure the integrity of the HKs for at least 1 week during storage at 4°C, the samples were acidified to pH ∼1.5, which coincides with the sample pH required for MLLE. The green method was applied to the speciation of fourteen HKs in tap and swimming pool waters, where one and seven chlorinated species, respectively, were found. The concentration of 1,1-dichloroacetone in swimming pool water was ∼25 times higher than in tap water. PMID:25042440
Enantioselective Liquid-Solid Extraction (ELSE)--An Unexplored, Fast, and Precise Analytical Method.
Ulatowski, Filip; Hamankiewicz, Paulina; Jurczak, Janusz
2015-09-14
A novel method of evaluating the enantioselectivity of chiral receptors is investigated. It involves extraction of an ionic guest in racemic form from an ion-exchange resin into an organic solvent, where it is bound by a chiral receptor. The enantioselectivity of the examined receptor is determined simply by measuring the enantiomeric excess of the extracted guest. We show that the concept is viable for neutral receptors binding chiral organic anions extracted into acetonitrile. This method was determined to be more accurate and far less time-consuming than classical titrations. Multiple racemic guests can be applied to a resin in a single experiment, giving the method very high throughput. PMID:26263300
ERIC Educational Resources Information Center
Ember, Lois R.
1977-01-01
The procedures utilized by the Association of Official Analytical Chemists (AOAC) to develop, evaluate, and validate analytical methods for the analysis of chemical pollutants are detailed. Methods validated by AOAC are used by the EPA and FDA in their enforcement programs and are granted preferential treatment by the courts. (BT)
Samin, Adib; Lahti, Erik; Zhang, Jinsuo
2015-08-15
Cyclic voltammetry is a powerful tool for characterizing electrochemical processes. Models of cyclic voltammetry take into account the mass transport of species and the kinetics at the electrode surface. Analytical solutions of these models are not well known because of the complexity of the boundary conditions. In this study we present closed-form analytical solutions of the planar voltammetry model for two soluble species with fast electron transfer and equal diffusivities, using the eigenfunction expansion method. Our solution methodology does not require Laplace transforms and yields good agreement with the numerical solution. The solution method can be extended to more general cases and may be useful for benchmarking purposes.
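The eigenfunction-expansion technique named here can be illustrated on a textbook problem rather than the paper's voltammetry model: diffusion on a finite interval with Dirichlet boundaries, where each sine eigenmode of the initial data simply decays at its own rate. A minimal sketch under those standard assumptions:

```python
import math

def diffusion_series(x, t, D=1.0, L=1.0, n_terms=2000):
    """Eigenfunction-expansion (Fourier sine series) solution of u_t = D*u_xx
    on (0, L) with u(0, t) = u(L, t) = 0 and uniform initial data u(x, 0) = 1.
    Each eigenmode sin(n*pi*x/L) decays independently as exp(-D*(n*pi/L)**2 * t)."""
    total = 0.0
    for n in range(1, n_terms + 1, 2):        # only odd modes have nonzero coefficients
        b_n = 4.0 / (n * math.pi)             # sine-series coefficient of f(x) = 1
        lam = (n * math.pi / L) ** 2          # eigenvalue of -d^2/dx^2
        total += b_n * math.sin(n * math.pi * x / L) * math.exp(-D * lam * t)
    return total

# At t = 0 the series reproduces the initial condition at interior points
assert abs(diffusion_series(0.5, 0.0) - 1.0) < 1e-2
# The solution decays toward zero as t grows
assert diffusion_series(0.5, 0.5) < 0.01
```

The voltammetry problem in the abstract adds coupled species and time-dependent electrode boundary conditions, but the structural idea is the same: expand in the spatial eigenfunctions and track each mode's amplitude in time.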
Leśniewska, Barbara; Kisielewska, Katarzyna; Wiater, Józefa; Godlewska-Żyłkiewicz, Beata
2016-01-01
A new fast method for the determination of mobile zinc fractions in soil is proposed in this work. The three-stage modified BCR procedure used for fractionation of zinc in soil was accelerated by using ultrasound. The working parameters of the ultrasound probe, the power and the time of sonication, were optimized so that the analyte content in soil extracts obtained by ultrasound-assisted sequential extraction (USE) was consistent with that obtained by the conventional modified Community Bureau of Reference (BCR) procedure. The zinc content in the extracts was determined by flame atomic absorption spectrometry. The developed USE procedure shortened the total extraction time from 48 h to 27 min compared with the conventional modified BCR procedure. The method was fully validated, and the uncertainty budget was evaluated. The trueness and reproducibility of the developed method were confirmed by analysis of the certified reference material of lake sediment BCR-701. The applicability of the procedure for fast, low-cost and reliable determination of the mobile zinc fraction in soil, useful for assessing anthropogenic impacts on natural resources and for environmental monitoring, was proved by analysis of different types of soil collected from Podlaskie Province (Poland). PMID:26666658
Kim, Junghyun; Suh, Joon Hyuk; Cho, Hyun-Deok; Kang, Wonjae; Choi, Yong Seok; Han, Sang Beom
2016-01-01
A multi-class, multi-residue analytical method based on LC-MS/MS detection was developed for the screening and confirmation of 28 veterinary drug and metabolite residues in flatfish, shrimp and eel. The chosen veterinary drugs are prohibited or unauthorised compounds in Korea, which were categorised into various chemical classes including nitroimidazoles, benzimidazoles, sulfones, quinolones, macrolides, phenothiazines, pyrethroids and others. To achieve fast and simultaneous extraction of various analytes, a simple and generic liquid extraction procedure using EDTA-ammonium acetate buffer and acetonitrile, without further clean-up steps, was applied to sample preparation. The final extracts were analysed by ultra-high-performance liquid chromatography coupled with tandem mass spectrometry (UHPLC-MS/MS). The method was validated for each compound in each matrix at three different concentrations (5, 10 and 20 ng g⁻¹) in accordance with Codex guidelines (CAC/GL 71-2009). For most compounds, the recoveries were in the range of 60-110%, and precision, expressed as the relative standard deviation (RSD), was in the range of 5-15%. The detection capabilities (CCβs) were at or below 5 ng g⁻¹, which indicates that the developed method is sufficient to detect illegal fishery products containing the target compounds above the residue limit (10 ng g⁻¹) of the new regulatory system (Positive List System - PLS). PMID:26751111
Fast micromagnetic simulations using an analytic mathematical model
NASA Astrophysics Data System (ADS)
Tsiantos, Vassilios; Miles, Jim
2006-02-01
In this paper an analytic mathematical model is presented for fast micromagnetic simulations. In dynamic micromagnetic simulations the Landau-Lifshitz-Gilbert (LLG) equation is solved to observe the magnetisation reversal mechanisms. In stiff micromagnetic simulations the resulting large system of ordinary differential equations has to be solved with an appropriate method, such as the Backward Differentiation Formulas (BDF) method, which leads to the solution of a large linear system. The latter is solved efficiently by employing matrix-free techniques, such as preconditioned Krylov methods. Within the Krylov framework a matrix-vector product is required, which is usually approximated by directional differences. This paper provides an analytic mathematical model to calculate this product efficiently, leading to more accurate calculations and consequently faster micromagnetic simulations due to better convergence properties.
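The directional-difference approximation that matrix-free Krylov methods use is J(x)v ≈ (F(x + εv) − F(x))/ε, so the Jacobian is never formed. A sketch on a small nonlinear test system (the system F below is purely illustrative, not the LLG right-hand side):

```python
import math

def F(x):
    """A small nonlinear test system F: R^2 -> R^2 (purely illustrative)."""
    return [x[0] ** 2 + x[1], math.sin(x[0]) + 3.0 * x[1]]

def jv_analytic(x, v):
    """Exact Jacobian-vector product for F above, J = [[2*x0, 1], [cos(x0), 3]]."""
    return [2.0 * x[0] * v[0] + v[1],
            math.cos(x[0]) * v[0] + 3.0 * v[1]]

def jv_fd(F, x, v, eps=1e-7):
    """Matrix-free directional difference J(x)v ~ (F(x + eps*v) - F(x)) / eps:
    one extra residual evaluation per Krylov iteration, no Jacobian storage."""
    fx = F(x)
    fxe = F([xi + eps * vi for xi, vi in zip(x, v)])
    return [(a - b) / eps for a, b in zip(fxe, fx)]

x, v = [0.8, -1.2], [0.5, 2.0]
exact = jv_analytic(x, v)
approx = jv_fd(F, x, v)
assert all(abs(a - e) < 1e-5 for a, e in zip(approx, exact))
```

The paper's contribution, as the abstract describes it, is replacing the finite-difference approximation with an analytic product for the micromagnetic system, trading the truncation error of `jv_fd` for an exact evaluation.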
Analytical model for fast-shock ignition
Ghasemi, S. A. Farahbod, A. H.; Sobhanian, S.
2014-07-15
A model and its improvements are introduced for a recently proposed approach to inertial confinement fusion, called fast-shock ignition (FSI). The analysis is based upon the gain models of fast ignition and shock ignition, together with considerations of fast-electron penetration into the pre-compressed fuel, to examine the formation of an effective central hot spot. Calculations of fast-electron penetration into the dense fuel show that if the initial electron kinetic energy is of the order of ∼4.5 MeV, the electrons effectively reach the central part of the fuel. To evaluate the performance of the FSI approach more realistically, we have used the quasi-two-temperature electron energy distribution function of Strozzi (2012) and the fast ignitor energy formula of Bellei (2013), which are consistent with 3D PIC simulations for different values of fast ignitor laser wavelength and coupling efficiency. The overall gain advantage of fast-shock ignition over shock ignition is estimated to be better than a factor of 1.3, with the best results obtained for a fuel mass of around 1.5 mg, a fast ignitor laser wavelength of ∼0.3 micron and a shock ignitor energy weight factor of about 0.25.
Analytic Methods in Investigative Geometry.
ERIC Educational Resources Information Center
Dobbs, David E.
2001-01-01
Suggests an alternative proof by analytic methods, which is more accessible than rigorous proof based on Euclid's Elements, in which students need only apply standard methods of trigonometry to the data without introducing new points or lines. (KHR)
Fast profiling of food by analytical pyrolysis.
Halket, J M; Schulten, H R
1988-03-01
The analytical application of direct pyrolysis (Py) field ionization (FI)-mass spectrometry (MS) and Curie-point pyrolysis gas chromatography-mass spectrometry (Py-GC/FIMS) to various whole foodstuffs is described for the first time. The former technique yields highly differentiated information from the sample in typically 15 min, namely the molecular weight distribution of released volatiles and pyrolysis products in a single spectrum which, owing to the good reproducibility and high significance of the resulting data, has previously been shown to be suitable for the application of chemometric methods. Such mass spectral peaks are further characterized and assigned by high resolution mass measurement and/or by electron ionization after Curie-point pyrolysis and gas chromatographic separation of the components. In this first report, typical results are presented for ground roasted coffee, rosehip tea, wheatmeal biscuit, chocolate drink powder and milk chocolate. The FI mass spectrum obtained from the latter sample is compared with those obtained using the complementary soft ionization techniques of chemical ionization (CI) and direct chemical ionization (DCI). PMID:3369241
Boisson, F; Bekaert, V; Reilhac, A; Wurtz, J; Brasse, D
2015-03-21
In SPECT imaging, improvement or deterioration of performance is mostly due to collimator design. Classical SPECT systems mainly use parallel-hole or pinhole collimators. Rotating slat collimators (RSC) can be an interesting alternative for optimizing the tradeoff between detection efficiency and spatial resolution. The present study was conducted using an RSC system for small-animal imaging called CLiR, used in planar mode only. In a previous study, planar 2D projections were reconstructed using the well-known filtered backprojection (FBP) algorithm. In this paper, we investigated the use of the statistical reconstruction algorithm maximum-likelihood expectation maximization (MLEM) to reconstruct 2D images with the CLiR system, using a probability matrix calculated with an analytic approach. The primary objective was to propose a method to quickly generate a light system matrix, which facilitates its handling and storage while providing accurate and reliable performance. Two other matrices were calculated using GATE Monte Carlo simulations to benchmark the analytically calculated matrix. The first GATE matrix took all physics processes into account, while the second did not account for scattering, since the analytical matrix does not include this process either. 2D images were reconstructed using FBP and MLEM with the three different probability matrices, using both simulated and experimental data. A comparative study of these images was conducted using several metrics: the modulation transfer function, the signal-to-noise ratio and quantification measurements. All the results demonstrated the suitability of using an analytically calculated probability matrix. It provided similar results in terms of spatial resolution (about 0.6 mm, with differences <5%), signal-to-noise ratio (differences <10%) and image quality. PMID:25716556
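The MLEM iteration the study relies on is standard: multiply the current estimate by the backprojected ratio of measured to forward-projected data, normalized by the detector sensitivity. A toy sketch with an assumed 3-detector, 2-pixel probability matrix (not the CLiR system matrix):

```python
def mlem(A, y, n_iter=200):
    """Basic MLEM: lam_j <- lam_j / sens_j * sum_i A[i][j] * y[i] / (A lam)_i,
    where sens_j = sum_i A[i][j] is the sensitivity of pixel j.
    A is the probability matrix (detector i sees pixel j with weight A[i][j])."""
    n_det, n_pix = len(A), len(A[0])
    lam = [1.0] * n_pix                                   # uniform positive start
    sens = [sum(A[i][j] for i in range(n_det)) for j in range(n_pix)]
    for _ in range(n_iter):
        proj = [sum(A[i][j] * lam[j] for j in range(n_pix)) for i in range(n_det)]
        ratio = [y[i] / proj[i] if proj[i] > 0 else 0.0 for i in range(n_det)]
        lam = [lam[j] / sens[j] * sum(A[i][j] * ratio[i] for i in range(n_det))
               for j in range(n_pix)]
    return lam

# Toy system with noiseless data generated from the truth (2, 5)
A = [[0.7, 0.1],
     [0.2, 0.3],
     [0.1, 0.6]]
truth = [2.0, 5.0]
y = [sum(A[i][j] * truth[j] for j in range(2)) for i in range(3)]
est = mlem(A, y)
assert all(abs(e - t) < 1e-3 for e, t in zip(est, truth))
```

With consistent noiseless data and a well-conditioned matrix the iteration recovers the truth; the study's point is that how A is computed (analytic vs. Monte Carlo) barely changes the reconstruction while drastically changing generation cost.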
Peruga, Aranzazu; Hidalgo, Carmen; Sancho, Juan V; Hernández, Félix
2013-09-13
Pyrethrins are natural insecticides derived from chrysanthemum flowers containing a mixture of six components: pyrethrin I, cinerin I, jasmolin I, pyrethrin II, cinerin II, and jasmolin II. In this work, a rapid and sensitive LC-(ESI)-MS/MS method has been developed for the individual quantification and confirmation of pyrethrin residues in fruit and vegetable samples by monitoring two specific transitions for each pyrethrin component under Selected Reaction Monitoring (SRM) mode. Samples were extracted with acetone/water or acetone, depending on the sample type, and raw extracts were directly injected into the LC-MS/MS system. Method validation was carried out evaluating linearity, accuracy, precision, specificity, limit of quantification (LOQ) and limit of detection (LOD) in eight types of fruit and vegetable samples at 0.05 mg/kg and 0.5 mg/kg (referred to the sum of all pyrethrins). The method based on acetone/water (70:30) extraction led to satisfactory recoveries (70-110%) and good precision (below 14%) for all pyrethrin components in lettuce, pepper, strawberry and potato. The method based on acetone extraction allowed satisfactory recoveries for lettuce, cucumber, tomato and rice samples, with recoveries between 71 and 107% and RSDs below 15%. For pistachio samples, satisfactory results were obtained only for some analytes; extracts were also injected using the APCI interface, but the lower sensitivity achieved allowed validation only at 0.5 mg/kg. The analytical methodology developed was applied to the analysis of fruit and vegetable samples. PMID:23938081
Detering, Brent A.; Donaldson, Alan D.; Fincke, James R.; Kong, Peter C.; Berry, Ray A.
1999-01-01
A fast quench reaction includes a reactor chamber having a high temperature heating means such as a plasma torch at its inlet and a means of rapidly expanding a reactant stream, such as a restrictive convergent-divergent nozzle at its outlet end. Metal halide reactants are injected into the reactor chamber. Reducing gas is added at different stages in the process to form a desired end product and prevent back reactions. The resulting heated gaseous stream is then rapidly cooled by expansion of the gaseous stream.
ANALYTICAL METHOD DEVELOPMENT FOR PHENOLS
This project focused on the development of an analytical method for the analysis of phenols in drinking water. The need for this project is associated with the recently published Contaminant Candidate List (CCL). The following phenolic compounds are listed on the current CCL, a...
Analytical methods under emergency conditions
Sedlet, J.
1983-01-01
This lecture discusses methods for the radiochemical determination of internal contamination of the body under emergency conditions, here defined as a situation in which results on internal radioactive contamination are needed quickly. The purpose of speed is to determine the necessity for medical treatment to increase the natural elimination rate. Analytical methods discussed include whole-body counting, organ counting, wound monitoring, and excreta analysis. 12 references. (ACR)
Waste minimization in analytical methods
Green, D.W.; Smith, L.L.; Crain, J.S.; Boparai, A.S.; Kiely, J.T.; Yaeger, J.S.; Schilling, J.B.
1995-05-01
The US Department of Energy (DOE) will require a large number of waste characterizations over a multi-year period to accomplish the Department's goals in environmental restoration and waste management. Estimates vary, but two million analyses annually are expected. The waste generated by the analytical procedures used for characterizations is a significant source of new DOE waste. Success in reducing the volume of secondary waste and the costs of handling this waste would significantly decrease the overall cost of this DOE program. Selection of appropriate analytical methods depends on the intended use of the resultant data. It is not always necessary to use a high-powered analytical method, typically at higher cost, to obtain data needed to make decisions about waste management. Indeed, for samples taken from some heterogeneous systems, the meaning of high accuracy becomes clouded if the data generated are intended to measure a property of this system. Among the factors to be considered in selecting the analytical method are the lower limit of detection, accuracy, turnaround time, cost, reproducibility (precision), interferences, and simplicity. Occasionally, there must be tradeoffs among these factors to achieve the multiple goals of a characterization program. The purpose of the work described here is to add waste minimization to the list of characteristics to be considered. In this paper the authors present results of modifying analytical methods for waste characterization to reduce both the cost of analysis and the volume of secondary wastes. Although tradeoffs may be required to minimize waste while still generating data of acceptable quality for the decision-making process, they have data demonstrating that wastes can be reduced in some cases without sacrificing accuracy or precision.
NASA Astrophysics Data System (ADS)
Shannon, Andrew; Mustill, Alexander J.; Wyatt, Mark
2015-03-01
Dust grains migrating under Poynting-Robertson drag may be trapped in mean-motion resonances with planets. Such resonantly trapped grains are observed in the Solar system. In extrasolar systems, the exozodiacal light produced by dust grains is expected to be a major obstacle to future missions attempting to directly image terrestrial planets. The patterns made by resonantly trapped dust, however, can be used to infer the presence of planets, and the properties of those planets, if the capture and evolution of the grains can be modelled. This has been done with N-body methods, but such methods are computationally expensive, limiting their usefulness when considering large, slowly evolving grains, and for extrasolar systems with unknown planets and parent bodies, where the possible parameter space for investigation is large. In this work, we present a semi-analytic method for calculating the capture and evolution of dust grains in resonance, which can be orders of magnitude faster than N-body methods. We calibrate the model against N-body simulations, finding excellent agreement for Earth to Neptune mass planets, for a variety of grain sizes, initial eccentricities, and initial semimajor axes. We then apply the model to observations of dust resonantly trapped by the Earth. We find that resonantly trapped, asteroidally produced grains naturally produce the 'trailing blob' structure in the zodiacal cloud, while to match the intensity of the blob, most of the cloud must be composed of cometary grains, which owing to their high eccentricity are not captured, but produce a smooth disc.
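As context for where trapped grains sit (the capture model itself is beyond a snippet), the location of a mean-motion resonance follows directly from Kepler's third law, a standard result rather than anything specific to this paper:

```python
def resonance_semimajor_axis(a_planet, p, q):
    """Kepler-III location of the external (p+q):p mean-motion resonance:
    the particle completes p orbits while the planet completes p+q, so the
    period ratio is (p+q)/p and a = a_planet * ((p+q)/p)**(2/3)."""
    return a_planet * ((p + q) / p) ** (2.0 / 3.0)

# Neptune's exterior 3:2 resonance (home of the Plutinos) sits near 39.4 au
a_32 = resonance_semimajor_axis(30.07, 2, 1)
assert abs(a_32 - 39.4) < 0.1
```

For dust spiralling inward under Poynting-Robertson drag, these are the semimajor axes at which migration can stall, producing the resonant ring and blob structures the abstract describes.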
Liu, J; Bourland, J
2014-06-01
Purpose: To analytically estimate first-order x-ray scatter for kV cone-beam x-ray imaging with high computational efficiency. Methods: In calculating first-order scatter using the Klein-Nishina formula, we found that by integrating the point-to-point scatter along an interaction line, a “pencil-beam” scatter kernel (BSK) can be approximated by a quartic expression when the imaging field is small. This BSK model for monoenergetic, 100 keV x-rays has been verified on homogeneous cube and cylinder water phantoms by comparison with the exact implementation of the KN formula. For heterogeneous media, the water-equivalent length of a BSK was acquired with an improved Siddon's ray-tracing algorithm, which was also used in calculating pre- and post-scattering attenuation. To include the electron binding effect for scattering of low-kV photons, the mean corresponding scattering angle is determined from the effective point of scattered photons of a BSK. The behavior of polyenergetic x-rays was also investigated for 120 kV x-rays incident on a sandwiched infinite heterogeneous slab phantom, with the electron binding effect incorporated. Exact computations and Monte Carlo simulations were performed for comparison, using the EGSnrc code package. Results: By reducing the 3D volumetric target (O(n^3)) to 2D pencil-beams (O(n^2)), the computational expense is generally lowered by a factor of n, which our experience verifies. The scatter distribution on a flat detector shows high agreement between the analytic BSK model and exact calculations. The pixel-to-pixel differences are within (-2%, 2%) for the homogeneous cube and cylinder phantoms and within (0, 6%) for the heterogeneous slab phantom. However, the Monte Carlo simulation shows increasing deviation of the BSK model toward the detector periphery. Conclusion: The proposed BSK model, accommodating polyenergetic x-rays and the electron binding effect at low kV, shows great potential in efficiently estimating the first
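The Klein-Nishina differential cross-section at the heart of the BSK model can be evaluated directly; the free-electron form below omits the binding correction the abstract discusses:

```python
import math

R_E = 2.8179403262e-15   # classical electron radius (m)
MEC2 = 510.998950        # electron rest energy (keV)

def klein_nishina(e_kev, theta):
    """Klein-Nishina differential cross-section dsigma/dOmega (m^2/sr) for
    Compton scattering off a free electron; e_kev is the incident photon
    energy and theta the scattering angle."""
    k = e_kev / MEC2
    ratio = 1.0 / (1.0 + k * (1.0 - math.cos(theta)))   # scattered/incident energy E'/E
    return 0.5 * R_E ** 2 * ratio ** 2 * (ratio + 1.0 / ratio - math.sin(theta) ** 2)

# Forward scattering is energy-independent: dsigma/dOmega -> r_e^2 as theta -> 0
assert abs(klein_nishina(100.0, 0.0) - R_E ** 2) < 1e-35
# At 100 keV, backscatter is weaker than forward scatter
assert klein_nishina(100.0, math.pi) < klein_nishina(100.0, 0.0)
```

Integrating this kernel point-to-point along each interaction line, with attenuation factors on both legs, is the computation the quartic pencil-beam approximation is designed to shortcut.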
A fast neighbor joining method.
Li, J F
2015-01-01
With the rapid development of sequencing technologies, an increasing number of sequences are available for evolutionary tree reconstruction. Although neighbor joining is regarded as the most popular and fastest evolutionary tree reconstruction method [its time complexity is O(n^3), where n is the number of sequences], it is not sufficiently fast to infer evolutionary trees containing more than a few hundred sequences. To increase the speed of neighbor joining, we herein propose FastNJ, a fast implementation of neighbor joining, which was motivated by RNJ and FastJoin, two improved versions of conventional neighbor joining. The main difference between FastNJ and conventional neighbor joining is that, in the former, many pairs of nodes selected by the rule used in RNJ are joined in each iteration. In theory, the time complexity of FastNJ can reach O(n^2) in the best cases. Experimental results show that FastNJ yields a significant increase in speed compared to RNJ and conventional neighbor joining with a minimal loss of accuracy. PMID:26345805
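The pair-selection rule these variants accelerate is the classical neighbour-joining Q-criterion; a minimal sketch of one selection step (FastNJ's batched joining of many pairs per pass is not reproduced here):

```python
def nj_pick_pair(D):
    """One selection step of classical neighbor joining: build the Q matrix
    Q[i][j] = (n - 2) * D[i][j] - r_i - r_j, where r_i is the row sum of the
    distance matrix D, and return the pair of taxa minimizing it."""
    n = len(D)
    r = [sum(row) for row in D]
    best, pair = float("inf"), None
    for i in range(n):
        for j in range(i + 1, n):
            q = (n - 2) * D[i][j] - r[i] - r[j]
            if q < best:
                best, pair = q, (i, j)
    return pair

# Additive toy distance matrix on 4 taxa: the cherries are (0,1) and (2,3)
D = [[0, 2, 7, 7],
     [2, 0, 7, 7],
     [7, 7, 0, 2],
     [7, 7, 2, 0]]
assert nj_pick_pair(D) in {(0, 1), (2, 3)}
```

Conventional NJ repeats this O(n²) scan after every single join, giving O(n³) overall; joining many independently selected pairs per pass, as FastNJ does, is what pushes the best case toward O(n²).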
7 CFR 94.303 - Analytical methods.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 3 2011-01-01 2011-01-01 false Analytical methods. 94.303 Section 94.303 Agriculture... POULTRY AND EGG PRODUCTS Processed Poultry Products § 94.303 Analytical methods. The analytical methods... latest edition of the Official Methods of Analysis of AOAC INTERNATIONAL, Suite 500, 481 North...
7 CFR 98.4 - Analytical methods.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 3 2010-01-01 2010-01-01 false Analytical methods. 98.4 Section 98.4 Agriculture....4 Analytical methods. (a) The majority of analytical methods used by the USDA laboratories to perform analyses of meat, meat food products and MRE's are listed as follows: (1) Official Methods...
7 CFR 94.303 - Analytical methods.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 3 2010-01-01 2010-01-01 false Analytical methods. 94.303 Section 94.303 Agriculture... POULTRY AND EGG PRODUCTS Processed Poultry Products § 94.303 Analytical methods. The analytical methods... latest edition of the Official Methods of Analysis of AOAC INTERNATIONAL, Suite 500, 481 North...
7 CFR 93.4 - Analytical methods.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 3 2010-01-01 2010-01-01 false Analytical methods. 93.4 Section 93.4 Agriculture... PROCESSED FRUITS AND VEGETABLES Citrus Juices and Certain Citrus Products § 93.4 Analytical methods. (a) The majority of analytical methods for citrus products are found in the Official Methods of Analysis of...
7 CFR 94.303 - Analytical methods.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 3 2013-01-01 2013-01-01 false Analytical methods. 94.303 Section 94.303 Agriculture... POULTRY AND EGG PRODUCTS Processed Poultry Products § 94.303 Analytical methods. The analytical methods... latest edition of the Official Methods of Analysis of AOAC INTERNATIONAL, Suite 500, 481 North...
7 CFR 94.303 - Analytical methods.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 3 2012-01-01 2012-01-01 false Analytical methods. 94.303 Section 94.303 Agriculture... POULTRY AND EGG PRODUCTS Processed Poultry Products § 94.303 Analytical methods. The analytical methods... latest edition of the Official Methods of Analysis of AOAC INTERNATIONAL, Suite 500, 481 North...
7 CFR 93.4 - Analytical methods.
Code of Federal Regulations, 2013 CFR
2013-01-01
... PROCESSED FRUITS AND VEGETABLES Citrus Juices and Certain Citrus Products § 93.4 Analytical methods. (a) The majority of analytical methods for citrus products are found in the Official Methods of Analysis of AOAC...-2417. (b) Other analytical methods for citrus products may be used as approved by the AMS...
7 CFR 93.4 - Analytical methods.
Code of Federal Regulations, 2011 CFR
2011-01-01
... PROCESSED FRUITS AND VEGETABLES Citrus Juices and Certain Citrus Products § 93.4 Analytical methods. (a) The majority of analytical methods for citrus products are found in the Official Methods of Analysis of AOAC...-2417. (b) Other analytical methods for citrus products may be used as approved by the AMS...
7 CFR 93.4 - Analytical methods.
Code of Federal Regulations, 2014 CFR
2014-01-01
... PROCESSED FRUITS AND VEGETABLES Citrus Juices and Certain Citrus Products § 93.4 Analytical methods. (a) The majority of analytical methods for citrus products are found in the Official Methods of Analysis of AOAC...-2417. (b) Other analytical methods for citrus products may be used as approved by the AMS...
7 CFR 93.4 - Analytical methods.
Code of Federal Regulations, 2012 CFR
2012-01-01
... PROCESSED FRUITS AND VEGETABLES Citrus Juices and Certain Citrus Products § 93.4 Analytical methods. (a) The majority of analytical methods for citrus products are found in the Official Methods of Analysis of AOAC...-2417. (b) Other analytical methods for citrus products may be used as approved by the AMS...
7 CFR 94.103 - Analytical methods.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 3 2011-01-01 2011-01-01 false Analytical methods. 94.103 Section 94.103 Agriculture... POULTRY AND EGG PRODUCTS Voluntary Analyses of Egg Products § 94.103 Analytical methods. The analytical methods used by the Science and Technology Division laboratories to perform voluntary analyses for...
7 CFR 94.103 - Analytical methods.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 3 2010-01-01 2010-01-01 false Analytical methods. 94.103 Section 94.103 Agriculture... POULTRY AND EGG PRODUCTS Voluntary Analyses of Egg Products § 94.103 Analytical methods. The analytical methods used by the Science and Technology Division laboratories to perform voluntary analyses for...
7 CFR 94.103 - Analytical methods.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 3 2013-01-01 2013-01-01 false Analytical methods. 94.103 Section 94.103 Agriculture... POULTRY AND EGG PRODUCTS Voluntary Analyses of Egg Products § 94.103 Analytical methods. The analytical methods used by the Science and Technology Division laboratories to perform voluntary analyses for...
7 CFR 94.103 - Analytical methods.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 3 2012-01-01 2012-01-01 false Analytical methods. 94.103 Section 94.103 Agriculture... POULTRY AND EGG PRODUCTS Voluntary Analyses of Egg Products § 94.103 Analytical methods. The analytical methods used by the Science and Technology Division laboratories to perform voluntary analyses for...
Fast gas chromatography for pesticide residues analysis using analyte protectants.
Kirchner, Michal; Húsková, Renáta; Matisová, Eva; Mocák, Ján
2008-04-01
Fast GC-MS with narrow-bore columns, combined with an effective sample preparation technique (the QuEChERS method), was used to evaluate various calibration approaches in pesticide residue analysis. To compare the performance of analyte protectants (APs) with matrix-matched standards, calibration curves of selected pesticides were examined in terms of linearity of response, repeatability of measurements, and achieved limits of quantification, using the following calibration standards in the concentration range 1-500 ng mL(-1) (equivalent sample concentration 1-500 microg kg(-1)): standards in neat solvent (acetonitrile) with/without addition of APs, and matrix-matched standards with/without addition of APs. Results obtained with APs are in good agreement with matrix-matched standards. To evaluate errors in the determination of concentration, synthetic samples with pesticides at the 50 ng mL(-1) (50 microg kg(-1)) concentration level were analyzed and quantified against the standards given above. For less troublesome pesticides, very good estimates of concentration were obtained using APs, while for more troublesome pesticides such as methidathion, malathion, phosalone and deltamethrin, significant overestimation reaching up to 80% occurred. According to the presented results, APs can be advantageously used for the determination of "easy" pesticides. For "difficult" pesticides, an alternative calibration approach is required for samples potentially violating MRLs. An example of a real sample measurement is shown. The use of internal standards (triphenylphosphate (TPP) and heptachlor (HEPT)) for peak-area normalization is also discussed in terms of repeatability of measurements and the quantitative data obtained. In contrast to HEPT, TPP normalization provided slightly better results than the use of absolute peak areas. PMID:17920613
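The matrix-induced overestimation described above can be illustrated with a minimal calibration sketch: fit a line through calibration standards, back-calculate an unknown, and compute the relative error. The standards, responses, and the 20% signal enhancement below are hypothetical illustration values, not data from the study.

```python
import numpy as np

# Hypothetical calibration standards (ng/mL) and detector responses;
# solvent, AP-protected, and matrix-matched series would each be
# fitted the same way.
conc = np.array([1.0, 5.0, 10.0, 50.0, 100.0, 500.0])
resp = 2.0 * conc + 3.0          # idealized, perfectly linear response

slope, intercept = np.polyfit(conc, resp, 1)

def quantify(signal):
    """Back-calculate concentration from the fitted calibration line."""
    return (signal - intercept) / slope

# Synthetic 50 ng/mL sample whose signal is enhanced by 20% through
# matrix effects, mimicking the overestimation seen for "difficult"
# pesticides when the calibration does not account for the matrix.
true_conc = 50.0
measured = quantify(1.20 * (2.0 * true_conc + 3.0))
error_pct = 100.0 * (measured - true_conc) / true_conc
```

Here the 20% signal enhancement translates into a 20.6% overestimated concentration, because the calibration intercept does not scale with the enhancement.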
2013-01-01
Background The aim of this paper was the validation of a new analytical method based on high-resolution continuum source flame atomic absorption spectrometry for the fast-sequential determination of several hazardous/priority hazardous metals (Ag, Cd, Co, Cr, Cu, Ni, Pb and Zn) in soil after microwave-assisted digestion in aqua regia. Determinations were performed on the ContrAA 300 (Analytik Jena) air-acetylene flame spectrometer equipped with a xenon short-arc lamp as a continuum radiation source for all elements, a double monochromator consisting of a prism pre-monochromator and an echelle grating monochromator, and a charge-coupled device as detector. For validation, a method-performance study was conducted involving the establishment of the analytical performance of the new method (limits of detection and quantification, precision and accuracy). Moreover, the Bland and Altman statistical method was used in analyzing the agreement between the proposed assay and inductively coupled plasma optical emission spectrometry as the standardized method for multielemental determination in soil. Results The limits of detection in soil sample (3σ criterion) in the high-resolution continuum source flame atomic absorption spectrometry method were (mg/kg): 0.18 (Ag), 0.14 (Cd), 0.36 (Co), 0.25 (Cr), 0.09 (Cu), 1.0 (Ni), 1.4 (Pb) and 0.18 (Zn), close to those in inductively coupled plasma optical emission spectrometry: 0.12 (Ag), 0.05 (Cd), 0.15 (Co), 1.4 (Cr), 0.15 (Cu), 2.5 (Ni), 2.5 (Pb) and 0.04 (Zn). Accuracy was checked by analyzing 4 certified reference materials and a good agreement at the 95% confidence interval was found for both methods, with recoveries in the range of 94–106% in atomic absorption and 97–103% in optical emission. Repeatability found by analyzing real soil samples was in the range 1.6–5.2% in atomic absorption, similar to the 1.9–6.1% in optical emission spectrometry. The Bland and Altman method showed no statistically significant difference
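The Bland and Altman agreement analysis used above reduces to the mean bias of the paired differences and the 95% limits of agreement around it. A minimal sketch follows; the paired lead results are hypothetical illustration values, not measurements from the study.

```python
import numpy as np

def bland_altman(a, b):
    """Bland-Altman agreement statistics for two paired assays.

    Returns the mean bias of the paired differences and the 95%
    limits of agreement (bias +/- 1.96 * SD of the differences).
    """
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical paired Pb results (mg/kg): flame AAS vs. ICP-OES
aas = [12.1, 25.3, 40.2, 55.0, 70.4]
icpoes = [12.4, 24.9, 41.0, 54.2, 71.1]
bias, loa_low, loa_high = bland_altman(aas, icpoes)
```

Agreement is judged by whether the bias is negligible and nearly all differences fall within the limits of agreement, rather than by a correlation coefficient.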
Jagetic, Lydia J; Newhauser, Wayne D
2015-06-21
State-of-the-art radiotherapy treatment planning systems provide reliable estimates of the therapeutic radiation but are known to underestimate or neglect the stray radiation exposures. Most commonly, stray radiation exposures are reconstructed using empirical formulas or lookup tables. The purpose of this study was to develop the basic physics of a model capable of calculating the total absorbed dose both inside and outside of the therapeutic radiation beam for external beam photon therapy. The model was developed using measurements of total absorbed dose in a water-box phantom from a 6 MV medical linear accelerator to calculate dose profiles in both the in-plane and cross-plane direction for a variety of square field sizes and depths in water. The water-box phantom facilitated development of the basic physical aspects of the model. RMS discrepancies between measured and calculated total absorbed dose values in water were less than 9.3% for all fields studied. Computation times for 10 million dose points within a homogeneous phantom were approximately 4 min. These results suggest that the basic physics of the model are sufficiently simple, fast, and accurate to serve as a foundation for a variety of clinical and research applications, some of which may require that the model be extended or simplified based on the needs of the user. A potentially important advantage of a physics-based approach is that the model is more readily adaptable to a wide variety of treatment units and treatment techniques than with empirical models. PMID:26040833
Wilson, Lydia J; Newhauser, Wayne D
2015-01-01
State-of-the-art radiotherapy treatment planning systems provide reliable estimates of the therapeutic radiation but are known to underestimate or neglect the stray radiation exposures. Most commonly, stray radiation exposures are reconstructed using empirical formulas or lookup tables. The purpose of this study was to develop the basic physics of a model capable of calculating the total absorbed dose both inside and outside of the therapeutic radiation beam for external beam photon therapy. The model was developed using measurements of total absorbed dose in a water-box phantom from a 6 MV medical linear accelerator to calculate dose profiles in both the in-plane and cross-plane direction for a variety of square field sizes and depths in water. The water-box phantom facilitated development of the basic physical aspects of the model. RMS discrepancies between measured and calculated total absorbed dose values in water were less than 9.3% for all fields studied. Computation times for 10 million dose points within a homogeneous phantom were approximately 4 minutes. These results suggest that the basic physics of the model are sufficiently simple, fast, and accurate to serve as a foundation for a variety of clinical and research applications, some of which may require that the model be extended or simplified based on the needs of the user. A potentially important advantage of a physics-based approach is that the model is more readily adaptable to a wide variety of treatment units and treatment techniques than with empirical models. PMID:26040833
NASA Astrophysics Data System (ADS)
Jagetic, Lydia J.; Newhauser, Wayne D.
2015-06-01
State-of-the-art radiotherapy treatment planning systems provide reliable estimates of the therapeutic radiation but are known to underestimate or neglect the stray radiation exposures. Most commonly, stray radiation exposures are reconstructed using empirical formulas or lookup tables. The purpose of this study was to develop the basic physics of a model capable of calculating the total absorbed dose both inside and outside of the therapeutic radiation beam for external beam photon therapy. The model was developed using measurements of total absorbed dose in a water-box phantom from a 6 MV medical linear accelerator to calculate dose profiles in both the in-plane and cross-plane direction for a variety of square field sizes and depths in water. The water-box phantom facilitated development of the basic physical aspects of the model. RMS discrepancies between measured and calculated total absorbed dose values in water were less than 9.3% for all fields studied. Computation times for 10 million dose points within a homogeneous phantom were approximately 4 min. These results suggest that the basic physics of the model are sufficiently simple, fast, and accurate to serve as a foundation for a variety of clinical and research applications, some of which may require that the model be extended or simplified based on the needs of the user. A potentially important advantage of a physics-based approach is that the model is more readily adaptable to a wide variety of treatment units and treatment techniques than with empirical models.
7 CFR 91.23 - Analytical methods.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 3 2010-01-01 2010-01-01 false Analytical methods. 91.23 Section 91.23 Agriculture... SERVICES AND GENERAL INFORMATION Method Manuals § 91.23 Analytical methods. Most analyses are performed according to approved procedures described in manuals of standardized methodology. These standard...
40 CFR 141.89 - Analytical methods.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 22 2010-07-01 2010-07-01 false Analytical methods. 141.89 Section 141...) NATIONAL PRIMARY DRINKING WATER REGULATIONS Control of Lead and Copper § 141.89 Analytical methods. (a... shall be conducted with the methods in § 141.23(k)(1). (1) Analyses for alkalinity,...
7 CFR 93.13 - Analytical methods.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 3 2010-01-01 2010-01-01 false Analytical methods. 93.13 Section 93.13 Agriculture... PROCESSED FRUITS AND VEGETABLES Peanuts, Tree Nuts, Corn and Other Oilseeds § 93.13 Analytical methods... manuals: (a) Approved Methods of the American Association of Cereal Chemists (AACC), American...
40 CFR 141.704 - Analytical methods.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Federal Register approves this incorporation by reference in accordance with 5 U.S.C. 552(a) and 1 CFR... 40 Protection of Environment 22 2010-07-01 2010-07-01 false Analytical methods. 141.704 Section... Monitoring Requirements § 141.704 Analytical methods. (a) Cryptosporidium. Systems must analyze...
HTGR analytical methods and design verification
Neylan, A.J.; Northup, T.E.
1982-05-01
Analytical methods for the high-temperature gas-cooled reactor (HTGR) include development, update, verification, documentation, and maintenance of all computer codes for HTGR design and analysis. This paper presents selected nuclear, structural mechanics, seismic, and systems analytical methods related to the HTGR core. This paper also reviews design verification tests in the reactor core, reactor internals, steam generator, and thermal barrier.
Fast neutron imaging device and method
Popov, Vladimir; Degtiarenko, Pavel; Musatov, Igor V.
2014-02-11
A fast neutron imaging apparatus and method of constructing fast neutron radiography images, the apparatus including a neutron source and a detector that provides event-by-event acquisition of position and energy deposition, and optionally timing and pulse shape for each individual neutron event detected by the detector. The method for constructing fast neutron radiography images utilizes the apparatus of the invention.
Analytical methods for solving the Boltzmann equation
NASA Astrophysics Data System (ADS)
Struminskii, V. V.
The principal analytical methods for solving the Boltzmann equation are reviewed, and a very general solution is proposed. The method makes it possible to obtain a solution to the Cauchy problem for the nonlinear Boltzmann equation and thus determine the applicability regions for the various analytical methods. The method proposed here also makes it possible to demonstrate that Hilbert's theorem of macroscopic causality does not apply and Hilbert's paradox does not exist.
Method of identifying analyte-binding peptides
Kauvar, L.M.
1990-10-16
A method for affinity chromatography or adsorption of a designated analyte utilizes a paralog as the affinity partner. The immobilized paralog can be used in purification or analysis of the analyte; the paralog can also be used as a substitute for antibody in an immunoassay. The paralog is identified by screening candidate peptide sequences of 4--20 amino acids for specific affinity to the analyte. 5 figs.
Method of identifying analyte-binding peptides
Kauvar, Lawrence M.
1990-01-01
A method for affinity chromatography or adsorption of a designated analyte utilizes a paralog as the affinity partner. The immobilized paralog can be used in purification or analysis of the analyte; the paralog can also be used as a substitute for antibody in an immunoassay. The paralog is identified by screening candidate peptide sequences of 4-20 amino acids for specific affinity to the analyte.
Method and apparatus for detecting an analyte
Allendorf, Mark D.; Hesketh, Peter J.
2011-11-29
We describe the use of coordination polymers (CP) as coatings on microcantilevers for the detection of chemical analytes. CP exhibit changes in unit cell parameters upon adsorption of analytes, which will induce a stress in a static microcantilever upon which a CP layer is deposited. We also describe fabrication methods for depositing CP layers on surfaces.
Analytical Methods in Mesoscopic Systems
NASA Astrophysics Data System (ADS)
Mason, Douglas Joseph
The prospect of designing technologies around the quantum behavior of mesoscopic devices is enticing. This thesis presents several tools to facilitate the process of calculating and analyzing the quantum properties of such devices - resonance, boundary conditions, and the quantum-classical correspondence are major themes that we study with these tools. In Chapter 1, we begin by laying the groundwork for the tools that follow by defining the Hamiltonian, the Green's function, the scattering matrix, and the Landauer formalism for ballistic conduction. In Chapter 2, we present an efficient and easy-to-implement algorithm called the Outward Wave Algorithm, which calculates the conductance function and scattering density matrix when a system is coupled to an environment in a variety of geometries and contexts beyond the simple two-lead schematic. In Chapter 3, we present a unique geometry and numerical method called the Boundary Reflection Matrix that allows us to calculate the full scattering matrix from arbitrary boundaries of a lattice system, and introduce the phenomenon of internal Bragg diffraction. In Chapter 4, we present a new method for visualizing wavefunctions called the Husimi map, which uses measurement by coherent states to form a bridge between the quantum flux operator and semiclassics. We extend the formalism from Chapter 4 to lattice systems in Chapter 5, and comment on our results in Chapter 3 and other work in the literature. These three tools - the Outward Wave Algorithm, the Boundary Reflection Matrix, and the Husimi map - work together to throw light on our interpretation of resonance and scattering in quantum systems, effectively codifying the expertise developed in semiclassics over the past few decades in an efficient and robust package. The data and images that they make available promise to help design better technologies based on quantum scattering.
Fast quench reactor and method
Detering, Brent A.; Donaldson, Alan D.; Fincke, James R.; Kong, Peter C.
2002-01-01
A fast quench reactor includes a reactor chamber having a high temperature heating means such as a plasma torch at its inlet and a restrictive convergent-divergent nozzle at its outlet end. Reactants are injected into the reactor chamber. The resulting heated gaseous stream is then rapidly cooled by passage through the nozzle. This "freezes" the desired end product(s) in the heated equilibrium reaction stage.
Fast quench reactor and method
Detering, Brent A.; Donaldson, Alan D.; Fincke, James R.; Kong, Peter C.
1998-01-01
A fast quench reactor includes a reactor chamber having a high temperature heating means such as a plasma torch at its inlet and a restrictive convergent-divergent nozzle at its outlet end. Reactants are injected into the reactor chamber. The resulting heated gaseous stream is then rapidly cooled by passage through the nozzle. This "freezes" the desired end product(s) in the heated equilibrium reaction stage.
Fast quench reactor and method
Detering, Brent A.; Donaldson, Alan D.; Fincke, James R.; Kong, Peter C.
2002-09-24
A fast quench reactor includes a reactor chamber having a high temperature heating means such as a plasma torch at its inlet and a restrictive convergent-divergent nozzle at its outlet end. Reactants are injected into the reactor chamber. The resulting heated gaseous stream is then rapidly cooled by passage through the nozzle. This "freezes" the desired end product(s) in the heated equilibrium reaction stage.
Fast quench reactor and method
Detering, B.A.; Donaldson, A.D.; Fincke, J.R.; Kong, P.C.
1998-05-12
A fast quench reactor includes a reactor chamber having a high temperature heating means such as a plasma torch at its inlet and a restrictive convergent-divergent nozzle at its outlet end. Reactants are injected into the reactor chamber. The resulting heated gaseous stream is then rapidly cooled by passage through the nozzle. This "freezes" the desired end product(s) in the heated equilibrium reaction stage. 7 figs.
Analytical Methods for Trace Metals. Training Manual.
ERIC Educational Resources Information Center
Office of Water Program Operations (EPA), Cincinnati, OH. National Training and Operational Technology Center.
This training manual presents material on the theoretical concepts involved in the methods listed in the Federal Register as approved for determination of trace metals. Emphasis is on laboratory operations. This course is intended for chemists and technicians with little or no experience in analytical methods for trace metals. Students should have…
Methods of Analyte Concentration in a Capillary
NASA Astrophysics Data System (ADS)
Kubalczyk, Paweł; Bald, Edward
Online sample concentration techniques in capillary electrophoresis separations have grown rapidly in popularity over the past few years. During the concentration process, diluted analytes in a long injected sample zone are concentrated into a short zone; the analytes are then separated and detected. A large number of contributions have been published on this subject, proposing many names for procedures that utilize the same concentration principles. This chapter presents a unified view of concentration, describes the basic principles utilized, and lists recognized current operational procedures. Several online concentration methods based on velocity-gradient techniques are described, in which the electrophoretic velocities of the analyte molecules are manipulated by field amplification, sweeping, and isotachophoretic migration, resulting in online concentration of the analyte.
7 CFR 91.23 - Analytical methods.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 3 2013-01-01 2013-01-01 false Analytical methods. 91.23 Section 91.23 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Standards... Public Health Association (APHA), the American Water Works Association (AWWA) and the Water...
7 CFR 91.23 - Analytical methods.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 3 2014-01-01 2014-01-01 false Analytical methods. 91.23 Section 91.23 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Standards... Public Health Association (APHA), the American Water Works Association (AWWA) and the Water...
7 CFR 94.4 - Analytical methods.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 3 2013-01-01 2013-01-01 false Analytical methods. 94.4 Section 94.4 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Standards... Association (APHA), the American Water Works Association (AWWA) and the Water Pollution Control...
7 CFR 91.23 - Analytical methods.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 3 2012-01-01 2012-01-01 false Analytical methods. 91.23 Section 91.23 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE (CONTINUED) COMMODITY LABORATORY TESTING PROGRAMS SERVICES AND GENERAL INFORMATION...
7 CFR 94.4 - Analytical methods.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 3 2012-01-01 2012-01-01 false Analytical methods. 94.4 Section 94.4 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE (CONTINUED) COMMODITY LABORATORY TESTING PROGRAMS POULTRY AND EGG PRODUCTS Mandatory...
7 CFR 94.4 - Analytical methods.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 3 2011-01-01 2011-01-01 false Analytical methods. 94.4 Section 94.4 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Standards... Association (APHA), the American Water Works Association (AWWA) and the Water Pollution Control...
7 CFR 91.23 - Analytical methods.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 3 2011-01-01 2011-01-01 false Analytical methods. 91.23 Section 91.23 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Standards... Public Health Association (APHA), the American Water Works Association (AWWA) and the Water...
7 CFR 94.4 - Analytical methods.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 3 2014-01-01 2014-01-01 false Analytical methods. 94.4 Section 94.4 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Standards... Association (APHA), the American Water Works Association (AWWA) and the Water Pollution Control...
7 CFR 93.13 - Analytical methods.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 3 2013-01-01 2013-01-01 false Analytical methods. 93.13 Section 93.13 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE (CONTINUED) COMMODITY LABORATORY TESTING PROGRAMS PROCESSED FRUITS AND VEGETABLES...
40 CFR 141.89 - Analytical methods.
Code of Federal Regulations, 2013 CFR
2013-07-01
....89 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS Control of Lead and Copper § 141.89 Analytical methods. (a... 136 of this title. This need only be accomplished if the laboratory will be processing source...
7 CFR 94.4 - Analytical methods.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 3 2010-01-01 2010-01-01 false Analytical methods. 94.4 Section 94.4 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE (CONTINUED) COMMODITY LABORATORY TESTING PROGRAMS POULTRY AND EGG PRODUCTS Mandatory...
Surface Analytical Methods Applied to Magnesium Corrosion.
Dauphin-Ducharme, Philippe; Mauzeroll, Janine
2015-08-01
Understanding magnesium alloy corrosion is of primary concern, and scanning probe techniques are becoming key analytical characterization methods for that purpose. This Feature presents recent trends in this field as the progressive substitution of steel and aluminum car components by magnesium alloys to reduce the overall weight of vehicles is an irreversible trend. PMID:25826577
Transcutaneous Analyte Measuring Methods (TAMM), phase 2
NASA Astrophysics Data System (ADS)
Schlager, Kenneth J.
1991-11-01
The primary objectives of the first quarter of Phase 2 TAMM were the following: the design of a near infrared (NIR)-800 photodiode array spectrometer, two of which would be used in clinical testing during 1992; the development of advanced pattern recognition software for analyzing the data collected with the spectrometer; and the establishment of an ongoing, internal test program with the B1-102 infrared analyzer. The major effort during the first three months of the project was the development of the analytical software NETGEN. NETGEN is a set of analytical programs that combine the best features of neural networks and genetic algorithms. Artificial neural networks (ANNs) are a form of distributed parallel processing of information that attempts to simulate the human brain. For application in TAMM, ANNs are an alternative to previous pattern recognition methods used for predicting blood analyte concentrations from NIR spectra.
Analytic sequential methods for detecting network intrusions
NASA Astrophysics Data System (ADS)
Chen, Xinjia; Walker, Ernest
2014-05-01
In this paper, we propose an analytic sequential method for detecting port-scan attackers, which routinely perform random "portscans" of IP addresses to find vulnerable servers to compromise. In addition to rigorously controlling the probability of falsely implicating benign remote hosts as malicious, our method performs significantly faster than other current solutions. We have developed explicit formulae for quick determination of the parameters of the new detection algorithm.
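Sequential detection schemes of this kind are commonly built on Wald's sequential probability ratio test, which accumulates a log-likelihood ratio per observation and stops as soon as it crosses thresholds set by the allowed error probabilities. The sketch below, with assumed per-probe success probabilities and error bounds, illustrates the general idea rather than the paper's specific algorithm.

```python
import math

def sprt(observations, p0=0.8, p1=0.2, alpha=0.01, beta=0.01):
    """Wald's sequential probability ratio test (illustrative sketch).

    H0: benign host (per-probe connection-success probability p0)
    H1: scanner     (per-probe connection-success probability p1)
    alpha and beta bound the false-positive and false-negative rates.
    """
    upper = math.log((1 - beta) / alpha)   # cross => accept H1 (scanner)
    lower = math.log(beta / (1 - alpha))   # cross => accept H0 (benign)
    llr = 0.0
    for n, success in enumerate(observations, 1):
        if success:
            llr += math.log(p1 / p0)
        else:
            llr += math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "scanner", n
        if llr <= lower:
            return "benign", n
    return "undecided", len(observations)

# A streak of failed probes drives the ratio up and triggers a quick verdict.
verdict, n_probes = sprt([False] * 10)
```

The appeal of the sequential formulation is that hosts are classified after only a handful of observations while the error probabilities stay explicitly bounded.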
An analytical method of estimating turbine performance
NASA Technical Reports Server (NTRS)
Kochendorfer, Fred D; Nettles, J Cary
1949-01-01
A method is developed by which the performance of a turbine over a range of operating conditions can be analytically estimated from the blade angles and flow areas. In order to use the method, certain coefficients that determine the weight flow and the friction losses must be approximated. The method is used to calculate the performance of the single-stage turbine of a commercial aircraft gas-turbine engine and the calculated performance is compared with the performance indicated by experimental data. For the turbine of the typical example, the assumed pressure losses and turning angles give a calculated performance that represents the trends of the experimental performance with reasonable accuracy. The exact agreement between analytical performance and experimental performance is contingent upon the proper selection of a blading-loss parameter.
An Analytical Method of Estimating Turbine Performance
NASA Technical Reports Server (NTRS)
Kochendorfer, Fred D; Nettles, J Cary
1948-01-01
A method is developed by which the performance of a turbine over a range of operating conditions can be analytically estimated from the blade angles and flow areas. In order to use the method, certain coefficients that determine the weight flow and friction losses must be approximated. The method is used to calculate the performance of the single-stage turbine of a commercial aircraft gas-turbine engine and the calculated performance is compared with the performance indicated by experimental data. For the turbine of the typical example, the assumed pressure losses and turning angles give a calculated performance that represents the trends of the experimental performance with reasonable accuracy. The exact agreement between analytical performance and experimental performance is contingent upon the proper selection of the blading-loss parameter. A variation of blading-loss parameter from 0.3 to 0.5 includes most of the experimental data from the turbine investigated.
The multigrid method: Fast relaxation
NASA Technical Reports Server (NTRS)
South, J. C., Jr.; Brandt, A.
1976-01-01
A multi-level grid method was studied as a possible means of accelerating convergence in relaxation calculations for transonic flows. The method employs a hierarchy of grids, ranging from very coarse (e.g., 4 x 2 mesh cells) to fine (e.g., 64 x 32); the coarser grids are used to diminish the magnitude of the smooth part of the residuals, ideally with far less total work than would be required with optimal iterations on the finest grid. To date, the method has been applied quite successfully to the solution of the transonic small-disturbance equation for the velocity potential in conservation form. Nonlifting transonic flow past a parabolic-arc airfoil is the example studied, with meshes of both constant and variable step size.
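The coarse-grid acceleration idea described above can be sketched with a minimal two-grid cycle for a 1D model problem: relaxation damps the oscillatory error on the fine grid, while the residual equation solved on a 2x coarser grid removes the smooth error that relaxation alone handles slowly. This is an illustrative sketch of the general multigrid principle, not the transonic small-disturbance solver itself.

```python
import numpy as np

def jacobi(u, f, h, sweeps, w=2.0 / 3.0):
    """Weighted-Jacobi relaxation for -u'' = f, homogeneous Dirichlet BCs."""
    for _ in range(sweeps):
        u[1:-1] += w * (0.5 * (u[:-2] + u[2:] + h * h * f[1:-1]) - u[1:-1])
    return u

def two_grid(u, f, h):
    """One coarse-grid-correction cycle: pre-smooth, restrict the residual
    to a 2x coarser grid, solve there, prolong the correction, post-smooth."""
    u = jacobi(u, f, h, 3)                                      # pre-smoothing
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / h**2   # fine residual
    rc = np.zeros((len(u) + 1) // 2)                            # full-weighting restriction
    rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
    m = len(rc) - 2                                             # direct coarse solve
    A = (2 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)) / (2 * h) ** 2
    ec = np.zeros_like(rc)
    ec[1:-1] = np.linalg.solve(A, rc[1:-1])
    e = np.interp(np.arange(len(u)), np.arange(0, len(u), 2), ec)  # prolongation
    return jacobi(u + e, f, h, 3)                               # post-smoothing

# Model problem: -u'' = pi^2 sin(pi x) on [0, 1], exact solution sin(pi x)
n = 65
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
f = np.pi**2 * np.sin(np.pi * x)
u = np.zeros(n)
for _ in range(20):
    u = two_grid(u, f, h)
err = float(np.max(np.abs(u - np.sin(np.pi * x))))
```

In a full multigrid method the coarse problem is itself solved recursively on still coarser grids rather than directly, which is what keeps the total work proportional to the finest-grid size.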
FAST TRACK COMMUNICATION: Uniqueness of static black holes without analyticity
NASA Astrophysics Data System (ADS)
Chruściel, Piotr T.; Galloway, Gregory J.
2010-08-01
We show that the hypothesis of analyticity in the uniqueness theory of vacuum, or electrovacuum, static black holes is not needed. More generally, we show that prehorizons covering a closed set cannot occur in well-behaved domains of outer communications.
Fast and accurate analytical model to solve inverse problem in SHM using Lamb wave propagation
NASA Astrophysics Data System (ADS)
Poddar, Banibrata; Giurgiutiu, Victor
2016-04-01
Lamb wave propagation is at the center of attention of researchers for structural health monitoring (SHM) of thin-walled structures. This is because Lamb wave modes are natural modes of wave propagation in these structures, traveling long distances without much attenuation, which brings the prospect of monitoring large structures with few sensors/actuators. However, the problem of damage detection and identification is an "inverse problem" in which we do not have the luxury of knowing the exact mathematical model of the system. The problem is made more challenging by the confounding factors of statistical variation of the material and geometric properties, and typically it may also be ill-posed. Because of all these complexities, direct solution of the problem of damage detection and identification in SHM is impossible. Therefore an indirect method using the solution of the "forward problem" is popular for solving the "inverse problem", and this requires a fast forward-problem solver. Because of the complexities involved in the forward problem of scattering of Lamb waves from damage, researchers rely primarily on numerical techniques such as FEM, BEM, etc. But these methods are slow and practically impossible to use in structural health monitoring. We have developed a fast and accurate analytical forward-problem solver for this purpose. This solver, CMEP (complex modes expansion and vector projection), can simulate scattering of Lamb waves from all types of damage in thin-walled structures quickly and accurately to assist the inverse-problem solver.
Secondary waste minimization in analytical methods
Green, D.W.; Smith, L.L.; Crain, J.S.; Boparai, A.S.; Kiely, J.T.; Yaeger, J.S.; Schilling, J.B.
1995-07-01
The characterization phase of site remediation is an important and costly part of the process. Because toxic solvents and other hazardous materials are used in common analytical methods, characterization is also a source of new waste, including mixed waste. Alternative analytical methods can reduce the volume or form of hazardous waste produced either in the sample preparation step or in the measurement step. The authors are examining alternative methods in the areas of inorganic, radiological, and organic analysis. For determining inorganic constituents, alternative methods were studied for sample introduction into inductively coupled plasma spectrometers. Figures of merit for the alternative methods, as well as their associated waste volumes, were compared with the conventional approaches. In the radiological area, the authors are comparing conventional methods for gross α/β measurements of soil samples to an alternative method that uses high-pressure microwave dissolution. For determination of organic constituents, microwave-assisted extraction was studied for RCRA-regulated semivolatile organics in a variety of solid matrices, including spiked samples in blank soil; polynuclear aromatic hydrocarbons in soils, sludges, and sediments; and semivolatile organics in soil. Extraction efficiencies were determined under varying conditions of time, temperature, microwave power, moisture content, and extraction solvent. Solvent usage was cut from the 300 mL used in conventional extraction methods to about 30 mL. Extraction results varied from one matrix to another. In most cases, the microwave-assisted extraction technique was as efficient as the more common Soxhlet or sonication extraction techniques.
Directory of Analytical Methods, Department 1820
Whan, R.E.
1986-01-01
The Materials Characterization Department performs chemical, physical, and thermophysical analyses in support of programs throughout the Laboratories. The department has a wide variety of techniques and instruments staffed by experienced personnel available for these analyses, and we strive to maintain near state-of-the-art technology through continued updates. We have prepared this Directory of Analytical Methods in order to acquaint you with our capabilities and to help you identify personnel who can assist with your analytical needs. The descriptions of the various capabilities are requester-oriented and have been limited in length and detail. Emphasis has been placed on applications and limitations, with notations of estimated analysis time and alternative or related techniques. A short, simplified discussion of underlying principles is also presented, along with references if more detail is desired. The contents of this document have been organized in the order: bulk analysis, microanalysis, surface analysis, and optical and thermal property measurements.
A pragmatic overview of fast multipole methods
Strickland, J.H.; Baty, R.S.
1995-12-01
A number of physics problems can be modeled by a set of N elements which have pair-wise interactions with one another. A direct solution technique requires computational effort which is O(N²). Fast multipole methods (FMM) have been widely used in recent years to obtain solutions to these problems requiring a computational effort of only O(N ln N) or O(N). In this paper we present an overview of several variations of the fast multipole method along with examples of its use in solving a variety of physical problems.
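The pairwise-interaction baseline the abstract refers to is easy to make concrete. The sketch below (illustrative only; the function name and the unit Coulomb-like kernel are our own choices, not from the paper) evaluates all N(N−1) interactions directly, which is exactly the O(N²) cost that FMM reduces to O(N ln N) or O(N):

```python
import numpy as np

def direct_potential(pos, q):
    """Direct O(N^2) evaluation of pairwise 1/r potentials.

    pos: (N, 3) particle coordinates; q: (N,) source strengths.
    Returns the (N,) potential at each particle due to all others.
    This is the brute-force baseline that fast multipole methods accelerate.
    """
    n = len(q)
    phi = np.zeros(n)
    for i in range(n):
        for j in range(n):
            if i != j:
                r = np.linalg.norm(pos[i] - pos[j])
                phi[i] += q[j] / r
    return phi
```

FMM avoids this double loop by grouping well-separated particles into multipole expansions, so each particle interacts with a small number of aggregates instead of N−1 individual sources.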
Analytical methods for toxic gases from thermal degradation of polymers
NASA Technical Reports Server (NTRS)
Hsu, M.-T. S.
1977-01-01
Toxic gases evolved from the thermal oxidative degradation of synthetic or natural polymers in small laboratory chambers or in large-scale fire tests are measured by several different analytical methods. Gas detector tubes are used for fast on-site detection of suspect toxic gases. The infrared spectroscopic method is an excellent qualitative and quantitative analysis for some toxic gases. Permanent gases, such as carbon monoxide, carbon dioxide, methane, and ethylene, can be quantitatively determined by gas chromatography. Highly toxic and corrosive gases such as nitrogen oxides, hydrogen cyanide, hydrogen fluoride, hydrogen chloride, and sulfur dioxide should be passed into a scrubbing solution for subsequent analysis by either specific ion electrodes or spectrophotometric methods. Low-concentration toxic organic vapors can be concentrated in a cold trap and then analyzed by gas chromatography and mass spectrometry. The limitations of the different methods are discussed.
The greening of PCB analytical methods
Erickson, M.D.; Alvarado, J.S.; Aldstadt, J.H.
1995-12-01
Green chemistry incorporates waste minimization, pollution prevention and solvent substitution. The primary focus of green chemistry over the past decade has been within the chemical industry; adoption by routine environmental laboratories has been slow because regulatory standard methods must be followed. A related paradigm, microscale chemistry, has gained acceptance in undergraduate teaching laboratories, but has not been broadly applied to routine environmental analytical chemistry. We are developing green and microscale techniques for routine polychlorinated biphenyl (PCB) analyses as an example of the overall potential within the environmental analytical community. Initial work has focused on adaptation of commonly used routine EPA methods for soils and oils. Results of our method development and validation demonstrate that: (1) solvent substitution can achieve comparable results and eliminate environmentally less-desirable solvents; (2) microscale extractions can cut the scale of the analysis by at least a factor of ten; (3) we can better match the amount of sample used with the amount needed for the GC determination step; (4) the volume of waste generated can be cut by at least a factor of ten; and (5) costs are reduced significantly in apparatus, reagent consumption, and labor.
Delgado-Aparicio, L.; Tritz, K.; Kramer, T.; Stutman, D.; Finkenthal, M.; Hill, K.; Bitter, M.
2010-08-26
A new set of analytic formulae describes the transmission of soft X-ray (SXR) continuum radiation through a metallic foil for application to fast electron temperature measurements in fusion plasmas. This novel approach shows good agreement with numerical calculations over a wide range of plasma temperatures, in contrast with the solutions obtained when using a transmission approximated by a single Heaviside function [S. von Goeler, Rev. Sci. Instrum., 20, 599, (1999)]. The new analytic formulae can improve the interpretation of the experimental results and thus contribute to obtaining fast temperature measurements in between intermittent Thomson scattering data.
The use of the spectral method within the fast adaptive composite grid method
McKay, S.M.
1994-12-31
The use of efficient algorithms for the solution of partial differential equations has been sought for many years. The fast adaptive composite grid (FAC) method combines an efficient algorithm with high accuracy to obtain low-cost solutions to partial differential equations. The FAC method achieves fast solution by combining solutions on different grids with varying discretizations and using multigrid-like techniques. Recently, the continuous FAC (CFAC) method has been developed, which utilizes an analytic solution within a subdomain to iterate to a solution of the problem. This has been shown to achieve excellent results when the analytic solution can be found. The CFAC method will be extended to allow solvers which construct a function for the solution, e.g., spectral and finite element methods. In this discussion, spectral methods will be used to provide a fast, accurate solution to the partial differential equation. As spectral methods are more accurate than finite difference methods, the ensuing accuracy from this hybrid method outside of the subdomain will be investigated.
Analytical methods to assess nanoparticle toxicity.
Marquis, Bryce J; Love, Sara A; Braun, Katherine L; Haynes, Christy L
2009-03-01
During the past 20 years, improvements in nanoscale materials synthesis and characterization have given scientists great control over the fabrication of materials with features between 1 and 100 nm, unlocking many unique size-dependent properties and, thus, promising many new and/or improved technologies. Recent years have found the integration of such materials into commercial goods; a current estimate suggests there are over 800 nanoparticle-containing consumer products (The Project on Emerging Nanotechnologies Consumer Products Inventory, , accessed Oct. 2008), accounting for 147 billion USD in products in 2007 (Nanomaterials state of the market Q3 2008: stealth success, broad impact, Lux Research Inc., New York, NY, 2008). Despite this increase in the prevalence of engineered nanomaterials, there is little known about their potential impacts on environmental health and safety. The field of nanotoxicology has formed in response to this lack of information and resulted in a flurry of research studies. Nanotoxicology relies on many analytical methods for the characterization of nanomaterials as well as their impacts on in vitro and in vivo function. This review provides a critical overview of these techniques from the perspective of an analytical chemist, and is intended to be used as a reference for scientists interested in conducting nanotoxicological research as well as those interested in nanotoxicological assay development. PMID:19238274
Analytic Method for Computing Instrument Pointing Jitter
NASA Technical Reports Server (NTRS)
Bayard, David
2003-01-01
A new method of calculating the root-mean-square (rms) pointing jitter of a scientific instrument (e.g., a camera, radar antenna, or telescope) is introduced based on a state-space concept. In comparison with the prior method of calculating the rms pointing jitter, the present method involves significantly less computation. The rms pointing jitter of an instrument (the square root of the jitter variance shown in the figure) is an important physical quantity which impacts the design of the instrument, its actuators, controls, sensory components, and sensor-output-sampling circuitry. Using the Sirlin, San Martin, and Lucke definition of pointing jitter, the prior method of computing the rms pointing jitter involves a frequency-domain integral of a rational polynomial multiplied by a transcendental weighting function, necessitating the use of numerical-integration techniques. In practice, numerical integration complicates the problem of calculating the rms pointing error. In contrast, the state-space method provides exact analytic expressions that can be evaluated without numerical integration.
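The frequency-domain route that the state-space method replaces can be sketched as a numerical integral of a weighted power spectral density. This is a generic illustration, not Bayard's formulation: the `weight` argument merely stands in for the transcendental jitter weighting function, defaulted here to 1.

```python
import numpy as np

def rms_from_psd(freqs, psd, weight=None):
    """Numerically integrate a one-sided PSD (units^2/Hz) to get an rms value.

    freqs: (M,) frequencies in Hz; psd: (M,) spectral density samples;
    weight: optional (M,) weighting function (placeholder for the
    transcendental jitter weighting the abstract mentions).
    """
    w = np.ones_like(psd) if weight is None else weight
    y = psd * w
    # trapezoidal rule: variance = integral of the weighted PSD
    var = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(freqs))
    return np.sqrt(var)
```

The point of the abstract is that this kind of quadrature, with its discretization choices and error control, disappears entirely when the same quantity is obtained from exact state-space expressions.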
Analytical methods for optical remote sensing
Spellicy, R.L.
1997-12-31
Optical monitoring systems are very powerful because of their ability to see many compounds simultaneously as well as their ability to report results in real time. However, these strengths also present unique problems for analysis of the resulting data and validation of observed results. Today, many FTIR and UV-DOAS systems are in use. Some of these are manned systems supporting short-term tests, while others are totally unmanned systems which are expected to operate without intervention for weeks or months at a time. Developing analytical methods to support both the diversity of compounds and the diversity of applications is challenging. In this paper, the fundamental concepts of spectral analysis for IR/UV systems are presented. This is followed by examples of specific field data from both short-term measurement programs looking at unique sources and long-term unmanned monitoring systems looking at ambient air.
Analytical Methods for Immunogenetic Population Data
Mack, Steven J.; Gourraud, Pierre-Antoine; Single, Richard M.; Thomson, Glenys; Hollenbach, Jill A.
2014-01-01
In this chapter, we describe analyses commonly applied to immunogenetic population data, along with software tools that are currently available to perform those analyses. Where possible, we focus on tools that have been developed specifically for the analysis of highly polymorphic immunogenetic data. These analytical methods serve both as a means to examine the appropriateness of a dataset for testing a specific hypothesis, as well as a means of testing hypotheses. Rather than treat this chapter as a protocol for analyzing any population dataset, each researcher and analyst should first consider their data, the possible analyses, and any available tools in light of the hypothesis being tested. The extent to which the data and analyses are appropriate to each other should be determined before any analyses are performed. PMID:22665237
Pyrroloquinoline quinone: Metabolism and analytical methods
Smidt, C.R.
1990-01-01
Pyrroloquinoline quinone (PQQ) functions as a cofactor for bacterial oxidoreductases. Whether or not PQQ serves as a cofactor in higher plants and animals remains controversial. Nevertheless, strong evidence exists that PQQ has nutritional importance. In highly purified, chemically defined diets PQQ stimulates animal growth. Further, PQQ deprivation impairs connective tissue maturation, particularly when initiated in utero and throughout perinatal development. The study addresses two main objectives: (1) to elucidate basic aspects of the metabolism of PQQ in animals, and (2) to develop and improve existing analytical methods for PQQ. To study intestinal absorption of PQQ, ten mice were administered [¹⁴C]-PQQ per os. PQQ was readily absorbed (62%) in the lower intestine and was excreted by the kidney within 24 hours. Significant amounts of labeled PQQ were retained only by skin and kidney. Three approaches were taken to answer the question whether or not PQQ is synthesized by the intestinal microflora of mice. First, dietary antibiotics had no effect on fecal PQQ excretion. Then, no bacterial isolates could be identified that are known to synthesize PQQ. Last, cecal contents were incubated anaerobically with radiolabeled PQQ precursors with no label appearing in isolated PQQ. Thus, intestinal PQQ synthesis is unlikely. Analysis of PQQ in biological samples is problematic since PQQ forms adducts with nucleophilic compounds and binds to the protein fraction. Existing analytical methods are reviewed and a new approach is introduced that allows for detection of PQQ in animal tissue and foods. PQQ is freed from proteins by ion exchange chromatography, purified on activated silica cartridges, detected by a colorimetric redox-cycling assay, and identified by mass spectrometry. That compounds with the properties of PQQ may be nutritionally important offers interesting areas for future investigation.
Fast analytic simulation toolkit for generation of 4D PET-MR data from real dynamic MR acquisitions
NASA Astrophysics Data System (ADS)
Tsoumpas, C.; Buerger, C.; Mollet, P.; Marsden, P. K.
2011-09-01
This work introduces and evaluates a fast analytic simulation toolkit (FAST) for simulating dynamic PET-MR data from real MR acquisitions. Realistic radiotracer values are assigned to segmented MR images. PET data are generated using analytic forward-projections (including attenuation and Poisson statistics) with the reconstruction software STIR, which is also used to produce the PET images that are spatially and temporally correlated with the real MR images. The simulation is compared with the GATE Monte Carlo package, which has more accurate physical modelling but is 150 times slower than FAST for ten respiratory positions and 7000 times slower when the simulation is repeated. Region-of-interest mean values and coefficients of variation obtained with FAST and GATE, from 65 million and 104 million coincidences respectively, were compared. Agreement between the two simulation methods is good: in particular, the percentage differences of the mean values are 10% for the liver and 19% for the myocardium and a warm lesion. The utility of FAST is demonstrated with the simulation of multiple volunteers with different breathing patterns. The package will be used for studying the performance of reconstruction, motion correction and attenuation correction algorithms for dynamic simultaneous PET-MR data.
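Stripped to its essentials, the analytic pipeline the abstract describes is a forward projection with attenuation followed by Poisson sampling. The toy below projects only at 0° and 90° and is purely illustrative of that sequence of steps; the real toolkit uses STIR's full projectors, and all names here are our own.

```python
import numpy as np

rng = np.random.default_rng(0)

def project_with_noise(activity, mu, pixel=1.0):
    """Toy analytic PET forward model at 0 and 90 degrees.

    activity: 2D tracer image; mu: 2D attenuation map (per unit length);
    pixel: pixel size. Returns two noisy 1D projections (0 and 90 deg).
    """
    sinos = []
    for img, att in ((activity, mu), (activity.T, mu.T)):
        line_integrals = img.sum(axis=0) * pixel        # ideal projection
        attenuation = np.exp(-att.sum(axis=0) * pixel)  # PET attenuation factor
        expected = line_integrals * attenuation
        sinos.append(rng.poisson(expected))             # Poisson count statistics
    return sinos
```

Because everything here is a closed-form operation on the image, a simulator of this kind runs orders of magnitude faster than tracking individual photon histories, which is the trade-off the abstract quantifies against GATE.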
An overview of fast multipole methods
Strickland, J.H.; Baty, R.S.
1995-11-01
A number of physics problems may be cast in terms of Hilbert-Schmidt integral equations. In many cases, the integrals tend to be zero over a large portion of the domain of interest. All of the information is contained in compact regions of the domain, which renders their use very attractive from the standpoint of efficient numerical computation. Discrete representation of these integrals leads to a system of N elements which have pair-wise interactions with one another. A direct solution technique requires computational effort which is O(N²). Fast multipole methods (FMM) have been widely used in recent years to obtain solutions to these problems requiring a computational effort of only O(N ln N) or O(N). In this paper we present an overview of several variations of the fast multipole method along with examples of its use in solving a variety of physical problems.
40 CFR 161.180 - Enforcement analytical method.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 23 2010-07-01 2010-07-01 false Enforcement analytical method. 161.180 Section 161.180 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) PESTICIDE PROGRAMS... § 161.180 Enforcement analytical method. An analytical method suitable for enforcement purposes must...
Fractional tiers in fast multipole method calculations
NASA Astrophysics Data System (ADS)
White, Christopher A.; Head-Gordon, Martin
1996-08-01
One defining characteristic of the fast multipole calculation is the number of tiers (depth of tree) used to group the particles. For three dimensions, the standard boxing scheme restricts the number of lowest level boxes to be a power of eight. We present a method which through a simple scaling of the particle coordinates allows an arbitrary number of lowest level boxes. Consequently, one can better balance the near-field and far-field work by minimizing the variation in the number of particles per lowest level box from its optimal value. Test calculations show systems where this method gives a speedup approaching two times.
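The scaling idea can be sketched in a few lines. The helper below (the function name, the occupancy-target rule, and the return values are illustrative assumptions, not the paper's code) picks the number of lowest-level boxes from a target particles-per-box count rather than rounding to the nearest power of eight:

```python
def fractional_boxing(n_particles, n_per_box_target, box_edge=1.0):
    """Choose an arbitrary number of lowest-level FMM boxes.

    Instead of forcing the box count to 8^L, pick it so the average
    occupancy is near the optimal value, then scale coordinates so the
    (generally non-integer) number of boxes per edge tiles the domain.
    """
    n_boxes = max(1, round(n_particles / n_per_box_target))
    boxes_per_edge = n_boxes ** (1.0 / 3.0)  # need not be a power of 2
    scale = boxes_per_edge / box_edge        # factor applied to coordinates
    return n_boxes, boxes_per_edge, scale
```

With the standard scheme, 8000 particles at an optimal occupancy of 100 per box would be forced to either 512 or 4096 boxes; the fractional scheme lands on 80, keeping the near-field/far-field work balanced.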
Fast multipole methods for particle dynamics
Kurzak, J.; Pettitt, B. M.
2008-01-01
The growth of simulations of particle systems has been aided by advances in computer speed and algorithms. The adoption of O(N) algorithms to solve N-body simulation problems has been less rapid due to the fact that such scaling was only competitive for relatively large N. Our work seeks to find algorithmic modifications and practical implementations for intermediate values of N in typical use for molecular simulations. This article reviews fast multipole techniques for calculation of electrostatic interactions in molecular systems. The basic mathematics behind fast summations applied to long ranged forces is presented along with advanced techniques for accelerating the solution, including our most recent developments. The computational efficiency of the new methods facilitates both simulations of large systems as well as longer and therefore more realistic simulations of smaller systems. PMID:19194526
Novel applications of fast neutron interrogation methods
NASA Astrophysics Data System (ADS)
Gozani, Tsahi
1994-12-01
The development of non-intrusive inspection methods for contraband consisting primarily of carbon, nitrogen, oxygen, and hydrogen requires the use of fast neutrons. While most elements can be sufficiently well detected by the thermal neutron capture process, some important ones, e.g., carbon and in particular oxygen, cannot be detected by this process. Fortunately, fast neutrons, with energies above the threshold for inelastic scattering, stimulate relatively strong and specific gamma-ray lines from these elements. The main lines are 6.13 MeV for O, 4.43 MeV for C, and 5.11, 2.31, and 1.64 MeV for N. Accelerator-generated neutrons in the energy range of 7 to 15 MeV are being considered as interrogating radiations in a variety of non-intrusive inspection systems for contraband, from explosives to drugs and from coal to smuggled, dutiable goods. In some applications, mostly for inspection of small items such as luggage, the decision process involves a rudimentary imaging, akin to emission tomography, to obtain the localized concentration of various elements. This technique is called FNA — Fast Neutron Analysis. While this approach offers improvements over TNA (Thermal Neutron Analysis), it is not applicable to large objects such as shipping containers and trucks. For these challenging applications, a collimated beam of neutrons is rastered along the height of the moving object. In addition, the neutrons are generated in very narrow nanosecond pulses. The point of their interaction inside the object is determined by the time-of-flight (TOF) method, that is, by measuring the time elapsed from the neutron generation to the detection of the stimulated gamma rays. This technique, called PFNA (Pulsed Fast Neutron Analysis), thus directly provides the elemental, and by inference, the chemical composition of the material at every volume element (voxel) of the object. The various neutron-based techniques are briefly described below.
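The TOF localization can be sketched with a one-line kinematic model: the neutron travels in at speed v_n and the stimulated gamma returns at c, so the measured delay is t = d/v_n + d/c. The function below assumes an idealized geometry (gamma path length equal to neutron path length) and a nonrelativistic neutron; both are simplifications made for this sketch, not details from the article.

```python
import math

C = 2.998e8    # speed of light, m/s
M_N = 939.565  # neutron rest mass energy, MeV

def interaction_depth(t_total, e_mev=8.0):
    """Depth of a PFNA voxel from the total time of flight.

    t_total: seconds from neutron pulse to gamma detection;
    e_mev: neutron kinetic energy in MeV (8 MeV is within the
    7-15 MeV range quoted in the abstract).
    Solves t = d/v_n + d/c for d.
    """
    v_n = C * math.sqrt(2.0 * e_mev / M_N)  # nonrelativistic neutron speed
    return t_total / (1.0 / v_n + 1.0 / C)
```

For an 8 MeV neutron, v_n ≈ 0.13c, so a round trip to 1 m depth takes roughly 29 ns, which is why nanosecond pulsing is needed to resolve individual voxels.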
40 CFR 141.25 - Analytical methods for radioactivity.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 23 2011-07-01 2011-07-01 false Analytical methods for radioactivity. 141.25 Section 141.25 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS Monitoring and Analytical Requirements § 141.25 Analytical methods for radioactivity....
40 CFR 425.03 - Sulfide analytical methods and applicability.
Code of Federal Regulations, 2013 CFR
2013-07-01
... Provisions § 425.03 Sulfide analytical methods and applicability. (a) The potassium ferricyanide titration... the potassium ferricyanide titration method for the determination of sulfide in wastewaters...
Weaver, Abigail A.; Reiser, Hannah; Barstis, Toni; Benvenuti, Michael; Ghosh, Debarati; Hunckler, Michael; Joy, Brittney; Koenig, Leah; Raddell, Kellie; Lieberman, Marya
2013-01-01
Reports of low quality pharmaceuticals have been on the rise in the last decade with the greatest prevalence of substandard medicines in developing countries, where lapses in manufacturing quality control or breaches in the supply chain allow substandard medicines to reach the marketplace. Here, we describe inexpensive test cards for fast field screening of pharmaceutical dosage forms containing beta lactam antibiotics or combinations of the four first-line antituberculosis (TB) drugs. The devices detect the active pharmaceutical ingredients (APIs) ampicillin, amoxicillin, rifampicin, isoniazid, ethambutol, and pyrazinamide, and also screen for substitute pharmaceuticals such as acetaminophen and chloroquine that may be found in counterfeit pharmaceuticals. The tests can detect binders and fillers like chalk, talc, and starch not revealed by traditional chromatographic methods. These paper devices contain twelve lanes, separated by hydrophobic barriers, with different reagents deposited in the lanes. The user rubs some of the solid pharmaceutical across the lanes and dips the edge of the paper into water. As water climbs up the lanes by capillary action, it triggers a library of different chemical tests and a timer to indicate when the tests are completed. The reactions in each lane generate colors to form a “color bar code” which can be analyzed visually by comparison to standard outcomes. While quantification of the APIs is poor compared to conventional analytical methods, the sensitivity and selectivity for the analytes is high enough to pick out suspicious formulations containing no API or a substitute API, as well as formulations containing APIs that have been “cut” with inactive ingredients. PMID:23725012
Analytical estimates of electron quasi-linear diffusion by fast magnetosonic waves
NASA Astrophysics Data System (ADS)
Mourenas, D.; Artemyev, A. V.; Agapitov, O. V.; Krasnoselskikh, V.
2013-06-01
Quantifying the loss of relativistic electrons from the Earth's radiation belts requires estimating the effects of many kinds of observed waves, ranging from ULF to VLF. Analytical estimates of electron quasi-linear diffusion coefficients for whistler-mode chorus and hiss waves of arbitrary obliquity have recently been derived, allowing useful analytical approximations for lifetimes. We examine here the influence of much lower frequency and highly oblique fast magnetosonic waves (also called ELF equatorial noise) by means of both approximate analytical formulations of the corresponding diffusion coefficients and full numerical simulations. Further analytical developments allow us to identify the most critical wave and plasma parameters necessary for a strong impact of fast magnetosonic waves on electron lifetimes and acceleration in the simultaneous presence of chorus, hiss, or lightning-generated waves, both inside and outside the plasmasphere. In this respect, a relatively small ratio of wave frequency to ion gyrofrequency appears more favorable, and other propitious circumstances are characterized. This study should be useful for a comprehensive appraisal of the potential effect of fast magnetosonic waves throughout the magnetosphere.
NASA Astrophysics Data System (ADS)
Kurylyk, Barret L.; Irvine, Dylan J.
2016-02-01
This study details the derivation and application of a new analytical solution to the one-dimensional, transient conduction-advection equation that is applied to trace vertical subsurface fluid fluxes. The solution employs a flexible initial condition that allows for nonlinear temperature-depth profiles, providing a key improvement over most previous solutions. The boundary condition is composed of any number of superimposed step changes in surface temperature, and thus it accommodates intermittent warming and cooling periods due to long-term changes in climate or land cover. The solution is verified using an established numerical model of coupled groundwater flow and heat transport. A new computer program FAST (Flexible Analytical Solution using Temperature) is also presented to facilitate the inversion of this analytical solution to estimate vertical groundwater flow. The program requires surface temperature history (which can be estimated from historic climate data), subsurface thermal properties, a present-day temperature-depth profile, and reasonable initial conditions. FAST is written in the Python computing language and can be run using a free graphical user interface. Herein, we demonstrate the utility of the analytical solution and FAST using measured subsurface temperature and climate data from the Sendai Plain, Japan. Results from these illustrative examples highlight the influence of the chosen initial and boundary conditions on estimated vertical flow rates.
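The superimposed-step boundary condition has a familiar closed form in the conduction-only limit. The sketch below deliberately drops the advection term and the flexible initial condition that are the paper's actual contributions, keeping only the classic erfc superposition of surface steps; the parameter values and the uniform initial profile are illustrative assumptions.

```python
import math

def step_superposition_T(z, t, steps, T0=10.0, alpha=1e-6):
    """Subsurface temperature under superimposed surface step changes,
    pure conduction, uniform initial temperature T0.

    z: depth [m]; t: time [s]; alpha: thermal diffusivity [m^2/s];
    steps: list of (t_k, dT_k) step onset times [s] and magnitudes [deg C].
    Each active step contributes dT_k * erfc(z / sqrt(4*alpha*(t - t_k))).
    """
    T = T0
    for t_k, dT in steps:
        if t > t_k:
            T += dT * math.erfc(z / math.sqrt(4.0 * alpha * (t - t_k)))
    return T
```

At the surface every active step contributes its full magnitude (erfc(0) = 1), while at depth the contributions decay toward zero, which is why the choice of boundary history matters most for the shallow part of a measured temperature-depth profile.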
Green analytical method development for statin analysis.
Assassi, Amira Louiza; Roy, Claude-Eric; Perovitch, Philippe; Auzerie, Jack; Hamon, Tiphaine; Gaudin, Karen
2015-02-01
A green analytical chemistry method was developed for pravastatin, fluvastatin and atorvastatin analysis. An HPLC/DAD method using an ethanol-based mobile phase with octadecyl-grafted silica with various graftings and related column parameters such as particle size, core-shell and monolith formats was studied. Retention, efficiency and detector linearity were optimized. Even for columns with particle sizes under 2 μm, the benefit of maintaining efficiency over a large range of flow rates was not obtained with the ethanol-based mobile phase compared with an acetonitrile-based one. Therefore the strategy of shortening analysis by increasing the flow rate induced a decrease in efficiency with the ethanol-based mobile phase. An ODS-AQ YMC column, 50 mm × 4.6 mm, 3 μm, was selected, which showed the best compromise between analysis time, statin separation, and efficiency. HPLC conditions were 1 mL/min, ethanol/formic acid (pH 2.5, 25 mM) (50:50, v/v), thermostated at 40°C. To reduce solvent consumption for sample preparation, a concentration of 0.5 mg/mL of each statin was found to be the highest that respected detector linearity. These conditions were validated for each statin for content determination in highly concentrated hydro-alcoholic solutions. Solubility higher than 100 mg/mL was found for pravastatin and fluvastatin, whereas for atorvastatin calcium salt the maximum concentration was 2 mg/mL for hydro-alcoholic binary mixtures between 35% and 55% ethanol in water. Using atorvastatin instead of its calcium salt, solubility was improved. Highly concentrated solutions of statins offer a potential fluid for per Buccal Per-Mucous(®) administration with the advantages of rapid and easy passage of drugs. PMID:25582487
An analytical method for computing atomic contact areas in biomolecules.
Mach, Paul; Koehl, Patrice
2013-01-15
We propose a new analytical method for detecting and computing contacts between atoms in biomolecules. It is based on the alpha shape theory and proceeds in three steps. First, we compute the weighted Delaunay triangulation of the union of spheres representing the molecule. In the second step, the Delaunay complex is filtered to derive the dual complex. Finally, contacts between spheres are collected. In this approach, two atoms i and j are defined to be in contact if their centers are connected by an edge in the dual complex. The contact areas between atom i and its neighbors are computed based on the caps formed by these neighbors on the surface of i; the total area of all these caps is partitioned according to their spherical Laguerre Voronoi diagram on the surface of i. This method is analytical and its implementation in a new program BallContact is fast and robust. We have used BallContact to study contacts in a database of 1551 high resolution protein structures. We show that with this new definition of atomic contacts, we generate realistic representations of the environments of atoms and residues within a protein. In particular, we establish the importance of nonpolar contact areas that complement the information represented by the accessible surface areas. This new method bears similarity to the tessellation methods used to quantify atomic volumes and contacts, with the advantage that it does not require the presence of explicit solvent molecules if the surface of the protein is to be considered. © 2012 Wiley Periodicals, Inc. PMID:22965816
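The per-neighbor cap the abstract mentions is standard sphere-sphere intersection geometry. The helper below computes the area of the cap that neighbor j cuts on the surface of sphere i when the spheres overlap; the full method then partitions overlapping caps with a spherical Laguerre Voronoi diagram, which this sketch does not attempt.

```python
import math

def cap_area(r_i, r_j, d):
    """Area of the spherical cap cut on sphere i (radius r_i) by an
    overlapping sphere j (radius r_j) whose center is at distance d.

    The intersection circle lies at polar angle theta on sphere i with
    cos(theta) = (d^2 + r_i^2 - r_j^2) / (2 d r_i), and the cap area is
    2*pi*r_i^2*(1 - cos(theta)). Returns 0 when the spheres are disjoint.
    """
    if d >= r_i + r_j:
        return 0.0
    cos_theta = (d * d + r_i * r_i - r_j * r_j) / (2.0 * d * r_i)
    cos_theta = max(-1.0, min(1.0, cos_theta))  # guard rounding at tangency
    return 2.0 * math.pi * r_i * r_i * (1.0 - cos_theta)
```

For two unit spheres at unit separation the cap area is exactly π, a convenient sanity check on the formula.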
Fast Multipole Methods for Particle Dynamics.
Kurzak, Jakub; Pettitt, Bernard M.
2006-08-30
The research described in this product was performed in part in the Environmental Molecular Sciences Laboratory, a national scientific user facility sponsored by the Department of Energy's Office of Biological and Environmental Research and located at Pacific Northwest National Laboratory. The growth of simulations of particle systems has been aided by advances in computer speed and algorithms. The adoption of O(N) algorithms to solve N-body simulation problems has been less rapid due to the fact that such scaling was only competitive for relatively large N. Our work seeks to find algorithmic modifications and practical implementations for intermediate values of N in typical use for molecular simulations. This article reviews fast multipole techniques for calculation of electrostatic interactions in molecular systems. The basic mathematics behind fast summations applied to long ranged forces is presented along with advanced techniques for accelerating the solution, including our most recent developments. The computational efficiency of the new methods facilitates both simulations of large systems as well as longer and therefore more realistic simulations of smaller systems.
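The core idea behind the fast summations reviewed above can be shown in miniature: the far-field potential of a cluster of charges is approximated from a few aggregate moments instead of a sum over every charge. This is only the monopole-plus-dipole truncation of the expansion, not the authors' full FMM with its tree structure and translation operators.

```python
# Minimal sketch of the multipole idea underlying FMM: approximate the
# Coulomb potential of a charge cluster at a distant point using its
# monopole and dipole moments. Illustrative truncation only; a real FMM
# adds higher moments, an octree, and moment-translation operators.
import numpy as np

rng = np.random.default_rng(0)
q = rng.uniform(-1, 1, 100)                # charges
pos = rng.uniform(-0.5, 0.5, (100, 3))     # cluster near the origin
target = np.array([10.0, 0.0, 0.0])        # far-away evaluation point

# Direct summation over all charges.
direct = np.sum(q / np.linalg.norm(target - pos, axis=1))

# Multipole approximation: total charge Q and dipole moment p.
Q = q.sum()
p = (q[:, None] * pos).sum(axis=0)
r = np.linalg.norm(target)
approx = Q / r + p @ target / r**3

print(direct, approx, abs(direct - approx))
```

Because the truncation error falls off with (cluster size / distance), far interactions can be evaluated cheaply while near ones are still summed directly, which is what yields O(N) scaling.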
SINGLE-LABORATORY EVALUATION OF OSMIUM ANALYTICAL METHODS
The results of a single-laboratory study of osmium analytical methods are described. The methods studied include direct-aspiration atomic absorption spectroscopy (EPA Method 7550), furnace atomic absorption spectroscopy and inductively coupled plasma atomic emission spectroscopy ...
An analytic reconstruction method for PET based on cubic splines
NASA Astrophysics Data System (ADS)
Kastis, George A.; Kyriakopoulou, Dimitra; Fokas, Athanasios S.
2014-03-01
PET imaging is an important nuclear medicine modality that measures the in vivo distribution of imaging agents labeled with positron-emitting radionuclides. Image reconstruction is an essential component in tomographic medical imaging. In this study, we present the mathematical formulation and an improved numerical implementation of an analytic, 2D reconstruction method called the Spline Reconstruction Technique (SRT). This technique is based on the numerical evaluation of the Hilbert transform of the sinogram via an approximation in terms of 'custom made' cubic splines. It also imposes sinogram thresholding, which restricts reconstruction to object pixels only. Furthermore, by exploiting certain symmetries it achieves a reconstruction time similar to that of FBP. We have implemented SRT in the software library STIR and have evaluated this method using simulated PET data. We present reconstructed images from several phantoms. Sinograms have been generated at various Poisson noise levels, and 20 noise realizations have been created at each level. In addition to visual comparisons of the reconstructed images, the contrast has been determined as a function of noise level. Further analysis includes the creation of line profiles, when necessary, to determine resolution. Numerical simulations suggest that the SRT algorithm produces fast and accurate reconstructions at realistic noise levels. The contrast is over 95% in all phantoms examined and is independent of noise level.
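The central numerical object in SRT is the Hilbert transform of each sinogram row. The sketch below computes that quantity with an FFT-based routine (scipy.signal.hilbert) purely to illustrate what is being evaluated; the paper's contribution is doing this with custom cubic splines rather than an FFT. The test signal is chosen because the Hilbert transform of cos(3t) is known to be sin(3t).

```python
# Illustrative only: the Hilbert transform of a "sinogram row" computed
# via FFT (scipy.signal.hilbert returns the analytic signal, whose
# imaginary part is the Hilbert transform). SRT itself evaluates this
# transform with custom cubic splines instead.
import numpy as np
from scipy.signal import hilbert

t = np.linspace(0, 2 * np.pi, 4096, endpoint=False)
row = np.cos(3 * t)                        # test row: H[cos(3t)] = sin(3t)
H = np.imag(hilbert(row))                  # Hilbert transform via FFT
print(np.max(np.abs(H - np.sin(3 * t))))   # near machine precision
```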
40 CFR 158.355 - Enforcement analytical method.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 23 2010-07-01 2010-07-01 false Enforcement analytical method. 158.355 Section 158.355 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) PESTICIDE PROGRAMS DATA REQUIREMENTS FOR PESTICIDES Product Chemistry § 158.355 Enforcement analytical method....
40 CFR 158.355 - Enforcement analytical method.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 24 2011-07-01 2011-07-01 false Enforcement analytical method. 158.355 Section 158.355 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) PESTICIDE PROGRAMS DATA REQUIREMENTS FOR PESTICIDES Product Chemistry § 158.355 Enforcement analytical method....
40 CFR 158.355 - Enforcement analytical method.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 25 2013-07-01 2013-07-01 false Enforcement analytical method. 158.355 Section 158.355 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) PESTICIDE PROGRAMS DATA REQUIREMENTS FOR PESTICIDES Product Chemistry § 158.355 Enforcement analytical method....
40 CFR 158.355 - Enforcement analytical method.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 25 2012-07-01 2012-07-01 false Enforcement analytical method. 158.355 Section 158.355 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) PESTICIDE PROGRAMS DATA REQUIREMENTS FOR PESTICIDES Product Chemistry § 158.355 Enforcement analytical method....
40 CFR 158.355 - Enforcement analytical method.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 24 2014-07-01 2014-07-01 false Enforcement analytical method. 158.355 Section 158.355 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) PESTICIDE PROGRAMS DATA REQUIREMENTS FOR PESTICIDES Product Chemistry § 158.355 Enforcement analytical method....
77 FR 56176 - Analytical Methods Used in Periodic Reporting
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-12
... From the Federal Register Online via the Government Publishing Office POSTAL REGULATORY COMMISSION 39 CFR Part 3001 Analytical Methods Used in Periodic Reporting AGENCY: Postal Regulatory Commission... consider changes in the analytical methods approved for use in periodic reporting.\\1\\ \\1\\ Petition of...
Methods and Instruments for Fast Neutron Detection
Jordan, David V.; Reeder, Paul L.; Cooper, Matthew W.; McCormick, Kathleen R.; Peurrung, Anthony J.; Warren, Glen A.
2005-05-01
Pacific Northwest National Laboratory evaluated the performance of a large-area (~0.7 m²) plastic scintillator time-of-flight (TOF) sensor for direct detection of fast neutrons. This type of sensor is a readily area-scalable technology that provides broad-area geometrical coverage at a reasonably low cost. It can yield intrinsic detection efficiencies that compare favorably with moderator-based detection methods. The timing resolution achievable should permit substantially more precise time windowing of return neutron flux than would otherwise be possible with moderated detectors. The energy-deposition threshold imposed on each scintillator contributing to the event-definition trigger in a TOF system can be set to blind the sensor to direct emission from the neutron generator. The primary technical challenge addressed in the project was to understand the capabilities of a neutron TOF sensor in the limit of large scintillator area and small scintillator separation, a size regime in which the neutral particle's flight path between the two scintillators is not tightly constrained.
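The time-of-flight principle the sensor relies on reduces to simple kinematics: a fast neutron's energy follows from its flight path and transit time. The sketch below uses the non-relativistic approximation, which is adequate for MeV-scale neutrons; the numbers are textbook constants, not project data.

```python
# Back-of-envelope time-of-flight kinematics: kinetic energy of a
# non-relativistic neutron from flight path and transit time.
NEUTRON_MASS = 1.674927e-27   # kg
MEV = 1.602177e-13            # J per MeV

def tof_energy_mev(path_m, time_s):
    """E = (1/2) m (d/t)^2, returned in MeV."""
    v = path_m / time_s
    return 0.5 * NEUTRON_MASS * v**2 / MEV

# A 1 MeV neutron travels ~1.38e7 m/s, so 1 m of flight takes ~72 ns.
print(tof_energy_mev(1.0, 72.3e-9))
```

The steep dependence of E on t (E ∝ t⁻²) is why the timing resolution of the scintillator pair directly sets the achievable energy resolution.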
Fast Single Image Super-Resolution Using a New Analytical Solution for l2 - l2 Problems.
Zhao, Ningning; Wei, Qi; Basarab, Adrian; Dobigeon, Nicolas; Kouame, Denis; Tourneret, Jean-Yves
2016-08-01
This paper addresses the problem of single image super-resolution (SR), which consists of recovering a high-resolution image from its blurred, decimated, and noisy version. The existing algorithms for single image SR use different strategies to handle the decimation and blurring operators. In addition to the traditional first-order gradient methods, recent techniques investigate splitting-based methods dividing the SR problem into up-sampling and deconvolution steps that can be easily solved. Instead of following this splitting strategy, we propose to deal with the decimation and blurring operators simultaneously by taking advantage of their particular properties in the frequency domain, leading to a new fast SR approach. Specifically, an analytical solution is derived and implemented efficiently for the Gaussian prior or any other regularization that can be formulated into an l2 -regularized quadratic model, i.e., an l2 - l2 optimization problem. The flexibility of the proposed SR scheme is shown through the use of various priors/regularizations, ranging from generic image priors to learning-based approaches. In the case of non-Gaussian priors, we show how the analytical solution derived from the Gaussian case can be embedded into traditional splitting frameworks, allowing the computation cost of existing algorithms to be decreased significantly. Simulation results conducted on several images with different priors illustrate the effectiveness of our fast SR approach compared with existing techniques. PMID:27187960
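The frequency-domain closed form at the heart of this approach can be illustrated in the simpler blur-only case (no decimation): with a circulant blur H, the l2-l2 problem min ||y − h∗x||² + λ||x||² is solved exactly by X(f) = conj(H(f))Y(f)/(|H(f)|² + λ). The sketch below is that core ingredient only, checked against a direct spatial-domain solve; the paper's solution additionally handles the decimation operator.

```python
# Simplified l2-l2 sketch (blurring only, no decimation): the
# frequency-domain closed form is compared against solving the same
# normal equations directly in the spatial domain.
import numpy as np

rng = np.random.default_rng(1)
n, lam = 64, 1e-2
x = rng.standard_normal(n)
h = np.zeros(n); h[:5] = 0.2                             # circulant moving-average blur
A = np.column_stack([np.roll(h, k) for k in range(n)])   # circulant blur matrix
y = A @ x                                                # blurred observation

# Closed-form solution in the frequency domain.
H, Y = np.fft.fft(h), np.fft.fft(y)
x_hat = np.real(np.fft.ifft(np.conj(H) * Y / (np.abs(H)**2 + lam)))

# Same problem solved directly: (A^T A + lam I) x = A^T y.
x_ref = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)
print(np.max(np.abs(x_hat - x_ref)))   # agree to machine precision
```

The point of the frequency-domain route is cost: the FFT solve is O(n log n) per image, versus a dense linear solve that is cubic in the number of pixels.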
Learner Language Analytic Methods and Pedagogical Implications
ERIC Educational Resources Information Center
Dyson, Bronwen
2010-01-01
Methods for analysing interlanguage have long aimed to capture learner language in its own right. By surveying the cognitive methods of Error Analysis, Obligatory Occasion Analysis and Frequency Analysis, this paper traces reformulations to attain this goal. The paper then focuses on Emergence Analysis, which fine-tunes learner language analysis…
Analytical methods used in a study of coke oven effluent.
Schulte, K A; Larsen, D J; Hornung, R W; Crable, J V
1975-02-01
In a coke oven study conducted by NIOSH, selected chemical analyses of airborne particulates, vapors, and metals in the emissions from five coke ovens were done. Eight sampling procedures and seven analytical techniques were used to analyze samples collected for the study. Six of the analytical methods used are discussed. PMID:1146677
Optimization of reversed-phase chromatography methods for peptide analytics.
Khalaf, Rushd; Baur, Daniel; Pfister, David
2015-12-18
The analytical description and quantification of peptide solutions are an essential part of quality control in peptide production processes and of peptide mapping techniques. Traditionally, an important tool is analytical reversed-phase liquid chromatography. In this work, we develop a model-based tool to find optimal analytical conditions in a clear, efficient and robust manner. The model, based on the Van't Hoff equation, the linear solvent strength correlation, and an analytical solution of the mass balance on a chromatographic column describing peptide retention under gradient conditions, is used to optimize the analytical-scale separation between components in a peptide mixture. The proposed tool is then applied to the design of analytical reversed-phase liquid chromatography methods for five different peptide mixtures. PMID:26620597
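One building block of such a model, the linear solvent strength (LSS) correlation, can be sketched directly: ln k = ln k_w − Sφ, where k is the retention factor, k_w its extrapolated value in pure water, S the solvent-strength slope, and φ the organic-modifier fraction. The parameter values below are illustrative placeholders, not fitted peptide data, and the isocratic form is used for simplicity where the paper treats gradients.

```python
# Illustrative LSS retention model: pick the modifier fraction that
# maximizes the retention-time gap between two hypothetical peptides
# while keeping both runs inside a practical elution window.
import numpy as np

def retention_time(ln_kw, S, phi, t0=1.0):
    """Isocratic retention time t_R = t0 * (1 + k), with ln k = ln_kw - S*phi."""
    k = np.exp(ln_kw - S * phi)
    return t0 * (1.0 + k)

phis = np.linspace(0.2, 0.6, 81)
tA = retention_time(ln_kw=8.0, S=20.0, phi=phis)   # hypothetical peptide A
tB = retention_time(ln_kw=8.5, S=22.0, phi=phis)   # hypothetical peptide B
window = (tA < 30) & (tB < 30)                     # elute within 30 column volumes
best = phis[window][np.argmax(np.abs(tA - tB)[window])]
print(best)
```

Even this toy version shows the trade-off the paper optimizes: weaker eluents separate better but run longer, so the optimum sits at the edge of the allowed elution window.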
7 CFR 98.4 - Analytical methods.
Code of Federal Regulations, 2013 CFR
2013-01-01
... MEALS, READY-TO-EAT (MRE's), MEATS, AND MEAT PRODUCTS MRE's, Meats, and Related Meat Food Products § 98... perform analyses of meat, meat food products and MRE's are listed as follows: (1) Official Methods...
7 CFR 98.4 - Analytical methods.
Code of Federal Regulations, 2014 CFR
2014-01-01
... MEALS, READY-TO-EAT (MREs), MEATS, AND MEAT PRODUCTS MREs, Meats, and Related Meat Food Products § 98.4... of meat, meat food products and MREs are listed as follows: (1) Official Methods of Analysis of...
7 CFR 98.4 - Analytical methods.
Code of Federal Regulations, 2011 CFR
2011-01-01
... MEALS, READY-TO-EAT (MRE's), MEATS, AND MEAT PRODUCTS MRE's, Meats, and Related Meat Food Products § 98... perform analyses of meat, meat food products and MRE's are listed as follows: (1) Official Methods...
7 CFR 98.4 - Analytical methods.
Code of Federal Regulations, 2012 CFR
2012-01-01
... MEALS, READY-TO-EAT (MRE's), MEATS, AND MEAT PRODUCTS MRE's, Meats, and Related Meat Food Products § 98... perform analyses of meat, meat food products and MRE's are listed as follows: (1) Official Methods...
40 CFR 141.704 - Analytical methods.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Federal Register approves this incorporation by reference in accordance with 5 U.S.C. 552(a) and 1 CFR.... (b) E. coli. System must use methods for enumeration of E. coli in source water approved in § 136.3(a... of an E. coli sample for up to 48 hours between sample collection and initiation of analysis if...
Analytical chemistry methods for mixed oxide fuel, March 1985
Not Available
1985-03-01
This standard provides analytical chemistry methods for the analysis of materials used to produce mixed oxide fuel. These materials are ceramic fuel and insulator pellets and the plutonium and uranium oxides and nitrates used to fabricate these pellets.
FIELD ANALYTICAL SCREENING PROGRAM: PCP METHOD - INNOVATIVE TECHNOLOGY EVALUATION REPORT
This innovative technology evaluation report (ITER) presents information on the demonstration of the U.S. Environmental Protection Agency (EPA) Region 7 Superfund Field Analytical Screening Program (FASP) method for determining pentachlorophenol (PCP) contamination in soil and wa...
FIELD ANALYTICAL SCREENING PROGRAM PCB METHOD: INNOVATIVE TECHNOLOGY EVALUATION REPORT
This innovative technology evaluation report (ITER) presents information on the demonstration of the U.S. Environmental Protection Agency (EPA) Region 7 Superfund Field Analytical Screening Program (FASP) method for determining polychlorinated biphenyl (PCB) contamination in soil...
FIELD ANALYTICAL SCREENING PROGRAM: PCB METHOD - INNOVATIVE TECHNOLOGY REPORT
This innovative technology evaluation report (ITER) presents information on the demonstration of the U.S. Environmental Protection Agency (EPA) Region 7 Superfund Field Analytical Screening Program (FASP) method for determining polychlorinated biphenyl (PCB) contamination in soil...
Methods for Estimating Uncertainty in Factor Analytic Solutions
The EPA PMF (Environmental Protection Agency positive matrix factorization) version 5.0 and the underlying multilinear engine-executable ME-2 contain three methods for estimating uncertainty in factor analytic models: classical bootstrap (BS), displacement of factor elements (DI...
Analytical techniques for instrument design - matrix methods
Robinson, R.A.
1997-09-01
We take the traditional Cooper-Nathans approach, as has been applied for many years for steady-state triple-axis spectrometers, and consider its generalisation to other inelastic scattering spectrometers. This involves a number of simple manipulations of exponentials of quadratic forms. In particular, we discuss a toolbox of matrix manipulations that can be performed on the 6-dimensional Cooper-Nathans matrix: diagonalisation (Moller-Nielsen method), coordinate changes (e.g. from (Δk_I, Δk_F) to (ΔE, ΔQ) plus 2 dummy variables), integration of one or more variables (e.g. over such dummy variables), integration subject to linear constraints (e.g. Bragg's Law for analysers), inversion to give the variance-covariance matrix, and so on. We show how these tools can be combined to solve a number of important problems, within the narrow-band limit and the gaussian approximation. We will argue that a generalised program that can handle multiple different spectrometers could (and should) be written in parallel to the Monte-Carlo packages that are becoming available. We will also discuss the complementarity between detailed Monte-Carlo calculations and the approach presented here. In particular, Monte-Carlo methods traditionally simulate the real experiment as performed in practice, given a model scattering law, while the Cooper-Nathans method asks the inverse question: given that a neutron turns up in a particular spectrometer configuration (e.g. angle and time of flight), what is the probability distribution of possible scattering events at the sample? The Monte-Carlo approach could be applied in the same spirit to this question.
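One toolbox operation mentioned above, integrating variables out of a Gaussian exp(−xᵀMx/2), has a clean matrix form: the marginal covariance of the kept variables is the corresponding block of M⁻¹, and equivalently the marginal precision is the Schur complement. The 4×4 matrix below is a random positive-definite toy, not a real Cooper-Nathans matrix.

```python
# Integrating out variables of a Gaussian quadratic form exp(-x^T M x / 2):
# route 1 takes the kept block of the variance-covariance matrix M^{-1};
# route 2 forms the Schur complement, the marginal precision, directly.
# Toy 4x4 example; M stands in for a resolution matrix.
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))
M = A @ A.T + 4 * np.eye(4)            # symmetric positive-definite "resolution matrix"

keep, drop = [0, 1], [2, 3]            # integrate out variables 2 and 3

cov_keep = np.linalg.inv(M)[np.ix_(keep, keep)]

Mkk = M[np.ix_(keep, keep)]
Mkd = M[np.ix_(keep, drop)]
Mdd = M[np.ix_(drop, drop)]
prec_keep = Mkk - Mkd @ np.linalg.inv(Mdd) @ Mkd.T   # Schur complement

print(np.max(np.abs(np.linalg.inv(prec_keep) - cov_keep)))   # ~ machine precision
```

Chaining such block operations (marginalisation, constrained integration, inversion) is exactly what keeps the analytic resolution calculation cheap compared with Monte-Carlo ray tracing.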
Handbook of Analytical Methods for Textile Composites
NASA Technical Reports Server (NTRS)
Cox, Brian N.; Flanagan, Gerry
1997-01-01
The purpose of this handbook is to introduce models and computer codes for predicting the properties of textile composites. The handbook includes several models for predicting the stress-strain response all the way to ultimate failure; methods for assessing work of fracture and notch sensitivity; and design rules for avoiding certain critical mechanisms of failure, such as delamination, by proper textile design. The following textiles received some treatment: 2D woven, braided, and knitted/stitched laminates and 3D interlock weaves, and braids.
Analytical techniques for instrument design -- Matrix methods
Robinson, R.A.
1997-12-31
The authors take the traditional Cooper-Nathans approach, as has been applied for many years for steady-state triple-axis spectrometers, and consider its generalization to other inelastic scattering spectrometers. This involves a number of simple manipulations of exponentials of quadratic forms. In particular, they discuss a toolbox of matrix manipulations that can be performed on the 6-dimensional Cooper-Nathans matrix. They show how these tools can be combined to solve a number of important problems, within the narrow-band limit and the gaussian approximation. They will argue that a generalized program that can handle multiple different spectrometers could (and should) be written in parallel to the Monte-Carlo packages that are becoming available. They also discuss the complementarity between detailed Monte-Carlo calculations and the approach presented here. In particular, Monte-Carlo methods traditionally simulate the real experiment as performed in practice, given a model scattering law, while the Cooper-Nathans method asks the inverse question: given that a neutron turns up in a particular spectrometer configuration (e.g. angle and time of flight), what is the probability distribution of possible scattering events at the sample? The Monte-Carlo approach could be applied in the same spirit to this question.
Comparison of finite-difference and analytic microwave calculation methods
Friedlander, F.I.; Jackson, H.W.; Barmatz, M.; Wagner, P.
1996-12-31
Normal modes and power absorption distributions in microwave cavities containing lossy dielectric samples were calculated for problems of interest in materials processing. The calculations were performed both using a commercially available finite-difference electromagnetic solver and by numerical evaluation of exact analytic expressions. Results obtained by the two methods applied to identical physical situations were compared. The studies validate the accuracy of the finite-difference electromagnetic solver. Relative advantages of the analytic and finite-difference methods are discussed.
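A hedged illustration of the kind of exact expression such solvers are validated against: the resonant frequencies of an ideal air-filled rectangular cavity, f_mnp = (c/2)·sqrt((m/a)² + (n/b)² + (p/d)²). The paper's cavities contain lossy dielectric samples, for which the analytic expressions are more involved; this closed form covers only the empty-cavity limit, and the dimensions below are arbitrary.

```python
# Resonant frequencies of an ideal rectangular cavity: the simplest
# analytic benchmark for a finite-difference electromagnetic solver.
import math

C0 = 299_792_458.0   # speed of light in vacuum, m/s

def cavity_mode_hz(a, b, d, m, n, p):
    """f_mnp = (c/2) * sqrt((m/a)^2 + (n/b)^2 + (p/d)^2) for an air-filled cavity."""
    return 0.5 * C0 * math.sqrt((m / a)**2 + (n / b)**2 + (p / d)**2)

# TE101 mode of a 10 cm x 5 cm x 8 cm cavity, ~2.4 GHz.
print(cavity_mode_hz(0.10, 0.05, 0.08, 1, 0, 1) / 1e9, "GHz")
```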
Fracture mechanics life analytical methods verification testing
NASA Technical Reports Server (NTRS)
Favenesi, J. A.; Clemons, T. G.; Riddell, W. T.; Ingraffea, A. R.; Wawrzynek, P. A.
1994-01-01
The objective was to evaluate NASCRAC (trademark) version 2.0, a second generation fracture analysis code, for verification and validity. NASCRAC was evaluated using a combination of comparisons to the literature, closed-form solutions, numerical analyses, and tests. Several limitations and minor errors were detected. Additionally, a number of major flaws were discovered. These major flaws were generally due to application of a specific method or theory, not due to programming logic. Results are presented for the following program capabilities: K versus a, J versus a, crack opening area, life calculation due to fatigue crack growth, tolerable crack size, proof test logic, tearing instability, creep crack growth, crack transitioning, crack retardation due to overloads, and elastic-plastic stress redistribution. It is concluded that the code is an acceptable fracture tool for K solutions of simplified geometries, for a limited number of J and crack opening area solutions, and for fatigue crack propagation with the Paris equation and constant amplitude loads when the Paris equation is applicable.
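The fatigue-crack-growth capability exercised above comes down to integrating the Paris equation, da/dN = C(ΔK)^m. The sketch below does this for a center crack in an infinite plate, where ΔK = Δσ·sqrt(πa), and checks a step-by-step integration against the closed form; the material constants are illustrative, not NASCRAC inputs or reference values.

```python
# Fatigue life by integrating the Paris law da/dN = C * (dK)^m with
# dK = dsigma * sqrt(pi * a) (center crack, infinite plate). Constants
# are illustrative only.
import math

def paris_life(a0, af, C, m, dsigma, steps=200_000):
    """Cycles to grow a crack from a0 to af, by explicit integration of dN = da/(C dK^m)."""
    da = (af - a0) / steps
    N, a = 0.0, a0
    for _ in range(steps):
        dK = dsigma * math.sqrt(math.pi * a)
        N += da / (C * dK**m)
        a += da
    return N

def paris_life_exact(a0, af, C, m, dsigma):
    """Closed form for constant-amplitude loading, valid for m != 2."""
    f = C * (dsigma * math.sqrt(math.pi))**m
    e = 1 - m / 2
    return (af**e - a0**e) / (f * e)

C, m, ds = 1e-12, 3.0, 100.0   # illustrative constants (MPa, m units)
num = paris_life(1e-3, 1e-2, C, m, ds)
exact = paris_life_exact(1e-3, 1e-2, C, m, ds)
print(num, exact)
```

Comparing a numerical integration against the closed form, as here, is the same style of verification the evaluation applied to NASCRAC's K-versus-a and life calculations.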
Fracture mechanics life analytical methods verification testing
NASA Technical Reports Server (NTRS)
Favenesi, J. A.; Clemmons, T. G.; Lambert, T. J.
1994-01-01
Verification and validation of the basic information capabilities in NASCRAC has been completed. The basic information includes computation of K versus a, J versus a, and crack opening area versus a. These quantities represent building blocks which NASCRAC uses in its other computations such as fatigue crack life and tearing instability. Several methods were used to verify and validate the basic information capabilities. The simple configurations such as the compact tension specimen and a crack in a finite plate were verified and validated versus handbook solutions for simple loads. For general loads using weight functions, offline integration using standard FORTRAN routines was performed. For more complicated configurations such as corner cracks and semielliptical cracks, NASCRAC solutions were verified and validated versus published results and finite element analyses. A few minor problems were identified in the basic information capabilities of the simple configurations. In the more complicated configurations, significant differences between NASCRAC and reference solutions were observed because NASCRAC calculates its solutions as averaged values across the entire crack front whereas the reference solutions were computed for a single point.
Analytical instruments, ionization sources, and ionization methods
Atkinson, David A.; Mottishaw, Paul
2006-04-11
Methods and apparatus for simultaneous vaporization and ionization of a sample in a spectrometer prior to introducing the sample into the drift tube of the analyzer are disclosed. The apparatus includes a vaporization/ionization source having an electrically conductive conduit configured to receive sample particulate which is conveyed to a discharge end of the conduit. Positioned proximate to the discharge end of the conduit is an electrically conductive reference device. The conduit and the reference device act as electrodes and have an electrical potential maintained between them sufficient to cause a corona effect, which will cause at least partial simultaneous ionization and vaporization of the sample particulate. The electrical potential can be maintained to establish a continuous corona, or can be held slightly below the breakdown potential such that arrival of particulate at the point of proximity of the electrodes disrupts the potential, causing arcing and the corona effect. The electrical potential can also be varied to cause periodic arcing between the electrodes such that particulate passing through the arc is simultaneously vaporized and ionized. The invention further includes a spectrometer containing the source. The invention is particularly useful for ion mobility spectrometers and atmospheric pressure ionization mass spectrometers.
Rotary fast tool servo system and methods
Montesanti, Richard C.; Trumper, David L.
2007-10-02
A high bandwidth rotary fast tool servo provides tool motion in a direction nominally parallel to the surface-normal of a workpiece at the point of contact between the cutting tool and workpiece. Three or more flexure blades having all ends fixed are used to form an axis of rotation for a swing arm that carries a cutting tool at a set radius from the axis of rotation. An actuator rotates a swing arm assembly such that a cutting tool is moved in and away from the lathe-mounted, rotating workpiece in a rapid and controlled manner in order to machine the workpiece. A pair of position sensors provides rotation and position information for a swing arm to a control system. A control system commands and coordinates motion of the fast tool servo with the motion of a spindle, rotating table, cross-feed slide, and in-feed slide of a precision lathe.
40 CFR 766.16 - Developing the analytical test method.
Code of Federal Regulations, 2014 CFR
2014-07-01
... meet the requirements of the chemical matrix. (d) Analysis. The method of choice is High Resolution Gas... analytical test method. Because of the matrix differences of the chemicals listed for testing, no one method for sample selection, preparation, extraction and clean up is prescribed. For analysis,...
40 CFR 766.16 - Developing the analytical test method.
Code of Federal Regulations, 2013 CFR
2013-07-01
... meet the requirements of the chemical matrix. (d) Analysis. The method of choice is High Resolution Gas... analytical test method. Because of the matrix differences of the chemicals listed for testing, no one method for sample selection, preparation, extraction and clean up is prescribed. For analysis,...
40 CFR 766.16 - Developing the analytical test method.
Code of Federal Regulations, 2012 CFR
2012-07-01
... meet the requirements of the chemical matrix. (d) Analysis. The method of choice is High Resolution Gas... analytical test method. Because of the matrix differences of the chemicals listed for testing, no one method for sample selection, preparation, extraction and clean up is prescribed. For analysis,...
Internal R and D task summary report: analytical methods development
Schweighardt, F.K.
1983-07-01
International Coal Refining Company (ICRC) conducted two research programs to develop analytical procedures for characterizing the feed, intermediates, and products of the proposed SRC-I Demonstration Plant. The major conclusion is that standard analytical methods must be defined and assigned statistical error limits of precision and reproducibility early in development. Comparing all SRC-I data, or data from different processes, is complex and expensive if common data-correlation procedures are not followed. ICRC recommends that processes be audited analytically and statistical analyses generated as quickly as possible, in order to quantify process-dependent and -independent variables. 16 references, 10 figures, 20 tables.
Analytical method transfer: new descriptive approach for acceptance criteria definition.
de Fontenay, Gérald
2008-01-01
Within the pharmaceutical industry, method transfers are now commonplace during the life cycle of an analytical method. Setting acceptance criteria for analytical transfers is, however, much more difficult than usually described. Criteria which are too wide may lead to the acceptance of a laboratory providing non-equivalent results, resulting in bad release/reject decisions for pharmaceutical products (a consumer risk). On the contrary, criteria which are too tight may lead to the rejection of an equivalent laboratory, resulting in time costs and delay in the transfer process (an industrial risk). The consumer risk has to be controlled first. But the risk does depend on the method capability (tolerance to method precision ratio). Analytical transfers were simulated for different scenarios (different method capabilities and transfer designs, 10,000 simulations per test). The results of the simulations showed that the method capability has a strong influence on the probability of success of its transfer. For the transfer design, the number of independent analytical runs to be performed on a same batch has much more influence than the number of replicates per run, especially when the inter-day variability of the method is high. A classic descriptive approach for analytical method transfer does not take into account the variability of the method, and therefore, no risks are controlled. Tools for designing analytical transfers and defining a new descriptive acceptance criterion, which take into account the intra- and inter-day variability of the method, are provided for a better risk evaluation by non-statisticians. PMID:17961955
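The simulation finding reported above, that independent runs matter more than replicates per run when inter-day variability is high, can be reproduced with a small Monte Carlo. The sketch below is not the paper's simulation protocol: the pass criterion (difference of lab means within a fixed limit) and all variance values are illustrative assumptions.

```python
# Monte Carlo sketch of an analytical method transfer: each lab performs
# n_runs independent runs (inter-day variability) with n_reps replicates
# per run (intra-day variability); the transfer "passes" when the two
# labs' means differ by less than an acceptance limit. Illustrative
# criterion and variances, not the paper's design.
import numpy as np

rng = np.random.default_rng(3)

def transfer_pass_rate(n_runs, n_reps, sd_inter, sd_intra,
                       limit, bias=0.0, n_sim=10_000):
    """Fraction of simulated transfers with |mean_recv - mean_send| < limit."""
    passes = 0
    for _ in range(n_sim):
        def lab_mean(mu):
            runs = mu + rng.normal(0, sd_inter, n_runs)[:, None] \
                      + rng.normal(0, sd_intra, (n_runs, n_reps))
            return runs.mean()
        if abs(lab_mean(bias) - lab_mean(0.0)) < limit:
            passes += 1
    return passes / n_sim

# Same 12 measurements per lab, split differently:
p_2runs = transfer_pass_rate(2, 6, sd_inter=1.0, sd_intra=0.2, limit=1.5)
p_6runs = transfer_pass_rate(6, 2, sd_inter=1.0, sd_intra=0.2, limit=1.5)
print(p_2runs, p_6runs)   # more runs beats more replicates here
```

The effect follows from the variance of a lab mean, sd_inter²/n_runs + sd_intra²/(n_runs·n_reps): when sd_inter dominates, only n_runs shrinks it appreciably.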
Statistically qualified neuro-analytic failure detection method and system
Vilim, Richard B.; Garcia, Humberto E.; Chen, Frederick W.
2002-03-02
An apparatus and method for monitoring a process involve development and application of a statistically qualified neuro-analytic (SQNA) model to accurately and reliably identify process change. The development of the SQNA model is accomplished in two stages: deterministic model adaptation, followed by stochastic modification of the adapted deterministic model. Deterministic model adaptation involves formulating an analytic model of the process representing known process characteristics, augmenting the analytic model with a neural network that captures unknown process characteristics, and training the resulting neuro-analytic model by adjusting the neural network weights according to a unique scaled equation-error minimization technique. Stochastic model modification involves qualifying any remaining uncertainty in the trained neuro-analytic model by formulating a likelihood function, given an error propagation equation, for computing the probability that the neuro-analytic model generates the measured process output. Preferably, the developed SQNA model is validated using known sequential probability ratio tests and applied to the process as an on-line monitoring system. Illustrative of the method and apparatus, the method is applied to a peristaltic pump system.
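The hybrid structure described above, an analytic model for the known physics plus a trained term for the residual, can be sketched without any of the patent's specifics. In the sketch the "neural network" stage is replaced by a polynomial least-squares fit purely for brevity; the structure (analytic model + learned residual), not the choice of regressor, is the point, and all signals are synthetic.

```python
# Simplified analytic-plus-learned-residual sketch: fit only what the
# analytic model misses. A polynomial fit stands in for the patent's
# neural network; all data are synthetic.
import numpy as np

rng = np.random.default_rng(4)
x = np.linspace(0, 1, 200)

def analytic_model(x):
    return 2.0 * x                                   # known process characteristic

true = analytic_model(x) + 0.3 * np.sin(6 * x)       # unknown extra behavior
y = true + rng.normal(0, 0.01, x.size)               # noisy measurements

residual = y - analytic_model(x)                     # what the analytic model misses
coef = np.polyfit(x, residual, deg=5)                # stand-in for the neural net
hybrid = analytic_model(x) + np.polyval(coef, x)

rmse_analytic = np.sqrt(np.mean((y - analytic_model(x))**2))
rmse_hybrid = np.sqrt(np.mean((y - hybrid)**2))
print(rmse_analytic, rmse_hybrid)                    # hybrid error is much smaller
```

Fitting only the residual keeps the learned component small and easy to qualify statistically, which is the motivation for the two-stage structure.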
Analytical methods for quantitation of prenylated flavonoids from hops
Nikolić, Dejan; van Breemen, Richard B.
2013-01-01
The female flowers of hops (Humulus lupulus L.) are used as a flavoring agent in the brewing industry. There is growing interest in possible health benefits of hops, particularly as estrogenic and chemopreventive agents. Among the possible active constituents, most of the attention has focused on prenylated flavonoids, which can chemically be classified as prenylated chalcones and prenylated flavanones. Among chalcones, xanthohumol (XN) and desmethylxanthohumol (DMX) have been the most studied, while among flavanones, 8-prenylnaringenin (8-PN) and 6-prenylnaringenin (6-PN) have received the most attention. Because of the interest in medicinal properties of prenylated flavonoids, there is demand for accurate, reproducible and sensitive analytical methods to quantify these compounds in various matrices. Such methods are needed, for example, for quality control and standardization of hop extracts, measurement of the content of prenylated flavonoids in beer, and to determine pharmacokinetic properties of prenylated flavonoids in animals and humans. This review summarizes currently available analytical methods for quantitative analysis of the major prenylated flavonoids, with an emphasis on the LC-MS and LC-MS-MS methods and their recent applications to biomedical research on hops. This review covers all methods in which prenylated flavonoids have been measured, either as the primary analytes or as a part of a larger group of analytes. The review also discusses methodological issues relating to the quantitative analysis of these compounds regardless of the chosen analytical approach. PMID:24077106
Fast Method of Detection of Periodical Radio Sources
NASA Astrophysics Data System (ADS)
Rodin, A. E.; Samodourov, V. A.; Oreshko, V. V.
2015-11-01
A fast method for searching periodical radio sources based on the Fast Fourier Transform at the radio telescope LPA LPI (the Large Phased Array of the Lebedev Physical Institute) is described. Examples of detection of already known pulsars and a list of new periodical radio sources with coordinates, period, and dispersion measure are presented.
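The periodicity search described above reduces to locating the dominant peak in the power spectrum of the sampled signal. The sketch below illustrates that core step only; the function name and interface are illustrative, not from the LPA LPI pipeline (real pulsar searches also sum harmonics and search over dispersion measure):

```python
import numpy as np

def detect_period(signal, dt):
    """Return the dominant period of a time series from its power spectrum.

    A minimal sketch of FFT-based period search; `signal` is a uniformly
    sampled series with sample spacing `dt` (seconds).
    """
    power = np.abs(np.fft.rfft(signal - signal.mean())) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=dt)
    k = power[1:].argmax() + 1  # skip the zero-frequency bin
    return 1.0 / freqs[k]

# A 2 Hz tone buried in noise: 100 s at a 100 Hz sampling rate.
rng = np.random.default_rng(0)
t = np.arange(0, 100, 0.01)
x = np.sin(2 * np.pi * 2.0 * t) + 0.5 * rng.standard_normal(t.size)
print(detect_period(x, 0.01))  # ≈ 0.5 s
```

The frequency resolution is the reciprocal of the observation length, so longer integrations sharpen the period estimate.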
An analytical method for designing low noise helicopter transmissions
NASA Technical Reports Server (NTRS)
Bossler, R. B., Jr.; Bowes, M. A.; Royal, A. C.
1978-01-01
The development and experimental validation of a method for analytically modeling the noise mechanism in the helicopter geared power transmission systems is described. This method can be used within the design process to predict interior noise levels and to investigate the noise reducing potential of alternative transmission design details. Examples are discussed.
Development of quality-by-design analytical methods.
Vogt, Frederick G; Kord, Alireza S
2011-03-01
Quality-by-design (QbD) is a systematic approach to drug development, which begins with predefined objectives, and uses science and risk management approaches to gain product and process understanding and ultimately process control. The concept of QbD can be extended to analytical methods. QbD mandates the definition of a goal for the method, and emphasizes thorough evaluation and scouting of alternative methods in a systematic way to obtain optimal method performance. Candidate methods are then carefully assessed in a structured manner for risks, and are challenged to determine if robustness and ruggedness criteria are satisfied. As a result of these studies, the method performance can be understood and improved if necessary, and a control strategy can be defined to manage risk and ensure the method performs as desired when validated and deployed. In this review, the current state of analytical QbD in the industry is detailed with examples of the application of analytical QbD principles to a range of analytical methods, including high-performance liquid chromatography, Karl Fischer titration for moisture content, vibrational spectroscopy for chemical identification, quantitative color measurement, and trace analysis for genotoxic impurities. PMID:21280050
FIELD ANALYTICAL SCREENING PROGRAM: PCP METHOD - INNOVATIVE TECHNOLOGY EVALUATION REPORT
The Field Analytical Screening Program (FASP) pentachlorophenol (PCP) method uses a gas chromatograph (GC) equipped with a megabore capillary column and flame ionization detector (FID) and electron capture detector (ECD) to identify and quantify PCP. The FASP PCP method is design...
Comparison of scalable fast methods for long-range interactions.
Arnold, Axel; Fahrenberger, Florian; Holm, Christian; Lenz, Olaf; Bolten, Matthias; Dachsel, Holger; Halver, Rene; Kabadshow, Ivo; Gähler, Franz; Heber, Frederik; Iseringhausen, Julian; Hofmann, Michael; Pippig, Michael; Potts, Daniel; Sutmann, Godehard
2013-12-01
Based on a parallel scalable library for Coulomb interactions in particle systems, a comparison between the fast multipole method (FMM), multigrid-based methods, fast Fourier transform (FFT)-based methods, and a Maxwell solver is provided for the case of three-dimensional periodic boundary conditions. These methods are directly compared with respect to complexity, scalability, performance, and accuracy. To ensure comparable conditions for all methods and to cover typical applications, we tested all methods on the same set of computers using identical benchmark systems. Our findings suggest that, depending on system size and desired accuracy, the FMM- and FFT-based methods are most efficient in performance and stability. PMID:24483585
Beamforming and holography image formation methods: an analytic study.
Solimene, Raffaele; Cuccaro, Antonio; Ruvio, Giuseppe; Tapia, Daniel Flores; O'Halloran, Martin
2016-04-18
Beamforming and holographic imaging procedures are widely used in many applications such as radar sensing, sonar, and microwave medical imaging. Nevertheless, an analytical comparison of the methods has not been done. In this paper, the point spread functions pertaining to the two methods are analytically determined. This allows a formal comparison of the two techniques and makes it easy to highlight how the performance depends on the configuration parameters, including frequency range, number of scatterers, and data discretization. It is demonstrated that beamforming and holography achieve essentially the same resolution, but beamforming requires a cheaper configuration (fewer sensors). PMID:27137336
A New Analytic Alignment Method for a SINS
Tan, Caiming; Zhu, Xinhua; Su, Yan; Wang, Yu; Wu, Zhiqiang; Gu, Dongbing
2015-01-01
Analytic alignment is a type of self-alignment for a Strapdown inertial navigation system (SINS) that is based solely on two non-collinear vectors, which are the gravity and rotational velocity vectors of the Earth at a stationary base on the ground. The attitude of the SINS with respect to the Earth can be obtained directly using the TRIAD algorithm given two vector measurements. For a traditional analytic coarse alignment, all six outputs from the inertial measurement unit (IMU) are used to compute the attitude. In this study, a novel analytic alignment method called selective alignment is presented. This method uses only three outputs of the IMU and a few properties from the remaining outputs such as the sign and the approximate value to calculate the attitude. Simulations and experimental results demonstrate the validity of this method, and the precision of yaw is improved using the selective alignment method compared to the traditional analytic coarse alignment method in the vehicle experiment. The selective alignment principle provides an accurate relationship between the outputs and the attitude of the SINS relative to the Earth for a stationary base, and it is an extension of the TRIAD algorithm. The selective alignment approach has potential uses in applications such as self-alignment, fault detection, and self-calibration. PMID:26556353
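The TRIAD step at the core of the alignment can be sketched as follows; the function name and interface are illustrative, not from the paper. Given two non-collinear vectors measured in the body frame (gravity and Earth-rotation vectors from the IMU) and the same vectors known in the reference frame, the attitude matrix follows directly:

```python
import numpy as np

def triad(b1, b2, r1, r2):
    """Attitude matrix R such that r = R @ b (classical TRIAD algorithm).

    b1, b2: non-collinear vector observations in the body frame;
    r1, r2: the same vectors expressed in the reference frame.
    """
    def frame(v1, v2):
        # Build an orthonormal triad from two non-collinear vectors.
        t1 = v1 / np.linalg.norm(v1)
        t2 = np.cross(v1, v2)
        t2 = t2 / np.linalg.norm(t2)
        return np.column_stack([t1, t2, np.cross(t1, t2)])
    return frame(r1, r2) @ frame(b1, b2).T
```

With noiseless, consistent measurements TRIAD recovers the rotation exactly; with noise, the first vector (here gravity) is matched exactly and the second only approximately, which is why vector ordering matters in practice.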
Analytical methods for water disinfection byproducts in foods and beverages.
Raymer, J H; Pellizzari, E; Childs, B; Briggs, K; Shoemaker, J A
2000-01-01
The determination of exposure to drinking water disinfection byproducts (DBPs) requires an understanding of how drinking water comes into contact with humans through multiple pathways. To facilitate the investigation of human exposure to DBPs via foods and beverages, analytical method development efforts were initiated for haloacetonitriles, haloketones, chloropicrin, and the haloacetic acids (HAAs) in these matrices. The recoveries of the target analytes were investigated from composite foods and beverages. Individual foods and beverages used to investigate the general applicability of the developed methods were selected for testing based on their water content and frequency of consumption. The haloacetonitriles, the haloketones, and chloral hydrate were generally well recovered (70-130%), except for bromochloroacetonitrile (64%) and dibromoacetonitrile (55%), from foods spiked after homogenization and extracted with methyl-t-butyl ether (MTBE); the addition of acetone was found to be necessary to improve recoveries from beverages. The process of homogenization resulted in decreased recoveries for the more volatile analytes despite the presence of dry ice. The HAAs were generally well recovered (70-130%), except for trichloroacetic acid (58%) and tribromoacetic acid (132%), from foods, but low recoveries and emulsion formation were experienced with some beverages. With both groups of analytes, certain matrices were more problematic than others (as measured by volatility losses and emulsion formation) with regard to processing and analyte recovery. PMID:11138673
A Fast SVD-Hidden-nodes based Extreme Learning Machine for Large-Scale Data Analytics.
Deng, Wan-Yu; Bai, Zuo; Huang, Guang-Bin; Zheng, Qing-Hua
2016-05-01
Big dimensional data is a growing trend that is emerging in many real-world contexts, extending from web mining, gene expression analysis, and protein-protein interaction to high-frequency financial data. Nowadays, there is a growing consensus that increasing dimensionality has an impeding effect on the performance of classifiers, which is termed the "peaking phenomenon" in the field of machine intelligence. To address the issue, dimensionality reduction is commonly employed as a preprocessing step on Big dimensional data before building the classifiers. In this paper, we propose an Extreme Learning Machine (ELM) approach for large-scale data analytics. In contrast to existing approaches, we embed hidden nodes that are designed using singular value decomposition (SVD) into the classical ELM. These SVD nodes in the hidden layer are shown to capture the underlying characteristics of Big dimensional data well, exhibiting excellent generalization performance. The drawback of using SVD on the entire dataset, however, is the high computational complexity involved. To address this, a fast divide-and-conquer approximation scheme is introduced to maintain computational tractability on high-volume data. The resulting algorithm is labeled Fast Singular Value Decomposition-Hidden-nodes based Extreme Learning Machine, or FSVD-H-ELM in short. In FSVD-H-ELM, instead of identifying the SVD hidden nodes directly from the entire dataset, SVD hidden nodes are derived from multiple random subsets of data sampled from the original dataset. Comprehensive experiments and comparisons are conducted to assess FSVD-H-ELM against other state-of-the-art algorithms. The results demonstrate the superior generalization performance and efficiency of FSVD-H-ELM. PMID:26907860
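The SVD-hidden-node idea can be sketched in a few lines. The names and the single-subset simplification below are illustrative assumptions, not the paper's code (the paper's divide-and-conquer scheme combines SVD nodes from several random subsets):

```python
import numpy as np

def elm_svd_fit(X, T, n_hidden, subset=1000, rng=None):
    """ELM with SVD-derived hidden nodes: a minimal sketch of the idea.

    Hidden-layer weights are the top right-singular vectors of a random
    subset of X; output weights come from a linear least-squares solve.
    """
    rng = np.random.default_rng(rng)
    idx = rng.choice(len(X), size=min(subset, len(X)), replace=False)
    _, _, Vt = np.linalg.svd(X[idx], full_matrices=False)
    W = Vt[:n_hidden].T                      # SVD hidden nodes
    H = np.tanh(X @ W)                       # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, T, rcond=None)
    return W, beta

def elm_predict(X, W, beta):
    """Propagate inputs through the fixed hidden layer and output weights."""
    return np.tanh(X @ W) @ beta
```

Because the hidden weights are fixed (not trained), the only fitting cost is one SVD on a subset plus one least-squares solve, which is what makes the approach fast at scale.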
Laser: a Tool for Optimization and Enhancement of Analytical Methods
Preisler, Jan
1997-01-01
In this work, we use lasers to enhance possibilities of laser desorption methods and to optimize coating procedure for capillary electrophoresis (CE). We use several different instrumental arrangements to characterize matrix-assisted laser desorption (MALD) at atmospheric pressure and in vacuum. In imaging mode, 488-nm argon-ion laser beam is deflected by two acousto-optic deflectors to scan plumes desorbed at atmospheric pressure via absorption. All absorbing species, including neutral molecules, are monitored. Interesting features, e.g. differences between the initial plume and subsequent plumes desorbed from the same spot, or the formation of two plumes from one laser shot are observed. Total plume absorbance can be correlated with the acoustic signal generated by the desorption event. A model equation for the plume velocity as a function of time is proposed. Alternatively, the use of a static laser beam for observation enables reliable determination of plume velocities even when they are very high. Static scattering detection reveals negative influence of particle spallation on MS signal. Ion formation during MALD was monitored using 193-nm light to photodissociate a portion of insulin ion plume. These results define the optimal conditions for desorbing analytes from matrices, as opposed to achieving a compromise between efficient desorption and efficient ionization as is practiced in mass spectrometry. In CE experiment, we examined changes in a poly(ethylene oxide) (PEO) coating by continuously monitoring the electroosmotic flow (EOF) in a fused-silica capillary during electrophoresis. An imaging CCD camera was used to follow the motion of a fluorescent neutral marker zone along the length of the capillary excited by 488-nm Ar-ion laser. The PEO coating was shown to reduce the velocity of EOF by more than an order of magnitude compared to a bare capillary at pH 7.0. The coating protocol was important, especially at an intermediate pH of 7.7. The increase of p
40 CFR 161.180 - Enforcement analytical method.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 24 2011-07-01 2011-07-01 false Enforcement analytical method. 161.180 Section 161.180 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) PESTICIDE PROGRAMS DATA REQUIREMENTS FOR REGISTRATION OF ANTIMICROBIAL PESTICIDES Product Chemistry Data...
40 CFR 161.180 - Enforcement analytical method.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 25 2013-07-01 2013-07-01 false Enforcement analytical method. 161.180 Section 161.180 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) PESTICIDE PROGRAMS DATA REQUIREMENTS FOR REGISTRATION OF ANTIMICROBIAL PESTICIDES Product Chemistry Data...
ANALYTICAL METHOD READINESS FOR THE CONTAMINANT CANDIDATE LIST
The Contaminant Candidate List (CCL), which was promulgated in March 1998, includes 50 chemical and 10 microbiological contaminants/contaminant groups. At the time of promulgation, analytical methods were available for 6 inorganic and 28 organic contaminants. Since then, 4 anal...
Analytical chemistry methods for metallic core components: Revision March 1985
Not Available
1985-03-01
This standard provides analytical chemistry methods for the analysis of alloys used to fabricate core components. These alloys are 302, 308, 316, 316-Ti, and 321 stainless steels and 600 and 718 Inconels and they may include other 300-series stainless steels.
40 CFR 161.180 - Enforcement analytical method.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 25 2012-07-01 2012-07-01 false Enforcement analytical method. 161.180 Section 161.180 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) PESTICIDE PROGRAMS DATA REQUIREMENTS FOR REGISTRATION OF ANTIMICROBIAL PESTICIDES Product Chemistry Data...
Fast and Sensitive Method for Determination of Domoic Acid in Mussel Tissue.
Barbaro, Elena; Zangrando, Roberta; Barbante, Carlo; Gambaro, Andrea
2016-01-01
Domoic acid (DA), a neurotoxic amino acid produced by diatoms, is the main cause of amnesic shellfish poisoning (ASP). In this work, we propose a very simple and fast analytical method to determine DA in mussel tissue. The method consists of two consecutive extractions and requires no purification steps, owing to a reduction in the extraction of interfering species and the application of a very sensitive and selective HILIC-MS/MS method. The procedure was validated through estimation of the trueness, extraction yield, precision, and detection and quantification limits of the analytical method. The sample preparation was also evaluated through qualitative and quantitative assessments of the matrix effect. These evaluations were conducted both on a DA-free matrix spiked with known DA concentrations and on a reference certified material (RCM). We developed a very selective LC-MS/MS method with a very low method detection limit (9 ng g⁻¹) without cleanup steps. PMID:26904720
Fast total focusing method for ultrasonic imaging
NASA Astrophysics Data System (ADS)
Carcreff, Ewen; Dao, Gavin; Braconnier, Dominique
2016-02-01
The synthetic aperture focusing technique (SAFT) and the total focusing method (TFM) have become popular tools in the field of ultrasonic non-destructive testing. In particular, they are employed for the detection and characterization of flaws. From data acquired with a transducer array, these techniques reconstruct an image of the inspected object from coherent summations. In this paper, we compare the standard technique with a migration approach. Using experimental data, we show that the migration approach is faster and offers a better signal-to-noise ratio than the standard total focusing method. Moreover, migration is particularly effective for near-surface imaging, where standard methods tend to fail. On the other hand, the migration approach is only suited to layered objects, whereas the standard technique can accommodate complex geometries. The methods are tested on homogeneous pieces containing artificial flaws such as side-drilled holes.
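The coherent summation that standard TFM performs on full-matrix-capture data can be sketched as a delay-and-sum loop. This is the standard method, not the migration variant the paper proposes, and the interface is illustrative:

```python
import numpy as np

def tfm_image(fmc, coords, pixels, c, fs):
    """Total focusing method by delay-and-sum over full-matrix-capture data.

    fmc[tx, rx, t]: A-scan for every transmit/receive element pair;
    coords: (n_elem, 2) element positions; pixels: (n_pix, 2) image points;
    c: wave speed; fs: sampling rate. Returns the focused amplitude per pixel.
    """
    n_el = len(coords)
    img = np.zeros(len(pixels))
    # Distance from every element to every image point.
    d = np.linalg.norm(coords[:, None, :] - pixels[None, :, :], axis=2)
    for tx in range(n_el):
        for rx in range(n_el):
            # Round-trip travel time mapped to a sample index.
            idx = np.round((d[tx] + d[rx]) / c * fs).astype(int)
            idx = np.clip(idx, 0, fmc.shape[2] - 1)
            img += fmc[tx, rx, idx]
    return np.abs(img)
```

The double loop over element pairs is exactly the O(N²) cost per pixel that motivates faster reformulations such as the migration approach compared in the paper.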
A New Splitting Method for Both Analytical and Preparative LC/MS
NASA Astrophysics Data System (ADS)
Cai, Yi; Adams, Daniel; Chen, Hao
2013-11-01
This paper presents a novel splitting method for liquid chromatography/mass spectrometry (LC/MS) that allows fast MS detection of LC-separated analytes and subsequent online analyte collection. In this approach, a PEEK capillary tube with a micro-orifice drilled in the tube side wall is used to connect with the LC column. A small portion of the LC eluent emerging from the orifice can be directly ionized by desorption electrospray ionization (DESI) with negligible time delay (6-10 ms), while the remaining analytes exiting the tube outlet can be collected. The DESI-MS analysis of eluted compounds shows narrow peaks and high sensitivity because of the extremely small dead volume of the orifice used for LC eluent splitting (as low as 4 nL) and the freedom to choose a favorable DESI spray solvent. In addition, online derivatization using reactive DESI is possible for supercharging proteins and enhancing their signals without introducing extra dead volume. Unlike the UV detectors used in traditional preparative LC experiments, this method is applicable to compounds without chromophores (e.g., saccharides) because an MS detector is used. Furthermore, the splitting method is well suited to monolithic column-based ultra-fast LC separation at a high elution flow rate of 4 mL/min.
Fast Particle Methods for Multiscale Phenomena Simulations
NASA Technical Reports Server (NTRS)
Koumoutsakos, P.; Wray, A.; Shariff, K.; Pohorille, Andrew
2000-01-01
We are developing particle methods oriented at improving computational modeling capabilities of multiscale physical phenomena in: (i) high Reynolds number unsteady vortical flows, (ii) particle-laden and interfacial flows, and (iii) molecular dynamics studies of nanoscale droplets and studies of the structure, functions, and evolution of the earliest living cell. The unifying computational approach involves particle methods implemented on parallel computer architectures. The inherent adaptivity, robustness, and efficiency of particle methods make them a multidisciplinary computational tool capable of bridging the gap between micro-scale and continuum flow simulations. Using efficient tree data structures, multipole expansion algorithms, and improved particle-grid interpolation, particle methods allow for simulations using millions of computational elements, making possible the resolution of a wide range of length and time scales of these important physical phenomena. The current challenges in these simulations are: (i) the proper formulation of particle methods at the molecular and continuum levels for the discretization of the governing equations; (ii) the resolution of the wide range of time and length scales governing the phenomena under investigation; (iii) the minimization of numerical artifacts that may interfere with the physics of the systems under consideration; and (iv) the parallelization of processes such as tree traversal and grid-particle interpolations. We are conducting simulations using vortex methods, molecular dynamics, and smoothed particle hydrodynamics, exploiting their unifying concepts such as the solution of the N-body problem on parallel computers, highly accurate particle-particle and grid-particle interpolations, parallel FFTs, and the formulation of processes such as diffusion in the context of particle methods. This approach enables us to transcend seemingly unrelated areas of research.
Use of scientometrics to assess nuclear and other analytical methods
Lyon, W.S.
1986-01-01
Scientometrics involves the use of quantitative methods to investigate science viewed as an information process. Scientometric studies can be useful in ascertaining which methods have been most employed for various analytical determinations, as well as for predicting which methods will continue to be used in the immediate future and which appear to be losing favor with the analytical community. Published papers in the technical literature are the primary source materials for scientometric studies; statistical methods and computer techniques are the tools. Recent studies have included growth and trends in prompt nuclear analysis; the impact of research published in a technical journal; and institutional and national representation, speakers, and topics at several IAEA conferences, at Modern Trends in Activation Analysis conferences, and at other non-nuclear-oriented conferences. Attempts have also been made to predict the future growth of various topics and techniques. 13 refs., 4 figs., 17 tabs.
Fast linear method of illumination classification
NASA Astrophysics Data System (ADS)
Cooper, Ted J.; Baqai, Farhan A.
2003-01-01
We present a simple method for estimating the scene illuminant of images obtained by a digital still camera (DSC). The proposed method utilizes basis vectors obtained from known memory color reflectances to identify the memory color objects in the image. Once the memory color pixels are identified, we use the red/green and blue/green ratios to determine the most likely illuminant in the image. The critical part of the method is to estimate the smallest set of basis vectors that closely represents the memory color reflectances. Basis vectors obtained from both Principal Component Analysis (PCA) and Independent Component Analysis (ICA) are used. We show that only two ICA basis vectors are needed to obtain an acceptable estimate.
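The ratio-based classification step can be sketched as follows. This sketch covers only the final nearest-illuminant decision; the memory-color pixel identification via basis vectors is omitted, and the function name and any reference ratios supplied to it are hypothetical:

```python
import numpy as np

def classify_illuminant(pixels, illuminants):
    """Pick the most likely illuminant from red/green and blue/green ratios.

    pixels: (n, 3) RGB values assumed to belong to a memory-color object;
    illuminants: dict mapping name -> reference (r/g, b/g) ratio pair.
    Returns the name whose reference ratios are nearest the observed ones.
    """
    r, g, b = pixels.mean(axis=0)
    obs = np.array([r / g, b / g])
    return min(illuminants, key=lambda k: np.linalg.norm(obs - illuminants[k]))
```

Working in chromaticity ratios rather than raw RGB makes the decision insensitive to overall scene brightness, which is the point of the red/green and blue/green formulation.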
Analytic methods and free-space dyadic Green's functions
NASA Astrophysics Data System (ADS)
Weiglhofer, Werner S.
1993-09-01
A number of mathematical techniques are presented which have proven successful in obtaining analytic solutions to the differential equations for the dyadic Green's functions of electromagnetic theory. The emphasis is on infinite-medium (or free-space) time-harmonic solutions throughout, thus putting the focus on the physical medium in which the electromagnetic process takes place. The medium's properties enter Maxwell's equations through the constitutive relations, and a comprehensive listing of dyadic Green's functions for which closed-form solutions exist is given. Presently, the list of media contains (achiral) isotropic, biisotropic (including chiral), generally uniaxial, electrically (or magnetically) gyrotropic, diffusive, and moving media, as well as certain plasmas. A critical evaluation of the achievements, successes, limits, and failures of the analytic techniques is provided, and a prognosis is put forward about the future place of analytic methods within the general context of the search for solutions to electromagnetic field problems.
Nascimento, Carina F; Rocha, Diogo L; Rocha, Fábio R P
2015-02-15
An environmentally friendly procedure was developed for fast determination of melamine as an adulterant of protein content in milk. Triton X-114 was used for sample clean-up and as a fluorophore, whose fluorescence was quenched by the analyte. A linear response was observed from 1.0 to 6.0 mg L⁻¹ melamine, described by the Stern-Volmer equation I₀/I = (0.999 ± 0.002) + (0.0165 ± 0.004) C_MEL (r = 0.999). The detection limit was estimated at 0.8 mg L⁻¹ (95% confidence level), which allows detection of as little as 320 μg of melamine in 100 g of milk. Coefficients of variation (n = 8) were estimated at 0.4% and 1.4% with and without melamine, respectively. Recoveries of melamine spiked into milk samples, ranging from 95% to 101%, and the similar slopes of calibration graphs obtained with and without milk indicated the absence of matrix effects. Results for different milk samples agreed with those obtained by high-performance liquid chromatography at the 95% confidence level. PMID:25236232
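The Stern-Volmer calibration used here is a straight-line fit of I₀/I against quencher concentration. A minimal sketch with made-up data; the function name and the synthetic values below are illustrative, not from the paper:

```python
import numpy as np

def stern_volmer_fit(conc, intensity, i0):
    """Least-squares fit of the Stern-Volmer relation I0/I = a + Ksv * C.

    conc: quencher concentrations; intensity: measured fluorescence at each
    concentration; i0: fluorescence without quencher. Returns (a, Ksv);
    for purely dynamic quenching the intercept a should be close to 1.
    """
    y = i0 / np.asarray(intensity)
    Ksv, a = np.polyfit(np.asarray(conc), y, 1)
    return a, Ksv
```

A detection limit can then be derived from the residual scatter of the calibration line, as is done at the 95% confidence level in the abstract.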
Analytical Methods of Decoupling the Automotive Engine Torque Roll Axis
NASA Astrophysics Data System (ADS)
JEONG, TAESEOK; SINGH, RAJENDRA
2000-06-01
This paper analytically examines the multi-dimensional mounting schemes of an automotive engine-gearbox system when excited by oscillating torques. In particular, the issue of torque roll axis decoupling is analyzed in significant detail since it is poorly understood. New dynamic decoupling axioms are presented and compared with the conventional elastic axis mounting and focalization methods. A linear time-invariant system is assumed, in addition to proportional damping. Only rigid-body modes of the powertrain are considered, and the chassis elements are assumed to be rigid. Several simplified physical systems are considered, and new closed-form solutions for symmetric and asymmetric engine-mounting systems are developed. These clearly explain the design concepts for the 4-point mounting scheme. Our analytical solutions match the existing design formulations that are only applicable to symmetric geometries. Spectra for all six rigid-body motions are predicted using the alternate decoupling methods, and the closed-form solutions are verified. Also, our method is validated by comparing modal solutions with prior experimental and analytical studies. Parametric design studies are carried out to illustrate the methodology. Chief contributions of this research include the development of new or refined analytical models and closed-form solutions, along with improved design strategies for torque roll axis decoupling.
Zeb, Alam; Ullah, Fareed
2016-01-01
A simple and highly sensitive spectrophotometric method was developed for the determination of thiobarbituric acid reactive substances (TBARS) as a marker of lipid peroxidation in fried fast foods. The method uses the reaction of malondialdehyde (MDA) and TBA in a glacial acetic acid medium. The method was precise, sensitive, and highly reproducible for the quantitative determination of TBARS. The precision of the extractions and the analytical procedure was very high compared with reported methods. The method was used to determine the TBARS contents in fried fast foods such as Shami kebab, samosa, fried bread, and potato chips. Shami kebab, samosa, and potato chips had a higher amount of TBARS in the glacial acetic acid-water extraction system than in pure glacial acetic acid, and vice versa for fried bread samples. The method can successfully be used for the determination of TBARS in other food matrices, especially in quality control in the food industry. PMID:27123360
Fast tomographic methods for the tokamak ISTTOK
Carvalho, P. J.; Coelho, R.; Neto, A.; Pereira, T.; Silva, C.; Fernandes, H.; Gori, S.; Toussaint, U. v.
2008-04-07
The achievement of long-duration, alternating current discharges on the tokamak ISTTOK requires a real-time plasma position control system. Plasma position determination based on the magnetic probe system has been found to be inadequate during current inversion because of the reduced plasma current. A tomography diagnostic has therefore been installed to supply the required feedback to the control system. Several tomographic methods are available for soft X-ray or bolometric tomography, among which the Cormack and neural network methods stand out due to their inherent speed of up to 1000 reconstructions per second with currently available technology. This paper discusses the application of these algorithms on fusion devices while comparing the performance and reliability of the results. It was found that although the Cormack-based inversion is faster, the neural network reconstruction has fewer artifacts and is more accurate.
Fast timing methods for semiconductor detectors. Revision
Spieler, H.
1984-10-01
This tutorial paper discusses the basic parameters which determine the accuracy of timing measurements and their effect in a practical application, specifically timing with thin-surface barrier detectors. The discussion focusses on properties of the detector, low-noise amplifiers, trigger circuits and time converters. New material presented in this paper includes bipolar transistor input stages with noise performance superior to currently available FETs, noiseless input terminations in sub-nanosecond preamplifiers and methods using transmission lines to couple the detector to remotely mounted preamplifiers. Trigger circuits are characterized in terms of effective rise time, equivalent input noise and residual jitter.
A fast full constraints unmixing method
NASA Astrophysics Data System (ADS)
Ye, Zhang; Wei, Ran; Wang, Qing Yan
2012-10-01
Mixed pixels are inevitable due to the low spatial resolution of hyperspectral images (HSI). The linear spectral mixture model (LSMM) is a classical mathematical model relating the spectrum of a mixed pixel to the spectra of its individual constituent components. Solving the LSMM, namely unmixing, is essentially a constrained linear optimization problem, usually implemented as iterations along a descent direction together with a stopping criterion that terminates the algorithm. This criterion must be set properly in order to balance the accuracy and speed of the solution. However, the criteria in existing algorithms are too strict, which may slow convergence. In this paper, by relaxing the constraints in unmixing, a new stopping rule is proposed that accelerates convergence. Experimental results, in terms of both runtime and iteration counts, show that our method accelerates the convergence process at the cost of only a slight decrease in the quality of the results.
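The constrained optimization described above can be sketched concretely. The Python sketch below solves the fully constrained LSMM (abundances non-negative and summing to one) by projected gradient descent with a movement-based stopping rule; the function names, step size, and tolerance are illustrative assumptions, not the specific algorithm or stopping criterion of the paper.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection onto the probability simplex {x >= 0, sum(x) = 1}."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - 1.0))[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1)
    return np.maximum(v - theta, 0.0)

def fcls_unmix(E, y, tol=1e-6, max_iter=5000):
    """Fully constrained unmixing of pixel spectrum y against the endmember
    matrix E (bands x endmembers) by projected gradient descent."""
    n = E.shape[1]
    x = np.full(n, 1.0 / n)           # feasible starting point
    L = np.linalg.norm(E, 2) ** 2     # Lipschitz constant of the gradient
    for _ in range(max_iter):
        grad = E.T @ (E @ x - y)
        x_new = project_simplex(x - grad / L)
        # movement-based stopping rule: stop when the iterate barely moves
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```

A looser `tol` trades a small loss in abundance accuracy for fewer iterations, which is the trade-off the abstract discusses.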
Active controls: A look at analytical methods and associated tools
NASA Technical Reports Server (NTRS)
Newsom, J. R.; Adams, W. M., Jr.; Mukhopadhyay, V.; Tiffany, S. H.; Abel, I.
1984-01-01
A review of analytical methods and associated tools for active controls analysis and design problems is presented. Approaches employed to develop mathematical models suitable for control system analysis and/or design are discussed. Significant efforts have been expended to develop tools to generate the models from the standpoint of control system designers' needs and develop the tools necessary to analyze and design active control systems. Representative examples of these tools are discussed. Examples where results from the methods and tools have been compared with experimental data are also presented. Finally, a perspective on future trends in analysis and design methods is presented.
Analytical method for determination of benzenearsonic acids
Mitchell, G.L.; Bayse, G.S.
1988-01-01
A sensitive analytical method has been modified for use in the determination of several benzenearsonic acids, including arsanilic acid (p-aminobenzenearsonic acid), Roxarsone (3-nitro-4-hydroxybenzenearsonic acid), and p-ureidobenzenearsonic acid. Controlled acid hydrolysis of these compounds produces a quantitative yield of arsenate, which is measured colorimetrically as the molybdenum blue complex at 865 nm. The method obeys Beer's law over the micromolar concentration range. These benzenearsonic acids are routinely used as feed additives in poultry and swine production. This method should be useful in assessing tissue levels of the arsenicals in appropriate extracts.
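The quantitation step rests on the Beer-Lambert law, which can be shown with a short worked example. The molar absorptivity used below is a placeholder chosen only to land in the micromolar range the abstract mentions, not a value from the paper.

```python
def concentration_from_absorbance(absorbance, epsilon, path_cm=1.0):
    """Beer-Lambert law: A = epsilon * l * c, hence c = A / (epsilon * l).
    epsilon in L mol^-1 cm^-1, path length in cm, result in mol/L."""
    return absorbance / (epsilon * path_cm)

# Illustrative values only: epsilon for the molybdenum blue complex at
# 865 nm is an assumed placeholder, not a value reported in the abstract.
eps = 25000.0   # assumed molar absorptivity, L mol^-1 cm^-1
A = 0.50        # measured absorbance at 865 nm
c = concentration_from_absorbance(A, eps)   # -> 2e-5 mol/L, micromolar range
```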
Analytical Methods for Measuring Mercury in Water, Sediment and Biota
Lasorsa, Brenda K.; Gill, Gary A.; Horvat, Milena
2012-06-07
Mercury (Hg) exists in a large number of physical and chemical forms with a wide range of properties. Conversion between these different forms provides the basis for mercury's complex distribution pattern in local and global cycles and for its biological enrichment and effects. Since the 1960s, the growing awareness of environmental mercury pollution has stimulated the development of more accurate, precise and efficient methods of determining mercury and its compounds in a wide variety of matrices. During recent years new analytical techniques have become available that have contributed significantly to the understanding of mercury chemistry in natural systems. In particular, these include ultra-sensitive and specific analytical equipment and contamination-free methodologies. These improvements allow the determination of total mercury as well as the major species of mercury in water, sediments and soils, and biota. Analytical methods are selected depending on the nature of the sample, the concentration levels of mercury, and the species or fraction to be quantified. The terms "speciation" and "fractionation" in analytical chemistry were addressed by the International Union of Pure and Applied Chemistry (IUPAC), which published guidelines (Templeton et al., 2000) and recommendations for the definition of speciation analysis. "Speciation analysis is the analytical activity of identifying and/or measuring the quantities of one or more individual chemical species in a sample. The chemical species are specific forms of an element defined as to isotopic composition, electronic or oxidation state, and/or complex or molecular structure. The speciation of an element is the distribution of an element amongst defined chemical species in a system." In cases where it is not possible to determine the concentration of the different individual chemical species that sum up the total concentration of an element in a given matrix, meaning it is impossible to …
Fast and accurate determination of the Wigner rotation matrices in the fast multipole method.
Dachsel, Holger
2006-04-14
In the rotation based fast multipole method the accurate determination of the Wigner rotation matrices is essential. The combination of two recurrence relations and the control of the error accumulations allow a very precise determination of the Wigner rotation matrices. The recurrence formulas are simple, efficient, and numerically stable. The advantages over other recursions are documented. PMID:16626188
Fast Erase Method and Apparatus For Digital Media
NASA Technical Reports Server (NTRS)
Oakely, Ernest C. (Inventor)
2006-01-01
A non-contact fast erase method for erasing information stored on a magnetic or optical media. The magnetic media element includes a magnetic surface affixed to a toroidal conductor and stores information in a magnetic polarization pattern. The fast erase method includes applying an alternating current to a planar inductive element positioned near the toroidal conductor, inducing an alternating current in the toroidal conductor, and heating the magnetic surface to a temperature that exceeds the Curie-point so that information stored on the magnetic media element is permanently erased. The optical disc element stores information in a plurality of locations being defined by pits and lands in a toroidal conductive layer. The fast erase method includes similarly inducing a plurality of currents in the optical media element conductive layer and melting a predetermined portion of the conductive layer so that the information stored on the optical medium is destroyed.
Control of irradiated food: Recent developments in analytical detection methods.
NASA Astrophysics Data System (ADS)
Delincée, H.
1993-07-01
An overview is given of recent international efforts towards the development of analytical detection methods for radiation-processed foods, i.e. the ADMIT programme (FAO/IAEA) and that of the BCR (EC). Several larger collaborative studies have already taken place, e.g. ESR of bones from chicken, pork, beef, frog legs and fish; thermoluminescence of insoluble minerals isolated from herbs and spices; GC analysis of long-chain hydrocarbons derived from the lipid fraction of chicken and other meats; and the microbiological APC/DEFT procedure for spices. These methods could soon be implemented in international standard protocols.
A new analytical method for groundwater recharge and discharge estimation
NASA Astrophysics Data System (ADS)
Liang, Xiuyu; Zhang, You-Kuan
2012-07-01
A new analytical method was proposed for groundwater recharge and discharge estimation in an unconfined aquifer. The method is based on an analytical solution to the Boussinesq equation linearized in terms of h², where h is the water table elevation, with a time-dependent source term. The derived solution was validated against numerical simulation and shown to be a better approximation than an existing solution to the Boussinesq equation linearized in terms of h. By calibrating against the observed water levels in a monitoring well over a period of 100 days, we showed that the proposed method can be used to estimate daily recharge (R) and evapotranspiration (ET) as well as the lateral drainage. The total R was reasonably estimated with a water-table fluctuation (WTF) method when water table measurements away from a fixed-head boundary were used, but the total ET was overestimated and the total net recharge was underestimated because the WTF method neglects lateral drainage and aquifer storage.
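The linearization in h² mentioned here can be sketched in equations. The symbols below (K hydraulic conductivity, S_y specific yield, h̄ mean saturated thickness, W(t) a time-dependent source combining recharge, ET and drainage) are standard assumptions; the abstract does not give the paper's exact notation.

```latex
% 1-D Boussinesq equation for an unconfined aquifer (assumed form):
S_y \frac{\partial h}{\partial t}
  = K \frac{\partial}{\partial x}\!\left( h \frac{\partial h}{\partial x} \right) + W(t)
% Using h\,\partial h/\partial x = \tfrac{1}{2}\,\partial(h^2)/\partial x and
% approximating h \approx \bar{h} in the storage term yields the equation
% linearized in terms of h^2:
\frac{\partial (h^2)}{\partial t}
  = \frac{K\bar{h}}{S_y}\,\frac{\partial^2 (h^2)}{\partial x^2}
  + \frac{2\bar{h}}{S_y}\, W(t)
```

The linearized equation is a diffusion equation for h² with a time-dependent source, which is what admits the analytical solution the abstract describes.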
An analytical method for Mathieu oscillator based on method of variation of parameter
NASA Astrophysics Data System (ADS)
Li, Xianghong; Hou, Jingyu; Chen, Jufeng
2016-08-01
A simple but very accurate analytical method for the forced Mathieu oscillator is proposed, based on the idea of the method of variation of parameters. Assuming that the time-varying parameter in the Mathieu oscillator is constant, one can easily obtain an accurate analytical solution. An approximate analytical solution for the Mathieu oscillator is then established by substituting the periodic time-varying parameter for the constant one in that solution. In order to verify the correctness and precision of the proposed analytical method, the first-order and ninth-order approximate solutions by the harmonic balance method (HBM) are also presented. Comparisons show that the results of the proposed analytical method agree very well with numerical simulation. Moreover, the precision of the proposed method is not only higher than that of the first-order HBM approximation, but also better than that of the ninth-order HBM approximation over large ranges of the system parameters.
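The two-step idea described above can be sketched in equations. The specific form of the forced Mathieu equation and all symbols below are assumptions, since the abstract does not state them; this is one plausible reading of the procedure, not the paper's derivation.

```latex
% Assumed forced Mathieu oscillator:
\ddot{x} + \left(\delta + 2\varepsilon\cos 2t\right) x = F\cos\omega t
% Step 1: freeze the time-varying parameter, p_0 = \mathrm{const}.
% The steady-state solution of \ddot{x} + p_0 x = F\cos\omega t is
x(t) = \frac{F}{p_0 - \omega^2}\,\cos\omega t
% Step 2: restore the periodic parameter in place of the constant:
x(t) \approx \frac{F}{\delta + 2\varepsilon\cos 2t - \omega^2}\,\cos\omega t
```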
A new simple multidomain fast multipole boundary element method
NASA Astrophysics Data System (ADS)
Huang, S.; Liu, Y. J.
2016-06-01
A simple multidomain fast multipole boundary element method (BEM) for solving potential problems is presented in this paper, which can be applied to solve a true multidomain problem or a large-scale single-domain problem using the domain decomposition technique. In this multidomain BEM, the coefficient matrix is formed simply by assembling the coefficient matrices of each subdomain and the interface conditions between subdomains, without eliminating any unknown variables on the interfaces. Compared with other conventional multidomain BEM approaches, this new approach is more efficient with the fast multipole method, regardless of how the subdomains are connected. Instead of solving the linear system of equations directly, the entire coefficient matrix is partitioned and decomposed using the Schur complement. Numerical results show that the new multidomain fast multipole BEM uses fewer iterations in most cases with the iterative equation solver, and less CPU time than the traditional fast multipole BEM, in solving large-scale BEM models. A large-scale fuel cell model with more than 6 million elements was solved successfully on a cluster within 3 h using the new multidomain fast multipole BEM.
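The Schur-complement elimination mentioned above can be illustrated on a generic 2x2 block system; the partitioning below is a textbook sketch with dense NumPy blocks, not the paper's BEM-specific matrix structure.

```python
import numpy as np

def schur_solve(A, B, C, D, b1, b2):
    """Solve the block system [[A, B], [C, D]] [x1; x2] = [b1; b2]
    by eliminating x1 through the Schur complement S = D - C A^{-1} B."""
    A_inv_B = np.linalg.solve(A, B)
    A_inv_b1 = np.linalg.solve(A, b1)
    S = D - C @ A_inv_B                       # Schur complement of block A
    x2 = np.linalg.solve(S, b2 - C @ A_inv_b1)  # reduced system for x2
    x1 = A_inv_b1 - A_inv_B @ x2                # back-substitute for x1
    return x1, x2
```

In a multidomain setting the payoff is that the reduced system only couples interface unknowns, so each subdomain block can be handled independently.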
Methods for quantifying uncertainty in fast reactor analyses.
Fanning, T. H.; Fischer, P. F.
2008-04-07
Liquid-metal-cooled fast reactors in the form of sodium-cooled fast reactors have been successfully built and tested in the U.S. and throughout the world. However, no fast reactor has operated in the U.S. for nearly fourteen years. More importantly, the U.S. has not constructed a fast reactor in nearly 30 years. In addition to reestablishing the necessary industrial infrastructure, the development, testing, and licensing of a new, advanced fast reactor concept will likely require a significant base technology program that will rely more heavily on modeling and simulation than has been done in the past. The ability to quantify uncertainty in modeling and simulations will be an important part of any experimental program and can provide added confidence that established design limits and safety margins are appropriate. In addition, there is an increasing demand from the nuclear industry for best-estimate analysis methods to provide confidence bounds along with their results. The ability to quantify uncertainty will be an important component of modeling used to support design, testing, and experimental programs. Three avenues of uncertainty quantification (UQ) investigation are proposed. Two relatively new approaches are described that can be directly coupled to simulation codes currently being developed under the Advanced Simulation and Modeling program within the Reactor Campaign. A third approach, based on robust Monte Carlo methods, can be used in conjunction with existing reactor analysis codes as a means of verification and validation of the more detailed approaches.
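The Monte Carlo avenue mentioned above can be sketched in a few lines: sample uncertain inputs, push them through the model, and summarize the output spread. The toy "reactor response" below is purely illustrative; its functional form, parameter names, and input distributions are all assumptions, not any real reactor analysis code.

```python
import numpy as np

def monte_carlo_uq(model, sample_inputs, n=20000, seed=0):
    """Plain Monte Carlo uncertainty propagation: draw inputs, evaluate
    the model, and report the output mean and a 95% interval."""
    rng = np.random.default_rng(seed)
    outputs = np.array([model(*sample_inputs(rng)) for _ in range(n)])
    lo, hi = np.percentile(outputs, [2.5, 97.5])
    return outputs.mean(), (lo, hi)

# Toy surrogate: peak temperature as a function of normalized power and
# flow rate (an invented stand-in, not a real reactor model).
def peak_temp(power, flow):
    return 300.0 + 50.0 * power / flow

def sample_inputs(rng):
    # assumed 5% (1-sigma) uncertainty on both inputs
    return rng.normal(1.0, 0.05), rng.normal(1.0, 0.05)

mean, (lo, hi) = monte_carlo_uq(peak_temp, sample_inputs)
```

The returned interval is exactly the kind of confidence bound the best-estimate-plus-uncertainty approach calls for.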
A fast multipole boundary element method for solving two-dimensional thermoelasticity problems
NASA Astrophysics Data System (ADS)
Liu, Y. J.; Li, Y. X.; Huang, S.
2014-09-01
A fast multipole boundary element method (BEM) for solving general uncoupled steady-state thermoelasticity problems in two dimensions is presented in this paper. The fast multipole BEM is developed to handle the thermal term in the thermoelasticity boundary integral equation involving temperature and heat flux distributions on the boundary of the problem domain. Fast multipole expansions, local expansions and related translations for the thermal term are derived using complex variables. Several numerical examples are presented to show the accuracy and effectiveness of the developed fast multipole BEM in calculating the displacement and stress fields for 2-D elastic bodies under various thermal loads, including thin structure domains that are difficult to mesh using the finite element method (FEM). The BEM results using constant elements are found to be accurate compared with the analytical solutions, and the accuracy of the BEM results is found to be comparable to that of the FEM with linear elements. In addition, the BEM offers the ease of use in generating the mesh for a thin structure domain or a domain with complicated geometry, such as a perforated plate with randomly distributed holes for which the FEM fails to provide an adequate mesh. These results clearly demonstrate the potential of the developed fast multipole BEM for solving 2-D thermoelasticity problems.
Aurigemma, Christine; Farrell, William
2010-09-24
Medicinal chemists often depend on analytical instrumentation for reaction monitoring and product confirmation at all stages of pharmaceutical discovery and development. To obtain pure compounds for biological assays, the removal of side products and final compounds through purification is often necessary. Prior to purification, chemists often utilize open-access analytical LC/MS instruments because mass confirmation is fast and reliable, and the chromatographic separation of most sample constituents is sufficient. Supercritical fluid chromatography (SFC) is often used as an orthogonal technique to HPLC or when isolation of the free base of a compound is desired. In laboratories where SFC is the predominant technique for analysis and purification of compounds, a reasonable approach for quickly determining suitable purification conditions is to screen the sample against different columns. This can be a bottleneck to the purification process. To commission SFC for open-access use, a walk-up analytical SFC/MS screening system was implemented in the medicinal chemistry laboratory. Each sample is automatically screened through six column/method conditions, and on-demand data processing occurs for the chromatographers after each screening method is complete. This paper highlights the "FastTrack" approach to expediting samples through purification. PMID:20728893
A fast multipole hybrid boundary node method for composite materials
NASA Astrophysics Data System (ADS)
Wang, Qiao; Miao, Yu; Zhu, Hongping
2013-06-01
This article presents a multi-domain fast multipole hybrid boundary node method for composite materials in 3D elasticity. The hybrid boundary node method (hybrid BNM) is a meshless method which only requires nodes constructed on the surface of a domain. The method is applied to the 3D simulation of composite materials by a multi-domain solver and accelerated by the fast multipole method (FMM) in this paper. The preconditioned GMRES is employed to solve the final system equation, and preconditioning techniques are discussed. The matrix-vector multiplication in each iteration is divided into smaller-scale multiplications at the sub-domain level, which are then accelerated by the FMM within the individual sub-domains. The computed matrix-vector products at the sub-domain level are then combined according to the continuity conditions on the interfaces. The algorithm is implemented in a computer code written in C++. Numerical results show that the technique is accurate and efficient.
Organic analysis and analytical methods development: FY 1995 progress report
Clauss, S.A.; Hoopes, V.; Rau, J.
1995-09-01
This report describes the status of organic analyses and developing analytical methods to account for the organic components in Hanford waste tanks, with particular emphasis on tanks assigned to the Flammable Gas Watch List. The methods that have been developed are illustrated by their application to samples obtained from Tank 241-SY-103 (Tank 103-SY). The analytical data are to serve as an example of the status of methods development and application. Samples of the convective and nonconvective layers from Tank 103-SY were analyzed for total organic carbon (TOC). The TOC value obtained for the nonconvective layer using the hot persulfate method was 10,500 µg C/g. The TOC value obtained from samples of Tank 101-SY was 11,000 µg C/g. The average value for the TOC of the convective layer was 6400 µg C/g. Chelator and chelator fragments in Tank 103-SY samples were identified using derivatization gas chromatography/mass spectrometry (GC/MS). Organic components were quantified using GC/flame ionization detection. Major components in both the convective- and nonconvective-layer samples include ethylenediaminetetraacetic acid (EDTA), nitrilotriacetic acid (NTA), succinic acid, nitrosoiminodiacetic acid (NIDA), citric acid, and ethylenediaminetriacetic acid (ED3A). Preliminary results also indicate the presence of C16 and C18 carboxylic acids in the nonconvective-layer sample. Oxalic acid was one of the major components in the nonconvective layer as determined by derivatization GC/flame ionization detection.
ANALYTICAL METHODS FOR KINETIC STUDIES OF BIOLOGICAL INTERACTIONS: A REVIEW
Zheng, Xiwei; Bi, Cong; Li, Zhao; Podariu, Maria; Hage, David S.
2015-01-01
The rates at which biological interactions occur can provide important information concerning the mechanism and behavior of these processes in living systems. This review discusses several analytical methods that can be used to examine the kinetics of biological interactions. These techniques include common or traditional methods such as stopped-flow analysis and surface plasmon resonance spectroscopy, as well as alternative methods based on affinity chromatography and capillary electrophoresis. The general principles and theory behind these approaches are examined, and it is shown how each technique can be utilized to provide information on the kinetics of biological interactions. Examples of applications are also given for each method. In addition, a discussion is provided on the relative advantages or potential limitations of each technique regarding its use in kinetic studies. PMID:25700721
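For techniques such as stopped-flow analysis and SPR, a common data-reduction step is the pseudo-first-order model, in which the observed rate varies linearly with ligand concentration: k_obs = k_on·[L] + k_off. The sketch below fits that line to recover the rate constants; the numerical values are invented for illustration and are not from the review.

```python
import numpy as np

def fit_on_off_rates(ligand_conc, k_obs):
    """Estimate k_on and k_off from pseudo-first-order kinetics, where
    k_obs = k_on * [L] + k_off (e.g. from SPR or stopped-flow traces).
    A straight-line least-squares fit gives slope k_on, intercept k_off."""
    k_on, k_off = np.polyfit(ligand_conc, k_obs, 1)
    return k_on, k_off

# Synthetic example with assumed values: k_on = 1e5 M^-1 s^-1, k_off = 0.01 s^-1
L = np.array([1e-6, 2e-6, 5e-6, 1e-5])     # ligand concentrations, mol/L
k_obs = 1e5 * L + 0.01                     # observed rates, s^-1
k_on, k_off = fit_on_off_rates(L, k_obs)
K_D = k_off / k_on                         # equilibrium dissociation constant
```

The ratio k_off/k_on links the kinetic measurement back to the equilibrium affinity, which is why these fits are informative about mechanism.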
Evolution of microbiological analytical methods for dairy industry needs
Sohier, Danièle; Pavan, Sonia; Riou, Armelle; Combrisson, Jérôme; Postollec, Florence
2014-01-01
Traditionally, culture-based methods have been used to enumerate microbial populations in dairy products. Recent developments in molecular methods now enable faster and more sensitive analyses than classical microbiology procedures. These molecular tools allow a detailed characterization of cell physiological states and bacterial fitness and thus, offer new perspectives to integration of microbial physiology monitoring to improve industrial processes. This review summarizes the methods described to enumerate and characterize physiological states of technological microbiota in dairy products, and discusses the current deficiencies in relation to the industry’s needs. Recent studies show that Polymerase chain reaction-based methods can successfully be applied to quantify fermenting microbes and probiotics in dairy products. Flow cytometry and omics technologies also show interesting analytical potentialities. However, they still suffer from a lack of validation and standardization for quality control analyses, as reflected by the absence of performance studies and official international standards. PMID:24570675
NASA Astrophysics Data System (ADS)
Atteia, O.; Höhener, P.
2012-09-01
The aim of this work was to extend and validate the flux tube-mixed instantaneous and kinetics superposition sequence approach (FT-MIKSS) to reaction chains of degrading species. Existing analytical solutions for the reactive transport of chains of decaying solutes were embedded in the flux-tube approach in order to conceive a semi-analytical model that allows fast parameter fitting. The model was applied to chloroethenes undergoing reductive dechlorination and oxidation in homogeneous and heterogeneous aquifers with sorption. The results from the semi-analytical model were compared to results from three numerical models (RT3D, PHT3D, PHAST). All models were validated in a homogeneous domain against an existing analytical solution. In heterogeneous domains, we found significant differences between the four models. FT-MIKSS gave intermediate results for all modelled cases, and its results were obtained almost instantaneously, whereas the other models had calculation times of up to several hours. Chloroethene plumes and redox conditions at the Plattsburgh field site were realistically modelled by FT-MIKSS, although the results differed somewhat from those of PHT3D and PHAST. It is concluded that it may be tedious to obtain correct modelling results in heterogeneous media with degradation chain reactions and that a comparison of two different models may be useful. FT-MIKSS is a valuable tool for fast parameter fitting at field sites and should be used in the preparation of longer model runs with other numerical models.
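The analytical building block for a chain of decaying solutes is the sequential first-order solution (Bateman-type), sketched below for a two-member chain A -> B with distinct rate constants. This omits the advection-dispersion transport that the flux-tube approach adds, so it is only the reaction part of such a model; names and rate values are illustrative.

```python
import numpy as np

def chain_concentrations(c0, k1, k2, t):
    """Analytical solution for a sequential first-order degradation chain
    A -> B -> ... (e.g. reductive dechlorination of chloroethenes),
    assuming distinct rate constants k1 != k2 and B(0) = 0.
    Returns the concentrations of A and B at time t."""
    a = c0 * np.exp(-k1 * t)
    b = c0 * k1 / (k2 - k1) * (np.exp(-k1 * t) - np.exp(-k2 * t))
    return a, b
```

Longer chains follow the same pattern, with one exponential term per upstream member; embedding these expressions along flux tubes is what makes the semi-analytical fitting fast.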
Igor D. Kaganovich; Edward A. Startsev; Ronald C. Davidson
2003-11-25
Plasma neutralization of an intense ion beam pulse is of interest for many applications, including plasma lenses, heavy ion fusion, and high energy physics. Comprehensive analytical, numerical, and experimental studies are underway to investigate the complex interaction of a fast ion beam with a background plasma. The positively charged ion beam attracts plasma electrons, and as a result the plasma electrons tend to neutralize the beam charge and current. A suite of particle-in-cell codes has been developed to study the propagation of an ion beam pulse through the background plasma. For quasi-steady-state propagation of the ion beam pulse, an analytical theory has been developed using the assumption of long charge bunches and conservation of generalized vorticity. The analytical results agree well with the results of the numerical simulations. Visualization of the data obtained in the numerical simulations shows complex collective phenomena during beam entry into and exit from the plasma.
Fast and stable numerical method for neuronal modelling
NASA Astrophysics Data System (ADS)
Hashemi, Soheil; Abdolali, Ali
2016-11-01
Excitable cell modelling is of prime interest in predicting and targeting neural activity. Two main limitations in solving the related equations are the speed and the stability of the numerical method. Since there is a tradeoff between accuracy and speed, most previously presented methods for solving the relevant partial differential equations (PDEs) favour one side; greater speed permits more detailed simulations and therefore better device design. By evaluating the variables of the finite-difference equations at the proper time levels and calculating the unknowns in a specific sequence, a fast, stable and accurate method for solving neural partial differential equations is introduced in this paper. Propagation of the action potential in the giant axon is studied with the proposed method and with traditional methods, and the speed, consistency and stability of the methods are compared and discussed. The proposed method is as fast as forward methods and as stable as backward methods; forward methods are the fastest available, while backward methods are stable under all circumstances. Complex structures can therefore be simulated with the proposed method owing to its speed and stability.
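The forward/backward trade-off the abstract refers to can be demonstrated on the simplest diffusive PDE (the spatial part of a cable-type equation), u_t = D u_xx. Forward Euler is cheap but stable only for r = D*dt/dx² <= 1/2, while backward Euler is stable for any r at the cost of a linear solve per step. This is a generic textbook illustration, not the authors' scheme.

```python
import numpy as np

def diffusion_step_explicit(u, r):
    """Forward-Euler step for u_t = D u_xx with r = D*dt/dx**2.
    Fast (no solve), but stable only for r <= 0.5."""
    u_new = u.copy()
    u_new[1:-1] = u[1:-1] + r * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    return u_new                      # endpoints held fixed (Dirichlet)

def diffusion_step_implicit(u, r):
    """Backward-Euler step: solve (I - r*L) u_new = u.
    Needs a linear solve per step, but stable for any r."""
    n = len(u)
    A = np.eye(n)                     # boundary rows stay identity (Dirichlet)
    for i in range(1, n - 1):
        A[i, i - 1] = A[i, i + 1] = -r
        A[i, i] = 1.0 + 2.0 * r
    return np.linalg.solve(A, u)
```

Running both from a spike initial condition with r = 0.6 shows the explicit scheme diverging while the implicit one stays bounded, which is exactly the stability gap a combined method tries to close.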
Analytical methods for human biomonitoring of pesticides. A review.
Yusa, Vicent; Millet, Maurice; Coscolla, Clara; Roca, Marta
2015-09-01
Biomonitoring of both currently-used and banned-persistent pesticides is a very useful tool for assessing human exposure to these chemicals. In this review, we present current approaches and recent advances in the analytical methods for determining the biomarkers of exposure to pesticides in the most commonly used specimens, such as blood, urine, and breast milk, and in emerging non-invasive matrices such as hair and meconium. We critically discuss the main applications for sample treatment, and the instrumental techniques currently used to determine the most relevant pesticide biomarkers. We finally look at the future trends in this field. PMID:26388361
The Augmented Fast Marching Method for Level Set Reinitialization
NASA Astrophysics Data System (ADS)
Salac, David
2011-11-01
The modeling of multiphase fluid flows typically requires accurate descriptions of the interface and curvature of the interface. Here a new reinitialization technique based on the fast marching method for gradient-augmented level sets is presented. The method is explained and results in both 2D and 3D are presented. Overall the method is more accurate than reinitialization methods based on similar stencils and the resulting curvature fields are much smoother. The method will also be demonstrated in a sample application investigating the dynamic behavior of vesicles in general fluid flows. Support provided by University at Buffalo - SUNY.
Performance of analytical methods for tomographic gamma scanning
Prettyman, T.H.; Mercer, D.J.
1997-06-01
The use of gamma-ray computerized tomography for nondestructive assay of radioactive materials has led to the development of specialized analytical methods. Over the past few years, Los Alamos has developed and implemented a computer code, called ARC-TGS, for the analysis of data obtained by tomographic gamma scanning (TGS). ARC-TGS reduces TGS transmission and emission tomographic data, providing the user with images of the sample contents, the activity or mass of selected radionuclides, and an estimate of the uncertainty in the measured quantities. The results provided by ARC-TGS can be corrected for self-attenuation when the isotope of interest emits more than one gamma-ray. In addition, ARC-TGS provides information needed to estimate TGS quantification limits and to estimate the scan time needed to screen for small amounts of radioactivity. In this report, an overview of the analytical methods used by ARC-TGS is presented along with an assessment of the performance of these methods for TGS.
Fast adaptive composite grid methods on distributed parallel architectures
NASA Technical Reports Server (NTRS)
Lemke, Max; Quinlan, Daniel
1992-01-01
The fast adaptive composite (FAC) grid method is compared with its asynchronous variant (AFAC) under a variety of conditions, including vectorization and parallelization. Results are given for distributed-memory multiprocessor architectures (SUPRENUM, Intel iPSC/2 and iPSC/860). It is shown that the good performance of AFAC, and its superiority over FAC in a parallel environment, is a property of the algorithm and not dependent on peculiarities of any machine.
Using analytic network process for evaluating mobile text entry methods.
Ocampo, Lanndon A; Seva, Rosemary R
2016-01-01
This paper highlights a preference evaluation methodology for text entry methods on a touch-keyboard smartphone using the analytic network process (ANP). Evaluations of text entry methods in the literature mainly consider speed and accuracy. This study presents an alternative means of selecting a text entry method that considers user preference. A case study was carried out with a group of experts who were asked to develop a selection decision model for five text entry methods. The decision problem is flexible enough to reflect the interdependencies of decision elements that are necessary for describing real-life conditions. Results showed that the QWERTY method is preferred over the other text entry methods, while the arrangement of keys is the most important criterion in characterizing a sound method. Sensitivity analysis, using simulation of normally distributed random numbers under fairly large perturbations, showed the foregoing results to be reliable enough to reflect robust judgment. The main contribution of this paper is the introduction of a multi-criteria decision approach to the preference evaluation of text entry methods. PMID:26360215
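The priority calculation at the heart of ANP (and of its special case, AHP) is the principal eigenvector of a pairwise-comparison matrix; the supermatrix step then combines such vectors across interdependent clusters. A minimal sketch of the eigenvector step via power iteration, with made-up judgments rather than the study's:

```python
def priority_vector(M, iters=200):
    """Principal right eigenvector of a positive pairwise-comparison
    matrix via power iteration, normalized to sum to 1 -- the basic
    priority computation underlying AHP/ANP supermatrix methods."""
    n = len(M)
    v = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        v = [x / s for x in w]
    return v

# Hypothetical judgments for three text entry methods A, B, C:
# A is judged 3x as preferable as B and 5x as preferable as C.
M = [[1.0, 3.0, 5.0],
     [1.0 / 3.0, 1.0, 2.0],
     [1.0 / 5.0, 0.5, 1.0]]
w = priority_vector(M)  # largest weight goes to alternative A
```

In a full ANP model, one such vector is computed per comparison set and assembled into the (column-stochastic) supermatrix, whose limit powers give the global priorities.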
The evolution of analytical chemistry methods in foodomics.
Gallo, Monica; Ferranti, Pasquale
2016-01-01
The methodologies of food analysis have greatly evolved over the past 100 years, from basic assays based on solution chemistry to those relying on modern instrumental platforms. Today, the development and optimization of integrated analytical approaches based on different techniques to study the chemical composition of a food at the molecular level make it possible to define a 'food fingerprint', valuable for assessing the nutritional value, safety, quality, authenticity and security of foods. This comprehensive strategy, termed foodomics, includes emerging work areas such as food chemistry, phytochemistry, advanced analytical techniques, biosensors and bioinformatics. Integrated approaches can help to elucidate some critical issues in food analysis, but also to face the new challenges of a globalized world: security, sustainability and food production in response to world-wide environmental changes. They include the development of powerful analytical methods to ensure the origin and quality of food, as well as the discovery of biomarkers to identify potential food safety problems. In the area of nutrition, the future challenge is to identify, through specific biomarkers, individual peculiarities that allow early diagnosis and then a personalized prognosis and diet for patients with food-related disorders. Far from aiming at an exhaustive review of the abundant literature dedicated to the applications of omic sciences in food analysis, we explore how classical approaches, such as those used in chemistry and biochemistry, have evolved to intersect with the new omics technologies and advance our understanding of the complexity of foods. Perhaps most importantly, a key objective of the review is to explore the development of simple and robust methods for the fully applied use of omics data in food science. PMID:26363946
Efficient Displacement Discontinuity Method Using Fast Multipole Techniques
Morris, J.P.; Blair, S.C.
2000-02-18
The Displacement Discontinuity method has been widely used in geomechanics because it accurately captures the behavior of fractures within a rock mass by explicitly accounting for discontinuities. Unfortunately, boundary element techniques require the interactions between all pairs of elements to be evaluated and traditional approaches to the Displacement Discontinuity method are computationally expensive for large problem sizes. Approximate summation techniques, such as the Fast Multipole Method (FMM), calculate the interactions between N entities in time proportional to N. We have implemented a modified Fast Multipole approach which performs the necessary calculations in optimal time and with reduced memory usage. Furthermore, the FMM introduces parameters which can be selected to give the desired trade-off between efficiency and accuracy. The FMM approach permits much larger problems to be solved using desktop computers, opening up a range of applications. We present results demonstrating the speed of the code and several test cases involving rock fracture in compression.
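The speedup in fast-multipole schemes comes from replacing many pairwise element interactions with a short expansion about a cluster center, evaluated once per distant target. A 1-D toy version, keeping only the monopole and dipole terms, illustrates the accuracy/efficiency trade-off the abstract mentions; the charges and positions are invented.

```python
def direct_potential(x_eval, charges):
    """Exact pairwise sum: sum_i q_i / |x - x_i| (O(N) work per target)."""
    return sum(q / abs(x_eval - x) for x, q in charges)

def multipole_potential(x_eval, charges):
    """Far-field monopole + dipole expansion about the cluster centroid --
    the core approximation behind fast-multipole acceleration. Valid when
    the target is far from the cluster relative to the cluster size."""
    xs = [x for x, _ in charges]
    c = sum(xs) / len(xs)
    Q = sum(q for _, q in charges)             # monopole moment
    D = sum(q * (x - c) for x, q in charges)   # dipole moment
    r = x_eval - c
    return Q / abs(r) + D / (r * abs(r))

# A tight cluster near the origin, evaluated at a distant point:
charges = [(-0.1, 1.0), (0.05, 2.0), (0.1, -0.5)]
exact = direct_potential(10.0, charges)
approx = multipole_potential(10.0, charges)
```

Truncating the expansion at a chosen order is exactly the tunable accuracy/efficiency parameter the abstract refers to: higher-order moments shrink the error but cost more to evaluate.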
A linear analytical boundary element method (BEM) for 2D homogeneous potential problems
NASA Astrophysics Data System (ADS)
Friedrich, Jürgen
2002-06-01
The solution of potential problems is not only fundamental for the geosciences, but also an essential part of related subjects such as electro- and fluid-mechanics. In all fields, solution algorithms are needed that are as accurate as possible, robust, simple to program, easy to use, fast and small in computer memory. An ideal technique for fulfilling these criteria is the boundary element method (BEM), which applies Green's identities to transform volume integrals into boundary integrals. This work describes a linear analytical BEM for 2D homogeneous potential problems that is more robust and precise than numerical methods because it avoids numerical integration schemes and coordinate transformations. After deriving the solution algorithm, the introduced approach is tested against different benchmarks. Finally, the resulting method was incorporated into an existing software program described previously in this journal by the same author.
NASA Astrophysics Data System (ADS)
Jones, C. E.; Kato, S.; Nakashima, Y.; Kajii, Y.
2014-05-01
Biogenic emissions supply the largest fraction of non-methane volatile organic compounds (VOC) from the biosphere to the atmospheric boundary layer, and typically comprise a complex mixture of reactive terpenes. Due to this chemical complexity, achieving comprehensive measurements of biogenic VOC (BVOC) in air within a satisfactory time resolution is analytically challenging. To address this, we have developed a novel, fully automated Fast Gas Chromatography (Fast-GC) based technique to provide higher time resolution monitoring of monoterpenes (and selected other C9-C15 terpenes) during plant emission studies and in ambient air. To our knowledge, this is the first study to apply a Fast-GC based separation technique to achieve quantification of terpenes in ambient air. Three chromatography methods have been developed for atmospheric terpene analysis under different sampling scenarios. Each method facilitates chromatographic separation of selected BVOC within a significantly reduced analysis time compared to conventional GC methods, whilst maintaining the ability to quantify individual monoterpene structural isomers. Using this approach, the C9-C15 BVOC composition of single plant emissions may be characterised within a 14.5 min analysis time. Moreover, in-situ quantification of 12 monoterpenes in unpolluted ambient air may be achieved within an 11.7 min chromatographic separation time (increasing to 19.7 min when simultaneous quantification of multiple oxygenated C9-C10 terpenoids is required, and/or when concentrations of anthropogenic VOC are significant). These analysis times potentially allow for a twofold to fivefold increase in measurement frequency compared to conventional GC methods. Here we outline the technical details and analytical capability of this chromatographic approach, and present the first in-situ Fast-GC observations of 6 monoterpenes and the oxygenated BVOC (OBVOC) linalool in ambient air. During this field deployment within a suburban forest
NASA Astrophysics Data System (ADS)
Theis, L. S.; Motzoi, F.; Wilhelm, F. K.
2016-01-01
We present a few-parameter ansatz for pulses to implement a broad set of simultaneous single-qubit rotations in frequency-crowded multilevel systems. Specifically, we consider a system of two qutrits whose working and leakage transitions suffer from spectral crowding (detuned by δ ). In order to achieve precise controllability, we make use of two driving fields (each having two quadratures) at two different tones to simultaneously apply arbitrary combinations of rotations about axes in the X -Y plane to both qubits. Expanding the waveforms in terms of Hanning windows, we show how analytic pulses containing smooth and composite-pulse features can easily achieve gate errors less than 10-4 and considerably outperform known adiabatic techniques. Moreover, we find a generalization of the WAHWAH (Weak AnHarmonicity With Average Hamiltonian) method by Schutjens et al. [R. Schutjens, F. A. Dagga, D. J. Egger, and F. K. Wilhelm, Phys. Rev. A 88, 052330 (2013)], 10.1103/PhysRevA.88.052330 that allows precise separate single-qubit rotations for all gate times beyond a quantum speed limit. We find in all cases a quantum speed limit slightly below 2 π /δ for the gate time and show that our pulses are robust against variations in system parameters and filtering due to transfer functions, making them suitable for experimental implementations.
Nanita, Sergio C; Stry, James J; Pentz, Anne M; McClory, Joseph P; May, John H
2011-07-27
A prototype multiresidue method based on fast extraction and dilution of samples followed by flow injection mass spectrometric analysis is proposed here for high-throughput chemical screening in complex matrices. The method was tested for sulfonylurea herbicides (triflusulfuron methyl, azimsulfuron, chlorimuron ethyl, sulfometuron methyl, chlorsulfuron, and flupyrsulfuron methyl), carbamate insecticides (oxamyl and methomyl), pyrimidine carboxylic acid herbicides (aminocyclopyrachlor and aminocyclopyrachlor methyl), and anthranilic diamide insecticides (chlorantraniliprole and cyantraniliprole). Lemon and pecan were used as representative high-water and low-water content matrices, respectively, and a sample extraction procedure was designed for each commodity type. Matrix-matched external standards were used for calibration, yielding linear responses with correlation coefficients (r) consistently >0.99. The limits of detection (LOD) were estimated to be between 0.01 and 0.03 mg/kg for all analytes, allowing execution of recovery tests with samples fortified at ≥0.05 mg/kg. Average analyte recoveries obtained during method validation for lemon and pecan ranged from 75 to 118% with standard deviations between 3 and 21%. Representative food processed fractions were also tested, that is, soybean oil and corn meal, yielding individual analyte average recoveries ranging from 62 to 114% with standard deviations between 4 and 18%. An intralaboratory blind test was also performed; the method excelled with 0 false positives and 0 false negatives in 240 residue measurements (20 samples × 12 analytes). The daily throughput of the fast extraction and dilution (FED) procedure is estimated at 72 samples/chemist, whereas the flow injection mass spectrometry (FI-MS) throughput could be as high as 4.3 sample injections/min, making very efficient use of mass spectrometers with negligible instrumental analysis time compared to the sample homogenization, preparation, and data
A novel unified coding analytical method for Internet of Things
NASA Astrophysics Data System (ADS)
Sun, Hong; Zhang, JianHong
2013-08-01
This paper presents a novel unified coding analytical method for the Internet of Things, which abstracts out 'displacement goods' and 'physical objects' and expounds the relationship between them. The method details the item coding principles, establishes a one-to-one relationship between the three-dimensional spatial coordinates of points and global manufacturers, and can be expanded infinitely. It solves the problem of unified coding in the production and circulation phases, and further explains how the item information corresponding to the coding is updated during the sale and use stages, so that the Internet of Things can carry out real-time monitoring and intelligent management of each item.
Validation of Analytical Methods for Biomarkers Employed in Drug Development
Chau, Cindy H.; Rixe, Olivier; McLeod, Howard; Figg, William D.
2008-01-01
The role of biomarkers in drug discovery and development has gained importance over the years. As biomarkers become integrated into drug development and clinical trials, quality assurance, and in particular assay validation, becomes essential, with the need to establish standardized guidelines for the analytical methods used in biomarker measurements. New biomarkers can revolutionize both the development and use of therapeutics, but this is contingent upon the establishment of a concrete validation process that addresses technology integration and method validation as well as regulatory pathways for efficient biomarker development. This perspective focuses on the general principles of the biomarker validation process, with an emphasis on assay validation and the collaborative efforts undertaken by various sectors to promote the standardization of this procedure for efficient biomarker development. PMID:18829475
GenoSets: Visual Analytic Methods for Comparative Genomics
Cain, Aurora A.; Kosara, Robert; Gibas, Cynthia J.
2012-01-01
Many important questions in biology are, fundamentally, comparative, and this extends to our analysis of a growing number of sequenced genomes. Existing genomic analysis tools are often organized around literal views of genomes as linear strings. Even when information is highly condensed, these views grow cumbersome as larger numbers of genomes are added. Data aggregation and summarization methods from the field of visual analytics can provide abstracted comparative views, suitable for sifting large multi-genome datasets to identify critical similarities and differences. We introduce a software system for visual analysis of comparative genomics data. The system automates the process of data integration, and provides the analysis platform to identify and explore features of interest within these large datasets. GenoSets borrows techniques from business intelligence and visual analytics to provide a rich interface of interactive visualizations supported by a multi-dimensional data warehouse. In GenoSets, visual analytic approaches are used to enable querying based on orthology, functional assignment, and taxonomic or user-defined groupings of genomes. GenoSets links this information together with coordinated, interactive visualizations for both detailed and high-level categorical analysis of summarized data. GenoSets has been designed to simplify the exploration of multiple genome datasets and to facilitate reasoning about genomic comparisons. Case examples are included showing the use of this system in the analysis of 12 Brucella genomes. GenoSets software and the case study dataset are freely available at http://genosets.uncc.edu. We demonstrate that the integration of genomic data using a coordinated multiple view approach can simplify the exploration of large comparative genomic data sets, and facilitate reasoning about comparisons and features of interest. PMID:23056299
MICROORGANISMS IN BIOSOLIDS: ANALYTICAL METHODS DEVELOPMENT, STANDARDIZATION, AND VALIDATION
The objective of this presentation is to discuss pathogens of concern in biosolids, the analytical techniques used to evaluate microorganisms in biosolids, and to discuss standardization and validation of analytical protocols for microbes within such a complex matrix. Implicatio...
Algebraic and analytic reconstruction methods for dynamic tomography.
Desbat, L; Rit, S; Clackdoyle, R; Mennessier, C; Promayon, E; Ntalampeki, S
2007-01-01
In this work, we discuss algebraic and analytic approaches for dynamic tomography. We present a framework of dynamic tomography for both algebraic and analytic approaches. We finally present numerical experiments. PMID:18002059
Methods for decreasing the operating pressure of a fast neutrals source
NASA Astrophysics Data System (ADS)
Barchenko, V. T.; Komlev, A. E.; Babinov, N. A.; Vinogradov, M. L.
2015-11-01
Fast neutral particle sources are increasingly used in surface processing and coating deposition technologies, especially for the processing of dielectric surfaces. However, to substantially expand the scope of applications of these sources, it is necessary to decrease the vacuum chamber pressure at which they can operate. This article describes methods to reduce the operating pressure of a fast neutral particle source with combined ion acceleration and neutralization regions; this combination ensures the complete absence of high-energy ions in the particle beam. The main methods discussed are the creation of a pressure drop between the internal and external volumes of the source, and preionization of the working gas by an auxiliary gas discharge.
New analytical methods for determining trace elements in coal
Dale, L.S.; Riley, K.W.
1996-12-31
New and improved analytical methods, based on modern spectroscopic techniques, have been developed to provide more reliable data on the levels of environmentally significant elements in Australian bituminous thermal coals. Arsenic, selenium and antimony are determined using hydride generation atomic absorption or fluorescence spectrometry, applied to an Eschka fusion of the raw coal. Boron is determined on the same digest using inductively coupled plasma atomic emission spectrometry (ICPAES). ICPAES is also used to determine beryllium, chromium, cobalt, copper, manganese, molybdenum, nickel, lead and zinc, after fusion of a low-temperature ash with lithium borate. Other elements of concern, including cadmium, uranium and thorium, are analyzed by inductively coupled plasma mass spectrometry on a mixed acid digest of a low-temperature ash. This technique is also suitable for determining the elements analyzed by ICPAES. Improved methods for chlorine and fluorine have also been developed. Details of the methods are given, and results of validation trials are discussed for those methods anticipated to be designated Australian standard methods.
Application of surface analytical methods in thin film analysis
NASA Astrophysics Data System (ADS)
Wen, Xingu
Self-assembly and the sol-gel process are two promising methods for the preparation of novel materials and thin films. In this research, these two methods were utilized to prepare two types of thin films: self-assembled monolayers of peptides on gold and SiO2 sol-gel thin films modified with Ru(II) complexes. The properties of the resulting thin films were investigated by several analytical techniques in order to explore their potential applications in biomaterials, chemical sensors, nonlinear optics and catalysis. Among the analytical techniques employed in the study, surface analytical techniques, such as X-ray photoelectron spectroscopy (XPS) and grazing angle reflection absorption Fourier transform infrared spectroscopy (RA-FTIR), are particularly useful in providing information regarding the compositions and structures of the thin films. In the preparation of peptide thin films, monodisperse peptides were self-assembled on gold substrate via the N-terminus-coupled lipoic acid. The film compositions were investigated by XPS and agreed well with the theoretical values. XPS results also revealed that the surface coverage of the self-assembled films was significantly larger than that of the physisorbed films and that the chemisorption between the peptides and gold surface was stable in solvent. Studies by angle dependent XPS (ADXPS) and grazing angle RA-FTIR indicated that the peptides were on average oriented at a small angle from the surface normal. By using a model of orientation distribution function, both the peptide tilt angle and film thickness can be well calculated. Ru(II) complex doped SiO2 sol-gel thin films were prepared by low temperature sol-gel process. The ability of XPS coupled with Ar + ion sputtering to provide both chemical and compositional depth profile information of these sol-gel films was evaluated. This technique, together with UV-VIS and electrochemical measurements, was used to investigate the stability of Ru complexes in the composite
21 CFR 530.24 - Procedure for announcing analytical methods for drug residue quantification.
Code of Federal Regulations, 2010 CFR
2010-04-01
§ 530.24 Procedure for announcing analytical methods for drug residue quantification. (a) FDA may issue an order announcing a specific analytical method or methods for the quantification...
Differential correction method applied to measurement of the FAST reflector
NASA Astrophysics Data System (ADS)
Li, Xin-Yi; Zhu, Li-Chun; Hu, Jin-Wen; Li, Zhi-Heng
2016-08-01
The Five-hundred-meter Aperture Spherical radio Telescope (FAST) adopts an active deformable main reflector composed of 4450 triangular panels. During an observation, the illuminated area of the reflector is deformed into a 300-m diameter paraboloid directed toward a source. To achieve accurate control of the reflector shape, the positions of 2226 nodes distributed over the entire reflector must be measured with sufficient precision within a limited time, which is a challenging task because of the large scale. Measurement of the FAST reflector makes use of total stations and node targets. However, in this case the effect of the atmosphere on measurement accuracy is a significant issue. This paper investigates a differential correction method for total station measurement of the FAST reflector. A multi-benchmark differential correction method, including a scheme for benchmark selection and weight assignment, is proposed. On-site evaluation experiments show an improvement of 70%–80% in measurement accuracy compared with the uncorrected measurement, verifying the effectiveness of the proposed method.
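The multi-benchmark idea can be sketched as follows: benchmarks with known ranges expose the atmosphere-induced residual, and nearby benchmarks are weighted more heavily when correcting a node measurement. The inverse-distance weighting below is an illustrative stand-in for the paper's actual benchmark-selection and weight-assignment scheme, and the 1-D positions and ranges are invented.

```python
def corrected_measurement(node_meas, benchmarks, p=2):
    """Multi-benchmark differential correction sketch.

    node_meas:  (position, raw_range) for the node target.
    benchmarks: list of (position, measured_range, known_range).
    The node's raw range is corrected by an inverse-distance-weighted
    average of the benchmark residuals (measured - known)."""
    x, raw = node_meas
    num = den = 0.0
    for bx, meas, true in benchmarks:
        err = meas - true                       # atmosphere-induced residual
        w = 1.0 / (abs(x - bx) ** p + 1e-12)    # closer benchmark -> larger weight
        num += w * err
        den += w
    return raw - num / den

# If the atmosphere inflates every range by 2 mm, both benchmarks show a
# +0.002 m residual and the node measurement is corrected back.
bm = [(0.0, 100.002, 100.0), (10.0, 50.002, 50.0)]
corrected = corrected_measurement((5.0, 80.002), bm)
```

With a spatially varying residual, the weighting makes the correction track the benchmark nearest the node, which is the point of using multiple benchmarks rather than a single global offset.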
Analytical Methods in Untargeted Metabolomics: State of the Art in 2015
Alonso, Arnald; Marsal, Sara; Julià, Antonio
2015-01-01
Metabolomics comprises the methods and techniques used to measure the small-molecule composition of biofluids and tissues, and is currently one of the most rapidly evolving research fields. The determination of the metabolomic profile – the metabolome – has multiple applications in many biological sciences, including the development of new diagnostic tools in medicine. Recent technological advances in nuclear magnetic resonance and mass spectrometry are significantly improving our capacity to obtain more data from each biological sample. Consequently, there is a need for fast and accurate statistical and bioinformatic tools that can deal with the complexity and volume of the data generated in metabolomic studies. In this review, we provide an update on the most commonly used analytical methods in metabolomics, starting from raw data processing and ending with pathway analysis and biomarker identification. Finally, the integration of metabolomic profiles with molecular data from other high-throughput biotechnologies is also reviewed. PMID:25798438
Analytical methods for volatile compounds in wheat bread.
Pico, Joana; Gómez, Manuel; Bernal, José; Bernal, José Luis
2016-01-01
Bread aroma is one of the main requirements for acceptance by consumers, since it is one of the first attributes perceived. Sensory analysis, crucial for correlation with human perception, has limitations and needs to be complemented with instrumental analysis. Gas chromatography coupled to mass spectrometry is usually selected as the technique to determine bread volatile compounds, although proton-transfer-reaction mass spectrometry is also beginning to be used to monitor aroma processes. Solvent extraction, supercritical fluid extraction and headspace analysis are the main options for sample treatment. The present review focuses on the different sample treatments and instrumental alternatives reported in the literature for analysing volatile compounds in wheat bread, along with their advantages and limitations. The usual parameters employed in these analytical methods are also described. PMID:26452307
Analytical Failure Prediction Method Developed for Woven and Braided Composites
NASA Technical Reports Server (NTRS)
Min, James B.
2003-01-01
Historically, advances in aerospace engine performance and durability have been linked to improvements in materials. Recent developments in ceramic matrix composites (CMCs) have led to increased interest in CMCs to achieve revolutionary gains in engine performance. The use of CMCs promises many advantages for advanced turbomachinery engine development and may be especially beneficial for aerospace engines. The most beneficial aspects of CMC material may be its ability to maintain its strength to over 2500 °F, its internal material damping, and its relatively low density. Ceramic matrix composites reinforced with two-dimensional woven and braided fabric preforms are being considered for NASA's next-generation reusable rocket turbomachinery applications (for example, see the preceding figure). However, the architecture of a textile composite is complex, and therefore the parameters controlling its strength properties are numerous. This necessitates the development of engineering approaches that combine analytical methods with limited testing to provide effective, validated design analyses for the development of textile composite structures.
A two-dimensional, semi-analytic expansion method for nodal calculations
Palmtag, S.P.
1995-08-01
Most modern nodal methods used today are based upon the transverse integration procedure, in which the multi-dimensional flux shape is integrated over the transverse directions in order to produce a set of coupled one-dimensional flux shapes. The one-dimensional flux shapes are then solved either analytically or by representing the flux shape by a finite polynomial expansion. While these methods have been verified for most light-water reactor applications, they have been found to have difficulty predicting the large thermal flux gradients near the interfaces of highly enriched MOX fuel assemblies. A new method is presented here in which the neutron flux is represented by a non-separable, two-dimensional, semi-analytic flux expansion. The main features of this method are: (1) the leakage terms from the node are modeled explicitly, and therefore the transverse integration procedure is not used; (2) the corner-point flux values for each node are directly edited from the solution method, and a corner-point interpolation is not needed in the flux reconstruction; (3) the thermal flux expansion contains hyperbolic terms representing analytic solutions of the thermal flux diffusion equation; and (4) the thermal flux expansion contains a thermal-to-fast flux ratio term which reduces the number of polynomial expansion functions needed to represent the thermal flux. This new nodal method has been incorporated into the computer code COLOR2G and has been used to solve a two-dimensional, two-group colorset problem containing uranium and highly enriched MOX fuel assemblies. The results from this calculation are compared to those obtained using a code based on the traditional transverse integration procedure.
[Analytical methods for control of foodstuffs made from bioengineered plants].
Chernysheva, O N; Sorokina, E Iu
2013-01-01
Foodstuffs produced by modern biotechnology require special control. The analytical methods used for these purposes are constantly being refined. When choosing a strategy for the analysis, several factors have to be assessed: specificity, sensitivity, practicality of the method and time efficiency. To date, GMO testing methods are mainly based on the inserted DNA sequences and the newly produced proteins in GMOs. Protein detection methods are based mainly on ELISA. The specific detection of a novel protein synthesized by a gene introduced during transformation constitutes an alternative approach to the identification of GMOs. However, the genetic modification is not always specifically directed at the production of a novel protein, and does not always result in protein expression levels sufficient for detection purposes. In addition, some proteins may be expressed only in specific parts of the plant, or expressed at different levels in distinct parts of the plant. As DNA is a rather stable molecule relative to proteins, it is the preferred target for any kind of sample, and DNA-based methods are more sensitive and specific than protein detection methods. PCR-based tests can be categorized into several levels of specificity. The least specific methods are commonly called "screening methods" and target DNA elements, such as promoters and terminators, that are present in many different GMOs. For routine screening purposes, the regulatory elements of the 35S promoter, derived from the Cauliflower Mosaic Virus, and the NOS terminator, derived from the nopaline synthase gene of Agrobacterium tumefaciens, are used as target sequences. The second level comprises "gene-specific methods", which target a part of the DNA harbouring the active gene associated with the specific genetic modification. The highest specificity is achieved when the target is the unique junction found at the integration locus between the inserted DNA and the recipient genome; these are called "event-specific methods". For a
Kroniger, K; Herzog, M; Landry, G; Dedes, G; Parodi, K; Traneus, E
2015-06-15
Purpose: We describe and demonstrate a fast analytical tool for prompt-gamma emission prediction based on filter functions applied on the depth dose profile. We present the implementation in a treatment planning system (TPS) of the same algorithm for positron emitter distributions. Methods: The prediction of the desired observable is based on the convolution of filter functions with the depth dose profile. For both prompt-gammas and positron emitters, the results of Monte Carlo simulations (MC) are compared with those of the analytical tool. For prompt-gamma emission from inelastic proton-induced reactions, homogeneous and inhomogeneous phantoms alongside with patient data are used as irradiation targets of mono-energetic proton pencil beams. The accuracy of the tool is assessed in terms of the shape of the analytically calculated depth profiles and their absolute yields, compared to MC. For the positron emitters, the method is implemented in a research RayStation TPS and compared to MC predictions. Digital phantoms and patient data are used and positron emitter spatial density distributions are analyzed. Results: Calculated prompt-gamma profiles agree with MC within 3 % in terms of absolute yield and reproduce the correct shape. Based on an arbitrary reference material and by means of 6 filter functions (one per chemical element), profiles in any other material composed of those elements can be predicted. The TPS implemented algorithm is accurate enough to enable, via the analytically calculated positron emitters profiles, detection of range differences between the TPS and MC with errors of the order of 1–2 mm. Conclusion: The proposed analytical method predicts prompt-gamma and positron emitter profiles which generally agree with the distributions obtained by a full MC. The implementation of the tool in a TPS shows that reliable profiles can be obtained directly from the dose calculated by the TPS, without the need of full MC simulation.
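The filter-function approach the authors describe amounts to a 1-D convolution of the depth dose curve with precomputed, element-specific kernels. A minimal discrete version is sketched below; the triangular kernel is a made-up placeholder, not one of the fitted filter functions from the paper.

```python
def predict_profile(depth_dose, kernel):
    """Convolve a sampled depth dose curve with a filter function to
    predict a secondary-emission depth profile (prompt-gamma or
    positron emitter). Same-length output, zero padding at the edges."""
    n, m = len(depth_dose), len(kernel)
    half = m // 2
    out = [0.0] * n
    for i in range(n):
        for k in range(m):
            j = i + k - half
            if 0 <= j < n:
                out[i] += kernel[k] * depth_dose[j]
    return out

# A delta-like dose spike reproduces the kernel shape, centered on the spike.
out = predict_profile([0.0, 0.0, 1.0, 0.0, 0.0], [0.25, 0.5, 0.25])
```

In the element-wise scheme described above, one such kernel per chemical element would be convolved with the dose and the results summed according to the local material composition.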
Piri-Moghadam, Hamed; Ahmadi, Fardin; Gómez-Ríos, German Augusto; Boyacı, Ezel; Reyes-Garcés, Nathaly; Aghakhani, Ali; Bojko, Barbara; Pawliszyn, Janusz
2016-06-20
Herein we report the development of solid-phase microextraction (SPME) devices designed to perform fast extraction/enrichment of target analytes present in small volumes of complex matrices (i.e., V ≤ 10 μL). Micro-sampling was performed with the use of etched metal tips coated with a thin layer of biocompatible nano-structured polypyrrole (PPy), or by using coated blade spray (CBS) devices. These devices can be coupled either to liquid chromatography (LC) or directly to mass spectrometry (MS) via dedicated interfaces. The reported results demonstrated that the whole analytical procedure can be carried out within a few minutes with high sensitivity and quantitation precision, and can be used to sample from various biological matrices such as blood, urine, or Allium cepa L. single cells. PMID:27158909
Gaussian and finite-element Coulomb method for the fast evaluation of Coulomb integrals
NASA Astrophysics Data System (ADS)
Kurashige, Yuki; Nakajima, Takahito; Hirao, Kimihiko
2007-04-01
The authors propose a new linear-scaling method for the fast evaluation of Coulomb integrals with Gaussian basis functions called the Gaussian and finite-element Coulomb (GFC) method. In this method, the Coulomb potential is expanded in a basis of mixed Gaussian and finite-element auxiliary functions that express the core and smooth Coulomb potentials, respectively. Coulomb integrals can be evaluated by three-center one-electron overlap integrals among two Gaussian basis functions and one mixed auxiliary function. Thus, the computational cost and scaling for large molecules are drastically reduced. Several applications to molecular systems show that the GFC method is more efficient than the analytical integration approach that requires four-center two-electron repulsion integrals. The GFC method realizes a near linear scaling for both one-dimensional alanine α-helix chains and three-dimensional diamond pieces. PMID:17444700
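The expansion step behind the GFC idea, representing a potential in a mixed Gaussian plus finite-element auxiliary basis, can be sketched in one dimension by least squares. Everything here (the target function, the basis sizes, the mesh) is an illustrative assumption; the actual method works with three-dimensional molecular integrals.

```python
# Sketch: fit a smooth "potential" in a mixed auxiliary basis, one sharp
# Gaussian for the core plus piecewise-linear finite-element "hat"
# functions for the smooth tail, via least squares on a 1-D grid.
import math

def hat(x, center, width):
    return max(0.0, 1.0 - abs(x - center) / width)

def gauss(x, alpha):
    return math.exp(-alpha * x * x)

xs = [i * 0.1 for i in range(-50, 51)]
target = [math.exp(-2.0 * x * x) + 0.3 * math.exp(-0.1 * x * x) for x in xs]

# Mixed basis: one sharp Gaussian plus hat functions on a coarse mesh.
basis = [lambda x: gauss(x, 2.0)]
for c in range(-4, 5):
    basis.append(lambda x, c=c: hat(x, c, 1.0))

# Normal equations (B^T B) c = B^T t, solved by Gaussian elimination
# (no pivoting needed: the Gram matrix is symmetric positive definite).
m = len(basis)
B = [[f(x) for f in basis] for x in xs]
A = [[sum(B[k][i] * B[k][j] for k in range(len(xs))) for j in range(m)]
     for i in range(m)]
rhs = [sum(B[k][i] * target[k] for k in range(len(xs))) for i in range(m)]

for col in range(m):
    for row in range(col + 1, m):
        f = A[row][col] / A[col][col]
        for j in range(col, m):
            A[row][j] -= f * A[col][j]
        rhs[row] -= f * rhs[col]
coef = [0.0] * m
for i in reversed(range(m)):
    coef[i] = (rhs[i] - sum(A[i][j] * coef[j]
                            for j in range(i + 1, m))) / A[i][i]

fit = [sum(c * f(x) for c, f in zip(coef, basis)) for x in xs]
err = max(abs(a - b) for a, b in zip(fit, target))
```

The point of the split mirrors the abstract: the Gaussian captures the sharp core, while the cheap, local finite-element functions carry the smooth part.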
Application of Fast Multipole Methods to the NASA Fast Scattering Code
NASA Technical Reports Server (NTRS)
Dunn, Mark H.; Tinetti, Ana F.
2008-01-01
The NASA Fast Scattering Code (FSC) is a versatile noise prediction program designed to conduct aeroacoustic noise reduction studies. The equivalent source method is used to solve an exterior Helmholtz boundary value problem with an impedance type boundary condition. The solution process in FSC v2.0 requires direct manipulation of a large, dense system of linear equations, limiting the applicability of the code to small scales and/or moderate excitation frequencies. Recent advances in the use of Fast Multipole Methods (FMM) for solving scattering problems, coupled with sparse linear algebra techniques, suggest that a substantial reduction in computer resource utilization over conventional solution approaches can be obtained. Implementation of the single level FMM (SLFMM) and a variant of the Conjugate Gradient Method (CGM) into the FSC is discussed in this paper. The culmination of this effort, FSC v3.0, was used to generate solutions for three configurations of interest. Benchmarking against previously obtained simulations indicates that a twenty-fold reduction in computational memory and up to a four-fold reduction in computer time have been achieved on a single processor.
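The iterative-solver side of this pairing can be illustrated with a textbook conjugate gradient sketch on a small symmetric positive-definite system. This is not the FSC v3.0 variant, and the coupling to an FMM-accelerated matrix-vector product is not reproduced; it is only the basic Krylov iteration the abstract builds on.

```python
# Minimal conjugate gradient solver for A x = b, A symmetric positive
# definite. In an FMM-accelerated code, the dense matrix-vector product
# below would be replaced by a fast multipole evaluation.

def cg(A, b, tol=1e-10, max_iter=100):
    n = len(b)
    x = [0.0] * n
    r = b[:]                      # residual r = b - A x  (x starts at 0)
    p = r[:]
    rs_old = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs_old / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        p = [r[i] + (rs_new / rs_old) * p[i] for i in range(n)]
        rs_old = rs_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = cg(A, b)   # exact solution: [1/11, 7/11]
```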
Analytical solutions for radiation-driven winds in massive stars. I. The fast regime
Araya, I.; Curé, M.; Cidale, L. S.
2014-11-01
Accurate mass-loss rate estimates are crucial in the study of wind properties of massive stars and for testing different evolutionary scenarios. From a theoretical point of view, this implies solving a complex set of differential equations in which the radiation field and the hydrodynamics are strongly coupled. The use of an analytical expression to represent the radiation force and the solution of the equation of motion has many advantages over numerical integrations. Therefore, in this work, we present an analytical expression as a solution of the equation of motion for radiation-driven winds in terms of the force multiplier parameters. This analytical expression is obtained by employing the line acceleration expression given by Villata and the methodology proposed by Müller and Vink. We also find useful relationships to determine the parameters of the line acceleration given by Müller and Vink in terms of the force multiplier parameters.
A PDE-Based Fast Local Level Set Method
NASA Astrophysics Data System (ADS)
Peng, Danping; Merriman, Barry; Osher, Stanley; Zhao, Hongkai; Kang, Myungjoo
1999-11-01
We develop a fast method to localize the level set method of Osher and Sethian (1988, J. Comput. Phys. 79, 12) and address two important issues that are intrinsic to the level set method: (a) how to extend a quantity that is given only on the interface to a neighborhood of the interface; (b) how to reset the level set function to be a signed distance function to the interface efficiently without appreciably moving the interface. This fast local level set method reduces the computational effort by one order of magnitude, works in as much generality as the original one, and is conceptually simple and easy to implement. Our approach differs from previous related works in that we extract all the information needed from the level set function (or functions in multiphase flow) and do not need to find explicitly the location of the interface in the space domain. The complexity of our method to do tasks such as extension and distance reinitialization is O(N), where N is the number of points in space, not O(N log N) as in works by Sethian (1996, Proc. Nat. Acad. Sci. 93, 1591) and Helmsen and co-workers (1996, SPIE Microlithography IX, p. 253). This complexity estimation is also valid for quite general geometrically based front motion for our localized method.
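The reinitialization task (b) can be illustrated in one dimension: rebuild the level set function as a signed distance to its zero crossings, updating only points inside a narrow band so the interface itself does not move. This is a brute-force sketch of the goal, assuming a 1-D grid; the paper's method achieves it by PDE evolution in higher dimensions with O(N) cost.

```python
# Sketch: narrow-band signed-distance reinitialization of a 1-D level
# set function. Zero crossings are located by linear interpolation and
# left in place; only grid points within `band` of the interface are
# reset to exact signed distance.

def reinit_1d(phi, dx=1.0, band=3.0):
    n = len(phi)
    crossings = []
    for i in range(n - 1):
        if phi[i] == 0.0:
            crossings.append(i * dx)
        elif phi[i] * phi[i + 1] < 0.0:
            t = phi[i] / (phi[i] - phi[i + 1])   # linear interpolation
            crossings.append((i + t) * dx)
    if not crossings:
        return phi[:]
    out = phi[:]
    for i in range(n):
        d = min(abs(i * dx - xc) for xc in crossings)
        if d <= band:                 # only update inside the tube
            out[i] = d if phi[i] >= 0 else -d
    return out

phi = [x * x - 4.0 for x in range(-5, 6)]   # zero level set at x = ±2
psi = reinit_1d(phi)
```

Restricting the update to the tube is what turns a global O(N log N) redistancing into the localized O(N) flavor the abstract claims.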
Reverse radiance: a fast accurate method for determining luminance
NASA Astrophysics Data System (ADS)
Moore, Kenneth E.; Rykowski, Ronald F.; Gangadhara, Sanjay
2012-10-01
Reverse ray tracing from a region of interest backward to the source has long been proposed as an efficient method of determining luminous flux. The idea is to trace rays only from where the final flux needs to be known back to the source, rather than tracing in the forward direction from the source outward to see where the light goes. Once the reverse ray reaches the source, the radiance the equivalent forward ray would have represented is determined and the resulting flux computed. Although reverse ray tracing is conceptually simple, the method critically depends upon an accurate source model in both the near and far field. An overly simplified source model, such as an ideal Lambertian surface, substantially detracts from the accuracy, and thus the benefit, of the method. This paper will introduce an improved method of reverse ray tracing that we call Reverse Radiance that avoids assumptions about the source properties. The new method uses measured data from a Source Imaging Goniometer (SIG) that simultaneously measures near and far field luminous data. Incorporating this data into a fast reverse ray tracing integration method yields fast, accurate data for a wide variety of illumination problems.
Electrical impedance tomography and the fast multipole method
NASA Astrophysics Data System (ADS)
Bikowski, Jutta; Mueller, Jennifer L.
2004-10-01
A 3-D linearization-based reconstruction algorithm for Electrical Impedance Tomography, suitable for breast cancer detection using data collected on a rectangular array, was introduced by Mueller et al. [IEEE Biomed. Eng., 46(11), 1999]. By treating the scenario as an electrostatic problem, it is possible to model the electrodes with various charges, facilitating the use of the Fast Multipole Method (FMM) for calculating particle interactions and also supporting the use of different electrode models. In this paper the use of the FMM is explained, and results in the form of reconstructed images from experimental data show that this method is an improvement.
Parabolic approximation method for fast magnetosonic wave propagation in tokamaks
Phillips, C.K.; Perkins, F.W.; Hwang, D.Q.
1985-07-01
Fast magnetosonic wave propagation in a cylindrical tokamak model is studied using a parabolic approximation method in which poloidal variations of the wave field are considered weak in comparison to the radial variations. Diffraction effects, which are ignored by ray tracing methods, are included self-consistently using the parabolic method since continuous representations for the wave electromagnetic fields are computed directly. Numerical results are presented which illustrate the cylindrical convergence of the launched waves into a diffraction-limited focal spot on the cyclotron absorption layer near the magnetic axis for a wide range of plasma confinement parameters.
Basal buoyancy and fast-moving glaciers: in defense of analytic force balance
NASA Astrophysics Data System (ADS)
van der Veen, C. J.
2016-06-01
The geometric approach to force balance advocated by T. Hughes in a series of publications has challenged the analytic approach by implying that the latter does not adequately account for basal buoyancy on ice streams, thereby neglecting the contribution to the gravitational driving force associated with this basal buoyancy. Application of the geometric approach to Byrd Glacier, Antarctica, yields physically unrealistic results, and it is argued that this is because of a key limiting assumption in the geometric approach. A more traditional analytic treatment of force balance shows that basal buoyancy does not affect the balance of forces on ice streams, except locally perhaps, through bridging effects.
Arcadu, Filippo; Stampanoni, Marco; Marone, Federica
2016-06-27
This paper introduces new gridding projectors designed to efficiently perform analytical and iterative tomographic reconstruction when the forward model is represented by the derivative of the Radon transform. This inverse problem is tightly connected with an emerging X-ray tube- and synchrotron-based imaging technique: differential phase contrast based on a grating interferometer. This study shows that the proposed projectors, compared to space-based implementations of the same operators, yield high-quality analytical and iterative reconstructions while improving the computational efficiency by a few orders of magnitude. PMID:27410628
Method of Analytic Evolution of Flat Distribution Amplitudes in QCD
Tandogan, Asli; Radyushkin, Anatoly V.
2011-11-01
A new analytical method of performing ERBL evolution is described. The main goal is to develop an approach that works for distribution amplitudes that do not vanish at the end points, for which the standard method of expansion in Gegenbauer polynomials is inefficient. Two cases of the initial DA are considered: a purely flat DA, given by the same constant for all x, and an antisymmetric DA given by opposite constants for x < 1/2 or x > 1/2. For a purely flat DA, the evolution is governed by an overall (x(1-x))^t dependence on the evolution parameter t times a factor that was calculated as an expansion in t. For an antisymmetric flat DA, an extra overall factor |1-2x|^{2t} appears due to a jump at x = 1/2. A good convergence was observed in the region t ≲ 1/2. For larger t, one can use the standard method of the Gegenbauer expansion.
NASA Astrophysics Data System (ADS)
Gemayel, R.; Temime-Roussel, B.; Hellebust, S.; Gligorovski, S.; Wortham, H.
2014-12-01
A comprehensive understanding of the chemical composition of atmospheric particles is of paramount importance for understanding their impact on health and climate. Hence, there is an imperative need for the development of appropriate analytical methods for on-line, time-resolved measurements of atmospheric particles. Laser Ablation Aerosol Particle Time-of-Flight Mass Spectrometry (LAAP-TOF-MS) allows real-time qualitative analysis of nanoparticles of differing composition and size. LAAP-TOF-MS is aimed at on-line, continuous measurements of atmospheric particles with fast time resolution, on the order of milliseconds. This system uses a 193 nm excimer laser for particle ablation/ionization and a 403 nm scattering laser for sizing (and single-particle detection/triggering). The charged ions are then extracted into a bipolar time-of-flight mass spectrometer. Here we present an analytical methodology for quantitative determination of the composition and size distribution of particles with the LAAP-TOF instrument. We developed and validated an analytical methodology for this high-time-resolution instrument by comparison with conventional analysis systems with lower time resolution (electron microscopy, optical counters…), with the final aim of rendering the methodology quantitative. This was performed with the aid of other instruments for on-line and off-line measurement, such as a Scanning Mobility Particle Sizer and electron microscopy. Validation of the analytical method was performed under laboratory conditions by detection and identification of targeted particle types such as SiO2, CeO2, and TiO2.
21 CFR 530.22 - Safe levels and analytical methods for food-producing animals.
Code of Federal Regulations, 2010 CFR
2010-04-01
... analytical method; or (3) Establish a safe level based on other appropriate scientific, technical, or regulatory criteria. (b) FDA may require the development of an acceptable analytical method for the... such an acceptable analytical method, the agency will publish notice of that requirement in the...
21 CFR 530.22 - Safe levels and analytical methods for food-producing animals.
Code of Federal Regulations, 2011 CFR
2011-04-01
... analytical method; or (3) Establish a safe level based on other appropriate scientific, technical, or regulatory criteria. (b) FDA may require the development of an acceptable analytical method for the... such an acceptable analytical method, the agency will publish notice of that requirement in the...
The Analytical Methods Manual for the Western Lake Survey - Phase I is a supplement to the Analytical Methods Manual for the Eastern Lake Survey Phase I. The supplement provides a general description of the analytical methods that are used by the field laboratories and by the ana...
21 CFR 320.29 - Analytical methods for an in vivo bioavailability or bioequivalence study.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 21 Food and Drugs 5 2010-04-01 2010-04-01 false Analytical methods for an in vivo bioavailability... Analytical methods for an in vivo bioavailability or bioequivalence study. (a) The analytical method used in... ingredient or therapeutic moiety, or its active metabolite(s), achieved in the body. (b) When the...
21 CFR 530.40 - Safe levels and availability of analytical methods.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 21 Food and Drugs 6 2010-04-01 2010-04-01 false Safe levels and availability of analytical methods... Safe levels and availability of analytical methods. (a) In accordance with § 530.22, the following safe... accordance with § 530.22, the following analytical methods have been accepted by FDA:...
Fast Market Splitting Matching Method for Spot Electric Power Market
NASA Astrophysics Data System (ADS)
Sawa, Toshiyuki; Nakata, Yuji; Tsurugai, Mitsuo; Sugiyama, Shigenari
We have developed a fast, innovative matching method for the spot power market that accounts for network constraints. In this method, buy and sell order bids are each aggregated into volumes over several price bands. The aggregated volume and the center price of each band are then used to calculate a band clearing price, whose band contains the real clearing price. The dividing and calculating process is iterated until the band width is less than the tick size of the bidding price. We applied this method to a real problem in the Japanese power market with 9 areas, 10 area-connecting lines, and 9000 orders (volume/price pairs). Our simulation results show that the new method is ten times faster than conventional linear programming. This demonstrates the effectiveness of the developed method.
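The band-refinement idea can be sketched, without network constraints, as a bisection on the price axis of aggregated buy and sell curves: keep halving the band known to contain the clearing price until it is narrower than the tick size. The order data and tick size below are illustrative assumptions, not Japanese market data.

```python
# Sketch: find the market clearing price by shrinking a price band that
# brackets the intersection of aggregated demand and supply curves.

def clearing_price(buys, sells, lo, hi, tick=0.01):
    """buys/sells: lists of (price, volume) orders. Returns the
    approximate price where supply first covers demand."""
    def demand(p):   # total volume buyers are willing to pay >= p for
        return sum(v for price, v in buys if price >= p)
    def supply(p):   # total volume sellers will accept at price <= p
        return sum(v for price, v in sells if price <= p)
    while hi - lo > tick:
        mid = 0.5 * (lo + hi)
        if supply(mid) >= demand(mid):
            hi = mid   # surplus or balance: clearing price is not above mid
        else:
            lo = mid   # shortage: clearing price is above mid
    return 0.5 * (lo + hi)

buys = [(30.0, 100), (25.0, 50), (20.0, 80)]
sells = [(18.0, 60), (22.0, 90), (28.0, 120)]
p = clearing_price(buys, sells, lo=10.0, hi=40.0)
```

The halving is what makes the scheme fast: each iteration needs only one aggregate evaluation, and the number of iterations is logarithmic in the price range over the tick size.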
How to assess the quality of your analytical method?
Topic, Elizabeta; Nikolac, Nora; Panteghini, Mauro; Theodorsson, Elvar; Salvagno, Gian Luca; Miler, Marijana; Simundic, Ana-Maria; Infusino, Ilenia; Nordin, Gunnar; Westgard, Sten
2015-10-01
Laboratory medicine is amongst the fastest growing fields in medicine, crucial in diagnosis, support of prevention and in the monitoring of disease for individual patients and for the evaluation of treatment for populations of patients. Therefore, high quality and safety in laboratory testing has a prominent role in high-quality healthcare. Applied knowledge and competencies of professionals in laboratory medicine increase the clinical value of laboratory results by decreasing laboratory errors, increasing appropriate utilization of tests, and increasing cost effectiveness. This collective paper provides insights into how to validate laboratory assays and assess the quality of methods. It is a synopsis of the lectures at the 15th European Federation of Clinical Chemistry and Laboratory Medicine (EFLM) Continuing Postgraduate Course in Clinical Chemistry and Laboratory Medicine entitled "How to assess the quality of your method?" (Zagreb, Croatia, 24-25 October 2015). The leading topics include the who, what, and when of validation/verification of methods, verification of imprecision and bias, verification of reference intervals, verification of qualitative test procedures, verification of blood collection systems, comparability of results among methods and analytical systems, limit of detection, limit of quantification and limit of decision, how to assess the measurement uncertainty, the optimal use of Internal Quality Control and External Quality Assessment data, Six Sigma metrics, performance specifications, as well as biological variation. This article, which continues the annual tradition of collective papers from the EFLM continuing postgraduate courses in clinical chemistry and laboratory medicine, aims to provide further contributions by discussing the quality of laboratory methods and measurements and, at the same time, to offer continuing professional development to the attendees. PMID:26408611
Crovelli, Robert A.; revised by Charpentier, Ronald R.
2012-01-01
The U.S. Geological Survey (USGS) periodically assesses petroleum resources of areas within the United States and the world. The purpose of this report is to explain the development of an analytic probabilistic method and spreadsheet software system called Analytic Cell-Based Continuous Energy Spreadsheet System (ACCESS). The ACCESS method is based upon mathematical equations derived from probability theory. The ACCESS spreadsheet can be used to calculate estimates of the undeveloped oil, gas, and NGL (natural gas liquids) resources in a continuous-type assessment unit. An assessment unit is a mappable volume of rock in a total petroleum system. In this report, the geologic assessment model is defined first, the analytic probabilistic method is described second, and the spreadsheet ACCESS is described third. In this revised version of Open-File Report 00-044, the text has been updated to reflect modifications that were made to the ACCESS program. Two versions of the program are added as appendixes.
An analytical method for predicting postwildfire peak discharges
Moody, John A.
2012-01-01
The analytical method presented here predicts postwildfire peak discharge and was developed from analysis of paired rainfall and runoff measurements collected from selected burned basins. Data were collected from 19 mountainous basins burned by eight wildfires in different hydroclimatic regimes in the western United States (California, Colorado, Nevada, New Mexico, and South Dakota). Most of the data were collected for the year of the wildfire and for 3 to 4 years after the wildfire. These data provide some estimate of the changes with time of postwildfire peak discharges, which are known to be transient but have received little documentation. The only required inputs for the analytical method are the burned area and a quantitative measure of soil burn severity (change in the normalized burn ratio), which is derived from Landsat reflectance data and is available from either the U.S. Department of Agriculture Forest Service or the U.S. Geological Survey. The method predicts the postwildfire peak discharge per unit burned area for the year of a wildfire, the first year after a wildfire, and the second year after a wildfire. It can be used at three levels of information depending on the data available to the user; each subsequent level requires either more data or more processing of the data. Level 1 requires only the burned area. Level 2 requires the burned area and the basin average value of the change in the normalized burn ratio. Level 3 requires the burned area and the calculation of the hydraulic functional connectivity, which is a variable that incorporates the sequence of soil burn severity along hillslope flow paths within the burned basin. Measurements indicate that the unit peak discharge response increases abruptly when the 30-minute maximum rainfall intensity is greater than about 5 millimeters per hour (0.2 inches per hour). This threshold may relate to a change in runoff generation from saturated-excess to infiltration-excess overland flow.
Quality control and analytical methods for baculovirus-based products.
Roldão, António; Vicente, Tiago; Peixoto, Cristina; Carrondo, Manuel J T; Alves, Paula M
2011-07-01
Recombinant baculoviruses (rBac) are used for many different applications, ranging from bio-insecticides to the production of heterologous proteins, high-throughput screening of gene functions, drug delivery, in vitro assembly studies, design of antiviral drugs, bio-weapons, building blocks for electronics, biosensors and chemistry, and recently as a delivery system in gene therapy. Independent of the application, the quality, quantity and purity of rBac-based products are pre-requisites demanded by regulatory authorities for product licensing. To guarantee maximum utility, it is necessary to delineate optimized production schemes either using trial-and-error experimental setups ("brute force" approach) or rational design of experiments by aid of in silico mathematical models (Systems Biology approach). For that, one must define all of the main steps in the overall process, identify the main bioengineering issues affecting each individual step and implement, if required, accurate analytical methods for product characterization. In this review, current challenges for quality control (QC) technologies for up- and down-stream processing of rBac-based products are addressed. In addition, a collection of QC methods for monitoring/control of the production of rBac derived products are presented, as well as innovative technologies for faster process optimization and more detailed product characterization. PMID:21784235
Feature extraction from mammographic images using fast marching methods
NASA Astrophysics Data System (ADS)
Bottigli, U.; Golosio, B.
2002-07-01
Feature extraction from medical images represents a fundamental step for shape recognition and diagnostic support. The present work addresses the problem of detecting large features, such as massive lesions and organ contours, in mammographic images. The regions of interest are often characterized by an average grayness intensity that is different from the surrounding. In most cases, however, the desired features cannot be extracted by simple gray level thresholding, because of image noise and non-uniform density of the surrounding tissue. In this work, edge detection is achieved through the fast marching method (Level Set Methods and Fast Marching Methods, Cambridge University Press, Cambridge, 1999), which is based on the theory of interface evolution. Starting from a seed point in the shape of interest, a front is generated which evolves according to an appropriate speed function. This function is expressed in terms of geometric properties of the evolving interface and of image properties, and should become zero when the front reaches the desired boundary. Some examples of application of such a method to mammographic images from the CALMA database (Nucl. Instr. and Meth. A 460 (2001) 107) are presented here and discussed.
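Front evolution of this flavor can be sketched with a Dijkstra-style first-arrival computation on a grid, where a "speed" map derived from pixel intensity makes the front stall where the speed drops. This is a simplified stand-in for the fast marching method (it uses graph distances rather than an upwind Eikonal update), and the tiny image below is an invented placeholder for a mammogram.

```python
# Sketch: first-arrival times T from a seed point, computed with a heap.
# Travel cost per pixel is 1/F, so the front moves quickly through the
# bright (high-speed) region and slowly through the dark background.
import heapq
import math

def fast_march(speed, seed):
    rows, cols = len(speed), len(speed[0])
    T = [[math.inf] * cols for _ in range(rows)]
    T[seed[0]][seed[1]] = 0.0
    heap = [(0.0, seed)]
    while heap:
        t, (r, c) = heapq.heappop(heap)
        if t > T[r][c]:
            continue                      # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and speed[nr][nc] > 0:
                nt = t + 1.0 / speed[nr][nc]
                if nt < T[nr][nc]:
                    T[nr][nc] = nt
                    heapq.heappush(heap, (nt, (nr, nc)))
    return T

# Bright 3x3 "lesion" (fast region) inside a dark (slow) background.
img = [[1.0 if 1 <= r <= 3 and 1 <= c <= 3 else 0.1 for c in range(5)]
       for r in range(5)]
T = fast_march(img, seed=(2, 2))
```

Thresholding the arrival times T then yields the front position, playing the role of the extracted boundary in the segmentation described above.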