NASA Astrophysics Data System (ADS)
Nouizi, F.; Erkol, H.; Luk, A.; Marks, M.; Unlu, M. B.; Gulsen, G.
2016-10-01
We previously introduced photo-magnetic imaging (PMI), an imaging technique that illuminates the medium under investigation with near-infrared light and measures the induced temperature increase using magnetic resonance thermometry (MRT). Using a multiphysics solver combining photon migration and heat diffusion, PMI models the spatiotemporal distribution of temperature variation and recovers high-resolution optical absorption images from these temperature maps. In this paper, we present a new fast non-iterative reconstruction algorithm for PMI. This new algorithm uses analytic methods for the resolution of the forward problem and the assembly of the sensitivity matrix. We validate our new analytic-based algorithm against the first-generation finite element method (FEM) based reconstruction algorithm previously developed by our team. The validation is performed first using synthetic data and then real MRT-measured temperature maps. Our new method accelerates the reconstruction process 30-fold compared to a single iteration of the FEM-based algorithm.
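The abstract's non-iterative idea can be sketched as a single linear solve: if a sensitivity matrix relates absorption to temperature change, no iteration is needed. This is a minimal toy illustration with made-up dimensions and a random matrix, not the paper's analytic forward model:

```python
import numpy as np

# Toy sketch of a non-iterative reconstruction: assume the temperature
# change dT depends (approximately) linearly on the absorption values mu_a
# through a sensitivity matrix S, so one regularized least-squares solve
# recovers mu_a without iterating. S here is random, purely for illustration.
rng = np.random.default_rng(0)
n_voxels, n_measurements = 50, 200
S = rng.normal(size=(n_measurements, n_voxels))   # stand-in sensitivity matrix
mu_true = rng.uniform(0.01, 0.1, size=n_voxels)   # ground-truth absorption
dT = S @ mu_true                                  # simulated temperature data

lam = 1e-6                                        # small Tikhonov regularization
mu_rec = np.linalg.solve(S.T @ S + lam * np.eye(n_voxels), S.T @ dT)
print(np.max(np.abs(mu_rec - mu_true)))           # recovery error
```

An iterative FEM-based scheme would instead re-solve the forward problem at each update, which is where the reported 30-fold speedup of a direct solve comes from.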
Fast and "green" method for the analytical monitoring of haloketones in treated water.
Serrano, María; Silva, Manuel; Gallego, Mercedes
2014-09-05
Several groups of organic compounds have emerged as being particularly relevant as environmental pollutants, including disinfection by-products (DBPs). Haloketones (HKs), which belong to the unregulated volatile fraction of DBPs, have become a priority because of their occurrence in drinking water at concentrations below 1 μg/L. The absence of a comprehensive method for HKs has led to the development of the first method for determining fourteen of these species. In an effort to miniaturise, this study develops a micro liquid-liquid extraction (MLLE) method adapted from EPA Method 551.1. In this method, practically the whole extract (50 μL) was injected into a programmed temperature vaporiser-gas chromatography-mass spectrometer in order to improve sensitivity. The method was validated by comparing it to EPA Method 551.1 and showed relevant advantages such as lower sample pH (1.5), higher aqueous/organic volume ratio (60), lower solvent consumption (200 μL) and fast, cost-saving operation. The MLLE method achieved detection limits ranging from 6 to 60 ng/L (except for 1,1,3-tribromo-3-chloroacetone, 120 ng/L) with satisfactory precision (RSD, ∼6%) and high recoveries (95-99%). The influence of various dechlorinating agents, as well as of the sample pH, on the stability of the fourteen HKs in treated water was evaluated. To ensure the integrity of the HKs for at least 1 week during storage at 4°C, the samples were acidified to pH ∼1.5, which coincides with the sample pH required for MLLE. The green method was applied to the speciation of fourteen HKs in tap and swimming pool waters, where one and seven chlorinated species, respectively, were found. The concentration of 1,1-dichloroacetone in swimming pool water was ∼25 times higher than in tap water.
A fast analytic dose calculation method for arc treatments for kilovoltage small animal irradiators.
Marco-Rius, I; Wack, L; Tsiamas, P; Tryggestad, E; Berbeco, R; Hesser, J; Zygmanski, P
2013-09-01
Arc treatments require calculation of dose for collections of discrete gantry angles. The sampling of angles must balance the short computation time of small angle sets against the better calculation reliability of large sets. In this paper, an analytical formula is presented that allows calculation of the dose delivered during continuous rotation of the gantry. The formula holds for continuous short arcs of up to about 30° and is derived by integrating a dose formula over gantry angles within a small-angle approximation. Doses for longer arcs may be obtained in terms of doses for shorter arcs. The formula is derived with an empirical beam model in water and extended to inhomogeneous media. It is validated with experimental data obtained by applying an arc treatment with a kV small animal irradiator to a phantom of solid water and lung-equivalent material. The results are a promising step towards efficient 3D dose calculation and inverse planning. In principle, this method also applies to VMAT dose calculation and optimization, but requires extensions.
Improved meta-analytic methods show no effect of chromium supplements on fasting glucose.
Bailey, Christopher H
2014-01-01
The trace mineral chromium has been extensively researched over the years for its role in glucose metabolism. Dietary supplement companies have attempted to claim that chromium may be able to treat or prevent diabetes. Previous meta-analyses/systematic reviews have indicated that chromium supplementation results in a significant lowering of fasting glucose in diabetics but not in nondiabetics. A meta-analysis was conducted using an alternative measure of effect size, d(ppc2), in order to account for changes in the control group as well as the chromium group. The literature search included MEDLINE, the Cochrane Controlled Trials Register, and previously published article reviews, systematic reviews, and meta-analyses. Included studies were randomized, placebo-controlled trials in the English language with subjects who were nonpregnant adults, both with and without diabetes. Sixteen studies with 809 participants (440 diabetics and 369 nondiabetics) were included in the analysis. Screening for publication bias indicated symmetry of the data. Tests of heterogeneity indicated the use of a fixed-effect model (I² = 0%). The analysis indicated that there was no significant effect of chromium supplementation in diabetics or nondiabetics, with a weighted average effect size of 0.02 (SE = 0.07), p = 0.787, 95% CI = -0.12 to 0.16. Chromium supplementation appears to provide no benefit to populations where chromium deficiency is unlikely.
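The fixed-effect pooling reported above can be sketched in a few lines: each study's effect size is weighted by the inverse of its squared standard error. The numbers below are hypothetical, not the studies from this meta-analysis:

```python
import math

# Minimal fixed-effect meta-analysis sketch with invented study data.
# Weights are inverse variances, w_i = 1 / SE_i^2, the standard
# fixed-effect scheme; the pooled SE is sqrt(1 / sum(w_i)).
effects = [0.10, -0.05, 0.02, 0.08, -0.01]   # hypothetical effect sizes d_i
ses     = [0.20,  0.15, 0.25, 0.30,  0.18]   # hypothetical standard errors

weights = [1.0 / se**2 for se in ses]
d_bar = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
se_bar = math.sqrt(1.0 / sum(weights))
ci_low, ci_high = d_bar - 1.96 * se_bar, d_bar + 1.96 * se_bar
print(round(d_bar, 3), round(ci_low, 3), round(ci_high, 3))
```

When the 95% confidence interval straddles zero, as in the paper's pooled result (-0.12 to 0.16), the pooled effect is not statistically significant.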
2012-01-01
Background The quality and safety of advanced therapy products must be maintained throughout their production and quality control cycle to ensure their final use in patients. We validated the cell count method according to the International Conference on Harmonization of Technical Requirements for Registration of Pharmaceuticals for Human Use (ICH) and the European Pharmacopoeia, considering the tests' accuracy, precision, repeatability, linearity and range. Methods As the cell count is a potency test, we checked accuracy, precision, and linearity according to ICH Q2. Briefly, our experimental approach was first to evaluate the accuracy of the Fast Read 102® compared to the Bürker chamber. Once the accuracy of the alternative method was demonstrated, we checked the precision and linearity tests using only the Fast Read 102®. The data were statistically analyzed using the mean, standard deviation and inter- and intra-operator coefficients of variation. Results All the tests performed met the established acceptance criterion of a coefficient of variation of less than ten percent. For the cell count, the precision reached by each operator had a coefficient of variation of less than ten percent (total cells) and under five percent (viable cells). The best range of dilution, to obtain a slope value very close to 1, was between 1:8 and 1:128. Conclusions Our data demonstrate that the Fast Read 102® count method is accurate and precise and ensures the linearity of the results obtained within a range of cell dilutions. Under our standard method procedures, this assay may thus be considered a good quality control method for the cell count as a batch release quality control test. Moreover, the Fast Read 102® chamber is a plastic, disposable device that allows a number of samples to be counted in the same chamber. Last but not least, it overcomes the problem of chamber washing after use and so allows a cell count in a clean environment such as that of a Cell Factory. In a good
Generalized fast multipole method
NASA Astrophysics Data System (ADS)
Létourneau, Pierre-David; Cecka, Cristopher; Darve, Eric
2010-06-01
The fast multipole method (FMM) is a technique allowing the fast calculation of long-range interactions between N points in O(N) or O(N ln N) steps with some prescribed error tolerance. The FMM has found many applications in the field of integral equations and boundary element methods, in particular by accelerating the solution of dense linear systems arising from such formulations. Original FMMs required analytical expansions of the kernel, for example using spherical harmonics or Taylor expansions. In recent years, the range of applicability and the ease of use of FMMs have been extended by the introduction of black-box [1] or kernel-independent techniques [2]. In these approaches, the user only provides a subroutine to numerically calculate the interaction kernel. This allows changing the definition of the kernel with minimal change to the computer program. In this talk we will present a novel kernel-independent FMM which leads to diagonal multipole-to-local operators. This results in a significant reduction in the computational cost [1], in particular when high accuracy is needed. The approach is based on Cauchy's integral formula and the Laplace transform. We will present a numerical analysis of the convergence, methods to choose the parameters in the FMM given some tolerance, and the steps required to build a multilevel scheme from the single-level formulation. Numerical results are given for benchmark calculations to demonstrate the accuracy as a function of the number of multipole coefficients and the computational cost of the different steps in the method.
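The principle the FMM exploits can be shown with the simplest possible far-field expansion: a well-separated cluster of sources, viewed from afar through a 1/r kernel, is well approximated by a single equivalent charge at its weighted centroid. A full FMM adds higher-order terms and a hierarchical tree, but this zeroth-order sketch (toy data, not from the talk) shows why the cost per target becomes independent of cluster size:

```python
import numpy as np

# Single-cluster far-field approximation for the 1/r kernel.
# Exact evaluation costs O(N) per target; the monopole approximation
# costs O(1) per target once the cluster summary is computed.
rng = np.random.default_rng(1)
sources = rng.uniform(0.0, 1.0, size=(100, 3))   # cluster near the origin
charges = rng.uniform(0.5, 1.0, size=100)        # positive source strengths
target = np.array([20.0, 0.0, 0.0])              # well-separated target point

# Exact pairwise sum
exact = np.sum(charges / np.linalg.norm(target - sources, axis=1))

# Monopole ("zeroth multipole") approximation: total charge at the
# charge-weighted centroid; the dipole term vanishes by this choice.
centroid = np.average(sources, axis=0, weights=charges)
approx = charges.sum() / np.linalg.norm(target - centroid)

print(abs(approx - exact) / exact)   # error ~ (cluster radius / distance)^2
```

The error shrinks quadratically with separation, which is why FMMs only apply expansions between well-separated boxes and handle near neighbors directly.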
ERIC Educational Resources Information Center
Ember, Lois R.
1977-01-01
The procedures utilized by the Association of Official Analytical Chemists (AOAC) to develop, evaluate, and validate analytical methods for the analysis of chemical pollutants are detailed. Methods validated by AOAC are used by the EPA and FDA in their enforcement programs and are granted preferential treatment by the courts. (BT)
Samin, Adib; Lahti, Erik; Zhang, Jinsuo
2015-08-15
Cyclic voltammetry is a powerful tool that is used for characterizing electrochemical processes. Models of cyclic voltammetry take into account the mass transport of species and the kinetics at the electrode surface. Analytical solutions of these models are not well-known due to the complexity of the boundary conditions. In this study we present closed form analytical solutions of the planar voltammetry model for two soluble species with fast electron transfer and equal diffusivities using the eigenfunction expansion method. Our solution methodology does not incorporate Laplace transforms and yields good agreement with the numerical solution. This solution method can be extended to cases that are more general and may be useful for benchmarking purposes.
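The eigenfunction-expansion idea above can be illustrated on the simplest diffusion problem with homogeneous Dirichlet boundaries; this toy is a stand-in only, as the voltammetry model's electrode boundary conditions are more involved:

```python
import numpy as np

# Eigenfunction-expansion sketch for c_t = D c_xx on [0, 1] with
# c(0, t) = c(1, t) = 0. The eigenfunctions are sin(n pi x), and each
# mode decays independently as exp(-(n pi)^2 D t).
D = 1.0e-2
x = np.linspace(0.0, 1.0, 201)

def c(x, t):
    # Initial condition c(x, 0) = sin(pi x) + 0.5 sin(3 pi x), so the
    # expansion has exactly two nonzero coefficients: b_1 = 1, b_3 = 0.5.
    return (np.sin(np.pi * x) * np.exp(-(np.pi**2) * D * t)
            + 0.5 * np.sin(3 * np.pi * x) * np.exp(-((3 * np.pi)**2) * D * t))

# The n = 3 mode decays 9x faster than n = 1, so the profile smooths
# toward a single sine arch as t grows.
for t in (0.0, 5.0, 20.0):
    print(t, float(c(x, t).max()))
```

For a general initial profile the coefficients b_n come from Fourier sine integrals; the closed-form solutions in the paper follow the same mode-by-mode structure, which is what makes them useful for benchmarking numerical solvers.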
Leśniewska, Barbara; Kisielewska, Katarzyna; Wiater, Józefa; Godlewska-Żyłkiewicz, Beata
2016-01-01
A new fast method for determination of mobile zinc fractions in soil is proposed in this work. The three-stage modified BCR procedure used for fractionation of zinc in soil was accelerated by using ultrasound. The working parameters of the ultrasound probe, the power and the sonication time, were optimized so that the analyte content in soil extracts obtained by ultrasound-assisted sequential extraction (USE) was consistent with that obtained by the conventional modified Community Bureau of Reference (BCR) procedure. The content of zinc in extracts was determined by flame atomic absorption spectrometry. The developed USE procedure shortened the total extraction time from 48 h to 27 min in comparison with the conventional modified BCR procedure. The method was fully validated, and the uncertainty budget was evaluated. The trueness and reproducibility of the developed method were confirmed by analysis of the certified reference material of lake sediment BCR-701. The applicability of the procedure for fast, low-cost and reliable determination of the mobile zinc fraction in soil, which may be useful for assessing anthropogenic impacts on natural resources and for environmental monitoring, was proved by analysis of different types of soil collected from Podlaskie Province (Poland).
Kim, Junghyun; Suh, Joon Hyuk; Cho, Hyun-Deok; Kang, Wonjae; Choi, Yong Seok; Han, Sang Beom
2016-01-01
A multi-class, multi-residue analytical method based on LC-MS/MS detection was developed for the screening and confirmation of 28 veterinary drug and metabolite residues in flatfish, shrimp and eel. The chosen veterinary drugs are prohibited or unauthorised compounds in Korea, which were categorised into various chemical classes including nitroimidazoles, benzimidazoles, sulfones, quinolones, macrolides, phenothiazines, pyrethroids and others. To achieve fast and simultaneous extraction of various analytes, a simple and generic liquid extraction procedure using EDTA-ammonium acetate buffer and acetonitrile, without further clean-up steps, was applied to sample preparation. The final extracts were analysed by ultra-high-performance liquid chromatography coupled with tandem mass spectrometry (UHPLC-MS/MS). The method was validated for each compound in each matrix at three different concentrations (5, 10 and 20 ng g(-1)) in accordance with Codex guidelines (CAC/GL 71-2009). For most compounds, the recoveries were in the range of 60-110%, and precision, expressed as the relative standard deviation (RSD), was in the range of 5-15%. The detection capabilities (CCβs) were below or equal to 5 ng g(-1), which indicates that the developed method is sufficient to detect illegal fishery products containing the target compounds above the residue limit (10 ng g(-1)) of the new regulatory system (Positive List System - PLS).
Salvo, Andrea; La Torre, Giovanna Loredana; Di Stefano, Vita; Capocchiano, Valentina; Mangano, Valentina; Saija, Emanuele; Pellizzeri, Vito; Casale, Katia Erminia; Dugo, Giacomo
2017-04-15
A fast reversed-phase UPLC method was developed for squalene determination in Sicilian pistachio samples entered in the European register of products with P.D.O. status. In the present study the SPE procedure was optimized for squalene extraction prior to the UPLC/PDA analysis. The precision of the full analytical procedure was satisfactory, and the mean recoveries were 92.8±0.3% and 96.6±0.1% for the 25 and 50 mg L(-1) addition levels, respectively. The selected chromatographic conditions allowed a very fast squalene determination; squalene was well separated in ∼0.54 min with good resolution. Squalene was detected in all the pistachio samples analyzed, at levels ranging from 55.45 to 226.34 mg kg(-1). Comparing our results with those of other studies, squalene contents in P.D.O. Sicilian pistachio samples were generally higher than those measured for samples of other geographic origins.
Fast analytical scatter estimation using graphics processing units.
Ingleby, Harry; Lippuner, Jonas; Rickey, Daniel W; Li, Yue; Elbakri, Idris
2015-01-01
To develop a fast patient-specific analytical estimator of first-order Compton and Rayleigh scatter in cone-beam computed tomography, implemented using graphics processing units. The authors developed an analytical estimator for first-order Compton and Rayleigh scatter in a cone-beam computed tomography geometry. The estimator was coded using NVIDIA's CUDA environment for execution on an NVIDIA graphics processing unit. Performance of the analytical estimator was validated by comparison with high-count Monte Carlo simulations for two different numerical phantoms. Monoenergetic analytical simulations were compared with monoenergetic and polyenergetic Monte Carlo simulations. Analytical and Monte Carlo scatter estimates were compared both qualitatively, from visual inspection of images and profiles, and quantitatively, using a scaled root-mean-square difference metric. Reconstruction of simulated cone-beam projection data of an anthropomorphic breast phantom illustrated the potential of this method as a component of a scatter correction algorithm. The monoenergetic analytical and Monte Carlo scatter estimates showed very good agreement. The monoenergetic analytical estimates showed good agreement for Compton single scatter and reasonable agreement for Rayleigh single scatter when compared with polyenergetic Monte Carlo estimates. For a voxelized phantom with dimensions 128 × 128 × 128 voxels and a detector with 256 × 256 pixels, the analytical estimator required 669 seconds for a single projection, using a single NVIDIA 9800 GX2 video card. Accounting for first-order scatter in cone-beam image reconstruction improves the contrast-to-noise ratio of the reconstructed images. The analytical scatter estimator, implemented using graphics processing units, provides rapid and accurate estimates of single scatter; with further acceleration and a method to account for multiple scatter, it may be useful for practical scatter correction schemes.
Analytic Methods in Investigative Geometry.
ERIC Educational Resources Information Center
Dobbs, David E.
2001-01-01
Suggests an alternative proof by analytic methods, which is more accessible than rigorous proof based on Euclid's Elements, in which students need only apply standard methods of trigonometry to the data without introducing new points or lines. (KHR)
Clean Water Act Analytical Methods
EPA publishes laboratory analytical methods (test procedures) that are used by industries and municipalities to analyze the chemical, physical and biological components of wastewater and other environmental samples required by the Clean Water Act.
NASA Astrophysics Data System (ADS)
Boisson, F.; Bekaert, V.; Reilhac, A.; Wurtz, J.; Brasse, D.
2015-03-01
In SPECT imaging, improvement or deterioration of performance is mostly due to collimator design. Classical SPECT systems mainly use parallel-hole or pinhole collimators. Rotating slat collimators (RSC) can be an interesting alternative to optimize the tradeoff between detection efficiency and spatial resolution. The present study was conducted using a RSC system for small animal imaging called CLiR. The CLiR system was used in planar mode only. In a previous study, planar 2D projections were reconstructed using the well-known filtered backprojection algorithm (FBP). In this paper, we investigated the use of the statistical reconstruction algorithm maximum likelihood expectation maximization (MLEM) to reconstruct 2D images with the CLiR system, using a probability matrix calculated with an analytic approach. The primary objective was to propose a method to quickly generate a lightweight system matrix, which facilitates its handling and storage while providing accurate and reliable performance. Two other matrices were calculated using GATE Monte Carlo simulations to assess the performance obtained with the analytically calculated matrix. The first matrix calculated using GATE took all the physics processes into account, while the second did not account for scattering, as the analytical matrix did not take this physics process into account either. 2D images were reconstructed using FBP and MLEM with the three different probability matrices. Both simulated and experimental data were used. A comparative study of these images was conducted using different metrics: the modulation transfer function, the signal-to-noise ratio and quantification measurements. All the results demonstrated the suitability of using a probability matrix calculated analytically. It provided similar results in terms of spatial resolution (about 0.6 mm, with differences <5%), signal-to-noise ratio (differences <10%) and image quality.
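The MLEM algorithm referenced above has a compact multiplicative form that is easy to sketch. This toy uses a small random system matrix and noise-free data, not the CLiR system's analytically computed matrix:

```python
import numpy as np

# Minimal MLEM sketch: given a system/probability matrix A (detector bins
# x image pixels) and measured counts y, iterate the classic update
#   x <- x * (A^T (y / (A x))) / (A^T 1)
rng = np.random.default_rng(2)
A = rng.uniform(0.0, 1.0, size=(40, 10))
A /= A.sum(axis=0)                      # columns sum to 1 (detection probabilities)
x_true = rng.uniform(1.0, 10.0, size=10)
y = A @ x_true                          # noise-free projection data

x = np.ones(10)                         # uniform, strictly positive start
sens = A.sum(axis=0)                    # sensitivity image A^T 1
for _ in range(2000):
    x *= (A.T @ (y / (A @ x))) / sens

print(np.max(np.abs(A @ x - y)))        # projection residual shrinks toward zero
```

The update preserves positivity and monotonically increases the Poisson likelihood; the paper's contribution is computing A analytically so this matrix is small, quick to generate and easy to store.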
Detering, B.A.; Donaldson, A.D.; Fincke, J.R.; Kong, P.C.; Berry, R.A.
1999-08-10
A fast quench reaction includes a reactor chamber having a high temperature heating means such as a plasma torch at its inlet and a means of rapidly expanding a reactant stream, such as a restrictive convergent-divergent nozzle at its outlet end. Metal halide reactants are injected into the reactor chamber. Reducing gas is added at different stages in the process to form a desired end product and prevent back reactions. The resulting heated gaseous stream is then rapidly cooled by expansion of the gaseous stream. 8 figs.
Fast and Efficient Stochastic Optimization for Analytic Continuation
Bao, Feng; Zhang, Guannan; Webster, Clayton G; Tang, Yanfei; Scarola, Vito; Summers, Michael Stuart; Maier, Thomas A
2016-09-28
The analytic continuation of imaginary-time quantum Monte Carlo data to extract real-frequency spectra remains a key problem in connecting theory with experiment. Here we present a fast and efficient stochastic optimization method (FESOM) as a more accessible variant of the stochastic optimization method introduced by Mishchenko et al. [Phys. Rev. B 62, 6317 (2000)], and we benchmark the resulting spectra with those obtained by the standard maximum entropy method for three representative test cases, including data taken from studies of the two-dimensional Hubbard model. Generally, we find that our FESOM approach yields spectra similar to the maximum entropy results. In particular, while the maximum entropy method yields superior results when the quality of the data is strong, we find that FESOM is able to resolve fine structure with more detail when the quality of the data is poor. In addition, because of its stochastic nature, the method provides detailed information on the frequency-dependent uncertainty of the resulting spectra, while the maximum entropy method does so only for the spectral weight integrated over a finite frequency region. Therefore, we believe that this variant of the stochastic optimization approach provides a viable alternative to the routinely used maximum entropy method, especially for data of poor quality.
Peruga, Aranzazu; Hidalgo, Carmen; Sancho, Juan V; Hernández, Félix
2013-09-13
Pyrethrins are natural insecticides derived from chrysanthemum flowers containing a mixture of six components: pyrethrin I, cinerin I, jasmolin I, pyrethrin II, cinerin II, and jasmolin II. In this work, a rapid and sensitive LC-(ESI)-MS/MS method has been developed for the individual quantification and confirmation of pyrethrin residues in fruit and vegetable samples by monitoring two specific transitions for each pyrethrin component under Selected Reaction Monitoring (SRM) mode. Samples were extracted with acetone/water or acetone, depending on the sample type, and raw extracts were directly injected into the LC-MS/MS system. Method validation was carried out by evaluating linearity, accuracy, precision, specificity, limit of quantification (LOQ) and limit of detection (LOD) in eight types of fruit and vegetable samples at 0.05 mg/kg and 0.5 mg/kg (referring to the sum of all pyrethrins). The method based on acetone/water (70:30) extraction led to satisfactory recoveries (70-110%) and good precision (RSDs below 14%) for all pyrethrin components in lettuce, pepper, strawberry and potato. The method based on acetone extraction gave satisfactory recoveries for lettuce, cucumber, tomato and rice samples, with recoveries between 71 and 107% and RSDs below 15%. For pistachio samples, satisfactory results were obtained only for some analytes; extracts were also injected using the APCI interface, but the lower sensitivity achieved allowed validation only at 0.5 mg/kg. The analytical methodology developed was applied to the analysis of fruit and vegetable samples.
Kawana, Shuichi; Nakagawa, Katsuhiro; Hasegawa, Yuki; Yamaguchi, Seiji
2010-11-15
A simple and rapid method for quantitative analysis of amino acids, including valine (Val), leucine (Leu), isoleucine (Ile), methionine (Met) and phenylalanine (Phe), in whole blood has been developed using GC/MS. In this method, whole blood was collected using a filter paper technique, and a 1/8 in. blood spot punch was used for sample preparation. Amino acids were extracted from the sample, and the extracts were purified using cation-exchange resins. The isotope dilution method using ²H₈-Val, ²H₃-Leu, ²H₃-Met and ²H₅-Phe as internal standards was applied. Following propyl chloroformate derivatization, the derivatives were analyzed using fast-GC/MS. The extraction recoveries using these techniques ranged from 69.8% to 87.9%, and analysis time for each sample was approximately 26 min. Calibration curves at concentrations from 0.0 to 1666.7 μmol/l for Val, Leu, Ile and Phe and from 0.0 to 333.3 μmol/l for Met showed good linearity with regression coefficients=1. The method detection limits for Val, Leu, Ile, Met and Phe were 24.2, 16.7, 8.7, 1.5 and 12.9 μmol/l, respectively. This method was applied to blood spot samples obtained from patients with phenylketonuria (PKU), maple syrup urine disease (MSUD), hypermethionine and neonatal intrahepatic cholestasis caused by citrin deficiency (NICCD), and the analysis results showed that the concentrations of amino acids that characterize these diseases were increased. These results indicate that this method provides a simple and rapid procedure for precise determination of amino acids in whole blood.
Analytical methods under emergency conditions
Sedlet, J.
1983-01-01
This lecture discusses methods for the radiochemical determination of internal contamination of the body under emergency conditions, here defined as a situation in which results on internal radioactive contamination are needed quickly. Speed is needed to determine whether medical treatment to increase the natural elimination rate is necessary. Analytical methods discussed include whole-body counting, organ counting, wound monitoring, and excreta analysis. 12 references. (ACR)
Elmiger, Marco P; Poetzsch, Michael; Steuer, Andrea E; Kraemer, Thomas
2017-08-29
In postmortem toxicology, fast methods can provide a triage to avoid unnecessary autopsies. Usually, this requires multiple qualitative and quantitative analytical methods. The aim of the present study was to develop a postmortem LC-QTOF method for simultaneous screening and quantitation using easy sample preparation and reduced alternative calibration models. Hence, a method for 24 highly relevant substances in forensic toxicology was fully validated using the following calibration models: one-point external, one-point internal via corresponding deuterated standards, multi-point external daily calibration, and multi-point external weekly calibration. Two hundred microliters of postmortem blood were spiked with an internal deuterated standard mixture and extracted by acetonitrile protein precipitation. Analysis was performed on a Sciex 6600 QTOF instrument in ESI+ mode using data-independent acquisition (DIA), namely sequential window acquisition of all theoretical mass spectra (SWATH). Validation of the different calibration models included selectivity, autosampler stability, recovery, matrix effects, accuracy, and precision for the 24 substances. In addition, corresponding deuterated analogs of 52 substances were included in the internal standard mix for semi-quantitative concentration assessment. The simple protein precipitation provided recoveries higher than 55% and 75% for all analytes at low and high concentrations, respectively. Accuracy and precision criteria (bias and imprecision within ±15%, or ±20% near the limit of quantitation) were fulfilled by the different calibration models for most analytes. The validated method was successfully applied to more than 100 authentic postmortem samples and 3 proficiency tests. Furthermore, the one-point internal calibration via corresponding deuterated standards proved to be a considerably time-saving technique for 76 analytes. Graphical abstract: One-point and multi-point calibration and the resulting beta
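The one-point internal calibration via a deuterated standard described above can be sketched in a few lines. This is a generic illustration, not the authors' code; the function names and the single-calibrator response factor are assumptions.

```python
def response_factor(area_cal, area_is_cal, conc_cal, conc_is):
    """Response factor from a single calibrator: the known concentration
    ratio (analyte vs. deuterated internal standard) divided by the
    measured peak-area ratio."""
    return (conc_cal / conc_is) / (area_cal / area_is_cal)

def quantify(area, area_is, conc_is, rf):
    """One-point internal calibration: scale the measured area ratio by
    the internal-standard concentration and the response factor."""
    return rf * (area / area_is) * conc_is
```

Because the deuterated analog co-elutes and ionises like the analyte, matrix effects largely cancel in the area ratio, which is what makes a single calibration point workable.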
Liu, J; Bourland, J
2014-06-01
Purpose: To analytically estimate first-order x-ray scatter for kV cone beam x-ray imaging with high computational efficiency. Methods: In calculating first-order scatter using the Klein-Nishina formula, we found that by integrating the point-to-point scatter along an interaction line, a “pencil-beam” scatter kernel (BSK) can be approximated by a quartic expression when the imaging field is small. This BSK model for monoenergetic, 100 keV x-rays has been verified on homogeneous cube and cylinder water phantoms by comparison with the exact implementation of the KN formula. For heterogeneous media, the water-equivalent length of a BSK was acquired with an improved Siddon ray-tracing algorithm, which was also used in calculating pre- and post-scattering attenuation. To include the electron binding effect for scattering of low-kV photons, the corresponding mean scattering angle is determined from the effective point of scattered photons of a BSK. The behavior of polyenergetic x-rays was also investigated for 120 kV x-rays incident on a sandwiched infinite heterogeneous slab phantom, with the electron binding effect incorporated. Exact computations and Monte Carlo simulations were performed for comparison, using the EGSnrc code package. Results: By reducing the 3D volumetric target (O(n³)) to 2D pencil-beams (O(n²)), the computational expense is generally lowered by a factor of n, which our experience confirms. The scatter distribution on a flat detector shows high agreement between the analytic BSK model and exact calculations. The pixel-to-pixel differences are within (-2%, 2%) for the homogeneous cube and cylinder phantoms and within (0, 6%) for the heterogeneous slab phantom. However, the Monte Carlo simulation shows increased deviation of the BSK model toward the detector periphery. Conclusion: The proposed BSK model, accommodating polyenergetic x-rays and the electron binding effect at low kV, shows great potential in efficiently estimating the first
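The point-to-point scatter term that the BSK integrates rests on the standard Klein-Nishina differential cross-section, which can be written down directly (a textbook formula, not the authors' implementation; constant values are CODATA):

```python
import math

R_E = 2.8179403262e-15   # classical electron radius, m
MEC2 = 510.99895         # electron rest energy, keV

def klein_nishina(E_keV, theta):
    """Klein-Nishina differential cross-section dσ/dΩ (m²/sr) for an
    unpolarised photon of energy E_keV Compton-scattering through angle
    theta; ratio is the scattered-to-incident energy ratio E'/E."""
    ratio = 1.0 / (1.0 + (E_keV / MEC2) * (1.0 - math.cos(theta)))
    return 0.5 * R_E**2 * ratio**2 * (ratio + 1.0 / ratio - math.sin(theta)**2)
```

At theta = 0 the expression reduces to r_e², and at 100 keV the distribution is already noticeably forward-peaked, which is why integrating it along pencil beams toward a small imaging field is a good approximation.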
Analytic heuristics for a fast DSC-MRI
NASA Astrophysics Data System (ADS)
Virgulin, M.; Castellaro, M.; Marcuzzi, F.; Grisan, E.
2014-03-01
Hemodynamics of the human brain may be studied with Dynamic Susceptibility Contrast MRI (DSC-MRI) imaging. The sequence of volumes obtained exhibits a strong spatiotemporal correlation, which can be exploited to predict which measurements will carry most of the new information contained in the next frames. In general, sampling speed is an important issue in many applications of MRI, so much current research focuses on methods to reduce the number of measurement samples needed for each frame without degrading image quality. For DSC-MRI, frequency under-sampling of a single frame can be exploited to make more frequent spatial or temporal acquisitions, thus increasing the time resolution and allowing the analysis of fast dynamics not yet observed. Generally (and also for MRI), the recovery of sparse signals has been achieved by Compressed Sensing (CS) techniques, which are based on statistical properties rather than deterministic ones. By studying analytically the compound Fourier+wavelet transform involved in the reconstruction and sparsification of MR images, we propose a deterministic technique for rapid MRI, exploiting the relations between the sparse wavelet representation of the recovered image and the frequency samples. We give results on real images and on artificial phantoms with added noise, showing the superiority of the method over both the classical Iterative Hard Thresholding (IHT) and the Location Constraint Approximate Message Passing (LCAMP) reconstruction algorithms.
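The IHT baseline mentioned above is a short algorithm: alternate a gradient step on the data-fit term with a hard threshold that keeps only the s largest-magnitude coefficients. A minimal textbook sketch (not the paper's implementation; step size and iteration count are illustrative):

```python
import numpy as np

def iht(y, Phi, s, iters=100, step=1.0):
    """Iterative Hard Thresholding: recover an s-sparse x from y ≈ Phi x.
    Each iteration takes a gradient step on ||y - Phi x||² and then zeroes
    all but the s entries of largest magnitude (the hard threshold)."""
    x = np.zeros(Phi.shape[1])
    for _ in range(iters):
        x = x + step * Phi.T @ (y - Phi @ x)   # gradient step
        keep = np.argsort(np.abs(x))[-s:]      # indices of the s largest entries
        pruned = np.zeros_like(x)
        pruned[keep] = x[keep]
        x = pruned
    return x
```

Convergence guarantees depend on restricted-isometry-type conditions on the measurement operator Phi, which for MRI is the (sub-sampled) Fourier transform composed with a wavelet sparsifier.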
7 CFR 98.4 - Analytical methods.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 3 2014-01-01 2014-01-01 false Analytical methods. 98.4 Section 98.4 Agriculture... Analytical methods. (a) The majority of analytical methods used by the USDA laboratories to perform analyses... Protection Directorate's Military Specifications, approved analytical test methods noted therein, U.S....
Quality Control Analytical Methods: Method Validation.
Klang, Mark G; Williams, LaVonn A
2016-01-01
To properly determine the accuracy of a pharmaceutical product or compounded preparation, tests must be designed specifically for that evaluation. The procedures selected must be verified through a process referred to as method validation, an integral part of any good analytical practice. The results from a method validation procedure can be used to judge the quality, reliability, and consistency of analytical results. The purpose of this article is to deliver the message of the importance of validation of a pharmaceutical product or compounded preparation and to briefly discuss the results of a lack of such validation. Copyright© by International Journal of Pharmaceutical Compounding, Inc.
7 CFR 98.4 - Analytical methods.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 3 2012-01-01 2012-01-01 false Analytical methods. 98.4 Section 98.4 Agriculture....4 Analytical methods. (a) The majority of analytical methods used by the USDA laboratories to.... Army Individual Protection Directorate's Military Specifications, approved analytical test...
7 CFR 98.4 - Analytical methods.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 3 2013-01-01 2013-01-01 false Analytical methods. 98.4 Section 98.4 Agriculture....4 Analytical methods. (a) The majority of analytical methods used by the USDA laboratories to.... Army Individual Protection Directorate's Military Specifications, approved analytical test...
Maraman, W.J.
1981-03-01
This project is directed toward the examination and comparison of the effects of neutron irradiation on Liquid Metal Fast Breeder Reactor (LMFBR) Program fuel materials. Unirradiated and irradiated materials will be examined as requested by the Reference Fuels System Branch of the Division of Reactor Research and Technology (DRRT). Capabilities have been established and are being expanded for providing conventional preirradiation and postirradiation examinations. Nondestructive tests will be conducted in a hot-cell facility specifically modified for examining irradiated prototype fuel pins at a rate commensurate with schedules established by DRRT.
7 CFR 93.4 - Analytical methods.
Code of Federal Regulations, 2012 CFR
2012-01-01
... PROCESSED FRUITS AND VEGETABLES Citrus Juices and Certain Citrus Products § 93.4 Analytical methods. (a) The majority of analytical methods for citrus products are found in the Official Methods of Analysis of AOAC...-2417. (b) Other analytical methods for citrus products may be used as approved by the AMS...
7 CFR 93.4 - Analytical methods.
Code of Federal Regulations, 2013 CFR
2013-01-01
... PROCESSED FRUITS AND VEGETABLES Citrus Juices and Certain Citrus Products § 93.4 Analytical methods. (a) The majority of analytical methods for citrus products are found in the Official Methods of Analysis of AOAC...-2417. (b) Other analytical methods for citrus products may be used as approved by the AMS...
7 CFR 93.4 - Analytical methods.
Code of Federal Regulations, 2014 CFR
2014-01-01
... PROCESSED FRUITS AND VEGETABLES Citrus Juices and Certain Citrus Products § 93.4 Analytical methods. (a) The majority of analytical methods for citrus products are found in the Official Methods of Analysis of AOAC...-2417. (b) Other analytical methods for citrus products may be used as approved by the AMS...
7 CFR 93.4 - Analytical methods.
Code of Federal Regulations, 2011 CFR
2011-01-01
... PROCESSED FRUITS AND VEGETABLES Citrus Juices and Certain Citrus Products § 93.4 Analytical methods. (a) The majority of analytical methods for citrus products are found in the Official Methods of Analysis of AOAC...-2417. (b) Other analytical methods for citrus products may be used as approved by the AMS Deputy...
7 CFR 93.4 - Analytical methods.
Code of Federal Regulations, 2010 CFR
2010-01-01
... PROCESSED FRUITS AND VEGETABLES Citrus Juices and Certain Citrus Products § 93.4 Analytical methods. (a) The majority of analytical methods for citrus products are found in the Official Methods of Analysis of AOAC...-2417. (b) Other analytical methods for citrus products may be used as approved by the AMS Deputy...
7 CFR 91.23 - Analytical methods.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 3 2014-01-01 2014-01-01 false Analytical methods. 91.23 Section 91.23 Agriculture... SERVICES AND GENERAL INFORMATION Method Manuals § 91.23 Analytical methods. Most analyses are performed.... (b) ASTA's Analytical Methods Manual, American Spice Trade Association (ASTA), 560 Sylvan Avenue, P.O...
Jagetic, Lydia J; Newhauser, Wayne D
2015-06-21
State-of-the-art radiotherapy treatment planning systems provide reliable estimates of the therapeutic radiation but are known to underestimate or neglect the stray radiation exposures. Most commonly, stray radiation exposures are reconstructed using empirical formulas or lookup tables. The purpose of this study was to develop the basic physics of a model capable of calculating the total absorbed dose both inside and outside of the therapeutic radiation beam for external beam photon therapy. The model was developed using measurements of total absorbed dose in a water-box phantom from a 6 MV medical linear accelerator to calculate dose profiles in both the in-plane and cross-plane direction for a variety of square field sizes and depths in water. The water-box phantom facilitated development of the basic physical aspects of the model. RMS discrepancies between measured and calculated total absorbed dose values in water were less than 9.3% for all fields studied. Computation times for 10 million dose points within a homogeneous phantom were approximately 4 min. These results suggest that the basic physics of the model are sufficiently simple, fast, and accurate to serve as a foundation for a variety of clinical and research applications, some of which may require that the model be extended or simplified based on the needs of the user. A potentially important advantage of a physics-based approach is that the model is more readily adaptable to a wide variety of treatment units and treatment techniques than with empirical models.
2013-01-01
Background: The aim of this paper was the validation of a new analytical method based on high-resolution continuum source flame atomic absorption spectrometry for the fast-sequential determination of several hazardous/priority hazardous metals (Ag, Cd, Co, Cr, Cu, Ni, Pb and Zn) in soil after microwave-assisted digestion in aqua regia. Determinations were performed on the ContrAA 300 (Analytik Jena) air-acetylene flame spectrometer equipped with a xenon short-arc lamp as a continuum radiation source for all elements, a double monochromator consisting of a prism pre-monochromator and an echelle grating monochromator, and a charge-coupled device as detector. For validation, a method-performance study was conducted involving the establishment of the analytical performance of the new method (limits of detection and quantification, precision and accuracy). Moreover, the Bland and Altman statistical method was used in analyzing the agreement between the proposed assay and inductively coupled plasma optical emission spectrometry as the standardized method for multielemental determination in soil. Results: The limits of detection in soil samples (3σ criterion) in the high-resolution continuum source flame atomic absorption spectrometry method were (mg/kg): 0.18 (Ag), 0.14 (Cd), 0.36 (Co), 0.25 (Cr), 0.09 (Cu), 1.0 (Ni), 1.4 (Pb) and 0.18 (Zn), close to those in inductively coupled plasma optical emission spectrometry: 0.12 (Ag), 0.05 (Cd), 0.15 (Co), 1.4 (Cr), 0.15 (Cu), 2.5 (Ni), 2.5 (Pb) and 0.04 (Zn). Accuracy was checked by analyzing 4 certified reference materials, and good agreement at the 95% confidence level was found for both methods, with recoveries in the range of 94–106% in atomic absorption and 97–103% in optical emission. Repeatability found by analyzing real soil samples was in the range 1.6–5.2% in atomic absorption, similar to the 1.9–6.1% in optical emission spectrometry. The Bland and Altman method showed no statistically significant difference
Frentiu, Tiberiu; Ponta, Michaela; Hategan, Raluca
2013-03-01
The aim of this paper was the validation of a new analytical method based on high-resolution continuum source flame atomic absorption spectrometry for the fast-sequential determination of several hazardous/priority hazardous metals (Ag, Cd, Co, Cr, Cu, Ni, Pb and Zn) in soil after microwave-assisted digestion in aqua regia. Determinations were performed on the ContrAA 300 (Analytik Jena) air-acetylene flame spectrometer equipped with a xenon short-arc lamp as a continuum radiation source for all elements, a double monochromator consisting of a prism pre-monochromator and an echelle grating monochromator, and a charge-coupled device as detector. For validation, a method-performance study was conducted involving the establishment of the analytical performance of the new method (limits of detection and quantification, precision and accuracy). Moreover, the Bland and Altman statistical method was used in analyzing the agreement between the proposed assay and inductively coupled plasma optical emission spectrometry as the standardized method for multielemental determination in soil. The limits of detection in soil samples (3σ criterion) in the high-resolution continuum source flame atomic absorption spectrometry method were (mg/kg): 0.18 (Ag), 0.14 (Cd), 0.36 (Co), 0.25 (Cr), 0.09 (Cu), 1.0 (Ni), 1.4 (Pb) and 0.18 (Zn), close to those in inductively coupled plasma optical emission spectrometry: 0.12 (Ag), 0.05 (Cd), 0.15 (Co), 1.4 (Cr), 0.15 (Cu), 2.5 (Ni), 2.5 (Pb) and 0.04 (Zn). Accuracy was checked by analyzing 4 certified reference materials, and good agreement at the 95% confidence level was found for both methods, with recoveries in the range of 94-106% in atomic absorption and 97-103% in optical emission. Repeatability found by analyzing real soil samples was in the range 1.6-5.2% in atomic absorption, similar to the 1.9-6.1% in optical emission spectrometry. The Bland and Altman method showed no statistically significant difference between the two spectrometric
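The 3σ detection-limit criterion used above is a one-line calculation: three times the standard deviation of replicate blank measurements, converted to concentration via the calibration slope. A minimal sketch (generic formula, not the authors' code; the 10σ quantification limit is the conventional companion):

```python
import statistics

def lod_3sigma(blank_signals, slope):
    """Limit of detection by the 3σ criterion: 3 × the sample standard
    deviation of replicate blank signals, divided by the calibration
    slope to express the result in concentration units."""
    return 3.0 * statistics.stdev(blank_signals) / slope

def loq_10sigma(blank_signals, slope):
    """Limit of quantification by the corresponding 10σ criterion."""
    return 10.0 * statistics.stdev(blank_signals) / slope
```
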
NASA Astrophysics Data System (ADS)
Jagetic, Lydia J.; Newhauser, Wayne D.
2015-06-01
State-of-the-art radiotherapy treatment planning systems provide reliable estimates of the therapeutic radiation but are known to underestimate or neglect the stray radiation exposures. Most commonly, stray radiation exposures are reconstructed using empirical formulas or lookup tables. The purpose of this study was to develop the basic physics of a model capable of calculating the total absorbed dose both inside and outside of the therapeutic radiation beam for external beam photon therapy. The model was developed using measurements of total absorbed dose in a water-box phantom from a 6 MV medical linear accelerator to calculate dose profiles in both the in-plane and cross-plane direction for a variety of square field sizes and depths in water. The water-box phantom facilitated development of the basic physical aspects of the model. RMS discrepancies between measured and calculated total absorbed dose values in water were less than 9.3% for all fields studied. Computation times for 10 million dose points within a homogeneous phantom were approximately 4 min. These results suggest that the basic physics of the model are sufficiently simple, fast, and accurate to serve as a foundation for a variety of clinical and research applications, some of which may require that the model be extended or simplified based on the needs of the user. A potentially important advantage of a physics-based approach is that the model is more readily adaptable to a wide variety of treatment units and treatment techniques than with empirical models.
Wilson, Lydia J; Newhauser, Wayne D
2015-01-01
State-of-the-art radiotherapy treatment planning systems provide reliable estimates of the therapeutic radiation but are known to underestimate or neglect the stray radiation exposures. Most commonly, stray radiation exposures are reconstructed using empirical formulas or lookup tables. The purpose of this study was to develop the basic physics of a model capable of calculating the total absorbed dose both inside and outside of the therapeutic radiation beam for external beam photon therapy. The model was developed using measurements of total absorbed dose in a water-box phantom from a 6 MV medical linear accelerator to calculate dose profiles in both the in-plane and cross-plane direction for a variety of square field sizes and depths in water. The water-box phantom facilitated development of the basic physical aspects of the model. RMS discrepancies between measured and calculated total absorbed dose values in water were less than 9.3% for all fields studied. Computation times for 10 million dose points within a homogeneous phantom were approximately 4 minutes. These results suggest that the basic physics of the model are sufficiently simple, fast, and accurate to serve as a foundation for a variety of clinical and research applications, some of which may require that the model be extended or simplified based on the needs of the user. A potentially important advantage of a physics-based approach is that the model is more readily adaptable to a wide variety of treatment units and treatment techniques than with empirical models. PMID:26040833
7 CFR 94.4 - Analytical methods.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 3 2012-01-01 2012-01-01 false Analytical methods. 94.4 Section 94.4 Agriculture... POULTRY AND EGG PRODUCTS Mandatory Analyses of Egg Products § 94.4 Analytical methods. The majority of analytical methods used by the USDA laboratories to perform mandatory analyses for egg products are listed as...
7 CFR 94.303 - Analytical methods.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 3 2012-01-01 2012-01-01 false Analytical methods. 94.303 Section 94.303 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Standards... POULTRY AND EGG PRODUCTS Processed Poultry Products § 94.303 Analytical methods. The analytical methods...
7 CFR 94.4 - Analytical methods.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 3 2014-01-01 2014-01-01 false Analytical methods. 94.4 Section 94.4 Agriculture... POULTRY AND EGG PRODUCTS Mandatory Analyses of Egg Products § 94.4 Analytical methods. The majority of analytical methods used by the USDA laboratories to perform mandatory analyses for egg products are listed as...
7 CFR 94.303 - Analytical methods.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 3 2011-01-01 2011-01-01 false Analytical methods. 94.303 Section 94.303 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Standards... POULTRY AND EGG PRODUCTS Processed Poultry Products § 94.303 Analytical methods. The analytical methods...
7 CFR 94.303 - Analytical methods.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 3 2010-01-01 2010-01-01 false Analytical methods. 94.303 Section 94.303 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Standards... POULTRY AND EGG PRODUCTS Processed Poultry Products § 94.303 Analytical methods. The analytical methods...
7 CFR 94.303 - Analytical methods.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 3 2013-01-01 2013-01-01 false Analytical methods. 94.303 Section 94.303 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Standards... POULTRY AND EGG PRODUCTS Processed Poultry Products § 94.303 Analytical methods. The analytical methods...
7 CFR 94.303 - Analytical methods.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 3 2014-01-01 2014-01-01 false Analytical methods. 94.303 Section 94.303 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Standards... POULTRY AND EGG PRODUCTS Processed Poultry Products § 94.303 Analytical methods. The analytical methods...
7 CFR 94.103 - Analytical methods.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 3 2014-01-01 2014-01-01 false Analytical methods. 94.103 Section 94.103 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Standards... POULTRY AND EGG PRODUCTS Voluntary Analyses of Egg Products § 94.103 Analytical methods. The analytical...
7 CFR 94.103 - Analytical methods.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 3 2013-01-01 2013-01-01 false Analytical methods. 94.103 Section 94.103 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Standards... POULTRY AND EGG PRODUCTS Voluntary Analyses of Egg Products § 94.103 Analytical methods. The analytical...
7 CFR 94.103 - Analytical methods.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 3 2012-01-01 2012-01-01 false Analytical methods. 94.103 Section 94.103 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Standards... POULTRY AND EGG PRODUCTS Voluntary Analyses of Egg Products § 94.103 Analytical methods. The analytical...
Fast semi-analytical solution of Maxwell's equations in Born approximation for periodic structures.
Pisarenco, Maxim; Quintanilha, Richard; van Kraaij, Mark G M M; Coene, Wim M J
2016-04-01
We propose a fast semi-analytical approach for solving Maxwell's equations in Born approximation based on the Fourier modal method (FMM). We show that, as a result of Born approximation, most matrices in the FMM algorithm become diagonal, thus allowing a reduction of computational complexity from cubic to linear. Moreover, due to the analytical representation of the solution in the vertical direction, the number of degrees of freedom in this direction is independent of the wavelength. The method is derived for planar illumination with two basic polarizations (TE/TM) and an arbitrary 2D geometry infinitely periodic in one horizontal direction.
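The cubic-to-linear claim can be illustrated with a toy linear solve (an illustration of the complexity argument, not the authors' code): when a system matrix is diagonal, solving it costs one division per unknown, O(n), versus O(n³) for a general dense solve.

```python
import numpy as np

def solve_diagonal(d, b):
    """Solve D x = b where D = diag(d): one division per unknown, O(n),
    versus the O(n^3) of a dense LU factorisation. This is the source of
    the complexity reduction when the modal matrices become diagonal."""
    return b / d
```
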
Adaptive fast interface tracking methods
NASA Astrophysics Data System (ADS)
Popovic, Jelena; Runborg, Olof
2017-05-01
In this paper, we present a fast time-adaptive numerical method for interface tracking. The method uses an explicit multiresolution description of the interface, which is represented by wavelet vectors that correspond to the details of the interface on different scale levels. The complexity of standard numerical methods for interface tracking, where the interface is described by N marker points, is O(N/Δt) when a time step Δt is used. The methods that we propose in this paper have O(TOL^(-1/p) log N + N log N) computational cost, at least for uniformly smooth problems, where TOL is a given tolerance and p is the order of the time-stepping method used for time advection of the interface. The adaptive method is robust in the sense that it can handle problems with both smooth and piecewise smooth interfaces (e.g. interfaces with corners) while keeping a low computational cost. We show numerical examples that verify these properties.
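The multiresolution idea behind the cost savings can be seen in one level of a Haar-style transform of the marker values: averages carry the coarse shape, pairwise differences are the detail coefficients, and for a smooth interface the details are small, so few need to be stored and advected. A minimal sketch under those assumptions (not the paper's wavelet basis):

```python
import numpy as np

def haar_details(points):
    """One level of an (unnormalised) Haar transform of interface marker
    values: returns (pairwise averages, pairwise half-differences).
    Smooth data produces small detail coefficients, which an adaptive
    method can discard below a tolerance TOL."""
    evens, odds = points[0::2], points[1::2]
    return (evens + odds) / 2.0, (evens - odds) / 2.0
```
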
Remane, Daniela; Meyer, Markus R; Peters, Frank T; Wissenbach, Dirk K; Maurer, Hans H
2010-07-01
In clinical and forensic toxicology, different extraction procedures as well as analytical methods are used to monitor different drug classes of interest in biosamples. Multi-analyte procedures are preferable because they make the analytical strategy much simpler and cheaper and allow monitoring of analytes of different drug classes in one single body sample. For the development of such a multi-analyte liquid chromatography-tandem mass spectrometry approach, a rapid and simple method for the extraction of 136 analytes from the following drug classes has been established: antidepressants, neuroleptics, benzodiazepines, beta-blockers, oral antidiabetics, and analytes relevant in the context of brain death diagnosis. Recovery, matrix effects, and process efficiency were tested at two concentrations using six different lots of blank plasma. The recovery results obtained using absolute peak areas were compared with those calculated using analyte/internal standard area ratios. The recoveries ranged from 8% to 84% for antidepressants, from 10% to 79% for neuroleptics, from 60% to 81% for benzodiazepines, from 1% to 71% for beta-blockers, from 10% to 73% for antidiabetics, and from 60% to 86% for analytes relevant in the context of brain death diagnosis. With the exception of 52 analytes at the low concentration and 37 at the high concentration, all compounds showed recoveries with acceptable variability, with coefficients of variation below 15% and 20%, respectively. Recovery results obtained by comparing peak area ratios were nearly the same, but 35 analytes at the low concentration and 17 at the high concentration exceeded the acceptance criteria. Matrix effects of more than 25% were observed for 18 analytes. The results were acceptable for 119 analytes at high concentrations.
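The three figures of merit evaluated above (recovery, matrix effect, process efficiency) are conventionally computed from three sets of peak areas in the Matuszewski scheme: a neat standard, blank matrix spiked after extraction, and blank matrix spiked before extraction. A hedged sketch of those standard formulas (not the authors' code):

```python
def matrix_effect(area_post_spike, area_neat):
    """Matrix effect (%): blank extract spiked after extraction,
    relative to a neat standard solution."""
    return 100.0 * area_post_spike / area_neat

def recovery(area_pre_spike, area_post_spike):
    """Extraction recovery (%): matrix spiked before extraction,
    relative to matrix spiked after extraction."""
    return 100.0 * area_pre_spike / area_post_spike

def process_efficiency(area_pre_spike, area_neat):
    """Overall process efficiency (%) = recovery × matrix effect / 100."""
    return 100.0 * area_pre_spike / area_neat
```
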
Fast neutron imaging device and method
Popov, Vladimir; Degtiarenko, Pavel; Musatov, Igor V.
2014-02-11
A fast neutron imaging apparatus and method of constructing fast neutron radiography images, the apparatus including a neutron source and a detector that provides event-by-event acquisition of position and energy deposition, and optionally timing and pulse shape for each individual neutron event detected by the detector. The method for constructing fast neutron radiography images utilizes the apparatus of the invention.
7 CFR 94.4 - Analytical methods.
Code of Federal Regulations, 2011 CFR
2011-01-01
..., Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE (CONTINUED) COMMODITY LABORATORY TESTING PROGRAMS POULTRY AND EGG PRODUCTS Mandatory Analyses of Egg Products § 94.4 Analytical methods. The majority of analytical methods used by the USDA laboratories to perform mandatory analyses for egg products are listed as...
Method of identifying analyte-binding peptides
Kauvar, L.M.
1990-10-16
A method for affinity chromatography or adsorption of a designated analyte utilizes a paralog as the affinity partner. The immobilized paralog can be used in purification or analysis of the analyte; the paralog can also be used as a substitute for antibody in an immunoassay. The paralog is identified by screening candidate peptide sequences of 4-20 amino acids for specific affinity to the analyte. 5 figs.
Method of identifying analyte-binding peptides
Kauvar, Lawrence M.
1990-01-01
A method for affinity chromatography or adsorption of a designated analyte utilizes a paralog as the affinity partner. The immobilized paralog can be used in purification or analysis of the analyte; the paralog can also be used as a substitute for antibody in an immunoassay. The paralog is identified by screening candidate peptide sequences of 4-20 amino acids for specific affinity to the analyte.
Safer staining method for acid fast bacilli.
Ellis, R C; Zabrowarny, L A
1993-06-01
To develop a method for staining acid fast bacilli which excluded highly toxic phenol from the staining solution. A lipophilic agent, a liquid organic detergent, LOC High Suds, distributed by Amway, was substituted. The acid fast bacilli stained red; nuclei, cytoplasm, and cytoplasmic elements stained blue on a clear background. These results compare very favourably with acid fast bacilli stained by the traditional method. Detergents are efficient lipophilic agents and safer to handle than phenol. The method described here stains acid fast bacilli as efficiently as traditional carbol fuchsin methods. LOC High Suds is considerably cheaper than phenol.
Safer staining method for acid fast bacilli.
Ellis, R C; Zabrowarny, L A
1993-01-01
To develop a method for staining acid fast bacilli which excluded highly toxic phenol from the staining solution. A lipophilic agent, a liquid organic detergent, LOC High Suds, distributed by Amway, was substituted. The acid fast bacilli stained red; nuclei, cytoplasm, and cytoplasmic elements stained blue on a clear background. These results compare very favourably with acid fast bacilli stained by the traditional method. Detergents are efficient lipophilic agents and safer to handle than phenol. The method described here stains acid fast bacilli as efficiently as traditional carbol fuchsin methods. LOC High Suds is considerably cheaper than phenol. PMID:7687254
Nuclear analytical methods: Past, present and future
Becker, D.A.
1996-12-31
The development of nuclear analytical methods as an analytical tool began in 1936 with the publication of the first paper on neutron activation analysis (NAA). This year, 1996, marks the 60th anniversary of that event. This paper attempts to look back at the nuclear analytical methods of the past, to look around and to see where the technology is right now, and finally, to look ahead to try and see where nuclear methods as an analytical technique (or as a group of analytical techniques) will be going in the future. The general areas which the author focuses on are: neutron activation analysis; prompt gamma neutron activation analysis (PGNAA); photon activation analysis (PAA); charged-particle activation analysis (CPAA).
Method and apparatus for detecting an analyte
Allendorf, Mark D. [Pleasanton, CA]; Hesketh, Peter J. [Atlanta, GA]
2011-11-29
We describe the use of coordination polymers (CP) as coatings on microcantilevers for the detection of chemical analytes. CP exhibit changes in unit cell parameters upon adsorption of analytes, which will induce a stress in a static microcantilever upon which a CP layer is deposited. We also describe fabrication methods for depositing CP layers on surfaces.
Matrix Methods to Analytic Geometry.
ERIC Educational Resources Information Center
Bandy, C.
1982-01-01
The use of basis matrix methods to rotate axes is detailed. Persons who often need to rotate axes will find that the matrix method saves considerable work. One drawback is that most students first learning to rotate axes will not yet have studied linear algebra. (MP)
Life cycle management of analytical methods.
Parr, Maria Kristina; Schmidt, Alexander H
2017-06-17
In modern process management, the life cycle concept is gaining importance. It focuses on the total costs of the process, from investment to operation and finally retirement. Interest in this concept has also grown for analytical procedures in recent years. The life cycle of an analytical method consists of design, development, validation (including instrumental qualification, continuous method performance verification and method transfer) and finally retirement of the method. Regulatory bodies, too, have become increasingly aware of life cycle management for analytical methods. Thus, the International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use (ICH), as well as the United States Pharmacopeial Forum, are discussing the introduction of new guidelines that include life cycle management of analytical methods. The US Pharmacopeia (USP) Validation and Verification expert panel has already proposed a new General Chapter 〈1220〉 "The Analytical Procedure Lifecycle" for integration into the USP. Growing interest in life cycle management is also seen in the non-regulated environment. Quality-by-design based method development results in increased method robustness, which reduces the effort needed for method performance verification and post-approval changes and minimizes the risk of method-related out-of-specification results. This strongly contributes to reduced costs of the method during its life cycle.
Fast quench reactor and method
Detering, Brent A.; Donaldson, Alan D.; Fincke, James R.; Kong, Peter C.
2002-01-01
A fast quench reactor includes a reactor chamber having a high temperature heating means such as a plasma torch at its inlet and a restrictive convergent-divergent nozzle at its outlet end. Reactants are injected into the reactor chamber. The resulting heated gaseous stream is then rapidly cooled by passage through the nozzle. This "freezes" the desired end product(s) in the heated equilibrium reaction stage.
Fast quench reactor and method
Detering, Brent A.; Donaldson, Alan D.; Fincke, James R.; Kong, Peter C.
1998-01-01
A fast quench reactor includes a reactor chamber having a high temperature heating means such as a plasma torch at its inlet and a restrictive convergent-divergent nozzle at its outlet end. Reactants are injected into the reactor chamber. The resulting heated gaseous stream is then rapidly cooled by passage through the nozzle. This "freezes" the desired end product(s) in the heated equilibrium reaction stage.
Fast quench reactor and method
Detering, Brent A.; Donaldson, Alan D.; Fincke, James R.; Kong, Peter C.
2002-09-24
A fast quench reactor includes a reactor chamber having a high temperature heating means such as a plasma torch at its inlet and a restrictive convergent-divergent nozzle at its outlet end. Reactants are injected into the reactor chamber. The resulting heated gaseous stream is then rapidly cooled by passage through the nozzle. This "freezes" the desired end product(s) in the heated equilibrium reaction stage.
Fast quench reactor and method
Detering, B.A.; Donaldson, A.D.; Fincke, J.R.; Kong, P.C.
1998-05-12
A fast quench reactor includes a reactor chamber having a high temperature heating means such as a plasma torch at its inlet and a restrictive convergent-divergent nozzle at its outlet end. Reactants are injected into the reactor chamber. The resulting heated gaseous stream is then rapidly cooled by passage through the nozzle. This "freezes" the desired end product(s) in the heated equilibrium reaction stage. 7 figs.
Quality by design compliant analytical method validation.
Rozet, E; Ziemons, E; Marini, R D; Boulanger, B; Hubert, Ph
2012-01-03
The concept of quality by design (QbD) has recently been adopted for the development of pharmaceutical processes to ensure a predefined product quality. Focus on applying the QbD concept to analytical methods has increased as it is fully integrated within pharmaceutical processes and especially in the process control strategy. In addition, there is the need to switch from the traditional checklist implementation of method validation requirements to a method validation approach that should provide a high level of assurance of method reliability in order to adequately measure the critical quality attributes (CQAs) of the drug product. The intended purpose of analytical methods is directly related to the final decision that will be made with the results generated by these methods under study. The final aim for quantitative impurity assays is to correctly declare a substance or a product as compliant with respect to the corresponding product specifications. For content assays, the aim is similar: making the correct decision about product compliance with respect to their specification limits. It is for these reasons that the fitness of these methods should be defined, as they are key elements of the analytical target profile (ATP). Therefore, validation criteria, corresponding acceptance limits, and method validation decision approaches should be settled in accordance with the final use of these analytical procedures. This work proposes a general methodology to achieve this in order to align method validation within the QbD framework and philosophy. β-Expectation tolerance intervals are implemented to decide about the validity of analytical methods. The proposed methodology is also applied to the validation of analytical procedures dedicated to the quantification of impurities or active product ingredients (API) in drug substances or drug products, and its applicability is illustrated with two case studies.
7 CFR 94.103 - Analytical methods.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 3 2010-01-01 2010-01-01 false Analytical methods. 94.103 Section 94.103 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Standards... methods used by the Science and Technology Division laboratories to perform voluntary analyses for egg...
7 CFR 94.103 - Analytical methods.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 3 2011-01-01 2011-01-01 false Analytical methods. 94.103 Section 94.103 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Standards... methods used by the Science and Technology Division laboratories to perform voluntary analyses for egg...
Analytical Methods for Trace Metals. Training Manual.
ERIC Educational Resources Information Center
Office of Water Program Operations (EPA), Cincinnati, OH. National Training and Operational Technology Center.
This training manual presents material on the theoretical concepts involved in the methods listed in the Federal Register as approved for determination of trace metals. Emphasis is on laboratory operations. This course is intended for chemists and technicians with little or no experience in analytical methods for trace metals. Students should have…
A simple analytical method to obtain achromatic waveplate retarders
NASA Astrophysics Data System (ADS)
Vilas, Jose Luis; Lazarova-Lazarova, Aleksandra
2017-04-01
A new linear and analytical method to design achromatic retarders using waveplates is proposed. The procedure is rooted in a generalization of the Hariharan method, which assumes a set of waveplates with aligned fast axes and imposes a set of contour conditions on the overall retardation in order to determine the thicknesses of the waveplates. Our method instead uses a polynomial approximation of the birefringences, thus removing the contour conditions. Analytic expressions for calculating the thicknesses of the waveplates are then derived, showing a non-explicit dependence on the wavelength. Moreover, the overall retardation obtained by this method is close to the optimal retardation curve achieved by minimizing the merit function of the achromatism degree.
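Schematically, the Hariharan-type design conditions described in this abstract can be written as follows (the notation here is illustrative, not the paper's):

```latex
% Overall retardation of N stacked waveplates with aligned fast axes,
% thicknesses d_k and birefringences \Delta n_k(\lambda):
\Gamma(\lambda) \;=\; \frac{2\pi}{\lambda}\sum_{k=1}^{N} d_k\,\Delta n_k(\lambda)
```

The classical approach fixes Γ(λ_j) = Γ_0 at a few chosen wavelengths λ_j (the contour conditions) and solves the resulting linear system for the thicknesses d_k; replacing each birefringence Δn_k(λ) by a polynomial in λ instead yields closed-form expressions for the d_k without singling out specific wavelengths.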
Zhang, Wei; Huang, Guangming
2015-11-15
Approaches for analyte screening have been used to aid in the fine-tuning of chemical reactions. Herein, we present a simple and straightforward analyte screening method for chemical reactions via reactive low-temperature plasma ionization mass spectrometry (reactive LTP-MS). Solution-phase reagents deposited on sample substrates were desorbed into the vapor phase by the action of the LTP and by thermal desorption. Treated with LTP, the reagents reacted through a vapor-phase ion/molecule reaction to generate the product. Finally, protonated reagents and products were identified by LTP-MS. Reaction products from the imine formation reaction, the Eschweiler-Clarke methylation and the Eberlin reaction were detected via reactive LTP-MS. Products from the imine formation reaction with reagents substituted with different functional groups (26 out of 28 trials) were successfully screened within 30 s each. In addition, two short-lived reactive intermediates of the Eschweiler-Clarke methylation were also detected. LTP in this study serves both as an ambient ionization source for analyte identification (including reagents, intermediates and products) and as a means to produce reagent ions that assist gas-phase ion/molecule reactions. The present reactive LTP-MS method enables fast screening of several analytes from several chemical reactions, offers good reagent compatibility and has the potential to perform high-throughput analyte screening. With the detection of various reactive intermediates (intermediates I and II of the Eschweiler-Clarke methylation), the method can also contribute to revealing and elucidating reaction mechanisms.
40 CFR 141.89 - Analytical methods.
Code of Federal Regulations, 2014 CFR
2014-07-01
...) NATIONAL PRIMARY DRINKING WATER REGULATIONS Control of Lead and Copper § 141.89 Analytical methods. (a) Analyses for lead, copper, pH, conductivity, calcium, alkalinity, orthophosphate, silica, and temperature... State. Analyses under this section for lead and copper shall only be conducted by laboratories that...
7 CFR 93.13 - Analytical methods.
Code of Federal Regulations, 2012 CFR
2012-01-01
... PROCESSED FRUITS AND VEGETABLES Peanuts, Tree Nuts, Corn and Other Oilseeds § 93.13 Analytical methods. Official analyses for peanuts, nuts, corn, oilseeds, and related vegetable oils are found in the following... Recommended Practices of the American Oil Chemists' Society (AOCS), American Oil Chemists' Society, P.O....
7 CFR 93.13 - Analytical methods.
Code of Federal Regulations, 2014 CFR
2014-01-01
... PROCESSED FRUITS AND VEGETABLES Peanuts, Tree Nuts, Corn and Other Oilseeds § 93.13 Analytical methods. Official analyses for peanuts, nuts, corn, oilseeds, and related vegetable oils are found in the following... Recommended Practices of the American Oil Chemists' Society (AOCS), American Oil Chemists' Society, P.O....
7 CFR 93.13 - Analytical methods.
Code of Federal Regulations, 2010 CFR
2010-01-01
... PROCESSED FRUITS AND VEGETABLES Peanuts, Tree Nuts, Corn and Other Oilseeds § 93.13 Analytical methods. Official analyses for peanuts, nuts, corn, oilseeds, and related vegetable oils are found in the following... Recommended Practices of the American Oil Chemists' Society (AOCS), American Oil Chemists' Society, P.O....
7 CFR 93.13 - Analytical methods.
Code of Federal Regulations, 2013 CFR
2013-01-01
... PROCESSED FRUITS AND VEGETABLES Peanuts, Tree Nuts, Corn and Other Oilseeds § 93.13 Analytical methods. Official analyses for peanuts, nuts, corn, oilseeds, and related vegetable oils are found in the following... Recommended Practices of the American Oil Chemists' Society (AOCS), American Oil Chemists' Society, P.O....
7 CFR 93.13 - Analytical methods.
Code of Federal Regulations, 2011 CFR
2011-01-01
... PROCESSED FRUITS AND VEGETABLES Peanuts, Tree Nuts, Corn and Other Oilseeds § 93.13 Analytical methods. Official analyses for peanuts, nuts, corn, oilseeds, and related vegetable oils are found in the following... Recommended Practices of the American Oil Chemists' Society (AOCS), American Oil Chemists' Society, P.O....
40 CFR 141.89 - Analytical methods.
Code of Federal Regulations, 2012 CFR
2012-07-01
....89 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS Control of Lead and Copper § 141.89 Analytical methods. (a) Analyses for lead, copper, pH, conductivity, calcium, alkalinity, orthophosphate, silica, and...
40 CFR 141.89 - Analytical methods.
Code of Federal Regulations, 2013 CFR
2013-07-01
....89 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS Control of Lead and Copper § 141.89 Analytical methods. (a) Analyses for lead, copper, pH, conductivity, calcium, alkalinity, orthophosphate, silica, and...
Biodiesel Analytical Methods: August 2002--January 2004
Van Gerpen, J.; Shanks, B.; Pruszko, R.; Clements, D.; Knothe, G.
2004-07-01
Biodiesel is an alternative fuel for diesel engines that is receiving great attention worldwide. The material contained in this book is intended to provide the reader with information about biodiesel engines and fuels, analytical methods used to measure fuel properties, and specifications for biodiesel quality control.
Prioritizing pesticide compounds for analytical methods development
Norman, Julia E.; Kuivila, Kathryn; Nowell, Lisa H.
2012-01-01
The U.S. Geological Survey (USGS) has a periodic need to re-evaluate pesticide compounds in terms of priorities for inclusion in monitoring and studies and, thus, must also assess the current analytical capabilities for pesticide detection. To meet this need, a strategy has been developed to prioritize pesticides and degradates for analytical methods development. Screening procedures were developed to separately prioritize pesticide compounds in water and sediment. The procedures evaluate pesticide compounds in existing USGS analytical methods for water and sediment and compounds for which recent agricultural-use information was available. Measured occurrence (detection frequency and concentrations) in water and sediment, predicted concentrations in water and predicted likelihood of occurrence in sediment, potential toxicity to aquatic life or humans, and priorities of other agencies or organizations, regulatory or otherwise, were considered. Several existing strategies for prioritizing chemicals for various purposes were reviewed, including those that identify and prioritize persistent, bioaccumulative, and toxic compounds, and those that determine candidates for future regulation of drinking-water contaminants. The systematic procedures developed and used in this study rely on concepts common to many previously established strategies. The evaluation of pesticide compounds resulted in the classification of compounds into three groups: Tier 1 for high priority compounds, Tier 2 for moderate priority compounds, and Tier 3 for low priority compounds. For water, a total of 247 pesticide compounds were classified as Tier 1 and, thus, are high priority for inclusion in analytical methods for monitoring and studies. Of these, about three-quarters are included in some USGS analytical method; however, many of these compounds are included in research methods that are expensive and for which there are few data on environmental samples. The remaining quarter of Tier 1
Fast and accurate analytical model to solve inverse problem in SHM using Lamb wave propagation
NASA Astrophysics Data System (ADS)
Poddar, Banibrata; Giurgiutiu, Victor
2016-04-01
Lamb wave propagation is at the center of attention for researchers in structural health monitoring (SHM) of thin-walled structures, because Lamb wave modes are natural modes of wave propagation in these structures, travelling long distances without much attenuation. This brings the prospect of monitoring large structures with few sensors/actuators. However, damage detection and identification is an "inverse problem" in which we do not have the luxury of knowing the exact mathematical model of the system. The problem is made more challenging by the confounding factors of statistical variation in material and geometric properties, and it may also be ill posed. Because of these complexities, a direct solution of the damage detection and identification problem in SHM is impractical, so an indirect method built on the solution of the "forward problem" is popular. This requires a fast forward-problem solver. Owing to the complexity of the scattering of Lamb waves from damage, researchers rely primarily on numerical techniques such as FEM and BEM, but these are too slow for practical structural health monitoring. We have developed a fast and accurate analytical forward-problem solver for this purpose. This solver, CMEP (complex modes expansion and vector projection), can simulate the scattering of Lamb waves from all types of damage in thin-walled structures quickly and accurately to assist the inverse-problem solver.
Fast quantum methods for optimization
NASA Astrophysics Data System (ADS)
Boixo, S.; Ortiz, G.; Somma, R.
2015-02-01
Discrete combinatorial optimization consists in finding the optimal configuration that minimizes a given discrete objective function. An interpretation of such a function as the energy of a classical system allows us to reduce the optimization problem into the preparation of a low-temperature thermal state of the system. Motivated by the quantum annealing method, we present three strategies to prepare the low-temperature state that exploit quantum mechanics in remarkable ways. We focus on implementations without uncontrolled errors induced by the environment. This allows us to rigorously prove a quantum advantage. The first strategy uses a classical-to-quantum mapping, where the equilibrium properties of a classical system in d spatial dimensions can be determined from the ground state properties of a quantum system also in d spatial dimensions. We show how such a ground state can be prepared by means of quantum annealing, including quantum adiabatic evolutions. This mapping also allows us to unveil some fundamental relations between simulated and quantum annealing. The second strategy builds upon the first one and introduces a technique called spectral gap amplification to reduce the time required to prepare the same quantum state adiabatically. If implemented on a quantum device that exploits quantum coherence, this strategy leads to a quadratic improvement in complexity over the well-known bound of the classical simulated annealing method. The third strategy is not purely adiabatic; instead, it exploits diabatic processes between the low-energy states of the corresponding quantum system. For some problems it results in an exponential speedup (in the oracle model) over the best classical algorithms.
SU-E-T-422: Fast Analytical Beamlet Optimization for Volumetric Intensity-Modulated Arc Therapy
Chan, Kenny S K; Lee, Louis K Y; Xing, L; Chan, Anthony T C
2015-06-15
Purpose: To implement a fast optimization algorithm on a CPU/GPU heterogeneous computing platform and to obtain an optimal fluence for a given target dose distribution from the pre-calculated beamlets in an analytical approach. Methods: The 2D target dose distribution was modeled as an n-dimensional vector and estimated by a linear combination of independent basis vectors. The basis set was composed of the pre-calculated beamlet dose distributions at every 6 degrees of gantry angle, and the cost function was set as the squared magnitude of the vector difference between the target and the estimated dose distribution. The optimal weighting of the basis, which corresponds to the optimal fluence, was obtained analytically by the least-squares method. Those basis vectors with a positive weighting were selected for entering into the next level of optimization. In total, 7 levels of optimization were implemented in the study. Ten head-and-neck and ten prostate carcinoma cases were selected for the study and mapped to a round water phantom with a diameter of 20 cm. The Matlab computation was performed in a heterogeneous programming environment with an Intel i7 CPU and an NVIDIA GeForce 840M GPU. Results: In all selected cases, the estimated dose distribution was in good agreement with the given target dose distribution, and their correlation coefficients were found to be in the range of 0.9992 to 0.9997. Their root-mean-square error was monotonically decreasing and converged after 7 cycles of optimization. The computation took only about 10 seconds, and the optimal fluence maps at each gantry angle throughout an arc were quickly obtained. Conclusion: An analytical approach is derived for finding the optimal fluence for a given target dose distribution, and a fast optimization algorithm implemented on the CPU/GPU heterogeneous computing environment greatly reduces the optimization time.
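The analytic least-squares step with positivity-based basis selection described in this abstract can be sketched as follows. This is a minimal illustration, not the authors' Matlab/GPU implementation; the function name, the pruning scheme and the level count are assumptions for the sketch.

```python
import numpy as np

def optimize_fluence(B, d, levels=7):
    """Least-squares beamlet-weight optimization with positivity pruning.

    B : (n_voxels, n_beamlets) matrix whose columns are pre-calculated
        beamlet dose distributions
    d : (n_voxels,) target dose vector
    Returns a non-negative weight vector (the fluence)."""
    active = np.arange(B.shape[1])          # beamlets still in play
    w_act = np.zeros(0)
    for _ in range(levels):
        # Analytic least-squares solution for the currently active basis
        w_act, *_ = np.linalg.lstsq(B[:, active], d, rcond=None)
        pos = w_act > 0
        if pos.all():                       # all weights physical: done
            break
        # Drop beamlets with non-positive weight; refit at the next level
        active, w_act = active[pos], w_act[pos]
    w = np.zeros(B.shape[1])
    w[active] = w_act
    return w
```

When the target dose is an exact non-negative combination of the beamlet doses, the first least-squares pass already recovers the weights and no pruning levels are needed.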
Fast Harmonic Splines and Parameter Choice Methods
NASA Astrophysics Data System (ADS)
Gutting, Martin
2017-04-01
Solutions to boundary value problems in geoscience where the boundary is the Earth's surface are constructed in terms of harmonic splines. These are localizing trial functions that allow regional modeling or the improvement of a global model in a part of the Earth's surface. Some of the occurring kernels can be equipped with a fast matrix-vector multiplication using the fast multipole method (FMM). The main idea of the fast multipole algorithm is a hierarchical decomposition of the computational domain into cubes and a kernel approximation for the more distant points. The numerical effort of the matrix-vector multiplication becomes linear in the number of points for a prescribed accuracy of the kernel approximation. This fast spline approximation, which also allows the treatment of noisy data, requires the choice of a smoothing parameter. We investigate several methods to (ideally automatically) choose this parameter, with and without prior knowledge of the noise level. However, in order to keep a fast solution algorithm, we no longer have access to the whole matrix or, e.g., its singular values, whose computation would require a much larger numerical effort. This must be reflected by the parameter choice methods; therefore, in some cases a further approximation is necessary. The performance of these methods is considered for different types of noise in a large simulation study, with applications to gravitational field modeling as well as to boundary value problems.
Secondary waste minimization in analytical methods
Green, D.W.; Smith, L.L.; Crain, J.S.; Boparai, A.S.; Kiely, J.T.; Yaeger, J.S.; Schilling, J.B.
1995-07-01
The characterization phase of site remediation is an important and costly part of the process. Because toxic solvents and other hazardous materials are used in common analytical methods, characterization is also a source of new waste, including mixed waste. Alternative analytical methods can reduce the volume or form of hazardous waste produced either in the sample preparation step or in the measurement step. The authors are examining alternative methods in the areas of inorganic, radiological, and organic analysis. For determining inorganic constituents, alternative methods were studied for sample introduction into inductively coupled plasma spectrometers. Figures of merit for the alternative methods, as well as their associated waste volumes, were compared with the conventional approaches. In the radiological area, the authors are comparing conventional methods for gross α/β measurements of soil samples to an alternative method that uses high-pressure microwave dissolution. For determination of organic constituents, microwave-assisted extraction was studied for RCRA-regulated semivolatile organics in a variety of solid matrices, including spiked samples in blank soil; polynuclear aromatic hydrocarbons in soils, sludges, and sediments; and semivolatile organics in soil. Extraction efficiencies were determined under varying conditions of time, temperature, microwave power, moisture content, and extraction solvent. Solvent usage was cut from the 300 mL used in conventional extraction methods to about 30 mL. Extraction results varied from one matrix to another. In most cases, the microwave-assisted extraction technique was as efficient as the more common Soxhlet or sonication extraction techniques.
Ter Heine, Rob; Rosing, Hilde; Beijnen, Jos H; Huitema, Alwin D R
2010-08-01
We previously developed a method for the simultaneous determination of the human immunodeficiency virus (HIV) protease inhibitors amprenavir, atazanavir, darunavir, indinavir, lopinavir, nelfinavir, ritonavir, saquinavir and tipranavir, the active nelfinavir metabolite M8, the non-nucleoside reverse transcriptase inhibitors efavirenz, nevirapine and etravirine, and the internal standards dibenzepine, (13)C(6)-efavirenz, D5-saquinavir and D6-indinavir in plasma, using liquid chromatography coupled with tandem mass spectrometry on a Sciex API3000 triple quadrupole mass spectrometer and an analytical run time of only 10 min. We report the transfer of this method from the API3000 to a supposedly less sensitive Sciex API365 mass spectrometer, and describe the steps that were undertaken to optimize the sensitivity and to validate the transferred method. We showed that transfer of a method to a putatively less sensitive detector does not necessarily result in a less sensitive assay, so the method can be applied in laboratories where older mass spectrometers are available. Ultimately, the performance of the method was validated: accuracy and precision were within 87%-110% and <13%, respectively, and no notable loss in selectivity was observed.
Directory of Analytical Methods, Department 1820
Whan, R.E.
1986-01-01
The Materials Characterization Department performs chemical, physical, and thermophysical analyses in support of programs throughout the Laboratories. The department has a wide variety of techniques and instruments staffed by experienced personnel available for these analyses, and we strive to maintain near state-of-the-art technology through continued updates. We have prepared this Directory of Analytical Methods in order to acquaint you with our capabilities and to help you identify personnel who can assist with your analytical needs. The descriptions of the various capabilities are requester-oriented and have been limited in length and detail. Emphasis has been placed on applications and limitations, with notes on estimated analysis time and alternative or related techniques. A short, simplified discussion of underlying principles is also presented, along with references if more detail is desired. The contents of this document are organized in the order: bulk analysis, microanalysis, surface analysis, and optical and thermal property measurements.
A method for fast feature extraction in threshold scans
NASA Astrophysics Data System (ADS)
Mertens, Marius C.; Ritman, James
2014-01-01
We present a fast, analytical method to calculate the threshold and noise parameters from a threshold scan. This is usually done by fitting a response function to the data, which is computationally very intensive. The runtime can be minimized by a hardware implementation, e.g. using an FPGA, which in turn requires minimizing the mathematical complexity of the algorithm so that it fits into the available resources on the FPGA. The systematic errors of the method are analyzed, and reasonable parameter choices for use in practice are given.
Delgado-Aparicio, L.; Tritz, K.; Kramer, T.; Stutman, D.; Finkenthal, M.; Hill, K.; Bitter, M.
2010-08-26
A new set of analytic formulae describes the transmission of soft X-ray (SXR) continuum radiation through a metallic foil for its application to fast electron temperature measurements in fusion plasmas. This novel approach shows good agreement with numerical calculations over a wide range of plasma temperatures, in contrast with the solutions obtained when the transmission is approximated by a single Heaviside function [S. von Goeler, Rev. Sci. Instrum., 20, 599, (1999)]. The new analytic formulae can improve the interpretation of the experimental results and thus contribute to obtaining fast temperature measurements in between intermittent Thomson scattering data.
Analytical methods for toxic gases from thermal degradation of polymers
NASA Technical Reports Server (NTRS)
Hsu, M.-T. S.
1977-01-01
Toxic gases evolved from the thermal oxidative degradation of synthetic or natural polymers in small laboratory chambers or in large scale fire tests are measured by several different analytical methods. Gas detector tubes are used for fast on-site detection of suspect toxic gases. The infrared spectroscopic method is an excellent qualitative and quantitative analysis for some toxic gases. Permanent gases such as carbon monoxide, carbon dioxide, methane and ethylene, can be quantitatively determined by gas chromatography. Highly toxic and corrosive gases such as nitrogen oxides, hydrogen cyanide, hydrogen fluoride, hydrogen chloride and sulfur dioxide should be passed into a scrubbing solution for subsequent analysis by either specific ion electrodes or spectrophotometric methods. Low-concentration toxic organic vapors can be concentrated in a cold trap and then analyzed by gas chromatography and mass spectrometry. The limitations of different methods are discussed.
Analytical chromatography. Methods, instrumentation and applications
NASA Astrophysics Data System (ADS)
Yashin, Ya I.; Yashin, A. Ya
2006-04-01
The state of the art and the prospects in the development of the main methods of analytical chromatography, viz., gas, high-performance liquid and ion chromatographic techniques, are characterised. Achievements of the past 10-15 years in the theory and general methodology of chromatography, and also in the development of new sorbents, columns and chromatographic instruments, are outlined. The use of chromatography in environmental control, biology, medicine and pharmaceutics, and also for monitoring the quality of foodstuffs and of products of the chemical, petrochemical and gas industries, is considered.
The greening of PCB analytical methods
Erickson, M.D.; Alvarado, J.S.; Aldstadt, J.H.
1995-12-01
Green chemistry incorporates waste minimization, pollution prevention and solvent substitution. The primary focus of green chemistry over the past decade has been within the chemical industry; adoption by routine environmental laboratories has been slow because regulatory standard methods must be followed. A related paradigm, microscale chemistry has gained acceptance in undergraduate teaching laboratories, but has not been broadly applied to routine environmental analytical chemistry. We are developing green and microscale techniques for routine polychlorinated biphenyl (PCB) analyses as an example of the overall potential within the environmental analytical community. Initial work has focused on adaptation of commonly used routine EPA methods for soils and oils. Results of our method development and validation demonstrate that: (1) Solvent substitution can achieve comparable results and eliminate environmentally less-desirable solvents, (2) Microscale extractions can cut the scale of the analysis by at least a factor of ten, (3) We can better match the amount of sample used with the amount needed for the GC determination step, (4) The volume of waste generated can be cut by at least a factor of ten, and (5) Costs are reduced significantly in apparatus, reagent consumption, and labor.
The use of the spectral method within the fast adaptive composite grid method
McKay, S.M.
1994-12-31
The use of efficient algorithms for the solution of partial differential equations has been sought for many years. The fast adaptive composite grid (FAC) method combines an efficient algorithm with high accuracy to obtain low-cost solutions to partial differential equations. It achieves fast solution by combining solutions on grids with varying discretizations, using multigrid-like techniques. Recently, the continuous FAC (CFAC) method has been developed, which utilizes an analytic solution within a subdomain to iterate to a solution of the problem. This has been shown to achieve excellent results when the analytic solution can be found. The CFAC method will be extended to allow solvers that construct a function for the solution, e.g., spectral and finite element methods. In this discussion, spectral methods will be used to provide a fast, accurate solution to the partial differential equation. As spectral methods are more accurate than finite difference methods, the ensuing accuracy of this hybrid method outside of the subdomain will be investigated.
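The spectral accuracy this abstract appeals to can be illustrated with a minimal sketch (assuming NumPy; the grid and forcing below are invented for illustration and are not the FAC/CFAC algorithm itself): differentiation becomes exact multiplication by the wavenumber in Fourier space, so a smooth problem is solved to near machine precision on a coarse grid.

```python
import numpy as np

# Minimal illustration of spectral accuracy: solve u''(x) = f(x) on a
# periodic domain via FFT. Differentiation is an exact multiplication
# by i*k in Fourier space, so smooth solutions converge exponentially.
n = 64
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
f = -9.0 * np.sin(3.0 * x)        # forcing chosen so the exact solution is sin(3x)
k = np.fft.fftfreq(n, d=1.0 / n)  # integer wavenumbers 0, 1, ..., n/2-1, -n/2, ..., -1
fhat = np.fft.fft(f)
uhat = np.zeros_like(fhat)
nz = k != 0
uhat[nz] = fhat[nz] / (-k[nz] ** 2)  # (u'')-hat = -k^2 * u-hat; zero-mean mode pinned to 0
u = np.fft.ifft(uhat).real
err = np.max(np.abs(u - np.sin(3.0 * x)))  # near machine precision on 64 points
```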
An overview of fast multipole methods
Strickland, J.H.; Baty, R.S.
1995-11-01
A number of physics problems may be cast in terms of Hilbert-Schmidt integral equations. In many cases, the integrals tend to be zero over a large portion of the domain of interest. All of the information is contained in compact regions of the domain, which renders their use very attractive from the standpoint of efficient numerical computation. Discrete representation of these integrals leads to a system of N elements which have pair-wise interactions with one another. A direct solution technique requires computational effort which is O(N^2). Fast multipole methods (FMM) have been widely used in recent years to obtain solutions to these problems requiring a computational effort of only O(N ln N) or O(N). In this paper we present an overview of several variations of the fast multipole method along with examples of its use in solving a variety of physical problems.
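The compression idea behind the multipole family can be sketched in a few lines (a toy monopole approximation with NumPy, invented data, not a full FMM with tree traversal and translation operators): a well-separated source cluster is summarized by a single equivalent charge, turning O(N_src) work per target into O(1).

```python
import numpy as np

rng = np.random.default_rng(0)
# Two well-separated 1D charge clusters: sources near 0, targets near 10.
src = rng.uniform(-0.5, 0.5, 200)
q = rng.uniform(0.5, 1.5, 200)
tgt = rng.uniform(9.5, 10.5, 100)

# Direct pairwise evaluation of 1/r potentials: O(N_src * N_tgt) work.
direct = np.array([np.sum(q / np.abs(t - src)) for t in tgt])

# Monopole (zeroth multipole) approximation: the whole source cluster
# is replaced by its total charge at the charge-weighted center, so the
# dipole term vanishes and the error scales like (cluster radius / distance)^2.
Q = q.sum()
center = np.sum(q * src) / Q
approx = Q / np.abs(tgt - center)

rel_err = np.max(np.abs(direct - approx) / direct)
```

A real FMM keeps higher-order expansion terms and organizes clusters hierarchically in a tree, which is where the O(N ln N) or O(N) scaling comes from.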
Constrained sampling method for analytic continuation
NASA Astrophysics Data System (ADS)
Sandvik, Anders W.
2016-12-01
A method for analytic continuation of imaginary-time correlation functions (here obtained in quantum Monte Carlo simulations) to real-frequency spectral functions is proposed. Stochastically sampling a spectrum parametrized by a large number of δ functions, treated as a statistical-mechanics problem, it avoids distortions caused by (as demonstrated here) configurational entropy in previous sampling methods. The key development is the suppression of entropy by constraining the spectral weight to within identifiable optimal bounds and imposing a set number of peaks. As a test case, the dynamic structure factor of the S =1 /2 Heisenberg chain is computed. Very good agreement is found with Bethe ansatz results in the ground state (including a sharp edge) and with exact diagonalization of small systems at elevated temperatures.
Fast iterative reconstruction method for PROPELLER MRI
NASA Astrophysics Data System (ADS)
Guo, Hongyu; Dai, Jianping; Shi, Jinquan
2009-10-01
Patient motion during MRI scanning introduces artifacts into the reconstructed image. Periodically Rotated Overlapping Parallel Lines with Enhanced Reconstruction (PROPELLER) MRI is an effective technique for correcting motion artifacts. In this paper, an iterative method that combines the preconditioned conjugate gradient (PCG) algorithm with nonuniform fast Fourier transform (NUFFT) operations is applied to PROPELLER MRI. The drawback of this method is its long reconstruction time. To make it viable in clinical situations, parallel optimization of the iterative method on a modern GPU using CUDA is proposed. Simulated data and in vivo data from PROPELLER MRI are reconstructed to test the method. The experimental results show that the GPU-based iterative method improves image quality compared with the gridding method, with comparable reconstruction time.
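The linear-algebra core of such iterative reconstruction can be sketched as conjugate gradients on the normal equations A^H A x = A^H b (a textbook sketch with NumPy; in PROPELLER the operator A would be a NUFFT acting on rotated k-space blades, whereas here A is just a small dense stand-in):

```python
import numpy as np

# CG on the normal equations A^H A x = A^H b, the system at the core of
# iterative MRI reconstruction. A small dense complex matrix stands in
# for the NUFFT encoding operator; no preconditioner is used here.
rng = np.random.default_rng(1)
A = rng.standard_normal((60, 30)) + 1j * rng.standard_normal((60, 30))
x_true = rng.standard_normal(30)
b = A @ x_true

def cg_normal(A, b, iters=50):
    x = np.zeros(A.shape[1], dtype=complex)
    r = A.conj().T @ (b - A @ x)      # residual of the normal equations
    p = r.copy()
    rs = np.vdot(r, r).real
    for _ in range(iters):
        Ap = A.conj().T @ (A @ p)     # apply A^H A (Hermitian positive definite)
        alpha = rs / np.vdot(p, Ap).real
        x += alpha * p
        r -= alpha * Ap
        rs_new = np.vdot(r, r).real
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

x = cg_normal(A, b)
```

In the paper's setting each application of A and A^H is the expensive step, which is why moving those operator applications to the GPU dominates the speedup.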
Analytical Methods for Exoplanet Imaging Detection Metrics
NASA Astrophysics Data System (ADS)
Garrett, Daniel; Savransky, Dmitry
2017-01-01
When designing or simulating exoplanet-finding missions, a selection metric must be used to choose which target stars will be observed. For direct imaging missions, the metric is a function of the planet-star separation and flux ratio as constrained by the instrument's inner and outer working angles and contrast. We present analytical methods for the calculation of two detection metrics: completeness and depth of search. While Monte Carlo methods have typically been used for determining each of these detection metrics, implementing analytical methods in simulation or early stage design yields quicker, more accurate calculations. Completeness is the probability of detecting a planet belonging to the planet population of interest. This metric requires assumptions to be made about the planet population. Probability density functions are assumed for the planetary parameters of semi-major axis, eccentricity, geometric albedo, and planetary radius. Planet-star separation and difference in brightness magnitude or contrast are written as functions of these parameters. A change of variables is performed to get a joint probability density function of planet-star separation and difference in brightness magnitude or contrast. This joint probability density function is marginalized subject to the constraints of the instrument to yield the probability of detecting a planet belonging to the population of interest. Depth of search for direct imaging is the sum of the probability of detecting a planet of given semi-major axis and planetary radius by a given instrument for a target list. This metric does not depend on assumed planet population parameter distributions. A two-dimensional grid of probabilities is generated for each star in the target list. The probability at each point in the grid is found by marginalizing a probability density function of contrast given constant values of semi-major axis and planetary radius subject to the constraints of the instrument.
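The Monte Carlo baseline that the analytical marginalization replaces can be sketched as follows (all population distributions, orbit simplifications, and instrument limits below are hypothetical placeholders, not values from the paper): draw planets from assumed parameter distributions, map each to a projected separation and flux ratio, and count the fraction inside the instrument's observable window.

```python
import numpy as np

# Monte Carlo completeness estimate: fraction of a synthetic planet
# population falling inside a hypothetical instrument's separation and
# contrast limits. Circular face-value orbits for brevity.
rng = np.random.default_rng(2)
n = 100_000
a = rng.uniform(0.5, 5.0, n)                  # semi-major axis [AU], assumed uniform
Rp = rng.uniform(0.5, 2.0, n)                 # planet radius [R_Jup], assumed uniform
beta = np.arccos(rng.uniform(-1.0, 1.0, n))   # isotropic phase angle
s = a * np.sin(beta)                          # projected separation (circular orbit)

albedo = 0.3                                  # assumed geometric albedo
phi = (np.sin(beta) + (np.pi - beta) * np.cos(beta)) / np.pi   # Lambert phase function
r_jup_au = 4.78e-4                            # Jupiter radius in AU
flux_ratio = albedo * phi * (Rp * r_jup_au / a) ** 2

iwa, owa, contrast = 0.7, 4.0, 1e-9           # hypothetical instrument limits [AU, AU, -]
detected = (s > iwa) & (s < owa) & (flux_ratio > contrast)
completeness = detected.mean()
```

The analytic route instead derives the joint density of (s, flux ratio) by a change of variables and integrates it over the same window, avoiding the sampling noise of this estimate.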
Analytic Method for Computing Instrument Pointing Jitter
NASA Technical Reports Server (NTRS)
Bayard, David
2003-01-01
A new method of calculating the root-mean-square (rms) pointing jitter of a scientific instrument (e.g., a camera, radar antenna, or telescope) is introduced based on a state-space concept. In comparison with the prior method of calculating the rms pointing jitter, the present method involves significantly less computation. The rms pointing jitter of an instrument (the square root of the jitter variance shown in the figure) is an important physical quantity which impacts the design of the instrument, its actuators, controls, sensory components, and sensor-output-sampling circuitry. Using the Sirlin, San Martin, and Lucke definition of pointing jitter, the prior method of computing the rms pointing jitter involves a frequency-domain integral of a rational polynomial multiplied by a transcendental weighting function, necessitating the use of numerical-integration techniques. In practice, numerical integration complicates the problem of calculating the rms pointing error. In contrast, the state-space method provides exact analytic expressions that can be evaluated without numerical integration.
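The general state-space route to an rms value, obtaining a variance from a Lyapunov equation instead of a frequency-domain integral, can be illustrated on a generic second-order system (a textbook sketch using SciPy, not the specific Sirlin/San Martin/Lucke jitter definition used in the abstract):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# For a stable linear system dx/dt = A x + B w driven by unit-intensity
# white noise, the steady-state covariance P solves the Lyapunov
# equation A P + P A^T + B B^T = 0, and the output variance is C P C^T.
# No frequency-domain numerical integration is required.
wn, zeta = 2.0 * np.pi * 5.0, 0.1             # 5 Hz mode, light damping (illustrative)
A = np.array([[0.0, 1.0], [-wn ** 2, -2.0 * zeta * wn]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])                    # output = position

P = solve_continuous_lyapunov(A, -B @ B.T)    # solves A P + P A^T = -B B^T
rms = float(np.sqrt(C @ P @ C.T))             # rms of the output
```

For this system the known closed form is variance = 1/(4*zeta*wn^3), which the Lyapunov solution reproduces exactly.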
Fast multipole methods for particle dynamics
Kurzak, J.; Pettitt, B. M.
2008-01-01
The growth of simulations of particle systems has been aided by advances in computer speed and algorithms. The adoption of O(N) algorithms to solve N-body simulation problems has been less rapid because such scaling was only competitive for relatively large N. Our work seeks algorithmic modifications and practical implementations for the intermediate values of N in typical use for molecular simulations. This article reviews fast multipole techniques for the calculation of electrostatic interactions in molecular systems. The basic mathematics behind fast summations applied to long-ranged forces is presented, along with advanced techniques for accelerating the solution, including our most recent developments. The computational efficiency of the new methods facilitates both simulations of large systems and longer, and therefore more realistic, simulations of smaller systems. PMID:19194526
Selected Analytical Methods for Environmental Remediation and Recovery (SAM) - Home
The SAM Home page provides access to all information provided in EPA's Selected Analytical Methods for Environmental Remediation and Recovery (SAM), and includes a query function allowing users to search methods by analyte, sample type and instrumentation.
Analytical method for Buddleja colorants in foods.
Aoki, H; Kuze, N; Ichi, T; Koda, T
2001-04-01
Buddleja yellow colorant derived from Buddleja officinalis Maxim. has recently been approved for use as a new kind of natural colorant for food additives in China. In order to distinguish Buddleja yellow colorant from other yellow colorants, two known phenylpropanoid glycosides, acteoside (= verbascoside) and poliumoside, were isolated from the colorant as marker substances for Buddleja yellow colorant. Poliumoside has not been detected in B. officinalis Maxim. previously. These phenylpropanoid glycosides were not detected in the fruits of Gardenia jasminoides Ellis or in the stamens of the flowers of Crocus sativus L., which also contain crocetin derivatives as coloring components, using a photodiode array and mass chromatograms. Thus, an analytical HPLC method was developed to distinguish foods that have been colored with yellow colorants containing crocetin derivatives, using phenylpropanoid glycosides as markers.
Pyrroloquinoline quinone: Metabolism and analytical methods
Smidt, C.R.
1990-01-01
Pyrroloquinoline quinone (PQQ) functions as a cofactor for bacterial oxidoreductases. Whether or not PQQ serves as a cofactor in higher plants and animals remains controversial. Nevertheless, strong evidence exists that PQQ has nutritional importance. In highly purified, chemically defined diets, PQQ stimulates animal growth. Further, PQQ deprivation impairs connective tissue maturation, particularly when initiated in utero and throughout perinatal development. The study addresses two main objectives: (1) to elucidate basic aspects of the metabolism of PQQ in animals, and (2) to develop and improve existing analytical methods for PQQ. To study intestinal absorption of PQQ, ten mice were administered [14C]-PQQ per os. PQQ was readily absorbed (62%) in the lower intestine and was excreted by the kidney within 24 hours. Significant amounts of labeled PQQ were retained only by skin and kidney. Three approaches were taken to answer the question of whether PQQ is synthesized by the intestinal microflora of mice. First, dietary antibiotics had no effect on fecal PQQ excretion. Second, no bacterial isolates could be identified that are known to synthesize PQQ. Last, cecal contents were incubated anaerobically with radiolabeled PQQ precursors, with no label appearing in isolated PQQ. Thus, intestinal PQQ synthesis is unlikely. Analysis of PQQ in biological samples is problematic since PQQ forms adducts with nucleophilic compounds and binds to the protein fraction. Existing analytical methods are reviewed and a new approach is introduced that allows for detection of PQQ in animal tissue and foods. PQQ is freed from proteins by ion exchange chromatography, purified on activated silica cartridges, detected by a colorimetric redox-cycling assay, and identified by mass spectrometry. That compounds with the properties of PQQ may be nutritionally important offers interesting areas for future investigation.
State-of-the-art in fast liquid chromatography-mass spectrometry for bio-analytical applications.
Núñez, Oscar; Gallart-Ayala, Héctor; Martins, Claudia P B; Lucci, Paolo; Busquets, Rosa
2013-05-15
There is an increasing need for new bio-analytical methodologies with sufficient sensitivity, robustness and resolution to cope with the analysis of a large number of analytes in complex matrices in short analysis times. For this purpose, all steps included in any bio-analytical method (sampling, extraction, clean-up, chromatographic analysis and detection) must be taken into account to achieve good and reliable results with cost-effective methodologies. The purpose of this review is to describe the state of the art of the technologies most employed in the period 2009-2012 to achieve fast analysis with liquid chromatography coupled to mass spectrometry (LC-MS) for bio-analytical applications. Current trends in fast liquid chromatography involve the use of several column technologies, and this review focuses on the two most frequently applied: sub-2μm particle size packed columns to achieve ultra-high-pressure liquid chromatography (UHPLC) separations, and porous-shell particle packed columns to attain high-efficiency separations with reduced column back-pressures. Additionally, recent automated sample extraction and clean-up methodologies that reduce sample manipulation, variability and total analysis time in bio-analytical applications, such as on-line solid phase extraction coupled to HPLC or UHPLC, or the use of other approaches such as molecularly imprinted polymers, restricted access materials, and turbulent flow chromatography, are also addressed. The use of mass spectrometry, including high- and even ultra-high-resolution mass spectrometry, to reduce sample manipulation and to resolve ion suppression, ion enhancement and matrix effects is also presented. The advantages and drawbacks of all these methodologies for fast and sensitive analysis of biological samples are discussed by means of relevant applications.
40 CFR 161.180 - Enforcement analytical method.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 25 2012-07-01 2012-07-01 false Enforcement analytical method. 161.180 Section 161.180 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) PESTICIDE PROGRAMS... § 161.180 Enforcement analytical method. An analytical method suitable for enforcement purposes must be...
40 CFR 158.355 - Enforcement analytical method.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 24 2014-07-01 2014-07-01 false Enforcement analytical method. 158.355... DATA REQUIREMENTS FOR PESTICIDES Product Chemistry § 158.355 Enforcement analytical method. An analytical method suitable for enforcement purposes must be provided for each active ingredient in the...
40 CFR 161.180 - Enforcement analytical method.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 23 2010-07-01 2010-07-01 false Enforcement analytical method. 161.180 Section 161.180 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) PESTICIDE PROGRAMS... § 161.180 Enforcement analytical method. An analytical method suitable for enforcement purposes must be...
Weaver, Abigail A.; Reiser, Hannah; Barstis, Toni; Benvenuti, Michael; Ghosh, Debarati; Hunckler, Michael; Joy, Brittney; Koenig, Leah; Raddell, Kellie; Lieberman, Marya
2013-01-01
Reports of low quality pharmaceuticals have been on the rise in the last decade with the greatest prevalence of substandard medicines in developing countries, where lapses in manufacturing quality control or breaches in the supply chain allow substandard medicines to reach the marketplace. Here, we describe inexpensive test cards for fast field screening of pharmaceutical dosage forms containing beta lactam antibiotics or combinations of the four first-line antituberculosis (TB) drugs. The devices detect the active pharmaceutical ingredients (APIs) ampicillin, amoxicillin, rifampicin, isoniazid, ethambutol, and pyrazinamide, and also screen for substitute pharmaceuticals such as acetaminophen and chloroquine that may be found in counterfeit pharmaceuticals. The tests can detect binders and fillers like chalk, talc, and starch not revealed by traditional chromatographic methods. These paper devices contain twelve lanes, separated by hydrophobic barriers, with different reagents deposited in the lanes. The user rubs some of the solid pharmaceutical across the lanes and dips the edge of the paper into water. As water climbs up the lanes by capillary action, it triggers a library of different chemical tests and a timer to indicate when the tests are completed. The reactions in each lane generate colors to form a “color bar code” which can be analyzed visually by comparison to standard outcomes. While quantification of the APIs is poor compared to conventional analytical methods, the sensitivity and selectivity for the analytes is high enough to pick out suspicious formulations containing no API or a substitute API, as well as formulations containing APIs that have been “cut” with inactive ingredients. PMID:23725012
NASA Astrophysics Data System (ADS)
Kurylyk, Barret L.; Irvine, Dylan J.
2016-02-01
This study details the derivation and application of a new analytical solution to the one-dimensional, transient conduction-advection equation that is applied to trace vertical subsurface fluid fluxes. The solution employs a flexible initial condition that allows for nonlinear temperature-depth profiles, providing a key improvement over most previous solutions. The boundary condition is composed of any number of superimposed step changes in surface temperature, and thus it accommodates intermittent warming and cooling periods due to long-term changes in climate or land cover. The solution is verified using an established numerical model of coupled groundwater flow and heat transport. A new computer program FAST (Flexible Analytical Solution using Temperature) is also presented to facilitate the inversion of this analytical solution to estimate vertical groundwater flow. The program requires surface temperature history (which can be estimated from historic climate data), subsurface thermal properties, a present-day temperature-depth profile, and reasonable initial conditions. FAST is written in the Python computing language and can be run using a free graphical user interface. Herein, we demonstrate the utility of the analytical solution and FAST using measured subsurface temperature and climate data from the Sendai Plain, Japan. Results from these illustrative examples highlight the influence of the chosen initial and boundary conditions on estimated vertical flow rates.
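The superposition of step changes in the surface boundary condition can be sketched for the simpler pure-conduction case (the paper's solution additionally carries a vertical fluid flux and a nonlinear initial profile; the diffusivity, step times, and step sizes below are invented for illustration): each step of size dT applied at time t_i contributes dT * erfc(z / (2*sqrt(alpha*(t - t_i)))) at depth z, and the contributions add linearly.

```python
import numpy as np
from scipy.special import erfc

# Superposed step-change warming propagating into the subsurface by
# pure conduction (no advection). Each surface step contributes an
# erfc profile that decays with depth and sharpens for recent steps.
alpha = 1e-6 * 3.156e7                       # thermal diffusivity [m^2/yr] (~1e-6 m^2/s)
steps = [(0.0, 0.5), (30.0, 0.4), (60.0, 0.3)]   # (year applied, step size [K]), assumed
z = np.linspace(0.0, 100.0, 201)             # depth [m]
t_now = 100.0                                # years since the first step

dT = np.zeros_like(z)
for t_i, size in steps:
    dt = t_now - t_i
    if dt > 0:
        dT += size * erfc(z / (2.0 * np.sqrt(alpha * dt)))

# At the surface the anomaly equals the sum of all applied steps (1.2 K
# here) and it decays monotonically with depth.
```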
Selected Analytical Methods for Environmental Remediation ...
The US Environmental Protection Agency’s Office of Research and Development (ORD) conducts cutting-edge research that provides the underpinning of science and technology for public health and environmental policies and decisions made by federal, state and other governmental organizations. ORD’s six research programs identify the pressing research needs with input from EPA offices and stakeholders. Research is conducted by ORD’s 3 labs, 4 centers, and 2 offices located in 14 facilities. The EPA booth at APHL will have several resources available to attendees, mostly in the form of print materials, that showcase our research labs, case studies of research activities, and descriptions of specific research projects. The Selected Analytical Methods for Environmental Remediation and Recovery (SAM), a library of selected methods that are helping to increase the nation's laboratory capacity to support large-scale emergency response operations, will be demoed by EPA scientists at the APHL Experience booth in the Exhibit Hall on Tuesday during the morning break. Please come to the EPA booth #309 for more information!
An Analytical Approach for Fast Recovery of the LSI Properties in Magnetic Particle Imaging
Jabbari Asl, Hamed
2016-01-01
Linearity and shift invariance (LSI) characteristics of magnetic particle imaging (MPI) are important properties for quantitative medical diagnosis applications. The MPI image equations have been theoretically shown to exhibit LSI; however, in practice, the necessary filtering action removes the first harmonic information, which destroys the LSI characteristics. This lost information can be constant in the x-space reconstruction method. Available recovery algorithms, which are based on signal matching of multiple partial field of views (pFOVs), require much processing time and a priori information at the start of imaging. In this paper, a fast analytical recovery algorithm is proposed to restore the LSI properties of the x-space MPI images, representable as an image of discrete concentrations of magnetic material. The method utilizes the one-dimensional (1D) x-space imaging kernel and properties of the image and lost image equations. The approach does not require overlapping of pFOVs, and its complexity depends only on a small-sized system of linear equations; therefore, it can reduce the processing time. Moreover, the algorithm only needs a priori information which can be obtained at one imaging process. Considering different particle distributions, several simulations are conducted, and results of 1D and 2D imaging demonstrate the effectiveness of the proposed approach. PMID:27847513
Microgenetic Learning Analytics Methods: Workshop Report
ERIC Educational Resources Information Center
Aghababyan, Ani; Martin, Taylor; Janisiewicz, Philip; Close, Kevin
2016-01-01
Learning analytics is an emerging discipline and, as such, benefits from new tools and methodological approaches. This work reviews and summarizes our workshop on microgenetic data analysis techniques using R, held at the second annual Learning Analytics Summer Institute in Cambridge, Massachusetts, on 30 June 2014. Specifically, this paper…
Advances in paper-analytical methods for pharmaceutical analysis.
Sharma, Niraj; Barstis, Toni; Giri, Basant
2017-09-22
Paper devices have many advantages over other microfluidic devices. The paper substrate, from cellulose to glass fiber, is an inexpensive substrate that can be readily modified to suit a variety of applications. Milli- to micro-scale patterns can be designed to create a fast, cost-effective device that uses small amounts of reagents and samples. Finally, well-established chemical and biological methods can be adapted to paper to yield a portable device that can be used in resource-limited areas (e.g., field work). Altogether, the paper devices have grown into reliable analytical devices for screening low quality pharmaceuticals. This review article presents fabrication processes, detection techniques, and applications of paper microfluidic devices toward pharmaceutical screening. Copyright © 2017 Elsevier B.V. All rights reserved.
40 CFR 425.03 - Sulfide analytical methods and applicability.
Code of Federal Regulations, 2013 CFR
2013-07-01
... Provisions § 425.03 Sulfide analytical methods and applicability. (a) The potassium ferricyanide titration... the potassium ferricyanide titration method for the determination of sulfide in wastewaters...
New analytical input impedance calculation for fast design of printed narrow slot antenna
NASA Astrophysics Data System (ADS)
Akan, Volkan; Yazgan, Erdem
2011-09-01
In this article, fast and simple closed-form relations are presented for calculating the input impedance of a filamentary-excited printed narrow slot antenna on an electrically thin dielectric substrate near its half-wavelength resonance frequency. This antenna is the complementary structure of a thin printed dipole antenna. The formulations have been verified numerically with a commercially available full-wave electromagnetic simulator, and a prototype antenna has been produced. Input-impedance measurements on the prototype also verify the analytical relations and the numerical simulations. As the analytical, numerical and experimental results are close to each other, these relations can be utilised as an initial design step before detailed numerical analyses. Besides, the given relations can easily be implemented in CAD-oriented platforms for fast calculations.
40 CFR 136.6 - Method modifications and analytical requirements.
Code of Federal Regulations, 2011 CFR
2011-07-01
... person or laboratory using a test procedure (analytical method) in this Part. (2) Chemistry of the method... (analytical method) provided that the chemistry of the method or the determinative technique is not changed... prevent efficient recovery of organic pollutants and prevent the method from meeting QC requirements,...
40 CFR 136.6 - Method modifications and analytical requirements.
Code of Federal Regulations, 2010 CFR
2010-07-01
.... a surfactant that will not affect the chemistry of the method), which may include Brij-35 or sodium... person or laboratory using a test procedure (analytical method) in this Part. (2) Chemistry of the method... (analytical method) provided that the chemistry of the method or the determinative technique is not...
Metalaxyl: persistence, degradation, metabolism, and analytical methods.
Sukul, P; Spiteller, M
2000-01-01
Metalaxyl is a systemic fungicide used to control plant diseases caused by Oomycete fungi. Its formulations include granules, wettable powders, dusts, and emulsifiable concentrates. Application may be by foliar or soil incorporation, surface spraying (broadcast or band), drenching, and seed treatment. Metalaxyl registered products either contain metalaxyl as the sole active ingredient or are combined with other active ingredients (e.g., captan, mancozeb, copper compounds, carboxin). Due to its broad-spectrum activity, metalaxyl is used world-wide on a variety of fruit and vegetable crops. Its effectiveness results from inhibition of uridine incorporation into RNA and specific inhibition of RNA polymerase-1. Metalaxyl has both curative and systemic properties. Its mammalian toxicity is classified as EPA toxicity class III, and it is also relatively non-toxic to most nontarget arthropod and vertebrate species. Adequate analytical methods of TLC, GLC, HPLC, MS, and other techniques are available for identification and determination of metalaxyl residues and its metabolites. Available laboratory and field studies indicate that metalaxyl is stable to hydrolysis under normal environmental pH values. It is also photolytically stable in water and soil when exposed to natural sunlight. Its tolerance to a wide range of pH, light, and temperature leads to its continued use in agriculture. Metalaxyl is photodecomposed in UV light, and photoproducts are formed by rearrangement of the N-acyl group to the aromatic ring, demethoxylation, N-deacylation, and elimination of the methoxycarbonyl group from the molecule. Photosensitizers such as humic acid, TiO2, H2O2, acetone, and riboflavin accelerate its photodecomposition. Information is provided on the fate of metalaxyl in plant, soil, water, and animals. Major metabolic routes include hydrolysis of the methyl ester and methyl ether oxidation of the ring-methyl groups. The latter are precursors of conjugates in plants and animals.
An analytical method for computing atomic contact areas in biomolecules.
Mach, Paul; Koehl, Patrice
2013-01-15
We propose a new analytical method for detecting and computing contacts between atoms in biomolecules. It is based on the alpha shape theory and proceeds in three steps. First, we compute the weighted Delaunay triangulation of the union of spheres representing the molecule. In the second step, the Delaunay complex is filtered to derive the dual complex. Finally, contacts between spheres are collected. In this approach, two atoms i and j are defined to be in contact if their centers are connected by an edge in the dual complex. The contact areas between atom i and its neighbors are computed based on the caps formed by these neighbors on the surface of i; the total area of all these caps is partitioned according to their spherical Laguerre Voronoi diagram on the surface of i. This method is analytical and its implementation in a new program BallContact is fast and robust. We have used BallContact to study contacts in a database of 1551 high resolution protein structures. We show that with this new definition of atomic contacts, we generate realistic representations of the environments of atoms and residues within a protein. In particular, we establish the importance of nonpolar contact areas that complement the information represented by the accessible surface areas. This new method bears similarity to the tessellation methods used to quantify atomic volumes and contacts, with the advantage that it does not require the presence of explicit solvent molecules if the surface of the protein is to be considered. © 2012 Wiley Periodicals, Inc.
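The contact-detection step can be sketched in simplified form with an ordinary (unweighted) Delaunay triangulation standing in for the weighted Delaunay/dual complex the paper uses; the function below is an illustrative stand-in for the idea, not the BallContact program, and it omits the Laguerre-Voronoi area partition entirely:

```python
import itertools

import numpy as np
from scipy.spatial import Delaunay

def delaunay_contacts(centers, radii):
    """Detect atom-atom contacts: pairs whose centers share a Delaunay edge
    and whose spheres overlap (an unweighted proxy for the dual complex)."""
    tri = Delaunay(centers)
    edges = set()
    for simplex in tri.simplices:               # each tetrahedron contributes 6 edges
        for i, j in itertools.combinations(simplex, 2):
            edges.add((min(i, j), max(i, j)))
    contacts = []
    for i, j in sorted(edges):                  # keep edges whose spheres intersect
        if np.linalg.norm(centers[i] - centers[j]) < radii[i] + radii[j]:
            contacts.append((i, j))
    return contacts

# Toy "molecule": four mutually close atoms plus one distant atom.
centers = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.],
                    [0., 0., 1.], [5., 5., 5.]])
radii = np.full(5, 0.6)
contacts = delaunay_contacts(centers, radii)
```

The Delaunay filter is what distinguishes this from a plain distance cutoff: an atom pair may be within range yet screened by an intervening atom, in which case no edge connects them.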
7 CFR 91.23 - Analytical methods.
Code of Federal Regulations, 2012 CFR
2012-01-01
... INTERNATIONAL, 481 North Frederick Avenue, Suite 500, Gaithersburg, MD 20877-2417. (f) Manual of Analytical... Practices of the American Oil Chemists' Society (AOCS), American Oil Chemists' Society, P.O. Box 3489, 2211... INTERNATIONAL, Volumes I & II, AOAC INTERNATIONAL, 481 North Frederick Avenue, Suite 500, Gaithersburg, MD...
7 CFR 91.23 - Analytical methods.
Code of Federal Regulations, 2011 CFR
2011-01-01
... INTERNATIONAL, 481 North Frederick Avenue, Suite 500, Gaithersburg, MD 20877-2417. (f) Manual of Analytical... Practices of the American Oil Chemists' Society (AOCS), American Oil Chemists' Society, P.O. Box 3489, 2211... INTERNATIONAL, Volumes I & II, AOAC INTERNATIONAL, 481 North Frederick Avenue, Suite 500, Gaithersburg, MD...
Novel analytical methods for the characterization of oral wafers.
Garsuch, Verena; Breitkreutz, Jörg
2009-09-01
This study aims to compensate for the lack of adequate methods for characterizing the novel dosage form of buccal wafers by applying recently advanced analytical techniques. Fast-dissolving oral wafers need special methods for assessing their properties in drug development and quality control. For morphological investigations, scanning electron microscopy (SEM) and near-infrared chemical imaging (NIR-CI) were used. Differences in the distribution of the active pharmaceutical ingredient within wafers can be depicted by NIR-CI. Film thickness was determined by micrometer screw and coating thickness gauge, revealing no significant differences between the obtained values. To distinguish between the mechanical properties of different polymers, tensile tests were performed. Suitable methods to predict disintegration behaviour are thermomechanical analysis and contact angle measurement. The determination of drug release was carried out by three different methods. Fibre-optic sensor systems allow online measurement of the drug release profiles and thorough analysis even within the first seconds of disintegration and drug dissolution.
A Dynamic Management Method for Fast Manufacturing Resource Reconfiguration
NASA Astrophysics Data System (ADS)
Yuan, Zhiye
To reconfigure manufacturing resources quickly and optimally, a dynamic management method for fast manufacturing resource reconfiguration based on holons was proposed. In this method, a holon-based dynamic management structure for fast manufacturing resource reconfiguration was established. Moreover, the cooperation relationships among holons for fast manufacturing resource reconfiguration and a holonic manufacturing-information cooperation mechanism were constructed. Finally, a simulation system for the method was demonstrated and validated with Flexsim software. The results show that the proposed method can dynamically and optimally reconfigure manufacturing resources and can effectively improve the efficiency of manufacturing processes.
Fast Implicit Methods for Stiff Moving Interfaces
2011-03-16
Multiplying Eq. (2) of the elliptic system by S, integrating over Ω, and applying Gauss' theorem yields a second-kind boundary integral equation of the form (1/2)µ(γ) − ∫_Γ P(γ) S(γ − σ) An(σ) µ(σ) dσ = … for the interface density µ on Γ. Cited references include: Fast ADI iteration for first-order elliptic systems, Preprint, UC Berkeley Mathematics Department, 2011; [15] J. Strain, Geometric nonuniform fast …
Methods and Instruments for Fast Neutron Detection
Jordan, David V.; Reeder, Paul L.; Cooper, Matthew W.; McCormick, Kathleen R.; Peurrung, Anthony J.; Warren, Glen A.
2005-05-01
Pacific Northwest National Laboratory evaluated the performance of a large-area (~0.7 m2) plastic scintillator time-of-flight (TOF) sensor for direct detection of fast neutrons. This type of sensor is a readily area-scalable technology that provides broad-area geometrical coverage at a reasonably low cost. It can yield intrinsic detection efficiencies that compare favorably with moderator-based detection methods. The timing resolution achievable should permit substantially more precise time windowing of return neutron flux than would otherwise be possible with moderated detectors. The energy-deposition threshold imposed on each scintillator contributing to the event-definition trigger in a TOF system can be set to blind the sensor to direct emission from the neutron generator. The primary technical challenge addressed in the project was to understand the capabilities of a neutron TOF sensor in the limit of large scintillator area and small scintillator separation, a size regime in which the neutral particle’s flight path between the two scintillators is not tightly constrained.
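The time-of-flight principle underlying the sensor is simple kinematics: the neutron's speed over a known flight path gives its kinetic energy. A hedged, classical (non-relativistic) sketch of that conversion, not PNNL's analysis code:

```python
NEUTRON_MASS_KG = 1.674927e-27      # CODATA neutron rest mass
JOULES_PER_MEV = 1.602176634e-13

def neutron_energy_mev(flight_path_m, tof_s):
    """Classical kinetic energy of a neutron from its time of flight over a
    known path length (adequate well below relativistic speeds)."""
    v = flight_path_m / tof_s        # speed in m/s
    return 0.5 * NEUTRON_MASS_KG * v**2 / JOULES_PER_MEV
```

Because energy scales as 1/t², the timing resolution of the two scintillator planes directly sets the energy resolution, which is why the abstract emphasizes precise time windowing.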
An analytic reconstruction method for PET based on cubic splines
NASA Astrophysics Data System (ADS)
Kastis, George A.; Kyriakopoulou, Dimitra; Fokas, Athanasios S.
2014-03-01
PET imaging is an important nuclear medicine modality that measures in vivo distribution of imaging agents labeled with positron-emitting radionuclides. Image reconstruction is an essential component in tomographic medical imaging. In this study, we present the mathematical formulation and an improved numerical implementation of an analytic, 2D reconstruction method called SRT (Spline Reconstruction Technique). This technique is based on the numerical evaluation of the Hilbert transform of the sinogram via an approximation in terms of 'custom made' cubic splines. It also imposes sinogram thresholding, which restricts reconstruction only to object pixels. Furthermore, by utilizing certain symmetries it achieves a reconstruction time similar to that of FBP. We have implemented SRT in the software library called STIR and have evaluated this method using simulated PET data. We present reconstructed images from several phantoms. Sinograms have been generated at various Poisson noise levels, and 20 realizations of noise have been created at each level. In addition to visual comparisons of the reconstructed images, the contrast has been determined as a function of noise level. Further analysis includes the creation of line profiles, when necessary, to determine resolution. Numerical simulations suggest that the SRT algorithm produces fast and accurate reconstructions at realistic noise levels. The contrast is over 95% in all phantoms examined and is independent of noise level.
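SRT hinges on the Hilbert transform of the sinogram, which the paper evaluates via custom cubic splines. As a hedged illustration of the transform itself (using a plain FFT rule for periodic signals rather than the paper's spline approximation):

```python
import numpy as np

def hilbert_transform(x):
    """Discrete Hilbert transform of a real periodic signal via the FFT:
    multiply positive frequencies by -i and negative frequencies by +i."""
    n = len(x)
    X = np.fft.fft(x)
    h = -1j * np.sign(np.fft.fftfreq(n))   # 0 at DC, -i / +i elsewhere
    return np.fft.ifft(X * h).real

# Sanity check signal: the Hilbert transform of cos is sin.
t = np.linspace(0, 2 * np.pi, 256, endpoint=False)
```

The spline-based evaluation in the paper serves the same role but handles the non-periodic, finitely supported sinogram rows more carefully than this periodic FFT rule does.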
40 CFR 158.355 - Enforcement analytical method.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 23 2010-07-01 2010-07-01 false Enforcement analytical method. 158.355 Section 158.355 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) PESTICIDE PROGRAMS DATA REQUIREMENTS FOR PESTICIDES Product Chemistry § 158.355 Enforcement analytical method. An...
40 CFR 158.355 - Enforcement analytical method.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 25 2013-07-01 2013-07-01 false Enforcement analytical method. 158.355 Section 158.355 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) PESTICIDE PROGRAMS DATA REQUIREMENTS FOR PESTICIDES Product Chemistry § 158.355 Enforcement analytical method. An...
40 CFR 158.355 - Enforcement analytical method.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 24 2011-07-01 2011-07-01 false Enforcement analytical method. 158.355 Section 158.355 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) PESTICIDE PROGRAMS DATA REQUIREMENTS FOR PESTICIDES Product Chemistry § 158.355 Enforcement analytical method. An...
40 CFR 158.355 - Enforcement analytical method.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 25 2012-07-01 2012-07-01 false Enforcement analytical method. 158.355 Section 158.355 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) PESTICIDE PROGRAMS DATA REQUIREMENTS FOR PESTICIDES Product Chemistry § 158.355 Enforcement analytical method. An...
NASA Technical Reports Server (NTRS)
Kershaw, David S.; Prasad, Manoj K.; Beason, J. Douglas
1986-01-01
The Klein-Nishina differential cross section averaged over a relativistic Maxwellian electron distribution is analytically reduced to a single integral, which can then be rapidly evaluated in a variety of ways. A particularly fast method for numerically computing this single integral is presented. This is, to the authors' knowledge, the first correct computation of the Compton scattering kernel.
Fast calculation method for spherical computer-generated holograms.
Tachiki, Mark L; Sando, Yusuke; Itoh, Masahide; Yatagai, Toyohiko
2006-05-20
The synthesis of spherical computer-generated holograms is investigated. To deal with the staggering calculation times required to synthesize the hologram, a fast calculation method for approximating the hologram distribution is proposed. In this method, the diffraction integral is approximated as a convolution integral, allowing computation using the fast-Fourier-transform algorithm. The principles of the fast calculation method, the error in the approximation, and results from simulations are presented.
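The speedup comes from recasting the diffraction integral as a convolution that the FFT can evaluate. A minimal, generic sketch of convolution via the convolution theorem (circular boundary conditions; a toy kernel, not the paper's spherical hologram kernel):

```python
import numpy as np

def fft_convolve(field, kernel):
    """Circular 2D convolution via the convolution theorem:
    conv(f, g) = IFFT( FFT(f) * FFT(g) ), O(N log N) instead of O(N^2)."""
    return np.fft.ifft2(np.fft.fft2(field) * np.fft.fft2(kernel))

# Convolving a unit impulse with any kernel must reproduce the kernel.
delta = np.zeros((4, 4))
delta[0, 0] = 1.0
kernel = np.arange(16.0).reshape(4, 4)
result = fft_convolve(delta, kernel)
```

This is the generic trick; the paper's contribution is showing that the spherical diffraction integral can be approximated in this convolutional form at acceptable error.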
Accelerated panel methods using the fast multipole method
NASA Technical Reports Server (NTRS)
Leathrum, James F., Jr.
1994-01-01
Panel methods are commonly used in computational fluid dynamics for the solution of potential flow problems. The methods are a numerical technique based on the surface distribution of singularity elements. The solution is the process of finding the strength of the singularity elements distributed over the body's surface. This process involves the solution of the matrix problem Pq = p' for a set of unknowns q. The Fast Multipole Method is used to directly compute q without using matrix solvers. The algorithm works in O(N) time for N points, a great improvement over standard matrix solvers. In panel methods, the surface of a body is divided into a series of quadrilateral panels. The methods involve the computation of the influence of all other panels on each individual panel. The influence is based on the surface distribution, though this can be approximated by the area for distant panels. An alternative approximation, though with arbitrary accuracy, is to develop a multipole expansion about the center of the panel to describe the effect of a given panel on distant points in space. The expansion is based on the moments of the panel, thus allowing the use of various surface distributions without changing the basic algorithm, just the computation of the various moments. The expansions are then manipulated in a tree walk to develop Taylor series expansions about a point in space which describe the effect of all distant panels on any point within a volume of convergence. The effect of near panels then needs to be computed directly, but the effect of all distant panels can be computed by simply evaluating the resulting expansion. The Fast Multipole Method has been applied to panel methods for the solution of source and doublet distributions. A major feature is that the algorithm does not change when deriving the potential and velocity for sources and doublets; the same expansions can be used for both. Since the velocity is related to the
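The multipole expansion at the heart of the method can be illustrated with the classical 2D point-source form (Greengard-Rokhlin style), a hedged toy standing in for the panel-moment version described above; the function names and the test geometry are invented for illustration:

```python
import numpy as np

def multipole_coeffs(sources, charges, center, p):
    """Multipole expansion of phi(z) = sum_i q_i log(z - z_i) about `center`:
    phi(z) ~ Q log(z - c) + sum_{k=1}^p a_k / (z - c)^k,
    with  a_k = -(1/k) sum_i q_i (z_i - c)^k  (2D, complex form)."""
    Q = charges.sum()
    a = np.array([-(charges * (sources - center) ** k).sum() / k
                  for k in range(1, p + 1)])
    return Q, a

def eval_multipole(z, center, Q, a):
    """Evaluate the truncated expansion at a point z outside the cluster."""
    w = z - center
    k = np.arange(1, len(a) + 1)
    return (Q * np.log(w) + (a / w ** k).sum()).real

# Cluster of sources in the unit square, evaluated at a distant point.
rng = np.random.default_rng(0)
sources = rng.uniform(-0.5, 0.5, 20) + 1j * rng.uniform(-0.5, 0.5, 20)
charges = rng.uniform(0.0, 1.0, 20)
Q, a = multipole_coeffs(sources, charges, 0j, p=12)
direct = (charges * np.log(np.abs(5.0 + 3.0j - sources))).sum()
approx = eval_multipole(5.0 + 3.0j, 0j, Q, a)
```

The error decays geometrically in the ratio (cluster radius)/(evaluation distance) raised to the truncation order, which is why far-field interactions can be lumped into one cheap expansion while near panels are still summed directly.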
A Joint Analytic Method for Estimating Aquitard Hydraulic Parameters.
Zhuang, Chao; Zhou, Zhifang; Illman, Walter A
2017-01-10
The vertical hydraulic conductivity (Kv), elastic (Sske), and inelastic (Sskv) skeletal specific storage of aquitards are three of the most critical parameters in land subsidence investigations. Two new analytic methods are proposed to estimate the three parameters. The first analytic method is based on a new concept of delay time ratio for estimating Kv and Sske of an aquitard subject to long-term stable, cyclic hydraulic head changes at boundaries. The second analytic method estimates the Sskv of the aquitard subject to linearly declining hydraulic heads at boundaries. Both methods are based on analytical solutions for flow within the aquitard, and they are jointly employed to obtain the three parameter estimates. This joint analytic method is applied to estimate the Kv, Sske, and Sskv of a 34.54-m-thick aquitard for which the deformation progress has been recorded by an extensometer located in Shanghai, China. The estimated results are then calibrated by PEST (Doherty 2005), a parameter estimation code coupled with a one-dimensional aquitard-drainage model. The Kv and Sske estimated by the joint analytic method are quite close to those estimated via inverse modeling and performed much better in simulating elastic deformation than the estimates obtained from the stress-strain diagram method of Ye and Xue (2005). The newly proposed joint analytic method is an effective tool that provides reasonable initial values for calibrating land subsidence models.
NASA Astrophysics Data System (ADS)
Kokhanovsky, Alexander; Katsev, Iosif; Prikhach, Alexander; Zege, Eleonora
We present the new fast aerosol retrieval technique (FAR) to retrieve the aerosol optical thickness (AOT), Angstrom parameter, and land reflectance from spectral satellite data. The most important difference between the proposed technique and the NASA/MODIS, ESA/MERIS, and some other well-known AOT retrieval codes is that our retrievals do not use the look-up table (LUT) technique; instead, they are based on our previously developed extremely fast code RAY for radiative transfer (RT) computations and include analytical solutions of radiative transfer. The previous version of the retrieval code (ART) was based entirely on RT computations. The FAR technique is about 100 times faster than ART because it combines RAY computations with analytical solutions of radiative transfer theory. The accuracy of these approximate solutions is thoroughly checked. Using RT computations in the course of the AOT retrieval allows one to include any available local models of the molecular atmosphere and of aerosol in the upper and middle atmosphere layers for the treated area. Any set of wavelengths from any satellite optical instrument can be processed. Moreover, we use the method of least squares in the retrieval of the optical parameters of aerosol because the RAY code provides the derivatives of the radiation characteristics with respect to the parameters in question. This technique allows the optimal use of multi-spectral information. The retrieval methods are flexible and can be used in synergetic algorithms, which couple data from two or more satellite receivers. These features may be considered definite merits in comparison with the LUT technique. The successful comparison of FAR-retrieved data with results of some other algorithms and with AERONET measurements will be demonstrated. Besides, two important problems, namely the effect of the a priori choice of aerosol model on the retrieved AOT accuracy and the effect of adjacent pixels containing clouds or snow spots, is
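A least-squares retrieval that exploits model derivatives, as the abstract says the RAY code makes possible, can be sketched with a generic Gauss-Newton iteration; this is a hedged toy (the exponential forward model, starting point, and names are invented for illustration), not the FAR code:

```python
import numpy as np

def gauss_newton(residual, jacobian, x0, iters=30):
    """Generic Gauss-Newton least squares: each step solves the normal
    equations built from the model Jacobian supplied by the forward code."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r, J = residual(x), jacobian(x)
        x = x - np.linalg.solve(J.T @ J, J.T @ r)   # Gauss-Newton update
    return x

# Toy forward model y = a * exp(b * t), with exact synthetic "measurements".
t = np.linspace(0.0, 2.0, 10)
y = 2.0 * np.exp(-1.0 * t)
res = lambda x: x[0] * np.exp(x[1] * t) - y
jac = lambda x: np.column_stack([np.exp(x[1] * t), x[0] * t * np.exp(x[1] * t)])
fit = gauss_newton(res, jac, [1.8, -0.9])
```

The design point mirrored here is that having analytic derivatives from the forward model avoids finite-difference Jacobians, which is a large part of why a derivative-aware retrieval can outrun a LUT search.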
7 CFR 91.23 - Analytical methods.
Code of Federal Regulations, 2010 CFR
2010-01-01
..., Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE (CONTINUED) COMMODITY LABORATORY TESTING PROGRAMS... according to approved procedures described in manuals of standardized methodology. These standard methods...
7 CFR 91.23 - Analytical methods.
Code of Federal Regulations, 2013 CFR
2013-01-01
..., Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE (CONTINUED) COMMODITY LABORATORY TESTING PROGRAMS... according to approved procedures described in manuals of standardized methodology. These standard methods...
40 CFR 141.704 - Analytical methods.
Code of Federal Regulations, 2013 CFR
2013-07-01
... Cryptosporidium using Method 1623: Cryptosporidium and Giardia in Water by Filtration/IMS/FA, 2005, United States Environmental Protection Agency, EPA-815-R-05-002 or Method 1622: Cryptosporidium in Water by Filtration/IMS/FA... 141.704 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER...
40 CFR 141.704 - Analytical methods.
Code of Federal Regulations, 2012 CFR
2012-07-01
... Cryptosporidium using Method 1623: Cryptosporidium and Giardia in Water by Filtration/IMS/FA, 2005, United States Environmental Protection Agency, EPA-815-R-05-002 or Method 1622: Cryptosporidium in Water by Filtration/IMS/FA... 141.704 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER...
Learner Language Analytic Methods and Pedagogical Implications
ERIC Educational Resources Information Center
Dyson, Bronwen
2010-01-01
Methods for analysing interlanguage have long aimed to capture learner language in its own right. By surveying the cognitive methods of Error Analysis, Obligatory Occasion Analysis and Frequency Analysis, this paper traces reformulations to attain this goal. The paper then focuses on Emergence Analysis, which fine-tunes learner language analysis…
7 CFR 94.4 - Analytical methods.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 20005. (g) Standard Methods for the Examination of Water and Wastewater, American Public Health Association (APHA), the American Water Works Association (AWWA) and the Water Pollution Control Federation... Physical/Chemical Methods, Environmental Protection Agency, Office of Solid Waste, SW-846 Integrated Manual...
A fast Chebyshev method for simulating flexible-wing propulsion
NASA Astrophysics Data System (ADS)
Moore, M. Nicholas J.
2017-09-01
We develop a highly efficient numerical method to simulate small-amplitude flapping propulsion by a flexible wing in a nearly inviscid fluid. We allow the wing's elastic modulus and mass density to vary arbitrarily, with an eye towards optimizing these distributions for propulsive performance. The method to determine the wing kinematics is based on Chebyshev collocation of the 1D beam equation as coupled to the surrounding 2D fluid flow. Through small-amplitude analysis of the Euler equations (with trailing-edge vortex shedding), the complete hydrodynamics can be represented by a nonlocal operator that acts on the 1D wing kinematics. A class of semi-analytical solutions permits fast evaluation of this operator with O(N log N) operations, where N is the number of collocation points on the wing. This is in contrast to the minimum O(N²) cost of a direct 2D fluid solver. The coupled wing-fluid problem is thus recast as a PDE with a nonlocal operator, which we solve using a preconditioned iterative method. These techniques yield a solver of near-optimal complexity, O(N log N), allowing one to rapidly search the infinite-dimensional parameter space of all possible material distributions and even perform optimization over this space.
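The spatial discretization rests on Chebyshev collocation. As a generic illustration (the standard differentiation matrix from Trefethen's Spectral Methods in MATLAB ported to Python, not the paper's solver), differentiating exp(x) on the Chebyshev points shows the spectral accuracy such collocation provides:

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix D and points x on [-1, 1]."""
    if N == 0:
        return np.zeros((1, 1)), np.ones(1)
    x = np.cos(np.pi * np.arange(N + 1) / N)            # Chebyshev points
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    dX = x[:, None] - x[None, :]
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))     # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                         # negative row sums on diagonal
    return D, x

D, x = cheb(20)
deriv_error = np.max(np.abs(D @ np.exp(x) - np.exp(x)))  # (exp)' = exp
```

With only 21 points the derivative is accurate to near machine precision, which is the property that lets a small N resolve the beam equation in the wing solver.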
7 CFR 98.4 - Analytical methods.
Code of Federal Regulations, 2011 CFR
2011-01-01
... MEALS, READY-TO-EAT (MRE's), MEATS, AND MEAT PRODUCTS MRE's, Meats, and Related Meat Food Products § 98... perform analyses of meat, meat food products and MRE's are listed as follows: (1) Official Methods...
Rotary fast tool servo system and methods
Montesanti, Richard C.; Trumper, David L.
2007-10-02
A high bandwidth rotary fast tool servo provides tool motion in a direction nominally parallel to the surface-normal of a workpiece at the point of contact between the cutting tool and workpiece. Three or more flexure blades having all ends fixed are used to form an axis of rotation for a swing arm that carries a cutting tool at a set radius from the axis of rotation. An actuator rotates a swing arm assembly such that a cutting tool is moved in and away from the lathe-mounted, rotating workpiece in a rapid and controlled manner in order to machine the workpiece. A pair of position sensors provides rotation and position information for a swing arm to a control system. A control system commands and coordinates motion of the fast tool servo with the motion of a spindle, rotating table, cross-feed slide, and in-feed slide of a precision lathe.
7 CFR 98.4 - Analytical methods.
Code of Federal Regulations, 2010 CFR
2010-01-01
... MEALS, READY-TO-EAT (MRE's), MEATS, AND MEAT PRODUCTS MRE's, Meats, and Related Meat Food Products § 98... perform analyses of meat, meat food products and MRE's are listed as follows: (1) Official Methods of Analysis of AOAC INTERNATIONAL, Suite 500, 481 North Frederick Avenue, Gaithersburg, MD 20877-2417. (2)...
40 CFR 141.704 - Analytical methods.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 141.704 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS Enhanced Treatment for Cryptosporidium Source Water... Cryptosporidium using Method 1623: Cryptosporidium and Giardia in Water by Filtration/IMS/FA, 2005, United States...
FIELD ANALYTICAL SCREENING PROGRAM: PCB METHOD - INNOVATIVE TECHNOLOGY REPORT
This innovative technology evaluation report (ITER) presents information on the demonstration of the U.S. Environmental Protection Agency (EPA) Region 7 Superfund Field Analytical Screening Program (FASP) method for determining polychlorinated biphenyl (PCB) contamination in soil...
Methods for Estimating Uncertainty in Factor Analytic Solutions
The EPA PMF (Environmental Protection Agency positive matrix factorization) version 5.0 and the underlying multilinear engine-executable ME-2 contain three methods for estimating uncertainty in factor analytic models: classical bootstrap (BS), displacement of factor elements (DI...
Analytical chemistry methods for mixed oxide fuel, March 1985
Not Available
1985-03-01
This standard provides analytical chemistry methods for the analysis of materials used to produce mixed oxide fuel. These materials are ceramic fuel and insulator pellets and the plutonium and uranium oxides and nitrates used to fabricate these pellets.
40 CFR 141.89 - Analytical methods.
Code of Federal Regulations, 2011 CFR
2011-07-01
....005 mg/L. The Practical Quantitation Level, or PQL for lead is 0.005 mg/L. (B) For Copper: ±10 percent... equal to 0.050 mg/L. The Practical Quantitation Level, or PQL for copper is 0.050 mg/L. (iii) Achieve the method detection limit for lead of 0.001 mg/L according to the procedures in appendix B of...
40 CFR 141.89 - Analytical methods.
Code of Federal Regulations, 2010 CFR
2010-07-01
....005 mg/L. The Practical Quantitation Level, or PQL for lead is 0.005 mg/L. (B) For Copper: ±10 percent... equal to 0.050 mg/L. The Practical Quantitation Level, or PQL for copper is 0.050 mg/L. (iii) Achieve the method detection limit for lead of 0.001 mg/L according to the procedures in appendix B of...
A Fast Optimization Method for General Binary Code Learning.
Shen, Fumin; Zhou, Xiang; Yang, Yang; Song, Jingkuan; Shen, Heng; Tao, Dacheng
2016-09-22
Hashing or binary code learning has been recognized to accomplish efficient near neighbor search, and has thus attracted broad interest in recent retrieval, vision and learning studies. One main challenge of learning to hash arises from the involvement of discrete variables in binary code optimization. While the widely-used continuous relaxation may achieve high learning efficiency, the pursued codes are typically less effective due to accumulated quantization error. In this work, we propose a novel binary code optimization method, dubbed Discrete Proximal Linearized Minimization (DPLM), which directly handles the discrete constraints during the learning process. Specifically, the discrete (thus nonsmooth nonconvex) problem is reformulated as minimizing the sum of a smooth loss term with a nonsmooth indicator function. The obtained problem is then efficiently solved by an iterative procedure with each iteration admitting an analytical discrete solution, which is thus shown to converge very fast. In addition, the proposed method supports a large family of empirical loss functions, which is particularly instantiated in this work by both supervised and unsupervised hashing losses, together with the bit uncorrelation and balance constraints. In particular, the proposed DPLM with a supervised ℓ2 loss encodes the whole NUS-WIDE database into 64-bit binary codes within 10 seconds on a standard desktop computer. The proposed approach is extensively evaluated on several large-scale datasets and the generated binary codes are shown to achieve very promising results on both retrieval and classification tasks.
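The core of DPLM is an iteration whose every step has a closed-form discrete solution. A hedged toy of that idea, a gradient step followed by a sign projection on an invented quadratic similarity loss (the loss, step size, and names here are illustrative, not the paper's formulation):

```python
import numpy as np

def sign_keep(x, prev):
    """sign(x), with exact zeros resolved by keeping the previous bit."""
    s = np.sign(x)
    s[s == 0] = prev[s == 0]
    return s

def binary_code_descent(S, r, steps=50, lr=1e-3, seed=0):
    """Toy discrete-proximal iteration: reduce ||B B^T - r*S||_F^2 over
    B in {-1,+1}^(n x r) by a gradient step followed by sign projection,
    which is the analytical solution of the discrete proximal subproblem."""
    rng = np.random.default_rng(seed)
    n = S.shape[0]
    B = np.where(rng.standard_normal((n, r)) >= 0, 1.0, -1.0)
    for _ in range(steps):
        grad = 4.0 * (B @ B.T - r * S) @ B
        B = sign_keep(B - lr * grad, B)       # discrete update, no relaxation
    return B

B = binary_code_descent(np.eye(6), r=8)
```

The point of contrast with continuous relaxation is that the iterate stays exactly binary throughout, so no quantization error accumulates at the end.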
2005-01-01
Gene expression databases contain a wealth of information, but current data mining tools are limited in their speed and effectiveness in extracting meaningful biological knowledge from them. Online analytical processing (OLAP) can be used as a supplement to cluster analysis for fast and effective data mining of gene expression databases. We used Analysis Services 2000, a product that ships with SQL Server 2000, to construct an OLAP cube that was used to mine a time series experiment designed to identify genes associated with resistance of soybean to the soybean cyst nematode, a devastating pest of soybean. The data for these experiments is stored in the soybean genomics and microarray database (SGMD). A number of candidate resistance genes and pathways were found. Compared to traditional cluster analysis of gene expression data, OLAP was more effective and faster in finding biologically meaningful information. OLAP is available from a number of vendors and can work with any relational database management system through OLE DB. PMID:16046824
Analytical techniques for instrument design - matrix methods
Robinson, R.A.
1997-09-01
We take the traditional Cooper-Nathans approach, as has been applied for many years for steady-state triple-axis spectrometers, and consider its generalisation to other inelastic scattering spectrometers. This involves a number of simple manipulations of exponentials of quadratic forms. In particular, we discuss a toolbox of matrix manipulations that can be performed on the 6-dimensional Cooper-Nathans matrix: diagonalisation (Moller-Nielsen method), coordinate changes (e.g. from (Δk_I, Δk_F) to (ΔE, ΔQ, and two dummy variables)), integration of one or more variables (e.g. over such dummy variables), integration subject to linear constraints (e.g. Bragg's Law for analysers), inversion to give the variance-covariance matrix, and so on. We show how these tools can be combined to solve a number of important problems, within the narrow-band limit and the gaussian approximation. We will argue that a generalised program that can handle multiple different spectrometers could (and should) be written in parallel to the Monte-Carlo packages that are becoming available. We will also discuss the complementarity between detailed Monte-Carlo calculations and the approach presented here. In particular, Monte-Carlo methods traditionally simulate the real experiment as performed in practice, given a model scattering law, while the Cooper-Nathans method asks the inverse question: given that a neutron turns up in a particular spectrometer configuration (e.g. angle and time of flight), what is the probability distribution of possible scattering events at the sample? The Monte-Carlo approach could be applied in the same spirit to this question.
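One toolbox operation, integrating variables out of a Gaussian exp(-x^T M x / 2), can be checked numerically: the reduced quadratic form is the Schur complement of the dropped block, equivalently a submatrix of the variance-covariance matrix M^{-1}. A hedged sketch with a random positive-definite stand-in for the 6-dimensional resolution matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
Q = rng.standard_normal((6, 6))
M = Q @ Q.T + 6 * np.eye(6)          # symmetric positive definite "resolution matrix"

keep, drop = np.arange(2), np.arange(2, 6)
A = M[np.ix_(keep, keep)]
B = M[np.ix_(keep, drop)]
D = M[np.ix_(drop, drop)]

# Integrating the `drop` variables out of exp(-x^T M x / 2) leaves a
# quadratic form whose matrix is the Schur complement of D in M:
M_reduced = A - B @ np.linalg.inv(D) @ B.T

# Equivalent statement: the covariance of the kept variables is the
# corresponding submatrix of the full variance-covariance matrix M^{-1}.
cov_keep = np.linalg.inv(M)[np.ix_(keep, keep)]
```

This identity (inverse of the Schur complement equals the submatrix of the inverse) is what makes the "integration" and "inversion" entries of the toolbox interchangeable in practice.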
Handbook of Analytical Methods for Textile Composites
NASA Technical Reports Server (NTRS)
Cox, Brian N.; Flanagan, Gerry
1997-01-01
The purpose of this handbook is to introduce models and computer codes for predicting the properties of textile composites. The handbook includes several models for predicting the stress-strain response all the way to ultimate failure; methods for assessing work of fracture and notch sensitivity; and design rules for avoiding certain critical mechanisms of failure, such as delamination, by proper textile design. The following textiles received some treatment: 2D woven, braided, and knitted/stitched laminates and 3D interlock weaves, and braids.
Relativistic mirrors in laser plasmas (analytical methods)
NASA Astrophysics Data System (ADS)
Bulanov, S. V.; Esirkepov, T. Zh; Kando, M.; Koga, J.
2016-10-01
Relativistic flying mirrors in plasmas are realized as thin dense electron (or electron-ion) layers accelerated by high-intensity electromagnetic waves to velocities close to the speed of light in vacuum. The reflection of an electromagnetic wave from the relativistic mirror results in its energy and frequency changing. In a counter-propagation configuration, the frequency of the reflected wave is multiplied by the factor proportional to the Lorentz factor squared. This scientific area promises the development of sources of ultrashort x-ray pulses in the attosecond range. The expected intensity will reach the level at which the effects predicted by nonlinear quantum electrodynamics start to play a key role. We present an overview of theoretical methods used to describe relativistic flying, accelerating, oscillating mirrors emerging in intense laser-plasma interactions.
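The frequency multiplication described above follows from the double Doppler shift; a hedged sketch of the standard relation for an ideal mirror moving toward a counter-propagating wave:

```latex
% Frequency upshift on reflection from a mirror moving toward the wave,
% with \beta = v/c and \gamma = (1 - \beta^2)^{-1/2}:
\omega_r \;=\; \omega_0\,\frac{1+\beta}{1-\beta}
        \;=\; \omega_0\,(1+\beta)^2\gamma^2
        \;\approx\; 4\gamma^2\omega_0 \quad (\beta \to 1),
```

so a modest Lorentz factor of a few tens already upshifts optical light into the x-ray range, consistent with the attosecond-source prospects the abstract mentions.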
Fracture mechanics life analytical methods verification testing
NASA Technical Reports Server (NTRS)
Favenesi, J. A.; Clemons, T. G.; Riddell, W. T.; Ingraffea, A. R.; Wawrzynek, P. A.
1994-01-01
The objective was to evaluate NASCRAC (trademark) version 2.0, a second generation fracture analysis code, for verification and validity. NASCRAC was evaluated using a combination of comparisons to the literature, closed-form solutions, numerical analyses, and tests. Several limitations and minor errors were detected. Additionally, a number of major flaws were discovered. These major flaws were generally due to application of a specific method or theory, not due to programming logic. Results are presented for the following program capabilities: K versus a, J versus a, crack opening area, life calculation due to fatigue crack growth, tolerable crack size, proof test logic, tearing instability, creep crack growth, crack transitioning, crack retardation due to overloads, and elastic-plastic stress redistribution. It is concluded that the code is an acceptable fracture tool for K solutions of simplified geometries, for a limited number of J and crack opening area solutions, and for fatigue crack propagation with the Paris equation and constant amplitude loads when the Paris equation is applicable.
Fracture mechanics life analytical methods verification testing
NASA Technical Reports Server (NTRS)
Favenesi, J. A.; Clemmons, T. G.; Lambert, T. J.
1994-01-01
Verification and validation of the basic information capabilities in NASCRAC has been completed. The basic information includes computation of K versus a, J versus a, and crack opening area versus a. These quantities represent building blocks which NASCRAC uses in its other computations such as fatigue crack life and tearing instability. Several methods were used to verify and validate the basic information capabilities. The simple configurations such as the compact tension specimen and a crack in a finite plate were verified and validated versus handbook solutions for simple loads. For general loads using weight functions, offline integration using standard FORTRAN routines was performed. For more complicated configurations such as corner cracks and semielliptical cracks, NASCRAC solutions were verified and validated versus published results and finite element analyses. A few minor problems were identified in the basic information capabilities of the simple configurations. In the more complicated configurations, significant differences between NASCRAC and reference solutions were observed because NASCRAC calculates its solutions as averaged values across the entire crack front whereas the reference solutions were computed for a single point.
Analytical instruments, ionization sources, and ionization methods
Atkinson, David A.; Mottishaw, Paul
2006-04-11
Methods and apparatus for simultaneous vaporization and ionization of a sample in a spectrometer prior to introducing the sample into the drift tube of the analyzer are disclosed. The apparatus includes a vaporization/ionization source having an electrically conductive conduit configured to receive sample particulate which is conveyed to a discharge end of the conduit. Positioned proximate to the discharge end of the conduit is an electrically conductive reference device. The conduit and the reference device act as electrodes and have an electrical potential maintained between them sufficient to cause a corona effect, which will cause at least partial simultaneous ionization and vaporization of the sample particulate. The electrical potential can be maintained to establish a continuous corona, or can be held slightly below the breakdown potential such that arrival of particulate at the point of proximity of the electrodes disrupts the potential, causing arcing and the corona effect. The electrical potential can also be varied to cause periodic arcing between the electrodes such that particulate passing through the arc is simultaneously vaporized and ionized. The invention further includes a spectrometer containing the source. The invention is particularly useful for ion mobility spectrometers and atmospheric pressure ionization mass spectrometers.
Analytic Methods Used in Quality Control in a Compounding Pharmacy.
Allen, Loyd V
2017-01-01
Analytical testing will no doubt become a more important part of pharmaceutical compounding as the public and regulatory agencies demand increasing documentation of the quality of compounded preparations. Compounding pharmacists must decide what types of testing and what amount of testing to include in their quality-control programs, and whether testing should be done in-house or outsourced. Like pharmaceutical compounding, analytical testing should be performed only by those who are appropriately trained and qualified. This article discusses the analytical methods that are used in quality control in a compounding pharmacy. Copyright© by International Journal of Pharmaceutical Compounding, Inc.
Delgado-Aparicio, L.; Hill, K.; Bitter, M.; Tritz, K.; Kramer, T.; Stutman, D.; Finkenthal, M.
2010-10-15
A new set of analytic formulas describes the transmission of soft x-ray continuum radiation through a metallic foil for its application to fast electron temperature measurements in fusion plasmas. This novel approach shows good agreement with numerical calculations over a wide range of plasma temperatures, in contrast with the solutions obtained when using a transmission approximated by a single Heaviside function [S. von Goeler et al., Rev. Sci. Instrum. 70, 599 (1999)]. The new analytic formulas can improve the interpretation of the experimental results and thus contribute to obtaining fast temperature measurements in between intermittent Thomson scattering data.
Statistically qualified neuro-analytic failure detection method and system
Vilim, Richard B.; Garcia, Humberto E.; Chen, Frederick W.
2002-03-02
An apparatus and method for monitoring a process involve the development and application of a statistically qualified neuro-analytic (SQNA) model to accurately and reliably identify process change. The development of the SQNA model is accomplished in two stages: deterministic model adaptation and stochastic modification of the deterministic model. Deterministic model adaptation involves formulating an analytic model of the process representing known process characteristics, augmenting the analytic model with a neural network that captures unknown process characteristics, and training the resulting neuro-analytic model by adjusting the neural network weights according to a unique scaled equation-error minimization technique. Stochastic model modification involves qualifying any remaining uncertainty in the trained neuro-analytic model by formulating a likelihood function, given an error propagation equation, for computing the probability that the neuro-analytic model generates the measured process output. Preferably, the developed SQNA model is validated using known sequential probability ratio tests and applied to the process as an on-line monitoring system. Illustrative of the method and apparatus, the method is applied to a peristaltic pump system.
Applying an analytical method to study neutron behavior for dosimetry
NASA Astrophysics Data System (ADS)
Shirazi, S. A. Mousavi
2016-12-01
In this investigation, a new dosimetry process is studied by applying an analytical method. This novel process is associated with human liver tissue, which comprises water, glycogen, and other organic compounds. In this study, the organic compound materials of the liver are decomposed into their constituent elements based upon the mass percentage and density of each element. The absorbed doses are computed by the analytical method for all constituent elements of the liver tissue. The analytical method is formulated using mathematical equations based on neutron behavior and neutron collision rules. The results show that the absorbed doses converge for neutron energies below 15 MeV. This method can be applied to study the interaction of neutrons in other tissues and to estimate the absorbed dose for a wide range of neutron energies.
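A minimal sketch of the decomposition idea described above: the tissue dose is taken as a mass-fraction-weighted sum of per-element absorbed doses. Both the mass fractions and the per-element doses below are placeholder numbers for illustration, not values from the paper:

```python
# Illustrative mass fractions for a soft tissue and hypothetical per-element
# absorbed doses (Gy) for some fixed neutron fluence -- placeholder values.
mass_fractions = {"H": 0.102, "C": 0.139, "N": 0.030, "O": 0.716, "other": 0.013}
element_dose = {"H": 2.5e-3, "C": 1.1e-4, "N": 1.6e-4, "O": 9.0e-5, "other": 1.0e-4}

# Total tissue dose: D = sum_i w_i * D_i over the constituent elements.
total_dose = sum(mass_fractions[el] * element_dose[el] for el in mass_fractions)
```

Hydrogen dominates the total even at a modest mass fraction, reflecting its large neutron interaction contribution in soft tissue.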
Analytical methods for quantitation of prenylated flavonoids from hops
Nikolić, Dejan; van Breemen, Richard B.
2013-01-01
The female flowers of hops (Humulus lupulus L.) are used as a flavoring agent in the brewing industry. There is growing interest in possible health benefits of hops, particularly as estrogenic and chemopreventive agents. Among the possible active constituents, most of the attention has focused on prenylated flavonoids, which can chemically be classified as prenylated chalcones and prenylated flavanones. Among chalcones, xanthohumol (XN) and desmethylxanthohumol (DMX) have been the most studied, while among flavanones, 8-prenylnaringenin (8-PN) and 6-prenylnaringenin (6-PN) have received the most attention. Because of the interest in medicinal properties of prenylated flavonoids, there is demand for accurate, reproducible and sensitive analytical methods to quantify these compounds in various matrices. Such methods are needed, for example, for quality control and standardization of hop extracts, measurement of the content of prenylated flavonoids in beer, and to determine pharmacokinetic properties of prenylated flavonoids in animals and humans. This review summarizes currently available analytical methods for quantitative analysis of the major prenylated flavonoids, with an emphasis on the LC-MS and LC-MS-MS methods and their recent applications to biomedical research on hops. This review covers all methods in which prenylated flavonoids have been measured, either as the primary analytes or as a part of a larger group of analytes. The review also discusses methodological issues relating to the quantitative analysis of these compounds regardless of the chosen analytical approach. PMID:24077106
Applications of nuclear analytical methods to materials analysis
Jones, K.W.; Hanson, A.L.; Kraner, H.W.
1982-01-01
Nuclear analytical methods have now become important in the characterization of many types of materials and have been shown to be an extremely important extension of many more common methods. To illustrate the breadth of their use, some recent Brookhaven experiments are described that deal with the depth distribution of hydrogen in Nb-H alloys, the diffusion of Mo in graphite at high temperatures, and the measurement of Al and Si concentrations in zeolite catalysts. It is hoped that the presentation of these illustrative examples will serve as a stimulus and encouragement for the further application of nuclear analytical methods.
FIELD ANALYTICAL SCREENING PROGRAM: PCP METHOD - INNOVATIVE TECHNOLOGY EVALUATION REPORT
The Field Analytical Screening Program (FASP) pentachlorophenol (PCP) method uses a gas chromatograph (GC) equipped with a megabore capillary column and flame ionization detector (FID) and electron capture detector (ECD) to identify and quantify PCP. The FASP PCP method is design...
Computer Subroutines for Analytic Rotation by Two Gradient Methods.
ERIC Educational Resources Information Center
van Thillo, Marielle
Two computer subroutine packages for the analytic rotation of a factor matrix, A(p x m), are described. The first program uses the Fletcher (1970) gradient method, and the second uses the Polak-Ribiere (Polak, 1971) gradient method. The calculations in both programs involve the optimization of a function of free parameters. The result is a…
Development of quality-by-design analytical methods.
Vogt, Frederick G; Kord, Alireza S
2011-03-01
Quality-by-design (QbD) is a systematic approach to drug development, which begins with predefined objectives, and uses science and risk management approaches to gain product and process understanding and ultimately process control. The concept of QbD can be extended to analytical methods. QbD mandates the definition of a goal for the method, and emphasizes thorough evaluation and scouting of alternative methods in a systematic way to obtain optimal method performance. Candidate methods are then carefully assessed in a structured manner for risks, and are challenged to determine if robustness and ruggedness criteria are satisfied. As a result of these studies, the method performance can be understood and improved if necessary, and a control strategy can be defined to manage risk and ensure the method performs as desired when validated and deployed. In this review, the current state of analytical QbD in the industry is detailed with examples of the application of analytical QbD principles to a range of analytical methods, including high-performance liquid chromatography, Karl Fischer titration for moisture content, vibrational spectroscopy for chemical identification, quantitative color measurement, and trace analysis for genotoxic impurities.
Tien, Kai-Wen; Kulvatunyou, Boonserm; Jung, Kiwook; Prabhu, Vittaldas
2017-01-01
As cloud computing is increasingly adopted, the trend is to offer software functions as modular services and compose them into larger, more meaningful ones. The trend is attractive to analytical problems in the manufacturing system design and performance improvement domain because 1) finding a global optimum for the system is a complex problem; and 2) sub-problems are typically compartmentalized by the organizational structure. However, solving sub-problems by independent services can result in a sub-optimal solution at the system level. This paper investigates the technique called Analytical Target Cascading (ATC) to coordinate the optimization of loosely-coupled sub-problems, each of which may be modularly formulated by different departments and solved by modular analytical services. The result demonstrates that ATC is a promising method in that it offers system-level optimal solutions that can scale up by exploiting distributed and modular executions while allowing easier management of the problem formulation.
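A toy illustration of the ATC coordination idea, with made-up scalar objectives: the system level and one subsystem each minimize their own quadratic cost plus a shared consistency penalty, and the penalty weight is tightened until target and response agree. This is a sketch of the coordination pattern, not the paper's formulation:

```python
def atc_two_level(t0=0.0, r0=0.0, outer=5, inner=500):
    """Minimal Analytical Target Cascading sketch: the system level wants its
    target t near 10, the subsystem wants its response r near 4, and a
    quadratic penalty w*(t - r)**2 pulls the two levels into consistency."""
    t, r, w = t0, r0, 1.0
    for _ in range(outer):
        for _ in range(inner):
            # Closed-form minimizers of each level's quadratic objective.
            t = (10.0 + w * r) / (1.0 + w)   # system: (t-10)^2 + w(t-r)^2
            r = (4.0 + w * t) / (1.0 + w)    # subsystem: (r-4)^2 + w(t-r)^2
        w *= 4.0  # tighten the consistency constraint between levels
    return t, r

# Both levels converge toward the system-level compromise t = r = 7.
t, r = atc_two_level()
```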
Progress and development of analytical methods for gibberellins.
Pan, Chaozhi; Tan, Swee Ngin; Yong, Jean Wan Hong; Ge, Liya
2017-01-01
Gibberellins, as a group of phytohormones, exhibit a wide variety of bio-functions within plant growth and development, and have been used to increase crop yields. Many analytical procedures, therefore, have been developed for the determination of the types and levels of endogenous and exogenous gibberellins. As plant tissues contain gibberellins in trace amounts (usually at the level of nanograms per gram fresh weight or even lower), the sample pre-treatment steps (extraction, pre-concentration, and purification) for gibberellins are reviewed in detail. The primary focus of this comprehensive review is on the various analytical methods designed to meet the requirements for gibberellin analyses in complex matrices, with particular emphasis on high-throughput analytical methods, such as gas chromatography, liquid chromatography, and capillary electrophoresis, mostly combined with mass spectrometry. The advantages and drawbacks of each described analytical method are discussed. The overall aim of this review is to provide a comprehensive and critical view of the different analytical methods nowadays employed to analyze gibberellins in complex sample matrices and their foreseeable trends. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
A Fast SVD-Hidden-nodes based Extreme Learning Machine for Large-Scale Data Analytics.
Deng, Wan-Yu; Bai, Zuo; Huang, Guang-Bin; Zheng, Qing-Hua
2016-05-01
Big dimensional data is a growing trend that is emerging in many real world contexts, extending from web mining, gene expression analysis, and protein-protein interaction to high-frequency financial data. Nowadays, there is a growing consensus that increasing dimensionality poses impeding effects on the performance of classifiers, which is termed the "peaking phenomenon" in the field of machine intelligence. To address the issue, dimensionality reduction is commonly employed as a preprocessing step on Big dimensional data before building the classifiers. In this paper, we propose an Extreme Learning Machine (ELM) approach for large-scale data analytics. In contrast to existing approaches, we embed hidden nodes that are designed using singular value decomposition (SVD) into the classical ELM. These SVD nodes in the hidden layer are shown to capture the underlying characteristics of Big dimensional data well, exhibiting excellent generalization performance. The drawback of using SVD on the entire dataset, however, is the high computational complexity involved. To address this, a fast divide-and-conquer approximation scheme is introduced to maintain computational tractability on high-volume data. The resultant algorithm is labeled here as the Fast Singular Value Decomposition-Hidden-nodes based Extreme Learning Machine, or FSVD-H-ELM for short. In FSVD-H-ELM, instead of identifying the SVD hidden nodes directly from the entire dataset, SVD hidden nodes are derived from multiple random subsets of data sampled from the original dataset. Comprehensive experiments and comparisons are conducted to assess FSVD-H-ELM against other state-of-the-art algorithms. The results obtained demonstrate the superior generalization performance and efficiency of FSVD-H-ELM. Copyright © 2016 Elsevier Ltd. All rights reserved.
FastStats: Births -- Method of Delivery
Percentage of all deliveries by Cesarean: 32.0%. Source: Births: Final Data for 2015, table 21.
Fast Particle Methods for Multiscale Phenomena Simulations
NASA Technical Reports Server (NTRS)
Koumoutsakos, P.; Wray, A.; Shariff, K.; Pohorille, Andrew
2000-01-01
We are developing particle methods oriented at improving computational modeling capabilities of multiscale physical phenomena in: (i) high Reynolds number unsteady vortical flows, (ii) particle-laden and interfacial flows, (iii) molecular dynamics studies of nanoscale droplets and studies of the structure, functions, and evolution of the earliest living cell. The unifying computational approach involves particle methods implemented in parallel computer architectures. The inherent adaptivity, robustness and efficiency of particle methods makes them a multidisciplinary computational tool capable of bridging the gap between micro-scale and continuum flow simulations. Using efficient tree data structures, multipole expansion algorithms, and improved particle-grid interpolation, particle methods allow for simulations using millions of computational elements, making possible the resolution of a wide range of length and time scales of these important physical phenomena. The current challenges in these simulations are: (i) the proper formulation of particle methods at the molecular and continuum level for the discretization of the governing equations; (ii) the resolution of the wide range of time and length scales governing the phenomena under investigation; (iii) the minimization of numerical artifacts that may interfere with the physics of the systems under consideration; (iv) the parallelization of processes such as tree traversal and grid-particle interpolations. We are conducting simulations using vortex methods, molecular dynamics, and smooth particle hydrodynamics, exploiting their unifying concepts such as: the solution of the N-body problem in parallel computers, highly accurate particle-particle and grid-particle interpolations, parallel FFTs, and the formulation of processes such as diffusion in the context of particle methods. This approach enables us to transcend among seemingly unrelated areas of research.
Fast and Sensitive Method for Determination of Domoic Acid in Mussel Tissue
Barbaro, Elena; Zangrando, Roberta; Barbante, Carlo; Gambaro, Andrea
2016-01-01
Domoic acid (DA), a neurotoxic amino acid produced by diatoms, is the main cause of amnesic shellfish poisoning (ASP). In this work, we propose a very simple and fast analytical method to determine DA in mussel tissue. The method consists of two consecutive extractions and requires no purification steps, owing to reduced extraction of interfering species and the application of a very sensitive and selective HILIC-MS/MS method. The procedure was validated through the estimation of trueness, extraction yield, precision, and the detection and quantification limits of the analytical method. The sample preparation was also evaluated through qualitative and quantitative assessments of the matrix effect. These evaluations were conducted both on a DA-free matrix spiked with known DA concentrations and on a reference certified material (RCM). We developed a very selective LC-MS/MS method with a very low method detection limit (9 ng g⁻¹) without cleanup steps. PMID:26904720
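As an aside on how a method detection limit like the reported 9 ng g⁻¹ is commonly estimated (the EPA single-laboratory replicate-spike procedure, not necessarily the one used in this paper): the MDL is the standard deviation of low-level spike replicates times the one-sided 99% Student's t value. The replicate values below are invented for the sketch:

```python
import statistics

# Hypothetical replicate results (ng/g) for a low-level spike -- illustrative
# numbers, not data from the paper.
replicates = [8.1, 9.0, 7.6, 8.8, 8.3, 7.9, 8.5]

s = statistics.stdev(replicates)  # sample standard deviation of the replicates
t_99 = 3.143                      # one-sided 99% Student's t for n-1 = 6 df
mdl = t_99 * s                    # method detection limit estimate
```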
A new method for constructing analytic elements for groundwater flow.
NASA Astrophysics Data System (ADS)
Strack, O. D.
2007-12-01
The analytic element method is based upon the superposition of analytic functions that are defined throughout the infinite domain and can be used to meet a variety of boundary conditions. Analytic elements have been used successfully for a number of problems, mainly dealing with the Poisson equation (see, e.g., O. D. L. Strack, "Theory and Applications of the Analytic Element Method," Reviews of Geophysics, 41(2), 1005, 2003). The majority of these analytic elements consist of functions that exhibit jumps along lines or curves. Such linear analytic elements have also been developed for other partial differential equations, e.g., the modified Helmholtz equation and the heat equation, and were constructed by integrating elementary solutions, the point sink and the point doublet, along a line. This approach is limiting for two reasons. First, the existence of the elementary solutions is required, and, second, the integration tends to limit the range of solutions that can be obtained. We present a procedure for generating analytic elements that requires merely the existence of a harmonic function with the desired properties; such functions exist in abundance. The procedure to be presented is used to generalize this harmonic function in such a way that the resulting expression satisfies the applicable differential equation. The approach will be applied, along with numerical examples, to the modified Helmholtz equation and the heat equation, while it is noted that the method is in no way restricted to these equations. The procedure is carried out entirely in terms of complex variables, using Wirtinger calculus.
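A minimal sketch of the superposition idea behind the analytic element method, for the Laplace case: complex potentials for individual elements (here a uniform flow and a well, with made-up strengths) are simply added, and the real part remains harmonic away from the singularities:

```python
import cmath

def well(z, Q, zw):
    """Complex potential of a well with discharge Q located at zw."""
    return Q / (2.0 * cmath.pi) * cmath.log(z - zw)

def uniform_flow(z, U):
    """Complex potential of uniform flow of strength U in the x-direction."""
    return -U * z

def omega(z):
    """Superposed potential: analytic elements are simply added."""
    return uniform_flow(z, 1.0) + well(z, 100.0, 0.0 + 0.0j)
```

Harmonicity of the real part can be checked numerically with a five-point Laplacian away from the well.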
A method of determining spectral analytical dye densities
NASA Technical Reports Server (NTRS)
Scarpace, F. L.; Friederichs, G. A.
1978-01-01
A straightforward method for the user of color imagery to determine the spectral analytical density of dyes present in the processed imagery is presented. The method involves exposing a large number of different color patches on the film which span the gamut of the film's imaging capabilities. From integral spectral density measurements at 16 to 19 different wavelengths, the unit spectral dye curves for each of the three dyes present were determined in two different types of color films. A discussion of the use of these spectral dye densities to determine the transformation between integral density measurements and analytical density is presented.
Beamforming and holography image formation methods: an analytic study.
Solimene, Raffaele; Cuccaro, Antonio; Ruvio, Giuseppe; Tapia, Daniel Flores; O'Halloran, Martin
2016-04-18
Beamforming and holographic imaging procedures are widely used in many applications such as radar sensing, sonar, and in the area of microwave medical imaging. Nevertheless, an analytical comparison of the methods has not been done. In this paper, the Point Spread Functions pertaining to the two methods are analytically determined. This allows a formal comparison of the two techniques, and to easily highlight how the performance depends on the configuration parameters, including frequency range, number of scatterers, and data discretization. It is demonstrated that beamforming and holography basically achieve the same resolution, but beamforming requires a cheaper (fewer sensors) configuration.
A fast digital image correlation method for deformation measurement
NASA Astrophysics Data System (ADS)
Pan, Bing; Li, Kai
2011-07-01
Fast and high-accuracy deformation analysis using digital image correlation (DIC) has become increasingly important and highly demanded in recent years. In the literature, the DIC method using the Newton-Raphson (NR) algorithm has been considered a gold standard for accurate sub-pixel displacement tracking, as it is insensitive to the relative deformation and rotation of the target subset and thus provides the highest sub-pixel registration accuracy and widest applicability. A significant drawback of the conventional NR-algorithm-based DIC method, however, is its extremely high computational expense. In this paper, a fast DIC method is proposed for deformation measurement by effectively eliminating the repeated redundant calculations involved in the conventional NR-algorithm-based DIC method. Specifically, a reliability-guided displacement scanning strategy is employed to avoid time-consuming integer-pixel displacement searching for each calculation point, and a pre-computed global interpolation coefficient look-up table is utilized to entirely eliminate repetitive interpolation calculations at sub-pixel locations. With these two approaches, the proposed fast DIC method substantially increases the calculation efficiency of the traditional NR-algorithm-based DIC method. The performance of the proposed fast DIC method is carefully tested on real experimental images using various calculation parameters. Results reveal that the computational speed of the present fast DIC is about 120-200 times faster than that of the traditional method, without any loss of measurement accuracy.
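The pre-computed interpolation look-up table idea can be sketched in 1-D: cubic (Catmull-Rom) coefficients are computed once per pixel interval, so every later sub-pixel evaluation is just a polynomial evaluation. This is an illustrative reduction of the scheme, not the paper's 2-D implementation:

```python
def build_lut(samples):
    """Precompute per-interval Catmull-Rom cubic coefficients once, so that
    repeated sub-pixel lookups avoid recomputing interpolation weights."""
    lut = []
    for i in range(1, len(samples) - 2):
        p0, p1, p2, p3 = samples[i - 1 : i + 3]
        a = -0.5 * p0 + 1.5 * p1 - 1.5 * p2 + 0.5 * p3
        b = p0 - 2.5 * p1 + 2.0 * p2 - 0.5 * p3
        c = -0.5 * p0 + 0.5 * p2
        d = p1
        lut.append((a, b, c, d))  # cubic a*t^3 + b*t^2 + c*t + d on [i, i+1]
    return lut

def interp(lut, x):
    """Evaluate at sub-pixel position x (valid for 1 <= x < len(samples) - 2)."""
    i = int(x)
    t = x - i
    a, b, c, d = lut[i - 1]
    return ((a * t + b) * t + c) * t + d
```

Catmull-Rom interpolation reproduces linear and quadratic data exactly, which makes the table easy to sanity-check.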
Nascimento, Carina F; Rocha, Diogo L; Rocha, Fábio R P
2015-02-15
An environmentally friendly procedure was developed for fast melamine determination as an adulterant of protein content in milk. Triton X-114 was used for sample clean-up and as a fluorophore, whose fluorescence was quenched by the analyte. A linear response was observed from 1.0 to 6.0 mg L⁻¹ melamine, described by the Stern-Volmer equation I₀/I = (0.999 ± 0.002) + (0.0165 ± 0.004)·C_MEL (r = 0.999). The detection limit was estimated at 0.8 mg L⁻¹ (95% confidence level), which allows detecting as little as 320 μg melamine in 100 g of milk. Coefficients of variation (n = 8) were estimated at 0.4% and 1.4% with and without melamine, respectively. Recoveries for melamine spiked into milk samples ranged from 95% to 101%, and similar slopes of calibration graphs obtained with and without milk indicated the absence of matrix effects. Results for different milk samples agreed with those obtained by high-performance liquid chromatography at the 95% confidence level.
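Inverting the fitted Stern-Volmer line gives the melamine concentration directly from the measured fluorescence ratio. The sketch below uses the calibration coefficients reported in the abstract, with an invented intensity reading:

```python
def melamine_conc(I0, I, intercept=0.999, slope=0.0165):
    """Invert the fitted Stern-Volmer line I0/I = intercept + slope * C
    to estimate melamine concentration (mg/L) from quenched fluorescence.
    Default coefficients are the ones reported in the abstract."""
    return (I0 / I - intercept) / slope

# Hypothetical readings: unquenched intensity 100.0, quenched intensity 95.3.
c = melamine_conc(100.0, 95.3)
```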
ERIC Educational Resources Information Center
Kimaru, Irene; Koether, Marina; Chichester, Kimberly; Eaton, Lafayette
Analytical method transfer (AMT) and dissolution testing are important topics required in industry that should be taught in analytical chemistry courses. Undergraduate students in senior level analytical chemistry laboratory courses at Kennesaw State University (KSU) and St. John Fisher College (SJFC) participated in development, validation, and…
Fast tomographic methods for the tokamak ISTTOK
NASA Astrophysics Data System (ADS)
Carvalho, P. J.; Thomsen, H.; Gori, S.; Toussaint, U. v.; Weller, A.; Coelho, R.; Neto, A.; Pereira, T.; Silva, C.; Fernandes, H.
2008-04-01
The achievement of long duration, alternating current discharges on the tokamak ISTTOK requires a real-time plasma position control system. The plasma position determination based on the magnetic probe system has been found to be inadequate during the current inversion due to the reduced plasma current. A tomography diagnostic has therefore been installed to supply the required feedback to the control system. Several tomographic methods are available for soft X-ray or bolometric tomography, among which the Cormack and neural network methods stand out due to their inherent speed of up to 1000 reconstructions per second with currently available technology. This paper discusses the application of these algorithms on fusion devices while comparing performance and reliability of the results. It has been found that although the Cormack-based inversion proved to be faster, the neural network reconstruction has fewer artifacts and is more accurate.
Fast tomographic methods for the tokamak ISTTOK
Carvalho, P. J.; Coelho, R.; Neto, A.; Pereira, T.; Silva, C.; Fernandes, H.; Gori, S.; Toussaint, U. v.
2008-04-07
The achievement of long duration, alternating current discharges on the tokamak ISTTOK requires a real-time plasma position control system. The plasma position determination based on the magnetic probe system has been found to be inadequate during the current inversion due to the reduced plasma current. A tomography diagnostic has therefore been installed to supply the required feedback to the control system. Several tomographic methods are available for soft X-ray or bolometric tomography, among which the Cormack and neural network methods stand out due to their inherent speed of up to 1000 reconstructions per second with currently available technology. This paper discusses the application of these algorithms on fusion devices while comparing performance and reliability of the results. It has been found that although the Cormack-based inversion proved to be faster, the neural network reconstruction has fewer artifacts and is more accurate.
Method for ultra-fast boriding
Erdemir, Ali; Sista, Vivekanand; Kahvecioglu, Ozgenur; Eryilmaz, Osman Levent
2017-01-31
An article of manufacture and method of forming a borided material. An electrochemical cell is used to process a substrate to deposit a plurality of borided layers on the substrate. The plurality of layers are co-deposited such that a refractory metal boride layer is disposed on a substrate and a rare earth metal boride conforming layer is disposed on the refractory metal boride layer.
Analytical Methods for Detonation Residues of Insensitive Munitions
NASA Astrophysics Data System (ADS)
Walsh, Marianne E.
2016-01-01
Analytical methods are described for the analysis of post-detonation residues from insensitive munitions. Standard methods were verified or modified to obtain the mass of residues deposited per round. In addition, a rapid chromatographic separation was developed and used to measure the mass of NTO (3-nitro-1,2,4-triazol-5-one), NQ (nitroguanidine) and DNAN (2,4-dinitroanisole). The HILIC (hydrophilic-interaction chromatography) separation described here uses a trifunctionally-bonded amide phase to retain the polar analytes. The eluent is 75/25 v/v acetonitrile/water acidified with acetic acid, which is also suitable for LC/MS applications. Analytical runtime was three minutes. Solid phase extraction and LC/MS conditions are also described.
Shatokhina, Iuliia; Obereder, Andreas; Rosensteiner, Matthias; Ramlau, Ronny
2013-04-20
We present a fast method for wavefront reconstruction from pyramid wavefront sensor (P-WFS) measurements. The method is based on an analytical relation between pyramid and Shack-Hartmann sensor (SH-WFS) data. The algorithm consists of two steps: a transformation of the P-WFS data to SH data, followed by the application of the cumulative reconstructor with domain decomposition, a wavefront reconstructor for SH-WFS measurements. Closed-loop simulations confirm that our method provides the same quality as the standard matrix-vector multiplication method. A complexity analysis as well as speed tests confirm that the method is very fast. Thus, the method can be used on extremely large telescopes, e.g., for eXtreme adaptive optics systems.
Zeb, Alam; Ullah, Fareed
2016-01-01
A simple and highly sensitive spectrophotometric method was developed for the determination of thiobarbituric acid reactive substances (TBARS) as a marker for lipid peroxidation in fried fast foods. The method uses the reaction of malondialdehyde (MDA) and TBA in a glacial acetic acid medium. The method was precise, sensitive, and highly reproducible for the quantitative determination of TBARS, and the precision of the extraction and analytical procedure was very high compared with reported methods. The method was used to determine TBARS contents in fried fast foods such as Shami kebab, samosa, fried bread, and potato chips. Shami kebab, samosa, and potato chips showed higher amounts of TBARS in the glacial acetic acid-water extraction system than in pure glacial acetic acid, whereas the reverse was true for the fried bread samples. The method can also be used for the determination of TBARS in other food matrices, especially for quality control in the food industry.
A New Analytic Alignment Method for a SINS.
Tan, Caiming; Zhu, Xinhua; Su, Yan; Wang, Yu; Wu, Zhiqiang; Gu, Dongbing
2015-11-04
Analytic alignment is a type of self-alignment for a strapdown inertial navigation system (SINS) that is based solely on two non-collinear vectors: the gravity and rotational velocity vectors of the Earth at a stationary base on the ground. The attitude of the SINS with respect to the Earth can be obtained directly using the TRIAD algorithm given the two vector measurements. In a traditional analytic coarse alignment, all six outputs from the inertial measurement unit (IMU) are used to compute the attitude. In this study, a novel analytic alignment method called selective alignment is presented. This method uses only three outputs of the IMU, together with a few properties of the remaining outputs such as their sign and approximate value, to calculate the attitude. Simulations and experimental results demonstrate the validity of this method; in the vehicle experiment, the precision of yaw is improved with selective alignment compared to the traditional analytic coarse alignment. The selective alignment principle provides an accurate relationship between the outputs and the attitude of the SINS relative to the Earth for a stationary base, and it is an extension of the TRIAD algorithm. The selective alignment approach has potential uses in applications such as self-alignment, fault detection, and self-calibration.
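The TRIAD construction that underlies both the traditional coarse alignment and the selective variant can be sketched in a few lines; the frame conventions and all numeric values below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def triad(v1_ref, v2_ref, w1_body, w2_body):
    """TRIAD attitude determination from two non-collinear vector observations.

    v1/v2 are the reference-frame vectors (e.g. gravity and Earth rotation
    rate in the local navigation frame); w1/w2 are the same physical vectors
    as measured in the body frame (accelerometer and gyroscope outputs).
    Returns the rotation matrix mapping reference-frame vectors to body frame.
    """
    def frame(a, b):
        t1 = a / np.linalg.norm(a)
        t2 = np.cross(a, b)
        t2 = t2 / np.linalg.norm(t2)
        return np.column_stack((t1, t2, np.cross(t1, t2)))

    # build the same orthonormal triad in both frames; their product is the attitude
    return frame(w1_body, w2_body) @ frame(v1_ref, v2_ref).T

# sanity check with a known yaw rotation (hypothetical values)
yaw = np.deg2rad(30.0)
A_true = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
                   [np.sin(yaw),  np.cos(yaw), 0.0],
                   [0.0, 0.0, 1.0]])
g_ref = np.array([0.0, 0.0, -9.81])        # gravity in the navigation frame
w_ref = np.array([0.0, 5.3e-5, 4.5e-5])    # Earth rate in the navigation frame
A_est = triad(g_ref, w_ref, A_true @ g_ref, A_true @ w_ref)
```

With noise-free measurements the estimate reproduces the true attitude exactly; with real IMU outputs the first (more accurate) observation dominates, which is why the ordering of the two vectors matters in practice.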
Fast-timing methods for semiconductor detectors
Spieler, H.
1982-03-01
The basic parameters are discussed which determine the accuracy of timing measurements and their effect in a practical application, specifically timing with thin-surface barrier detectors. The discussion focusses on properties of the detector, low-noise amplifiers, trigger circuits and time converters. New material presented in this paper includes bipolar transistor input stages with noise performance superior to currently available FETs, noiseless input terminations in sub-nanosecond preamplifiers and methods using transmission lines to couple the detector to remotely mounted preamplifiers. Trigger circuits are characterized in terms of effective rise time, equivalent input noise and residual jitter.
Fast timing methods for semiconductor detectors. Revision
Spieler, H.
1984-10-01
This tutorial paper discusses the basic parameters which determine the accuracy of timing measurements and their effect in a practical application, specifically timing with thin-surface barrier detectors. The discussion focusses on properties of the detector, low-noise amplifiers, trigger circuits and time converters. New material presented in this paper includes bipolar transistor input stages with noise performance superior to currently available FETs, noiseless input terminations in sub-nanosecond preamplifiers and methods using transmission lines to couple the detector to remotely mounted preamplifiers. Trigger circuits are characterized in terms of effective rise time, equivalent input noise and residual jitter.
Fast simulation method for airframe analysis based on big data
NASA Astrophysics Data System (ADS)
Liu, Dongliang; Zhang, Lixin
2016-10-01
In this paper, we apply big data methods to structural analysis by considering the correlations between loads, between loads and results, and between results. By means of fundamental mathematics and physical rules, the principle, feasibility, and error control of the method are discussed. We then establish the analysis process and procedures. The method is validated with two examples; the results show that the big-data-based simulation method is both fast and precise when applied to structural analysis.
Laser: a Tool for Optimization and Enhancement of Analytical Methods
Preisler, Jan
1997-01-01
In this work, we use lasers to extend the possibilities of laser desorption methods and to optimize a coating procedure for capillary electrophoresis (CE). We use several different instrumental arrangements to characterize matrix-assisted laser desorption (MALD) at atmospheric pressure and in vacuum. In imaging mode, a 488-nm argon-ion laser beam is deflected by two acousto-optic deflectors to scan, via absorption, plumes desorbed at atmospheric pressure. All absorbing species, including neutral molecules, are monitored. Interesting features are observed, e.g., differences between the initial plume and subsequent plumes desorbed from the same spot, or the formation of two plumes from one laser shot. The total plume absorbance can be correlated with the acoustic signal generated by the desorption event. A model equation for the plume velocity as a function of time is proposed. Alternatively, the use of a static laser beam for observation enables reliable determination of plume velocities even when they are very high. Static scattering detection reveals the negative influence of particle spallation on the MS signal. Ion formation during MALD was monitored using 193-nm light to photodissociate a portion of the insulin ion plume. These results define the optimal conditions for desorbing analytes from matrices, as opposed to achieving a compromise between efficient desorption and efficient ionization as is practiced in mass spectrometry. In the CE experiment, we examined changes in a poly(ethylene oxide) (PEO) coating by continuously monitoring the electroosmotic flow (EOF) in a fused-silica capillary during electrophoresis. An imaging CCD camera was used to follow the motion of a fluorescent neutral marker zone, excited by a 488-nm Ar-ion laser, along the length of the capillary. The PEO coating was shown to reduce the velocity of EOF by more than an order of magnitude compared to a bare capillary at pH 7.0. The coating protocol was important, especially at an intermediate pH of 7.7. The increase of p
Ground water modeling applications using the analytic element method.
Hunt, Randall J
2006-01-01
Though powerful and easy to use, applications of the analytic element method are not as widespread as finite-difference or finite-element models due in part to their relative youth. Although reviews that focus primarily on the mathematical development of the method have appeared in the literature, a systematic review of applications of the method is not available. An overview of the general types of applications of analytic elements in ground water modeling is provided in this paper. While not fully encompassing, the applications described here cover areas where the method has been historically applied (regional, two-dimensional steady-state models, analyses of ground water-surface water interaction, quick analyses and screening models, wellhead protection studies) as well as more recent applications (grid sensitivity analyses, estimating effective conductivity and dispersion in highly heterogeneous systems). The review of applications also illustrates areas where more method development is needed (three-dimensional and transient simulations).
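The core idea the review builds on, superposing closed-form solutions ("elements") to obtain the aquifer-wide head field, can be sketched for a confined aquifer with a uniform regional flow and pumping wells; every parameter value below is a hypothetical placeholder, not data from any of the cited applications:

```python
import numpy as np

def head(x, y, wells, T=100.0, qx0=0.0, phi0=5000.0):
    """Hydraulic head at (x, y) from superposed analytic elements.

    Each element contributes a closed-form discharge potential; the total
    potential is their sum. For a confined aquifer, potential = T * head.
    wells: list of (xw, yw, Q) with Q > 0 for extraction [m^3/d]
    T: transmissivity [m^2/d]; qx0: uniform flow; phi0: reference potential
    """
    phi = phi0 - qx0 * x                        # uniform regional-flow element
    for xw, yw, Q in wells:
        r = np.hypot(x - xw, y - yw)
        phi += Q / (2.0 * np.pi) * np.log(r)    # Thiem well element
    return phi / T

# drawdown cone around a single extraction well (hypothetical values)
well = [(0.0, 0.0, 300.0)]
h_near = head(10.0, 0.0, well)
h_far = head(100.0, 0.0, well)
```

Because each element is analytic, the head can be evaluated at any point without a grid, which is what makes the method attractive for quick screening models and grid-sensitivity studies mentioned in the review.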
Literature Review on Processing and Analytical Methods for ...
The purpose of this report was to survey the open literature to determine the current state of the science regarding the processing and analytical methods currently available for recovery of F. tularensis from water and soil matrices, and to determine what gaps remain in the collective knowledge concerning F. tularensis identification from environmental samples.
Analytical chemistry methods for metallic core components: Revision March 1985
Not Available
1985-03-01
This standard provides analytical chemistry methods for the analysis of alloys used to fabricate core components. These alloys are 302, 308, 316, 316-Ti, and 321 stainless steels and Inconel 600 and 718; they may also include other 300-series stainless steels.
ANALYTICAL METHOD READINESS FOR THE CONTAMINANT CANDIDATE LIST
The Contaminant Candidate List (CCL), which was promulgated in March 1998, includes 50 chemical and 10 microbiological contaminants/contaminant groups. At the time of promulgation, analytical methods were available for 6 inorganic and 28 organic contaminants. Since then, 4 anal...
40 CFR 161.180 - Enforcement analytical method.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 25 2013-07-01 2013-07-01 false Enforcement analytical method. 161.180 Section 161.180 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) PESTICIDE PROGRAMS DATA REQUIREMENTS FOR REGISTRATION OF ANTIMICROBIAL PESTICIDES Product Chemistry Data Requirements...
40 CFR 161.180 - Enforcement analytical method.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 24 2011-07-01 2011-07-01 false Enforcement analytical method. 161.180 Section 161.180 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) PESTICIDE PROGRAMS DATA REQUIREMENTS FOR REGISTRATION OF ANTIMICROBIAL PESTICIDES Product Chemistry Data Requirements...
Analytical methods for dating modern writing instrument inks on paper.
Ezcurra, Magdalena; Góngora, Juan M G; Maguregui, Itxaso; Alonso, Rosa
2010-04-15
This work reviews the different analytical methods that have been proposed in the field of forensic dating of inks from modern writing instruments. The reported works are classified according to the writing instrument studied and the ink component analyzed in relation to aging. Presented chronologically, the review shows the advances made in the ink-dating field over the last decades.
Meta-analytic methods for neuroimaging data explained.
Radua, Joaquim; Mataix-Cols, David
2012-03-08
The number of neuroimaging studies has grown exponentially in recent years and their results are not always consistent. Meta-analyses are helpful to summarize this vast literature and also offer insights that are not apparent from the individual studies. In this review, we describe the main methods used for meta-analyzing neuroimaging data, with special emphasis on their relative advantages and disadvantages. We describe and discuss meta-analytical methods for global brain volumes, methods based on regions of interest, label-based reviews, voxel-based meta-analytic methods and online databases. Regions of interest-based methods allow for optimal statistical analyses but are affected by a limited and potentially biased inclusion of brain regions, whilst voxel-based methods benefit from a more exhaustive and unbiased inclusion of studies but are statistically more limited. There are also relevant differences between the different available voxel-based meta-analytic methods, and the field is rapidly evolving to develop more accurate and robust methods. We suggest that in any meta-analysis of neuroimaging data, authors should aim to: only include studies exploring the whole brain; ensure that the same threshold throughout the whole brain is used within each included study; and explore the robustness of the findings via complementary analyses to minimize the risk of false positives.
A New Splitting Method for Both Analytical and Preparative LC/MS
NASA Astrophysics Data System (ADS)
Cai, Yi; Adams, Daniel; Chen, Hao
2013-11-01
This paper presents a novel splitting method for liquid chromatography/mass spectrometry (LC/MS), which allows fast MS detection of LC-separated analytes and subsequent online analyte collection. In this approach, a PEEK capillary tube with a micro-orifice drilled in its side wall is connected to the LC column. A small portion of the LC eluent emerging from the orifice can be directly ionized by desorption electrospray ionization (DESI) with negligible time delay (6-10 ms), while the remaining analytes exiting the tube outlet can be collected. The DESI-MS analysis of eluted compounds shows narrow peaks and high sensitivity because of the extremely small dead volume of the orifice used for LC eluent splitting (as low as 4 nL) and the freedom to choose a favorable DESI spray solvent. In addition, online derivatization using reactive DESI is possible for supercharging proteins and enhancing their signals without introducing extra dead volume. Unlike the UV detector used in traditional preparative LC experiments, this method is applicable to compounds without chromophores (e.g., saccharides) owing to the use of an MS detector. Furthermore, this splitting method is well suited to monolithic-column-based ultra-fast LC separation at a high elution flow rate of 4 mL/min.
NASA Astrophysics Data System (ADS)
Mitsel, A. A.; Firsov, K. M.
1995-09-01
A new spectral line selection algorithm is developed that makes it possible to decrease the number of spectral lines with increasing altitude. For a computer code based on a line-by-line method to operate efficiently, two line selections must be carried out. The first selection is rough: it eliminates the weakest lines, which do not contribute to the optical thickness of the layer z1-z2. The remaining lines are subjected to a second selection, which determines the maximum height up to which each line should be taken into account; at the same time, for each line at each altitude, the maximum frequency offset from resonance within which the line's contribution to absorption must be included is determined. The resulting gain in the computation time of the integral transmittance may be a factor of five or greater, while the calculation error of the integral transmittance is no larger than 0.5%.
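The two-pass selection described above can be illustrated with a toy line list (Lorentzian lines, exponential density profile; every number below is a made-up placeholder, not a spectroscopic datum):

```python
import numpy as np

def peak_tau(S, gamma, n_col):
    # peak optical depth of a Lorentzian line: (S / (pi * gamma)) * column density
    return S / (np.pi * gamma) * n_col

lines = [  # (line strength S, halfwidth gamma) -- hypothetical values
    (1e-20, 0.08), (3e-24, 0.08), (5e-22, 0.06), (1e-27, 0.05),
]
altitudes_km = np.arange(0, 50, 5)

def n_col(z_km):  # toy exponential column-density profile above altitude z
    return 2.0e21 * np.exp(-z_km / 8.0)

TAU_MIN = 1e-4    # threshold below which a line's contribution is neglected

# first (rough) selection: discard lines too weak even at the layer bottom
selected = [ln for ln in lines
            if peak_tau(*ln, n_col(altitudes_km[0])) > TAU_MIN]

# second selection: maximum altitude up to which each surviving line is kept
z_max = {ln: max(z for z in altitudes_km if peak_tau(*ln, n_col(z)) > TAU_MIN)
         for ln in selected}
```

As the density falls off with altitude, progressively fewer lines clear the threshold, which is the mechanism that shrinks the line list and yields the quoted speed-up.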
A fast vector array adaptive beam forming method
NASA Astrophysics Data System (ADS)
Li, Zhizhong; Chen, Zhe; Li, Haitao; Xu, Zhongliang
2017-06-01
Based on the model features of vector sensor array signals, this paper transforms the time delay of broadband signals in the time domain into phase shifts of different sub-bands in the frequency domain to realize accurate time-delay compensation, and uses the Hilbert transform to construct analytic signals, forming a fast vector array adaptive beamforming algorithm. Verification with experimental data shows that this algorithm has much better target resolution than conventional beamforming; with an increase of 4-6 dB in target detection capability, it has promising application prospects.
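The frequency-domain delay step the abstract describes, a per-bin phase shift standing in for a broadband time delay, together with the Hilbert-transform analytic signal, can be sketched as follows (the array geometry, steering delays, and signal are hypothetical, and simple delay-and-sum stands in for the paper's adaptive weighting):

```python
import numpy as np
from scipy.signal import hilbert

def frac_delay(x, delay, fs):
    """Delay x by `delay` seconds via a phase shift applied per frequency bin
    (exact for signals that are periodic within the window)."""
    f = np.fft.fftfreq(len(x), d=1.0 / fs)
    return np.real(np.fft.ifft(np.fft.fft(x) * np.exp(-2j * np.pi * f * delay)))

# delay-and-sum beamforming over three sensors (hypothetical steering delays)
fs = 8000.0
t = np.arange(1024) / fs
delays = [0.0, 1.0e-4, 2.0e-4]                   # per-sensor delays [s]
sensors = [np.cos(2 * np.pi * 250 * (t - d)) for d in delays]  # plane wave
aligned = [frac_delay(s, -d, fs) for s, d in zip(sensors, delays)]
beam = np.mean(aligned, axis=0)
envelope = np.abs(hilbert(beam))                 # analytic-signal envelope
```

The phase-shift delay handles sub-sample (fractional) delays exactly, which is what a sample-shift implementation in the time domain cannot do for broadband steering.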
A GPU code for analytic continuation through a sampling method
NASA Astrophysics Data System (ADS)
Nordström, Johan; Schött, Johan; Locht, Inka L. M.; Di Marco, Igor
We here present a code for performing analytic continuation of fermionic Green's functions and self-energies as well as bosonic susceptibilities on a graphics processing unit (GPU). The code is based on the sampling method introduced by Mishchenko et al. (2000), and is written for the widely used CUDA platform from NVidia. Detailed scaling tests are presented, for two different GPUs, in order to highlight the advantages of this code with respect to standard CPU computations. Finally, as an example of possible applications, we provide the analytic continuation of model Gaussian functions, as well as more realistic test cases from many-body physics.
Analytical difficulties facing today's regulatory laboratories: issues in method validation.
MacNeil, James D
2012-08-01
The challenges facing analytical laboratories today are not unlike those faced in the past, although both the degree of complexity and the rate of change have increased. Challenges such as the development and maintenance of expertise, the maintenance and updating of equipment, and the introduction of new test methods have always been familiar themes for analytical laboratories, but international guidelines for laboratories involved in import and export testing of food require management of such changes in a context that includes quality assurance, accreditation, and method validation considerations. Decisions as to when a change in a method requires re-validation, or on the design of a validation scheme for a complex multi-residue method, require a well-considered strategy based on current knowledge of international guidance documents and regulatory requirements, as well as the laboratory's quality system requirements. Validation demonstrates that a method is 'fit for purpose', so the requirement for validation should be assessed in terms of the intended use of a method and, in the case of a change or modification, whether that change may affect a previously validated performance characteristic. In general, method validation involves method scope, calibration-related parameters, method precision, and recovery. Any method change which may affect method scope or any performance parameter will require re-validation. Some typical situations involving changes in methods are discussed and a decision process is proposed for selecting appropriate validation measures.
Analytic Gradients for the Effective Fragment Molecular Orbital Method.
Bertoni, Colleen; Gordon, Mark S
2016-10-11
The analytic gradient for the Coulomb, polarization, exchange-repulsion, and dispersion terms of the fully integrated effective fragment molecular orbital (EFMO) method is derived and the implementation is discussed. The derivation of the EFMO analytic gradient is more complicated than that for the effective fragment potential (EFP) gradient, because the geometry of each EFP fragment is flexible (not rigid) in the EFMO approach. The accuracy of the gradient is demonstrated by comparing the EFMO analytic gradient with the numeric gradient for several systems, and by assessing the energy conservation during an EFMO NVE ensemble molecular dynamics simulation of water molecules. In addition to facilitating accurate EFMO geometry optimizations, this allows calculations with flexible EFP fragments to be performed.
Use of scientometrics to assess nuclear and other analytical methods
Lyon, W.S.
1986-01-01
Scientometrics involves the use of quantitative methods to investigate science viewed as an information process. Scientometric studies can be useful in ascertaining which methods have been most employed for various analytical determinations, as well as for predicting which methods will continue to be used in the immediate future and which appear to be losing favor with the analytical community. Published papers in the technical literature are the primary source materials for scientometric studies; statistical methods and computer techniques are the tools. Recent studies have included growth and trends in prompt nuclear analysis, the impact of research published in a technical journal, and the institutional and national representation of speakers and topics at several IAEA conferences, at Modern Trends in Activation Analysis conferences, and at other non-nuclear-oriented conferences. Attempts have also been made to predict the future growth of various topics and techniques. 13 refs., 4 figs., 17 tabs.
Broséus, Julian; Debrus, Benjamin; Delémont, Olivier; Rudaz, Serge; Esseiva, Pierre
2013-07-10
Harmonisation of analytical results is investigated in this study as an alternative to the more restrictive harmonisation of analytical methods, which is currently recommended to enable the exchange of information in support of the fight against illicit drug trafficking. Indeed, the main goal of this study is to demonstrate that a common database can be fed by a range of different analytical methods, whatever the differences in analytical parameters between them. For this purpose, a methodology was developed that makes it possible to estimate, and even optimise, the similarity of results coming from different analytical methods. In particular, the possibility of introducing chemical profiles obtained with fast GC-FID into a GC-MS database is studied in this paper. Using this methodology, the similarity of results from different analytical methods can be objectively assessed, and the practical utility of database sharing by these methods can be evaluated depending on the profiling purpose (evidential vs. operational perspective tool). This methodology can be regarded as a relevant approach to feeding a database from different analytical methods, and it puts in doubt the necessity of analysing all illicit drug seizures in a single laboratory or of implementing analytical method harmonisation in each participating laboratory. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
[The general analytical methods for gases dissolved in liquids: sonoluminescence].
Deng, Jiu-Shuai; Liu, Yan
2009-10-01
How to analyze the gases dissolved in water or organic liquids is a challenging problem in analytical chemistry. To date, only dissolved oxygen in water can be analyzed by chemical and instrumental methods, while other gases, e.g., CO2, N2, CH4, Ar, He, and Kr, still cannot be analyzed by chemical or instrumental methods. The present paper reviews the use of sonoluminescence for gas analysis in water or organic liquids.
Adaptation of fast marching methods to intracellular signaling
NASA Astrophysics Data System (ADS)
Chikando, Aristide C.; Kinser, Jason M.
2006-02-01
Imaging of signaling phenomena within the intracellular domain is a well-studied field. Signaling is the process by which all living cells communicate with their environment and with each other. In the case of signaling calcium waves, numerous computational models based on solving homogeneous reaction-diffusion equations have been developed. Typically, the reaction-diffusion approach consists of solving systems of partial differential equations at each update step. The traditional methods used to solve these reaction-diffusion equations are very computationally expensive, since they must employ small time steps to reduce the computational error. The presented research suggests the application of fast marching methods to imaging signaling calcium waves, more specifically fertilization calcium waves, in Xenopus laevis eggs. The fast marching approach provides a fast and efficient means of tracking the evolution of monotonically advancing fronts. A model is presented that employs biophysical properties of intracellular calcium signaling and adapts fast marching methods to track the propagation of signaling calcium waves. The developed model is used to reproduce simulation results obtained with a reaction-diffusion-based model. Results obtained with our model agree both with results from reaction-diffusion-based models and with confocal microscopy observations from in vivo experiments. The adaptation of fast marching methods to intracellular protein or macromolecule trafficking is also briefly explored.
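A minimal first-order fast marching solver conveys why the approach is cheap: each grid node is finalized exactly once, in order of increasing arrival time, instead of being re-integrated at every small time step as in a reaction-diffusion solver. This is a generic sketch of the fast marching machinery, not the paper's calcium model; the grid size and seed are arbitrary:

```python
import heapq
import numpy as np

def fast_marching(speed, seeds):
    """First-order fast marching: arrival times T solving |grad T| = 1/speed,
    with the front expanding monotonically from the seed points."""
    ny, nx = speed.shape
    T = np.full((ny, nx), np.inf)
    frozen = np.zeros((ny, nx), dtype=bool)
    heap = []
    for i, j in seeds:
        T[i, j] = 0.0
        heapq.heappush(heap, (0.0, i, j))

    def update(i, j):
        # smallest neighbour arrival times along each axis
        tx = min([T[i, k] for k in (j - 1, j + 1) if 0 <= k < nx] or [np.inf])
        ty = min([T[k, j] for k in (i - 1, i + 1) if 0 <= k < ny] or [np.inf])
        a, b = sorted((tx, ty))
        f = 1.0 / speed[i, j]
        if b - a >= f:
            return a + f  # one-sided (upwind) update
        # two-sided quadratic update
        return 0.5 * (a + b + np.sqrt(2.0 * f * f - (a - b) ** 2))

    while heap:
        t, i, j = heapq.heappop(heap)
        if frozen[i, j]:
            continue
        frozen[i, j] = True  # this node's arrival time is now final
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < ny and 0 <= nj < nx and not frozen[ni, nj]:
                t_new = update(ni, nj)
                if t_new < T[ni, nj]:
                    T[ni, nj] = t_new
                    heapq.heappush(heap, (t_new, ni, nj))
    return T

# a front expanding at unit speed from the grid centre
T = fast_marching(np.ones((41, 41)), seeds=[(20, 20)])
```

The heap-ordered sweep gives O(N log N) cost for N grid nodes, which is the efficiency argument the abstract makes against per-time-step PDE integration.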
Fast Erase Method and Apparatus For Digital Media
NASA Technical Reports Server (NTRS)
Oakely, Ernest C. (Inventor)
2006-01-01
A non-contact fast erase method for erasing information stored on a magnetic or optical media. The magnetic media element includes a magnetic surface affixed to a toroidal conductor and stores information in a magnetic polarization pattern. The fast erase method includes applying an alternating current to a planar inductive element positioned near the toroidal conductor, inducing an alternating current in the toroidal conductor, and heating the magnetic surface to a temperature that exceeds the Curie-point so that information stored on the magnetic media element is permanently erased. The optical disc element stores information in a plurality of locations being defined by pits and lands in a toroidal conductive layer. The fast erase method includes similarly inducing a plurality of currents in the optical media element conductive layer and melting a predetermined portion of the conductive layer so that the information stored on the optical medium is destroyed.
Development of Impurity Profiling Methods Using Modern Analytical Techniques.
Ramachandra, Bondigalla
2017-01-02
This review gives a brief introduction to process- and product-related impurities and emphasizes the development of novel analytical methods for their determination. It describes the application of modern analytical techniques, particularly ultra-performance liquid chromatography (UPLC), liquid chromatography-mass spectrometry (LC-MS), high-resolution mass spectrometry (HRMS), gas chromatography-mass spectrometry (GC-MS), and high-performance thin-layer chromatography (HPTLC). In addition, the application of nuclear magnetic resonance (NMR) spectroscopy is discussed for the characterization of impurities and degradation products. The significance of the quality, efficacy, and safety of drug substances/products is discussed, including the sources and kinds of impurities, adverse effects caused by their presence, quality control of impurities, the necessity for developing impurity-profiling methods, identification of impurities, and regulatory aspects. Other important aspects discussed are forced degradation studies and the development of stability-indicating assay methods.
Analytical Methods for the Quantification of Histamine and Histamine Metabolites.
Bähre, Heike; Kaever, Volkhard
2017-03-21
The endogenous metabolite histamine (HA) is synthesized in various mammalian cells but can also be ingested from exogenous sources. It is involved in a plethora of physiological and pathophysiological processes. So far, four different HA receptors (H1R-H4R) have been described and numerous HAR antagonists have been developed. Contemporary investigations regarding the various roles of HA and its main metabolites have been hampered by the lack of highly specific and sensitive analytic methods for all of these analytes. Liquid chromatography coupled with tandem mass spectrometry (LC-MS/MS) is the method of choice for identification and sensitive quantification of many low-molecular weight endogenous metabolites. In this chapter, different methodological aspects of HA quantification as well as recommendations for LC-MS/MS methods suitable for analysis of HA and its main metabolites are summarized.
An analytic method for identifying dynamically formed runaway stars
NASA Astrophysics Data System (ADS)
Ryu, Taeho; Leigh, Nathan W. C.; Perna, Rosalba
2017-09-01
In this paper, we study the three-body products (two single stars and a binary) of binary-binary (2+2) scattering interactions. This is done using a combination of analytic methods and numerical simulations of 2+2 scattering interactions, both in isolation and in a homogeneous background potential. We analytically derive a simple formula relating the angle between the velocity vectors of the two ejected single stars and the orbital separation of the remaining binary. We compare our analytic formulation to numerical scattering simulations and illustrate that the agreement is excellent, both in isolation and in a homogeneous background potential. Our results are ideally suited for application to the Gaia database, which is expected to identify many hundreds of runaway stars. The analytic relation presented here has the potential to identify dynamically formed runaway stars with high confidence. Finally, by applying our method to the runaways AE Aur and μ Col, we illustrate that it can be used to constrain the history of the background potential, which, in the case of the Trapezium cluster, was denser than the presently observed density.
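The paper's specific angle-separation relation is not reproduced here; the conservation laws from which any such relation follows can, however, be sketched. In the centre-of-momentum frame of a 2+2 interaction that ejects two single stars (masses $m_1$, $m_2$) and leaves a binary of mass $m_b = m_3 + m_4$ with semi-major axis $a_b$:

```latex
\begin{aligned}
m_1\mathbf{v}_1 + m_2\mathbf{v}_2 + m_b\mathbf{v}_b &= \mathbf{0},\\
\tfrac{1}{2}m_1 v_1^2 + \tfrac{1}{2}m_2 v_2^2 + \tfrac{1}{2}m_b v_b^2
  - \frac{G m_3 m_4}{2 a_b} &= E_0,
\end{aligned}
```

where $E_0$ is the total energy of the initial four-body configuration. Eliminating the (often unobserved) binary velocity $\mathbf{v}_b$ via momentum conservation ties the measured runaway velocities, and in particular the angle between $\mathbf{v}_1$ and $\mathbf{v}_2$, to the binding energy and hence the orbital separation $a_b$ of the remaining binary.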
Measuring solids concentration in stormwater runoff: comparison of analytical methods.
Clark, Shirley E; Siu, Christina Y S
2008-01-15
Stormwater suspended solids typically are quantified using one of two methods: aliquot/subsample analysis (total suspended solids [TSS]) or whole-sample analysis (suspended solids concentration [SSC]). Interproject comparisons are difficult because of inconsistencies in the methods and in their application. To address this concern, many current projects measure the suspended solids content using both methodologies, but the question remains of how to compare these values with historical water-quality data for which the analytical methodology is unknown. This research was undertaken to determine how the analytical methodology affects the relationship between these two measures of suspended solids concentration, including the effects of the aliquot selection/collection method and of the particle size distribution (PSD). The results showed that SSC best represented the known sample concentration and that its results were independent of the sample's PSD. Correlations between the results and the known sample concentration could be established for TSS samples, but they were highly dependent on the sample's PSD and on the aliquot collection technique. These results emphasize the need to report not only the analytical method but also the particle size information for the solids in stormwater runoff.
Analytical assessment of the thermal behavior of nickel-metal hydride batteries during fast charging
NASA Astrophysics Data System (ADS)
Taheri, Peyman; Yazdanpour, Maryam; Bahrami, Majid
2014-01-01
A novel distributed transient thermal model is proposed to investigate the thermal behavior of nickel-metal hydride (NiMH) batteries during fast charging at constant current. Based on the method of integral transformation, a series-form solution for the temperature field inside the battery core is obtained that accounts for orthotropic heat conduction, transient heat generation, and convective heat dissipation at the surfaces of the battery. The accuracy of the developed theoretical model is confirmed through comparisons with numerical and experimental data for a sample 30 ampere-hour NiMH battery. The comparisons show that even the first term of the series solution predicts the temperature field well, at modest numerical cost. The thermal model is also employed to define an efficiency for charging processes. Our calculations confirm that the charging efficiency decreases as the charging current increases.
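As an illustration only (the paper treats the full orthotropic case), the integral-transform approach applied to a 1-D slab of thickness $L$, diffusivity $\alpha$, conductivity $k$ and surface convection coefficient $h$ yields a series of the form

```latex
T(x,t) - T_\infty \;=\; \sum_{n=1}^{\infty} C_n(t)\,\cos(\lambda_n x),
\qquad \lambda_n \tan(\lambda_n L) = \frac{h}{k},
```

where each mode amplitude obeys a first-order ODE driven by the heat-generation term, with homogeneous part $C_n(t) \propto e^{-\alpha \lambda_n^2 t}$. Because $\lambda_n$ grows with $n$, higher modes decay rapidly, which is consistent with the abstract's observation that the first term alone already predicts the temperature field well.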
Illegal use patterns, side effects, and analytical methods of ketamine.
Han, Eunyoung; Kwon, Nam Ji; Feng, Ling-Yi; Li, Jih-Heng; Chung, Heesun
2016-11-01
In Asian countries and regions such as China, Taiwan, and Hong Kong, ketamine (KT) is one of the most prevalent drugs of illicit use. KT is regulated by drug-related laws in many countries, including Korea, Taiwan, China, the U.S.A., the Netherlands, the UK, Australia, Mexico, and Canada. This review explores the pharmacology and side effects of KT, its illicit use patterns, analytical methods for KT in biological samples, and KT concentrations measured in abusers and non-abusers. Many side effects of KT, both mental and physical, have been reported. Although many studies have described analytical methods for KT, this review focuses on urine and hair analysis and compares parameters such as sample type, instruments, columns, extraction methods, internal standards, LOD/LOQ levels, metabolites, NK/K ratios, cut-off values, and m/z values. We also compare KT concentrations in biological samples from abusers and non-abusers. Rapid and precise analytical methods for detecting illegal KT use need to be developed and applied to real samples. To minimize and prevent harm from KT, authorities and appropriate agencies require careful assessment, evaluation, early identification, and surveillance of KT users in both clinical and social settings. In addition, stricter legislative management and preventive education for younger individuals are needed, because illegal KT use is relatively common among young populations.
Comparison between methods of analytical continuation for bosonic functions
NASA Astrophysics Data System (ADS)
Schött, J.; van Loon, E. G. C. P.; Locht, I. L. M.; Katsnelson, M. I.; Di Marco, I.
2016-12-01
In this paper we perform a critical assessment of different known methods for the analytical continuation of bosonic functions, namely, the maximum entropy method, the non-negative least-squares method, the non-negative Tikhonov method, the Padé approximant method, and a stochastic sampling method. Four functions of different shapes are investigated, corresponding to four physically relevant scenarios: a simple two-pole model function; two flavors of the tight-binding model on a square lattice, i.e., a single-orbital metallic system and a two-orbital insulating system; and the Hubbard dimer. The effect of numerical noise in the input data on the analytical continuation is discussed in detail. Overall, the stochastic method by A. S. Mishchenko et al. [Phys. Rev. B 62, 6317 (2000), 10.1103/PhysRevB.62.6317] is shown to be the most reliable tool for input data whose numerical precision is not known. For high-precision input data, this approach is slightly outperformed by the Padé approximant method, which combines good resolving power with good numerical stability. Although none of the methods retrieves all features of the spectra in the presence of noise, our analysis provides a useful guideline for obtaining reliable information about the spectral function in cases of practical interest.
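As an illustration of the Padé approximant approach among the compared methods, a minimal Thiele continued-fraction continuation (the Vidberg-Serene scheme) can be sketched as follows; the function names and the two-pole test function are illustrative, not taken from the paper:

```python
import numpy as np

def pade_coefficients(z, u):
    """Recursive reciprocal differences (Vidberg-Serene) for Thiele's fraction."""
    n = len(z)
    g = np.zeros((n, n), dtype=complex)
    g[0] = u
    for j in range(1, n):
        g[j, j:] = (g[j-1, j-1] - g[j-1, j:]) / ((z[j:] - z[j-1]) * g[j-1, j:])
    return np.diag(g)  # a_j = g_j(z_j)

def pade_eval(z_new, z, a):
    """Evaluate the continued fraction via the standard three-term recursion."""
    n = len(a)
    A = np.zeros(n + 1, dtype=complex)
    B = np.zeros(n + 1, dtype=complex)
    A[0], B[0] = 0.0, 1.0
    A[1], B[1] = a[0], 1.0
    for i in range(1, n):
        A[i+1] = A[i] + (z_new - z[i-1]) * a[i] * A[i-1]
        B[i+1] = B[i] + (z_new - z[i-1]) * a[i] * B[i-1]
    return A[n] / B[n]

# Model Green's function with two real poles, sampled on the imaginary axis,
# then continued back toward the real axis.
z = 1j * np.array([1.0, 2.0, 3.0, 4.0])
u = 1.0 / (z + 2.0) + 0.5 / (z + 1.0)
a = pade_coefficients(z, u)
```

Because the test function is a low-order rational, the interpolating fraction reproduces it essentially exactly; for noisy input data the scheme is far more delicate, which is the point of the paper's comparison.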
NASA Astrophysics Data System (ADS)
Ren, Zhengyong; Tang, Jingtian; Kalscheuer, Thomas; Maurer, Hansruedi
2017-01-01
A novel fast and accurate algorithm is developed for large-scale 3-D gravity and magnetic modeling problems. An unstructured grid discretization is used to approximate sources with arbitrary mass and magnetization distributions. A novel adaptive multilevel fast multipole (AMFM) method is developed to reduce the modeling time. An observation octree is constructed on a set of arbitrarily distributed observation sites, while a source octree is constructed on a source tetrahedral grid. A novel characteristic is the independence between the observation octree and the source octree, which simplifies the implementation of different survey configurations such as airborne and ground surveys. Two synthetic models, a cubic model and a half-space model with mountain-valley topography, are tested. The excellent agreement of the solutions with analytical gravity and magnetic signals verifies the accuracy of our AMFM algorithm. Finally, our AMFM method is used to calculate the terrain effect on an airborne gravity data set for a realistic topography model represented by a triangular surface retrieved from a digital elevation model. Using 16 threads, more than 5800 billion interactions between 1,002,001 observation points and 5,839,830 tetrahedral elements are computed in 453.6 s, whereas a traditional first-order Gaussian quadrature approach requires 3.77 days. Hence, our new AMFM algorithm not only can quickly compute the gravity and magnetic signals for complicated problems but also can substantially accelerate the solution of 3-D inversion problems.
An analytical pilot rating method for highly elastic aircraft
NASA Technical Reports Server (NTRS)
Swaim, R. L.; Poopaka, S.
1981-01-01
An analytical method was developed to predict pilot ratings for highly elastic aircraft subject to severe mode interactions between rigid body and elastic dynamics. An extension of the standard optimal control model of pilot response was made to include the hypothesis that the pilot controls the system with an internal model consisting of the slowly varying part of the aircraft dynamics. This modified optimal control model was analytically evaluated for a longitudinal pitch tracking task on a large flexible aircraft. Parametric variations in the undamped natural frequencies of two symmetric elastic modes were made to induce varying amounts of mode interaction. The model proved successful in discriminating when the pilot can or cannot visually separate rigid from elastic pitch response in the turbulence excited tracking task. This method shows considerable promise in making it possible to investigate such mode interaction effects on handling qualities in the preliminary design stage of new aircraft.
Analytical Methods for Measuring Mercury in Water, Sediment and Biota
Lasorsa, Brenda K.; Gill, Gary A.; Horvat, Milena
2012-06-07
Mercury (Hg) exists in a large number of physical and chemical forms with a wide range of properties. Conversion between these different forms provides the basis for mercury's complex distribution pattern in local and global cycles and for its biological enrichment and effects. Since the 1960s, the growing awareness of environmental mercury pollution has stimulated the development of more accurate, precise and efficient methods for determining mercury and its compounds in a wide variety of matrices. In recent years new analytical techniques have become available that have contributed significantly to the understanding of mercury chemistry in natural systems, in particular ultrasensitive and specific analytical equipment and contamination-free methodologies. These improvements allow the determination of total mercury as well as the major species of mercury in water, sediments and soils, and biota. Analytical methods are selected depending on the nature of the sample, the concentration levels of mercury, and which species or fraction is to be quantified. The terms "speciation" and "fractionation" in analytical chemistry were addressed by the International Union of Pure and Applied Chemistry (IUPAC), which published guidelines (Templeton et al., 2000) and recommendations for the definition of speciation analysis: "Speciation analysis is the analytical activity of identifying and/or measuring the quantities of one or more individual chemical species in a sample. The chemical species are specific forms of an element defined as to isotopic composition, electronic or oxidation state, and/or complex or molecular structure. The speciation of an element is the distribution of an element amongst defined chemical species in a system." In case it is not possible to determine the concentration of the different individual chemical species that sum up the total concentration of an element in a given matrix, meaning it is impossible to
Customizing computational methods for visual analytics with big data.
Choo, Jaegul; Park, Haesun
2013-01-01
The volume of available data has been growing exponentially, increasing the complexity and obscurity of data problems. In response, visual analytics (VA) has gained attention, yet its solutions haven't scaled well for big data. Computational methods can improve VA's scalability by giving users compact, meaningful information about the input data. However, the significant computation time these methods require hinders real-time interactive visualization of big data. By addressing crucial discrepancies between these methods and VA regarding precision and convergence, researchers have proposed ways to customize them for VA. These approaches, which include low-precision computation and iteration-level interactive visualization, ensure real-time interactive VA for big data.
A new simple multidomain fast multipole boundary element method
NASA Astrophysics Data System (ADS)
Huang, S.; Liu, Y. J.
2016-09-01
A simple multidomain fast multipole boundary element method (BEM) for solving potential problems is presented in this paper, which can be applied to solve a true multidomain problem or a large-scale single-domain problem using the domain decomposition technique. In this multidomain BEM, the coefficient matrix is formed simply by assembling the coefficient matrices of each subdomain and the interface conditions between subdomains, without eliminating any unknown variables on the interfaces. Compared with other conventional multidomain BEM approaches, this new approach is more efficient with the fast multipole method, regardless of how the subdomains are connected. Instead of solving the linear system of equations directly, the entire coefficient matrix is partitioned and decomposed using a Schur complement in this new approach. Numerical results show that the new multidomain fast multipole BEM uses fewer iterations in most cases with the iterative equation solver and less CPU time than the traditional fast multipole BEM in solving large-scale BEM models. A large-scale fuel cell model with more than 6 million elements was solved successfully on a cluster within 3 h using the new multidomain fast multipole BEM.
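The Schur-complement partitioning described above can be illustrated on a small dense block system; the matrices below are random, well-conditioned stand-ins for the assembled subdomain block and the interface block, not a BEM discretization:

```python
import numpy as np

rng = np.random.default_rng(0)
n1, n2 = 4, 3
A = rng.standard_normal((n1, n1)) + n1 * np.eye(n1)   # subdomain block
B = rng.standard_normal((n1, n2))                     # coupling to interface unknowns
C = rng.standard_normal((n2, n1))
D = rng.standard_normal((n2, n2)) + n2 * np.eye(n2)   # interface block
b1 = rng.standard_normal(n1)
b2 = rng.standard_normal(n2)

# Schur complement of A: eliminate the subdomain unknowns first,
# solve a small interface system, then back-substitute.
S = D - C @ np.linalg.solve(A, B)
x2 = np.linalg.solve(S, b2 - C @ np.linalg.solve(A, b1))  # interface solution
x1 = np.linalg.solve(A, b1 - B @ x2)                      # subdomain solution

# Monolithic direct solve of the same block system, for comparison.
K = np.block([[A, B], [C, D]])
x_direct = np.linalg.solve(K, np.concatenate([b1, b2]))
```

The payoff in the multidomain setting is that each subdomain solve involves only that subdomain's (fast-multipole-accelerated) operator, while the explicitly coupled system is reduced to the much smaller interface unknowns.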
Methods for quantifying uncertainty in fast reactor analyses.
Fanning, T. H.; Fischer, P. F.
2008-04-07
Liquid-metal-cooled fast reactors in the form of sodium-cooled fast reactors have been successfully built and tested in the U.S. and throughout the world. However, no fast reactor has operated in the U.S. for nearly fourteen years; more importantly, the U.S. has not constructed a fast reactor in nearly 30 years. In addition to reestablishing the necessary industrial infrastructure, the development, testing, and licensing of a new, advanced fast reactor concept will likely require a significant base technology program that relies more heavily on modeling and simulation than has been done in the past. The ability to quantify uncertainty in modeling and simulation will be an important part of any experimental program and can provide added confidence that established design limits and safety margins are appropriate. In addition, there is increasing demand from the nuclear industry for best-estimate analysis methods that provide confidence bounds along with their results. The ability to quantify uncertainty will be an important component of the modeling used to support design, testing, and experimental programs. Three avenues of uncertainty quantification (UQ) investigation are proposed. Two relatively new approaches are described which can be directly coupled to simulation codes currently being developed under the Advanced Simulation and Modeling program within the Reactor Campaign. A third approach, based on robust Monte Carlo methods, can be used in conjunction with existing reactor analysis codes as a means of verification and validation of the more detailed approaches.
A fast multipole boundary element method for solving two-dimensional thermoelasticity problems
NASA Astrophysics Data System (ADS)
Liu, Y. J.; Li, Y. X.; Huang, S.
2014-09-01
A fast multipole boundary element method (BEM) for solving general uncoupled steady-state thermoelasticity problems in two dimensions is presented in this paper. The fast multipole BEM is developed to handle the thermal term in the thermoelasticity boundary integral equation involving temperature and heat flux distributions on the boundary of the problem domain. Fast multipole expansions, local expansions and related translations for the thermal term are derived using complex variables. Several numerical examples are presented to show the accuracy and effectiveness of the developed fast multipole BEM in calculating the displacement and stress fields for 2-D elastic bodies under various thermal loads, including thin structure domains that are difficult to mesh using the finite element method (FEM). The BEM results using constant elements are found to be accurate compared with the analytical solutions, and the accuracy of the BEM results is found to be comparable to that of the FEM with linear elements. In addition, the BEM offers the ease of use in generating the mesh for a thin structure domain or a domain with complicated geometry, such as a perforated plate with randomly distributed holes for which the FEM fails to provide an adequate mesh. These results clearly demonstrate the potential of the developed fast multipole BEM for solving 2-D thermoelasticity problems.
Analytical Methods for Biomass Characterization during Pretreatment and Bioconversion
Pu, Yunqiao; Meng, Xianzhi; Yoo, Chang Geun; Li, Mi; Ragauskas, Arthur J
2016-01-01
Lignocellulosic biomass has been introduced as a promising resource for alternative fuels and chemicals because of its abundance and its potential to complement petroleum resources. Biomass is a complex biopolymer, and its compositional and structural characteristics vary widely depending on its species as well as its growth environment. Because of the complexity and variety of biomass, understanding its physicochemical characteristics is key to effective biomass utilization. Characterization of biomass not only provides critical information during pretreatment and bioconversion, but also gives valuable insights into how to utilize the biomass. For a better understanding of biomass characteristics, a good grasp and proper selection of analytical methods are necessary. This chapter introduces existing analytical approaches that are widely employed for biomass characterization during pretreatment and conversion processes. Diverse analytical methods using Fourier transform infrared (FTIR) spectroscopy, gel permeation chromatography (GPC), and nuclear magnetic resonance (NMR) spectroscopy for biomass characterization are reviewed. In addition, methods for assessing biomass accessibility by analyzing the surface properties of biomass are also summarized in this chapter.
Dorodnitsyn, Vladimir; Van Damme, Bart
2016-06-01
Wave propagation in cellular and porous media is widely studied due to its abundance in nature and industrial applications. Biot's theory for open-cell media predicts the existence of two simultaneous pressure waves, distinguished by their velocities. A fast wave travels through the solid matrix, whereas a much slower wave is carried by fluid channels. In closed-cell materials, the slow wave disappears owing to the lack of a continuous fluid path. However, recent finite element (FE) simulations by the authors of this paper also predict the presence of slow pressure waves in saturated closed-cell materials. The nature of this slow wave is not clear. In this paper, an equivalent unit cell of a medium with square cells is proposed to permit an analytical description of the dynamics of such a material. A simplified FE model suggests that the fluid-structure interaction can be fully captured using a wavenumber-dependent spring support of the vibrating cell walls. Using this approach, the pressure wave behavior can be calculated with high accuracy but less numerical effort. Finally, Rayleigh's energy method is used to investigate the coexistence of two waves with different velocities.
Fully Isotropic Fast Marching Methods on Cartesian Grids.
Appia, Vikram; Yezzi, Anthony
2010-01-01
The existing fast marching methods used to solve the Eikonal equation employ a locally continuous model to estimate the accumulated cost, but a discontinuous (discretized) model for the traveling cost around each grid point. Because the accumulated cost and the traveling (local) cost are treated differently, the estimate of the accumulated cost at any point varies with the direction of the arriving front. Instead, we propose to estimate the traveling cost at each grid point from a locally continuous model, interpolating the traveling cost along the direction of the propagating front. We further choose an interpolation scheme that is not biased by the direction of the front, thus making the fast marching process truly isotropic. We show the significance of removing this directional bias in certain applications of the fast marching method, and we compare the accuracy and computation times of our proposed methods with existing state-of-the-art fast marching techniques to demonstrate the superiority of our method.
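For reference, the standard first-order fast marching method (the baseline carrying the directional bias the paper sets out to remove) can be sketched in a few lines; the implementation below is a generic illustration, not the authors' isotropic scheme:

```python
import heapq
import numpy as np

def fast_marching(speed, src):
    """First-order fast marching for |grad T| = 1/speed on a unit-spaced grid."""
    ny, nx = speed.shape
    T = np.full((ny, nx), np.inf)
    T[src] = 0.0
    frozen = np.zeros((ny, nx), dtype=bool)
    heap = [(0.0, src)]
    while heap:
        t, (i, j) = heapq.heappop(heap)
        if frozen[i, j]:
            continue
        frozen[i, j] = True
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            a, b = i + di, j + dj
            if 0 <= a < ny and 0 <= b < nx and not frozen[a, b]:
                # smallest upwind neighbour value in each axis direction
                tx = min(T[a, b-1] if b > 0 else np.inf,
                         T[a, b+1] if b < nx - 1 else np.inf)
                ty = min(T[a-1, b] if a > 0 else np.inf,
                         T[a+1, b] if a < ny - 1 else np.inf)
                f = 1.0 / speed[a, b]
                if abs(tx - ty) < f:   # quadratic (two-sided) upwind update
                    t_new = 0.5 * (tx + ty + np.sqrt(2.0 * f * f - (tx - ty) ** 2))
                else:                  # one-sided update
                    t_new = min(tx, ty) + f
                if t_new < T[a, b]:
                    T[a, b] = t_new
                    heapq.heappush(heap, (t_new, (a, b)))
    return T
```

On a uniform-speed grid with a point source, this scheme is exact along the grid axes but overestimates along the diagonals, which is precisely the kind of direction-dependent error the paper targets.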
A fast and efficient method for producing partially coherent sources
NASA Astrophysics Data System (ADS)
Hyde, M. W., IV; Bose-Pillai, S.; Xiao, X.; Voelz, D. G.
2017-02-01
A fast, flexible and efficient method for generating partially coherent sources is presented. It is shown that Schell-model (uniformly correlated) and non-uniformly correlated sources can be produced quickly using a fast steering mirror and a low-actuator-count deformable mirror, respectively. The statistical optics theory underpinning the proposed technique is presented and discussed. Simulation results for two Schell-model sources and one non-uniformly correlated source are presented and compared to the theory to test the new approach.
Fast and stable numerical method for neuronal modelling
NASA Astrophysics Data System (ADS)
Hashemi, Soheil; Abdolali, Ali
2016-11-01
Excitable cell modelling is of prime interest in predicting and targeting neural activity. Two main limits in solving the related equations are the speed and stability of the numerical method. Since there is a tradeoff between accuracy and speed, most previously presented methods for solving partial differential equations (PDEs) favor one side; greater speed permits finer simulations and therefore better device design. By considering the variables in the finite-difference equations at the proper times and calculating the unknowns in a specific sequence, a fast, stable and accurate method for solving neural partial differential equations is introduced in this paper. The propagation of the action potential in a giant axon is studied using the proposed method and traditional methods, and the speed, consistency and stability of the methods are compared and discussed. The proposed method is as fast as forward methods and as stable as backward methods; forward methods are the fastest known, while backward methods are stable under any circumstances. Owing to its speed and stability, the proposed method can also simulate complex structures.
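The paper's sequencing scheme is not reproduced here, but the explicit forward-difference baseline it is compared against can be sketched for a passive cable equation; all parameters below are illustrative:

```python
import numpy as np

# Explicit (forward-Euler) finite-difference baseline for a passive cable:
#   dV/dt = D * d2V/dx2 - V/tau
# The explicit scheme is only stable for dt <= dx**2 / (2*D), which is the
# speed/stability tradeoff the paper addresses.
D, tau = 1.0, 5.0
nx, dx = 101, 0.1
dt = 0.4 * dx**2 / D            # safely inside the stability limit
V = np.zeros(nx)
V[nx // 2] = 1.0                # initial voltage bump at the cable midpoint

for _ in range(500):
    lap = np.zeros_like(V)
    lap[1:-1] = (V[2:] - 2.0 * V[1:-1] + V[:-2]) / dx**2  # interior Laplacian
    V = V + dt * (D * lap - V / tau)  # ends effectively held at rest (0)
```

With the chosen time step the update coefficients stay non-negative, so the bump spreads and decays without the oscillatory blow-up that a larger explicit step would produce; implicit (backward) variants remove that step-size restriction at the cost of a linear solve per step.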
Recent analytical methods for cephalosporins in biological fluids.
Toothaker, R D; Wright, D S; Pachla, L A
1987-01-01
Since 1980, RP chromatography has been the principal analytical technique used for cephalosporins. This technology offers selectivity, accuracy, and ease of use. Most of the methods rely on protein precipitation and, to a lesser extent, solid-phase isolation or extraction procedures. The proper selection of a method depends on the analytical constraints imposed by the overall objective of the study. For example, pharmacokinetic data interpretation mandates that the method be validated and provide specific and accurate results. LC is the preferred technique, since it not only meets these specifications but may also distinguish between the drug and its metabolites. Chromatographic methods that quantify several different cephalosporins at once are not desirable for pharmacokinetic data interpretation, since accuracy and precision are usually compromised so that many different drugs may be quantified in a single analysis. The proper selection of a sample preparation method depends on the presence of potential interferences and the acceptable lower limit of quantitation. Protein precipitation methods offer ease of sample preparation but may suffer from nonselectivity; solid-phase isolation and extraction procedures may increase selectivity and improve the limit of quantitation. Although LC provides specific and accurate results, clinical laboratories may prefer less specific methods for therapeutic drug monitoring. In this case, microbiological, enzymatic, and fluorimetric methods offer improved sample throughput but less specificity. However, these methods should not be used for drugs that may have a low margin of safety or if the patient is on multiple-antibiotic therapy. Future methods may involve incorporating solid-phase isolation columns to enhance the specificity of chromatographic, microbiological, enzymatic, and fluorescence methods. Advancements in microbore column technology may allow improvements in the selectivity and sensitivity of LC
Nakashima, Harunobu; Tomiyama, Ken-Ichi; Kawakami, Tsuyoshi; Isama, Kazuo
2010-07-01
In preparation for the revision of the authorized analytical method for tributyltin (TBT) and triphenyltin (TPT), which are banned under the "Act on the Control of Household Products Containing Harmful Substances", methods for detecting these substances by gas chromatography/mass spectrometry (GC/MS) after derivatization (the ethyl-derivatizing and hydrogen-derivatizing methods) were examined. Ethyl-derivatized compounds were stable, which enabled the detection of TPT with higher sensitivity. In addition, sample preparations suitable for the following analytical objects were established: (1) textile products, (2) water-based products (such as water-based paint), (3) oil-based products (such as wax), and (4) adhesives. When addition-recovery experiments were conducted using the prescribed pretreatment method, with surrogate substances (TBT-d27, TPT-d15) added and the data corrected accordingly, good recovery rates were obtained (94.5-118.6% for TBT, 86.6-110.1% for TPT). When TBT and TPT were analyzed in 31 commercially available products using the developed method, one adhesive contained 13.2 microg/g of TBT, exceeding the regulatory criterion (1 microg/g as tin). When the same product with different manufacturing dates was analyzed, TBT exceeding the regulatory criterion (10.2-10.8 microg/g) was detected in 4 of 8 samples, together with a high concentration (over 1000 microg/g) of dibutyltin (DBT). This suggested that the TBT remained as an impurity of the DBT, and the manufacturer chose to voluntarily recall the product. The new method is considered sufficiently applicable as a revision of the conventionally authorized method.
Analytical method for distribution of metallic gasket contact stress
NASA Astrophysics Data System (ADS)
Feng, Xiu; Gu, Boqing; Wei, Long; Sun, Jianjun
2008-11-01
Metallic gasket seals are widely used in chemical and petrochemical plants. The failure of a sealing system leads to enormous pecuniary loss, serious environmental pollution and personal injury; such failures are mostly caused not by the strength of the flanges or bolts but by leakage at the connections. The leakage behavior of bolted flanged connections is related to the gasket contact stress. In particular, the non-uniform radial distribution of this stress caused by flange rotational flexibility has a major influence on the tightness of bolted flanged connections. In this paper, based on the Warters method and considering the operating pressure, the deformation of the flanges is analyzed theoretically and a formula for the angle of rotation of the flanges is derived; from this result and the mechanical properties of the gasket material, a method for calculating the gasket contact stresses is put forward. The maximum stress at the gasket outer flank calculated by the analytical method is lower than that obtained by numerical simulation, but the mean stresses calculated by the two methods are nearly the same. The analytical method presented in this paper can be used as an engineering method for designing metallic gasket connections.
Analytical method for measuring cosmogenic 35S in natural waters
Uriostegui, Stephanie H.; Bibby, Richard K.; Esser, Bradley K.; ...
2015-05-18
Here, cosmogenic sulfur-35 in water as dissolved sulfate (35SO4) has successfully been used as an intrinsic hydrologic tracer in low-SO4, high-elevation basins. Its application in environmental waters containing high SO4 concentrations has been limited because only small amounts of SO4 can be analyzed using current liquid scintillation counting (LSC) techniques. We present a new analytical method for analyzing large amounts of BaSO4 for 35S. We quantify efficiency gains when suspending BaSO4 precipitate in Insta-Gel Plus cocktail, purify the BaSO4 precipitate to remove dissolved organic matter, mitigate interference from radium-226 and its daughter products by selecting high-purity barium chloride, and optimize LSC counting parameters for 35S determination in larger masses of BaSO4. Using this improved procedure, we achieved counting efficiencies comparable to published LSC techniques despite a 10-fold increase in the SO4 sample load. 35SO4 was successfully measured in high-SO4 surface waters and groundwaters containing low ratios of 35S activity to SO4 mass, demonstrating that this new analytical method expands the analytical range of 35SO4 and broadens its utility as an intrinsic tracer in hydrologic settings.
Benchmark Comparison of Cloud Analytics Methods Applied to Earth Observations
NASA Technical Reports Server (NTRS)
Lynnes, Chris; Little, Mike; Huang, Thomas; Jacob, Joseph; Yang, Phil; Kuo, Kwo-Sen
2016-01-01
Cloud computing has the potential to bring high performance computing capabilities to the average science researcher. However, in order to take full advantage of cloud capabilities, the science data used in the analysis must often be reorganized. This typically involves sharding the data across multiple nodes to enable relatively fine-grained parallelism, which can be done either via cloud-based file systems or via cloud-enabled databases such as Cassandra, Rasdaman or SciDB. Since storing an extra copy of data leads to increased cost and data management complexity, NASA is interested in determining the benefits and costs of various cloud analytics methods for real Earth Observation cases. Accordingly, NASA's Earth Science Technology Office and Earth Science Data and Information Systems project have teamed with cloud analytics practitioners to run a benchmark comparison on cloud analytics methods using the same input data and analysis algorithms. We have particularly looked at analysis algorithms that work over long time series, because these are particularly intractable for many Earth Observation datasets, which typically store data with one or just a few time steps per file. This post will present side-by-side cost and performance results for several common Earth observation analysis operations.
Benchmark Comparison of Cloud Analytics Methods Applied to Earth Observations
NASA Astrophysics Data System (ADS)
Lynnes, C.; Little, M. M.; Huang, T.; Jacob, J. C.; Yang, C. P.; Kuo, K. S.
2016-12-01
Cloud computing has the potential to bring high performance computing capabilities to the average science researcher. However, in order to take full advantage of cloud capabilities, the science data used in the analysis must often be reorganized. This typically involves sharding the data across multiple nodes to enable relatively fine-grained parallelism. This can be either via cloud-based filesystems or cloud-enabled databases such as Cassandra, Rasdaman or SciDB. Since storing an extra copy of data leads to increased cost and data management complexity, NASA is interested in determining the benefits and costs of various cloud analytics methods for real Earth Observation cases. Accordingly, NASA's Earth Science Technology Office and Earth Science Data and Information Systems project have teamed with cloud analytics practitioners to run a benchmark comparison on cloud analytics methods using the same input data and analysis algorithms. We have particularly looked at analysis algorithms that work over long time series, because these are particularly intractable for many Earth Observation datasets which typically store data with one or just a few time steps per file. This post will present side-by-side cost and performance results for several common Earth observation analysis operations.
Control of irradiated food: Recent developments in analytical detection methods.
NASA Astrophysics Data System (ADS)
Delincée, H.
1993-07-01
An overview is given of recent international efforts towards the development of analytical detection methods for radiation-processed foods, i.e. the programmes of ADMIT (FAO/IAEA) and of BCR (EC). Some larger collaborative studies have already taken place, e.g. ESR of bones from chicken, pork, beef, frog legs and fish; thermoluminescence of insoluble minerals isolated from herbs and spices; GC analysis of long-chain hydrocarbons derived from the lipid fraction of chicken and other meats; and the microbiological APC/DEFT procedure for spices. These methods could soon be implemented in international standard protocols.
Development of A High Throughput Method Incorporating Traditional Analytical Devices
White, C. C.; Embree, E.; Byrd, W. E; Patel, A. R.
2004-01-01
A high-throughput system and a companion informatics system have been developed and implemented. High throughput is defined as the ability to autonomously evaluate large numbers of samples, while an informatics system provides the software control of the physical devices, in addition to the organization and storage of the generated electronic data. This high-throughput system integrates both an ultraviolet-visible spectrometer (UV-Vis) and a Fourier transform infrared spectrometer (FTIR) with a multi-sample positioning table. The method is designed to quantify changes in polymeric materials resulting from controlled temperature, humidity and high-flux UV exposures. The integration of the software control of these analytical instruments within a single computer system is presented. Challenges in enhancing the system to include additional analytical devices are discussed. PMID:27366626
A fast method for a generalized nonlocal elastic model
NASA Astrophysics Data System (ADS)
Du, Ning; Wang, Hong; Wang, Che
2015-09-01
We develop a numerical method for a generalized nonlocal elastic model, which is expressed as a composition of a Riesz potential operator with a fractional differential operator, by composing a collocation method with a finite difference discretization. By carefully exploring the structure of the coefficient matrix of the numerical method, we develop a preconditioned fast Krylov subspace method, which reduces the computational cost to O(N log N) per iteration and the memory requirement to O(N). The use of the preconditioner significantly reduces the number of iterations, and the preconditioner itself can be inverted in O(N log N) operations. Numerical results show the utility of the method.
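The O(N log N)-per-iteration pattern described in the abstract can be illustrated in miniature. The sketch below is not the paper's operator: it assumes a generic symmetric Toeplitz matrix (the structure typically produced by uniform-grid discretizations of convolution-type nonlocal operators), a circulant embedding for FFT-based matrix-vector products, and a Strang-type circulant preconditioner inside conjugate gradients.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

# Hypothetical symmetric Toeplitz coefficient matrix, defined by its first
# column c (slowly decaying off-diagonals; the diagonal is boosted so the
# matrix is symmetric positive definite and CG applies).
N = 1024
k = np.arange(N)
c = 1.0 / (1.0 + k) ** 1.5
c[0] += 4.0

# Embed the N x N Toeplitz matrix in a 2N circulant: one matvec then costs
# O(N log N) via FFTs instead of O(N^2).
circ_eig = np.fft.fft(np.concatenate([c, [0.0], c[:0:-1]]))

def toeplitz_matvec(x):
    return np.fft.ifft(circ_eig * np.fft.fft(x, 2 * N))[:N].real

A = LinearOperator((N, N), matvec=toeplitz_matvec, dtype=float)

# Strang-type circulant preconditioner: copy the central diagonals of the
# Toeplitz matrix into a circulant, which is inverted in O(N log N) by FFT.
s = np.concatenate([c[: N // 2 + 1], c[N // 2 - 1 : 0 : -1]])
s_eig = np.fft.fft(s)

M = LinearOperator(
    (N, N), matvec=lambda r: np.fft.ifft(np.fft.fft(r) / s_eig).real, dtype=float
)

b = np.ones(N)
x, info = cg(A, b, M=M)
print(info, np.linalg.norm(toeplitz_matvec(x) - b))  # info == 0 means converged
```

With the circulant preconditioner the iteration count stays essentially flat as N grows, which is what makes the overall cost O(N log N) per iteration in both the sketch and the paper's setting.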
Fast Fiber-Laser Alignment: Beam Spot-Size Method
NASA Astrophysics Data System (ADS)
Zhang, Rong; Guo, Jingyan; Shi, Frank G.
2005-03-01
A novel fast and cost-effective method is introduced for the active alignment of a fiber to a laser diode: only four simple laser beam spot-size measurements are required to move the fiber tip from the far field to the proximity of the optimal alignment position, dramatically reducing the total alignment time (at least five times faster than a conventional method), as experimentally confirmed. Moreover, in contrast to existing methods, the new method is failure-proof. The principle of the proposed method can be applied generally to any type of package and is illustrated with an example of a butterfly package.
The characterization of kerogen-analytical limitations and method design
Larter, S.R.
1987-04-01
Methods suitable for high-resolution total molecular characterization of kerogens and other polymeric SOM are necessary for a quantitative understanding of hydrocarbon maturation and migration phenomena, in addition to being a requirement for a systematic understanding of kerogen-based fuel utilization. Gas chromatographic methods, in conjunction with analytical pyrolysis methods, have proven successful in the rapid superficial characterization of kerogen pyrolysates. Most applications involve qualitative or semi-quantitative assessment of the relative concentration of aliphatic, aromatic, or oxygen-containing species in a kerogen pyrolysate. More recently, the use of alkylated polystyrene internal standards has allowed the direct determination of parameters related to the abundance of, for example, normal alkyl groups or single-ring aromatic species in kerogens. The future of methods of this type for improved kerogen typing is critically discussed. The conceptual design and feasibility of methods suitable for the more complete characterization of complex geopolymers at the molecular level are discussed with practical examples.
Comparison of analytical methods for calculation of wind loads
NASA Technical Reports Server (NTRS)
Minderman, Donald J.; Schultz, Larry L.
1989-01-01
The following analysis is a comparison of analytical methods for the calculation of wind load pressures. The analytical methods specified in ASCE Paper No. 3269, ANSI A58.1-1982, the Standard Building Code, and the Uniform Building Code were analyzed using various hurricane speeds to determine the differences in the calculated results. The winds used for the analysis ranged from 100 mph to 125 mph and were applied inland from the shoreline of a large open body of water (i.e., an enormous lake or the ocean) a distance of 1500 feet or ten times the height of the building or structure considered. For a building or structure less than or equal to 250 feet in height acted upon by a wind greater than or equal to 115 mph, it was determined that the method specified in ANSI A58.1-1982 calculates a larger wind load pressure than the other methods. For a building or structure between 250 feet and 500 feet tall acted upon by a wind ranging from 100 mph to 110 mph, there is no clear choice of which method to use; for these cases, the factors that must be considered are the steady-state or peak wind velocity, the geographic location, the distance from a large open body of water, and the expected design life and its risk factor.
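All four codes build on the same basic stagnation-pressure relation, q = 0.00256 V² (q in psf, V in mph at standard sea-level air density); the exposure, gust, and importance factors layered on top of it, omitted in this minimal sketch, are precisely where the methods in the comparison diverge.

```python
# Basic velocity pressure: q = 0.00256 * V**2, with q in pounds per square
# foot and V in miles per hour (standard sea-level air density). Code-specific
# exposure/gust/importance coefficients are deliberately omitted.
def velocity_pressure_psf(v_mph: float) -> float:
    return 0.00256 * v_mph ** 2

for v in (100, 110, 115, 125):
    print(f"{v} mph -> {velocity_pressure_psf(v):.1f} psf")
```

For the 100-125 mph range studied, the bare velocity pressure spans roughly 25.6 to 40.0 psf, so the spread among the four methods comes entirely from their differing multiplicative factors.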
21 CFR 320.29 - Analytical methods for an in vivo bioavailability or bioequivalence study.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 21 Food and Drugs 5 2011-04-01 2011-04-01 false Analytical methods for an in vivo bioavailability... Analytical methods for an in vivo bioavailability or bioequivalence study. (a) The analytical method used in... ingredient or therapeutic moiety, or its active metabolite(s), achieved in the body. (b) When the analytical...
21 CFR 320.29 - Analytical methods for an in vivo bioavailability or bioequivalence study.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 21 Food and Drugs 5 2010-04-01 2010-04-01 false Analytical methods for an in vivo bioavailability... Analytical methods for an in vivo bioavailability or bioequivalence study. (a) The analytical method used in... ingredient or therapeutic moiety, or its active metabolite(s), achieved in the body. (b) When the analytical...
21 CFR 320.29 - Analytical methods for an in vivo bioavailability or bioequivalence study.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 21 Food and Drugs 5 2014-04-01 2014-04-01 false Analytical methods for an in vivo bioavailability... Analytical methods for an in vivo bioavailability or bioequivalence study. (a) The analytical method used in... ingredient or therapeutic moiety, or its active metabolite(s), achieved in the body. (b) When the analytical...
21 CFR 320.29 - Analytical methods for an in vivo bioavailability or bioequivalence study.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 21 Food and Drugs 5 2013-04-01 2013-04-01 false Analytical methods for an in vivo bioavailability... Analytical methods for an in vivo bioavailability or bioequivalence study. (a) The analytical method used in... ingredient or therapeutic moiety, or its active metabolite(s), achieved in the body. (b) When the analytical...
21 CFR 320.29 - Analytical methods for an in vivo bioavailability or bioequivalence study.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 21 Food and Drugs 5 2012-04-01 2012-04-01 false Analytical methods for an in vivo bioavailability... Analytical methods for an in vivo bioavailability or bioequivalence study. (a) The analytical method used in... ingredient or therapeutic moiety, or its active metabolite(s), achieved in the body. (b) When the analytical...
A new fast direct solver for the boundary element method
NASA Astrophysics Data System (ADS)
Huang, S.; Liu, Y. J.
2017-04-01
A new fast direct linear equation solver for the boundary element method (BEM) is presented in this paper. The idea of the new fast direct solver stems from the concept of the hierarchical off-diagonal low-rank (HODLR) matrix, which can be decomposed into a product of several block-diagonal matrices. The inverse of a hierarchical off-diagonal low-rank matrix can be calculated efficiently with the Sherman-Morrison-Woodbury formula. In this paper, a more general and efficient approach to approximating the coefficient matrix of the BEM with a hierarchical off-diagonal low-rank matrix is proposed. Compared to current fast direct solvers based on this structure, the proposed method is suitable for solving general 3-D boundary element models. Several numerical examples of 3-D potential problems with more than 200,000 unknowns are presented. The results show that the new fast direct solver can solve large 3-D BEM models accurately and more efficiently than the conventional BEM.
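The Sherman-Morrison-Woodbury identity at the heart of such solvers can be sketched at a single level. The example below is a simplification, not the paper's solver: a plain diagonal matrix stands in for the cheap block-diagonal part, and one global rank-r correction stands in for the compressed off-diagonal blocks.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 800, 12

# One level of a HODLR-like split: D is cheap to invert, U @ V.T is a
# low-rank correction coupling the off-diagonal blocks (values arbitrary).
d = rng.uniform(2.0, 3.0, n)
U = 0.1 * rng.standard_normal((n, r))
V = 0.1 * rng.standard_normal((n, r))
A = np.diag(d) + U @ V.T

b = rng.standard_normal(n)

# Sherman-Morrison-Woodbury: solve (D + U V^T) x = b using only the cheap
# D^-1 and one dense r x r solve, instead of factorizing the full n x n matrix.
Dinv_b = b / d
Dinv_U = U / d[:, None]
core = np.eye(r) + V.T @ Dinv_U            # r x r "capacitance" matrix
x = Dinv_b - Dinv_U @ np.linalg.solve(core, V.T @ Dinv_b)

print(np.linalg.norm(A @ x - b))           # small residual
```

A recursive application of the same identity, with the diagonal blocks themselves treated as smaller HODLR matrices, is what turns this into a fast direct solver.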
Selectivity in analytical chemistry: two interpretations for univariate methods.
Dorkó, Zsanett; Verbić, Tatjana; Horvai, George
2015-01-01
Selectivity is extremely important in analytical chemistry, but its definition remains elusive despite continued efforts by professional organizations and individual scientists. This paper shows that the existing selectivity concepts for univariate analytical methods broadly fall into two classes: selectivity concepts based on measurement error, and concepts based on response surfaces (the response surface being the 3-D plot of the univariate signal as a function of analyte and interferent concentration, respectively). The strengths and weaknesses of the different definitions are analyzed and the contradictions between them unveiled. The error-based selectivity is very general and very safe, but its application to a range of samples (as opposed to a single sample) requires knowledge of some constraint on the possible sample compositions. The selectivity concepts based on the response surface are easily applied to linear response surfaces but may lead to difficulties and counterintuitive results when applied to nonlinear response surfaces. A particular advantage of this class of selectivity is that, with linear response surfaces, it can provide a concentration-independent measure of selectivity. In contrast, the error-based selectivity concept allows only a yes/no decision about selectivity.
Organic analysis and analytical methods development: FY 1995 progress report
Clauss, S.A.; Hoopes, V.; Rau, J.
1995-09-01
This report describes the status of organic analyses and of developing analytical methods to account for the organic components in Hanford waste tanks, with particular emphasis on tanks assigned to the Flammable Gas Watch List. The methods that have been developed are illustrated by their application to samples obtained from Tank 241-SY-103 (Tank 103-SY). The analytical data serve as an example of the status of methods development and application. Samples of the convective and nonconvective layers from Tank 103-SY were analyzed for total organic carbon (TOC). The TOC value obtained for the nonconvective layer using the hot persulfate method was 10,500 µg C/g. The TOC value obtained from samples of Tank 101-SY was 11,000 µg C/g. The average value for the TOC of the convective layer was 6400 µg C/g. Chelators and chelator fragments in Tank 103-SY samples were identified using derivatization gas chromatography/mass spectrometry (GC/MS). Organic components were quantified using GC/flame ionization detection. Major components in both the convective- and nonconvective-layer samples include ethylenediaminetetraacetic acid (EDTA), nitrilotriacetic acid (NTA), succinic acid, nitrosoiminodiacetic acid (NIDA), citric acid, and ethylenediaminetriacetic acid (ED3A). Preliminary results also indicate the presence of C16 and C18 carboxylic acids in the nonconvective-layer sample. Oxalic acid was one of the major components in the nonconvective layer as determined by derivatization GC/flame ionization detection.
Quantitative analytical method to evaluate the metabolism of vitamin D.
Mena-Bravo, A; Ferreiro-Vera, C; Priego-Capote, F; Maestro, M A; Mouriño, A; Quesada-Gómez, J M; Luque de Castro, M D
2015-03-10
A method for the quantitative analysis of vitamin D (both D2 and D3) and its main metabolites, the monohydroxylated forms (25-hydroxyvitamin D2 and 25-hydroxyvitamin D3) and the dihydroxylated metabolites (1,25-dihydroxyvitamin D2, 1,25-dihydroxyvitamin D3 and 24,25-dihydroxyvitamin D3), in human serum is reported here. The method is based on direct analysis of serum by an automated platform involving on-line coupling of a solid-phase extraction workstation to a liquid chromatograph-tandem mass spectrometer. Detection of the seven analytes was carried out in the selected reaction monitoring (SRM) mode, and quantitative analysis was supported by the use of stable isotopically labeled internal standards (SIL-ISs). The detection limits were between 0.3 and 75 pg/mL for the target compounds, while precision (expressed as relative standard deviation) was below 13.0% for between-day variability. The method was externally validated according to the vitamin D External Quality Assurance Scheme (DEQAS) through the analysis of ten serum samples provided by this organization. The analytical features of the method support its applicability in nutritional and clinical studies aimed at elucidating the role of vitamin D metabolism.
A Critical Review of Analytical Methods for Quantification of Cefotaxime.
Consortti, Lívia Paganini; Salgado, Hérida Regina Nunes
2017-07-04
Bacterial resistance to antibiotics is a growing phenomenon worldwide. Considering the relevance of antimicrobials for the population and the decline in the registration of new antimicrobials by regulatory agencies, proper quality control is required in order to minimize the spread of bacterial resistance and to ensure both the effectiveness of treatment and the safety of the patient. Among these antimicrobials is cefotaxime, a drug belonging to the third-generation cephalosporins, which is highly active against Gram-negative bacteria and is used to treat central nervous system infections such as meningitis and septicemia. Given the critical importance of quality control of drugs and pharmaceutical products, combined with bacterial resistance to antibiotics, this study conducts a detailed review of analytical methods for cefotaxime. Through a critical review of the literature, this paper describes the analytical methods published for quantifying cefotaxime in different matrices; a large number of methods by HPLC and spectrophotometry were observed. Despite the advantages of these techniques, most reported methods have a large environmental and occupational impact, which emphasizes the need to adopt green procedures for quantifying cefotaxime.
Gaussian Analytic Centroiding method of star image of star tracker
NASA Astrophysics Data System (ADS)
Wang, Haiyong; Xu, Ershuai; Li, Zhifeng; Li, Jingjin; Qin, Tianmu
2015-11-01
The energy distribution of an actual star image statistically follows a Gaussian law in most cases, so an optimized star image centroiding algorithm should also be constructed by following the Gaussian law. For a star image spot covering a certain number of pixels, the marginal distributions of the gray accumulation on rows and columns are derived and analyzed, from which the formulas of the Gaussian Analytic Centroiding (GAC) method are deduced; robustness is also promoted by the inherent filtering effect of the gray accumulation. Ideal reference star images are simulated by a PSF (point spread function) in integral form. Precision and speed tests of the Gaussian analytic formulas are conducted for three Gaussian radii (0.5, 0.671, and 0.8 pixel). The simulation results show that the precision of the GAC method is better than that of the other algorithms considered when a window no larger than 5 × 5 pixels, a widely used choice, is applied. Above all, the algorithm that consumes the least time is still the novel GAC method. The GAC method thus helps to promote the comprehensive performance of attitude determination by a star tracker.
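A simplified stand-in for the marginal-distribution idea (not the paper's closed-form Gaussian estimator): accumulate the spot's gray values along rows and columns, then take the centroid of each 1-D marginal. The window size, spot center, and Gaussian radius below are arbitrary demo values.

```python
import numpy as np

# Simulate a star spot: a 2-D Gaussian PSF sampled on a 5 x 5 pixel window.
x0, y0, sigma = 2.31, 1.78, 0.8
ii, jj = np.mgrid[0:5, 0:5]                 # ii = row index (y), jj = column index (x)
img = np.exp(-((jj - x0) ** 2 + (ii - y0) ** 2) / (2 * sigma ** 2))

# Gray accumulation on rows and columns: the 1-D marginal distributions the
# GAC formulas operate on (the accumulation also averages out pixel noise).
col_marginal = img.sum(axis=0)              # function of x
row_marginal = img.sum(axis=1)              # function of y

# First moment of each marginal as a simple stand-in centroid estimator.
xc = (np.arange(5) * col_marginal).sum() / col_marginal.sum()
yc = (np.arange(5) * row_marginal).sum() / row_marginal.sum()
print(xc, yc)                               # close to the true center (2.31, 1.78)
```

The paper replaces the plain first moment with analytic expressions derived from the Gaussian form of the marginals, which is where its precision advantage over moment-based algorithms comes from.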
Evolution of microbiological analytical methods for dairy industry needs.
Sohier, Danièle; Pavan, Sonia; Riou, Armelle; Combrisson, Jérôme; Postollec, Florence
2014-01-01
Traditionally, culture-based methods have been used to enumerate microbial populations in dairy products. Recent developments in molecular methods now enable faster and more sensitive analyses than classical microbiology procedures. These molecular tools allow a detailed characterization of cell physiological states and bacterial fitness and thus offer new perspectives for integrating microbial physiology monitoring into the improvement of industrial processes. This review summarizes the methods described to enumerate and characterize the physiological states of technological microbiota in dairy products, and discusses the current deficiencies in relation to the industry's needs. Recent studies show that polymerase chain reaction (PCR)-based methods can successfully be applied to quantify fermenting microbes and probiotics in dairy products. Flow cytometry and omics technologies also show interesting analytical potential. However, they still suffer from a lack of validation and standardization for quality control analyses, as reflected by the absence of performance studies and official international standards.
Reducing waste generation and radiation exposure by analytical method modification
Ekechukwu, A.A.
1996-10-01
The primary goal of an analytical support laboratory has traditionally been to provide accurate data in a timely and cost effective fashion. Added to this goal is now the need to provide the same high quality data while generating as little waste as possible. At the Savannah River Technology Center (SRTC), we have modified and reengineered several methods to decrease generated waste and hence reduce radiation exposure. These method changes involved improving detection limits (which decreased the amount of sample required for analysis), decreasing reaction and analysis time, decreasing the size of experimental set-ups, recycling spent solvent and reagents, and replacing some methods. These changes had the additional benefits of reducing employee radiation exposure and exposure to hazardous chemicals. In all cases, the precision, accuracy, and detection limits were equal to or better than the replaced method. Most of the changes required little or no expenditure of funds. This paper describes these changes and discusses some of their applications.
ANALYTICAL METHODS FOR KINETIC STUDIES OF BIOLOGICAL INTERACTIONS: A REVIEW
Zheng, Xiwei; Bi, Cong; Li, Zhao; Podariu, Maria; Hage, David S.
2015-01-01
The rates at which biological interactions occur can provide important information concerning the mechanism and behavior of these processes in living systems. This review discusses several analytical methods that can be used to examine the kinetics of biological interactions. These techniques include common or traditional methods such as stopped-flow analysis and surface plasmon resonance spectroscopy, as well as alternative methods based on affinity chromatography and capillary electrophoresis. The general principles and theory behind these approaches are examined, and it is shown how each technique can be utilized to provide information on the kinetics of biological interactions. Examples of applications are also given for each method. In addition, a discussion is provided on the relative advantages or potential limitations of each technique regarding its use in kinetic studies. PMID:25700721
Quasi-Analytical Method for Images Construction from Gravitational Lenses
NASA Astrophysics Data System (ADS)
Kotvytskiy, A. T.; Bronza, S. D.
One of the main problems in the study of the system of equations of a gravitational lens is the computation of the image coordinates from the known position of the source. This requires solving a system of equations with two unknowns. The difficulty is that, in general, there is no analytical method that can find all of the roots (images) of such a system over the field of real numbers, so numerical methods such as tracing are commonly used. For N-point gravitational lenses, the lens equation is a system of polynomial equations. Using the methods of algebraic geometry, we transform this system into another system that splits into two equations, each of which is a polynomial in one variable. Finding the roots of these univariate equations is then a standard computational task.
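The reduction to univariate root finding can be shown with the simplest lens. The single point-mass lens in Einstein-radius units is an assumption of this sketch, not the N-point system of the paper:

```python
import numpy as np

# Single point-mass lens, on-axis source at position y (Einstein-radius
# units): the image positions x satisfy the lens equation
#   x - 1/x = y   ->   x**2 - y*x - 1 = 0,
# i.e. after elimination the problem is exactly univariate root finding.
y = 0.5
roots = np.roots([1.0, -y, -1.0])
print(roots)                       # two images, one on either side of the lens

# Check: both roots satisfy the original lens equation.
for x in roots:
    assert abs(x - 1.0 / x - y) < 1e-9
```

For N point masses the eliminated polynomials have much higher degree, but the final step is the same standard task that `np.roots` performs here.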
Evolution of microbiological analytical methods for dairy industry needs
Sohier, Danièle; Pavan, Sonia; Riou, Armelle; Combrisson, Jérôme; Postollec, Florence
2014-01-01
Traditionally, culture-based methods have been used to enumerate microbial populations in dairy products. Recent developments in molecular methods now enable faster and more sensitive analyses than classical microbiology procedures. These molecular tools allow a detailed characterization of cell physiological states and bacterial fitness and thus offer new perspectives for integrating microbial physiology monitoring into the improvement of industrial processes. This review summarizes the methods described to enumerate and characterize the physiological states of technological microbiota in dairy products, and discusses the current deficiencies in relation to the industry's needs. Recent studies show that polymerase chain reaction (PCR)-based methods can successfully be applied to quantify fermenting microbes and probiotics in dairy products. Flow cytometry and omics technologies also show interesting analytical potential. However, they still suffer from a lack of validation and standardization for quality control analyses, as reflected by the absence of performance studies and official international standards. PMID:24570675
NASA Astrophysics Data System (ADS)
Jones, C. E.; Kato, S.; Nakashima, Y.; Kajii, Y.
2014-05-01
Biogenic emissions supply the largest fraction of non-methane volatile organic compounds (VOC) from the biosphere to the atmospheric boundary layer, and typically comprise a complex mixture of reactive terpenes. Due to this chemical complexity, achieving comprehensive measurements of biogenic VOC (BVOC) in air within a satisfactory time resolution is analytically challenging. To address this, we have developed a novel, fully automated Fast Gas Chromatography (Fast-GC) based technique to provide higher time resolution monitoring of monoterpenes (and selected other C9-C15 terpenes) during plant emission studies and in ambient air. To our knowledge, this is the first study to apply a Fast-GC based separation technique to achieve quantification of terpenes in ambient air. Three chromatography methods have been developed for atmospheric terpene analysis under different sampling scenarios. Each method facilitates chromatographic separation of selected BVOC within a significantly reduced analysis time compared to conventional GC methods, whilst maintaining the ability to quantify individual monoterpene structural isomers. Using this approach, the C9-C15 BVOC composition of single plant emissions may be characterised within a 14.5 min analysis time. Moreover, in-situ quantification of 12 monoterpenes in unpolluted ambient air may be achieved within an 11.7 min chromatographic separation time (increasing to 19.7 min when simultaneous quantification of multiple oxygenated C9-C10 terpenoids is required, and/or when concentrations of anthropogenic VOC are significant). These analysis times potentially allow for a twofold to fivefold increase in measurement frequency compared to conventional GC methods. Here we outline the technical details and analytical capability of this chromatographic approach, and present the first in-situ Fast-GC observations of 6 monoterpenes and the oxygenated BVOC (OBVOC) linalool in ambient air. During this field deployment within a suburban forest
NASA Astrophysics Data System (ADS)
Theis, L. S.; Motzoi, F.; Wilhelm, F. K.
2016-01-01
We present a few-parameter ansatz for pulses to implement a broad set of simultaneous single-qubit rotations in frequency-crowded multilevel systems. Specifically, we consider a system of two qutrits whose working and leakage transitions suffer from spectral crowding (detuned by δ). In order to achieve precise controllability, we make use of two driving fields (each having two quadratures) at two different tones to simultaneously apply arbitrary combinations of rotations about axes in the X-Y plane to both qubits. Expanding the waveforms in terms of Hanning windows, we show how analytic pulses containing smooth and composite-pulse features can easily achieve gate errors less than 10^-4 and considerably outperform known adiabatic techniques. Moreover, we find a generalization of the WAHWAH (Weak AnHarmonicity With Average Hamiltonian) method by Schutjens et al. [R. Schutjens, F. A. Dagga, D. J. Egger, and F. K. Wilhelm, Phys. Rev. A 88, 052330 (2013)], 10.1103/PhysRevA.88.052330 that allows precise separate single-qubit rotations for all gate times beyond a quantum speed limit. We find in all cases a quantum speed limit slightly below 2π/δ for the gate time and show that our pulses are robust against variations in system parameters and filtering due to transfer functions, making them suitable for experimental implementations.
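The Hanning-window expansion of the ansatz can be sketched directly; the coefficients and gate time below are arbitrary illustration values, not the optimized parameters of the paper:

```python
import numpy as np

# A waveform built from a few Hanning-window components, the ansatz family
# used for the pulses. Coefficients a_n here are hypothetical demo values.
T = 20.0                           # gate time, arbitrary units
t = np.linspace(0, T, 400)
coeffs = [1.0, -0.3, 0.1]

pulse = sum(a * (1 - np.cos(2 * np.pi * (n + 1) * t / T)) / 2
            for n, a in enumerate(coeffs))

# Every Hanning component vanishes (with zero slope) at t = 0 and t = T, so
# the composite pulse switches on and off smoothly -- the smoothness that
# helps suppress leakage to the crowded transitions.
assert abs(pulse[0]) < 1e-12 and abs(pulse[-1]) < 1e-12
```

With only a handful of coefficients as free parameters, the optimization problem behind each gate stays low-dimensional, which is what makes the ansatz a "few-parameter" one.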
Analytical methods for gravity-assist tour design
NASA Astrophysics Data System (ADS)
Strange, Nathan J.
This dissertation develops analytical methods for the design of gravity-assist spacecraft trajectories. Such trajectories are commonly employed by planetary science missions to reach Mercury or the Outer Planets. They may also be used at the Outer Planets for the design of science tours with multiple flybys of those planets' moons. Recent work has also shown applicability to new mission concepts such as NASA's Asteroid Redirect Mission. This work is based on the theory of patched conics. This document applies rigor to the concepts of pumping (i.e., using gravity assists to change orbital energy) and cranking (i.e., using gravity assists to change inclination) to develop several analytic relations involving pump and crank angles. In addition, transformations are developed between pump angle, crank angle, and v-infinity magnitude and classical orbit elements. These transformations are then used to describe the limits on orbits achievable via gravity assists of a planet or moon. This is then extended to develop analytic relations for all possible ballistic gravity-assist transfers and one type of propulsive transfer, v-infinity leveraging transfers. The results in this dissertation complement existing numerical methods for the design of these trajectories by providing methods that can guide numerical searches to find promising trajectories and even, in some cases, replace numerical searches altogether. In addition, results from new techniques presented in this dissertation, such as Tisserand Graphs, the V-Infinity Globe, and Non-Tangent V-Infinity Leveraging, provide additional insight into the structure of the gravity-assist trajectory design problem.
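The pumping relation underlying these analytic methods is compact enough to sketch. Assuming a circular planetary orbit and patched conics (the Jupiter numbers below are approximate and purely illustrative), the heliocentric orbital energy depends only on the pump angle, the angle between the v-infinity vector and the planet's velocity:

```python
import numpy as np

# Pumping at Jupiter, patched-conic sketch. Units: km, s.
mu_sun = 1.327e11                        # solar gravitational parameter, km^3/s^2
r_jup = 7.785e8                          # Jupiter's heliocentric distance, km
v_jup = np.sqrt(mu_sun / r_jup)          # circular-orbit speed, ~13.06 km/s
v_inf = 6.0                              # hyperbolic excess speed, conserved by the flyby

def specific_energy(pump_angle):
    """Heliocentric specific orbital energy for a given pump angle
    (angle between v-infinity and the planet's velocity vector)."""
    v_sq = v_jup**2 + v_inf**2 + 2 * v_jup * v_inf * np.cos(pump_angle)
    return v_sq / 2 - mu_sun / r_jup

# A flyby rotates v-infinity without changing its magnitude; decreasing the
# pump angle aligns v-infinity with the planet's motion and raises the energy.
e_before = specific_energy(np.radians(120))
e_after = specific_energy(np.radians(80))
print(e_before, e_after)                 # e_after > e_before: the orbit was "pumped up"
```

Since the flyby can only rotate v-infinity by a bounded turn angle per encounter, relations of this form directly yield the reachable-orbit limits the dissertation describes.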
Analytical method for thermal stress analysis of plasma facing materials
NASA Astrophysics Data System (ADS)
You, J. H.; Bolt, H.
2001-10-01
The thermo-mechanical response of plasma facing materials (PFMs) to heat loads from the fusion plasma is one of the crucial issues in fusion technology. In this work, a fully analytical description of the thermal stress distribution in armour tiles of plasma facing components is presented which is expected to occur under typical high heat flux (HHF) loads. The method of stress superposition is applied considering the temperature gradient and thermal expansion mismatch. Several combinations of PFMs and heat sink metals are analysed and compared. In the framework of the present theoretical model, plastic flow and the effect of residual stress can be quantitatively assessed. Possible failure features are discussed.
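As a zeroth-order illustration of the thermal-expansion-mismatch term (not the paper's full superposition model), the familiar biaxially constrained mismatch stress σ = E Δα ΔT / (1 − ν) can be evaluated for a representative armour/heat-sink pair; all material values below are rough textbook numbers.

```python
# Biaxially constrained thermal-mismatch stress estimate for a tungsten
# armour tile bonded to a copper heat sink. Values are approximate,
# illustration-only material constants.
E_w = 400e9          # Young's modulus of tungsten, Pa
nu_w = 0.28          # Poisson's ratio of tungsten
alpha_w = 4.5e-6     # thermal expansion of tungsten, 1/K
alpha_cu = 16.5e-6   # thermal expansion of copper, 1/K
dT = 500.0           # temperature change from the stress-free (bonding) state, K

sigma = E_w * (alpha_cu - alpha_w) * dT / (1 - nu_w)
print(sigma / 1e6, "MPa")
```

Even this crude estimate gives stresses in the GPa range, which is why plastic flow and residual stress, both treated quantitatively in the paper's model, matter for plasma facing components.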
Implementation of the maximum entropy method for analytic continuation
NASA Astrophysics Data System (ADS)
Levy, Ryan; LeBlanc, J. P. F.; Gull, Emanuel
2017-06-01
We present Maxent, a tool for performing analytic continuation of spectral functions using the maximum entropy method. The code operates on discrete imaginary axis datasets (values with uncertainties) and transforms this input to the real axis. The code works for imaginary time and Matsubara frequency data and implements the 'Legendre' representation of finite temperature Green's functions. It implements a variety of kernels, default models, and grids for continuing bosonic, fermionic, anomalous, and other data. Our implementation is licensed under GPLv3 and extensively documented. This paper shows the use of the programs in detail.
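A sketch of the kind of kernel Maxent operates on (one common convention for the fermionic imaginary-time kernel; the grids and test spectrum below are arbitrary) shows why entropic regularization is essential:

```python
import numpy as np

# Fermionic imaginary-time kernel K(tau, omega) = exp(-tau*omega) / (1 + exp(-beta*omega)),
# one common convention; Maxent itself supports several kernels and data formats.
beta = 10.0
tau = np.linspace(0, beta, 64)
omega = np.linspace(-8, 8, 200)
domega = omega[1] - omega[0]

# Numerically stable evaluation via log-space: log K = -tau*omega - log(1 + e^{-beta*omega}).
K = np.exp(-tau[:, None] * omega[None, :] - np.logaddexp(0.0, -beta * omega[None, :]))

# Forward map: imaginary-time data G(tau) from a normalized two-peak test spectrum.
A = np.exp(-((omega - 1.5) ** 2)) + np.exp(-((omega + 1.5) ** 2))
A /= A.sum() * domega
G = K @ A * domega

# The singular values of K decay essentially exponentially: inverting G -> A
# is severely ill-posed, which is why the entropic prior (default model) is needed.
s = np.linalg.svd(K, compute_uv=False)
print(s[0] / s[-1])        # huge condition number
```

Maxent's job is precisely to pick, among the many spectra consistent with noisy G(τ) under this near-singular map, the one maximizing entropy relative to a default model.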
Analytical methods for human biomonitoring of pesticides. A review.
Yusa, Vicent; Millet, Maurice; Coscolla, Clara; Roca, Marta
2015-09-03
Biomonitoring of both currently used and banned persistent pesticides is a very useful tool for assessing human exposure to these chemicals. In this review, we present current approaches and recent advances in the analytical methods for determining the biomarkers of exposure to pesticides in the most commonly used specimens, such as blood, urine, and breast milk, and in emerging non-invasive matrices such as hair and meconium. We critically discuss the main approaches to sample treatment, and the instrumental techniques currently used to determine the most relevant pesticide biomarkers. We finally look at the future trends in this field.
Analytical Methods Used in the Quality Control of Honey.
Pita-Calvo, Consuelo; Guerra-Rodríguez, María Esther; Vázquez, Manuel
2017-02-01
Honey is a natural sweet substance produced by bees (Apis mellifera). In this work, the main parameters used in the routine quality control of honey and the most commonly used analytical methods for their determination are reviewed. Honey can be adulterated with cheaper sweeteners or, indirectly, by feeding the bees with sugars; therefore, methods for detecting and quantifying adulteration are necessary. Chromatographic techniques are widely used in honey analysis. More recently, techniques such as Raman, near-infrared, mid-infrared, and nuclear magnetic resonance spectroscopy in combination with chemometric data processing have been proposed. However, spectroscopy does not allow the determination of enzyme activities, a criterion of great importance for the honey trade. Methylglyoxal is an interesting compound for its antibacterial properties, and methods for its determination are also reviewed.
Fast Numerical Methods for Stochastic Partial Differential Equations
2016-04-15
AFRL-AFOSR-VA-TR-2016-0156. Fast Numerical Methods for Stochastic Partial Differential Equations. Hongmei Chi, Florida Agricultural and Mechanical University, Tallahassee. Final Report, 04/15/2016. DISTRIBUTION A: Distribution approved for public release. AF Office Of Scientific Research (AFOSR)/RTA2, Arlington, Virginia 22203. Air Force Research Laboratory, Air Force Materiel Command.
The evolution of analytical chemistry methods in foodomics.
Gallo, Monica; Ferranti, Pasquale
2016-01-08
The methodologies of food analysis have greatly evolved over the past 100 years, from basic assays based on solution chemistry to those relying on modern instrumental platforms. Today, the development and optimization of integrated analytical approaches based on different techniques to study the chemical composition of a food at the molecular level may make it possible to define a 'food fingerprint', valuable for assessing the nutritional value, safety, quality, authenticity and security of foods. This comprehensive strategy, termed foodomics, includes emerging work areas such as food chemistry, phytochemistry, advanced analytical techniques, biosensors and bioinformatics. Integrated approaches can help to elucidate some critical issues in food analysis, but also to face the new challenges of a globalized world: security, sustainability and food production in response to worldwide environmental change. They include the development of powerful analytical methods to ensure the origin and quality of food, as well as the discovery of biomarkers to identify potential food safety problems. In the area of nutrition, the future challenge is to identify, through specific biomarkers, individual peculiarities that allow early diagnosis and then a personalized prognosis and diet for patients with food-related disorders. Far from aiming at an exhaustive review of the abundant literature dedicated to the applications of omic sciences in food analysis, we will explore how classical approaches, such as those used in chemistry and biochemistry, have evolved to intersect with the new omics technologies and advance our understanding of the complexity of foods. Perhaps most importantly, a key objective of the review is to explore the development of simple and robust methods for a fully applied use of omics data in food science.
Differential correction method applied to measurement of the FAST reflector
NASA Astrophysics Data System (ADS)
Li, Xin-Yi; Zhu, Li-Chun; Hu, Jin-Wen; Li, Zhi-Heng
2016-08-01
The Five-hundred-meter Aperture Spherical radio Telescope (FAST) adopts an active deformable main reflector composed of 4450 triangular panels. During an observation, the illuminated area of the reflector is deformed into a 300-m diameter paraboloid and directed toward a source. To achieve accurate control of the reflector shape, the positions of 2226 nodes distributed over the entire reflector must be measured with sufficient precision within a limited time, which is a challenging task because of the large scale. Measurement of the FAST reflector makes use of total stations and node targets. However, in this case the effect of the atmosphere on measurement accuracy is a significant issue. This paper investigates a differential correction method for total station measurement of the FAST reflector. A multi-benchmark differential correction method, including a scheme for benchmark selection and weight assignment, is proposed. On-site evaluation experiments show an improvement of 70%-80% in measurement accuracy compared with the uncorrected measurement, verifying the effectiveness of the proposed method.
Using analytic network process for evaluating mobile text entry methods.
Ocampo, Lanndon A; Seva, Rosemary R
2016-01-01
This paper highlights a preference evaluation methodology for text entry methods on a touch-keyboard smartphone using the analytic network process (ANP). Evaluation of text entry methods in the literature mainly considers speed and accuracy. This study presents an alternative means for selecting a text entry method that considers user preference. A case study was carried out with a group of experts who were asked to develop a selection decision model for five text entry methods. The decision problem is flexible enough to reflect the interdependencies of decision elements that are necessary for describing real-life conditions. Results showed that the QWERTY method is preferred over the other text entry methods, while the arrangement of keys is the most preferred criterion in characterizing a sound method. Sensitivity analysis, using simulation of normally distributed random numbers under fairly large perturbations, showed the foregoing results to be reliable enough to reflect robust judgment. The main contribution of this paper is the introduction of a multi-criteria decision approach to the preference evaluation of text entry methods.
Application of Fast Multipole Methods to the NASA Fast Scattering Code
NASA Technical Reports Server (NTRS)
Dunn, Mark H.; Tinetti, Ana F.
2008-01-01
The NASA Fast Scattering Code (FSC) is a versatile noise prediction program designed to conduct aeroacoustic noise reduction studies. The equivalent source method is used to solve an exterior Helmholtz boundary value problem with an impedance-type boundary condition. The solution process in FSC v2.0 requires direct manipulation of a large, dense system of linear equations, limiting the applicability of the code to small scales and/or moderate excitation frequencies. Recent advances in the use of Fast Multipole Methods (FMM) for solving scattering problems, coupled with sparse linear algebra techniques, suggest that a substantial reduction in computer resource utilization over conventional solution approaches can be obtained. Implementation of the single-level FMM (SLFMM) and a variant of the Conjugate Gradient Method (CGM) into the FSC is discussed in this paper. The culmination of this effort, FSC v3.0, was used to generate solutions for three configurations of interest. Benchmarking against previously obtained simulations indicates that a twenty-fold reduction in computational memory and up to a four-fold reduction in computer time have been achieved on a single processor.
Validation of analytic methods for biomarkers used in drug development.
Chau, Cindy H; Rixe, Olivier; McLeod, Howard; Figg, William D
2008-10-01
The role of biomarkers in drug discovery and development has gained precedence over the years. As biomarkers become integrated into drug development and clinical trials, quality assurance and, in particular, assay validation become essential with the need to establish standardized guidelines for analytic methods used in biomarker measurements. New biomarkers can revolutionize both the development and use of therapeutics but are contingent on the establishment of a concrete validation process that addresses technology integration and method validation as well as regulatory pathways for efficient biomarker development. This perspective focuses on the general principles of the biomarker validation process with an emphasis on assay validation and the collaborative efforts undertaken by various sectors to promote the standardization of this procedure for efficient biomarker development.
Friedmann-Lemaitre cosmologies via roulettes and other analytic methods
Chen, Shouxin; Gibbons, Gary W.; Yang, Yisong
2015-10-01
In this work a series of methods is developed for understanding the Friedmann equation when it is beyond the reach of the Chebyshev theorem. First it is demonstrated that every solution of the Friedmann equation admits a representation as a roulette, such that information on the latter may be used to obtain information on the former. Next the Friedmann equation is integrated for a quadratic equation of state and for the Randall-Sundrum II universe, yielding a rich collection of interesting new phenomena. Finally an analytic method is used to isolate the asymptotic behavior of the solutions of the Friedmann equation when the equation of state is of an extended form which renders the integration impossible, and to establish a universal exponential growth law.
A novel unified coding analytical method for Internet of Things
NASA Astrophysics Data System (ADS)
Sun, Hong; Zhang, JianHong
2013-08-01
This paper presents a novel unified coding analytical method for the Internet of Things, which abstracts out 'displacement goods' and 'physical objects' and expounds the relationship between them. It details the item coding principles, establishes a one-to-one relationship between the three-dimensional spatial coordinates of points and global manufacturers, and can be expanded without limit. The method solves the problem of unified coding in the production and circulation phases, and further explains how to update the item information corresponding to the coding in the sale and use stages, so that the Internet of Things can carry out real-time monitoring and intelligent management of each item.
Fast sweeping method for the factored eikonal equation
NASA Astrophysics Data System (ADS)
Fomel, Sergey; Luo, Songting; Zhao, Hongkai
2009-09-01
We develop a fast sweeping method for the factored eikonal equation. We decompose the solution of a general eikonal equation as the product of two factors: the first factor is the solution to a simple eikonal equation (such as distance) or a previously computed solution to an approximate eikonal equation; the second factor is a necessary modification/correction. Appropriate discretization and a fast sweeping strategy are designed for the equation of the correction part. The key idea is to enforce the causality of the original eikonal equation during the Gauss-Seidel iterations. Using extensive numerical examples we demonstrate that (1) the convergence behavior of the fast sweeping method for the factored eikonal equation is the same as for the original eikonal equation, i.e., the number of Gauss-Seidel iterations is independent of the mesh size, and (2) the numerical solution from the factored eikonal equation is more accurate than the numerical solution computed directly from the original eikonal equation, especially for point sources.
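As a minimal sketch of the sweeping machinery described above, the following implements the standard Gauss-Seidel fast sweeping iteration for the unfactored eikonal equation |grad T| = f with a point source on the unit square; the paper's actual contribution, factoring the solution as T = T0*tau to remove the source singularity, is not reproduced here, and the update is the usual first-order upwind one:

```python
import numpy as np

def fast_sweep_eikonal(n=51, f=1.0, sweeps=8):
    """Gauss-Seidel fast sweeping for |grad T| = f on [0,1]^2 with a
    point source at the origin, using the first-order upwind update."""
    h = 1.0 / (n - 1)
    T = np.full((n, n), np.inf)
    T[0, 0] = 0.0  # source node is fixed
    # Four alternating sweep orderings cover all characteristic directions.
    orders = [(range(n), range(n)),
              (range(n - 1, -1, -1), range(n)),
              (range(n), range(n - 1, -1, -1)),
              (range(n - 1, -1, -1), range(n - 1, -1, -1))]
    for _ in range(sweeps):
        for ii, jj in orders:
            for i in ii:
                for j in jj:
                    if i == 0 and j == 0:
                        continue
                    a = min(T[i - 1, j] if i > 0 else np.inf,
                            T[i + 1, j] if i < n - 1 else np.inf)
                    b = min(T[i, j - 1] if j > 0 else np.inf,
                            T[i, j + 1] if j < n - 1 else np.inf)
                    if np.isinf(a) and np.isinf(b):
                        continue  # no upwind information yet
                    if abs(a - b) >= f * h:
                        t = min(a, b) + f * h  # one-sided update
                    else:
                        t = 0.5 * (a + b + np.sqrt(2 * (f * h) ** 2 - (a - b) ** 2))
                    T[i, j] = min(T[i, j], t)  # enforce causality
    return T

T = fast_sweep_eikonal()
x = np.linspace(0.0, 1.0, 51)
exact = np.sqrt(x[:, None] ** 2 + x[None, :] ** 2)  # distance to the source
err = np.abs(T - exact).max()
```

With constant slowness the solution is the distance to the source; the largest error sits near the source singularity, which is exactly what the factored formulation of the paper is designed to remove.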
Kroniger, K; Herzog, M; Landry, G; Dedes, G; Parodi, K; Traneus, E
2015-06-15
Purpose: We describe and demonstrate a fast analytical tool for predicting prompt-gamma emission, based on filter functions applied to the depth-dose profile. We also present the implementation in a treatment planning system (TPS) of the same algorithm for positron emitter distributions. Methods: The prediction of the desired observable is based on the convolution of filter functions with the depth-dose profile. For both prompt gammas and positron emitters, the results of Monte Carlo (MC) simulations are compared with those of the analytical tool. For prompt-gamma emission from inelastic proton-induced reactions, homogeneous and inhomogeneous phantoms along with patient data are used as irradiation targets for mono-energetic proton pencil beams. The accuracy of the tool is assessed in terms of the shape of the analytically calculated depth profiles and their absolute yields, compared to MC. For the positron emitters, the method is implemented in a research RayStation TPS and compared to MC predictions. Digital phantoms and patient data are used, and positron emitter spatial density distributions are analyzed. Results: Calculated prompt-gamma profiles agree with MC within 3% in terms of absolute yield and reproduce the correct shape. Based on an arbitrary reference material and by means of 6 filter functions (one per chemical element), profiles in any other material composed of those elements can be predicted. The TPS-implemented algorithm is accurate enough to enable, via the analytically calculated positron emitter profiles, detection of range differences between the TPS and MC with errors of the order of 1-2 mm. Conclusion: The proposed analytical method predicts prompt-gamma and positron emitter profiles which generally agree with the distributions obtained by a full MC simulation. The implementation of the tool in a TPS shows that reliable profiles can be obtained directly from the dose calculated by the TPS, without the need for full MC simulation.
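The filter-function idea can be sketched as a one-dimensional convolution of the depth-dose profile with an element-specific kernel. Everything below is an illustrative stand-in (a toy dose curve with a Bragg-peak-like shape and a made-up kernel), not the filters fitted in the work above:

```python
import numpy as np

# Illustrative depth-dose profile (arbitrary units); a real input would
# come from the TPS dose engine.
z = np.linspace(0.0, 150.0, 301)  # depth [mm]
dose = 0.3 + 0.7 * np.exp(-0.5 * ((z - 120.0) / 4.0) ** 2)
dose[z > 126.0] = 0.0  # crude distal falloff

# Hypothetical filter kernel; the method fits one filter per chemical
# element and combines elemental contributions for a given material.
tap = np.arange(-20, 21)
kernel = np.exp(-0.15 * np.abs(tap))
kernel /= kernel.sum()  # normalize the kernel weights

# Predicted emission profile as filter (x) depth-dose
prompt_gamma = np.convolve(dose, kernel, mode="same")
```

Because the kernel is normalized and non-negative, the predicted profile is a smoothed version of the dose whose peak tracks the Bragg peak, which is the property that makes such profiles usable for range verification.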
Reverse radiance: a fast accurate method for determining luminance
NASA Astrophysics Data System (ADS)
Moore, Kenneth E.; Rykowski, Ronald F.; Gangadhara, Sanjay
2012-10-01
Reverse ray tracing from a region of interest backward to the source has long been proposed as an efficient method of determining luminous flux. The idea is to trace rays only from where the final flux needs to be known back to the source, rather than tracing in the forward direction from the source outward to see where the light goes. Once the reverse ray reaches the source, the radiance the equivalent forward ray would have represented is determined and the resulting flux computed. Although reverse ray tracing is conceptually simple, the method critically depends upon an accurate source model in both the near and far field. An overly simplified source model, such as an ideal Lambertian surface, substantially detracts from the accuracy and thus the benefit of the method. This paper introduces an improved method of reverse ray tracing that we call Reverse Radiance, which avoids assumptions about the source properties. The new method uses measured data from a Source Imaging Goniometer (SIG) that simultaneously measures near- and far-field luminous data. Incorporating these data into a fast reverse ray tracing integration method yields fast, accurate results for a wide variety of illumination problems.
A PDE-Based Fast Local Level Set Method
NASA Astrophysics Data System (ADS)
Peng, Danping; Merriman, Barry; Osher, Stanley; Zhao, Hongkai; Kang, Myungjoo
1999-11-01
We develop a fast method to localize the level set method of Osher and Sethian (1988, J. Comput. Phys. 79, 12) and address two important issues that are intrinsic to the level set method: (a) how to extend a quantity that is given only on the interface to a neighborhood of the interface; (b) how to reset the level set function to be a signed distance function to the interface efficiently without appreciably moving the interface. This fast local level set method reduces the computational effort by one order of magnitude, works in as much generality as the original one, and is conceptually simple and easy to implement. Our approach differs from previous related works in that we extract all the information needed from the level set function (or functions in multiphase flow) and do not need to find explicitly the location of the interface in the space domain. The complexity of our method for tasks such as extension and distance reinitialization is O(N), where N is the number of points in space, not O(N log N) as in the works of Sethian (1996, Proc. Nat. Acad. Sci. 93, 1591) and Helmsen and co-workers (1996, SPIE Microlithography IX, p. 253). This complexity estimate is also valid for quite general geometrically based front motion with our localized method.
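Issue (b) above, resetting the level set function to a signed distance function without moving the interface, is commonly handled by evolving the reinitialization PDE phi_t = sign(phi0) * (1 - |grad phi|) to steady state. A minimal 1D sketch with first-order Godunov upwinding and illustrative parameters (not the localized algorithm of the paper, which restricts all work to a narrow band):

```python
import numpy as np

n = 101
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
phi0 = 3.0 * (x - 0.5)  # correct zero set at x = 0.5, but wrong slope (3, not 1)
phi = phi0.copy()
S = phi0 / np.sqrt(phi0 ** 2 + h ** 2)  # smoothed sign function
dt = 0.5 * h  # CFL-stable step for unit propagation speed

for _ in range(300):
    # One-sided differences (backward dm, forward dp)
    dm = np.empty(n); dp = np.empty(n)
    dm[1:] = (phi[1:] - phi[:-1]) / h; dm[0] = dm[1]
    dp[:-1] = (phi[1:] - phi[:-1]) / h; dp[-1] = dp[-2]
    # Godunov gradient: pick the upwind branch according to the sign of S
    gp = np.sqrt(np.maximum(np.maximum(dm, 0.0) ** 2, np.minimum(dp, 0.0) ** 2))
    gm = np.sqrt(np.maximum(np.minimum(dm, 0.0) ** 2, np.maximum(dp, 0.0) ** 2))
    grad = np.where(S > 0.0, gp, gm)
    phi = phi - dt * S * (grad - 1.0)
```

At steady state |phi'| = 1 with the zero crossing unchanged, so phi relaxes to the signed distance x - 0.5; information propagates outward from the interface at unit speed, which is what makes a narrow-band (local) version of this step cheap.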
Analytical solutions for radiation-driven winds in massive stars. I. The fast regime
Araya, I.; Curé, M.; Cidale, L. S.
2014-11-01
Accurate mass-loss rate estimates are crucial in the study of the wind properties of massive stars and for testing different evolutionary scenarios. From a theoretical point of view, this implies solving a complex set of differential equations in which the radiation field and the hydrodynamics are strongly coupled. The use of an analytical expression to represent the radiation force and the solution of the equation of motion has many advantages over numerical integration. Therefore, in this work, we present an analytical expression for the solution of the equation of motion for radiation-driven winds in terms of the force multiplier parameters. This analytical expression is obtained by employing the line acceleration expression given by Villata and the methodology proposed by Müller and Vink. In addition, we find useful relationships to determine the parameters of the line acceleration given by Müller and Vink in terms of the force multiplier parameters.
Fast iterative boundary element methods for high-frequency scattering problems in 3D elastodynamics
NASA Astrophysics Data System (ADS)
Chaillat, Stéphanie; Darbas, Marion; Le Louër, Frédérique
2017-07-01
The fast multipole method is an efficient technique to accelerate the solution of large-scale 3D scattering problems with boundary integral equations. However, the fast multipole accelerated boundary element method (FM-BEM) is intrinsically based on an iterative solver, and it has been shown that the number of iterations can significantly hinder the overall efficiency of the FM-BEM. The derivation of robust preconditioners for the FM-BEM is therefore essential to increase the size of the problems that can be considered. The main constraint in the context of the FM-BEM is that the complete system is not assembled, in order to reduce computational times and memory requirements. Analytic preconditioners offer a very interesting strategy by improving the spectral properties of the boundary integral equations ahead of discretization. The main contribution of this paper is to combine an approximate adjoint Dirichlet-to-Neumann (DtN) map as an analytic preconditioner with an FM-BEM solver to treat Dirichlet exterior scattering problems in 3D elasticity. The approximations of the adjoint DtN map are derived using tools proposed in [40]. The resulting boundary integral equations are preconditioned Combined Field Integral Equations (CFIEs). We provide various numerical illustrations of the efficiency of the method for different smooth and non-smooth geometries. In particular, the number of iterations is shown to be completely independent of the number of degrees of freedom and of the frequency for convex obstacles.
MICROORGANISMS IN BIOSOLIDS: ANALYTICAL METHODS DEVELOPMENT, STANDARDIZATION, AND VALIDATION
The objective of this presentation is to discuss pathogens of concern in biosolids, the analytical techniques used to evaluate microorganisms in biosolids, and to discuss standardization and validation of analytical protocols for microbes within such a complex matrix. Implicatio...
Basal buoyancy and fast-moving glaciers: in defense of analytic force balance
NASA Astrophysics Data System (ADS)
van der Veen, C. J.
2016-06-01
The geometric approach to force balance advocated by T. Hughes in a series of publications has challenged the analytic approach by implying that the latter does not adequately account for basal buoyancy on ice streams, thereby neglecting the contribution to the gravitational driving force associated with this basal buoyancy. Application of the geometric approach to Byrd Glacier, Antarctica, yields physically unrealistic results, and it is argued that this is because of a key limiting assumption in the geometric approach. A more traditional analytic treatment of force balance shows that basal buoyancy does not affect the balance of forces on ice streams, except perhaps locally through bridging effects.
Fast-slow asymptotic for semi-analytical ignition criteria in FitzHugh-Nagumo system
NASA Astrophysics Data System (ADS)
Bezekci, B.; Biktashev, V. N.
2017-09-01
We study the problem of initiation of excitation waves in the FitzHugh-Nagumo model. Our approach follows earlier works and is based on the idea of approximating the boundary between basins of attraction of propagating waves and of the resting state as the stable manifold of a critical solution. Here, we obtain analytical expressions for the essential ingredients of the theory by singular perturbation using two small parameters, the separation of time scales of the activator and inhibitor and the threshold in the activator's kinetics. This results in a closed analytical expression for the strength-duration curve.
A fast linear reconstruction method for scanning impedance imaging.
Liu, Hongze; Hawkins, Aaron R; Schultz, Stephen M; Oliphant, Travis E
2006-01-01
Scanning electrical impedance imaging (SII) has been developed and implemented as a novel high resolution imaging modality with the potential of imaging the electrical properties of biological tissues. In this paper, a fast linear model is derived and applied to the impedance image reconstruction of scanning impedance imaging. With the help of both the deblurring concept and the reciprocity principle, this new approach leads to a calibrated approximation of the exact impedance distribution rather than a relative one from the original simplified linear method. Additionally, the method shows much less computational cost than the more straightforward nonlinear inverse method based on the forward model. The kernel function of this new approach is described and compared to the kernel of the simplified linear method. Two-dimensional impedance images of a flower petal and cancer cells are reconstructed using this method. The images reveal details not present in the measured images.
21 CFR 530.24 - Procedure for announcing analytical methods for drug residue quantification.
Code of Federal Regulations, 2013 CFR
2013-04-01
§ 530.24 Procedure for announcing analytical methods for drug residue quantification. (a) FDA may issue an order announcing a specific analytical method or methods for the quantification of...
Analytical Methods in Untargeted Metabolomics: State of the Art in 2015
Alonso, Arnald; Marsal, Sara; Julià, Antonio
2015-01-01
Metabolomics comprises the methods and techniques that are used to measure the small-molecule composition of biofluids and tissues, and is currently one of the most rapidly evolving research fields. The determination of the metabolomic profile – the metabolome – has multiple applications in many biological sciences, including the development of new diagnostic tools in medicine. Recent technological advances in nuclear magnetic resonance and mass spectrometry are significantly improving our capacity to obtain more data from each biological sample. Consequently, there is a need for fast and accurate statistical and bioinformatic tools that can deal with the complexity and volume of the data generated in metabolomic studies. In this review, we provide an update on the most commonly used analytical methods in metabolomics, starting from raw data processing and ending with pathway analysis and biomarker identification. Finally, the integration of metabolomic profiles with molecular data from other high-throughput biotechnologies is also reviewed. PMID:25798438
A two-dimensional, semi-analytic expansion method for nodal calculations
Palmtag, Scott P.
1995-08-01
Most nodal methods used today are based upon the transverse integration procedure, in which the multi-dimensional flux shape is integrated over the transverse directions to produce a set of coupled one-dimensional flux shapes. The one-dimensional flux shapes are then solved either analytically or by representing the flux shape by a finite polynomial expansion. While these methods have been verified for most light-water reactor applications, they have been found to have difficulty predicting the large thermal flux gradients near the interfaces of highly enriched MOX fuel assemblies. A new method is presented here in which the neutron flux is represented by a non-separable, two-dimensional, semi-analytic flux expansion. The main features of this method are: (1) the leakage terms from the node are modeled explicitly, and therefore the transverse integration procedure is not used; (2) the corner-point flux values for each node are directly edited from the solution method, and a corner-point interpolation is not needed in the flux reconstruction; (3) the thermal flux expansion contains hyperbolic terms representing analytic solutions to the thermal flux diffusion equation; and (4) the thermal flux expansion contains a thermal-to-fast flux ratio term which reduces the number of polynomial expansion functions needed to represent the thermal flux. This new nodal method has been incorporated into the computer code COLOR2G and has been used to solve a two-dimensional, two-group colorset problem containing uranium and highly enriched MOX fuel assemblies. The results from this calculation are compared to those found using a code based on the traditional transverse integration procedure.
Key aspects of analytical method validation and linearity evaluation.
Araujo, Pedro
2009-08-01
Method validation may be regarded as one of the best-known areas in analytical chemistry, as reflected in the substantial number of articles submitted and published in peer-reviewed journals every year. However, some of the relevant parameters recommended by regulatory bodies are often used interchangeably and incorrectly, or are miscalculated, owing to the scarcity of references evaluating some of the terms and to incorrect application of the mathematical and statistical approaches used in their estimation. These mistakes have led to misinterpretation and ambiguity in the terminology and, in some instances, to wrong scientific conclusions. In this article, the definitions of various relevant performance indicators, such as selectivity, specificity, accuracy, precision, linearity, range, limit of detection, limit of quantitation, ruggedness, and robustness, are critically discussed with a view to preventing their erroneous usage and ensuring scientific correctness and consistency among publications.
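As a concrete illustration of three of the indicators above (linearity, limit of detection, and limit of quantitation), here is a minimal sketch using the common ICH-style calibration-curve formulas LOD = 3.3*s/b and LOQ = 10*s/b, where b is the calibration slope and s the residual standard deviation; the calibration data are synthetic:

```python
import numpy as np

# Synthetic calibration standards (concentration, instrument response)
conc = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])
resp = np.array([1.02, 2.05, 3.98, 8.10, 15.90, 32.20])

# Ordinary least-squares fit of the calibration line y = b*x + a
b, a = np.polyfit(conc, resp, 1)

# Residual standard deviation s (n - 2 degrees of freedom for a line)
residuals = resp - (b * conc + a)
s = np.sqrt(np.sum(residuals ** 2) / (len(conc) - 2))

# ICH-style detection and quantitation limits from slope and residual SD
lod = 3.3 * s / b
loq = 10.0 * s / b

# Coefficient of determination as a (necessary, not sufficient) linearity check
r2 = np.corrcoef(conc, resp)[0, 1] ** 2
```

Note that by construction LOQ/LOD = 10/3.3, and that a high r2 alone does not prove linearity; residual plots over the working range are the safer check, which is one of the misuses the article discusses.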
Analytical methods for determination of anticoagulant rodenticides in biological samples.
Imran, Muhammad; Shafi, Humera; Wattoo, Sardar Ali; Chaudhary, Muhammad Taimoor; Usman, Hafiz Faisal
2015-08-01
Anticoagulant rodenticides belong to a heterogeneous group of compounds used to kill rodents. They bind to the enzyme complexes responsible for the recycling of vitamin K, thus impairing the coagulation process. Rodenticides are among the most common household toxicants and exhibit a wide variety of toxicities in non-target species, especially humans, dogs, and cats. This article reviews published analytical methods reported in the literature for the qualitative and quantitative determination of anticoagulant rodenticides in biological specimens. These techniques include high-performance liquid chromatography coupled with ultraviolet and fluorescence detectors, liquid chromatography electrospray ionization tandem mass spectrometry, liquid chromatography with high-resolution tandem mass spectrometry, ultra-performance liquid chromatography mass spectrometry, gas chromatography mass spectrometry, ion chromatography with fluorescence detection, ion chromatography electrospray ionization ion trap mass spectrometry, and ion chromatography electrospray ionization tandem mass spectrometry.
Analytical methods for volatile compounds in wheat bread.
Pico, Joana; Gómez, Manuel; Bernal, José; Bernal, José Luis
2016-01-08
Bread aroma is one of the main requirements for its acceptance by consumers, since it is one of the first attributes perceived. Sensory analysis, crucial for correlation with human perception, has limitations and needs to be complemented with instrumental analysis. Gas chromatography coupled to mass spectrometry is usually selected as the technique to determine bread volatile compounds, although proton-transfer-reaction mass spectrometry is also beginning to be used to monitor aroma processes. Solvent extraction, supercritical fluid extraction, and headspace analysis are the main options for sample treatment. The present review focuses on the different sample treatments and instrumental alternatives reported in the literature for analysing volatile compounds in wheat bread, discussing their advantages and limitations. The usual parameters employed in these analytical methods are also described.
Application of analytical methods in authentication and adulteration of honey.
Siddiqui, Amna Jabbar; Musharraf, Syed Ghulam; Choudhary, M Iqbal; Rahman, Atta-Ur-
2017-02-15
Honey is synthesized from flower nectar and has been famed for its tremendous therapeutic potential since ancient times. Many factors influence the basic properties of honey, including the nectar-providing plant species, the bee species, the geographic area, and harvesting conditions. The quality and composition of honey are also affected by many other factors, such as overfeeding of bees with sucrose, harvesting prior to maturity, and adulteration with sugar syrups. Due to the complex nature of honey, it is often challenging to authenticate its purity and quality using common methods such as physicochemical parameters, and more specialized procedures need to be developed. This article reviews the literature (between 2000 and 2016) on the use of analytical techniques, mainly NMR spectroscopy, for the authentication of honey, its botanical and geographical origin, and its adulteration with sugar syrups. NMR is a powerful technique that can be used for fingerprinting to compare various samples.
Analytical Failure Prediction Method Developed for Woven and Braided Composites
NASA Technical Reports Server (NTRS)
Min, James B.
2003-01-01
Historically, advances in aerospace engine performance and durability have been linked to improvements in materials. Recent developments in ceramic matrix composites (CMCs) have led to increased interest in CMCs as a route to revolutionary gains in engine performance. The use of CMCs promises many advantages for advanced turbomachinery engine development and may be especially beneficial for aerospace engines. The most beneficial aspects of CMC materials may be their ability to maintain strength up to 2500 °F, their internal material damping, and their relatively low density. Ceramic matrix composites reinforced with two-dimensional woven and braided fabric preforms are being considered for NASA's next-generation reusable rocket turbomachinery applications (for example, see the preceding figure). However, the architecture of a textile composite is complex, and the parameters controlling its strength properties are therefore numerous. This necessitates the development of engineering approaches that combine analytical methods with limited testing to provide effective, validated design analyses for textile composite structures.
Exploring Lab Tests Over Utilization Patterns Using Health Analytics Methods.
Khalifa, Mohamed; Zabani, Ibrahim; Khalid, Parwaiz
2016-01-01
Healthcare resources are overutilized, contributing to the growing costs of care. Although laboratory testing is essential, it can be expensive and excessive. King Faisal Specialist Hospital and Research Center, Saudi Arabia, studied lab test utilization patterns using health analytics methods. The objective was to identify patterns of lab test utilization and to develop recommendations to control overutilization. Three overutilization patterns were identified: routinely ordering expensive tests for many patients, unnecessarily repeating lab tests, and a combination of the two. Two recommendations were suggested: a user approach, modifying user behavior through orientation about the impact of overutilization on the cost-effectiveness of healthcare; and a system approach, implementing system alerts to help physicians check results and identify the date of the last lab test done, with information about the appropriate ordering frequency and the medically significant intervals at which such a test should be repeated.
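The system-approach alert described above can be sketched as a simple interval check. This is an illustrative sketch only: the test names and minimum repeat intervals below are hypothetical placeholders, not clinical guidance, and a real system would draw them from institutional guidelines.

```python
from datetime import date, timedelta

# Hypothetical minimum repeat intervals in days, for illustration only;
# real values must come from clinical guidelines.
MIN_INTERVAL_DAYS = {"HbA1c": 90, "lipid_panel": 42, "TSH": 42}

def repeat_alert(test, last_done, ordered_on):
    """Return True if a new order falls inside the minimum repeat interval
    since the last result, i.e. the order should trigger a system alert."""
    interval = MIN_INTERVAL_DAYS.get(test)
    if interval is None or last_done is None:
        return False  # no rule for this test, or no prior result: let it through
    return (ordered_on - last_done) < timedelta(days=interval)
```

For example, re-ordering an HbA1c 22 days after the last one would raise the alert, while a repeat after four months would not.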
Updates to Selected Analytical Methods for Environmental Remediation and Recovery (SAM)
View information on the latest updates to methods included in EPA's Selected Analytical Methods for Environmental Remediation and Recovery (SAM), including the newest recommended methods and publications.
[Fast Implementation Method of Protein Spots Detection Based on CUDA].
Xiong, Bangshu; Ye, Yijia; Ou, Qiaofeng; Zhang, Haodong
2016-02-01
In order to improve the efficiency of protein spot detection, a fast detection method based on CUDA is proposed. First, parallel algorithms were developed for the three most time-consuming parts of the protein spot detection pipeline: image preprocessing, coarse protein spot detection, and overlapping spot segmentation. Then, in accordance with CUDA's single-instruction, multiple-thread execution model, a data-space strategy of dividing two-dimensional (2D) images into blocks was adopted, and optimizations such as shared memory and 2D texture memory were applied. The results show that the operating efficiency of this method is markedly improved compared with CPU computation, and the improvement grows with image size: for an image of 2,048 x 2,048, the CPU method needs 52,641 ms, whereas the GPU needs only 4,384 ms.
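The block-decomposition strategy can be mimicked on the CPU. The following sketch (an illustrative analogue, not the authors' CUDA code) splits a 2D image into tiles, applies a per-tile function independently, mirroring how a CUDA grid assigns one thread block per tile, and stitches the results back together; the block size and the thresholding example are assumptions for illustration.

```python
def split_into_blocks(image, bs):
    """Split a 2D image (list of rows) into bs x bs tiles; border tiles may
    be smaller, mirroring a CUDA grid decomposition with partial blocks."""
    h, w = len(image), len(image[0])
    blocks = {}
    for by in range(0, h, bs):
        for bx in range(0, w, bs):
            blocks[(by, bx)] = [row[bx:bx + bs] for row in image[by:by + bs]]
    return blocks

def map_blocks(image, bs, fn):
    """Apply fn to every tile independently (each tile would map to one CUDA
    thread block in the GPU version) and stitch the results back together."""
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for (by, bx), block in split_into_blocks(image, bs).items():
        result = fn(block)
        for i, row in enumerate(result):
            out[by + i][bx:bx + len(row)] = row
    return out
```

With an identity function the stitched output reproduces the input, which is a quick way to check that the decomposition and reassembly are consistent.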
The Finite Analytic Method for steady and unsteady heat transfer problems
NASA Technical Reports Server (NTRS)
Chen, C.-J.; Li, P.
1980-01-01
A new numerical method, the finite analytic method, for solving partial differential equations is introduced. Its basic idea is the incorporation of local analytic solutions into the numerical solution of the problem. The method first divides the total problem region into small subregions in which local analytic solutions are obtained. An algebraic equation is then derived from the local analytic solution for each subregion, relating an interior nodal value at a point P in the subregion to its neighboring nodal values. The assembly of all the local analytic solutions thus provides the finite-analytic numerical solution of the problem. In this paper the finite analytic method is illustrated by solving steady and unsteady heat transfer problems.
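The nodal relation described above can be illustrated with a one-dimensional toy problem (an assumption for illustration; the paper treats more general heat transfer problems). For steady 1D conduction with no source, T'' = 0, the local analytic solution on each two-cell subregion [W, E] is linear, so each interior node P satisfies T_P = (T_W + T_E)/2; sweeping this relation assembles the local solutions into the global one.

```python
def finite_analytic_1d(t_left, t_right, n_interior, sweeps=2000):
    """Steady 1D heat conduction T'' = 0 with fixed end temperatures.
    The local analytic solution on each subregion is linear, giving the
    nodal relation T_P = (T_W + T_E) / 2; Gauss-Seidel sweeps assemble
    these local solutions into the global finite-analytic solution."""
    T = [t_left] + [0.0] * n_interior + [t_right]
    for _ in range(sweeps):
        for i in range(1, n_interior + 1):
            T[i] = 0.5 * (T[i - 1] + T[i + 1])
    return T
```

For this toy problem the exact solution is the straight line between the boundary temperatures, so the converged nodal values can be checked directly.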
Applying analytical and experimental methods to characterize engineered components
NASA Astrophysics Data System (ADS)
Munn, Brian S.
A variety of analytical and experimental methods were employed to characterize two very different types of engineered components: monolithic silicon carbide tiles and M12 x 1.75 Class 9.8 steel fasteners. A new application of the hole-drilling technique was developed for monolithic silicon carbide tiles of varying thicknesses. This work was driven by the need first to develop a reliable method for measuring residual stresses and then to validate the methodology by characterizing residual stresses in the tiles of interest. The residual stresses measured in all tiles were tensile. The highest residual stresses occurred at the surface and decayed exponentially with depth. The residual tensile stresses also tended to decrease with increasing specimen thickness. Thermal-mechanical interactions were successfully analyzed via a one-way coupled FEA modeling approach, for which the key input was an appropriate heat transfer rate. Varying the heat transfer rate in the FEA model and comparing the stress output to experimental residual stress values yielded a favorable numerical estimate of that rate. The fatigue behavior of the M12 x 1.75 Class 9.8 steel test fastener was studied extensively through a variety of experimental and analytical techniques, with particular interest in the underlying interaction between notch plasticity and overall fatigue behavior. A very large data set of fastener fatigue behavior was generated with respect to mean stress, and a series of endurance limit curves was established for mean stress values ranging from minimal to the yield strength of the steel fastener (0 ≤ σm ≤ σy). This wide range in mean stress values produced a change in notch-tip plasticity, which locally diminished the mean stress and thereby increased expected fatigue life. The change in notch plasticity was identified by residual stress
Heuskin, Stéphanie; Rozet, Eric; Lorge, Stéphanie; Farmakidis, Julien; Hubert, Philippe; Verheggen, François; Haubruge, Eric; Wathelet, Jean-Paul; Lognay, Georges
2010-12-01
The validation of a fast GC-FID analytical method for the quantitative determination of the semiochemical sesquiterpenes E-beta-farnesene and beta-caryophyllene, to be used in an integrated pest management approach, is described. Accuracy profiles using total error as the decision criterion were used to verify the overall accuracy of the method's results within a well-defined range of concentrations and to determine the lowest limit of quantification for each analyte. Furthermore, this approach allowed the selection of a very simple and reliable regression model for the calibration curve of both analytes, as well as the estimation of measurement uncertainty without any additional experiments. Finally, the validated method was used to quantify semiochemicals in slow-release formulations, the goal being to verify how well alginate gel bead formulations protect sesquiterpenes against oxidation and degradation. The results showed that alginate beads are adequate slow-release devices that protect the bioactive molecules for at least twenty days. Copyright (c) 2010 Elsevier B.V. All rights reserved.
[Analytical methods for control of foodstuffs made from bioengineered plants].
Chernysheva, O N; Sorokina, E Iu
2013-01-01
Foodstuffs produced by modern biotechnology require special control, and the analytical methods used for this purpose are constantly being refined. When choosing an analysis strategy, several factors have to be assessed: specificity, sensitivity, practicality of the method, and time efficiency. To date, GMO testing methods are mainly based on the inserted DNA sequences and the newly produced proteins in GMOs. Protein detection methods are based mainly on ELISA; the specific detection of a novel protein synthesized by a gene introduced during transformation constitutes an alternative approach for the identification of GMOs. However, the genetic modification is not always specifically directed at the production of a novel protein, and does not always result in protein expression levels sufficient for detection purposes. In addition, some proteins may be expressed only in specific parts of the plant, or at different levels in distinct parts of the plant. As DNA is a rather stable molecule relative to proteins, it is the preferred target for any kind of sample, and DNA-based methods are more sensitive and specific than protein detection methods. PCR-based tests can be categorized into several levels of specificity. The least specific methods are commonly called "screening methods" and target DNA elements, such as promoters and terminators, that are present in many different GMOs. For routine screening purposes, the regulatory elements of the 35S promoter, derived from the Cauliflower Mosaic Virus, and the NOS terminator, derived from the nopaline synthase gene of Agrobacterium tumefaciens, are used as target sequences. The second level is "gene-specific methods", which target a part of the DNA harbouring the active gene associated with the specific genetic modification. The highest specificity is achieved when the target is the unique junction, found at the integration locus, between the inserted DNA and the recipient genome; these are called "event-specific methods". For a
A Domain Decomposition Parallelization of the Fast Marching Method
2003-12-01
Furthermore, the following attribute of the Fast Marching Method can be discerned: ATTRIBUTE 2. The sequential loop steps (c)-(f) sort all accepted nodes...process #1. (Cm) Perform sequential FMM algorithm steps (c)-(f). (Em) Receive results for nodes G°k < Go from process #0 and results for nodes Gt°k...Go = min(GDo...0 N), and mark it as accepted. If Go ∈ Vi, go to next step, else go to step (F). (D) Perform sequential FMM algorithm steps (c)-(e
NASA Astrophysics Data System (ADS)
Chakraborty, Bidisha; Heyde, Brecht; Alessandrini, Martino; D'hooge, Jan
2016-04-01
Image registration techniques using free-form deformation models have shown promising results for 3D myocardial strain estimation from ultrasound. However, the use of this technique has mostly been limited to research institutes due to its high computational demand, which stems primarily from the regularization term ensuring spatially smooth cardiac strain estimates. Indeed, this term typically requires numerically evaluating derivatives of the transformation field in each voxel of the image during every iteration of the optimization process. In this paper, we replace this time-consuming step with a closed-form solution directly associated with the transformation field, resulting in a speed-up factor of ~10-60,000 for a typical 3D B-mode image of 250³ to 500³ voxels, depending on the size and the parametrization of the transformation field. The performance of the numeric and analytic solutions was contrasted by computing tracking and strain accuracy on two realistic synthetic 3D cardiac ultrasound sequences mimicking two ischemic motion patterns. The mean and standard deviation of the displacement errors over the cardiac cycle were 0.68+/-0.40 mm for the numeric solution and 0.75+/-0.43 mm for the analytic solution. Correlations for the radial, longitudinal and circumferential strain components at end-systole were 0.89, 0.83 and 0.95 for the numeric regularization versus 0.90, 0.88 and 0.92 for the analytic regularization. The analytic solution matched the performance of the numeric solution, as no statistically significant differences (p>0.05) were found in terms of bias or limits of agreement.
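The core of the speed-up can be sketched in 1D (an assumption for clarity; the paper works with 3D transformation fields). For a field that is linear in its parameters, a second-derivative smoothness penalty can be evaluated either numerically, by sampling the field and differencing at every voxel each iteration, or analytically, as a quadratic form whose matrix is precomputed once from the parametrization. The basis matrix below is an arbitrary illustrative choice.

```python
def matmul(A, B):
    """Plain matrix product for small illustrative matrices."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def numeric_regularizer(B, c):
    """Numeric route: evaluate the field T = B c at every sample, then sum
    squared second differences over interior samples -- work that is redone
    at every optimizer iteration."""
    T = [sum(B[j][k] * c[k] for k in range(len(c))) for j in range(len(B))]
    return sum((T[j - 1] - 2 * T[j] + T[j + 1]) ** 2 for j in range(1, len(T) - 1))

def analytic_regularizer_matrix(B):
    """Closed-form route: precompute Q = (D B)^T (D B) once, where D is the
    second-difference operator; the penalty is then just c^T Q c."""
    n = len(B)
    D = [[0.0] * n for _ in range(n - 2)]
    for j in range(1, n - 1):
        D[j - 1][j - 1], D[j - 1][j], D[j - 1][j + 1] = 1.0, -2.0, 1.0
    DB = matmul(D, B)
    return matmul(list(map(list, zip(*DB))), DB)  # (DB)^T (DB)

def quadratic_form(Q, c):
    return sum(c[i] * Q[i][j] * c[j] for i in range(len(c)) for j in range(len(c)))
```

Both routes give identical values; the analytic one moves all per-sample work into a one-time precomputation, which is the source of the per-iteration saving.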
Segmentation of hand radiographs using fast marching methods
NASA Astrophysics Data System (ADS)
Chen, Hong; Novak, Carol L.
2006-03-01
Rheumatoid Arthritis is one of the most common chronic diseases. Joint space width in hand radiographs is evaluated to assess joint damage in order to monitor progression of disease and response to treatment. Manual measurement of joint space width is time-consuming and highly prone to inter- and intra-observer variation. We propose a method for automatic extraction of finger bone boundaries using fast marching methods for quantitative evaluation of joint space width. The proposed algorithm includes two stages: location of hand joints followed by extraction of bone boundaries. By setting the propagation speed of the wave front as a function of image intensity values, the fast marching algorithm extracts the skeleton of the hands, in which each branch corresponds to a finger. The finger joint locations are then determined by using the image gradients along the skeletal branches. In order to extract bone boundaries at joints, the gradient magnitudes are utilized for setting the propagation speed, and the gradient phases are used for discriminating the boundaries of adjacent bones. The bone boundaries are detected by searching for the fastest paths from one side of each joint to the other side. Finally, joint space width is computed based on the extracted upper and lower bone boundaries. The algorithm was evaluated on a test set of 8 two-hand radiographs, including images from healthy patients and from patients suffering from arthritis, gout and psoriasis. Using our method, 97% of 208 joints were accurately located and 89% of 416 bone boundaries were correctly extracted.
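The "fastest path across a joint" idea can be sketched with Dijkstra's algorithm on a grid, which is a first-order discrete stand-in for fast marching (true fast marching solves the eikonal equation with an upwind scheme; this simplification, and the toy cost grid in the test, are assumptions for illustration). Each cell's travel cost plays the role of the inverse front speed, so low-cost ridges, such as strong gradient magnitudes along a bone boundary, attract the path.

```python
import heapq

def fastest_path(cost, start, goal):
    """Front propagation on a grid: cost[y][x] is the local travel cost
    (inverse of front speed). Returns the minimum-cost 4-connected path
    from start to goal -- the discrete analogue of tracing the front's
    fastest arrival from one side of a joint to the other."""
    h, w = len(cost), len(cost[0])
    dist = {start: cost[start[0]][start[1]]}
    prev = {}
    heap = [(dist[start], start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        y, x = node
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w:
                nd = d + cost[ny][nx]
                if nd < dist.get((ny, nx), float("inf")):
                    dist[(ny, nx)] = nd
                    prev[(ny, nx)] = node
                    heapq.heappush(heap, (nd, (ny, nx)))
    path, node = [], goal
    while node != start:  # walk predecessors back to the start
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1]
```

On a grid with an expensive barrier, the recovered path detours around it, just as the extracted boundary follows the cheap (high-gradient) cells.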
21 CFR 530.22 - Safe levels and analytical methods for food-producing animals.
Code of Federal Regulations, 2014 CFR
2014-04-01
... analytical method; or (3) Establish a safe level based on other appropriate scientific, technical, or... 21 Food and Drugs 6 2014-04-01 2014-04-01 false Safe levels and analytical methods for food... § 530.22 Safe levels and analytical methods for food-producing animals. (a) FDA may establish a safe...
21 CFR 530.22 - Safe levels and analytical methods for food-producing animals.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 21 Food and Drugs 6 2010-04-01 2010-04-01 false Safe levels and analytical methods for food... § 530.22 Safe levels and analytical methods for food-producing animals. (a) FDA may establish a safe... analytical method; or (3) Establish a safe level based on other appropriate scientific, technical, or...
21 CFR 530.22 - Safe levels and analytical methods for food-producing animals.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 21 Food and Drugs 6 2011-04-01 2011-04-01 false Safe levels and analytical methods for food... § 530.22 Safe levels and analytical methods for food-producing animals. (a) FDA may establish a safe... analytical method; or (3) Establish a safe level based on other appropriate scientific, technical, or...
21 CFR 530.40 - Safe levels and availability of analytical methods.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 21 Food and Drugs 6 2011-04-01 2011-04-01 false Safe levels and availability of analytical methods... Safe levels and availability of analytical methods. (a) In accordance with § 530.22, the following safe... accordance with § 530.22, the following analytical methods have been accepted by FDA: ...
21 CFR 530.22 - Safe levels and analytical methods for food-producing animals.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 21 Food and Drugs 6 2013-04-01 2013-04-01 false Safe levels and analytical methods for food... § 530.22 Safe levels and analytical methods for food-producing animals. (a) FDA may establish a safe... analytical method; or (3) Establish a safe level based on other appropriate scientific, technical, or...
21 CFR 530.40 - Safe levels and availability of analytical methods.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 21 Food and Drugs 6 2014-04-01 2014-04-01 false Safe levels and availability of analytical methods... Safe levels and availability of analytical methods. (a) In accordance with § 530.22, the following safe... accordance with § 530.22, the following analytical methods have been accepted by FDA: ...
40 CFR 260.21 - Petitions for equivalent testing or analytical methods.
Code of Federal Regulations, 2014 CFR
2014-07-01
... analytical methods. 260.21 Section 260.21 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... Petitions for equivalent testing or analytical methods. (a) Any person seeking to add a testing or analytical method to part 261, 264, or 265 of this chapter may petition for a regulatory amendment under this...
21 CFR 530.40 - Safe levels and availability of analytical methods.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 21 Food and Drugs 6 2010-04-01 2010-04-01 false Safe levels and availability of analytical methods... Safe levels and availability of analytical methods. (a) In accordance with § 530.22, the following safe...) In accordance with § 530.22, the following analytical methods have been accepted by FDA: [Reserved] ...
21 CFR 530.40 - Safe levels and availability of analytical methods.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 21 Food and Drugs 6 2013-04-01 2013-04-01 false Safe levels and availability of analytical methods... Safe levels and availability of analytical methods. (a) In accordance with § 530.22, the following safe... accordance with § 530.22, the following analytical methods have been accepted by FDA: ...
Review of Processing and Analytical Methods for Francisella ...
Journal Article. The etiological agent of tularemia, Francisella tularensis, is a resilient organism in the environment and can be acquired in many ways (infectious aerosols and dust, contaminated food and water, infected carcasses, and arthropod bites). However, isolating F. tularensis from environmental samples can be challenging due to its nutritionally fastidious and slow-growing nature. To determine the current state of the science regarding available processing and analytical methods for the detection and recovery of F. tularensis from water and soil matrices, a review of the literature was conducted. Culture, immunoassays, and genomic identification were the most commonly reported methods for F. tularensis detection in environmental samples. Other methods included combined culture and genomic analysis for rapid quantification of viable microorganisms, and the use of one assay to identify multiple pathogens from a single sample. Gaps identified during this review suggest that further work to integrate culture and genomic identification would advance our ability to detect, and to assess the viability of, Francisella spp. Optimization of DNA extraction, whole-genome amplification with inhibition-resistant polymerases, and multiagent microarray detection would also advance biothreat detection.
Digitization of photographic slides: simple, effective, fast, and inexpensive method.
Camarena, Lázaro Cárdenas; Guerrero, María Teresa
2002-03-01
Technological evolution has changed multiple areas of plastic surgery, including photography. The photograph is one of the instruments most used by the plastic surgeon, and it cannot be eliminated by technological changes. The principal change in photography is that images can now be captured with digital cameras instead of on slides. Despite the multiple advantages that digital photography represents, many surgeons are resisting the change. One of the main reasons for this resistance is the large quantity of photographic slides that need to be digitized for use at scientific conferences as well as in publications. The existing methods and techniques for digitizing slides are costly and time-consuming, and there is a risk of loss of definition and image brightness. The authors present a simple, effective, fast, and inexpensive method for digitizing slides. This method has been validated by various plastic surgeons and is effective for use in multimedia presentations and for paper printouts of publication quality.
Method for Accurate Surface Temperature Measurements During Fast Induction Heating
NASA Astrophysics Data System (ADS)
Larregain, Benjamin; Vanderesse, Nicolas; Bridier, Florent; Bocher, Philippe; Arkinson, Patrick
2013-07-01
A robust method is proposed for the measurement of surface temperature fields during induction heating. It is based on the original coupling of temperature-indicating lacquers and a high-speed camera system. Image analysis tools have been implemented to automatically extract the temporal evolution of isotherms. This method was applied to the fast induction treatment of a 4340 steel spur gear, allowing the full history of surface isotherms to be accurately documented for a sequential heating, i.e., a medium frequency preheating followed by a high frequency final heating. Three isotherms, i.e., 704, 816, and 927°C, were acquired every 0.3 ms with a spatial resolution of 0.04 mm per pixel. The information provided by the method is described and discussed. Finally, the transformation temperature Ac1 is linked to the temperature on specific locations of the gear tooth.
A method of fast mosaic for massive UAV images
NASA Astrophysics Data System (ADS)
Xiang, Ren; Sun, Min; Jiang, Cheng; Liu, Lei; Zheng, Hui; Li, Xiaodong
2014-11-01
With the development of UAV technology, UAVs are widely used in multiple fields such as agriculture, forest protection, mineral exploration, natural disaster management and public security surveillance. In contrast to traditional manned aerial remote sensing platforms, UAVs are cheaper and more flexible to use, so users can obtain massive image data. Processing these data, however, takes a lot of time; for example, Pix4UAV needs approximately 10 hours to process 1,000 images on a high-performance PC. Disaster management and many other fields require a quick response, which is hard to achieve with massive image data. Aiming to overcome the disadvantages of high time consumption and manual interaction, this article proposes a solution for fast UAV image stitching. GPS and POS data are used to pre-process the original UAV images; flight belts, and the relations between belts and images, are recognized automatically by the program, while useless images are picked out at the same time. This speeds up the search for match points between images. The Levenberg-Marquardt algorithm is improved so that parallel computing can be applied to notably shorten the time for global optimization. Besides the traditional mosaic result, the method can also generate a superoverlay result for Google Earth, which provides a fast and easy way to display the result data. In order to verify the feasibility of this method, a fully automated fast mosaic system for massive UAV images was developed; no manual interaction is needed after the original images and GPS data are provided. A test using 800 images of the Kelan River in Xinjiang Province shows that this system reduces time consumption by 35%-50% compared with traditional methods and greatly increases the response speed of UAV image processing.
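The automatic belt recognition step can be sketched with a simplified criterion (an assumption; the paper uses full GPS and POS data): start a new flight belt whenever the course between consecutive GPS fixes turns by more than a threshold angle, since survey flights reverse heading between adjacent belts. The threshold value is an illustrative choice.

```python
import math

def split_into_belts(gps, turn_deg=120.0):
    """Group an ordered UAV image sequence into flight belts: begin a new
    belt whenever the course between consecutive GPS fixes changes by more
    than turn_deg degrees (a simplified stand-in for full POS analysis).
    gps is a list of (x, y) fixes, one per image; returns lists of indices."""
    def course(p, q):
        return math.degrees(math.atan2(q[1] - p[1], q[0] - p[0]))
    belts, current = [], [0]
    prev_course = None
    for i in range(1, len(gps)):
        c = course(gps[i - 1], gps[i])
        if prev_course is not None:
            # smallest signed angle between the two courses
            turn = abs((c - prev_course + 180) % 360 - 180)
            if turn > turn_deg:
                belts.append(current)
                current = []
        current.append(i)
        prev_course = c
    belts.append(current)
    return belts
```

Knowing belt membership restricts match-point search to images within a belt and between adjacent belts, which is where the speed-up in the matching stage comes from.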
Crovelli, Robert A.; revised by Charpentier, Ronald R.
2012-01-01
The U.S. Geological Survey (USGS) periodically assesses petroleum resources of areas within the United States and the world. The purpose of this report is to explain the development of an analytic probabilistic method and spreadsheet software system called the Analytic Cell-Based Continuous Energy Spreadsheet System (ACCESS). The ACCESS method is based upon mathematical equations derived from probability theory. The ACCESS spreadsheet can be used to calculate estimates of the undeveloped oil, gas, and natural gas liquids (NGL) resources in a continuous-type assessment unit, an assessment unit being a mappable volume of rock in a total petroleum system. In this report, the geologic assessment model is defined first, the analytic probabilistic method is described second, and the ACCESS spreadsheet is described third. In this revised version of Open-File Report 00-044, the text has been updated to reflect modifications made to the ACCESS program, and two versions of the program are added as appendixes.
Jim Sterbentz
2011-06-01
This study evaluates how fast neutron fluence >0.1 MeV correlates with material damage (i.e., the total fluence spectrum folded with the respective material's displacements-per-atom [dpa] damage response function) for the specific material fluence spectra encountered in Next Generation Nuclear Plant (NGNP) service and in the irradiation tests conducted in material test reactors (MTRs) for the fuel materials addressed in the white paper. It also reports how the evaluated correlations of >0.1 MeV fluence to material damage vary between the different spectral conditions encountered in material service versus testing.
Moving least-squares enhanced Shepard interpolation for the fast marching and string methods
NASA Astrophysics Data System (ADS)
Burger, Steven K.; Liu, Yuli; Sarkar, Utpal; Ayers, Paul W.
2009-01-01
The number of potential energy calculations required by the quadratic string method (QSM) and the fast marching method (FMM) is significantly reduced by using Shepard interpolation, with a moving least-squares fit of the higher-order derivatives of the potential. The derivatives of the potential are fitted up to fifth order. With an error estimate for the interpolated values, this moving least-squares enhanced Shepard interpolation scheme drastically reduces the number of potential energy calculations in FMM, often by up to 80%. Fitting up through the highest order tested here (fifth order) gave the best results for all grid spacings. For QSM, using enhanced Shepard interpolation gave slightly better results than using the usual approximate second-order, damped Broyden-Fletcher-Goldfarb-Shanno updated Hessian to approximate the surface. To test these methods we examined two analytic potentials, the rotational dihedral potential of alanine dipeptide, and the SN2 reaction of methyl chloride with fluoride.
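The basic mechanism of Shepard interpolation with derivative information can be sketched in 1D (an assumption for clarity; the paper fits multidimensional derivatives up to fifth order via moving least squares, and the weight exponent below is an illustrative choice). Each known data point contributes its local Taylor expansion, and the contributions are blended with inverse-distance weights, so the interpolant is exact at the data points and dominated by the nearest expansion elsewhere.

```python
def shepard_taylor(x, data, p=4):
    """First-order Shepard interpolation in 1D. Each data point i contributes
    its Taylor expansion V_i + dV_i * (x - x_i), blended with inverse-distance
    weights w_i = 1 / |x - x_i|**p. Exact at the data points themselves."""
    num = den = 0.0
    for xi, vi, dvi in data:  # (position, energy, gradient)
        if x == xi:
            return vi  # avoid division by zero: return the stored value
        w = 1.0 / abs(x - xi) ** p
        num += w * (vi + dvi * (x - xi))
        den += w
    return num / den
```

Near a data point the interpolant follows that point's Taylor expansion, which is why adding fitted higher-order derivatives (as in the paper) improves accuracy between grid points and lets FMM skip new potential energy calculations there.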
An analytical method for predicting postwildfire peak discharges
Moody, John A.
2012-01-01
The analytical method presented here predicts postwildfire peak discharge; it was developed from analysis of paired rainfall and runoff measurements collected from selected burned basins. Data were collected from 19 mountainous basins burned by eight wildfires in different hydroclimatic regimes in the western United States (California, Colorado, Nevada, New Mexico, and South Dakota). Most of the data were collected for the year of the wildfire and for 3 to 4 years afterwards. These data provide some estimate of how postwildfire peak discharges change with time; such changes are known to be transient but have received little documentation. The only required inputs for the analytical method are the burned area and a quantitative measure of soil burn severity (the change in the normalized burn ratio), which is derived from Landsat reflectance data and is available from either the U.S. Department of Agriculture Forest Service or the U.S. Geological Survey. The method predicts the postwildfire peak discharge per unit burned area for the year of a wildfire, the first year after a wildfire, and the second year after a wildfire. It can be used at three levels of information depending on the data available to the user; each subsequent level requires either more data or more processing of the data. Level 1 requires only the burned area. Level 2 requires the burned area and the basin-average value of the change in the normalized burn ratio. Level 3 requires the burned area and the calculation of the hydraulic functional connectivity, a variable that incorporates the sequence of soil burn severity along hillslope flow paths within the burned basin. Measurements indicate that the unit peak discharge response increases abruptly when the 30-minute maximum rainfall intensity exceeds about 5 millimeters per hour (0.2 inches per hour). This threshold may relate to a change in runoff generation from saturated-excess to infiltration-excess overland flow. The
Qian Cutrone, Jingfang Jenny; Huang, Xiaohua Stella; Kozlowski, Edward S; Bao, Ye; Wang, Yingzi; Poronsky, Christopher S; Drexler, Dieter M; Tymiak, Adrienne A
2017-05-10
Synthetic macrocyclic peptides with natural and unnatural amino acids have gained considerable attention from a number of pharmaceutical and biopharmaceutical companies in recent years as a promising approach to drug discovery, particularly for targets involving protein-protein or protein-peptide interactions. Analytical scientists charged with characterizing these leads face multiple challenges, including dealing with a class of complex molecules with the potential for multiple isomers and variable charge states, and the absence of established standards for acceptable analytical characterization of materials used in drug discovery. In addition, because no intermediate purification is performed during solid-phase peptide synthesis, the final products usually contain a complex profile of impurities. In this paper, practical analytical strategies and methodologies were developed to address these challenges, including a tiered approach to assessing the purity of macrocyclic peptides at different stages of drug discovery. Our results also showed that the successful progression and characterization of a new drug discovery modality benefited from active analytical engagement, focusing on fit-for-purpose analyses and leveraging a broad palette of analytical technologies and resources. Copyright © 2017. Published by Elsevier B.V.
Measurement of company effectiveness using analytic network process method
NASA Astrophysics Data System (ADS)
Goran, Janjić; Zorana, Tanasić; Borut, Kosec
2017-06-01
The sustainable development of an organisation is monitored through the organisation's performance, which incorporates all stakeholders' requirements into its strategy beforehand. The strategic management concept enables organisations to monitor and evaluate their effectiveness, along with their efficiency, by monitoring the implementation of set strategic goals. In the process of monitoring and measuring effectiveness, an organisation can use multiple-criteria decision-making methods for support. This study uses the analytic network process (ANP) method to define the weight factors of the mutual influences of all the important elements of an organisation's strategy. The calculation of an organisation's effectiveness is based on these weight factors and on the degree of fulfilment of the goal values of the strategic-map measures. New business conditions change the relative importance of particular elements of an organisation's business for competitive advantage, with increasing emphasis placed on non-material resources in the selection of the organisation's most important measures.
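The weighting-and-aggregation step can be illustrated with a simplified sketch. Note the simplification: full ANP derives weights from a limit supermatrix over interdependent clusters, whereas the sketch below uses an AHP-style principal eigenvector of a single pairwise-comparison matrix (computed by the power method); the comparison values and fulfilment degrees in the test are invented for illustration.

```python
def priority_weights(M, iters=100):
    """Principal eigenvector of a pairwise-comparison matrix via the power
    method, normalised to sum to 1. This is an AHP-style simplification;
    full ANP uses a limit supermatrix over interdependent clusters."""
    w = [1.0 / len(M)] * len(M)
    for _ in range(iters):
        w = [sum(M[i][j] * w[j] for j in range(len(M))) for i in range(len(M))]
        s = sum(w)
        w = [x / s for x in w]
    return w

def effectiveness(weights, fulfilment):
    """Overall effectiveness: weight factors times the degree of fulfilment
    (0..1) of each strategic-map measure's goal value."""
    return sum(w * f for w, f in zip(weights, fulfilment))
```

With a perfectly consistent comparison matrix the weights reproduce the underlying priority ratios exactly, and the effectiveness score is then a simple weighted degree of goal fulfilment.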
Pesticides in honey: A review on chromatographic analytical methods.
Souza Tette, Patrícia Amaral; Rocha Guidi, Letícia; de Abreu Glória, Maria Beatriz; Fernandes, Christian
2016-01-01
Honey is a widely consumed product valued for its nutritional and antimicrobial properties. However, residues of pesticides, used to treat pests in the hive or in neighbouring crop fields, can compromise its quality. Determination of these contaminants in honey is therefore essential, since the use of pesticides has increased significantly in recent decades with the growing demand for food production. Furthermore, pesticides in honey can serve as an indicator of environmental contamination. As the concentrations of these compounds in honey are usually at trace levels, and several pesticides can be found simultaneously, highly sensitive and selective techniques are required. In this context, miniaturized sample preparation approaches and liquid or gas chromatography coupled to mass spectrometry have become the most important analytical techniques. In this review we present and discuss recent studies dealing with pesticide determination in honey, focusing on sample preparation and separation/detection methods as well as on the worldwide application of the developed methods. Trends and future perspectives are also presented. Copyright © 2015 Elsevier B.V. All rights reserved.
Analytical methods in DNA and protein adduct analysis.
Koivisto, Pertti; Peltonen, Kimmo
2010-11-01
DNA and protein adducts are reaction products of endogenous or exogenous chemicals and cellular macromolecules. Adducts are useful in toxicological studies and in human biomonitoring exercises; in particular, DNA damage provides invaluable information for risk analysis. Metabolites or conjugates, by contrast, can be regarded as markers of phase II reactions, though they may not give accurate information about the levels of reactive, damage-provoking compounds or intermediates. Electrophiles are often short-lived molecules and therefore difficult to monitor. In contrast, adducts are often chemically stable, though their levels in biological samples are low, which makes their detection challenging. The assay of adducts faces the same problems as the analysis of any other trace organic molecule, i.e. matrix interference and small amounts of analytes in samples. The (32)P-postlabelling assay is a specific method for DNA adducts, but immunochemical and fluorescence-based methods have been developed that can detect adducts linked to both DNA and protein. Tandem mass spectrometry, particularly when combined with ultrahigh-performance liquid chromatography, is currently the recommended detection technique; however, investigators are striving to develop novel ways to achieve greater sensitivity. Standards are a prerequisite in adduct analysis, but unfortunately they are seldom commercially available.
Pyrrolizidine alkaloids in honey: comparison of analytical methods.
Kempf, M; Wittig, M; Reinhard, A; von der Ohe, K; Blacquière, T; Raezke, K-P; Michel, R; Schreier, P; Beuerle, T
2011-03-01
Pyrrolizidine alkaloids (PAs) are a structurally diverse group of toxicologically relevant secondary plant metabolites. Currently, two analytical methods are used to determine PA content in honey. To achieve reasonably high sensitivity and selectivity, mass spectrometry detection is required. One method is an HPLC-ESI-MS-MS approach; the other is a sum parameter method utilising HRGC-EI-MS operated in selected ion monitoring (SIM) mode. To date, no fully validated or standardised method exists to measure the PA content in honey. To establish an LC-MS method, several hundred standard pollen analysis results of raw honey were analysed. Possible PA plants were identified, along with typical, commercially available marker PA-N-oxides (PANOs). Three distinct honey sets were analysed with both methods. Set A consisted of pure Echium honey (61-80% Echium pollen). Echium is an attractive bee plant. It is quite common in all temperate zones worldwide and is one of the major causes of PA contamination in honey. Although only echimidine/echimidine-N-oxide were available as reference for the LC-MS target approach, the results for both analytical techniques matched very well (n = 8; PA content ranging from 311 to 520 µg kg(-1)). The second batch (B) consisted of a set of randomly picked raw honeys, mostly originating from Eupatorium spp. (0-15%), another common PA plant, usually characterised by the occurrence of lycopsamine-type PAs. Again, the results showed good consistency in terms of PA-positive samples and quantification results (n = 8; ranging from 0 to 625 µg kg(-1) retronecine equivalents). The last set (C) was obtained by consciously placing beehives in areas with a high abundance of Jacobaea vulgaris (ragwort) in the Veluwe region (the Netherlands). J. vulgaris increasingly invades the countryside in Central Europe, especially areas with reduced farming or natural restoration sites. Honey from two seasons (2007 and 2008) was sampled. While only trace amounts of
Assessment of analytical methods to determine pyrethroids content of bednets.
Castellarnau, Marc; Ramón-Azcón, Javier; Gonzalez-Quinteiro, Yolanda; López, Jordi F; Grimalt, Joan O; Marco, María-Pilar; Nieuwenhuijsen, Mark; Picado, Albert
2017-01-01
To present and evaluate simple, cost-effective tests to determine the amount of insecticide on treated materials. We developed and evaluated a competitive immunoassay on two different platforms: a label-free impedimetric biosensor (EIS biosensor) and a lateral flow. Both approaches were validated by gas chromatography (GC) and ELISA, the gold standards for analytical methods and immunoassays, respectively. Finally, commercially available pyrethroid-treated ITN samples were analysed. Different extraction methods were evaluated. Insecticide extraction by direct infusion of the ITN samples with dichloromethane and dioxane showed recovery efficiencies around 100% for insecticide-coated bednets, and >70% for insecticide-incorporated bednets. These results were comparable to those obtained with standard sonication methods. The competitive immunoassay characterisation with ELISA presented a dynamic range between 12 nM and 1.5 μM (coefficient of variation (CV) below 5%), with an IC50 at 138 nM, and a limit of detection (LOD) of 3.2 nM. The EIS biosensor had a linear range between 1.7 nM and 61 nM (CV around 14%), with an IC50 at 10.4 nM, and a LOD of 0.6 nM. Finally, the lateral flow approach showed a dynamic range between 150 nM and 1.5 μM, an IC50 at 505 nM and a LOD of 67 nM. ELISA can replace chromatography as an accurate laboratory technique to determine insecticide concentration in bednets. The lateral flow approach developed can be used to estimate ITN insecticide concentration in the field. This new technology, coupled to the new extraction methods, should provide reliable guidelines for ITN use and replacement in the field. © 2016 John Wiley & Sons Ltd.
Fast, simple, and good pan-sharpening method
NASA Astrophysics Data System (ADS)
Palubinskas, Gintautas
2013-01-01
Pan-sharpening of optical remote sensing multispectral imagery aims to include spatial information from a high-resolution image (high frequencies) into a low-resolution image (low frequencies) while preserving the spectral properties of the low-resolution image. From a signal processing view, a general fusion filtering framework (GFF) can be formulated, which is well suited to the fusion of multiresolution and multisensor data such as optical-optical and optical-radar imagery. To reduce computation time, a simple and fast variant of GFF, the high-pass filtering method (HPFM), is proposed, which performs filtering in the signal domain and thus avoids time-consuming FFT computations. A new joint quality measure, combining spectral and spatial measures through a proper normalization of their ranges, is proposed for quality assessment. The quality and speed of six pan-sharpening methods (component substitution (CS), Gram-Schmidt (GS) sharpening, Ehlers fusion, Amélioration de la Résolution Spatiale par Injection de Structures, GFF, and HPFM) were evaluated for WorldView-2 satellite remote sensing data. Experiments showed that the HPFM method outperforms all the fusion methods used in this study, even its parent method GFF. Moreover, it is more than four times faster than the GFF method and competitive with the CS and GS methods in speed.
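The high-pass injection idea behind HPFM can be sketched minimally: extract the high frequencies of the panchromatic band with a spatial-domain low-pass filter and add them to each upsampled multispectral band. The mean-filter kernel size and unit gain below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def hpf_pansharpen(ms_up, pan, size=5, gain=1.0):
    """ms_up: (H, W, B) multispectral image upsampled to pan resolution;
    pan: (H, W) panchromatic band. Returns the sharpened (H, W, B) image."""
    pad = size // 2
    padded = np.pad(pan, pad, mode="edge")
    low = np.empty_like(pan, dtype=float)
    H, W = pan.shape
    for i in range(H):                       # simple mean (low-pass) filter
        for j in range(W):
            low[i, j] = padded[i:i + size, j:j + size].mean()
    high = pan - low                         # high frequencies of pan
    return ms_up + gain * high[..., None]    # inject into every band
```

A constant panchromatic band carries no high frequencies, so the multispectral input passes through unchanged; real use would tune `size` and `gain` to the sensors' resolution ratio.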
Improved analytical method to study the cup anemometer performance
NASA Astrophysics Data System (ADS)
Pindado, Santiago; Ramos-Cenzano, Alvaro; Cubas, Javier
2015-10-01
The cup anemometer rotor aerodynamics is analytically studied based on the aerodynamics of a single cup. The effect of the rotation on the aerodynamic force is included in the analytical model, together with the displacement of the aerodynamic center during one turn of the cup. The model can be fitted to the testing results, indicating the presence of both the aforementioned effects.
DEMONSTRATION OF THE ANALYTIC ELEMENT METHOD FOR WELLHEAD PROTECTION
A new computer program has been developed to determine time-of-travel capture zones in relatively simple geohydrological settings. The WhAEM package contains an analytic element model that uses superposition of (many) closed form analytical solutions to generate a ground-water fl...
Fast perceptual method for evaluating color scales for Internet visualization
NASA Astrophysics Data System (ADS)
Kalvin, Alan D.; Rogowitz, Bernice E.
2001-06-01
We have developed a fast perceptual method for evaluating color scales for data visualization that uses a monochrome photographic image of a human face as a test pattern. We conducted an experiment in which we applied various color scales to a photographic image of a face and asked observers to rate the naturalness of each image. We found a very strong correlation between the perceived naturalness of the images and the luminance monotonicity of the color scales. Since color scales with monotonic luminance profiles are widely recommended and used for visualizing continuous scalar data, we conclude that using a human face as a test pattern provides a quick, simple method for evaluating such color scales in Internet environments.
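The luminance-monotonicity criterion the study correlates with perceived naturalness can be checked mechanically. This sketch uses Rec. 709 relative-luminance weights as an assumption, since the abstract does not specify a luminance model.

```python
# Relative luminance of an (r, g, b) triple in [0, 1], Rec. 709 weights.
def luminance(rgb):
    r, g, b = rgb
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

# A color scale is luminance-monotonic if luminance never decreases
# along the scale's ordered control points.
def is_luminance_monotonic(scale):
    lum = [luminance(c) for c in scale]
    return all(a <= b for a, b in zip(lum, lum[1:]))

# A grayscale ramp is monotonic; a naive rainbow-like scale is not.
gray = [(v, v, v) for v in (0.0, 0.25, 0.5, 0.75, 1.0)]
rainbow = [(0, 0, 1), (0, 1, 1), (0, 1, 0), (1, 1, 0), (1, 0, 0)]
print(is_luminance_monotonic(gray), is_luminance_monotonic(rainbow))  # True False
```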
Fast calculation method of complex space targets' optical cross section.
Han, Yi; Sun, Huayan; Li, Yingchun; Guo, Huichao
2013-06-10
This paper utilizes the optical cross section (OCS) to characterize the optical scattering characteristics of a space target under Sun illumination. We derive the mathematical expression of the OCS according to radiometric theory and put forward a fast visualization calculation method for complex space targets' OCS based on OpenGL and a 3D model. Through OCS simulation of Lambertian bodies (a cylinder and a sphere), the computational accuracy and speed of the algorithm were verified. With this method, the relative error of the OCS does not exceed 0.1%, and a complex calculation takes only 0.05 s. Additionally, we calculated the OCS of three actual satellites with bidirectional reflectance distribution function model parameters in visible bands, and the results indicate that it is easy to distinguish the three targets by comparing their OCS curves. This work is helpful for the identification and classification of unresolved space targets based on photometric characteristics.
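A hedged sketch of a facet-summation, OCS-style computation for a Lambertian body: each facet that is both lit and visible contributes proportionally to its area and the cosines of the incidence and viewing angles. The normalization constant, albedo value, and visibility handling are simplifications standing in for the paper's OpenGL-based method; the function name is hypothetical.

```python
import numpy as np

def ocs_lambert(normals, areas, sun_dir, view_dir, albedo=0.3):
    """normals: (N, 3) unit facet normals; areas: (N,) facet areas;
    sun_dir, view_dir: unit vectors toward the Sun and the observer.
    Returns a Lambertian facet-sum proxy for the optical cross section."""
    cos_i = normals @ sun_dir              # cosine of incidence angle
    cos_r = normals @ view_dir             # cosine of viewing angle
    visible = (cos_i > 0) & (cos_r > 0)    # lit and facing the observer
    return (albedo / np.pi) * np.sum(
        areas[visible] * cos_i[visible] * cos_r[visible])
```

A hardware-rasterized version would replace the crude `visible` mask with depth-buffered occlusion testing, which is where the OpenGL acceleration in the paper comes in.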
Fast methods for spatially correlated multilevel functional data
Staicu, Ana-Maria; Crainiceanu, Ciprian M.; Carroll, Raymond J.
2010-01-01
We propose a new methodological framework for the analysis of hierarchical functional data when the functions at the lowest level of the hierarchy are correlated. For small data sets, our methodology leads to a computational algorithm that is orders of magnitude more efficient than its closest competitor (seconds versus hours). For large data sets, our algorithm remains fast and has no current competitors. Thus, in contrast to published methods, we can now conduct routine simulations, leave-one-out analyses, and nonparametric bootstrap sampling. Our methods are inspired by and applied to data obtained from a state-of-the-art colon carcinogenesis scientific experiment. However, our models are general and will be relevant to many new data sets where the objects of inference are functions or images that remain dependent even after conditioning on the subject on which they are measured. Supplementary materials are available at Biostatistics online. PMID:20089508
Hesford, Andrew J.; Chew, Weng C.
2010-01-01
The distorted Born iterative method (DBIM) computes iterative solutions to nonlinear inverse scattering problems through successive linear approximations. By decomposing the scattered field into a superposition of scattering by an inhomogeneous background and by a material perturbation, large or high-contrast variations in medium properties can be imaged through iterations that are each subject to the distorted Born approximation. However, the need to repeatedly compute forward solutions still imposes a very heavy computational burden. To ameliorate this problem, the multilevel fast multipole algorithm (MLFMA) has been applied as a forward solver within the DBIM. The MLFMA computes forward solutions in linear time for volumetric scatterers. The typically regular distribution and shape of scattering elements in the inverse scattering problem allow the method to take advantage of data redundancy and reduce the computational demands of the normally expensive MLFMA setup. Additional benefits are gained by employing Kaczmarz-like iterations, where partial measurements are used to accelerate convergence. Numerical results demonstrate both the efficiency of the forward solver and the successful application of the inverse method to imaging problems with dimensions in the neighborhood of ten wavelengths. PMID:20707438
Zarzycki, Paweł K; Slączka, Magdalena M; Zarzycka, Magdalena B; Bartoszuk, Małgorzata A; Włodarczyk, Elżbieta; Baran, Michał J
2011-11-01
This paper is a continuation of our previous research focusing on the development of micro-TLC methodology under temperature-controlled conditions. The main goal of the present paper is to demonstrate the separation and detection capability of the micro-TLC technique using simple analytical protocols without multi-step sample pre-purification. One of the advantages of planar chromatography over its column counterpart is that each TLC run can be performed on a fresh, previously unused stationary phase. Therefore, it is possible to fractionate or separate complex samples characterized by a heavy biological matrix load. In the present study, components of interest, mainly steroids, were isolated from biological samples such as fish bile using single pre-treatment steps involving direct organic liquid extraction and/or deproteinization by freeze-drying. Low-molecular-mass compounds with polarities ranging from estetrol to progesterone derived from environmental samples (lake water, untreated and treated sewage waters) were concentrated using optimized solid-phase extraction (SPE). Specific band patterns for samples derived from surface water of the Middle Pomerania region in northern Poland can easily be observed on the resulting micro-TLC chromatograms. This approach can be useful as a simple and inexpensive complementary method for fast control and screening of treated sewage water discharged by municipal wastewater treatment plants. Moreover, our experimental results show the potential of micro-TLC as an efficient tool for retention measurements of a wide range of steroids under reversed-phase (RP) chromatographic conditions. These data can be used for further optimization of SPE or HPLC systems working under RP conditions. Furthermore, we demonstrated that a micro-TLC-based analytical approach can be applied as an effective method for the search of internal standard (IS) substances. Generally, the described methodology can be applied for fast fractionation or screening of the
An analytical filter design method for guided wave phased arrays
NASA Astrophysics Data System (ADS)
Kwon, Hyu-Sang; Kim, Jin-Yeon
2016-12-01
This paper presents an analytical method for designing a spatial filter that processes the data from an array of two-dimensional guided wave transducers. An inverse problem is defined in which the spatial filter coefficients are determined such that a prescribed beam shape, i.e., a desired array output, is best approximated in the least-squares sense. Taking advantage of the 2π-periodicity of the generated wave field, a Fourier-series representation is used to derive closed-form expressions for the constituting matrix elements. Special cases in which the desired array output is an ideal delta function or a gate function are considered more explicitly. Numerical simulations are performed to examine the performance of the filters designed by the proposed method. It is shown that the proposed filters can significantly improve the beam quality in general. Most notably, the proposed method does not trade off the main lobe width against the sidelobe levels; i.e., a narrow main lobe and low sidelobes are achieved simultaneously. It is also shown that the proposed filter can compensate for the effects of nonuniform directivity and sensitivity of the array elements by explicitly taking these into account in the formulation. An example of detecting two separate targets quantitatively illustrates how much the angular resolution can be improved compared to the conventional delay-and-sum filter. Lamb-wave-based imaging of localized defects in an elastic plate using a circular array is also presented as an example of practical applications.
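The least-squares design idea, prescribing a desired array output and solving for the filter coefficients that best approximate it, can be illustrated with a generic model. The complex-exponential steering matrix and gate-shaped target below are assumptions for illustration, not the paper's closed-form Fourier-series formulation.

```python
import numpy as np

n_elem = 16
angles = np.linspace(-np.pi, np.pi, 361)          # look-angle grid

# Assumed steering matrix: response of harmonic n at look angle theta.
n = np.arange(n_elem)
A = np.exp(1j * np.outer(angles, n))

# Desired array output: a gate function, +/- 10 degrees around broadside.
d = (np.abs(angles) < np.pi / 18).astype(float)

# Least-squares filter coefficients: minimize ||A @ w - d||.
w, *_ = np.linalg.lstsq(A, d, rcond=None)
pattern = np.abs(A @ w)
print(pattern[len(angles) // 2] > pattern[0])     # main lobe exceeds the edge
```

With only 16 harmonics the gate is approximated imperfectly (Gibbs-like ripple), which is exactly the sidelobe behavior the paper's filter design aims to control.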
Zhao, Liuwei; Cao, Weirui; Xue, Xiaofeng; Wang, Miao; Wu, Liming; Yu, Linsheng
2017-03-01
Erythromycin A, the main component of erythromycin, is widely used to treat and control foulbrood diseases in honey bees. In this study, we developed a fast and sensitive method to simultaneously determine erythromycin A and its degradation products in honey. The analytical methodology was based on dispersive liquid-liquid microextraction and liquid chromatography coupled with tandem mass spectrometry with advanced i-Funnel technology. The liquid-liquid microextraction and liquid chromatography coupled with tandem mass spectrometry parameters were optimized. The recoveries of erythromycin A and its degradation products from spiked honey samples were 76.1-102.1%, with reproducibility rates of 7.1-13.1% and correlation coefficients >0.99. The decision limit and detection capability were 0.02-0.07 and 0.03-0.10 ng/g, respectively. The proposed method was validated and successfully applied to the determination of the target analytes in commercial honey samples. It was efficient and sensitive, and it lays the foundation for further research on honey safety.
Fast detection of air contaminants using immunobiological methods
NASA Astrophysics Data System (ADS)
Schmitt, Katrin; Bolwien, Carsten; Sulz, Gerd; Koch, Wolfgang; Dunkhorst, Wilhelm; Lödding, Hubert; Schwarz, Katharina; Holländer, Andreas; Klockenbring, Torsten; Barth, Stefan; Seidel, Björn; Hofbauer, Wolfgang; Rennebarth, Torsten; Renzl, Anna
2009-05-01
The fast and direct identification of possibly pathogenic microorganisms in air is gaining increasing interest because of the threat they pose to public health, e.g. in clinical environments or in clean rooms of the food or pharmaceutical industries. We present a new detection method allowing the direct recognition of relevant germs or bacteria via fluorescence-labeled antibodies within less than one hour. In detail, an air-sampling unit passes particles in the relevant size range to a substrate containing fluorescence-labeled antibodies for the detection of a specific microorganism. After removal of the excess antibodies, the optical detection unit, comprising reflected-light and epifluorescence microscopy, can identify the microorganisms by fast image processing on a single-particle level. First measurements with the system to identify various test particles, as well as interfering influences, have been performed, in particular with respect to the autofluorescence of dust particles. Specific antibodies for the detection of Aspergillus fumigatus spores have been established. The biological test system consists of protein A-coated polymer particles which are detected by a fluorescence-labeled IgG. Furthermore, the influence of interfering particles such as dust or debris is discussed.
Strauss, Holger M; Karabudak, Engin; Bhattacharyya, Saroj; Kretzschmar, Andreas; Wohlleben, Wendel; Cölfen, Helmut
2008-02-01
The optical setup and the performance of a prototype UV/Vis multiwavelength analytical ultracentrifuge (MWL-AUC) is described and compared to the commercially available Optima XL-A from Beckman Coulter. Slight modifications have been made to the optical path of the MWL-AUC. With respect to wavelength accuracy and radial resolution, the new MWL-AUC is found to be comparable to the existing XL-A. Absorbance accuracy is dependent on the light intensity available at the detection wavelength as well as the intrinsic noise of the data. Measurements from single flashes of light are more noisy for the MWL-AUC, potentially due to the absence of flash-to-flash normalization in the current design. However, the possibility of both wavelength and scan averaging can compensate for this and still give much faster scan rates than the XL-A. Some further improvements of the existing design are suggested based on these findings.
Contains basic information on the role and origins of the Selected Analytical Methods including the formation of the Homeland Security Laboratory Capacity Work Group and the Environmental Evaluation Analytical Process Roadmap for Homeland Security Events
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-16
... From the Federal Register Online via the Government Publishing Office ENVIRONMENTAL PROTECTION AGENCY Stakeholder Meeting Regarding Re-Evaluation of Currently Approved Total Coliform Analytical...) analytical methods. At these meetings, stakeholders will be given an opportunity to discuss potential...
Fast radiative transfer of dust reprocessing in semi-analytic models with artificial neural networks
NASA Astrophysics Data System (ADS)
Silva, Laura; Fontanot, Fabio; Granato, Gian Luigi
2012-06-01
A serious concern for semi-analytical galaxy formation models aiming to simulate multiwavelength surveys and to thoroughly explore the model parameter space is the extremely time-consuming numerical solution of the radiative transfer of stellar radiation through dusty media. To overcome this problem, we have implemented an artificial neural network (ANN) algorithm in the radiative transfer code GRASIL, in order to significantly speed up the computation of the infrared (IR) spectral energy distribution (SED). The ANN we have implemented is of general use, in that its input neurons are defined as those quantities effectively determining the shape of the IR SED. Therefore, the training of the ANN can be performed with any model and then applied to other models. We made a blind test to check the algorithm, by applying a net trained with a standard chemical evolution model (i.e. CHE_EVO) to a mock catalogue extracted from the semi-analytic model MORGANA, and compared galaxy counts and the evolution of the luminosity functions in several near-IR to sub-millimetre (sub-mm) bands, as well as the spectral differences for a large subset of randomly extracted models. The ANN approximates the full computation excellently, with a gain in CPU time of ˜2 orders of magnitude. The only caveat is that the training should cover reasonably well the range of values of the input neurons encountered in the application. Indeed, in the sub-mm at high redshift, a tiny fraction of models whose input neurons fall outside the range of the trained net causes wrong answers from the ANN. These are extreme starbursting models with high optical depths, favourably selected by sub-mm observations, which are difficult to predict a priori.
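The emulator workflow, train a small network once on the expensive computation and then query the network instead, can be sketched on a toy function. The one-hidden-layer network below is purely illustrative and unrelated to GRASIL's actual architecture; the target function stands in for the costly radiative transfer.

```python
import numpy as np

rng = np.random.default_rng(0)

def expensive(x):                  # stand-in for the costly full computation
    return np.sin(3 * x)

X = rng.uniform(-1, 1, (256, 1))
y = expensive(X)

# One hidden layer, tanh activation, trained by plain gradient descent.
W1 = rng.normal(0.0, 0.5, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, (16, 1)); b2 = np.zeros(1)

def net_mse():
    return float(((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2).mean())

initial_mse = net_mse()
lr = 0.1
for _ in range(3000):
    h = np.tanh(X @ W1 + b1)
    err = h @ W2 + b2 - y
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)         # backprop through tanh
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

final_mse = net_mse()
print(final_mse < initial_mse)  # training reduced the approximation error
```

The blind-test caveat from the abstract shows up here too: querying the net outside the range of `X` it was trained on gives unreliable answers.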
Analytical methods for waste minimisation in the convenience food industry.
Darlington, R; Staikos, T; Rahimifard, S
2009-04-01
Waste creation in some sectors of the food industry is substantial, and while much of the used material is non-hazardous and biodegradable, it is often poorly dealt with and simply sent to landfill mixed with other types of waste. In this context, overproduction wastes were found in a number of cases to account for 20-40% of the material wastes generated by convenience food manufacturers (such as ready-meals and sandwiches), often simply to meet the challenging demands placed on the manufacturer by the short order reaction times imposed by supermarkets. Identifying specific classes of waste helps to minimise their creation, through consideration of what the materials constitute and why they were generated. This paper aims to provide means by which food industry wastes can be identified, and to demonstrate these mechanisms through a practical example. The research reported in this paper investigated the various categories of waste and developed three analytical methods to support waste minimisation activities by food manufacturers. The waste classifications and analyses are intended to complement existing waste minimisation approaches and are described through consideration of a case-study convenience food manufacturer that realised significant financial savings through waste measurement, analysis and reduction.
Analytical method to estimate resin cement diffusion into dentin
NASA Astrophysics Data System (ADS)
de Oliveira Ferraz, Larissa Cristina; Ubaldini, Adriana Lemos Mori; de Oliveira, Bruna Medeiros Bertol; Neto, Antonio Medina; Sato, Fracielle; Baesso, Mauro Luciano; Pascotto, Renata Corrêa
2016-05-01
This study analyzed the diffusion of two resin luting agents (resin cements) into dentin, with the aim of presenting an analytical method for estimating the thickness of the diffusion zone. Class V cavities were prepared in the buccal and lingual surfaces of molars (n=9). Indirect composite inlays were luted into the cavities with either a self-adhesive or a self-etch resin cement. The teeth were sectioned bucco-lingually and the cement-dentin interface was analyzed by micro-Raman spectroscopy (MRS) and scanning electron microscopy. The evolution of the peak intensities of the Raman bands, collected from the functional groups corresponding to the resin monomer (C-O-C, 1113 cm(-1)) present in the cements and to the mineral content (P-O, 961 cm(-1)) in dentin, followed sigmoid-shaped functions. A Boltzmann function (BF) was then fitted to the 1113 cm(-1) profiles to estimate the resin cement diffusion into dentin. The BF identified a resin cement-dentin diffusion zone of 1.8±0.4 μm for the self-adhesive cement and 2.5±0.3 μm for the self-etch cement. This analysis allowed the authors to estimate the diffusion of the resin cements into the dentin. Fitting the MRS data to the BF contributed to and is relevant for future studies of the adhesive interface.
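The Boltzmann-fit step can be illustrated on synthetic data: a sigmoid profile is fitted to a band-intensity trace across the interface, and the slope parameter characterises the width of the diffusion zone. All numbers here are invented, not the paper's measurements, and a coarse grid search stands in for a proper nonlinear least-squares routine.

```python
import numpy as np

# Synthetic depth profile (in micrometres) of a monomer-band intensity:
# a Boltzmann sigmoid with centre x0 = 0 and slope dx = 0.6, plus noise.
x = np.linspace(-5.0, 5.0, 101)
rng = np.random.default_rng(1)
y = 1.0 / (1.0 + np.exp(x / 0.6)) + rng.normal(0.0, 0.01, x.size)

best_sse, x0_hat, dx_hat = np.inf, None, None
for x0 in np.linspace(-1.0, 1.0, 41):
    for dx in np.linspace(0.2, 1.2, 51):
        s = 1.0 / (1.0 + np.exp((x - x0) / dx))
        # For fixed (x0, dx), the model y ~ a*s + b is linear in (a, b).
        M = np.column_stack([s, np.ones_like(s)])
        coef, *_ = np.linalg.lstsq(M, y, rcond=None)
        sse = float(((M @ coef - y) ** 2).sum())
        if sse < best_sse:
            best_sse, x0_hat, dx_hat = sse, x0, dx

print(round(dx_hat, 2))  # close to the true slope parameter 0.6
```

The recovered `dx_hat` plays the role of the diffusion-zone width estimate; in practice a Levenberg-Marquardt fit would refine all four Boltzmann parameters jointly.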
NIOSH Manual of Analytical Methods (third edition). Fourth supplement
Not Available
1990-08-15
The NIOSH Manual of Analytical Methods, 3rd edition, was updated for the following chemicals: allyl-glycidyl-ether, 2-aminopyridine, aspartame, bromine, chlorine, n-butylamine, n-butyl-glycidyl-ether, carbon-dioxide, carbon-monoxide, chlorinated-camphene, chloroacetaldehyde, p-chlorophenol, crotonaldehyde, 1,1-dimethylhydrazine, dinitro-o-cresol, ethyl-acetate, ethyl-formate, ethylenimine, sodium-fluoride, hydrogen-fluoride, cryolite, sodium-hexafluoroaluminate, formic-acid, hexachlorobutadiene, hydrogen-cyanide, hydrogen-sulfide, isopropyl-acetate, isopropyl-ether, isopropyl-glycidyl-ether, lead, lead-oxide, maleic-anhydride, methyl-acetate, methyl-acrylate, methyl-tert-butyl ether, methyl-cellosolve-acetate, methylcyclohexanol, 4,4'-methylenedianiline, monomethylaniline, monomethylhydrazine, nitric-oxide, p-nitroaniline, phenyl-ether, phenyl-ether-biphenyl mixture, phenyl-glycidyl-ether, phenylhydrazine, phosphine, ronnel, sulfuryl-fluoride, talc, tributyl-phosphate, 1,1,2-trichloro-1,2,2-trifluoroethane, trimellitic-anhydride, triorthocresyl-phosphate, triphenyl-phosphate, and vinyl-acetate.
Spectral method for fast measurement of twisted nematic liquid crystal cell parameters.
Pinzón, Plinio Jesús; Pérez, Isabel; Sánchez-Pena, José Manuel; Vázquez, Carmen
2014-08-10
We present an experimental approach for the fast measurement of twisted nematic (TN) liquid crystal (LC) cell parameters. It is based on spectral measurements of the light transmitted by the polarizer-reference wave plate-LC cell-analyzer system. The cell parameters are obtained by fitting the theoretical model to the experimental data. This method allows the rubbing angle, the twist angle and its sense, and the spectral dispersion of the LC cell retardation to be determined simultaneously, with few measurements and without the need to apply voltage or meet any specific analytical conditions. The method is validated by characterizing two different TN cells with retardations of about 0.91 and 1.85 μm. The relative error in birefringence is less than 1.3%.
Introducing GAMER: A fast and accurate method for ray-tracing galaxies using procedural noise
Groeneboom, N. E.; Dahle, H.
2014-03-10
We developed a novel approach for fast and accurate ray-tracing of galaxies using procedural noise fields. Our method allows for efficient and realistic rendering of synthetic galaxy morphologies, where individual components such as the bulge, disk, stars, and dust can be synthesized at different wavelengths. These components follow empirically motivated overall intensity profiles but contain an additional procedural noise component that gives rise to complex natural patterns that mimic interstellar dust and star-forming regions. These patterns produce more realistic-looking galaxy images than analytical expressions alone. The method is fully parallelized and creates accurate high- and low-resolution images that can be used, for example, in codes simulating strong and weak gravitational lensing. In addition to having a user-friendly graphical user interface, the C++ software package GAMER is easy to implement into an existing code.
Fresco-Rivera, P; Fernández-Varela, R; Gómez-Carracedo, M P; Ramírez-Villalobos, F; Prada, D; Muniategui, S; Andrade, J M
2007-11-30
A fast analytical tool based on attenuated total reflectance mid-IR spectrometry is presented to evaluate the origin of spilled hydrocarbons and to monitor their fate on the environment. Ten spectral band ratios are employed in univariate and multivariate studies (principal components analysis, cluster analysis, density functions - potential curves - and Kohonen self organizing maps). Two indexes monitor typical photooxidation processes, five are related to aromatic characteristics and three study aliphatic and branched chains. The case study considered here comprises 45 samples taken on beaches (from 2002 to 2005) after the Prestige carrier accident off the Galician coast and 104 samples corresponding to weathering studies deployed for the Prestige's fuel, four typical crude oils and a fuel oil. The univariate studies yield insightful views on the gross chemical evolution whereas the multivariate studies allow for simple and straightforward elucidations on whether the unknown samples match the Prestige's fuel. Besides, a good differentiation on the weathering patterns of light and heavy products is obtained.
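The multivariate part of the workflow, principal components analysis on a table of spectral band ratios, can be sketched with synthetic data; the random matrix below merely stands in for the paper's 45 samples by 10 diagnostic ratios.

```python
import numpy as np

rng = np.random.default_rng(2)
n_samples, n_ratios = 45, 10
X = rng.normal(size=(n_samples, n_ratios))   # synthetic band-ratio table

Xc = X - X.mean(axis=0)                      # mean-center each ratio
cov = Xc.T @ Xc / (n_samples - 1)            # sample covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)       # eigenvalues in ascending order
order = np.argsort(eigvals)[::-1]            # re-order: largest variance first
scores = Xc @ eigvecs[:, order[:2]]          # scores on the first two PCs
explained = float(eigvals[order][:2].sum() / eigvals.sum())
print(scores.shape)  # (45, 2)
```

Plotting the two-component `scores` is the kind of projection on which sample clusters (e.g. weathered versus fresh fuel) would be inspected visually.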
Fast Second Degree Total Variation Method for Image Compressive Sensing
Liu, Pengfei; Xiao, Liang; Zhang, Jun
2015-01-01
This paper presents a computationally efficient algorithm for image compressive sensing reconstruction using second degree total variation (HDTV2) regularization. First, an equivalent formulation of the HDTV2 functional is derived, which can be expressed as a weighted L1-L2 mixed norm of second degree image derivatives under the spectral decomposition framework. Second, using this equivalent formulation, we introduce an efficient forward-backward splitting (FBS) scheme to solve the HDTV2-based image reconstruction model. Furthermore, from the averaged non-expansive operator point of view, we provide a detailed convergence analysis of the proposed FBS algorithm. Experiments on medical images demonstrate that the proposed method outperforms several fast algorithms for the TV and HDTV2 reconstruction models in terms of peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and convergence speed. PMID:26361008
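The forward-backward splitting scheme at the heart of the algorithm alternates a gradient ("forward") step on the smooth data-fidelity term with a proximal ("backward") step on the non-smooth regularizer. The sketch below applies the same scheme to a plain L1-regularized least-squares problem rather than the HDTV2 functional (whose proximal operator is more involved); the step size rule and penalty value are standard illustrative choices, not the paper's.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (the 'backward' step)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fbs_l1(A, b, lam=0.1, n_iter=200):
    """Forward-backward splitting for min_x 0.5*||Ax - b||^2 + lam*||x||_1:
    a gradient step on the smooth term, then the prox of the regularizer."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)                         # forward step
        x = soft_threshold(x - step * grad, step * lam)  # backward step
    return x
```

With HDTV2, only the prox changes; the forward step and the convergence argument via averaged non-expansive operators are the same.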
Fast calculation method for computer-generated cylindrical holograms.
Yamaguchi, Takeshi; Fujii, Tomohiko; Yoshikawa, Hiroshi
2008-07-01
Since a general flat hologram has a limited viewable area, we usually cannot see the other side of a reconstructed object. Some holograms can solve this problem; a cylindrical hologram is well known to be viewable over 360 deg. Most cylindrical holograms are optical holograms, and there are few reports of computer-generated cylindrical holograms. This is because the spatial resolution of output devices is not high enough; one has to make a large hologram or use a small object to satisfy the sampling theorem. In addition, when calculating such a large fringe pattern, the computational cost increases in proportion to the hologram size. We therefore propose a new method for fast fringe calculation. We print the computed fringes with our prototype fringe printer and obtain a good reconstructed image from the resulting computer-generated cylindrical hologram.
Methods for fast, reliable growth of Sn whiskers
NASA Astrophysics Data System (ADS)
Bozack, M. J.; Snipes, S. K.; Flowers, G. N.
2016-10-01
We report several methods to reliably grow dense fields of high-aspect-ratio tin whiskers for research purposes in a period of days to weeks. The techniques offer marked improvements over previous means of growing whiskers, which have struggled against the highly variable incubation period of tin whiskers and their slow growth rate. Control of the film stress is the key to fast-growing whiskers, owing to the fact that whisker incubation and growth are fundamentally a stress-relief phenomenon. The ability to grow high-density fields of whiskers (10^3-10^6/cm^2) in a reasonable period of time (days to weeks) has accelerated progress in whisker growth and aided in the development of whisker mitigation strategies.
A Massively Parallel Adaptive Fast Multipole Method on Heterogeneous Architectures
Lashuk, Ilya; Chandramowlishwaran, Aparna; Langston, Harper; Nguyen, Tuan-Anh; Sampath, Rahul S; Shringarpure, Aashay; Vuduc, Richard; Ying, Lexing; Zorin, Denis; Biros, George
2012-01-01
We describe a parallel fast multipole method (FMM) for highly nonuniform distributions of particles. We employ both distributed memory parallelism (via MPI) and shared memory parallelism (via OpenMP and GPU acceleration) to rapidly evaluate two-body nonoscillatory potentials in three dimensions on heterogeneous high performance computing architectures. We have performed scalability tests with up to 30 billion particles on 196,608 cores on the AMD/CRAY-based Jaguar system at ORNL. On a GPU-enabled system (NSF's Keeneland at Georgia Tech/ORNL), we observed 30x speedup over a single core CPU and 7x speedup over a multicore CPU implementation. By combining GPUs with MPI, we achieve less than 10 ns/particle and six digits of accuracy for a run with 48 million nonuniformly distributed particles on 192 GPUs.
Estimating Fuel Cycle Externalities: Analytical Methods and Issues, Report 2
Barnthouse, L.W.; Cada, G.F.; Cheng, M.-D.; Easterly, C.E.; Kroodsma, R.L.; Lee, R.; Shriner, D.S.; Tolbert, V.R.; Turner, R.S.
1994-07-01
that also have not been fully addressed. This document contains two types of papers that seek to fill part of this void. Some of the papers describe analytical methods that can be applied to one of the five steps of the damage function approach. The other papers discuss some of the complex issues that arise in trying to estimate externalities. This report, the second in a series of eight reports, is part of a joint study by the U.S. Department of Energy (DOE) and the Commission of the European Communities (EC) on the externalities of fuel cycles. Most of the papers in this report were originally written as working papers during the initial phases of this study. The report describes the (non-radiological) atmospheric dispersion modeling that the study uses; reviews much of the relevant literature on ecological and health effects and on the economic valuation of those impacts; contains several papers on some of the more complex and contentious issues in estimating externalities; and describes a method for depicting the quality of scientific information that a study uses. The analytical methods and issues that this report discusses generally pertain to more than one of the fuel cycles, though not necessarily to all of them. The report is divided into six parts, each one focusing on a different subject area.
Dynamics of wind bubbles and superbubbles. I - Slow winds and fast winds. II - Analytic theory
NASA Technical Reports Server (NTRS)
Koo, Bon-Chul; Mckee, Christopher F.
1992-01-01
The paper describes the overall evolution of wind-blown bubbles in a uniform medium from the initial, free-expansion stage to the final stage in which the pressure of the ambient medium is significant. The concepts of slow and fast winds, which naturally arise from consideration of radiative losses at the free-expansion stage, are introduced. The evolution of bubbles in a plane-parallel disk, where the density decreases steeply along a vertical direction, is considered. The questions of when a bubble can break out of a thin galactic disk and how it evolves after breakout are discussed. After breakout, bubbles can evolve into jets. Steady, collimated jets can form only over a limited range of wind luminosity and Mach number; astronomical jets are likely to be unsteady and/or hydromagnetic. The results are applied to the neutral stellar wind in the HH 7-11 region, to the north polar spur, and to the galactic winds in starburst galaxies. The evolution of wind-blown bubbles in a power-law density distribution is investigated. Characteristic evolutionary time scales, as well as the equations of motion for both the swept-up gas and the wind shock in each evolutionary stage, are obtained.
Fast multipole method for the biharmonic equation in three dimensions
Gumerov, Nail A. (E-mail: gumerov@umiacs.umd.edu); Duraiswami, Ramani
2006-06-10
The evaluation of sums (matrix-vector products) of the solutions of the three-dimensional biharmonic equation can be accelerated using the fast multipole method, while memory requirements can also be significantly reduced. We develop a complete translation theory for these equations. It is shown that translations of elementary solutions of the biharmonic equation can be achieved by considering the translation of a pair of elementary solutions of the Laplace equation. The extension of the theory to the case of polyharmonic equations in R^3 is also discussed. An efficient way of performing the FMM for the biharmonic equation using the solution of a complex-valued FMM for the Laplace equation is presented. Compared to previous methods presented for the biharmonic equation, our method appears more efficient. The theory is implemented and numerical tests are presented that demonstrate the performance of the method for varying problem sizes and accuracy requirements. In our implementation, the FMM for the biharmonic equation is faster than the direct matrix-vector product for a matrix size of N = 550 at a relative L2 accuracy ε2 = 10^-4, and N = 3550 for ε2 = 10^-12.
A fast minimum variance beamforming method using principal component analysis.
Kim, Kyuhong; Park, Suhyun; Kim, Jungho; Park, Sung-Bae; Bae, MooHo
2014-06-01
Minimum variance (MV) beamforming has been studied for improving the performance of diagnostic ultrasound imaging systems. However, MV beamforming is difficult to implement in a real-time ultrasound imaging system because of the enormous computation time associated with the covariance matrix inversion. In this paper, to address this problem, we propose a new fast MV beamforming method that almost optimally approximates MV beamforming while greatly reducing the computational complexity through dimensionality reduction using principal component analysis (PCA). The principal components are estimated offline from pre-calculated conventional MV weights. Thus, the proposed method does not directly calculate the MV weights but approximates them by a linear combination of a few selected dominant principal components. The combinational weights are calculated in almost the same way as in MV beamforming, but in the PCA-transformed domain of the beamformer input signal, where the dimension of the transformed covariance matrix equals the number of selected principal component vectors. The effectiveness of the proposed method was verified with echo signals from computer simulations as well as phantom and in vivo experiments. It is confirmed that our method can reduce the dimension of the covariance matrix down to as low as 2 × 2 while maintaining the good image quality of MV beamforming.
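The offline dimension-reduction step can be sketched as follows: a bank of pre-calculated weight vectors is decomposed by PCA, and any new weight vector is then approximated as the mean plus a linear combination of the few dominant components. This is a generic PCA sketch under the assumption that the weights concentrate near a low-dimensional subspace, not the paper's beamformer pipeline.

```python
import numpy as np

def principal_components(W, k):
    """Top-k principal components of a bank of pre-calculated MV weight
    vectors (one per row), estimated offline via SVD of the centered bank."""
    mean = W.mean(axis=0)
    _, _, Vt = np.linalg.svd(W - mean, full_matrices=False)
    return mean, Vt[:k]

def approximate_weights(w, mean, V):
    """Approximate a weight vector as the mean plus a linear combination
    of the selected dominant principal components (dimension reduction)."""
    coeffs = V @ (w - mean)       # k coefficients instead of a full vector
    return mean + V.T @ coeffs
```

When the weight bank is (close to) rank-k, the k-component approximation is (near-)exact, which is what makes the small transformed covariance matrix sufficient.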
Zhu, Liang; Hu, Zhong; Gamez, Gerardo; Law, Wai Siang; Chen, HuanWen; Yang, ShuiPing; Chingin, Konstantin; Balabin, Roman M; Wang, Rui; Zhang, TingTing; Zenobi, Renato
2010-09-01
By gently bubbling nitrogen gas through beer, an effervescent beverage, both volatile and non-volatile compounds can be simultaneously sampled in the form of an aerosol. This allows fast (within seconds) fingerprinting by extractive electrospray ionization mass spectrometry (EESI-MS) in both negative and positive ion mode, without the need for any sample pre-treatment such as degassing or dilution. Trace analytes such as volatile esters (e.g., ethyl acetate and isoamyl acetate), free fatty acids (e.g., caproic acid, caprylic acid, and capric acid), semi/non-volatile organic/inorganic acids (e.g., lactic acid), and various amino acids, commonly present in beer at low parts-per-million or sub-ppm levels, were detected and identified based on tandem MS data. Furthermore, the appearance of solvent cluster ions in the mass spectra gives insight into the sampling and ionization mechanisms: aerosol droplets containing semi/non-volatile substances are thought to be generated via bubble bursting at the surface of the liquid; these neutral aerosol droplets then collide with the charged primary electrospray ionization droplets, followed by analyte extraction, desolvation, ionization, and MS detection. With principal component analysis, several beer samples were successfully differentiated. Therefore, the present study successfully extends the applicability of EESI-MS to the direct analysis of complex liquid samples with high gas content.
Method modification of the Legipid® Legionella fast detection test kit.
Albalat, Guillermo Rodríguez; Broch, Begoña Bedrina; Bono, Marisa Jiménez
2014-01-01
Legipid® Legionella Fast Detection is a test based on combined magnetic immunocapture and enzyme immunoassay (CEIA) for the detection of Legionella in water. The test is based on the use of anti-Legionella antibodies immobilized on magnetic microspheres. The target microorganism is preconcentrated by filtration. Immunomagnetic analysis is applied to these preconcentrated water samples in a final test portion of 9 mL. The test kit was certified by the AOAC Research Institute as Performance Tested Method (PTM) No. 111101 in a PTM validation which certifies the performance claims of the test method in comparison to ISO reference method 11731:1998 and its 2004 revision, "Water Quality: Detection and Enumeration of Legionella pneumophila," in potable water, industrial water, and waste water. A modification of this test kit has been approved; it extends the target analyte from L. pneumophila to Legionella species and adds an optical reader to the test method. In this study, 71 strains of Legionella spp. other than L. pneumophila were tested to determine their reactivity with the CEIA-based kit. All the strains of Legionella spp. tested by the CEIA test were confirmed positive by reference standard method ISO 11731. This test (PTM 111101) has been modified to include a final optical reading. A method comparison study was conducted to demonstrate the equivalence of this modification to the reference culture method. Two water matrices were analyzed. Results show no statistically detectable difference between the test method and the reference culture method for the enumeration of Legionella spp. The relative level of detection was 93 CFU/volume examined (LOD50). For optical reading, the LOD was 40 CFU/volume examined and the LOQ was 60 CFU/volume examined. Results showed that the Legipid Legionella Fast Detection test is equivalent to the reference culture method for the enumeration of Legionella spp.
Palmieri, M; Vagnini, Manuela; Pitzurra, L; Rocchi, P; Brunetti, B G; Sgamellotti, A; Cartechini, L
2011-03-01
Enzyme-linked immunosorbent assay (ELISA) analysis of proteins offers a particularly promising approach for investigations in cultural heritage because it is highly specific, sensitive, relatively fast, and cost-effective compared with other conventional techniques. In spite of that, it has never been fully exploited for routine analyses of painting materials because of several analytical issues that have inhibited its diffusion in conservation science: limited sample dimensions, decreased binder solubility, and the reduced availability of antibody binding sites that occur with protein degradation. In this study, an ELISA analytical protocol suited to the identification of aged, denatured proteins in ancient painting micro-samples has been developed. We focused on the detection of bovine β-casein and chicken ovalbumin as markers of bovine milk (or casein) and chicken albumen, respectively. The ELISA protocol was systematically tested on mock-ups of mural and easel paintings prepared with 13 different pigments to assess the limits and strengths of the method when applied to the identification of proteins in the presence of a predominant inorganic matrix. The analytical procedure was optimized with respect to protein extraction, antibody concentrations, and incubation time and temperature; it allows the detection of the investigated proteins with nanogram-level sensitivity. The optimized protocol was then tested on artificially aged painting models. Analytical results were very encouraging and demonstrated that ELISA enables protein analysis even in degraded painting samples. To assess the feasibility of the developed ELISA methodology, we successfully analyzed real painting samples, and the results were cross-validated by gas chromatography-mass spectrometry.
[Clinical application of testing methods on acid-fast bacteria].
Ichiyama, Satoshi; Suzuki, Katsuhiro
2005-02-01
Clinical bacteriology pertaining to acid-fast bacteria has made marked advances over the past decade, initiated by the development of a DNA probe kit for identification of acid-fast bacteria. Widespread use of nucleic acid amplification for rapid detection of tubercle bacillus contributed more greatly than any other factor to such advances in this field. At present, 90% of all kits used for nucleic acid amplification in the world are consumed in Japan. Unfortunately, many clinicians in Japan hold the mistaken idea that the smear method and nucleic acid amplification are necessary but culture is not. In any event, nucleic acid amplification has exerted a significant impact on routine work at bacteriology laboratories. In particular, collecting bacteria by pretreatment with NALC-NaOH has simplified the introduction of the collective-mode smear method and liquid media. Furthermore, as clinicians have become increasingly more experienced with various methods of molecular biology, it now seems possible to apply these techniques for the detection of genes encoding drug resistance and for the use of molecular epidemiology in routine laboratory work. Meanwhile, attempts to diagnose acid-fast bacteriosis by testing blood for antibodies have also been made, primarily in Japan. At present, two kits for detecting antibodies to glycolipids (LAM, TDM, etc.) are covered by national health insurance in Japan. We have the impression that clinicians in Japan do not yet have adequate knowledge and skill to make full clinical use of these new testing methods. We, as the chairmen of this symposium, hope that it will help clinicians increase their skill with the new testing methods, eventually stimulating advances in clinical practice related to acid-fast bacteria in Japan. 1. Smear microscopy by concentration method and broth culture system: Kazunari TSUYUGUCHI (Clinical Research Center, National Hospital Organization Kinki-chuo Chest Medical Center) Smear
Zhao, Huaying; Brautigam, Chad A.; Ghirlando, Rodolfo; Schuck, Peter
2013-01-01
Significant progress in the interpretation of analytical ultracentrifugation (AUC) data in the last decade has led to profound changes in the practice of AUC, both for sedimentation velocity (SV) and sedimentation equilibrium (SE). Modern computational strategies have allowed for the direct modeling of the sedimentation process of heterogeneous mixtures, resulting in SV size-distribution analyses with significantly improved detection limits and strongly enhanced resolution. These advances have transformed the practice of SV, rendering it the primary method of choice for most existing applications of AUC, such as the study of protein self- and hetero-association, the study of membrane proteins, and applications in biotechnology. New global multi-signal modeling and mass conservation approaches in SV and SE, in conjunction with the effective-particle framework for interpreting the sedimentation boundary structure of interacting systems, as well as tools for explicit modeling of the reaction/diffusion/sedimentation equations to experimental data, have led to more robust and more powerful strategies for the study of reversible protein interactions and multi-protein complexes. Furthermore, modern mathematical modeling capabilities have allowed for a detailed description of many experimental aspects of the acquired data, thus enabling novel experimental opportunities, with important implications for both sample preparation and data acquisition. The goal of the current commentary is to supplement previous AUC protocols, Current Protocols in Protein Science units 20.3 (1999), 20.7 (2003), and 7.12 (2008), and to provide an update describing the current tools for the study of soluble proteins, detergent-solubilized membrane proteins, and their interactions by SV and SE. PMID:23377850
Analytic Method to Estimate Particle Acceleration in Flux Ropes
NASA Technical Reports Server (NTRS)
Guidoni, S. E.; Karpen, J. T.; DeVore, C. R.
2015-01-01
The mechanism that accelerates particles to the energies required to produce the observed high-energy emission in solar flares is not well understood. Drake et al. (2006) proposed a kinetic mechanism for accelerating electrons in contracting magnetic islands formed by reconnection. In this model, particles that gyrate around magnetic field lines transit from island to island, increasing their energy by Fermi acceleration in those islands that are contracting. Based on these ideas, we present an analytic model to estimate the energy gain of particles orbiting around field lines inside a flux rope (2.5D magnetic island). We calculate the change in the velocity of the particles as the flux rope evolves in time. The method assumes a simple profile for the magnetic field of the evolving island; it can be applied to any case where flux ropes are formed. In our case, the flux-rope evolution is obtained from our recent high-resolution, compressible 2.5D MHD simulations of breakout eruptive flares. The simulations allow us to resolve in detail the generation and evolution of large-scale flux ropes as a result of sporadic and patchy reconnection in the flare current sheet. Our results show that the initial energy of particles can be increased by a factor of 2-5 in a typical contracting island, before the island reconnects with the underlying arcade. Therefore, particles need to transit only 3-7 islands to increase their energies by two orders of magnitude. These macroscopic regions, filled with a large number of particles, may explain the large observed rates of energetic electron production in flares. We conclude that this mechanism is a promising candidate for electron acceleration in flares, but further research is needed to extend our results to 3D flare conditions.
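The quoted numbers are internally consistent: a per-island gain factor f compounds multiplicatively over n island transits, so the total gain is f**n. Checking both extremes of the quoted ranges (f in 2-5, n in 3-7):

```python
# Per-island transit multiplies a particle's energy by a factor f; after n
# transits the total gain is f**n. Both extremes of the quoted ranges reach
# the claimed two orders of magnitude:
gain_many_weak = 2 ** 7     # weakest per-island gain (2x), most transits (7)
gain_few_strong = 5 ** 3    # strongest per-island gain (5x), fewest transits (3)
# gain_many_weak = 128, gain_few_strong = 125; both >= 100
```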
Fast method for dynamic thresholding in volume holographic memories
NASA Astrophysics Data System (ADS)
Porter, Michael S.; Mitkas, Pericles A.
1998-11-01
It is essential for parallel optical memory interfaces to incorporate processing that dynamically differentiates between data-bit values. These thresholding points will vary as a result of system noise -- due to contrast fluctuations, variations in data page composition, reference beam misalignment, etc. To maintain reasonable data integrity it is necessary to select the threshold close to its optimal level. In this paper, a neural network (NN) approach is proposed as a fast method of determining the threshold to meet the required transfer rate. The multi-layered perceptron network can be incorporated as part of a smart photodetector array (SPA). Other methods have suggested performing the operation by means of histograms or statistical information. These approaches fail in that they unnecessarily switch to a 1-D paradigm; in this serial domain, global thresholding is pointless since sequence detection could be applied instead. The discussed approach is a parallel solution with less overhead than multi-rail encoding. As part of this method, a small set of values is designated as threshold-determination data bits; these are interleaved with the information data bits and are used as inputs to the NN. The approach has been tested using both simulated data and data obtained from a volume holographic memory system. Results show convergence of the training and an ability to generalize to untrained data for binary and multi-level gray-scale data page images. Methodologies are discussed for improving performance through proper training-set selection.
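The role of the interleaved threshold-determination bits can be illustrated with a deliberately simple baseline: since their true values are known, the page threshold can be set at the midpoint between the mean detector responses of the known ones and known zeros. This midpoint rule is an illustrative stand-in, not the paper's multi-layer perceptron, which takes the same calibration inputs.

```python
import numpy as np

def page_threshold(page, calib_mask, calib_bits):
    """Estimate a per-page detection threshold from interleaved
    threshold-determination bits: the midpoint between the mean responses
    of the known-one and known-zero calibration sites. (The paper feeds
    the same inputs to a small multi-layer perceptron; this midpoint rule
    is a simpler illustrative baseline.)"""
    vals = page[calib_mask]               # detector values at calibration sites
    ones = vals[calib_bits == 1]
    zeros = vals[calib_bits == 0]
    return 0.5 * (ones.mean() + zeros.mean())
```

Because the calibration bits ride on the same page, the estimate tracks page-to-page noise such as contrast fluctuations automatically.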
Fast alternating projection methods for constrained tomographic reconstruction.
Liu, Li; Han, Yongxin; Jin, Mingwu
2017-01-01
The alternating projection algorithms are easy to implement and effective for large-scale complex optimization problems, such as constrained reconstruction in X-ray computed tomography (CT). A typical method uses projection onto convex sets (POCS) for data fidelity and nonnegativity constraints combined with total variation (TV) minimization (so-called TV-POCS) for sparse-view CT reconstruction. However, this type of method relies on empirically selected parameters for satisfactory reconstruction, is generally slow, and lacks convergence analysis. In this work, we use a convex feasibility set approach to address the problems associated with TV-POCS and propose a framework using full sequential alternating projections, or POCS (FS-POCS), to find the solution in the intersection of the convex constraints of bounded TV function, bounded data fidelity error, and non-negativity. The rationale behind FS-POCS is that the mathematically optimal solution of the constrained objective function may not be the physically optimal solution. Breaking constrained reconstruction into an intersection of several feasible sets can lead to faster convergence and better quantification of reconstruction parameters in a physically meaningful way, rather than by empirical trial-and-error. In addition, for large-scale optimization problems, first-order methods are usually used. Not only is the condition for convergence of gradient-based methods derived, but a primal-dual hybrid gradient (PDHG) method is also used for fast convergence of the bounded-TV projection. The newly proposed FS-POCS is evaluated and compared with TV-POCS and another convex feasibility projection method (CPTV) using both digital phantom and pseudo-real CT data, showing superior performance in reconstruction speed, image quality, and quantification.
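The core POCS idea of cycling through projections onto convex constraint sets can be sketched on a toy problem: one affine data-fidelity constraint and a nonnegativity constraint. The two sets below are illustrative stand-ins for the bounded-TV, bounded-data-error, and nonnegativity sets of FS-POCS.

```python
import numpy as np

def project_hyperplane(x, a, b):
    """Projection onto the convex set {x : a.x = b}, a toy stand-in for a
    single data-fidelity constraint."""
    return x - ((a @ x - b) / (a @ a)) * a

def pocs(a, b, n_iter=100):
    """Sequential alternating projections onto two convex sets (the
    hyperplane and the nonnegative orthant); with a nonempty intersection
    the iterates converge to a point satisfying both constraints."""
    x = np.zeros_like(a)
    for _ in range(n_iter):
        x = project_hyperplane(x, a, b)   # data-fidelity projection
        x = np.maximum(x, 0.0)            # nonnegativity projection
    return x
```

FS-POCS follows the same cycle, but one of its sets (bounded TV) has no closed-form projection, which is where the PDHG inner solver comes in.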
Fast integral methods for integrated optical systems simulations: a review
NASA Astrophysics Data System (ADS)
Kleemann, Bernd H.
2015-09-01
-functional profiles, very deep ones, very large ones compared to wavelength, or simple smooth profiles. This integral method with either trigonometric or spline collocation, an iterative solver with O(N^2) complexity, named IESMP, was significantly improved by an efficient mesh refinement, matrix preconditioning, an Ewald summation method, and an exponentially convergent quadrature in 2006 by G. Schmidt and A. Rathsfeld from the Weierstrass Institute (WIAS) Berlin. The so-called modified integral method (MIM) is a modification of the IEM of D. Maystre and was introduced by L. Goray in 1995. It was improved for weak convergence problems in 2001 and was for a long time the only commercially available integral method, known as PCGRATE. All integral methods referenced so far are for in-plane diffraction only; no conical diffraction was possible. The first integral method for gratings in conical mounting was developed and proven under very weak conditions by G. Schmidt (WIAS) in 2010. It works for separated interfaces and for inclusions, as well as for interpenetrating interfaces and for a large number of thin and thick layers, in the same stable way. This very fast method has since been implemented for parallel processing under Unix and Windows operating systems. This work gives an overview of the most important BIMs for grating diffraction. It starts by presenting the historical evolution of the methods, highlights their advantages and differences, and gives insight into new approaches and their achievements. It addresses future open challenges at the end.
Fast multipole method applied to Lagrangian simulations of vortical flows
NASA Astrophysics Data System (ADS)
Ricciardi, Túlio R.; Wolf, William R.; Bimbato, Alex M.
2017-10-01
Lagrangian simulations of unsteady vortical flows are accelerated by the multi-level fast multipole method (FMM). The combination of the FMM algorithm with a discrete vortex method, DVM, is discussed for free-domain and periodic problems, with a focus on implementation details to reduce numerical dissipation and avoid spurious solutions in unsteady inviscid flows. An assessment of the FMM-DVM accuracy is presented through a comparison with the direct calculation of the Biot-Savart law for the simulation of the temporal evolution of an aircraft wake in the Trefftz plane. The role of several parameters such as the time step restriction, truncation of the FMM series expansion, number of particles in the wake discretization, and machine precision is investigated, and we show how to avoid spurious instabilities. The FMM-DVM is also applied to compute the evolution of a temporal shear layer with periodic boundary conditions. A novel approach is proposed to achieve accurate solutions in the periodic FMM. This approach avoids a spurious precession of the periodic shear layer, and solutions are shown to converge to the direct Biot-Savart calculation using a cotangent function.
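The reference calculation the FMM is benchmarked against is the direct O(N^2) Biot-Savart sum. For 2-D point vortices this takes a particularly compact form; the sketch below is that direct sum (no FMM acceleration, no desingularization kernel), written for clarity rather than performance.

```python
import numpy as np

def biot_savart_2d(pos, gamma):
    """Direct O(N^2) Biot-Savart evaluation of the velocity induced at
    each 2-D point vortex by all the others: u = -G*dy/(2*pi*r^2),
    v = G*dx/(2*pi*r^2), summed over vortices. This is the reference sum
    an FMM-accelerated DVM is checked against."""
    dx = pos[:, 0][:, None] - pos[:, 0][None, :]
    dy = pos[:, 1][:, None] - pos[:, 1][None, :]
    r2 = dx ** 2 + dy ** 2
    np.fill_diagonal(r2, np.inf)          # exclude self-induction
    k = gamma[None, :] / (2.0 * np.pi * r2)
    return np.column_stack([-(k * dy).sum(axis=1), (k * dx).sum(axis=1)])
```

A classical sanity check: two equal vortices of circulation Γ separated by distance d co-rotate with speed Γ/(2πd).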
A Domain Decomposition Parallelization of the Fast Marching Method
NASA Technical Reports Server (NTRS)
Herrmann, M.
2003-01-01
In this paper, the first domain decomposition parallelization of the Fast Marching Method for level sets is presented. Parallel speedup has been demonstrated in both the optimal and non-optimal domain decomposition cases. The parallel performance of the proposed method is strongly dependent on separately load balancing the number of nodes on each side of the interface. A load imbalance of nodes on either side of the domain leads to an increase in communication and rollback operations. Furthermore, the amount of inter-domain communication can be reduced by aligning the inter-domain boundaries with the interface normal vectors. In the case of optimal load balancing and aligned inter-domain boundaries, the proposed parallel FMM algorithm is highly efficient, reaching efficiency factors of up to 0.98. Future work will focus on the extension of the proposed parallel algorithm to higher-order accuracy. Also, to further enhance parallel performance, the coupling of the domain decomposition parallelization to the G0-based parallelization will be investigated.
A Novel and Fast Corner Detection Method for SAR Imagery
NASA Astrophysics Data System (ADS)
Jiao, N.; Kang, W.; Xiang, Y.; You, H.
2017-09-01
Corners play an important role in image processing, but it is difficult to detect reliable and repeatable corners in SAR images because of the complex properties of SAR sensors. In this paper, we propose a fast and novel corner detection method for SAR imagery. First, a local processing window is constructed for each point. We use the local mean of a 3 x 3 mask, weighted by a Gaussian template, to represent a single point. The candidate point is then compared with 16 surrounding points in the processing window. Considering the multiplicative nature of speckle noise, the similarity between the center point and the surrounding points is measured by the ratio of their local means. If more than M consecutive points differ from the center point, the candidate point is labelled as a corner point. Finally, a selection strategy is implemented by ranking the corner scores and employing non-maxima suppression. Extreme situations such as isolated bright points are also removed. Experimental results on both simulated and real-world SAR images show that the proposed detector achieves high repeatability and low localization error compared with other state-of-the-art detectors.
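A minimal sketch of the ratio-of-local-means segment test described above. The 16-point radius-3 circle follows the standard FAST layout; the ratio threshold, the value of M and the Gaussian 3 x 3 weights are illustrative assumptions, not the paper's tuned parameters, and the score ranking / non-maxima suppression stage is omitted:

```python
import numpy as np

# Offsets of the 16 pixels on a radius-3 circle around the candidate,
# in the order used by the FAST detector.
CIRCLE16 = [(0, 3), (1, 3), (2, 2), (3, 1), (3, 0), (3, -1), (2, -2), (1, -3),
            (0, -3), (-1, -3), (-2, -2), (-3, -1), (-3, 0), (-3, 1), (-2, 2), (-1, 3)]

def local_mean(img, y, x):
    """Gaussian-weighted mean of the 3x3 neighbourhood of (y, x)."""
    w = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], float) / 16.0
    return float((img[y - 1:y + 2, x - 1:x + 2] * w).sum())

def is_corner(img, y, x, ratio_thresh=1.5, m=9):
    """Ratio test: the candidate is a corner if more than m consecutive
    circle points differ from the centre by a local-mean ratio above
    ratio_thresh (a multiplicative, speckle-friendly criterion)."""
    c = local_mean(img, y, x)
    diff = []
    for dy, dx in CIRCLE16:
        p = local_mean(img, y + dy, x + dx)
        r = max(c, p) / max(min(c, p), 1e-9)
        diff.append(r > ratio_thresh)
    # longest circular run of "different" points
    run = best = 0
    for d in diff + diff:          # doubled list handles wrap-around
        run = run + 1 if d else 0
        best = max(best, run)
    return min(best, 16) > m
```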
Fast and sensitive method for detecting volatile species in liquids
NASA Astrophysics Data System (ADS)
Trimarco, Daniel B.; Pedersen, Thomas; Hansen, Ole; Chorkendorff, Ib; Vesborg, Peter C. K.
2015-07-01
This paper presents a novel apparatus for extracting volatile species from liquids using a "sniffer-chip." By ultrafast transfer of the volatile species through a perforated and hydrophobic membrane into an inert carrier gas stream, the sniffer-chip is able to transport the species directly to a mass spectrometer through a narrow capillary without the use of differential pumping. This method inherits features from differential electrochemical mass spectrometry (DEMS) and membrane inlet mass spectrometry (MIMS), but brings the best of both worlds, i.e., the fast time-response of a DEMS system and the high sensitivity of a MIMS system. In this paper, the concept of the sniffer-chip is thoroughly explained and it is shown how it can be used to quantify hydrogen and oxygen evolution on a polycrystalline platinum thin film in situ at absolute faradaic currents down to ˜30 nA. To benchmark the capabilities of this method, a CO-stripping experiment is performed on a polycrystalline platinum thin film, illustrating how the sniffer-chip system is capable of making a quantitative in situ measurement of <1 % of a monolayer of surface adsorbed CO being electrochemically stripped off an electrode at a potential scan-rate of 50 mV s-1.
Rubino, Stefano; Akhtar, Sultan; Leifer, Klaus
2016-02-01
We present a simple, fast method for thickness characterization of suspended graphene/graphite flakes that is based on transmission electron microscopy (TEM). We derive an analytical expression for the intensity of the transmitted electron beam I0(t) as a function of the specimen thickness t (for t < λ, where λ is the absorption constant of graphite). We show that in thin graphite crystals the transmitted intensity is a linear function of t. Furthermore, high-resolution (HR) TEM simulations are performed to obtain λ for a 001 zone-axis orientation, in a two-beam case and in a low-symmetry orientation. Subsequently, HR images (used to determine t) and bright-field images (used to measure I0(0) and I0(t)) were acquired to determine λ experimentally. The value measured in the low-symmetry orientation matches the calculated one (λ = 225 ± 9 nm). The simulations also show that the linear approximation is valid up to a sample thickness of 3-4 nm regardless of the orientation, and up to several tens of nanometers for a low-symmetry orientation. Compared with standard techniques for thickness determination of graphene/graphite, the proposed method has the advantage of being simple and fast, requiring only the acquisition of bright-field images.
Microorganisms as Analytical Indicators. Experimental Methods and Techniques,
1980-01-01
analytic indicators: gram-negative and gram-positive sporiferous and nonsporiferous bacteria, yeasts, mycelial fungi, and actinomycetes (Refs. 4, 6-8 ... growing species of microorganisms, the cultivation period is appropriately increased. This factor is less important for actinomycetes, fungi and sporiferous
40 CFR 141.852 - Analytical methods and laboratory certification.
Code of Federal Regulations, 2014 CFR
2014-07-01
... only determine the presence or absence of total coliforms and E. coli; a determination of density is...). (5) Systems must conduct total coliform and E. coli analyses in accordance with one of the analytical... E. coli and other Total Coliforms in Water,” August 28, 2009. (ii) (4) EMD Millipore (a division of...
40 CFR 141.852 - Analytical methods and laboratory certification.
Code of Federal Regulations, 2013 CFR
2013-07-01
... only determine the presence or absence of total coliforms and E. coli; a determination of density is...). (5) Systems must conduct total coliform and E. coli analyses in accordance with one of the analytical... E. coli and other Total Coliforms in Water,” August 28, 2009. (ii) (4) EMD Millipore (a division of...
A fast mollified impulse method for biomolecular atomistic simulations
NASA Astrophysics Data System (ADS)
Fath, L.; Hochbruck, M.; Singh, C. V.
2017-03-01
Classical integration methods for molecular dynamics are inherently limited by resonance phenomena occurring at certain time-step sizes. The mollified impulse method can partially avoid this problem by using appropriate filters based on averaging or projection techniques. However, existing filters are computationally expensive and tedious to implement, since they require either analytical Hessians or the solution of nonlinear constraint systems. In this work we follow a different approach, based on corotation, for the construction of a new filter for (flexible) biomolecular simulations. The main advantages of the proposed filter are its excellent stability properties and its ease of implementation in standard software, without Hessians or constraint solves. In simulations of multiple realistic examples, such as peptide, protein, ice-equilibrium and ice-ice friction systems, the new filter is shown to speed up the computation of long-range interactions by approximately 20%. The proposed filtered integrators allow step sizes as large as 10 fs while keeping the energy drift below 1% over a 50 ps simulation.
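For context, the plain (unmollified) impulse multiple-time-step scheme that the mollified method builds on can be sketched as follows. This shows only the slow/fast force splitting, not the paper's corotational filter; names and the harmonic test setup are illustrative:

```python
def impulse_step(x, v, f_fast, f_slow, dt, n_inner, m=1.0):
    """One outer step of the (unmollified) impulse multiple-time-step
    scheme: half-kicks with the slow force at the outer step boundaries,
    and velocity-Verlet integration of the fast force at the smaller
    inner step dt / n_inner."""
    v = v + 0.5 * dt * f_slow(x) / m      # slow half-kick
    h = dt / n_inner
    for _ in range(n_inner):              # inner velocity Verlet (fast force)
        v = v + 0.5 * h * f_fast(x) / m
        x = x + h * v
        v = v + 0.5 * h * f_fast(x) / m
    v = v + 0.5 * dt * f_slow(x) / m      # slow half-kick
    return x, v
```

On a stiff/soft harmonic test problem the scheme stays stable and nearly energy-conserving as long as the outer step avoids the resonant step sizes (near half the fast-force period) that the abstract refers to.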
Sappenfield, William M; Peck, Magda G; Gilbert, Carol S; Haynatzka, Vera R; Bryant, Thomas
2010-11-01
The Perinatal Periods of Risk (PPOR) methods provide the necessary framework and tools for large urban communities to investigate feto-infant mortality problems. Adapted from the Periods of Risk model developed by Dr. Brian McCarthy, the six-stage PPOR approach includes epidemiologic methods to be used in conjunction with community planning processes. Stage 2 of the PPOR approach has three major analytic parts: Analytic Preparation, which involves acquiring, preparing, and assessing vital records files; Phase 1 Analysis, which identifies local opportunity gaps; and Phase 2 Analyses, which investigate the opportunity gaps to determine likely causes of feto-infant mortality and to suggest appropriate actions. This article describes the first two analytic parts of PPOR, including methods, innovative aspects, rationale, limitations, and a community example. In Analytic Preparation, study files are acquired and prepared and data quality is assessed. In Phase 1 Analysis, feto-infant mortality is estimated for four distinct perinatal risk periods defined by both birthweight and age at death. These mutually exclusive risk periods are labeled Maternal Health and Prematurity, Maternal Care, Newborn Care, and Infant Health to suggest primary areas of prevention. Disparities within the study community are identified by comparing geographic areas, subpopulations, and time periods. Excess mortality numbers and rates are estimated by comparing the study population to an optimal reference population. This excess mortality is described as the opportunity gap because it indicates where communities have the potential to make improvement.
Fragoso, Wallace; Allegrini, Franco; Olivieri, Alejandro C
2016-08-24
Generalized analytical sensitivity (γ) is proposed as a new figure of merit, which can be estimated from a multivariate calibration data set. It can be confidently applied to compare different calibration methodologies, and helps to solve literature inconsistencies on the relationship between classical sensitivity and prediction error. In contrast to the classical plain sensitivity, γ incorporates the noise properties in its definition, and its inverse is well correlated with root mean square errors of prediction in the presence of general noise structures. The proposal is supported by studying simulated and experimental first-order multivariate calibration systems with various models, namely multiple linear regression, principal component regression (PCR) and maximum likelihood PCR (MLPCR). The simulations included instrumental noise of different types: independently and identically distributed (iid), correlated (pink) and proportional noise, while the experimental data carried noise which is clearly non-iid. Copyright © 2016 Elsevier B.V. All rights reserved.
Hanford environmental analytical methods: Methods as of March 1990. Volume 3, Appendix A2-I
Goheen, S.C.; McCulloch, M.; Daniel, J.L.
1993-05-01
This paper from the analytical laboratories at Hanford describes the method used to measure pH of single-shell tank core samples. Sludge or solid samples are mixed with deionized water. The pH electrode used combines both a sensor and reference electrode in one unit. The meter amplifies the input signal from the electrode and displays the pH visually.
A method for fast automated microscope image stitching.
Yang, Fan; Deng, Zhen-Sheng; Fan, Qiu-Hong
2013-05-01
Image stitching is an important technology for producing a panorama or a larger image by combining several images with overlapping areas. In many biomedical studies, image stitching is highly desirable for acquiring a panoramic image that represents large areas of certain structures or whole sections while retaining microscopic resolution. In this study, we develop a fast normal-light microscope image stitching algorithm based on feature extraction. First, an algorithm using scale-space reconstruction of speeded-up robust features (SURF) was proposed to extract features from the images to be stitched quickly and with higher repeatability. Second, the histogram equalization (HE) method was employed to preprocess the images and enhance their contrast so that more features could be extracted. Third, the rough overlapping zones of the preprocessed images were calculated by phase correlation, and the improved SURF was used to extract image features in the rough overlapping areas. Fourth, the features were matched, the transformation parameters were estimated, and the images were blended seamlessly. Finally, this procedure was applied to stitch normal-light microscope images to verify its validity. Our experimental results demonstrate that the improved SURF algorithm is very robust to viewpoint, illumination, blur, rotation and zoom, and that our method is able to stitch microscope images automatically with high precision and high speed. The proposed method is also applicable to the registration and stitching of common images, as well as to stitching microscope images for virtual microscopy, for the purposes of observing, exchanging, saving, and establishing databases of microscope images.
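The phase-correlation step used above to find the rough overlap can be sketched with FFTs. This minimal version recovers an integer circular shift and assumes a pure translation with periodic wrap-around; real stitching pipelines refine it with feature matching as the abstract describes:

```python
import numpy as np

def phase_correlation(a, b):
    """Phase correlation: returns an integer (dy, dx) such that image b
    is (approximately) image a circularly shifted by (dy, dx)."""
    cross = np.fft.fft2(b) * np.conj(np.fft.fft2(a))
    cross /= np.abs(cross) + 1e-12          # keep only the phase
    corr = np.fft.ifft2(cross).real         # delta-like peak at the shift
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # map peaks beyond the half-size to negative shifts
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return int(dy), int(dx)
```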
Ion mobility spectrometry as a fast analytical tool in benzalkonium chloride homologs determination.
Gallart-Mateu, D; Armenta, S; Esteve-Turrillas, F A; de la Guardia, M
2017-03-01
A novel procedure is proposed for the determination by ion mobility spectrometry (IMS) of the C12, C14 and C16 benzalkonium chloride (BAC) homologs. The proposed method requires minimal sample treatment, and the measurement is made in less than one minute. High sensitivity was obtained for BAC determination by IMS, with limits of detection from 37 to 69 µg/L. The accuracy of the proposed methodology was evaluated through the analysis of aqueous and alcoholic samples spiked with BAC at concentration levels from 0.002% to 20% (w/v), providing recovery values from 91% to 104%. BAC was determined in sanitary alcohols, nasal sprays, postharvest products, algaecides, and treated swimming pool water. Results obtained by the proposed IMS methodology were statistically comparable to those provided by a liquid chromatography-ultraviolet (LC-UV) reference methodology. The Green Certificate evaluation of the proposed IMS methodology scored 91 points on the Eco-Scale, compared with 77 for the LC-UV method.
Sverko, Ed
2006-01-01
Analytical methods for the analysis of polychlorinated biphenyls (PCBs) and organochlorine pesticides (OCPs) are widely available and are the result of a vast amount of environmental analytical method development and research on persistent organic pollutants (POPs) over the past 30–40 years. This review summarizes procedures and examines new approaches for extraction, isolation, identification and quantification of individual congeners/isomers of the PCBs and OCPs. Critical to the successful application of this methodology is the collection, preparation, and storage of samples, as well as specific quality control and reporting criteria, and therefore these are also discussed. With the signing of the Stockholm Convention on POPs and the development of global monitoring programs, there is an increased need for laboratories in developing countries to determine PCBs and OCPs. Thus, while this review attempts to summarize the current best practices for analysis of PCBs and OCPs, a major focus is the need for low-cost methods that can be easily implemented in developing countries. A “performance-based” process is described whereby individual laboratories can adapt methods best suited to their situations. Access to modern capillary gas chromatography (GC) equipment with either electron capture or low-resolution mass spectrometry (MS) detection to separate and quantify OCPs/PCBs is essential. However, screening of samples, especially in areas of known use of OCPs or PCBs, could be accomplished with bioanalytical methods such as specific commercially available enzyme-linked immunosorbent assays, and thus this topic is also reviewed. New analytical techniques such as two-dimensional GC (2D-GC) and “fast GC” using GC–ECD may be well suited for broader use in routine PCB/OCP analysis in the near future, given their relatively low costs and ability to provide high-resolution separations of PCBs/OCPs. Procedures with low environmental impact (SPME, microscale, low
SSAHA: A Fast Search Method for Large DNA Databases
Ning, Zemin; Cox, Anthony J.; Mullikin, James C.
2001-01-01
We describe an algorithm, SSAHA (Sequence Search and Alignment by Hashing Algorithm), for performing fast searches on databases containing multiple gigabases of DNA. Sequences in the database are preprocessed by breaking them into consecutive k-tuples of k contiguous bases and then using a hash table to store the position of each occurrence of each k-tuple. Searching for a query sequence in the database is done by obtaining from the hash table the “hits” for each k-tuple in the query sequence and then performing a sort on the results. We discuss the effect of the tuple length k on the search speed, memory usage, and sensitivity of the algorithm and present the results of computational experiments which show that SSAHA can be three to four orders of magnitude faster than BLAST or FASTA, while requiring less memory than suffix tree methods. The SSAHA algorithm is used for high-throughput single nucleotide polymorphism (SNP) detection and very large scale sequence assembly. Also, it provides Web-based sequence search facilities for Ensembl projects. PMID:11591649
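The core of the SSAHA scheme described above, hashing the consecutive (non-overlapping) k-tuples of the database and sorting the query hits, can be sketched as follows. The diagonal run-counting at the end is a simplified stand-in for SSAHA's full match detection, and the function names are illustrative:

```python
from collections import defaultdict

def build_index(db, k):
    """Index each database sequence by its consecutive, non-overlapping
    k-tuples: k-mer -> list of (sequence name, position)."""
    index = defaultdict(list)
    for name, seq in db.items():
        for pos in range(0, len(seq) - k + 1, k):
            index[seq[pos:pos + k]].append((name, pos))
    return index

def search(index, query, k):
    """Collect hits for every overlapping k-tuple of the query, sort them
    by (sequence, diagonal offset), and return the longest run of equal
    entries as (name, offset, hit count) -- the best candidate match."""
    hits = []
    for qpos in range(len(query) - k + 1):
        for name, dbpos in index.get(query[qpos:qpos + k], ()):
            hits.append((name, dbpos - qpos))   # diagonal = dbpos - qpos
    if not hits:
        return None
    hits.sort()
    best, run, prev = None, 0, None
    for h in hits:
        run = run + 1 if h == prev else 1
        prev = h
        if best is None or run > best[2]:
            best = (h[0], h[1], run)
    return best
```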
A fast method for particle picking in cryo-electron micrographs based on fast R-CNN
NASA Astrophysics Data System (ADS)
Xiao, Yifan; Yang, Guangwen
2017-06-01
We propose a fast method to automatically pick protein particles in cryo-EM micrographs, a task that is currently performed manually in practice. Our method is based on Fast R-CNN, with a sliding window as the region proposal solution. To reduce false positive detections, we introduce a single class for the major contaminant, ice, and pick out all the ice particles in the whole dataset. Tests on recently published cryo-EM data for three proteins demonstrate that our approach can automatically accomplish the human-level particle picking task, and we reduce the test time from the 1.5 minutes of a previous deep learning method to 2 seconds without any loss of recall or precision. Our program is available under the MIT License at https://github.com/xiao1fan/FastParticlePicker.
Utility perspective on USEPA analytical methods program redirection
Koch, B.; Davis, M.K.; Krasner, S.W.
1996-11-01
The Metropolitan Water District of Southern California (Metropolitan) is a public, municipal corporation, created by the State of California, which wholesales supplemental water through 27 member agencies (cities and water districts). Metropolitan serves nearly 16 million people in an area along the coastal plain of Southern California that covers approximately 5200 square miles. Water deliveries have averaged up to 2.5 million acre-feet per year. Metropolitan's Water Quality Laboratory (WQL) conducts compliance monitoring of its source and finished drinking waters for chemical and microbial constituents. The laboratory maintains certification for a large number and variety of analytical procedures. The WQL operates in a 17,000-square-foot facility with state-of-the-art analytical instrumentation. The staff consists of 40 professional chemists and microbiologists whose experience and expertise are extensive and often highly specialized. Staff turnover is very low, and the laboratory is consistently, efficiently, and expertly run.
Sonoluminescence Spectroscopy as a Promising New Analytical Method
NASA Astrophysics Data System (ADS)
Yurchenko, O. I.; Kalinenko, O. S.; Baklanov, A. N.; Belov, E. A.; Baklanova, L. V.
2016-03-01
The sonoluminescence intensity of Cs, Ru, K, Na, Li, Sr, In, Ga, Ca, Th, Cr, Pb, Mn, Ag, and Mg salts in aqueous solutions of various concentrations was investigated as a function of ultrasound frequency and intensity. Techniques for the determination of these elements in solutions of table salt and their own salts were developed. It was shown that the proposed analytical technique gave results at high concentrations with better metrological characteristics than atomic-absorption spectroscopy because the samples were not diluted.
Computational and Analytical Methods in Nonlinear Fluid Dynamics
1993-09-01
boundary layer, and skin friction and heat transfer coefficients are determined through matching to the analytic embedded functions. Through use of ... further to develop simple turbulence models which can predict skin friction and heat transfer coefficients. The study of He et al. (1992) reports the ... governing one of the flows listed above: it uncouples from the second set which, essentially, is linear with coefficients that are determined by the primary
An analytic method for sensitivity analysis of complex systems
NASA Astrophysics Data System (ADS)
Zhu, Yueying; Wang, Qiuping Alexandre; Li, Wei; Cai, Xu
2017-03-01
Sensitivity analysis is concerned with understanding how the model output depends on uncertainties (variances) in the inputs and with identifying which inputs contribute most to the prediction imprecision. Determining the uncertainty in the output is the most crucial step in sensitivity analysis. In the present paper, an analytic expression that exactly evaluates the uncertainty in the output as a function of the output's derivatives and the inputs' central moments is first derived for general multivariate models, with a given relationship between output and inputs, in terms of a Taylor series expansion. A γ-order relative uncertainty for the output, denoted by R_v^γ, is introduced to quantify the contributions of input uncertainty of different orders. On this basis, it is shown that the widely used approximation considering only the first-order contribution from the variance of the input variables can satisfactorily express the output uncertainty only when the input variance is very small or the input-output function is almost linear. The analytic formula is applied to power-grid and economic systems, where the sensitivities of both an actual power output model and the Economic Order Quantity model are analyzed. The importance of each input variable to the model output is quantified by the analytic formula.
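The widely used first-order approximation discussed above, keeping only the variance contribution of each input, can be sketched as a delta-method estimate for independent inputs with a numerical central-difference gradient. This is the baseline the paper improves upon, not its exact higher-order formula:

```python
import numpy as np

def first_order_variance(f, mu, var, eps=1e-6):
    """Delta-method (first-order Taylor) estimate of the output variance
    for independent inputs:  Var[f(X)] ~ sum_i (df/dx_i(mu))^2 * Var[x_i].
    The gradient at the input means mu is taken by central differences."""
    mu = np.asarray(mu, dtype=float)
    grad = np.zeros_like(mu)
    for i in range(mu.size):
        e = np.zeros_like(mu)
        e[i] = eps
        grad[i] = (f(mu + e) - f(mu - e)) / (2.0 * eps)
    return float(np.sum(grad ** 2 * np.asarray(var, dtype=float)))
```

For a linear model the estimate is exact; as the abstract notes, for strongly nonlinear models or large input variances the higher-order terms it drops become important.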
Method for Operating a Sensor to Differentiate Between Analytes in a Sample
Kunt, Tekin; Cavicchi, Richard E; Semancik, Stephen; McAvoy, Thomas J
1998-07-28
Disclosed is a method for operating a sensor to differentiate between first and second analytes in a sample. The method comprises the steps of determining an input profile for the sensor which will enhance the difference in the output profiles of the sensor as between the first analyte and the second analyte; determining a first analyte output profile as observed when the input profile is applied to the sensor; determining a second analyte output profile as observed when the temperature profile is applied to the sensor; introducing the sensor to the sample while applying the temperature profile to the sensor, thereby obtaining a sample output profile; and evaluating the sample output profile against the first and second analyte output profiles to thereby determine which of the analytes is present in the sample.
Antibodies Covalently Immobilized on Actin Filaments for Fast Myosin Driven Analyte Transport
Kumar, Saroj; ten Siethoff, Lasse; Persson, Malin; Lard, Mercy; te Kronnie, Geertruy; Linke, Heiner; Månsson, Alf
2012-01-01
Biosensors would benefit from further miniaturization, increased detection rate and independence from external pumps and other bulky equipment. Whereas transportation systems built around molecular motors and cytoskeletal filaments hold significant promise in the latter regard, recent proof-of-principle devices based on the microtubule-kinesin motor system have not matched the speed of existing methods. An attractive solution to overcome this limitation would be the use of myosin driven propulsion of actin filaments which offers motility one order of magnitude faster than the kinesin-microtubule system. Here, we realized a necessary requirement for the use of the actomyosin system in biosensing devices, namely covalent attachment of antibodies to actin filaments using heterobifunctional cross-linkers. We also demonstrated consistent and rapid myosin II driven transport where velocity and the fraction of motile actin filaments was negligibly affected by the presence of antibody-antigen complexes at rather high density (>20 µm−1). The results, however, also demonstrated that it was challenging to consistently achieve high density of functional antibodies along the actin filament, and optimization of the covalent coupling procedure to increase labeling density should be a major focus for future work. Despite the remaining challenges, the reported advances are important steps towards considerably faster nanoseparation than shown for previous molecular motor based devices, and enhanced miniaturization because of high bending flexibility of actin filaments. PMID:23056279
NASA Technical Reports Server (NTRS)
Hu, Fang Q.
1994-01-01
It is known that the exact analytic solutions of wave scattering by a circular cylinder, when they exist, are not in closed form but in infinite series that converge slowly for high-frequency waves. In this paper, we present a fast numerical solution for the scattering problem in which the boundary integral equations, reformulated from the Helmholtz equation, are solved using a Fourier spectral method. It is shown that the special geometry considered here allows the implementation of the spectral method to be simple and very efficient. The present method differs from previous approaches in that the singularities of the integral kernels are removed and dealt with accurately. The proposed method preserves the spectral accuracy and is shown to have an exponential rate of convergence. Aspects of efficient implementation using the FFT are discussed. Moreover, boundary integral equations with a combined single- and double-layer representation are used in the present paper. This ensures the uniqueness of the numerical solution for the scattering problem at all frequencies. Although a strongly singular kernel is encountered for Neumann boundary conditions, we show that the hypersingularity can be handled easily in the spectral method. Numerical examples that demonstrate the validity of the method are also presented.
Progress in the GEOROC Database - Fast and Simple Access to Analytical Data by Precompilation
NASA Astrophysics Data System (ADS)
Sarbas, B.
2001-12-01
sample, these are compiled according to specific rules. These rules consider the method of analysis as well as the year of publication.
Fuguet, Elisabet; Ràfols, Clara; Bosch, Elisabeth; Rosés, Martí
2009-04-24
A new and fast method for determining the acidity constants of monoprotic weak acids and bases by capillary zone electrophoresis, based on the use of an internal standard (a compound of similar nature and acidity constant to the analyte), has been developed. This method requires only two electrophoretic runs to determine an acidity constant: a first one at a pH where both the analyte and the internal standard are fully ionized, and a second one at a pH where both are partially ionized. Furthermore, the method is not pH dependent, so an accurate measurement of the pH of the buffer solutions is not needed. The acidity constants of several phenols and amines have been measured using internal standards of known pK(a), with a mean deviation of 0.05 pH units from literature values.
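The pH independence claimed above can be illustrated for two monoprotic weak acids obeying Henderson-Hasselbalch behaviour: taking the degree of ionisation as the ratio of effective to fully ionised mobility, the buffer pH cancels between analyte and internal standard. This is a sketch under those assumptions; the function and argument names are illustrative, not the paper's notation:

```python
import math

def pka_from_internal_standard(pka_is, mu_full_a, mu_part_a,
                               mu_full_is, mu_part_is):
    """pKa of a monoprotic weak acid from two CZE runs, referenced to an
    internal standard (IS) of known pKa measured in the same runs.
    alpha = effective mobility / fully-ionised mobility; from
    Henderson-Hasselbalch, log10((1-alpha)/alpha) = pKa - pH, so the
    buffer pH cancels when analyte and IS are compared."""
    alpha_a = mu_part_a / mu_full_a
    alpha_is = mu_part_is / mu_full_is
    q_a = math.log10((1.0 - alpha_a) / alpha_a)
    q_is = math.log10((1.0 - alpha_is) / alpha_is)
    return pka_is + q_a - q_is
```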
Strano Rossi, Sabina; de la Torre, Xavier; Botrè, Francesco
2010-05-30
A fast method has been developed for the simultaneous determination of 52 stimulants and narcotics excreted unconjugated in urine by gas chromatography/mass spectrometry (GC/MS). The procedure involves the liquid/liquid extraction of the analytes from urine at strong alkaline pH and the injection of the extract into a GC/MS instrument with a fast GC column (10 m x 0.18 mm i.d.); the short column allows the complete separation of the 52 analytes in a chromatographic run of 8 min. The method has been fully validated giving lower limits of detection (LLODs) satisfactory for its application to antidoping analysis as well as to forensic toxicology. The repeatability of the concentrations and the retention times are good both for intra- and for inter-day experiments (%CV of concentrations always lower than 15 and %CV of retention times lower than 0.6). In addition, the analytical bias is satisfactory (A% always >15%). The method proposed here would be particularly useful whenever there are time constraints and the analyses have to be completed in the shortest possible time. Copyright 2010 John Wiley & Sons, Ltd.
21 CFR 530.22 - Safe levels and analytical methods for food-producing animals.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 21 Food and Drugs 6 2012-04-01 2012-04-01 false Safe levels and analytical methods for food-producing animals. 530.22 Section 530.22 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH... § 530.22 Safe levels and analytical methods for food-producing animals. (a) FDA may establish a safe...
PESTICIDE ANALYTICAL METHODS TO SUPPORT DUPLICATE-DIET HUMAN EXPOSURE MEASUREMENTS
Historically, analytical methods for determination of pesticides in foods have been developed in support of regulatory programs and are specific to food items or food groups. Most of the available methods have been developed, tested and validated for relatively few analytes an...
Analytic method for spin transfer matrix in presence of snakes
Tepikian, S.
1985-01-01
Large accelerators can be made spin transparent using Siberian snakes. However, the number of snakes required is yet to be determined. An algorithm for finding the spin transfer matrix analytically is developed. This is applied to find cos_p for the case involving 6 snakes in two different configurations. This is in contrast to the approach of R. Ruth, who found that the number of snakes is proportional to |εS|, where ε is the depolarizing resonance strength. Half the trace of the spin precession matrix with 6 equally spaced snakes is found analytically for two configurations. The first configuration involves alternating snakes with precession axes of +45° and -45°, while in the second configuration they alternate between +75° and -75°, as proposed by K. Steffen. Then the largest resonance strength |ε| such that |cos_p| ≤ 1 is determined. Finally, a comparison with tracking studies is made.
Analytical method for calculation of deviations from intended dosages during multi-infusion.
Konings, Maurits K; Snijder, Roland A; Radermacher, Joris H; Timmerman, Annemoon M
2017-01-17
In this paper, a new method is presented that combines mechanical compliance effects with Poiseuille flow and push-out effects ("dead volume") in one single mathematical framework for calculating dosing errors in multi-infusion set-ups. In contrast to existing numerical methods, our method produces explicit expressions that illustrate the mathematical dependencies of the dosing errors on hardware parameters and pump flow rate settings. Our new approach uses the Z-transform to model the contents of the catheter, and after implementation in Mathematica (Wolfram), explicit expressions are produced automatically. The consistency of the resulting analytical expressions has been examined for limiting cases, and three types of in-vitro measurements have been performed as a first experimental test of the validity of the theoretical results. The relative contributions of the various factors affecting the dosing errors, such as the Poiseuille flow profile, the resistance and internal volume of the catheter, the mechanical compliance of the syringes and the various pump flow rate settings, can now be discerned clearly in the structure of the expressions generated by our method. The in-vitro experiments showed a standard deviation between theory and experiment of 14% for the delay time in the catheter, and of 13% for the duration of the dosing error bolus. Our method provides insight and predictability in a large range of possible situations involving many variables and dependencies, which is potentially very useful, e.g., for the development of a fast, bedside tool ("calculator") that gives the clinician a precise prediction of dosing errors and delay times interactively for many scenarios. The interactive nature of such a device has now become feasible because, using our method, explicit expressions are available for these situations, as opposed to conventional time-consuming numerical simulations.
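As a simple illustration of the push-out ("dead volume") effect mentioned above, a plug-flow estimate of the catheter delay time is the internal volume divided by the combined flow rate of all pumps. This sketch deliberately ignores the Poiseuille profile and compliance effects that the paper's full model includes:

```python
def plug_flow_delay(catheter_volume_ml, flow_rates_ml_per_h):
    """Plug-flow estimate of the dead-volume delay, in minutes: the time
    needed to push the catheter's internal volume out at the combined
    flow rate of all pumps feeding it."""
    total = sum(flow_rates_ml_per_h)
    if total <= 0:
        raise ValueError("total flow must be positive")
    return 60.0 * catheter_volume_ml / total
```

For example, a 2 mL catheter fed by two pumps at 2 mL/h each takes half an hour to clear, which is why rate changes reach the patient only after a substantial delay.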
Development of a fast voltage control method for electrostatic accelerators
NASA Astrophysics Data System (ADS)
Lobanov, Nikolai R.; Linardakis, Peter; Tsifakis, Dimitrios
2014-12-01
The concept of a novel fast voltage control loop for tandem electrostatic accelerators is described. This control loop utilises high-frequency components of the ion beam current intercepted by the image slits to generate a correction voltage that is applied to the first few gaps of the low- and high-energy acceleration tubes adjoining the high voltage terminal. New techniques for the direct measurement of the transfer function of an ultra-high impedance structure, such as an electrostatic accelerator, have been developed. For the first time, the transfer function for the fast feedback loop has been measured directly. Slow voltage variations are stabilised with a common corona control loop, and the relationship between the transfer functions of the slow and new fast control loops required for optimum operation is discussed. The main source of terminal voltage instabilities, variation of the charging current caused by mechanical oscillations of the charging chains, has been analysed.
Methods for performing fast discrete curvelet transforms of data
Candes, Emmanuel; Donoho, David; Demanet, Laurent
2010-11-23
Fast digital implementations of the second generation curvelet transform for use in data processing are disclosed. One such digital transformation is based on unequally-spaced fast Fourier transforms (USFFT) while another is based on the wrapping of specially selected Fourier samples. Both digital transformations return a table of digital curvelet coefficients indexed by a scale parameter, an orientation parameter, and a spatial location parameter. Both implementations are fast in the sense that they run in about O(n² log n) flops for n by n Cartesian arrays, or about O(N log N) flops for Cartesian arrays of size N = n³; in addition, they are also invertible, with rapid inversion algorithms of about the same complexity.
A fast and efficient method for device level layout analysis
NASA Astrophysics Data System (ADS)
Dong, YaoQi; Zou, Elaine; Pang, Jenny; Huang, Lucas; Yang, Legender; Zhang, Chunlei; Du, Chunshan; Hu, Xinyi; Wan, Qijian
2017-03-01
There is an increasing demand for device-level layout analysis, especially as technology advances. The analysis studies standard cells by extracting and classifying critical dimension parameters. Several parameters are extracted, such as channel width and length, gate-to-active distance, and active-to-adjacent-active distance; for 14 nm technology, further parameters are of interest. On the one hand, these parameters are very important for studying standard cell structures and SPICE model development, with the goal of improving standard cell manufacturing yield and optimizing circuit performance; on the other hand, a full-chip device statistics analysis can provide useful information for diagnosing yield issues. Device analysis is essential for standard cell customization and enhancement and for manufacturability failure diagnosis. A traditional parasitic parameter extraction tool like Calibre xRC is powerful, but it is not sufficient for this device-level layout analysis application, as engineers would like to review, classify and filter the data more easily. This paper presents a fast and efficient method based on Calibre equation-based DRC (eqDRC). Equation-based DRC extends traditional DRC technology to provide a flexible programmable modeling engine, which allows the end user to define grouped multi-dimensional feature measurements using flexible mathematical expressions. This paper demonstrates how such an engine and its programming language can be used to implement critical device parameter extraction. The device parameters are extracted and stored in a DFM database, which can be processed by Calibre YieldServer. YieldServer is data-processing software that lets engineers query, manipulate, modify, and create data in a DFM database. These parameters, known as properties in the eqDRC language, can be annotated back to the layout for easy review. Calibre DesignRev can create an HTML-formatted report of the results displayed in Calibre
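The eqDRC code itself is proprietary, but the kind of grouped geometric measurement it expresses can be sketched generically: derive a transistor's channel length and width from the overlap of a gate stripe with an active region. The rectangle convention and function names below are assumptions for illustration, not Calibre syntax.

```python
# Hypothetical sketch of device-level parameter extraction: channel L/W
# from the gate/active rectangle overlap. Not Calibre eqDRC syntax.

def overlap(r1, r2):
    """Intersection of two axis-aligned rectangles (x1, y1, x2, y2),
    or None if they do not overlap."""
    x1, y1 = max(r1[0], r2[0]), max(r1[1], r2[1])
    x2, y2 = min(r1[2], r2[2]), min(r1[3], r2[3])
    if x1 >= x2 or y1 >= y2:
        return None
    return (x1, y1, x2, y2)

def channel_params(gate, active):
    ch = overlap(gate, active)
    if ch is None:
        return None
    # Convention: channel length along x (gate extent), width along y.
    return {"L": ch[2] - ch[0], "W": ch[3] - ch[1]}

gate = (10, 0, 12, 30)    # a 2-unit-long gate stripe
active = (0, 5, 40, 25)   # active region, 20 units tall
print(channel_params(gate, active))  # {'L': 2, 'W': 20}
```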
NASA Astrophysics Data System (ADS)
Pletikapić, Galja; Ivošević DeNardis, Nadica
2017-01-01
Surface analytical methods are applied to examine the environmental status of seawaters. The present overview emphasizes the advantages of combining surface analytical methods applied to hazardous situations in the Adriatic Sea, such as monitoring the first aggregation phases of dissolved organic matter in order to potentially predict massive mucilage formation, and testing oil spill cleanup. Such an approach, based on fast and direct characterization of organic matter and its high-resolution visualization, sets a continuous-scale description of organic matter from micro- to nanometre scales. The electrochemical method of chronoamperometry at the dropping mercury electrode meets the requirements for monitoring purposes due to the simple and fast analysis of a large number of natural seawater samples, enabling simultaneous differentiation of organic constituents. In contrast, atomic force microscopy allows direct visualization of biotic and abiotic particles and provides an insight into the structural organization of marine organic matter at micro- and nanometre scales. In the future, merging data at different spatial scales, taking into account experimental input on the micrometre scale, observations on the metre scale and modelling on the kilometre scale, will be important for developing sophisticated technological platforms for knowledge transfer, reports and maps applicable to marine environmental protection and management of the coastal area, especially for tourism, fishery and cruise traffic.
Rozet, E; Ziemons, E; Marini, R D; Boulanger, B; Hubert, Ph
2012-11-02
Dissolution tests are key elements to ensure continuing product quality and performance. The ultimate goal of these tests is to assure consistent product quality within a defined set of specification criteria. Validation of an analytical method aimed at assessing the dissolution profile of products, or at verifying pharmacopoeial compliance, should demonstrate that the method is able to correctly declare two dissolution profiles as similar, or drug products as compliant with their specifications. It is essential to ensure that these analytical methods are fit for their purpose, and method validation is aimed at providing this guarantee. However, even the ICH Q2 guideline gives no information on how to decide whether the method under validation is valid for its final purpose. Are all the validation criteria needed to ensure that a quality control (QC) analytical method for a dissolution test is valid? What acceptance limits should be set on these criteria? How should a method's validity be decided? These are the questions this work aims to answer. Focus is placed on complying with the current implementation of Quality by Design (QbD) principles in the pharmaceutical industry, in order to correctly define the Analytical Target Profile (ATP) of analytical methods involved in dissolution tests. Analytical method validation is then the natural demonstration that the developed methods are fit for their intended purpose, and no longer the indiscriminate checklist validation approach still generally performed to complete the filing required to obtain product marketing authorization.
Dümichen, Erik; Eisentraut, Paul; Bannick, Claus Gerhard; Barthel, Anne-Kathrin; Senz, Rainer; Braun, Ulrike
2017-05-01
In order to determine the relevance of microplastic particles in various environmental media, comprehensive investigations are needed. However, no analytical method exists for fast identification and quantification. At present, optical spectroscopy methods like IR and Raman imaging are used; due to their time-consuming procedures and uncertain extrapolation, reliable monitoring is difficult. For analyzing polymers, Py-GC-MS is a standard method; however, due to a limited sample amount of about 0.5 mg, it is not suited to the analysis of complex sample mixtures like environmental samples. Therefore, we developed a new thermoanalytical method as a first step for identifying microplastics in environmental samples. A sample amount of about 20 mg, which assures the homogeneity of the sample, is subjected to complete thermal decomposition. The specific degradation products of the respective polymer are adsorbed on a solid-phase adsorber and subsequently analyzed by thermal desorption gas chromatography mass spectrometry. For certain identification, the specific degradation products of the respective polymer were selected first. Afterwards, real environmental samples from aquatic (three different rivers) and terrestrial (biogas plant) systems were screened for microplastics. Mainly polypropylene (PP), polyethylene (PE) and polystyrene (PS) were identified in the samples from the biogas plant, and PE and PS in those from the rivers. However, this was only the first step, and quantification measurements will follow.
Furlanetto, Sandra; Orlandini, Serena; Pasquini, Benedetta; Caprini, Claudia; Mura, Paola; Pinzauti, Sergio
2015-10-01
A fast capillary zone electrophoresis method for the simultaneous analysis of glibenclamide and its impurities (I(A) and I(B)) in pharmaceutical dosage forms was fully developed within a quality by design framework. Critical quality attributes were represented by I(A) peak efficiency, critical resolution between glibenclamide and I(B), and analysis time. Experimental design was efficiently used for rapid and systematic method optimization. A 3(5)//16 symmetric screening matrix was chosen for investigation of the five selected critical process parameters throughout the knowledge space, and the results obtained were the basis for the planning of the subsequent response surface study. A Box-Behnken design for three factors allowed the contour plots to be drawn and the design space to be identified by introduction of the concept of probability. The design space corresponded to the multidimensional region where all the critical quality attributes reached the desired values with a degree of probability π ≥ 90%. Under the selected working conditions, the full separation of the analytes was obtained in less than 2 min. A full factorial design simultaneously allowed the design space to be validated and method robustness to be tested. A control strategy was finally implemented by means of a system suitability test. The method was fully validated and was applied to real samples of glibenclamide tablets.
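The probability-based design space idea can be sketched generically: at each candidate operating point, estimate the probability that all critical quality attributes meet their targets, given a fitted response model plus residual noise, and keep points where that probability is at least 90%. The quadratic model coefficients, limits, and noise level below are invented placeholders, not the paper's fitted models.

```python
# Monte Carlo sketch of a probability-based design space: the toy fitted
# models and acceptance limits are invented for illustration.
import random

def meets_cqas(x1, x2, noise):
    resolution = 2.0 + 0.8 * x1 - 0.5 * x2 + noise()  # toy response model
    time_min = 1.5 - 0.4 * x1 + 0.6 * x2 + noise()
    return resolution >= 2.0 and time_min <= 2.0

def probability(x1, x2, n=2000, sd=0.15, seed=0):
    """Estimated probability that all CQAs meet their targets at (x1, x2)."""
    rng = random.Random(seed)
    noise = lambda: rng.gauss(0.0, sd)
    return sum(meets_cqas(x1, x2, noise) for _ in range(n)) / n

# A point well inside the toy design space exceeds the 90% threshold.
print(probability(0.8, -0.5) >= 0.90)  # True
```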
Dabalus Islam, M; Schweikert Turcu, M; Cannavan, A
2008-12-01
A simple and inexpensive liquid chromatographic method for the determination of seven sulphonamides in animal tissues was validated. The measurement uncertainty of the method was estimated using two approaches: a 'top-down' approach based on in-house validation data, which used either repeatability data or intra-laboratory reproducibility; and a 'bottom-up' approach, which included repeatability data from spiking experiments. The decision limits (CCalpha) applied in the European Union were calculated for comparison. The bottom-up approach was used to identify critical steps in the analytical procedure, which comprised extraction, concentration, hexane-wash and HPLC-UV analysis. Six replicates of porcine kidney were fortified at the maximum residue limit (100 microg kg(-1)) at three different stages of the analytical procedure, extraction, evaporation, and final wash/HPLC analysis, to provide repeatability data for each step. The uncertainties of the gravimetric and volumetric measurements were estimated and integrated in the calculation of the total combined uncertainties by the bottom-up approach. Estimates for systematic error components were included in both approaches. Combined uncertainty estimates for the seven compounds using the 'top-down' approach ranged from 7.9 to 12.5% (using reproducibility) and from 5.4 to 9.5% (using repeatability data) and from 5.1 to 9.0% using the bottom-up approach. CCalpha values ranged from 105.6 to 108.5 microg kg(-1). The major contributor to the combined uncertainty for each analyte was identified as the extraction step. Since there was no statistical difference between the uncertainty values obtained by either approach, the analyst would be justified in applying the 'top-down' estimation using method validation data, rather than performing additional experiments to obtain uncertainty data.
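The bottom-up combination of independent step uncertainties, and the CCα calculation for an MRL substance, can be sketched numerically. The component values below are invented for illustration; the 1.64 factor is the standard one-sided k for α = 5% used for substances with an established MRL.

```python
# Bottom-up combination of step uncertainties in quadrature, plus the
# decision limit CCalpha. Component values are invented, not the paper's.
import math

def combined_rsd(components):
    """Combine independent relative uncertainty components (%, quadrature)."""
    return math.sqrt(sum(c * c for c in components))

def cc_alpha(mrl, combined_rsd_pct, k=1.64):
    """Decision limit for an MRL substance: MRL + k * u(MRL), k = 1.64
    for alpha = 5%."""
    u = mrl * combined_rsd_pct / 100.0
    return mrl + k * u

# e.g. extraction 6%, evaporation 3%, HPLC step 2% -> 7.0% combined
rsd = combined_rsd([6.0, 3.0, 2.0])
print(round(rsd, 2))                   # 7.0
print(round(cc_alpha(100.0, rsd), 2))  # 111.48 (microg/kg)
```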
IRIS: Towards an Accurate and Fast Stage Weight Prediction Method
NASA Astrophysics Data System (ADS)
Taponier, V.; Balu, A.
2002-01-01
The knowledge of the structural mass fraction (or the mass ratio) of a given stage, which affects the performance of a rocket, is essential for the analysis of new or upgraded launchers or stages, the need for which is increased by the quick evolution of space programs and by the necessity of adapting them to market needs. The availability of this highly scattered variable, ranging between 0.05 and 0.15, is of primary importance at the early steps of preliminary design studies. At the start of the staging and performance studies, the lack of frozen weight data (to be obtained later from propulsion, trajectory and sizing studies) forces reliance on rough estimates, generally derived from printed sources and adapted. When needed, a consolidation can be acquired through a specific analysis activity involving several techniques and implying additional effort and time. The present empirical approach thus yields only approximate values (i.e. not necessarily accurate or consistent), inducing some inaccuracy in the results and, consequently, difficulties in ranking performance for a multiple-option analysis, as well as an increase in processing duration. This is a classical harsh fact of preliminary design system studies, insufficiently discussed to date. It appears therefore highly desirable to have, for all evaluation activities, a reliable, fast and easy-to-use weight or mass fraction prediction method. Additionally, the latter should allow a pre-selection of alternative preliminary configurations, making a global system approach possible. For that purpose, an attempt at modeling has been undertaken, whose objective was the determination of a parametric formulation of the mass fraction, expressed from a limited number of parameters available at the early steps of the project. It is based on the innovative use of a statistical method applicable to a variable as a function of several independent parameters. A specific polynomial generator
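The paper's polynomial generator and parameter set are not given in the abstract; the general pattern of fitting a parametric mass-fraction model to historical stage data by least squares can be sketched as follows. The regressors, coefficients, and data are invented for illustration.

```python
# Least-squares fit of a toy parametric mass-fraction model to synthetic
# "historical stage" data. All regressors and coefficients are invented.
import numpy as np

rng = np.random.default_rng(0)
# Pretend regressors: e.g. normalized propellant mass, thrust-to-weight,
# a configuration indicator (all invented).
X = rng.uniform(0.0, 1.0, size=(40, 3))
true_coef = np.array([0.05, 0.03, -0.02, 0.01])  # intercept + 3 slopes
A = np.column_stack([np.ones(len(X)), X])        # design matrix
y = A @ true_coef + rng.normal(0.0, 0.002, size=len(X))  # noisy data

coef, *_ = np.linalg.lstsq(A, y, rcond=None)     # one direct solve
pred = A @ coef
print(np.allclose(coef, true_coef, atol=0.01))
```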
Experimental cosmology using fast parallel N-body methods
NASA Astrophysics Data System (ADS)
Warren, Michael S.
This dissertation describes a parallel treecode N-body algorithm and a number of simulations which have been performed on massively parallel computers. We use the results of these simulations (which use between 1 and 17 million particles each) to investigate the structure, kinematics, shapes, masses, spatial correlations, and relative pairwise velocity dispersions of dark matter halos. Using cold dark matter (CDM) initial conditions normalized to the anisotropies detected by the COBE satellite, we show that the mass function and spatial distribution of halos is compatible with the observations. We also show the value of the relative pairwise velocity dispersion sigma(nu), used as evidence against COBE-normalized CDM models, is significantly lower for halos than for individual particles. When the observational methods of extracting sigma(nu) are applied to catalogs obtained from the numerical experiments, estimates differ significantly between different observation-sized samples, and overlap observational estimates. We also study smaller systems which form by gravitational collapse from scale-free and more general Gaussian initial density perturbations. We analyze the structure and kinematics of the approximately 10² largest relaxed halos in a number of simulations. A typical halo is a triaxial spheroid which tends to be more often prolate than oblate. These shapes are maintained by anisotropic velocity dispersion rather than by angular momentum (spin parameter lambda approximately 0.05). Nevertheless, there is a significant tendency for the total angular momentum vector to be aligned with the minor axis of the density distribution. In addition, we report on an efficient adaptive parallel N-body method which we have designed and implemented. The algorithm computes the forces on an arbitrary distribution of bodies in a time which scales as N log N with the particle number. The accuracy of the force calculations is analytically bounded, and can be adjusted via a user
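The dissertation's parallel treecode is far more sophisticated, but the underlying N log N idea can be illustrated with a toy serial 2-D Barnes-Hut sketch: distant groups of bodies are replaced by their center of mass when the cell looks small from the evaluation point. Monopole-only, G = 1, units arbitrary; this is an assumption-laden teaching sketch, not the thesis algorithm.

```python
# Toy 2-D Barnes-Hut treecode: a cell is accepted as a single point mass
# when its size is small relative to its distance (opening angle theta).
import random, math

class Cell:
    def __init__(self, cx, cy, half):
        self.cx, self.cy, self.half = cx, cy, half   # center and half-width
        self.m = self.mx = self.my = 0.0             # mass and mass moments
        self.kids = None
        self.body = None

    def insert(self, x, y, m):
        if self.m == 0.0:                  # empty leaf: store the body
            self.body = (x, y, m)
        else:
            if self.kids is None:          # occupied leaf: subdivide
                self.kids = [Cell(self.cx + dx * self.half / 2,
                                  self.cy + dy * self.half / 2,
                                  self.half / 2)
                             for dx in (-1, 1) for dy in (-1, 1)]
                bx, by, bm = self.body
                self.body = None
                self._child(bx, by).insert(bx, by, bm)
            self._child(x, y).insert(x, y, m)
        self.m += m
        self.mx += m * x
        self.my += m * y

    def _child(self, x, y):
        # kids ordered (-1,-1), (-1,1), (1,-1), (1,1)
        return self.kids[(2 if x >= self.cx else 0) + (1 if y >= self.cy else 0)]

    def accel(self, x, y, theta=0.3, eps=1e-9):
        if self.m == 0.0:
            return (0.0, 0.0)
        dx = self.mx / self.m - x
        dy = self.my / self.m - y
        r = math.hypot(dx, dy)
        if self.kids is None or 2 * self.half < theta * r:
            if r < eps:                    # skip self-interaction
                return (0.0, 0.0)
            f = self.m / r ** 3            # point-mass pull, G = 1
            return (f * dx, f * dy)
        ax = ay = 0.0
        for k in self.kids:
            kx, ky = k.accel(x, y, theta, eps)
            ax += kx
            ay += ky
        return (ax, ay)

random.seed(1)
bodies = [(random.random(), random.random(), 1.0) for _ in range(200)]
root = Cell(0.5, 0.5, 0.5)
for x, y, m in bodies:
    root.insert(x, y, m)

px, py = 0.25, 0.75                        # test point (not a body)
ax_t, ay_t = root.accel(px, py)

ax_d = ay_d = 0.0                          # direct O(N) reference sum
for x, y, m in bodies:
    dx, dy = x - px, y - py
    r = math.hypot(dx, dy)
    ax_d += m * dx / r ** 3
    ay_d += m * dy / r ** 3

err = math.hypot(ax_t - ax_d, ay_t - ay_d) / math.hypot(ax_d, ay_d)
print(err < 0.05)   # tree force agrees closely with direct summation
```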
Stankovich, Joseph J; Gritti, Fabrice; Beaver, Lois Ann; Stevenson, Paul G; Guiochon, Georges
2013-11-29
Five methods were used to implement fast gradient separations: constant flow rate, constant column-wall temperature, constant inlet pressure at moderate and high pressures (controlled by a pressure controller), and programmed-flow constant pressure. For programmed-flow constant pressure, the flow rates and gradient compositions are controlled through inputs to the method instead of by the pressure controller; minor fluctuations in the inlet pressure do not affect the mobile phase flow rate in programmed flow. The reproducibilities of the retention times, the response factors, and the eluted band widths of six successive separations of the same sample (9 components) were measured with different equilibration times between 0 and 15 min. The influence of the length of the equilibration time on these reproducibilities is discussed. The results show that the average column temperature may increase from one separation to the next and that this contributes to fluctuation of the results.
Verstraeten, Ingrid M.; Steele, G.V.; Cannia, J.C.; Bohlke, J.K.; Kraemer, T.E.; Hitch, D.E.; Wilson, K.E.; Carnes, A.E.
2001-01-01
A study of the water resources of the Dutch Flats area in the western part of the North Platte Natural Resources District, western Nebraska, was conducted from 1995 through 1999 to describe the surface water and hydrogeology, the spatial distribution of selected water-quality constituents in surface and ground water, and the surface-water/ground-water interaction in selected areas. This report describes the selected field and analytical methods used in the study and selected analytical results from the study not previously published. Specifically, dissolved gases, age-dating data, and other isotopes collected as part of an intensive sampling effort in August and November 1998 and all uranium and uranium isotope data collected through the course of this study are included in the report.
Geodynamic simulations using the fast multipole boundary element method
NASA Astrophysics Data System (ADS)
Drombosky, Tyler W.
Interaction between viscous fluids models two important phenomena in geophysics: (i) the evolution of partially molten rocks, and (ii) the dynamics of Ultralow-Velocity Zones. Previous attempts to numerically model these behaviors have been plagued either by poor resolution at the fluid interfaces or by high computational costs. We employ the Fast Multipole Boundary Element Method, which tracks the evolution of the fluid interfaces explicitly and is scalable to large problems, to model these systems. The microstructure of partially molten rocks strongly influences their macroscopic physical properties. The fractional area of intergranular contact, contiguity, is a key parameter that controls the elastic strength of the grain network in the partially molten aggregate. We study the influence of matrix deformation on the contiguity of an aggregate by carrying out pure shear and simple shear deformations of an aggregate. We observe that the differential shortening, the normalized difference between the major and minor axes of grains, is inversely related to the ratio between the principal components of the contiguity tensor. From the numerical results, we calculate the seismic anisotropy resulting from melt redistribution during pure and simple shear deformation. During deformation, the melt is expelled from tubules along three-grain corners to films along grain edges. The initially isotropic fractional area of intergranular contact, contiguity, becomes anisotropic due to deformation. Consequently, the component of contiguity evaluated on the plane parallel to the axis of maximum compressive stress decreases. We demonstrate that the observed global shear wave anisotropy and shear wave speed reduction of the Lithosphere-Asthenosphere Boundary are best explained by 0.1 vol% partial melt distributed in horizontal films created by deformation. We use our microsimulation in conjunction with a large-scale deep-Earth mantle simulation to gain insight into the formation of
Analytical methods for gelatin differentiation from bovine and porcine origins and food products.
Nhari, Raja Mohd Hafidz Raja; Ismail, Amin; Che Man, Yaakob B
2012-01-01
Usage of gelatin in food products has been widely debated for several years, which is about the source of gelatin that has been used, religion, and health. As an impact, various analytical methods have been introduced and developed to differentiate gelatin whether it is made from porcine or bovine sources. The analytical methods comprise a diverse range of equipment and techniques including spectroscopy, chemical precipitation, chromatography, and immunochemical. Each technique can differentiate gelatins for certain extent with advantages and limitations. This review is focused on overview of the analytical methods available for differentiation of bovine and porcine gelatin and gelatin in food products so that new method development can be established.
Analytic solution for Telegraph equation by differential transform method
NASA Astrophysics Data System (ADS)
Biazar, J.; Eslami, M.
2010-06-01
In this article the differential transform method (DTM) is applied to solve the Telegraph equation. This method is a powerful tool for solving a large class of problems (Zhou (1986) [1], Chen and Ho (1999) [2], Jang et al. (2001) [3], Kangalgil and Ayaz (2009) [4], Ravi Kanth and Aruna (2009) [5], Arikoglu and Ozkol (2007) [6]). Using the differential transform method, it is possible to find the exact solution or a closed-form approximate solution of an equation. To illustrate the ability and reliability of the method some examples are provided. The results reveal that the method is very effective and simple.
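The article applies DTM to the Telegraph PDE; as a one-dimensional illustration of the same recurrence-and-resum pattern, consider y' = -y with y(0) = 1. The differential transform Y(k) (the Taylor coefficients) satisfies (k+1)·Y(k+1) = -Y(k), so the series can be generated and resummed directly. This simpler ODE is an assumption chosen for clarity, not an example from the paper.

```python
# DTM on the toy ODE y' = -y, y(0) = 1: the transformed equation gives
# the recurrence (k+1) Y(k+1) = -Y(k), whose resummed series is exp(-x).
import math

def dtm_coeffs(n_terms):
    Y = [1.0]                      # Y(0) = y(0)
    for k in range(n_terms - 1):
        Y.append(-Y[k] / (k + 1))  # from the transformed equation
    return Y

def resum(Y, x):
    """Evaluate the truncated transform series sum_k Y(k) x^k."""
    return sum(c * x ** k for k, c in enumerate(Y))

Y = dtm_coeffs(15)
print(abs(resum(Y, 1.0) - math.exp(-1.0)) < 1e-10)  # True: matches exp(-x)
```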
Šatínský, Dalibor; Chocholouš, Petr; Válová, Olga; Hanusová, Lucia; Solich, Petr
2013-09-30
This paper deals with a novel approach to separating two analytes with different chemical properties and very different lipophilicity. The newly described methodology is based on a two-column system used for the isocratic separation of two analytes with very different lipophilicity: dexamethasone and cinchocaine. Simultaneous separation of the model compounds cinchocaine and dexamethasone was carried out under the following conditions in a two-column sequential injection chromatography (2-C SIC) system. A 25×4.6 mm C-18 monolithic column was used in the first dimension for retention and separation of dexamethasone, with mobile phase acetonitrile:water 30:70 (v/v), flow rate 0.9 mL min(-1) and consumption of 1.7 mL. A 10×4.6 mm C-18 monolithic column with a 5×4.6 mm C-18 precolumn was used in the second dimension for retention and separation of cinchocaine, with mobile phase acetonitrile:water 60:40 (v/v), flow rate 0.9 mL min(-1) and consumption of 1.5 mL. The whole analysis, including both mobile-phase aspirations and both column separations, was performed in less than 4 min. The method was fully validated and used for the determination of cinchocaine and dexamethasone in pharmaceutical otic drops. The developed 2-C SIC method was compared with an HPLC method under isocratic conditions of separation on a monolithic column (25×4.6 mm C-18). Spectrophotometric detection of both compounds was performed at a wavelength of 240 nm. System repeatability and method precision were found in the range 0.39-3.12% for both compounds. Linearity was evaluated in the range 50-500 μg mL(-1), with coefficients of determination r(2)=0.99912 for dexamethasone and r(2)=0.99969 for cinchocaine.
de Andrade, Jucimara Kulek; de Andrade, Camila Kulek; Komatsu, Emy; Perreault, Hélène; Torres, Yohandra Reyes; da Rosa, Marcos Roberto; Felsner, Maria Lurdes
2017-08-01
Corn syrups, important ingredients used in the food and beverage industries, often contain high levels of 5-hydroxymethyl-2-furfural (HMF), a toxic contaminant. In this work, an in-house validation of a difference spectrophotometric method for HMF analysis in corn syrups was developed using sophisticated statistical tools for the first time. The methodology showed excellent analytical performance with good selectivity, linearity (R(2)=99.9%, r>0.99), accuracy and low limits (LOD=0.10 mg L(-1) and LOQ=0.34 mg L(-1)). Excellent precision was confirmed by repeatability (RSD=0.30%) and intermediate precision (RSD=0.36%) estimates and by the HorRat value (0.07). A detailed study of method precision using a nested design demonstrated that variation sources such as instruments, operators and time did not interfere in the variability of results within the laboratory and consequently in its intermediate precision. The developed method is environmentally friendly, fast, cheap and easy to implement, resulting in an attractive alternative for corn syrup quality control in industries and official laboratories.
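The HorRat check used in such single-laboratory validations compares the observed RSD to the Horwitz predicted RSD, PRSD(%) = 2·C^(-0.1505) with C a dimensionless mass fraction. The concentration below is an illustrative assumption (roughly 1 ppm), not the paper's working level, so the resulting HorRat differs from the reported 0.07.

```python
# HorRat = observed RSD / Horwitz predicted RSD. The concentration used
# here is an illustrative assumption, not the paper's working level.

def horwitz_prsd(c_mass_fraction):
    """Horwitz predicted RSD (%) for a dimensionless mass fraction C."""
    return 2.0 * c_mass_fraction ** -0.1505

def horrat(observed_rsd_pct, c_mass_fraction):
    return observed_rsd_pct / horwitz_prsd(c_mass_fraction)

c = 1.0e-6                            # ~1 ppm expressed as a mass fraction
print(round(horwitz_prsd(c), 1))      # 16.0 (% predicted RSD)
print(round(horrat(0.30, c), 3))      # 0.019: far below 1, very precise
```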
Fernandez-Delgado, Manuel; Ribeiro, Jorge; Cernadas, Eva; Ameneiro, Senén Barro
2011-11-01
Parallel perceptrons (PPs) are very simple and efficient committee machines (a single layer of perceptrons with threshold activation functions and binary outputs, and a majority voting decision scheme), which nevertheless behave as universal approximators. The parallel delta (P-Delta) rule is an effective training algorithm, which, following the ideas of statistical learning theory used by the support vector machine (SVM), raises its generalization ability by maximizing the difference between the perceptron activations for the training patterns and the activation threshold (which corresponds to the separating hyperplane). In this paper, we propose an analytical closed-form expression to calculate the PPs' weights for classification tasks. Our method, called Direct Parallel Perceptrons (DPPs), directly calculates (without iterations) the weights using the training patterns and their desired outputs, without any search or numeric function optimization. The calculated weights globally minimize an error function which simultaneously takes into account the training error and the classification margin. Given its analytical and noniterative nature, DPPs are computationally much more efficient than other related approaches (P-Delta and SVM), and its computational complexity is linear in the input dimensionality. Therefore, DPPs are very appealing, in terms of time complexity and memory consumption, and are very easy to use for high-dimensional classification tasks. On real benchmark datasets with two and multiple classes, DPPs are competitive with SVM and other approaches but they also allow online learning and, as opposed to most of them, have no tunable parameters.
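The DPP paper's exact closed form is not reproduced in the abstract; the general pattern it describes, computing classifier weights in one direct linear solve from the training patterns and targets, with no iterative search, can be sketched with a regularized least-squares fit to ±1 targets. This substitutes ridge least squares for the authors' margin-aware objective and is an illustration only.

```python
# Non-iterative weight computation sketch: w = (A^T A + lam I)^{-1} A^T y,
# a single linear solve (ridge least squares on +/-1 targets). This stands
# in for, but is not, the DPP closed form.
import numpy as np

def direct_weights(X, y, lam=1e-2):
    A = np.column_stack([X, np.ones(len(X))])   # absorb the bias term
    G = A.T @ A + lam * np.eye(A.shape[1])
    return np.linalg.solve(G, A.T @ y)          # no iterations, no search

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 2))
y = np.where(X[:, 0] + 0.5 * X[:, 1] > 0, 1.0, -1.0)  # separable labels

w = direct_weights(X, y)
scores = np.column_stack([X, np.ones(len(X))]) @ w
acc = np.mean(np.sign(scores) == y)
print(acc > 0.9)   # the direct solution separates the data well
```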
Application of an analytical method for solution of thermal hydraulic conservation equations
Fakory, M.R.
1995-09-01
An analytical method has been developed and applied for solution of two-phase flow conservation equations. The test results for application of the model for simulation of BWR transients are presented and compared with the results obtained from application of the explicit method for integration of conservation equations. The test results show that with application of the analytical method for integration of conservation equations, the Courant limitation associated with explicit Euler method of integration was eliminated. The results obtained from application of the analytical method (with large time steps) agreed well with the results obtained from application of explicit method of integration (with time steps smaller than the size imposed by Courant limitation). The results demonstrate that application of the analytical approach significantly improves the numerical stability and computational efficiency.
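The benefit described, removing the Courant-type step-size limit by integrating analytically, can be demonstrated on the scalar stiff test equation dy/dt = -λy: explicit Euler diverges once λ·Δt > 2, while the exact per-step update y ← y·exp(-λ·Δt) is unconditionally stable. The numbers below are illustrative, not BWR model values.

```python
# Explicit Euler vs. analytical per-step integration for dy/dt = -lam*y.
# With lam*dt = 5 (> 2), Euler is unstable; the exact step is not.
import math

lam, dt, steps = 100.0, 0.05, 50
y_euler = y_exact = 1.0
for _ in range(steps):
    y_euler += dt * (-lam * y_euler)   # explicit Euler step (amplifies)
    y_exact *= math.exp(-lam * dt)     # analytical integration of the step

print(abs(y_euler) > 1e10, y_exact < 1e-10)  # True True
```

The Euler iterate is multiplied by (1 - λΔt) = -4 each step, so it grows without bound, while the analytical update decays to zero regardless of Δt, which is the stability property the abstract attributes to the analytical method.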
Computational Neutronics Methods and Transmutation Performance Analyses for Fast Reactors
R. Ferrer; M. Asgari; S. Bays; B. Forget
2007-03-01
The once-through fuel cycle strategy in the United States for the past six decades has resulted in an accumulation of Light Water Reactor (LWR) Spent Nuclear Fuel (SNF). This SNF contains considerable amounts of transuranic (TRU) elements that limit the volumetric capacity of the current planned repository strategy. A possible way of maximizing the volumetric utilization of the repository is to separate the TRU from the LWR SNF through a process such as UREX+1a, and convert it into fuel for a fast-spectrum Advanced Burner Reactor (ABR). The key advantage in this scenario is the assumption that recycling of TRU in the ABR (through pyroprocessing or some other approach), along with a low capture-to-fission probability in the fast reactor’s high-energy neutron spectrum, can effectively decrease the decay heat and toxicity of the waste being sent to the repository. The decay heat and toxicity reduction can thus minimize the need for multiple repositories. This report summarizes the work performed by the fuel cycle analysis group at the Idaho National Laboratory (INL) to establish the specific technical capability for performing fast reactor fuel cycle analysis and its application to a high-priority ABR concept. The high-priority ABR conceptual design selected is a metallic-fueled, 1000 MWth SuperPRISM (S-PRISM)-based ABR with a conversion ratio of 0.5. Results from the analysis showed excellent agreement with reference values. The independent model was subsequently used to study the effects of excluding curium from the transuranic (TRU) external feed coming from the LWR SNF and recycling the curium produced by the fast reactor itself through pyroprocessing. Current studies to be published this year focus on analyzing the effects of different separation strategies as well as heterogeneous TRU target systems.
ERIC Educational Resources Information Center
Rayson, Gary D.
2004-01-01
A unifying modular approach to the description of modern instrumentation used in analytical measurements is described. Through the identification of five modules of an analytical instrument, discussions of the different physical and chemical interactions can be better directed towards the similarities and differences among the actual methods.
Tank 48H Waste Composition and Results of Investigation of Analytical Methods
Walker , D.D.
1997-04-02
This report serves two purposes. First, it documents the analytical results of Tank 48H samples taken between April and August 1996. Second, it describes investigations of the precision of the sampling and analytical methods used on the Tank 48H samples.
Statistically Qualified Neuro-Analytic system and Method for Process Monitoring
Vilim, Richard B.; Garcia, Humberto E.; Chen, Frederick W.
1998-11-04
An apparatus and method for monitoring a process involves development and application of a statistically qualified neuro-analytic (SQNA) model to accurately and reliably identify process change. The development of the SQNA model is accomplished in two steps: deterministic model adaptation and stochastic model adaptation. Deterministic model adaptation involves formulating an analytic model of the process representing known process characteristics, augmenting the analytic model with a neural network that captures unknown process characteristics, and training the resulting neuro-analytic model by adjusting the neural network weights according to a unique scaled equation error minimization technique. Stochastic model adaptation involves qualifying any remaining uncertainty in the trained neuro-analytic model by formulating a likelihood function, given an error propagation equation, for computing the probability that the neuro-analytic model generates the measured process output. Preferably, the developed SQNA model is validated using known sequential probability ratio tests and applied to the process as an on-line monitoring system.
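The sequential probability ratio test (SPRT) mentioned for validation can be sketched for Gaussian residuals: accumulate the log-likelihood ratio between "mean 0" (normal) and "mean μ₁" (faulted) and decide when Wald's bounds are crossed. The parameters and constant input sequences below are illustrative assumptions; in practice the samples would be measured model residuals.

```python
# Wald SPRT for a mean shift in unit-variance Gaussian residuals.
# Parameters and demo inputs are illustrative, not the patent's values.
import math

def sprt(samples, mu1=1.0, sigma=1.0, alpha=0.01, beta=0.01):
    a = math.log(beta / (1 - alpha))   # accept-H0 (normal) bound
    b = math.log((1 - beta) / alpha)   # accept-H1 (fault) bound
    llr = 0.0
    for i, x in enumerate(samples, 1):
        # log N(x; mu1, sigma) - log N(x; 0, sigma)
        llr += (mu1 * x - mu1 ** 2 / 2) / sigma ** 2
        if llr <= a:
            return ("normal", i)
        if llr >= b:
            return ("fault", i)
    return ("undecided", len(samples))

# Residuals stuck at 0 vs. shifted to 1: each decision after 10 samples.
print(sprt([0.0] * 50), sprt([1.0] * 50))  # ('normal', 10) ('fault', 10)
```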
An Analytical Method for Measuring Competence in Project Management
ERIC Educational Resources Information Center
González-Marcos, Ana; Alba-Elías, Fernando; Ordieres-Meré, Joaquín
2016-01-01
The goal of this paper is to present a competence assessment method in project management that is based on participants' performance and value creation. It seeks to close an existing gap in competence assessment in higher education. The proposed method relies on information and communication technology (ICT) tools and combines Project Management…
Fast Bundle-Level Type Methods for Unconstrained and Ball-Constrained Convex Optimization
2014-12-01
It has been shown in [14] that the accelerated prox-level (APL) method and its variant, the uniform smoothing level (USL) method...introduce two new variants of level methods, i.e., the fast APL (FAPL) method and the fast USL (FUSL) method, for solving large-scale black-box and structured convex programming problems, respectively. Both FAPL and FUSL enjoy the same optimal iteration complexity as APL and USL, while the number of
Ho, T.-S.; Rabitz, H.; Aoiz, F. J.; Banares, L.; Vazquez, S. A.; Harding, L. B.; Chemistry; Princeton Univ.; Univ. Complutense
2003-08-08
A new implementation is presented for the potential energy surface (PES) of the 1²A′ state of the N(²D) + H₂ system based on a set of 2715 ab initio points resulting from multireference configuration interaction (MRCI) calculations. The implementation is carried out using the reproducing kernel Hilbert space interpolation method. Range parameters, via bond-order-like coordinates, are properly chosen to render a sufficiently short-range three-body interaction, and a regularization procedure is invoked to yield a globally smooth PES. A fast algorithm, with the help of low-order spline reproducing kernels, is implemented for the computation of the PES and, particularly, its gradients, whose fast evaluation is essential for large scale quasi-classical trajectory calculations. It is found that the new PES can be evaluated more than ten times faster than an existing (old) PES based on a smaller number (1141) of data points resulting from the same MRCI calculations and a similar interpolation procedure. Although there is generally good correspondence between the two surfaces, the new PES is in much better agreement with the ab initio calculations, especially in key stationary point regions including the C₂ᵥ minimum, the C₂ᵥ transition state, and the N-H-H linear barrier. Moreover, the new PES is free of spurious small scale features. Analytic gradients are made available in the new PES code to further facilitate quasi-classical trajectory calculations, which have been performed and compared with the results based on the old surface.
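The reproducing kernel interpolation used for the PES fit can be sketched in its simplest form: expand the surface as f(x) = Σᵢ aᵢ k(x, xᵢ) and solve a linear system for the coefficients. A minimal 1D illustration with a hypothetical toy potential and a simple Laplacian kernel (not the paper's spline kernels or bond-order coordinates):

```python
import numpy as np

def rkhs_interpolate(x_train, y_train, kernel, reg=1e-12):
    """Fit an RKHS interpolant f(x) = sum_i a_i * k(x, x_i) to scattered data.
    `reg` is a tiny ridge term playing the role of the regularization
    procedure mentioned for the PES fit."""
    K = kernel(x_train[:, None], x_train[None, :])
    a = np.linalg.solve(K + reg * np.eye(len(x_train)), y_train)
    return lambda x: kernel(x[:, None], x_train[None, :]) @ a

# toy potential sampled at a few "ab initio" points (hypothetical data)
kernel = lambda u, v: np.exp(-np.abs(u - v))        # a simple reproducing kernel
x = np.linspace(0.5, 3.0, 12)
y = (1.0 / x) ** 12 - 2.0 * (1.0 / x) ** 6          # Lennard-Jones-like curve
f = rkhs_interpolate(x, y, kernel)
print(np.allclose(f(x), y, atol=1e-6))              # interpolant passes through data
```

Because the interpolant is a finite sum of known kernels, analytic gradients follow directly by differentiating k, which is what makes fast gradient evaluation possible for trajectory calculations.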
Kramberger, Petra; Urbas, Lidija; Štrancar, Aleš
2015-01-01
Downstream processing of nanoplexes (viruses, virus-like particles, bacteriophages) is characterized by complexity of the starting material, number of purification methods to choose from, regulations that are setting the frame for the final product and analytical methods for upstream and downstream monitoring. This review gives an overview on the nanoplex downstream challenges and chromatography based analytical methods for efficient monitoring of the nanoplex production. PMID:25751122
Analytical methods to determine phosphonic and amino acid group-containing pesticides.
Stalikas, C D; Konidari, C N
2001-01-12
A comprehensive view on the possibilities of the most recently developed chromatographic methods and emerging techniques in the analysis of pesticides glyphosate, glufosinate, bialaphos and their metabolites is presented. The state-of-the-art of the individual pre-treatment steps (extraction, pre-concentration, clean-up, separation, quantification) of the employed analytical methods for this group of chemicals is reviewed. The advantages and drawbacks of the described analytical methods are discussed and the present status and future trends are outlined.
Supak Smolcic, Vesna; Bilic-Zulle, Lidija; Fisic, Elizabeta
2011-01-01
The Cobas 6000 (Roche, Germany) is a biochemistry analyzer for spectrophotometric, immunoturbidimetric and ion-selective determination of biochemical analytes. We present an analytical validation with emphasis on judging method performance for routine operation. Validation was performed for 30 analytes (metabolites, enzymes, trace elements, specific proteins and electrolytes). The study included determination of within-run (N = 20) and between-run imprecision (N = 30), inaccuracy (N = 30) and method comparison with the routine analyzer (Beckman Coulter AU640) (N = 50). For validation of the complete analytical process we calculated total error (TE). Results were judged against the quality specification criteria given by the European Working Group. Within-run imprecision CVs were all below 5%, except for cholesterol, triglycerides, IgA and IgM. Between-run CVs for all analytes were below 10%. Analytes that did not meet the required specifications for imprecision were total protein, albumin, calcium, sodium, chloride, immunoglobulins and HDL cholesterol. Analytes that did not fulfill the requirements for inaccuracy were total protein, calcium, sodium and chloride. Analytes that deviated from the quality specifications for total error were total protein, albumin, calcium, sodium, chloride and IgM. Passing-Bablok regression analysis provided the linear equation and 95% confidence intervals for intercept and slope. Only a small number of analytes showed complete accordance with the routine analyzer Beckman Coulter AU640; the other analytes showed a small proportional and/or small constant difference and therefore need to be adjusted for routine operation. Given the low CV values, the tested analyzer has satisfactory accuracy and precision and is extremely stable. Except for the analytes that agree on both analyzers, some analytes require adjustments of slope and intercept for complete accordance.
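Total error for a complete analytical process is commonly combined from inaccuracy and imprecision as TE = |bias| + 1.65·CV (the Westgard model); a minimal sketch under that assumption, with hypothetical replicate data:

```python
import statistics

def validation_summary(measured, target):
    """Within-run imprecision (CV%), inaccuracy (bias%), and total error (TE%)
    for one analyte, using the common model TE = |bias| + 1.65*CV."""
    mean = statistics.mean(measured)
    cv = 100 * statistics.stdev(measured) / mean   # imprecision, %
    bias = 100 * (mean - target) / target          # inaccuracy, %
    te = abs(bias) + 1.65 * cv                     # total error, %
    return cv, bias, te

# hypothetical replicate measurements of a control with target 5.0 mmol/L
replicates = [4.9, 5.1, 5.0, 4.8, 5.2, 5.0, 4.9, 5.1, 5.0, 5.0]
cv, bias, te = validation_summary(replicates, target=5.0)
print(f"CV={cv:.2f}%  bias={bias:.2f}%  TE={te:.2f}%")
```

Each analyte's CV, bias, and TE would then be compared against the corresponding quality specification limits.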
Manual of analytical methods for the Industrial Hygiene Chemistry Laboratory
Greulich, K.A.; Gray, C.E.
1991-08-01
This Manual is compiled from techniques used in the Industrial Hygiene Chemistry Laboratory of Sandia National Laboratories in Albuquerque, New Mexico. The procedures are similar to those used in other laboratories devoted to industrial hygiene practices. Some of the methods are standard; some, modified to suit our needs; and still others, developed at Sandia. The authors have attempted to present all methods in a simple and concise manner but in sufficient detail to make them readily usable. It is not to be inferred that these methods are universal for any type of sample, but they have been found very reliable for the types of samples mentioned.
EVALUATION OF ANALYTICAL METHODS FOR DETERMINING PESTICIDES IN BABY FOOD
Three extraction methods and two detection techniques for determining pesticides in baby food were evaluated. The extraction techniques examined were supercritical fluid extraction (SFE), enhanced solvent extraction (ESE), and solid phase extraction (SPE). The detection techni...
Burtis, Carl A.; Johnson, Wayne F.; Walker, William A.
1988-01-01
A rotor and disc assembly for use in a centrifugal fast analyzer. The assembly is designed to process multiple samples of whole blood followed by aliquoting of the resultant serum into precisely measured samples for subsequent chemical analysis. The assembly requires minimal operator involvement with no mechanical pipetting. The system comprises (1) a whole blood sample disc, (2) a serum sample disc, (3) a sample preparation rotor, and (4) an analytical rotor. The blood sample disc and serum sample disc are designed with a plurality of precision bore capillary tubes arranged in a spoked array. Samples of blood are loaded into the blood sample disc in capillary tubes filled by capillary action and centrifugally discharged into cavities of the sample preparation rotor where separation of serum and solids is accomplished. The serum is loaded into the capillaries of the serum sample disc by capillary action and subsequently centrifugally expelled into cuvettes of the analytical rotor for analysis by conventional methods.
A Generalized Pivotal Quantity Approach to Analytical Method Validation Based on Total Error.
Yang, Harry; Zhang, Jianchun
2015-01-01
The primary purpose of method validation is to demonstrate that the method is fit for its intended use. Traditionally, an analytical method is deemed valid if its performance characteristics such as accuracy and precision are shown to meet prespecified acceptance criteria. However, these acceptance criteria are not directly related to the method's intended purpose, which is usually a guarantee that a high percentage of the test results of future samples will be close to their true values. Alternate "fit for purpose" acceptance criteria based on the concept of total error have been increasingly used. Such criteria allow for assessing method validity, taking into account the relationship between accuracy and precision. Although several statistical test methods have been proposed in the literature to test the "fit for purpose" hypothesis, the majority of the methods are not designed to protect against the risk of accepting unsuitable methods, and thus have the potential to cause uncontrolled consumer's risk. In this paper, we propose a test method based on generalized pivotal quantity inference. Through simulation studies, the performance of the method is compared to five existing approaches. The results show that both the new method and the method based on a β-content tolerance interval with a confidence level of 90%, hereafter referred to as the β-content (0.9) method, control Type I error and thus consumer's risk, while the other existing methods do not. It is further demonstrated that the generalized pivotal quantity method is less conservative than the β-content (0.9) method when the analytical methods are biased, whereas it is more conservative when the analytical methods are unbiased. Therefore, selection of either the generalized pivotal quantity or β-content (0.9) method for an analytical method validation depends on the accuracy of the analytical method. It is also shown that the generalized pivotal quantity method has better asymptotic properties than all of the current
Ortega, Alejandra; Tong, Ling; D'hooge, Jan
2014-04-01
Essential to (cardiac) 3D ultrasound are 2D matrix array transducer technology and the associated (two-stage) beam forming. Given the large number of degrees of freedom and the complexity of this problem, simulation tools play an important role. Hereto, the impulse response (IR) method is commonly used. Unfortunately, given the large element count of 2D array transducers, simulation times become significant, jeopardizing the efficacy of the design process. The aim of this study was therefore to derive a new analytical expression to more efficiently calculate the IR in order to speed up the calculation process. To compare accuracy and computation time, the reference and the proposed method were implemented in MATLAB and contrasted. For all points of observation tested, the IR with both methods was identical. The mean calculation time, however, was reduced on average by a factor of 3.93±0.03. The proposed IR method therefore speeds up the calculation of the IR of an individual transducer element while remaining perfectly accurate. This new expression will be particularly relevant for 2D matrix transducer design, where computation times currently remain a bottleneck in the design process. Copyright © 2014 Elsevier B.V. All rights reserved.
A fast LC-APCI/MS method for analyzing benzodiazepines in whole blood using monolithic support.
Bugey, Aurélie; Rudaz, Serge; Staub, Christian
2006-03-07
A simple and fast procedure was developed for the simultaneous determination of eight benzodiazepines (BZDs) in whole blood using liquid chromatography-atmospheric pressure chemical ionization-mass spectrometry (LC-APCI-MS). Sample pretreatment was carried out using a simple liquid-liquid extraction (LLE) with n-butylchloride, and chromatographic separation was performed using a monolithic silica column to speed up the analytical process. APCI and electrospray ionization (ESI) were compared. Whereas both ionization techniques appeared suitable for BZDs, APCI was found to be slightly more sensitive, especially for the determination of frequently low-dosed compounds. The method was validated according to the guidelines of the "Société Française des Sciences et Techniques Pharmaceutiques" (SFSTP) in the concentration range of 2.5-500 microg/L. The limit of quantification (LOQ) was 2.5 microg/L for all the compounds. Validation data including linearity, precision, and trueness were obtained, allowing subtherapeutic quantification of frequently low-dosed BZDs. The high selectivity of the mass spectrometer, along with the properties of the monolithic support, allowed unequivocal analysis of the eight compounds in less than 5 min. To demonstrate the potential of the method, it was used for the analysis of benzodiazepines in postmortem blood samples.
Hyperspectral imaging based method for fast characterization of kidney stone types
NASA Astrophysics Data System (ADS)
Blanco, Francisco; López-Mesas, Montserrat; Serranti, Silvia; Bonifazi, Giuseppe; Havel, Josef; Valiente, Manuel
2012-07-01
The formation of kidney stones is a common and highly studied disease, which causes intense pain and has a high rate of recurrence. In order to find the causes of this problem, the characterization of the main compounds is of great importance. In this sense, the analysis of the composition and structure of the stone can give key information about the urine parameters during crystal growth. But the usual methods employed are slow, analyst-dependent, and yield limited information. In the present work, the near-infrared (NIR) hyperspectral imaging technique was used for the analysis of 215 samples of kidney stones, including the main types usually found and their mixtures. The NIR reflectance spectra of the analyzed stones showed significant differences that were used for their classification. To do so, a method was created using artificial neural networks, which showed a probability higher than 90% of correct classification of the stones. The promising results, robust methodology, and fast analytical process, without the need for expert assistance, allow easy implementation in clinical laboratories, offering the urologist a rapid diagnosis that should help minimize urolithiasis recurrence.
Directivity analysis of meander-line-coil EMATs with a wholly analytical method.
Xie, Yuedong; Liu, Zenghua; Yin, Liyuan; Wu, Jiande; Deng, Peng; Yin, Wuliang
2017-01-01
This paper presents a simulation and experimental study of the radiation pattern of a meander-line-coil EMAT. A wholly analytical method, which couples two models, an analytical EM model and an analytical UT model, has been developed to build EMAT models and analyse the beam directivity of the Rayleigh waves. For a specific sensor configuration, Lorentz forces are calculated using the analytical EM method, which is adapted from the classic Deeds and Dodd solution. The calculated Lorentz force densities are imported into an analytical ultrasonic model as driving point sources, which produce the Rayleigh waves within a layered medium. The effect of the length of the meander-line coil on the beam directivity of the Rayleigh waves is analysed quantitatively and verified experimentally. Copyright © 2016 Elsevier B.V. All rights reserved.
Technological and Analytical Methods for Arabinoxylan Quantification from Cereals.
Döring, Clemens; Jekle, Mario; Becker, Thomas
2016-01-01
Arabinoxylan (AX) is the major nonstarch polysaccharide contained in various types of grains. AX consists of a backbone of β-(1,4)-D-xylopyranosyl residues with randomly linked α-L-arabinofuranosyl units. Once isolated and included as a food additive, AX affects foodstuff attributes and has positive effects on human health. AX can be classified into water-extractable and water-unextractable AX. For isolating AX from its natural matrix, a range of methods has been developed, adapted, and improved. This review presents a survey of the extraction methods commonly used for AX and the influence of the different techniques. It also provides a brief overview of the structural and technological impact of AX as a dough additive. A concluding section summarizes different detection methods for analyzing and quantifying AX.
An analytical method to predict efficiency of aircraft gearboxes
NASA Technical Reports Server (NTRS)
Anderson, N. E.; Loewenthal, S. H.; Black, J. D.
1984-01-01
A spur gear efficiency prediction method previously developed by the authors was extended to include power loss of planetary gearsets. A friction coefficient model was developed for MIL-L-7808 oil based on disc machine data. This combined with the recent capability of predicting losses in spur gears of nonstandard proportions allows the calculation of power loss for complete aircraft gearboxes that utilize spur gears. The method was applied to the T56/501 turboprop gearbox and compared with measured test data. Bearing losses were calculated with large scale computer programs. Breakdowns of the gearbox losses point out areas for possible improvement.
A Parallel QR Method Using Fast Givens’ Rotations.
1984-01-01
...fill-in ... elimination of ... Sameh and Kuck [SK78] proposed a simple elimination strategy which, subject to (P2) below, adheres to regular... matrices Q(k) of order 2 are equal to standard Givens' rotations. Applying 2×2 matrices ... to band matrices in the order prescribed by Sameh and Kuck... Rath, W., "Fast Givens Rotations for Orthogonal Similarity Transformations," Num. Math. 40, 47-56 (1982). [SK78] Sameh, A.H. and Kuck, D.J., "On
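The standard Givens' rotations referenced in the fragment above zero one subdiagonal entry at a time; a minimal plain (sequential, non-fast) QR sketch for illustration, not the paper's parallel fast-Givens variant:

```python
import math

def givens(a, b):
    """Return (c, s) such that [[c, s], [-s, c]] @ [a, b]^T = [r, 0]^T."""
    if b == 0.0:
        return 1.0, 0.0
    r = math.hypot(a, b)
    return a / r, b / r

def qr_givens(A):
    """QR factorization of a square matrix via plain Givens rotations.
    Returns (Q, R) as lists of lists, with A = Q @ R and R upper triangular."""
    n = len(A)
    R = [row[:] for row in A]
    Q = [[float(i == j) for j in range(n)] for i in range(n)]
    for j in range(n):
        for i in range(n - 1, j, -1):          # zero R[i][j] against R[i-1][j]
            c, s = givens(R[i - 1][j], R[i][j])
            for k in range(n):                 # rotate the two affected rows of R
                R[i - 1][k], R[i][k] = (c * R[i - 1][k] + s * R[i][k],
                                        -s * R[i - 1][k] + c * R[i][k])
            for k in range(n):                 # accumulate Q = G1^T ... Gm^T
                Q[k][i - 1], Q[k][i] = (c * Q[k][i - 1] + s * Q[k][i],
                                        -s * Q[k][i - 1] + c * Q[k][i])
    return Q, R

A = [[4.0, 1.0], [3.0, 2.0]]
Q, R = qr_givens(A)
print(R)  # upper triangular; subdiagonal entry eliminated
```

The parallel idea is that rotations touching disjoint row pairs commute, so many entries can be eliminated simultaneously; the "fast" variant additionally avoids the square roots by carrying diagonal scaling factors.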
A fast collocation method for a variable-coefficient nonlocal diffusion model
NASA Astrophysics Data System (ADS)
Wang, Che; Wang, Hong
2017-02-01
We develop a fast collocation scheme for a variable-coefficient nonlocal diffusion model, for which a numerical discretization would yield a dense stiffness matrix. The development of the fast method is achieved by carefully handling the variable coefficients appearing inside the singular integral operator and exploiting the structure of the dense stiffness matrix. The resulting fast method reduces the computational work from O(N³), as required by a commonly used direct solver, to O(N log N) per iteration, and the memory requirement from O(N²) to O(N). Furthermore, the fast method reduces the computational work of assembling the stiffness matrix from O(N²) to O(N). Numerical results are presented to show the utility of the fast method.
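Fast methods of this kind typically obtain the O(N log N) matrix-vector product by exploiting Toeplitz-like structure in the dense stiffness matrix and multiplying via FFT after circulant embedding. A minimal sketch of that building block for a pure Toeplitz matrix (an illustration of the general technique, not the paper's variable-coefficient scheme):

```python
import numpy as np

def toeplitz_matvec(col, row, x):
    """Multiply a Toeplitz matrix (first column `col`, first row `row`)
    by vector x in O(N log N), via embedding into a 2N-point circulant."""
    n = len(x)
    # first column of the circulant embedding: [col, 0, row reversed (no row[0])]
    c = np.concatenate([col, [0.0], row[-1:0:-1]])
    xp = np.concatenate([x, np.zeros(n)])
    y = np.fft.ifft(np.fft.fft(c) * np.fft.fft(xp)).real
    return y[:n]

# check against the dense O(N^2) product
n = 64
col = np.random.default_rng(1).standard_normal(n)
row = np.concatenate([[col[0]], np.random.default_rng(2).standard_normal(n - 1)])
T = np.array([[col[i - j] if i >= j else row[j - i] for j in range(n)]
              for i in range(n)])
x = np.arange(n, dtype=float)
print(np.allclose(T @ x, toeplitz_matvec(col, row, x)))  # True
```

Inside a Krylov iteration, replacing the dense product with this FFT-based one is what brings the per-iteration cost down from O(N²) to O(N log N), and only the defining column and row need to be stored, hence O(N) memory.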
40 CFR 136.6 - Method modifications and analytical requirements.
Code of Federal Regulations, 2014 CFR
2014-07-01
... defines phenolics as ferric iron oxidized compounds that react with 4-aminoantipyrine (4-AAP) at pH 10... functions include chloride by SM4500-Cl-E-1997, hardness by EPA Method 130.1, cyanide by ASTM D6888 or...
40 CFR 136.6 - Method modifications and analytical requirements.
Code of Federal Regulations, 2013 CFR
2013-07-01
... defines phenolics as ferric iron oxidized compounds that react with 4-aminoantipyrine (4-AAP) at pH 10... functions include chloride by SM4500-Cl-E-1997, hardness by EPA Method 130.1, cyanide by ASTM D6888 or...
40 CFR 136.6 - Method modifications and analytical requirements.
Code of Federal Regulations, 2012 CFR
2012-07-01
... defines phenolics as ferric iron oxidized compounds that react with 4-aminoantipyrine (4-AAP) at pH 10... functions include chloride by SM4500-Cl-E-1997, hardness by EPA Method 130.1, cyanide by ASTM D6888 or...
Teaching Analytical Method Development in an Undergraduate Instrumental Analysis Course
ERIC Educational Resources Information Center
Lanigan, Katherine C.
2008-01-01
Method development and assessment, central components of carrying out chemical research, require problem-solving skills. This article describes a pedagogical approach for teaching these skills through the adaptation of published experiments and application of group-meeting style discussions to the curriculum of an undergraduate instrumental…
Varying Coefficient Meta-Analytic Methods for Alpha Reliability
ERIC Educational Resources Information Center
Bonett, Douglas G.
2010-01-01
The conventional fixed-effects (FE) and random-effects (RE) confidence intervals that are used to assess the average alpha reliability across multiple studies have serious limitations. The FE method, which is based on a constant coefficient model, assumes equal reliability coefficients across studies and breaks down under minor violations of this…
Methods and Techniques for Clinical Text Modeling and Analytics
ERIC Educational Resources Information Center
Ling, Yuan
2017-01-01
This study focuses on developing and applying methods/techniques in different aspects of the system for clinical text understanding, at both corpus and document level. We deal with two major research questions: First, we explore the question of "How to model the underlying relationships from clinical notes at corpus level?" Documents…
Single Subject Research: A Synthesis of Analytic Methods
ERIC Educational Resources Information Center
Alresheed, Fahad; Hott, Brittany L.; Bano, Carmen
2013-01-01
Historically, the synthesis of single subject design has employed visual inspection to determine the significance of results. However, current research supports different techniques that facilitate the interpretation of these intervention outcomes. These methods can provide more reliable data than visual inspection employed in isolation. This…
An Analytical Method of Identifying Biased Test Items.
ERIC Educational Resources Information Center
Plake, Barbara S.; Hoover, H. D.
1979-01-01
A follow-up technique is needed to identify items contributing to items-by-groups interaction when using an ANOVA procedure to examine a test for biased items. The method described includes distribution theory for assessing level of significance and is sensitive to items at all difficulty levels. (Author/GSK)
Field Sampling and Selecting On-Site Analytical Methods for Explosives in Soil
The purpose of this issue paper is to provide guidance to Remedial Project Managers regarding field sampling and on-site analytical methods for detecting and quantifying secondary explosive compounds in soils.
The Superfund Innovative Technology Evaluation (SITE) Program evaluates new technologies to assess their effectiveness. This bulletin summarizes results from the 1993 SITE demonstration of the Field Analytical Screening Program (FASP) Pentachlorophenol (PCP) Method to determine P...
EVALUATION OF ANALYTICAL REPORTING ERRORS GENERATED AS DESCRIBED IN SW-846 METHOD 8261A
SW-846 Method 8261A incorporates the vacuum distillation of analytes from samples, and their recoveries are characterized by internal standards. The internal standards measure recoveries with confidence intervals as functions of physical properties. The frequency the calculate...
Method for using fast fluidized bed dry bottom coal gasification
Snell, George J.; Kydd, Paul H.
1983-01-01
Carbonaceous solid material such as coal is gasified in a fast fluidized bed gasification system utilizing dual fluidized beds of hot char. The coal in particulate form is introduced along with oxygen-containing gas and steam into the fast fluidized bed gasification zone of a gasifier assembly, wherein the upward superficial gas velocity exceeds about 5.0 ft/sec and the temperature is 1500-1850 °F. The resulting effluent gas and substantial char are passed through a primary cyclone separator, from which char solids are returned to the fluidized bed. Gas from the primary cyclone separator is passed to a secondary cyclone separator, from which remaining fine char solids are returned through an injection nozzle, together with additional steam and oxygen-containing gas, to an oxidation zone located at the bottom of the gasifier, wherein the upward gas velocity ranges from about 3-15 ft/sec and the temperature is maintained at 1600-2000 °F. This gasification arrangement provides for increased utilization of the secondary char material to produce higher overall carbon conversion and product yields in the process.
Methodology for the validation of analytical methods involved in uniformity of dosage units tests.
Rozet, E; Ziemons, E; Marini, R D; Boulanger, B; Hubert, Ph
2013-01-14
Validation of analytical methods is required prior to their routine use. In addition, the current implementation of the Quality by Design (QbD) framework in the pharmaceutical industry aims at improving the quality of the end products starting from the early design stage. However, neither regulatory guidelines nor the published methodologies for assessing method validation propose decision rules that effectively take into account the final purpose of developed analytical methods. In this work a solution is proposed for the specific case of validating analytical methods involved in the assessment of the content uniformity or uniformity of dosage units of a batch of pharmaceutical drug products, as proposed in the European or US pharmacopoeias. This methodology uses statistical tolerance intervals as decision tools. Moreover, it adequately defines the Analytical Target Profile of analytical methods in order to obtain analytical methods that allow correct decisions to be made about content uniformity or uniformity of dosage units with high probability. The applicability of the proposed methodology is further illustrated using an HPLC-UV assay as well as a near-infrared spectrophotometric method.
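A β-content tolerance interval of the kind used as a decision tool here takes the form mean ± k·s; a minimal sketch computing k with Howe's textbook approximation and a Wilson-Hilferty chi-square quantile (both standard approximations, not the authors' exact procedure):

```python
import math
from statistics import NormalDist

def howe_k2(n, coverage=0.90, confidence=0.95):
    """Approximate two-sided beta-content tolerance factor (Howe, 1969).
    The interval mean +/- k*s covers `coverage` of the population with
    probability `confidence`; chi-square quantile via Wilson-Hilferty."""
    nu = n - 1
    z_cov = NormalDist().inv_cdf((1 + coverage) / 2)
    z_conf = NormalDist().inv_cdf(1 - confidence)   # lower-tail quantile
    # Wilson-Hilferty approximation to the (1 - confidence) chi-square quantile
    chi2 = nu * (1 - 2 / (9 * nu) + z_conf * math.sqrt(2 / (9 * nu))) ** 3
    return z_cov * math.sqrt(nu * (1 + 1 / n) / chi2)

# e.g. 20 assay results: the interval mean +/- k*s is compared
# to the acceptance limits for content uniformity
k = howe_k2(20, coverage=0.90, confidence=0.95)
print(round(k, 2))  # 2.31 (tabulated value ~2.310)
```

The decision rule is then simple: the method (or batch) passes if the whole tolerance interval falls inside the specification limits.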
Wahl, Claudia; Hirtz, Dennis; Elling, Lothar
2016-10-01
Nucleotide sugars are considered bottleneck and expensive substrates for enzymatic glycan synthesis using Leloir glycosyltransferases. Synthesis from cheap substrates such as monosaccharides is accomplished by multi-enzyme cascade reactions. Optimizing product yields in such enzyme modules depends on the interplay of multiple parameters of the individual enzymes and entails considerable time and effort when conventional analytical methods like capillary electrophoresis (CE) or HPLC are applied. We here demonstrate for the first time multiplexed CE (MP-CE) as a fast analytical tool for the optimization of nucleotide sugar synthesis with multi-enzyme cascade reactions. We introduce a universal separation method for nucleotides and nucleotide sugars, enabling us to analyze the composition of six different enzyme modules in a high-throughput format. Optimization of parameters (temperature, pH, inhibitors, kinetics, cofactors and enzyme amount) employing MP-CE analysis is demonstrated for enzyme modules for the synthesis of UDP-α-D-glucuronic acid (UDP-GlcA) and UDP-α-D-galactose (UDP-Gal). In this way we achieve high space-time yields: 1.8 g/(L·h) for UDP-GlcA and 17 g/(L·h) for UDP-Gal. The presented MP-CE methodology has the potential to serve as a general analytical tool for fast optimization of multi-enzyme cascade reactions.
An Improved Analytical Method for Atmospheric Chlorides in Tropic Tests
1975-07-01
Keywords: Chloride Electrode; Chloride Analysis Methodology; Tropic Regions; Diphenylcarbazone-Bromphenol Blue; Panama Canal Zone; Tropic Test Center; Salt; Wet Candle. ...ambient salt has been measured for corrosion studies by wet-candle sampling and determining water-soluble chlorides by manual mercuric nitrate titration... of total chloride in wet-candle samplers. For the past 8 years atmospheric salt has been measured at tropic test sites by the wet-candle method
Advanced and In Situ Analytical Methods for Solar Fuel Materials.
Chan, Candace K; Tüysüz, Harun; Braun, Artur; Ranjan, Chinmoy; La Mantia, Fabio; Miller, Benjamin K; Zhang, Liuxian; Crozier, Peter A; Haber, Joel A; Gregoire, John M; Park, Hyun S; Batchellor, Adam S; Trotochaud, Lena; Boettcher, Shannon W
2016-01-01
In situ and operando techniques can play important roles in the development of better performing photoelectrodes, photocatalysts, and electrocatalysts by helping to elucidate crucial intermediates and mechanistic steps. The development of high throughput screening methods has also accelerated the evaluation of relevant photoelectrochemical and electrochemical properties for new solar fuel materials. In this chapter, several in situ and high throughput characterization tools are discussed in detail along with their impact on our understanding of solar fuel materials.
Development of an Analytical Method for Explosive Residues in Soil,
1987-06-01
...the most popular approaches have relied on either gas chromatography (GC) using electron capture (ECD), thermal energy analyzer (TEA), or mass spectro... the sonic bath method was superior to a shaking procedure using acetone, although it is unclear... Johnsen and Starr (1972) also compared the extrac... and Richard (1986) studied the efficiency of extraction of polycyclic organics from spiked..., based on batch ultrasonic agitation and Soxhlet extraction
[Development of an analytical method for the determination of nicotine metabolites in urine].
Piekoszewski, Wojciech; Florek, Ewa; Kulza, Maksymilian; Wilimowska, Jolanta; Loba, Urszula
2009-01-01
The assay of biomarkers in biological material is the most popular and reliable way to estimate exposure to tobacco smoke. Nicotine and its metabolites are among the most specific biomarkers of tobacco smoke; currently the most often used are cotinine and trans-3'-hydroxycotinine. The aim of this study was to develop an easy and quick method for determining nicotine and its main metabolites by high performance liquid chromatography, a technique available in most laboratories. Nicotine and its metabolites in urine (cotinine, trans-3'-hydroxycotinine, nornicotine and nicotine N-oxide) were determined by high performance liquid chromatography with spectrophotometric detection (HPLC-UV). The compounds were extracted from urine by the liquid-liquid technique before HPLC analysis. The developed high performance liquid chromatography technique proved useful for assessing nicotine and its four metabolites in smokers, though further research is necessary. Further modification of the procedure is required, because interference between cotinine N-oxide and the matrix prevents its determination. Increasing the extraction efficiency for nicotine and nornicotine could enable determination in people exposed to environmental tobacco smoke (ETS). This study confirms other authors' observations that trans-3'-hydroxycotinine might be a predictor of tobacco smoke exposure equivalent to cotinine; however, further studies are required.
Concordance of seabird population parameters: Analytical methods and interpretation
Hatch, Scott A.
1996-01-01
In an ecological context, concordance may be defined as the tendency for paired values of some parameter, such as the annual productivity of bird species, to show similar directions and magnitudes of deviation from the mean. Where concordance among populations is high, there is an implied similarity of the ecological factors affecting performance. Conversely, if populations behave discordantly, dissimilarity of underlying ecological factors is likely. In evaluating birds as indicators of the marine environment, the biologist typically is confronted with a three-dimensional array of observations (species, areas, and years) in which there are more missing values than filled cells. This frustrates attempts to analyze concordance using existing methods (e.g., Kendall's coefficient, or correlation combined with cluster analysis), which are either impossible or potentially misleading to apply to incomplete data sets. I suggest an alternative method for analyzing concordance that makes maximal use of available data. For a given data set partitioned into the smallest units containing information about concordance, one computes an index of concordance using a regression approach and tests for significance using randomization methods. This procedure would seem to have wide application to ecological studies generally and to seabird monitoring in particular.
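The index-plus-randomization procedure sketched in the abstract can be illustrated in a few lines. This is a minimal sketch, not Hatch's actual index: here the index is a plain Pearson correlation of paired annual deviations, and the function names, permutation count, and one-sided test are illustrative assumptions.

```python
import random

def concordance_index(a, b):
    """Correlation of paired deviations from each series' mean.

    Stands in for the regression-based index described in the abstract;
    the exact index used in the paper may differ.
    """
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    da = [x - ma for x in a]
    db = [y - mb for y in b]
    num = sum(x * y for x, y in zip(da, db))
    den = (sum(x * x for x in da) * sum(y * y for y in db)) ** 0.5
    return num / den

def randomization_pvalue(a, b, n_perm=5000, seed=0):
    """One-sided randomization test: how often does a random re-pairing
    of years give an index at least as large as the observed one?"""
    rng = random.Random(seed)
    observed = concordance_index(a, b)
    b = list(b)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(b)
        if concordance_index(a, b) >= observed:
            count += 1
    return (count + 1) / (n_perm + 1)
```

Two colonies whose annual productivity deviates from the mean in the same direction and by similar amounts yield an index near 1 and a small p-value; the randomization step needs no complete species-by-area-by-year array, only the paired values actually observed.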
Analytical methods applied to diverse types of Brazilian propolis
2011-01-01
Propolis is a bee product composed mainly of plant resins and beeswax; its chemical composition therefore varies with the geographic and plant origins of these resins, as well as with the species of bee. Brazil is an important supplier of propolis on the world market and, although the green propolis from the southeast is the best known and most studied, several other types of propolis from Apis mellifera and native stingless bees (also called cerumen) can be found. Propolis is usually consumed as an extract, so the type of solvent and the extraction procedures employed further affect its composition. Methods used for extraction; analysis of the percentages of resins, wax and insoluble material in crude propolis; and determination of phenolic, flavonoid, amino acid and heavy metal contents are reviewed herein. Different chromatographic methods applied to the separation, identification and quantification of Brazilian propolis components, and their relative strengths, are discussed, as well as direct insertion mass spectrometry fingerprinting. Propolis has been used as a popular remedy for several centuries for a wide array of ailments. Its antimicrobial properties, present in propolis from different origins, have been extensively studied. More recently, the anti-parasitic, anti-viral/immune stimulating, healing, anti-tumor, anti-inflammatory, antioxidant and analgesic activities of diverse types of Brazilian propolis have also been evaluated. The most common methods employed and overviews of their relative results are presented. PMID:21631940
Zhang, Qianchun; Luo, Xialin; Li, Gongke; Xiao, Xiaohua
2015-09-01
Small polar molecules such as nucleosides, amines, and amino acids are important analytes in the biological, food, environmental, and other fields. It is necessary to develop efficient sample preparation and sensitive analytical methods for the rapid analysis of these polar small molecules in complex matrices. Typical materials used in sample preparation, including silica, polymers, carbon, and boric acid, are introduced in this paper. The applications and developments of analytical methods for polar small molecules, such as reversed-phase liquid chromatography and hydrophilic interaction chromatography, are also reviewed.
Computing Correlations with Q-Sort Data for McQuitty's Pattern-Analytic Methods
ERIC Educational Resources Information Center
Lee, Jae-Won
1977-01-01
McQuitty has developed a number of pattern analytic methods that can be computed by hand, but the matrices of associations used in these methods cannot be so readily computed. A simplified but exact method of computing product moment correlations based on Q sort data for McQuitty's methods is described. (Author/JKS)
Critical node treatment in the analytic function expansion method for Pin Power Reconstruction
Gao, Z.; Xu, Y.; Downar, T.
2013-07-01
Pin Power Reconstruction (PPR) was implemented in PARCS using the eight-term analytic function expansion method (AFEN). This method has been demonstrated to be both accurate and efficient. However, like all methods involving analytic functions, such as the analytic nodal method (ANM) and AFEN for the nodal solution, the use of AFEN for PPR has a potential numerical issue with critical nodes. The conventional analytic functions are trigonometric or hyperbolic sine and cosine functions with an angular frequency proportional to the buckling. For a critical node the buckling is zero, the sine functions become zero, and the cosine functions become unity. In this case the eight analytic functions are no longer distinguishable from each other, so their corresponding coefficients can no longer be determined uniquely. Moreover, the flux distribution of a critical node can be linear, whereas the conventional analytic functions can then only express a uniform distribution. If there is a critical or near-critical node in a plane, the pin power distribution reconstructed with the conventional method often shows negative or very large values. In this paper, we propose a new method that avoids the numerical problem with critical nodes by using modified trigonometric or hyperbolic sine functions, defined as the ratio of the trigonometric or hyperbolic sine to its angular frequency. When no critical or near-critical nodes are present, the new pin power reconstruction method with modified analytic functions is equivalent to the conventional one. The new method is demonstrated using the L336C5 benchmark problem. (authors)
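The proposed fix can be sketched numerically: since sin(ωx)/ω → x as ω → 0, the modified basis function degenerates to a linear function rather than to zero, so the expansion stays well-conditioned at a critical node. This is a minimal illustration of that ratio under the abstract's description, not PARCS code; the function names and the series-switchover tolerance are choices made for this sketch.

```python
import math

def msin(omega, x, tol=1e-6):
    """Modified sine sin(omega*x)/omega.

    Near omega = 0 the closed form is 0/0, so we switch to the Taylor
    series sin(wx)/w = x*(1 - (wx)**2/6 + ...), whose limit is x.
    """
    wx = omega * x
    if abs(wx) < tol:
        return x * (1.0 - wx * wx / 6.0)
    return math.sin(wx) / omega

def msinh(omega, x, tol=1e-6):
    """Hyperbolic counterpart sinh(omega*x)/omega, with the same limit x."""
    wx = omega * x
    if abs(wx) < tol:
        return x * (1.0 + wx * wx / 6.0)
    return math.sinh(wx) / omega
```

At ω = 0 both functions return x exactly, recovering the linear flux shape a critical node can have; away from criticality they reduce to the conventional sine functions up to the constant factor 1/ω, which is absorbed by the expansion coefficients.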
A Vocal-Based Analytical Method for Goose Behaviour Recognition
Steen, Kim Arild; Therkildsen, Ole Roland; Karstoft, Henrik; Green, Ole
2012-01-01
Since human-wildlife conflicts are increasing, the development of cost-effective methods for reducing damage or conflict levels is important in wildlife management. A wide range of devices to detect and deter animals causing conflict are used for this purpose, although their effectiveness is often highly variable, due to habituation to disruptive or disturbing stimuli. Automated recognition of behaviours could form a critical component of a system capable of altering the disruptive stimuli to avoid this. In this paper we present a novel method to automatically recognise goose behaviour based on vocalisations from flocks of free-living barnacle geese (Branta leucopsis). The geese were observed and recorded in a natural environment, using a shielded shotgun microphone. The classification used Support Vector Machines (SVMs), which had been trained with labeled data. Greenwood Function Cepstral Coefficients (GFCC) were used as features for the pattern recognition algorithm, as they can be adjusted to the hearing capabilities of different species. Three behaviours are classified with this approach, and the method achieves good recognition of foraging behaviour (86–97% sensitivity, 89–98% precision) and reasonable recognition of flushing (79–86%, 66–80%) and landing behaviour (73–91%, 79–92%). The Support Vector Machine has proven to be a robust classifier for this task, where generality and non-linear capabilities are important. We conclude that vocalisations can be used to automatically detect the behaviour of conflict wildlife species and, as such, may be used as an integrated part of a wildlife management system. PMID:22737037
[Analytic methods for seed models with genotype x environment interactions].
Zhu, J
1996-01-01
Genetic models with genotype effect (G) and genotype x environment interaction effect (GE) are proposed for analyzing generation means of seed quantitative traits in crops. The total genetic effect (G) is partitioned into seed direct genetic effect (G0), cytoplasm genetic effect (C), and maternal plant genetic effect (Gm). The seed direct genetic effect (G0) can be further partitioned into direct additive (A) and direct dominance (D) genetic components. The maternal genetic effect (Gm) can also be partitioned into maternal additive (Am) and maternal dominance (Dm) genetic components. The total genotype x environment interaction effect (GE) can likewise be partitioned into direct genetic by environment interaction effect (G0E), cytoplasm genetic by environment interaction effect (CE), and maternal genetic by environment interaction effect (GmE). G0E can be partitioned into direct additive by environment interaction (AE) and direct dominance by environment interaction (DE) genetic components, and GmE into maternal additive by environment interaction (AmE) and maternal dominance by environment interaction (DmE) genetic components. Partitions of genetic components are listed for parents, F1, F2 and backcrosses. A set of parents together with their reciprocal F1 and F2 seeds is suitable for efficient analysis of seed quantitative traits. The MINQUE(0/1) method can be used for estimating variance and covariance components; unbiased estimates of covariance components between two traits can also be obtained by this method. Random genetic effects in the seed models are predictable by the Adjusted Unbiased Prediction (AUP) approach with the MINQUE(0/1) method. The jackknife procedure is suggested for estimating the sampling variances of the estimated variance and covariance components and of the predicted genetic effects, which can be further used in t-tests for parameters. Unbiasedness and efficiency for estimating variance components and predicting genetic effects are tested by
Plastics in soil: Analytical methods and possible sources.
Bläsing, Melanie; Amelung, Wulf
2017-08-29
At least 300 million tonnes of plastic are produced annually, large fractions of which end up in the environment, where the material persists over decades, harms biota and enters the food chain. Yet almost nothing is known about plastic pollution of soil; hence, the aims of this work are to review current knowledge on i) available methods for the quantification and identification of plastic in soil, ii) the quantity and possible input pathways of plastic into soil (including a first preliminary screening of plastic in compost), and iii) its fate in soil. Methods for plastic analyses in sediments can potentially be adjusted for application to soil, but their applicability to soil still needs to be tested. Consequently, the current database on soil pollution with plastic is still poor. Soils may receive plastic inputs via plastic mulching or the application of plastic-containing soil amendments. In compost, 2.38-1200mg plastic kg(-1) have been found so far; the plastic concentration of sewage sludge varies between 1000 and 24,000 plastic items kg(-1). Irrigation with untreated and treated wastewater (1000-627,000 and 0-125,000 plastic items m(-3), respectively) as well as flooding with lake water (0.82-4.42 plastic items m(-3)) or river water (0-13,751 items km(-2)) can also provide major input pathways for plastic into soil. Additional sources comprise littering along roads and trails, illegal waste dumping, road runoff and atmospheric input. With these input pathways, plastic concentrations in soil might reach the per mill range of soil organic carbon. Most of the plastic (especially >1μm) will presumably be retained in soil, where it persists for decades or longer. Accordingly, further research on the prevalence and fate of such synthetic polymers in soils is urgently warranted. Copyright © 2017 Elsevier B.V. All rights reserved.
Kolber, Z.; Falkowski, P.
1995-06-20
A fast repetition rate fluorometer device and method for measuring in vivo fluorescence of phytoplankton or higher plants chlorophyll and photosynthetic parameters of phytoplankton or higher plants is revealed. The phytoplankton or higher plants are illuminated with a series of fast repetition rate excitation flashes effective to bring about and measure resultant changes in fluorescence yield of their Photosystem II. The series of fast repetition rate excitation flashes has a predetermined energy per flash and a rate greater than 10,000 Hz. Also, disclosed is a flasher circuit for producing the series of fast repetition rate flashes. 14 figs.
Kolber, Zbigniew; Falkowski, Paul
1995-06-20
A fast repetition rate fluorometer device and method for measuring in vivo fluorescence of phytoplankton or higher plants chlorophyll and photosynthetic parameters of phytoplankton or higher plants by illuminating the phytoplankton or higher plants with a series of fast repetition rate excitation flashes effective to bring about and measure resultant changes in fluorescence yield of their Photosystem II. The series of fast repetition rate excitation flashes has a predetermined energy per flash and a rate greater than 10,000 Hz. Also, disclosed is a flasher circuit for producing the series of fast repetition rate flashes.
An evaluation of four single element airfoil analytic methods
NASA Technical Reports Server (NTRS)
Freuler, R. J.; Gregorek, G. M.
1979-01-01
A comparison of four computer codes for the analysis of two-dimensional single element airfoil sections is presented for three classes of section geometries. Two of the computer codes utilize vortex singularities methods to obtain the potential flow solution. The other two codes solve the full inviscid potential flow equation using finite differencing techniques, allowing results to be obtained for transonic flow about an airfoil including weak shocks. Each program incorporates boundary layer routines for computing the boundary layer displacement thickness and boundary layer effects on aerodynamic coefficients. Computational results are given for a symmetrical section represented by an NACA 0012 profile, a conventional section illustrated by an NACA 65A413 profile, and a supercritical type section for general aviation applications typified by a NASA LS(1)-0413 section. The four codes are compared and contrasted in the areas of method of approach, range of applicability, agreement among each other and with experiment, individual advantages and disadvantages, computer run times and memory requirements, and operational idiosyncrasies.
Sulfathiazole: analytical methods for quantification in seawater and macroalgae.
Leston, Sara; Nebot, Carolina; Nunes, Margarida; Cepeda, Alberto; Pardal, Miguel Ângelo; Ramos, Fernando
2015-01-01
Awareness of the interconnection between pharmaceutical residues, human health, and aquaculture has highlighted concern over the potentially harmful effects these residues can induce. To better understand the consequences, more research is needed, and new methodologies for the detection and quantification of pharmaceuticals are necessary to achieve that. Antibiotics are a major class of drugs included in the designation of emerging contaminants, and they represent a high risk to natural ecosystems. Among the most prescribed are sulfonamides, and sulfathiazole was the compound selected for investigation in this study. In the environment, macroalgae are an important group of producers, continuously exposed to contaminants, with a significant role in the trophic web; because of these characteristics they are already being examined as possible bioindicators. The present study describes two new liquid chromatography-based methodologies for the determination of sulfathiazole in seawater and in the green macroalga Ulva lactuca. Both methods were validated according to international standards, with MS/MS detection showing more sensitivity, as expected, with LODs of 2.79ng/g and 1.40ng/mL for algae and seawater, respectively. For UV detection the corresponding values were 2.83μg/g and 2.88μg/mL, making it more suitable for samples originating from more contaminated sites. The methods were also successfully applied to experimental data, with results showing that macroalgae have potential use as indicators of contamination.
NASA Astrophysics Data System (ADS)
Wailliez, Sébastien E.
2014-03-01
In the two-body model, time of flight between two positions can be expressed as a single-variable function, and a variety of formulations exist. Lambert’s problem can be solved by inverting such a function. In this article, a method which inverts Lagrange’s flight time equation and supports the problematic 180° transfer is proposed. This method relies on a Householder algorithm of variable order. Unlike other iterative methods, however, it is semi-analytical in the sense that the derivatives of the flight time function are computed analytically up to second order rather than by first-order finite differences. The author investigated the profile of Lagrange’s elliptic flight time equation and its derivatives, with a special focus on their significance to the behaviour of the proposed method and the stated goal of guaranteed convergence. Possible numerical deficiencies were identified and dealt with. As a test, 28 scenarios of variable difficulty were designed to cover a wide variety of geometries. The context of this research being the orbit determination of artificial satellites and debris, the scenarios are representative of typical such objects in Low-Earth, Geostationary and Geostationary Transfer Orbits. An analysis of the computational impact of the quality of the initial guess vs. that of the order of the method was also done, providing clues for further research and optimisations (e.g. asteroids, long period comets, multi-revolution cases). The results indicate fast to very fast convergence in all test cases; they validate the numerical safeguards and also give a quantitative assessment of the importance of the initial guess.
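A variable-order Householder step of the kind the abstract describes, using analytic first and second derivatives, can be sketched generically. This is not the Lagrange flight-time inversion itself, only an illustration of the order-1 (Newton) and order-2 (Halley) updates applied to an arbitrary smooth function; the function signature, the convergence criterion on |f|, and the iteration cap are assumptions of this sketch.

```python
def householder(f, fp, fpp=None, x0=1.0, order=2, tol=1e-12, max_iter=50):
    """Householder iteration of order 1 (Newton) or 2 (Halley).

    f, fp, fpp: the function and its analytic first and second
    derivatives, mirroring the abstract's second-order analytic
    derivatives in place of first-order finite differences.
    """
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        d1 = fp(x)
        if order == 1 or fpp is None:
            step = fx / d1                                # Newton update
        else:
            d2 = fpp(x)
            step = 2.0 * fx * d1 / (2.0 * d1 * d1 - fx * d2)  # Halley update
        x -= step
    return x
```

The Halley update uses the extra analytic derivative to gain cubic convergence, which is why supplying exact second derivatives (rather than finite differences) pays off when each function evaluation is cheap relative to the number of Lambert solutions required.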
A Study of Instructional Methods Used in Fast-Paced Classes
ERIC Educational Resources Information Center
Lee, Seon-Young; Olszewski-Kubilius, Paula
2006-01-01
This study involved 15 secondary-level teachers who taught fast-paced classes at a university based summer program and similar regularly paced classes in their local schools in order to examine how teachers differentiate or modify instructional methods and content selections for fast-paced classes. Interviews were conducted with the teachers…
Using an analytical geometry method to improve tiltmeter data presentation
Su, W.-J.
2000-01-01
The tiltmeter is a useful tool for geologic and geotechnical applications. To obtain full benefit from the tiltmeter, easy and accurate data presentations should be used. Unfortunately, the method now most commonly used for tilt data reduction may yield inaccurate, low-resolution results. This article describes a simple, accurate, and high-resolution approach developed at the Illinois State Geological Survey for data reduction and presentation. The orientation of the tiltplates is determined first by using a trigonometric relationship, followed by a matrix transformation, to obtain the true amount of rotation change of the tiltplate at any given time. The mathematical derivations used for the determination and transformation are then coded into an integrated PC application by adapting the capabilities of commercial spreadsheet, database, and graphics software. Examples of data presentation from tiltmeter applications in studies of landfill covers, characterizations of mine subsidence, and investigations of slope stability are also discussed.
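The matrix-transformation step can be illustrated with a planar rotation of the measured tilt components from the tiltplate's own axes into geographic axes. This is a hypothetical sketch, not the ISGS procedure: the function name, the azimuth convention (clockwise from north), and the absence of any instrument-specific corrections are all assumptions made here.

```python
import math

def rotate_tilt(tilt_x, tilt_y, azimuth_deg):
    """Rotate tilt components measured along a tiltplate's local x/y axes
    into north/east geographic components.

    azimuth_deg: assumed orientation of the plate's x axis, measured
    clockwise from north (an illustrative convention, not the article's).
    """
    a = math.radians(azimuth_deg)
    # Standard 2x2 rotation: [north, east] = R(a) @ [tilt_x, tilt_y]
    north = tilt_x * math.cos(a) - tilt_y * math.sin(a)
    east = tilt_x * math.sin(a) + tilt_y * math.cos(a)
    return north, east
```

Once the plate orientation is known from the trigonometric relationship, applying a rotation like this to every reading puts all tiltplates in a common geographic frame, which is what allows the true rotation change to be tracked over time.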
Ultrasensitive biochemical sensing device and method of sensing analytes
Pinchuk, Anatoliy
2017-06-06
Systems and methods biochemically sense the concentration of a ligand using a sensor having a substrate with a metallic nanoparticle array formed on its surface. A light source is incident on the surface. A matrix deposited over the nanoparticle array contains a protein adapted to bind the ligand. A detector detects s-polarized and p-polarized light from the reflective surface. The spacing of the nanoparticles in the array and the wavelength of light are selected such that plasmon resonance occurs with an isotropic point, so that the s- and p-polarizations of the incident light produce substantially identical surface plasmon resonance; binding of the ligand to the protein shifts the resonance such that differences between the s- and p-polarizations give a signal indicative of the presence of the ligand.
Analytical Chemistry Laboratory (ACL) procedure compendium. Volume 4, Organic methods
Not Available
1993-08-01
This interim notice covers the following: extractable organic halides in solids, total organic halides, analysis by gas chromatography/Fourier transform-infrared spectroscopy, hexadecane extracts for volatile organic compounds, GC/MS analysis of VOCs, GC/MS analysis of methanol extracts of cryogenic vapor samples, screening of semivolatile organic extracts, GPC cleanup for semivolatiles, sample preparation for GC/MS for semi-VOCs, analysis for pesticides/PCBs by GC with electron capture detection, sample preparation for pesticides/PCBs in water and soil sediment, report preparation, Florisil column cleanup for pesticides/PCBs, silica gel and acid-base partition cleanup of samples for semi-VOCs, concentrated acid wash cleanup, carbon determination in solids using a Coulometrics CO2 coulometer, determination of total carbon/total organic carbon/total inorganic carbon in radioactive liquids/soils/sludges by the hot persulfate method, analysis of solids for carbonates using a Coulometrics Model 5011 coulometer, and Soxhlet extraction.