Interpolation method for accurate affinity ranking of arrayed ligand-analyte interactions.
Schasfoort, Richard B M; Andree, Kiki C; van der Velde, Niels; van der Kooi, Alex; Stojanović, Ivan; Terstappen, Leon W M M
2016-05-01
The values of the affinity constants (kd, ka, and KD) that are determined by label-free interaction analysis methods are affected by the ligand density. This article outlines a surface plasmon resonance (SPR) imaging method that yields high-throughput globally fitted affinity ranking values using a 96-plex array. A kinetic titration experiment without a regeneration step has been applied for various coupled antibodies binding to a single antigen. Globally fitted rate (kd and ka) and dissociation equilibrium (KD) constants for various ligand densities and analyte concentrations are exponentially interpolated to the KD at Rmax = 100 RU response level (KD(R100)).
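The interpolation step described above can be sketched as a log-linear least-squares fit of globally fitted KD values against response level (Rmax, in RU), evaluated at the 100 RU reference. This is an illustrative stand-in assuming the exponential model KD(Rmax) = a·exp(b·Rmax); the function name and the sample numbers are invented, not taken from the paper.

```python
import math

def kd_at_r100(rmax, kd, r_ref=100.0):
    """Interpolate globally fitted KD values, measured at several ligand
    densities (Rmax, in RU), to the common reference level Rmax = 100 RU,
    assuming the exponential model KD(Rmax) = a * exp(b * Rmax).
    Fits ln(KD) vs Rmax by ordinary least squares."""
    n = len(rmax)
    y = [math.log(v) for v in kd]
    xbar, ybar = sum(rmax) / n, sum(y) / n
    b = sum((x - xbar) * (yi - ybar) for x, yi in zip(rmax, y)) \
        / sum((x - xbar) ** 2 for x in rmax)
    a = math.exp(ybar - b * xbar)
    return a * math.exp(b * r_ref)

# hypothetical KD values (in M) rising exponentially with ligand density
rmax = [50.0, 150.0, 300.0, 600.0]
kd = [2.0e-9 * math.exp(0.003 * r) for r in rmax]
print(kd_at_r100(rmax, kd))
```

With exact exponential data the fit recovers the model parameters, so KD(R100) equals the model evaluated at 100 RU.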
Lim, Chee Wei; Tai, Siew Hoon; Lee, Lin Min; Chan, Sheot Harn
2012-07-01
The current food crisis demands unambiguous determination of mycotoxin contamination in staple foods to achieve safer food for consumption. This paper describes the first accurate LC-MS/MS method developed to analyze trichothecenes in grains by applying multiple reaction monitoring (MRM) transition and MS(3) quantitation strategies in tandem. The trichothecenes are nivalenol, deoxynivalenol, deoxynivalenol-3-glucoside, fusarenon X, 3-acetyldeoxynivalenol, 15-acetyldeoxynivalenol, diacetoxyscirpenol, and HT-2 and T-2 toxins. Acetic acid and ammonium acetate were used to convert the analytes into their respective acetate and ammonium adducts under negative and positive MS polarity conditions, respectively. The mycotoxins were separated by reversed-phase LC in a 13.5-min run, ionized using electrospray ionization, and detected by tandem mass spectrometry. Analyte-specific mass-to-charge (m/z) ratios were used to perform quantitation under MRM transition and MS(3) (linear ion trap) modes. Three experiments were performed for each quantitation mode and matrix in batches over 6 days for recovery studies. The matrix effect was investigated at concentration levels of 20, 40, 80, 120, 160, and 200 μg kg(-1) (n = 3) in 5 g corn flour and rice flour. Extraction with acetonitrile provided a good overall recovery range of 90-108% (n = 3) at three spiking levels of 40, 80, and 120 μg kg(-1). A quantitation limit of 2-6 μg kg(-1) was achieved by applying the MRM transition quantitation strategy; under MS(3) mode, a quantitation limit of 4-10 μg kg(-1) was achieved. Relative standard deviations of 2-10% and 2-11% were reported for MRM transition and MS(3) quantitation, respectively. The successful utilization of MS(3) enabled accurate analyte fragmentation pattern matching and its quantitation, leading to the development of analytical methods in fields that demand both analyte specificity and fragmentation fingerprint-matching capabilities that are
NASA Astrophysics Data System (ADS)
Roy Choudhury, Raja; Roy Choudhury, Arundhati; Kanti Ghose, Mrinal
2013-01-01
A semi-analytical model with three optimizing parameters and a novel non-Gaussian function as the fundamental modal field solution has been proposed to arrive at an accurate solution that predicts various propagation parameters of graded-index fibers with less computational burden than numerical methods. In our semi-analytical formulation, the optimization of the core parameter U, which is usually uncertain, noisy, or even discontinuous, is carried out by the Nelder-Mead method of nonlinear unconstrained minimization, an efficient and compact direct search method that does not need any derivative information. Three optimizing parameters are included in the formulation of the fundamental modal field of an optical fiber to make it more flexible and accurate than other available approximations. Employing the variational technique, Petermann I and II spot sizes have been evaluated for triangular- and trapezoidal-index fibers with the proposed fundamental modal field. It has been demonstrated that the results of the proposed solution match the numerical results identically over a wide range of normalized frequencies. This approximation can also be used in the study of doped and nonlinear fiber amplifiers.
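The derivative-free direct search the abstract relies on can be illustrated with a minimal Nelder-Mead simplex minimizer. This is a textbook sketch (reflection, expansion, inside contraction, shrink) applied to a toy quadratic objective standing in for the variational functional over the core parameter U and the other optimizing parameters; none of the coefficients here come from the paper.

```python
def nelder_mead(f, x0, step=0.1, tol=1e-8, max_iter=500):
    """Minimal derivative-free Nelder-Mead simplex minimizer: reflection,
    expansion, inside contraction, and shrink steps on an (n+1)-vertex
    simplex, terminating when the function spread over the simplex is small."""
    n = len(x0)
    # initial simplex: x0 plus a perturbation along each coordinate axis
    simplex = [list(x0)] + [
        [x0[j] + (step if j == i else 0.0) for j in range(n)] for i in range(n)
    ]
    for _ in range(max_iter):
        simplex.sort(key=f)
        best, worst = simplex[0], simplex[-1]
        if abs(f(worst) - f(best)) < tol:
            break
        # centroid of all vertices except the worst
        c = [sum(p[j] for p in simplex[:-1]) / n for j in range(n)]
        refl = [c[j] + (c[j] - worst[j]) for j in range(n)]
        if f(refl) < f(best):
            exp = [c[j] + 2.0 * (c[j] - worst[j]) for j in range(n)]
            simplex[-1] = exp if f(exp) < f(refl) else refl
        elif f(refl) < f(simplex[-2]):
            simplex[-1] = refl
        else:
            contr = [c[j] + 0.5 * (worst[j] - c[j]) for j in range(n)]
            if f(contr) < f(worst):
                simplex[-1] = contr
            else:  # shrink all vertices toward the best one
                simplex = [best] + [
                    [best[j] + 0.5 * (p[j] - best[j]) for j in range(n)]
                    for p in simplex[1:]
                ]
    return min(simplex, key=f)

# toy stand-in objective in a two-parameter space, minimum at (1.5, -0.5)
obj = lambda p: (p[0] - 1.5) ** 2 + (p[1] + 0.5) ** 2
print(nelder_mead(obj, [0.0, 0.0]))
```

As the abstract notes, the method needs only function evaluations, which is what makes it suitable when the objective is noisy or discontinuous.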
Robust Accurate Non-Invasive Analyte Monitor
Robinson, Mark R.
1998-11-03
An improved method and apparatus for determining noninvasively and in vivo one or more unknown values of a known characteristic, particularly the concentration of an analyte in human tissue. The method includes: (1) irradiating the tissue with infrared energy (400 nm-2400 nm) having at least several wavelengths in a given range of wavelengths, so that there is differential absorption of at least some of the wavelengths by the tissue as a function of the wavelengths and the known characteristic, the differential absorption causing intensity variations of the wavelengths incident from the tissue; (2) providing a first path through the tissue; (3) optimizing the first path for a first sub-region of the range of wavelengths to maximize the differential absorption by at least some of the wavelengths in the first sub-region; (4) providing a second path through the tissue; and (5) optimizing the second path for a second sub-region of the range, to maximize the differential absorption by at least some of the wavelengths in the second sub-region. In the preferred embodiment, a third path through the tissue is provided for, which path is optimized for a third sub-region of the range. With this arrangement, spectral variations that are the result of tissue differences (e.g., melanin and temperature) can be reduced. At least one of the paths represents a partial transmission path through the tissue. This partial transmission path may pass through the nail of a finger once and, preferably, twice. Also included are apparatus for: (1) reducing the arterial pulsations within the tissue; and (2) maximizing the blood content in the tissue.
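Stripped to its core, the multi-wavelength differential-absorption idea is a least-squares Beer-Lambert fit: absorbances measured at several wavelengths are regressed onto known absorptivity spectra to estimate component concentrations. The two-component sketch below is purely illustrative; the coefficients are invented and this is not the patented apparatus or its signal processing.

```python
def fit_concentrations(eps, absorbance):
    """Least-squares Beer-Lambert fit for two components:
    absorbance[i] ~ eps[i][0]*c[0] + eps[i][1]*c[1] at wavelength i.
    Solves the 2x2 normal equations (E^T E) c = E^T a by Cramer's rule."""
    m = len(eps)
    g = [[sum(eps[i][r] * eps[i][s] for i in range(m)) for s in range(2)]
         for r in range(2)]
    b = [sum(eps[i][r] * absorbance[i] for i in range(m)) for r in range(2)]
    det = g[0][0] * g[1][1] - g[0][1] * g[1][0]
    return [(g[1][1] * b[0] - g[0][1] * b[1]) / det,
            (g[0][0] * b[1] - g[1][0] * b[0]) / det]

# invented absorptivities of an analyte and a confounder at four wavelengths
eps = [[0.9, 0.1], [0.7, 0.3], [0.4, 0.6], [0.2, 0.8]]
true_c = [2.0, 0.5]
a = [sum(e * c for e, c in zip(row, true_c)) for row in eps]
print(fit_concentrations(eps, a))  # recovers ~[2.0, 0.5]
```

Using more wavelengths than unknowns is what lets tissue confounders be averaged down rather than aliased into the analyte estimate.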
ERIC Educational Resources Information Center
Ember, Lois R.
1977-01-01
The procedures utilized by the Association of Official Analytical Chemists (AOAC) to develop, evaluate, and validate analytical methods for the analysis of chemical pollutants are detailed. Methods validated by AOAC are used by the EPA and FDA in their enforcement programs and are granted preferential treatment by the courts. (BT)
Bicanic, Dane; Swarts, Jan; Luterotti, Svjetlana; Pietraperzia, Giangaetano; Dóka, Otto; de Rooij, Hans
2004-09-01
The concept of the optothermal window (OW) is proposed as a reliable analytical tool to rapidly determine the concentration of lycopene in a large variety of commercial tomato products in an extremely simple way (the determination is achieved without the need for pretreatment of the sample). The OW is a relative technique, as the information is deduced from a calibration curve that relates the OW data (i.e., the product of the absorption coefficient β and the thermal diffusion length μ) to the lycopene concentration obtained from spectrophotometric measurements. The accuracy of the method has been ascertained by a high correlation coefficient (R = 0.98) between the OW data and results acquired from the same samples by means of the conventional extraction spectrophotometric method. The intrinsic precision of the OW method is quite high (better than 1%), whereas the repeatability of the determination (RSD = 0.4-9.5%, n = 3-10) is comparable to that of spectrophotometry.
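The reported agreement between OW data and spectrophotometric results (R = 0.98) is an ordinary Pearson correlation coefficient, which can be computed as below. The sample values are invented for illustration, not data from the study.

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between paired samples, the statistic
    used to compare OW data (beta * mu) against spectrophotometric lycopene
    concentrations."""
    n = len(x)
    xm, ym = sum(x) / n, sum(y) / n
    sxy = sum((a - xm) * (b - ym) for a, b in zip(x, y))
    sxx = sum((a - xm) ** 2 for a in x)
    syy = sum((b - ym) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

# invented OW signals vs. lycopene concentrations (mg per 100 g)
ow = [0.12, 0.25, 0.41, 0.60, 0.83]
lyc = [10.0, 22.0, 37.0, 55.0, 78.0]
print(pearson_r(ow, lyc))
```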
Accurate, meshless methods for magnetohydrodynamics
NASA Astrophysics Data System (ADS)
Hopkins, Philip F.; Raives, Matthias J.
2016-01-01
Recently, we explored new meshless finite-volume Lagrangian methods for hydrodynamics: the 'meshless finite mass' (MFM) and 'meshless finite volume' (MFV) methods; these capture advantages of both smoothed particle hydrodynamics (SPH) and adaptive mesh refinement (AMR) schemes. We extend these to include ideal magnetohydrodynamics (MHD). The MHD equations are second-order consistent and conservative. We augment these with a divergence-cleaning scheme, which maintains ∇ · B ≈ 0. We implement these in the code GIZMO, together with state-of-the-art SPH MHD. We consider a large test suite, and show that on all problems the new methods are competitive with AMR using constrained transport (CT) to ensure ∇ · B = 0. They correctly capture the growth/structure of the magnetorotational instability, MHD turbulence, and launching of magnetic jets, in some cases converging more rapidly than state-of-the-art AMR. Compared to SPH, the MFM/MFV methods exhibit convergence at fixed neighbour number, sharp shock-capturing, and dramatically reduced noise, divergence errors, and diffusion. Still, 'modern' SPH can handle most test problems, at the cost of larger kernels and 'by hand' adjustment of artificial diffusion. Compared to non-moving meshes, the new methods exhibit enhanced 'grid noise' but reduced advection errors and diffusion, easily include self-gravity, and feature velocity-independent errors and superior angular momentum conservation. They converge more slowly on some problems (smooth, slow-moving flows), but more rapidly on others (involving advection/rotation). In all cases, we show divergence control beyond the Powell 8-wave approach is necessary, or all methods can converge to unphysical answers even at high resolution.
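The ∇ · B diagnostic that divergence cleaning targets can be illustrated with a simple central-difference estimate on a uniform 2-D grid. This is a finite-difference sketch for intuition only, not the meshless MFM/MFV discretization or GIZMO's actual operator.

```python
def divergence(bx, by, h):
    """Central-difference estimate of div B = dBx/dx + dBy/dy at the interior
    points of a uniform 2-D grid with spacing h; driving this toward zero is
    the goal of divergence-cleaning and constrained-transport schemes."""
    ny, nx = len(bx), len(bx[0])
    return [[(bx[j][i + 1] - bx[j][i - 1]) / (2 * h)
             + (by[j + 1][i] - by[j - 1][i]) / (2 * h)
             for i in range(1, nx - 1)] for j in range(1, ny - 1)]

# a divergence-free test field: B = (-y, x), so dBx/dx = dBy/dy = 0
h, n = 0.1, 5
bx = [[-(j * h) for i in range(n)] for j in range(n)]  # Bx = -y
by = [[i * h for i in range(n)] for j in range(n)]     # By = x
print(divergence(bx, by, h))  # all interior entries are 0.0
```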
Analytic Methods in Investigative Geometry.
ERIC Educational Resources Information Center
Dobbs, David E.
2001-01-01
Suggests an alternative proof by analytic methods, which is more accessible than rigorous proof based on Euclid's Elements, in which students need only apply standard methods of trigonometry to the data without introducing new points or lines. (KHR)
Two highly accurate methods for pitch calibration
NASA Astrophysics Data System (ADS)
Kniel, K.; Härtig, F.; Osawa, S.; Sato, O.
2009-11-01
Among profile, helix, and tooth thickness, pitch is one of the most important parameters in involute gear measurement evaluation. In principle, coordinate measuring machines (CMMs) and CNC-controlled gear measuring machines, as a variant of CMMs, are suited for these kinds of gear measurements. The National Metrology Institute of Japan (NMIJ/AIST) and the German national metrology institute, the Physikalisch-Technische Bundesanstalt (PTB), have each independently developed highly accurate pitch calibration methods applicable to CMMs or gear measuring machines. Both calibration methods are based on the so-called closure technique, which allows the separation of the systematic errors of the measurement device from the errors of the gear. For the verification of both calibration methods, NMIJ/AIST and PTB performed measurements on a specially designed pitch artifact. The comparison of the results shows that both methods can be used for highly accurate calibrations of pitch standards.
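The error separation behind the closure technique can be sketched with a toy model: if each reading is the sum of a gear pitch deviation and a machine error, and the gear is re-measured in n rotated positions, averaging over the rotations cancels the gear term (individual pitch deviations sum to zero over a full revolution) and isolates the machine error. This is a schematic illustration with invented numbers, not the NMIJ/AIST or PTB procedure.

```python
def closure_separation(readings):
    """Closure (multi-position) error separation for the toy model
    readings[p][i] = gear[(i + p) % n] + machine[i], where p indexes the
    rotated mounting position.  Averaging over all n positions cancels the
    gear deviations (they sum to zero over a revolution), giving the machine
    error; subtracting it from the p = 0 readings recovers the gear error."""
    n = len(readings)
    machine = [sum(readings[p][i] for p in range(n)) / n for i in range(n)]
    gear = [readings[0][i] - machine[i] for i in range(n)]
    return gear, machine

# invented pitch deviations (um); the gear deviations sum to zero
gear_true = [0.4, -0.1, -0.5, 0.2]
machine_true = [0.3, -0.3, 0.1, -0.1]
n = len(gear_true)
readings = [[gear_true[(i + p) % n] + machine_true[i] for i in range(n)]
            for p in range(n)]
g, m = closure_separation(readings)
print(g, m)
```

Because the separation needs no external reference, the gear itself becomes a transfer artifact whose systematic machine errors drop out, which is what makes the technique attractive for primary calibration.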
Accurate upwind methods for the Euler equations
NASA Technical Reports Server (NTRS)
Huynh, Hung T.
1993-01-01
A new class of piecewise linear methods for the numerical solution of the one-dimensional Euler equations of gas dynamics is presented. These methods are uniformly second-order accurate, and can be considered as extensions of Godunov's scheme. With an appropriate definition of monotonicity preservation for the case of linear convection, it can be shown that they preserve monotonicity. Similar to Van Leer's MUSCL scheme, they consist of two key steps: a reconstruction step followed by an upwind step. For the reconstruction step, a monotonicity constraint that preserves uniform second-order accuracy is introduced. Computational efficiency is enhanced by devising a criterion that detects the 'smooth' part of the data where the constraint is redundant. The concept and coding of the constraint are simplified by the use of the median function. A slope-steepening technique, which has no effect in smooth regions and can resolve a contact discontinuity in four cells, is described. As for the upwind step, existing and new methods are applied in a manner slightly different from those in the literature. These methods are derived by approximating the Euler equations via linearization and diagonalization. At a 'smooth' interface, Harten, Lax, and Van Leer's one-intermediate-state model is employed. A modification of this model that can resolve contact discontinuities is presented. Near a discontinuity, either this modified model or a more accurate one, namely Roe's flux-difference splitting, is used. The current presentation of Roe's method, via the conceptually simple flux-vector splitting, not only establishes a connection between the two splittings, but also leads to an admissibility correction with no conditional statement, and an efficient approximation to Osher's approximate Riemann solver. These reconstruction and upwind steps result in schemes that are uniformly second-order accurate and economical in smooth regions, and yield high resolution at discontinuities.
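The abstract's point that the median function simplifies the monotonicity constraint can be seen from the identity minmod(x, y) = median(x, y, 0). A minimal limited-slope reconstruction using it follows; this is the textbook minmod limiter written via the median, not Huynh's exact constraint.

```python
def median(a, b, c):
    """Median of three numbers; note that minmod(x, y) == median(x, y, 0):
    it returns the smaller-magnitude argument when x and y share a sign,
    and 0 when they differ in sign."""
    return sorted((a, b, c))[1]

def limited_slopes(u):
    """Monotonicity-constrained cell slopes for a piecewise-linear
    (MUSCL-type) reconstruction: each interior cell gets the minmod of its
    two one-sided differences, expressed through the median function."""
    return [median(u[i + 1] - u[i], u[i] - u[i - 1], 0.0)
            for i in range(1, len(u) - 1)]

u = [0.0, 0.0, 1.0, 2.0, 2.0, 1.0]
print(limited_slopes(u))  # -> [0.0, 1.0, 0.0, 0.0]: slopes vanish at extrema
```

Zeroing the slope wherever the one-sided differences disagree in sign is exactly what prevents the reconstruction from creating new extrema, i.e., what preserves monotonicity.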
Analytical Methods for Online Searching.
ERIC Educational Resources Information Center
Vigil, Peter J.
1983-01-01
Analytical methods for facilitating comparison of multiple sets during online searching are illustrated by description of specific searching methods that eliminate duplicate citations and a factoring procedure based on syntactic relationships that establishes ranked sets. Searches executed in National Center for Mental Health database on…
Accurate Analytic Results for the Steady State Distribution of the Eigen Model
NASA Astrophysics Data System (ADS)
Huang, Guan-Rong; Saakian, David B.; Hu, Chin-Kun
2016-04-01
The Eigen model of molecular evolution is popular in studying complex biological and biomedical systems. Using the Hamilton-Jacobi equation method, we have calculated analytic equations for the steady-state distribution of the Eigen model with a relative accuracy of O(1/N), where N is the length of the genome. Our results can be applied for the case of small genome length N, as well as for cases where direct numerics cannot give accurate results, e.g., the tail of the distribution.
Practical aspects of spatially high accurate methods
NASA Technical Reports Server (NTRS)
Godfrey, Andrew G.; Mitchell, Curtis R.; Walters, Robert W.
1992-01-01
The computational qualities of high-order spatially accurate methods for the finite volume solution of the Euler equations are presented. Two-dimensional essentially non-oscillatory (ENO), k-exact, and 'dimension-by-dimension' ENO reconstruction operators are discussed and compared in terms of reconstruction and solution accuracy, computational cost, and oscillatory behavior in supersonic flows with shocks. Inherent steady-state convergence difficulties are demonstrated for adaptive stencil algorithms. An exact solution to the heat equation is used to determine reconstruction error, and the computational intensity is reflected in operation counts. Standard MUSCL differencing is included for comparison. Numerical experiments presented include the Ringleb flow for numerical accuracy and a shock reflection problem. A vortex-shock interaction demonstrates the ability of the ENO scheme to excel in simulating unsteady high-frequency flow physics.
Analytical methods under emergency conditions
Sedlet, J.
1983-01-01
This lecture discusses methods for the radiochemical determination of internal contamination of the body under emergency conditions, here defined as a situation in which results on internal radioactive contamination are needed quickly. The purpose of speed is to determine the necessity for medical treatment to increase the natural elimination rate. Analytical methods discussed include whole-body counting, organ counting, wound monitoring, and excreta analysis. 12 references. (ACR)
Development and application of accurate analytical models for single active electron potentials
NASA Astrophysics Data System (ADS)
Miller, Michelle; Jaron-Becker, Agnieszka; Becker, Andreas
2015-05-01
The single active electron (SAE) approximation is a theoretical model frequently employed to study scenarios in which inner-shell electrons may productively be treated as frozen spectators to a physical process of interest, and accurate analytical approximations for these potentials are sought as a useful simulation tool. Density functional theory is often used to construct an SAE potential, requiring that a further approximation for the exchange-correlation functional be enacted. In this study, we employ the Krieger, Li, and Iafrate (KLI) modification to the optimized-effective-potential (OEP) method to reduce the complexity of the problem to the straightforward solution of a system of linear equations, through simple arguments regarding the behavior of the exchange-correlation potential in regions where a single orbital dominates. We employ this method for the solution of atomic and molecular potentials, and use the resultant curve to devise a systematic construction of highly accurate and useful analytical approximations for several systems. Supported by the U.S. Department of Energy (Grant No. DE-FG02-09ER16103), and the U.S. National Science Foundation (Graduate Research Fellowship, Grants No. PHY-1125844 and No. PHY-1068706).
Waste minimization in analytical methods
Green, D.W.; Smith, L.L.; Crain, J.S.; Boparai, A.S.; Kiely, J.T.; Yaeger, J.S.; Schilling, J.B.
1995-05-01
The US Department of Energy (DOE) will require a large number of waste characterizations over a multi-year period to accomplish the Department's goals in environmental restoration and waste management. Estimates vary, but two million analyses annually are expected. The waste generated by the analytical procedures used for characterizations is a significant source of new DOE waste. Success in reducing the volume of secondary waste and the costs of handling this waste would significantly decrease the overall cost of this DOE program. Selection of appropriate analytical methods depends on the intended use of the resultant data. It is not always necessary to use a high-powered analytical method, typically at higher cost, to obtain data needed to make decisions about waste management. Indeed, for samples taken from some heterogeneous systems, the meaning of high accuracy becomes clouded if the data generated are intended to measure a property of this system. Among the factors to be considered in selecting the analytical method are the lower limit of detection, accuracy, turnaround time, cost, reproducibility (precision), interferences, and simplicity. Occasionally, there must be tradeoffs among these factors to achieve the multiple goals of a characterization program. The purpose of the work described here is to add waste minimization to the list of characteristics to be considered. In this paper the authors present results of modifying analytical methods for waste characterization to reduce both the cost of analysis and volume of secondary wastes. Although tradeoffs may be required to minimize waste while still generating data of acceptable quality for the decision-making process, they have data demonstrating that wastes can be reduced in some cases without sacrificing accuracy or precision.
Accurate paleointensities - the multi-method approach
NASA Astrophysics Data System (ADS)
de Groot, Lennart
2016-04-01
The accuracy of models describing rapid changes in the geomagnetic field over the past millennia critically depends on the availability of reliable paleointensity estimates. Over the past decade, methods to derive paleointensities from lavas (the only recorder of the geomagnetic field that is available all over the globe and through geologic times) have seen significant improvements, and various alternative techniques were proposed. The 'classical' Thellier-style approach was optimized and selection criteria were defined in the 'Standard Paleointensity Definitions' (Paterson et al, 2014). The Multispecimen approach was validated, and the importance of additional tests and criteria for assessing Multispecimen results was emphasized. Recently, a non-heating, relative paleointensity technique was proposed, the pseudo-Thellier protocol, which shows great potential in both accuracy and efficiency but currently lacks a solid theoretical underpinning. Here I present work using all three of the aforementioned paleointensity methods on suites of young lavas taken from the volcanic islands of Hawaii, La Palma, Gran Canaria, Tenerife, and Terceira. Many of the sampled cooling units are <100 years old, so the actual field strength at the time of cooling is reasonably well known. Rather intuitively, flows that produce coherent results from two or more different paleointensity methods yield the most accurate estimates of the paleofield. Furthermore, the results for some flows pass the selection criteria for one method but fail in other techniques. Scrutinizing and combining all acceptable results yielded reliable paleointensity estimates for 60-70% of all sampled cooling units, an exceptionally high success rate. This 'multi-method paleointensity approach' therefore has high potential to provide the much-needed paleointensities to improve geomagnetic field models for the Holocene.
Fast and accurate analytical model to solve inverse problem in SHM using Lamb wave propagation
NASA Astrophysics Data System (ADS)
Poddar, Banibrata; Giurgiutiu, Victor
2016-04-01
Lamb wave propagation is at the center of attention of researchers for structural health monitoring (SHM) of thin-walled structures. This is due to the fact that Lamb wave modes are natural modes of wave propagation in these structures, with long travel distances and without much attenuation. This brings the prospect of monitoring large structures with a few sensors/actuators. However, the problem of damage detection and identification is an "inverse problem" in which we do not have the luxury of knowing the exact mathematical model of the system. On top of that, the problem is more challenging due to the confounding factors of statistical variation of the material and geometric properties. Typically this problem may also be ill-posed. Due to all these complexities, the direct solution of the problem of damage detection and identification in SHM is impossible. Therefore an indirect method using the solution of the "forward problem" is popular for solving the "inverse problem". This requires a fast forward-problem solver. Due to the complexities involved in the forward problem of scattering of Lamb waves from damage, researchers rely primarily on numerical techniques such as FEM, BEM, etc. But these methods are slow and practically impossible to use in structural health monitoring. We have developed a fast and accurate analytical forward-problem solver for this purpose. This solver, CMEP (complex modes expansion and vector projection), can simulate scattering of Lamb waves from all types of damage in thin-walled structures quickly and accurately to assist the inverse-problem solver.
7 CFR 94.303 - Analytical methods.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 3 2011-01-01 2011-01-01 false Analytical methods. 94.303 Section 94.303 Agriculture... POULTRY AND EGG PRODUCTS Processed Poultry Products § 94.303 Analytical methods. The analytical methods... latest edition of the Official Methods of Analysis of AOAC INTERNATIONAL, Suite 500, 481 North...
7 CFR 94.303 - Analytical methods.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 3 2010-01-01 2010-01-01 false Analytical methods. 94.303 Section 94.303 Agriculture... POULTRY AND EGG PRODUCTS Processed Poultry Products § 94.303 Analytical methods. The analytical methods... latest edition of the Official Methods of Analysis of AOAC INTERNATIONAL, Suite 500, 481 North...
7 CFR 93.4 - Analytical methods.
Code of Federal Regulations, 2012 CFR
2012-01-01
... PROCESSED FRUITS AND VEGETABLES Citrus Juices and Certain Citrus Products § 93.4 Analytical methods. (a) The majority of analytical methods for citrus products are found in the Official Methods of Analysis of AOAC...-2417. (b) Other analytical methods for citrus products may be used as approved by the AMS...
7 CFR 93.4 - Analytical methods.
Code of Federal Regulations, 2010 CFR
2010-01-01
... PROCESSED FRUITS AND VEGETABLES Citrus Juices and Certain Citrus Products § 93.4 Analytical methods. (a) The majority of analytical methods for citrus products are found in the Official Methods of Analysis of AOAC...-2417. (b) Other analytical methods for citrus products may be used as approved by the AMS...
7 CFR 93.4 - Analytical methods.
Code of Federal Regulations, 2011 CFR
2011-01-01
... PROCESSED FRUITS AND VEGETABLES Citrus Juices and Certain Citrus Products § 93.4 Analytical methods. (a) The majority of analytical methods for citrus products are found in the Official Methods of Analysis of AOAC...-2417. (b) Other analytical methods for citrus products may be used as approved by the AMS...
7 CFR 93.4 - Analytical methods.
Code of Federal Regulations, 2014 CFR
2014-01-01
... PROCESSED FRUITS AND VEGETABLES Citrus Juices and Certain Citrus Products § 93.4 Analytical methods. (a) The majority of analytical methods for citrus products are found in the Official Methods of Analysis of AOAC...-2417. (b) Other analytical methods for citrus products may be used as approved by the AMS...
7 CFR 93.4 - Analytical methods.
Code of Federal Regulations, 2013 CFR
2013-01-01
... PROCESSED FRUITS AND VEGETABLES Citrus Juices and Certain Citrus Products § 93.4 Analytical methods. (a) The majority of analytical methods for citrus products are found in the Official Methods of Analysis of AOAC...-2417. (b) Other analytical methods for citrus products may be used as approved by the AMS...
Advanced epidemiologic and analytical methods.
Albanese, E
2016-01-01
Observational studies are indispensable for etiologic research, and are key to testing life-course hypotheses and improving our understanding of neurologic diseases that have long induction and latency periods. In recent years, a plethora of advanced design and analytic techniques have been developed to strengthen the robustness, and ultimately the validity, of the results of observational studies, and to address their inherent proneness to bias. It is the responsibility of clinicians and researchers to critically appraise and appropriately contextualize the findings of the exponentially expanding scientific literature. This critical appraisal should be rooted in a thorough understanding of the advanced epidemiologic methods and techniques commonly used to formulate and test relevant hypotheses and to keep bias at bay. PMID:27637951
7 CFR 94.103 - Analytical methods.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 3 2011-01-01 2011-01-01 false Analytical methods. 94.103 Section 94.103 Agriculture... POULTRY AND EGG PRODUCTS Voluntary Analyses of Egg Products § 94.103 Analytical methods. The analytical methods used by the Science and Technology Division laboratories to perform voluntary analyses for...
7 CFR 94.103 - Analytical methods.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 3 2010-01-01 2010-01-01 false Analytical methods. 94.103 Section 94.103 Agriculture... POULTRY AND EGG PRODUCTS Voluntary Analyses of Egg Products § 94.103 Analytical methods. The analytical methods used by the Science and Technology Division laboratories to perform voluntary analyses for...
40 CFR 141.89 - Analytical methods.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 22 2010-07-01 2010-07-01 false Analytical methods. 141.89 Section 141...) NATIONAL PRIMARY DRINKING WATER REGULATIONS Control of Lead and Copper § 141.89 Analytical methods. (a... shall be conducted with the methods in § 141.23(k)(1). (1) Analyses for alkalinity,...
40 CFR 141.89 - Analytical methods.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 23 2011-07-01 2011-07-01 false Analytical methods. 141.89 Section 141...) NATIONAL PRIMARY DRINKING WATER REGULATIONS Control of Lead and Copper § 141.89 Analytical methods. (a... shall be conducted with the methods in § 141.23(k)(1). (1) Analyses for alkalinity,...
Li, Rui; Ye, Hongfei; Zhang, Weisheng; Ma, Guojun; Su, Yewang
2015-10-29
Spring constant calibration of the atomic force microscope (AFM) cantilever is of fundamental importance for quantifying the force between the AFM cantilever tip and the sample. The calibration within the framework of thin plate theory undoubtedly has a higher accuracy and broader scope than that within the well-established beam theory. However, thin plate theory-based accurate analytic determination of the constant has been perceived as an extremely difficult issue. In this paper, we implement the thin plate theory-based analytic modeling for the static behavior of rectangular AFM cantilevers, which reveals that the three-dimensional effect and Poisson effect play important roles in accurate determination of the spring constants. A quantitative scaling law is found that the normalized spring constant depends only on the Poisson's ratio, normalized dimension and normalized load coordinate. Both the literature and our refined finite element model validate the present results. The developed model is expected to serve as the benchmark for accurate calibration of rectangular AFM cantilevers.
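For context, the beam-theory baseline that the plate-theory calibration refines is the classical end-loaded rectangular cantilever stiffness k = Ewt³/(4L³). The sketch below evaluates it for a representative silicon cantilever; the dimensions and modulus are illustrative assumptions, not values from the paper, and the plate-theory Poisson and 3-D corrections discussed above are not included.

```python
def beam_spring_constant(E, w, t, L):
    """Euler-Bernoulli beam-theory spring constant of a rectangular
    cantilever loaded at its free end: k = E * w * t^3 / (4 * L^3).
    This is the well-established baseline that thin-plate-theory
    calibration corrects for Poisson and three-dimensional effects."""
    return E * w * t ** 3 / (4.0 * L ** 3)

# representative silicon cantilever (illustrative values):
# E = 169 GPa, width 30 um, thickness 2 um, length 200 um -> k in N/m
k = beam_spring_constant(169e9, 30e-6, 2e-6, 200e-6)
print(k)
```

Because k scales with t³/L³, small dimensional uncertainties dominate the error budget, which is one reason calibration against an accurate analytic model matters.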
Highly accurate analytical energy of a two-dimensional exciton in a constant magnetic field
NASA Astrophysics Data System (ADS)
Hoang, Ngoc-Tram D.; Nguyen, Duy-Anh P.; Hoang, Van-Hung; Le, Van-Hoang
2016-08-01
Explicit expressions are given for analytically describing the dependence of the energy of a two-dimensional exciton on the magnetic field intensity. These expressions are highly accurate, with a precision of up to three decimal places over the whole range of magnetic field intensity. The results are shown for the ground state and some excited states; moreover, we have all the formulae needed to obtain similar expressions for any excited state. Analysis of the numerical results shows that the precision of three decimal places is maintained for excited states with principal quantum number up to n = 100.
7 CFR 91.23 - Analytical methods.
Code of Federal Regulations, 2013 CFR
2013-01-01
.... Environmental Protection Agency (EPA) Chemical Exposure Research Branch, EPA Office of Research and Development... Evaluating Solid Waste Physical/Chemical Methods, Environmental Protection Agency, Office of Solid Waste, SW... and Engineering Center's Military Specifications, approved analytical test methods noted therein,...
Method of identifying analyte-binding peptides
Kauvar, Lawrence M.
1990-01-01
A method for affinity chromatography or adsorption of a designated analyte utilizes a paralog as the affinity partner. The immobilized paralog can be used in purification or analysis of the analyte; the paralog can also be used as a substitute for antibody in an immunoassay. The paralog is identified by screening candidate peptide sequences of 4-20 amino acids for specific affinity to the analyte.
Method of identifying analyte-binding peptides
Kauvar, L.M.
1990-10-16
A method for affinity chromatography or adsorption of a designated analyte utilizes a paralog as the affinity partner. The immobilized paralog can be used in purification or analysis of the analyte; the paralog can also be used as a substitute for antibody in an immunoassay. The paralog is identified by screening candidate peptide sequences of 4-20 amino acids for specific affinity to the analyte. 5 figs.
Matrix Methods to Analytic Geometry.
ERIC Educational Resources Information Center
Bandy, C.
1982-01-01
The use of basis matrix methods to rotate axes is detailed. It is felt that persons who have need to rotate axes often will find that the matrix method saves considerable work. One drawback is that most students first learning to rotate axes will not yet have studied linear algebra. (MP)
Method and apparatus for detecting an analyte
Allendorf, Mark D.; Hesketh, Peter J.
2011-11-29
We describe the use of coordination polymers (CP) as coatings on microcantilevers for the detection of chemical analytes. CP exhibit changes in unit cell parameters upon adsorption of analytes, which will induce a stress in a static microcantilever upon which a CP layer is deposited. We also describe fabrication methods for depositing CP layers on surfaces.
Fast optical proximity correction: analytical method
NASA Astrophysics Data System (ADS)
Shioiri, Satomi; Tanabe, Hiroyoshi
1995-05-01
In automating optical proximity correction, calculation speed becomes important. In this paper we present a novel method for calculating proximity-corrected features analytically. The calculation takes only several times as long as computing the intensity at a single point on the wafer, and is therefore far faster than conventional repetitive aerial image calculations. The method is applied to a simple periodic pattern. The simulated results show great improvement in linearity after correction and prove the effectiveness of this analytical method.
Differential equation based method for accurate approximations in optimization
NASA Technical Reports Server (NTRS)
Pritchard, Jocelyn I.; Adelman, Howard M.
1990-01-01
A method to efficiently and accurately approximate the effect of design changes on structural response is described. The key to this method is to interpret sensitivity equations as differential equations that may be solved explicitly for closed form approximations, hence, the method is denoted the Differential Equation Based (DEB) method. Approximations were developed for vibration frequencies, mode shapes and static displacements. The DEB approximation method was applied to a cantilever beam and results compared with the commonly-used linear Taylor series approximations and exact solutions. The test calculations involved perturbing the height, width, cross-sectional area, tip mass, and bending inertia of the beam. The DEB method proved to be very accurate, and in most cases, was more accurate than the linear Taylor series approximation. The method is applicable to simultaneous perturbation of several design variables. Also, the approximations may be used to calculate other system response quantities. For example, the approximations for displacements are used to approximate bending stresses.
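The record does not reproduce the DEB equations, so the sketch below only illustrates the core idea under stated assumptions: interpret a known sensitivity as a differential equation and integrate it in closed form, rather than truncating a Taylor series. For a cantilever whose tip deflection scales as delta ~ h**-3 in the wall height h, the sensitivity d(delta)/dh = -3*delta/h integrates exactly, while the linear Taylor approximation degrades for large perturbations.

```python
# Illustration only (not the paper's exact formulation): tip deflection of a
# cantilever scales as delta ~ h**-3, so d(delta)/dh = -3*delta/h. Integrating
# that sensitivity relation as an ODE gives a closed-form (DEB-style)
# approximation; the linear Taylor series uses the same sensitivity truncated.

def deb_approx(delta0, h0, h):
    """Closed-form result of integrating d(delta)/dh = -3*delta/h from h0."""
    return delta0 * (h0 / h) ** 3

def taylor_approx(delta0, h0, h):
    """Linear Taylor series about h0 using the same sensitivity."""
    return delta0 + (-3.0 * delta0 / h0) * (h - h0)

def exact(delta0, h0, h):
    return delta0 * (h0 / h) ** 3   # exact for delta ~ h**-3

h0, delta0 = 1.0, 1.0
for h in (1.1, 1.3, 1.5):   # 10-50% perturbations of the design variable
    e, d, t = exact(delta0, h0, h), deb_approx(delta0, h0, h), taylor_approx(delta0, h0, h)
    print(f"h={h}: exact={e:.4f}  DEB={d:.4f}  Taylor={t:.4f}")
```

Because the assumed sensitivity relation is exact here, the integrated approximation tracks the true response at large perturbations where the Taylor line fails (and even goes negative at h = 1.5).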
7 CFR 94.4 - Analytical methods.
Code of Federal Regulations, 2013 CFR
2013-01-01
..., Gaithersburg, MD 20877-2417. (d) Manual of Analytical Methods for the Analysis of Pesticide Residues in Human...), Volumes I and II, Food and Drug Administration, Center for Food Safety and Applied Nutrition (CFSAN),...
A Simple and Accurate Method for Measuring Enzyme Activity.
ERIC Educational Resources Information Center
Yip, Din-Yan
1997-01-01
Presents methods commonly used for investigating enzyme activity using catalase and presents a new method for measuring catalase activity that is more reliable and accurate. Provides results that are readily reproduced and quantified. Can also be used for investigations of enzyme properties such as the effects of temperature, pH, inhibitors,…
A fast and accurate method for echocardiography strain rate imaging
NASA Astrophysics Data System (ADS)
Tavakoli, Vahid; Sahba, Nima; Hajebi, Nima; Nambakhsh, Mohammad Saleh
2009-02-01
Strain and strain rate imaging have recently proved superior to classical motion estimation methods as techniques for quantitative analysis of myocardial function. In this paper, we propose a novel strain rate imaging algorithm using a new optical flow technique that is faster and more accurate than previous correlation-based methods. The new method presumes spatiotemporal constancy of the intensity and magnitude of the image, makes use of spline moments in a multiresolution approach, and obtains the cardiac central point from a combination of the center of mass and endocardial tracking. The proposed method is shown to overcome the intensity variations of ultrasound texture while preserving the ability to estimate motion across different motions and orientations. Evaluation on simulated, phantom (a contractile rubber balloon), and real sequences shows that this technique is more accurate and faster than previous methods.
A fourth-order accurate adaptive mesh refinement method for Poisson's equation
Barad, Michael; Colella, Phillip
2004-08-20
We present a block-structured adaptive mesh refinement (AMR) method for computing solutions to Poisson's equation in two and three dimensions. It is based on a conservative, finite-volume formulation of the classical Mehrstellen methods. This is combined with finite volume AMR discretizations to obtain a method that is fourth-order accurate in solution error, and with easily verifiable solvability conditions for Neumann and periodic boundary conditions.
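The abstract names the classical Mehrstellen discretization; a minimal sketch of its single-grid core (no AMR) is the compact 9-point stencil, which reaches fourth-order accuracy only when the right-hand side is corrected by h^2/12 times the discrete Laplacian of f. The check below applies the stencil to a known solution and verifies the fourth-order truncation error by grid refinement.

```python
import numpy as np

def mehrstellen_residual(n):
    # Apply the compact 9-point (Mehrstellen) stencil to the exact solution
    # u = sin(pi x) sin(pi y) of u_xx + u_yy = f on the unit square and
    # measure the truncation residual against the corrected right-hand side
    # f + (h^2/12) * Lap_h f. The residual should shrink like h**4.
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    X, Y = np.meshgrid(x, x, indexing="ij")
    u = np.sin(np.pi * X) * np.sin(np.pi * Y)
    f = -2.0 * np.pi**2 * u
    # 9-point stencil: [1 4 1; 4 -20 4; 1 4 1] / (6 h^2)
    S9 = (4.0 * (u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:])
          + (u[:-2, :-2] + u[:-2, 2:] + u[2:, :-2] + u[2:, 2:])
          - 20.0 * u[1:-1, 1:-1]) / (6.0 * h**2)
    lap_f = (f[:-2, 1:-1] + f[2:, 1:-1] + f[1:-1, :-2] + f[1:-1, 2:]
             - 4.0 * f[1:-1, 1:-1]) / h**2
    rhs = f[1:-1, 1:-1] + (h**2 / 12.0) * lap_f
    return np.max(np.abs(S9 - rhs))

r1, r2 = mehrstellen_residual(32), mehrstellen_residual(64)
print(r1 / r2)   # ~16, i.e. fourth-order truncation error
```

Without the h^2/12 right-hand-side correction the same stencil is only second-order accurate, which is the point of the Mehrstellen ("multiple places") formulation.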
Analytical Methods for Trace Metals. Training Manual.
ERIC Educational Resources Information Center
Office of Water Program Operations (EPA), Cincinnati, OH. National Training and Operational Technology Center.
This training manual presents material on the theoretical concepts involved in the methods listed in the Federal Register as approved for determination of trace metals. Emphasis is on laboratory operations. This course is intended for chemists and technicians with little or no experience in analytical methods for trace metals. Students should have…
Statistically qualified neuro-analytic failure detection method and system
Vilim, Richard B.; Garcia, Humberto E.; Chen, Frederick W.
2002-03-02
An apparatus and method for monitoring a process involve development and application of a statistically qualified neuro-analytic (SQNA) model to accurately and reliably identify process change. The development of the SQNA model is accomplished in two stages: deterministic model adaptation and stochastic model modification of the deterministic model adaptation. Deterministic model adaptation involves formulating an analytic model of the process representing known process characteristics, augmenting the analytic model with a neural network that captures unknown process characteristics, and training the resulting neuro-analytic model by adjusting the neural network weights according to a unique scaled equation error minimization technique. Stochastic model modification involves qualifying any remaining uncertainty in the trained neuro-analytic model by formulating a likelihood function, given an error propagation equation, for computing the probability that the neuro-analytic model generates measured process output. Preferably, the developed SQNA model is validated using known sequential probability ratio tests and applied to the process as an on-line monitoring system. Illustrative of the method and apparatus, the method is applied to a peristaltic pump system.
Quantifying Accurate Calorie Estimation Using the "Think Aloud" Method
ERIC Educational Resources Information Center
Holmstrup, Michael E.; Stearns-Bruening, Kay; Rozelle, Jeffrey
2013-01-01
Objective: Clients often have limited time in a nutrition education setting. An improved understanding of the strategies used to accurately estimate calories may help to identify areas of focused instruction to improve nutrition knowledge. Methods: A "Think Aloud" exercise was recorded during the estimation of calories in a standard dinner meal…
An accurate method for two-point boundary value problems
NASA Technical Reports Server (NTRS)
Walker, J. D. A.; Weigand, G. G.
1979-01-01
A second-order method for solving two-point boundary value problems on a uniform mesh is presented where the local truncation error is obtained for use with the deferred correction process. In this simple finite difference method the tridiagonal nature of the classical method is preserved but the magnitude of each term in the truncation error is reduced by a factor of two. The method is applied to a number of linear and nonlinear problems and it is shown to produce more accurate results than either the classical method or the technique proposed by Keller (1969).
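A minimal sketch of the tridiagonal core described above (central differences plus the Thomas algorithm; the deferred-correction refinement is omitted): solve u'' = f on a uniform mesh with homogeneous Dirichlet conditions and confirm second-order convergence against a known solution.

```python
import numpy as np

def solve_bvp(f, n):
    """Second-order central differences for u'' = f(x) on [0, 1] with
    u(0) = u(1) = 0, solved with the Thomas algorithm (tridiagonal)."""
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    m = n - 1                      # number of interior unknowns
    a = np.full(m, 1.0)            # sub-diagonal
    b = np.full(m, -2.0)           # diagonal
    c = np.full(m, 1.0)            # super-diagonal
    d = h**2 * f(x[1:-1])          # (u[i-1] - 2u[i] + u[i+1]) = h^2 f_i
    for i in range(1, m):          # forward elimination
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    u = np.zeros(n + 1)            # back substitution (boundaries stay 0)
    u[m] = d[m - 1] / b[m - 1]
    for i in range(m - 2, -1, -1):
        u[i + 1] = (d[i] - c[i] * u[i + 2]) / b[i]
    return x, u

f = lambda x: -np.pi**2 * np.sin(np.pi * x)   # exact solution: sin(pi x)
x, u = solve_bvp(f, 64)
err64 = np.max(np.abs(u - np.sin(np.pi * x)))
x, u = solve_bvp(f, 128)
err128 = np.max(np.abs(u - np.sin(np.pi * x)))
print(err64 / err128)   # ~4: halving h quarters the error
```

The paper's contribution is the explicit local truncation error that feeds a deferred-correction pass on this same tridiagonal system; the sketch shows only the baseline solver.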
Analytical methods for quantitation of prenylated flavonoids from hops.
Nikolić, Dejan; van Breemen, Richard B
2013-01-01
The female flowers of hops (Humulus lupulus L.) are used as a flavoring agent in the brewing industry. There is growing interest in possible health benefits of hops, particularly as estrogenic and chemopreventive agents. Among the possible active constituents, most of the attention has focused on prenylated flavonoids, which can chemically be classified as prenylated chalcones and prenylated flavanones. Among chalcones, xanthohumol (XN) and desmethylxanthohumol (DMX) have been the most studied, while among flavanones, 8-prenylnaringenin (8-PN) and 6-prenylnaringenin (6-PN) have received the most attention. Because of the interest in medicinal properties of prenylated flavonoids, there is demand for accurate, reproducible and sensitive analytical methods to quantify these compounds in various matrices. Such methods are needed, for example, for quality control and standardization of hop extracts, measurement of the content of prenylated flavonoids in beer, and to determine pharmacokinetic properties of prenylated flavonoids in animals and humans. This review summarizes currently available analytical methods for quantitative analysis of the major prenylated flavonoids, with an emphasis on the LC-MS and LC-MS-MS methods and their recent applications to biomedical research on hops. This review covers all methods in which prenylated flavonoids have been measured, either as the primary analytes or as a part of a larger group of analytes. The review also discusses methodological issues relating to the quantitative analysis of these compounds regardless of the chosen analytical approach. PMID:24077106
Accurate upwind-monotone (nonoscillatory) methods for conservation laws
NASA Technical Reports Server (NTRS)
Huynh, Hung T.
1992-01-01
The well-known MUSCL scheme of Van Leer is constructed using a piecewise linear approximation. The MUSCL scheme is second-order accurate in the smooth part of the solution except at extrema, where the accuracy degenerates to first order due to the monotonicity constraint. To construct accurate schemes which are free from oscillations, the author introduces the concept of upwind monotonicity. Several classes of schemes, which are upwind monotone and of uniform second- or third-order accuracy, are then presented. Results for advection with constant speed are shown. It is also shown that the new scheme compares favorably with state-of-the-art methods.
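For context, a minimal MUSCL scheme for the constant-speed advection test mentioned above (this sketches the baseline Van Leer construction with a minmod limiter, not the paper's upwind-monotone schemes): piecewise-linear reconstruction with limited slopes, upwind fluxes, and periodic boundaries.

```python
import numpy as np

def minmod(a, b):
    """Limited slope: the smaller of a, b when they agree in sign, else 0."""
    return np.where(a * b > 0.0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def muscl_step(u, cfl):
    """One step of a minmod-limited MUSCL scheme for u_t + u_x = 0 (a = 1),
    periodic boundaries, 0 < cfl <= 1."""
    du = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)   # limited cell slopes
    flux = u + 0.5 * (1.0 - cfl) * du                    # upwind state at i+1/2
    return u - cfl * (flux - np.roll(flux, 1))

n = 200
x = np.linspace(0.0, 1.0, n, endpoint=False)
u = np.where(np.abs(x - 0.3) < 0.1, 1.0, 0.0)   # square pulse in [0, 1]
u0 = u.copy()
cfl = 0.5
for _ in range(int(round(0.25 / (cfl / n)))):   # advect a distance of 0.25
    u = muscl_step(u, cfl)
print(u.min(), u.max())   # no oscillations: solution stays within [0, 1]
```

The monotonicity constraint is visible here: the limiter clips slopes at the pulse edges (and flattens extrema to first order), which is exactly the accuracy loss the paper's upwind-monotone schemes are designed to avoid.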
Accurate Method for Determining Adhesion of Cantilever Beams
Michalske, T.A.; de Boer, M.P.
1999-01-08
Using surface micromachined samples, we demonstrate the accurate measurement of cantilever beam adhesion by using test structures which are adhered over long attachment lengths. We show that this configuration has a deep energy well, such that a fracture equilibrium is easily reached. When compared to the commonly used method of determining the shortest attached beam, the present method is much less sensitive to variations in surface topography or to details of capillary drying.
Method for Accurately Calibrating a Spectrometer Using Broadband Light
NASA Technical Reports Server (NTRS)
Simmons, Stephen; Youngquist, Robert
2011-01-01
A novel method has been developed for performing very fine calibration of a spectrometer. This process is particularly useful for modern miniature charge-coupled device (CCD) spectrometers where a typical factory wavelength calibration has been performed and a finer, more accurate calibration is desired. Typically, the factory calibration is done with a spectral line source that generates light at known wavelengths, allowing specific pixels in the CCD array to be assigned wavelength values. This method is good to about 1 nm across the spectrometer's wavelength range. The new method appears to be accurate to about 0.1 nm, a factor of ten improvement. White light is passed through an unbalanced Michelson interferometer, producing an optical signal with significant spectral variation. A simple theory can be developed to describe this spectral pattern, so by comparing the actual spectrometer output against this predicted pattern, errors in the wavelength assignment made by the spectrometer can be determined.
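The details below are assumptions rather than the paper's exact procedure, but the core comparison can be sketched simply: an unbalanced Michelson with path difference d imposes a channelled spectrum proportional to 1 + cos(2*pi*d/lambda), so a small error in the spectrometer's wavelength axis shows up as a mismatch against that model and can be recovered by a one-parameter search.

```python
import numpy as np

# Sketch (assumed setup, not the paper's exact theory): a 50 um path
# difference imprints ~30 fringes on a 500-700 nm spectrum. A miscalibrated
# wavelength axis shifts the measured fringe pattern; scanning a trial shift
# against the model recovers the calibration error.
d = 50e-6                                   # optical path difference, m
lam = np.linspace(500e-9, 700e-9, 2000)     # true wavelength axis
true_shift = 0.12e-9                        # 0.12 nm calibration error
measured = 1.0 + np.cos(2.0 * np.pi * d / (lam + true_shift))

shifts = np.linspace(-0.5e-9, 0.5e-9, 1001)
errs = [np.sum((1.0 + np.cos(2.0 * np.pi * d / (lam + s)) - measured) ** 2)
        for s in shifts]
best = shifts[int(np.argmin(errs))]
print(best * 1e9)   # ~0.12 nm recovered
```

In practice the fit would be per-pixel (the wavelength error varies across the array) rather than a single global shift; this sketch shows only the pattern-matching principle.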
7 CFR 93.13 - Analytical methods.
Code of Federal Regulations, 2014 CFR
2014-01-01
... PROCESSED FRUITS AND VEGETABLES Peanuts, Tree Nuts, Corn and Other Oilseeds § 93.13 Analytical methods. Official analyses for peanuts, nuts, corn, oilseeds, and related vegetable oils are found in the following... Recommended Practices of the American Oil Chemists' Society (AOCS), American Oil Chemists' Society, P.O....
7 CFR 93.13 - Analytical methods.
Code of Federal Regulations, 2011 CFR
2011-01-01
... PROCESSED FRUITS AND VEGETABLES Peanuts, Tree Nuts, Corn and Other Oilseeds § 93.13 Analytical methods. Official analyses for peanuts, nuts, corn, oilseeds, and related vegetable oils are found in the following... Recommended Practices of the American Oil Chemists' Society (AOCS), American Oil Chemists' Society, P.O....
7 CFR 93.13 - Analytical methods.
Code of Federal Regulations, 2013 CFR
2013-01-01
... PROCESSED FRUITS AND VEGETABLES Peanuts, Tree Nuts, Corn and Other Oilseeds § 93.13 Analytical methods. Official analyses for peanuts, nuts, corn, oilseeds, and related vegetable oils are found in the following... Recommended Practices of the American Oil Chemists' Society (AOCS), American Oil Chemists' Society, P.O....
7 CFR 93.13 - Analytical methods.
Code of Federal Regulations, 2012 CFR
2012-01-01
... PROCESSED FRUITS AND VEGETABLES Peanuts, Tree Nuts, Corn and Other Oilseeds § 93.13 Analytical methods. Official analyses for peanuts, nuts, corn, oilseeds, and related vegetable oils are found in the following... Recommended Practices of the American Oil Chemists' Society (AOCS), American Oil Chemists' Society, P.O....
7 CFR 93.13 - Analytical methods.
Code of Federal Regulations, 2010 CFR
2010-01-01
... PROCESSED FRUITS AND VEGETABLES Peanuts, Tree Nuts, Corn and Other Oilseeds § 93.13 Analytical methods. Official analyses for peanuts, nuts, corn, oilseeds, and related vegetable oils are found in the following... Recommended Practices of the American Oil Chemists' Society (AOCS), American Oil Chemists' Society, P.O....
Analytical Methods for Characterizing Magnetic Resonance Probes
Manus, Lisa M.; Strauch, Renee C.; Hung, Andy H.; Eckermann, Amanda L.; Meade, Thomas J.
2012-01-01
SUMMARY The efficiency of Gd(III) contrast agents in magnetic resonance image enhancement is governed by a set of tunable structural parameters. Understanding and measuring these parameters requires specific analytical techniques. This Feature describes strategies to optimize each of the critical Gd(III) relaxation parameters for molecular imaging applications and the methods employed for their evaluation. PMID:22624599
BASIC: A Simple and Accurate Modular DNA Assembly Method.
Storch, Marko; Casini, Arturo; Mackrow, Ben; Ellis, Tom; Baldwin, Geoff S
2017-01-01
Biopart Assembly Standard for Idempotent Cloning (BASIC) is a simple, accurate, and robust DNA assembly method. The method is based on linker-mediated DNA assembly and provides highly accurate DNA assembly with 99 % correct assemblies for four parts and 90 % correct assemblies for seven parts [1]. The BASIC standard defines a single entry vector for all parts flanked by the same prefix and suffix sequences and its idempotent nature means that the assembled construct is returned in the same format. Once a part has been adapted into the BASIC format it can be placed at any position within a BASIC assembly without the need for reformatting. This allows laboratories to grow comprehensive and universal part libraries and to share them efficiently. The modularity within the BASIC framework is further extended by the possibility of encoding ribosomal binding sites (RBS) and peptide linker sequences directly on the linkers used for assembly. This makes BASIC a highly versatile library construction method for combinatorial part assembly including the construction of promoter, RBS, gene variant, and protein-tag libraries. In comparison with other DNA assembly standards and methods, BASIC offers a simple robust protocol; it relies on a single entry vector, provides for easy hierarchical assembly, and is highly accurate for up to seven parts per assembly round [2]. PMID:27671933
Prioritizing pesticide compounds for analytical methods development
Norman, Julia E.; Kuivila, Kathryn M.; Nowell, Lisa H.
2012-01-01
The U.S. Geological Survey (USGS) has a periodic need to re-evaluate pesticide compounds in terms of priorities for inclusion in monitoring and studies and, thus, must also assess the current analytical capabilities for pesticide detection. To meet this need, a strategy has been developed to prioritize pesticides and degradates for analytical methods development. Screening procedures were developed to separately prioritize pesticide compounds in water and sediment. The procedures evaluate pesticide compounds in existing USGS analytical methods for water and sediment and compounds for which recent agricultural-use information was available. Measured occurrence (detection frequency and concentrations) in water and sediment, predicted concentrations in water and predicted likelihood of occurrence in sediment, potential toxicity to aquatic life or humans, and priorities of other agencies or organizations, regulatory or otherwise, were considered. Several existing strategies for prioritizing chemicals for various purposes were reviewed, including those that identify and prioritize persistent, bioaccumulative, and toxic compounds, and those that determine candidates for future regulation of drinking-water contaminants. The systematic procedures developed and used in this study rely on concepts common to many previously established strategies. The evaluation of pesticide compounds resulted in the classification of compounds into three groups: Tier 1 for high priority compounds, Tier 2 for moderate priority compounds, and Tier 3 for low priority compounds. For water, a total of 247 pesticide compounds were classified as Tier 1 and, thus, are high priority for inclusion in analytical methods for monitoring and studies. Of these, about three-quarters are included in some USGS analytical method; however, many of these compounds are included on research methods that are expensive and for which there are few data on environmental samples. The remaining quarter of Tier 1
Analytic sequential methods for detecting network intrusions
NASA Astrophysics Data System (ADS)
Chen, Xinjia; Walker, Ernest
2014-05-01
In this paper, we propose an analytic sequential method for detecting port-scan attackers, which routinely perform random "portscans" of IP addresses to find vulnerable servers to compromise. In addition to rigorously controlling the probability of falsely implicating benign remote hosts as malicious, our method performs significantly faster than other current solutions. We have developed explicit formulae for quick determination of the parameters of the new detection algorithm.
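The record does not give the paper's formulation, but the standard analytic sequential method for this problem is Wald's sequential probability ratio test (SPRT), sketched below under assumed parameters: each connection attempt by a remote host either fails (scanner-like) or succeeds (benign-like), and the accumulated log-likelihood ratio is compared against thresholds set by the target error probabilities.

```python
import math

# Sketch of a Wald SPRT for port-scan detection (assumed rates, not the
# paper's values): p0 = failed-connection probability for a benign host,
# p1 = probability for a scanner; alpha/beta bound the error probabilities.
def sprt(events, p0=0.2, p1=0.8, alpha=0.01, beta=0.01):
    upper = math.log((1.0 - beta) / alpha)   # cross -> declare "malicious"
    lower = math.log(beta / (1.0 - alpha))   # cross -> declare "benign"
    llr = 0.0
    for n, failed in enumerate(events, start=1):
        if failed:
            llr += math.log(p1 / p0)
        else:
            llr += math.log((1.0 - p1) / (1.0 - p0))
        if llr >= upper:
            return "malicious", n
        if llr <= lower:
            return "benign", n
    return "undecided", len(events)

print(sprt([1, 1, 1, 1, 1, 1]))   # scanner-like stream: decides quickly
print(sprt([0, 0, 0, 0, 0, 0]))   # benign-like stream
```

The appeal for intrusion detection is exactly what the abstract claims: the thresholds analytically control the false-implication probability, and decisions arrive after only a handful of observations.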
NASA Technical Reports Server (NTRS)
Schlosser, Herbert; Ferrante, John
1989-01-01
An accurate analytic expression for the nonlinear change of the volume of a solid as a function of applied pressure is of great interest in high-pressure experimentation. It is found that a two-parameter analytic expression fits the experimental volume-change data to within a few percent over the entire experimentally attainable pressure range. Results are presented for 24 different materials including metals, ceramic semiconductors, polymers, and ionic and rare-gas solids.
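The record does not reproduce the paper's two-parameter expression, so as an illustration of fitting this kind of equation of state, the sketch below fits the well-known two-parameter Murnaghan form V(P)/V0 = (1 + K'P/K0)^(-1/K') to synthetic compression data by a simple grid search over (K0, K').

```python
import numpy as np

# Illustration only: the Murnaghan equation of state stands in for the
# paper's (unspecified) two-parameter expression. Synthetic V/V0 data are
# generated with K0 = 160 GPa, K' = 4, then recovered by grid search.
def murnaghan(P, K0, Kp):
    return (1.0 + Kp * P / K0) ** (-1.0 / Kp)

P = np.linspace(0.0, 100.0, 50)        # pressure grid, GPa
data = murnaghan(P, 160.0, 4.0)        # "experimental" volume ratios

best = None
for K0 in np.arange(100.0, 220.0, 1.0):
    for Kp in np.arange(2.0, 7.0, 0.1):
        err = np.sum((murnaghan(P, K0, Kp) - data) ** 2)
        if best is None or err < best[0]:
            best = (err, K0, Kp)
print(best[1], best[2])   # recovers ~(160, 4.0)
```

With real data a least-squares routine would replace the grid search, but the two-parameter structure (a modulus and its pressure derivative) is the same.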
Videometric terminal guidance method and system for UAV accurate landing
NASA Astrophysics Data System (ADS)
Zhou, Xiang; Lei, Zhihui; Yu, Qifeng; Zhang, Hongliang; Shang, Yang; Du, Jing; Gui, Yang; Guo, Pengyu
2012-06-01
We present a videometric method and system to implement terminal guidance for accurate landing of an Unmanned Aerial Vehicle (UAV). In the videometric system, two calibrated cameras attached to the ground are used, and a calibration method requiring at least 5 control points is developed to calibrate the inner and exterior parameters of the cameras. Cameras with 850 nm spectral filters are used to recognize an 850 nm LED target fixed on the UAV, which highlights itself in images with complicated backgrounds. An NNLOG (normalized negative Laplacian of Gaussian) operator is developed for automatic target detection and tracking. Finally, the 3-D position of the UAV can be calculated with high accuracy and transferred to the control system to direct accurate landing. The videometric system works at a rate of 50 Hz. Many real-flight and static accuracy experiments demonstrate the correctness and veracity of the method proposed in this paper, and they also indicate the reliability and robustness of the system. The static accuracy experiments show that the deviation is less than 10 cm when the target is far from the cameras and less than 2 cm within a 100 m region. The real-flight experiments show that the deviation from DGPS is less than 20 cm. The system implemented in this paper won the first prize in the AVIC Cup-International UAV Innovation Grand Prix, and it was the only entry that achieved UAV accurate landing without GPS or DGPS.
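The 3-D position step in a two-camera system like the one above is typically linear (DLT) triangulation; the sketch below (an assumed standard formulation, not the paper's exact algorithm) recovers a point from its projections in two calibrated views via an SVD null-space solve, verified on synthetic cameras.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of a 3-D point from two calibrated views.
    P1, P2: 3x4 projection matrices; x1, x2: pixel coordinates (u, v)."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)       # null vector of A = homogeneous point
    X = Vt[-1]
    return X[:3] / X[3]

# Synthetic check: two identical cameras a metre apart, both looking down +Z.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])
X_true = np.array([0.5, 0.2, 10.0])

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

X_est = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
print(X_est)   # ~[0.5, 0.2, 10.0]
```

With noisy detections the same least-squares structure applies; the singular vector simply minimizes the algebraic reprojection residual.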
A novel automated image analysis method for accurate adipocyte quantification
Osman, Osman S; Selway, Joanne L; Kępczyńska, Małgorzata A; Stocker, Claire J; O’Dowd, Jacqueline F; Cawthorne, Michael A; Arch, Jonathan RS; Jassim, Sabah; Langlands, Kenneth
2013-01-01
Increased adipocyte size and number are associated with many of the adverse effects observed in metabolic disease states. While methods to quantify such changes in the adipocyte are of scientific and clinical interest, manual methods to determine adipocyte size are both laborious and intractable to large scale investigations. Moreover, existing computational methods are not fully automated. We, therefore, developed a novel automatic method to provide accurate measurements of the cross-sectional area of adipocytes in histological sections, allowing rapid high-throughput quantification of fat cell size and number. Photomicrographs of H&E-stained paraffin sections of murine gonadal adipose were transformed using standard image processing/analysis algorithms to reduce background and enhance edge-detection. This allowed the isolation of individual adipocytes from which their area could be calculated. Performance was compared with manual measurements made from the same images, in which adipocyte area was calculated from estimates of the major and minor axes of individual adipocytes. Both methods identified an increase in mean adipocyte size in a murine model of obesity, with good concordance, although the calculation used to identify cell area from manual measurements was found to consistently over-estimate cell size. Here we report an accurate method to determine adipocyte area in histological sections that provides a considerable time saving over manual methods. PMID:23991362
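The measurement step of such a pipeline (after the thresholding and edge-enhancement described above) reduces to labeling connected regions in a binary mask and counting pixels per region; a minimal sketch on a synthetic mask, using SciPy's connected-component labeling:

```python
import numpy as np
from scipy import ndimage

# Sketch of the area-measurement step only: real H&E sections need the
# preprocessing described in the abstract. Two synthetic rectangular "cells"
# stand in for segmented adipocyte interiors.
mask = np.zeros((100, 100), dtype=bool)
mask[10:30, 10:40] = True      # "cell" 1: 20 x 30 = 600 px
mask[50:90, 50:70] = True      # "cell" 2: 40 x 20 = 800 px

labels, n = ndimage.label(mask)            # connected-component labeling
areas = np.bincount(labels.ravel())[1:]    # pixel counts, dropping background
print(n, sorted(areas.tolist()))           # 2 [600, 800]
```

Converting pixel counts to physical cross-sectional areas then only requires the micrometers-per-pixel scale of the photomicrograph.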
An Analytical Method of Estimating Turbine Performance
NASA Technical Reports Server (NTRS)
Kochendorfer, Fred D; Nettles, J Cary
1948-01-01
A method is developed by which the performance of a turbine over a range of operating conditions can be analytically estimated from the blade angles and flow areas. In order to use the method, certain coefficients that determine the weight flow and friction losses must be approximated. The method is used to calculate the performance of the single-stage turbine of a commercial aircraft gas-turbine engine and the calculated performance is compared with the performance indicated by experimental data. For the turbine of the typical example, the assumed pressure losses and turning angles give a calculated performance that represents the trends of the experimental performance with reasonable accuracy. The exact agreement between analytical performance and experimental performance is contingent upon the proper selection of the blading-loss parameter. A variation of blading-loss parameter from 0.3 to 0.5 includes most of the experimental data from the turbine investigated.
An analytical method of estimating turbine performance
NASA Technical Reports Server (NTRS)
Kochendorfer, Fred D; Nettles, J Cary
1949-01-01
A method is developed by which the performance of a turbine over a range of operating conditions can be analytically estimated from the blade angles and flow areas. In order to use the method, certain coefficients that determine the weight flow and the friction losses must be approximated. The method is used to calculate the performance of the single-stage turbine of a commercial aircraft gas-turbine engine and the calculated performance is compared with the performance indicated by experimental data. For the turbine of the typical example, the assumed pressure losses and the turning angles give a calculated performance that represents the trends of the experimental performance with reasonable accuracy. The exact agreement between analytical performance and experimental performance is contingent upon the proper selection of a blading-loss parameter.
A New Analytic Alignment Method for a SINS.
Tan, Caiming; Zhu, Xinhua; Su, Yan; Wang, Yu; Wu, Zhiqiang; Gu, Dongbing
2015-01-01
Analytic alignment is a type of self-alignment for a Strapdown inertial navigation system (SINS) that is based solely on two non-collinear vectors, which are the gravity and rotational velocity vectors of the Earth at a stationary base on the ground. The attitude of the SINS with respect to the Earth can be obtained directly using the TRIAD algorithm given two vector measurements. For a traditional analytic coarse alignment, all six outputs from the inertial measurement unit (IMU) are used to compute the attitude. In this study, a novel analytic alignment method called selective alignment is presented. This method uses only three outputs of the IMU and a few properties from the remaining outputs such as the sign and the approximate value to calculate the attitude. Simulations and experimental results demonstrate the validity of this method, and the precision of yaw is improved using the selective alignment method compared to the traditional analytic coarse alignment method in the vehicle experiment. The selective alignment principle provides an accurate relationship between the outputs and the attitude of the SINS relative to the Earth for a stationary base, and it is an extension of the TRIAD algorithm. The selective alignment approach has potential uses in applications such as self-alignment, fault detection, and self-calibration.
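The TRIAD algorithm at the heart of the method above can be sketched directly: build an orthonormal triad from each pair of vectors (gravity and Earth rate, in body and reference frames) and compose the two triads into the attitude matrix. The latitude-dependent Earth-rate components below are illustrative values, not from the paper.

```python
import numpy as np

def triad(v1_b, v2_b, v1_r, v2_r):
    """TRIAD: attitude matrix A with v_b = A @ v_r, from two non-collinear
    vectors measured in the body (b) and reference (r) frames."""
    def frame(v1, v2):
        t1 = v1 / np.linalg.norm(v1)
        t2 = np.cross(v1, v2)
        t2 /= np.linalg.norm(t2)
        return np.column_stack([t1, t2, np.cross(t1, t2)])
    return frame(v1_b, v2_b) @ frame(v1_r, v2_r).T

# Synthetic check: gravity and Earth-rate vectors seen through a known yaw.
yaw = np.radians(30.0)
A_true = np.array([[np.cos(yaw), np.sin(yaw), 0.0],
                   [-np.sin(yaw), np.cos(yaw), 0.0],
                   [0.0, 0.0, 1.0]])
g_r = np.array([0.0, 0.0, -9.81])        # gravity in the reference frame
w_r = np.array([0.0, 5.3e-5, 4.5e-5])    # Earth rate, mid-latitude (approx.)
A = triad(A_true @ g_r, A_true @ w_r, g_r, w_r)
print(np.allclose(A, A_true))   # True
```

Selective alignment, as the abstract describes, replaces some of the six IMU outputs in this construction with only their signs and approximate values, yet still recovers the same attitude relationship.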
PMID:26556353
Dismer, Florian; Hansen, Sigrid; Oelmeier, Stefan Alexander; Hubbuch, Jürgen
2013-03-01
Chromatography is the method of choice for the separation of proteins, at both analytical and preparative scale. Orthogonal purification strategies for industrial use can easily be implemented by combining different modes of adsorption. Nevertheless, with flexibility comes the freedom of choice and optimal conditions for consecutive steps need to be identified in a robust and reproducible fashion. One way to address this issue is the use of mathematical models that allow for an in silico process optimization. Although this has been shown to work, model parameter estimation for complex feedstocks becomes the bottleneck in process development. An integral part of parameter assessment is the accurate measurement of retention times in a series of isocratic or gradient elution experiments. As high-resolution analytics that can differentiate between proteins are often not readily available, pure protein is mandatory for parameter determination. In this work, we present an approach that has the potential to solve this problem. Based on the uniqueness of UV absorption spectra of proteins, we were able to accurately measure retention times in systems of up to four co-eluting compounds. The presented approach is calibration-free, meaning that prior knowledge of pure component absorption spectra is not required. Actually, pure protein spectra can be determined from co-eluting proteins as part of the methodology. The approach was tested for size-exclusion chromatograms of 38 mixtures of co-eluting proteins. Retention times were determined with an average error of 0.6 s (1.6% of average peak width), approximated and measured pure component spectra showed an average coefficient of correlation of 0.992.
Accurate method of modeling cluster scaling relations in modified gravity
NASA Astrophysics Data System (ADS)
He, Jian-hua; Li, Baojiu
2016-06-01
We propose a new method to model cluster scaling relations in modified gravity. Using a suite of nonradiative hydrodynamical simulations, we show that the scaling relations of accumulated gas quantities, such as the Sunyaev-Zel'dovich effect (Compton-y parameter) and the X-ray Compton-y parameter, can be accurately predicted using the known results in the ΛCDM model with a precision of ~3%. This method provides a reliable way to analyze the gas physics in modified gravity using the less demanding and much more efficient pure cold dark matter simulations. Our results therefore have important theoretical and practical implications in constraining gravity using cluster surveys.
Methods for accurate homology modeling by global optimization.
Joo, Keehyoung; Lee, Jinwoo; Lee, Jooyoung
2012-01-01
High accuracy protein modeling from its sequence information is an important step toward revealing the sequence-structure-function relationship of proteins and nowadays it becomes increasingly more useful for practical purposes such as in drug discovery and in protein design. We have developed a protocol for protein structure prediction that can generate highly accurate protein models in terms of backbone structure, side-chain orientation, hydrogen bonding, and binding sites of ligands. To obtain accurate protein models, we have combined a powerful global optimization method with traditional homology modeling procedures such as multiple sequence alignment, chain building, and side-chain remodeling. We have built a series of specific score functions for these steps, and optimized them by utilizing conformational space annealing, which is one of the most successful combinatorial optimization algorithms currently available.
Algorithmic and analytical methods in network biology.
Koyutürk, Mehmet
2010-01-01
During the genomic revolution, algorithmic and analytical methods for organizing, integrating, analyzing, and querying biological sequence data proved invaluable. Today, increasing availability of high-throughput data pertaining to functional states of biomolecules, as well as their interactions, enables genome-scale studies of the cell from a systems perspective. The past decade witnessed significant efforts on the development of computational infrastructure for large-scale modeling and analysis of biological systems, commonly using network models. Such efforts lead to novel insights into the complexity of living systems, through development of sophisticated abstractions, algorithms, and analytical techniques that address a broad range of problems, including the following: (1) inference and reconstruction of complex cellular networks; (2) identification of common and coherent patterns in cellular networks, with a view to understanding the organizing principles and building blocks of cellular signaling, regulation, and metabolism; and (3) characterization of cellular mechanisms that underlie the differences between living systems, in terms of evolutionary diversity, development and differentiation, and complex phenotypes, including human disease. These problems pose significant algorithmic and analytical challenges because of the inherent complexity of the systems being studied; limitations of data in terms of availability, scope, and scale; intractability of resulting computational problems; and limitations of reference models for reliable statistical inference. This article provides a broad overview of existing algorithmic and analytical approaches to these problems, highlights key biological insights provided by these approaches, and outlines emerging opportunities and challenges in computational systems biology.
PMID:20836029
Secondary waste minimization in analytical methods
Green, D.W.; Smith, L.L.; Crain, J.S.; Boparai, A.S.; Kiely, J.T.; Yaeger, J.S.; Schilling, J.B.
1995-07-01
The characterization phase of site remediation is an important and costly part of the process. Because toxic solvents and other hazardous materials are used in common analytical methods, characterization is also a source of new waste, including mixed waste. Alternative analytical methods can reduce the volume or form of hazardous waste produced either in the sample preparation step or in the measurement step. The authors are examining alternative methods in the areas of inorganic, radiological, and organic analysis. For determining inorganic constituents, alternative methods were studied for sample introduction into inductively coupled plasma spectrometers. Figures of merit for the alternative methods, as well as their associated waste volumes, were compared with the conventional approaches. In the radiological area, the authors are comparing conventional methods for gross α/β measurements of soil samples to an alternative method that uses high-pressure microwave dissolution. For determination of organic constituents, microwave-assisted extraction was studied for RCRA-regulated semivolatile organics in a variety of solid matrices, including spiked samples in blank soil; polynuclear aromatic hydrocarbons in soils, sludges, and sediments; and semivolatile organics in soil. Extraction efficiencies were determined under varying conditions of time, temperature, microwave power, moisture content, and extraction solvent. Solvent usage was cut from the 300 mL used in conventional extraction methods to about 30 mL. Extraction results varied from one matrix to another. In most cases, the microwave-assisted extraction technique was as efficient as the more common Soxhlet or sonication extraction techniques.
Accurate photometric redshift probability density estimation - method comparison and application
NASA Astrophysics Data System (ADS)
Rau, Markus Michael; Seitz, Stella; Brimioulle, Fabrice; Frank, Eibe; Friedrich, Oliver; Gruen, Daniel; Hoyle, Ben
2015-10-01
We introduce an ordinal classification algorithm for photometric redshift estimation, which significantly improves the reconstruction of photometric redshift probability density functions (PDFs) for individual galaxies and galaxy samples. As a use case we apply our method to CFHTLS galaxies. The ordinal classification algorithm treats distinct redshift bins as ordered values, which improves the quality of photometric redshift PDFs, compared with non-ordinal classification architectures. We also propose a new single value point estimate of the galaxy redshift, which can be used to estimate the full redshift PDF of a galaxy sample. This method is competitive in terms of accuracy with contemporary algorithms, which stack the full redshift PDFs of all galaxies in the sample, but requires orders of magnitude less storage space. The methods described in this paper greatly improve the log-likelihood of individual object redshift PDFs, when compared with a popular neural network code (ANNZ). In our use case, this improvement reaches 50 per cent for high-redshift objects (z ≥ 0.75). We show that using these more accurate photometric redshift PDFs will lead to a reduction in the systematic biases by up to a factor of 4, when compared with less accurate PDFs obtained from commonly used methods. The cosmological analyses we examine and find improvement upon are the following: gravitational lensing cluster mass estimates, modelling of angular correlation functions and modelling of cosmic shear correlation functions.
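The core ordinal-classification trick the abstract describes can be sketched generically: treat the redshift bins as ordered, train one binary classifier per threshold for P(z > t_k), and difference the (monotonicity-enforced) cumulative probabilities into a per-bin PDF. This is a Frank-and-Hall-style reduction, not the paper's exact pipeline, and the classifier outputs below are made up for illustration.

```python
import numpy as np

def pdf_from_ordinal(cum_probs):
    """Turn per-threshold classifier outputs P(z > t_k), k = 1..K-1,
    into a K-bin redshift PDF (ordinal-classification reduction)."""
    c = np.clip(cum_probs, 0.0, 1.0)
    c = np.minimum.accumulate(c)            # enforce a monotone cumulative
    c = np.concatenate(([1.0], c, [0.0]))   # P(z > t_0) = 1, P(z > t_K) = 0
    pdf = c[:-1] - c[1:]                    # bin probability = difference
    return pdf / pdf.sum()

probs = np.array([0.9, 0.7, 0.2, 0.05])    # hypothetical threshold outputs
pdf = pdf_from_ordinal(probs)
print(pdf, pdf.sum())
```

The differencing step is why ordering the bins helps: a non-ordinal K-way classifier would ignore that adjacent redshift bins are correlated.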
A method for accurate temperature measurement using infrared thermal camera.
Tokunaga, Tomoharu; Narushima, Takashi; Yonezawa, Tetsu; Sudo, Takayuki; Okubo, Shuichi; Komatsubara, Shigeyuki; Sasaki, Katsuhiro; Yamamoto, Takahisa
2012-08-01
The temperature distribution on a centre-holed thin foil of molybdenum, used as a sample and heated using a sample-heating holder for electron microscopy, was measured using an infrared thermal camera. The temperature on the heated foil area located near the heating stage of the holder is almost equal to the temperature on the heating stage. When measuring the temperature at the edge of the hole in the foil, located farthest from the heating stage, however, a drop in temperature must be taken into account; yet no method has so far been developed to locally measure the temperature distribution on the heated sample. In this study, a method for the accurate measurement of temperature distribution on heated samples for electron microscopy is discussed.
Accurate finite difference methods for time-harmonic wave propagation
NASA Technical Reports Server (NTRS)
Harari, Isaac; Turkel, Eli
1994-01-01
Finite difference methods for solving problems of time-harmonic acoustics are developed and analyzed. Multidimensional inhomogeneous problems with variable, possibly discontinuous, coefficients are considered, accounting for the effects of employing nonuniform grids. A weighted-average representation is less sensitive to transitions in wave resolution (due to variable wave numbers or nonuniform grids) than the standard pointwise representation. Further enhancement in method performance is obtained by basing the stencils on generalizations of Padé approximation, or generalized definitions of the derivative, reducing spurious dispersion, anisotropy, and reflection, and by improving the representation of source terms. The resulting schemes have fourth-order accurate local truncation error on uniform grids and third-order on nonuniform grids. Guidelines for discretization pertaining to grid orientation and resolution are presented.
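A minimal 1D sketch of the kind of scheme the abstract discusses: the standard pointwise stencil versus a Padé-type fourth-order compact stencil for the Helmholtz equation u'' + k²u = 0 with Dirichlet data from the exact solution u(x) = sin(kx). The wavenumber and grid size are illustrative, not from the paper.

```python
import numpy as np

def solve_helmholtz(k, n, compact=False):
    """Solve u'' + k^2 u = 0 on [0, 1] with u(0), u(1) from sin(kx);
    return the max-norm error of the interior finite-difference solution."""
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    kh2 = (k * h) ** 2
    if compact:
        # Pade-type 4th order: (1 + kh2/12) * (u_{i-1} + u_{i+1}) - (2 - 5*kh2/6) * u_i = 0
        off, diag = 1.0 + kh2 / 12.0, -(2.0 - 5.0 * kh2 / 6.0)
    else:
        # standard pointwise 2nd order: u_{i-1} - (2 - kh2) * u_i + u_{i+1} = 0
        off, diag = 1.0, -(2.0 - kh2)
    A = (np.diag(np.full(n - 1, diag))
         + np.diag(np.full(n - 2, off), 1)
         + np.diag(np.full(n - 2, off), -1))
    rhs = np.zeros(n - 1)
    rhs[0] -= off * np.sin(k * x[0])      # boundary contributions
    rhs[-1] -= off * np.sin(k * x[-1])
    u = np.linalg.solve(A, rhs)
    return np.max(np.abs(u - np.sin(k * x[1:-1])))

k, n = 10.0, 100
err2 = solve_helmholtz(k, n)
err4 = solve_helmholtz(k, n, compact=True)
print(err2, err4)
```

At the same grid the compact stencil reduces the dispersion error by orders of magnitude, which is the motivation for the Padé-based generalizations in the abstract.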
Analytical Methods for Secondary Metabolite Detection.
Taibon, Judith; Strasser, Hermann
2016-01-01
The entomopathogenic fungi Metarhizium brunneum, Beauveria bassiana, and B. brongniartii are widely applied as biological pest control agents in OECD countries. Consequently, their use has to be flanked by a risk management approach, which includes the need to monitor the fate of their relevant toxic metabolites. Regulatory authorities still point to data gaps in the identification and quantification of relevant toxins and secondary metabolites. In this chapter, analytical methods are presented that allow the qualitative and quantitative analysis of the relevant toxic B. brongniartii metabolite oosporein and the three M. brunneum-relevant destruxin (dtx) derivatives dtx A, dtx B, and dtx E. PMID:27565501
Analytical chromatography. Methods, instrumentation and applications
NASA Astrophysics Data System (ADS)
Yashin, Ya I.; Yashin, A. Ya
2006-04-01
The state-of-the-art and the prospects in the development of main methods of analytical chromatography, viz., gas, high performance liquid and ion chromatographic techniques, are characterised. Achievements of the past 10-15 years in the theory and general methodology of chromatography and also in the development of new sorbents, columns and chromatographic instruments are outlined. The use of chromatography in the environmental control, biology, medicine, pharmaceutics, and also for monitoring the quality of foodstuffs and products of chemical, petrochemical and gas industries, etc. is considered.
An Accurate Projector Calibration Method Based on Polynomial Distortion Representation
Liu, Miao; Sun, Changku; Huang, Shujun; Zhang, Zonghua
2015-01-01
In structured light measurement systems or 3D printing systems, the errors caused by optical distortion of a digital projector always affect the precision performance and cannot be ignored. Existing methods to calibrate the projection distortion rely on a calibration plate and photogrammetry, so the calibration performance is largely affected by the quality of the plate and the imaging system. This paper proposes a new projector calibration approach that makes use of photodiodes to directly detect the light emitted from a digital projector. By analyzing the output sequence of the photoelectric module, the pixel coordinates can be accurately obtained by curve fitting. A polynomial distortion representation is employed to reduce the residuals of the traditional distortion representation model. Experimental results and performance evaluation show that the proposed calibration method is able to avoid most of the disadvantages of traditional methods and achieves a higher accuracy. The proposed method is also practically applicable to evaluating the geometric optical performance of other optical projection systems. PMID:26492247
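The "polynomial distortion representation" in the abstract can be illustrated with the common radial model r_d = r·(1 + k1·r² + k2·r⁴), whose coefficients are linear in the unknowns and therefore fit by plain least squares. This is a generic sketch with made-up coefficients, not the paper's photodiode pipeline.

```python
import numpy as np

# Fabricate (ideal, distorted) radial distances with known coefficients
rng = np.random.default_rng(0)
k1_true, k2_true = -0.12, 0.03
r = rng.uniform(0.1, 1.0, 200)                      # ideal radii (normalized)
r_d = r * (1 + k1_true * r**2 + k2_true * r**4)     # "measured" distorted radii

# r_d / r - 1 = k1*r^2 + k2*r^4 is linear in (k1, k2): solve by least squares
A = np.column_stack((r**2, r**4))
k1, k2 = np.linalg.lstsq(A, r_d / r - 1.0, rcond=None)[0]
print(k1, k2)  # recovers -0.12, 0.03 on this noise-free data
```

Adding higher even powers of r extends the polynomial order, which is how such models reduce the residuals of lower-order distortion representations.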
The greening of PCB analytical methods
Erickson, M.D.; Alvarado, J.S.; Aldstadt, J.H.
1995-12-01
Green chemistry incorporates waste minimization, pollution prevention and solvent substitution. The primary focus of green chemistry over the past decade has been within the chemical industry; adoption by routine environmental laboratories has been slow because regulatory standard methods must be followed. A related paradigm, microscale chemistry has gained acceptance in undergraduate teaching laboratories, but has not been broadly applied to routine environmental analytical chemistry. We are developing green and microscale techniques for routine polychlorinated biphenyl (PCB) analyses as an example of the overall potential within the environmental analytical community. Initial work has focused on adaptation of commonly used routine EPA methods for soils and oils. Results of our method development and validation demonstrate that: (1) Solvent substitution can achieve comparable results and eliminate environmentally less-desirable solvents, (2) Microscale extractions can cut the scale of the analysis by at least a factor of ten, (3) We can better match the amount of sample used with the amount needed for the GC determination step, (4) The volume of waste generated can be cut by at least a factor of ten, and (5) Costs are reduced significantly in apparatus, reagent consumption, and labor.
Accurate Evaluation Method of Molecular Binding Affinity from Fluctuation Frequency
NASA Astrophysics Data System (ADS)
Hoshino, Tyuji; Iwamoto, Koji; Ode, Hirotaka; Ohdomari, Iwao
2008-05-01
Exact estimation of the molecular binding affinity is significantly important for drug discovery. The energy calculation is a direct method to compute the strength of the interaction between two molecules. This energetic approach is, however, not accurate enough to evaluate a slight difference in binding affinity when distinguishing a prospective substance from dozens of candidates for medicine. Hence more accurate estimation of drug efficacy in a computer is currently demanded. Previously we proposed a concept of estimating molecular binding affinity, focusing on the fluctuation at an interface between two molecules. The aim of this paper is to demonstrate the compatibility between the proposed computational technique and experimental measurements, through several examples for computer simulations of an association of human immunodeficiency virus type-1 (HIV-1) protease and its inhibitor (an example for a drug-enzyme binding), a complexation of an antigen and its antibody (an example for a protein-protein binding), and a combination of estrogen receptor and its ligand chemicals (an example for a ligand-receptor binding). The proposed affinity estimation has proven to be a promising technique in the advanced stage of the discovery and the design of drugs.
Accurate measurement method for tube's endpoints based on machine vision
NASA Astrophysics Data System (ADS)
Liu, Shaoli; Jin, Peng; Liu, Jianhua; Wang, Xiao; Sun, Peng
2016-08-01
Tubes are used widely in aerospace vehicles, and their accurate assembly directly affects the assembly reliability and the quality of products. It is important to measure the processed tube's endpoints and then correct any geometric errors accordingly. However, the traditional tube inspection method is time-consuming and involves complex operations. Therefore, a new measurement method for a tube's endpoints based on machine vision is proposed. First, reflected light on the tube's surface is removed by using photometric linearization. Then, based on the optimization model for the tube's endpoint measurements and the principle of stereo matching, the global coordinates and the relative distance of the tube's endpoints are obtained. To confirm the feasibility, 11 tubes were processed to remove the reflected light and the endpoint positions of the tubes were measured. The experimental results show that the measurement repeatability accuracy is 0.167 mm, and the absolute accuracy is 0.328 mm. The measurement takes less than 1 min. The proposed method based on machine vision can measure the tube's endpoints without any surface treatment or tools and enables online measurement.
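The stereo-matching step the abstract relies on ends in triangulation: given the pixel coordinates of the same endpoint in two calibrated cameras, its 3D position follows from a linear (DLT) solve. The camera matrices, baseline, and point below are hypothetical, standing in for the paper's calibrated rig.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from its pixel
    coordinates x1, x2 in two cameras with 3x4 projection matrices."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]                     # null vector ~ homogeneous 3D point
    return X[:3] / X[3]

# Two hypothetical calibrated cameras, 0.2 m apart along x
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
P1 = K @ np.hstack((np.eye(3), np.zeros((3, 1))))
P2 = K @ np.hstack((np.eye(3), np.array([[-0.2], [0.0], [0.0]])))

X_true = np.array([0.05, -0.03, 1.5])      # a tube endpoint, 1.5 m away

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

X_est = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
print(np.max(np.abs(X_est - X_true)))      # ~0 for noise-free projections
```

In practice the matched pixels carry noise, which is why the paper wraps this step in an optimization model rather than using the raw DLT result.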
Numerical methods: Analytical benchmarking in transport theory
Ganapol, B.D.
1988-01-01
Numerical methods applied to reactor technology have reached a high degree of maturity. Certainly one- and two-dimensional neutron transport calculations have become routine, with several programs available on personal computer and the most widely used programs adapted to workstation and minicomputer computational environments. With the introduction of massive parallelism and as experience with multitasking increases, even more improvement in the development of transport algorithms can be expected. Benchmarking an algorithm is usually not a very pleasant experience for the code developer. Proper algorithmic verification by benchmarking involves the following considerations: (1) conservation of particles, (2) confirmation of intuitive physical behavior, and (3) reproduction of analytical benchmark results. By using today's computational advantages, new basic numerical methods have been developed that allow a wider class of benchmark problems to be considered.
Introducing GAMER: A fast and accurate method for ray-tracing galaxies using procedural noise
Groeneboom, N. E.; Dahle, H.
2014-03-10
We developed a novel approach for fast and accurate ray-tracing of galaxies using procedural noise fields. Our method allows for efficient and realistic rendering of synthetic galaxy morphologies, where individual components such as the bulge, disk, stars, and dust can be synthesized in different wavelengths. These components follow empirically motivated overall intensity profiles but contain an additional procedural noise component that gives rise to complex natural patterns that mimic interstellar dust and star-forming regions. These patterns produce more realistic-looking galaxy images than analytical expressions alone. The method is fully parallelized and creates accurate high- and low-resolution images that can be used, for example, in codes simulating strong and weak gravitational lensing. In addition to having a user-friendly graphical user interface, the C++ software package GAMER is easy to implement into an existing code.
NASA Astrophysics Data System (ADS)
Colalongo, Luigi; Ghittorelli, Matteo; Torricelli, Fabrizio; Kovács-Vajna, Zsolt Miklos
2015-12-01
Surface-potential-based mathematical models are among the most accurate and physically based compact models of Thin-Film Transistors (TFTs) and, in turn, of Organic Thin-Film Transistors (OTFTs) available today. However, the need for iterative computation of the surface potential limits their computational efficiency and their diffusion in CAD applications. The existing closed-form approximations of the surface potential are based on regional approximations and empirical smoothing functions that may not be accurate enough to model OTFTs, in particular their transconductances and transcapacitances. In this paper we present an accurate and computationally efficient closed-form approximation of the surface potential, based on the Lagrange Reversion Theorem, that can be exploited in advanced surface-potential-based OTFT and TFT device models.
Analytic Method for Computing Instrument Pointing Jitter
NASA Technical Reports Server (NTRS)
Bayard, David
2003-01-01
A new method of calculating the root-mean-square (rms) pointing jitter of a scientific instrument (e.g., a camera, radar antenna, or telescope) is introduced, based on a state-space concept. In comparison with the prior method of calculating the rms pointing jitter, the present method involves significantly less computation. The rms pointing jitter of an instrument (the square root of the jitter variance shown in the figure) is an important physical quantity that impacts the design of the instrument, its actuators, controls, sensory components, and sensor-output-sampling circuitry. Using the Sirlin, San Martin, and Lucke definition of pointing jitter, the prior method of computing the rms pointing jitter involves a frequency-domain integral of a rational polynomial multiplied by a transcendental weighting function, necessitating the use of numerical-integration techniques. In practice, numerical integration complicates the problem of calculating the rms pointing error. In contrast, the state-space method provides exact analytic expressions that can be evaluated without numerical integration.
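The contrast the abstract draws — a frequency-domain integral needing quadrature versus exact state-space algebra — can be illustrated on a toy second-order plant driven by unit-intensity white noise. The state-space route solves the Lyapunov equation A·P + P·Aᵀ + B·Bᵀ = 0 (here via a Kronecker-product rewrite); the frequency-domain route integrates |H(jω)|² numerically. The plant parameters are made up; this is not the Sirlin/San Martin/Lucke jitter definition itself, only the variance computation it rests on.

```python
import numpy as np

# Toy plant: x'' + 2*zeta*w0*x' + w0^2*x = w(t), unit-intensity white noise
w0, zeta = 2.0, 0.5
A = np.array([[0.0, 1.0], [-w0**2, -2.0 * zeta * w0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])                 # output y = position x

# State-space route: steady-state covariance from the Lyapunov equation,
# rewritten as (I kron A + A kron I) vec(P) = -vec(B B^T) -- no quadrature.
n = A.shape[0]
L = np.kron(np.eye(n), A) + np.kron(A, np.eye(n))
P = np.linalg.solve(L, -(B @ B.T).reshape(-1)).reshape(n, n)
var_ss = float(C @ P @ C.T)

# Frequency-domain route: var = (1/2pi) * integral |H(jw)|^2 dw, with
# H(s) = 1/(s^2 + 2*zeta*w0*s + w0^2); trapezoidal quadrature on a long grid.
w = np.linspace(-400.0, 400.0, 200001)
H2 = 1.0 / ((w0**2 - w**2) ** 2 + (2.0 * zeta * w0 * w) ** 2)
dw = w[1] - w[0]
var_fd = (H2.sum() - 0.5 * (H2[0] + H2[-1])) * dw / (2.0 * np.pi)

print(var_ss, var_fd)  # both near the closed form 1/(4*zeta*w0^3) = 0.0625
```

The Lyapunov solve is exact and cheap; the integral converges to the same value only as the grid is refined and the truncation frequency grows, which is the practical complication the abstract points at.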
Analytical methods for optical remote sensing
Spellicy, R.L.
1997-12-31
Optical monitoring systems are very powerful because of their ability to see many compounds simultaneously and to report results in real time. However, these strengths also present unique problems for the analysis of the resulting data and the validation of observed results. Today, many FTIR and UV-DOAS systems are in use. Some of these are manned systems supporting short-term tests, while others are totally unmanned systems that are expected to operate without intervention for weeks or months at a time. Developing analytical methods that support both the diversity of compounds and the diversity of applications is challenging. In this paper, the fundamental concepts of spectral analysis for IR/UV systems are presented. This is followed by examples of specific field data from both short-term measurement programs looking at unique sources and long-term unmanned monitoring systems looking at ambient air.
Recent advances in analytical methods for mycotoxins.
Gilbert, J
1993-01-01
Recent advances in analytical methods are reviewed using the examples of aflatoxins and trichothecene mycotoxins. The most dramatic advances are seen as being those based on immunological principles, utilized for aflatoxins to produce simple screening methods and rapid, specific clean-up. The possibilities of automation using immunoaffinity columns are described. In contrast, for the trichothecenes, immunological methods have not had the same general impact. Post-column derivatization using bromine or iodine to enhance fluorescence for HPLC detection of aflatoxins has become widely employed, and there are similar possibilities for improved HPLC detection of trichothecenes using electrochemical or trichothecene-specific post-column reactions. There have been improvements in the use of more rapid and specific clean-up methods for trichothecenes, whilst HPLC and GC remain equally favoured for the end-determination. More sophisticated instrumental techniques such as mass spectrometry (LC/MS, MS/MS) and supercritical fluid chromatography (SFC/MS) have been demonstrated to have potential for application to mycotoxin analysis, but have not as yet made much general impact.
Pyrroloquinoline quinone: Metabolism and analytical methods
Smidt, C.R.
1990-01-01
Pyrroloquinoline quinone (PQQ) functions as a cofactor for bacterial oxidoreductases. Whether or not PQQ serves as a cofactor in higher plants and animals remains controversial. Nevertheless, strong evidence exists that PQQ has nutritional importance. In highly purified, chemically defined diets, PQQ stimulates animal growth. Further, PQQ deprivation impairs connective tissue maturation, particularly when initiated in utero and throughout perinatal development. This study addresses two main objectives: (1) to elucidate basic aspects of the metabolism of PQQ in animals, and (2) to develop and improve existing analytical methods for PQQ. To study intestinal absorption of PQQ, ten mice were administered [¹⁴C]PQQ per os. PQQ was readily absorbed (62%) in the lower intestine and was excreted by the kidney within 24 hours. Significant amounts of labeled PQQ were retained only by skin and kidney. Three approaches were taken to answer the question of whether PQQ is synthesized by the intestinal microflora of mice. First, dietary antibiotics had no effect on fecal PQQ excretion. Second, no bacterial isolates could be identified that are known to synthesize PQQ. Last, cecal contents were incubated anaerobically with radiolabeled PQQ precursors, with no label appearing in isolated PQQ. Thus, intestinal PQQ synthesis is unlikely. Analysis of PQQ in biological samples is problematic since PQQ forms adducts with nucleophilic compounds and binds to the protein fraction. Existing analytical methods are reviewed and a new approach is introduced that allows for detection of PQQ in animal tissue and foods. PQQ is freed from proteins by ion exchange chromatography, purified on activated silica cartridges, detected by a colorimetric redox-cycling assay, and identified by mass spectrometry. That compounds with the properties of PQQ may be nutritionally important offers interesting areas for future investigation.
New simple method for fast and accurate measurement of volumes
NASA Astrophysics Data System (ADS)
Frattolillo, Antonio
2006-04-01
A new simple method is presented that allows the volume confined inside a generic enclosure to be measured in just a few minutes and with reasonable accuracy (within 1%), regardless of the complexity of its shape. The technique also allows measurement of the volume of any portion of a complex manifold, including, for instance, pipes and pipe fittings, valves, gauge heads, and so on, without disassembling the manifold at all. To this purpose an airtight variable volume is used, whose volume adjustment can be precisely measured; it has an overall capacity larger than that of the unknown volume. The variable volume is initially filled with a suitable test gas (for instance, air) at a known pressure, carefully measured by means of a high-precision capacitive gauge. By opening a valve, the test gas is allowed to expand into the previously evacuated unknown volume. A feedback control loop reacts to the resulting finite pressure drop, contracting the variable volume until the pressure exactly recovers its initial value. The overall reduction of the variable volume at the end of this process gives a direct measurement of the unknown volume and definitively eliminates the problem of dead spaces. The method does not require the test gas to be held rigorously at a constant temperature, a huge simplification compared with the complex arrangements commonly used in metrology (the gas expansion method), which can provide extremely accurate measurements but requires rather expensive equipment and time-consuming procedures, making it impractical in most applications. A simple theoretical analysis of the thermodynamic cycle and the results of experimental tests are described, which demonstrate that, in spite of its simplicity, the method provides a measurement accuracy within 0.5%. The system requires just a few minutes to complete a single measurement, and is ready again immediately at the end of the process. The
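The pressure-balance bookkeeping behind the abstract above is simple enough to sketch. The numbers below are purely illustrative, and an isothermal ideal gas is assumed:

```python
# Illustrative values (made up): kPa and cm^3, isothermal ideal gas.
P0 = 100.0         # initial pressure in the variable volume
V_var = 500.0      # initial setting of the variable volume
V_unknown = 123.4  # the volume we want to measure

# Opening the valve lets the gas expand into the evacuated unknown volume,
# so the pressure drops (Boyle's law: P0*V_var = P1*(V_var + V_unknown)):
P1 = P0 * V_var / (V_var + V_unknown)

# The feedback loop contracts the variable volume by dV until the pressure
# returns to P0.  With the same amount of gas at the same temperature,
#   P0 * V_var = P0 * (V_var - dV + V_unknown), hence dV = V_unknown:
dV = V_var + V_unknown - P0 * V_var / P0
print(round(dV, 1))  # the contraction reads out the unknown volume directly
```

Because the loop restores the initial pressure, the contraction equals the unknown volume exactly, independent of P0 and of the variable volume's starting capacity; this is why dead spaces drop out of the measurement.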
Joint iris boundary detection and fit: a real-time method for accurate pupil tracking.
Barbosa, Marconi; James, Andrew C
2014-08-01
A range of applications in visual science rely on accurate tracking of the human pupil's movement and contraction in response to light. While the literature on independent contour detection and fitting of the iris-pupil boundary is vast, a joint approach, in which the pupil is assumed to have a given geometric shape, has been largely overlooked. We present here a global method for simultaneously finding and fitting an elliptic or circular contour against a dark interior, which produces consistently accurate results even under non-ideal recording conditions, such as reflections near and over the boundary, droopy eyelids, or the sudden formation of tears. The specific form of the proposed optimization problem allows us to write down closed analytic formulae for the gradient and the Hessian of the objective function. Moreover, both the objective function and its derivatives can be cast into vectorized form, making the proposed algorithm significantly faster than its closest relative in the literature. We compare methods in multiple ways, both analytically and numerically, using real iris images as well as idealizations of the iris for which the ground-truth boundary is precisely known. The method proposed here is illustrated under challenging recording conditions and is shown to be robust. PMID:25136477
40 CFR 161.180 - Enforcement analytical method.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 23 2010-07-01 2010-07-01 false Enforcement analytical method. 161.180... DATA REQUIREMENTS FOR REGISTRATION OF ANTIMICROBIAL PESTICIDES Product Chemistry Data Requirements § 161.180 Enforcement analytical method. An analytical method suitable for enforcement purposes must...
7 CFR 98.4 - Analytical methods.
Code of Federal Regulations, 2010 CFR
2010-01-01
... MEALS, READY-TO-EAT (MRE's), MEATS, AND MEAT PRODUCTS MRE's, Meats, and Related Meat Food Products § 98.... Army Individual Protection Directorate's Military Specifications, approved analytical test...
Analytical Methods for Measuring Mercury in Water, Sediment and Biota
Lasorsa, Brenda K.; Gill, Gary A.; Horvat, Milena
2012-06-07
Mercury (Hg) exists in a large number of physical and chemical forms with a wide range of properties. Conversion between these different forms provides the basis for mercury's complex distribution pattern in local and global cycles and for its biological enrichment and effects. Since the 1960s, the growing awareness of environmental mercury pollution has stimulated the development of more accurate, precise and efficient methods of determining mercury and its compounds in a wide variety of matrices. During recent years new analytical techniques have become available that have contributed significantly to the understanding of mercury chemistry in natural systems. In particular, these include ultrasensitive and specific analytical equipment and contamination-free methodologies. These improvements allow determinations of total mercury as well as of the major species of mercury in water, sediments and soils, and biota. Analytical methods are selected depending on the nature of the sample, the concentration levels of mercury, and what species or fraction is to be quantified. The terms “speciation” and “fractionation” in analytical chemistry were addressed by the International Union of Pure and Applied Chemistry (IUPAC), which published guidelines (Templeton et al., 2000) or recommendations for the definition of speciation analysis. "Speciation analysis is the analytical activity of identifying and/or measuring the quantities of one or more individual chemical species in a sample. The chemical species are specific forms of an element defined as to isotopic composition, electronic or oxidation state, and/or complex or molecular structure. The speciation of an element is the distribution of an element amongst defined chemical species in a system. In cases where it is not possible to determine the concentration of the different individual chemical species that sum up the total concentration of an element in a given matrix, meaning it is impossible to
[Pharmacokinetics, metabolism, and analytical methods of ethanol].
Goullé, J-P; Guerbet, M
2015-09-01
Alcohol is a licit substance whose significant consumption is responsible for a major public health problem. Every year, a large number of deaths are related to its consumption. It is also involved in various accidents, on the road, at work, as well as during acts of violence. Ethanol absorption and its fate are detailed. It is mainly absorbed in the small intestine. It accompanies the movements of the water, so it diffuses in all the tissues uniformly with the exception of bones and fat. The major route of ethanol detoxification is located into the liver. Detoxification is a saturable two-step oxidation. During the first stage ethanol is oxidized into acetaldehyde, under the action of alcohol dehydrogenase. During the second stage acetaldehyde is oxidized into acetate. Genetic factors or some drugs are able to disturb the absorption and the metabolism of ethanol. The analytical methods for the quantification of alcohol in man include analysis in exhaled air and in blood. The screening and quantification of ethanol for road safety are performed in exhaled air. In hospitals, blood ethanol determination is routinely performed by enzymatic method, but the rule for forensic samples is gas chromatography.
Quantifying Methane Fluxes Simply and Accurately: The Tracer Dilution Method
NASA Astrophysics Data System (ADS)
Rella, Christopher; Crosson, Eric; Green, Roger; Hater, Gary; Dayton, Dave; Lafleur, Rick; Merrill, Ray; Tan, Sze; Thoma, Eben
2010-05-01
Methane is an important atmospheric constituent with a wide variety of sources, both natural and anthropogenic, including wetlands and other water bodies, permafrost, farms, landfills, and areas where significant petrochemical exploration, drilling, transport, processing, or refining occurs. Despite its importance to the carbon cycle, its significant impact as a greenhouse gas, and its ubiquity in modern life as a source of energy, its sources and sinks in marine and terrestrial ecosystems are only poorly understood. This is largely because high-quality, quantitative measurements of methane fluxes in these different environments have not been available, due both to the lack of robust field-deployable instrumentation and to the fact that most significant sources of methane extend over large areas (from tens to millions of square meters) and are heterogeneous emitters - i.e., the methane is not emitted evenly over the area in question. Quantifying the total methane emissions from such sources becomes a tremendous challenge, compounded by the fact that atmospheric transport from emission point to detection point can be highly variable. In this presentation we describe a robust, accurate, and easy-to-deploy technique called the tracer dilution method, in which a known gas (such as acetylene, nitrous oxide, or sulfur hexafluoride) is released in the vicinity of the methane emissions. Measurements of methane and the tracer gas are then made downwind of the release point, in the so-called far field, where the area of methane emissions cannot be distinguished from a point source (i.e., the two gas plumes are well mixed). In this regime, the methane emission rate is given by the ratio of the two measured concentrations, multiplied by the known tracer emission rate. The challenges associated with atmospheric variability and heterogeneous methane emissions are handled automatically by the transport and dispersion of the tracer. We present detailed methane flux
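The far-field relation described above reduces to a one-line calculation. The function and readings below are hypothetical illustrations; a rigorous application would also correct for the molar masses of the two gases:

```python
def methane_flux(c_ch4, c_tracer, q_tracer):
    """Far-field tracer dilution: once the two plumes are well mixed, the
    methane emission rate is the measured concentration ratio (above
    background) multiplied by the known tracer release rate.
    Molar-mass correction is omitted in this sketch."""
    return (c_ch4 / c_tracer) * q_tracer

# Hypothetical readings: 80 ppb CH4 enhancement against 4 ppb tracer,
# with the tracer released at 0.5 g/s:
print(methane_flux(80.0, 4.0, 0.5))  # -> 10.0
```

Note how the atmospheric transport cancels in the ratio: whatever dilution the wind applies to the tracer plume, it applies equally to the co-located methane plume.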
An accurate and practical method for inference of weak gravitational lensing from galaxy images
NASA Astrophysics Data System (ADS)
Bernstein, Gary M.; Armstrong, Robert; Krawiec, Christina; March, Marisa C.
2016-07-01
We demonstrate highly accurate recovery of weak gravitational lensing shear using an implementation of the Bayesian Fourier Domain (BFD) method proposed by Bernstein & Armstrong, extended to correct for selection biases. The BFD formalism is rigorously correct for Nyquist-sampled, background-limited, uncrowded images of background galaxies. BFD does not assign shapes to galaxies, instead compressing the pixel data D into a vector of moments M, such that we have an analytic expression for the probability P(M|g) of obtaining the observations with gravitational lensing distortion g along the line of sight. We implement an algorithm for conducting BFD's integrations over the population of unlensed source galaxies which measures ≈10 galaxies s(-1) core(-1) with good scaling properties. Initial tests of this code on ≈10(9) simulated lensed galaxy images recover the simulated shear to a fractional accuracy of m = (2.1 ± 0.4) × 10(-3), substantially more accurate than has been demonstrated previously for any generally applicable method. Deep sky exposures generate a sufficiently accurate approximation to the noiseless, unlensed galaxy population distribution assumed as input to BFD. Potential extensions of the method include simultaneous measurement of magnification and shear; multiple-exposure, multiband observations; and joint inference of photometric redshifts and lensing tomography.
40 CFR 425.03 - Sulfide analytical methods and applicability.
Code of Federal Regulations, 2014 CFR
2014-07-01
... Provisions § 425.03 Sulfide analytical methods and applicability. (a) The potassium ferricyanide titration... the potassium ferricyanide titration method for the determination of sulfide in wastewaters...
40 CFR 425.03 - Sulfide analytical methods and applicability.
Code of Federal Regulations, 2012 CFR
2012-07-01
... Provisions § 425.03 Sulfide analytical methods and applicability. (a) The potassium ferricyanide titration... the potassium ferricyanide titration method for the determination of sulfide in wastewaters...
40 CFR 425.03 - Sulfide analytical methods and applicability.
Code of Federal Regulations, 2013 CFR
2013-07-01
... Provisions § 425.03 Sulfide analytical methods and applicability. (a) The potassium ferricyanide titration... the potassium ferricyanide titration method for the determination of sulfide in wastewaters...
Green analytical method development for statin analysis.
Assassi, Amira Louiza; Roy, Claude-Eric; Perovitch, Philippe; Auzerie, Jack; Hamon, Tiphaine; Gaudin, Karen
2015-02-01
A green analytical chemistry method was developed for pravastatin, fluvastatin and atorvastatin analysis. An HPLC/DAD method using an ethanol-based mobile phase was studied on octadecyl-grafted silica columns with various grafting and related column parameters, such as particle size, core-shell, and monolithic formats. Retention, efficiency and detector linearity were optimized. Even for columns with particle sizes under 2 μm, the benefit of maintaining efficiency over a large range of flow rates was not obtained with the ethanol-based mobile phase, in contrast to an acetonitrile-based one. Therefore, the strategy of shortening analysis by increasing the flow rate reduced efficiency with the ethanol-based mobile phase. An ODS-AQ YMC column, 50 mm × 4.6 mm, 3 μm, was selected, which showed the best compromise between analysis time, statin separation, and efficiency. HPLC conditions were 1 mL/min, ethanol/formic acid (pH 2.5, 25 mM) (50:50, v/v), thermostated at 40°C. To reduce solvent consumption during sample preparation, 0.5 mg/mL of each statin was found to be the highest concentration that respected detector linearity. These conditions were validated for each statin for content determination in highly concentrated hydro-alcoholic solutions. Solubility higher than 100 mg/mL was found for pravastatin and fluvastatin, whereas for atorvastatin calcium salt the maximum concentration was 2 mg/mL in hydro-alcoholic binary mixtures between 35% and 55% ethanol in water. Using atorvastatin instead of its calcium salt improved solubility. Highly concentrated solutions of statins offer a potential fluid for per Buccal Per-Mucous(®) administration, with the advantages of rapid and easy passage of drugs. PMID:25582487
40 CFR 136.6 - Method modifications and analytical requirements.
Code of Federal Regulations, 2010 CFR
2010-07-01
... person or laboratory using a test procedure (analytical method) in this Part. (2) Chemistry of the method... (analytical method) provided that the chemistry of the method or the determinative technique is not changed... prevent efficient recovery of organic pollutants and prevent the method from meeting QC requirements,...
Metalaxyl: persistence, degradation, metabolism, and analytical methods.
Sukul, P; Spiteller, M
2000-01-01
Metalaxyl is a systemic fungicide used to control plant diseases caused by Oomycete fungi. Its formulations include granules, wettable powders, dusts, and emulsifiable concentrates. Application may be by foliar or soil incorporation, surface spraying (broadcast or band), drenching, and seed treatment. Metalaxyl registered products either contain metalaxyl as the sole active ingredient or are combined with other active ingredients (e.g., captan, mancozeb, copper compounds, carboxin). Due to its broad-spectrum activity, metalaxyl is used worldwide on a variety of fruit and vegetable crops. Its effectiveness results from inhibition of uridine incorporation into RNA and specific inhibition of RNA polymerase-1. Metalaxyl has both curative and systemic properties. Its mammalian toxicity is classified as EPA toxicity class III, and it is also relatively non-toxic to most nontarget arthropod and vertebrate species. Adequate analytical methods based on TLC, GLC, HPLC, MS, and other techniques are available for identification and determination of metalaxyl residues and its metabolites. Available laboratory and field studies indicate that metalaxyl is stable to hydrolysis at normal environmental pH values. It is also photolytically stable in water and soil when exposed to natural sunlight. Its tolerance to a wide range of pH, light, and temperature leads to its continued use in agriculture. Metalaxyl is photodecomposed in UV light, and photoproducts are formed by rearrangement of the N-acyl group to the aromatic ring, demethoxylation, N-deacylation, and elimination of the methoxycarbonyl group from the molecule. Photosensitizers such as humic acid, TiO2, H2O2, acetone, and riboflavin accelerate its photodecomposition. Information is provided on the fate of metalaxyl in plants, soil, water, and animals. Major metabolic routes include hydrolysis of the methyl ester and methyl ether oxidation of the ring-methyl groups; the latter are precursors of conjugates in plants and animals.
Method and apparatus for accurately manipulating an object during microelectrophoresis
Parvin, B.A.; Maestre, M.F.; Fish, R.H.; Johnston, W.E.
1997-09-23
An apparatus using electrophoresis provides accurate manipulation of an object on a microscope stage for further manipulations and reactions. The present invention also provides an inexpensive and easily accessible means to move an object without damage to the object. A plurality of electrodes are coupled to the stage in an array whereby the electrode array allows for distinct manipulations of the electric field for accurate manipulations of the object. There is an electrode array control coupled to the plurality of electrodes for manipulating the electric field. In an alternative embodiment, a chamber is provided on the stage to hold the object. The plurality of electrodes are positioned in the chamber, and the chamber is filled with fluid. The system can be automated using visual servoing, which manipulates the control parameters, i.e., x, y stage, applying the field, etc., after extracting the significant features directly from image data. Visual servoing includes an imaging device and computer system to determine the location of the object. A second stage having a plurality of tubes positioned on top of the second stage, can be accurately positioned by visual servoing so that one end of one of the plurality of tubes surrounds at least part of the object on the first stage. 11 figs.
Method and apparatus for accurately manipulating an object during microelectrophoresis
Parvin, Bahram A.; Maestre, Marcos F.; Fish, Richard H.; Johnston, William E.
1997-01-01
An apparatus using electrophoresis provides accurate manipulation of an object on a microscope stage for further manipulations and reactions. The present invention also provides an inexpensive and easily accessible means to move an object without damage to the object. A plurality of electrodes are coupled to the stage in an array whereby the electrode array allows for distinct manipulations of the electric field for accurate manipulations of the object. There is an electrode array control coupled to the plurality of electrodes for manipulating the electric field. In an alternative embodiment, a chamber is provided on the stage to hold the object. The plurality of electrodes are positioned in the chamber, and the chamber is filled with fluid. The system can be automated using visual servoing, which manipulates the control parameters, i.e., x, y stage, applying the field, etc., after extracting the significant features directly from image data. Visual servoing includes an imaging device and computer system to determine the location of the object. A second stage having a plurality of tubes positioned on top of the second stage, can be accurately positioned by visual servoing so that one end of one of the plurality of tubes surrounds at least part of the object on the first stage.
A more accurate method for measurement of tuberculocidal activity of disinfectants.
Ascenzi, J M; Ezzell, R J; Wendt, T M
1987-01-01
The current Association of Official Analytical Chemists method for testing tuberculocidal activity of disinfectants has been shown to be inaccurate and to have a high degree of variability. An alternate test method is proposed which is more accurate, more precise, and quantitative. A suspension of Mycobacterium bovis BCG was exposed to a variety of disinfectant chemicals and a kill curve was constructed from quantitative data. Data are presented that show the discrepancy between current claims, determined by the Association of Official Analytical Chemists method, of selected commercially available products and claims generated by the proposed method. The effects of different recovery media were examined. The data indicated that Mycobacteria 7H11 and Middlebrook 7H10 agars were equal in recovery of the different chemically treated cells, with Lowenstein-Jensen agar having approximately the same recovery rate but requiring incubation for up to 3 weeks longer for countability. The kill curves generated for several different chemicals were reproducible, as indicated by the standard deviations of the slopes and intercepts of the linear regression curves. PMID:3314707
Accurate analytical modelling of cosmic ray induced failure rates of power semiconductor devices
NASA Astrophysics Data System (ADS)
Bauer, Friedhelm D.
2009-06-01
A new, simple and efficient approach is presented to conduct estimations of the cosmic ray induced failure rate for high voltage silicon power devices early in the design phase. This allows combining common design issues such as device losses and safe operating area with the constraints imposed by reliability, resulting in a better and overall more efficient design methodology. Starting from the experimental and theoretical background brought forth a few years ago [Kabza H et al. Cosmic radiation as a cause for power device failure and possible countermeasures. In: Proceedings of the sixth international symposium on power semiconductor devices and IC's, Davos, Switzerland; 1994. p. 9-12, Zeller HR. Cosmic ray induced breakdown in high voltage semiconductor devices, microscopic model and possible countermeasures. In: Proceedings of the sixth international symposium on power semiconductor devices and IC's, Davos, Switzerland; 1994. p. 339-40, and Matsuda H et al. Analysis of GTO failure mode during d.c. blocking. In: Proceedings of the sixth international symposium on power semiconductor devices and IC's, Davos, Switzerland; 1994. p. 221-5], an exact solution of the failure rate integral is derived and presented in a form which lends itself to be combined with the results available from commercial semiconductor simulation tools. Hence, failure rate integrals can be obtained with relative ease for realistic two- and even three-dimensional semiconductor geometries. Two case studies relating to IGBT cell design and planar junction termination layout demonstrate the purpose of the method.
Statistically Qualified Neuro-Analytic system and Method for Process Monitoring
Vilim, Richard B.; Garcia, Humberto E.; Chen, Frederick W.
1998-11-04
An apparatus and method for monitoring a process involves development and application of a statistically qualified neuro-analytic (SQNA) model to accurately and reliably identify process change. The development of the SQNA model is accomplished in two steps: deterministic model adaptation and stochastic model adaptation. Deterministic model adaptation involves formulating an analytic model of the process representing known process characteristics, augmenting the analytic model with a neural network that captures unknown process characteristics, and training the resulting neuro-analytic model by adjusting the neural network weights according to a unique scaled equation error minimization technique. Stochastic model adaptation involves qualifying any remaining uncertainty in the trained neuro-analytic model by formulating a likelihood function, given an error propagation equation, for computing the probability that the neuro-analytic model generates the measured process output. Preferably, the developed SQNA model is validated using known sequential probability ratio tests and applied to the process as an on-line monitoring system.
40 CFR 425.03 - Sulfide analytical methods and applicability.
Code of Federal Regulations, 2011 CFR
2011-07-01
... § 425.03 Sulfide analytical methods and applicability. (a) The potassium ferricyanide titration method... ferricyanide titration method for the determination of sulfide in wastewaters discharged by plants operating...
40 CFR 425.03 - Sulfide analytical methods and applicability.
Code of Federal Regulations, 2010 CFR
2010-07-01
... § 425.03 Sulfide analytical methods and applicability. (a) The potassium ferricyanide titration method... ferricyanide titration method for the determination of sulfide in wastewaters discharged by plants operating...
Method accurately measures mean particle diameters of monodisperse polystyrene latexes
NASA Technical Reports Server (NTRS)
Kubitschek, H. E.
1967-01-01
Photomicrographic method determines mean particle diameters of monodisperse polystyrene latexes. Many diameters are measured simultaneously by measuring row lengths of particles in a triangular array at a glass-oil interface. The method provides size standards for electronic particle counters and prevents distortions, softening, and flattening.
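The row-length idea above lends itself to a one-line worked example: adjacent monodisperse spheres in a close-packed row touch, so one length measurement averages many diameters at once. The numbers below are hypothetical:

```python
def mean_diameter(row_length_um, n_particles):
    """In a close-packed row of monodisperse spheres, the row length equals
    the sum of the diameters, so dividing by the particle count yields the
    mean diameter while averaging out per-particle measurement error."""
    return row_length_um / n_particles

# Hypothetical row of 25 latex spheres spanning 22.9 micrometres:
print(mean_diameter(22.9, 25))  # mean diameter in micrometres
```

Measuring one 25-particle row is equivalent to 25 single-particle measurements, which is why the photomicrographic method gains precision so cheaply.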
A new approach to constructing efficient stiffly accurate EPIRK methods
NASA Astrophysics Data System (ADS)
Rainwater, G.; Tokman, M.
2016-10-01
The structural flexibility of the exponential propagation iterative methods of Runge-Kutta type (EPIRK) enables construction of particularly efficient exponential time integrators. While the EPIRK methods have been shown to perform well on stiff problems, all of the schemes proposed up to now have been derived using classical order conditions. In this paper we extend the stiff order conditions and the convergence theory developed for the exponential Rosenbrock methods to the EPIRK integrators. We derive stiff order conditions for the EPIRK methods and develop algorithms to solve them to obtain specific schemes. Moreover, we propose a new approach to constructing particularly efficient EPIRK integrators that are optimized to work with an adaptive Krylov algorithm. We use a set of numerical examples to illustrate the computational advantages that the newly constructed EPIRK methods offer compared to previously proposed exponential integrators.
Panuwet, Parinya; Hunter, Ronald E.; D’Souza, Priya E.; Chen, Xianyu; Radford, Samantha A.; Cohen, Jordan R.; Marder, M. Elizabeth; Kartavenka, Kostya; Ryan, P. Barry; Barr, Dana Boyd
2015-01-01
The ability to quantify levels of target analytes in biological samples accurately and precisely in biomonitoring involves the use of highly sensitive and selective instrumentation, such as tandem mass spectrometers, and a thorough understanding of highly variable matrix effects. Typically, matrix effects are caused by co-eluting matrix components that alter the ionization of target analytes as well as their chromatographic response, leading to reduced or increased sensitivity of the analysis. Thus, before the desired accuracy and precision standards of laboratory data are achieved, these effects must be characterized and controlled. Here we present our review and observations of matrix effects encountered during the validation and implementation of tandem mass spectrometry-based analytical methods. We also provide systematic, comprehensive laboratory strategies needed to control challenges posed by matrix effects in order to ensure delivery of the most accurate data for biomonitoring studies assessing exposure to environmental toxicants. PMID:25562585
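As a sketch of how such matrix effects are often quantified in practice (a common post-extraction-spike comparison; this is an assumption for illustration, not necessarily the metric used by these authors), the ionization change can be expressed as a percentage:

```python
def matrix_effect_pct(area_matrix_spike, area_neat_standard):
    """Common matrix-effect metric (assumed here for illustration): the peak
    area of an analyte spiked into blank matrix extract after extraction,
    relative to its peak area in neat solvent.  Values below 100% indicate
    ion suppression; values above 100% indicate ion enhancement."""
    return 100.0 * area_matrix_spike / area_neat_standard

# Hypothetical peak areas showing 25% ion suppression:
print(matrix_effect_pct(7500.0, 10000.0))  # -> 75.0
```

Characterizing this ratio per analyte and per matrix is one concrete way to decide whether matrix-matched calibration or isotope-labeled internal standards are needed.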
Construction of higher order accurate vortex and particle methods
NASA Technical Reports Server (NTRS)
Nicolaides, R. A.
1986-01-01
The standard point vortex method has recently been shown to be of high order of accuracy for problems on the whole plane, when using a uniform initial subdivision for assigning the vorticity to the points. If obstacles are present in the flow, this high order deteriorates to first or second order. New vortex methods are introduced which are of arbitrary accuracy (under regularity assumptions) regardless of the presence of bodies and the uniformity of the initial subdivision.
A new cation-exchange method for accurate field speciation of hexavalent chromium
Ball, J.W.; McCleskey, R.B.
2003-01-01
A new method for field speciation of Cr(VI) has been developed to meet present stringent regulatory standards and to overcome the limitations of existing methods. The method consists of passing a water sample through strong acid cation-exchange resin at the field site, where Cr(III) is retained while Cr(VI) passes into the effluent and is preserved for later determination. The method is simple, rapid, portable, and accurate, and makes use of readily available, inexpensive materials. Cr(VI) concentrations are determined later in the laboratory using any elemental analysis instrument sufficiently sensitive to measure the Cr(VI) concentrations of interest. The new method allows measurement of Cr(VI) concentrations as low as 0.05 μg L(-1), storage of samples for at least several weeks prior to analysis, and use of readily available analytical instrumentation. Cr(VI) can be separated from Cr(III) between pH 2 and 11 at Cr(III)/Cr(VI) concentration ratios as high as 1000. The new method has demonstrated excellent comparability with two commonly used methods, the Hach Company direct colorimetric method and USEPA method 218.6. The new method is superior to the Hach direct colorimetric method owing to its relative sensitivity and simplicity. The new method is superior to USEPA method 218.6 in the presence of Fe(II) concentrations up to 1 mg L(-1) and Fe(III) concentrations up to 10 mg L(-1). Time stability of preserved samples is a significant advantage over the 24-h time constraint specified for USEPA method 218.6.
40 CFR 158.355 - Enforcement analytical method.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 24 2014-07-01 2014-07-01 false Enforcement analytical method. 158.355 Section 158.355 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) PESTICIDE PROGRAMS DATA REQUIREMENTS FOR PESTICIDES Product Chemistry § 158.355 Enforcement analytical method....
40 CFR 158.355 - Enforcement analytical method.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 23 2010-07-01 2010-07-01 false Enforcement analytical method. 158.355 Section 158.355 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) PESTICIDE PROGRAMS DATA REQUIREMENTS FOR PESTICIDES Product Chemistry § 158.355 Enforcement analytical method....
40 CFR 158.355 - Enforcement analytical method.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 25 2013-07-01 2013-07-01 false Enforcement analytical method. 158.355 Section 158.355 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) PESTICIDE PROGRAMS DATA REQUIREMENTS FOR PESTICIDES Product Chemistry § 158.355 Enforcement analytical method....
40 CFR 158.355 - Enforcement analytical method.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 25 2012-07-01 2012-07-01 false Enforcement analytical method. 158.355 Section 158.355 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) PESTICIDE PROGRAMS DATA REQUIREMENTS FOR PESTICIDES Product Chemistry § 158.355 Enforcement analytical method....
40 CFR 158.355 - Enforcement analytical method.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 24 2011-07-01 2011-07-01 false Enforcement analytical method. 158.355 Section 158.355 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) PESTICIDE PROGRAMS DATA REQUIREMENTS FOR PESTICIDES Product Chemistry § 158.355 Enforcement analytical method....
The chain collocation method: A spectrally accurate calculus of forms
NASA Astrophysics Data System (ADS)
Rufat, Dzhelil; Mason, Gemma; Mullen, Patrick; Desbrun, Mathieu
2014-01-01
Preserving in the discrete realm the underlying geometric, topological, and algebraic structures at stake in partial differential equations has proven to be a fruitful guiding principle for numerical methods in a variety of fields such as elasticity, electromagnetism, or fluid mechanics. However, structure-preserving methods have traditionally used spaces of piecewise polynomial basis functions for differential forms. Yet, in many problems where solutions are smoothly varying in space, a spectral numerical treatment is called for. In an effort to provide structure-preserving numerical tools with spectral accuracy on logically rectangular grids over periodic or bounded domains, we present a spectral extension of the discrete exterior calculus (DEC), with resulting computational tools extending well-known collocation-based spectral methods. Its efficient implementation using fast Fourier transforms is provided as well.
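The building block of the collocation-based spectral methods this record extends is differentiation in Fourier space. A minimal sketch (not from the paper; grid size and test function are arbitrary) of spectrally accurate differentiation on a periodic grid via the FFT:

```python
import numpy as np

# Spectral differentiation on a periodic grid: transform, multiply each mode
# by i*k, transform back. For smooth functions the error decays faster than
# any power of the grid spacing ("spectral accuracy").
N = 64                                    # number of grid points
x = 2 * np.pi * np.arange(N) / N          # periodic grid on [0, 2*pi)
f = np.sin(3 * x)                         # smooth test function

ik = 1j * np.fft.fftfreq(N, d=1.0 / N)    # i * (integer wavenumbers)
df = np.fft.ifft(ik * np.fft.fft(f)).real # spectral derivative

exact = 3 * np.cos(3 * x)
err = np.max(np.abs(df - exact))          # near machine precision
```

With only 64 points the derivative of sin(3x) is recovered to roughly machine epsilon, which is the accuracy regime the chain collocation method targets.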
A highly accurate method for determination of dissolved oxygen: gravimetric Winkler method.
Helm, Irja; Jalukse, Lauri; Leito, Ivo
2012-09-01
A high-accuracy Winkler titration method has been developed for the determination of dissolved oxygen concentration. Careful analysis of the uncertainty sources relevant to the Winkler method was carried out, and the method was optimized to minimize all uncertainty sources as far as practical. The most important improvements were: gravimetric measurement of all solutions, pre-titration to minimize the effect of iodine volatilization, accurate amperometric end-point detection, and careful accounting for dissolved oxygen in the reagents. As a result, the developed method is possibly the most accurate method available for the determination of dissolved oxygen. Depending on measurement conditions and on the dissolved oxygen concentration, the combined standard uncertainties of the method are in the range of 0.012-0.018 mg dm(-3), corresponding to k=2 expanded uncertainties in the range of 0.023-0.035 mg dm(-3) (0.27-0.38%, relative). This development enables more accurate calibration of electrochemical and optical dissolved oxygen sensors for routine analysis than has been possible before.
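The uncertainty budget described above follows the standard GUM pattern: independent standard-uncertainty components are combined in quadrature and expanded with a coverage factor k = 2. A sketch with hypothetical component values (the paper does not list its individual contributions here):

```python
import math

# Hypothetical standard-uncertainty components u_i, in mg dm^-3
components = [0.008, 0.006, 0.005]

# Combined standard uncertainty: root-sum-of-squares of independent components
u_c = math.sqrt(sum(u**2 for u in components))

# Expanded uncertainty at coverage factor k = 2 (~95% coverage)
U = 2 * u_c
```

With these illustrative inputs u_c is about 0.011 mg dm(-3) and U about 0.022 mg dm(-3), i.e. the same order as the 0.012-0.018 / 0.023-0.035 mg dm(-3) ranges reported.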
An accurate and simple method for measurement of paw edema.
Fereidoni, M; Ahmadiani, A; Semnanian, S; Javan, M
2000-01-01
Several methods for measuring inflammation are available that rely on parameters that change during inflammation. The most commonly used methods estimate the volume of edema formed. In this study, we present a novel method for measuring the volume of pathologically or artificially induced edema. In this model, a liquid column is placed on a balance. When an object is immersed, the liquid applies a force F that tends to expel it. Physically, F is the weight (W) of the volume of liquid displaced by the part of the object inserted into the liquid. A balance is used to measure this force (F = W). Therefore, the partial or entire volume of any object, for example the inflamed hind paw of a rat, can be calculated from the specific gravity of the immersion liquid: at equilibrium, volume (V) = mass/specific gravity. The extent of edema at time t (measured as V) will be V(t) - V(0). This method is easy to use, and the materials are of low cost and readily available. It is important that the rat paw (or any object whose volume is being measured) is kept from contacting the wall of the column containing the fluid while the value on the balance is read.
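The balance-based computation above reduces to two divisions and a subtraction. A sketch with hypothetical balance readings (the function name and numbers are illustrative, not from the paper):

```python
def immersed_volume(balance_reading_g: float, specific_gravity: float) -> float:
    """Volume (mL) of the immersed part: apparent mass / specific gravity."""
    return balance_reading_g / specific_gravity

sg = 1.00                       # e.g. water near room temperature
v0 = immersed_volume(1.20, sg)  # hypothetical baseline paw reading, g
vt = immersed_volume(1.65, sg)  # hypothetical reading at time t, g

edema_ml = vt - v0              # extent of edema: V(t) - V(0)
```

Here the paw volume grew by 0.45 mL, which would be reported as the edema at time t.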
Efficient and accurate numerical methods for the Klein-Gordon-Schroedinger equations
Bao, Weizhu . E-mail: bao@math.nus.edu.sg; Yang, Li . E-mail: yangli@nus.edu.sg
2007-08-10
In this paper, we present efficient, unconditionally stable and accurate numerical methods for approximations of the Klein-Gordon-Schroedinger (KGS) equations with/without damping terms. The key features of our methods are based on: (i) the application of a time-splitting spectral discretization for a Schroedinger-type equation in KGS (ii) the utilization of Fourier pseudospectral discretization for spatial derivatives in the Klein-Gordon equation in KGS (iii) the adoption of solving the ordinary differential equations (ODEs) in phase space analytically under appropriately chosen transmission conditions between different time intervals or applying Crank-Nicolson/leap-frog for linear/nonlinear terms for time derivatives. The numerical methods are either explicit or implicit but can be solved explicitly, unconditionally stable, and of spectral accuracy in space and second-order accuracy in time. Moreover, they are time reversible and time transverse invariant when there are no damping terms in KGS, conserve (or keep the same decay rate of) the wave energy as that in KGS without (or with a linear) damping term, keep the same dynamics of the mean value of the meson field, and give exact results for the plane-wave solution. Extensive numerical tests are presented to confirm the above properties of our numerical methods for KGS. Finally, the methods are applied to study solitary-wave collisions in one dimension (1D), as well as dynamics of a 2D problem in KGS.
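The time-splitting spectral idea in feature (i) can be sketched for a single Schroedinger-type equation; this is not the paper's full KGS solver, and the grid, potential, and time step below are arbitrary. One Strang splitting step for i u_t = -u_xx/2 + V(x) u on a periodic grid treats the potential part exactly in real space and the kinetic part exactly in Fourier space:

```python
import numpy as np

N, L, dt = 128, 2 * np.pi, 0.01
x = L * np.arange(N) / N
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)  # angular wavenumbers
V = np.cos(x)                               # example potential

u = np.exp(1j * x)                          # initial plane wave

def strang_step(u):
    u = np.exp(-0.5j * dt * V) * u          # half step: potential (exact)
    u = np.fft.ifft(np.exp(-0.5j * dt * k**2) * np.fft.fft(u))  # kinetic (exact)
    u = np.exp(-0.5j * dt * V) * u          # half step: potential (exact)
    return u

u = strang_step(u)
norm = np.linalg.norm(u)  # each substep is unitary, so the L2 norm is conserved
```

Because every substep is a multiplication by a unit-modulus phase (in real or Fourier space), the scheme is unconditionally stable and norm-conserving, mirroring the stability claims in the abstract.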
Pendant bubble method for an accurate characterization of superhydrophobic surfaces.
Ling, William Yeong Liang; Ng, Tuck Wah; Neild, Adrian
2011-12-01
The commonly used sessile drop method for measuring contact angles and surface tension suffers from errors on superhydrophobic surfaces. These errors arise from unavoidable experimental error in determining the vertical location of the liquid-solid-vapor interface, due to a camera's finite pixel resolution, thereby necessitating the development and application of subpixel algorithms. We demonstrate here the advantage of a pendant bubble in decreasing the resulting error prior to the application of additional algorithms. For sessile drops to attain an equivalent accuracy, the pixel count would have to be increased by 2 orders of magnitude. PMID:22017500
An analytic reconstruction method for PET based on cubic splines
NASA Astrophysics Data System (ADS)
Kastis, George A.; Kyriakopoulou, Dimitra; Fokas, Athanasios S.
2014-03-01
PET imaging is an important nuclear medicine modality that measures in vivo distribution of imaging agents labeled with positron-emitting radionuclides. Image reconstruction is an essential component in tomographic medical imaging. In this study, we present the mathematical formulation and an improved numerical implementation of an analytic, 2D, reconstruction method called SRT, Spline Reconstruction Technique. This technique is based on the numerical evaluation of the Hilbert transform of the sinogram via an approximation in terms of 'custom made' cubic splines. It also imposes sinogram thresholding which restricts reconstruction only within object pixels. Furthermore, by utilizing certain symmetries it achieves a reconstruction time similar to that of FBP. We have implemented SRT in the software library called STIR and have evaluated this method using simulated PET data. We present reconstructed images from several phantoms. Sinograms have been generated at various Poisson noise levels and 20 realizations of noise have been created at each level. In addition to visual comparisons of the reconstructed images, the contrast has been determined as a function of noise level. Further analysis includes the creation of line profiles, when necessary, to determine resolution. Numerical simulations suggest that the SRT algorithm produces fast and accurate reconstructions at realistic noise levels. The contrast is over 95% in all phantoms examined and is independent of noise level.
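A contrast figure of merit like the one reported ("over 95%") is typically a normalized difference between a hot region of interest and the background. A hedged sketch, with a hypothetical synthetic phantom and a common (but not necessarily the paper's exact) contrast definition:

```python
import numpy as np

def contrast(image, roi_mask, bg_mask):
    """Percent contrast between a hot ROI and the background."""
    roi_mean = image[roi_mask].mean()
    bg_mean = image[bg_mask].mean()
    return 100.0 * (roi_mean - bg_mean) / bg_mean

# Tiny synthetic "phantom": uniform background of 1.0 with a 2x2 hot region
img = np.ones((8, 8))
img[2:4, 2:4] = 2.0
roi = np.zeros_like(img, dtype=bool)
roi[2:4, 2:4] = True
bg = ~roi

c = contrast(img, roi, bg)  # 100% for this noiseless synthetic example
```

Evaluating such a metric over many noise realizations at each noise level is what allows contrast to be reported as a function of noise, as in the study.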
Thiram: degradation, applications and analytical methods.
Sharma, Vaneet Kumar; Aulakh, J S; Malik, Ashok Kumar
2003-10-01
In this review a brief introduction to the pesticide thiram (tetramethylthiuram disulfide; TMTD) has been given along with its applications. All the important methods available are systematically arranged and are listed under various techniques. Some of these methods have been applied for the determination of thiram in commercial formulations and synthetic mixtures, and in grains, vegetables, and fruits. A comparison of the different methods is the salient feature of this review.
Learner Language Analytic Methods and Pedagogical Implications
ERIC Educational Resources Information Center
Dyson, Bronwen
2010-01-01
Methods for analysing interlanguage have long aimed to capture learner language in its own right. By surveying the cognitive methods of Error Analysis, Obligatory Occasion Analysis and Frequency Analysis, this paper traces reformulations to attain this goal. The paper then focuses on Emergence Analysis, which fine-tunes learner language analysis…
Fast Analytical Methods for Macroscopic Electrostatic Models in Biomolecular Simulations*
Xu, Zhenli; Cai, Wei
2013-01-01
We review recent developments of fast analytical methods for macroscopic electrostatic calculations in biological applications, including the Poisson–Boltzmann (PB) and the generalized Born models for electrostatic solvation energy. The focus is on analytical approaches for hybrid solvation models, especially the image charge method for a spherical cavity, and also the generalized Born theory as an approximation to the PB model. This review places much emphasis on the mathematical details behind these methods. PMID:23745011
7 CFR 98.4 - Analytical methods.
Code of Federal Regulations, 2012 CFR
2012-01-01
... MEALS, READY-TO-EAT (MRE's), MEATS, AND MEAT PRODUCTS MRE's, Meats, and Related Meat Food Products § 98... perform analyses of meat, meat food products and MRE's are listed as follows: (1) Official Methods...
7 CFR 98.4 - Analytical methods.
Code of Federal Regulations, 2014 CFR
2014-01-01
... MEALS, READY-TO-EAT (MREs), MEATS, AND MEAT PRODUCTS MREs, Meats, and Related Meat Food Products § 98.4... of meat, meat food products and MREs are listed as follows: (1) Official Methods of Analysis of...
7 CFR 98.4 - Analytical methods.
Code of Federal Regulations, 2013 CFR
2013-01-01
... MEALS, READY-TO-EAT (MRE's), MEATS, AND MEAT PRODUCTS MRE's, Meats, and Related Meat Food Products § 98... perform analyses of meat, meat food products and MRE's are listed as follows: (1) Official Methods...
7 CFR 98.4 - Analytical methods.
Code of Federal Regulations, 2011 CFR
2011-01-01
... MEALS, READY-TO-EAT (MRE's), MEATS, AND MEAT PRODUCTS MRE's, Meats, and Related Meat Food Products § 98... perform analyses of meat, meat food products and MRE's are listed as follows: (1) Official Methods...
7 CFR 91.23 - Analytical methods.
Code of Federal Regulations, 2011 CFR
2011-01-01
... are used. The manuals of standard methods most often used by the Science and Technology laboratories... Practices of the American Oil Chemists' Society (AOCS), American Oil Chemists' Society, P.O. Box 3489,...
7 CFR 91.23 - Analytical methods.
Code of Federal Regulations, 2012 CFR
2012-01-01
... are used. The manuals of standard methods most often used by the Science and Technology laboratories... Practices of the American Oil Chemists' Society (AOCS), American Oil Chemists' Society, P.O. Box 3489,...
7 CFR 91.23 - Analytical methods.
Code of Federal Regulations, 2010 CFR
2010-01-01
... Examination of Foods, Carl Vanderzant and Don Splittstoesser (Editors), American Public Health Association.... Environmental Protection Agency (EPA) Chemical Exposure Research Branch, EPA Office of Research and Development... Methods for the Examination of Dairy Products, American Public Health Association, 1015 Fifteenth...
Keeping the edge: an accurate numerical method to solve the stream power law
NASA Astrophysics Data System (ADS)
Campforts, B.; Govers, G.
2015-12-01
Bedrock rivers set the base level of surrounding hill slopes and mediate the dynamic interplay between mountain building and denudation. The propensity of rivers to preserve pulses of increased tectonic uplift also allows reconstruction of long term uplift histories from longitudinal river profiles. An accurate reconstruction of river profile development at different timescales is therefore essential. Long term river development is typically modeled by means of the stream power law. Under specific conditions this equation can be solved analytically, but numerical Finite Difference Methods (FDMs) are most frequently used. Nonetheless, FDMs suffer from numerical smearing, especially at knickpoint zones, which are key to understanding transient landscapes. Here, we solve the stream power law by means of a Finite Volume Method (FVM) which is Total Variation Diminishing (TVD). TVD methods are designed to simulate sharp discontinuities, making them very suitable to model river incision. In contrast to FDMs, the TVD_FVM is well capable of preserving knickpoints, as illustrated for the fast propagating Niagara Falls. Moreover, we show that the TVD_FVM performs much better when reconstructing uplift at timescales exceeding 100 Myr, using Eastern Australia as an example. Finally, uncertainty associated with parameter calibration is dramatically reduced when the TVD_FVM is applied. Therefore, the use of a TVD_FVM to understand long term landscape evolution is an important addition to the toolbox at the disposal of geomorphologists.
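The advective character the abstract exploits can be sketched in a few lines. With slope exponent n = 1, the detachment-limited stream power law dz/dt = U - K A^m S behaves like an advection equation, so a knickpoint migrates at celerity c = K A^m. The following is a hedged illustration (all parameter values hypothetical, and using plain first-order upwind rather than the paper's flux-limited TVD scheme, which would keep the step sharper):

```python
import numpy as np

nx, dx, dt, c = 200, 100.0, 50.0, 1.0   # cells, spacing (m), step (yr), celerity (m/yr)
z = np.where(np.arange(nx) * dx < 10_000, 0.0, 50.0)  # elevation step = knickpoint

cfl = c * dt / dx                       # Courant number; must be <= 1 for stability
for _ in range(100):
    # First-order upwind update: monotone (no over/undershoots) but diffusive,
    # which is exactly the numerical smearing the TVD_FVM is built to avoid.
    z[1:] = z[1:] - cfl * (z[1:] - z[:-1])
```

Running this shows the step translating at the right speed while its edges smear over several cells; a TVD limiter would recover the translation without the smearing.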
40 CFR 141.704 - Analytical methods.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Federal Register approves this incorporation by reference in accordance with 5 U.S.C. 552(a) and 1 CFR.... (b) E. coli. System must use methods for enumeration of E. coli in source water approved in § 136.3(a... of an E. coli sample for up to 48 hours between sample collection and initiation of analysis if...
40 CFR 141.704 - Analytical methods.
Code of Federal Regulations, 2012 CFR
2012-07-01
... Federal Register approves this incorporation by reference in accordance with 5 U.S.C. 552(a) and 1 CFR.... (b) E. coli. System must use methods for enumeration of E. coli in source water approved in § 136.3(a... of an E. coli sample for up to 48 hours between sample collection and initiation of analysis if...
40 CFR 141.704 - Analytical methods.
Code of Federal Regulations, 2013 CFR
2013-07-01
... Federal Register approves this incorporation by reference in accordance with 5 U.S.C. 552(a) and 1 CFR.... (b) E. coli. System must use methods for enumeration of E. coli in source water approved in § 136.3(a... of an E. coli sample for up to 48 hours between sample collection and initiation of analysis if...
40 CFR 141.704 - Analytical methods.
Code of Federal Regulations, 2014 CFR
2014-07-01
... Federal Register approves this incorporation by reference in accordance with 5 U.S.C. 552(a) and 1 CFR.... (b) E. coli. System must use methods for enumeration of E. coli in source water approved in § 136.3(a... of an E. coli sample for up to 48 hours between sample collection and initiation of analysis if...
40 CFR 141.704 - Analytical methods.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Federal Register approves this incorporation by reference in accordance with 5 U.S.C. 552(a) and 1 CFR.... (b) E. coli. System must use methods for enumeration of E. coli in source water approved in § 136.3(a... of an E. coli sample for up to 48 hours between sample collection and initiation of analysis if...
7 CFR 94.4 - Analytical methods.
Code of Federal Regulations, 2012 CFR
2012-01-01
... follows: (a) Compendium Methods for the Microbiological Examination of Foods, Carl Vanderzant and Don Splittstoesser (Editors), American Public Health Association, 1015 Fifteenth Street, NW, Washington, DC 20005. (b... Examination of Dairy Products, American Public Health Association, 1015 Fifteenth Street, NW, Washington,...
7 CFR 94.4 - Analytical methods.
Code of Federal Regulations, 2014 CFR
2014-01-01
... follows: (a) Compendium Methods for the Microbiological Examination of Foods, Carl Vanderzant and Don Splittstoesser (Editors), American Public Health Association, 1015 Fifteenth Street, NW, Washington, DC 20005. (b... Examination of Dairy Products, American Public Health Association, 1015 Fifteenth Street, NW, Washington,...
7 CFR 94.4 - Analytical methods.
Code of Federal Regulations, 2011 CFR
2011-01-01
... follows: (a) Compendium Methods for the Microbiological Examination of Foods, Carl Vanderzant and Don Splittstoesser (Editors), American Public Health Association, 1015 Fifteenth Street, NW, Washington, DC 20005. (b... Examination of Dairy Products, American Public Health Association, 1015 Fifteenth Street, NW, Washington,...
7 CFR 94.4 - Analytical methods.
Code of Federal Regulations, 2010 CFR
2010-01-01
...), Volumes I and II, Food and Drug Administration, Center for Food Safety and Applied Nutrition (CFSAN), 200... follows: (a) Compendium Methods for the Microbiological Examination of Foods, Carl Vanderzant and Don Splittstoesser (Editors), American Public Health Association, 1015 Fifteenth Street, NW, Washington, DC 20005....
FIELD ANALYTICAL SCREENING PROGRAM: PCB METHOD - INNOVATIVE TECHNOLOGY REPORT
This innovative technology evaluation report (ITER) presents information on the demonstration of the U.S. Environmental Protection Agency (EPA) Region 7 Superfund Field Analytical Screening Program (FASP) method for determining polychlorinated biphenyl (PCB) contamination in soil...
Methods for Estimating Uncertainty in Factor Analytic Solutions
The EPA PMF (Environmental Protection Agency positive matrix factorization) version 5.0 and the underlying multilinear engine-executable ME-2 contain three methods for estimating uncertainty in factor analytic models: classical bootstrap (BS), displacement of factor elements (DI...
Analytical chemistry methods for mixed oxide fuel, March 1985
Not Available
1985-03-01
This standard provides analytical chemistry methods for the analysis of materials used to produce mixed oxide fuel. These materials are ceramic fuel and insulator pellets and the plutonium and uranium oxides and nitrates used to fabricate these pellets.
NASA Astrophysics Data System (ADS)
Le Roy, Robert J.; Walji, Sadru; Sentjens, Katherine
2013-06-01
Alkali hydride diatomic molecules have long been the object of spectroscopic studies. However, their small reduced mass makes them the species for which the conventional semiclassical-based methods of analysis tend to have the largest errors. To date, the only quantum-mechanically accurate direct-potential-fit (DPF) analysis for one of these molecules was the one for LiH reported by Coxon and Dickinson. The present paper extends this level of analysis to NaH, and reports a DPF analysis of all available spectroscopic data for the A ¹Σ⁺-X ¹Σ⁺ system of NaH which yields analytic potential energy functions for these two states that account for those data (on average) to within the experimental uncertainties. W.C. Stwalley, W.T. Zemke and S.C. Yang, J. Phys. Chem. Ref. Data 20, 153-187 (1991). J.A. Coxon and C.S. Dickinson, J. Chem. Phys. 121, 8378 (2004).
40 CFR 136.6 - Method modifications and analytical requirements.
Code of Federal Regulations, 2014 CFR
2014-07-01
... person or laboratory using a test procedure (analytical method) in this part. (2) Chemistry of the method... oxygen demand. (6) QC means “quality control.” (b) Method modifications. (1) If the underlying chemistry... notification should be of the form “Method xxx has been modified within the flexibility allowed in 40 CFR...
40 CFR 136.6 - Method modifications and analytical requirements.
Code of Federal Regulations, 2013 CFR
2013-07-01
... person or laboratory using a test procedure (analytical method) in this part. (2) Chemistry of the method... oxygen demand. (6) QC means “quality control.” (b) Method modifications. (1) If the underlying chemistry... notification should be of the form “Method xxx has been modified within the flexibility allowed in 40 CFR...
40 CFR 136.6 - Method modifications and analytical requirements.
Code of Federal Regulations, 2012 CFR
2012-07-01
... person or laboratory using a test procedure (analytical method) in this part. (2) Chemistry of the method... oxygen demand. (6) QC means “quality control.” (b) Method modifications. (1) If the underlying chemistry... notification should be of the form “Method xxx has been modified within the flexibility allowed in 40 CFR...
40 CFR 766.16 - Developing the analytical test method.
Code of Federal Regulations, 2010 CFR
2010-07-01
... analytical test method. Because of the matrix differences of the chemicals listed for testing, no one method... to separate the HDDs/HDFs from the sample matrix. Methods are reviewed in the Guidelines under § 766... meet the requirements of the chemical matrix. (d) Analysis. The method of choice is High Resolution...
Analytical techniques for instrument design - matrix methods
Robinson, R.A.
1997-09-01
We take the traditional Cooper-Nathans approach, as has been applied for many years for steady-state triple-axis spectrometers, and consider its generalisation to other inelastic scattering spectrometers. This involves a number of simple manipulations of exponentials of quadratic forms. In particular, we discuss a toolbox of matrix manipulations that can be performed on the 6-dimensional Cooper-Nathans matrix: diagonalisation (Moller-Nielsen method), coordinate changes (e.g. from (Δk_I, Δk_F) to (ΔE, ΔQ, and 2 dummy variables)), integration of one or more variables (e.g. over such dummy variables), integration subject to linear constraints (e.g. Bragg's Law for analysers), inversion to give the variance-covariance matrix, and so on. We show how these tools can be combined to solve a number of important problems, within the narrow-band limit and the gaussian approximation. We will argue that a generalised program that can handle multiple different spectrometers could (and should) be written in parallel to the Monte-Carlo packages that are becoming available. We will also discuss the complementarity between detailed Monte-Carlo calculations and the approach presented here. In particular, Monte-Carlo methods traditionally simulate the real experiment as performed in practice, given a model scattering law, while the Cooper-Nathans method asks the inverse question: given that a neutron turns up in a particular spectrometer configuration (e.g. angle and time of flight), what is the probability distribution of possible scattering events at the sample? The Monte-Carlo approach could be applied in the same spirit to this question.
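One "toolbox" operation above, integrating variables out of a Gaussian exp(-x^T M x / 2), can be sketched concretely: the marginal precision of the kept variables is the Schur complement of M, which equals the inverse of the corresponding sub-block of the covariance M^(-1). The matrix below is a random symmetric positive definite stand-in, not a real instrument's Cooper-Nathans matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(6, 6))
M = A @ A.T + 6 * np.eye(6)        # symmetric positive definite 6x6 "resolution" matrix

# Keep variables 0-2, integrate out variables 3-5
Mkk, Mkd, Mdd = M[:3, :3], M[:3, 3:], M[3:, 3:]
schur = Mkk - Mkd @ np.linalg.inv(Mdd) @ Mkd.T   # marginal precision (Schur complement)
cov_block = np.linalg.inv(M)[:3, :3]             # covariance sub-block of kept variables

same = np.allclose(np.linalg.inv(schur), cov_block)  # True: the two routes agree
```

This identity is why "integration of one or more variables" and "inversion to give the variance-covariance matrix" can be freely composed in the matrix framework.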
Handbook of Analytical Methods for Textile Composites
NASA Technical Reports Server (NTRS)
Cox, Brian N.; Flanagan, Gerry
1997-01-01
The purpose of this handbook is to introduce models and computer codes for predicting the properties of textile composites. The handbook includes several models for predicting the stress-strain response all the way to ultimate failure; methods for assessing work of fracture and notch sensitivity; and design rules for avoiding certain critical mechanisms of failure, such as delamination, by proper textile design. The following textiles received some treatment: 2D woven, braided, and knitted/stitched laminates and 3D interlock weaves, and braids.
Analytical and numerical methods; advanced computer concepts
Lax, P D
1991-03-01
This past year, two projects have been completed and a new one is under way. First, in joint work with R. Kohn, we developed a numerical algorithm to study the blowup of solutions to equations with certain similarity transformations. In the second project, the adaptive mesh refinement code of Berger and Colella for shock hydrodynamic calculations has been parallelized and numerical studies using two different shared memory machines have been done. My current effort is towards the development of Cartesian mesh methods to solve PDEs with complicated geometries. Most of the coming year will be spent on this project, which is joint work with Prof. Randy Leveque at the University of Washington in Seattle.
Analytical techniques for instrument design -- Matrix methods
Robinson, R.A.
1997-12-31
The authors take the traditional Cooper-Nathans approach, as has been applied for many years for steady-state triple-axis spectrometers, and consider its generalization to other inelastic scattering spectrometers. This involves a number of simple manipulations of exponentials of quadratic forms. In particular, they discuss a toolbox of matrix manipulations that can be performed on the 6-dimensional Cooper-Nathans matrix. They show how these tools can be combined to solve a number of important problems, within the narrow-band limit and the gaussian approximation. They will argue that a generalized program that can handle multiple different spectrometers could (and should) be written in parallel to the Monte-Carlo packages that are becoming available. They also discuss the complementarity between detailed Monte-Carlo calculations and the approach presented here. In particular, Monte-Carlo methods traditionally simulate the real experiment as performed in practice, given a model scattering law, while the Cooper-Nathans method asks the inverse question: given that a neutron turns up in a particular spectrometer configuration (e.g. angle and time of flight), what is the probability distribution of possible scattering events at the sample? The Monte-Carlo approach could be applied in the same spirit to this question.
Relativistic mirrors in laser plasmas (analytical methods)
NASA Astrophysics Data System (ADS)
Bulanov, S. V.; Esirkepov, T. Zh; Kando, M.; Koga, J.
2016-10-01
Relativistic flying mirrors in plasmas are realized as thin dense electron (or electron-ion) layers accelerated by high-intensity electromagnetic waves to velocities close to the speed of light in vacuum. The reflection of an electromagnetic wave from the relativistic mirror changes its energy and frequency. In a counter-propagation configuration, the frequency of the reflected wave is multiplied by a factor proportional to the Lorentz factor squared. This scientific area promises the development of sources of ultrashort x-ray pulses in the attosecond range. The expected intensity will reach the level at which the effects predicted by nonlinear quantum electrodynamics start to play a key role. We present an overview of theoretical methods used to describe relativistic flying, accelerating, oscillating mirrors emerging in intense laser-plasma interactions.
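The frequency multiplication quoted above follows from the relativistic double Doppler shift: for counter-propagating reflection, omega_r / omega_0 = (1 + beta) / (1 - beta), which tends to 4*gamma**2 as beta approaches 1. A quick numeric check (the value of beta is arbitrary):

```python
import math

beta = 0.999                            # mirror velocity as a fraction of c
gamma = 1.0 / math.sqrt(1.0 - beta**2)  # Lorentz factor

exact = (1.0 + beta) / (1.0 - beta)     # exact double Doppler upshift factor
approx = 4.0 * gamma**2                 # ultrarelativistic approximation
```

Both come out near 2000 here, i.e. an optical pulse would already be upshifted well toward the XUV; the 4*gamma**2 scaling is what makes attosecond x-ray sources plausible.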
Fracture mechanics life analytical methods verification testing
NASA Technical Reports Server (NTRS)
Favenesi, J. A.; Clemons, T. G.; Riddell, W. T.; Ingraffea, A. R.; Wawrzynek, P. A.
1994-01-01
The objective was to evaluate NASCRAC (trademark) version 2.0, a second generation fracture analysis code, for verification and validation. NASCRAC was evaluated using a combination of comparisons to the literature, closed-form solutions, numerical analyses, and tests. Several limitations and minor errors were detected. Additionally, a number of major flaws were discovered. These major flaws were generally due to application of a specific method or theory, not due to programming logic. Results are presented for the following program capabilities: K versus a, J versus a, crack opening area, life calculation due to fatigue crack growth, tolerable crack size, proof test logic, tearing instability, creep crack growth, crack transitioning, crack retardation due to overloads, and elastic-plastic stress redistribution. It is concluded that the code is an acceptable fracture tool for K solutions of simplified geometries, for a limited number of J and crack opening area solutions, and for fatigue crack propagation with the Paris equation and constant amplitude loads when the Paris equation is applicable.
Analytical instruments, ionization sources, and ionization methods
Atkinson, David A.; Mottishaw, Paul
2006-04-11
Methods and apparatus for simultaneous vaporization and ionization of a sample in a spectrometer prior to introducing the sample into the drift tube of the analyzer are disclosed. The apparatus includes a vaporization/ionization source having an electrically conductive conduit configured to receive sample particulate which is conveyed to a discharge end of the conduit. Positioned proximate to the discharge end of the conduit is an electrically conductive reference device. The conduit and the reference device act as electrodes and have an electrical potential maintained between them sufficient to cause a corona effect, which will cause at least partial simultaneous ionization and vaporization of the sample particulate. The electrical potential can be maintained to establish a continuous corona, or can be held slightly below the breakdown potential such that arrival of particulate at the point of proximity of the electrodes disrupts the potential, causing arcing and the corona effect. The electrical potential can also be varied to cause periodic arcing between the electrodes such that particulate passing through the arc is simultaneously vaporized and ionized. The invention further includes a spectrometer containing the source. The invention is particularly useful for ion mobility spectrometers and atmospheric pressure ionization mass spectrometers.
Analytic Methods for Simulated Light Transport
NASA Astrophysics Data System (ADS)
Arvo, James Richard
1995-01-01
This thesis presents new mathematical and computational tools for the simulation of light transport in realistic image synthesis. New algorithms are presented for exact computation of direct illumination effects related to light emission, shadowing, and first-order scattering from surfaces. New theoretical results are presented for the analysis of global illumination algorithms, which account for all interreflections of light among surfaces of an environment. First, a closed-form expression is derived for the irradiance Jacobian, which is the derivative of a vector field representing radiant energy flux. The expression holds for diffuse polygonal scenes and correctly accounts for shadowing, or partial occlusion. Three applications of the irradiance Jacobian are demonstrated: locating local irradiance extrema, direct computation of isolux contours, and surface mesh generation. Next, the concept of irradiance is generalized to tensors of arbitrary order. A recurrence relation for irradiance tensors is derived that extends a widely used formula published by Lambert in 1760. Several formulas with applications in computer graphics are derived from this recurrence relation and are independently verified using a new Monte Carlo method for sampling spherical triangles. The formulas extend the range of non-diffuse effects that can be computed in closed form to include illumination from directional area light sources and reflections from and transmissions through glossy surfaces. Finally, new analysis for global illumination is presented, which includes both direct illumination and indirect illumination due to multiple interreflections of light. A novel operator equation is proposed that clarifies existing deterministic algorithms for simulating global illumination and facilitates error analysis. Basic properties of the operators and solutions are identified which are not evident from previous formulations. A taxonomy of errors that arise in simulating global illumination is
Internal R and D task summary report: analytical methods development
Schweighardt, F.K.
1983-07-01
International Coal Refining Company (ICRC) conducted two research programs to develop analytical procedures for characterizing the feed, intermediates, and products of the proposed SRC-I Demonstration Plant. The major conclusion is that standard analytical methods must be defined and assigned statistical error limits of precision and reproducibility early in development. Comparing all SRC-I data or data from different processes is complex and expensive if common data correlation procedures are not followed. ICRC recommends that processes be audited analytically and statistical analyses generated as quickly as possible, in order to quantify process-dependent and -independent variables. 16 references, 10 figures, 20 tables.
Study of an analytical method for hexavalent chromium.
Bhargava, O P; Bumsted, H E; Grunder, F I; Hunt, B L; Manning, G E; Riemann, R A; Samuels, J K; Tatone, V; Waldschmidt, S J; Hernandez, P
1983-06-01
The diphenylcarbazide colorimetric method was evaluated by analyzing spiked PVC filters prepared by an AIHA-accredited consultant laboratory for chromium (VI). All seven participating laboratories received the samples and performed the analyses at the same time. Three laboratories simultaneously tested three alternative analytical procedures. Reduced amounts of chromium (VI) were found by both the consultant and participating laboratories when using the test procedure and one of the alternative methods. Two of the alternative analytical methods, both of which involve an alkaline extraction procedure, provided higher recoveries and more precise values for the test filters. It appears that the alkaline extraction procedure may be more appropriate for occupational health samples taken in steel industry environments which may include several interferents. Suggestions are made for further studies to determine the most appropriate analytical method.
Computer Subroutines for Analytic Rotation by Two Gradient Methods.
ERIC Educational Resources Information Center
van Thillo, Marielle
Two computer subroutine packages for the analytic rotation of a factor matrix, A(p x m), are described. The first program uses the Fletcher (1970) gradient method, and the second uses the Polak-Ribiere (Polak, 1971) gradient method. The calculations in both programs involve the optimization of a function of free parameters. The result is a…
An analytical method for designing low noise helicopter transmissions
NASA Technical Reports Server (NTRS)
Bossler, R. B., Jr.; Bowes, M. A.; Royal, A. C.
1978-01-01
The development and experimental validation of a method for analytically modeling the noise mechanism in helicopter geared power transmission systems is described. This method can be used within the design process to predict interior noise levels and to investigate the noise-reducing potential of alternative transmission design details. Examples are discussed.
FIELD ANALYTICAL SCREENING PROGRAM: PCP METHOD - INNOVATIVE TECHNOLOGY EVALUATION REPORT
The Field Analytical Screening Program (FASP) pentachlorophenol (PCP) method uses a gas chromatograph (GC) equipped with a megabore capillary column and flame ionization detector (FID) and electron capture detector (ECD) to identify and quantify PCP. The FASP PCP method is design...
Robust and Accurate Shock Capturing Method for High-Order Discontinuous Galerkin Methods
NASA Technical Reports Server (NTRS)
Atkins, Harold L.; Pampell, Alyssa
2011-01-01
A simple yet robust and accurate approach for capturing shock waves using a high-order discontinuous Galerkin (DG) method is presented. The method uses the physical viscous terms of the Navier-Stokes equations as suggested by others; however, the proposed formulation of the numerical viscosity is continuous and compact by construction, and does not require the solution of an auxiliary diffusion equation. This work also presents two analyses that guided the formulation of the numerical viscosity and certain aspects of the DG implementation. A local eigenvalue analysis of the DG discretization applied to a shock containing element is used to evaluate the robustness of several Riemann flux functions, and to evaluate algorithm choices that exist within the underlying DG discretization. A second analysis examines exact solutions to the DG discretization in a shock containing element, and identifies a "model" instability that will inevitably arise when solving the Euler equations using the DG method. This analysis identifies the minimum viscosity required for stability. The shock capturing method is demonstrated for high-speed flow over an inviscid cylinder and for an unsteady disturbance in a hypersonic boundary layer. Numerical tests are presented that evaluate several aspects of the shock detection terms. The sensitivity of the results to model parameters is examined with grid and order refinement studies.
Analytical calculation of spectral phase of grism pairs by the geometrical ray tracing method
NASA Astrophysics Data System (ADS)
Rahimi, L.; Askari, A. A.; Saghafifar, H.
2016-07-01
Optimal operation of a grism pair is practically attainable only when an analytical expression for its spectral phase is at hand. In this paper, we have employed an accurate geometrical ray tracing method to calculate the analytical phase shift of a grism pair in transmission and reflection configurations. As the results show, for a great variety of complicated configurations, the spectral phase of a grism pair has the same form as that of a prism pair. The only exception is when the light enters and exits through different facets of a reflection grism. The analytical result has been used to calculate the second-order dispersion of several examples of grism pairs in various possible configurations. All results are in complete agreement with those from the ray tracing method. The results of this work can be very helpful in the optimal design and application of grism pairs in various configurations.
Beamforming and holography image formation methods: an analytic study.
Solimene, Raffaele; Cuccaro, Antonio; Ruvio, Giuseppe; Tapia, Daniel Flores; O'Halloran, Martin
2016-04-18
Beamforming and holographic imaging procedures are widely used in many applications such as radar sensing, sonar, and microwave medical imaging. Nevertheless, an analytical comparison of the methods has not been done. In this paper, the point spread functions pertaining to the two methods are analytically determined. This allows a formal comparison of the two techniques and makes it easy to highlight how performance depends on the configuration parameters, including frequency range, number of scatterers, and data discretization. It is demonstrated that beamforming and holography achieve essentially the same resolution, but beamforming requires a cheaper configuration with fewer sensors.
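To illustrate the kind of point-spread-function analysis this record describes, a minimal delay-and-sum beamforming response for a uniform linear array can be sketched as follows. The array size, element spacing, and angles are illustrative assumptions, not values from the paper:

```python
import numpy as np

def das_psf(n_sensors, spacing_wl, steer_deg, look_deg):
    """Normalized delay-and-sum beamformer response of an N-element
    uniform linear array (element spacing in wavelengths) steered to
    `steer_deg`, evaluated in the look direction `look_deg`."""
    n = np.arange(n_sensors)
    phase = 2 * np.pi * spacing_wl * n * (
        np.sin(np.radians(look_deg)) - np.sin(np.radians(steer_deg)))
    # Coherent sum of the phase-aligned sensor outputs.
    return abs(np.exp(1j * phase).sum()) / n_sensors

# The response is unity at the steering angle and falls off elsewhere;
# the shape of this curve is the (monochromatic) point spread function.
peak = das_psf(16, 0.5, 20.0, 20.0)
side = das_psf(16, 0.5, 20.0, 40.0)
```

Evaluating `das_psf` over a grid of look angles traces the array's beampattern, whose main-lobe width sets the resolution the paper compares against holography.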
Uncertainty profiles for the validation of analytical methods.
Saffaj, T; Ihssane, B
2011-09-15
This article exposes a new global strategy for the validation of analytical methods and the estimation of measurement uncertainty. Our purpose is to give researchers in the field of analytical chemistry access to a powerful tool for the evaluation of quantitative analytical procedures. Indeed, the proposed strategy facilitates analytical validation by providing a decision tool based on the uncertainty profile and the β-content tolerance interval. Equally important, this approach allows a good estimate of measurement uncertainty using the validation data alone, without recourse to additional experiments. In the example below, we confirmed the applicability of this new strategy for the validation of a chromatographic bioanalytical method and obtained a good estimate of the measurement uncertainty without any extra effort or additional experiments. A comparative study with the SFSTP approach showed that both strategies select the same calibration functions. The holistic character of the measurement uncertainty, compared to the total error, was influenced by our choice of uncertainty profile. Nevertheless, we think that adopting the uncertainty profile at the validation stage controls the risk of using the analytical method in the routine phase.
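The β-content tolerance interval at the heart of this strategy can be computed for normally distributed validation data with Howe's classical approximation; the sketch below is a generic textbook version, not the authors' implementation, and the sample data are illustrative:

```python
import numpy as np
from scipy import stats

def beta_content_tolerance_interval(x, beta=0.9, conf=0.95):
    """Two-sided normal beta-content tolerance interval (Howe's
    approximation): an interval claimed to contain at least `beta` of
    the population with confidence `conf`."""
    x = np.asarray(x, dtype=float)
    n, nu = len(x), len(x) - 1
    z = stats.norm.ppf((1 + beta) / 2)
    chi2 = stats.chi2.ppf(1 - conf, nu)   # lower-tail chi-square quantile
    k = z * np.sqrt(nu * (1 + 1 / n) / chi2)
    m, s = x.mean(), x.std(ddof=1)
    return m - k * s, m + k * s

# Illustrative data: 30 equally spaced "measurements".
lo, hi = beta_content_tolerance_interval(np.arange(30.0))
```

The factor k exceeds the naive normal quantile precisely because the mean and standard deviation are themselves estimated, which is what makes the interval a decision tool rather than a descriptive one.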
Analytical Methods for Detonation Residues of Insensitive Munitions
NASA Astrophysics Data System (ADS)
Walsh, Marianne E.
2016-01-01
Analytical methods are described for the analysis of post-detonation residues from insensitive munitions. Standard methods were verified or modified to obtain the mass of residues deposited per round. In addition, a rapid chromatographic separation was developed and used to measure the mass of NTO (3-nitro-1,2,4-triazol-5-one), NQ (nitroguanidine) and DNAN (2,4-dinitroanisole). The HILIC (hydrophilic-interaction chromatography) separation described here uses a trifunctionally-bonded amide phase to retain the polar analytes. The eluent is 75/25 v/v acetonitrile/water acidified with acetic acid, which is also suitable for LC/MS applications. Analytical runtime was three minutes. Solid phase extraction and LC/MS conditions are also described.
Accurate compressed look up table method for CGH in 3D holographic display.
Gao, Chuan; Liu, Juan; Li, Xin; Xue, Gaolei; Jia, Jia; Wang, Yongtian
2015-12-28
A computer generated hologram (CGH) should be obtained with both high accuracy and high speed for 3D holographic display, yet most research focuses on speed alone. In this paper, a simple and effective computation method for CGH is proposed based on Fresnel diffraction theory and a look up table. Numerical simulations and optical experiments are performed to demonstrate its feasibility. The proposed method obtains more accurate reconstructed images with lower memory usage than the split look up table and compressed look up table methods, without sacrificing computational speed in hologram generation, so it is called the accurate compressed look up table (AC-LUT) method. It is believed that AC-LUT is an effective method to calculate the CGH of 3D objects for real-time 3D holographic display, where huge amounts of data must be processed, and it could provide fast and accurate digital transmission in various dynamic optical fields in the future.
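The look-up-table idea exploits the separability of the paraxial Fresnel kernel: the quadratic phase factorizes into an x-part times a y-part, so a single 1-D table of phase factors indexed by pixel offset can replace the full 2-D computation. A generic sketch of that principle (not the AC-LUT algorithm itself; pitch, wavelength, and object points are made-up values) is:

```python
import numpy as np

WL = 532e-9                 # wavelength (m), illustrative
K = 2 * np.pi / WL
PITCH = 8e-6                # hologram pixel pitch (m), illustrative
N = 128                     # hologram is N x N pixels

def direct_fresnel(points, z):
    """Reference: Fresnel quadratic-phase contribution of each object
    point, evaluated directly at every hologram pixel."""
    xs = (np.arange(N) - N // 2) * PITCH
    X, Y = np.meshgrid(xs, xs, indexing="ij")
    H = np.zeros((N, N), dtype=complex)
    for x0, y0 in points:
        H += np.exp(1j * K * ((X - x0) ** 2 + (Y - y0) ** 2) / (2 * z))
    return H

def lut_fresnel(points, z):
    """Same field via a precomputed 1-D table: since the kernel is
    exp(ik(x-x0)^2/2z) * exp(ik(y-y0)^2/2z), one table of quadratic
    phase factors indexed by pixel offset suffices for both axes."""
    offsets = np.arange(-2 * N, 2 * N) * PITCH   # all possible x - x0
    table = np.exp(1j * K * offsets ** 2 / (2 * z))
    xs = np.arange(N) - N // 2
    H = np.zeros((N, N), dtype=complex)
    for x0, y0 in points:
        ix = xs - int(np.rint(x0 / PITCH))       # x offsets in pixels
        iy = xs - int(np.rint(y0 / PITCH))       # y offsets in pixels
        H += np.outer(table[ix + 2 * N], table[iy + 2 * N])
    return H

# Two object points on the pixel grid, 10 cm from the hologram plane.
points = [(0.0, 0.0), (16 * PITCH, -8 * PITCH)]
H_direct = direct_fresnel(points, 0.1)
H_lut = lut_fresnel(points, 0.1)
```

For on-grid points the table-based field matches the direct computation, while the table costs O(N) memory per depth instead of O(N^2) per point.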
ANALYTICAL METHOD READINESS FOR THE CONTAMINANT CANDIDATE LIST
The Contaminant Candidate List (CCL), which was promulgated in March 1998, includes 50 chemical and 10 microbiological contaminants/contaminant groups. At the time of promulgation, analytical methods were available for 6 inorganic and 28 organic contaminants. Since then, 4 anal...
Analytical chemistry methods for metallic core components: Revision March 1985
Not Available
1985-03-01
This standard provides analytical chemistry methods for the analysis of alloys used to fabricate core components. These alloys are 302, 308, 316, 316-Ti, and 321 stainless steels and 600 and 718 Inconels and they may include other 300-series stainless steels.
40 CFR 161.180 - Enforcement analytical method.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 25 2012-07-01 2012-07-01 false Enforcement analytical method. 161.180 Section 161.180 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) PESTICIDE PROGRAMS DATA REQUIREMENTS FOR REGISTRATION OF ANTIMICROBIAL PESTICIDES Product Chemistry Data...
40 CFR 766.16 - Developing the analytical test method.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 32 2012-07-01 2012-07-01 false Developing the analytical test method. 766.16 Section 766.16 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) TOXIC SUBSTANCES CONTROL ACT DIBENZO-PARA-DIOXINS/DIBENZOFURANS General Provisions § 766.16 Developing...
40 CFR 766.16 - Developing the analytical test method.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 31 2014-07-01 2014-07-01 false Developing the analytical test method. 766.16 Section 766.16 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) TOXIC SUBSTANCES CONTROL ACT DIBENZO-PARA-DIOXINS/DIBENZOFURANS General Provisions § 766.16 Developing...
40 CFR 766.16 - Developing the analytical test method.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 31 2011-07-01 2011-07-01 false Developing the analytical test method. 766.16 Section 766.16 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) TOXIC SUBSTANCES CONTROL ACT DIBENZO-PARA-DIOXINS/DIBENZOFURANS General Provisions § 766.16 Developing...
40 CFR 766.16 - Developing the analytical test method.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 32 2013-07-01 2013-07-01 false Developing the analytical test method. 766.16 Section 766.16 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) TOXIC SUBSTANCES CONTROL ACT DIBENZO-PARA-DIOXINS/DIBENZOFURANS General Provisions § 766.16 Developing...
40 CFR 161.180 - Enforcement analytical method.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 24 2011-07-01 2011-07-01 false Enforcement analytical method. 161.180 Section 161.180 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) PESTICIDE PROGRAMS DATA REQUIREMENTS FOR REGISTRATION OF ANTIMICROBIAL PESTICIDES Product Chemistry Data...
40 CFR 161.180 - Enforcement analytical method.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 25 2013-07-01 2013-07-01 false Enforcement analytical method. 161.180 Section 161.180 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) PESTICIDE PROGRAMS DATA REQUIREMENTS FOR REGISTRATION OF ANTIMICROBIAL PESTICIDES Product Chemistry Data...
Analytic energy gradient for the projected Hartree-Fock method
NASA Astrophysics Data System (ADS)
Schutski, Roman; Jiménez-Hoyos, Carlos A.; Scuseria, Gustavo E.
2014-05-01
We derive and implement the analytic energy gradient for the symmetry Projected Hartree-Fock (PHF) method avoiding the solution of coupled-perturbed HF-like equations, as in the regular unprojected method. Our formalism therefore has mean-field computational scaling and cost, despite the elaborate multi-reference character of the PHF wave function. As benchmark examples, we here apply our gradient implementation to the ortho-, meta-, and para-benzyne biradicals, and discuss their equilibrium geometries and vibrational frequencies.
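A standard sanity check for any analytic gradient implementation, of the kind an implementation like the one above would undergo, is comparison against central finite differences. The sketch below uses a toy energy function, not the PHF energy functional:

```python
import numpy as np

def energy(x):
    """Toy stand-in for an electronic energy as a function of geometry."""
    return np.sum(x ** 2) + np.prod(np.cos(x))

def analytic_grad(x):
    """Hand-derived gradient of `energy` (valid where cos(x_i) != 0)."""
    cos = np.cos(x)
    return 2 * x - np.sin(x) * np.prod(cos) / cos

def fd_grad(x, h=1e-6):
    """Central finite-difference gradient, for verification only."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (energy(x + e) - energy(x - e)) / (2 * h)
    return g

x0 = np.array([0.3, -0.7, 1.1])
```

The appeal of an analytic gradient, here as in PHF, is that it costs roughly one extra energy-like evaluation rather than the 2n evaluations of the finite-difference check.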
Recent developments in detection methods for microfabricated analytical devices.
Schwarz, M A; Hauser, P C
2001-09-01
Sensitive detection in microfluidic analytical devices is a challenge because of the extremely small detection volumes available. Considerable efforts have been made lately to further address this aspect and to investigate techniques other than fluorescence. Among the newly introduced techniques are the optical methods of chemiluminescence, refraction and thermooptics, as well as the electrochemical methods of amperometry, conductimetry and potentiometry. Developments are also in progress to create miniaturized plasma-emission spectrometers and sensitive detectors for gas-chromatographic separations.
Pérez-Ortega, Patricia; Lara-Ortega, Felipe J; García-Reyes, Juan F; Gilbert-López, Bienvenida; Trojanowicz, Marek; Molina-Díaz, Antonio
2016-11-01
The feasibility of accurate-mass multi-residue screening methods using ultrahigh-performance liquid chromatography high-resolution mass spectrometry (UHPLC-HRMS) with time-of-flight mass spectrometry has been evaluated, with over 625 multiclass food contaminants as a case study. Aspects examined include the selectivity and confirmation capability provided by HRMS with different acquisition modes (full-scan, or full-scan combined with collision induced dissociation (CID) with no precursor ion isolation) and chromatographic separation, along with main limitations such as sensitivity and automated data processing. Compound identification was accomplished by retention time matching and accurate mass measurements of the targeted ions for each analyte (mainly (de)protonated molecules). Compounds with the same nominal mass (isobaric species) were very frequent due to the large number of compounds included. Although 76% of database compounds were involved in isobaric groups, they were resolved in most cases: 99% of these isobaric species were distinguished by retention time, resolving power, isotopic profile, or fragment ions. Only three pairs could not be resolved with these tools. In-source CID fragmentation was evaluated in depth, although the results obtained were not as informative as those from fragmentation experiments without precursor ion isolation (all-ion mode). The latter acquisition mode was found to be the best suited for this type of large-scale screening method, instead of the classic product ion scan, as it provided excellent fragmentation information for confirmatory purposes for an unlimited number of compounds. Leaving aside sample treatment limitations, the main weaknesses noticed are the relatively low sensitivity for compounds that do not ionize well by electrospray ionization, and quantitation issues such as signal suppression due to matrix effects from coeluting matrix or from
Current analytical methods for plant auxin quantification--A review.
Porfírio, Sara; Gomes da Silva, Marco D R; Peixe, Augusto; Cabrita, Maria J; Azadi, Parastoo
2016-01-01
Plant hormones, and especially auxins, are low molecular weight compounds highly involved in the control of plant growth and development. Auxins are also broadly used in horticulture, as part of vegetative plant propagation protocols, allowing the cloning of genotypes of interest. Over the years, large efforts have been put into the development of more sensitive and precise methods for the analysis and quantification of plant hormone levels in plant tissues. Although analytical techniques have evolved and new methods have been implemented, sample preparation is still the limiting step of auxin analysis. In this review, the current methods of auxin analysis are discussed. Sample preparation procedures, including extraction, purification and derivatization, are reviewed and compared. The different analytical techniques, ranging from chromatographic and mass spectrometry methods to immunoassays and electrokinetic methods, as well as other types of detection, are also discussed. Considering that auxin analysis mirrors the evolution of analytical chemistry, the number of publications describing new and/or improved methods is always increasing, and we considered it appropriate to update the available information. For that reason, this article aims to review the current advances in auxin analysis; only reports from the past 15 years are covered.
Method for accurate growth of vertical-cavity surface-emitting lasers
Chalmers, S.A.; Killeen, K.P.; Lear, K.L.
1995-03-14
The authors report a method for accurate growth of vertical-cavity surface-emitting lasers (VCSELs). The method uses a single reflectivity spectrum measurement to determine the structure of the partially completed VCSEL at a critical point of growth. This information, along with the extracted growth rates, allows imprecisions in growth parameters to be compensated for during growth of the remaining structure, which can then be completed with very accurate critical dimensions. Using this method, they can now routinely grow lasing VCSELs with Fabry-Perot cavity resonance wavelengths controlled to within 0.5%. 4 figs.
Method for accurate growth of vertical-cavity surface-emitting lasers
Chalmers, Scott A.; Killeen, Kevin P.; Lear, Kevin L.
1995-01-01
We report a method for accurate growth of vertical-cavity surface-emitting lasers (VCSELs). The method uses a single reflectivity spectrum measurement to determine the structure of the partially completed VCSEL at a critical point of growth. This information, along with the extracted growth rates, allows imprecisions in growth parameters to be compensated for during growth of the remaining structure, which can then be completed with very accurate critical dimensions. Using this method, we can now routinely grow lasing VCSELs with Fabry-Perot cavity resonance wavelengths controlled to within 0.5%.
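The growth-monitoring idea above relies on matching a measured reflectivity spectrum against the expected spectrum of the partially grown mirror stack. A hypothetical thin-film transfer-matrix calculation of such a spectrum for a quarter-wave Bragg mirror is sketched below; the indices, period count, and center wavelength are illustrative, not those of the actual devices:

```python
import numpy as np

def stack_reflectance(layers, n_in, n_sub, wavelength):
    """Normal-incidence reflectance of a thin-film stack via the
    characteristic (transfer) matrix method. `layers` is a list of
    (refractive_index, thickness) pairs, incidence side first."""
    M = np.eye(2, dtype=complex)
    for n, d in layers:
        delta = 2 * np.pi * n * d / wavelength   # layer phase thickness
        M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                          [1j * n * np.sin(delta), np.cos(delta)]])
    B, C = M @ np.array([1.0, n_sub])
    r = (n_in * B - C) / (n_in * B + C)
    return abs(r) ** 2

# Quarter-wave semiconductor-like Bragg mirror, 10 periods, centred
# at 980 nm, on an n = 3.5 substrate.
lam0, n_h, n_l = 980e-9, 3.5, 3.0
pair = [(n_h, lam0 / (4 * n_h)), (n_l, lam0 / (4 * n_l))]
R_centre = stack_reflectance(pair * 10, 1.0, 3.5, lam0)
R_detuned = stack_reflectance(pair * 10, 1.0, 3.5, 1.3 * lam0)
```

Fitting a measured spectrum with such a model at a critical growth point is what lets the remaining layer thicknesses be corrected on the fly.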
Accurate Analytic Potential Functions for the A ^3Π_1 and X ^1Σ^+ States of IBr
NASA Astrophysics Data System (ADS)
Yukiya, Tokio; Nishimiya, Nobuo; Suzuki, Masao; Le Roy, Robert
2014-06-01
Spectra of IBr in various wavelength regions have been measured by a number of researchers using traditional diffraction grating and microwave methods, as well as using high-resolution laser techniques combined with a Fourier transform spectrometer. In a previous paper at this meeting, we reported a preliminary determination of analytic potential energy functions for the A ^3Π_1 and X ^1Σ^+ states of IBr from a direct-potential-fit (DPF) analysis of all of the data available at that time. That study also confirmed the presence of anomalous fluctuations in the v-dependence of the first differences of the inertial rotational constant, ΔB_v = B_{v+1} - B_v, in the A ^3Π_1 state for vibrational levels with v'(A) in the mid 20's. However, our previous experience in a recent study of the analogous A ^3Π_1-X ^1Σ_g^+ system of Br_2 suggested that the effect of such fluctuations may be overcome if sufficient data are available. The present work therefore reports new measurements of transitions to levels in the v'(A)=23-26 region, together with a new global DPF analysis that uses "robust" least-squares fits to average properly over the effect of such fluctuations in order to provide an optimum delineation of the underlying potential energy curve(s). References: L.E. Selin, Ark. Fys. 21, 479 (1962); E. Tiemann and Th. Moeller, Z. Naturforsch. A 30, 986 (1975); E.M. Weinstock and A. Preston, J. Mol. Spectrosc. 70, 188 (1978); D.R.T. Appadoo, P.F. Bernath, and R.J. Le Roy, Can. J. Phys. 72, 1265 (1994); N. Nishimiya, T. Yukiya and M. Suzuki, J. Mol. Spectrosc. 173, 8 (1995); T. Yukiya, N. Nishimiya, and R.J. Le Roy, Paper MF12 at the 65th Ohio State University International Symposium on Molecular Spectroscopy, Columbus, Ohio, June 20-24, 2011; T. Yukiya, N. Nishimiya, Y. Samajima, K. Yamaguchi, M. Suzuki, C.D. Boone, I. Ozier and R.J. Le Roy, J. Mol. Spectrosc. 283, 32 (2013); J.K.G. Watson, J. Mol. Spectrosc. 219, 326 (2003).
An Overview of Conventional and Emerging Analytical Methods for the Determination of Mycotoxins
Cigić, Irena Kralj; Prosen, Helena
2009-01-01
Mycotoxins are a group of compounds produced by various fungi and excreted into the matrices on which they grow, often food intended for human consumption or animal feed. The high toxicity and carcinogenicity of these compounds and their ability to cause various pathological conditions has led to widespread screening of foods and feeds potentially polluted with them. Maximum permissible levels in different matrices have also been established for some toxins. As these are quite low, analytical methods for determination of mycotoxins have to be both sensitive and specific. In addition, an appropriate sample preparation and pre-concentration method is needed to isolate analytes from rather complicated samples. In this article, an overview of methods for analysis and sample preparation published in the last ten years is given for the most often encountered mycotoxins in different samples, mainly in food. Special emphasis is on liquid chromatography with fluorescence and mass spectrometric detection, while in the field of sample preparation various solid-phase extraction approaches are discussed. However, an overview of other analytical and sample preparation methods less often used is also given. Finally, different matrices where mycotoxins have to be determined are discussed with the emphasis on their specific characteristics important for the analysis (human food and beverages, animal feed, biological samples, environmental samples). Various issues important for accurate qualitative and quantitative analyses are critically discussed: sampling and choice of representative sample, sample preparation and possible bias associated with it, specificity of the analytical method and critical evaluation of results.
Two Approaches in the Lunar Libration Theory: Analytical vs. Numerical Methods
NASA Astrophysics Data System (ADS)
Petrova, Natalia; Zagidullin, Arthur; Nefediev, Yurii; Kosulin, Valerii
2016-10-01
Observation of the physical libration of the Moon and other celestial bodies is one of the astronomical methods for remotely evaluating the internal structure of a celestial body without expensive space experiments. A review of the results obtained from physical libration studies is presented in this report. The main emphasis is placed on the description of successful lunar laser ranging for libration determination and on methods of simulating the physical libration. As a result, estimates of the viscoelastic and dissipative properties of the lunar body and of the lunar core parameters have been obtained. The core's existence was confirmed by recent reprocessing of seismic data from the Apollo missions. Attention is paid to the physical interpretation of the phenomenon of free libration and methods of its determination. A significant part of the report is devoted to the practical application of the most accurate analytical tables of lunar libration to date, built by comprehensive analytical processing of the residual differences obtained when comparing long-term series of laser observations with the numerical ephemeris DE421 [1]. In general, the basic outline of the report reflects the effectiveness of the two approaches in libration theory: numerical and analytical solution. It is shown that the two approaches complement each other in studying the Moon in different aspects: the numerical approach provides the high accuracy of the theory necessary for adequate treatment of modern high-accuracy observations, while the analytic approach allows one to see the essence of the various manifestations in the lunar rotation and to predict and interpret new effects in observations of physical libration [2]. [1] Rambaux, N., J. G. Williams, 2011, The Moon's physical librations and determination of their free modes, Celest. Mech. Dyn. Astron., 109, 85-100. [2] Petrova N., A. Zagidullin, Yu. Nefediev. Analysis of long-periodic variations of lunar libration parameters on the basis
Analytical methods for physicochemical characterization of antibody drug conjugates
Wakankar, Aditya; Chen, Yan; Gokarn, Yatin
2011-01-01
Antibody-drug conjugates (ADCs), produced through the chemical linkage of a potent small molecule cytotoxin (drug) to a monoclonal antibody, have more complex and heterogeneous structures than the corresponding antibodies. This review describes the analytical methods that have been used in their physicochemical characterization. The selection of the most appropriate methods for a specific ADC is heavily dependent on the properties of the linker, the drug and the choice of attachment sites (lysines, inter-chain cysteines, Fc glycans). Improvements in analytical techniques such as protein mass spectrometry and capillary electrophoresis have significantly increased the quality of information that can be obtained for use in product and process characterization and for routine lot release and stability testing.
Analytical method for determination of benzenearsonic acids
Mitchell, G.L.; Bayse, G.S.
1988-01-01
A sensitive analytical method has been modified for use in determination of several benzenearsonic acids, including arsanilic acid (p-aminobenzenearsonic acid), Roxarsone (3-nitro-4-hydroxybenzenearsonic acid), and p-ureidobenzenearsonic acid. Controlled acid hydrolysis of these compounds produces a quantitative yield of arsenate, which is measured colorimetrically as the molybdenum blue complex at 865 nm. The method obeys Beer's Law over the micromolar concentration range. These benzenearsonic acids are routinely used as feed additives in poultry and swine. This method should be useful in assessing tissue levels of the arsenicals in appropriate extracts.
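Colorimetric quantitation of the molybdenum blue complex rests on Beer's law linearity over the working range; a generic least-squares calibration sketch (the concentration-absorbance pairs below are made-up illustrations, not data from the study) is:

```python
import numpy as np

# Hypothetical calibration standards: arsenate concentration (uM) vs
# absorbance at 865 nm, assumed to follow Beer's law A = eps*b*c + blank.
conc = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])
absorb = np.array([0.002, 0.101, 0.198, 0.304, 0.399, 0.501])

# Ordinary least-squares line through the standards.
slope, intercept = np.polyfit(conc, absorb, 1)

def concentration(a_sample):
    """Back-calculate analyte concentration from a measured absorbance."""
    return (a_sample - intercept) / slope

c = concentration(0.25)
```

In practice the fit would be checked for linearity (residual pattern, blank value) before back-calculating unknowns, since deviations from Beer's law signal interference or range problems.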
Customizing computational methods for visual analytics with big data.
Choo, Jaegul; Park, Haesun
2013-01-01
The volume of available data has been growing exponentially, increasing the complexity and obscurity of data problems. In response, visual analytics (VA) has gained attention, yet its solutions haven't scaled well for big data. Computational methods can improve VA's scalability by giving users compact, meaningful information about the input data. However, the significant computation time these methods require hinders real-time interactive visualization of big data. By addressing crucial discrepancies between these methods and VA regarding precision and convergence, researchers have proposed ways to customize them for VA. These approaches, which include low-precision computation and iteration-level interactive visualization, ensure real-time interactive VA for big data.
Active controls: A look at analytical methods and associated tools
NASA Technical Reports Server (NTRS)
Newsom, J. R.; Adams, W. M., Jr.; Mukhopadhyay, V.; Tiffany, S. H.; Abel, I.
1984-01-01
A review of analytical methods and associated tools for active controls analysis and design problems is presented. Approaches employed to develop mathematical models suitable for control system analysis and/or design are discussed. Significant efforts have been expended to develop tools to generate the models from the standpoint of control system designers' needs and develop the tools necessary to analyze and design active control systems. Representative examples of these tools are discussed. Examples where results from the methods and tools have been compared with experimental data are also presented. Finally, a perspective on future trends in analysis and design methods is presented.
Lead-210 in animal and human bone: A new analytical method
Fisenne, I.M.
1994-01-01
Lead-210 delivers the highest radiation dose to the skeleton of any naturally occurring radionuclide. A robust analytical method for the accurate determination of its concentration in bone was developed which minimizes the use of hazardous chemicals. Dry-ashing experiments showed that no substantial loss of (210)Pb occurred at ≤700 °C. Additional experiments showed that no loss of (222)Rn occurred from dry-ashed bone. Ashed human-bone samples from three US regional areas were analyzed for (210)Pb and (226)Ra using the new method. 9 refs., 3 figs., 1 tab.
Fast Monte Carlo Electron-Photon Transport Method and Application in Accurate Radiotherapy
NASA Astrophysics Data System (ADS)
Hao, Lijuan; Sun, Guangyao; Zheng, Huaqing; Song, Jing; Chen, Zhenping; Li, Gui
2014-06-01
The Monte Carlo (MC) method is the most accurate computational method for dose calculation, but its wide application in clinical radiotherapy is hindered by slow convergence and long computation times. The main task in MC dose calculation research is therefore to speed up computation while maintaining high precision. The purpose of this paper is to increase the calculation speed of the MC method for electron-photon transport with high precision and ultimately to reduce accurate radiotherapy dose calculation times on an ordinary computer to a few hours, which meets the requirement of clinical dose verification. Based on the existing Super Monte Carlo Simulation Program (SuperMC) developed by the FDS Team, a fast MC method for coupled electron-photon transport is presented, with focus on two aspects: first, the physical model of electron-photon transport is simplified and optimized, increasing calculation speed with only a slight reduction in accuracy; second, a variety of MC acceleration techniques are applied, for example, reusing information obtained in previous calculations to avoid repeated simulation of particles with identical histories, and applying suitable variance reduction techniques to accelerate convergence. The fast MC method was tested on many simple physical models and on clinical cases including nasopharyngeal carcinoma, peripheral lung tumor, and cervical carcinoma. The results show that the fast MC method for electron-photon transport is fast enough to meet the requirements of clinical accurate radiotherapy dose verification. The method will later be applied to the Accurate/Advanced Radiation Therapy System (ARTS) as an MC dose verification module.
Which Method Is Most Precise; Which Is Most Accurate? An Undergraduate Experiment
ERIC Educational Resources Information Center
Jordan, A. D.
2007-01-01
A simple experiment, the determination of the density of a liquid by several methods, is presented. Since the concept of density is a familiar one, the experiment is suitable for the introductory laboratory period of a first- or second-year course in physical or analytical chemistry. The main objective of the experiment is to familiarize students…
Technology Transfer Automated Retrieval System (TEKTRAN)
The rapid advances in analytical chromatography equipment have made the reliable and reproducible measurement of a wide range of plant chemical components possible. Full chemical characterization of a given plant material is possible with the new mass spectrometers currently available. New methods a...
Analytical Methods for Biomass Characterization during Pretreatment and Bioconversion
Pu, Yunqiao; Meng, Xianzhi; Yoo, Chang Geun; Li, Mi; Ragauskas, Arthur J
2016-01-01
Lignocellulosic biomass has been introduced as a promising resource for alternative fuels and chemicals because of its abundance and its potential to complement petroleum resources. Biomass is a complex biopolymer, and its compositional and structural characteristics vary widely with species and growth environment. Because of this complexity and variety, understanding the physicochemical characteristics of biomass is key to its effective utilization. Characterization not only provides critical information about biomass during pretreatment and bioconversion but also gives valuable insight into how to utilize it. A good grasp and proper selection of analytical methods are therefore necessary for understanding biomass characteristics. This chapter introduces existing analytical approaches that are widely employed for biomass characterization during pretreatment and conversion processes. Diverse analytical methods using Fourier transform infrared (FTIR) spectroscopy, gel permeation chromatography (GPC), and nuclear magnetic resonance (NMR) spectroscopy for biomass characterization are reviewed. In addition, methods for assessing biomass accessibility by analyzing surface properties are also summarized in this chapter.
An accurate method of extracting fat droplets in liver images for quantitative evaluation
NASA Astrophysics Data System (ADS)
Ishikawa, Masahiro; Kobayashi, Naoki; Komagata, Hideki; Shinoda, Kazuma; Yamaguchi, Masahiro; Abe, Tokiya; Hashiguchi, Akinori; Sakamoto, Michiie
2015-03-01
The steatosis in liver pathological tissue images is a promising indicator of nonalcoholic fatty liver disease (NAFLD) and the possible risk of hepatocellular carcinoma (HCC). The resulting values are also important for ensuring the automatic and accurate classification of HCC images, because the presence of many fat droplets is likely to introduce errors into the quantification of the morphological features used in the process. In this study we propose a method that can automatically detect and exclude regions with many fat droplets by using feature values of color, shape, and the arrangement of cell nuclei. We implement the method and confirm that it can accurately detect fat droplets and quantify the fat-droplet ratio of actual images. This investigation also clarifies the effective characteristics that contribute to accurate detection.
Analytical method for distribution of metallic gasket contact stress
NASA Astrophysics Data System (ADS)
Feng, Xiu; Gu, Boqing; Wei, Long; Sun, Jianjun
2008-11-01
Metallic gasket seals have been widely used in chemical and petrochemical plants. Failure of a sealing system can lead to enormous financial loss, serious environmental pollution, and personal injury. Such failures are mostly caused not by the strength of the flanges or bolts but by leakage of the connections. The leakage behavior of bolted flanged connections is related to the gasket contact stress. In particular, the non-uniform radial distribution of this stress caused by flange rotational flexibility has a major influence on the tightness of bolted flanged connections. In this paper, the deformation of the flanges is analyzed theoretically based on the Waters method and accounting for the operating pressure, and a formula for the flange rotation angle is derived. From this angle and the mechanical properties of the gasket material, a method for calculating the gasket contact stresses is put forward. The maximum stress at the outer flank of the gasket calculated by the analytical method is lower than that obtained by numerical simulation, but the mean stresses calculated by the two methods are nearly the same. The analytical method presented in this paper can be used as an engineering method for designing metallic gasket connections.
Liquid propellant rocket engine combustion simulation with a time-accurate CFD method
NASA Technical Reports Server (NTRS)
Chen, Y. S.; Shang, H. M.; Liaw, Paul; Hutt, J.
1993-01-01
Time-accurate computational fluid dynamics (CFD) algorithms are among the basic requirements as an engineering or research tool for realistic simulations of transient combustion phenomena, such as combustion instability, transient start-up, etc., inside the rocket engine combustion chamber. A time-accurate pressure based method is employed in the FDNS code for combustion model development. This is in connection with other program development activities such as spray combustion model development and efficient finite-rate chemistry solution method implementation. In the present study, a second-order time-accurate time-marching scheme is employed. For better spatial resolutions near discontinuities (e.g., shocks, contact discontinuities), a 3rd-order accurate TVD scheme for modeling the convection terms is implemented in the FDNS code. Necessary modification to the predictor/multi-corrector solution algorithm in order to maintain time-accurate wave propagation is also investigated. Benchmark 1-D and multidimensional test cases, which include the classical shock tube wave propagation problems, resonant pipe test case, unsteady flow development of a blast tube test case, and H2/O2 rocket engine chamber combustion start-up transient simulation, etc., are investigated to validate and demonstrate the accuracy and robustness of the present numerical scheme and solution algorithm.
Analytical Method for Measuring Cosmogenic (35)S in Natural Waters.
Urióstegui, Stephanie H; Bibby, Richard K; Esser, Bradley K; Clark, Jordan F
2015-06-16
Cosmogenic sulfur-35 in water as dissolved sulfate ((35)SO4) has successfully been used as an intrinsic hydrologic tracer in low-SO4, high-elevation basins. Its application in environmental waters containing high SO4 concentrations has been limited because only small amounts of SO4 can be analyzed using current liquid scintillation counting (LSC) techniques. We present a new analytical method for analyzing large amounts of BaSO4 for (35)S. We quantify efficiency gains when suspending BaSO4 precipitate in Inta-Gel Plus cocktail, purify BaSO4 precipitate to remove dissolved organic matter, mitigate interference of radium-226 and its daughter products by selection of high purity barium chloride, and optimize LSC counting parameters for (35)S determination in larger masses of BaSO4. Using this improved procedure, we achieved counting efficiencies that are comparable to published LSC techniques despite a 10-fold increase in the SO4 sample load. (35)SO4 was successfully measured in high SO4 surface waters and groundwaters containing low ratios of (35)S activity to SO4 mass demonstrating that this new analytical method expands the analytical range of (35)SO4 and broadens the utility of (35)SO4 as an intrinsic tracer in hydrologic settings. PMID:25981756
Development of A High Throughput Method Incorporating Traditional Analytical Devices
White, C. C.; Embree, E.; Byrd, W. E; Patel, A. R.
2004-01-01
A high-throughput system and a companion informatics system have been developed and implemented. High throughput is defined as the ability to autonomously evaluate large numbers of samples, while an informatics system provides software control of the physical devices as well as organization and storage of the generated electronic data. This high-throughput system includes both an ultraviolet-visible (UV-Vis) spectrometer and a Fourier transform infrared (FTIR) spectrometer integrated with a multi-sample positioning table. The method is designed to quantify changes in polymeric materials caused by controlled temperature, humidity, and high-flux UV exposures. The integration of the software control of these analytical instruments within a single computer system is presented. Challenges in extending the system to include additional analytical devices are discussed. PMID:27366626
An analytical method for regional dental manpower training.
Mulvey, P J; Foley, W J; Schneider, D P
1978-07-01
This paper presents an analytical method for dental manpower planning for use by Health Systems Agencies. The planning methods discard geopolitical boundaries in favor of Dental Service Areas (DSA). A method for defining DSAs by aggregating Minor Civil Divisions based on current population mobility and current distribution of dentists is presented. The Dental Manpower Balance Model (DMBM) is presented to calculate shortages (or surpluses) of dentists. This model uses sociodemographic data to calculate the demand for dental services and age adjusted productivity measures to calculate the effective supply of dentists. A case study for the HSA region in Northeastern New York is presented. The case study demonstrates that, although the planning methods are quite simple, they are more flexible and produce more sensitive results than the normative ratio method of manpower planning. PMID:10308627
Comparison of analytical methods for calculation of wind loads
NASA Technical Reports Server (NTRS)
Minderman, Donald J.; Schultz, Larry L.
1989-01-01
The following analysis is a comparison of analytical methods for calculating wind load pressures. The analytical methods specified in ASCE Paper No. 3269, ANSI A58.1-1982, the Standard Building Code, and the Uniform Building Code were analyzed using various hurricane speeds to determine the differences in the calculated results. The winds used for the analysis ranged from 100 mph to 125 mph, applied inland from the shoreline of a large open body of water (i.e., an enormous lake or the ocean) for a distance of 1500 feet or ten times the height of the building or structure considered. For a building or structure no more than 250 feet in height acted upon by a wind of at least 115 mph, it was determined that the method specified in ANSI A58.1-1982 calculates a larger wind load pressure than the other methods. For a building or structure between 250 feet and 500 feet tall acted upon by a wind ranging from 100 mph to 110 mph, there is no clear choice of method; for these cases, the factors that must be considered are the steady-state or peak wind velocity, the geographic location, the distance from a large open body of water, and the expected design life and its risk factor.
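A common starting point in US wind-load practice is the velocity pressure q = 0.00256 V² (q in psf, V in mph). The sketch below computes only this shared baseline and omits the exposure, gust, and importance coefficients through which the methods above actually differ:

```python
def velocity_pressure_psf(v_mph):
    """Basic velocity pressure q = 0.00256 * V**2 (q in psf, V in mph).
    Only the common starting point: the exposure, gust, and importance
    coefficients through which the code methods differ are omitted."""
    return 0.00256 * v_mph ** 2

q_100 = velocity_pressure_psf(100)  # 25.6 psf
q_115 = velocity_pressure_psf(115)
q_125 = velocity_pressure_psf(125)
```

Because q grows with V², the jump from 100 mph to 125 mph raises the baseline pressure by more than half, which is why the comparison above is so sensitive to the chosen design wind speed.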
A new analytical method for groundwater recharge and discharge estimation
NASA Astrophysics Data System (ADS)
Liang, Xiuyu; Zhang, You-Kuan
2012-07-01
A new analytical method was proposed for groundwater recharge and discharge estimation in an unconfined aquifer. The method is based on an analytical solution to the Boussinesq equation linearized in terms of h², where h is the water table elevation, with a time-dependent source term. The solution was validated with numerical simulation and shown to be a better approximation than an existing solution to the Boussinesq equation linearized in terms of h. By calibrating against the observed water levels in a monitoring well over a period of 100 days, we showed that the proposed method can be used to estimate daily recharge (R) and evapotranspiration (ET) as well as lateral drainage. Total R was reasonably estimated with a water-table fluctuation (WTF) method when water table measurements away from a fixed-head boundary were used, but total ET was overestimated and total net recharge was underestimated because the WTF method neglects lateral drainage and aquifer storage.
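The water-table fluctuation (WTF) estimate that the study compares against is simply R = Sy·Δh/Δt. A minimal sketch with hypothetical numbers (the specific yield and water-level rise below are made up), illustrating the terms whose neglect biases the WTF totals:

```python
def wtf_recharge(specific_yield, dh_m, dt_days):
    """Water-table fluctuation estimate R = Sy * dh/dt (m/day).
    Lateral drainage and ET are ignored -- exactly the terms whose
    neglect biases the WTF totals discussed in the abstract."""
    return specific_yield * dh_m / dt_days

# Hypothetical values: Sy = 0.2, a 0.05 m water-table rise in one day
r_daily = wtf_recharge(0.2, 0.05, 1.0)
```

Any drainage or ET occurring during the rise is silently folded into r_daily, which is why the analytical solution's explicit source term gives a better-separated estimate.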
NASA Astrophysics Data System (ADS)
Yamamoto, Makoto; Haseyama, Miki
A method for accurate scene segmentation using two kinds of directed graphs obtained from object matching and audio features is proposed. Generally, audiovisual materials such as broadcast programs and movies contain repeated appearances of similar shots that include frames of the same background, object, or place, and such shots belong to a single scene. Many scene segmentation methods based on this idea have been proposed; however, because they use color information as visual features, they cannot provide accurate segmentation results when the color features change between shots whose frames include the same object, owing to camera operations such as zooming and panning. To solve this problem, the proposed method performs scene segmentation using two novel approaches. In the first approach, object matching is performed between two frames belonging to different shots. Using these matching results, repeated appearances of shots whose frames include the same object can be successfully found and represented as a directed graph. In the second approach, the method generates another directed graph that represents repeated appearances of shots with similar audio features. By using these two directed graphs in combination, the degradation in segmentation accuracy that results from using only one kind of graph is avoided, and accurate scene segmentation is realized. Experimental results obtained by applying the proposed method to actual broadcast programs verify its effectiveness.
Accurate time propagation method for the coupled Maxwell and Kohn-Sham equations
NASA Astrophysics Data System (ADS)
Li, Yonghui; He, Shenglai; Russakoff, Arthur; Varga, Kálmán
2016-08-01
An accurate method for time propagation of the coupled Maxwell and time-dependent Kohn-Sham (TDKS) equation is presented. The new approach uses a simultaneous fourth-order Runge-Kutta-based propagation of the vector potential and the Kohn-Sham orbitals. The approach is compared to the conventional fourth-order Taylor propagation and predictor-corrector methods. The calculations show several computational and numerical advantages, including higher computational performance, greater stability, better accuracy, and faster convergence.
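The fourth-order Runge-Kutta update at the heart of the propagation scheme can be sketched generically. In the paper it advances the vector potential and Kohn-Sham orbitals together; the stepping logic for any y' = f(t, y) has this form:

```python
import math

def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Sanity check on y' = -y, y(0) = 1: integrate to t = 1
y, t, h = 1.0, 0.0, 0.01
for _ in range(100):
    y = rk4_step(lambda t, y: -y, t, y, h)
    t += h
error = abs(y - math.exp(-1.0))  # fourth-order global error, tiny at h = 0.01
```

The stability and accuracy advantages reported over fourth-order Taylor propagation come from applying exactly this kind of staged update simultaneously to the coupled Maxwell and TDKS variables.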
Accurate determination of specific heat at high temperatures using the flash diffusivity method
NASA Technical Reports Server (NTRS)
Vandersande, J. W.; Zoltan, A.; Wood, C.
1989-01-01
The flash diffusivity method of Parker et al. (1961) was used to accurately measure the specific heat of test samples simultaneously with thermal diffusivity, thus obtaining the thermal conductivity of these materials directly. The accuracy of the data obtained on two types of materials (n-type silicon-germanium alloys and niobium) was ±3 percent. It is shown that the method is applicable up to at least 1300 K.
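In the flash method, diffusivity follows from Parker's half-rise-time relation and conductivity from k = αρc_p once specific heat is known. A minimal sketch with hypothetical sample values (the thickness, half-rise time, density, and specific heat below are made up, not from the paper):

```python
import math

def flash_diffusivity(thickness_m, t_half_s):
    """Parker's half-rise-time relation for the flash method:
    alpha = 1.38 * L**2 / (pi**2 * t_half), i.e. ~0.1388 * L**2 / t_half."""
    return 1.38 * thickness_m ** 2 / (math.pi ** 2 * t_half_s)

def thermal_conductivity(alpha, density, specific_heat):
    """k = alpha * rho * cp -- the product that measuring alpha and cp
    simultaneously makes directly available."""
    return alpha * density * specific_heat

# Hypothetical sample: 2 mm thick, 50 ms half-rise time
alpha = flash_diffusivity(2e-3, 0.05)
# Hypothetical density (kg/m^3) and specific heat (J/(kg K))
k = thermal_conductivity(alpha, 2900.0, 650.0)
```

Measuring α and c_p in the same flash event is what lets the conductivity come out directly, without a separate steady-state experiment.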
An Effective Method to Accurately Calculate the Phase Space Factors for β−β− Decay
Neacsu, Andrei; Horoi, Mihai
2016-01-01
Accurate calculations of the electron phase space factors are necessary for reliable predictions of double-beta decay rates and for the analysis of the associated electron angular and energy distributions. We present an effective method to calculate these phase space factors that takes into account the distorted Coulomb field of the daughter nucleus, yet it allows one to easily calculate the phase space factors with good accuracy relative to the most exact methods available in the recent literature.
Back, Patricia; Matthijssens, Filip; Vanfleteren, Jacques R; Braeckman, Bart P
2012-04-01
Because superoxide is involved in various physiological processes, many efforts have been made to improve its accurate quantification. We optimized and validated a superoxide-specific and -sensitive detection method. The protocol is based on fluorescence detection of the superoxide-specific hydroethidine (HE) oxidation product, 2-hydroxyethidium. We established a method for the quantification of superoxide production in isolated mitochondria without the need for acetone extraction and purification chromatography as described in previous studies.
Selectivity in analytical chemistry: two interpretations for univariate methods.
Dorkó, Zsanett; Verbić, Tatjana; Horvai, George
2015-01-01
Selectivity is extremely important in analytical chemistry but its definition is elusive despite continued efforts by professional organizations and individual scientists. This paper shows that the existing selectivity concepts for univariate analytical methods broadly fall in two classes: selectivity concepts based on measurement error and concepts based on response surfaces (the response surface being the 3D plot of the univariate signal as a function of analyte and interferent concentration, respectively). The strengths and weaknesses of the different definitions are analyzed and contradictions between them unveiled. The error based selectivity is very general and very safe but its application to a range of samples (as opposed to a single sample) requires the knowledge of some constraint about the possible sample compositions. The selectivity concepts based on the response surface are easily applied to linear response surfaces but may lead to difficulties and counterintuitive results when applied to nonlinear response surfaces. A particular advantage of this class of selectivity is that with linear response surfaces it can provide a concentration independent measure of selectivity. In contrast, the error based selectivity concept allows only yes/no type decision about selectivity.
Gaussian Analytic Centroiding method of star image of star tracker
NASA Astrophysics Data System (ADS)
Wang, Haiyong; Xu, Ershuai; Li, Zhifeng; Li, Jingjin; Qin, Tianmu
2015-11-01
The energy distribution of an actual star image statistically follows the Gaussian law in most cases, so an optimized star image centroiding algorithm should likewise be constructed by following the Gaussian law. For a star image spot covering a certain number of pixels, the marginal distributions of the gray-level accumulation over rows and columns are presented and analyzed, from which the formulas of the Gaussian Analytic Centroiding (GAC) method are deduced; robustness is also improved thanks to the inherent filtering effect of gray accumulation. Ideal reference star images are simulated by a point spread function (PSF) in integral form. Precision and speed tests of the Gaussian analytic formulas are conducted for three Gaussian radii (0.5, 0.671, and 0.8 pixel). The simulation results show that the precision of the GAC method is better than that of the other algorithms considered when the window is no larger than 5 × 5 pixels, a widely used size. Above all, the algorithm that consumes the least time is still the novel GAC method. The GAC method thus helps improve the overall performance of attitude determination in a star tracker.
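For contrast with the analytic Gaussian fit, the familiar center-of-mass baseline computed from the same row/column gray-level accumulations looks like this (toy 3 × 3 spot; the GAC formulas themselves are given in the paper and are not reproduced here):

```python
def marginal_centroid(image):
    """Center-of-mass centroid from row/column gray-level accumulations.
    Baseline only: the GAC method instead fits these same marginals with
    analytic Gaussian formulas."""
    rows, cols = len(image), len(image[0])
    col_sum = [sum(image[r][c] for r in range(rows)) for c in range(cols)]
    row_sum = [sum(image[r]) for r in range(rows)]
    total = float(sum(row_sum))
    x = sum(c * s for c, s in enumerate(col_sum)) / total
    y = sum(r * s for r, s in enumerate(row_sum)) / total
    return x, y

# Symmetric toy spot: centroid should land on the center pixel (1, 1)
spot = [
    [0, 1, 0],
    [1, 4, 1],
    [0, 1, 0],
]
cx, cy = marginal_centroid(spot)
```

Both approaches collapse the 2-D spot to two 1-D marginals first, which is where the noise-filtering effect of gray accumulation comes from.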
Organic analysis and analytical methods development: FY 1995 progress report
Clauss, S.A.; Hoopes, V.; Rau, J.
1995-09-01
This report describes the status of organic analyses and of developing analytical methods to account for the organic components in Hanford waste tanks, with particular emphasis on tanks assigned to the Flammable Gas Watch List. The methods that have been developed are illustrated by their application to samples obtained from Tank 241-SY-103 (Tank 103-SY). The analytical data serve as an example of the status of methods development and application. Samples of the convective and nonconvective layers from Tank 103-SY were analyzed for total organic carbon (TOC). The TOC value obtained for the nonconvective layer using the hot persulfate method was 10,500 µg C/g. The TOC value obtained from samples of Tank 101-SY was 11,000 µg C/g. The average TOC value for the convective layer was 6400 µg C/g. Chelators and chelator fragments in Tank 103-SY samples were identified using derivatization gas chromatography/mass spectrometry (GC/MS). Organic components were quantified using GC/flame ionization detection. Major components in both the convective- and nonconvective-layer samples include ethylenediaminetetraacetic acid (EDTA), nitrilotriacetic acid (NTA), succinic acid, nitrosoiminodiacetic acid (NIDA), citric acid, and ethylenediaminetriacetic acid (ED3A). Preliminary results also indicate the presence of C16 and C18 carboxylic acids in the nonconvective-layer sample. Oxalic acid was one of the major components in the nonconvective layer as determined by derivatization GC/flame ionization detection.
Analytical methods for kinetic studies of biological interactions: A review.
Zheng, Xiwei; Bi, Cong; Li, Zhao; Podariu, Maria; Hage, David S
2015-09-10
The rates at which biological interactions occur can provide important information concerning the mechanism and behavior of these processes in living systems. This review discusses several analytical methods that can be used to examine the kinetics of biological interactions. These techniques include common or traditional methods such as stopped-flow analysis and surface plasmon resonance spectroscopy, as well as alternative methods based on affinity chromatography and capillary electrophoresis. The general principles and theory behind these approaches are examined, and it is shown how each technique can be utilized to provide information on the kinetics of biological interactions. Examples of applications are also given for each method. In addition, a discussion is provided on the relative advantages or potential limitations of each technique regarding its use in kinetic studies.
Evolution of microbiological analytical methods for dairy industry needs
Sohier, Danièle; Pavan, Sonia; Riou, Armelle; Combrisson, Jérôme; Postollec, Florence
2014-01-01
Traditionally, culture-based methods have been used to enumerate microbial populations in dairy products. Recent developments in molecular methods now enable faster and more sensitive analyses than classical microbiology procedures. These molecular tools allow a detailed characterization of cell physiological states and bacterial fitness and thus, offer new perspectives to integration of microbial physiology monitoring to improve industrial processes. This review summarizes the methods described to enumerate and characterize physiological states of technological microbiota in dairy products, and discusses the current deficiencies in relation to the industry’s needs. Recent studies show that Polymerase chain reaction-based methods can successfully be applied to quantify fermenting microbes and probiotics in dairy products. Flow cytometry and omics technologies also show interesting analytical potentialities. However, they still suffer from a lack of validation and standardization for quality control analyses, as reflected by the absence of performance studies and official international standards. PMID:24570675
Analytical methods for toxic gases from thermal degradation of polymers
NASA Technical Reports Server (NTRS)
Hsu, M.-T. S.
1977-01-01
Toxic gases evolved from the thermal oxidative degradation of synthetic or natural polymers in small laboratory chambers or in large scale fire tests are measured by several different analytical methods. Gas detector tubes are used for fast on-site detection of suspect toxic gases. The infrared spectroscopic method is an excellent qualitative and quantitative analysis for some toxic gases. Permanent gases such as carbon monoxide, carbon dioxide, methane and ethylene, can be quantitatively determined by gas chromatography. Highly toxic and corrosive gases such as nitrogen oxides, hydrogen cyanide, hydrogen fluoride, hydrogen chloride and sulfur dioxide should be passed into a scrubbing solution for subsequent analysis by either specific ion electrodes or spectrophotometric methods. Low-concentration toxic organic vapors can be concentrated in a cold trap and then analyzed by gas chromatography and mass spectrometry. The limitations of different methods are discussed.
Using an analytical geometry method to improve tiltmeter data presentation
Su, W.-J.
2000-01-01
The tiltmeter is a useful tool for geologic and geotechnical applications. To obtain full benefit from the tiltmeter, easy and accurate data presentations should be used. Unfortunately, the method now most commonly used for tilt data reduction may yield inaccurate, low-resolution results. This article describes a simple, accurate, and high-resolution approach developed at the Illinois State Geological Survey for data reduction and presentation. The orientation of tiltplates is determined first by using a trigonometric relationship, followed by a matrix transformation, to obtain the true amount of rotation change of the tiltplate at any given time. The mathematical derivations used for the determination and transformation are then coded into an integrated PC application by adapting the capabilities of commercial spreadsheet, database, and graphics software. Examples of data presentation from tiltmeter applications in studies of landfill covers, characterizations of mine subsidence, and investigations of slope stability are also discussed.
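The matrix-transformation step described above can be sketched in a few lines. This is an illustrative assumption of one common convention (biaxial tilt readings rotated from plate axes into geographic components); the function name, sign convention, and units are not from the article:

```python
import numpy as np

def transform_tilt(x_tilt, y_tilt, azimuth_deg):
    """Rotate raw biaxial tilt readings from the tiltplate's own axes
    into geographic (north, east) components.

    x_tilt, y_tilt : raw tilts along the plate's axes (e.g., microradians)
    azimuth_deg    : orientation of the plate's x-axis, clockwise from
                     north, as found in the trigonometric survey step.
    """
    a = np.radians(azimuth_deg)
    # 2-D rotation matrix taking plate coordinates to geographic coordinates
    R = np.array([[np.cos(a), -np.sin(a)],
                  [np.sin(a),  np.cos(a)]])
    north, east = R @ np.array([x_tilt, y_tilt])
    return north, east
```

With the plate's x-axis pointing north (azimuth 0), an x-tilt is a pure north tilt; rotated 90 degrees, the same reading becomes a pure east tilt.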
Sieracki, M E; Reichenbach, S E; Webb, K L
1989-01-01
The accurate measurement of bacterial and protistan cell biomass is necessary for understanding their population and trophic dynamics in nature. Direct measurement of fluorescently stained cells is often the method of choice. The tedium of making such measurements visually on the large numbers of cells required has prompted the use of automatic image analysis for this purpose. Accurate measurements by image analysis require an accurate, reliable method of segmenting the image, that is, distinguishing the brightly fluorescing cells from a dark background. This is commonly done by visually choosing a threshold intensity value which most closely coincides with the outline of the cells as perceived by the operator. Ideally, an automated method based on the cell image characteristics should be used. Since the optical nature of edges in images of light-emitting, microscopic fluorescent objects is different from that of images generated by transmitted or reflected light, it seemed that automatic segmentation of such images may require special considerations. We tested nine automated threshold selection methods using standard fluorescent microspheres ranging in size and fluorescence intensity and fluorochrome-stained samples of cells from cultures of cyanobacteria, flagellates, and ciliates. The methods included several variations based on the maximum intensity gradient of the sphere profile (first derivative), the minimum in the second derivative of the sphere profile, the minimum of the image histogram, and the midpoint intensity. Our results indicated that thresholds determined visually and by first-derivative methods tended to overestimate the threshold, causing an underestimation of microsphere size. The method based on the minimum of the second derivative of the profile yielded the most accurate area estimates for spheres of different sizes and brightnesses and for four of the five cell types tested. A simple model of the optical properties of fluorescing objects and
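The winning criterion above, thresholding at the minimum of the second derivative of the edge profile, can be sketched as follows. The synthetic logistic edge and the function name are illustrative, not the authors' implementation:

```python
import numpy as np

def second_derivative_threshold(profile):
    """Choose a segmentation threshold from a 1-D intensity profile taken
    across a fluorescing object's edge: return the intensity at the
    minimum (most negative point) of the profile's second derivative."""
    d2 = np.gradient(np.gradient(profile))
    return profile[np.argmin(d2)]
```

For a smooth bright-object edge, this criterion places the threshold well above the midpoint intensity, which is consistent with the finding that visual and first-derivative thresholds tend to sit too low and shrink the measured object.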
Analytical methods for human biomonitoring of pesticides. A review.
Yusa, Vicent; Millet, Maurice; Coscolla, Clara; Roca, Marta
2015-09-01
Biomonitoring of both currently-used and banned-persistent pesticides is a very useful tool for assessing human exposure to these chemicals. In this review, we present current approaches and recent advances in the analytical methods for determining the biomarkers of exposure to pesticides in the most commonly used specimens, such as blood, urine, and breast milk, and in emerging non-invasive matrices such as hair and meconium. We critically discuss the main applications for sample treatment, and the instrumental techniques currently used to determine the most relevant pesticide biomarkers. We finally look at the future trends in this field.
Flue gas desulfurization (FGD) chemistry and analytical methods handbook
Noblett, J.G.; Burke, J.M.
1990-08-01
The purpose of this handbook is to provide a comprehensive guide to sampling, analytical, and physical test methods essential to the operation, maintenance, and understanding of flue gas desulfurization (FGD) system chemistry. EPRI sponsored the first edition of this three-volume report in response to the needs of electric utility personnel responsible for establishing and operating commercial FGD analytical laboratories. The second, revised editions of Volumes 1 and 2 were prompted by the results of research into various non-standard aspects of FGD system chemistry. Volume 1 of the handbook explains FGD system chemistry in the detail necessary to understand how the processes operate and how process performance indicators can be used to optimize system operation. Volume 2 includes 63 physical-testing and chemical-analysis methods for reagents, slurries, and solids, and information on the applicability of individual methods to specific FGD systems. Volume 3 contains instructions for the FGD solution-chemistry computer program designated by EPRI as FGDLIQEQ. Executable on IBM-compatible personal computers, this program calculates the concentrations (activities) of chemical species (ions) in scrubber liquor and can calculate driving forces for important chemical reactions such as SO2 absorption and calcium sulfite and sulfate precipitation. This program and selected chemical analyses will help an FGD system operator optimize system performance, prevent many potential process problems, and define solutions to existing problems. 22 refs., 17 figs., 28 tabs.
Performance of analytical methods for tomographic gamma scanning
Prettyman, T.H.; Mercer, D.J.
1997-06-01
The use of gamma-ray computerized tomography for nondestructive assay of radioactive materials has led to the development of specialized analytical methods. Over the past few years, Los Alamos has developed and implemented a computer code, called ARC-TGS, for the analysis of data obtained by tomographic gamma scanning (TGS). ARC-TGS reduces TGS transmission and emission tomographic data, providing the user with images of the sample contents, the activity or mass of selected radionuclides, and an estimate of the uncertainty in the measured quantities. The results provided by ARC-TGS can be corrected for self-attenuation when the isotope of interest emits more than one gamma-ray. In addition, ARC-TGS provides information needed to estimate TGS quantification limits and to estimate the scan time needed to screen for small amounts of radioactivity. In this report, an overview of the analytical methods used by ARC-TGS is presented along with an assessment of the performance of these methods for TGS.
Wong, William W; Clarke, Lucinda L
2012-11-01
Stable hydrogen isotope methodology is used in nutrition studies to measure growth, breast milk intake, and energy requirement. Isotope ratio MS is the best instrumentation to measure the stable hydrogen isotope ratios in physiological fluids. Conventional methods to convert physiological fluids to hydrogen gas (H2) for mass spectrometric analysis are labor intensive, require special reagents, and involve memory effects and potential isotope fractionation. The objective of this study was to determine the accuracy and precision of a platinum-catalyzed H2-water equilibration method for stable hydrogen isotope ratio measurements. Time to reach isotopic equilibrium, day-to-day and week-to-week reproducibility, accuracy, and precision of stable hydrogen isotope ratio measurements by the H2-water equilibration method were assessed using a Thermo DELTA V Advantage continuous-flow isotope ratio mass spectrometer. It took 3 h to reach isotopic equilibrium. The day-to-day and week-to-week measurements on water and urine samples with natural abundance and enriched levels of deuterium were highly reproducible. The method was accurate to within 2.8 ‰ and reproducible to within 4.0 ‰ based on analysis of international references. All the outcome variables, whether in urine samples collected in 10 doubly labeled water studies or plasma samples collected in 26 body water studies, did not differ from those obtained using the reference zinc reduction method. The method produced highly accurate estimates of ad libitum energy intakes, body composition, and water turnover rates. The method greatly reduces the analytical cost and could easily be adopted by laboratories equipped with a continuous-flow isotope ratio mass spectrometer.
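The accuracy figures above are quoted in the per mil (‰) delta notation standard for isotope-ratio work; the arithmetic is simply the relative deviation of the sample's D/H ratio from a reference, scaled by 1000. A minimal sketch (the VSMOW D/H ratio below is the standard literature value, not a number from this study):

```python
# D/H isotope ratio of the VSMOW reference water (standard literature value)
R_VSMOW = 155.76e-6

def delta_d_permil(r_sample, r_standard=R_VSMOW):
    """delta-D in per mil (permil): relative deviation of a sample's D/H
    ratio from the reference standard, multiplied by 1000."""
    return (r_sample / r_standard - 1.0) * 1000.0
```

A sample with a D/H ratio 10% above VSMOW thus reads +100 ‰.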
Experimental validation of an analytical method of calculating photon distributions
Wells, R.G.; Celler, A.; Harrop, R.
1996-12-31
We have developed a method for analytically calculating photon distributions in SPECT projections. This method models primary photon distributions as well as first and second order Compton scattering and Rayleigh scattering. It uses no free fitting parameters and so the projections produced are completely determined by the characteristics of the SPECT camera system, the energy of the isotope, an estimate of the source distribution and an attenuation map of the scattering object. The method was previously validated by comparison with Monte Carlo simulations and we are now verifying its accuracy with respect to phantom experiments. We have performed experiments using a Siemens MS3 SPECT camera system for a point source (2 mm in diameter) within a homogeneous water bath and a small spherical source (1 cm in diameter) within both a homogeneous water cylinder and a non-homogeneous medium consisting of air and water. Our technique reproduces well the distribution of photons in the experimentally acquired projections.
Method of Analytic Evolution of Flat Distribution Amplitudes in QCD
NASA Astrophysics Data System (ADS)
Tandogan, Asli; Radyushkin, Anatoly V.
A new analytical method of performing ERBL evolution is described. The main goal is to develop an approach that works for distribution amplitudes that do not vanish at the end points, for which the standard method of expansion in Gegenbauer polynomials is inefficient. Two cases of the initial DA are considered: a purely flat DA, given by the same constant for all x, and an antisymmetric DA given by opposite constants on either side of x = 1/2. For a purely flat DA, the evolution is governed by an overall (x x̄)^t dependence on the evolution parameter t times a factor that was calculated as an expansion in t. For an antisymmetric flat DA, an extra overall factor |1 - 2x|^(2t) appears due to the jump at x = 1/2. A good convergence was observed in the t ≲ 1/2 region. For larger t, one can use the standard method of the Gegenbauer expansion.
An analytical sensitivity method for use in integrated aeroservoelastic aircraft design
NASA Technical Reports Server (NTRS)
Gilbert, Michael G.
1989-01-01
Interdisciplinary analysis capabilities have been developed for aeroservoelastic aircraft and large flexible spacecraft, but the requisite integrated design methods are only beginning to be developed. One integrated design method which has received attention is based on hierarchal problem decompositions, optimization, and design sensitivity analyses. This paper highlights a design sensitivity analysis method for Linear Quadratic Gaussian (LQG) optimal control laws, enabling the use of LQG techniques in the hierarchal design methodology. The LQG sensitivity analysis method calculates the change in the optimal control law and resulting controlled system responses due to changes in fixed design integration parameters using analytical sensitivity equations. Numerical results of a LQG design sensitivity analysis for a realistic aeroservoelastic aircraft example are presented. In this example, the sensitivity of the optimal control law and aircraft response for various parameters such as wing bending natural frequency is determined. The sensitivity results computed from the analytical expressions are used to estimate changes in response resulting from changes in the parameters. Comparisons of the estimates with exact calculated responses show they are reasonably accurate for ±15 percent changes in the parameters. Evaluation of the analytical expressions is computationally faster than equivalent finite difference calculations.
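The idea of differentiating an optimal control law analytically rather than by finite differences can be illustrated on a scalar LQR problem. This toy stand-in is not the paper's LQG aircraft model; the symbols are the usual scalar Riccati quantities for the plant xdot = a x + b u with cost weights q and r:

```python
import math

def lqr_gain(a, b, q, r):
    """Optimal gain K = b P / r, where P is the positive root of the
    scalar continuous-time Riccati equation 2 a P - P^2 b^2 / r + q = 0."""
    P = (a + math.sqrt(a * a + q * b * b / r)) * r / (b * b)
    return b * P / r

def lqr_gain_sensitivity(a, b, q, r):
    """dK/da obtained by implicit differentiation of the Riccati equation,
    i.e. the analytic-sensitivity idea in miniature."""
    P = lqr_gain(a, b, q, r) * r / b
    dP_da = P / (P * b * b / r - a)   # from 2P + 2a dP - 2 P b^2/r dP = 0
    return b * dP_da / r
```

The analytic sensitivity agrees with a central finite difference but needs only one Riccati solve, which mirrors the speed advantage reported above.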
The evolution of analytical chemistry methods in foodomics.
Gallo, Monica; Ferranti, Pasquale
2016-01-01
The methodologies of food analysis have greatly evolved over the past 100 years, from basic assays based on solution chemistry to those relying on the modern instrumental platforms. Today, the development and optimization of integrated analytical approaches based on different techniques to study at molecular level the chemical composition of a food may allow to define a 'food fingerprint', valuable to assess nutritional value, safety and quality, authenticity and security of foods. This comprehensive strategy, defined foodomics, includes emerging work areas such as food chemistry, phytochemistry, advanced analytical techniques, biosensors and bioinformatics. Integrated approaches can help to elucidate some critical issues in food analysis, but also to face the new challenges of a globalized world: security, sustainability and food productions in response to environmental world-wide changes. They include the development of powerful analytical methods to ensure the origin and quality of food, as well as the discovery of biomarkers to identify potential food safety problems. In the area of nutrition, the future challenge is to identify, through specific biomarkers, individual peculiarities that allow early diagnosis and then a personalized prognosis and diet for patients with food-related disorders. Far from the aim of an exhaustive review of the abundant literature dedicated to the applications of omic sciences in food analysis, we will explore how classical approaches, such as those used in chemistry and biochemistry, have evolved to intersect with the new omics technologies to produce a progress in our understanding of the complexity of foods. Perhaps most importantly, a key objective of the review will be to explore the development of simple and robust methods for a fully applied use of omics data in food science. PMID:26363946
Kearns, F L; Hudson, P S; Boresch, S; Woodcock, H L
2016-01-01
Enzyme activity is inherently linked to free energies of transition states, ligand binding, protonation/deprotonation, etc.; these free energies, and thus enzyme function, can be affected by residue mutations, allosterically induced conformational changes, and much more. Therefore, being able to predict free energies associated with enzymatic processes is critical to understanding and predicting their function. Free energy simulation (FES) has historically been a computational challenge as it requires both the accurate description of inter- and intramolecular interactions and adequate sampling of all relevant conformational degrees of freedom. The hybrid quantum mechanical molecular mechanical (QM/MM) framework is the current tool of choice when accurate computations of macromolecular systems are essential. Unfortunately, robust and efficient approaches that employ the high levels of computational theory needed to accurately describe many reactive processes (i.e., ab initio, DFT), while also including explicit solvation effects and accounting for extensive conformational sampling are essentially nonexistent. In this chapter, we will give a brief overview of two recently developed methods that mitigate several major challenges associated with QM/MM FES: the QM non-Boltzmann Bennett's acceptance ratio method and the QM nonequilibrium work method. We will also describe usage of these methods to calculate free energies associated with (1) relative properties and (2) along reaction paths, using simple test cases relevant to enzymes.
Using analytic network process for evaluating mobile text entry methods.
Ocampo, Lanndon A; Seva, Rosemary R
2016-01-01
This paper highlights a preference evaluation methodology for text entry methods in a touch keyboard smartphone using the analytic network process (ANP). Evaluations of text entry methods in the literature mainly consider speed and accuracy. This study presents an alternative means of selecting a text entry method that considers user preference. A case study was carried out with a group of experts who were asked to develop a selection decision model of five text entry methods. The decision problem is flexible enough to reflect the interdependencies of decision elements that are necessary to describe real-life conditions. Results showed that the QWERTY method is preferred over the other text entry methods, while the arrangement of keys is the most important criterion in characterizing a sound method. Sensitivity analysis using simulation of normally distributed random numbers under fairly large perturbation showed the foregoing results to be reliable enough to reflect robust judgment. The main contribution of this paper is the introduction of a multi-criteria decision approach to the preference evaluation of text entry methods.
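The weighting step at the core of ANP (as in AHP, from which it derives) extracts priorities from a pairwise comparison matrix via its principal eigenvector. A minimal power-iteration sketch of that step, with illustrative matrix values rather than the study's actual judgments:

```python
import numpy as np

def priority_vector(m, iters=200):
    """Priorities from a pairwise comparison matrix m (m[i, j] = how much
    alternative i is preferred over j), computed by power iteration toward
    the principal eigenvector and normalized to sum to one."""
    w = np.ones(m.shape[0]) / m.shape[0]
    for _ in range(iters):
        w = m @ w
        w /= w.sum()
    return w
```

For a perfectly consistent matrix built from underlying scores, the recovered priorities are those scores normalized; real expert judgments are only approximately consistent, which is why the eigenvector (rather than any single column) is used.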
A New Cation-Exchange Method for Accurate Field Speciation of Hexavalent Chromium
Ball, James W.; McCleskey, R. Blaine
2003-01-01
A new cation-exchange method for field speciation of Cr(VI) has been developed to meet present stringent regulatory standards and to overcome the limitations of existing methods. The new method allows measurement of Cr(VI) concentrations as low as 0.05 micrograms per liter, storage of samples for at least several weeks prior to analysis, and use of readily available analytical instrumentation. The sensitivity, accuracy, and precision of the determination in waters over the pH range of 2 to 11 and Fe concentrations up to 1 milligram per liter are equal to or better than existing methods such as USEPA method 218.6. Time stability of preserved samples is a significant advantage over the 24-hour time constraint specified for USEPA method 218.6.
A fast and accurate method to predict 2D and 3D aerodynamic boundary layer flows
NASA Astrophysics Data System (ADS)
Bijleveld, H. A.; Veldman, A. E. P.
2014-12-01
A quasi-simultaneous interaction method is applied to predict 2D and 3D aerodynamic flows. This method is suitable for offshore wind turbine design software as it is a very accurate and computationally reasonably cheap method. This study shows the results for a NACA 0012 airfoil. The two applied solvers converge to the experimental values when the grid is refined. We also show that in separation the eigenvalues remain positive, thus avoiding the Goldstein singularity at separation. In 3D we show a flow over a dent in which separation occurs. A rotating flat plate is used to show the applicability of the method to rotating flows. The demonstrated capabilities of the method indicate that the quasi-simultaneous interaction method is suitable for design methods for offshore wind turbine blades.
Flynn, Jullien M; Brown, Emily A; Chain, Frédéric J J; MacIsaac, Hugh J; Cristescu, Melania E
2015-01-01
Metabarcoding has the potential to become a rapid, sensitive, and effective approach for identifying species in complex environmental samples. Accurate molecular identification of species depends on the ability to generate operational taxonomic units (OTUs) that correspond to biological species. Due to the sometimes enormous estimates of biodiversity using this method, there is a great need to test the efficacy of data analysis methods used to derive OTUs. Here, we evaluate the performance of various methods for clustering length variable 18S amplicons from complex samples into OTUs using a mock community and a natural community of zooplankton species. We compare analytic procedures consisting of a combination of (1) stringent and relaxed data filtering, (2) singleton sequences included and removed, (3) three commonly used clustering algorithms (mothur, UCLUST, and UPARSE), and (4) three methods of treating alignment gaps when calculating sequence divergence. Depending on the combination of methods used, the number of OTUs varied by nearly two orders of magnitude for the mock community (60–5068 OTUs) and three orders of magnitude for the natural community (22–22191 OTUs). The use of relaxed filtering and the inclusion of singletons greatly inflated OTU numbers without increasing the ability to recover species. Our results also suggest that the method used to treat gaps when calculating sequence divergence can have a great impact on the number of OTUs. Our findings are particularly relevant to studies that cover taxonomically diverse species and employ markers such as rRNA genes in which length variation is extensive. PMID:26078860
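One of the factors varied above, how alignment gaps are treated when calculating sequence divergence, can be sketched for a single pair of aligned reads. The toy sequences are illustrative; real pipelines compute full pairwise distance matrices over thousands of amplicons:

```python
def divergence(a, b, count_gaps=False):
    """Pairwise divergence of two aligned sequences ('-' marks a gap).
    count_gaps=False ignores gap columns entirely; count_gaps=True scores
    every gap column as a difference. Length variation in markers such as
    18S makes this choice matter."""
    assert len(a) == len(b)
    diffs = valid = 0
    for x, y in zip(a, b):
        if x == '-' or y == '-':
            if count_gaps:
                valid += 1
                diffs += 1
            continue
        valid += 1
        if x != y:
            diffs += 1
    return diffs / valid
```

Because OTUs are formed by clustering at a divergence cutoff, the same data can land on either side of the cutoff depending on this single switch, which is one way gap treatment inflates or deflates OTU counts.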
Stanley, Jeffrey R; Adkins, Joshua N; Slysz, Gordon W; Monroe, Matthew E; Purvine, Samuel O; Karpievitch, Yuliya V; Anderson, Gordon A; Smith, Richard D; Dabney, Alan R
2011-08-15
Current algorithms for quantifying peptide identification confidence in the accurate mass and time (AMT) tag approach assume that the AMT tags themselves have been correctly identified. However, there is uncertainty in the identification of AMT tags, because this is based on matching LC-MS/MS fragmentation spectra to peptide sequences. In this paper, we incorporate confidence measures for the AMT tag identifications into the calculation of probabilities for correct matches to an AMT tag database, resulting in a more accurate overall measure of identification confidence for the AMT tag approach. The method is referenced as Statistical Tools for AMT Tag Confidence (STAC). STAC additionally provides a uniqueness probability (UP) to help distinguish between multiple matches to an AMT tag and a method to calculate an overall false discovery rate (FDR). STAC is freely available for download, as both a command line and a Windows graphical application.
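The core probabilistic adjustment can be caricatured in a few lines: if the database tag itself may be misidentified, the naive match probability must be discounted by the tag's own confidence. This is an illustrative simplification assuming independence, not the actual STAC statistics:

```python
def overall_confidence(p_match, p_tag_correct):
    """Probability that a feature-to-AMT-tag match is truly correct when
    the tag identification itself is uncertain: both the match and the
    tag identification must be correct, so the probabilities multiply
    (independence assumed for illustration)."""
    return p_match * p_tag_correct

def expected_fdr(confidences):
    """Overall false discovery rate for a set of accepted matches:
    the mean probability that each accepted match is wrong."""
    return sum(1.0 - p for p in confidences) / len(confidences)
```

A 90% match to a tag that is itself only 80% certain is thus a 72% identification, and averaging such complements over all accepted matches gives an FDR estimate in the spirit described above.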
Fast and accurate numerical method for predicting gas chromatography retention time.
Claumann, Carlos Alberto; Wüst Zibetti, André; Bolzan, Ariovaldo; Machado, Ricardo A F; Pinto, Leonel Teixeira
2015-08-01
Predictive modeling for gas chromatography compound retention depends on the retention factor (ki) and on the flow of the mobile phase. Thus, different approaches for determining an analyte's ki in column chromatography have been developed. The main one is based on the thermodynamic properties of the component and on the characteristics of the stationary phase. These models can be used to estimate the parameters and to optimize the temperature programming, in gas chromatography, for the separation of compounds. Different authors have proposed the use of numerical methods for solving these models, but these methods demand greater computational time. Hence, a new method for solving the predictive modeling of analyte retention time is presented. This algorithm is an alternative to traditional methods because it recasts the prediction as a root-finding problem within defined intervals. The proposed approach allows the retention time (tr) to be calculated with an accuracy determined by the user of the method and with significant reductions in computational time; it can also be used to evaluate the performance of other prediction methods.
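The root-finding framing can be sketched end to end: a compound elutes when its migration integral reaches one, so tr is the root of migration(tr) - 1 = 0 on a bracketed interval. Everything below is a hypothetical toy model (the retention law, oven program, parameter values, and function names are assumptions, not the paper's model), shown only to make the recasting concrete:

```python
import math

def k_factor(temp_k, a=10.0, b=4000.0):
    """Hypothetical retention model: ln k = -a + b / T."""
    return math.exp(-a + b / temp_k)

def oven_temp(t, t0_k=320.0, ramp_k_per_s=0.1):
    """Linear oven temperature program, in kelvin."""
    return t0_k + ramp_k_per_s * t

def migration(tr, dead_time_s=30.0, dt=0.1):
    """Fraction of the column traversed by time tr; elution occurs at 1.0."""
    s, t = 0.0, 0.0
    while t < tr:
        s += dt / (dead_time_s * (1.0 + k_factor(oven_temp(t))))
        t += dt
    return s

def retention_time(lo=0.0, hi=2000.0, tol=1e-3):
    """Root of migration(tr) - 1 = 0 by bisection on a defined interval,
    with accuracy set by the caller via tol."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if migration(mid) < 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Bisection is the simplest bracketed root-finder; faster bracketed methods (e.g., Brent's) fit the same recasting.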
Novel accurate and scalable 3-D MT forward solver based on a contracting integral equation method
NASA Astrophysics Data System (ADS)
Kruglyakov, M.; Geraskin, A.; Kuvshinov, A.
2016-11-01
We present a novel, open source 3-D MT forward solver based on a method of integral equations (IE) with contracting kernel. Special attention in the solver is paid to accurate calculations of Green's functions and their integrals which are cornerstones of any IE solution. The solver supports massive parallelization and is able to deal with highly detailed and contrasting models. We report results of a 3-D numerical experiment aimed at analyzing the accuracy and scalability of the code.
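The "contracting kernel" idea is that the integral operator is rescaled so that simple fixed-point iteration is guaranteed to converge. In discretized miniature, with a matrix K standing in for the integral operator (an illustrative sketch, not the MT solver's formulation):

```python
import numpy as np

def contraction_solve(K, b, tol=1e-12, max_iter=1000):
    """Solve x = b + K x by fixed-point iteration. Convergence is
    guaranteed when ||K|| < 1, which is exactly the property a
    'contracting kernel' reformulation is designed to ensure."""
    x = b.copy()
    for _ in range(max_iter):
        x_new = b + K @ x
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```

Because each iteration only applies the operator, the scheme parallelizes naturally, which is consistent with the massive parallelization and high-contrast robustness claimed above.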
Friedmann-Lemaitre cosmologies via roulettes and other analytic methods
NASA Astrophysics Data System (ADS)
Chen, Shouxin; Gibbons, Gary W.; Yang, Yisong
2015-10-01
In this work a series of methods are developed for understanding the Friedmann equation when it is beyond the reach of the Chebyshev theorem. First it will be demonstrated that every solution of the Friedmann equation admits a representation as a roulette, such that information on the latter may be used to obtain that for the former. Next the Friedmann equation is integrated for a quadratic equation of state and for the Randall-Sundrum II universe, yielding a rich collection of new and interesting phenomena. Finally an analytic method is used to isolate the asymptotic behavior of the solutions of the Friedmann equation, when the equation of state is of an extended form which renders the integration impossible, and to establish a universal exponential growth law.
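For orientation, the system under discussion can be sketched in a generic form; the paper's exact parametrization and conventions may differ, and the quadratic equation of state below is written with illustrative coefficients:

```latex
% Friedmann equation for scale factor a(t), curvature k, density \rho
\left(\frac{\dot a}{a}\right)^{2} \;=\; \frac{8\pi G}{3}\,\rho \;-\; \frac{k}{a^{2}},
\qquad
\dot\rho \;+\; 3\,\frac{\dot a}{a}\,(\rho + p) \;=\; 0,
% quadratic equation of state (illustrative coefficients w_0, \alpha)
\qquad
p \;=\; w_{0}\,\rho \;+\; \alpha\,\rho^{2}.
```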
A novel unified coding analytical method for Internet of Things
NASA Astrophysics Data System (ADS)
Sun, Hong; Zhang, JianHong
2013-08-01
This paper presents a novel unified coding analytical method for the Internet of Things, which abstracts out 'displacement goods' and 'physical objects' and expounds the relationship between them. It details the item coding principles and establishes a one-to-one relationship between the three-dimensional spatial coordinates of points and global manufacturers, a scheme that can be expanded indefinitely. This novel unified coding method solves the problem of unified coding in the production and circulation phases, and the paper further explains how to update the item information corresponding to the coding in the sale and use stages, so that the Internet of Things can carry out real-time monitoring and intelligent management of each item.
NASA Astrophysics Data System (ADS)
Lowry, Thomas; Li, Shu-Guang
2005-02-01
Difficulty in solving the transient advection-diffusion equation (ADE) stems from the relationship between the advection derivatives and the time derivative. For a solution method to be viable, it must account for this relationship by being accurate in both space and time. This research presents a unique method for solving the time-dependent ADE that does not discretize the derivative terms but rather solves the equation analytically in the space-time domain. The method is computationally efficient and numerically accurate and addresses the common limitations of numerical dispersion and spurious oscillations that can be prevalent in other solution methods. The method is based on the improved finite analytic (IFA) solution method [Lowry TS, Li S-G. A characteristic based finite analytic method for solving the two-dimensional steady-state advection-diffusion equation. Water Resour Res 38 (7), 10.1029/2001WR000518] in space coupled with a Laplace transformation in time. In this way, the method has no Courant condition and maintains accuracy in space and time, performing well even at high Peclet numbers. The method is compared to a hybrid method of characteristics, a random walk particle tracking method, and an Eulerian-Lagrangian Localized Adjoint Method using various degrees of flow-field heterogeneity across multiple Peclet numbers. Results show the IFALT method (IFA coupled with the Laplace transform in time) to be computationally more efficient while producing similar or better accuracy than the other methods.
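A standard analytic benchmark for transient ADE solvers of this kind is the classic Ogata-Banks (1961) solution for one-dimensional transport with a continuous inlet concentration. The sketch below is that textbook benchmark, not the space-time method described above:

```python
import math

def ogata_banks(x, t, v, D, c0=1.0):
    """Analytic solution of the 1-D advection-diffusion equation
    dc/dt + v dc/dx = D d2c/dx2 on x >= 0, with c(0, t) = c0 and a
    zero initial condition (Ogata & Banks, 1961). Useful as a reference
    when validating numerical or semi-analytic ADE solvers."""
    a = (x - v * t) / (2.0 * math.sqrt(D * t))
    b = (x + v * t) / (2.0 * math.sqrt(D * t))
    return 0.5 * c0 * (math.erfc(a) + math.exp(v * x / D) * math.erfc(b))
```

Far behind the advancing front the concentration approaches c0, and it decays monotonically ahead of the front; the second term overflows for very large v*x/D, so benchmarks are usually restricted to moderate Peclet numbers in this raw form.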
GenoSets: Visual Analytic Methods for Comparative Genomics
Cain, Aurora A.; Kosara, Robert; Gibas, Cynthia J.
2012-01-01
Many important questions in biology are, fundamentally, comparative, and this extends to our analysis of a growing number of sequenced genomes. Existing genomic analysis tools are often organized around literal views of genomes as linear strings. Even when information is highly condensed, these views grow cumbersome as larger numbers of genomes are added. Data aggregation and summarization methods from the field of visual analytics can provide abstracted comparative views, suitable for sifting large multi-genome datasets to identify critical similarities and differences. We introduce a software system for visual analysis of comparative genomics data. The system automates the process of data integration, and provides the analysis platform to identify and explore features of interest within these large datasets. GenoSets borrows techniques from business intelligence and visual analytics to provide a rich interface of interactive visualizations supported by a multi-dimensional data warehouse. In GenoSets, visual analytic approaches are used to enable querying based on orthology, functional assignment, and taxonomic or user-defined groupings of genomes. GenoSets links this information together with coordinated, interactive visualizations for both detailed and high-level categorical analysis of summarized data. GenoSets has been designed to simplify the exploration of multiple genome datasets and to facilitate reasoning about genomic comparisons. Case examples are included showing the use of this system in the analysis of 12 Brucella genomes. GenoSets software and the case study dataset are freely available at http://genosets.uncc.edu. We demonstrate that the integration of genomic data using a coordinated multiple view approach can simplify the exploration of large comparative genomic data sets, and facilitate reasoning about comparisons and features of interest. PMID:23056299
An analytical method for computing atomic contact areas in biomolecules.
Mach, Paul; Koehl, Patrice
2013-01-15
We propose a new analytical method for detecting and computing contacts between atoms in biomolecules. It is based on the alpha shape theory and proceeds in three steps. First, we compute the weighted Delaunay triangulation of the union of spheres representing the molecule. In the second step, the Delaunay complex is filtered to derive the dual complex. Finally, contacts between spheres are collected. In this approach, two atoms i and j are defined to be in contact if their centers are connected by an edge in the dual complex. The contact areas between atom i and its neighbors are computed based on the caps formed by these neighbors on the surface of i; the total area of all these caps is partitioned according to their spherical Laguerre Voronoi diagram on the surface of i. This method is analytical and its implementation in a new program BallContact is fast and robust. We have used BallContact to study contacts in a database of 1551 high resolution protein structures. We show that with this new definition of atomic contacts, we generate realistic representations of the environments of atoms and residues within a protein. In particular, we establish the importance of nonpolar contact areas that complement the information represented by the accessible surface areas. This new method bears similarity to the tessellation methods used to quantify atomic volumes and contacts, with the advantage that it does not require the presence of explicit solvent molecules if the surface of the protein is to be considered. © 2012 Wiley Periodicals, Inc.
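One geometric ingredient of such contact-area calculations is elementary and can be sketched directly: the area of the spherical cap that one atom's sphere cuts on a neighbor. The snippet below shows only this basic cap-area formula for two intersecting spheres; it does not implement the paper's alpha-shape filtering or the Laguerre-Voronoi partition of overlapping caps.

```python
import math

def cap_area_on_sphere1(r1, r2, d):
    """Area of the spherical cap cut on sphere 1 (radius r1) by an
    intersecting sphere 2 (radius r2) whose center is distance d away.
    Assumes the spheres genuinely intersect: |r1 - r2| < d < r1 + r2."""
    a = (d * d + r1 * r1 - r2 * r2) / (2.0 * d)  # center 1 to intersection plane
    h = r1 - a                                   # cap height on sphere 1
    return 2.0 * math.pi * r1 * h                # spherical cap area

# Two unit spheres with centers 1.0 apart: the cap height is 0.5,
# so the cap area is 2*pi*1*0.5 = pi.
area = cap_area_on_sphere1(1.0, 1.0, 1.0)
print(area)   # ~3.14159
```

In the paper's scheme, the caps formed by all neighbors on atom i are then partitioned among the neighbors before summing, so that overlapping caps are not double-counted.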
An analytic method to compute star cluster luminosity statistics
NASA Astrophysics Data System (ADS)
da Silva, Robert L.; Krumholz, Mark R.; Fumagalli, Michele; Fall, S. Michael
2014-03-01
The luminosity distribution of the brightest star clusters in a population of galaxies encodes critical pieces of information about how clusters form, evolve and disperse, and whether and how these processes depend on the large-scale galactic environment. However, extracting constraints on models from these data is challenging, in part because comparisons between theory and observation have traditionally required computationally intensive Monte Carlo methods to generate mock data that can be compared to observations. We introduce a new method that circumvents this limitation by allowing analytic computation of cluster order statistics, i.e. the luminosity distribution of the Nth most luminous cluster in a population. Our method is flexible and requires few assumptions, allowing for parametrized variations in the initial cluster mass function and its upper and lower cutoffs, variations in the cluster age distribution, stellar evolution and dust extinction, as well as observational uncertainties in both the properties of star clusters and their underlying host galaxies. The method is fast enough to make it feasible for the first time to use Markov chain Monte Carlo methods to search parameter space to find best-fitting values for the parameters describing cluster formation and disruption, and to obtain rigorous confidence intervals on the inferred values. We implement our method in a software package called the Cluster Luminosity Order-Statistic Code, which we have made publicly available.
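The order-statistic idea is simple to state under Poisson sampling: if N(>L) is the expected number of clusters more luminous than L, then the probability that the Nth most luminous cluster is fainter than L is the Poisson probability of fewer than N clusters above L. The toy sketch below assumes a pure power-law cumulative luminosity function for illustration; the paper's actual model also includes mass-function cutoffs, age distributions, stellar evolution, extinction, and observational uncertainties.

```python
import math

def n_brighter(L, n0=1.0):
    """Toy cumulative luminosity function: expected number of clusters
    brighter than L (L in units of the minimum luminosity). Assumed
    pure power law with slope -1, for illustration only."""
    return n0 / L

def cdf_kth_brightest(L, k, n0=1.0):
    """P(k-th most luminous cluster < L): Poisson probability that
    fewer than k clusters are brighter than L."""
    mu = n_brighter(L, n0)
    return math.exp(-mu) * sum(mu ** j / math.factorial(j) for j in range(k))

def median_brightest(n0=1.0, lo=1.0, hi=1e6):
    """Median luminosity of the single brightest cluster, by bisection."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if cdf_kth_brightest(mid, 1, n0) < 0.5:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# For k = 1 the median satisfies n_brighter(L) = ln 2, i.e. L = 1/ln 2.
print(median_brightest())   # ~1.4427
```

Because these distributions are closed-form, evaluating them inside an MCMC likelihood is cheap, which is what removes the need for the Monte Carlo mock catalogs the abstract mentions.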
Comparison of four ⁹⁰Sr groundwater analytical methods
Scarpitta, S.; Odin-McCabe, J.; Gaschott, R.; Meier, A.; Klug, E. (Analytical Services Lab.)
1999-06-01
Data are presented for 45 Long Island groundwater samples, each measured for ⁹⁰Sr using four different analytical methods. ⁹⁰Sr levels were first established by two New York State certified laboratories, one of which used the US Environmental Protection Agency Radioactive Strontium in Drinking Water Method 905.0. Three of the ⁹⁰Sr methods evaluated at Brookhaven National Laboratory can reduce analysis time by more than 50%. They were (a) an Environmental Measurements Laboratory Cerenkov technique and (b) two commercially available products that utilize strontium-specific crown ethers supported on either a resin or a membrane disk. Method-independent inter-laboratory bias was <12% based on ⁹⁰Sr results obtained using both US Department of Energy/Environmental Measurements Laboratory and US EPA/National Environmental Radiation Laboratory samples of known activity concentration. Brookhaven National Laboratory prepared a National Institute of Standards and Technology traceable ⁹⁰Sr tap-water sample used to quantify test method biases. With gas proportional or liquid scintillation counting, minimum detectable levels (MDLs) of 37 Bq m⁻³ (1 pCi L⁻¹) were achievable for both crown-ether methods using a 1-L processed sample beta-counted for 1 h.
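Minimum detectable levels like those quoted are conventionally estimated from counting statistics with Currie's formula. The sketch below shows that calculation with assumed illustrative counting parameters, not the actual background rates, efficiencies, or chemical yields from this study.

```python
import math

def currie_mda_bq(background_counts, efficiency, count_time_s):
    """Currie minimum detectable activity (Bq) for a paired count:
    MDA = (2.71 + 4.65*sqrt(B)) / (eff * t), with B the background
    counts accumulated in the counting time t."""
    ld = 2.71 + 4.65 * math.sqrt(background_counts)  # detection limit, counts
    return ld / (efficiency * count_time_s)

# Assumed illustrative values: 100 background counts over a 1-h count
# at 50% overall counting efficiency.
mda = currie_mda_bq(100, 0.50, 3600.0)
print(round(mda, 4))   # ~0.0273 Bq
```

Dividing such an MDA by the processed sample volume (1 L here) converts it to a minimum detectable concentration for comparison with the 37 Bq m⁻³ figure.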
[Comparison of intestinal bacteria composition identified by various analytical methods].
Fujisawa, Tomohiko
2014-01-01
Many different kinds of bacteria are normally found in the intestines of healthy humans and animals. To study the ecology and function of these intestinal bacteria, the culture method was fundamental until recent years, and suitable agar plates such as non-selective agar plates and several selective agar plates have been developed. Furthermore, the roll-tube, glove box, and plate-in-bottle methods have also been developed for the cultivation of fastidious anaerobes that predominantly colonize the intestine. Until recently, the evaluation of functional foods such as pre- and probiotics was mainly done using culture methods, and many valuable data were produced. On the other hand, genomic analysis such as the fluorescence in situ hybridization (FISH), quantitative PCR (qPCR), clone-library, denaturing gradient gel electrophoresis (DGGE), temperature gradient gel electrophoresis (TGGE), terminal-restriction fragment length polymorphism (T-RFLP) methods, and metagenome analysis have been used for the investigation of intestinal microbiota in recent years. The identification of bacteria is done by investigation of the phenotypic characteristics in culture methods, while rRNA genes are used as targets in genomic analysis. Here, I compare the fecal bacteria identified by various analytical methods.
MICROORGANISMS IN BIOSOLIDS: ANALYTICAL METHODS DEVELOPMENT, STANDARDIZATION, AND VALIDATION
The objective of this presentation is to discuss pathogens of concern in biosolids, the analytical techniques used to evaluate microorganisms in biosolids, and to discuss standardization and validation of analytical protocols for microbes within such a complex matrix. Implicatio...
Comparison of methods for accurate end-point detection of potentiometric titrations
NASA Astrophysics Data System (ADS)
Villela, R. L. A.; Borges, P. P.; Vyskočil, L.
2015-01-01
Detection of the end point in potentiometric titrations has wide application on experiments that demand very low measurement uncertainties mainly for certifying reference materials. Simulations of experimental coulometric titration data and consequential error analysis of the end-point values were conducted using a programming code. These simulations revealed that the Levenberg-Marquardt method is in general more accurate than the traditional second derivative technique used currently as end-point detection for potentiometric titrations. Performance of the methods will be compared and presented in this paper.
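The traditional second-derivative technique the abstract refers to can be sketched on a synthetic sigmoidal titration curve: the end point is the inflection, where the discrete second derivative of potential versus volume crosses zero. This is a minimal stdlib illustration with assumed curve parameters; the Levenberg-Marquardt alternative would instead fit a model curve to all the data (e.g. via a nonlinear least-squares routine) and read the end point off the fitted parameters.

```python
import math

# Synthetic potentiometric titration curve: a sigmoid with its
# inflection (the end point) at Veq. All parameters are illustrative.
Veq, width, step = 12.6, 0.5, 0.25
V = [i * step for i in range(101)]                               # titrant volume, mL
E = [200 + 300 / (1 + math.exp(-(v - Veq) / width)) for v in V]  # potential, mV

# Discrete second derivative; d2[i] corresponds to volume V[i+1].
d2 = [E[i + 1] - 2 * E[i] + E[i - 1] for i in range(1, len(E) - 1)]

end_point = None
for i in range(len(d2) - 1):
    if d2[i] > 0 >= d2[i + 1]:               # sign change: + to -
        frac = d2[i] / (d2[i] - d2[i + 1])   # linear interpolation of the zero
        end_point = V[i + 1] + frac * step
        break

print(end_point)   # close to 12.6
```

On clean data both approaches agree; the cited simulations suggest the whole-curve fit degrades more gracefully when noise is added, since the second difference amplifies point-to-point noise.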
A Method to Accurately Estimate the Muscular Torques of Human Wearing Exoskeletons by Torque Sensors
Hwang, Beomsoo; Jeon, Doyoung
2015-01-01
In exoskeletal robots, the quantification of the user's muscular effort is important to recognize the user's motion intentions and evaluate motor abilities. In this paper, we attempt to estimate users' muscular efforts accurately using a joint torque sensor, whose measurement includes dynamic effects of the human body, such as the inertial, Coriolis, and gravitational torques, as well as the torque produced by active muscular effort. It is important to extract the dynamic effects of the user's limb accurately from the measured torque. The user's limb dynamics are formulated and a convenient method of identifying user-specific parameters is suggested for estimating the user's muscular torque in robotic exoskeletons. Experiments were carried out on a wheelchair-integrated lower limb exoskeleton, EXOwheel, which was equipped with torque sensors in the hip and knee joints. The proposed methods were evaluated by 10 healthy participants during body weight-supported gait training. The experimental results show that the torque sensors can be used to estimate the muscular torque accurately under both relaxed and activated muscle conditions. PMID:25860074
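The core subtraction the abstract describes — sensor torque minus modeled limb dynamics — can be sketched for a single rigid link (so the Coriolis term drops out). The link parameters below are assumed for illustration and are not the identified EXOwheel values.

```python
import math

def muscular_torque(tau_sensor, q, qddot, inertia, mass, com_length, g=9.81):
    """Estimate active muscular torque at one joint by subtracting the
    modeled passive dynamics (inertial + gravitational terms of a single
    rigid link) from the joint torque sensor reading. q is the joint
    angle from vertical (rad), qddot the angular acceleration (rad/s^2)."""
    tau_dynamics = inertia * qddot + mass * g * com_length * math.sin(q)
    return tau_sensor - tau_dynamics

# Toy check: synthesize a sensor reading from known limb dynamics plus a
# known 5 N*m muscular torque, then recover it.
I, m, l = 0.12, 3.0, 0.20       # assumed link inertia, mass, CoM distance
q, qdd = 0.4, 2.0
tau_true = 5.0
tau_sensor = I * qdd + m * 9.81 * l * math.sin(q) + tau_true
tau_est = muscular_torque(tau_sensor, q, qdd, I, m, l)
print(tau_est)   # ~5.0
```

In practice the accuracy of this subtraction hinges on the user-specific parameter identification step the paper proposes, since errors in inertia, mass, or center-of-mass location leak directly into the estimated muscular torque.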
Deng, Yan; Zhou, Bin; Xing, Chao; Zhang, Rong
2014-10-17
A novel multifrequency excitation (MFE) method is proposed to realize rapid and accurate dynamic testing of micromachined gyroscope chips. Compared with the traditional sweep-frequency excitation (SFE) method, the computational time for testing one chip under four modes at a 1-Hz frequency resolution and 600-Hz bandwidth was dramatically reduced from 10 min to 6 s. A multifrequency signal with an equal amplitude and initial linear-phase-difference distribution was generated to ensure test repeatability and accuracy. The current test system based on LabVIEW using the SFE method was modified to use the MFE method without any hardware changes. The experimental results verified that the MFE method can be an ideal solution for large-scale dynamic testing of gyroscope chips and gyroscopes.
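The speedup comes from exciting all test frequencies at once and reading each response amplitude out of a single record, instead of sweeping. The sketch below builds an equal-amplitude multitone with a linear initial-phase progression, passes it through a stand-in first-order lowpass (playing the role of a gyroscope mode; the real test excites MEMS drive/sense modes), and recovers the frequency response at each tone by correlation. All signal parameters are assumed for illustration.

```python
import math

fs = 1000.0                    # sample rate, Hz
tones = [10.0, 20.0, 40.0]     # excitation frequencies (integer cycles per second)
n_total, n_keep = 2000, 1000   # 2 s record; discard 1 s of filter transient

# Equal-amplitude multitone with a linear initial-phase-difference distribution
phases = [k * math.pi / 4 for k in range(len(tones))]
x = [sum(math.cos(2 * math.pi * f * n / fs + p) for f, p in zip(tones, phases))
     for n in range(n_total)]

# Stand-in "device under test": first-order IIR lowpass y[n] = a*y[n-1] + (1-a)*x[n]
a = 0.9
y = []
for xn in x:
    y.append(a * (y[-1] if y else 0.0) + (1 - a) * xn)
y = y[-n_keep:]                # steady-state portion only

# Single-shot amplitude estimates at every tone via correlation
results = []
for f in tones:
    w = 2 * math.pi * f / fs
    c = sum(yn * math.cos(w * n) for n, yn in enumerate(y))
    s = sum(yn * math.sin(w * n) for n, yn in enumerate(y))
    est = 2.0 * math.hypot(c, s) / n_keep
    ref = (1 - a) / math.sqrt(1 - 2 * a * math.cos(w) + a * a)  # analytic |H(f)|
    results.append((f, est, ref))
    print(f, round(est, 4), round(ref, 4))
```

Choosing tones with an integer number of cycles in the analysis window makes the correlations leakage-free, which is one reason an equal-amplitude, controlled-phase multitone gives repeatable single-shot measurements.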
A second order accurate embedded boundary method for the wave equation with Dirichlet data
Kreiss, H O; Petersson, N A
2004-03-02
The accuracy of Cartesian embedded boundary methods for the second order wave equation in general two-dimensional domains subject to Dirichlet boundary conditions is analyzed. Based on the analysis, we develop a numerical method where both the solution and its gradient are second order accurate. We avoid the small-cell stiffness problem without sacrificing the second order accuracy by adding a small artificial term to the Dirichlet boundary condition. Long-time stability of the method is obtained by adding a small fourth order dissipative term. Several numerical examples are provided to demonstrate the accuracy and stability of the method. The method is also used to solve the two-dimensional TM_z problem for Maxwell's equations posed as a second order wave equation for the electric field coupled to ordinary differential equations for the magnetic field.
21 CFR 530.24 - Procedure for announcing analytical methods for drug residue quantification.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 21 Food and Drugs 6 2010-04-01 Procedure for announcing analytical methods for...-Producing Animals § 530.24 Procedure for announcing analytical methods for drug residue quantification. (a) FDA may issue an order announcing a specific analytical method or methods for the quantification...
Accurate near-field calculation in the rigorous coupled-wave analysis method
NASA Astrophysics Data System (ADS)
Weismann, Martin; Gallagher, Dominic F. G.; Panoiu, Nicolae C.
2015-12-01
The rigorous coupled-wave analysis (RCWA) is one of the most successful and widely used methods for modeling periodic optical structures. It yields fast convergence of the electromagnetic far-field and has been adapted to model various optical devices and wave configurations. In this article, we investigate the accuracy with which the electromagnetic near-field can be calculated by using RCWA and explain the observed slow convergence and numerical artifacts from which it suffers, namely unphysical oscillations at material boundaries due to the Gibbs phenomenon. In order to alleviate these shortcomings, we also introduce a mathematical formulation for accurate near-field calculation in RCWA, for one- and two-dimensional straight and slanted diffraction gratings. This accurate near-field computational approach is tested and evaluated for several representative test-structures and configurations in order to illustrate the advantages provided by the proposed modified formulation of the RCWA.
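The Gibbs phenomenon the article addresses is easy to reproduce in miniature: a truncated Fourier series of a step function overshoots near the discontinuity by about 9% of the jump, and the overshoot does not shrink as more terms are added — it only narrows. This generic demonstration (not an RCWA computation) shows why a Fourier-basis near-field oscillates at material boundaries.

```python
import math

def square_wave_partial_sum(x, n_terms):
    """Truncated Fourier series of a unit square wave (amplitude +/-1)."""
    return sum(4.0 / math.pi * math.sin((2 * k + 1) * x) / (2 * k + 1)
               for k in range(n_terms))

# Scan just to the right of the discontinuity at x = 0: the partial sum
# overshoots the true value 1 regardless of how many terms are kept.
peaks = []
for n_terms in (25, 100, 400):
    xs = [1e-4 * i for i in range(1, 2000)]
    peak = max(square_wave_partial_sum(x, n_terms) for x in xs)
    peaks.append(peak)
    print(n_terms, round(peak, 4))   # peaks near 1.179 = (2/pi)*Si(pi)
```

Since adding harmonics cannot remove the overshoot, the paper's remedy is a modified near-field formulation rather than brute-force truncation-order increases.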
A Novel Method for Accurate Operon Predictions in All Sequenced Prokaryotes
Price, Morgan N.; Huang, Katherine H.; Alm, Eric J.; Arkin, Adam P.
2004-12-01
We combine comparative genomic measures and the distance separating adjacent genes to predict operons in 124 completely sequenced prokaryotic genomes. Our method automatically tailors itself to each genome using sequence information alone, and thus can be applied to any prokaryote. For Escherichia coli K12 and Bacillus subtilis, our method is 85 and 83% accurate, respectively, which is similar to the accuracy of methods that use the same features but are trained on experimentally characterized transcripts. In Halobacterium NRC-1 and in Helicobacter pylori, our method correctly infers that genes in operons are separated by shorter distances than they are in E. coli, and its predictions using distance alone are more accurate than distance-only predictions trained on a database of E. coli transcripts. We use microarray data from six phylogenetically diverse prokaryotes to show that combining intergenic distance with comparative genomic measures further improves accuracy and that our method is broadly effective. Finally, we survey operon structure across 124 genomes, and find several surprises: H. pylori has many operons, contrary to previous reports; Bacillus anthracis has an unusual number of pseudogenes within conserved operons; and Synechocystis PCC 6803 has many operons even though it has unusually wide spacings between conserved adjacent genes.
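The intergenic-distance component of such predictions can be caricatured as a log-likelihood-ratio classifier: same-operon gene pairs tend to have short (even negative, i.e. overlapping) intergenic distances, while operon-boundary pairs are spread much wider. The distance distributions below are assumed Gaussians chosen purely for illustration; the actual method fits genome-specific distributions and combines the distance score with comparative genomic features.

```python
import math

def gauss_logpdf(d, mu, sigma):
    """Log density of a normal distribution (illustrative distance model)."""
    return -0.5 * ((d - mu) / sigma) ** 2 - math.log(sigma * math.sqrt(2 * math.pi))

def operon_log_odds(distance_bp):
    """Log-likelihood ratio that an adjacent same-strand gene pair lies
    in one operon, from intergenic distance alone (toy parameters)."""
    ll_operon = gauss_logpdf(distance_bp, 20.0, 40.0)      # assumed same-operon model
    ll_boundary = gauss_logpdf(distance_bp, 200.0, 150.0)  # assumed boundary model
    return ll_operon - ll_boundary

for d in (-4, 15, 120, 400):
    print(d, round(operon_log_odds(d), 2))   # positive favors same-operon
```

A positive score votes for "same operon"; summing such log-odds with scores from gene conservation patterns gives a combined classifier of the kind the paper describes.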
Sobsey, Constance A; Han, Jun; Lin, Karen; Swardfager, Walter; Levitt, Anthony; Borchers, Christoph H
2016-09-01
Malondialdehyde (MDA) is a commonly used marker of lipid peroxidation in oxidative stress. To provide a sensitive analytical method that is compatible with high throughput, we developed a multiple reaction monitoring-mass spectrometry (MRM-MS) approach using 3-nitrophenylhydrazine chemical derivatization, isotope labeling, and liquid chromatography (LC)-electrospray ionization (ESI)-tandem mass spectrometry to accurately quantify MDA in human plasma. A stable isotope-labeled internal standard was used to compensate for ESI matrix effects. The assay is linear (R² = 0.9999) over a 20,000-fold concentration range with a lower limit of quantitation of 30 fmol (on-column). Intra- and inter-run coefficients of variation (CVs) were <2% and ~10%, respectively. The derivative was stable for >36 h at 5 °C. Standards spiked into plasma had recoveries of 92-98%. When compared to a common LC-UV method, the LC-MS method found near-identical MDA concentrations. A pilot project to quantify MDA in patient plasma samples (n = 26) in a study of major depressive disorder with winter-type seasonal pattern (MDD-s) confirmed known associations between MDA concentrations and obesity (p < 0.02). The LC-MS method provides high sensitivity and high reproducibility for quantifying MDA in human plasma. The simple sample preparation and rapid analysis time (5× faster than LC-UV) offers high throughput for large-scale clinical applications. PMID:27437618
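Internal-standard quantitation of the kind described works by regressing the analyte/internal-standard response ratio against known concentrations and then back-calculating unknowns from the fitted line, so that matrix effects that suppress both species cancel. The numbers below are synthetic, chosen only to illustrate the arithmetic; they are not the study's calibration data.

```python
def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Calibration: concentration (uM, synthetic) vs analyte/internal-standard
# peak-area ratio (~0.2 per uM with small simulated noise).
conc  = [0.1, 0.5, 1.0, 5.0, 10.0]
ratio = [0.021, 0.101, 0.199, 1.002, 2.001]

slope, intercept = linear_fit(conc, ratio)

def back_calculate(observed_ratio):
    """Concentration of an unknown sample from its analyte/IS ratio."""
    return (observed_ratio - intercept) / slope

print(round(back_calculate(0.40), 2))   # ~2.0 uM
```

Because the isotope-labeled standard co-elutes and ionizes like the analyte, the ratio-based curve stays linear even when absolute ESI response drifts between runs.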
Polymeric vehicles for topical delivery and related analytical methods.
Cho, Heui Kyoung; Cho, Jin Hun; Jeong, Seong Hoon; Cho, Dong Chul; Yeum, Jeong Hyun; Cheong, In Woo
2014-04-01
Recently a variety of polymeric vehicles, such as micelles, nanoparticles, and polymersomes, have been explored and some of them are clinically used to deliver therapeutic drugs through skin. In topical delivery, the polymeric vehicles as drug carrier should guarantee non-toxicity, long-term stability, and permeation efficacy for drugs, etc. For the development of the successful topical delivery system, it is of importance to develop the polymeric vehicles of well-defined intrinsic properties, such as molecular weights, HLB, chemical composition, topology, specific ligand conjugation and to investigate the effects of the properties on drug permeation behavior. In addition, the role of polymeric vehicles must be elucidated in in vitro and in vivo analyses. This article describes some important features of polymeric vehicles and corresponding analytical methods in topical delivery even though the application span of polymers has been truly broad in the pharmaceutical fields.
Application of analytical methods in authentication and adulteration of honey.
Siddiqui, Amna Jabbar; Musharraf, Syed Ghulam; Choudhary, M Iqbal; Rahman, Atta-Ur-
2017-02-15
Honey is synthesized from flower nectar and it is famous for its tremendous therapeutic potential since ancient times. Many factors influence the basic properties of honey including the nectar-providing plant species, bee species, geographic area, and harvesting conditions. Quality and composition of honey is also affected by many other factors, such as overfeeding of bees with sucrose, harvesting prior to maturity, and adulteration with sugar syrups. Due to the complex nature of honey, it is often challenging to authenticate the purity and quality by using common methods such as physicochemical parameters and more specialized procedures need to be developed. This article reviews the literature (between 2000 and 2016) on the use of analytical techniques, mainly NMR spectroscopy, for authentication of honey, its botanical and geographical origin, and adulteration by sugar syrups. NMR is a powerful technique and can be used as a fingerprinting technique to compare various samples. PMID:27664687
Analytical Failure Prediction Method Developed for Woven and Braided Composites
NASA Technical Reports Server (NTRS)
Min, James B.
2003-01-01
Historically, advances in aerospace engine performance and durability have been linked to improvements in materials. Recent developments in ceramic matrix composites (CMCs) have led to increased interest in CMCs to achieve revolutionary gains in engine performance. The use of CMCs promises many advantages for advanced turbomachinery engine development and may be especially beneficial for aerospace engines. The most beneficial aspects of CMC material may be its ability to maintain its strength to over 2500 F, its internal material damping, and its relatively low density. Ceramic matrix composites reinforced with two-dimensional woven and braided fabric preforms are being considered for NASA's next-generation reusable rocket turbomachinery applications (for example, see the preceding figure). However, the architecture of a textile composite is complex, and therefore, the parameters controlling its strength properties are numerous. This necessitates the development of engineering approaches that combine analytical methods with limited testing to provide effective, validated design analyses for the textile composite structures development.
White, C.M.; Avery, M.; Blanton, W.; Hilpert, L.; Jackson, L.; Junk, G.; Maskarinec, M.; Paule, R.C.; Raphaelian, L.; Richard, J.
1983-10-01
An analytical method has been developed for analysis of organic compounds in aqueous leachates of fossil fuel solid wastes. The method has been evaluated using two synthetic leachates as well as bulk and small-scale leachates of SRC-II vacuum still bottoms at the participating laboratories. Under the conditions of these tests, the method worked well for most analytes; however, n-hexanoic acid, 4-aminobiphenyl, 1,4-naphthoquinone, and 1-naphthylamine were not determined accurately or precisely by the method. Other analytes of interest are benzanthracene, o-cresol, phenanthrene, carbazole, naphthalene, phenol, n-tetradecane, 2-naphthol, dibenzothiophene, quinoline, acenaphthylene, 2-picoline, fluoranthene, 2,3,4,5-tetrachlorobiphenyl (standard), 2-fluorophenol (standard), n-octacosane (standard), and azulene (standard). 7 references, 22 figures, 26 tables.
Accurate force fields and methods for modelling organic molecular crystals at finite temperatures.
Nyman, Jonas; Pundyke, Orla Sheehan; Day, Graeme M
2016-06-21
We present an assessment of the performance of several force fields for modelling intermolecular interactions in organic molecular crystals using the X23 benchmark set. The performance of the force fields is compared to several popular dispersion corrected density functional methods. In addition, we present our implementation of lattice vibrational free energy calculations in the quasi-harmonic approximation, using several methods to account for phonon dispersion. This allows us to also benchmark the force fields' reproduction of finite temperature crystal structures. The results demonstrate that anisotropic atom-atom multipole-based force fields can be as accurate as several popular DFT-D methods, but have errors 2-3 times larger than the current best DFT-D methods. The largest error in the examined force fields is a systematic underestimation of the (absolute) lattice energy.
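The lattice vibrational free energy mentioned here has a standard harmonic form, summed over phonon frequencies. The sketch below evaluates it in reduced units (ħ = kB = 1) for an assumed toy frequency list; a quasi-harmonic calculation would repeat this on a grid of cell volumes and minimize the total E_lattice(V) + F_vib(V, T) at each temperature.

```python
import math

def vibrational_free_energy(freqs, T, hbar=1.0, kb=1.0):
    """Harmonic vibrational Helmholtz free energy:
    F_vib = sum_k [ hbar*w_k/2 + kB*T*ln(1 - exp(-hbar*w_k/(kB*T))) ].
    Reduced units (hbar = kB = 1) for illustration."""
    f = 0.0
    for w in freqs:
        f += 0.5 * hbar * w                       # zero-point energy
        if T > 0:
            f += kb * T * math.log(1.0 - math.exp(-hbar * w / (kb * T)))
    return f

# Single mode with w = 1 at T = 1: F = 0.5 + ln(1 - e^-1) ~ 0.0413
print(round(vibrational_free_energy([1.0], 1.0), 4))
# At T = 0 only the zero-point term survives:
print(vibrational_free_energy([1.0], 0.0))   # 0.5
```

Accounting for phonon dispersion, as the paper does, amounts to sampling the frequencies w_k over the Brillouin zone rather than at the zone center only.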
Accurate Wind Characterization in Complex Terrain Using the Immersed Boundary Method
Lundquist, K A; Chow, F K; Lundquist, J K; Kosovic, B
2009-09-30
This paper describes an immersed boundary method (IBM) that facilitates the explicit resolution of complex terrain within the Weather Research and Forecasting (WRF) model. Two different interpolation methods, trilinear and inverse distance weighting, are used at the core of the IBM algorithm. Functional aspects of the algorithm's implementation and the accuracy of results are considered. Simulations of flow over a three-dimensional hill with shallow terrain slopes are performed with WRF's native terrain-following coordinate and with both IB methods. Comparisons of flow fields from the three simulations show excellent agreement, indicating that both IB methods produce accurate results. However, when ease of implementation is considered, inverse distance weighting is superior. Furthermore, inverse distance weighting is shown to be more adept at handling highly complex urban terrain, where the trilinear interpolation algorithm breaks down. This capability is demonstrated by using the inverse distance weighting core of the IBM to model atmospheric flow in downtown Oklahoma City.
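The second interpolation scheme named here, inverse distance weighting, is simple enough to sketch standalone: each nearby sample contributes with weight 1/distance^p. This is the generic formula, not WRF's IBM code, and the sample values are illustrative.

```python
def idw(point, samples, power=2.0, eps=1e-12):
    """Inverse distance weighted interpolation in any dimension.
    samples is a list of (coords, value) pairs."""
    num = den = 0.0
    for coords, value in samples:
        d2 = sum((p - c) ** 2 for p, c in zip(point, coords))
        if d2 < eps:              # query point coincides with a sample
            return value
        w = d2 ** (-power / 2.0)  # weight = 1 / distance^power
        num += w * value
        den += w
    return num / den

samples = [((0.0, 0.0), 1.0), ((1.0, 0.0), 3.0)]
print(idw((0.0, 0.0), samples))   # 1.0 (exact at a sample point)
print(idw((0.5, 0.0), samples))   # 2.0 (two equal weights average)
```

One reason IDW is robust near jagged urban geometry is visible in the code: it needs only distances, never a well-shaped enclosing cell, whereas trilinear interpolation requires a valid hexahedral stencil around the query point.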
The Finite Analytic Method for steady and unsteady heat transfer problems
NASA Technical Reports Server (NTRS)
Chen, C.-J.; Li, P.
1980-01-01
A new numerical method called the Finite Analytic Method for solving partial differential equations is introduced. The basic idea of the finite analytic method is the incorporation of the local analytic solution in obtaining the numerical solution of the problem. The finite analytic method first divides the total region of the problem into small subregions in which local analytic solutions are obtained. Then an algebraic equation is derived from the local analytic solution for each subregion, relating an interior nodal value at a point P in the subregion to its neighboring nodal values. The assembly of all the local analytic solutions thus provides the finite-analytic numerical solution of the problem. In this paper the finite analytic method is illustrated by solving steady and unsteady heat transfer problems.
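The idea is easiest to see in 1D steady advection-diffusion, where the local analytic solution on each two-cell subregion is u = C1 + C2·exp(v·x/D); matching it to the neighboring nodal values yields the nodal relation u_i = (1-α)·u_{i-1} + α·u_{i+1} with α = 1/(e^P + 1) and cell Peclet number P = v·h/D. The sketch below (a simplified 1D analogue, not the paper's heat-transfer formulation) assembles and solves these relations; for constant coefficients the scheme is nodally exact.

```python
import math

def finite_analytic_1d(v, D, L, n):
    """Solve v*u' = D*u'' on [0, L] with u(0) = 0, u(L) = 1 using the
    1D finite analytic (exponential) scheme, assembled into a
    tridiagonal system and solved with the Thomas algorithm."""
    h = L / (n - 1)
    alpha = 1.0 / (math.exp(v * h / D) + 1.0)
    a, b, c = -(1.0 - alpha), 1.0, -alpha   # sub-, main-, super-diagonal
    m = n - 2                               # number of interior unknowns
    rhs = [0.0] * m
    rhs[0] -= a * 0.0                       # fold in left boundary u(0) = 0
    rhs[-1] -= c * 1.0                      # fold in right boundary u(L) = 1
    # Thomas algorithm: forward sweep, then back substitution
    cp, dp = [0.0] * m, [0.0] * m
    cp[0], dp[0] = c / b, rhs[0] / b
    for i in range(1, m):
        denom = b - a * cp[i - 1]
        cp[i] = c / denom
        dp[i] = (rhs[i] - a * dp[i - 1]) / denom
    u = [0.0] * m
    u[-1] = dp[-1]
    for i in range(m - 2, -1, -1):
        u[i] = dp[i] - cp[i] * u[i + 1]
    return [0.0] + u + [1.0]

v, D, L, n = 5.0, 1.0, 1.0, 11
u = finite_analytic_1d(v, D, L, n)
exact = lambda x: (math.exp(v * x / D) - 1.0) / (math.exp(v * L / D) - 1.0)
err = max(abs(ui - exact(i * L / (n - 1))) for i, ui in enumerate(u))
print(err)   # roundoff-level: nodally exact for constant coefficients
```

As P → 0 the relation reduces to central differencing, and as P → ∞ it tends to pure upwinding, which is why finite analytic schemes stay stable and accurate at high Peclet numbers.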
[Analytical methods for control of foodstuffs made from bioengineered plants].
Chernysheva, O N; Sorokina, E Iu
2013-01-01
Foodstuffs produced by modern biotechnology require special control. The analytical methods used for these purposes are being constantly refined. When choosing a strategy for the analysis, several factors have to be assessed: specificity, sensitivity, practicality of the method, and time efficiency. To date, GMO testing methods are mainly based on the inserted DNA sequences and the newly produced proteins in GMOs. Protein detection methods are based mainly on ELISA. The specific detection of a novel protein synthesized by a gene introduced during transformation constitutes an alternative approach for the identification of GMO. However, the genetic modification is not always specifically directed at the production of a novel protein and does not always result in protein expression levels sufficient for detection purposes. In addition, some proteins may be expressed only in specific parts of the plant or expressed at different levels in distinct parts of the plant. As DNA is a rather stable molecule relative to proteins, it is the preferred target for any kind of sample, and DNA-based methods are more sensitive and specific than protein detection methods. PCR-based tests can be categorized into several levels of specificity. The least specific methods are commonly called "screening methods" and relate to target DNA elements, such as promoters and terminators, that are present in many different GMOs. For routine screening purposes, the regulatory elements of the 35S promoter, derived from the Cauliflower Mosaic Virus, and the NOS terminator, derived from the nopaline synthase gene of Agrobacterium tumefaciens, are used as target sequences. The second level is "gene-specific methods". These methods target a part of the DNA harbouring the active gene associated with the specific genetic modification. The highest specificity is seen when the target is the unique junction found at the integration locus between the inserted DNA and the recipient genome. These are called "event-specific methods". For a
Anderson, Oscar A.
2006-08-06
The well-known Kapchinskij-Vladimirskij (KV) equations are difficult to solve in general, but the problem is simplified for the matched-beam case with sufficient symmetry. They show that the interdependence of the two KV equations is then eliminated, so that only one needs to be solved, a great simplification. They present an iterative method of solution which can potentially yield any desired level of accuracy. The lowest level, the well-known smooth approximation, yields simple, explicit results with good accuracy for weak or moderate focusing fields. The next level improves the accuracy for high fields; they previously showed how to maintain a simple explicit format for the results. That paper used expansion in a small parameter to obtain the second level. The present paper, using straightforward iteration, obtains equations of the first, second, and third levels of accuracy. For a periodic lattice with the beam matched to the lattice, they use the lattice and beam parameters as input and solve for phase advances and envelope waveforms. They find excellent agreement with numerical solutions over a wide range of beam emittances and intensities.
A new high-order accurate continuous Galerkin method for linear elastodynamics problems
NASA Astrophysics Data System (ADS)
Idesman, Alexander V.
2007-07-01
A new high-order accurate time-continuous Galerkin (TCG) method for elastodynamics is suggested. The accuracy of the new implicit TCG method is increased by a factor of two in comparison to that of the standard TCG method and is one order higher than the accuracy of the standard time-discontinuous Galerkin (TDG) method at the same number of degrees of freedom. The new method is unconditionally stable and has controllable numerical dissipation at high frequencies. An iterative predictor/multi-corrector solver that includes the factorization of the effective mass matrix of the same dimension as that of the mass matrix for the second-order methods is developed for the new TCG method. A new strategy combining numerical methods with small and large numerical dissipation is developed for elastodynamics. Simple numerical tests show a significant reduction in the computation time (by 5-25 times) for the new TCG method in comparison to that for second-order methods, and the suppression of spurious high-frequency oscillations.
Method of Analytic Evolution of Flat Distribution Amplitudes in QCD
Asli Tandogan, Anatoly V. Radyushkin
2011-11-01
A new analytical method of performing ERBL evolution is described. The main goal is to develop an approach that works for distribution amplitudes that do not vanish at the end points, for which the standard method of expansion in Gegenbauer polynomials is inefficient. Two cases of the initial DA are considered: a purely flat DA, given by the same constant for all x, and an antisymmetric DA, given by opposite constants for x < 1/2 and x > 1/2. For a purely flat DA, the evolution is governed by an overall (x(1-x))^t dependence on the evolution parameter t times a factor that was calculated as an expansion in t. For an antisymmetric flat DA, an extra overall factor |1-2x|^(2t) appears due to the jump at x = 1/2. Good convergence was observed in the region t ≲ 1/2. For larger t, one can use the standard method of the Gegenbauer expansion.
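The end-point behaviour described above can be illustrated numerically. The sketch below evaluates only the overall factor (x(1-x))^t named in the abstract; the accompanying factor calculated as an expansion in t is omitted, so this is illustrative rather than the full evolved DA.

```python
import numpy as np

def flat_da_leading_factor(x, t):
    """Overall factor (x(1-x))**t governing the ERBL evolution of a
    purely flat DA (the additional expansion-in-t factor is omitted)."""
    return (x * (1.0 - x)) ** t

# As t grows, the initially flat profile develops zeros at the end points
# x = 0 and x = 1 while remaining symmetric about x = 1/2.
x = np.linspace(0.0, 1.0, 101)
phi = flat_da_leading_factor(x, 0.3)
```

For the antisymmetric case, the abstract's extra factor |1-2x|^(2t) would multiply this profile.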
A Simple yet Accurate Method for the Estimation of the Biovolume of Planktonic Microorganisms.
Saccà, Alessandro
2016-01-01
Determining the biomass of microbial plankton is central to the study of fluxes of energy and materials in aquatic ecosystems. This is typically accomplished by applying proper volume-to-carbon conversion factors to group-specific abundances and biovolumes. A critical step in this approach is the accurate estimation of biovolume from two-dimensional (2D) data such as those available through conventional microscopy techniques or flow-through imaging systems. This paper describes a simple yet accurate method for the assessment of the biovolume of planktonic microorganisms, which works with any image analysis system allowing for the measurement of linear distances and the estimation of the cross-sectional area of an object from a 2D digital image. The proposed method is based on Archimedes' principle about the relationship between the volume of a sphere and that of a cylinder in which the sphere is inscribed, plus a coefficient of 'unellipticity' introduced here. Validation and careful evaluation of the method are provided using a variety of approaches. The new method proved to be highly precise with all convex shapes characterised by approximate rotational symmetry, and combining it with an existing method specific for highly concave or branched shapes allows covering the great majority of cases with good reliability. Thanks to its accuracy, consistency, and low resource demands, the new method can conveniently be used in place of any extant method designed for convex shapes, and can readily be coupled with automated cell imaging technologies, including state-of-the-art flow-through imaging devices. PMID:27195667
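The sphere-in-cylinder relationship behind the method can be illustrated numerically: Archimedes showed that a sphere occupies exactly two-thirds of its circumscribing cylinder, so a 2D-to-3D estimate can be sketched as V ≈ (2/3)·A·d·k, with A the cross-sectional area, d the measured width, and k a coefficient of unellipticity. This functional form and the default k = 1 are an illustrative assumption, not the paper's exact expression.

```python
import math

def biovolume_estimate(area, width, unellipticity=1.0):
    """Sketch of a 2D-to-3D biovolume estimate (assumed form, not the
    paper's exact formula): two-thirds of the circumscribing cylinder
    volume, scaled by a coefficient of 'unellipticity'."""
    return (2.0 / 3.0) * area * width * unellipticity

# Sanity check on a sphere of radius r: cross-sectional area = pi*r^2,
# width = 2r, so the estimate recovers (4/3)*pi*r^3 exactly.
r = 5.0
v = biovolume_estimate(math.pi * r ** 2, 2 * r)
assert math.isclose(v, (4.0 / 3.0) * math.pi * r ** 3)
```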
NASA Astrophysics Data System (ADS)
Simmons, Daniel; Cools, Kristof; Sewell, Phillip
2016-11-01
Time domain electromagnetic simulation tools have the ability to model transient, wide-band applications and non-linear problems. The Boundary Element Method (BEM) and the Transmission Line Modeling (TLM) method are both well-established numerical techniques for simulating time-varying electromagnetic fields. The former, surface-based method can accurately describe outwardly radiating fields from piecewise uniform objects and efficiently deals with large domains filled with homogeneous media. The latter, volume-based method can describe inhomogeneous and non-linear media and has been proven to be unconditionally stable. Furthermore, the Unstructured TLM (UTLM) enables modelling of geometrically complex objects by using triangular meshes, which removes staircasing and unnecessary extensions of the simulation domain. The hybridization of BEM and UTLM described in this paper is named the Boundary Element Unstructured Transmission-line (BEUT) method. It incorporates the advantages of both methods. The theory and derivation of the 2D BEUT method are described in this paper, along with relevant implementation details. The method is corroborated by comparing its accuracy and efficiency with those of the traditional UTLM method when applied to complex problems such as transmission through a system of Luneburg lenses and the modelling of antenna radomes for use in wireless communications.
21 CFR 530.22 - Safe levels and analytical methods for food-producing animals.
Code of Federal Regulations, 2014 CFR
2014-04-01
... analytical method; or (3) Establish a safe level based on other appropriate scientific, technical, or... 21 Food and Drugs 6 2014-04-01 2014-04-01 false Safe levels and analytical methods for food... § 530.22 Safe levels and analytical methods for food-producing animals. (a) FDA may establish a...
21 CFR 530.22 - Safe levels and analytical methods for food-producing animals.
Code of Federal Regulations, 2011 CFR
2011-04-01
... analytical method; or (3) Establish a safe level based on other appropriate scientific, technical, or regulatory criteria. (b) FDA may require the development of an acceptable analytical method for the... such an acceptable analytical method, the agency will publish notice of that requirement in the...
21 CFR 530.22 - Safe levels and analytical methods for food-producing animals.
Code of Federal Regulations, 2012 CFR
2012-04-01
... analytical method; or (3) Establish a safe level based on other appropriate scientific, technical, or regulatory criteria. (b) FDA may require the development of an acceptable analytical method for the... such an acceptable analytical method, the agency will publish notice of that requirement in the...
21 CFR 530.22 - Safe levels and analytical methods for food-producing animals.
Code of Federal Regulations, 2013 CFR
2013-04-01
... analytical method; or (3) Establish a safe level based on other appropriate scientific, technical, or regulatory criteria. (b) FDA may require the development of an acceptable analytical method for the... such an acceptable analytical method, the agency will publish notice of that requirement in the...
21 CFR 530.40 - Safe levels and availability of analytical methods.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 21 Food and Drugs 6 2010-04-01 2010-04-01 false Safe levels and availability of analytical methods... Safe levels and availability of analytical methods. (a) In accordance with § 530.22, the following safe... accordance with § 530.22, the following analytical methods have been accepted by FDA:...
21 CFR 530.22 - Safe levels and analytical methods for food-producing animals.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 21 Food and Drugs 6 2010-04-01 2010-04-01 false Safe levels and analytical methods for food... § 530.22 Safe levels and analytical methods for food-producing animals. (a) FDA may establish a safe... analytical method; or (3) Establish a safe level based on other appropriate scientific, technical,...
A simple and accurate resist parameter extraction method for sub-80-nm DRAM patterns
NASA Astrophysics Data System (ADS)
Lee, Sook; Hwang, Chan; Park, Dong-Woon; Kim, In-Sung; Kim, Ho-Chul; Woo, Sang-Gyun; Cho, Han-Ku; Moon, Joo-Tae
2004-05-01
Due to the polarization effect of high-NA lithography, the consideration of resist effects in lithography simulation becomes increasingly important. In spite of the importance of resist simulation, many process engineers are reluctant to consider resist effects in lithography simulation due to the time-consuming procedure required to extract the resist parameters and the uncertainty in the measurement of some parameters. Weiss suggested a simplified development model that does not require the complex kinetic parameters. For device fabrication engineers, there is a simple and accurate parameter extraction and optimization method using the Weiss model. This method needs the refractive index, Dill's parameters, and development rate monitoring (DRM) data for parameter extraction. The parameters extracted using the referred sequence are not accurate, so we have to optimize the parameters to fit the critical dimension scanning electron microscopy (CD-SEM) data of line-and-space patterns. Hence, the FiRM software of Sigma-C is utilized as a resist parameter-optimizing program. According to our study, the illumination shape, the aberration, and the pupil mesh points have a large effect on the accuracy of the resist parameters in optimization. To obtain the optimum parameters, we need to find the saturated mesh points in terms of normalized intensity log slope (NILS) prior to optimization. The simulation results using the parameters optimized by this method show good agreement with experiments for iso-dense bias, focus-exposure matrix data, and sub-80-nm device pattern simulation.
NASA Astrophysics Data System (ADS)
Li, Yafeng; Zhang, Ning; Zhou, Yueming; Wang, Jianing; Zhang, Yiming; Wang, Jiyun; Xiong, Caiqiao; Chen, Suming; Nie, Zongxiu
2013-09-01
Accurate mass information is of great importance in the determination of unknown compounds. An effective and easy-to-control internal mass calibration method will dramatically benefit accurate mass measurement. Here we report a simple induced dual-nanospray internal calibration device which has the following three advantages: (1) the two sprayers are in the same alternating current field, so both reference ions and sample ions can be simultaneously generated and recorded; (2) it is very simple and can be easily assembled, requiring just two metal tubes, two nanosprayers, and an alternating current power supply; and (3) with its low-flow-rate character and the versatility of nanoESI, this calibration method is capable of calibrating various samples, even untreated complex samples such as urine and other biological samples with small sample volumes. The calibration errors are around 1 ppm in positive ion mode and 3 ppm in negative ion mode, with good repeatability. This new internal calibration method opens up new possibilities in the determination of unknown compounds, and it has great potential for broad applications in biological and chemical analysis.
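The benefit of recording reference and sample ions simultaneously can be sketched with a one-point internal recalibration. The single-factor linear correction below is an assumed illustrative scheme, not the authors' actual calibration procedure.

```python
def ppm_error(measured, reference):
    """Mass error in parts-per-million relative to the reference m/z."""
    return (measured - reference) / reference * 1e6

def linear_recalibrate(mz, ref_measured, ref_true):
    """One-point internal calibration sketch (assumed scheme): scale all
    measured m/z values by the true/measured ratio of a reference ion
    recorded in the same spectrum."""
    return mz * (ref_true / ref_measured)

# A 10 ppm systematic error seen on the reference ion is removed from
# the analyte, because both ions share the same spray conditions.
analyte_measured = 500.0050
corrected = linear_recalibrate(analyte_measured,
                               ref_measured=200.0020, ref_true=200.0000)
assert abs(ppm_error(corrected, 500.0000)) < 0.1
```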
NASA Technical Reports Server (NTRS)
Kim, Hyoungin; Liou, Meng-Sing
2011-01-01
In this paper, we demonstrate improved accuracy of the level set method for resolving deforming interfaces by proposing two key elements: (1) accurate level set solutions on adapted Cartesian grids, achieved by judiciously choosing interpolation polynomials in regions of different grid levels, and (2) enhanced reinitialization via an interface sharpening procedure. The level set equation is solved using a fifth-order WENO scheme or a second-order central differencing scheme, depending on the availability of uniform stencils at each grid point. Grid adaptation criteria are determined so that the Hamiltonian functions at nodes adjacent to interfaces are always calculated by the fifth-order WENO scheme. This selective usage of the fifth-order WENO and second-order central differencing schemes is confirmed to give more accurate results than those in the literature for standard test problems. In order to further improve accuracy, especially near thin filaments, we suggest an artificial sharpening method, which is similar in form to the conventional reinitialization method but utilizes the sign of the curvature instead of the sign of the level set function. Consequently, volume loss due to numerical dissipation on thin filaments is remarkably reduced for the test problems.
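The pay-off of the selective-order idea (high-order stencils where uniform stencils are available, second-order central differencing elsewhere) can be sketched with two finite-difference stencils on a smooth field. A fourth-order central stencil stands in here for the fifth-order WENO scheme, which additionally needs upwinding and smoothness indicators.

```python
import numpy as np

def d_central2(f, x, h):
    """Second-order central difference (the fallback stencil)."""
    return (f(x + h) - f(x - h)) / (2 * h)

def d_central4(f, x, h):
    """Fourth-order central difference; a simple stand-in for the
    high-order stencil (the paper itself uses fifth-order WENO)."""
    return (-f(x + 2 * h) + 8 * f(x + h) - 8 * f(x - h) + f(x - 2 * h)) / (12 * h)

# On a smooth field the high-order stencil is far more accurate at the
# same grid spacing, which is why it is used wherever the mesh allows.
h, x0 = 0.1, 0.7
err2 = abs(d_central2(np.sin, x0, h) - np.cos(x0))
err4 = abs(d_central4(np.sin, x0, h) - np.cos(x0))
assert err4 < err2 / 10
```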
A fast GNU method to draw accurate scientific illustrations for taxonomy.
Montesanto, Giuseppe
2015-01-01
Nowadays only digital figures are accepted by the most important journals of taxonomy. These may be produced by scanning conventional drawings made with high-precision technical ink-pens, which normally use capillary cartridges and various line widths. Digital drawing techniques that use vector graphics have already been described in the literature to support scientists in drawing figures and plates for scientific illustrations; these techniques use many different software and hardware devices. The present work gives step-by-step instructions on how to make accurate line drawings with a new procedure that uses bitmap graphics with the GNU Image Manipulation Program (GIMP). This method is noteworthy: it is very accurate, producing detailed lines at the highest resolution; the raster lines appear as realistic ink-made drawings; it is faster than the traditional way of making illustrations; everyone can use this simple technique; and it is completely free, as it does not use expensive licensed software and can be used with different operating systems. The method has been developed for drawing figures of terrestrial isopods, and some examples are given here. PMID:26261449
Quick and accurate estimation of the elastic constants using the minimum image method
NASA Astrophysics Data System (ADS)
Tretiakov, Konstantin V.; Wojciechowski, Krzysztof W.
2015-04-01
A method for determining elastic properties using the minimum image method (MIM) is proposed and tested on a model system of particles interacting through the Lennard-Jones (LJ) potential. The elastic constants of the LJ system are determined in the thermodynamic limit, N → ∞, using the Monte Carlo (MC) method in the NVT and NPT ensembles. The simulation results show that when determining the elastic constants, the contribution of long-range interactions cannot be ignored, because doing so would lead to erroneous results. In addition, the simulations have revealed that including the interactions of each particle with all of its minimum image neighbors, even in the case of small systems, leads to results which are very close to the values of the elastic constants in the thermodynamic limit. This enables a quick and accurate estimation of the elastic constants using very small samples.
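The minimum image convention at the heart of the method can be sketched as a generic displacement kernel for an orthorhombic periodic box (the authors' full elastic-constant workflow is not shown):

```python
import numpy as np

def minimum_image(r_ij, box):
    """Minimum image convention: shift each component of a displacement
    vector by an integer number of box lengths so that it points to the
    nearest periodic image of the neighbor."""
    return r_ij - box * np.round(r_ij / box)

# Two particles near opposite faces of a 10x10x10 box are in fact close
# neighbors once periodicity is accounted for.
box = np.array([10.0, 10.0, 10.0])
d = minimum_image(np.array([9.0, -9.5, 4.0]), box)
assert np.allclose(d, [-1.0, 0.5, 4.0])
```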
Accurate calculation of computer-generated holograms using angular-spectrum layer-oriented method.
Zhao, Yan; Cao, Liangcai; Zhang, Hao; Kong, Dezhao; Jin, Guofan
2015-10-01
Fast calculation and correct depth cues are crucial issues in the calculation of computer-generated holograms (CGHs) for high-quality three-dimensional (3-D) display. An angular-spectrum-based algorithm for layer-oriented CGH is proposed. Angular spectra from each layer are synthesized into a layer-corresponding sub-hologram based on the fast Fourier transform without paraxial approximation. The proposed method can avoid the huge computational cost of the point-oriented method and yields accurate predictions of the whole diffracted field compared with other layer-oriented methods. CGHs of versatile formats of 3-D digital scenes, including computed tomography and 3-D digital models, are demonstrated with precise depth performance and advanced image quality. PMID:26480062
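The per-layer propagation step can be sketched with a generic angular-spectrum propagator. This is the standard textbook form assumed here; the paper's layer synthesis and hologram encoding are not shown.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a complex field by distance z with the angular-spectrum
    method (no paraxial approximation): FFT to spatial frequencies,
    multiply by the transfer function exp(i*kz*z), inverse FFT.
    Evanescent components (kz imaginary) are suppressed."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx, indexing="ij")
    arg = (1.0 / wavelength) ** 2 - FX ** 2 - FY ** 2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)
    return np.fft.ifft2(np.fft.fft2(field) * H)

# With 10 um sampling at 633 nm all sampled frequencies propagate, so
# the transfer function is phase-only and energy is conserved.
rng = np.random.default_rng(0)
u0 = rng.standard_normal((64, 64)).astype(complex)
u1 = angular_spectrum_propagate(u0, 633e-9, 10e-6, 1e-3)
```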
How to assess the quality of your analytical method?
Topic, Elizabeta; Nikolac, Nora; Panteghini, Mauro; Theodorsson, Elvar; Salvagno, Gian Luca; Miler, Marijana; Simundic, Ana-Maria; Infusino, Ilenia; Nordin, Gunnar; Westgard, Sten
2015-10-01
Laboratory medicine is amongst the fastest growing fields in medicine, crucial in diagnosis, in support of prevention, and in the monitoring of disease for individual patients and the evaluation of treatment for populations of patients. Therefore, high quality and safety in laboratory testing have a prominent role in high-quality healthcare. The applied knowledge and competencies of professionals in laboratory medicine increase the clinical value of laboratory results by decreasing laboratory errors, increasing the appropriate utilization of tests, and increasing cost effectiveness. This collective paper provides insights into how to validate laboratory assays and assess the quality of methods. It is a synopsis of the lectures at the 15th European Federation of Clinical Chemistry and Laboratory Medicine (EFLM) Continuing Postgraduate Course in Clinical Chemistry and Laboratory Medicine entitled "How to assess the quality of your method?" (Zagreb, Croatia, 24-25 October 2015). The leading topics include who should perform validation/verification of methods, what it should cover, and when it should be done; verification of imprecision and bias; verification of reference intervals; verification of qualitative test procedures; verification of blood collection systems; comparability of results among methods and analytical systems; limit of detection, limit of quantification, and limit of decision; how to assess measurement uncertainty; the optimal use of Internal Quality Control and External Quality Assessment data; Six Sigma metrics; performance specifications; and biological variation. This article, which continues the annual tradition of collective papers from the EFLM continuing postgraduate courses in clinical chemistry and laboratory medicine, aims to provide further contributions by discussing the quality of laboratory methods and measurements and, at the same time, to offer continuing professional development to the attendees.
NASA Astrophysics Data System (ADS)
Amador, Davi H. T.; de Oliveira, Heibbe C. B.; Sambrano, Julio R.; Gargano, Ricardo; de Macedo, Luiz Guilherme M.
2016-10-01
A prolapse-free basis set for eka-actinium (E121, Z = 121), numerical atomic calculations on E121, spectroscopic constants, and an accurate analytical form for the potential energy curve of diatomic E121F, obtained at the 4-component all-electron CCSD(T) level including the Gaunt interaction, are presented. The results show a strong and polarized bond (≈181 kcal/mol in strength) between E121 and F; the outermost frontier molecular orbitals of E121F should be fairly similar to those of AcF; and there is no evidence of a break in periodic trends. Moreover, the Gaunt interaction, although small, is expected to considerably influence the overall rovibrational spectrum.
Crovelli, Robert A.; revised by Charpentier, Ronald R.
2012-01-01
The U.S. Geological Survey (USGS) periodically assesses petroleum resources of areas within the United States and the world. The purpose of this report is to explain the development of an analytic probabilistic method and spreadsheet software system called Analytic Cell-Based Continuous Energy Spreadsheet System (ACCESS). The ACCESS method is based upon mathematical equations derived from probability theory. The ACCESS spreadsheet can be used to calculate estimates of the undeveloped oil, gas, and NGL (natural gas liquids) resources in a continuous-type assessment unit. An assessment unit is a mappable volume of rock in a total petroleum system. In this report, the geologic assessment model is defined first, the analytic probabilistic method is described second, and the spreadsheet ACCESS is described third. In this revised version of Open-File Report 00-044, the text has been updated to reflect modifications that were made to the ACCESS program. Two versions of the program are added as appendixes.
An analytical method for predicting postwildfire peak discharges
Moody, John A.
2012-01-01
An analytical method presented here that predicts postwildfire peak discharge was developed from analysis of paired rainfall and runoff measurements collected from selected burned basins. Data were collected from 19 mountainous basins burned by eight wildfires in different hydroclimatic regimes in the western United States (California, Colorado, Nevada, New Mexico, and South Dakota). Most of the data were collected for the year of the wildfire and for 3 to 4 years after the wildfire. These data provide some estimate of the changes with time of postwildfire peak discharges, which are known to be transient but have received little documentation. The only required inputs for the analytical method are the burned area and a quantitative measure of soil burn severity (change in the normalized burn ratio), which is derived from Landsat reflectance data and is available from either the U.S. Department of Agriculture Forest Service or the U.S. Geological Survey. The method predicts the postwildfire peak discharge per unit burned area for the year of a wildfire, the first year after a wildfire, and the second year after a wildfire. It can be used at three levels of information depending on the data available to the user; each subsequent level requires either more data or more processing of the data. Level 1 requires only the burned area. Level 2 requires the burned area and the basin average value of the change in the normalized burn ratio. Level 3 requires the burned area and the calculation of the hydraulic functional connectivity, which is a variable that incorporates the sequence of soil burn severity along hillslope flow paths within the burned basin. Measurements indicate that the unit peak discharge response increases abruptly when the 30-minute maximum rainfall intensity is greater than about 5 millimeters per hour (0.2 inches per hour). This threshold may relate to a change in runoff generation from saturated-excess to infiltration-excess overland flow. The
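The rainfall-intensity threshold reported in the study can be sketched as a simple classifier (the threshold value is from the abstract; the regression linking intensity to unit peak discharge is not reproduced here):

```python
MM_PER_INCH = 25.4

def exceeds_runoff_threshold(i30_mm_per_hr, threshold_mm_per_hr=5.0):
    """The study reports that unit peak discharge increases abruptly once
    the 30-minute maximum rainfall intensity exceeds about 5 mm/h
    (~0.2 in/h), consistent with a switch from saturation-excess to
    infiltration-excess overland flow."""
    return i30_mm_per_hr > threshold_mm_per_hr

assert not exceeds_runoff_threshold(3.0)
assert exceeds_runoff_threshold(0.3 * MM_PER_INCH)  # 0.3 in/h is about 7.6 mm/h
```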
An accurate method for the determination of carboxyhemoglobin in postmortem blood using GC-TCD.
Lewis, Russell J; Johnson, Robert D; Canfield, Dennis V
2004-01-01
During the investigation of aviation accidents, postmortem samples from accident victims are submitted to the FAA's Civil Aerospace Medical Institute for toxicological analysis. In order to determine whether an accident victim was exposed to an in-flight/postcrash fire or a faulty heating/exhaust system, analysis for carbon monoxide (CO) is conducted. Although our laboratory predominantly uses a spectrophotometric method for the determination of carboxyhemoglobin (COHb), we consider it essential to confirm with a second technique based on a different analytical principle. Our laboratory encountered difficulties with many of our postmortem samples while employing a commonly used GC method. We believed these problems were due to elevated methemoglobin (MetHb) concentrations in our specimens. MetHb does not bind CO; therefore, elevated MetHb levels result in a loss of CO-binding capacity. Because most commonly employed GC methods determine %COHb from a ratio of unsaturated blood to CO-saturated blood, a loss of CO-binding capacity will result in an erroneously high %COHb value. Our laboratory has developed a new GC method for the determination of %COHb that incorporates sodium dithionite, which reduces any MetHb present to Hb. Using blood controls ranging from 1% to 67% COHb, we found no statistically significant differences between %COHb results from our new GC method and our spectrophotometric method. To validate the new GC method, postmortem samples were analyzed with our existing spectrophotometric method, a GC method commonly used without reducing agent, and our new GC method with the addition of sodium dithionite. As expected, we saw errors up to and exceeding 50% when comparing the unreduced GC results with our spectrophotometric method. With our new GC procedure, the error was virtually eliminated. PMID:14987426
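The ratio at the core of the GC approach, and the bias mechanism described above, can be sketched with illustrative numbers (not data from the study):

```python
def percent_cohb(peak_native, peak_saturated):
    """%COHb from GC peak areas: CO released from the blood as received,
    relative to CO released from an aliquot of the same blood saturated
    with CO (the 100% binding-capacity reference)."""
    return 100.0 * peak_native / peak_saturated

# If MetHb has destroyed part of the CO-binding capacity, the saturated
# reference peak shrinks and the ratio is biased high -- the motivation
# for reducing MetHb back to Hb with sodium dithionite before analysis.
true_value = percent_cohb(20.0, 100.0)      # full binding capacity
biased = percent_cohb(20.0, 60.0)           # 40% of capacity lost to MetHb
assert biased > true_value
```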
Pesticides in honey: A review on chromatographic analytical methods.
Tette, Patrícia Amaral Souza; Rocha Guidi, Letícia; Glória, Maria Beatriz de Abreu; Fernandes, Christian
2016-01-01
Honey is a product of high consumption due to its nutritional and antimicrobial properties. However, residues of pesticides, used to treat pests in the hive or in crop fields in the vicinity, can compromise its quality. Therefore, determination of these contaminants in honey is essential, since the use of pesticides has increased significantly in recent decades because of the growing demand for food production. Furthermore, pesticides in honey can be an indicator of environmental contamination. As the concentrations of these compounds in honey are usually at trace levels and several pesticides can be found simultaneously, the use of highly sensitive and selective techniques is required. In this context, miniaturized sample preparation approaches and liquid or gas chromatography coupled to mass spectrometry have become the most important analytical techniques. In this review we present and discuss recent studies dealing with pesticide determination in honey, focusing on sample preparation and separation/detection methods as well as the application of the developed methods worldwide. Furthermore, trends and future perspectives are presented. PMID:26717823
Pyrrolizidine alkaloids in honey: comparison of analytical methods.
Kempf, M; Wittig, M; Reinhard, A; von der Ohe, K; Blacquière, T; Raezke, K-P; Michel, R; Schreier, P; Beuerle, T
2011-03-01
Pyrrolizidine alkaloids (PAs) are a structurally diverse group of toxicologically relevant secondary plant metabolites. Currently, two analytical methods are used to determine the PA content in honey. To achieve reasonably high sensitivity and selectivity, mass spectrometric detection is required. One method is an HPLC-ESI-MS-MS approach, the other a sum parameter method utilising HRGC-EI-MS operated in selected ion monitoring (SIM) mode. To date, no fully validated or standardised method exists to measure the PA content in honey. To establish an LC-MS method, several hundred standard pollen analysis results of raw honey were analysed, possible PA plants were identified, and typical commercially available marker PA-N-oxides (PANOs) were selected. Three distinct honey sets were analysed with both methods. Set A consisted of pure Echium honey (61-80% Echium pollen). Echium is an attractive bee plant. It is quite common in all temperate zones worldwide and is one of the major sources of PA contamination in honey. Although only echimidine/echimidine-N-oxide were available as references for the LC-MS target approach, the results for both analytical techniques matched very well (n = 8; PA content ranging from 311 to 520 µg kg(-1)). The second batch (B) consisted of a set of randomly picked raw honeys, mostly originating from Eupatorium spp. (0-15%), another common PA plant, usually characterised by the occurrence of lycopsamine-type PAs. Again, the results showed good consistency in terms of PA-positive samples and quantification results (n = 8; ranging from 0 to 625 µg kg(-1) retronecine equivalents). The last set (C) was obtained by consciously placing beehives in areas with a high abundance of Jacobaea vulgaris (ragwort) in the Veluwe region (the Netherlands). J. vulgaris increasingly invades the countryside in Central Europe, especially areas with reduced farming or sites undergoing natural restoration. Honey from two seasons (2007 and 2008) was sampled. While only trace amounts of
Evaluation of FTIR-based analytical methods for the analysis of simulated wastes
Rebagay, T.V.; Cash, R.J.; Dodd, D.A.; Lockrem, L.L.; Meacham, J.E.; Winkelman, W.D.
1994-09-30
Three FTIR-based analytical methods that have potential to characterize simulated waste tank materials have been evaluated. These include: (1) fiber optics, (2) modular transfer optic using light guides equipped with non-contact sampling peripherals, and (3) photoacoustic spectroscopy. Pertinent instrumentation and experimental procedures for each method are described. The results show that the near-infrared (NIR) region of the infrared spectrum is the region of choice for the measurement of moisture in waste simulants. Differentiating the NIR spectrum as a preprocessing step improves the analytical result. Preliminary data indicate that prominent combination bands of water and the first overtone band of the ferrocyanide stretching vibration may be utilized to measure water and ferrocyanide species simultaneously. Both near-infrared and mid-infrared spectra must be collected, however, to measure ferrocyanide species unambiguously and accurately. For ease of sample handling and the potential for field or waste tank deployment, the FTIR-Fiber Optic method is preferred over the other two methods. Modular transfer optic using light guides and photoacoustic spectroscopy may be used as backup systems and for the validation of the fiber optic data.
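As a rough illustration of the derivative preprocessing mentioned above, the sketch below applies a Savitzky-Golay first derivative to a synthetic NIR-style spectrum; the band shape, baseline, and window settings are illustrative assumptions, not the report's actual parameters:

```python
import numpy as np
from scipy.signal import savgol_filter

# Synthetic NIR-style spectrum: a single combination band riding on a
# sloping baseline (all shapes and constants are illustrative).
x = np.linspace(0.0, 1.0, 200)
baseline = 0.5 + 0.3 * x                   # additive baseline drift
band = np.exp(-((x - 0.5) ** 2) / 0.002)   # stand-in absorption band
spectrum = baseline + band

# Savitzky-Golay first derivative as the preprocessing step: it removes
# constant offsets and turns each band into a zero-crossing flanked by
# a positive and a negative lobe, sharpening overlapped features.
d1 = savgol_filter(spectrum, window_length=11, polyorder=2, deriv=1)
```

The band centre now appears as a zero-crossing between the derivative's maximum and minimum, which is what makes overlapping water and ferrocyanide features easier to separate.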
Analytical methods in untargeted metabolomics: state of the art in 2015.
Alonso, Arnald; Marsal, Sara; Julià, Antonio
2015-01-01
Metabolomics comprises the methods and techniques that are used to measure the small molecule composition of biofluids and tissues, and is currently one of the most rapidly evolving research fields. The determination of the metabolomic profile - the metabolome - has multiple applications in many biological sciences, including the development of new diagnostic tools in medicine. Recent technological advances in nuclear magnetic resonance and mass spectrometry are significantly improving our capacity to obtain more data from each biological sample. Consequently, there is a need for fast and accurate statistical and bioinformatic tools that can deal with the complexity and volume of the data generated in metabolomic studies. In this review, we provide an update of the most commonly used analytical methods in metabolomics, starting from raw data processing and ending with pathway analysis and biomarker identification. Finally, the integration of metabolomic profiles with molecular data from other high-throughput biotechnologies is also reviewed.
3'READS+, a sensitive and accurate method for 3' end sequencing of polyadenylated RNA.
Zheng, Dinghai; Liu, Xiaochuan; Tian, Bin
2016-10-01
Sequencing of the 3' end of poly(A)(+) RNA identifies cleavage and polyadenylation sites (pAs) and measures transcript expression. We previously developed a method, 3' region extraction and deep sequencing (3'READS), to address mispriming issues that often plague 3' end sequencing. Here we report a new version, named 3'READS+, which has vastly improved accuracy and sensitivity. Using a special locked nucleic acid oligo to capture poly(A)(+) RNA and to remove the bulk of the poly(A) tail, 3'READS+ generates RNA fragments with an optimal number of terminal A's that balance data quality and detection of genuine pAs. With improved RNA ligation steps for efficiency, the method shows much higher sensitivity (over two orders of magnitude) compared to the previous version. Using 3'READS+, we have uncovered a sizable fraction of previously overlooked pAs located next to or within a stretch of adenylate residues in human genes and more accurately assessed the frequency of alternative cleavage and polyadenylation (APA) in HeLa cells (∼50%). 3'READS+ will be a useful tool to accurately study APA and to analyze gene expression by 3' end counting, especially when the amount of input total RNA is limited. PMID:27512124
DEMONSTRATION OF THE ANALYTIC ELEMENT METHOD FOR WELLHEAD PROTECTION
A new computer program has been developed to determine time-of-travel capture zones in relatively simple geohydrological settings. The WhAEM package contains an analytic element model that uses superposition of (many) closed form analytical solutions to generate a ground-water fl...
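The superposition idea behind the analytic element method can be sketched in a few lines; the function name, well strengths, and uniform-flow rate below are hypothetical, and aquifer constants are omitted for brevity:

```python
import numpy as np

def discharge_potential(x, y, wells, qx0=0.0):
    """Analytic element superposition: a uniform background flow (qx0)
    plus the closed-form logarithmic potential of each pumping well.

    wells is a list of (xw, yw, Q) tuples; Q > 0 denotes a pumping
    well, in consistent length/discharge units (illustrative).
    """
    phi = -qx0 * x
    for xw, yw, Q in wells:
        phi += Q / (2.0 * np.pi) * np.log(np.hypot(x - xw, y - yw))
    return phi
```

Capture zones in WhAEM-style codes are then delineated by tracing particles on the gradient of such superposed potentials.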
Accurate and efficient method for many-body van der Waals interactions.
Tkatchenko, Alexandre; DiStasio, Robert A; Car, Roberto; Scheffler, Matthias
2012-06-01
An efficient method is developed for the microscopic description of the frequency-dependent polarizability of finite-gap molecules and solids. This is achieved by combining the Tkatchenko-Scheffler van der Waals (vdW) method [Phys. Rev. Lett. 102, 073005 (2009)] with the self-consistent screening equation of classical electrodynamics. This leads to a seamless description of polarization and depolarization for the polarizability tensor of molecules and solids. The screened long-range many-body vdW energy is obtained from the solution of the Schrödinger equation for a system of coupled oscillators. We show that the screening and the many-body vdW energy play a significant role even for rather small molecules, becoming crucial for an accurate treatment of conformational energies for biomolecules and binding of molecular crystals. The computational cost of the developed theory is negligible compared to the underlying electronic structure calculation.
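As a toy illustration of the coupled-oscillator picture (not the paper's full self-consistently screened formulation), two identical Drude oscillators restricted to one axis already yield an attractive interaction with the expected R^-6 scaling; all numerical values below are illustrative:

```python
import numpy as np

def pair_vdw_energy(alpha, w, R):
    """vdW energy of two identical 1-D Drude oscillators (atomic
    units, hbar = 1): half the change in zero-point frequencies upon
    dipole coupling. t = -2/R**3 is the head-to-tail dipole tensor
    element along the intermolecular axis.
    """
    t = -2.0 / R ** 3
    C = np.array([[w ** 2, alpha * w ** 2 * t],
                  [alpha * w ** 2 * t, w ** 2]])
    coupled = np.sqrt(np.linalg.eigvalsh(C))   # coupled-mode frequencies
    return 0.5 * (coupled.sum() - 2.0 * w)     # zero-point energy shift
```

Expanding the square roots for small coupling gives E ~ -w*alpha^2/(2 R^6), the familiar London form; the paper's method generalizes this to N coupled oscillators with self-consistent screening.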
Generalized weighted ratio method for accurate turbidity measurement over a wide range.
Liu, Hongbo; Yang, Ping; Song, Hong; Guo, Yilu; Zhan, Shuyue; Huang, Hui; Wang, Hangzhou; Tao, Bangyi; Mu, Quanquan; Xu, Jing; Li, Dejun; Chen, Ying
2015-12-14
Turbidity measurement is important for water quality assessment, food safety, medicine, ocean monitoring, etc. In this paper, a method that accurately estimates the turbidity over a wide range is proposed, where the turbidity of the sample is represented as a weighted ratio of the scattered light intensities at a series of angles. An improvement in the accuracy is achieved by expanding the structure of the ratio function, thus adding more flexibility to the turbidity-intensity fitting. Experiments have been carried out with an 850 nm laser and a power meter fixed on a turntable to measure the light intensity at different angles. The results show that the relative estimation error of the proposed method is 0.58% on average for a four-angle intensity combination for all test samples with a turbidity ranging from 160 NTU to 4000 NTU.
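One plausible way to calibrate such a weighted-ratio model, sketched here rather than reproducing the authors' exact ratio function, is to fix the scale of the denominator weights and solve a linear least-squares problem over the calibration samples:

```python
import numpy as np

def calibrate_ratio(I_cal, turb_cal):
    """Fit turbidity ~ (I @ a) / (I @ b) by linear least squares.

    I_cal: (samples, angles) scattered intensities; turb_cal: reference
    turbidities. Fixing b[0] = 1 removes the scale ambiguity of the
    ratio, turning the fit into the linear system
    I @ a - turb * (I[:, 1:] @ b[1:]) = turb * I[:, 0].
    """
    n = I_cal.shape[1]
    A = np.hstack([I_cal, -turb_cal[:, None] * I_cal[:, 1:]])
    rhs = turb_cal * I_cal[:, 0]
    coef, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return coef[:n], np.concatenate([[1.0], coef[n:]])

def predict_turbidity(I, a, b):
    """Apply the calibrated weighted-ratio model to new intensities."""
    return (I @ a) / (I @ b)
```

The extra weights are what give the ratio structure its flexibility; with a single angle in numerator and denominator the model collapses to the classical ratio turbidimeter.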
Troeltzsch, Matthias; Liedtke, Jan; Troeltzsch, Volker; Frankenberger, Roland; Steiner, Timm; Troeltzsch, Markus
2012-10-01
Odontomas account for the largest fraction of odontogenic tumors and are frequent causes of tooth impaction. A case of a 13-year-old female patient with an odontoma-associated impaction of a mandibular molar is presented with a review of the literature. Preoperative planning involved simple and convenient methods such as clinical examination and panoramic radiography, which led to a diagnosis of complex odontoma and warranted surgical removal. The clinical diagnosis was confirmed histologically. Multidisciplinary consultation may enable the clinician to find the accurate diagnosis and appropriate therapy based on the clinical and radiographic appearance. Modern radiologic methods such as cone-beam computed tomography or computed tomography should be applied only for special cases, to decrease radiation.
NASA Astrophysics Data System (ADS)
Meyer, Daniel W.; Jenny, Patrick
2013-08-01
Different simulation methods are applicable to study turbulent mixing. When applying probability density function (PDF) methods, turbulent transport and chemical reactions appear in closed form, which is not the case in second-moment closure methods (RANS). Moreover, PDF methods provide the entire joint velocity-scalar PDF instead of a limited set of moments. In PDF methods, however, a mixing model is required to account for molecular diffusion. In joint velocity-scalar PDF methods, mixing models should also account for the joint velocity-scalar statistics, which is often underappreciated in applications. The interaction by exchange with the conditional mean (IECM) model accounts for these joint statistics, but requires velocity-conditional scalar means that are expensive to compute in spatially three-dimensional settings. In this work, two alternative mixing models are presented that provide more accurate PDF predictions at reduced computational cost compared to the IECM model, since no conditional moments have to be computed. All models are tested for different mixing benchmark cases and their computational efficiencies are inspected thoroughly. The benchmark cases involve statistically homogeneous and inhomogeneous settings dealing with three streams that are characterized by two passive scalars. The inhomogeneous case clearly illustrates the importance of accounting for joint velocity-scalar statistics in the mixing model. Failure to do so leads to significant errors in the resulting scalar means, variances and other statistics.
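For context, the simplest mixing model of this family, interaction by exchange with the mean (IEM), relaxes particle scalars toward the ensemble mean; the sketch below (with the customary model constant as an illustrative choice) shows the basic update that IECM refines by conditioning the mean on velocity:

```python
import numpy as np

def iem_step(phi, omega, dt, c_phi=2.0):
    """One IEM mixing step: each notional particle's scalar relaxes
    toward the ensemble mean at rate (c_phi / 2) * omega, where omega
    is the turbulence frequency and c_phi = 2.0 is the customary
    model constant. IECM instead relaxes toward velocity-conditional
    means, preserving joint velocity-scalar statistics.
    """
    return phi - 0.5 * c_phi * omega * dt * (phi - phi.mean())
```

The update conserves the scalar mean and decays its variance, but, as the abstract stresses, ignoring the velocity-conditional structure is exactly what leads to errors in inhomogeneous flows.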
Xu, Ninghan; Bai, Benfeng; Tan, Qiaofeng; Jin, Guofan
2013-09-01
Aspect ratio, width, and end-cap factor are three critical parameters defined to characterize the geometry of a metallic nanorod (NR). In our previous work [Opt. Express 21, 2987 (2013)], we reported an optical extinction spectroscopic (OES) method that can measure the aspect ratio distribution of gold NR ensembles effectively and statistically. However, the measurement accuracy was found to depend on the estimates of the width and end-cap factor of the nanorods, which unfortunately cannot be determined by the OES method itself. In this work, we improve the accuracy of the OES method by adding an auxiliary scattering measurement of the NR ensemble, which helps to estimate the mean width of the gold NRs effectively. This so-called optical extinction/scattering spectroscopic (OESS) method can rapidly characterize the aspect ratio distribution and the mean width of gold NR ensembles simultaneously. Experimental comparison with transmission electron microscopy shows that the OESS method determines two of the three critical parameters of NR ensembles (i.e., the aspect ratio and the mean width) more accurately and conveniently than the OES method.
Spectroscopic Method for Fast and Accurate Group A Streptococcus Bacteria Detection.
Schiff, Dillon; Aviv, Hagit; Rosenbaum, Efraim; Tischler, Yaakov R
2016-02-16
Rapid and accurate detection of pathogens is paramount to human health. Spectroscopic techniques have been shown to be viable methods for detecting various pathogens. Enhanced methods of Raman spectroscopy can discriminate unique bacterial signatures; however, many of these require precise conditions and do not have in vivo replicability. Common biological detection methods such as rapid antigen detection tests have high specificity but do not have high sensitivity. Here we developed a new method of bacteria detection that is both highly specific and highly sensitive by combining the specificity of antibody staining and the sensitivity of spectroscopic characterization. Bacteria samples, treated with a fluorescent antibody complex specific to Streptococcus pyogenes, were volumetrically normalized according to their Raman bacterial signal intensity and characterized for fluorescence, eliciting a positive result for samples containing Streptococcus pyogenes and a negative result for those without. The normalized fluorescence intensity of the Streptococcus pyogenes gave a signal that is up to 16.4 times higher than that of other bacteria samples for bacteria stained in solution and up to 12.7 times higher in solid state. This method can be very easily replicated for other bacteria species using suitable antibody-dye complexes. In addition, this method shows viability for in vivo detection as it requires minute amounts of bacteria, low laser excitation power, and short integration times in order to achieve high signal.
An analytical filter design method for guided wave phased arrays
NASA Astrophysics Data System (ADS)
Kwon, Hyu-Sang; Kim, Jin-Yeon
2016-12-01
This paper presents an analytical method for designing a spatial filter that processes the data from an array of two-dimensional guided wave transducers. An inverse problem is defined where the spatial filter coefficients are determined in such a way that a prescribed beam shape, i.e., a desired array output is best approximated in the least-squares sense. Taking advantage of the 2π-periodicity of the generated wave field, Fourier-series representation is used to derive closed-form expressions for the constituting matrix elements. Special cases in which the desired array output is an ideal delta function and a gate function are considered in a more explicit way. Numerical simulations are performed to examine the performance of the filters designed by the proposed method. It is shown that the proposed filters can significantly improve the beam quality in general. Most notable is that the proposed method does not compromise between the main lobe width and the sidelobe levels; i.e. a narrow main lobe and low sidelobes are simultaneously achieved. It is also shown that the proposed filter can compensate the effects of nonuniform directivity and sensitivity of array elements by explicitly taking these into account in the formulation. From an example of detecting two separate targets, how much the angular resolution can be improved as compared to the conventional delay-and-sum filter is quantitatively illustrated. Lamb wave based imaging of localized defects in an elastic plate using a circular array is also presented as an example of practical applications.
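A discretized stand-in for this least-squares design problem can be set up directly with `numpy.linalg.lstsq`; the paper derives closed-form Fourier-series expressions instead, and the array size, radius, and target beam below are illustrative assumptions:

```python
import numpy as np

# Hypothetical 16-element circular array of guided-wave transducers at
# dimensionless radius k_r; element n responds to a plane wave from
# angle theta with exp(1j * k_r * cos(theta - angle_n)).
N, k_r = 16, 4.0
angles = 2.0 * np.pi * np.arange(N) / N
theta = np.linspace(-np.pi, np.pi, 721)
A = np.exp(1j * k_r * np.cos(theta[:, None] - angles[None, :]))

# Desired array output: a narrow Gaussian as a stand-in for the ideal
# delta function; the filter coefficients w minimize ||A w - d||^2.
d = np.exp(-(theta / 0.1) ** 2)
w, *_ = np.linalg.lstsq(A, d.astype(complex), rcond=None)

beam = np.abs(A @ w)   # resulting beam pattern over look angle
```

Nonuniform element directivities or sensitivities can be folded into the rows of `A`, which is how the paper's formulation compensates for them.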
[A New Method of Accurately Extracting Spectral Values for Discrete Sampling Points].
Lü, Zhen-zhen; Liu, Guang-ming; Yang, Jin-song
2015-08-01
When establishing a remote sensing inversion model, measured data from discrete sampling points are related to the spectral values of the corresponding pixels in the remote sensing image, so that information retrieval becomes possible; accurate extraction of the spectral values is therefore essential. Converting the target point layer to an ROI (region of interest) and then saving the ROI as ASCII is a method researchers often use to extract spectral values. Analyzing the coordinates and spectral values extracted in ENVI from the original coordinates, we found that the extracted coordinates were inconsistent with the originals and that some spectral values did not belong to the pixel containing the sampling point. An inversion model built on such data cannot truly reflect the relationship between target properties and spectral values, and is therefore meaningless. We divided each pixel into four equal parts and summarized the pattern: only when a sampling point fell in the upper-left quarter of its pixel were the extracted values correct. On this basis, this paper systematically studied the principles of extracting target coordinates and spectral values, summarized the rule, and derived a new method for extracting the spectral values of the pixel containing a sampling point within the ENVI software environment. First, the coordinates of one of the four pixel corner points are extracted in ENVI from the sampling point's original coordinates. Second, the quarter of the pixel in which the sampling point lies is determined by comparing the absolute differences between the longitudes and latitudes of the original and extracted coordinates. Last, all points are adjusted to the upper-left corner of their pixels by symmetry, and the spectral values are extracted as in the first step. The results indicated that the extracted spectrum
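The corner-point bookkeeping can be illustrated with a small helper; the function name and the geotransform convention (north-up raster, y decreasing downward) are illustrative assumptions:

```python
def pixel_upper_left(x, y, x0, y0, dx, dy):
    """Locate the pixel containing point (x, y) in a north-up raster.

    (x0, y0) is the raster's upper-left corner; dx, dy > 0 are the
    pixel sizes. Returns the pixel's (row, col) and the coordinate of
    its upper-left corner, the position to which every sampling point
    is adjusted before the spectral value is read.
    """
    col = int((x - x0) // dx)
    row = int((y0 - y) // dy)
    return row, col, (x0 + col * dx, y0 - row * dy)
```

Reading the spectrum at the returned upper-left coordinate guarantees the value comes from the pixel that actually contains the sampling point.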
Analytical method to estimate resin cement diffusion into dentin
NASA Astrophysics Data System (ADS)
de Oliveira Ferraz, Larissa Cristina; Ubaldini, Adriana Lemos Mori; de Oliveira, Bruna Medeiros Bertol; Neto, Antonio Medina; Sato, Fracielle; Baesso, Mauro Luciano; Pascotto, Renata Corrêa
2016-05-01
This study analyzed the diffusion of two resin luting agents (resin cements) into dentin, with the aim of presenting an analytical method for estimating the thickness of the diffusion zone. Class V cavities were prepared in the buccal and lingual surfaces of molars (n=9). Indirect composite inlays were luted into the cavities with either a self-adhesive or a self-etch resin cement. The teeth were sectioned bucco-lingually and the cement-dentin interface was analyzed by using micro-Raman spectroscopy (MRS) and scanning electron microscopy. Evolution of peak intensities of the Raman bands, collected from the functional groups corresponding to the resin monomer (C–O–C, 1113 cm-1) present in the cements, and the mineral content (P–O, 961 cm-1) in dentin were sigmoid shaped functions. A Boltzmann function (BF) was then fitted to the peaks encountered at 1113 cm-1 to estimate the resin cement diffusion into dentin. The BF identified a resin cement-dentin diffusion zone of 1.8±0.4 μm for the self-adhesive cement and 2.5±0.3 μm for the self-etch cement. This analysis allowed the authors to estimate the diffusion of the resin cements into the dentin. Fitting the MRS data to the BF contributed to and is relevant for future studies of the adhesive interface.
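A Boltzmann-function fit of the kind described can be sketched with `scipy.optimize.curve_fit`; the profile, parameter values, and the 10-90% width convention below are illustrative assumptions, not the study's data:

```python
import numpy as np
from scipy.optimize import curve_fit

def boltzmann(x, a1, a2, x0, w):
    """Boltzmann sigmoid: plateau a1 to plateau a2, centred at x0."""
    return a2 + (a1 - a2) / (1.0 + np.exp((x - x0) / w))

# Synthetic line scan of the 1113 cm^-1 (C-O-C) Raman peak intensity
# across the cement-dentin interface; positions in micrometres and all
# parameter values are illustrative.
x = np.linspace(-5.0, 5.0, 81)
y = boltzmann(x, 1.0, 0.05, 0.8, 0.45)

popt, _ = curve_fit(boltzmann, x, y, p0=[1.0, 0.0, 0.0, 1.0])
a1, a2, x0, w = popt

# One common convention: report the 10-90% transition width of the
# sigmoid, 2*ln(9)*|w| ~ 4.4*|w|, as the diffusion-zone thickness.
diffusion_zone = 4.4 * abs(w)
```

With real MRS line scans the same fit is applied to the measured peak intensities, and the fitted transition width estimates the cement-dentin diffusion zone.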
Analytical methods for waste minimisation in the convenience food industry.
Darlington, R; Staikos, T; Rahimifard, S
2009-04-01
Waste creation in some sectors of the food industry is substantial, and while much of the used material is non-hazardous and biodegradable, it is often poorly dealt with and simply sent to landfill mixed with other types of waste. In this context, overproduction wastes were found in a number of cases to account for 20-40% of the material wastes generated by convenience food manufacturers (such as ready-meals and sandwiches), often simply to meet the challenging demands placed on the manufacturer by the short order reaction times that supermarkets allow. Identifying specific classes of waste helps to minimise their creation, through consideration of what the materials constitute and why they were generated. This paper aims to provide means by which food industry wastes can be identified, and demonstrates these mechanisms through a practical example. The research reported in this paper investigated the various categories of waste and generated three analytical methods to support waste minimisation activities by food manufacturers. The waste classifications and analyses are intended to complement existing waste minimisation approaches and are described through consideration of a case-study convenience food manufacturer that realised significant financial savings through waste measurement, analysis and reduction.
NIOSH Manual of Analytical Methods (third edition). Fourth supplement
Not Available
1990-08-15
The NIOSH Manual of Analytical Methods, 3rd edition, was updated for the following chemicals: allyl-glycidyl-ether, 2-aminopyridine, aspartame, bromine, chlorine, n-butylamine, n-butyl-glycidyl-ether, carbon-dioxide, carbon-monoxide, chlorinated-camphene, chloroacetaldehyde, p-chlorophenol, crotonaldehyde, 1,1-dimethylhydrazine, dinitro-o-cresol, ethyl-acetate, ethyl-formate, ethylenimine, sodium-fluoride, hydrogen-fluoride, cryolite, sodium-hexafluoroaluminate, formic-acid, hexachlorobutadiene, hydrogen-cyanide, hydrogen-sulfide, isopropyl-acetate, isopropyl-ether, isopropyl-glycidyl-ether, lead, lead-oxide, maleic-anhydride, methyl-acetate, methyl-acrylate, methyl-tert-butyl ether, methyl-cellosolve-acetate, methylcyclohexanol, 4,4'-methylenedianiline, monomethylaniline, monomethylhydrazine, nitric-oxide, p-nitroaniline, phenyl-ether, phenyl-ether-biphenyl mixture, phenyl-glycidyl-ether, phenylhydrazine, phosphine, ronnel, sulfuryl-fluoride, talc, tributyl-phosphate, 1,1,2-trichloro-1,2,2-trifluoroethane, trimellitic-anhydride, triorthocresyl-phosphate, triphenyl-phosphate, and vinyl-acetate.
A highly accurate method for the determination of mass and center of mass of a spacecraft
NASA Technical Reports Server (NTRS)
Chow, E. Y.; Trubert, M. R.; Egwuatu, A.
1978-01-01
An extremely accurate method for the measurement of mass and the lateral center of mass of a spacecraft has been developed. The method was needed for the Voyager spacecraft mission requirement which limited the uncertainty in the knowledge of lateral center of mass of the spacecraft system weighing 750 kg to be less than 1.0 mm (0.04 in.). The method consists of using three load cells symmetrically located at 120 deg apart on a turntable with respect to the vertical axis of the spacecraft and making six measurements for each load cell. These six measurements are taken by cyclic rotations of the load cell turntable and of the spacecraft, about the vertical axis of the measurement fixture. This method eliminates all alignment, leveling, and load cell calibration errors for the lateral center of mass determination, and permits a statistical best fit of the measurement data. An associated data reduction computer program called MASCM has been written to implement this method and has been used for the Voyager spacecraft.
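The idealized moment balance underlying the three-load-cell layout can be sketched as follows; this single-reading version deliberately omits the cyclic rotations that are the method's real contribution (they cancel alignment, leveling, and calibration errors), and all values are illustrative:

```python
import numpy as np

def mass_and_lateral_cm(forces, radius):
    """Mass and lateral centre of mass from three load cells placed
    120 deg apart on a circle of `radius` metres.

    forces: the three cell readings in newtons at angles 0, 120, 240.
    Force balance gives the weight; moment balance about the two
    lateral axes gives the CM offset from the turntable axis.
    """
    g = 9.80665                          # standard gravity, m/s^2
    th = np.deg2rad([0.0, 120.0, 240.0])
    F = np.asarray(forces, dtype=float)
    W = F.sum()
    x = radius * (F * np.cos(th)).sum() / W
    y = radius * (F * np.sin(th)).sum() / W
    return W / g, (x, y)
```

In the flight procedure, six such readings per cell, taken under rotations of both the turntable and the spacecraft, are combined in a statistical best fit (the MASCM program) so that fixture errors drop out of the lateral CM estimate.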
Galdames, Francisco J; Jaillet, Fabrice; Perez, Claudio A
2012-01-01
Skull stripping methods are designed to eliminate the non-brain tissue in magnetic resonance (MR) brain images. Removal of non-brain tissues is a fundamental step in enabling the processing of brain MR images. The aim of this study is to develop an automatic accurate skull stripping method based on deformable models and histogram analysis. A rough-segmentation step is used to find the optimal starting point for the deformation and is based on thresholds and morphological operators. Thresholds are computed using comparisons with an atlas, and modeling by Gaussians. The deformable model is based on a simplex mesh and its deformation is controlled by the image local gray levels and the information obtained on the gray level modeling of the rough-segmentation. Our Simplex Mesh and Histogram Analysis Skull Stripping (SMHASS) method was tested on the following international databases commonly used in scientific articles: BrainWeb, Internet Brain Segmentation Repository (IBSR), and Segmentation Validation Engine (SVE). A comparison was performed against three of the best skull stripping methods previously published: Brain Extraction Tool (BET), Brain Surface Extractor (BSE), and Hybrid Watershed Algorithm (HWA). Performance was measured using the Jaccard index (J) and Dice coefficient (κ). Our method showed the best performance and differences were statistically significant (p<0.05): J=0.904 and κ=0.950 on BrainWeb; J=0.905 and κ=0.950 on IBSR; J=0.946 and κ=0.972 on SVE.
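The two performance metrics used above are straightforward to compute from binary masks; a minimal sketch:

```python
import numpy as np

def jaccard_and_dice(mask_a, mask_b):
    """Jaccard index J and Dice coefficient for two binary masks
    (e.g. a computed brain mask versus the reference segmentation).
    The two are related by dice = 2J / (1 + J).
    """
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union, 2.0 * inter / (a.sum() + b.sum())
```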
An accurate clone-based haplotyping method by overlapping pool sequencing
Li, Cheng; Cao, Changchang; Tu, Jing; Sun, Xiao
2016-01-01
Chromosome-long haplotyping of human genomes is important to identify genetic variants with differing gene expression, in human evolution studies, clinical diagnosis, and other biological and medical fields. Although several methods have realized haplotyping based on sequencing technologies or population statistics, accuracy and cost are factors that prohibit their wide use. Borrowing ideas from group testing theories, we proposed a clone-based haplotyping method by overlapping pool sequencing. The clones from a single individual were pooled combinatorially and then sequenced. According to the distinct pooling pattern for each clone in the overlapping pool sequencing, alleles for the recovered variants could be assigned to their original clones precisely. Subsequently, the clone sequences could be reconstructed by linking these alleles accordingly and assembling them into haplotypes with high accuracy. To verify the utility of our method, we constructed 130 110 clones in silico for the individual NA12878 and simulated the pooling and sequencing process. Ultimately, 99.9% of variants on chromosome 1 that were covered by clones from both parental chromosomes were recovered correctly, and 112 haplotype contigs were assembled with an N50 length of 3.4 Mb and no switch errors. A comparison with current clone-based haplotyping methods indicated our method was more accurate. PMID:27095193
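The group-testing decoding step can be sketched as below, assuming error-free variant calls and a distinct pool pattern per clone; the pool counts and clone names are illustrative, not the paper's design:

```python
from itertools import combinations

def decode_variant(pools_with_variant, clone_patterns):
    """Group-testing decoding: assign a variant allele to the clone
    whose pooling pattern equals the set of pools in which the allele
    was observed. Returns None when the pattern is ambiguous or
    matches no clone.
    """
    hits = frozenset(pools_with_variant)
    matches = [c for c, pat in clone_patterns.items() if pat == hits]
    return matches[0] if len(matches) == 1 else None

# Toy design: each clone is placed in a distinct pair of 4 pools.
patterns = {f"clone{i}": frozenset(pair)
            for i, pair in enumerate(combinations(range(4), 2))}
```

Once every allele is assigned to its clone this way, clone sequences are reconstructed by linking the alleles, and overlapping clones are assembled into haplotype contigs.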
NASA Astrophysics Data System (ADS)
He, Wantao; Li, Zhongwei; Zhong, Kai; Shi, Yusheng; Zhao, Can; Cheng, Xu
2014-11-01
Fast and precise 3D inspection systems are in great demand in modern manufacturing. At present, each available sensor has its own pros and cons, and no single sensor can handle complex inspection tasks accurately and effectively; the prevailing solution is to integrate multiple sensors and exploit their complementary strengths. To obtain a holistic 3D profile, the data from the different sensors must be registered into a coherent coordinate system. However, for complex objects with thin-walled features, such as blades, ICP registration becomes unstable, so it is very important to calibrate the extrinsic parameters of each sensor in the integrated measurement system. This paper proposes an accurate and automatic extrinsic-parameter calibration method for a blade measurement system that integrates different optical sensors. In this system, a fringe projection sensor (FPS) and a conoscopic holography sensor (CHS) are integrated into a multi-axis motion platform, so the sensors can be moved optimally to any desired position on the object's surface. To simplify the calibration process, a special calibration artifact is designed according to the characteristics of the two sensors. An automatic registration procedure based on correlation and segmentation roughly aligns the artifact datasets acquired by the FPS and CHS without any manual operation or data pre-processing, and the generalized Gauss-Markov model is then used to estimate the optimal transformation parameters. Experiments on a blade, in which several sampled patches are merged into one point cloud, verify the performance of the proposed method.
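For point correspondences, the least-squares transformation-estimation step reduces to a rigid-transform fit; the Kabsch/SVD sketch below is a generic stand-in for the paper's Gauss-Markov formulation, with illustrative test geometry:

```python
import numpy as np

def rigid_transform(P, Q):
    """Least-squares rotation R and translation t with Q ~ R P + t
    (Kabsch/SVD solution). P, Q: (n, 3) corresponding points, e.g.
    artifact features seen by the two sensors after rough alignment.
    """
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)              # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T)) # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cq - R @ cp
```

Chaining such transforms through the common calibration artifact places the FPS and CHS data in one coordinate system without relying on ICP over the thin-walled blade surface itself.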
Estimating Fuel Cycle Externalities: Analytical Methods and Issues, Report 2
Barnthouse, L.W.; Cada, G.F.; Cheng, M.-D.; Easterly, C.E.; Kroodsma, R.L.; Lee, R.; Shriner, D.S.; Tolbert, V.R.; Turner, R.S.
1994-07-01
that also have not been fully addressed. This document contains two types of papers that seek to fill part of this void. Some of the papers describe analytical methods that can be applied to one of the five steps of the damage function approach. The other papers discuss some of the complex issues that arise in trying to estimate externalities. This report, the second in a series of eight, is part of a joint study by the U.S. Department of Energy (DOE) and the Commission of the European Communities (EC) on the externalities of fuel cycles. Most of the papers in this report were originally written as working papers during the initial phases of the study. The report describes the (non-radiological) atmospheric dispersion modeling that the study uses; reviews much of the relevant literature on ecological and health effects and on the economic valuation of those impacts; contains several papers on some of the more complex and contentious issues in estimating externalities; and describes a method for depicting the quality of the scientific information that a study uses. The analytical methods and issues discussed here generally pertain to more than one of the fuel cycles, though not necessarily to all of them. The report is divided into six parts, each focusing on a different subject area.
Aeroacoustic Flow Phenomena Accurately Captured by New Computational Fluid Dynamics Method
NASA Technical Reports Server (NTRS)
Blech, Richard A.
2002-01-01
One of the challenges in the computational fluid dynamics area is the accurate calculation of aeroacoustic phenomena, especially in the presence of shock waves. One such phenomenon is "transonic resonance," where an unsteady shock wave at the throat of a convergent-divergent nozzle results in the emission of acoustic tones. The space-time Conservation-Element and Solution-Element (CE/SE) method developed at the NASA Glenn Research Center can faithfully capture the shock waves, their unsteady motion, and the generated acoustic tones. The CE/SE method is a revolutionary new approach to the numerical modeling of physical phenomena where features with steep gradients (e.g., shock waves, phase transition, etc.) must coexist with those having weaker variations. The CE/SE method does not require the complex interpolation procedures (that allow for the possibility of a shock between grid cells) used by many other methods to transfer information between grid cells. These interpolation procedures can add too much numerical dissipation to the solution process. Thus, while shocks are resolved, weaker waves, such as acoustic waves, are washed out.
NASA Astrophysics Data System (ADS)
Parand, K.; Rad, J. A.; Ahmadi, M.
2016-09-01
Natural convective heat transfer in porous media, which is important in the design of canisters for nuclear waste disposal, has received considerable attention during the past few decades. This paper presents a comparison between two different solution methods, the pseudospectral and Adomian decomposition methods. The pseudospectral approach makes use of orthogonal rational Jacobi functions and reduces the solution of the problem to the solution of a system of algebraic equations. The results of the two methods are compared with each other, showing that the pseudospectral method leads to more accurate results and is applicable to similar problems.
Temperature dependent effective potential method for accurate free energy calculations of solids
NASA Astrophysics Data System (ADS)
Hellman, Olle; Steneteg, Peter; Abrikosov, I. A.; Simak, S. I.
2013-03-01
We have developed a thorough and accurate method for determining anharmonic free energies, the temperature dependent effective potential technique (TDEP). It is based on ab initio molecular dynamics followed by a mapping onto a model Hamiltonian that describes the lattice dynamics. The formalism and the numerical aspects of the technique are described in detail. A number of practical examples are given, and results are presented that confirm the usefulness of TDEP within ab initio and classical molecular dynamics frameworks. In particular, we examine from first principles the behavior of force constants upon the dynamical stabilization of the body-centered phase of Zr, and show that they become more localized. We also calculate the phase diagram for 4He modeled with the Aziz potential and obtain results in favorable agreement with both experiment and established techniques.
NASA Astrophysics Data System (ADS)
Zacharias, Panagiotis P.; Chatzineofytou, Elpida G.; Spantideas, Sotirios T.; Capsalis, Christos N.
2016-07-01
In the present work, the determination of the magnetic behavior of localized magnetic sources from near-field measurements is examined. The distance power law of the magnetic field fall-off is used in various cases to accurately predict the magnetic signature of an equipment under test (EUT) consisting of multiple alternating current (AC) magnetic sources. To this end, parameters concerning the locations of the observation points (magnetometers) are studied. The results clearly show that these parameters are independent of the EUT's size and layout. Additionally, the techniques developed in the present study allow the magnetometers to be placed close to the EUT, thus achieving a high signal-to-noise ratio (SNR). Finally, the proposed method is verified by real measurements, using a mobile phone as the EUT.
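The distance power law underlying this approach can be recovered with an ordinary log-log least-squares fit. The sketch below uses synthetic dipole-like data (all values illustrative); a magnetic dipole falls off with exponent n close to 3.

```python
# Fit the fall-off law B(r) = k * r**(-n) by linear regression of
# log(B) on log(r). Data below are synthetic, not measurements.
import math

def fit_power_law(distances, fields):
    """Return (k, n) for B(r) = k * r**(-n) via log-log least squares."""
    xs = [math.log(r) for r in distances]
    ys = [math.log(b) for b in fields]
    x_mean = sum(xs) / len(xs)
    y_mean = sum(ys) / len(ys)
    slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys))
             / sum((x - x_mean) ** 2 for x in xs))
    intercept = y_mean - slope * x_mean
    return math.exp(intercept), -slope

# Synthetic dipole-like field: B = 2.0e-6 * r**-3 (tesla, metres)
r = [0.1, 0.2, 0.3, 0.4]
b = [2.0e-6 * ri ** -3 for ri in r]
k, n = fit_power_law(r, b)
print(round(n, 3))  # -> 3.0
```

With real magnetometer data, the fitted exponent would deviate from 3 when the EUT is not well approximated by a single dipole, which is exactly the behavior the magnetometer-placement study above probes.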
An inexpensive, accurate, and precise wet-mount method for enumerating aquatic viruses.
Cunningham, Brady R; Brum, Jennifer R; Schwenck, Sarah M; Sullivan, Matthew B; John, Seth G
2015-05-01
Viruses affect biogeochemical cycling, microbial mortality, gene flow, and metabolic functions in diverse environments through infection and lysis of microorganisms. Fundamental to quantitatively investigating these roles is the determination of viral abundance in both field and laboratory samples. One current, widely used method to accomplish this with aquatic samples is the "filter mount" method, in which samples are filtered onto costly 0.02-μm-pore-size ceramic filters for enumeration of viruses by epifluorescence microscopy. Here we describe a cost-effective (ca. 500-fold-lower materials cost) alternative virus enumeration method in which fluorescently stained samples are wet mounted directly onto slides, after optional chemical flocculation of viruses in samples with viral concentrations of <5×10(7) viruses ml(-1). The concentration of viruses in the sample is then determined from the ratio of viruses to a known concentration of added microsphere beads via epifluorescence microscopy. Virus concentrations obtained by using this wet-mount method, with and without chemical flocculation, were significantly correlated with, and had precision equivalent to, those obtained by the filter mount method across concentrations ranging from 2.17×10(6) to 1.37×10(8) viruses ml(-1) when tested by using cultivated viral isolates and natural samples from marine and freshwater environments. In summary, the wet-mount method is significantly less expensive than the filter mount method and is appropriate for rapid, precise, and accurate enumeration of aquatic viruses over a wide range of viral concentrations (≥1×10(6) viruses ml(-1)) encountered in field and laboratory samples.
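The bead-ratio calculation at the core of the wet-mount method is simple to state: viruses per ml equal the counted virus-to-bead ratio times the known bead concentration. The counts below are made-up example values.

```python
# Bead-ratio concentration estimate for the wet-mount method:
# viruses/ml = (viruses counted / beads counted) * bead concentration.
def virus_concentration(virus_count, bead_count, bead_conc_per_ml):
    """Estimate viruses per ml from epifluorescence counts."""
    if bead_count == 0:
        raise ValueError("no beads counted")
    return virus_count / bead_count * bead_conc_per_ml

# e.g. 250 viruses counted against 50 beads, beads added at 1e7 per ml
print(virus_concentration(250, 50, 1e7))  # -> 50000000.0 (5e7 per ml)
```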
NASA Astrophysics Data System (ADS)
Zhang, Zaiyong; Wang, Wenke; Yeh, Tian-chyi Jim; Chen, Li; Wang, Zhoufeng; Duan, Lei; An, Kedong; Gong, Chengcheng
2016-06-01
In this paper, we develop a finite analytic method (FAMM) that combines the flexibility of numerical methods with the advantages of analytical solutions to solve the mixed-form Richards' equation. This new approach minimizes the mass balance errors and truncation errors associated with most numerical approaches. We use numerical experiments to demonstrate that FAMM obtains more accurate numerical solutions and controls the global mass balance better than the modified Picard finite difference method (MPFD) when compared against analytical solutions. In addition, FAMM is superior to the finite analytic method based on the head-based Richards' equation (FAMH). FAMM solutions are also compared to analytical solutions for wetting and drying processes in Brindabella Silty Clay Loam and Yolo Light Clay soils. Finally, we demonstrate that FAMM yields results comparable to those from MPFD and Hydrus-1D for simulating infiltration into other soils under wet and dry conditions. These numerical experiments further confirm that, as long as a hydraulic constitutive model captures the general behaviors of other models, it can be used to yield flow fields comparable to those based on the other models.
Zhao, Huaying; Brautigam, Chad A.; Ghirlando, Rodolfo; Schuck, Peter
2013-01-01
Significant progress in the interpretation of analytical ultracentrifugation (AUC) data in the last decade has led to profound changes in the practice of AUC, both for sedimentation velocity (SV) and sedimentation equilibrium (SE). Modern computational strategies have allowed for the direct modeling of the sedimentation process of heterogeneous mixtures, resulting in SV size-distribution analyses with significantly improved detection limits and strongly enhanced resolution. These advances have transformed the practice of SV, rendering it the primary method of choice for most existing applications of AUC, such as the study of protein self- and hetero-association, the study of membrane proteins, and applications in biotechnology. New global multi-signal modeling and mass conservation approaches in SV and SE, in conjunction with the effective-particle framework for interpreting the sedimentation boundary structure of interacting systems, as well as tools for explicit modeling of the reaction/diffusion/sedimentation equations to experimental data, have led to more robust and more powerful strategies for the study of reversible protein interactions and multi-protein complexes. Furthermore, modern mathematical modeling capabilities have allowed for a detailed description of many experimental aspects of the acquired data, thus enabling novel experimental opportunities, with important implications for both sample preparation and data acquisition. The goal of the current commentary is to supplement previous AUC protocols, Current Protocols in Protein Science 20.3 (1999), 20.7 (2003), and 7.12 (2008), and provide an update describing the current tools for the study of soluble proteins, detergent-solubilized membrane proteins, and their interactions by SV and SE. PMID:23377850
Analytic Method to Estimate Particle Acceleration in Flux Ropes
NASA Technical Reports Server (NTRS)
Guidoni, S. E.; Karpen, J. T.; DeVore, C. R.
2015-01-01
The mechanism that accelerates particles to the energies required to produce the observed high-energy emission in solar flares is not well understood. Drake et al. (2006) proposed a kinetic mechanism for accelerating electrons in contracting magnetic islands formed by reconnection. In this model, particles that gyrate around magnetic field lines transit from island to island, increasing their energy by Fermi acceleration in those islands that are contracting. Based on these ideas, we present an analytic model to estimate the energy gain of particles orbiting around field lines inside a flux rope (2.5D magnetic island). We calculate the change in the velocity of the particles as the flux rope evolves in time. The method assumes a simple profile for the magnetic field of the evolving island; it can be applied to any case where flux ropes are formed. In our case, the flux-rope evolution is obtained from our recent high-resolution, compressible 2.5D MHD simulations of breakout eruptive flares. The simulations allow us to resolve in detail the generation and evolution of large-scale flux ropes as a result of sporadic and patchy reconnection in the flare current sheet. Our results show that the initial energy of particles can be increased by 2-5 times in a typical contracting island, before the island reconnects with the underlying arcade. Therefore, particles need to transit only from 3-7 islands to increase their energies by two orders of magnitude. These macroscopic regions, filled with a large number of particles, may explain the large observed rates of energetic electron production in flares. We conclude that this mechanism is a promising candidate for electron acceleration in flares, but further research is needed to extend our results to 3D flare conditions.
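The island-transit estimate in the last paragraph can be checked with one line of arithmetic: if each contracting island multiplies the particle energy by a factor f of 2-5, a hundredfold gain needs ceil(log 100 / log f) transits, which reproduces the 3-7 islands quoted above.

```python
# Back-of-envelope check of the island-transit estimate: energy grows
# multiplicatively by a gain factor per contracting island, so the
# number of transits for a total gain G is ceil(log G / log f).
import math

def islands_needed(gain_per_island, total_gain=100.0):
    """Island transits required to reach the given total energy gain."""
    return math.ceil(math.log(total_gain) / math.log(gain_per_island))

for gain in (2, 3, 5):
    print(gain, islands_needed(gain))  # gains 2..5 give 7 down to 3
```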
Extracting accurate strain measurements in bone mechanics: A critical review of current methods.
Grassi, Lorenzo; Isaksson, Hanna
2015-10-01
Osteoporosis-related fractures are a social burden that calls for more accurate fracture prediction methods. Mechanistic methods, e.g. finite element models, have been proposed as tools to better predict bone mechanical behaviour and strength. However, there is little consensus about the optimal constitutive law to describe bone as a material. Extracting reliable and relevant strain data from experimental tests is of fundamental importance for better understanding bone mechanical properties and for validating numerical models. Several techniques have been used to measure strain in experimental mechanics, with substantial differences in terms of accuracy, precision, and time- and length-scale. Each technique presents upsides and downsides that must be carefully evaluated when designing the experiment. Moreover, additional complexities are often encountered when applying such strain measurement techniques to bone, owing to its complex composite structure. This literature review examines the four most commonly adopted methods for strain measurement (strain gauges, fibre Bragg grating sensors, digital image correlation, and digital volume correlation), with a focus on studies using bone as the substrate material at the organ and tissue levels. For each method, the working principles, a summary of the main applications to bone mechanics, and a list of pros and cons are provided. PMID:26099201
Boyle, John J.; Kume, Maiko; Wyczalkowski, Matthew A.; Taber, Larry A.; Pless, Robert B.; Xia, Younan; Genin, Guy M.; Thomopoulos, Stavros
2014-01-01
When mechanical factors underlie growth, development, disease or healing, they often function through local regions of tissue where deformation is highly concentrated. Current optical techniques to estimate deformation can lack precision and accuracy in such regions due to challenges in distinguishing a region of concentrated deformation from an error in displacement tracking. Here, we present a simple and general technique for improving the accuracy and precision of strain estimation and an associated technique for distinguishing a concentrated deformation from a tracking error. The strain estimation technique improves accuracy relative to other state-of-the-art algorithms by directly estimating strain fields without first estimating displacements, resulting in a very simple method and low computational cost. The technique for identifying local elevation of strain enables for the first time the successful identification of the onset and consequences of local strain concentrating features such as cracks and tears in a highly strained tissue. We apply these new techniques to demonstrate a novel hypothesis in prenatal wound healing. More generally, the analytical methods we have developed provide a simple tool for quantifying the appearance and magnitude of localized deformation from a series of digital images across a broad range of disciplines. PMID:25165601
Frąc, Magdalena; Gryta, Agata; Oszust, Karolina; Kotowicz, Natalia
2016-01-01
Finding fungicides effective against Fusarium is a key step in chemical plant protection and in using appropriate chemical agents. Existing, conventional methods for evaluating the resistance of Fusarium isolates to fungicides are costly, time-consuming, and potentially environmentally harmful because they use large amounts of potentially toxic chemicals. Therefore, the development of fast, accurate, and effective detection methods for Fusarium resistance to fungicides is urgently required. The MT2 microplate (Biolog(TM)) method is traditionally used for bacterial identification and for evaluating the ability of bacteria to utilize different carbon substrates. However, to the best of our knowledge, there are no reports concerning the use of this tool to determine the fungicide resistance of Fusarium isolates. For this reason, the objectives of this study were to develop a fast method for detecting Fusarium resistance to fungicides and to validate the approach against both the traditional hole-plate and MT2 microplate assays. In the present study, an MT2 microplate-based assay was evaluated for potential use as an alternative resistance detection method. This was carried out using three commercially available fungicides, containing the following active substances: a triazole (tebuconazole), a benzimidazole (carbendazim), and a strobilurin (azoxystrobin), at six concentrations (0, 0.0005, 0.005, 0.05, 0.1, and 0.2%), for nine selected Fusarium isolates. The particular concentrations of each fungicide were loaded into the MT2 microplate wells. The wells were inoculated with Fusarium mycelium suspended in PM4-IF inoculating fluid; before inoculation, the suspension of each isolate was standardized to 75% transmittance. The traditional hole-plate method was used as the control assay, with fungicide concentrations of 0, 0.0005, 0.005, 0.05, 0.5, 1, 2, 5, 10, 25, and 50%. Strong relationships between MT2 microplate and traditional hole
An accurate conservative level set/ghost fluid method for simulating turbulent atomization
Desjardins, Olivier; Moureau, Vincent; Pitsch, Heinz
2008-09-10
This paper presents a novel methodology for simulating incompressible two-phase flows by combining an improved version of the conservative level set technique introduced in [E. Olsson, G. Kreiss, A conservative level set method for two phase flow, J. Comput. Phys. 210 (2005) 225-246] with a ghost fluid approach. By employing a hyperbolic tangent level set function that is transported and re-initialized using fully conservative numerical schemes, mass conservation issues that are known to affect level set methods are greatly reduced. In order to improve the accuracy of the conservative level set method, high order numerical schemes are used. The overall robustness of the numerical approach is increased by computing the interface normals from a signed distance function reconstructed from the hyperbolic tangent level set by a fast marching method. The convergence of the curvature calculation is ensured by using a least squares reconstruction. The ghost fluid technique provides a way of handling the interfacial forces and large density jumps associated with two-phase flows with good accuracy, while avoiding artificial spreading of the interface. Since the proposed approach relies on partial differential equations, its implementation is straightforward in all coordinate systems, and it benefits from high parallel efficiency. The robustness and efficiency of the approach is further improved by using implicit schemes for the interface transport and re-initialization equations, as well as for the momentum solver. The performance of the method is assessed through both classical level set transport tests and simple two-phase flow examples including topology changes. It is then applied to simulate turbulent atomization of a liquid Diesel jet at Re=3000. The conservation errors associated with the accurate conservative level set technique are shown to remain small even for this complex case.
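For reference, the hyperbolic tangent level set of Olsson and Kreiss replaces the signed distance function phi with psi = 0.5 (tanh(phi / 2 eps) + 1), a smoothed phase indicator whose transition thickness is set by eps. A minimal sketch (eps chosen arbitrarily):

```python
# The conservative level set profile: a signed distance phi maps to a
# smoothed indicator psi that is ~0 in one phase, ~1 in the other, and
# transitions over a thickness controlled by eps.
import math

def tanh_profile(phi, eps):
    """Hyperbolic tangent level set value for signed distance phi."""
    return 0.5 * (math.tanh(phi / (2.0 * eps)) + 1.0)

eps = 0.05
# The interface (phi = 0) maps to 0.5; away from it, psi saturates.
print(tanh_profile(0.0, eps))                  # -> 0.5
print(round(tanh_profile(1.0, eps), 6))        # ~1.0
print(round(tanh_profile(-1.0, eps), 6))       # ~0.0
```

Because psi is bounded and transported with conservative schemes, the integral of psi (a proxy for phase mass) is much better preserved than with a plain signed distance function, which is the motivation the abstract describes.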
A Monte Carlo Method for Making the SDSS u-Band Magnitude More Accurate
NASA Astrophysics Data System (ADS)
Gu, Jiayin; Du, Cuihua; Zuo, Wenbo; Jing, Yingjie; Wu, Zhenyu; Ma, Jun; Zhou, Xu
2016-10-01
We develop a new Monte Carlo-based method to convert the Sloan Digital Sky Survey (SDSS) u-band magnitude to the South Galactic Cap u-band Sky Survey (SCUSS) u-band magnitude. Owing to the increased accuracy of SCUSS u-band measurements, the converted u-band magnitude becomes more accurate than the original SDSS u-band magnitude, in particular at the faint end. The average u-magnitude error (for both SDSS and SCUSS) of numerous main-sequence stars with 0.2 < g - r < 0.8 increases as the g-band magnitude becomes fainter. When g = 19.5, the average magnitude error of SDSS u is 0.11. When g = 20.5, the average SDSS u error rises to 0.22; at this magnitude, however, the average magnitude error of SCUSS u is just half that of SDSS u. The SDSS u-band magnitudes of main-sequence stars with 0.2 < g - r < 0.8 and 18.5 < g < 20.5 are converted, so the maximum average error of the converted u-band magnitudes is 0.11. A potential application of this conversion is to derive a more accurate photometric metallicity calibration from SDSS observations, especially for more distant stars. Thus, we can explore stellar metallicity distributions in the Galactic halo or in stream stars.
A time-accurate adaptive grid method and the numerical simulation of a shock-vortex interaction
NASA Technical Reports Server (NTRS)
Bockelie, Michael J.; Eiseman, Peter R.
1990-01-01
A time-accurate, general-purpose, adaptive grid method is developed that is suitable for multidimensional steady and unsteady numerical simulations. The grid points are moved in a manner that generates smooth grids which resolve severe solution gradients and sharp transitions in the solution gradients. The temporal coupling of the adaptive grid and the PDE solver is performed with a grid prediction-correction method that is simple to implement and ensures the time accuracy of the grid. Time-accurate solutions of the 2-D Euler equations for an unsteady shock-vortex interaction demonstrate the ability of the adaptive method to accurately adapt the grid to multiple solution features.
A colorimetric-based accurate method for the determination of enterovirus 71 titer.
Pourianfar, Hamid Reza; Javadi, Arman; Grollo, Lara
2012-12-01
The 50% tissue culture infectious dose (TCID50) assay is still one of the most commonly used techniques for estimating virus titers. However, the traditional TCID50 assay is time-consuming, susceptible to subjective errors, and generates only quantal data. Here, we describe a colorimetric approach for the titration of Enterovirus 71 (EV71) using a modified method for making virus dilutions. In summary, titration of EV71 using MTT or MTS staining with a modified virus dilution method decreased the assay time and eliminated the subjectivity of observational results, improving the accuracy, reproducibility, and reliability of virus titration in comparison with the conventional TCID50 approach (p < 0.01). In addition, the results provided evidence that our approach correlated better with a plaque assay than the traditional TCID50 approach did. This increased accuracy also improved the ability to predict the number of virus plaque-forming units present in a solution. These improvements could be of use in any virological experimentation where a quick, accurate titration of a virus capable of causing cell destruction is required, or where a sensible estimate of the number of viral plaques based on the TCID50 of a virus is desired.
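For context, quantal TCID50 well data are conventionally reduced to a titer with an endpoint estimator such as Reed-Muench. The sketch below shows that standard calculation; it is not necessarily the computation used in this paper, and the well counts are invented.

```python
# Reed-Muench 50% endpoint estimator for quantal titration data.
def reed_muench_log_tcid50(log_dilutions, infected, total):
    """Rows ordered from most concentrated to most dilute.

    log_dilutions: log10 of each dilution (e.g. -1, -2, ...)
    infected/total: wells showing infection vs. wells inoculated
    Returns log10 of the 50% endpoint dilution.
    """
    uninfected = [t - n for n, t in zip(infected, total)]
    # Reed-Muench cumulative counts: infected accumulate from the most
    # dilute row upward, uninfected from the most concentrated row down.
    cum_inf = [sum(infected[i:]) for i in range(len(infected))]
    cum_uninf = [sum(uninfected[:i + 1]) for i in range(len(uninfected))]
    pct = [100.0 * ci / (ci + cu) for ci, cu in zip(cum_inf, cum_uninf)]
    for i in range(len(pct) - 1):
        if pct[i] >= 50.0 > pct[i + 1]:
            # proportionate distance between the bracketing dilutions
            pd = (pct[i] - 50.0) / (pct[i] - pct[i + 1])
            step = log_dilutions[i] - log_dilutions[i + 1]
            return log_dilutions[i] - pd * step
    raise ValueError("50% endpoint not bracketed by the dilution series")

# 8 wells per tenfold dilution; infected wells fall from 8/8 to 0/8
print(reed_muench_log_tcid50([-1, -2, -3, -4, -5],
                             [8, 8, 6, 2, 0],
                             [8, 8, 8, 8, 8]))  # -> -3.5
```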
Stanley, Jeffrey R.; Adkins, Joshua N.; Slysz, Gordon W.; Monroe, Matthew E.; Purvine, Samuel O.; Karpievitch, Yuliya V.; Anderson, Gordon A.; Smith, Richard D.; Dabney, Alan R.
2011-07-15
High-throughput proteomics is rapidly evolving to require high mass measurement accuracy for a variety of different applications. Increased mass measurement accuracy in bottom-up proteomics specifically allows for an improved ability to distinguish and characterize detected MS features, which may in turn be identified by, e.g., matching to entries in a database for both precursor and fragmentation mass identification methods. Many tools exist with which to score the identification of peptides from LC-MS/MS measurements or to assess matches to an accurate mass and time (AMT) tag database, but these two calculations remain distinctly unrelated. Here we present a statistical method, Statistical Tools for AMT tag Confidence (STAC), which extends our previous work incorporating prior probabilities of correct sequence identification from LC-MS/MS, as well as the quality with which LC-MS features match AMT tags, to evaluate peptide identification confidence. Compared to existing tools, we are able to obtain significantly more high-confidence peptide identifications at a given false discovery rate and additionally assign confidence estimates to individual peptide identifications. Freely available software implementations of STAC are available in both command line and as a Windows graphical application.
Accurate computation of surface stresses and forces with immersed boundary methods
NASA Astrophysics Data System (ADS)
Goza, Andres; Liska, Sebastian; Morley, Benjamin; Colonius, Tim
2016-09-01
Many immersed boundary methods solve for surface stresses that impose the velocity boundary conditions on an immersed body. These surface stresses may contain spurious oscillations that make them ill-suited for representing the physical surface stresses on the body. Moreover, these inaccurate stresses often lead to unphysical oscillations in the history of integrated surface forces such as the coefficient of lift. While the errors in the surface stresses and forces do not necessarily affect the convergence of the velocity field, it is desirable, especially in fluid-structure interaction problems, to obtain smooth and convergent stress distributions on the surface. To this end, we show that the equation for the surface stresses is an integral equation of the first kind whose ill-posedness is the source of spurious oscillations in the stresses. We also demonstrate that for sufficiently smooth delta functions, the oscillations may be filtered out to obtain physically accurate surface stresses. The filtering is applied as a post-processing procedure, so that the convergence of the velocity field is unaffected. We demonstrate the efficacy of the method by computing stresses and forces that converge to the physical stresses and forces for several test problems.
A new method of accurate hand- and arm-tracking for small primates
NASA Astrophysics Data System (ADS)
Schaffelhofer, S.; Scherberger, H.
2012-04-01
The investigation of grasping movements in cortical motor areas depends heavily on the measurement of hand kinematics. Currently used methods for small primates need either a large number of sensors or provide insufficient accuracy. Here, we present both a novel glove based on electromagnetic tracking sensors that can operate at a rate of 100 Hz and a new modeling method that allows 27 degrees of freedom (DOF) of the hand and arm to be monitored using only seven sensors. A rhesus macaque was trained to wear the glove while performing precision and power grips during a delayed grasping task in the dark without noticeable hindrance. During five recording sessions all 27 joint angles and their positions could be tracked reliably. Furthermore, the field generator did not interfere with electrophysiological recordings below 1 kHz and did not affect single-cell separation. Measurements with the glove proved to be accurate during static and dynamic testing (mean absolute error below 2° and 3°, respectively). This makes the glove a suitable solution for characterizing electrophysiological signals with respect to hand grasping and in particular for brain-machine interface applications.
Methods for accurate cold-chain temperature monitoring using digital data-logger thermometers
NASA Astrophysics Data System (ADS)
Chojnacky, M. J.; Miller, W. M.; Strouse, G. F.
2013-09-01
Complete and accurate records of vaccine temperature history are vital to preserving drug potency and patient safety. However, previously published vaccine storage and handling guidelines have failed to indicate a need for continuous temperature monitoring in vaccine storage refrigerators. We evaluated the performance of seven digital data logger models as candidates for continuous temperature monitoring of refrigerated vaccines, based on the following criteria: out-of-box performance and compliance with manufacturer accuracy specifications over the range of use; measurement stability over extended, continuous use; proper setup in a vaccine storage refrigerator so that measurements reflect liquid vaccine temperatures; and practical methods for end-user validation and establishing metrological traceability. Data loggers were tested using ice melting point checks and by comparison to calibrated thermocouples to characterize performance over 0 °C to 10 °C. We also monitored logger performance in a study designed to replicate the range of vaccine storage and environmental conditions encountered at provider offices. Based on the results of this study, the Centers for Disease Control released new guidelines on proper methods for storage, handling, and temperature monitoring of vaccines for participants in its federally-funded Vaccines for Children Program. Improved temperature monitoring practices will ultimately decrease waste from damaged vaccines, improve consumer confidence, and increase effective inoculation rates.
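The end-user validation step mentioned above reduces, in its simplest form, to a single-point offset correction from an ice melting point check. The readings below are illustrative, not values from the study.

```python
# Ice melting point check: the bath is 0.0 degC by definition, so any logger
# reading in the bath is the offset to subtract from subsequent measurements.

def ice_point_offset(logger_reading_at_ice_c):
    """Offset correction derived from an ice melting point check."""
    return 0.0 - logger_reading_at_ice_c

def corrected(reading_c, offset_c):
    """Apply the single-point offset to a raw logger reading."""
    return reading_c + offset_c

off = ice_point_offset(0.3)   # hypothetical logger reads +0.3 degC in the ice bath
t = corrected(5.1, off)       # corrected vaccine storage temperature
```

A single-point correction assumes the offset is constant over the 0 °C to 10 °C range of use, which is exactly what the comparison against calibrated thermocouples in the study is designed to verify.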
Obtaining accurate amounts of mercury from mercury compounds via electrolytic methods
Grossman, M.W.; George, W.A.
1987-07-07
A process is described for obtaining pre-determined, accurate amounts of mercury. In one embodiment, predetermined, precise amounts of Hg are separated from HgO and plated onto a cathode wire. This involves dissolving a precise amount of HgO, corresponding to the pre-determined amount of Hg desired, in an electrolyte solution of glacial acetic acid and H2O. The mercuric ions are then electrolytically reduced and plated onto a cathode, producing the required pre-determined quantity of Hg. In another embodiment, pre-determined, precise amounts of Hg are obtained from Hg2Cl2. This involves dissolving a precise amount of Hg2Cl2 in an electrolyte solution of concentrated HCl and H2O. The mercurous ions in solution are then electrolytically reduced and plated onto a cathode wire, producing the required pre-determined quantity of Hg. 1 fig.
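The stoichiometry behind the process can be made concrete with Faraday's law: the HgO mass dissolved fixes the Hg available, and the charge passed fixes the mass plated. The target mass and the assumption of 100% current efficiency are illustrative, not from the patent.

```python
# Faraday's-law sketch of the electrolytic plating arithmetic.

F = 96485.0      # Faraday constant, C/mol
M_HG = 200.59    # molar mass of Hg, g/mol
M_HGO = 216.59   # molar mass of HgO, g/mol

def hgo_mass_for_hg(target_hg_g):
    """HgO mass (g) to dissolve so full reduction yields target_hg_g of Hg."""
    return target_hg_g * M_HGO / M_HG

def charge_to_plate(target_hg_g, n_electrons=2):
    """Charge (C) to reduce Hg(2+) -> Hg(0) for the target mass,
    assuming 100% current efficiency (an idealization)."""
    return n_electrons * F * target_hg_g / M_HG

m_hgo = hgo_mass_for_hg(1.000)   # ~1.080 g HgO dissolved per 1.000 g Hg desired
q = charge_to_plate(1.000)       # ~962 C at unit current efficiency
```

For the mercurous (Hg2Cl2) embodiment the reduction is one electron per Hg, so `n_electrons=1` halves the required charge.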
MASCG: Multi-Atlas Segmentation Constrained Graph method for accurate segmentation of hip CT images.
Chu, Chengwen; Bai, Junjie; Wu, Xiaodong; Zheng, Guoyan
2015-12-01
This paper addresses the issue of fully automatic segmentation of a hip CT image with the goal to preserve the joint structure for clinical applications in hip disease diagnosis and treatment. For this purpose, we propose a Multi-Atlas Segmentation Constrained Graph (MASCG) method. The MASCG method uses multi-atlas based mesh fusion results to initialize a bone sheetness based multi-label graph cut for an accurate hip CT segmentation which has the inherent advantage of automatic separation of the pelvic region from the bilateral proximal femoral regions. We then introduce a graph cut constrained graph search algorithm to further improve the segmentation accuracy around the bilateral hip joint regions. Taking manual segmentation as the ground truth, we evaluated the present approach on 30 hip CT images (60 hips) with a 15-fold cross validation. When the present approach was compared to manual segmentation, an average surface distance error of 0.30 mm, 0.29 mm, and 0.30 mm was found for the pelvis, the left proximal femur, and the right proximal femur, respectively. A further look at the bilateral hip joint regions demonstrated an average surface distance error of 0.16 mm, 0.21 mm and 0.20 mm for the acetabulum, the left femoral head, and the right femoral head, respectively.
An Accurate Method for Measuring Airplane-Borne Conformal Antenna's Radar Cross Section
NASA Astrophysics Data System (ADS)
Guo, Shuxia; Zhang, Lei; Wang, Yafeng; Hu, Chufeng
2016-09-01
The airplane-borne conformal antenna attaches tightly to the airplane skin, so conventional measurement methods cannot determine the antenna's individual contribution to the radar cross section (RCS). This paper uses 2D microwave imaging to isolate and extract the reflectivity distribution of the airplane-borne conformal antenna. The 2D spatial spectrum of the conformal antenna is obtained through a wave-spectral transform between the 2D spatial image and the 2D spatial spectrum. After interpolation from the rectangular coordinate domain to the polar coordinate domain, spectral-domain data describing the variation of the antenna's scattering with frequency and angle are obtained. The experimental results show that the proposed measurement method greatly enhances the accuracy of the airplane-borne conformal antenna's RCS measurement, essentially eliminates the influence of the airplane skin, and more accurately reveals the antenna's RCS scattering properties.
Accurate method to study static volume-pressure relationships in small fetal and neonatal animals.
Suen, H C; Losty, P D; Donahoe, P K; Schnitzer, J J
1994-08-01
We designed an accurate method to study respiratory static volume-pressure relationships in small fetal and neonatal animals on the basis of Archimedes' principle. Our method eliminates the error caused by the compressibility of air (Boyle's law) and is sensitive to a volume change of as little as 1 microliters. Fetal and neonatal rats during the period of rapid lung development from day 19.5 of gestation (term = day 22) to day 3.5 postnatum were studied. The absolute lung volume at a transrespiratory pressure of 30-40 cmH2O increased 28-fold from 0.036 +/- 0.006 (SE) to 0.994 +/- 0.042 ml, the volume per gram of lung increased 14-fold from 0.39 +/- 0.07 to 5.59 +/- 0.66 ml/g, compliance increased 12-fold from 2.3 +/- 0.4 to 27.3 +/- 2.7 microliters/cmH2O, and specific compliance increased 6-fold from 24.9 +/- 4.5 to 152.3 +/- 22.8 microliters.cmH2O-1.g lung-1. This technique, which allowed us to compare changes during late gestation and the early neonatal period in small rodents, can be used to monitor and evaluate pulmonary functional changes after in utero pharmacological therapies in experimentally induced abnormalities such as pulmonary hypoplasia, surfactant deficiency, and congenital diaphragmatic hernia. PMID:8002489
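The reported quantities follow directly from the measured volume-pressure curve: compliance is the slope ΔV/ΔP, and specific compliance normalizes it by lung mass. The sample numbers below are illustrative, chosen near the day-3.5 values quoted in the abstract.

```python
# Static compliance and specific compliance from volume-pressure data.

def compliance(dv_ul, dp_cmh2o):
    """Static compliance in microliters per cmH2O (slope of the V-P curve)."""
    return dv_ul / dp_cmh2o

def specific_compliance(c_ul_per_cmh2o, lung_mass_g):
    """Compliance normalized per gram of lung tissue."""
    return c_ul_per_cmh2o / lung_mass_g

c = compliance(dv_ul=273.0, dp_cmh2o=10.0)     # 27.3 uL/cmH2O
sc = specific_compliance(c, lung_mass_g=0.18)  # ~152 uL/cmH2O per g (mass assumed)
```

The 1 µL sensitivity of the Archimedes-based apparatus is what makes these slopes meaningful for fetal lungs whose total volumes are only tens of microliters.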
Dewey, Steven Clifford; Whetstone, Zachary David; Kearfott, Kimberlee Jane
2011-06-01
When characterizing environmental radioactivity, whether in the soil or within concrete building structures undergoing remediation or decommissioning, it is highly desirable to know the radionuclide depth distribution. This is typically modeled using continuous analytical expressions, whose forms are believed to best represent the true source distributions. In situ gamma ray spectroscopic measurements are combined with these models to fully describe the source. Currently, the choice of analytical expressions is based upon prior experimental core sampling results at similar locations, any known site history, or radionuclide transport models. This paper presents a method, employing multiple in situ measurements at a single site, for determining the analytical form that best represents the true depth distribution present. The measurements can be made using a variety of geometries, each of which has a different sensitivity variation with source spatial distribution. Using non-linear least squares numerical optimization methods, the results can be fit to a collection of analytical models and the parameters of each model determined. The analytical expression that results in the fit with the lowest residual is selected as the most accurate representation. A cursory examination is made of the effects of measurement errors on the method. PMID:21482447
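The model-selection step can be sketched as fitting each candidate analytical depth profile by nonlinear least squares and keeping the one with the lowest residual. The candidate models, synthetic data, and noise level below are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

# Two hypothetical candidate depth-distribution models.
def exponential_profile(z, a0, alpha):
    return a0 * np.exp(-alpha * z)

def linear_profile(z, a0, b):
    return a0 + b * z

# Synthetic "measurements": responses generated from an exponential profile.
rng = np.random.default_rng(0)
z = np.linspace(0.0, 10.0, 40)   # depth, cm
data = exponential_profile(z, 1.0, 0.5) + rng.normal(0.0, 0.01, z.size)

# Fit each model and record its sum of squared residuals.
residuals = {}
for name, model, p0 in [("exponential", exponential_profile, (1.0, 0.1)),
                        ("linear", linear_profile, (1.0, -0.1))]:
    popt, _ = curve_fit(model, z, data, p0=p0)
    residuals[name] = float(np.sum((data - model(z, *popt)) ** 2))

best = min(residuals, key=residuals.get)   # lowest-residual model is selected
```

In the paper the independent data are multiple in situ gamma spectroscopic measurements with different geometries rather than depth samples, but the selection criterion, lowest fit residual across candidate analytical forms, is the same.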
Shu, Yu-Chen; Chern, I-Liang; Chang, Chien C.
2014-10-15
Most elliptic interface solvers become complicated for complex interface problems at those “exceptional points” where there are not enough neighboring interior points for high order interpolation. Such complication increases especially in three dimensions. Usually, the solvers are thus reduced to low order accuracy. In this paper, we classify these exceptional points and propose two recipes to maintain order of accuracy there, aiming at improving the previous coupling interface method [26]. Yet the idea is also applicable to other interface solvers. The main idea is to have at least first order approximations for second order derivatives at those exceptional points. Recipe 1 is to use the finite difference approximation for the second order derivatives at a nearby interior grid point, whenever this is possible. Recipe 2 is to flip domain signatures and introduce a ghost state so that a second-order method can be applied. This ghost state is a smooth extension of the solution at the exceptional point from the other side of the interface. The original state is recovered by a post-processing using nearby states and jump conditions. The choice of recipes is determined by a classification scheme of the exceptional points. The method renders the solution and its gradient uniformly second-order accurate in the entire computational domain. Numerical examples are provided to illustrate the second order accuracy of the presently proposed method in approximating the gradients of the original states for some complex interfaces which we had tested previously in two and three dimensions, and a real molecule (1D63), which is double-helix shaped and composed of hundreds of atoms.
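Recipe 1 can be illustrated in one dimension: when a point lacks same-side neighbors, the standard central-difference second derivative is evaluated at an adjacent interior point instead, which still yields a first-order-accurate value at the exceptional point. The grid and test function are illustrative.

```python
import numpy as np

h = 0.01
x = np.arange(0.0, 1.0 + h / 2, h)   # uniform grid on [0, 1]
u = np.sin(x)                        # smooth test function, u'' = -sin(x)

def second_derivative_central(u, i, h):
    """Standard second-order central difference for u'' at grid index i."""
    return (u[i - 1] - 2.0 * u[i] + u[i + 1]) / h ** 2

i_exc = 50                                            # pretend this point is "exceptional"
approx = second_derivative_central(u, i_exc + 1, h)   # borrow the neighboring interior point
exact = -np.sin(x[i_exc])
# |approx - exact| = O(h): the shift by one grid point dominates the
# O(h^2) truncation error of the central difference itself.
```

This first-order approximation of the second derivative at isolated exceptional points is exactly what the paper shows is enough to preserve uniform second-order accuracy of the solution and its gradient.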
Methods for accurate estimation of net discharge in a tidal channel
Simpson, M.R.; Bland, R.
2000-01-01
Accurate estimates of net residual discharge in tidally affected rivers and estuaries are possible because of recently developed ultrasonic discharge measurement techniques. Previous discharge estimates using conventional mechanical current meters and methods based on stage/discharge relations or water slope measurements often yielded errors that were as great as or greater than the computed residual discharge. Ultrasonic measurement methods consist of: 1) the use of ultrasonic instruments for the measurement of a representative 'index' velocity used for in situ estimation of mean water velocity and 2) the use of the acoustic Doppler current discharge measurement system to calibrate the index velocity measurement data. Methods used to calibrate (rate) the index velocity to the channel velocity measured using the Acoustic Doppler Current Profiler are the most critical factors affecting the accuracy of net discharge estimation. The index velocity first must be related to mean channel velocity and then used to calculate instantaneous channel discharge. Finally, discharge is low-pass filtered to remove the effects of the tides. An ultrasonic velocity meter discharge-measurement site in a tidally affected region of the Sacramento-San Joaquin Rivers was used to study the accuracy of the index velocity calibration procedure. Calibration data consisting of ultrasonic velocity meter index velocity and concurrent acoustic Doppler discharge measurement data were collected during three time periods. Two sets of data were collected during a spring tide (monthly maximum tidal current) and one set during a neap tide (monthly minimum tidal current). The relative magnitude of instrumental errors, acoustic Doppler discharge measurement errors, and calibration errors were evaluated. Calibration error was found to be the most significant source of error in estimating net discharge. Using a comprehensive calibration method, net discharge estimates developed from the three
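The three-step procedure, rate the index velocity against ADCP data, compute instantaneous discharge, then low-pass filter out the tides, can be sketched on synthetic data. The rating coefficients, channel area, M2-only tide, and 25-hour averaging window are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(0.0, 30 * 24.0, 1.0)                 # hourly samples, 30 days
tide = 0.8 * np.sin(2 * np.pi * t / 12.42)         # M2 tidal velocity, m/s
index_v = 0.05 + 0.9 * (0.10 + tide)               # hypothetical instrument index velocity

# Step 1: rating -- relate index velocity to mean channel velocity by least
# squares against concurrent ADCP calibration measurements (first ~4 days).
cal_idx = index_v[:100] + rng.normal(0.0, 0.005, 100)
cal_adcp = 0.10 + tide[:100]
slope, intercept = np.polyfit(cal_idx, cal_adcp, 1)
mean_v = slope * index_v + intercept               # rated mean channel velocity

# Step 2: instantaneous discharge from velocity and (assumed) channel area.
area = 500.0                                       # m^2
q = area * mean_v                                  # m^3/s

# Step 3: low-pass filter (25-h moving average here) to remove tidal energy,
# leaving the net residual discharge.
kernel = np.ones(25) / 25.0
q_net = np.convolve(q, kernel, mode="valid")       # hovers near the true 50 m^3/s
```

A simple moving average is used here for clarity; operational practice typically uses a sharper tidal filter, but the structure of the computation is the same.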
40 CFR 136.6 - Method modifications and analytical requirements.
Code of Federal Regulations, 2011 CFR
2011-07-01
... parameter that may be comprised of a number of substances. Examples of such analytes include temperature... requirements of paragraph (b)(2) of this section are met. (ii) If the characteristics of a wastewater matrix... must be dechlorinated prior to the addition of such salts. (iii) If the......
An analytical method for the measurement of nonviable bioaerosols.
Menetrez, M Y; Foarde, K K; Ensor, D S
2001-10-01
Exposures from indoor environments are a major issue for evaluating total long-term personal exposures to the fine fraction (<2.5 microm in aerodynamic diameter) of particulate matter (PM). It is widely accepted in the indoor air quality (IAQ) research community that biocontamination is one of the important indoor air pollutants. Major indoor air biocontaminants include mold, bacteria, dust mites, and other antigens. Once the biocontaminants or their metabolites become airborne, IAQ could be significantly deteriorated. The airborne biocontaminants or their metabolites can induce irritational, allergic, infectious, and chemical responses in exposed individuals. Biocontaminants, such as some mold spores or pollen grains, because of their size and mass, settle rapidly within the indoor environment. Over time they may become nonviable and fragmented by the process of desiccation. Desiccated nonviable fragments of organisms are common and can be toxic or allergenic, depending upon the specific organism or organism component. Once these smaller and lighter fragments of biological PM become suspended in air, they have a greater tendency to stay suspended. Although some bioaerosols have been identified, few have been quantitatively studied for their prevalence within the total indoor PM with time, or for their affinity to penetrate indoors. This paper describes a preliminary research effort to develop a methodology for the measurement of nonviable biologically based PM, analyzing for mold and ragweed antigens and endotoxins. The research objectives include the development of a set of analytical methods and the comparison of impactor media and sample size, and the quantification of the relationship between outdoor and indoor levels of bioaerosols. Indoor and outdoor air samples were passed through an Andersen nonviable cascade impactor in which particles from 0.2 to 9.0 microm were collected and analyzed. The presence of mold, ragweed, and endotoxin was found in all eight
Laser: a Tool for Optimization and Enhancement of Analytical Methods
Preisler, Jan
1997-01-01
In this work, we use lasers to enhance possibilities of laser desorption methods and to optimize coating procedure for capillary electrophoresis (CE). We use several different instrumental arrangements to characterize matrix-assisted laser desorption (MALD) at atmospheric pressure and in vacuum. In imaging mode, 488-nm argon-ion laser beam is deflected by two acousto-optic deflectors to scan plumes desorbed at atmospheric pressure via absorption. All absorbing species, including neutral molecules, are monitored. Interesting features, e.g. differences between the initial plume and subsequent plumes desorbed from the same spot, or the formation of two plumes from one laser shot are observed. Total plume absorbance can be correlated with the acoustic signal generated by the desorption event. A model equation for the plume velocity as a function of time is proposed. Alternatively, the use of a static laser beam for observation enables reliable determination of plume velocities even when they are very high. Static scattering detection reveals negative influence of particle spallation on MS signal. Ion formation during MALD was monitored using 193-nm light to photodissociate a portion of insulin ion plume. These results define the optimal conditions for desorbing analytes from matrices, as opposed to achieving a compromise between efficient desorption and efficient ionization as is practiced in mass spectrometry. In CE experiment, we examined changes in a poly(ethylene oxide) (PEO) coating by continuously monitoring the electroosmotic flow (EOF) in a fused-silica capillary during electrophoresis. An imaging CCD camera was used to follow the motion of a fluorescent neutral marker zone along the length of the capillary excited by 488-nm Ar-ion laser. The PEO coating was shown to reduce the velocity of EOF by more than an order of magnitude compared to a bare capillary at pH 7.0. The coating protocol was important, especially at an intermediate pH of 7.7. The increase of p
Lanman, Richard B; Mortimer, Stefanie A; Zill, Oliver A; Sebisanovic, Dragan; Lopez, Rene; Blau, Sibel; Collisson, Eric A; Divers, Stephen G; Hoon, Dave S B; Kopetz, E Scott; Lee, Jeeyun; Nikolinakos, Petros G; Baca, Arthur M; Kermani, Bahram G; Eltoukhy, Helmy; Talasaz, AmirAli
2015-01-01
Next-generation sequencing of cell-free circulating solid tumor DNA addresses two challenges in contemporary cancer care. First this method of massively parallel and deep sequencing enables assessment of a comprehensive panel of genomic targets from a single sample, and second, it obviates the need for repeat invasive tissue biopsies. Digital Sequencing™ is a novel method for high-quality sequencing of circulating tumor DNA simultaneously across a comprehensive panel of over 50 cancer-related genes with a simple blood test. Here we report the analytic and clinical validation of the gene panel. Analytic sensitivity down to 0.1% mutant allele fraction is demonstrated via serial dilution studies of known samples. Near-perfect analytic specificity (> 99.9999%) enables complete coverage of many genes without the false positives typically seen with traditional sequencing assays at mutant allele frequencies or fractions below 5%. We compared digital sequencing of plasma-derived cell-free DNA to tissue-based sequencing on 165 consecutive matched samples from five outside centers in patients with stage III-IV solid tumor cancers. Clinical sensitivity of plasma-derived NGS was 85.0%, comparable to 80.7% sensitivity for tissue. The assay success rate on 1,000 consecutive samples in clinical practice was 99.8%. Digital sequencing of plasma-derived DNA is indicated in advanced cancer patients to prevent repeated invasive biopsies when the initial biopsy is inadequate, unobtainable for genomic testing, or uninformative, or when the patient's cancer has progressed despite treatment. Its clinical utility is derived from reduction in the costs, complications and delays associated with invasive tissue biopsies for genomic testing.
Fragoso, Wallace; Allegrini, Franco; Olivieri, Alejandro C
2016-08-24
Generalized analytical sensitivity (γ) is proposed as a new figure of merit, which can be estimated from a multivariate calibration data set. It can be confidently applied to compare different calibration methodologies, and helps to solve literature inconsistencies on the relationship between classical sensitivity and prediction error. In contrast to the classical plain sensitivity, γ incorporates the noise properties in its definition, and its inverse is well correlated with root mean square errors of prediction in the presence of general noise structures. The proposal is supported by studying simulated and experimental first-order multivariate calibration systems with various models, namely multiple linear regression, principal component regression (PCR) and maximum likelihood PCR (MLPCR). The simulations included instrumental noise of different types: independently and identically distributed (iid), correlated (pink) and proportional noise, while the experimental data carried noise which is clearly non-iid. PMID:27496995
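The claimed correlation between inverse sensitivity and prediction error can be checked numerically for the simplest (iid-noise, linear calibration) case: the prediction error for an analyte equals the noise level times the norm of its regression vector, i.e. the inverse of the classical sensitivity. The two-component Gaussian spectra below are synthetic illustrations.

```python
import numpy as np

rng = np.random.default_rng(2)
wavelengths = np.linspace(0.0, 1.0, 200)
# Two overlapping pure-component spectra (columns of S), a classic CLS setup.
S = np.column_stack([np.exp(-0.5 * ((wavelengths - 0.45) / 0.08) ** 2),
                     np.exp(-0.5 * ((wavelengths - 0.55) / 0.08) ** 2)])
B = np.linalg.pinv(S)                   # rows are the regression vectors
inv_sens = np.linalg.norm(B, axis=1)    # inverse classical sensitivity per analyte

# Monte-Carlo RMSEP for analyte 0 under iid instrumental noise.
sigma = 0.01
c_true = np.array([1.0, 0.5])
errors = []
for _ in range(2000):
    r = S @ c_true + rng.normal(0.0, sigma, S.shape[0])
    errors.append((B @ r)[0] - c_true[0])
rmsep = float(np.sqrt(np.mean(np.square(errors))))
predicted = sigma * inv_sens[0]         # theory: RMSEP = sigma / SEN for iid noise
```

For non-iid (correlated or proportional) noise this identity breaks down, which is precisely the gap the generalized sensitivity γ is introduced to close.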
Sappenfield, William M; Peck, Magda G; Gilbert, Carol S; Haynatzka, Vera R; Bryant, Thomas
2010-11-01
The Perinatal Periods of Risk (PPOR) methods provide the necessary framework and tools for large urban communities to investigate feto-infant mortality problems. Adapted from the Periods of Risk model developed by Dr. Brian McCarthy, the six-stage PPOR approach includes epidemiologic methods to be used in conjunction with community planning processes. Stage 2 of the PPOR approach has three major analytic parts: Analytic Preparation, which involves acquiring, preparing, and assessing vital records files; Phase 1 Analysis, which identifies local opportunity gaps; and Phase 2 Analyses, which investigate the opportunity gaps to determine likely causes of feto-infant mortality and to suggest appropriate actions. This article describes the first two analytic parts of PPOR, including methods, innovative aspects, rationale, limitations, and a community example. In Analytic Preparation, study files are acquired and prepared and data quality is assessed. In Phase 1 Analysis, feto-infant mortality is estimated for four distinct perinatal risk periods defined by both birthweight and age at death. These mutually exclusive risk periods are labeled Maternal Health and Prematurity, Maternal Care, Newborn Care, and Infant Health to suggest primary areas of prevention. Disparities within the study community are identified by comparing geographic areas, subpopulations, and time periods. Excess mortality numbers and rates are estimated by comparing the study population to an optimal reference population. This excess mortality is described as the opportunity gap because it indicates where communities have the potential to make improvement.
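The "opportunity gap" arithmetic described above is a rate comparison: excess deaths in a risk period are the observed deaths minus the deaths expected if the study population experienced the reference population's rate. All counts below are illustrative, not from the article.

```python
# PPOR-style excess mortality calculation against an optimal reference population.

def excess_mortality(deaths, births, reference_rate_per_1000):
    """Return (excess deaths, excess rate per 1000) vs. a reference population."""
    rate = 1000.0 * deaths / births
    expected = births * reference_rate_per_1000 / 1000.0
    return deaths - expected, rate - reference_rate_per_1000

excess_deaths, excess_rate = excess_mortality(
    deaths=120, births=20000, reference_rate_per_1000=3.5)
# 120 observed vs 70 expected deaths: 50 excess deaths, 2.5 per 1000 excess rate
```

In Phase 1 this calculation is repeated for each of the four birthweight-by-age risk periods (Maternal Health and Prematurity, Maternal Care, Newborn Care, Infant Health), so the excess points to a primary area of prevention.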
Selection of reference standard during method development using the analytical hierarchy process.
Sun, Wan-yang; Tong, Ling; Li, Dong-xiang; Huang, Jing-yi; Zhou, Shui-ping; Sun, Henry; Bi, Kai-shun
2015-03-25
Reference standard is critical for ensuring reliable and accurate method performance. One important issue is how to select the ideal one from the alternatives. Unlike the optimization of parameters, the criteria of the reference standard are always immeasurable. The aim of this paper is to recommend a quantitative approach for the selection of reference standard during method development based on the analytical hierarchy process (AHP) as a decision-making tool. Six alternative single reference standards were assessed in quantitative analysis of six phenolic acids from Salvia Miltiorrhiza and its preparations by using ultra-performance liquid chromatography. The AHP model simultaneously considered six criteria related to reference standard characteristics and method performance, comprising feasibility of acquisition, abundance in samples, chemical stability, accuracy, precision, and robustness. The priority of each alternative was calculated using the standard AHP analysis method. The results showed that protocatechuic aldehyde is the ideal reference standard, and rosmarinic acid, with about 79.8% of its priority, is the second choice. The determination results successfully verified the evaluation ability of this model. The AHP allowed us to comprehensively consider the benefits and risks of the alternatives. It was an effective and practical tool for optimization of reference standards during method development. PMID:25636165
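The standard AHP calculation the paper applies can be sketched directly: priorities are the normalized principal eigenvector of a reciprocal pairwise-comparison matrix, with a consistency ratio to validate the judgments. The 3x3 matrix below is illustrative, not the paper's six-criteria matrix.

```python
import numpy as np

# Reciprocal pairwise-comparison matrix (Saaty scale): A[i, j] = importance of
# criterion i relative to criterion j, with A[j, i] = 1 / A[i, j].
A = np.array([[1.0, 3.0, 5.0],
              [1 / 3, 1.0, 2.0],
              [1 / 5, 1 / 2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                  # principal eigenvalue lambda_max
priorities = np.abs(eigvecs[:, k].real)
priorities /= priorities.sum()               # normalized priority weights

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)         # consistency index
cr = ci / 0.58                               # random index RI = 0.58 for n = 3
# cr < 0.1 indicates acceptably consistent pairwise judgments
```

For the paper's use case the same machinery runs on a 6x6 criteria matrix, and each alternative's overall priority is the criteria-weighted sum of its scores, which is how the 79.8% ranking of the second-choice standard arises.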
Hanford environmental analytical methods: Methods as of March 1990. Volume 3, Appendix A2-I
Goheen, S.C.; McCulloch, M.; Daniel, J.L.
1993-05-01
This paper from the analytical laboratories at Hanford describes the method used to measure pH of single-shell tank core samples. Sludge or solid samples are mixed with deionized water. The pH electrode used combines both a sensor and reference electrode in one unit. The meter amplifies the input signal from the electrode and displays the pH visually.
NASA Astrophysics Data System (ADS)
Papp, P.; Matejčík, Š.; Mach, P.; Urban, J.; Paidarová, I.; Horáček, J.
2013-06-01
The method of analytic continuation in the coupling constant (ACCC) in combination with use of the statistical Padé approximation is applied to the determination of resonance energy and width of some amino acids and formic acid dimer. Standard quantum chemistry codes provide accurate data which can be used for analytic continuation in the coupling constant to obtain the resonance energy and width of organic molecules with a good accuracy. The obtained results are compared with the existing experimental ones.
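The Padé step at the heart of ACCC can be demonstrated on a known series: a rational [m/n] approximant built from series coefficients can be continued beyond the radius where the truncated series degrades. The exp(x) series here stands in for the energy-versus-coupling data; in ACCC the approximant is evaluated at complex coupling to extract resonance position and width.

```python
import numpy as np
from scipy.interpolate import pade

# Taylor coefficients of exp(x) through 4th order (illustrative input series).
taylor = [1.0, 1.0, 1.0 / 2.0, 1.0 / 6.0, 1.0 / 24.0]
p, q = pade(taylor, 2)            # [2/2] Pade approximant: exp(x) ~ p(x) / q(x)

x = 1.0
approx = p(x) / q(x)              # rational approximant at x = 1
series = sum(taylor)              # truncated series at the same order
# The [2/2] approximant is closer to e than the truncated series of equal order,
# and unlike the series it can be evaluated at complex x, e.g. p(1j) / q(1j).
```

The statistical aspect of the cited method comes from fitting many such approximants of varying order to the input data and treating the scatter of the continued values as an accuracy estimate.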
NASA Astrophysics Data System (ADS)
Bozkaya, Uǧur; Sherrill, C. David
2013-08-01
Orbital-optimized coupled-electron pair theory [or simply "optimized CEPA(0)," OCEPA(0), for short] and its analytic energy gradients are presented. For variational optimization of the molecular orbitals for the OCEPA(0) method, a Lagrangian-based approach is used along with an orbital direct inversion of the iterative subspace algorithm. The cost of the method is comparable to that of CCSD [O(N6) scaling] for energy computations. However, for analytic gradient computations the OCEPA(0) method is only half as expensive as CCSD since there is no need to solve the λ2-amplitude equation for OCEPA(0). The performance of the OCEPA(0) method is compared with that of the canonical MP2, CEPA(0), CCSD, and CCSD(T) methods, for equilibrium geometries, harmonic vibrational frequencies, and hydrogen transfer reactions between radicals. For bond lengths of both closed and open-shell molecules, the OCEPA(0) method improves upon CEPA(0) and CCSD by 25%-43% and 38%-53%, respectively, with Dunning's cc-pCVQZ basis set. Especially for the open-shell test set, the performance of OCEPA(0) is comparable with that of CCSD(T) (ΔR is 0.0003 Å on average). For harmonic vibrational frequencies of closed-shell molecules, the OCEPA(0) method again outperforms CEPA(0) and CCSD by 33%-79% and 53%-79%, respectively. For harmonic vibrational frequencies of open-shell molecules, the mean absolute error (MAE) of the OCEPA(0) method (39 cm-1) is fortuitously even better than that of CCSD(T) (50 cm-1), while the MAEs of CEPA(0) (184 cm-1) and CCSD (84 cm-1) are considerably higher. For complete basis set estimates of hydrogen transfer reaction energies, the OCEPA(0) method again exhibits a substantially better performance than CEPA(0), providing a mean absolute error of 0.7 kcal mol-1, which is more than 6 times lower than that of CEPA(0) (4.6 kcal mol-1), and comparing to MP2 (7.7 kcal mol-1) there is a more than 10-fold reduction in errors. Whereas the MAE for the CCSD method is only 0.1 kcal
Bozkaya, Uğur; Sherrill, C David
2013-08-01
Orbital-optimized coupled-electron pair theory [or simply "optimized CEPA(0)," OCEPA(0), for short] and its analytic energy gradients are presented. For variational optimization of the molecular orbitals for the OCEPA(0) method, a Lagrangian-based approach is used along with an orbital direct inversion of the iterative subspace algorithm. The cost of the method is comparable to that of CCSD [O(N(6)) scaling] for energy computations. However, for analytic gradient computations the OCEPA(0) method is only half as expensive as CCSD since there is no need to solve the λ2-amplitude equation for OCEPA(0). The performance of the OCEPA(0) method is compared with that of the canonical MP2, CEPA(0), CCSD, and CCSD(T) methods, for equilibrium geometries, harmonic vibrational frequencies, and hydrogen transfer reactions between radicals. For bond lengths of both closed and open-shell molecules, the OCEPA(0) method improves upon CEPA(0) and CCSD by 25%-43% and 38%-53%, respectively, with Dunning's cc-pCVQZ basis set. Especially for the open-shell test set, the performance of OCEPA(0) is comparable with that of CCSD(T) (ΔR is 0.0003 Å on average). For harmonic vibrational frequencies of closed-shell molecules, the OCEPA(0) method again outperforms CEPA(0) and CCSD by 33%-79% and 53%-79%, respectively. For harmonic vibrational frequencies of open-shell molecules, the mean absolute error (MAE) of the OCEPA(0) method (39 cm(-1)) is fortuitously even better than that of CCSD(T) (50 cm(-1)), while the MAEs of CEPA(0) (184 cm(-1)) and CCSD (84 cm(-1)) are considerably higher. For complete basis set estimates of hydrogen transfer reaction energies, the OCEPA(0) method again exhibits a substantially better performance than CEPA(0), providing a mean absolute error of 0.7 kcal mol(-1), which is more than 6 times lower than that of CEPA(0) (4.6 kcal mol(-1)), and comparing to MP2 (7.7 kcal mol(-1)) there is a more than 10-fold reduction in errors. Whereas the MAE for the CCSD method is
Beets, Caryn; Dubery, Ian
2011-12-15
Camalexin is a phytoalexin of Arabidopsis thaliana and an important component of inducible defenses. Accurate quantification of low concentrations suffers from interference by structurally related metabolites. A. thaliana plants were induced with silver nitrate and camalexin was extracted using methanol and identified and quantified by (i) TLC as a blue fluorescent band, (ii) microtiter plate-based fluorescence spectroscopy, (iii) GC on a midpolar column coupled to flame ionization detection, (iv) C(18) HPLC coupled to a photodiode detector, and (v) UPLC coupled to a mass spectrometer detector. Standard curves over the range of 0.1-15 μg ml(-1) gave R(2) values from 0.996 to 0.999. The different methods were compared and evaluated for their ability to detect and quantify increasing concentrations (<0.4-8 μg g(-1) FW) of camalexin. Each of the techniques presented advantages and disadvantages with regard to accuracy, precision, interference, analytical sensitivity, and limits of detection. TLC is a good qualitative technique for the identification of camalexin and fluorescence spectroscopy is subject to quenching when performed on crude extracts. Comparable results were obtained with GC-FID, HPLC-PDA, and UPLC-MS, with UPLC-MS having the added advantage of short analysis times and detection based on accurate mass. PMID:21910963
Ionescu, Crina-Maria; Geidl, Stanislav; Svobodová Vařeková, Radka; Koča, Jaroslav
2013-10-28
We focused on the parametrization and evaluation of empirical models for fast and accurate calculation of conformationally dependent atomic charges in proteins. The models were based on the electronegativity equalization method (EEM), and the parametrization procedure was tailored to proteins. We used large protein fragments as reference structures and fitted the EEM model parameters using atomic charges computed by three population analyses (Mulliken, Natural, iterative Hirshfeld), at the Hartree-Fock level with two basis sets (6-31G*, 6-31G**) and in two environments (gas phase, implicit solvation). We parametrized and successfully validated 24 EEM models. When tested on insulin and ubiquitin, all models reproduced quantum mechanics level charges well and were consistent with respect to population analysis and basis set. Specifically, the models showed on average a correlation of 0.961, RMSD 0.097 e, and average absolute error per atom 0.072 e. The EEM models can be used with the freely available EEM implementation EEM_SOLVER.
Accurate reliability analysis method for quantum-dot cellular automata circuits
NASA Astrophysics Data System (ADS)
Cui, Huanqing; Cai, Li; Wang, Sen; Liu, Xiaoqiang; Yang, Xiaokuo
2015-10-01
The probabilistic transfer matrix (PTM) is a widely used model in circuit reliability research. However, the PTM model cannot reflect the impact of input signals on reliability, so it does not fully conform to the switching mechanism of quantum-dot cellular automata (QCA), a novel field-coupled nanoelectronic device. It is therefore difficult to obtain accurate results when the PTM model is used to analyze the reliability of QCA circuits. To solve this problem, we present fault tree models of fundamental QCA devices for different input signals. Binary decision diagrams (BDDs) are then used to quantitatively investigate the reliability of two QCA XOR gates based on the presented models. With the fault tree models, the impact of input signals on reliability can be identified clearly, and the crucial components of a circuit can be located precisely from the importance values (IVs) of its components. This method therefore contributes to the construction of reliable QCA circuits.
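When component failure events are independent and no event is repeated, evaluating a fault tree reduces to simple probability algebra over its AND/OR gates. The sketch below is a generic illustration only; the gate structure and failure probabilities are hypothetical and not taken from the paper:

```python
# Hypothetical fault tree: the top event occurs if the first gate fails
# OR both events of a redundant pair (second gate AND inverter) occur.
def or_gate(*p_fail):
    # OR gate: the subsystem fails if any input event occurs.
    ok = 1.0
    for p in p_fail:
        ok *= 1.0 - p
    return 1.0 - ok

def and_gate(*p_fail):
    # AND gate: the subsystem fails only if all input events occur.
    out = 1.0
    for p in p_fail:
        out *= p
    return out

p_top = or_gate(0.01, and_gate(0.02, 0.05))  # made-up failure probabilities
print(f"top-event (circuit failure) probability: {p_top:.5f}")
```

A BDD-based evaluator generalizes this to trees with repeated basic events, which the simple product formulas above cannot handle.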
Method for accurate sizing of pulmonary vessels from 3D medical images
NASA Astrophysics Data System (ADS)
O'Dell, Walter G.
2015-03-01
Detailed characterization of vascular anatomy, in particular the quantification of changes in the distribution of vessel sizes and of vascular pruning, is essential for the diagnosis and management of a variety of pulmonary vascular diseases and for the care of cancer survivors who have received radiation to the thorax. Clinical estimates of vessel radii are typically based on setting a pixel intensity threshold and counting how many "on" pixels are present across the vessel cross-section. A more objective approach introduced recently involves fitting the image with a library of spherical Gaussian filters and using the size of the best-matching filter as the estimate of vessel diameter. However, both of these approaches have significant accuracy limitations, including the mismatch between a Gaussian intensity distribution and that of real vessels. Here we introduce and demonstrate a novel approach for accurate vessel sizing using 3D appearance models of a tubular structure along a curvilinear trajectory in 3D space. The vessel branch trajectories are represented with cubic Hermite splines and the tubular branch surfaces as a finite element surface mesh. An iterative parameter adjustment scheme is employed to optimally match the appearance models to a patient's chest X-ray computed tomography (CT) scan, generating estimates of branch radii and trajectories with subpixel resolution. The method is demonstrated on pulmonary vasculature in an adult human CT scan and on 2D simulated test cases.
TH-C-BRD-02: Analytical Modeling and Dose Calculation Method for Asymmetric Proton Pencil Beams
Gelover, E; Wang, D; Hill, P; Flynn, R; Hyer, D
2014-06-15
Purpose: A dynamic collimation system (DCS), which consists of two pairs of orthogonal trimmer blades driven by linear motors, has been proposed to decrease the lateral penumbra in pencil beam scanning proton therapy. The DCS reduces lateral penumbra by intercepting the proton pencil beam near the lateral boundary of the target in the beam's eye view. The resultant trimmed pencil beams are asymmetric and laterally shifted, and therefore existing pencil beam dose calculation algorithms are not capable of trimmed-beam dose calculations. This work develops a method to model and compute dose from trimmed pencil beams when using the DCS. Methods: MCNPX simulations were used to determine the dose distributions expected from various trimmer configurations of the DCS. Using these data, the lateral distribution of individual beamlets was modeled with a 2D asymmetric Gaussian function. The integral depth dose (IDD) of each configuration was also modeled by combining the IDD of an untrimmed pencil beam with a linear correction factor. The convolution of these two terms, along with the Highland approximation to account for lateral growth of the beam along the depth direction, allows a trimmed pencil beam dose distribution to be generated analytically. The algorithm was validated by computing dose for a single-energy-layer 5×5 cm² treatment field, defined by the trimmers, using both the proposed method and MCNPX beamlets. Results: The Gaussian-modeled asymmetric lateral profiles along the principal axes match the MCNPX data very well (R² ≥ 0.95 at the depth of the Bragg peak). For the 5×5 cm² treatment plan created with both the modeled and MCNPX pencil beams, the passing rate of the 3D gamma test was 98% using a standard threshold of 3%/3 mm. Conclusion: An analytical method capable of accurately computing asymmetric pencil beam dose when using the DCS has been developed.
Sonoluminescence Spectroscopy as a Promising New Analytical Method
NASA Astrophysics Data System (ADS)
Yurchenko, O. I.; Kalinenko, O. S.; Baklanov, A. N.; Belov, E. A.; Baklanova, L. V.
2016-03-01
The sonoluminescence intensity of Cs, Ru, K, Na, Li, Sr, In, Ga, Ca, Th, Cr, Pb, Mn, Ag, and Mg salts in aqueous solutions of various concentrations was investigated as a function of ultrasound frequency and intensity. Techniques for the determination of these elements in solutions of table salt and their own salts were developed. It was shown that the proposed analytical technique gave results at high concentrations with better metrological characteristics than atomic-absorption spectroscopy because the samples were not diluted.
Fast, accurate and easy-to-pipeline methods for amplicon sequence processing
NASA Astrophysics Data System (ADS)
Antonielli, Livio; Sessitsch, Angela
2016-04-01
Next-generation sequencing (NGS) technologies have for years been established as an essential resource in microbiology. While metagenomic studies benefit from the continuously increasing throughput of the Illumina (Solexa) technology, the spread of third-generation sequencing technologies (PacBio, Oxford Nanopore) is taking whole-genome sequencing beyond the assembly of fragmented draft genomes, making it possible to finish bacterial genomes even without short-read correction. Besides (meta)genomic analysis, next-generation amplicon sequencing remains fundamental for microbial studies. Amplicon sequencing of the 16S rRNA gene and the ITS (Internal Transcribed Spacer) region is a well-established, widespread method for a multitude of purposes concerning the identification and comparison of archaeal/bacterial (16S rRNA gene) and fungal (ITS) communities in diverse environments. Numerous pipelines have been developed to process NGS-derived amplicon sequences, among which Mothur, QIIME, and USEARCH are the best known and most cited. The entire process, from initial raw sequence data through read error correction, paired-end read assembly, primer stripping, quality filtering, clustering, OTU taxonomic classification, and BIOM table rarefaction, as well as alternative "normalization" methods, will be addressed. An effective and accurate strategy will be presented using state-of-the-art bioinformatic tools, and an example of a straightforward one-script pipeline for 16S rRNA gene or ITS MiSeq amplicon sequencing will be provided. Finally, instructions on how to automatically retrieve nucleotide sequences from NCBI, and therefore apply the pipeline to targets other than the 16S rRNA gene (Greengenes, SILVA) and ITS (UNITE), will be discussed.
NASA Astrophysics Data System (ADS)
Westendorp, Hendrik; Nuver, Tonnis T.; Moerland, Marinus A.; Minken, André W.
2015-10-01
The geometry of a permanent prostate implant varies over time. Seeds can migrate, and edema of the prostate affects the position of seeds. Seed movements directly influence dosimetry, which relates to treatment quality. We present a method that tracks all individual seeds over time, allowing quantification of seed movements. This linking procedure was tested on transrectal ultrasound (TRUS) and cone-beam CT (CBCT) datasets of 699 patients. These datasets were acquired intraoperatively during a dynamic implantation procedure that combines both imaging modalities. The procedure was subdivided into four automatic linking steps. (I) The Hungarian algorithm was applied to initially link seeds in CBCT and the corresponding TRUS datasets. (II) Strands were identified and optimized based on curvature and line fits: non-optimal links were removed. (III) The positions of unlinked seeds were reviewed and linked to incomplete strands if within curvature and distance thresholds. (IV) Finally, seeds close to strands were linked, even if the curvature threshold was violated. After linking the seeds, an affine transformation was applied. The procedure was repeated until the results were stable or the 6th iteration ended. All results were visually reviewed for mismatches and uncertainties. Eleven implants showed a mismatch, and in 12 cases an uncertainty was identified. On average the linking procedure took 42 ms per case. This accurate and fast method has the potential to be used for other time spans, such as Day 30, and other imaging modalities. It can potentially be used during a dynamic implantation procedure to evaluate the quality of the permanent prostate implant faster and better.
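Step (I), the initial one-to-one linking, can be sketched with SciPy's Hungarian-algorithm implementation applied to a pairwise distance matrix. The seed coordinates below are invented for illustration; the real procedure uses seed positions reconstructed from the CBCT and TRUS datasets:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

# Hypothetical seed positions (mm) in the two modalities.
cbct = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [0.0, 10.0, 0.0]])
trus = np.array([[9.5, 0.2, 0.1], [0.3, 9.8, 0.0], [0.1, 0.2, -0.1]])

cost = cdist(cbct, trus)                  # pairwise Euclidean distances
rows, cols = linear_sum_assignment(cost)  # minimum-total-distance matching
for r, c in zip(rows, cols):
    print(f"CBCT seed {r} -> TRUS seed {c} ({cost[r, c]:.2f} mm)")
```

In practice a maximum-distance gate would be added on top of this so that spurious long links are rejected before the strand-based refinement steps.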
Method for Operating a Sensor to Differentiate Between Analytes in a Sample
Kunt, Tekin; Cavicchi, Richard E; Semancik, Stephen; McAvoy, Thomas J
1998-07-28
Disclosed is a method for operating a sensor to differentiate between first and second analytes in a sample. The method comprises the steps of determining an input profile for the sensor which will enhance the difference in the output profiles of the sensor between the first analyte and the second analyte; determining a first analyte output profile as observed when the input profile is applied to the sensor; determining a second analyte output profile as observed when the input profile is applied to the sensor; introducing the sensor to the sample while applying the input profile to the sensor, thereby obtaining a sample output profile; and evaluating the sample output profile against the first and second analyte output profiles to thereby determine which of the analytes is present in the sample.
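The final evaluation step can be illustrated as a nearest-profile comparison. The profiles below are invented; a real sensor would produce a time series of responses recorded while the same input (temperature) profile is applied:

```python
import numpy as np

# Invented reference output profiles, one per pure analyte.
ref_a = np.array([0.10, 0.40, 0.90, 0.50])   # first analyte output profile
ref_b = np.array([0.10, 0.70, 0.30, 0.20])   # second analyte output profile
sample = np.array([0.12, 0.42, 0.85, 0.48])  # measured sample output profile

# Classify the sample by its distance to each reference profile.
d_a = np.linalg.norm(sample - ref_a)
d_b = np.linalg.norm(sample - ref_b)
print("first analyte" if d_a < d_b else "second analyte")
```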
NASA Astrophysics Data System (ADS)
RazaviToosi, S. L.; Samani, J. M. V.
2016-03-01
Watersheds are considered hydrological units. Their other important aspects, such as economic, social, and environmental functions, play crucial roles in sustainable development. The objective of this work is to develop methodologies to prioritize watersheds by considering different development strategies in the environmental, social, and economic sectors. This ranking could play a significant role in management by identifying the most critical watersheds, in which employing water management strategies is expected to accomplish the greatest improvements. Due to the complex relations among different criteria, two new hybrid fuzzy ANP (Analytical Network Process) algorithms, fuzzy TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) and fuzzy max-min set methods, are used to provide a more flexible and accurate decision model. Five watersheds in Iran, named Oroomeyeh, Atrak, Sefidrood, Namak and Zayandehrood, are considered as alternatives. Based on long-term development goals, 38 water management strategies are defined as subcriteria in 10 clusters. The main advantage of the proposed methods is their ability to overcome uncertainty, accomplished by using fuzzy numbers in all steps of the algorithms. To validate the proposed method, the final results were compared with those obtained from the ANP algorithm, and the Spearman rank correlation coefficient was applied to assess the similarity of the different rankings. Finally, a sensitivity analysis was conducted to investigate the influence of cluster weights on the final ranking.
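For illustration, a crisp (non-fuzzy) TOPSIS ranking can be sketched in a few lines. The scores and weights below are hypothetical; the paper's fuzzy variant replaces crisp entries with fuzzy numbers and adds the ANP-derived weighting:

```python
import numpy as np

X = np.array([[7.0, 8.0, 6.0],   # rows: alternatives (watersheds)
              [6.0, 5.0, 9.0],   # columns: benefit criteria scores
              [8.0, 7.0, 7.0]])
w = np.array([0.5, 0.3, 0.2])    # criterion weights (sum to 1)

V = w * X / np.linalg.norm(X, axis=0)        # weighted, vector-normalized matrix
ideal, anti = V.max(axis=0), V.min(axis=0)   # ideal and anti-ideal solutions
d_pos = np.linalg.norm(V - ideal, axis=1)    # distance to ideal
d_neg = np.linalg.norm(V - anti, axis=1)     # distance to anti-ideal
closeness = d_neg / (d_pos + d_neg)          # relative closeness: higher is better
ranking = np.argsort(-closeness)
print("ranking (best first):", ranking.tolist())
```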
2014-01-01
In current practice, to determine the safety factor of a slope with a two-dimensional circular potential failure surface, one common method of searching for the critical slip surface is the Genetic Algorithm (GA), while the slope safety factor is calculated with Fellenius' method of slices. However, GA needs to be validated with more numerical tests, and Fellenius' method of slices, like the finite element method, is only approximate. This paper proposes a new way to determine the minimum slope safety factor: the safety factor is computed from an analytical solution, and the critical slip surface is found with a Genetic-Traversal Random Method. The analytical solution is more accurate than Fellenius' method of slices. The Genetic-Traversal Random Method uses random picks to implement mutation. A computer program performing the automatic search was developed for the Genetic-Traversal Random Method. Comparison with other methods, such as the SLOPE/W software, indicates that the Genetic-Traversal Random Search Method can give a very low safety factor, about half that of the other methods. However, the minimum safety factor obtained with the Genetic-Traversal Random Search Method is very close to the lower-bound solutions of the slope safety factor given by the Ansys software. PMID:24782679
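Stripped to its core, the traversal-random search idea is random sampling of slip-surface parameters while keeping the best candidate found so far. The safety-factor function below is a made-up smooth surrogate for illustration, not the paper's analytical solution:

```python
import random
random.seed(0)  # reproducible search

def safety_factor(xc, yc, r):
    # Made-up surrogate with its minimum (1.2) near (5, 12, 9); the paper
    # instead evaluates an analytical safety-factor solution per surface.
    return 1.2 + 0.01 * ((xc - 5)**2 + (yc - 12)**2 + (r - 9)**2)

best_fs, best_surface = float("inf"), None
for _ in range(20000):
    cand = (random.uniform(0, 10),   # slip-circle centre x
            random.uniform(5, 20),   # slip-circle centre y
            random.uniform(3, 15))   # slip-circle radius
    fs = safety_factor(*cand)
    if fs < best_fs:
        best_fs, best_surface = fs, cand
print(f"minimum safety factor ~ {best_fs:.3f}")
```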
40 CFR 141.852 - Analytical methods and laboratory certification.
Code of Federal Regulations, 2013 CFR
2013-07-01
[Fragment of the approved-methods table: Standard Methods (Online) 9221 B and 9221 D (Presence-Absence Coliform Test), EPA Method 1604, MI medium, m-ColiBlue24® Test, Chromocult, and enzyme substrate methods such as Colilert®.]
PESTICIDE ANALYTICAL METHODS TO SUPPORT DUPLICATE-DIET HUMAN EXPOSURE MEASUREMENTS
Historically, analytical methods for determination of pesticides in foods have been developed in support of regulatory programs and are specific to food items or food groups. Most of the available methods have been developed, tested and validated for relatively few analytes an...
NASA Astrophysics Data System (ADS)
De Vuyst, Florian
2004-01-01
This exploratory work presents first results of a novel approach for the numerical approximation of solutions of hyperbolic systems of conservation laws. The objective is to define stable and "reasonably" accurate numerical schemes that are free from any upwind process and from any computation of derivatives or mean Jacobian matrices; that is, we only want to perform flux evaluations. This would be useful for "complicated" systems such as two-phase models, for which solutions of Riemann problems are hard, if not impossible, to compute. For Riemann or Roe-like solvers, each fluid model needs its own computation of the Jacobian matrix of the flux, and the hyperbolicity property, which can be conditional for some of these models, means the matrices are not R-diagonalizable everywhere in the admissible state space. In this paper, we instead propose numerical schemes whose stability is obtained using convexity considerations; a certain rate of accuracy is also expected. To that end, we build numerical hybrid fluxes that are convex combinations of the second-order Lax-Wendroff scheme flux and the first-order modified Lax-Friedrichs scheme flux, with an "optimal" combination rate that ensures both minimal numerical dissipation and good accuracy. The resulting scheme is a central-scheme-like method. We also need, and propose, a definition of local dissipation by convexity for hyperbolic or elliptic-hyperbolic systems. This convexity argument allows us to overcome the difficulty of the nonexistence of classical entropy-flux pairs for certain systems. We emphasize the systematic nature of the method, which can be quickly implemented or adapted to any kind of system, with general analytical or data-tabulated equations of state. The numerical results presented in the paper are not superior to many existing state-of-the-art numerical methods for conservation laws, such as ENO, MUSCL, or the central schemes of Tadmor and coworkers. The interest is rather
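A minimal sketch of such a hybrid flux for 1D linear advection is given below, with a fixed combination rate theta; the paper chooses the rate adaptively from convexity arguments, and the grid, speed, and theta value here are arbitrary illustration choices:

```python
import numpy as np

a, dx, dt, theta = 1.0, 0.02, 0.01, 0.5          # advection speed; CFL = a*dt/dx = 0.5
x = np.arange(0.0, 1.0, dx)
u = np.where((x >= 0.3) & (x < 0.5), 1.0, 0.0)   # square pulse, periodic domain
mass0 = u.sum() * dx

def step(u):
    up = np.roll(u, -1)                               # u_{i+1} (periodic)
    f, fp = a * u, a * up
    f_lf = 0.5*(f + fp) - 0.5*(dx/dt)*(up - u)        # modified Lax-Friedrichs flux
    f_lw = 0.5*(f + fp) - 0.5*(a*a*dt/dx)*(up - u)    # Lax-Wendroff flux
    flux = theta*f_lw + (1.0 - theta)*f_lf            # convex combination at i+1/2
    return u - (dt/dx)*(flux - np.roll(flux, 1))      # conservative update

for _ in range(50):   # advect the pulse over a distance a*t = 0.5
    u = step(u)
print("mass conserved:", bool(np.isclose(u.sum()*dx, mass0)))
```

Because the update is in conservation form, the total mass is preserved exactly regardless of theta; the combination rate only trades numerical dissipation (Lax-Friedrichs) against dispersive oscillations (Lax-Wendroff).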
Simple and accessible analytical methods for the determination of mercury in soil and coal samples.
Park, Chul Hee; Eom, Yujin; Lee, Lauren Jong-Eun; Lee, Tai Gyu
2013-09-01
Simple and accessible analytical methods are proposed for the determination of mercury (Hg) in soil and coal samples, as alternatives to conventional methods such as US EPA Method 7471B and ASTM D6414. The new methods consist of fewer steps, omitting the Hg oxidation step and consequently eliminating the step needed to reduce excess oxidant. In the proposed methods, Hg extraction is an inexpensive and accessible step utilizing a disposable test tube and a heating block instead of an expensive autoclave vessel and a specially designed microwave. Common laboratory vacuum filtration was also used for the extracts instead of centrifugation. To establish optimal conditions, the best acids for extracting Hg from soil and coal samples were first investigated using certified reference materials (CRMs). Among common laboratory acids (HCl, HNO3, H2SO4, and aqua regia), aqua regia was the most effective for the soil CRM, whereas HNO3 was the most effective for the coal CRM. Next, the optimal heating temperature and time for Hg extraction were evaluated. The most effective Hg extraction was obtained at 120°C for 30 min for the soil CRM and at 70°C for 90 min for the coal CRM. Further tests using selected CRMs showed that all measured values were within the allowable certification range. Finally, actual soil and coal samples were analyzed using the new methods and US EPA Method 7473. Relative standard deviation values of 1.71-6.55% for soil and 0.97-12.11% for coal samples were obtained, proving that the proposed methods were not only simple and accessible but also accurate.
40 CFR 141.852 - Analytical methods and laboratory certification.
Code of Federal Regulations, 2014 CFR
2014-07-01
[Fragment of the approved-methods table: Standard Methods (Online) 9221 B and 9221 D (Presence-Absence Coliform Test), Standard Methods 9223 B (Colilert®), EPA Method 1604, m-ColiBlue24® Test, Chromocult, and other enzyme substrate methods.]
Technology Transfer Automated Retrieval System (TEKTRAN)
The rapid advances in analytical chromatography equipment have made the reliable and reproducible measurement of a wide range of plant chemical components possible. Full chemical characterization of a given plant material is possible with the new mass spectrometers currently available. However, th...
Technology Transfer Automated Retrieval System (TEKTRAN)
The rapid advances in analytical chromatography equipment have made the reliable and reproducible measurement of a wide range of plant chemical components possible. Full chemical characterization of a given plant material is possible with the new mass spectrometers currently available. For phytochem...
Rozet, E; Ziemons, E; Marini, R D; Boulanger, B; Hubert, Ph
2012-11-01
Dissolution tests are key elements to ensure continuing product quality and performance. The ultimate goal of these tests is to assure consistent product quality within a defined set of specification criteria. Validation of an analytical method aimed at assessing the dissolution profile of products, or at verifying pharmacopoeial compliance, should demonstrate that the method is able to correctly declare two dissolution profiles as similar, or drug products as compliant with their specifications. It is essential to ensure that these analytical methods are fit for their purpose, and method validation is aimed at providing this guarantee. However, even the ICH Q2 guideline gives no information on how to decide whether the method under validation is valid for its final purpose. Are all the validation criteria needed to ensure that a quality control (QC) analytical method for dissolution testing is valid? What acceptance limits should be set on these criteria? How should a method's validity be decided? These are the questions this work aims to answer. Focus is placed on complying with the current implementation of Quality by Design (QbD) principles in the pharmaceutical industry, in order to correctly define the Analytical Target Profile (ATP) of analytical methods involved in dissolution tests. Analytical method validation is then the natural demonstration that the developed methods are fit for their intended purpose, rather than the inconsiderate checklist validation approach still generally performed to complete the filing required to obtain product marketing authorization. PMID:23084050
Analytical Energy Gradients for Excited-State Coupled-Cluster Methods
NASA Astrophysics Data System (ADS)
Wladyslawski, Mark; Nooijen, Marcel
The equation-of-motion coupled-cluster (EOM-CC) and similarity transformed equation-of-motion coupled-cluster (STEOM-CC) methods have been firmly established as accurate and routinely applicable extensions of single-reference coupled-cluster theory to describe electronically excited states. An overview of these methods is provided, with emphasis on the many-body similarity transform concept that is the key to a rationalization of their accuracy. The main topic of the paper is the derivation of analytical energy gradients for such non-variational electronic structure approaches, with an ultimate focus on obtaining their detailed algebraic working equations. A general theoretical framework using Lagrange's method of undetermined multipliers is presented, and the method is applied to formulate the EOM-CC and STEOM-CC gradients in abstract operator terms, following the previous work in [P.G. Szalay, Int. J. Quantum Chem. 55 (1995) 151] and [S.R. Gwaltney, R.J. Bartlett, M. Nooijen, J. Chem. Phys. 111 (1999) 58]. Moreover, the systematics of the Lagrange multiplier approach is suitable for automation by computer, enabling the derivation of the detailed derivative equations through a standardized and direct procedure. To this end, we have developed the SMART (Symbolic Manipulation and Regrouping of Tensors) package of automated symbolic algebra routines, written in the Mathematica programming language. The SMART toolkit provides the means to expand, differentiate, and simplify equations by manipulation of the detailed algebraic tensor expressions directly. The Lagrangian multiplier formulation establishes a uniform strategy to perform the automated derivation in a standardized manner: A Lagrange multiplier functional is constructed from the explicit algebraic equations that define the energy in the electronic method; the energy functional is then made fully variational with respect to all of its parameters, and the symbolic differentiations directly yield the explicit
The field analytical screening program (FASP) polychlorinated biphenyl (PCB) method uses a temperature-programmable gas chromatograph (GC) equipped with an electron capture detector (ECD) to identify and quantify PCBs. Gas chromatography is an EPA-approved method for determi...
A time-accurate implicit method for chemical non-equilibrium flows at all speeds
NASA Technical Reports Server (NTRS)
Shuen, Jian-Shun
1992-01-01
A new time-accurate coupled solution procedure for solving the chemical non-equilibrium Navier-Stokes equations over a wide range of Mach numbers is described. The scheme is shown to be very efficient and robust for flows with velocities ranging from Mach numbers of 10(-10) or less up to supersonic speeds.
Analytical methods for the determination of carbon tetrachloride in soils.
Alvarado, J. S.; Spokas, K.; Taylor, J.
1999-06-01
Improved methods for the determination of carbon tetrachloride are described. These methods incorporate purge-and-trap concentration of heated dry samples, an improved methanol extraction procedure, and headspace sampling. The methods minimize sample pretreatment, accomplish solvent substitution, and save time. The methanol extraction and headspace sampling procedures improved the method detection limits and yielded better sensitivity, good recoveries, and good performance. Optimization parameters are shown. Results obtained with these techniques are compared for soil samples from contaminated sites.
NASA Astrophysics Data System (ADS)
Cai, Can-Ying; Zeng, Song-Jun; Liu, Hong-Rong; Yang, Qi-Bin
2008-05-01
A completely different formulation for the simulation of high-order Laue zone (HOLZ) diffractions is derived: the Taylor series (TS) method. To check the validity and accuracy of the TS method, we take a polyvinylidene fluoride (PVDF) crystal as an example and calculate the exit wavefunction by both the conventional multi-slice (CMS) method and the TS method. The calculated results show that the TS method is much more accurate than the CMS method and is independent of the slice thickness. Moreover, the pure first-order Laue zone wavefunction obtained by the TS method can reflect the major potential distribution of the first reciprocal plane.
Team mental models: techniques, methods, and analytic approaches.
Langan-Fox, J; Code, S; Langfield-Smith, K
2000-01-01
Effective team functioning requires the existence of a shared or team mental model among members of a team. However, the best method for measuring team mental models is unclear. Methods reported vary in terms of how mental model content is elicited and analyzed or represented. We review the strengths and weaknesses of various methods that have been used to elicit, represent, and analyze individual and team mental models and provide recommendations for method selection and development. We describe the nature of mental models and review techniques that have been used to elicit and represent them. We focus on a case study on selecting a method to examine team mental models in industry. The processes involved in the selection and development of an appropriate method for eliciting, representing, and analyzing team mental models are described. The criteria for method selection were (a) applicability to the problem under investigation; (b) practical considerations - suitability for collecting data from the targeted research sample; and (c) theoretical rationale - the assumption that associative networks in memory are a basis for the development of mental models. We provide an evaluation of the method matched to the research problem and make recommendations for future research. The practical applications of this research include the provision of a technique for analyzing team mental models in organizations, the development of methods and processes for eliciting a mental model from research participants in their normal work environment, and a survey of available methodologies for mental model research.
Device and method for accurately measuring concentrations of airborne transuranic isotopes
McIsaac, C.V.; Killian, E.W.; Grafwallner, E.G.; Kynaston, R.L.; Johnson, L.O.; Randolph, P.D.
1996-09-03
An alpha continuous air monitor (CAM) with two silicon alpha detectors and three sample collection filters is described. This alpha CAM design provides continuous sampling and also measures the cumulative transuranic (TRU), i.e., plutonium and americium, activity on the filter, and thus provides a more accurate measurement of airborne TRU concentrations than can be accomplished using a single fixed sample collection filter and a single silicon alpha detector. 7 figs.
Device and method for accurately measuring concentrations of airborne transuranic isotopes
McIsaac, Charles V.; Killian, E. Wayne; Grafwallner, Ervin G.; Kynaston, Ronnie L.; Johnson, Larry O.; Randolph, Peter D.
1996-01-01
An alpha continuous air monitor (CAM) with two silicon alpha detectors and three sample collection filters is described. This alpha CAM design provides continuous sampling and also measures the cumulative transuranic (TRU), i.e., plutonium and americium, activity on the filter, and thus provides a more accurate measurement of airborne TRU concentrations than can be accomplished using a single fixed sample collection filter and a single silicon alpha detector.
Goicoechea, Héctor C; Olivieri, Alejandro C; Tauler, Romà
2010-03-01
Correlation constrained multivariate curve resolution-alternating least-squares is shown to be a feasible method for processing first-order instrumental data and achieving analyte quantitation in the presence of unexpected interferences. Both for simulated and experimental data sets, the proposed method could correctly retrieve the analyte and interference spectral profiles and perform accurate estimations of analyte concentrations in test samples. Since no information concerning the interferences was present in calibration samples, the proposed multivariate calibration approach including the correlation constraint facilitates the achievement of the so-called second-order advantage for the analyte of interest, which is known to be present for more complex, richer higher-order instrumental data. The proposed method is tested using a simulated data set and two experimental data systems, one for the determination of ascorbic acid in powder juices using UV-visible absorption spectral data, and another for the determination of tetracycline in serum samples using fluorescence emission spectroscopy. PMID:20174722
Oftedal, O T; Eisert, R; Barrell, G K
2014-01-01
Mammalian milks may differ greatly in composition from cow milk, and these differences may affect the performance of analytical methods. High-fat, high-protein milks with a preponderance of oligosaccharides, such as those produced by many marine mammals, present a particular challenge. We compared the performance of several methods against reference procedures using Weddell seal (Leptonychotes weddellii) milk of highly varied composition (by reference methods: 27-63% water, 24-62% fat, 8-12% crude protein, 0.5-1.8% sugar). A microdrying step preparatory to carbon-hydrogen-nitrogen (CHN) gas analysis slightly underestimated water content and had a higher repeatability relative standard deviation (RSDr) than did reference oven drying at 100°C. Compared with a reference macro-Kjeldahl protein procedure, the CHN (or Dumas) combustion method had a somewhat higher RSDr (1.56 vs. 0.60%) but correlation between methods was high (0.992), means were not different (CHN: 17.2±0.46% dry matter basis; Kjeldahl 17.3±0.49% dry matter basis), there were no significant proportional or constant errors, and predictive performance was high. A carbon stoichiometric procedure based on CHN analysis failed to adequately predict fat (reference: Röse-Gottlieb method) or total sugar (reference: phenol-sulfuric acid method). Gross energy content, calculated from energetic factors and results from reference methods for fat, protein, and total sugar, accurately predicted gross energy as measured by bomb calorimetry. We conclude that the CHN (Dumas) combustion method and calculation of gross energy are acceptable analytical approaches for marine mammal milk, but fat and sugar require separate analysis by appropriate analytic methods and cannot be adequately estimated by carbon stoichiometry. Some other alternative methods-low-temperature drying for water determination; Bradford, Lowry, and biuret methods for protein; the Folch and the Bligh and Dyer methods for fat; and enzymatic and reducing
Application of an analytical method for solution of thermal hydraulic conservation equations
Fakory, M.R.
1995-09-01
An analytical method has been developed and applied for the solution of the two-phase flow conservation equations. Test results for application of the model to the simulation of BWR transients are presented and compared with results obtained from explicit integration of the conservation equations. The tests show that analytical integration of the conservation equations eliminates the Courant limitation associated with the explicit Euler method. Results obtained with the analytical method using large time steps agreed well with those obtained by explicit integration using time steps smaller than the size imposed by the Courant limitation. The results demonstrate that the analytical approach significantly improves numerical stability and computational efficiency.
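The step-size restriction that analytical integration removes can be illustrated with a scalar stand-in for the conservation equations. This is a sketch only: the decay equation dy/dt = -λy and its explicit-Euler stability bound dt < 2/λ are textbook facts, not the paper's two-phase flow model.

```python
import math

def euler_decay(y0, lam, dt, steps):
    """Explicit Euler integration of dy/dt = -lam * y;
    stable only when dt < 2/lam (the Courant-type limit)."""
    y = y0
    for _ in range(steps):
        y = y + dt * (-lam * y)
    return y

def analytic_decay(y0, lam, t):
    """Exact (analytical) solution y0 * exp(-lam * t): no step-size limit."""
    return y0 * math.exp(-lam * t)

lam, y0 = 50.0, 1.0
stable = euler_decay(y0, lam, dt=0.01, steps=100)    # dt < 2/lam: decays
unstable = euler_decay(y0, lam, dt=0.1, steps=100)   # dt > 2/lam: blows up
exact = analytic_decay(y0, lam, t=1.0)               # tiny, for any "step"
```

The unstable run diverges even though it integrates the same interval; the analytical expression is exact regardless of how large a step it represents.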
Tank 48H Waste Composition and Results of Investigation of Analytical Methods
Walker, D.D.
1997-04-02
This report serves two purposes. First, it documents the analytical results of Tank 48H samples taken between April and August 1996. Second, it describes investigations of the precision of the sampling and analytical methods used on the Tank 48H samples.
Manual of analytical methods for the Industrial Hygiene Chemistry Laboratory
Greulich, K.A.; Gray, C.E.
1991-08-01
This Manual is compiled from techniques used in the Industrial Hygiene Chemistry Laboratory of Sandia National Laboratories in Albuquerque, New Mexico. The procedures are similar to those used in other laboratories devoted to industrial hygiene practices. Some of the methods are standard; some, modified to suit our needs; and still others, developed at Sandia. The authors have attempted to present all methods in a simple and concise manner but in sufficient detail to make them readily usable. It is not to be inferred that these methods are universal for any type of sample, but they have been found very reliable for the types of samples mentioned.
Analytic method for calculating properties of random walks on networks
NASA Technical Reports Server (NTRS)
Goldhirsch, I.; Gefen, Y.
1986-01-01
A method for calculating the properties of discrete random walks on networks is presented. The method divides complex networks into simpler units whose contribution to the mean first-passage time is calculated. The simplified network is then further iterated. The method is demonstrated by calculating mean first-passage times on a segment, a segment with a single dangling bond, a segment with many dangling bonds, and a looplike structure. The results are analyzed and related to the applicability of the Einstein relation between conductance and diffusion.
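For the simplest network treated above, a segment, the mean first-passage time follows directly from the balance equations. The sketch below is not the authors' iteration scheme; it uses the standard result that a symmetric walk starting at the reflecting end of an n-site segment needs n^2 steps on average to reach the absorbing end.

```python
def mfpt_segment(n):
    """Mean first-passage time from site 0 (reflecting) to site n (absorbing)
    for a symmetric random walk on a segment 0..n. Solves the balance
    equations T_i = 1 + (T_{i-1} + T_{i+1}) / 2 via the difference recurrence
    d_i = T_i - T_{i+1}."""
    d = [0] * n
    d[0] = 1                 # reflecting boundary: T_0 = 1 + T_1
    for i in range(1, n):
        d[i] = d[i - 1] + 2  # interior balance equation
    return sum(d)            # telescoping sum down to T_n = 0
```

The differences come out as 2i + 1, so the result telescopes to n^2, the familiar diffusive scaling consistent with the Einstein relation mentioned in the abstract.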
Kramberger, Petra; Urbas, Lidija; Štrancar, Aleš
2015-01-01
Downstream processing of nanoplexes (viruses, virus-like particles, bacteriophages) is characterized by complexity of the starting material, number of purification methods to choose from, regulations that are setting the frame for the final product and analytical methods for upstream and downstream monitoring. This review gives an overview on the nanoplex downstream challenges and chromatography based analytical methods for efficient monitoring of the nanoplex production. PMID:25751122
Base flow separation: A comparison of analytical and mass balance methods
NASA Astrophysics Data System (ADS)
Lott, Darline A.; Stewart, Mark T.
2016-04-01
Base flow is the ground water contribution to stream flow. Many activities, such as water resource management, calibrating hydrological and climate models, and studies of basin hydrology, require good estimates of base flow. The base flow component of stream flow is usually determined by separating a stream hydrograph into two components, base flow and runoff. Analytical methods, mathematical functions or algorithms used to calculate base flow directly from discharge, are the most widely used base flow separation methods and are often used without calibration to basin or gage-specific parameters other than basin area. In this study, six analytical methods are compared to a mass balance method, the conductivity mass-balance (CMB) method. The base flow index (BFI) values for 35 stream gages are obtained from each of the seven methods with each gage having at least two consecutive years of specific conductance data and 30 years of continuous discharge data. BFI is cumulative base flow divided by cumulative total discharge over the period of record of analysis. The BFI value is dimensionless, and always varies from 0 to 1. Areas of basins used in this study range from 27 km² to 68,117 km². BFI was first determined for the uncalibrated analytical methods. The parameters of each analytical method were then calibrated to produce BFI values as close to the CMB derived BFI values as possible. One of the methods, the power function (aQ^b + cQ) method, is inherently calibrated and was not recalibrated. The uncalibrated analytical methods have an average correlation coefficient of 0.43 when compared to CMB-derived values, and an average correlation coefficient of 0.93 when calibrated with the CMB method. Once calibrated, the analytical methods can closely reproduce the base flow values of a mass balance method. Therefore, it is recommended that analytical methods be calibrated against tracer or mass balance methods.
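The two quantities compared in the study can be sketched in a few lines. The two-component mixing form of the CMB separation and the BFI ratio below are the standard textbook definitions; the discharge and conductance numbers are made up for illustration.

```python
def cmb_base_flow(discharge, sc, sc_bf, sc_ro):
    """Conductivity mass-balance separation: two-component mixing gives
    Q_bf = Q * (SC - SC_ro) / (SC_bf - SC_ro), clipped to [0, Q],
    where SC_bf and SC_ro are the base flow and runoff end-member
    specific conductances."""
    flows = []
    for q, c in zip(discharge, sc):
        frac = (c - sc_ro) / (sc_bf - sc_ro)
        flows.append(q * min(max(frac, 0.0), 1.0))
    return flows

def base_flow_index(base_flow, discharge):
    """BFI = cumulative base flow / cumulative total discharge (0..1)."""
    return sum(base_flow) / sum(discharge)

# Hypothetical daily discharge (m^3/s) and stream specific conductance:
q = [10.0, 8.0, 4.0]
sc = [100.0, 150.0, 200.0]
bf = cmb_base_flow(q, sc, sc_bf=200.0, sc_ro=50.0)
bfi = base_flow_index(bf, q)
```

High conductance days are attributed mostly to ground water, low conductance days to runoff, and the BFI summarizes the whole record as a single dimensionless ratio.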
EVALUATION OF ANALYTICAL METHODS FOR DETERMINING PESTICIDES IN BABY FOOD
Three extraction methods and two detection techniques for determining pesticides in baby food were evaluated. The extraction techniques examined were supercritical fluid extraction (SFE), enhanced solvent extraction (ESE), and solid phase extraction (SPE). The detection techni...
Analytical method for promoting process capability of shock absorption steel.
Sung, Wen-Pei; Shih, Ming-Hsiang; Chen, Kuen-Suan
2003-01-01
Mechanical properties and low-cycle fatigue are two factors that must be considered in developing new steels for shock absorption. Process capability and process control are significant factors in achieving the goals of research and development programs. Commonly used evaluation methods fail to measure process yield and process centering, so this paper uses the Taguchi loss function as the basis for an evaluation method, with steps for assessing the quality of mechanical properties and process control at an iron and steel manufacturer. This method can serve research and development and the manufacturing industry, laying a foundation for enhanced process control and for selecting manufacturing processes that are more reliable than those chosen by other commonly used decision-making methods.
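The Taguchi loss function the evaluation builds on has a simple closed form. The sketch below uses the standard quadratic loss and the Cpm capability index, which penalize both spread and off-centering; it does not reproduce the paper's specific evaluation steps, and the data are invented.

```python
import statistics

def taguchi_average_loss(data, target, k=1.0):
    """Expected Taguchi loss k * [(mean - target)^2 + variance]:
    off-centering is penalized even when the spread is small."""
    m = statistics.fmean(data)
    v = statistics.pvariance(data)
    return k * ((m - target) ** 2 + v)

def cpm(data, lsl, usl, target):
    """Taguchi capability index: like Cp, but the denominator grows
    when the process mean drifts away from the target."""
    m = statistics.fmean(data)
    v = statistics.pvariance(data)
    return (usl - lsl) / (6.0 * (v + (m - target) ** 2) ** 0.5)

on_target = [9.9, 10.0, 10.1]
off_target = [10.4, 10.5, 10.6]   # same spread, shifted from target 10
```

A yield-only measure would rate both samples alike if all values sit inside the specification limits; the Taguchi-based measures separate them.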
Optimal analytic method for the nonlinear Hasegawa-Mima equation
NASA Astrophysics Data System (ADS)
Baxter, Mathew; Van Gorder, Robert A.; Vajravelu, Kuppalapalle
2014-05-01
The Hasegawa-Mima equation is a nonlinear partial differential equation that describes the electric potential due to a drift wave in a plasma. In the present paper, we apply the method of homotopy analysis to a slightly more general Hasegawa-Mima equation, which accounts for hyper-viscous damping or viscous dissipation. First, we outline the method for the general initial/boundary value problem over a compact rectangular spatial domain. We use a two-stage method, where both the convergence control parameter and the auxiliary linear operator are optimally selected to minimize the residual error due to the approximation. To do the latter, we consider a family of operators parameterized by a constant which gives the decay rate of the solutions. After outlining the general method, we consider a number of concrete examples in order to demonstrate the utility of this approach. The results enable us to study properties of the initial/boundary value problem for the generalized Hasegawa-Mima equation. In several cases considered, we are able to obtain solutions with extremely small residual errors after relatively few iterations are computed (residual errors on the order of 10⁻¹⁵ are found in multiple cases after only three iterations). The results demonstrate that selecting a parameterized auxiliary linear operator can be extremely useful for minimizing residual errors when used concurrently with the optimal homotopy analysis method, suggesting that this approach can prove useful for a number of nonlinear partial differential equations arising in physics and nonlinear mechanics.
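The role of the convergence-control parameter can be shown on a toy problem. The sketch below applies homotopy analysis to y' + y = 0, y(0) = 1 (NOT the Hasegawa-Mima equation) with auxiliary linear operator L = d/dt and initial guess u0 = 1; the optimal parameter h = -1 then reproduces the Taylor series of exp(-t), while a poorer h converges more slowly, exactly the residual-minimization criterion the paper uses.

```python
# Polynomials are coefficient lists [a0, a1, ...] meaning a0 + a1*t + ...

def deriv(p):
    return [i * c for i, c in enumerate(p)][1:] or [0.0]

def integ(p):
    return [0.0] + [c / (i + 1) for i, c in enumerate(p)]

def add(p, q):
    n = max(len(p), len(q))
    p, q = p + [0.0] * (n - len(p)), q + [0.0] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def ham_sum(h, order):
    """Partial HAM series for y' + y = 0, y(0) = 1:
    u_m = chi_m * u_{m-1} + h * L^{-1}[u'_{m-1} + u_{m-1}]."""
    u_prev, total = [1.0], [1.0]
    for m in range(1, order + 1):
        corr = [h * c for c in integ(add(deriv(u_prev), u_prev))]
        u_m = add(u_prev, corr) if m > 1 else corr   # chi_1 = 0, chi_m>1 = 1
        total = add(total, u_m)
        u_prev = u_m
    return total

def residual(h, order, pts=50):
    """Max |y' + y| on [0, 1] for the order-th partial sum."""
    s = ham_sum(h, order)
    ds = deriv(s)
    def ev(p, t):
        return sum(c * t ** i for i, c in enumerate(p))
    return max(abs(ev(ds, t) + ev(s, t)) for t in (i / pts for i in range(pts + 1)))
```

Evaluating residual(-1.0, 8) against residual(-0.5, 8) shows the optimally chosen parameter driving the residual orders of magnitude lower at the same truncation order.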
An analytical method to predict efficiency of aircraft gearboxes
NASA Technical Reports Server (NTRS)
Anderson, N. E.; Loewenthal, S. H.; Black, J. D.
1984-01-01
A spur gear efficiency prediction method previously developed by the authors was extended to include power loss of planetary gearsets. A friction coefficient model was developed for MIL-L-7808 oil based on disc machine data. This combined with the recent capability of predicting losses in spur gears of nonstandard proportions allows the calculation of power loss for complete aircraft gearboxes that utilize spur gears. The method was applied to the T56/501 turboprop gearbox and compared with measured test data. Bearing losses were calculated with large scale computer programs. Breakdowns of the gearbox losses point out areas for possible improvement.
NASA Astrophysics Data System (ADS)
Daeppen, W.
1980-11-01
In the free energy method statistical mechanical models are used to construct a free energy function of the plasma. The equilibrium composition for given temperature and density is found where the free energy is a minimum. Until now the free energy could not be expressed analytically, because the contributions from the partially degenerate electrons and from the inner degrees of freedom of the bound particles had to be evaluated numerically. In the present paper further simplifications are made to obtain an analytic expression for the free energy. Thus the minimum is rapidly found using a second order algorithm, whereas until now numerical first order derivatives and a steepest-descent method had to be used. Consequently time-consuming computations are avoided and the analytical version of the free energy method has successfully been incorporated into the stellar evolution programmes at Geneva Observatory. No use of thermodynamical tables is made, either. Although some accuracy is lost by the simplified analytical expression, the main advantages of the free energy method over simple ideal-gas and Saha-equation subprogrammes (as used in the stellar programmes mentioned) are still kept. The relative errors of the simplifications made here are estimated and they are shown not to exceed 10% altogether. Densities up to those encountered in low-mass main-sequence stars can be treated within the region of validity of the method. Higher densities imply less accurate results. Nonetheless they are consistent so that they cannot disturb the numerical integration of the equilibrium equation in the stellar evolution model. The input quantities of the free energy method presented here are either temperature and density or temperature and pressure; the latter requires a rapid numerical Legendre transformation which has been developed here.
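The Saha-equation subprogrammes mentioned as the simple baseline can be sketched for pure hydrogen. The constants and the quadratic solution below are standard (for hydrogen the statistical-weight factor 2 g_ion / g_atom equals 1 and is omitted); the single-species plasma is a hypothetical example, not the paper's full free-energy model.

```python
import math

K_B = 1.380649e-23               # Boltzmann constant, J/K
M_E = 9.1093837e-31              # electron mass, kg
H_P = 6.62607015e-34             # Planck constant, J*s
CHI_H = 13.6 * 1.602176634e-19   # hydrogen ionization energy, J

def saha_ionization_fraction(temperature, n_total):
    """Ionization fraction x of pure hydrogen from the Saha equation:
    x^2 / (1 - x) = S(T) / n_total, solved as a quadratic in x."""
    s = (2.0 * math.pi * M_E * K_B * temperature / H_P ** 2) ** 1.5 \
        * math.exp(-CHI_H / (K_B * temperature))
    b = s / n_total
    return (-b + math.sqrt(b * b + 4.0 * b)) / 2.0

# Ionization rises steeply with temperature at fixed number density:
x_cool = saha_ionization_fraction(8.0e3, 1.0e20)
x_hot = saha_ionization_fraction(1.5e4, 1.0e20)
```

A full free-energy minimization reduces to exactly this balance in the ideal, non-degenerate limit, which is why the analytical method retains its advantages only away from that limit.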
Teaching Analytical Method Development in an Undergraduate Instrumental Analysis Course
ERIC Educational Resources Information Center
Lanigan, Katherine C.
2008-01-01
Method development and assessment, central components of carrying out chemical research, require problem-solving skills. This article describes a pedagogical approach for teaching these skills through the adaptation of published experiments and application of group-meeting style discussions to the curriculum of an undergraduate instrumental…
ERIC Educational Resources Information Center
Beare, R. A.
2008-01-01
Professional astronomers use specialized software not normally available to students to determine the rotation periods of asteroids from fragmented light curve data. This paper describes a simple yet accurate method based on Microsoft Excel[R] that enables students to find periods in asteroid light curve and other discontinuous time series data of…
Gonzo, Elio Emilio; Wuertz, Stefan; Rajal, Veronica B
2012-07-01
We present a novel analytical approach to describe biofilm processes considering continuum variation of both biofilm density and substrate effective diffusivity. A simple perturbation and matching technique was used to quantify biofilm activity using the steady-state diffusion-reaction equation with continuum variable substrate effective diffusivity and biofilm density, along the coordinate normal to the biofilm surface. The procedure allows prediction of an effectiveness factor, η, defined as the ratio between the observed rate of substrate utilization (reaction rate with diffusion resistance) and the rate of substrate utilization without diffusion limitation. Main assumptions are that (i) the biofilm is a continuum, (ii) substrate is transferred by diffusion only and is consumed only by microorganisms at a rate according to Monod kinetics, (iii) biofilm density and substrate effective diffusivity change in the x direction, (iv) the substrate concentration above the biofilm surface is known, and (v) the substratum is impermeable. With this approach one can evaluate, in a fast and efficient way, the effect of different parameters that characterize a heterogeneous biofilm and the kinetics of the rate of substrate consumption on the behavior of the biological system. Based on a comparison of η profiles the activity of a homogeneous biofilm could be as much as 47.8% higher than that of a heterogeneous biofilm, under the given conditions. A comparison of η values estimated for first order kinetics and η values obtained by numerical techniques showed a maximum deviation of 1.75% in a narrow range of modified Thiele modulus values. When external mass transfer resistance is also considered, a global effectiveness factor, η(0), can be calculated. The main advantage of the approach lies in the analytical expression for the calculation of the intrinsic effectiveness factor η and its implementation in a computer program. For the test cases studied convergence was
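The comparison described for first-order kinetics (analytical η versus a numerical solution) can be reproduced for the simplest case of a flat, homogeneous biofilm, where η = tanh(φ)/φ with φ the Thiele modulus. This sketch assumes uniform density and diffusivity, not the authors' variable-property model.

```python
import math

def eta_analytical(phi):
    """Effectiveness factor for first-order kinetics in a flat slab."""
    return math.tanh(phi) / phi

def eta_numerical(phi, n=200):
    """Finite-difference solve of c'' = phi^2 * c with c'(0) = 0 (impermeable
    substratum) and c(1) = 1 (known surface concentration); for first-order
    kinetics eta equals the integral of c over the slab."""
    h = 1.0 / n
    diag = -(2.0 + (phi * h) ** 2)
    sup = [2.0] + [1.0] * (n - 2)   # symmetry row doubles the c_1 coefficient
    b = [diag] * n
    d = [0.0] * n
    d[n - 1] = -1.0                 # boundary value c_n = 1 moved to the RHS
    for i in range(1, n):           # Thomas forward elimination (subdiag = 1)
        w = 1.0 / b[i - 1]
        b[i] -= w * sup[i - 1]
        d[i] -= w * d[i - 1]
    c = [0.0] * (n + 1)
    c[n] = 1.0
    c[n - 1] = d[n - 1] / b[n - 1]
    for i in range(n - 2, -1, -1):  # back substitution
        c[i] = (d[i] - sup[i] * c[i + 1]) / b[i]
    return h * (0.5 * c[0] + sum(c[1:n]) + 0.5 * c[n])
```

For φ = 2 the two estimates agree to a fraction of a percent, the same kind of agreement the abstract reports between its analytical expression and numerical techniques.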
Advanced and In Situ Analytical Methods for Solar Fuel Materials.
Chan, Candace K; Tüysüz, Harun; Braun, Artur; Ranjan, Chinmoy; La Mantia, Fabio; Miller, Benjamin K; Zhang, Liuxian; Crozier, Peter A; Haber, Joel A; Gregoire, John M; Park, Hyun S; Batchellor, Adam S; Trotochaud, Lena; Boettcher, Shannon W
2016-01-01
In situ and operando techniques can play important roles in the development of better performing photoelectrodes, photocatalysts, and electrocatalysts by helping to elucidate crucial intermediates and mechanistic steps. The development of high throughput screening methods has also accelerated the evaluation of relevant photoelectrochemical and electrochemical properties for new solar fuel materials. In this chapter, several in situ and high throughput characterization tools are discussed in detail along with their impact on our understanding of solar fuel materials.
NASA Astrophysics Data System (ADS)
Cen, Wei; Hoppe, Ralph; Gu, Ning
2016-09-01
In this paper, we propose a method to numerically determine the 3-dimensional thermal response due to electromagnetic exposure quickly and accurately. Because of its stability criterion, the explicit finite-difference time-domain (FDTD) method is fast only if the spatial step is not set very small. We propose the semi-implicit Crank-Nicolson method, which is unconditionally stable in time, for the time-domain discretization; the idea of the fractional-steps method is utilized in 3 dimensions so that an efficient numerical implementation is obtained. Compared with explicit FDTD at similar numerical precision, the proposed method takes less than 1/200 of the execution time.
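The unconditional stability claimed for Crank-Nicolson can be checked with the standard von Neumann amplification factors for the 1-D diffusion equation. This is a sketch of that textbook analysis, not the paper's 3-D bioheat implementation.

```python
import math

def explicit_gain(r, theta):
    """Amplification factor of the explicit FTCS scheme for u_t = a * u_xx,
    with r = a*dt/dx^2 and theta the mesh wavenumber; stable only for r <= 1/2."""
    return 1.0 - 4.0 * r * math.sin(theta / 2.0) ** 2

def crank_nicolson_gain(r, theta):
    """Crank-Nicolson amplification factor: |g| <= 1 for every r > 0."""
    s = 2.0 * r * math.sin(theta / 2.0) ** 2
    return (1.0 - s) / (1.0 + s)

# With r = 2 (a time step four times the explicit limit), the explicit scheme
# amplifies the highest mesh mode while Crank-Nicolson still damps it:
worst_explicit = explicit_gain(2.0, math.pi)
worst_cn = crank_nicolson_gain(2.0, math.pi)
```

This is exactly why the semi-implicit scheme can take much larger time steps than explicit FDTD at comparable precision.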
EVALUATION OF ANALYTICAL REPORTING ERRORS GENERATED AS DESCRIBED IN SW-846 METHOD 8261A
SW-846 Method 8261A incorporates the vacuum distillation of analytes from samples, and their recoveries are characterized by internal standards. The internal standards measure recoveries with confidence intervals as functions of physical properties. The frequency the calculate...
The Superfund Innovative Technology Evaluation (SITE) Program evaluates new technologies to assess their effectiveness. This bulletin summarizes results from the 1993 SITE demonstration of the Field Analytical Screening Program (FASP) Pentachlorophenol (PCP) Method to determine P...
Shen, Yan; Lou, Shuqin; Wang, Xin
2014-03-20
The evaluation accuracy of real optical properties of photonic crystal fibers (PCFs) is determined by the accurate extraction of air hole edges from microscope images of cross sections of practical PCFs. A novel estimation method of point spread function (PSF) based on Kalman filter is presented to rebuild the micrograph image of the PCF cross-section and thus evaluate real optical properties for practical PCFs. Through tests on both artificially degraded images and microscope images of cross sections of practical PCFs, we prove that the proposed method can achieve more accurate PSF estimation and lower PSF variance than the traditional Bayesian estimation method, and thus also reduce the defocus effect. With this method, we rebuild the microscope images of two kinds of commercial PCFs produced by Crystal Fiber and analyze the real optical properties of these PCFs. Numerical results are in accord with the product parameters.
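The Kalman-filter machinery the PSF estimator builds on reduces, in the scalar case, to a short predict-correct loop. The sketch below estimates a constant from noisy readings; it is a minimal illustration, not the authors' image-domain PSF model, and the readings are made up.

```python
def kalman_constant(measurements, r, q=0.0, x0=0.0, p0=1.0):
    """Scalar Kalman filter for a constant state x:
    r = measurement noise variance, q = process noise variance,
    (x0, p0) = prior mean and variance."""
    x, p = x0, p0
    for z in measurements:
        p = p + q              # predict (state model: x stays constant)
        k = p / (p + r)        # Kalman gain
        x = x + k * (z - x)    # correct with the measurement innovation
        p = (1.0 - k) * p      # posterior variance shrinks
    return x, p

readings = [5.2, 4.8, 5.1, 4.9] * 5   # noisy observations of a constant ~5
estimate, variance = kalman_constant(readings, r=0.04)
```

The estimate converges toward the sample mean while the reported variance falls, which is the same variance-reduction behaviour the abstract invokes when comparing Kalman-based PSF estimation to the Bayesian baseline.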
Zhang, Qianchun; Luo, Xialin; Li, Gongke; Xiao, Xiaohua
2015-09-01
Small polar molecules such as nucleosides, amines, amino acids are important analytes in biological, food, environmental, and other fields. It is necessary to develop efficient sample preparation and sensitive analytical methods for rapid analysis of these polar small molecules in complex matrices. Some typical materials in sample preparation, including silica, polymer, carbon, boric acid and so on, are introduced in this paper. Meanwhile, the applications and developments of analytical methods of polar small molecules, such as reversed-phase liquid chromatography, hydrophilic interaction chromatography, etc., are also reviewed. PMID:26753274
A time-accurate implicit method for chemically reacting flows at all Mach numbers
NASA Technical Reports Server (NTRS)
Withington, J. P.; Yang, V.; Shuen, J. S.
1991-01-01
The objective of this work is to develop a unified solution algorithm capable of treating time-accurate chemically reacting flows at all Mach numbers, ranging from molecular diffusion velocities to supersonic speeds. A rescaled pressure term is used in the momentum equation to circumvent the singular behavior of pressure at low Mach numbers. A dual time-stepping integration procedure is established. The system eigenvalues become well behaved and have the same order of magnitude, even in the very low Mach number regime. The computational efficiency for moderate and high speed flow is competitive with the conventional density-based scheme. The capabilities of the algorithm are demonstrated by applying it to selected model problems including nozzle flows and flame dynamics.
A Vocal-Based Analytical Method for Goose Behaviour Recognition
Steen, Kim Arild; Therkildsen, Ole Roland; Karstoft, Henrik; Green, Ole
2012-01-01
Since human-wildlife conflicts are increasing, the development of cost-effective methods for reducing damage or conflict levels is important in wildlife management. A wide range of devices to detect and deter animals causing conflict are used for this purpose, although their effectiveness is often highly variable, due to habituation to disruptive or disturbing stimuli. Automated recognition of behaviours could form a critical component of a system capable of altering the disruptive stimuli to avoid this. In this paper we present a novel method to automatically recognise goose behaviour based on vocalisations from flocks of free-living barnacle geese (Branta leucopsis). The geese were observed and recorded in a natural environment, using a shielded shotgun microphone. The classification used Support Vector Machines (SVMs), which had been trained with labeled data. Greenwood Function Cepstral Coefficients (GFCC) were used as features for the pattern recognition algorithm, as they can be adjusted to the hearing capabilities of different species. Three behaviours are classified based on this approach, and the method achieves a good recognition of foraging behaviour (86–97% sensitivity, 89–98% precision) and a reasonable recognition of flushing (79–86%, 66–80%) and landing behaviour (73–91%, 79–92%). The Support Vector Machine has proven to be a robust classifier for this kind of classification, as generality and non-linear capabilities are important. We conclude that vocalisations can be used to automatically detect behaviour of conflict wildlife species, and as such, may be used as an integrated part of a wildlife management system. PMID:22737037
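The sensitivity and precision figures quoted per behaviour are per-class recall and precision. The sketch below shows how they fall out of labelled predictions; the tiny goose-behaviour label set is invented for illustration.

```python
def per_class_metrics(y_true, y_pred, classes):
    """Per-class (sensitivity, precision) from true and predicted labels.
    Sensitivity = TP / (TP + FN); precision = TP / (TP + FP)."""
    out = {}
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        out[c] = (tp / (tp + fn) if tp + fn else 0.0,
                  tp / (tp + fp) if tp + fp else 0.0)
    return out

# Hypothetical classifier output over four vocalisation segments:
y_true = ["forage", "forage", "flush", "land"]
y_pred = ["forage", "flush", "flush", "land"]
metrics = per_class_metrics(y_true, y_pred, ["forage", "flush", "land"])
```

Reporting both numbers per class matters here: a class can have high sensitivity but poor precision (as for flushing in the study) when another behaviour is frequently misassigned to it.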
Abate-Pella, Daniel; Freund, Dana M.; Ma, Yan; Simón-Manso, Yamil; Hollender, Juliane; Broeckling, Corey D.; Huhman, David V.; Krokhin, Oleg V.; Stoll, Dwight R.; Hegeman, Adrian D.; Kind, Tobias; Fiehn, Oliver; Schymanski, Emma L.; Prenni, Jessica E.; Sumner, Lloyd W.; Boswell, Paul G.
2015-01-01
Identification of small molecules by liquid chromatography-mass spectrometry (LC-MS) can be greatly improved if the chromatographic retention information is used along with mass spectral information to narrow down the lists of candidates. Linear retention indexing remains the standard for sharing retention data across labs, but it is unreliable because it cannot properly account for differences in the experimental conditions used by various labs, even when the differences are relatively small and unintentional. On the other hand, an approach called “retention projection” properly accounts for many intentional differences in experimental conditions, and when combined with a “back-calculation” methodology described recently, it also accounts for unintentional differences. In this study, the accuracy of this methodology is compared with linear retention indexing across eight different labs. When each lab ran a test mixture under a range of multi-segment gradients and flow rates they selected independently, retention projections were on average 22-fold more accurate for uncharged compounds because they properly accounted for these intentional differences, which were more pronounced in steep gradients. When each lab ran the test mixture under nominally the same conditions, which is the ideal situation to reproduce linear retention indices, retention projections were still on average 2-fold more accurate because they properly accounted for many unintentional differences between the LC systems. To the best of our knowledge, this is the most successful study to date aiming to calculate (or even just to reproduce) LC gradient retention across labs, and it is the only study in which retention was reliably calculated under various multi-segment gradients and flow rates chosen independently by labs. PMID:26292625
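Linear retention indexing, the baseline the projections are compared against, interpolates an analyte's retention time between the retention times of a reference homologous series. The sketch below uses a hypothetical n-alkane reference ladder; the study's actual reference compounds and LC conditions differ.

```python
def linear_retention_index(rt, reference_rts):
    """Linear retention index: RI = 100 * (n + (t - t_n) / (t_{n+1} - t_n))
    for the reference members n and n+1 that bracket the analyte's
    retention time t. reference_rts maps carbon number -> retention time."""
    carbons = sorted(reference_rts)
    for n, n1 in zip(carbons, carbons[1:]):
        t_n, t_n1 = reference_rts[n], reference_rts[n1]
        if t_n <= rt <= t_n1:
            return 100.0 * (n + (n1 - n) * (rt - t_n) / (t_n1 - t_n))
    raise ValueError("retention time outside the reference bracket")

# Hypothetical ladder: C10 elutes at 5.0 min, C11 at 7.0 min.
ri = linear_retention_index(6.0, {10: 5.0, 11: 7.0})
```

The index is only as transferable as the assumption that analyte and reference retention shift together under changed conditions, which is exactly the assumption the study shows breaking down across labs.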
Analytical methods of laser spectroscopy for biomedical applications
NASA Astrophysics Data System (ADS)
Martyshkin, Dmitri V.
Different aspects of the application of laser spectroscopy in biomedical research have been considered. A growing demand for molecular sensing techniques in biomedical and environmental research has led to the introduction of existing spectroscopic techniques, as well as the development of new methods. The applications of laser-induced fluorescence, Raman scattering, cavity ring-down spectroscopy, and laser-induced breakdown spectroscopy for the monitoring of superoxide dismutase (SOD) and hemoglobin levels, the study of the characteristics of light-curing dental restorative materials, and the environmental monitoring of levels of toxic metal ions are presented. The development of new solid-state tunable laser sources based on color center crystals for these applications is presented as well.
A simple parallel analytical method of prenatal screening.
Li, Ding; Yang, Hao; Zhang, Wen-Hong; Pan, Hao; Wen, Dong-Qing; Han, Feng-Chan; Guo, Hui-Fang; Wang, Xiao-Ming; Yan, Xiao-Jun
2006-01-01
Protein microarray has progressed rapidly in the past few years, but it is still hard to popularize in many developing countries or small hospitals owing to the technical expertise required in practice. We developed a cheap and easy-to-use protein microarray based on dot immunogold filtration assay for parallel analysis of ToRCH-related antibodies against Toxoplasma gondii, rubella virus, cytomegalovirus, and herpes simplex virus types 1 and 2 in sera of pregnant women. It does not require any expensive instruments and the assay results can be clearly recognized by the naked eye. We analyzed 186 random sera of outpatients at the gynecological department with our microarray and a commercial ELISA kit, and the results showed there was no significant difference between the two detection methods. Validated by clinical application, the microarray is easy to use and has a unique advantage in cost and time. It is more suitable for mass prenatal screening or epidemiological screening than the ELISA format.
Analytical Chemistry Laboratory (ACL) procedure compendium. Volume 4, Organic methods
Not Available
1993-08-01
This interim notice covers the following: extractable organic halides in solids, total organic halides, analysis by gas chromatography/Fourier transform-infrared spectroscopy, hexadecane extracts for volatile organic compounds, GC/MS analysis of VOCs, GC/MS analysis of methanol extracts of cryogenic vapor samples, screening of semivolatile organic extracts, GPC cleanup for semivolatiles, sample preparation for GC/MS for semi-VOCs, analysis for pesticides/PCBs by GC with electron capture detection, sample preparation for pesticides/PCBs in water and soil sediment, report preparation, Florisil column cleanup for pesticide/PCBs, silica gel and acid-base partition cleanup of samples for semi-VOCs, concentrate acid wash cleanup, carbon determination in solids using Coulometrics' CO₂ coulometer, determination of total carbon/total organic carbon/total inorganic carbon in radioactive liquids/soils/sludges by hot persulfate method, analysis of solids for carbonates using Coulometrics' Model 5011 coulometer, and Soxhlet extraction.
Automated methods for accurate determination of the critical velocity of packed bed chromatography.
Chang, Yu-Chih; Gerontas, Spyridon; Titchener-Hooker, Nigel J
2012-01-01
Knowing the critical velocity (ucrit) of a chromatography column is an important part of process development as it allows the optimization of chromatographic flow conditions. The conventional flow step method for determining ucrit is prone to error as it depends heavily on human judgment. In this study, two automated methods for determining ucrit have been developed: the automatic flow step (AFS) method and the automatic pressure step (APS) method. In the AFS method, the column pressure drop is monitored upon application of automated incremental increases in flow velocity, whereas in the APS method the flow velocity is monitored upon application of automated incremental increases in pressure drop. The APS method emerged as the one with the higher levels of accuracy, efficiency, and ease of application, and thus with the greater potential to assist in defining the best operational parameters of a chromatography column.
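A hedged sketch of how an automated pressure-step series like the APS method's might be reduced to a ucrit estimate: step the pressure, record the flow velocity, and flag the first step where the relative velocity gain collapses (the compressing bed no longer passes more flow). The plateau threshold and the data are invented for illustration, not taken from the study.

```python
def critical_velocity(velocities, tol=0.01):
    """Estimate u_crit from velocities recorded at successive automated
    pressure-drop increments: return the velocity at the first step whose
    relative gain over the previous step falls below `tol`."""
    for u0, u1 in zip(velocities, velocities[1:]):
        if u0 > 0 and (u1 - u0) / u0 < tol:
            return u0
    return None  # no plateau observed within the tested pressure range

# Synthetic run (cm/h): velocity rises with pressure, then plateaus.
ucrit = critical_velocity([100.0, 200.0, 290.0, 300.0, 301.0])
```

Automating this decision is the point of the paper: the plateau criterion replaces the human judgment the conventional flow step method relies on.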
An Accurate Solution to the Lotka-Volterra Equations by Modified Homotopy Perturbation Method
NASA Astrophysics Data System (ADS)
Chowdhury, M. S. H.; Rahman, M. M.
In this paper, we suggest a method to solve the multispecies Lotka-Volterra equations. The suggested method, which we call the modified homotopy perturbation method, can be considered as an extension of the homotopy perturbation method (HPM), which is very efficient in solving a variety of differential and algebraic equations. The HPM is modified in order to obtain the approximate solutions of the Lotka-Volterra equations in a sequence of time intervals. In particular, the example of two species is considered. The accuracy of this method is examined by comparison with the numerical solution of the Runge-Kutta-Verner method. The results prove that the modified HPM is a powerful tool for the solution of nonlinear equations.
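The numerical reference solution used for such an accuracy comparison is easy to sketch for the two-species case. This is a generic classical RK4 integrator, a stand-in for illustration rather than the Runge-Kutta-Verner scheme the paper actually uses:

```python
def lotka_volterra_rk4(x0, y0, a, b, c, d, dt, steps):
    """Classical RK4 integration of the two-species Lotka-Volterra
    system dx/dt = x(a - b*y), dy/dt = y(-c + d*x)."""
    def f(x, y):
        return x * (a - b * y), y * (-c + d * x)
    x, y = x0, y0
    for _ in range(steps):
        k1 = f(x, y)
        k2 = f(x + 0.5 * dt * k1[0], y + 0.5 * dt * k1[1])
        k3 = f(x + 0.5 * dt * k2[0], y + 0.5 * dt * k2[1])
        k4 = f(x + dt * k3[0], y + dt * k3[1])
        x += dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6.0
        y += dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6.0
    return x, y

# At the equilibrium point (c/d, a/b) both derivatives vanish,
# so the integrator should hold it fixed.
x, y = lotka_volterra_rk4(1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.01, 1000)
```

Solving in a sequence of short time intervals, as the modified HPM does, mirrors how such step-based integrators advance the solution.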
An introduction to clinical microeconomic analysis: purposes and analytic methods.
Weintraub, W S; Mauldin, P D; Becker, E R
1994-06-01
The recent concern with health care economics has fostered the development of a new discipline that is generally called clinical microeconomics. This is a discipline in which microeconomic methods are used to study the economics of specific medical therapies. It is possible to perform stand-alone cost analyses, but more profound insight into the medical decision making process may be accomplished by combining cost studies with measures of outcome. This is most often accomplished with cost-effectiveness or cost-utility studies. In cost-effectiveness studies there is one measure of outcome, often death. In cost-utility studies there are multiple measures of outcome, which must be grouped together to give an overall picture of outcome or utility. There are theoretical limitations to the determination of utility that must be accepted to perform this type of analysis. A summary statement of outcome is quality-adjusted life years (QALYs), which is utility times socially discounted survival. Discounting is used because people value a year of future life less than a year of present life. Costs are made up of in-hospital direct, professional, follow-up direct, and follow-up indirect costs. Direct costs are for medical services. Indirect costs reflect opportunity costs such as lost time at work. Cost estimates are often based on marginal costs, or the cost for one additional procedure of the same type. Finally, an overall statistic may be generated as cost per unit increase in effectiveness, such as dollars per QALY. PMID:10151059
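The QALY arithmetic described here (utility times socially discounted survival, summarized as dollars per QALY) can be sketched directly. The 3% discount rate, the utility stream, and the year-0 convention below are illustrative assumptions, not values from the article:

```python
def discounted_qalys(utilities, rate=0.03):
    """Sum of per-year utility weights (0 = death, 1 = full health),
    each year socially discounted because a year of future life is
    valued less than a year of present life. Year 0 is undiscounted
    in this convention."""
    return sum(u / (1.0 + rate) ** t for t, u in enumerate(utilities))

def cost_per_qaly(delta_cost, delta_qalys):
    """Summary cost-effectiveness statistic: dollars per QALY gained."""
    return delta_cost / delta_qalys

# Ten years at full health, discounted at 3%, is worth fewer than 10 QALYs.
q = discounted_qalys([1.0] * 10)
ratio = cost_per_qaly(50000.0, q)
```

Discounting is what makes the ten undiscounted life-years sum to roughly 8.8 QALYs rather than 10.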
Time-Accurate, Unstructured-Mesh Navier-Stokes Computations with the Space-Time CESE Method
NASA Technical Reports Server (NTRS)
Chang, Chau-Lyan
2006-01-01
Application of the newly emerged space-time conservation element solution element (CESE) method to compressible Navier-Stokes equations is studied. In contrast to Euler equations solvers, several issues such as boundary conditions, numerical dissipation, and grid stiffness warrant systematic investigations and validations. Non-reflecting boundary conditions applied at the truncated boundary are also investigated from the standpoint of acoustic wave propagation. Validations of the numerical solutions are performed by comparing with exact solutions for steady-state as well as time-accurate viscous flow problems. The test cases cover a broad speed regime for problems ranging from acoustic wave propagation to 3D hypersonic configurations. Model problems pertinent to hypersonic configurations demonstrate the effectiveness of the CESE method in treating flows with shocks, unsteady waves, and separations. Good agreement with exact solutions suggests that the space-time CESE method provides a viable alternative for time-accurate Navier-Stokes calculations of a broad range of problems.
NASA Astrophysics Data System (ADS)
Chen, Duan; Cai, Wei; Zinser, Brian; Cho, Min Hyung
2016-09-01
In this paper, we develop an accurate and efficient Nyström volume integral equation (VIE) method for the Maxwell equations for a large number of 3-D scatterers. The Cauchy principal values that arise from the VIE are computed accurately using a finite size exclusion volume together with explicit correction integrals consisting of removable singularities. Also, the hyper-singular integrals are computed using interpolated quadrature formulae with tensor-product quadrature nodes for cubes, spheres, and cylinders, which are frequently encountered in the design of meta-materials. The resulting Nyström VIE method is shown to have high accuracy with a small number of collocation points and demonstrates p-convergence for computing the electromagnetic scattering of these objects. Numerical calculations of multiple scatterers of cubic, spherical, and cylindrical shapes validate the efficiency and accuracy of the proposed method.
Serag, Ahmed; Blesa, Manuel; Moore, Emma J; Pataky, Rozalia; Sparrow, Sarah A; Wilkinson, A G; Macnaught, Gillian; Semple, Scott I; Boardman, James P
2016-01-01
Accurate whole-brain segmentation, or brain extraction, of magnetic resonance imaging (MRI) is a critical first step in most neuroimage analysis pipelines. The majority of brain extraction algorithms have been developed and evaluated for adult data and their validity for neonatal brain extraction, which presents age-specific challenges for this task, has not been established. We developed a novel method for brain extraction of multi-modal neonatal brain MR images, named ALFA (Accurate Learning with Few Atlases). The method uses a new sparsity-based atlas selection strategy that requires a very limited number of atlases 'uniformly' distributed in the low-dimensional data space, combined with a machine learning based label fusion technique. The performance of the method for brain extraction from multi-modal data of 50 newborns is evaluated and compared with results obtained using eleven publicly available brain extraction methods. ALFA outperformed the eleven compared methods providing robust and accurate brain extraction results across different modalities. As ALFA can learn from partially labelled datasets, it can be used to segment large-scale datasets efficiently. ALFA could also be applied to other imaging modalities and other stages across the life course. PMID:27010238
A method for accurate determination of terminal sequences of viral genomic RNA.
Weng, Z; Xiong, Z
1995-09-01
A combination of ligation-anchored PCR and anchored cDNA cloning techniques was used to clone the termini of the saguaro cactus virus (SCV) RNA genome. The terminal sequences of the viral genome were subsequently determined from the clones. The 5' terminus was cloned by ligation-anchored PCR, whereas the 3' terminus was obtained by a technique we term anchored cDNA cloning. In anchored cDNA cloning, an anchor oligonucleotide was prepared by phosphorylation at the 5' end, followed by addition of a dideoxynucleotide at the 3' end to block the free hydroxyl group. The 5' end of the anchor was subsequently ligated to the 3' end of SCV RNA. The anchor-ligated, chimeric viral RNA was then reverse-transcribed into cDNA using a primer complementary to the anchor. The cDNA containing the complete 3'-terminal sequence was converted into ds-cDNA, cloned, and sequenced. Two restriction sites, one within the viral sequence and one within the primer sequence, were used to facilitate cloning. The combination of these techniques proved to be an easy and accurate way to determine the terminal sequences of the SCV RNA genome and should be applicable to any other RNA molecules with unknown terminal sequences. PMID:9132274
Kolin, David L.; Ronis, David; Wiseman, Paul W.
2006-01-01
We present the theory and application of reciprocal space image correlation spectroscopy (kICS). This technique measures the number density, diffusion coefficient, and velocity of fluorescently labeled macromolecules in a cell membrane imaged on a confocal, two-photon, or total internal reflection fluorescence microscope. In contrast to r-space correlation techniques, we show kICS can recover accurate dynamics even in the presence of complex fluorophore photobleaching and/or “blinking”. Furthermore, these quantities can be calculated without nonlinear curve fitting, or any knowledge of the beam radius of the exciting laser. The number densities calculated by kICS are less sensitive to spatial inhomogeneity of the fluorophore distribution than densities measured using image correlation spectroscopy. We use simulations as a proof-of-principle to show that number densities and transport coefficients can be extracted using this technique. We present calibration measurements with fluorescent microspheres imaged on a confocal microscope, which recover Stokes-Einstein diffusion coefficients, and flow velocities that agree with single particle tracking measurements. We also show the application of kICS to measurements of the transport dynamics of α5-integrin/enhanced green fluorescent protein constructs in a transfected CHO cell imaged on a total internal reflection fluorescence microscope using charge-coupled device area detection. PMID:16861272
Candel, A.; Kabel, A.; Lee, L.; Li, Z.; Limborg, C.; Ng, C.; Prudencio, E.; Schussman, G.; Uplenchwar, R.; Ko, K.; /SLAC
2009-06-19
Over the past years, SLAC's Advanced Computations Department (ACD), under SciDAC sponsorship, has developed a suite of 3D (2D) parallel higher-order finite element (FE) codes, T3P (T2P) and Pic3P (Pic2P), aimed at accurate, large-scale simulation of wakefields and particle-field interactions in radio-frequency (RF) cavities of complex shape. The codes are built on the FE infrastructure that supports SLAC's frequency domain codes, Omega3P and S3P, to utilize conformal tetrahedral (triangular) meshes, higher-order basis functions and quadratic geometry approximation. For time integration, they adopt an unconditionally stable implicit scheme. Pic3P (Pic2P) extends T3P (T2P) to treat charged-particle dynamics self-consistently using the PIC (particle-in-cell) approach, the first such implementation on a conformal, unstructured grid using Whitney basis functions. Examples from applications to the International Linear Collider (ILC), Positron Electron Project-II (PEP-II), Linac Coherent Light Source (LCLS) and other accelerators will be presented to compare the accuracy and computational efficiency of these codes versus their counterparts using structured grids.
Thompson, A.P.; Swiler, L.P.; Trott, C.R.; Foiles, S.M.; Tucker, G.J.
2015-03-15
We present a new interatomic potential for solids and liquids called Spectral Neighbor Analysis Potential (SNAP). The SNAP potential has a very general form and uses machine-learning techniques to reproduce the energies, forces, and stress tensors of a large set of small configurations of atoms, which are obtained using high-accuracy quantum electronic structure (QM) calculations. The local environment of each atom is characterized by a set of bispectrum components of the local neighbor density projected onto a basis of hyperspherical harmonics in four dimensions. The bispectrum components are the same bond-orientational order parameters employed by the GAP potential [1]. The SNAP potential, unlike GAP, assumes a linear relationship between atom energy and bispectrum components. The linear SNAP coefficients are determined using weighted least-squares linear regression against the full QM training set. This allows the SNAP potential to be fit in a robust, automated manner to large QM data sets using many bispectrum components. The calculation of the bispectrum components and the SNAP potential are implemented in the LAMMPS parallel molecular dynamics code. We demonstrate that a previously unnoticed symmetry property can be exploited to reduce the computational cost of the force calculations by more than one order of magnitude. We present results for a SNAP potential for tantalum, showing that it accurately reproduces a range of commonly calculated properties of both the crystalline solid and the liquid phases. In addition, unlike simpler existing potentials, SNAP correctly predicts the energy barrier for screw dislocation migration in BCC tantalum.
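The linear, weighted least-squares fit that distinguishes SNAP from GAP can be sketched generically. The descriptor matrix below is synthetic stand-in data, not actual bispectrum components, and the function name is an assumption for illustration:

```python
import numpy as np

def fit_linear_coefficients(descriptors, energies, weights):
    """Weighted least-squares fit of per-configuration energies to a
    linear model in descriptor components (the linear relationship SNAP
    assumes between atom energy and bispectrum components).

    descriptors: (n_configs, n_components) design matrix
    energies:    (n_configs,) target energies from the training set
    weights:     (n_configs,) regression weights"""
    w = np.sqrt(np.asarray(weights, dtype=float))
    A = np.asarray(descriptors, dtype=float) * w[:, None]
    b = np.asarray(energies, dtype=float) * w
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coeffs

# Exactly linear synthetic data is recovered regardless of the weights.
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 1.0]])
beta_true = np.array([2.0, -3.0])
c = fit_linear_coefficients(X, X @ beta_true, [1.0, 1.0, 2.0, 0.5])
```

Because the model is linear in its coefficients, the fit is a single closed-form solve; this is what makes the robust, automated fitting to large QM datasets described above possible.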
An improved method for accurate and rapid measurement of flight performance in Drosophila.
Babcock, Daniel T; Ganetzky, Barry
2014-01-01
Drosophila has proven to be a useful model system for analysis of behavior, including flight. The initial flight tester involved dropping flies into an oil-coated graduated cylinder; landing height provided a measure of flight performance by assessing how far flies will fall before producing enough thrust to make contact with the wall of the cylinder. Here we describe an updated version of the flight tester with four major improvements. First, we added a "drop tube" to ensure that all flies enter the flight cylinder at a similar velocity between trials, eliminating variability between users. Second, we replaced the oil coating with removable plastic sheets coated in Tangle-Trap, an adhesive designed to capture live insects. Third, we use a longer cylinder to enable more accurate discrimination of flight ability. Fourth we use a digital camera and imaging software to automate the scoring of flight performance. These improvements allow for the rapid, quantitative assessment of flight behavior, useful for large datasets and large-scale genetic screens. PMID:24561810
Schmidt, P J; Emelko, M B; Thompson, M E
2013-05-01
Quantitative microbial risk assessment (QMRA) is a tool to evaluate the potential implications of pathogens in a water supply or other media and is of increasing interest to regulators. In the case of potentially pathogenic protozoa (e.g. Cryptosporidium oocysts and Giardia cysts), it is well known that the methods used to enumerate (oo)cysts in samples of water and other media can have low and highly variable analytical recovery. In these applications, QMRA has evolved from ignoring analytical recovery to addressing it in point-estimates of risk, and then to addressing variation of analytical recovery in Monte Carlo risk assessments. Often, variation of analytical recovery is addressed in exposure assessment by dividing concentration values that were obtained without consideration of analytical recovery by random beta-distributed recovery values. A simple mathematical proof is provided to demonstrate that this conventional approach to address non-constant analytical recovery in drinking water QMRA will lead to overestimation of mean pathogen concentrations. The bias, which can exceed an order of magnitude, is greatest when low analytical recovery values are common. A simulated dataset is analyzed using a diverse set of approaches to obtain distributions representing temporal variation in the oocyst concentration, and mean annual risk is then computed from each concentration distribution using a simple risk model. This illustrative example demonstrates that the bias associated with mishandling non-constant analytical recovery and non-detect samples can cause drinking water systems to be erroneously classified as surpassing risk thresholds.
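The overestimation the proof establishes follows from Jensen's inequality: E[1/R] > 1/E[R] for non-constant recovery R, so dividing observed concentrations by random beta-distributed recovery values inflates the mean. A Monte Carlo sketch of that bias (the beta parameters and the observed value are illustrative, not from the paper):

```python
import random

def mean_recovery_adjusted(observed, alpha, beta, n=200_000, seed=1):
    """Monte Carlo mean of `observed / R` over beta-distributed
    recoveries R, i.e. the conventional adjustment the abstract
    shows to be biased."""
    rng = random.Random(seed)
    return sum(observed / rng.betavariate(alpha, beta) for _ in range(n)) / n

# Beta(3, 4.5) has mean recovery 0.4, so the unbiased adjustment of an
# observed concentration of 10 would be 10 / 0.4 = 25. The conventional
# adjustment instead converges to 10 * E[1/R] = 10 * (3 + 4.5 - 1)/(3 - 1),
# i.e. 32.5: an overestimate of 30%.
biased = mean_recovery_adjusted(10.0, 3.0, 4.5)
```

The bias grows as low recoveries become more common, since small R values dominate the mean of 1/R; this matches the abstract's observation that the bias can exceed an order of magnitude.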
ERIC Educational Resources Information Center
Hughes, Stephen W.
2005-01-01
A little-known method of measuring the volume of small objects based on Archimedes' principle is described, which involves suspending an object in a water-filled container placed on electronic scales. The suspension technique is a variation on the hydrostatic weighing technique used for measuring volume. The suspension method was compared with two…
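The suspension technique reduces to one line of arithmetic: lowering the suspended object into the water raises the balance reading by the buoyant force, i.e. the mass of displaced water, so volume is that increase divided by the water density. A minimal sketch (the density value and example reading are assumptions for illustration):

```python
WATER_DENSITY_G_PER_ML = 0.998  # near 20 degrees C

def volume_ml_by_suspension(scale_increase_g, density=WATER_DENSITY_G_PER_ML):
    """Object volume from Archimedes' principle: the increase in the
    balance reading equals the mass of water displaced by the object."""
    return scale_increase_g / density

# A reading that rises by 4.99 g implies a volume of about 5 mL.
v = volume_ml_by_suspension(4.99)
```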
New methods determine pour point more accurately than ASTM D-97
Khan, H.U.; Dilawar, S.V.K.; Nautiyal, S.P.; Srivastava, S.P.
1993-11-01
A new, alternative method determines petroleum fluid pour points with ±1 °C precision and better accuracy than the standard ASTM D-97 procedure. The new method measures the pour point of transparent fluids by determining wax appearance temperature (WAT). Also, pour points of waxy crude oils can be determined by measuring a flow characteristic called restart pressure.
An Accurate Method for Computing the Absorption of Solar Radiation by Water Vapor
NASA Technical Reports Server (NTRS)
Chou, M. D.
1980-01-01
The method is based upon molecular line parameters and makes use of a far wing scaling approximation and k distribution approach previously applied to the computation of the infrared cooling rate due to water vapor. Taking into account the wave number dependence of the incident solar flux, the solar heating rate is computed for the entire water vapor spectrum and for individual absorption bands. The accuracy of the method is tested against line by line calculations. The method introduces a maximum error of 0.06 C/day. The method has the additional advantage over previous methods in that it can be applied to any portion of the spectral region containing the water vapor bands. The integrated absorptances and line intensities computed from the molecular line parameters were compared with laboratory measurements. The comparison reveals that, among the three different sources, absorptance is the largest for the laboratory measurements.
A second-order accurate kinetic-theory-based method for inviscid compressible flows
NASA Technical Reports Server (NTRS)
Deshpande, Suresh M.
1986-01-01
An upwind method for the numerical solution of the Euler equations is presented. This method, called the kinetic numerical method (KNM), is based on the fact that the Euler equations are moments of the Boltzmann equation of the kinetic theory of gases when the distribution function is Maxwellian. The KNM consists of two phases, the convection phase and the collision phase. The method is unconditionally stable and explicit. It is highly vectorizable and can be easily made total variation diminishing for the distribution function by a suitable choice of the interpolation strategy. The method is applied to a one-dimensional shock-propagation problem and to a two-dimensional shock-reflection problem.
NASA Astrophysics Data System (ADS)
Roy, Ajit K.
An analytical method capable of predicting the in-plane (hoop) and interlaminar (radial) components of effective coefficients of thermal expansion (CTE) of laminated orthotropic rings is presented. This method is based on the linear theory of elasticity assuming the plane-stress condition in the (r, theta) plane of the ring and is applicable to any aspect ratio of the ring. A comparative study of the effective CTE for thin rings indicates that although 2D lamination theory can predict the in-plane CTE quite accurately, it overpredicts the values of the interlaminar CTE by a large amount. As an example, for a thin ring made with T300/5208 laminates, the 2D theory predicts an interlaminar CTE that is 29 percent higher than that predicted by the present method.
Căruntu, Bogdan
2014-01-01
The paper presents the optimal homotopy perturbation method, which is a new method to find approximate analytical solutions for nonlinear partial differential equations. Based on the well-known homotopy perturbation method, the optimal homotopy perturbation method presents an accelerated convergence compared to the regular homotopy perturbation method. The applications presented emphasize the high accuracy of the method by means of a comparison with previous results. PMID:25003150
NASA Astrophysics Data System (ADS)
Teng, H.; Fujiwara, T.; Hoshi, T.; Sogabe, T.; Zhang, S.-L.; Yamamoto, S.
2011-04-01
The need for large-scale electronic structure calculations has recently arisen in the field of material physics, and efficient and accurate algebraic methods for large simultaneous linear equations become greatly important. We investigate the generalized shifted conjugate orthogonal conjugate gradient method, the generalized Lanczos method, and the generalized Arnoldi method. They are the solver methods of large simultaneous linear equations of the one-electron Schrödinger equation and map the whole Hilbert space to a small subspace called the Krylov subspace. These methods are applied to systems of fcc Au with the NRL tight-binding Hamiltonian [F. Kirchhoff et al., Phys. Rev. B 63, 195101 (2001)]. We compare results by these methods and the exact calculation and show them to be equally accurate. The system size dependence of the CPU time is also discussed. The generalized Lanczos method and the generalized Arnoldi method are the most suitable for large-scale molecular dynamics simulations from the viewpoint of CPU time and memory size.
Accurate simulation of MPPT methods performance when applied to commercial photovoltaic panels.
Cubas, Javier; Pindado, Santiago; Sanz-Andrés, Ángel
2015-01-01
A new, simple, and quick-calculation methodology to obtain a solar panel model, based on the manufacturers' datasheet, to perform MPPT simulations, is described. The method takes into account variations on the ambient conditions (sun irradiation and solar cells temperature) and allows fast MPPT methods comparison or their performance prediction when applied to a particular solar panel. The feasibility of the described methodology is checked with four different MPPT methods applied to a commercial solar panel, within a day, and under realistic ambient conditions.
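As a hedged illustration of the kind of MPPT method such a panel model would be used to compare, here is a generic perturb-and-observe loop climbing a toy P-V curve. The curve, step size, and names are invented for illustration; they are not the authors' panel model or any specific method from the study:

```python
def perturb_and_observe(power_at, v0=10.0, step=0.2, iterations=200):
    """Minimal perturb-and-observe MPPT sketch: climb the panel's P-V
    curve by stepping the operating voltage toward increasing power.
    `power_at` stands in for a panel model built from datasheet values."""
    v, p = v0, power_at(v0)
    direction = 1.0
    for _ in range(iterations):
        v_new = v + direction * step
        p_new = power_at(v_new)
        if p_new < p:
            direction = -direction  # overshot the maximum, reverse
        v, p = v_new, p_new
    return v, p

# Toy P-V curve with its maximum power point at 17 V.
v_mpp, p_mpp = perturb_and_observe(lambda v: max(0.0, 100.0 - (v - 17.0) ** 2))
```

In a comparison like the one described, the panel model would replace the toy lambda, and ambient conditions (irradiation, cell temperature) would reshape the P-V curve between iterations.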
An adaptive grid method for computing time accurate solutions on structured grids
NASA Technical Reports Server (NTRS)
Bockelie, Michael J.; Smith, Robert E.; Eiseman, Peter R.
1991-01-01
The solution method consists of three parts: a grid movement scheme; an unsteady Euler equation solver; and a temporal coupling routine that links the dynamic grid to the Euler solver. The grid movement scheme is an algebraic method containing grid controls that generate a smooth grid that resolves the severe solution gradients and the sharp transitions in the solution gradients. The temporal coupling is performed with a grid prediction correction procedure that is simple to implement and provides a grid that does not lag the solution in time. The adaptive solution method is tested by computing the unsteady inviscid solutions for a one-dimensional shock tube and a two-dimensional shock-vortex interaction.
A flux monitoring method for easy and accurate flow rate measurement in pressure-driven flows.
Siria, Alessandro; Biance, Anne-Laure; Ybert, Christophe; Bocquet, Lydéric
2012-03-01
We propose a low-cost and versatile method to measure flow rate in microfluidic channels under pressure-driven flows, thereby providing a simple characterization of the hydrodynamic permeability of the system. The technique is inspired by the current monitoring method usually employed to characterize electro-osmotic flows, and makes use of the measurement of the time-dependent electric resistance inside the channel associated with a moving salt front. We have successfully tested the method in a micrometer-size channel, as well as in a complex microfluidic channel with a varying cross-section, demonstrating its ability in detecting internal shape variations.
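A sketch of how the salt-front measurement might be reduced to a flow rate, assuming the channel resistance rises monotonically as the front advances: read the front's transit time off the resistance trace between two threshold crossings, rescale for the threshold fractions, and multiply the resulting front velocity by the cross-section. The thresholds, geometry, and names below are illustrative assumptions, not the authors' procedure:

```python
def flow_rate_from_salt_front(times_s, resistances, area_m2, length_m,
                              frac=0.05):
    """Volumetric flow rate from the time-dependent channel resistance
    during the passage of a salt front (assumed monotonically rising).
    Transit time is read between the `frac` and `1 - frac` crossings of
    the total resistance change, then rescaled to the full transit."""
    r0, r1 = resistances[0], resistances[-1]
    levels = [(r - r0) / (r1 - r0) for r in resistances]
    t_start = next(t for t, x in zip(times_s, levels) if x >= frac)
    t_end = next(t for t, x in zip(times_s, levels) if x >= 1.0 - frac)
    transit_s = (t_end - t_start) / (1.0 - 2.0 * frac)
    return area_m2 * length_m / transit_s  # front velocity x cross-section

# Synthetic ramp: the front crosses a 1 cm channel of 1e-8 m^2
# cross-section (e.g. 100 um x 100 um) in 10 s.
ts = [i * 0.1 for i in range(201)]
rs = [min(1.0, max(0.0, (t - 5.0) / 10.0)) for t in ts]
q = flow_rate_from_salt_front(ts, rs, 1e-8, 0.01)
```

For a channel with a varying cross-section, as in the paper's second demonstration, the resistance-position relation is no longer linear, which is precisely what lets the method detect internal shape variations.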
A Comparative Evaluation of Analytical Methods to Allocate Individual Marks from a Team Mark
ERIC Educational Resources Information Center
Nepal, Kali
2012-01-01
This study presents a comparative evaluation of analytical methods to allocate individual marks from a team mark. Only the methods that use or can be converted into some form of mathematical equations are analysed. Some of these methods focus primarily on the assessment of the quality of teamwork product (product assessment) while the others put…
ERIC Educational Resources Information Center
Jang, Eunice E.; McDougall, Douglas E.; Pollon, Dawn; Herbert, Monique; Russell, Pia
2008-01-01
There are both conceptual and practical challenges in dealing with data from mixed methods research studies. There is a need for discussion about various integrative strategies for mixed methods data analyses. This article illustrates integrative analytic strategies for a mixed methods study focusing on improving urban schools facing challenging…
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-16
... Methods AGENCY: Environmental Protection Agency (EPA). ACTION: Notice. SUMMARY: The Environmental...) analytical methods. At these meetings, stakeholders will be given an opportunity to discuss potential elements of a method re-evaluation study, such as developing a reference coliform/non-coliform library...
Is photometry an accurate and reliable method to assess boar semen concentration?
Camus, A; Camugli, S; Lévêque, C; Schmitt, E; Staub, C
2011-02-01
Sperm concentration assessment is a key point to ensure the appropriate sperm number per dose in species subjected to artificial insemination (AI). The aim of the present study was to evaluate the accuracy and reliability of two commercially available photometers, AccuCell™ and AccuRead™, pre-calibrated for boar semen, in comparison to UltiMate™ boar version 12.3D, NucleoCounter SP100, and the Thoma hemacytometer. For each type of instrument, concentration was measured on 34 boar semen samples in quadruplicate, and agreement between measurements and instruments was evaluated. Accuracy for both photometers was expressed as the mean percentage difference from the general mean: -0.6% for AccuCell™ and 0.5% for AccuRead™; no significant differences were found among instruments or measurements across all equipment. Repeatability was 1.8% and 3.2% for AccuCell™ and AccuRead™, respectively. Differences between instruments were low (confidence interval 3%) except when the hemacytometer was used as a reference. Even though the hemacytometer is widely considered the gold standard, it was the most variable instrument (confidence interval 7.1%). The conclusion is that routine photometric measures of raw semen concentration are reliable, accurate, and precise using AccuRead™ or AccuCell™. There are multiple steps in semen processing that can induce sperm loss and therefore increase differences between theoretical and real sperm numbers in doses. Potential biases that depend on the workflow, but not on the initial photometric measure of semen concentration, are discussed.
Methods for applying accurate digital PCR analysis on low copy DNA samples.
Whale, Alexandra S; Cowen, Simon; Foy, Carole A; Huggett, Jim F
2013-01-01
Digital PCR (dPCR) is a highly accurate molecular approach, capable of precise measurements, offering a number of unique opportunities. However, in its current format dPCR can be limited by the amount of sample that can be analysed, and consequently additional strategies such as multiplex reactions or pre-amplification can be considered. This study investigated the impact of duplexing and pre-amplification on dPCR analysis by using three different assays targeting a model template (a portion of the Arabidopsis thaliana alcohol dehydrogenase gene). We also investigated the impact of different template types (linearised plasmid clone and more complex genomic DNA) on measurement precision using dPCR. We were able to demonstrate that duplex dPCR can provide a more precise measurement than uniplex dPCR, while applying pre-amplification or varying template type can significantly decrease the precision of dPCR. Furthermore, we also demonstrate that the pre-amplification step can introduce measurement bias that is not consistent between experiments for a sample or assay and so could not be compensated for during the analysis of this data set. We also describe a model for estimating the prevalence of molecular dropout and identify this as a source of dPCR imprecision. Our data have demonstrated that the precision afforded by dPCR at low sample concentration can exceed that of the same template post pre-amplification, thereby negating the need for this additional step. Our findings also highlight the technical differences between template types containing the same sequence that must be considered if plasmid DNA is to be used to assess or control for more complex templates like genomic DNA.
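As background to the quantification step (the abstract does not spell out the formulas), dPCR concentration estimates rest on a standard Poisson correction: a positive partition may contain more than one template copy, so the mean copies per partition is λ = −ln(1 − p), where p is the fraction of positive partitions.

```python
import math

# Standard Poisson correction used in digital PCR quantification; partition
# counts and volume below are illustrative, not data from the study.

def dpcr_copies_per_partition(positive, total):
    """Mean template copies per partition: lambda = -ln(1 - p)."""
    p = positive / total
    if p >= 1.0:
        raise ValueError("saturated panel: all partitions positive")
    return -math.log(1.0 - p)

def dpcr_concentration(positive, total, partition_volume_nl):
    """Copies per microlitre of reaction mix."""
    lam = dpcr_copies_per_partition(positive, total)
    return lam / (partition_volume_nl * 1e-3)   # nl -> ul

lam = dpcr_copies_per_partition(500, 1000)      # half the partitions positive
```

Note that λ exceeds the naive estimate p = 0.5, reflecting the multiple-occupancy correction.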
Lim, Caeul; Pereira, Ligia; Shardul, Pritish; Mascarenhas, Anjali; Maki, Jennifer; Rixon, Jordan; Shaw-Saliba, Kathryn; White, John; Silveira, Maria; Gomes, Edwin; Chery, Laura; Rathod, Pradipsinh K; Duraisingh, Manoj T
2016-08-01
Even with the advances in molecular or automated methods for detection of red blood cells of interest (such as reticulocytes or parasitized cells), light microscopy continues to be the gold standard especially in laboratories with limited resources. The conventional method for determination of parasitemia and reticulocytemia uses a Miller reticle, a grid with squares of different sizes. However, this method is prone to errors if not used correctly and counts become inaccurate and highly time-consuming at low frequencies of target cells. In this report, we outline the correct guidelines to follow when using a reticle for counting, and present a new counting protocol that is a modified version of the conventional method for increased accuracy in the counting of low parasitemias and reticulocytemias. Am. J. Hematol. 91:852-855, 2016. © 2016 Wiley Periodicals, Inc. PMID:27074559
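For context, the conventional Miller-reticle calculation referred to above is usually stated as follows: the small square covers one ninth the area of the large square, so red cells counted in the small square are scaled by nine to estimate the total population inspected. This is the standard textbook formula, not the modified protocol this report introduces.

```python
# Hedged sketch of the conventional Miller-reticle percentage calculation.
# The counts below are illustrative.

def miller_percentage(target_cells_large, rbc_small):
    """Percent target cells (e.g. reticulocytes or parasitized cells):
    cells counted in the large squares over 9x the RBCs counted in the
    small squares (the small square is 1/9 the large square's area)."""
    return 100.0 * target_cells_large / (9.0 * rbc_small)

pct = miller_percentage(18, 100)   # e.g. 18 target cells vs 100 RBC in small squares
```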
Wilcox, Rand R; Keselman, H J
2012-05-01
During the last half century, hundreds of papers published in statistical journals have documented general conditions where reliance on least squares regression and Pearson's correlation can result in missing even strong associations between variables. Moreover, highly misleading conclusions can be drawn, even when the sample size is large. There are, in fact, several fundamental concerns related to non-normality, outliers, heteroscedasticity, and curvature that can result in missing a strong association. Simultaneously, a vast array of new methods has been derived for effectively dealing with these concerns. The paper (1) reviews why least squares regression and classic inferential methods can fail, (2) provides an overview of the many modern strategies for dealing with known problems, including some recent advances, and (3) illustrates that modern robust methods can make a practical difference in our understanding of data. Included are some general recommendations regarding how modern methods might be used.
Three-Signal Method for Accurate Measurements of Depolarization Ratio with Lidar
NASA Technical Reports Server (NTRS)
Reichardt, Jens; Baumgart, Rudolf; McGee, Thomas J.
2003-01-01
A method is presented that permits the determination of atmospheric depolarization-ratio profiles from three elastic-backscatter lidar signals with different sensitivity to the state of polarization of the backscattered light. The three-signal method is insensitive to experimental errors and does not require calibration of the measurement, which could cause large systematic uncertainties of the results, as is the case in the lidar technique conventionally used for the observation of depolarization ratios.
A Highly Accurate and Efficient Obrechkoff Five-Step Method for the Undamped Duffing Equation
NASA Astrophysics Data System (ADS)
Zhao, Deyin; Wang, Zhongcheng; Dai, Yongming; Wang, Yuan
In this paper, we present a five-step Obrechkoff method that improves on the previous two-step one for a second-order initial-value problem with an oscillatory solution. We use a special structure to construct the iterative formula, in which the higher even-order derivatives are placed at the central four nodes, and show the existence of periodic solutions with a remarkably wide interval of periodicity, H0² ≈ 16.28. A proper first-order derivative (FOD) formula gives this five-step method two advantages: (a) very high accuracy, since the local truncation error (LTE) of both the main structure and the FOD formula is O(h¹⁴); and (b) high efficiency, because it avoids solving a degree-nine polynomial equation by Picard iteration. Applying the new method to a well-known problem, the nonlinear undamped Duffing equation, we show that our numerical solution is four to five orders of magnitude more accurate than that of the previous two-step Obrechkoff method, while requiring only 25% of the CPU time of the previous method for the same task. With the new method, a better "exact" solution is found by fitting, with an error tolerance below 5×10⁻¹⁵, than the one widely used in the literature, whose error tolerance is below 10⁻¹¹.
Waste Tank Organic Safety Program: Analytical methods development. Progress report, FY 1994
Campbell, J.A.; Clauss, S.A.; Grant, K.E.
1994-09-01
The objectives of this task are to develop and document extraction and analysis methods for organics in waste tanks, and to extend these methods to the analysis of actual core samples to support the Waste Tank Organic Safety Program. This report documents progress at Pacific Northwest Laboratory during FY 1994 on methods development, the analysis of waste from Tank 241-C-103 (Tank C-103) and Tank T-111, and the transfer of documented, developed analytical methods to personnel in the Analytical Chemistry Laboratory (ACL) and the 222-S laboratory. This report is intended as an annual report, not a completed work.
NASA Technical Reports Server (NTRS)
Desmarais, R. N.
1982-01-01
The method is capable of generating approximations of arbitrary accuracy. It is based on approximating the algebraic part of the nonelementary integrals in the kernel by exponential functions and then integrating termwise. The exponent spacing in the approximation is a geometric sequence. The coefficients and exponent multiplier of the exponential approximation are computed by least squares, so the method is completely automated. Exponential approximations generated in this manner are two orders of magnitude more accurate than the exponential approximation that is currently most often used for this purpose. The method can be used to generate approximations attaining any desired trade-off between accuracy and computing cost.
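The core idea above — fix the exponents in a geometric sequence and fit only the coefficients, which makes the problem linear least squares — can be sketched as follows. The target function 1/(1+x) stands in for the "algebraic part" of the kernel; the paper's actual kernel, spacing ratio, and exponent multiplier come from its own automated procedure.

```python
import numpy as np

# Hedged sketch: fit f(x) ~ sum_k c_k * exp(-b_k * x) with exponents b_k in a
# geometric sequence, coefficients by linear least squares.  Target function
# and parameters are illustrative assumptions.

def fit_exponential_sum(f, x, n_terms=8, base=0.5, ratio=2.0):
    """Fit f(x) ~ sum_k c_k * exp(-b_k * x) with b_k = base * ratio**k."""
    b = base * ratio ** np.arange(n_terms)     # geometric exponent sequence
    A = np.exp(-np.outer(x, b))                # design matrix, (len(x), n_terms)
    c, *_ = np.linalg.lstsq(A, f(x), rcond=None)
    return b, c

def eval_exponential_sum(b, c, x):
    return np.exp(-np.outer(x, b)) @ c

x = np.linspace(0.0, 5.0, 200)
f = lambda t: 1.0 / (1.0 + t)                  # stand-in for the algebraic part
b, c = fit_exponential_sum(f, x)
err = np.max(np.abs(eval_exponential_sum(b, c, x) - f(x)))
```

Once the kernel is written as a sum of exponentials, the remaining integrals can be done termwise in closed form, which is what makes the approach attractive.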
Cobb, J.W.
1995-02-01
There is an increasing need for more accurate numerical methods for large-scale nonlinear magneto-fluid turbulence calculations. These methods should not only increase the current state of the art in terms of accuracy, but should also continue to optimize other desired properties such as simplicity, minimized computation, minimized memory requirements, and robust stability. This includes the ability to stably solve stiff problems with long time-steps. This work discusses a general methodology for deriving higher-order numerical methods. It also discusses how the selection of various choices can affect the desired properties. The explicit discussion focuses on third-order Runge-Kutta methods, including general solutions and five examples. The study investigates the linear numerical analysis of these methods, including their accuracy, general stability, and stiff stability. Additional appendices discuss linear multistep methods, discuss directions for further work, and exhibit numerical analysis results for some other commonly used lower-order methods.
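Since the discussion centers on third-order Runge-Kutta methods, here is a minimal sketch of one classical member of that family (Kutta's third-order scheme); the paper derives a general family with additional stability properties, of which this is only one standard example.

```python
import math

# Kutta's classical third-order Runge-Kutta scheme, shown as one member of the
# RK3 family the text discusses; the specific methods derived in the paper may
# use different coefficients.

def rk3_step(f, t, y, h):
    k1 = f(t, y)
    k2 = f(t + 0.5 * h, y + 0.5 * h * k1)
    k3 = f(t + h, y - h * k1 + 2.0 * h * k2)
    return y + h * (k1 + 4.0 * k2 + k3) / 6.0

def integrate(f, t0, y0, t1, n):
    """Advance y' = f(t, y) from t0 to t1 in n fixed steps."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y = rk3_step(f, t, y, h)
        t += h
    return y

# y' = y, y(0) = 1  ->  y(1) = e; the global error shrinks as O(h^3)
y_end = integrate(lambda t, y: y, 0.0, 1.0, 1.0, 100)
```

Halving the step size should cut the error by roughly a factor of eight, which is a quick numerical check of third-order accuracy.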
Yoshidome, Takashi; Ekimoto, Toru; Matubayasi, Nobuyuki; Harano, Yuichi; Kinoshita, Masahiro; Ikeguchi, Mitsunori
2015-05-01
The hydration free energy (HFE) is a crucially important physical quantity for discussing various chemical processes in aqueous solutions. Although an explicit-solvent computation with molecular dynamics (MD) simulations is a preferable treatment of the HFE, a huge computational load has been inevitable for large, complex solutes like proteins. In the present paper, we propose an efficient computation method for the HFE. In our method, the HFE is computed as a sum of 〈U_UV〉/2 (where 〈U_UV〉 is the ensemble average of the total solute-water pair interaction energy) and the water reorganization term, which mainly reflects the excluded-volume effect. Since 〈U_UV〉 can readily be computed through an MD simulation of the system composed of solute and water, an efficient computation of the latter term leads to a reduction of computational load. We demonstrate that the water reorganization term can quantitatively be calculated using the morphometric approach (MA), which expresses the term as a linear combination of the four geometric measures of a solute with the corresponding coefficients determined by the energy-representation (ER) method. Since the MA enables us to finish the computation of the water reorganization term in less than 0.1 s once the coefficients are determined, its use provides an efficient computation of the HFE even for large, complex solutes. Through the applications, we find that our method has almost the same quantitative performance as the ER method, with a substantial reduction of the computational load. PMID:25956125
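The morphometric ansatz referred to above is, once the coefficients are fitted, just a four-term linear combination, which is why its evaluation is nearly instantaneous. In the morphometric literature the four measures are commonly taken as volume, surface area, integrated mean curvature, and Euler characteristic; the coefficient values below are placeholders, not those of the paper.

```python
# Hedged sketch of the morphometric approach's evaluation step: a linear
# combination of four geometric measures of the solute.  Coefficients here are
# illustrative placeholders; the paper determines them with the ER method.

def morphometric_term(volume, area, mean_curvature, euler, coeffs):
    """Water reorganization term as c_v*V + c_a*A + c_h*C + c_x*X."""
    c_v, c_a, c_h, c_x = coeffs
    return c_v * volume + c_a * area + c_h * mean_curvature + c_x * euler

# hypothetical measures and coefficients, purely for illustration
reorg = morphometric_term(1.0, 2.0, 3.0, 1.0, (0.1, 0.2, 0.3, 0.4))
```

The expensive part of the method is the one-time fit of the four coefficients; every subsequent solute geometry costs only this dot product.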
Seo, Miyeong; Kim, Byungjoo; Baek, Song-Yee
2015-07-01
Patulin, a mycotoxin produced by several molds in fruits, has been frequently detected in apple products. Therefore, regulatory bodies have established recommended maximum permitted patulin concentrations for each type of apple product. Although several analytical methods have been adopted to determine patulin in food, quality control of patulin analysis is not easy, as reliable certified reference materials (CRMs) are not available. In this study, as a part of a project for developing CRMs for patulin analysis, we developed isotope dilution liquid chromatography-tandem mass spectrometry (ID-LC/MS/MS) as a higher-order reference method for the accurate value-assignment of CRMs. (13)C7-patulin was used as internal standard. Samples were extracted with ethyl acetate to improve recovery. For further sample cleanup with solid-phase extraction (SPE), the HLB SPE cartridge was chosen after comparing with several other types of SPE cartridges. High-performance liquid chromatography was performed on a multimode column for proper retention and separation of highly polar and water-soluble patulin from sample interferences. Sample extracts were analyzed by LC/MS/MS with electrospray ionization in negative ion mode with selected reaction monitoring of patulin and (13)C7-patulin at m/z 153→m/z 109 and m/z 160→m/z 115, respectively. The validity of the method was tested by measuring gravimetrically fortified samples of various apple products. In addition, the repeatability and the reproducibility of the method were tested to evaluate the performance of the method. The method was shown to provide accurate measurements in the 3-40 μg/kg range with a relative expanded uncertainty of around 1%.
Metwally, Fadia H; Abdelkawy, Mohammed; Naguib, Ibrahim A
2006-01-01
Three new, different, simple, sensitive, and accurate methods were developed for quantitative determination of nifuroxazide (I) and drotaverine hydrochloride (II) in a binary mixture. The first method was spectrophotometry, which allowed determination of I in the presence of II using a zero-order spectrum with an analytically useful maximum at 364.5 nm that obeyed Beer's law over a concentration range of 2-10 microg/mL with mean percentage recovery of 100.08 +/- 0.61. Determination of II in the presence of I was obtained by second derivative spectrophotometry at 243.6 nm, which obeyed Beer's law over a concentration range of 2-10 microg/mL with mean recovery of 99.82 +/- 1.46%. The second method was spectrodensitometry, with which both drugs were separated on a silica gel plate using chloroform-acetone-methanol-glacial acetic acid (6 + 3 + 0.9 + 0.1) as the mobile phase and ultraviolet (UV) detection at 365 nm over a concentration range of 0.2-1 microg/band for both drugs, with mean recoveries of 99.99 +/- 0.15 and 100.00 +/- 0.34% for I and II, respectively. The third method was reversed-phase liquid chromatography using acetonitrile-water (40 + 60, v/v; adjusted to pH 2.55 with orthophosphoric acid) as the mobile phase and pentoxifylline as the internal standard at a flow rate of 1 mL/min with UV detection at 285 nm at ambient temperature over a concentration range of 2-10 microg/mL for both drugs, with mean recoveries of 100.24 +/- 1.51 and 100.08 +/- 0.78% for I and II, respectively. The proposed methods were checked using laboratory-prepared mixtures and were successfully applied for the analysis of pharmaceutical formulations containing the above drugs with no interference from other dosage form additives. The validity of the suggested procedures was further assessed by applying the standard addition technique, which was found to be satisfactory, and the percentage recoveries obtained were in accordance with those given by the EVA Pharma reference
Accurate low-cost methods for performance evaluation of cache memory systems
NASA Technical Reports Server (NTRS)
Laha, Subhasis; Patel, Janak H.; Iyer, Ravishankar K.
1988-01-01
Methods of simulation based on statistical techniques are proposed to decrease the need for large trace measurements and for predicting true program behavior. Sampling techniques are applied while the address trace is collected from a workload. This drastically reduces the space and time needed to collect the trace. Simulation techniques are developed to use the sampled data not only to predict the mean miss rate of the cache, but also to provide an empirical estimate of its actual distribution. Finally, a concept of primed cache is introduced to simulate large caches by the sampling-based method.
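The sampling idea above can be sketched as follows: simulate short contiguous samples of the address trace instead of the whole trace, and average their miss rates. The direct-mapped cache model and synthetic trace are illustrative stand-ins for the paper's workloads; note that each sample here starts with an empty cache, which is exactly the cold-start bias the paper's "primed cache" concept is designed to reduce.

```python
import random

# Hedged sketch of trace sampling for cache miss-rate estimation.
# Cache geometry, trace statistics, and sample sizes are illustrative.

def miss_rate_direct_mapped(trace, n_sets, block=16):
    tags = [None] * n_sets
    misses = 0
    for addr in trace:
        blk = addr // block
        s = blk % n_sets          # set index
        tag = blk // n_sets
        if tags[s] != tag:        # miss: fill the set with the new tag
            tags[s] = tag
            misses += 1
    return misses / len(trace)

def sampled_miss_rate(trace, n_sets, n_samples=10, sample_len=2000):
    """Average the miss rate over short random contiguous trace samples."""
    rng = random.Random(0)
    rates = []
    for _ in range(n_samples):
        start = rng.randrange(0, len(trace) - sample_len)
        rates.append(miss_rate_direct_mapped(trace[start:start + sample_len], n_sets))
    return sum(rates) / len(rates)

# synthetic trace: word-stride sequential runs with occasional random jumps
rng = random.Random(1)
trace, addr = [], 0
for _ in range(50000):
    if rng.random() < 0.9:
        addr += 4                        # continue the sequential run
    else:
        addr = rng.randrange(1 << 20)    # jump to a random location
    trace.append(addr)

full = miss_rate_direct_mapped(trace, 256)   # whole-trace reference
est = sampled_miss_rate(trace, 256)          # sampled estimate
```

The sampled estimate uses a small fraction of the trace yet lands close to the full-trace mean, which is the space/time saving the abstract describes.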
Highly Accurate Beam Torsion Solutions Using the p-Version Finite Element Method
NASA Technical Reports Server (NTRS)
Smith, James P.
1996-01-01
A new treatment of the classical beam torsion boundary value problem is applied. Using the p-version finite element method with shape functions based on Legendre polynomials, torsion solutions for generic cross-sections comprised of isotropic materials are developed. Element shape functions for quadrilateral and triangular elements are discussed, and numerical examples are provided.
A Robust Method of Vehicle Stability Accurate Measurement Using GPS and INS
NASA Astrophysics Data System (ADS)
Miao, Zhibin; Zhang, Hongtian; Zhang, Jinzhu
2015-12-01
With the development of the vehicle industry, controlling stability has become more and more important. Techniques of evaluating vehicle stability are in high demand. Integration of Global Positioning System (GPS) and Inertial Navigation System (INS) is a very practical method to get high-precision measurement data. Usually, the Kalman filter is used to fuse the data from GPS and INS. In this paper, a robust method is used to measure vehicle sideslip angle and yaw rate, which are two important parameters for vehicle stability. First, a four-wheel vehicle dynamic model is introduced, based on sideslip angle and yaw rate. Second, a double level Kalman filter is established to fuse the data from Global Positioning System and Inertial Navigation System. Then, this method is simulated on a sample vehicle, using Carsim software to test the sideslip angle and yaw rate. Finally, a real experiment is made to verify the advantage of this approach. The experimental results showed the merits of this method of measurement and estimation, and the approach can meet the design requirements of the vehicle stability controller.
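The predict/update cycle at the heart of GPS/INS fusion can be shown with a scalar toy: a dead-reckoned (INS-like) velocity prediction blended with noisy (GPS-like) position fixes. The paper uses a two-level Kalman filter over a four-wheel vehicle model; this sketch illustrates only the basic fusion mechanism, with illustrative noise parameters.

```python
import random

# Hedged sketch: a 1-D Kalman filter fusing a velocity input (INS role) with
# noisy position fixes (GPS role).  All parameters are illustrative.

def kalman_1d(z_meas, u_vel, dt=0.1, q=0.01, r=1.0):
    """x: position estimate, p: estimate variance, q/r: process/measurement noise."""
    x, p = 0.0, 1.0
    out = []
    for z, u in zip(z_meas, u_vel):
        # predict from the velocity input (dead reckoning)
        x = x + u * dt
        p = p + q
        # update with the position fix
        k = p / (p + r)              # Kalman gain
        x = x + k * (z - x)
        p = (1.0 - k) * p
        out.append(x)
    return out

rng = random.Random(0)
true_x = [0.1 * (i + 1) for i in range(200)]     # constant 1 m/s at dt = 0.1 s
meas = [x + rng.gauss(0.0, 1.0) for x in true_x] # noisy GPS-like fixes
vel = [1.0] * 200                                # INS-like velocity input
est = kalman_1d(meas, vel)

err_filt = sum(abs(e - t) for e, t in zip(est, true_x)) / len(est)
err_raw = sum(abs(m - t) for m, t in zip(meas, true_x)) / len(meas)
```

The filtered error ends up well below the raw measurement error, which is the payoff of fusing the two sensor roles.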
A second-order accurate immersed boundary-lattice Boltzmann method for particle-laden flows
NASA Astrophysics Data System (ADS)
Zhou, Qiang; Fan, Liang-Shih
2014-07-01
A new immersed boundary-lattice Boltzmann method (IB-LBM) is presented for fully resolved simulations of incompressible viscous flows laden with rigid particles. The immersed boundary method (IBM) recently developed by Breugem (2012) [19] is adopted in the present method, development including the retraction technique, the multi-direct forcing method and the direct account of the inertia of the fluid contained within the particles. The present IB-LBM is, however, formulated with further improvement with the implementation of the high-order Runge-Kutta schemes in the coupled fluid-particle interaction. The major challenge to implement high-order Runge-Kutta schemes in the LBM is that the flow information such as density and velocity cannot be directly obtained at a fractional time step from the LBM since the LBM only provides the flow information at an integer time step. This challenge can be, however, overcome as given in the present IB-LBM by extrapolating the flow field around particles from the known flow field at the previous integer time step. The newly calculated fluid-particle interactions from the previous fractional time steps of the current integer time step are also accounted for in the extrapolation. The IB-LBM with high-order Runge-Kutta schemes developed in this study is validated by several benchmark applications. It is demonstrated, for the first time, that the IB-LBM has the capacity to resolve the translational and rotational motion of particles with the second-order accuracy. The optimal retraction distances for spheres and tubes that help the method achieve the second-order accuracy are found to be around 0.30 and -0.47 times of the lattice spacing, respectively. Simulations of the Stokes flow through a simple cubic lattice of rotational spheres indicate that the lift force produced by the Magnus effect can be very significant in view of the magnitude of the drag force when the practical rotating speed of the spheres is encountered. This finding
A FIB-nanotomography method for accurate 3D reconstruction of open nanoporous structures.
Mangipudi, K R; Radisch, V; Holzer, L; Volkert, C A
2016-04-01
We present an automated focused ion beam nanotomography method for nanoporous microstructures with open porosity, and apply it to reconstruct nanoporous gold (np-Au) structures with ligament sizes on the order of a few tens of nanometers. This method uses serial sectioning of a well-defined wedge-shaped geometry to determine the thickness of individual slices from the changes in the sample width in successive cross-sectional images. The pore space of a selected region of the np-Au is infiltrated with ion-beam-deposited Pt composite before serial sectioning. The cross-sectional images are binarized and stacked according to the individual slice thicknesses, and then processed using standard reconstruction methods. For the image conditions and sample geometry used here, we are able to determine the thickness of individual slices with an accuracy much smaller than a pixel. The accuracy of the new method based on actual slice thickness is assessed by comparing it with (i) a reconstruction using the same cross-sectional images but assuming a constant slice thickness, and (ii) a reconstruction using traditional FIB-tomography method employing constant slice thickness. The morphology and topology of the structures are characterized using ligament and pore size distributions, interface shape distribution functions, interface normal distributions, and genus. The results suggest that the morphology and topology of the final reconstructions are significantly influenced when a constant slice thickness is assumed. The study reveals grain-to-grain variations in the morphology and topology of np-Au. PMID:26906523
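The wedge idea can be illustrated with a simple geometric relation: if the sectioned block were a symmetric wedge of known half-angle widening with depth, the increase in apparent width between two successive cross-sections would fix the thickness of the removed slice. This geometry and the values below are assumptions for illustration; the paper calibrates its own wedge geometry.

```python
import math

# Hedged sketch: slice thickness from the width change between successive
# cross-sections of an assumed symmetric wedge of half-angle alpha.

def slice_thickness(width_prev_um, width_next_um, half_angle_deg):
    """t = (w_next - w_prev) / (2 * tan(alpha)) for a symmetric wedge."""
    return (width_next_um - width_prev_um) / (2.0 * math.tan(math.radians(half_angle_deg)))

# illustrative: a 0.2 um width increase at a 45-degree half-angle
t = slice_thickness(10.0, 10.2, 45.0)
```

Because the width change is read from the images themselves, the per-slice thickness follows without assuming a constant milling rate, which is the point the abstract makes about reconstruction accuracy.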
NASA Technical Reports Server (NTRS)
Loh, Ching Y.; Jorgenson, Philip C. E.
2007-01-01
A time-accurate, upwind, finite volume method for computing compressible flows on unstructured grids is presented. The method is second order accurate in space and time and yields high resolution in the presence of discontinuities. For efficiency, the Roe approximate Riemann solver with an entropy correction is employed. In the basic Euler/Navier-Stokes scheme, many concepts of high order upwind schemes are adopted: the surface flux integrals are carefully treated, a Cauchy-Kowalewski time-stepping scheme is used in the time-marching stage, and a multidimensional limiter is applied in the reconstruction stage. However even with these up-to-date improvements, the basic upwind scheme is still plagued by the so-called "pathological behaviors," e.g., the carbuncle phenomenon, the expansion shock, etc. A solution to these limitations is presented which uses a very simple dissipation model while still preserving second order accuracy. This scheme is referred to as the enhanced time-accurate upwind (ETAU) scheme in this paper. The unstructured grid capability renders flexibility for use in complex geometry; and the present ETAU Euler/Navier-Stokes scheme is capable of handling a broad spectrum of flow regimes from high supersonic to subsonic at very low Mach number, appropriate for both CFD (computational fluid dynamics) and CAA (computational aeroacoustics). Numerous examples are included to demonstrate the robustness of the methods.
NASA Astrophysics Data System (ADS)
Zheng, Chang-Jun; Gao, Hai-Feng; Du, Lei; Chen, Hai-Bo; Zhang, Chuanzeng
2016-01-01
An accurate numerical solver is developed in this paper for eigenproblems governed by the Helmholtz equation and formulated through the boundary element method. A contour integral method is used to convert the nonlinear eigenproblem into an ordinary eigenproblem, so that eigenvalues can be extracted accurately by solving a set of standard boundary element systems of equations. In order to accelerate the solution procedure, the parameters affecting the accuracy and efficiency of the method are studied and two contour paths are compared. Moreover, a wideband fast multipole method is implemented with a block IDR(s) solver to reduce the overall solution cost of the boundary element systems of equations with multiple right-hand sides. The Burton-Miller formulation is employed to identify the fictitious eigenfrequencies of the interior acoustic problems with multiply connected domains. The actual effect of the Burton-Miller formulation on tackling the fictitious eigenfrequency problem is investigated and the optimal choice of the coupling parameter as α = i/k is confirmed through exterior sphere examples. Furthermore, the numerical eigenvalues obtained by the developed method are compared with the results obtained by the finite element method to show the accuracy and efficiency of the developed method.
NASA Astrophysics Data System (ADS)
Pandey, Siddharth; Borders, Tammie L.; Hernández, Carmen E.; Roy, Lindsay E.; Reddy, Gaddum D.; Martinez, Geo L.; Jackson, Autumn; Brown, Guenevere; Acree, William E., Jr.
1999-01-01
An undergraduate laboratory experiment is designed for the quantitative determination of quinine in tonic water samples. It is based upon direct fluorescence emission and first-derivative spectroscopic methods. Unlike other published laboratory experiments, our method exposes students to the general method of derivative spectroscopy, an important, often-used analytical technique for eliminating sample matrix and background absorbance effects and for treating overlapped spectral bands. The statistical treatment allows students to compare concentrations directly calculated from the measured fluorescence emission intensity with values obtained from the first-derivative emission spectra, to ascertain whether there is a difference between the two analytical methods. Method selection and validation are important items routinely encountered by practicing analytical chemists.
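The background-suppression property that motivates derivative spectroscopy can be shown numerically: a constant baseline offset shifts the raw intensity reading but cancels in the first derivative, whose peak-to-peak amplitude still tracks the band (and hence the analyte concentration). The Gaussian bands below are illustrative, not the actual quinine emission spectra of the experiment.

```python
import numpy as np

# Hedged illustration of first-derivative spectroscopy on synthetic spectra.

wl = np.linspace(400.0, 500.0, 1001)            # wavelength grid, nm

def spectrum(conc, offset):
    """Gaussian emission band at 450 nm, scaled by concentration, plus a
    constant background offset."""
    return conc * np.exp(-((wl - 450.0) / 10.0) ** 2) + offset

def deriv_amplitude(s):
    """Peak-to-peak amplitude of the first-derivative spectrum."""
    d = np.gradient(s, wl)
    return d.max() - d.min()

s1 = spectrum(1.0, 0.0)
s2 = spectrum(1.0, 0.3)     # same concentration, offset background
s3 = spectrum(2.0, 0.3)     # doubled concentration

raw_bias = s2.max() - s1.max()   # raw peak readings differ by the full offset
```

The derivative amplitude is unchanged by the offset yet doubles with the doubled band, which is the comparison students make between the direct and first-derivative calibrations.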
Exact kinetic energy enables accurate evaluation of weak interactions by the FDE-vdW method
Sinha, Debalina; Pavanello, Michele
2015-08-28
The correlation energy of interaction is an elusive and sought-after interaction between molecular systems. By partitioning the response function of the system into subsystem contributions, the Frozen Density Embedding (FDE)-vdW method provides a computationally amenable nonlocal correlation functional based on the adiabatic connection fluctuation dissipation theorem applied to subsystem density functional theory. In reproducing potential energy surfaces of weakly interacting dimers, we show that FDE-vdW, either employing semilocal or exact nonadditive kinetic energy functionals, is in quantitative agreement with high-accuracy coupled cluster calculations (overall mean unsigned error of 0.5 kcal/mol). When employing the exact kinetic energy (which we term the Kohn-Sham (KS)-vdW method), the binding energies are generally closer to the benchmark, and the energy surfaces are also smoother.
NASA Technical Reports Server (NTRS)
Kemp, James Herbert (Inventor); Talukder, Ashit (Inventor); Lambert, James (Inventor); Lam, Raymond (Inventor)
2008-01-01
A computer-implemented system and method of intra-oral analysis for measuring plaque removal is disclosed. The system includes hardware for real-time image acquisition and software to store the acquired images on a patient-by-patient basis. The system implements algorithms to segment teeth of interest from surrounding gum, and uses a real-time image-based morphing procedure to automatically overlay a grid onto each segmented tooth. Pattern recognition methods are used to classify plaque from surrounding gum and enamel, while ignoring glare effects due to the reflection of camera light and ambient light from enamel regions. The system integrates these components into a single software suite with an easy-to-use graphical user interface (GUI) that allows users to do an end-to-end run of a patient record, including tooth segmentation of all teeth, grid morphing of each segmented tooth, and plaque classification of each tooth image.
NASA Technical Reports Server (NTRS)
Boughner, Robert E.
1986-01-01
A method for calculating the photodissociation rates needed for photochemical modeling of the stratosphere, which includes the effects of molecular scattering, is described. The procedure is based on Sokolov's method of averaging functional correction. The radiation model and approximations used to calculate the radiation field are examined. The approximated diffuse fields and photolysis rates are compared with exact data. It is observed that the approximate solutions differ from the exact result by 10 percent or less at altitudes above 15 km; the photolysis rates differ from the exact rates by less than 5 percent for altitudes above 10 km and all zenith angles, and by less than 1 percent for altitudes above 15 km.
Finite element method for accurate 3D simulation of plasmonic waveguides
NASA Astrophysics Data System (ADS)
Burger, Sven; Zschiedrich, Lin; Pomplun, Jan; Schmidt, Frank
2010-02-01
Optical properties of hybrid plasmonic waveguides and of low-Q cavities formed by waveguides of finite length are investigated numerically. These structures are of interest as building blocks of plasmon lasers. We use a time-harmonic finite-element package including a propagation-mode solver, a resonance-mode solver and a scattering solver for studying various properties of the system. Numerical convergence of all methods used is demonstrated.
Santos, Sílvia; Ungureanu, Gabriela; Boaventura, Rui; Botelho, Cidália
2015-07-15
Selenium is an essential trace element for many organisms, including humans, but it is bioaccumulative and toxic at higher than homeostatic levels. Both selenium deficiency and toxicity are problems around the world. Mines, coal-fired power plants, oil refineries and agriculture are important examples of anthropogenic sources, generating contaminated waters and wastewaters. For reasons of human health and ecotoxicity, the selenium concentration has to be controlled in drinking water and in wastewater, as it is a potential pollutant of water bodies. This review article first provides a general overview of selenium distribution, sources, chemistry, toxicity and environmental impact. Analytical techniques used for Se determination and speciation, as well as water and wastewater treatment options, are reviewed. In particular, published work on adsorption as a treatment method for Se removal from aqueous solutions is critically analyzed. The recent literature has given particular attention to the development of and search for effective adsorbents, including low-cost alternative materials. Published studies mostly consist of exploratory findings and laboratory-scale experiments. Binary metal oxides and LDHs (layered double hydroxides) have shown excellent adsorption capacities for selenium species. Unconventional sorbents (algae, agricultural wastes and other biomaterials), in raw or modified forms, have also led to very interesting results, with the advantage of their availability and low cost. Some directions to be considered in future work are also suggested. PMID:25847169
High Resolution Melting Analysis: A Rapid and Accurate Method to Detect CALR Mutations
Moreno, Melania; Torres, Laura; Santana-Lopez, Gonzalo; Rodriguez-Medina, Carlos; Perera, María; Bellosillo, Beatriz; de la Iglesia, Silvia; Molero, Teresa; Gomez-Casares, Maria Teresa
2014-01-01
Background The recent discovery of CALR mutations in essential thrombocythemia (ET) and primary myelofibrosis (PMF) patients without JAK2/MPL mutations has emerged as a relevant finding for the molecular diagnosis of these myeloproliferative neoplasms (MPN). We tested the feasibility of high-resolution melting (HRM) as a screening method for rapid detection of CALR mutations. Methods CALR was studied in wild-type JAK2/MPL patients, including 34 with ET, 21 with persistent thrombocytosis suggestive of MPN and 98 with suspected secondary thrombocytosis. CALR mutation analysis was performed by HRM and Sanger sequencing. We compared clinical features of CALR-mutated versus 45 JAK2/MPL-mutated subjects with ET. Results Nineteen samples showed HRM patterns distinct from wild-type. Of these, 18 were mutations and one was a polymorphism, as confirmed by direct sequencing. CALR mutations were present in 44% of ET (15/34), 14% of persistent thrombocytosis suggestive of MPN (3/21) and none of the secondary thrombocytosis cases (0/98). Of the 18 mutants, 9 were 52-bp deletions, 8 were 5-bp insertions and one was a complex insertion/deletion mutation. No mutations were found after sequencing analysis of 45 samples displaying wild-type HRM curves. The HRM technique was reproducible, no false positives or negatives were detected, and the limit of detection was 3%. Conclusions This study establishes a sensitive, reliable and rapid HRM method to screen for the presence of CALR mutations. PMID:25068507
EEMD based pitch evaluation method for accurate grating measurement by AFM
NASA Astrophysics Data System (ADS)
Li, Changsheng; Yang, Shuming; Wang, Chenying; Jiang, Zhuangde
2016-09-01
The precision of pitch measurement and AFM calibration is significantly influenced by the grating pitch evaluation method. This paper presents an ensemble empirical mode decomposition (EEMD) based pitch evaluation method to relieve the accuracy deterioration caused by high- and low-frequency components of the scanning profile during pitch evaluation. Simulation analysis shows that applying EEMD can improve the pitch accuracy of the FFT-FT algorithm. The pitch error was small when the iteration number of the FFT-FT algorithm was 8. AFM measurement of a 500 nm-pitch one-dimensional grating shows that the EEMD-based pitch evaluation method could improve the pitch precision, especially the grating line position precision, and greatly expand the applicability of the gravity center algorithm when particles and impression marks are distributed on the sample surface. The measurement indicates that the nonlinearity was stable, and that the nonlinearity of the x axis and of forward scanning was much smaller than that of their counterparts. Finally, a detailed pitch measurement uncertainty evaluation model suitable for commercial AFMs is demonstrated, and a pitch uncertainty in the sub-nanometer range is achieved. The pitch uncertainty was reduced by about 10% through EEMD.
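The rationale of the abstract above — remove drift and noise from the scan line before estimating the pitch spectrally — can be sketched on synthetic data. This is an illustrative stand-in, not the paper's algorithm: a polynomial detrend is assumed in place of discarding low-frequency EEMD modes, and a plain FFT peak replaces the FFT-FT iteration.

```python
import numpy as np

def estimate_pitch(profile, dx):
    """Estimate grating pitch (same units as dx) from a 1-D scan profile."""
    x = np.arange(len(profile)) * dx
    # Remove slow drift (EEMD would instead discard low-frequency IMFs).
    trend = np.polyval(np.polyfit(x, profile, 2), x)
    clean = profile - trend
    # Windowed FFT; the dominant spatial frequency gives the pitch.
    spec = np.abs(np.fft.rfft(clean * np.hanning(len(clean))))
    freqs = np.fft.rfftfreq(len(clean), d=dx)
    k = np.argmax(spec[1:]) + 1      # skip the DC bin
    return 1.0 / freqs[k]            # pitch = 1 / spatial frequency

# Synthetic 500 nm-pitch grating profile with drift and noise.
dx = 5.0                             # nm per sample
x = np.arange(1024) * dx
rng = np.random.default_rng(0)
profile = (np.sin(2 * np.pi * x / 500.0)   # grating
           + 1e-4 * x                      # slow drift
           + 0.05 * rng.normal(size=x.size))
pitch = estimate_pitch(profile, dx)        # near 500 nm, limited by FFT bin width
```

The residual error here comes from the finite FFT frequency resolution; refining the peak location is exactly what an iterative FFT-FT scheme addresses.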
Synergistic effect of combining two nondestructive analytical methods for multielemental analysis.
Toh, Yosuke; Ebihara, Mitsuru; Kimura, Atsushi; Nakamura, Shoji; Harada, Hideo; Hara, Kaoru Y; Koizumi, Mitsuo; Kitatani, Fumito; Furutaka, Kazuyoshi
2014-12-16
We developed a new analytical technique that combines prompt gamma-ray analysis (PGA) and time-of-flight elemental analysis (TOF) using an intense pulsed neutron beam at the Japan Proton Accelerator Research Complex. It allows us to obtain the results of both methods at the same time. Moreover, if a new analytical spectrum (TOF-PGA) is used, it can quantify elemental concentrations in samples to which neither of these methods can be applied independently. To assess the effectiveness of the developed method, a mixed sample of Ag, Au, Cd, Co, and Ta, as well as the Gibeon meteorite, were analyzed. The analytical capabilities were compared on the basis of gamma-ray peak selectivity and signal-to-noise ratios. The TOF-PGA method showed clear merits, although its capability may differ depending on the target and coexisting elements. PMID:25371049
A method based on stochastic resonance for the detection of weak analytical signal.
Wu, Xiaojing; Guo, Weiming; Cai, Wensheng; Shao, Xueguang; Pan, Zhongxiao
2003-12-23
An effective method for the detection of weak analytical signals against a strong noise background is proposed based on the theory of stochastic resonance (SR). Compared with conventional SR-based algorithms, the proposed algorithm is simplified: only one parameter is changed to realize weak-signal detection. Simulation studies revealed that the method performs well in detecting analytical signals at very high noise levels and is suitable for signals with different noise levels by adjusting this single parameter. Applications of the method to experimental weak signals from X-ray diffraction and Raman spectroscopy are also investigated. It is found that reliable results can be obtained.
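The abstract above builds on the standard bistable SR model, in which noise helps a weak periodic input drive transitions over a potential barrier. The sketch below is a generic illustration of that model, not the paper's specific algorithm; the single tunable parameter `a` (barrier shape) is assumed here as the analogue of the paper's one adjustable parameter.

```python
import numpy as np

def sr_filter(signal, dt=0.01, a=1.0, b=1.0):
    """Forward-Euler integration of the bistable SR system
    dx/dt = a*x - b*x**3 + s(t), driven by the noisy input signal."""
    x = 0.0
    out = np.empty_like(signal)
    for i, s in enumerate(signal):
        x += dt * (a * x - b * x**3 + s)
        out[i] = x
    return out

# A weak periodic "analytical" signal buried in strong noise.
rng = np.random.default_rng(1)
t = np.arange(0, 100, 0.01)
weak = 0.3 * np.sin(2 * np.pi * 0.05 * t)
noisy = weak + 1.5 * rng.normal(size=t.size)
resp = sr_filter(noisy)   # inter-well switching concentrates power at 0.05 Hz
```

In practice one would scan `a` (equivalently, rescale the input) until the output power spectrum shows the sharpest peak at the signal frequency, which is the single-parameter tuning the abstract describes.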
Quantitative 1H NMR: Development and Potential of an Analytical Method – an Update
Pauli, Guido F.; Gödecke, Tanja; Jaki, Birgit U.; Lankin, David C.
2012-01-01
Covering the literature from mid-2004 until the end of 2011, this review continues a previous literature overview of quantitative 1H NMR (qHNMR) methodology and its applications in the analysis of natural products (NPs). Among the foremost advantages of qHNMR are its accurate function with external calibration, the lack of any requirement for identical reference materials, high precision and accuracy when properly validated, and the ability to quantitate multiple analytes simultaneously. Through the inclusion of over 170 new references, this updated review summarizes a wealth of detailed experimental evidence and newly developed methodology that supports qHNMR as a valuable and unbiased analytical tool for natural product and other areas of research. PMID:22482996
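The quantitative basis the review rests on is simple: a 1H NMR signal's integral is proportional to the molar amount of protons it represents, so an analyte can be quantified against any calibrant of known purity without an identical reference material. A minimal sketch of the widely used qHNMR purity equation follows; the numerical values are illustrative assumptions, not data from the review.

```python
def qhnmr_purity(I_a, I_cal, N_a, N_cal, M_a, M_cal, m_a, m_cal, P_cal):
    """Purity of an analyte (a) determined against a calibrant (cal).

    I: integrated peak area, N: number of protons in the quantified signal,
    M: molar mass (g/mol), m: weighed mass (same units for both), P: purity
    (as a fraction). Signal area per proton is proportional to molar amount.
    """
    return (I_a / I_cal) * (N_cal / N_a) * (M_a / M_cal) * (m_cal / m_a) * P_cal

# Example with hypothetical numbers: analyte vs. a 99.5%-pure calibrant.
p = qhnmr_purity(I_a=0.85, I_cal=1.00,     # relative peak integrals
                 N_a=3, N_cal=2,           # protons behind each signal
                 M_a=180.16, M_cal=122.12, # molar masses, g/mol
                 m_a=10.0, m_cal=8.0,      # weighed masses, mg
                 P_cal=0.995)
```

Because only the area ratio, proton counts, masses and the calibrant purity enter, the relation extends directly to quantifying several analytes in one spectrum, which is one of the advantages the review highlights.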
NASA Astrophysics Data System (ADS)
Małolepsza, Edyta; Witek, Henryk A.; Morokuma, Keiji
2005-09-01
An optimization technique for enhancing the quality of the repulsive two-body potentials of the self-consistent-charge density-functional tight-binding (SCC-DFTB) method is presented and tested. The new, optimized potentials allow for significant improvement of calculated harmonic vibrational frequencies. The mean absolute deviation from experiment computed for a group of 14 hydrocarbons is reduced from 59.0 to 33.2 cm-1, and the maximal absolute deviation from 436.2 to 140.4 cm-1. A drawback of the new family of potentials is a lower quality of the reproduced geometrical and energetic parameters.
Accurate predictions of C-SO2R bond dissociation enthalpies using density functional theory methods.
Yu, Hai-Zhu; Fu, Fang; Zhang, Liang; Fu, Yao; Dang, Zhi-Min; Shi, Jing
2014-10-14
The dissociation of the C-SO2R bond is frequently involved in organic and bio-organic reactions, and C-SO2R bond dissociation enthalpies (BDEs) are therefore potentially important for understanding the related mechanisms. The primary goal of the present study is to provide a reliable calculation method to predict different C-SO2R BDEs. Comparing the accuracies of 13 density functional theory (DFT) methods (such as B3LYP, TPSS, and M05) and different basis sets (such as 6-31G(d) and 6-311++G(2df,2p)), we found that M06-2X/6-31G(d) gives the best performance in reproducing the various C-S BDEs (and especially the C-SO2R BDEs). As an example of understanding mechanisms with the aid of C-SO2R BDEs, some preliminary mechanistic studies were carried out on the chemoselective coupling (in the presence of a Cu catalyst) or desulfinative coupling reactions (in the presence of a Pd catalyst) between sulfinic acid salts and boryl/sulfinic acid salts.
An Accurate Calibration Method Based on Velocity in a Rotational Inertial Navigation System.
Zhang, Qian; Wang, Lei; Liu, Zengjun; Feng, Peide
2015-01-01
Rotation modulation is an effective method to enhance the accuracy of an inertial navigation system (INS) by modulating the gyroscope drifts and accelerometer bias errors into periodically varying components. A typical RINS rotates the inertial measurement unit (IMU) about the vertical axis, so the horizontal sensors' errors are modulated; however, the azimuth angle error is closely related to the vertical gyro drift, which should also be modulated effectively. In this paper, a new rotation strategy for a dual-axis rotational INS (RINS) is proposed in which the drifts of all three gyros can be modulated. Experimental results from a real dual-axis RINS demonstrate that the maximum azimuth angle error is decreased from 0.04° to less than 0.01° over 1 h. Importantly, the change of rotation strategy introduces additional errors into the velocity, which is unacceptable in a high-precision INS. The paper therefore studies the underlying cause of the horizontal velocity errors in detail, and a corresponding new calibration method is designed. Experimental results show that after calibration and compensation, the fluctuations and stages in the velocity curve disappear and the velocity precision is improved. PMID:26225983
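Why rotation modulation works, and why a single rotation axis is not enough, can be shown in a few lines: a constant bias fixed in the IMU body frame maps into the navigation frame as a sinusoid under rotation about the vertical axis, so it averages to zero over a full revolution, while the vertical component is untouched. The bias values below are illustrative assumptions, not the paper's data.

```python
import numpy as np

# Illustrative constant gyro drifts in the IMU body frame (deg/h).
bias_body = np.array([0.02, -0.01, 0.015])

# One full revolution about the vertical (z) axis, sampled uniformly.
angles = np.linspace(0.0, 2.0 * np.pi, 3600, endpoint=False)

acc = np.zeros(3)
for a in angles:
    c, s = np.cos(a), np.sin(a)
    # Body-to-navigation rotation about the vertical axis.
    R = np.array([[c, -s, 0.0],
                  [s,  c, 0.0],
                  [0.0, 0.0, 1.0]])
    acc += R @ bias_body
mean_nav = acc / angles.size   # time-averaged drift seen in the navigation frame
```

The horizontal components of `mean_nav` vanish while the vertical drift survives, which is precisely why single-axis rotation leaves the azimuth error (driven by the vertical gyro) unmodulated and motivates the dual-axis strategy proposed in the paper.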